Dec 11 13:48:28 crc systemd[1]: Starting Kubernetes Kubelet... Dec 11 13:48:28 crc restorecon[4763]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963 Dec 11 13:48:28 
crc restorecon[4763]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726 Dec 11 13:48:28 crc restorecon[4763]: 
/var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c574,c582 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Dec 11 
13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c440,c975 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 11 13:48:28 crc restorecon[4763]: 
/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c4,c22 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 11 13:48:28 crc 
restorecon[4763]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Dec 11 13:48:28 crc restorecon[4763]: 
/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 
Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c968,c969 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Dec 11 13:48:28 crc restorecon[4763]: 
/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 
11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 11 13:48:28 
crc restorecon[4763]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 11 13:48:28 crc restorecon[4763]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 11 13:48:28 crc restorecon[4763]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 11 13:48:28 
crc restorecon[4763]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Dec 11 13:48:28 crc restorecon[4763]: 
/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 11 13:48:28 crc restorecon[4763]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 11 13:48:28 crc restorecon[4763]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 11 13:48:28 crc restorecon[4763]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 11 13:48:28 crc restorecon[4763]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c97,c980 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c377,c642 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 11 13:48:28 crc restorecon[4763]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 11 13:48:28 crc restorecon[4763]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c0,c25 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 11 13:48:28 crc restorecon[4763]: 
/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 11 13:48:28 crc restorecon[4763]: 
/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 11 13:48:28 crc restorecon[4763]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 11 13:48:28 crc restorecon[4763]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 11 13:48:28 crc restorecon[4763]: 
/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 
13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 13:48:28 crc 
restorecon[4763]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 11 13:48:28 crc restorecon[4763]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Dec 11 13:48:28 crc restorecon[4763]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 11 13:48:28 crc restorecon[4763]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c37,c572 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 11 13:48:28 crc restorecon[4763]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 
13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 
13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:28 crc restorecon[4763]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc 
restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 11 13:48:29 crc restorecon[4763]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 11 13:48:29 crc restorecon[4763]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 11 13:48:29 crc restorecon[4763]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Dec 11 13:48:29 crc kubenswrapper[5050]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 11 13:48:29 crc kubenswrapper[5050]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Dec 11 13:48:29 crc kubenswrapper[5050]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 11 13:48:29 crc kubenswrapper[5050]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Dec 11 13:48:29 crc kubenswrapper[5050]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 11 13:48:29 crc kubenswrapper[5050]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.337502 5050 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.342741 5050 feature_gate.go:330] unrecognized feature gate: OVNObservability Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.342768 5050 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.342778 5050 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.342785 5050 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.342791 5050 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.342799 5050 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.342806 5050 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.342814 5050 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.342821 5050 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.342827 5050 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.342833 5050 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.342839 5050 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.342845 5050 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.342851 5050 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.342857 5050 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.342863 5050 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.342869 5050 feature_gate.go:330] unrecognized feature gate: GatewayAPI Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.342875 5050 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.342881 5050 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.342887 5050 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.342893 5050 feature_gate.go:330] unrecognized feature gate: 
IngressControllerLBSubnetsAWS Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.342898 5050 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.342904 5050 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.342911 5050 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.342917 5050 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.342922 5050 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.342931 5050 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.342937 5050 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.342945 5050 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.342953 5050 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.342960 5050 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.342966 5050 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.342972 5050 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.342980 5050 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.342989 5050 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.342996 5050 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.343001 5050 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.343036 5050 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.343042 5050 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.343047 5050 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.343052 5050 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.343060 5050 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
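
Editor's note: the flag deprecation notices above (--container-runtime-endpoint, --minimum-container-ttl-duration, --volume-plugin-dir, --register-with-taints, and the two that follow) all point at the kubelet config file rather than command-line flags. A quick way to audit which flags this node still passes on the command line is to scan the saved journal text for those messages. A minimal sketch in Python, assuming the log shown here has been saved to a plain-text file named kubelet-journal.txt (the filename is an assumption, not something from the log):

import re
from pathlib import Path

LOG_PATH = Path("kubelet-journal.txt")  # assumed filename; the log itself does not name one
text = LOG_PATH.read_text()

# Deprecation messages look like: "Flag --register-with-taints has been deprecated, ..."
deprecated = sorted(set(re.findall(r"Flag (--[\w-]+) has been deprecated", text)))
for flag in deprecated:
    print(flag)
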
Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.343066 5050 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.343071 5050 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.343076 5050 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.343081 5050 feature_gate.go:330] unrecognized feature gate: NewOLM Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.343086 5050 feature_gate.go:330] unrecognized feature gate: PlatformOperators Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.343091 5050 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.343095 5050 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.343102 5050 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.343108 5050 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.343114 5050 feature_gate.go:330] unrecognized feature gate: PinnedImages Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.343120 5050 feature_gate.go:330] unrecognized feature gate: SignatureStores Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.343126 5050 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.343132 5050 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.343138 5050 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.343144 5050 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.343149 5050 feature_gate.go:330] unrecognized feature gate: Example Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.343155 5050 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.343161 5050 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.343167 5050 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.343174 5050 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.343182 5050 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.343188 5050 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.343194 5050 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.343199 5050 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.343204 5050 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.343209 5050 feature_gate.go:330] unrecognized feature gate: 
OpenShiftPodSecurityAdmission Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.343214 5050 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.343218 5050 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.343223 5050 feature_gate.go:330] unrecognized feature gate: InsightsConfig Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.343615 5050 flags.go:64] FLAG: --address="0.0.0.0" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.343636 5050 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.343652 5050 flags.go:64] FLAG: --anonymous-auth="true" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.343662 5050 flags.go:64] FLAG: --application-metrics-count-limit="100" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.343672 5050 flags.go:64] FLAG: --authentication-token-webhook="false" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.343680 5050 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.343689 5050 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.343699 5050 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.343707 5050 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.343714 5050 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.343723 5050 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.343730 5050 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.343738 5050 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.343745 5050 flags.go:64] FLAG: --cgroup-root="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.343753 5050 flags.go:64] FLAG: --cgroups-per-qos="true" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.343761 5050 flags.go:64] FLAG: --client-ca-file="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.343768 5050 flags.go:64] FLAG: --cloud-config="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.343774 5050 flags.go:64] FLAG: --cloud-provider="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.343781 5050 flags.go:64] FLAG: --cluster-dns="[]" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.343793 5050 flags.go:64] FLAG: --cluster-domain="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.343800 5050 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.343808 5050 flags.go:64] FLAG: --config-dir="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.343816 5050 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.343824 5050 flags.go:64] FLAG: --container-log-max-files="5" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.343835 5050 flags.go:64] FLAG: --container-log-max-size="10Mi" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.343843 5050 flags.go:64] FLAG: 
--container-runtime-endpoint="/var/run/crio/crio.sock" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.343850 5050 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.343860 5050 flags.go:64] FLAG: --containerd-namespace="k8s.io" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.343868 5050 flags.go:64] FLAG: --contention-profiling="false" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.343876 5050 flags.go:64] FLAG: --cpu-cfs-quota="true" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.343883 5050 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.343891 5050 flags.go:64] FLAG: --cpu-manager-policy="none" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.343899 5050 flags.go:64] FLAG: --cpu-manager-policy-options="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.343908 5050 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.343918 5050 flags.go:64] FLAG: --enable-controller-attach-detach="true" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.343926 5050 flags.go:64] FLAG: --enable-debugging-handlers="true" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.343934 5050 flags.go:64] FLAG: --enable-load-reader="false" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.343941 5050 flags.go:64] FLAG: --enable-server="true" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.343949 5050 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.343958 5050 flags.go:64] FLAG: --event-burst="100" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.343965 5050 flags.go:64] FLAG: --event-qps="50" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.343972 5050 flags.go:64] FLAG: --event-storage-age-limit="default=0" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.343979 5050 flags.go:64] FLAG: --event-storage-event-limit="default=0" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.343987 5050 flags.go:64] FLAG: --eviction-hard="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.343996 5050 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.344004 5050 flags.go:64] FLAG: --eviction-minimum-reclaim="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.344041 5050 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.344049 5050 flags.go:64] FLAG: --eviction-soft="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.344056 5050 flags.go:64] FLAG: --eviction-soft-grace-period="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.344064 5050 flags.go:64] FLAG: --exit-on-lock-contention="false" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.344071 5050 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.344079 5050 flags.go:64] FLAG: --experimental-mounter-path="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.344087 5050 flags.go:64] FLAG: --fail-cgroupv1="false" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.344094 5050 flags.go:64] FLAG: --fail-swap-on="true" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.344101 5050 flags.go:64] FLAG: --feature-gates="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.344111 5050 
flags.go:64] FLAG: --file-check-frequency="20s" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.344118 5050 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.344127 5050 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.344134 5050 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.344142 5050 flags.go:64] FLAG: --healthz-port="10248" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.344149 5050 flags.go:64] FLAG: --help="false" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.344158 5050 flags.go:64] FLAG: --hostname-override="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.344165 5050 flags.go:64] FLAG: --housekeeping-interval="10s" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.344175 5050 flags.go:64] FLAG: --http-check-frequency="20s" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.344182 5050 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.344189 5050 flags.go:64] FLAG: --image-credential-provider-config="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.344196 5050 flags.go:64] FLAG: --image-gc-high-threshold="85" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.344204 5050 flags.go:64] FLAG: --image-gc-low-threshold="80" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.344211 5050 flags.go:64] FLAG: --image-service-endpoint="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.344218 5050 flags.go:64] FLAG: --kernel-memcg-notification="false" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.344225 5050 flags.go:64] FLAG: --kube-api-burst="100" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.344232 5050 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.344240 5050 flags.go:64] FLAG: --kube-api-qps="50" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.344247 5050 flags.go:64] FLAG: --kube-reserved="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.344254 5050 flags.go:64] FLAG: --kube-reserved-cgroup="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.344262 5050 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.344269 5050 flags.go:64] FLAG: --kubelet-cgroups="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.344276 5050 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.344284 5050 flags.go:64] FLAG: --lock-file="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.344291 5050 flags.go:64] FLAG: --log-cadvisor-usage="false" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.344299 5050 flags.go:64] FLAG: --log-flush-frequency="5s" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.344307 5050 flags.go:64] FLAG: --log-json-info-buffer-size="0" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.344319 5050 flags.go:64] FLAG: --log-json-split-stream="false" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.344326 5050 flags.go:64] FLAG: --log-text-info-buffer-size="0" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.344334 5050 flags.go:64] FLAG: --log-text-split-stream="false" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.344341 5050 flags.go:64] FLAG: 
--logging-format="text" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.344348 5050 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.344356 5050 flags.go:64] FLAG: --make-iptables-util-chains="true" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.344363 5050 flags.go:64] FLAG: --manifest-url="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.344370 5050 flags.go:64] FLAG: --manifest-url-header="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.344380 5050 flags.go:64] FLAG: --max-housekeeping-interval="15s" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.344387 5050 flags.go:64] FLAG: --max-open-files="1000000" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.344397 5050 flags.go:64] FLAG: --max-pods="110" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.344404 5050 flags.go:64] FLAG: --maximum-dead-containers="-1" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.344411 5050 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.344418 5050 flags.go:64] FLAG: --memory-manager-policy="None" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.344425 5050 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.344432 5050 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.344439 5050 flags.go:64] FLAG: --node-ip="192.168.126.11" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.344450 5050 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.344468 5050 flags.go:64] FLAG: --node-status-max-images="50" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.344476 5050 flags.go:64] FLAG: --node-status-update-frequency="10s" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.344483 5050 flags.go:64] FLAG: --oom-score-adj="-999" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.344490 5050 flags.go:64] FLAG: --pod-cidr="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.344497 5050 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.344509 5050 flags.go:64] FLAG: --pod-manifest-path="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.344516 5050 flags.go:64] FLAG: --pod-max-pids="-1" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.344523 5050 flags.go:64] FLAG: --pods-per-core="0" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.344530 5050 flags.go:64] FLAG: --port="10250" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.344539 5050 flags.go:64] FLAG: --protect-kernel-defaults="false" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.344546 5050 flags.go:64] FLAG: --provider-id="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.344553 5050 flags.go:64] FLAG: --qos-reserved="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.344560 5050 flags.go:64] FLAG: --read-only-port="10255" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.344567 5050 flags.go:64] FLAG: --register-node="true" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.344574 5050 flags.go:64] FLAG: 
--register-schedulable="true" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.344580 5050 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.344593 5050 flags.go:64] FLAG: --registry-burst="10" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.344601 5050 flags.go:64] FLAG: --registry-qps="5" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.344608 5050 flags.go:64] FLAG: --reserved-cpus="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.344615 5050 flags.go:64] FLAG: --reserved-memory="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.344646 5050 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.344654 5050 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.344661 5050 flags.go:64] FLAG: --rotate-certificates="false" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.344668 5050 flags.go:64] FLAG: --rotate-server-certificates="false" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.344675 5050 flags.go:64] FLAG: --runonce="false" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.344682 5050 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.344689 5050 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.344696 5050 flags.go:64] FLAG: --seccomp-default="false" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.344703 5050 flags.go:64] FLAG: --serialize-image-pulls="true" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.344710 5050 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.344718 5050 flags.go:64] FLAG: --storage-driver-db="cadvisor" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.344725 5050 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.344732 5050 flags.go:64] FLAG: --storage-driver-password="root" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.344739 5050 flags.go:64] FLAG: --storage-driver-secure="false" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.344746 5050 flags.go:64] FLAG: --storage-driver-table="stats" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.344755 5050 flags.go:64] FLAG: --storage-driver-user="root" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.344762 5050 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.344769 5050 flags.go:64] FLAG: --sync-frequency="1m0s" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.344776 5050 flags.go:64] FLAG: --system-cgroups="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.344784 5050 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.344796 5050 flags.go:64] FLAG: --system-reserved-cgroup="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.344803 5050 flags.go:64] FLAG: --tls-cert-file="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.344810 5050 flags.go:64] FLAG: --tls-cipher-suites="[]" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.344819 5050 flags.go:64] FLAG: --tls-min-version="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.344826 5050 flags.go:64] 
FLAG: --tls-private-key-file="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.344833 5050 flags.go:64] FLAG: --topology-manager-policy="none" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.344840 5050 flags.go:64] FLAG: --topology-manager-policy-options="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.344847 5050 flags.go:64] FLAG: --topology-manager-scope="container" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.344854 5050 flags.go:64] FLAG: --v="2" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.344865 5050 flags.go:64] FLAG: --version="false" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.344875 5050 flags.go:64] FLAG: --vmodule="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.344883 5050 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.344891 5050 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.345068 5050 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.345077 5050 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.345084 5050 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.345091 5050 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.345098 5050 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.345104 5050 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.345111 5050 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.345125 5050 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.345132 5050 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.345138 5050 feature_gate.go:330] unrecognized feature gate: OVNObservability Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.345144 5050 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.345150 5050 feature_gate.go:330] unrecognized feature gate: PinnedImages Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.345155 5050 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.345161 5050 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.345168 5050 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.345173 5050 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.345179 5050 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.345185 5050 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.345197 5050 feature_gate.go:330] unrecognized feature gate: 
NodeDisruptionPolicy Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.345203 5050 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.345209 5050 feature_gate.go:330] unrecognized feature gate: InsightsConfig Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.345215 5050 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.345220 5050 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.345227 5050 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.345233 5050 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.345238 5050 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.345246 5050 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.345253 5050 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.345259 5050 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.345266 5050 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.345272 5050 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.345278 5050 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.345288 5050 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
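
Editor's note: the flags.go:64 "FLAG:" lines above dump every kubelet command-line flag together with its effective value. If those values are needed as a lookup table rather than raw log text, they can be scraped from the same saved dump; a rough sketch, again assuming the hypothetical kubelet-journal.txt file:

import re
from pathlib import Path

LOG_PATH = Path("kubelet-journal.txt")  # assumed filename
text = LOG_PATH.read_text()

# FLAG lines look like: flags.go:64] FLAG: --node-ip="192.168.126.11"
# \s* tolerates the line wrapping in this dump between "FLAG:" and the flag name.
flags = dict(re.findall(r'FLAG:\s*(--[\w-]+)="([^"]*)"', text))

print(flags.get("--node-ip"))               # 192.168.126.11
print(flags.get("--register-with-taints"))  # node-role.kubernetes.io/master=:NoSchedule
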
Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.345295 5050 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.345304 5050 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.345312 5050 feature_gate.go:330] unrecognized feature gate: PlatformOperators Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.345318 5050 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.345324 5050 feature_gate.go:330] unrecognized feature gate: SignatureStores Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.345330 5050 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.345339 5050 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.345345 5050 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.345350 5050 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.345356 5050 feature_gate.go:330] unrecognized feature gate: GatewayAPI Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.345362 5050 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.345368 5050 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.345373 5050 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.345381 5050 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.345388 5050 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.345394 5050 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.345400 5050 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.345408 5050 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.345414 5050 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.345421 5050 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.345427 5050 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.345435 5050 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.345442 5050 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.345448 5050 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.345455 5050 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
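
Editor's note: the long runs of "unrecognized feature gate: ..." warnings above (and the further repeats below) come from OpenShift-specific gate names that this kubelet build does not recognize; they are logged as warnings, not failures. To see how many distinct gate names are involved without reading every repeat, the same saved journal dump can be summarised; a small sketch under the same kubelet-journal.txt assumption:

import re
from collections import Counter
from pathlib import Path

LOG_PATH = Path("kubelet-journal.txt")  # assumed filename
text = LOG_PATH.read_text()

# Warnings look like: feature_gate.go:330] unrecognized feature gate: OVNObservability
gates = Counter(re.findall(r"unrecognized feature gate:\s*(\w+)", text))
print(f"{len(gates)} distinct unrecognized gates")
for name, count in gates.most_common(5):
    print(f"{name}: {count}x")
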
Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.345463 5050 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.345470 5050 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.345479 5050 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.345486 5050 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.345492 5050 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.345498 5050 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.345505 5050 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.345511 5050 feature_gate.go:330] unrecognized feature gate: Example Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.345517 5050 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.345523 5050 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.345529 5050 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.345535 5050 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.345541 5050 feature_gate.go:330] unrecognized feature gate: NewOLM Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.345776 5050 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.357888 5050 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.357964 5050 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.358198 5050 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.358231 5050 feature_gate.go:330] unrecognized feature gate: InsightsConfig Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.358247 5050 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.358262 5050 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.358274 5050 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.358287 5050 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.358301 5050 feature_gate.go:330] unrecognized feature gate: GatewayAPI Dec 11 
13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.358315 5050 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.358327 5050 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.358339 5050 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.358352 5050 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.358363 5050 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.358374 5050 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.358386 5050 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.358397 5050 feature_gate.go:330] unrecognized feature gate: NewOLM Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.358408 5050 feature_gate.go:330] unrecognized feature gate: SignatureStores Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.358419 5050 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.358430 5050 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.358440 5050 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.358451 5050 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.358462 5050 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.358473 5050 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.358484 5050 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.358496 5050 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.358508 5050 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.358521 5050 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.358537 5050 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.358549 5050 feature_gate.go:330] unrecognized feature gate: OVNObservability Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.358560 5050 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.358571 5050 feature_gate.go:330] unrecognized feature gate: Example Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.358582 5050 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.358593 5050 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.358604 5050 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Dec 11 13:48:29 crc 
kubenswrapper[5050]: W1211 13:48:29.358617 5050 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.358629 5050 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.358640 5050 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.358652 5050 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.358663 5050 feature_gate.go:330] unrecognized feature gate: PlatformOperators Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.358674 5050 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.358686 5050 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.358696 5050 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.358707 5050 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.358723 5050 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.358742 5050 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.358755 5050 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.358767 5050 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.358780 5050 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.358793 5050 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.358806 5050 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.358819 5050 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.358831 5050 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.358845 5050 feature_gate.go:330] unrecognized feature gate: PinnedImages Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.358856 5050 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.358868 5050 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.358879 5050 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.358893 5050 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.358909 5050 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
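
Editor's note: after each pass over the gate list, the kubelet logs the effective set as "feature gates: {map[...]}" (it appears above and is repeated below with the same contents). That Go map literal can be turned into a Python dict for programmatic checks; a sketch, still assuming the kubelet-journal.txt dump:

import re
from pathlib import Path

LOG_PATH = Path("kubelet-journal.txt")  # assumed filename
text = LOG_PATH.read_text()

# Summary looks like: feature gates: {map[CloudDualStackNodeIPs:true ... VolumeAttributesClass:false]}
m = re.search(r"feature gates: \{map\[([^\]]*)\]\}", text)
effective = {}
if m:
    for pair in m.group(1).split():
        name, _, value = pair.partition(":")
        effective[name] = (value == "true")

print(effective.get("KMSv1"))                      # True
print(effective.get("DynamicResourceAllocation"))  # False
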
Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.358923 5050 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.358936 5050 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.358948 5050 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.358963 5050 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.358978 5050 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.358996 5050 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.359050 5050 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.359064 5050 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.359075 5050 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.359086 5050 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.359097 5050 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.359107 5050 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.359118 5050 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.359129 5050 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.359147 5050 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.360968 5050 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.361102 5050 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.361107 5050 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.361112 5050 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.361116 5050 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.361121 5050 feature_gate.go:330] unrecognized feature gate: PlatformOperators Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.361126 5050 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 
13:48:29.361131 5050 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.361138 5050 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.361151 5050 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.361157 5050 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.361161 5050 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.361168 5050 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.361174 5050 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.361178 5050 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.361182 5050 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.361187 5050 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.361192 5050 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.361196 5050 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.361342 5050 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.361648 5050 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.361656 5050 feature_gate.go:330] unrecognized feature gate: OVNObservability Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.361663 5050 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.361670 5050 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.361676 5050 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.361682 5050 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.361688 5050 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.361694 5050 feature_gate.go:330] unrecognized feature gate: GatewayAPI Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.361703 5050 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.361709 5050 feature_gate.go:330] unrecognized feature gate: PinnedImages Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.361715 5050 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.361720 5050 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.361725 5050 
feature_gate.go:330] unrecognized feature gate: ManagedBootImages Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.361731 5050 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.361737 5050 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.361742 5050 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.361748 5050 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.361759 5050 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.361765 5050 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.361771 5050 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.361777 5050 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.361783 5050 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.361788 5050 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.361793 5050 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.361799 5050 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.361804 5050 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.361810 5050 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.361815 5050 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.361820 5050 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.361826 5050 feature_gate.go:330] unrecognized feature gate: SignatureStores Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.361832 5050 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.361841 5050 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
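
Editor's note: many of the flag values above ("2m0s", "100ms", "4h0m0s", and so on) and the certificate-rotation messages further below use Go's duration syntax. A small helper for converting those strings to seconds, handling only the h/m/s/ms units that actually occur in this log (a simplification, not Go's full parser):

import re

def go_duration_to_seconds(s: str) -> float:
    # e.g. "4h0m0s" -> 14400.0, "100ms" -> 0.1
    units = {"h": 3600.0, "m": 60.0, "s": 1.0, "ms": 0.001}
    total = 0.0
    for value, unit in re.findall(r"(\d+(?:\.\d+)?)(ms|h|m|s)", s):
        total += float(value) * units[unit]
    return total

print(go_duration_to_seconds("4h0m0s"))  # 14400.0
print(go_duration_to_seconds("100ms"))   # 0.1
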
Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.361852 5050 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.361859 5050 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.361865 5050 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.361871 5050 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.361876 5050 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.361882 5050 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.361887 5050 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.361893 5050 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.361898 5050 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.361903 5050 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.361910 5050 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.361917 5050 feature_gate.go:330] unrecognized feature gate: Example Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.361923 5050 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.361929 5050 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.361936 5050 feature_gate.go:330] unrecognized feature gate: InsightsConfig Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.361942 5050 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.361948 5050 feature_gate.go:330] unrecognized feature gate: NewOLM Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.361953 5050 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.361961 5050 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.361970 5050 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.362256 5050 server.go:940] "Client rotation is on, will bootstrap in background" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.365704 5050 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.365822 5050 
certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.366433 5050 server.go:997] "Starting client certificate rotation" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.366467 5050 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.366644 5050 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2026-01-06 10:02:14.067883842 +0000 UTC Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.366709 5050 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 620h13m44.701176554s for next certificate rotation Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.373066 5050 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.374911 5050 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.383555 5050 log.go:25] "Validated CRI v1 runtime API" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.401789 5050 log.go:25] "Validated CRI v1 image API" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.405082 5050 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.408833 5050 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2025-12-11-13-43-43-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.408868 5050 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}] Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.425813 5050 manager.go:217] Machine: {Timestamp:2025-12-11 13:48:29.421356161 +0000 UTC m=+0.265078757 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:c31d9414-5746-4e21-8ce3-ec91383aa495 BootID:b2271413-c496-4418-9f0f-0dc8363c3a86 Filesystems:[{Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 
Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:34:f5:b4 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:34:f5:b4 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:b8:8d:27 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:ae:f7:2c Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:21:82:49 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:bc:82:51 Speed:-1 Mtu:1496} {Name:ens7.23 MacAddress:52:54:00:b9:77:96 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:62:b5:63:10:60:85 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:92:4a:bb:4f:2d:40 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 
Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.426058 5050 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.426226 5050 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.426550 5050 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.426712 5050 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.426745 5050 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.426949 5050 topology_manager.go:138] "Creating topology manager with none policy" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.426959 5050 container_manager_linux.go:303] "Creating device plugin manager" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.427155 5050 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Dec 11 13:48:29 crc 
kubenswrapper[5050]: I1211 13:48:29.427180 5050 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.427505 5050 state_mem.go:36] "Initialized new in-memory state store" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.427586 5050 server.go:1245] "Using root directory" path="/var/lib/kubelet" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.428375 5050 kubelet.go:418] "Attempting to sync node with API server" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.428395 5050 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.428417 5050 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.428435 5050 kubelet.go:324] "Adding apiserver pod source" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.428447 5050 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.447375 5050 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.448525 5050 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.450033 5050 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.450351 5050 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 13:48:29 crc kubenswrapper[5050]: E1211 13:48:29.450395 5050 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 13:48:29 crc kubenswrapper[5050]: E1211 13:48:29.450454 5050 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.450757 5050 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.451525 5050 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.451563 5050 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.451577 5050 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" 
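The reflector failures just above ("dial tcp 38.102.83.147:6443: connect: connection refused" against https://api-int.crc.testing:6443) indicate the kubelet's informers cannot reach the API server yet, which is normal this early in boot on a single-node cluster where the kube-apiserver itself runs as a static pod that the kubelet has not started. A minimal sketch (illustrative only; the endpoint is taken from the log lines, the program is not part of the kubelet) that reproduces the same condition from the node:

    // apiprobe.go - illustrative sketch: dial the endpoint the kubelet uses and
    // report whether it is reachable, mirroring the reflector errors above.
    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	addr := "api-int.crc.testing:6443" // endpoint from the log lines above
    	conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
    	if err != nil {
    		// While the static kube-apiserver pod is still coming up, this prints
    		// the same "connect: connection refused" seen in the reflector errors.
    		fmt.Printf("dial %s failed: %v\n", addr, err)
    		return
    	}
    	defer conn.Close()
    	fmt.Printf("dial %s succeeded: apiserver is reachable\n", addr)
    }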
Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.451592 5050 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.451613 5050 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.451626 5050 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.451638 5050 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.451657 5050 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.451673 5050 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.451687 5050 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.451705 5050 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.451717 5050 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.452394 5050 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.453256 5050 server.go:1280] "Started kubelet" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.453498 5050 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.453494 5050 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.453648 5050 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.454028 5050 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.455772 5050 server.go:460] "Adding debug handlers to kubelet server" Dec 11 13:48:29 crc systemd[1]: Started Kubernetes Kubelet. 
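The certificate_manager entries that follow show why the kubelet-serving certificate will rotate almost immediately: the manager schedules rotation at a jittered point (roughly 70 to 90 percent) of each certificate's validity window, and for the serving pair that point (2025-11-23) already lies in the past at boot, so a new signing request should be issued as soon as the API server answers. A minimal sketch (illustrative only; the standalone program and the fixed 80 percent fraction are assumptions, not the kubelet's code) that reads the same PEM pair named in the log and prints an approximate deadline:

    // certdeadline.go - illustrative sketch: inspect the kubelet serving PEM and
    // print its validity window plus an approximate rotation deadline.
    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"log"
    	"os"
    	"time"
    )

    func main() {
    	path := "/var/lib/kubelet/pki/kubelet-server-current.pem" // path from the log
    	data, err := os.ReadFile(path)
    	if err != nil {
    		log.Fatalf("read %s: %v", path, err)
    	}
    	// The file holds certificate and key back to back; find the CERTIFICATE block.
    	var cert *x509.Certificate
    	for rest := data; len(rest) > 0; {
    		var block *pem.Block
    		block, rest = pem.Decode(rest)
    		if block == nil {
    			break
    		}
    		if block.Type == "CERTIFICATE" {
    			if cert, err = x509.ParseCertificate(block.Bytes); err != nil {
    				log.Fatalf("parse certificate: %v", err)
    			}
    			break
    		}
    	}
    	if cert == nil {
    		log.Fatalf("no certificate block in %s", path)
    	}
    	lifetime := cert.NotAfter.Sub(cert.NotBefore)
    	// 0.8 is an illustrative fixed fraction; the real manager jitters the point.
    	deadline := cert.NotBefore.Add(time.Duration(float64(lifetime) * 0.8))
    	fmt.Printf("expires:  %s\n", cert.NotAfter.UTC())
    	fmt.Printf("deadline: %s (illustrative, 80%% of lifetime)\n", deadline.UTC())
    	fmt.Printf("wait:     %s\n", time.Until(deadline).Round(time.Second))
    }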
Dec 11 13:48:29 crc kubenswrapper[5050]: E1211 13:48:29.456604 5050 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.147:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.18802d57e4cbb847 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 13:48:29.453195335 +0000 UTC m=+0.296917931,LastTimestamp:2025-12-11 13:48:29.453195335 +0000 UTC m=+0.296917931,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.457180 5050 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.457259 5050 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.457500 5050 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 19:31:12.54857246 +0000 UTC Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.457761 5050 volume_manager.go:287] "The desired_state_of_world populator starts" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.458057 5050 volume_manager.go:289] "Starting Kubelet Volume Manager" Dec 11 13:48:29 crc kubenswrapper[5050]: E1211 13:48:29.457854 5050 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.457772 5050 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.458307 5050 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 13:48:29 crc kubenswrapper[5050]: E1211 13:48:29.458378 5050 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 13:48:29 crc kubenswrapper[5050]: E1211 13:48:29.459614 5050 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.147:6443: connect: connection refused" interval="200ms" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.459783 5050 factory.go:55] Registering systemd factory Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.459820 5050 factory.go:221] Registration of the systemd container factory successfully Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.462181 5050 factory.go:153] Registering CRI-O factory Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.462223 5050 factory.go:221] Registration of the crio container factory successfully Dec 11 13:48:29 crc 
kubenswrapper[5050]: I1211 13:48:29.462314 5050 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.462356 5050 factory.go:103] Registering Raw factory Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.462385 5050 manager.go:1196] Started watching for new ooms in manager Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.463989 5050 manager.go:319] Starting recovery of all containers Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.475905 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.476086 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.476108 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.476124 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.476141 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.476158 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.476173 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.476187 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.476235 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Dec 11 13:48:29 
crc kubenswrapper[5050]: I1211 13:48:29.476253 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.476271 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.476286 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.476302 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.476322 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.476336 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.476352 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.476369 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.476383 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.476396 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.476412 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.476432 5050 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.476447 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.476461 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.476475 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.476495 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.476513 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.476533 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.476551 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.476567 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.476582 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.476598 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.476616 5050 reconstruct.go:130] 
"Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.476634 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.476649 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.476663 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.476676 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.476691 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.476709 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.476726 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.476740 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.476754 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.476769 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.476784 5050 reconstruct.go:130] "Volume is marked as uncertain and added 
into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.476800 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.476815 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.476830 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.476869 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.476884 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.476900 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.476957 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.476973 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.476986 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.477005 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.477039 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.477052 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.477067 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.477081 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.477102 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.477131 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.477150 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.477170 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.477192 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.477215 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.477236 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.477263 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" 
volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.477279 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.477297 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.477317 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.477337 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.477357 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.477373 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.477390 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.477408 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.477427 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.477447 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.477465 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" 
volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.477485 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.477505 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.477529 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.477546 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.477565 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.477629 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.477650 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.477681 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.477703 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.477721 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.477737 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" 
volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.477757 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.477785 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.477806 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.477830 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.477848 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.477869 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.477891 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.477914 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.477932 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.477956 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.477977 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.477994 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.478037 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.478059 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.478079 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.478098 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.478120 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.478149 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.478167 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.478187 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.478209 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.478231 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" 
volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.478251 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.478273 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.478290 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.478317 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.478338 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.478359 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.478376 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.478394 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.478412 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.478432 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.478451 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" 
volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.478476 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.478494 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.478518 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.478536 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.478555 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.478574 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.478597 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.478615 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.478633 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.478650 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.478667 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.478686 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.478702 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.478717 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.478737 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.478755 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.478775 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.478791 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.478810 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.478828 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.478844 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.478866 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.478888 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.478907 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.478926 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.478945 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.478962 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.478979 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.478995 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.479043 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.479067 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.479086 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.479110 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" 
volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.479129 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.479153 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.479173 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.479193 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.479214 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.479240 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.479259 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.479279 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.479298 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.479318 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.479336 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" 
volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.479353 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.479373 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.479394 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.479413 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.479434 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.479454 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.479473 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.479493 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.479512 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.479531 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.479554 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" 
volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.479572 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.479591 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.479610 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.479628 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.479647 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.479668 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.479687 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.479708 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.479729 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.479750 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.479773 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" 
volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.479794 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.479811 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.479828 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.479846 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.479869 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.479888 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.479906 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.479925 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.479949 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.479968 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.479991 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" 
volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.480089 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.484217 5050 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.484256 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.484277 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.484290 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.484307 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.484335 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.484382 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.484394 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.484411 5050 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.484425 5050 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.484438 5050 reconstruct.go:97] "Volume reconstruction finished" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.484448 5050 reconciler.go:26] "Reconciler: start to sync state" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.487485 5050 manager.go:324] Recovery completed Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.497318 5050 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.500782 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.500831 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.500844 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.504024 5050 cpu_manager.go:225] "Starting CPU manager" policy="none" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.504042 5050 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.504060 5050 state_mem.go:36] "Initialized new in-memory state store" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.542666 5050 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.544704 5050 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.544767 5050 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.544798 5050 kubelet.go:2335] "Starting kubelet main sync loop" Dec 11 13:48:29 crc kubenswrapper[5050]: E1211 13:48:29.544853 5050 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 11 13:48:29 crc kubenswrapper[5050]: W1211 13:48:29.545809 5050 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 13:48:29 crc kubenswrapper[5050]: E1211 13:48:29.545884 5050 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.554849 5050 policy_none.go:49] "None policy: Start" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.556294 5050 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.556359 5050 state_mem.go:35] "Initializing new in-memory state store" Dec 11 13:48:29 crc kubenswrapper[5050]: E1211 13:48:29.558526 5050 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.637999 5050 manager.go:334] "Starting Device Plugin manager" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.638085 5050 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.638103 5050 server.go:79] "Starting device plugin registration server" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.638619 5050 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.638645 5050 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.639066 5050 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.639159 5050 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.639175 5050 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 11 13:48:29 crc kubenswrapper[5050]: E1211 13:48:29.644440 5050 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.645769 5050 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Dec 11 13:48:29 crc kubenswrapper[5050]: 
I1211 13:48:29.645841 5050 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.651399 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.651481 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.651491 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.651668 5050 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.651812 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.651866 5050 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.652744 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.652781 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.652795 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.652747 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.652856 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.652869 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.652966 5050 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.653029 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.653057 5050 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.653925 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.653948 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.653957 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.654066 5050 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.654192 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.654235 5050 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.654405 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.654438 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.654450 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.655172 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.655204 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.655214 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.655299 5050 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.655420 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.655456 5050 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.655686 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.655723 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.655731 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.655884 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.655900 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.655909 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.656093 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.656123 5050 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.656349 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.656378 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.656391 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.656730 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.656767 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.656781 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:48:29 crc kubenswrapper[5050]: E1211 13:48:29.661326 5050 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.147:6443: connect: connection refused" interval="400ms" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.686780 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.686848 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.686873 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.686912 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.686929 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.686989 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.687063 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.687088 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.687150 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.687198 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.687231 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.687310 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.687335 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.687369 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.687389 5050 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.739205 5050 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.741139 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.741195 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.741206 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.741241 5050 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 11 13:48:29 crc kubenswrapper[5050]: E1211 13:48:29.741875 5050 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.147:6443: connect: connection refused" node="crc" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.788761 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.788847 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.788922 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.789038 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.789043 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.789162 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 11 13:48:29 crc kubenswrapper[5050]: 
I1211 13:48:29.789223 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.789249 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.789271 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.789312 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.789312 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.789328 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.789391 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.789400 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.789427 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.789394 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod 
\"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.789590 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.789590 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.789620 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.789670 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.789647 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.789730 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.789751 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.789753 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.789785 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.789789 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: 
\"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.789807 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.789828 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.789833 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.789907 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.942337 5050 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.943779 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.943825 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.943836 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.943865 5050 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 11 13:48:29 crc kubenswrapper[5050]: E1211 13:48:29.944426 5050 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.147:6443: connect: connection refused" node="crc" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.983346 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 11 13:48:29 crc kubenswrapper[5050]: I1211 13:48:29.989935 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Dec 11 13:48:30 crc kubenswrapper[5050]: I1211 13:48:30.009404 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 13:48:30 crc kubenswrapper[5050]: W1211 13:48:30.013847 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-9fc760b23a51326c61a27a593360bbe7bdac21fa86022c1dabd6bf932834f8e9 WatchSource:0}: Error finding container 9fc760b23a51326c61a27a593360bbe7bdac21fa86022c1dabd6bf932834f8e9: Status 404 returned error can't find the container with id 9fc760b23a51326c61a27a593360bbe7bdac21fa86022c1dabd6bf932834f8e9 Dec 11 13:48:30 crc kubenswrapper[5050]: I1211 13:48:30.017244 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 11 13:48:30 crc kubenswrapper[5050]: W1211 13:48:30.018346 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-0d54bb88da8bb076f7fad53fa5826fe38ef8fdb26e1572f2bf8e4ae45b21ac1f WatchSource:0}: Error finding container 0d54bb88da8bb076f7fad53fa5826fe38ef8fdb26e1572f2bf8e4ae45b21ac1f: Status 404 returned error can't find the container with id 0d54bb88da8bb076f7fad53fa5826fe38ef8fdb26e1572f2bf8e4ae45b21ac1f Dec 11 13:48:30 crc kubenswrapper[5050]: I1211 13:48:30.040823 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 11 13:48:30 crc kubenswrapper[5050]: E1211 13:48:30.062831 5050 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.147:6443: connect: connection refused" interval="800ms" Dec 11 13:48:30 crc kubenswrapper[5050]: W1211 13:48:30.280968 5050 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 13:48:30 crc kubenswrapper[5050]: E1211 13:48:30.281133 5050 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 13:48:30 crc kubenswrapper[5050]: W1211 13:48:30.307955 5050 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 13:48:30 crc kubenswrapper[5050]: E1211 13:48:30.308097 5050 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 13:48:30 crc kubenswrapper[5050]: I1211 13:48:30.345542 5050 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 13:48:30 crc 
kubenswrapper[5050]: I1211 13:48:30.455713 5050 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 13:48:30 crc kubenswrapper[5050]: I1211 13:48:30.458739 5050 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 19:49:23.875233283 +0000 UTC Dec 11 13:48:30 crc kubenswrapper[5050]: I1211 13:48:30.545249 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:48:30 crc kubenswrapper[5050]: I1211 13:48:30.545846 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:48:30 crc kubenswrapper[5050]: I1211 13:48:30.545871 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:48:30 crc kubenswrapper[5050]: I1211 13:48:30.545960 5050 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 11 13:48:30 crc kubenswrapper[5050]: E1211 13:48:30.546644 5050 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.147:6443: connect: connection refused" node="crc" Dec 11 13:48:30 crc kubenswrapper[5050]: I1211 13:48:30.549784 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"0d54bb88da8bb076f7fad53fa5826fe38ef8fdb26e1572f2bf8e4ae45b21ac1f"} Dec 11 13:48:30 crc kubenswrapper[5050]: I1211 13:48:30.551332 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"9fc760b23a51326c61a27a593360bbe7bdac21fa86022c1dabd6bf932834f8e9"} Dec 11 13:48:30 crc kubenswrapper[5050]: W1211 13:48:30.556944 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-ec8c8afe31c60e8256bfdb26edb33f07866f5ff1b4230a1e179cd5e90d2564e7 WatchSource:0}: Error finding container ec8c8afe31c60e8256bfdb26edb33f07866f5ff1b4230a1e179cd5e90d2564e7: Status 404 returned error can't find the container with id ec8c8afe31c60e8256bfdb26edb33f07866f5ff1b4230a1e179cd5e90d2564e7 Dec 11 13:48:30 crc kubenswrapper[5050]: W1211 13:48:30.560999 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-111ed4b6c437278631099b0becb3870cdda39b72d980e601e015f9058b77c0e2 WatchSource:0}: Error finding container 111ed4b6c437278631099b0becb3870cdda39b72d980e601e015f9058b77c0e2: Status 404 returned error can't find the container with id 111ed4b6c437278631099b0becb3870cdda39b72d980e601e015f9058b77c0e2 Dec 11 13:48:30 crc kubenswrapper[5050]: W1211 13:48:30.589607 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-9b18ef3f3a3d02fa59959ae904523debc5eac308fd619074950af5cbdd7c82f0 WatchSource:0}: Error finding container 
9b18ef3f3a3d02fa59959ae904523debc5eac308fd619074950af5cbdd7c82f0: Status 404 returned error can't find the container with id 9b18ef3f3a3d02fa59959ae904523debc5eac308fd619074950af5cbdd7c82f0 Dec 11 13:48:30 crc kubenswrapper[5050]: W1211 13:48:30.592269 5050 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 13:48:30 crc kubenswrapper[5050]: E1211 13:48:30.592393 5050 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 13:48:30 crc kubenswrapper[5050]: W1211 13:48:30.846946 5050 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 13:48:30 crc kubenswrapper[5050]: E1211 13:48:30.847074 5050 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 13:48:30 crc kubenswrapper[5050]: E1211 13:48:30.864172 5050 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.147:6443: connect: connection refused" interval="1.6s" Dec 11 13:48:31 crc kubenswrapper[5050]: I1211 13:48:31.347226 5050 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 13:48:31 crc kubenswrapper[5050]: I1211 13:48:31.349385 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:48:31 crc kubenswrapper[5050]: I1211 13:48:31.349435 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:48:31 crc kubenswrapper[5050]: I1211 13:48:31.349452 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:48:31 crc kubenswrapper[5050]: I1211 13:48:31.349481 5050 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 11 13:48:31 crc kubenswrapper[5050]: E1211 13:48:31.350113 5050 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.147:6443: connect: connection refused" node="crc" Dec 11 13:48:31 crc kubenswrapper[5050]: I1211 13:48:31.455145 5050 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 13:48:31 crc kubenswrapper[5050]: I1211 13:48:31.459170 5050 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, 
rotation deadline is 2026-01-13 17:21:00.929241668 +0000 UTC Dec 11 13:48:31 crc kubenswrapper[5050]: I1211 13:48:31.459245 5050 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 795h32m29.47000426s for next certificate rotation Dec 11 13:48:31 crc kubenswrapper[5050]: I1211 13:48:31.557744 5050 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="d61208f7fc4b2f939d418793e7d1e0f9b12ce4a9c4fd5be0d223050a428f4fad" exitCode=0 Dec 11 13:48:31 crc kubenswrapper[5050]: I1211 13:48:31.557834 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"d61208f7fc4b2f939d418793e7d1e0f9b12ce4a9c4fd5be0d223050a428f4fad"} Dec 11 13:48:31 crc kubenswrapper[5050]: I1211 13:48:31.557962 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"111ed4b6c437278631099b0becb3870cdda39b72d980e601e015f9058b77c0e2"} Dec 11 13:48:31 crc kubenswrapper[5050]: I1211 13:48:31.558193 5050 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 13:48:31 crc kubenswrapper[5050]: I1211 13:48:31.559442 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"a5990aa25c5e6fd0c4328adb0efd594ff4a31f1ee1734928beaa2608b6f16ccf"} Dec 11 13:48:31 crc kubenswrapper[5050]: I1211 13:48:31.559478 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"ec8c8afe31c60e8256bfdb26edb33f07866f5ff1b4230a1e179cd5e90d2564e7"} Dec 11 13:48:31 crc kubenswrapper[5050]: I1211 13:48:31.559578 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:48:31 crc kubenswrapper[5050]: I1211 13:48:31.559622 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:48:31 crc kubenswrapper[5050]: I1211 13:48:31.559645 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:48:31 crc kubenswrapper[5050]: I1211 13:48:31.561522 5050 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="bcaf2650843149a711d937e5569baf705ad4b603ed5f7b8f1d9b8b215f064215" exitCode=0 Dec 11 13:48:31 crc kubenswrapper[5050]: I1211 13:48:31.561626 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"bcaf2650843149a711d937e5569baf705ad4b603ed5f7b8f1d9b8b215f064215"} Dec 11 13:48:31 crc kubenswrapper[5050]: I1211 13:48:31.561652 5050 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 13:48:31 crc kubenswrapper[5050]: I1211 13:48:31.562219 5050 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 13:48:31 crc kubenswrapper[5050]: I1211 13:48:31.562982 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:48:31 crc 
kubenswrapper[5050]: I1211 13:48:31.563043 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:48:31 crc kubenswrapper[5050]: I1211 13:48:31.562998 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:48:31 crc kubenswrapper[5050]: I1211 13:48:31.563092 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:48:31 crc kubenswrapper[5050]: I1211 13:48:31.563123 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:48:31 crc kubenswrapper[5050]: I1211 13:48:31.563145 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:48:31 crc kubenswrapper[5050]: I1211 13:48:31.564258 5050 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="d112e5e541ffeecf4bc60cd76ad38d4f99af3c00aabc9522c7613ae824ae3e54" exitCode=0 Dec 11 13:48:31 crc kubenswrapper[5050]: I1211 13:48:31.564353 5050 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 13:48:31 crc kubenswrapper[5050]: I1211 13:48:31.564368 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"d112e5e541ffeecf4bc60cd76ad38d4f99af3c00aabc9522c7613ae824ae3e54"} Dec 11 13:48:31 crc kubenswrapper[5050]: I1211 13:48:31.565699 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:48:31 crc kubenswrapper[5050]: I1211 13:48:31.565730 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:48:31 crc kubenswrapper[5050]: I1211 13:48:31.565740 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:48:31 crc kubenswrapper[5050]: I1211 13:48:31.566834 5050 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="31351645739cb94814e5a03c7a4e120d2b768c560a7f71a6ba00dd8801c1750a" exitCode=0 Dec 11 13:48:31 crc kubenswrapper[5050]: I1211 13:48:31.566876 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"31351645739cb94814e5a03c7a4e120d2b768c560a7f71a6ba00dd8801c1750a"} Dec 11 13:48:31 crc kubenswrapper[5050]: I1211 13:48:31.566902 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"9b18ef3f3a3d02fa59959ae904523debc5eac308fd619074950af5cbdd7c82f0"} Dec 11 13:48:31 crc kubenswrapper[5050]: I1211 13:48:31.566996 5050 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 13:48:31 crc kubenswrapper[5050]: I1211 13:48:31.567708 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:48:31 crc kubenswrapper[5050]: I1211 13:48:31.567740 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:48:31 crc kubenswrapper[5050]: 
I1211 13:48:31.567750 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:48:32 crc kubenswrapper[5050]: W1211 13:48:32.371820 5050 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 13:48:32 crc kubenswrapper[5050]: E1211 13:48:32.372387 5050 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 13:48:32 crc kubenswrapper[5050]: W1211 13:48:32.425970 5050 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 13:48:32 crc kubenswrapper[5050]: E1211 13:48:32.426116 5050 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 13:48:32 crc kubenswrapper[5050]: I1211 13:48:32.455811 5050 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 13:48:32 crc kubenswrapper[5050]: E1211 13:48:32.465686 5050 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.147:6443: connect: connection refused" interval="3.2s" Dec 11 13:48:32 crc kubenswrapper[5050]: W1211 13:48:32.525495 5050 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 13:48:32 crc kubenswrapper[5050]: E1211 13:48:32.525632 5050 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 13:48:32 crc kubenswrapper[5050]: I1211 13:48:32.571749 5050 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="2f6020f4b224493c9fe247c74a727434ae8c5b6a567c5765183c741814a60b19" exitCode=0 Dec 11 13:48:32 crc kubenswrapper[5050]: I1211 13:48:32.571842 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"2f6020f4b224493c9fe247c74a727434ae8c5b6a567c5765183c741814a60b19"} Dec 11 13:48:32 crc kubenswrapper[5050]: I1211 
13:48:32.571936 5050 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 13:48:32 crc kubenswrapper[5050]: I1211 13:48:32.577118 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:48:32 crc kubenswrapper[5050]: I1211 13:48:32.577167 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:48:32 crc kubenswrapper[5050]: I1211 13:48:32.577180 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:48:32 crc kubenswrapper[5050]: I1211 13:48:32.580597 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"955d5798f59dc76a717ed7c367ba8a99b12f8975880c33014a0a5474ba388da8"} Dec 11 13:48:32 crc kubenswrapper[5050]: I1211 13:48:32.580632 5050 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 13:48:32 crc kubenswrapper[5050]: I1211 13:48:32.581819 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:48:32 crc kubenswrapper[5050]: I1211 13:48:32.581852 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:48:32 crc kubenswrapper[5050]: I1211 13:48:32.581868 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:48:32 crc kubenswrapper[5050]: I1211 13:48:32.589889 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"aa46193e5c8a52c18bfe9224a45eaa212805b8012309fa25b4a5c39d22dd7785"} Dec 11 13:48:32 crc kubenswrapper[5050]: I1211 13:48:32.592413 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"a8ed77421db2de6a328f1925acda12a41ca2a56cf101a962c541cf04afa16205"} Dec 11 13:48:32 crc kubenswrapper[5050]: I1211 13:48:32.594216 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"a4fb1e4d5d5ef2610256022be54cf6bfa7d84fd5b3ba748820f21523691660f8"} Dec 11 13:48:32 crc kubenswrapper[5050]: I1211 13:48:32.594246 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"cfb1d118b42c67bf01a21819c2cce27c155fc53a15b8dc848a22c4f162759e59"} Dec 11 13:48:32 crc kubenswrapper[5050]: I1211 13:48:32.950828 5050 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 13:48:32 crc kubenswrapper[5050]: I1211 13:48:32.952476 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:48:32 crc kubenswrapper[5050]: I1211 13:48:32.952526 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:48:32 crc kubenswrapper[5050]: I1211 13:48:32.952538 5050 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:48:32 crc kubenswrapper[5050]: I1211 13:48:32.952567 5050 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 11 13:48:32 crc kubenswrapper[5050]: E1211 13:48:32.953190 5050 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.147:6443: connect: connection refused" node="crc" Dec 11 13:48:33 crc kubenswrapper[5050]: I1211 13:48:33.599484 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"32a968fd70cd12b505bdb2d62d76a9e170e69751ddc9681c4a2370e649bab5ba"} Dec 11 13:48:33 crc kubenswrapper[5050]: I1211 13:48:33.599539 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"fb17c84fcbbac0b0cf96013c02954e53f9f8242caef69b22edcbb35e7b5f7d2a"} Dec 11 13:48:33 crc kubenswrapper[5050]: I1211 13:48:33.599683 5050 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 13:48:33 crc kubenswrapper[5050]: I1211 13:48:33.600655 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:48:33 crc kubenswrapper[5050]: I1211 13:48:33.600690 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:48:33 crc kubenswrapper[5050]: I1211 13:48:33.600703 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:48:33 crc kubenswrapper[5050]: I1211 13:48:33.603109 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"5c2283a6b4d7e02755ef3c112c65859a852db517e3fd1f33779935a45d11db1f"} Dec 11 13:48:33 crc kubenswrapper[5050]: I1211 13:48:33.603149 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"dc076730e23713ac357165823064125008a54267d6bdfc7f63dfe6f59fda1d8f"} Dec 11 13:48:33 crc kubenswrapper[5050]: I1211 13:48:33.605813 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"6cc748ac537132a4bb5d6c2b4b0fdb905239225d53891bba26ddd07afe31b25b"} Dec 11 13:48:33 crc kubenswrapper[5050]: I1211 13:48:33.605874 5050 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 13:48:33 crc kubenswrapper[5050]: I1211 13:48:33.606832 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:48:33 crc kubenswrapper[5050]: I1211 13:48:33.606873 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:48:33 crc kubenswrapper[5050]: I1211 13:48:33.606884 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:48:33 crc kubenswrapper[5050]: I1211 13:48:33.610228 5050 
generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="6ac13709c07c32c44bd85674138ab8933caf15cbd1ad9a97dc8dbbc9d4cc89ae" exitCode=0 Dec 11 13:48:33 crc kubenswrapper[5050]: I1211 13:48:33.610369 5050 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 13:48:33 crc kubenswrapper[5050]: I1211 13:48:33.610402 5050 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 13:48:33 crc kubenswrapper[5050]: I1211 13:48:33.610404 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"6ac13709c07c32c44bd85674138ab8933caf15cbd1ad9a97dc8dbbc9d4cc89ae"} Dec 11 13:48:33 crc kubenswrapper[5050]: I1211 13:48:33.613944 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:48:33 crc kubenswrapper[5050]: I1211 13:48:33.613991 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:48:33 crc kubenswrapper[5050]: I1211 13:48:33.614002 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:48:33 crc kubenswrapper[5050]: I1211 13:48:33.615229 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:48:33 crc kubenswrapper[5050]: I1211 13:48:33.615264 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:48:33 crc kubenswrapper[5050]: I1211 13:48:33.615276 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:48:34 crc kubenswrapper[5050]: I1211 13:48:34.500387 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 11 13:48:34 crc kubenswrapper[5050]: I1211 13:48:34.616059 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"4ec38ef76428e2528c1f8f54b52125cd13484e0e07d5b2db41676afd53bacf47"} Dec 11 13:48:34 crc kubenswrapper[5050]: I1211 13:48:34.616129 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"6a16547a61c3707938e785b0c92b50c8b4edba0a2781b5a56a9ba17ae8bddc5e"} Dec 11 13:48:34 crc kubenswrapper[5050]: I1211 13:48:34.619200 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"dfb942fdbd9d669df2b809fbb38479ab9e6cfcf013fb67c19a2155335b5018e3"} Dec 11 13:48:34 crc kubenswrapper[5050]: I1211 13:48:34.619256 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"4ae4bb2bd785bbc626814e84d88ab77f2705e63cd987329e5da3806265c35256"} Dec 11 13:48:34 crc kubenswrapper[5050]: I1211 13:48:34.619264 5050 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 11 13:48:34 crc kubenswrapper[5050]: I1211 13:48:34.619293 5050 kubelet_node_status.go:401] "Setting node annotation to enable volume controller 
attach/detach" Dec 11 13:48:34 crc kubenswrapper[5050]: I1211 13:48:34.619333 5050 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 13:48:34 crc kubenswrapper[5050]: I1211 13:48:34.619296 5050 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 13:48:34 crc kubenswrapper[5050]: I1211 13:48:34.620427 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:48:34 crc kubenswrapper[5050]: I1211 13:48:34.620451 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:48:34 crc kubenswrapper[5050]: I1211 13:48:34.620460 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:48:34 crc kubenswrapper[5050]: I1211 13:48:34.620463 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:48:34 crc kubenswrapper[5050]: I1211 13:48:34.620496 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:48:34 crc kubenswrapper[5050]: I1211 13:48:34.620508 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:48:34 crc kubenswrapper[5050]: I1211 13:48:34.620471 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:48:34 crc kubenswrapper[5050]: I1211 13:48:34.620667 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:48:34 crc kubenswrapper[5050]: I1211 13:48:34.620676 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:48:34 crc kubenswrapper[5050]: I1211 13:48:34.687266 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 13:48:35 crc kubenswrapper[5050]: I1211 13:48:35.626868 5050 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 13:48:35 crc kubenswrapper[5050]: I1211 13:48:35.627530 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"83c8d1b9d2e1ee651f7043853d305369ba45333d00b9b4d3695de9cd763f1e23"} Dec 11 13:48:35 crc kubenswrapper[5050]: I1211 13:48:35.627594 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"e1f2df17c1c9d05f5906f32aff2f35a0eab2a1b5f005b61ddcc6f72b3e137d72"} Dec 11 13:48:35 crc kubenswrapper[5050]: I1211 13:48:35.627692 5050 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 11 13:48:35 crc kubenswrapper[5050]: I1211 13:48:35.627720 5050 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 13:48:35 crc kubenswrapper[5050]: I1211 13:48:35.627960 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:48:35 crc kubenswrapper[5050]: I1211 13:48:35.627999 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:48:35 crc kubenswrapper[5050]: I1211 13:48:35.628038 5050 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:48:35 crc kubenswrapper[5050]: I1211 13:48:35.628919 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:48:35 crc kubenswrapper[5050]: I1211 13:48:35.629049 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:48:35 crc kubenswrapper[5050]: I1211 13:48:35.629070 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:48:36 crc kubenswrapper[5050]: I1211 13:48:36.153646 5050 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 13:48:36 crc kubenswrapper[5050]: I1211 13:48:36.155003 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:48:36 crc kubenswrapper[5050]: I1211 13:48:36.155061 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:48:36 crc kubenswrapper[5050]: I1211 13:48:36.155074 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:48:36 crc kubenswrapper[5050]: I1211 13:48:36.155116 5050 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 11 13:48:36 crc kubenswrapper[5050]: I1211 13:48:36.634938 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"62b60ca3edad13ed1b9c28956f47fbe09675247b3dfe88f53ae7e60760880f0c"} Dec 11 13:48:36 crc kubenswrapper[5050]: I1211 13:48:36.635045 5050 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 11 13:48:36 crc kubenswrapper[5050]: I1211 13:48:36.635127 5050 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 13:48:36 crc kubenswrapper[5050]: I1211 13:48:36.635059 5050 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 13:48:36 crc kubenswrapper[5050]: I1211 13:48:36.636529 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:48:36 crc kubenswrapper[5050]: I1211 13:48:36.636574 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:48:36 crc kubenswrapper[5050]: I1211 13:48:36.636589 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:48:36 crc kubenswrapper[5050]: I1211 13:48:36.636526 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:48:36 crc kubenswrapper[5050]: I1211 13:48:36.636813 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:48:36 crc kubenswrapper[5050]: I1211 13:48:36.636857 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:48:36 crc kubenswrapper[5050]: I1211 13:48:36.724523 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 13:48:37 crc kubenswrapper[5050]: I1211 13:48:37.637418 5050 prober_manager.go:312] "Failed to trigger a manual run" 
probe="Readiness" Dec 11 13:48:37 crc kubenswrapper[5050]: I1211 13:48:37.637479 5050 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 13:48:37 crc kubenswrapper[5050]: I1211 13:48:37.637485 5050 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 13:48:37 crc kubenswrapper[5050]: I1211 13:48:37.638783 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:48:37 crc kubenswrapper[5050]: I1211 13:48:37.638836 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:48:37 crc kubenswrapper[5050]: I1211 13:48:37.638852 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:48:37 crc kubenswrapper[5050]: I1211 13:48:37.638874 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:48:37 crc kubenswrapper[5050]: I1211 13:48:37.638905 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:48:37 crc kubenswrapper[5050]: I1211 13:48:37.638919 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:48:38 crc kubenswrapper[5050]: I1211 13:48:38.147929 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 11 13:48:38 crc kubenswrapper[5050]: I1211 13:48:38.148175 5050 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 13:48:38 crc kubenswrapper[5050]: I1211 13:48:38.149462 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:48:38 crc kubenswrapper[5050]: I1211 13:48:38.149506 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:48:38 crc kubenswrapper[5050]: I1211 13:48:38.149522 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:48:39 crc kubenswrapper[5050]: I1211 13:48:39.274905 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 11 13:48:39 crc kubenswrapper[5050]: I1211 13:48:39.275201 5050 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 13:48:39 crc kubenswrapper[5050]: I1211 13:48:39.277219 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:48:39 crc kubenswrapper[5050]: I1211 13:48:39.277283 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:48:39 crc kubenswrapper[5050]: I1211 13:48:39.277309 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:48:39 crc kubenswrapper[5050]: I1211 13:48:39.281256 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 11 13:48:39 crc kubenswrapper[5050]: I1211 13:48:39.293118 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 13:48:39 crc 
kubenswrapper[5050]: I1211 13:48:39.293381 5050 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 13:48:39 crc kubenswrapper[5050]: I1211 13:48:39.294823 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:48:39 crc kubenswrapper[5050]: I1211 13:48:39.294878 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:48:39 crc kubenswrapper[5050]: I1211 13:48:39.294899 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:48:39 crc kubenswrapper[5050]: I1211 13:48:39.643082 5050 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 13:48:39 crc kubenswrapper[5050]: E1211 13:48:39.644728 5050 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 11 13:48:39 crc kubenswrapper[5050]: I1211 13:48:39.644883 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:48:39 crc kubenswrapper[5050]: I1211 13:48:39.644943 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:48:39 crc kubenswrapper[5050]: I1211 13:48:39.644962 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:48:39 crc kubenswrapper[5050]: I1211 13:48:39.920782 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Dec 11 13:48:39 crc kubenswrapper[5050]: I1211 13:48:39.921320 5050 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 13:48:39 crc kubenswrapper[5050]: I1211 13:48:39.923422 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:48:39 crc kubenswrapper[5050]: I1211 13:48:39.923472 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:48:39 crc kubenswrapper[5050]: I1211 13:48:39.923487 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:48:41 crc kubenswrapper[5050]: I1211 13:48:41.191653 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 11 13:48:41 crc kubenswrapper[5050]: I1211 13:48:41.191933 5050 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 13:48:41 crc kubenswrapper[5050]: I1211 13:48:41.193525 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:48:41 crc kubenswrapper[5050]: I1211 13:48:41.193580 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:48:41 crc kubenswrapper[5050]: I1211 13:48:41.193593 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:48:41 crc kubenswrapper[5050]: I1211 13:48:41.197882 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 11 13:48:41 crc kubenswrapper[5050]: I1211 13:48:41.648939 5050 
kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 13:48:41 crc kubenswrapper[5050]: I1211 13:48:41.650375 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:48:41 crc kubenswrapper[5050]: I1211 13:48:41.650458 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:48:41 crc kubenswrapper[5050]: I1211 13:48:41.650485 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:48:42 crc kubenswrapper[5050]: I1211 13:48:42.593447 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Dec 11 13:48:42 crc kubenswrapper[5050]: I1211 13:48:42.593639 5050 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 13:48:42 crc kubenswrapper[5050]: I1211 13:48:42.594996 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:48:42 crc kubenswrapper[5050]: I1211 13:48:42.595078 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:48:42 crc kubenswrapper[5050]: I1211 13:48:42.595095 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:48:42 crc kubenswrapper[5050]: I1211 13:48:42.823239 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 11 13:48:42 crc kubenswrapper[5050]: I1211 13:48:42.823473 5050 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 13:48:42 crc kubenswrapper[5050]: I1211 13:48:42.825676 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:48:42 crc kubenswrapper[5050]: I1211 13:48:42.825727 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:48:42 crc kubenswrapper[5050]: I1211 13:48:42.825737 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:48:43 crc kubenswrapper[5050]: I1211 13:48:43.405584 5050 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Dec 11 13:48:43 crc kubenswrapper[5050]: I1211 13:48:43.405696 5050 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Dec 11 13:48:43 crc kubenswrapper[5050]: I1211 13:48:43.417123 5050 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path 
\"/livez\"","reason":"Forbidden","details":{},"code":403} Dec 11 13:48:43 crc kubenswrapper[5050]: I1211 13:48:43.417209 5050 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Dec 11 13:48:45 crc kubenswrapper[5050]: I1211 13:48:45.824277 5050 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 11 13:48:45 crc kubenswrapper[5050]: I1211 13:48:45.824376 5050 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 11 13:48:46 crc kubenswrapper[5050]: I1211 13:48:46.731631 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 13:48:46 crc kubenswrapper[5050]: I1211 13:48:46.731888 5050 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 13:48:46 crc kubenswrapper[5050]: I1211 13:48:46.733579 5050 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Dec 11 13:48:46 crc kubenswrapper[5050]: I1211 13:48:46.734184 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Dec 11 13:48:46 crc kubenswrapper[5050]: I1211 13:48:46.736647 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:48:46 crc kubenswrapper[5050]: I1211 13:48:46.736718 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:48:46 crc kubenswrapper[5050]: I1211 13:48:46.736741 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:48:46 crc kubenswrapper[5050]: I1211 13:48:46.743346 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 13:48:47 crc kubenswrapper[5050]: I1211 13:48:47.664172 5050 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 13:48:47 crc kubenswrapper[5050]: I1211 13:48:47.664896 5050 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection 
refused" start-of-body= Dec 11 13:48:47 crc kubenswrapper[5050]: I1211 13:48:47.665000 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Dec 11 13:48:47 crc kubenswrapper[5050]: I1211 13:48:47.665440 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:48:47 crc kubenswrapper[5050]: I1211 13:48:47.665527 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:48:47 crc kubenswrapper[5050]: I1211 13:48:47.665552 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:48:48 crc kubenswrapper[5050]: E1211 13:48:48.379462 5050 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.380619 5050 trace.go:236] Trace[1503588637]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (11-Dec-2025 13:48:36.401) (total time: 11979ms): Dec 11 13:48:48 crc kubenswrapper[5050]: Trace[1503588637]: ---"Objects listed" error: 11979ms (13:48:48.380) Dec 11 13:48:48 crc kubenswrapper[5050]: Trace[1503588637]: [11.979303843s] [11.979303843s] END Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.380656 5050 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.381351 5050 trace.go:236] Trace[1314326777]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (11-Dec-2025 13:48:35.999) (total time: 12381ms): Dec 11 13:48:48 crc kubenswrapper[5050]: Trace[1314326777]: ---"Objects listed" error: 12381ms (13:48:48.381) Dec 11 13:48:48 crc kubenswrapper[5050]: Trace[1314326777]: [12.381960887s] [12.381960887s] END Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.381387 5050 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.382443 5050 trace.go:236] Trace[120924182]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (11-Dec-2025 13:48:33.416) (total time: 14965ms): Dec 11 13:48:48 crc kubenswrapper[5050]: Trace[120924182]: ---"Objects listed" error: 14965ms (13:48:48.382) Dec 11 13:48:48 crc kubenswrapper[5050]: Trace[120924182]: [14.965787974s] [14.965787974s] END Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.382484 5050 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Dec 11 13:48:48 crc kubenswrapper[5050]: E1211 13:48:48.383804 5050 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.384722 5050 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.386106 5050 trace.go:236] Trace[406662796]: "Reflector ListAndWatch" 
name:k8s.io/client-go/informers/factory.go:160 (11-Dec-2025 13:48:35.915) (total time: 12470ms): Dec 11 13:48:48 crc kubenswrapper[5050]: Trace[406662796]: ---"Objects listed" error: 12469ms (13:48:48.385) Dec 11 13:48:48 crc kubenswrapper[5050]: Trace[406662796]: [12.470214827s] [12.470214827s] END Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.386134 5050 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.439814 5050 apiserver.go:52] "Watching apiserver" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.444435 5050 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.444939 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h"] Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.445388 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.445585 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 13:48:48 crc kubenswrapper[5050]: E1211 13:48:48.445700 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.445715 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.445781 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 11 13:48:48 crc kubenswrapper[5050]: E1211 13:48:48.446054 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.446073 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.446162 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 13:48:48 crc kubenswrapper[5050]: E1211 13:48:48.446605 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.448985 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.449266 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.449430 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.450648 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.451070 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.451256 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.451433 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.451842 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.452102 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.459671 5050 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.485590 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.485721 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.485776 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 11 13:48:48 crc 
kubenswrapper[5050]: I1211 13:48:48.485829 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.485882 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.485933 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.485934 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.485980 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.486082 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.486116 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.486130 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.486184 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.486216 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.486284 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.486350 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.486624 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.486626 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.488692 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:48:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:48:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.489307 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.489614 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.489899 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.490131 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.490315 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.490357 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.490573 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.490632 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.490766 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.490822 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.490855 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.490896 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.490936 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.490970 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.491000 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.491104 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.491410 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.491537 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.491571 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.491686 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.491712 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.491740 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.491762 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.491769 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.491812 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.491882 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.491935 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.491966 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.491998 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.492053 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.492081 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.492106 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.492149 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.492184 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.492230 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod 
\"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.492258 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.492287 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.492315 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.492340 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.492369 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.492403 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.492487 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.492513 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.492539 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.492568 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: 
\"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.492593 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.492618 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.492642 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.492671 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.492698 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.492725 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.492755 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.492782 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.492812 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.492836 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.492863 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.492885 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.492913 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.492955 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.492976 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.493001 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.493046 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.493070 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.493100 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.493123 5050 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.493144 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.493163 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.493188 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.493216 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.493241 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.493259 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.493282 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.493303 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.493322 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") 
" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.493345 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.493370 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.493393 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.493412 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.493433 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.493457 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.493481 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.493504 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.493527 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.493546 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.493570 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.494336 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.494403 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.494436 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.494462 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.494500 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.494547 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.494583 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.494614 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.494636 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.494659 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.494681 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.494705 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.494727 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.494751 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.494774 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.494797 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.494821 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.494846 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.494869 5050 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.494889 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.494912 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.494936 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.494955 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.494985 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.495118 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.495146 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.495167 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.495188 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Dec 11 13:48:48 crc 
kubenswrapper[5050]: I1211 13:48:48.495206 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.495230 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.495256 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.495278 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.495297 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.495319 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.495339 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.495361 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.495382 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.495406 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Dec 11 
13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.495425 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.495444 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.495471 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.495496 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.495527 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.495553 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.495580 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.495597 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.495623 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.495650 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod 
\"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.495671 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.495696 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.495722 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.495744 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.495770 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.495794 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.495829 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.495852 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.495882 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.495911 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.495947 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.495983 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.496124 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.496157 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.496181 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.496207 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.496230 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.496252 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.496278 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.496299 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: 
\"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.496330 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.496352 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.496379 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.496407 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.496432 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.496455 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.496473 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.496495 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.496518 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.496545 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.496568 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.492071 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.492102 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.492311 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.492827 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.493200 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.493552 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.493886 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.494495 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.494961 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.495702 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.496108 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.496406 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.496861 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.497217 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.497378 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.497424 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.497561 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.497641 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.497680 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.497797 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.497832 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.497834 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.497887 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.497921 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.497955 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.497992 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.498042 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.498079 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.498095 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.498114 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.498148 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.498176 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.498205 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.498239 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.498281 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.498319 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.498360 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.498365 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.498398 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.498430 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.498468 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.498500 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.498527 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.498553 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.498559 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.498586 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.498618 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.500279 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.500679 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.500755 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.504917 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.511454 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.511596 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.512508 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.513105 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.514676 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.514744 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.514779 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.514814 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.514820 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.514845 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.514926 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.514962 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.514993 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.515044 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.515073 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.515104 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.515132 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.515159 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: 
\"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.515187 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.515222 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.515255 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.515282 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.515292 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.515312 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.515341 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.515434 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.515667 5050 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.515708 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.515728 5050 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.515790 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.517071 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.517099 5050 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.517114 5050 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.517128 5050 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.517140 5050 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.517155 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.517166 5050 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 
13:48:48.517180 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.517192 5050 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.517203 5050 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.517212 5050 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.517224 5050 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.517235 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.517246 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.517256 5050 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.517271 5050 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.517280 5050 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.517291 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.517302 5050 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.517313 5050 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.517325 5050 reconciler_common.go:293] "Volume 
detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.517336 5050 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.517347 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.517358 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.517369 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.517380 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.517391 5050 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.517400 5050 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.517410 5050 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.517426 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.517438 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.517448 5050 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.517458 5050 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.517469 5050 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.517479 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.517489 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.517499 5050 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.517508 5050 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.517518 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.517532 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.517542 5050 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.517553 5050 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.517563 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.517574 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.517584 5050 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.515876 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" 
(UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.515944 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.516245 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.517653 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.516340 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.516368 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.516558 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.516567 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.516831 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.516876 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 13:48:48 crc kubenswrapper[5050]: E1211 13:48:48.517808 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 13:48:49.017783416 +0000 UTC m=+19.861506002 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.516971 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.517331 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.517424 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.517651 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). 
InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.517716 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.517739 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.517971 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.517988 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.518426 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.518483 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.518525 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.518548 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.523272 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.523428 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.524694 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.525046 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.525254 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.525596 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.525598 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.526146 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.526189 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.526382 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.526423 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.526604 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.526874 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.526907 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.527619 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.527661 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.527838 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.527876 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.528837 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.529070 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.529431 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.530658 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.530764 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.531046 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.531175 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.531278 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.531399 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.531431 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.531665 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.531665 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.530991 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.531811 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.531838 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.532114 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.532415 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.532543 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.532576 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.532609 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.532990 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.531062 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 13:48:48 crc kubenswrapper[5050]: E1211 13:48:48.533288 5050 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.533292 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:48:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:48:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 13:48:48 crc kubenswrapper[5050]: E1211 13:48:48.533366 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-11 13:48:49.033345178 +0000 UTC m=+19.877067764 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 11 13:48:48 crc kubenswrapper[5050]: E1211 13:48:48.533559 5050 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.533599 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.533580 5050 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Dec 11 13:48:48 crc kubenswrapper[5050]: E1211 13:48:48.533675 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-11 13:48:49.033665256 +0000 UTC m=+19.877387842 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.534138 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.535466 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.535810 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.536407 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.536784 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.537381 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.537588 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.537662 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.538164 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.538407 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.538791 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.539715 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.540261 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.540649 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.541116 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.542554 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.542900 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.543181 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.545141 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.545516 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.545885 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.546537 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). 
InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.546876 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.547386 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.547601 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.548115 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.548132 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.548535 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.548824 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.548898 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). 
InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.553214 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.553401 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.553682 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.553837 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.553931 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.553913 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.553965 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.554227 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.554401 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.554576 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.554753 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.556175 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.556365 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.556474 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.559650 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.559993 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:48:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:48:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.560654 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.563472 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). 
InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.566000 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.566356 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.566624 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.566972 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.567373 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.568088 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.568276 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 13:48:48 crc kubenswrapper[5050]: E1211 13:48:48.568609 5050 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 11 13:48:48 crc kubenswrapper[5050]: E1211 13:48:48.568646 5050 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 11 13:48:48 crc kubenswrapper[5050]: E1211 13:48:48.568666 5050 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 13:48:48 crc kubenswrapper[5050]: E1211 13:48:48.568752 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-11 13:48:49.068726716 +0000 UTC m=+19.912449502 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.569075 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.569221 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.569687 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.572910 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.574506 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.574607 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.575468 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.576781 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 13:48:48 crc kubenswrapper[5050]: E1211 13:48:48.579489 5050 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 11 13:48:48 crc kubenswrapper[5050]: E1211 13:48:48.579540 5050 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 11 13:48:48 crc kubenswrapper[5050]: E1211 13:48:48.579559 5050 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 13:48:48 crc kubenswrapper[5050]: E1211 13:48:48.579652 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-11 13:48:49.079620101 +0000 UTC m=+19.923342867 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.583844 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.583883 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.584110 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.584191 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.586141 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.587568 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.588084 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.588154 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.588462 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:48:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:48:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.588921 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.589127 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.591909 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.592621 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.593082 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.593289 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.594678 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.595137 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.595800 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.596099 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.596988 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.599336 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.599764 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.603184 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). 
InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.603285 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.605055 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.608231 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:48:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:48:48Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.611425 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.618055 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.618136 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.618198 5050 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.618211 5050 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.618238 5050 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.618250 5050 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.618261 5050 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.618270 5050 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.618281 5050 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.618294 5050 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.618307 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.618320 5050 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.618330 5050 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.618342 5050 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.618353 5050 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.618365 5050 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.618377 5050 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.618387 5050 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.618399 5050 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.618412 5050 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath 
\"\"" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.618425 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.618438 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.618450 5050 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.618462 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.618471 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.618483 5050 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.618492 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.618501 5050 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.618512 5050 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.618526 5050 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.618534 5050 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.618547 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.618556 5050 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.618567 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.618579 5050 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.618593 5050 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.618605 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.618464 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.618615 5050 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.618656 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.618671 5050 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.618685 5050 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.618704 5050 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.618718 5050 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.618728 5050 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 
13:48:48.618738 5050 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.618747 5050 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.618756 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.618746 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.618765 5050 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.618829 5050 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.618847 5050 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.618861 5050 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.618875 5050 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.618888 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.618902 5050 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.618915 5050 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.618925 5050 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.618938 5050 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.618951 5050 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.618963 5050 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.618975 5050 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.618987 5050 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.619000 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.619046 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.619060 5050 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.619074 5050 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.619086 5050 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.619098 5050 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.619112 5050 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.619127 5050 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" 
(UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.619139 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.619153 5050 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.619166 5050 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.619178 5050 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.619190 5050 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.619205 5050 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.619217 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.619228 5050 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.619240 5050 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.619252 5050 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.619264 5050 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.619278 5050 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.619293 5050 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.619305 5050 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.619317 5050 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.619328 5050 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.619340 5050 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.619352 5050 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.619366 5050 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.619379 5050 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.619392 5050 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.619407 5050 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.619421 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.619434 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.619446 5050 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.619457 5050 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.619468 5050 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.619480 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.619492 5050 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.619504 5050 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.619530 5050 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.619551 5050 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.619564 5050 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.619578 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.619590 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.619604 5050 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.619615 5050 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.619626 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.619637 5050 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: 
\"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.619648 5050 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.619659 5050 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.619670 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.619680 5050 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.619691 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.619702 5050 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.619713 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.619726 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.619737 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.619749 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.619760 5050 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.619773 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.619784 5050 reconciler_common.go:293] "Volume detached for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.619795 5050 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.619806 5050 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.619817 5050 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.619829 5050 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.619839 5050 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.619851 5050 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.619862 5050 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.619873 5050 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.619885 5050 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.619896 5050 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.619908 5050 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.619920 5050 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.619934 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: 
\"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.619947 5050 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.619958 5050 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.619971 5050 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.619982 5050 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.619993 5050 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.620004 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.620037 5050 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.620050 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.620062 5050 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.620074 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.625777 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.639169 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:48:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:48:48Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.643387 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-56g5c"] Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.643870 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-56g5c" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.647917 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 13:48:48 crc kubenswrapper[5050]: W1211 13:48:48.648134 5050 reflector.go:561] object-"openshift-dns"/"node-resolver-dockercfg-kz9s7": failed to list *v1.Secret: secrets "node-resolver-dockercfg-kz9s7" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-dns": no relationship found between node 'crc' and this object Dec 11 13:48:48 crc kubenswrapper[5050]: E1211 13:48:48.648205 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-dns\"/\"node-resolver-dockercfg-kz9s7\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"node-resolver-dockercfg-kz9s7\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-dns\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 11 13:48:48 crc kubenswrapper[5050]: W1211 13:48:48.648214 5050 reflector.go:561] object-"openshift-dns"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-dns": no relationship found between node 'crc' and this object Dec 11 13:48:48 crc kubenswrapper[5050]: E1211 13:48:48.648241 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-dns\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-dns\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 11 13:48:48 crc kubenswrapper[5050]: W1211 13:48:48.648156 5050 reflector.go:561] object-"openshift-dns"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-dns": no relationship found between node 'crc' and this object Dec 11 13:48:48 crc kubenswrapper[5050]: E1211 13:48:48.648268 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-dns\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-dns\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.651585 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.670462 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:48:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:48:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.702887 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:48:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:48:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.718863 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:48:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:48:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.721175 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fknsw\" (UniqueName: \"kubernetes.io/projected/c12c3ee1-28bd-431d-91ad-fa053c81a6bf-kube-api-access-fknsw\") pod \"node-resolver-56g5c\" (UID: \"c12c3ee1-28bd-431d-91ad-fa053c81a6bf\") " pod="openshift-dns/node-resolver-56g5c" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.721227 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/c12c3ee1-28bd-431d-91ad-fa053c81a6bf-hosts-file\") pod \"node-resolver-56g5c\" (UID: \"c12c3ee1-28bd-431d-91ad-fa053c81a6bf\") " pod="openshift-dns/node-resolver-56g5c" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.721271 5050 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.721283 5050 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.721294 5050 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.742427 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:48:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:48:48Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.762551 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:48:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:48:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.767593 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.781037 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.794953 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:48:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:48:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.796090 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 11 13:48:48 crc kubenswrapper[5050]: W1211 13:48:48.796564 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-95d45b25735c40b35d03c2dfb83d74b0e20f764d528c0a0bf9bf63a5cbc07fae WatchSource:0}: Error finding container 95d45b25735c40b35d03c2dfb83d74b0e20f764d528c0a0bf9bf63a5cbc07fae: Status 404 returned error can't find the container with id 95d45b25735c40b35d03c2dfb83d74b0e20f764d528c0a0bf9bf63a5cbc07fae Dec 11 13:48:48 crc kubenswrapper[5050]: W1211 13:48:48.818108 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-bdb94be1232343ada165928038b1f52899f38b41c44c950dcc7dea189fb9dd48 WatchSource:0}: Error finding container bdb94be1232343ada165928038b1f52899f38b41c44c950dcc7dea189fb9dd48: Status 404 returned error can't find the container with id bdb94be1232343ada165928038b1f52899f38b41c44c950dcc7dea189fb9dd48 Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.822419 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fknsw\" (UniqueName: \"kubernetes.io/projected/c12c3ee1-28bd-431d-91ad-fa053c81a6bf-kube-api-access-fknsw\") pod \"node-resolver-56g5c\" (UID: \"c12c3ee1-28bd-431d-91ad-fa053c81a6bf\") " pod="openshift-dns/node-resolver-56g5c" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.822448 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/c12c3ee1-28bd-431d-91ad-fa053c81a6bf-hosts-file\") pod \"node-resolver-56g5c\" (UID: \"c12c3ee1-28bd-431d-91ad-fa053c81a6bf\") " pod="openshift-dns/node-resolver-56g5c" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.822535 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/c12c3ee1-28bd-431d-91ad-fa053c81a6bf-hosts-file\") pod \"node-resolver-56g5c\" (UID: \"c12c3ee1-28bd-431d-91ad-fa053c81a6bf\") " pod="openshift-dns/node-resolver-56g5c" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.843749 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:48:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:48:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.856395 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:48:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:48:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 13:48:48 crc kubenswrapper[5050]: I1211 13:48:48.879862 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-56g5c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c12c3ee1-28bd-431d-91ad-fa053c81a6bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:48:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:48:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:48:48Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:48:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fknsw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:48:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-56g5c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 13:48:49 crc kubenswrapper[5050]: I1211 13:48:49.024518 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 13:48:49 crc kubenswrapper[5050]: E1211 13:48:49.024933 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 13:48:50.024894189 +0000 UTC m=+20.868616815 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 13:48:49 crc kubenswrapper[5050]: I1211 13:48:49.125183 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 13:48:49 crc kubenswrapper[5050]: I1211 13:48:49.125249 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 13:48:49 crc kubenswrapper[5050]: I1211 13:48:49.125280 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 13:48:49 crc kubenswrapper[5050]: I1211 13:48:49.125309 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 13:48:49 crc kubenswrapper[5050]: E1211 13:48:49.125498 5050 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 11 13:48:49 crc kubenswrapper[5050]: E1211 13:48:49.125519 5050 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 11 13:48:49 crc kubenswrapper[5050]: E1211 13:48:49.125534 5050 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 13:48:49 crc kubenswrapper[5050]: E1211 13:48:49.125594 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-11 13:48:50.125575835 +0000 UTC m=+20.969298421 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 13:48:49 crc kubenswrapper[5050]: E1211 13:48:49.125613 5050 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 11 13:48:49 crc kubenswrapper[5050]: E1211 13:48:49.125660 5050 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 11 13:48:49 crc kubenswrapper[5050]: E1211 13:48:49.125676 5050 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 11 13:48:49 crc kubenswrapper[5050]: E1211 13:48:49.125685 5050 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 13:48:49 crc kubenswrapper[5050]: E1211 13:48:49.125712 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-11 13:48:50.125703559 +0000 UTC m=+20.969426145 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 13:48:49 crc kubenswrapper[5050]: E1211 13:48:49.125733 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-11 13:48:50.125722589 +0000 UTC m=+20.969445175 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 11 13:48:49 crc kubenswrapper[5050]: E1211 13:48:49.125759 5050 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 11 13:48:49 crc kubenswrapper[5050]: E1211 13:48:49.125835 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-11 13:48:50.125802062 +0000 UTC m=+20.969524838 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 11 13:48:49 crc kubenswrapper[5050]: I1211 13:48:49.355965 5050 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:60150->192.168.126.11:17697: read: connection reset by peer" start-of-body= Dec 11 13:48:49 crc kubenswrapper[5050]: I1211 13:48:49.356053 5050 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:60158->192.168.126.11:17697: read: connection reset by peer" start-of-body= Dec 11 13:48:49 crc kubenswrapper[5050]: I1211 13:48:49.356141 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:60158->192.168.126.11:17697: read: connection reset by peer" Dec 11 13:48:49 crc kubenswrapper[5050]: I1211 13:48:49.356065 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:60150->192.168.126.11:17697: read: connection reset by peer" Dec 11 13:48:49 crc kubenswrapper[5050]: I1211 13:48:49.545360 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 13:48:49 crc kubenswrapper[5050]: E1211 13:48:49.545513 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 13:48:49 crc kubenswrapper[5050]: I1211 13:48:49.549925 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Dec 11 13:48:49 crc kubenswrapper[5050]: I1211 13:48:49.550509 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Dec 11 13:48:49 crc kubenswrapper[5050]: I1211 13:48:49.552116 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Dec 11 13:48:49 crc kubenswrapper[5050]: I1211 13:48:49.552865 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Dec 11 13:48:49 crc kubenswrapper[5050]: I1211 13:48:49.554050 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Dec 11 13:48:49 crc kubenswrapper[5050]: I1211 13:48:49.554661 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Dec 11 13:48:49 crc kubenswrapper[5050]: I1211 13:48:49.555403 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Dec 11 13:48:49 crc kubenswrapper[5050]: I1211 13:48:49.556793 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Dec 11 13:48:49 crc kubenswrapper[5050]: I1211 13:48:49.557533 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Dec 11 13:48:49 crc kubenswrapper[5050]: I1211 13:48:49.558668 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Dec 11 13:48:49 crc kubenswrapper[5050]: I1211 13:48:49.559166 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:48:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:48:48Z\\\",\\\"message\\\":\\\"containers 
with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 13:48:49 crc kubenswrapper[5050]: I1211 13:48:49.559368 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Dec 11 13:48:49 crc kubenswrapper[5050]: I1211 13:48:49.560989 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Dec 11 13:48:49 crc kubenswrapper[5050]: I1211 13:48:49.561854 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Dec 11 13:48:49 crc kubenswrapper[5050]: I1211 13:48:49.562495 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Dec 11 13:48:49 crc kubenswrapper[5050]: I1211 13:48:49.563433 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Dec 11 13:48:49 crc kubenswrapper[5050]: I1211 13:48:49.563937 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Dec 11 13:48:49 crc kubenswrapper[5050]: I1211 13:48:49.564882 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Dec 11 13:48:49 crc kubenswrapper[5050]: I1211 13:48:49.565319 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Dec 11 13:48:49 crc kubenswrapper[5050]: I1211 13:48:49.565863 
5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Dec 11 13:48:49 crc kubenswrapper[5050]: I1211 13:48:49.567047 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Dec 11 13:48:49 crc kubenswrapper[5050]: I1211 13:48:49.567570 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Dec 11 13:48:49 crc kubenswrapper[5050]: I1211 13:48:49.568645 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Dec 11 13:48:49 crc kubenswrapper[5050]: I1211 13:48:49.569183 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Dec 11 13:48:49 crc kubenswrapper[5050]: I1211 13:48:49.570441 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Dec 11 13:48:49 crc kubenswrapper[5050]: I1211 13:48:49.570449 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:48:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:48:48Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 13:48:49 crc kubenswrapper[5050]: I1211 13:48:49.570978 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Dec 11 13:48:49 crc kubenswrapper[5050]: I1211 13:48:49.571739 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Dec 11 13:48:49 crc kubenswrapper[5050]: I1211 13:48:49.573348 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Dec 11 13:48:49 crc kubenswrapper[5050]: I1211 13:48:49.573908 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Dec 11 13:48:49 crc kubenswrapper[5050]: I1211 13:48:49.575292 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Dec 11 13:48:49 crc kubenswrapper[5050]: I1211 13:48:49.575943 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Dec 11 13:48:49 crc 
kubenswrapper[5050]: I1211 13:48:49.577125 5050 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Dec 11 13:48:49 crc kubenswrapper[5050]: I1211 13:48:49.577319 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Dec 11 13:48:49 crc kubenswrapper[5050]: I1211 13:48:49.582793 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Dec 11 13:48:49 crc kubenswrapper[5050]: I1211 13:48:49.583539 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Dec 11 13:48:49 crc kubenswrapper[5050]: I1211 13:48:49.585187 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Dec 11 13:48:49 crc kubenswrapper[5050]: I1211 13:48:49.586235 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:48:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:48:48Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 13:48:49 crc kubenswrapper[5050]: I1211 13:48:49.587935 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Dec 11 13:48:49 crc kubenswrapper[5050]: I1211 13:48:49.588821 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Dec 11 13:48:49 crc kubenswrapper[5050]: I1211 13:48:49.590160 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Dec 11 13:48:49 crc kubenswrapper[5050]: I1211 13:48:49.590982 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Dec 11 13:48:49 crc kubenswrapper[5050]: I1211 13:48:49.592364 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Dec 11 13:48:49 crc kubenswrapper[5050]: I1211 13:48:49.592980 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Dec 11 13:48:49 crc kubenswrapper[5050]: I1211 13:48:49.594304 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Dec 11 13:48:49 crc kubenswrapper[5050]: I1211 13:48:49.595178 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Dec 11 13:48:49 crc kubenswrapper[5050]: I1211 13:48:49.596437 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Dec 11 13:48:49 crc kubenswrapper[5050]: I1211 13:48:49.597055 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Dec 11 13:48:49 crc kubenswrapper[5050]: I1211 
13:48:49.597410 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:48:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:48:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 13:48:49 crc kubenswrapper[5050]: I1211 13:48:49.598216 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Dec 11 13:48:49 crc kubenswrapper[5050]: I1211 13:48:49.598874 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Dec 11 13:48:49 crc kubenswrapper[5050]: I1211 13:48:49.600173 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Dec 11 13:48:49 crc kubenswrapper[5050]: I1211 13:48:49.600679 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Dec 11 13:48:49 crc kubenswrapper[5050]: I1211 13:48:49.601574 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Dec 11 13:48:49 crc kubenswrapper[5050]: I1211 13:48:49.602072 
5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Dec 11 13:48:49 crc kubenswrapper[5050]: I1211 13:48:49.602636 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Dec 11 13:48:49 crc kubenswrapper[5050]: I1211 13:48:49.603662 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Dec 11 13:48:49 crc kubenswrapper[5050]: I1211 13:48:49.604286 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Dec 11 13:48:49 crc kubenswrapper[5050]: I1211 13:48:49.608161 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:48:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:48:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 13:48:49 crc kubenswrapper[5050]: I1211 13:48:49.614890 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-56g5c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c12c3ee1-28bd-431d-91ad-fa053c81a6bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:48:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:48:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:48:48Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:48:48Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fknsw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:48:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-56g5c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: 
connection refused" Dec 11 13:48:49 crc kubenswrapper[5050]: I1211 13:48:49.623701 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:48:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:48:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 13:48:49 crc kubenswrapper[5050]: I1211 13:48:49.681800 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"bdb94be1232343ada165928038b1f52899f38b41c44c950dcc7dea189fb9dd48"} Dec 11 13:48:49 crc kubenswrapper[5050]: I1211 13:48:49.684316 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"95d45b25735c40b35d03c2dfb83d74b0e20f764d528c0a0bf9bf63a5cbc07fae"} Dec 11 13:48:49 crc kubenswrapper[5050]: I1211 13:48:49.685569 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"f31b96b30657fbbc0ecc1ab53e7e8e5d1660cebfdb97a7bccae37d21b66991fe"} Dec 11 13:48:49 crc kubenswrapper[5050]: I1211 13:48:49.819675 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Dec 11 13:48:49 crc kubenswrapper[5050]: I1211 
13:48:49.971612 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Dec 11 13:48:49 crc kubenswrapper[5050]: I1211 13:48:49.980649 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fknsw\" (UniqueName: \"kubernetes.io/projected/c12c3ee1-28bd-431d-91ad-fa053c81a6bf-kube-api-access-fknsw\") pod \"node-resolver-56g5c\" (UID: \"c12c3ee1-28bd-431d-91ad-fa053c81a6bf\") " pod="openshift-dns/node-resolver-56g5c" Dec 11 13:48:49 crc kubenswrapper[5050]: I1211 13:48:49.985664 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.033579 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 13:48:50 crc kubenswrapper[5050]: E1211 13:48:50.033803 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 13:48:52.033770079 +0000 UTC m=+22.877492665 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.135295 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.135360 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.135402 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.135431 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: 
\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 13:48:50 crc kubenswrapper[5050]: E1211 13:48:50.135523 5050 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 11 13:48:50 crc kubenswrapper[5050]: E1211 13:48:50.135541 5050 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 11 13:48:50 crc kubenswrapper[5050]: E1211 13:48:50.135566 5050 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 11 13:48:50 crc kubenswrapper[5050]: E1211 13:48:50.135580 5050 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 13:48:50 crc kubenswrapper[5050]: E1211 13:48:50.135591 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-11 13:48:52.135571986 +0000 UTC m=+22.979294572 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 11 13:48:50 crc kubenswrapper[5050]: E1211 13:48:50.135613 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-11 13:48:52.135603237 +0000 UTC m=+22.979325823 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 13:48:50 crc kubenswrapper[5050]: E1211 13:48:50.135647 5050 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 11 13:48:50 crc kubenswrapper[5050]: E1211 13:48:50.135655 5050 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 11 13:48:50 crc kubenswrapper[5050]: E1211 13:48:50.135704 5050 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 11 13:48:50 crc kubenswrapper[5050]: E1211 13:48:50.135720 5050 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 13:48:50 crc kubenswrapper[5050]: E1211 13:48:50.135675 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-11 13:48:52.135667509 +0000 UTC m=+22.979390095 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 11 13:48:50 crc kubenswrapper[5050]: E1211 13:48:50.135815 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-11 13:48:52.135795492 +0000 UTC m=+22.979518068 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.170219 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-56g5c" Dec 11 13:48:50 crc kubenswrapper[5050]: W1211 13:48:50.181050 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc12c3ee1_28bd_431d_91ad_fa053c81a6bf.slice/crio-084aa1c7d42dfbc0baadf7f5ebc11186cf0cd9307d7b6a1da7038878d668c3fa WatchSource:0}: Error finding container 084aa1c7d42dfbc0baadf7f5ebc11186cf0cd9307d7b6a1da7038878d668c3fa: Status 404 returned error can't find the container with id 084aa1c7d42dfbc0baadf7f5ebc11186cf0cd9307d7b6a1da7038878d668c3fa Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.545699 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 13:48:50 crc kubenswrapper[5050]: E1211 13:48:50.546277 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.546710 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 13:48:50 crc kubenswrapper[5050]: E1211 13:48:50.546783 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.600216 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-klv95"] Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.600878 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-klv95" Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.607063 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.607214 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.607485 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.607628 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-q9k57"] Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.607837 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.608527 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-wcb2s"] Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.608549 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.608727 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-4fhtp"] Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.608912 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.608979 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-4fhtp" Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.609057 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-q9k57" Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.611530 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.611591 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.611698 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.611884 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.612846 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.612977 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.613098 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.613477 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.613545 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.613582 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.613723 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.613769 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.613832 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.613854 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.625534 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:48:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:48:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:48:50Z is after 2025-08-24T17:21:41Z" Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.644535 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-klv95" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28eed9d5-26a0-42dd-b2a9-9a86841e2516\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:48:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:48:50Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:48:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:48:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5m6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5m6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5m6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-5m6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5m6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5m6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5m6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:48:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-klv95\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:48:50Z is after 2025-08-24T17:21:41Z" Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.665259 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:48:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:48:48Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:48:50Z is after 2025-08-24T17:21:41Z" Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.678605 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:48:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:48:48Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:48:50Z is after 2025-08-24T17:21:41Z" Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.690945 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"fea0a82353e1815d59a43f9b649d7675206bae9df75cb4e568b6d7704b4d8271"} Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.692435 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-56g5c" event={"ID":"c12c3ee1-28bd-431d-91ad-fa053c81a6bf","Type":"ContainerStarted","Data":"d14f46f6c31399a2b9bd5f45c0856d783419b24a19a198e186dae2593a551b76"} Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.692499 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-56g5c" event={"ID":"c12c3ee1-28bd-431d-91ad-fa053c81a6bf","Type":"ContainerStarted","Data":"084aa1c7d42dfbc0baadf7f5ebc11186cf0cd9307d7b6a1da7038878d668c3fa"} Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.694325 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.696261 5050 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="dfb942fdbd9d669df2b809fbb38479ab9e6cfcf013fb67c19a2155335b5018e3" exitCode=255 Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.696305 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"dfb942fdbd9d669df2b809fbb38479ab9e6cfcf013fb67c19a2155335b5018e3"} Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.698796 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"61e3950610f64909e8cac86e375c431913174fe7038ade2f26dffb72452219d0"} Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.698840 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"2156683be56db25c3dbc22e6c7b2cd1e13f57e2daafd647ab754a9243006ba92"} Dec 11 
13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.699393 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:48:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:48:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:48:50Z is after 2025-08-24T17:21:41Z" Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.707535 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.707970 5050 scope.go:117] "RemoveContainer" containerID="dfb942fdbd9d669df2b809fbb38479ab9e6cfcf013fb67c19a2155335b5018e3" Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.722374 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:48:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:48:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:48:50Z is after 2025-08-24T17:21:41Z" Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.740413 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/28eed9d5-26a0-42dd-b2a9-9a86841e2516-os-release\") pod \"multus-additional-cni-plugins-klv95\" (UID: \"28eed9d5-26a0-42dd-b2a9-9a86841e2516\") " pod="openshift-multus/multus-additional-cni-plugins-klv95" Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.740723 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ce1e7994-3a5d-488d-90a5-115ca4cb7cf3-host-cni-bin\") pod \"ovnkube-node-q9k57\" (UID: \"ce1e7994-3a5d-488d-90a5-115ca4cb7cf3\") " pod="openshift-ovn-kubernetes/ovnkube-node-q9k57" Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.740829 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mkxsp\" (UniqueName: \"kubernetes.io/projected/7e849b2e-7cd7-4e49-acd2-deab139c699a-kube-api-access-mkxsp\") pod \"machine-config-daemon-wcb2s\" (UID: \"7e849b2e-7cd7-4e49-acd2-deab139c699a\") " pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.740961 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/28eed9d5-26a0-42dd-b2a9-9a86841e2516-cni-binary-copy\") pod \"multus-additional-cni-plugins-klv95\" (UID: \"28eed9d5-26a0-42dd-b2a9-9a86841e2516\") " pod="openshift-multus/multus-additional-cni-plugins-klv95" Dec 11 13:48:50 crc 
kubenswrapper[5050]: I1211 13:48:50.741088 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/ce1e7994-3a5d-488d-90a5-115ca4cb7cf3-host-slash\") pod \"ovnkube-node-q9k57\" (UID: \"ce1e7994-3a5d-488d-90a5-115ca4cb7cf3\") " pod="openshift-ovn-kubernetes/ovnkube-node-q9k57" Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.741278 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/ce1e7994-3a5d-488d-90a5-115ca4cb7cf3-run-ovn\") pod \"ovnkube-node-q9k57\" (UID: \"ce1e7994-3a5d-488d-90a5-115ca4cb7cf3\") " pod="openshift-ovn-kubernetes/ovnkube-node-q9k57" Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.741445 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ce1e7994-3a5d-488d-90a5-115ca4cb7cf3-ovn-node-metrics-cert\") pod \"ovnkube-node-q9k57\" (UID: \"ce1e7994-3a5d-488d-90a5-115ca4cb7cf3\") " pod="openshift-ovn-kubernetes/ovnkube-node-q9k57" Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.741627 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/de09c7d4-952a-405d-9a54-32331c538ee2-multus-socket-dir-parent\") pod \"multus-4fhtp\" (UID: \"de09c7d4-952a-405d-9a54-32331c538ee2\") " pod="openshift-multus/multus-4fhtp" Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.741798 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/ce1e7994-3a5d-488d-90a5-115ca4cb7cf3-log-socket\") pod \"ovnkube-node-q9k57\" (UID: \"ce1e7994-3a5d-488d-90a5-115ca4cb7cf3\") " pod="openshift-ovn-kubernetes/ovnkube-node-q9k57" Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.741966 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ce1e7994-3a5d-488d-90a5-115ca4cb7cf3-env-overrides\") pod \"ovnkube-node-q9k57\" (UID: \"ce1e7994-3a5d-488d-90a5-115ca4cb7cf3\") " pod="openshift-ovn-kubernetes/ovnkube-node-q9k57" Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.742118 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/7e849b2e-7cd7-4e49-acd2-deab139c699a-proxy-tls\") pod \"machine-config-daemon-wcb2s\" (UID: \"7e849b2e-7cd7-4e49-acd2-deab139c699a\") " pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.742270 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4p7dg\" (UniqueName: \"kubernetes.io/projected/de09c7d4-952a-405d-9a54-32331c538ee2-kube-api-access-4p7dg\") pod \"multus-4fhtp\" (UID: \"de09c7d4-952a-405d-9a54-32331c538ee2\") " pod="openshift-multus/multus-4fhtp" Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.742402 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/ce1e7994-3a5d-488d-90a5-115ca4cb7cf3-host-kubelet\") pod \"ovnkube-node-q9k57\" (UID: \"ce1e7994-3a5d-488d-90a5-115ca4cb7cf3\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-q9k57" Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.742569 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/de09c7d4-952a-405d-9a54-32331c538ee2-host-run-netns\") pod \"multus-4fhtp\" (UID: \"de09c7d4-952a-405d-9a54-32331c538ee2\") " pod="openshift-multus/multus-4fhtp" Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.742711 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/de09c7d4-952a-405d-9a54-32331c538ee2-host-var-lib-cni-multus\") pod \"multus-4fhtp\" (UID: \"de09c7d4-952a-405d-9a54-32331c538ee2\") " pod="openshift-multus/multus-4fhtp" Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.742864 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ce1e7994-3a5d-488d-90a5-115ca4cb7cf3-var-lib-openvswitch\") pod \"ovnkube-node-q9k57\" (UID: \"ce1e7994-3a5d-488d-90a5-115ca4cb7cf3\") " pod="openshift-ovn-kubernetes/ovnkube-node-q9k57" Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.743036 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ce1e7994-3a5d-488d-90a5-115ca4cb7cf3-host-cni-netd\") pod \"ovnkube-node-q9k57\" (UID: \"ce1e7994-3a5d-488d-90a5-115ca4cb7cf3\") " pod="openshift-ovn-kubernetes/ovnkube-node-q9k57" Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.743185 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/7e849b2e-7cd7-4e49-acd2-deab139c699a-rootfs\") pod \"machine-config-daemon-wcb2s\" (UID: \"7e849b2e-7cd7-4e49-acd2-deab139c699a\") " pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.743334 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/de09c7d4-952a-405d-9a54-32331c538ee2-host-run-k8s-cni-cncf-io\") pod \"multus-4fhtp\" (UID: \"de09c7d4-952a-405d-9a54-32331c538ee2\") " pod="openshift-multus/multus-4fhtp" Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.743480 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5m6fs\" (UniqueName: \"kubernetes.io/projected/28eed9d5-26a0-42dd-b2a9-9a86841e2516-kube-api-access-5m6fs\") pod \"multus-additional-cni-plugins-klv95\" (UID: \"28eed9d5-26a0-42dd-b2a9-9a86841e2516\") " pod="openshift-multus/multus-additional-cni-plugins-klv95" Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.743634 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ce1e7994-3a5d-488d-90a5-115ca4cb7cf3-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-q9k57\" (UID: \"ce1e7994-3a5d-488d-90a5-115ca4cb7cf3\") " pod="openshift-ovn-kubernetes/ovnkube-node-q9k57" Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.743787 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: 
\"kubernetes.io/host-path/de09c7d4-952a-405d-9a54-32331c538ee2-os-release\") pod \"multus-4fhtp\" (UID: \"de09c7d4-952a-405d-9a54-32331c538ee2\") " pod="openshift-multus/multus-4fhtp" Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.743934 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/28eed9d5-26a0-42dd-b2a9-9a86841e2516-tuning-conf-dir\") pod \"multus-additional-cni-plugins-klv95\" (UID: \"28eed9d5-26a0-42dd-b2a9-9a86841e2516\") " pod="openshift-multus/multus-additional-cni-plugins-klv95" Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.744118 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ce1e7994-3a5d-488d-90a5-115ca4cb7cf3-ovnkube-config\") pod \"ovnkube-node-q9k57\" (UID: \"ce1e7994-3a5d-488d-90a5-115ca4cb7cf3\") " pod="openshift-ovn-kubernetes/ovnkube-node-q9k57" Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.744272 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ce1e7994-3a5d-488d-90a5-115ca4cb7cf3-run-openvswitch\") pod \"ovnkube-node-q9k57\" (UID: \"ce1e7994-3a5d-488d-90a5-115ca4cb7cf3\") " pod="openshift-ovn-kubernetes/ovnkube-node-q9k57" Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.744446 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7e849b2e-7cd7-4e49-acd2-deab139c699a-mcd-auth-proxy-config\") pod \"machine-config-daemon-wcb2s\" (UID: \"7e849b2e-7cd7-4e49-acd2-deab139c699a\") " pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.744585 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/de09c7d4-952a-405d-9a54-32331c538ee2-multus-cni-dir\") pod \"multus-4fhtp\" (UID: \"de09c7d4-952a-405d-9a54-32331c538ee2\") " pod="openshift-multus/multus-4fhtp" Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.744749 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/ce1e7994-3a5d-488d-90a5-115ca4cb7cf3-systemd-units\") pod \"ovnkube-node-q9k57\" (UID: \"ce1e7994-3a5d-488d-90a5-115ca4cb7cf3\") " pod="openshift-ovn-kubernetes/ovnkube-node-q9k57" Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.744953 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/ce1e7994-3a5d-488d-90a5-115ca4cb7cf3-run-systemd\") pod \"ovnkube-node-q9k57\" (UID: \"ce1e7994-3a5d-488d-90a5-115ca4cb7cf3\") " pod="openshift-ovn-kubernetes/ovnkube-node-q9k57" Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.745201 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/de09c7d4-952a-405d-9a54-32331c538ee2-etc-kubernetes\") pod \"multus-4fhtp\" (UID: \"de09c7d4-952a-405d-9a54-32331c538ee2\") " pod="openshift-multus/multus-4fhtp" Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.745374 5050 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/de09c7d4-952a-405d-9a54-32331c538ee2-cnibin\") pod \"multus-4fhtp\" (UID: \"de09c7d4-952a-405d-9a54-32331c538ee2\") " pod="openshift-multus/multus-4fhtp" Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.745513 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/de09c7d4-952a-405d-9a54-32331c538ee2-host-var-lib-cni-bin\") pod \"multus-4fhtp\" (UID: \"de09c7d4-952a-405d-9a54-32331c538ee2\") " pod="openshift-multus/multus-4fhtp" Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.745664 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ce1e7994-3a5d-488d-90a5-115ca4cb7cf3-host-run-ovn-kubernetes\") pod \"ovnkube-node-q9k57\" (UID: \"ce1e7994-3a5d-488d-90a5-115ca4cb7cf3\") " pod="openshift-ovn-kubernetes/ovnkube-node-q9k57" Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.745818 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/de09c7d4-952a-405d-9a54-32331c538ee2-cni-binary-copy\") pod \"multus-4fhtp\" (UID: \"de09c7d4-952a-405d-9a54-32331c538ee2\") " pod="openshift-multus/multus-4fhtp" Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.745969 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/de09c7d4-952a-405d-9a54-32331c538ee2-host-var-lib-kubelet\") pod \"multus-4fhtp\" (UID: \"de09c7d4-952a-405d-9a54-32331c538ee2\") " pod="openshift-multus/multus-4fhtp" Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.746151 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ce1e7994-3a5d-488d-90a5-115ca4cb7cf3-etc-openvswitch\") pod \"ovnkube-node-q9k57\" (UID: \"ce1e7994-3a5d-488d-90a5-115ca4cb7cf3\") " pod="openshift-ovn-kubernetes/ovnkube-node-q9k57" Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.746298 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ce1e7994-3a5d-488d-90a5-115ca4cb7cf3-host-run-netns\") pod \"ovnkube-node-q9k57\" (UID: \"ce1e7994-3a5d-488d-90a5-115ca4cb7cf3\") " pod="openshift-ovn-kubernetes/ovnkube-node-q9k57" Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.746460 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/de09c7d4-952a-405d-9a54-32331c538ee2-multus-daemon-config\") pod \"multus-4fhtp\" (UID: \"de09c7d4-952a-405d-9a54-32331c538ee2\") " pod="openshift-multus/multus-4fhtp" Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.746622 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/de09c7d4-952a-405d-9a54-32331c538ee2-host-run-multus-certs\") pod \"multus-4fhtp\" (UID: \"de09c7d4-952a-405d-9a54-32331c538ee2\") " pod="openshift-multus/multus-4fhtp" Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.746784 5050 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/28eed9d5-26a0-42dd-b2a9-9a86841e2516-system-cni-dir\") pod \"multus-additional-cni-plugins-klv95\" (UID: \"28eed9d5-26a0-42dd-b2a9-9a86841e2516\") " pod="openshift-multus/multus-additional-cni-plugins-klv95" Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.746940 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/28eed9d5-26a0-42dd-b2a9-9a86841e2516-cnibin\") pod \"multus-additional-cni-plugins-klv95\" (UID: \"28eed9d5-26a0-42dd-b2a9-9a86841e2516\") " pod="openshift-multus/multus-additional-cni-plugins-klv95" Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.747107 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/28eed9d5-26a0-42dd-b2a9-9a86841e2516-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-klv95\" (UID: \"28eed9d5-26a0-42dd-b2a9-9a86841e2516\") " pod="openshift-multus/multus-additional-cni-plugins-klv95" Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.747249 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/ce1e7994-3a5d-488d-90a5-115ca4cb7cf3-node-log\") pod \"ovnkube-node-q9k57\" (UID: \"ce1e7994-3a5d-488d-90a5-115ca4cb7cf3\") " pod="openshift-ovn-kubernetes/ovnkube-node-q9k57" Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.747412 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/de09c7d4-952a-405d-9a54-32331c538ee2-system-cni-dir\") pod \"multus-4fhtp\" (UID: \"de09c7d4-952a-405d-9a54-32331c538ee2\") " pod="openshift-multus/multus-4fhtp" Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.747561 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/de09c7d4-952a-405d-9a54-32331c538ee2-multus-conf-dir\") pod \"multus-4fhtp\" (UID: \"de09c7d4-952a-405d-9a54-32331c538ee2\") " pod="openshift-multus/multus-4fhtp" Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.747713 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/de09c7d4-952a-405d-9a54-32331c538ee2-hostroot\") pod \"multus-4fhtp\" (UID: \"de09c7d4-952a-405d-9a54-32331c538ee2\") " pod="openshift-multus/multus-4fhtp" Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.747873 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/ce1e7994-3a5d-488d-90a5-115ca4cb7cf3-ovnkube-script-lib\") pod \"ovnkube-node-q9k57\" (UID: \"ce1e7994-3a5d-488d-90a5-115ca4cb7cf3\") " pod="openshift-ovn-kubernetes/ovnkube-node-q9k57" Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.748395 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z872c\" (UniqueName: \"kubernetes.io/projected/ce1e7994-3a5d-488d-90a5-115ca4cb7cf3-kube-api-access-z872c\") pod \"ovnkube-node-q9k57\" (UID: \"ce1e7994-3a5d-488d-90a5-115ca4cb7cf3\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-q9k57" Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.750083 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:48:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:48:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:48:50Z is after 2025-08-24T17:21:41Z" Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.764933 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-56g5c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c12c3ee1-28bd-431d-91ad-fa053c81a6bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:48:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:48:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:48:48Z\\\",\\\"message\\\":\\\"containers with unready 
status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:48:48Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fknsw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:48:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-56g5c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:48:50Z is after 2025-08-24T17:21:41Z" Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.784541 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8faf83f9-4f21-437e-89d4-28a1f993604a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:48:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:48:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:48:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:48:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:48:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a8ed77421db2de6a328f1925acda12a41ca2a56cf101a962c541cf04afa16205\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:48:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c2283a6b4d7e02755ef3c112c65859a852db517e3fd1f33779935a45d11db1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:48:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dc076730e23713ac357165823064125008a54267d6bdfc7f63dfe6f59fda1d8f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:48:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfb942fdbd9d669df2b809fbb38479ab9e6cfcf013fb67c19a2155335b5018e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dfb942fdbd9d669df2b809fbb38479ab9e6cfcf013fb67c19a2155335b5018e3\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11
T13:48:49Z\\\",\\\"message\\\":\\\"file observer\\\\nW1211 13:48:48.402565 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1211 13:48:48.402791 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1211 13:48:48.404226 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-156630335/tls.crt::/tmp/serving-cert-156630335/tls.key\\\\\\\"\\\\nI1211 13:48:48.880279 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1211 13:48:49.308708 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1211 13:48:49.308740 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1211 13:48:49.308777 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1211 13:48:49.308784 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1211 13:48:49.325791 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1211 13:48:49.325819 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 13:48:49.325825 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 13:48:49.325831 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1211 13:48:49.325834 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1211 13:48:49.325838 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1211 13:48:49.325842 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1211 13:48:49.326061 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1211 13:48:49.335290 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T13:48:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ae4bb2bd785bbc626814e84d88ab77f2705e63cd987329e5da3806265c35256\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:48:33Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d61208f7fc4b2f939d418793e7d1e0f9b12ce4a9c4fd5be0d223050a428f4fad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d61208f7fc4b2f939d418793e7d1e0f9b12ce4a9c4fd5be0d223050a428f4fad\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:48:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:48:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:48:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:48:50Z is after 2025-08-24T17:21:41Z" Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.801106 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-klv95" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28eed9d5-26a0-42dd-b2a9-9a86841e2516\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:48:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:48:50Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:48:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:48:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5m6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5m6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5m6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plu
gin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5m6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5m6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5m6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5m6fs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:48:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-klv95\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-11T13:48:50Z is after 2025-08-24T17:21:41Z" Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.813958 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4fhtp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de09c7d4-952a-405d-9a54-32331c538ee2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:48:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:48:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:48:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:48:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4p7dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}
],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:48:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4fhtp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:48:50Z is after 2025-08-24T17:21:41Z" Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.825988 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:48:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:48:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:48:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fea0a82353e1815d59a43f9b649d7675206bae9df75cb4e568b6d7704b4d8271\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:48:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:48:50Z is after 2025-08-24T17:21:41Z" Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.839274 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:48:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:48:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:48:50Z is after 2025-08-24T17:21:41Z" Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.849430 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/de09c7d4-952a-405d-9a54-32331c538ee2-os-release\") pod \"multus-4fhtp\" (UID: \"de09c7d4-952a-405d-9a54-32331c538ee2\") " pod="openshift-multus/multus-4fhtp" Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.849478 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/28eed9d5-26a0-42dd-b2a9-9a86841e2516-tuning-conf-dir\") pod \"multus-additional-cni-plugins-klv95\" (UID: \"28eed9d5-26a0-42dd-b2a9-9a86841e2516\") " pod="openshift-multus/multus-additional-cni-plugins-klv95" Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.849503 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ce1e7994-3a5d-488d-90a5-115ca4cb7cf3-ovnkube-config\") pod \"ovnkube-node-q9k57\" (UID: \"ce1e7994-3a5d-488d-90a5-115ca4cb7cf3\") " pod="openshift-ovn-kubernetes/ovnkube-node-q9k57" Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.849539 5050 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7e849b2e-7cd7-4e49-acd2-deab139c699a-mcd-auth-proxy-config\") pod \"machine-config-daemon-wcb2s\" (UID: \"7e849b2e-7cd7-4e49-acd2-deab139c699a\") " pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.849560 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/de09c7d4-952a-405d-9a54-32331c538ee2-multus-cni-dir\") pod \"multus-4fhtp\" (UID: \"de09c7d4-952a-405d-9a54-32331c538ee2\") " pod="openshift-multus/multus-4fhtp" Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.849578 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/ce1e7994-3a5d-488d-90a5-115ca4cb7cf3-systemd-units\") pod \"ovnkube-node-q9k57\" (UID: \"ce1e7994-3a5d-488d-90a5-115ca4cb7cf3\") " pod="openshift-ovn-kubernetes/ovnkube-node-q9k57" Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.849595 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/ce1e7994-3a5d-488d-90a5-115ca4cb7cf3-run-systemd\") pod \"ovnkube-node-q9k57\" (UID: \"ce1e7994-3a5d-488d-90a5-115ca4cb7cf3\") " pod="openshift-ovn-kubernetes/ovnkube-node-q9k57" Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.849616 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ce1e7994-3a5d-488d-90a5-115ca4cb7cf3-run-openvswitch\") pod \"ovnkube-node-q9k57\" (UID: \"ce1e7994-3a5d-488d-90a5-115ca4cb7cf3\") " pod="openshift-ovn-kubernetes/ovnkube-node-q9k57" Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.849652 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/de09c7d4-952a-405d-9a54-32331c538ee2-cnibin\") pod \"multus-4fhtp\" (UID: \"de09c7d4-952a-405d-9a54-32331c538ee2\") " pod="openshift-multus/multus-4fhtp" Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.849671 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/de09c7d4-952a-405d-9a54-32331c538ee2-host-var-lib-cni-bin\") pod \"multus-4fhtp\" (UID: \"de09c7d4-952a-405d-9a54-32331c538ee2\") " pod="openshift-multus/multus-4fhtp" Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.849687 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/de09c7d4-952a-405d-9a54-32331c538ee2-etc-kubernetes\") pod \"multus-4fhtp\" (UID: \"de09c7d4-952a-405d-9a54-32331c538ee2\") " pod="openshift-multus/multus-4fhtp" Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.849703 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/de09c7d4-952a-405d-9a54-32331c538ee2-cni-binary-copy\") pod \"multus-4fhtp\" (UID: \"de09c7d4-952a-405d-9a54-32331c538ee2\") " pod="openshift-multus/multus-4fhtp" Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.849722 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/de09c7d4-952a-405d-9a54-32331c538ee2-host-var-lib-kubelet\") pod \"multus-4fhtp\" (UID: \"de09c7d4-952a-405d-9a54-32331c538ee2\") " pod="openshift-multus/multus-4fhtp" Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.849737 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ce1e7994-3a5d-488d-90a5-115ca4cb7cf3-etc-openvswitch\") pod \"ovnkube-node-q9k57\" (UID: \"ce1e7994-3a5d-488d-90a5-115ca4cb7cf3\") " pod="openshift-ovn-kubernetes/ovnkube-node-q9k57" Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.849753 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ce1e7994-3a5d-488d-90a5-115ca4cb7cf3-host-run-ovn-kubernetes\") pod \"ovnkube-node-q9k57\" (UID: \"ce1e7994-3a5d-488d-90a5-115ca4cb7cf3\") " pod="openshift-ovn-kubernetes/ovnkube-node-q9k57" Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.849771 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/de09c7d4-952a-405d-9a54-32331c538ee2-multus-daemon-config\") pod \"multus-4fhtp\" (UID: \"de09c7d4-952a-405d-9a54-32331c538ee2\") " pod="openshift-multus/multus-4fhtp" Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.849791 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/de09c7d4-952a-405d-9a54-32331c538ee2-host-run-multus-certs\") pod \"multus-4fhtp\" (UID: \"de09c7d4-952a-405d-9a54-32331c538ee2\") " pod="openshift-multus/multus-4fhtp" Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.849810 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/28eed9d5-26a0-42dd-b2a9-9a86841e2516-system-cni-dir\") pod \"multus-additional-cni-plugins-klv95\" (UID: \"28eed9d5-26a0-42dd-b2a9-9a86841e2516\") " pod="openshift-multus/multus-additional-cni-plugins-klv95" Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.849841 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/28eed9d5-26a0-42dd-b2a9-9a86841e2516-cnibin\") pod \"multus-additional-cni-plugins-klv95\" (UID: \"28eed9d5-26a0-42dd-b2a9-9a86841e2516\") " pod="openshift-multus/multus-additional-cni-plugins-klv95" Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.849857 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ce1e7994-3a5d-488d-90a5-115ca4cb7cf3-host-run-netns\") pod \"ovnkube-node-q9k57\" (UID: \"ce1e7994-3a5d-488d-90a5-115ca4cb7cf3\") " pod="openshift-ovn-kubernetes/ovnkube-node-q9k57" Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.849978 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/de09c7d4-952a-405d-9a54-32331c538ee2-system-cni-dir\") pod \"multus-4fhtp\" (UID: \"de09c7d4-952a-405d-9a54-32331c538ee2\") " pod="openshift-multus/multus-4fhtp" Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.849997 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/de09c7d4-952a-405d-9a54-32331c538ee2-multus-conf-dir\") pod 
\"multus-4fhtp\" (UID: \"de09c7d4-952a-405d-9a54-32331c538ee2\") " pod="openshift-multus/multus-4fhtp" Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.850036 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/28eed9d5-26a0-42dd-b2a9-9a86841e2516-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-klv95\" (UID: \"28eed9d5-26a0-42dd-b2a9-9a86841e2516\") " pod="openshift-multus/multus-additional-cni-plugins-klv95" Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.850059 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/ce1e7994-3a5d-488d-90a5-115ca4cb7cf3-node-log\") pod \"ovnkube-node-q9k57\" (UID: \"ce1e7994-3a5d-488d-90a5-115ca4cb7cf3\") " pod="openshift-ovn-kubernetes/ovnkube-node-q9k57" Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.850087 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/de09c7d4-952a-405d-9a54-32331c538ee2-hostroot\") pod \"multus-4fhtp\" (UID: \"de09c7d4-952a-405d-9a54-32331c538ee2\") " pod="openshift-multus/multus-4fhtp" Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.850080 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ce1e7994-3a5d-488d-90a5-115ca4cb7cf3-run-openvswitch\") pod \"ovnkube-node-q9k57\" (UID: \"ce1e7994-3a5d-488d-90a5-115ca4cb7cf3\") " pod="openshift-ovn-kubernetes/ovnkube-node-q9k57" Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.850122 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/ce1e7994-3a5d-488d-90a5-115ca4cb7cf3-ovnkube-script-lib\") pod \"ovnkube-node-q9k57\" (UID: \"ce1e7994-3a5d-488d-90a5-115ca4cb7cf3\") " pod="openshift-ovn-kubernetes/ovnkube-node-q9k57" Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.850243 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/28eed9d5-26a0-42dd-b2a9-9a86841e2516-system-cni-dir\") pod \"multus-additional-cni-plugins-klv95\" (UID: \"28eed9d5-26a0-42dd-b2a9-9a86841e2516\") " pod="openshift-multus/multus-additional-cni-plugins-klv95" Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.850278 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/de09c7d4-952a-405d-9a54-32331c538ee2-host-var-lib-kubelet\") pod \"multus-4fhtp\" (UID: \"de09c7d4-952a-405d-9a54-32331c538ee2\") " pod="openshift-multus/multus-4fhtp" Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.850298 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z872c\" (UniqueName: \"kubernetes.io/projected/ce1e7994-3a5d-488d-90a5-115ca4cb7cf3-kube-api-access-z872c\") pod \"ovnkube-node-q9k57\" (UID: \"ce1e7994-3a5d-488d-90a5-115ca4cb7cf3\") " pod="openshift-ovn-kubernetes/ovnkube-node-q9k57" Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.850335 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ce1e7994-3a5d-488d-90a5-115ca4cb7cf3-etc-openvswitch\") pod \"ovnkube-node-q9k57\" (UID: \"ce1e7994-3a5d-488d-90a5-115ca4cb7cf3\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-q9k57" Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.850383 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/de09c7d4-952a-405d-9a54-32331c538ee2-os-release\") pod \"multus-4fhtp\" (UID: \"de09c7d4-952a-405d-9a54-32331c538ee2\") " pod="openshift-multus/multus-4fhtp" Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.850408 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ce1e7994-3a5d-488d-90a5-115ca4cb7cf3-host-run-ovn-kubernetes\") pod \"ovnkube-node-q9k57\" (UID: \"ce1e7994-3a5d-488d-90a5-115ca4cb7cf3\") " pod="openshift-ovn-kubernetes/ovnkube-node-q9k57" Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.850412 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/de09c7d4-952a-405d-9a54-32331c538ee2-host-var-lib-cni-bin\") pod \"multus-4fhtp\" (UID: \"de09c7d4-952a-405d-9a54-32331c538ee2\") " pod="openshift-multus/multus-4fhtp" Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.850456 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/de09c7d4-952a-405d-9a54-32331c538ee2-host-run-multus-certs\") pod \"multus-4fhtp\" (UID: \"de09c7d4-952a-405d-9a54-32331c538ee2\") " pod="openshift-multus/multus-4fhtp" Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.850464 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/de09c7d4-952a-405d-9a54-32331c538ee2-cnibin\") pod \"multus-4fhtp\" (UID: \"de09c7d4-952a-405d-9a54-32331c538ee2\") " pod="openshift-multus/multus-4fhtp" Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.850516 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ce1e7994-3a5d-488d-90a5-115ca4cb7cf3-host-run-netns\") pod \"ovnkube-node-q9k57\" (UID: \"ce1e7994-3a5d-488d-90a5-115ca4cb7cf3\") " pod="openshift-ovn-kubernetes/ovnkube-node-q9k57" Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.850520 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/de09c7d4-952a-405d-9a54-32331c538ee2-etc-kubernetes\") pod \"multus-4fhtp\" (UID: \"de09c7d4-952a-405d-9a54-32331c538ee2\") " pod="openshift-multus/multus-4fhtp" Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.850636 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/ce1e7994-3a5d-488d-90a5-115ca4cb7cf3-run-systemd\") pod \"ovnkube-node-q9k57\" (UID: \"ce1e7994-3a5d-488d-90a5-115ca4cb7cf3\") " pod="openshift-ovn-kubernetes/ovnkube-node-q9k57" Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.850709 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/de09c7d4-952a-405d-9a54-32331c538ee2-multus-conf-dir\") pod \"multus-4fhtp\" (UID: \"de09c7d4-952a-405d-9a54-32331c538ee2\") " pod="openshift-multus/multus-4fhtp" Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.850791 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: 
\"kubernetes.io/host-path/de09c7d4-952a-405d-9a54-32331c538ee2-hostroot\") pod \"multus-4fhtp\" (UID: \"de09c7d4-952a-405d-9a54-32331c538ee2\") " pod="openshift-multus/multus-4fhtp" Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.850873 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/28eed9d5-26a0-42dd-b2a9-9a86841e2516-os-release\") pod \"multus-additional-cni-plugins-klv95\" (UID: \"28eed9d5-26a0-42dd-b2a9-9a86841e2516\") " pod="openshift-multus/multus-additional-cni-plugins-klv95" Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.850931 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ce1e7994-3a5d-488d-90a5-115ca4cb7cf3-host-cni-bin\") pod \"ovnkube-node-q9k57\" (UID: \"ce1e7994-3a5d-488d-90a5-115ca4cb7cf3\") " pod="openshift-ovn-kubernetes/ovnkube-node-q9k57" Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.850961 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/28eed9d5-26a0-42dd-b2a9-9a86841e2516-cni-binary-copy\") pod \"multus-additional-cni-plugins-klv95\" (UID: \"28eed9d5-26a0-42dd-b2a9-9a86841e2516\") " pod="openshift-multus/multus-additional-cni-plugins-klv95" Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.850971 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/ce1e7994-3a5d-488d-90a5-115ca4cb7cf3-ovnkube-script-lib\") pod \"ovnkube-node-q9k57\" (UID: \"ce1e7994-3a5d-488d-90a5-115ca4cb7cf3\") " pod="openshift-ovn-kubernetes/ovnkube-node-q9k57" Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.850979 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/de09c7d4-952a-405d-9a54-32331c538ee2-system-cni-dir\") pod \"multus-4fhtp\" (UID: \"de09c7d4-952a-405d-9a54-32331c538ee2\") " pod="openshift-multus/multus-4fhtp" Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.851029 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/ce1e7994-3a5d-488d-90a5-115ca4cb7cf3-host-slash\") pod \"ovnkube-node-q9k57\" (UID: \"ce1e7994-3a5d-488d-90a5-115ca4cb7cf3\") " pod="openshift-ovn-kubernetes/ovnkube-node-q9k57" Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.851084 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/28eed9d5-26a0-42dd-b2a9-9a86841e2516-cnibin\") pod \"multus-additional-cni-plugins-klv95\" (UID: \"28eed9d5-26a0-42dd-b2a9-9a86841e2516\") " pod="openshift-multus/multus-additional-cni-plugins-klv95" Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.851085 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ce1e7994-3a5d-488d-90a5-115ca4cb7cf3-host-cni-bin\") pod \"ovnkube-node-q9k57\" (UID: \"ce1e7994-3a5d-488d-90a5-115ca4cb7cf3\") " pod="openshift-ovn-kubernetes/ovnkube-node-q9k57" Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.851115 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/ce1e7994-3a5d-488d-90a5-115ca4cb7cf3-run-ovn\") pod \"ovnkube-node-q9k57\" (UID: 
\"ce1e7994-3a5d-488d-90a5-115ca4cb7cf3\") " pod="openshift-ovn-kubernetes/ovnkube-node-q9k57" Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.851092 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/ce1e7994-3a5d-488d-90a5-115ca4cb7cf3-run-ovn\") pod \"ovnkube-node-q9k57\" (UID: \"ce1e7994-3a5d-488d-90a5-115ca4cb7cf3\") " pod="openshift-ovn-kubernetes/ovnkube-node-q9k57" Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.851133 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/ce1e7994-3a5d-488d-90a5-115ca4cb7cf3-host-slash\") pod \"ovnkube-node-q9k57\" (UID: \"ce1e7994-3a5d-488d-90a5-115ca4cb7cf3\") " pod="openshift-ovn-kubernetes/ovnkube-node-q9k57" Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.851125 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/ce1e7994-3a5d-488d-90a5-115ca4cb7cf3-node-log\") pod \"ovnkube-node-q9k57\" (UID: \"ce1e7994-3a5d-488d-90a5-115ca4cb7cf3\") " pod="openshift-ovn-kubernetes/ovnkube-node-q9k57" Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.851153 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mkxsp\" (UniqueName: \"kubernetes.io/projected/7e849b2e-7cd7-4e49-acd2-deab139c699a-kube-api-access-mkxsp\") pod \"machine-config-daemon-wcb2s\" (UID: \"7e849b2e-7cd7-4e49-acd2-deab139c699a\") " pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.851254 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/de09c7d4-952a-405d-9a54-32331c538ee2-multus-socket-dir-parent\") pod \"multus-4fhtp\" (UID: \"de09c7d4-952a-405d-9a54-32331c538ee2\") " pod="openshift-multus/multus-4fhtp" Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.851282 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/ce1e7994-3a5d-488d-90a5-115ca4cb7cf3-log-socket\") pod \"ovnkube-node-q9k57\" (UID: \"ce1e7994-3a5d-488d-90a5-115ca4cb7cf3\") " pod="openshift-ovn-kubernetes/ovnkube-node-q9k57" Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.851296 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/de09c7d4-952a-405d-9a54-32331c538ee2-cni-binary-copy\") pod \"multus-4fhtp\" (UID: \"de09c7d4-952a-405d-9a54-32331c538ee2\") " pod="openshift-multus/multus-4fhtp" Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.851307 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ce1e7994-3a5d-488d-90a5-115ca4cb7cf3-env-overrides\") pod \"ovnkube-node-q9k57\" (UID: \"ce1e7994-3a5d-488d-90a5-115ca4cb7cf3\") " pod="openshift-ovn-kubernetes/ovnkube-node-q9k57" Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.851326 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/de09c7d4-952a-405d-9a54-32331c538ee2-multus-cni-dir\") pod \"multus-4fhtp\" (UID: \"de09c7d4-952a-405d-9a54-32331c538ee2\") " pod="openshift-multus/multus-4fhtp" Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.851334 5050 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ce1e7994-3a5d-488d-90a5-115ca4cb7cf3-ovn-node-metrics-cert\") pod \"ovnkube-node-q9k57\" (UID: \"ce1e7994-3a5d-488d-90a5-115ca4cb7cf3\") " pod="openshift-ovn-kubernetes/ovnkube-node-q9k57" Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.851372 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/ce1e7994-3a5d-488d-90a5-115ca4cb7cf3-systemd-units\") pod \"ovnkube-node-q9k57\" (UID: \"ce1e7994-3a5d-488d-90a5-115ca4cb7cf3\") " pod="openshift-ovn-kubernetes/ovnkube-node-q9k57" Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.851375 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ce1e7994-3a5d-488d-90a5-115ca4cb7cf3-ovnkube-config\") pod \"ovnkube-node-q9k57\" (UID: \"ce1e7994-3a5d-488d-90a5-115ca4cb7cf3\") " pod="openshift-ovn-kubernetes/ovnkube-node-q9k57" Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.851432 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/7e849b2e-7cd7-4e49-acd2-deab139c699a-proxy-tls\") pod \"machine-config-daemon-wcb2s\" (UID: \"7e849b2e-7cd7-4e49-acd2-deab139c699a\") " pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.851442 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/de09c7d4-952a-405d-9a54-32331c538ee2-multus-socket-dir-parent\") pod \"multus-4fhtp\" (UID: \"de09c7d4-952a-405d-9a54-32331c538ee2\") " pod="openshift-multus/multus-4fhtp" Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.851461 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4p7dg\" (UniqueName: \"kubernetes.io/projected/de09c7d4-952a-405d-9a54-32331c538ee2-kube-api-access-4p7dg\") pod \"multus-4fhtp\" (UID: \"de09c7d4-952a-405d-9a54-32331c538ee2\") " pod="openshift-multus/multus-4fhtp" Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.851633 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/ce1e7994-3a5d-488d-90a5-115ca4cb7cf3-host-kubelet\") pod \"ovnkube-node-q9k57\" (UID: \"ce1e7994-3a5d-488d-90a5-115ca4cb7cf3\") " pod="openshift-ovn-kubernetes/ovnkube-node-q9k57" Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.851669 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/de09c7d4-952a-405d-9a54-32331c538ee2-host-run-netns\") pod \"multus-4fhtp\" (UID: \"de09c7d4-952a-405d-9a54-32331c538ee2\") " pod="openshift-multus/multus-4fhtp" Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.851674 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/de09c7d4-952a-405d-9a54-32331c538ee2-multus-daemon-config\") pod \"multus-4fhtp\" (UID: \"de09c7d4-952a-405d-9a54-32331c538ee2\") " pod="openshift-multus/multus-4fhtp" Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.851742 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: 
\"kubernetes.io/host-path/ce1e7994-3a5d-488d-90a5-115ca4cb7cf3-log-socket\") pod \"ovnkube-node-q9k57\" (UID: \"ce1e7994-3a5d-488d-90a5-115ca4cb7cf3\") " pod="openshift-ovn-kubernetes/ovnkube-node-q9k57" Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.851746 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/de09c7d4-952a-405d-9a54-32331c538ee2-host-run-netns\") pod \"multus-4fhtp\" (UID: \"de09c7d4-952a-405d-9a54-32331c538ee2\") " pod="openshift-multus/multus-4fhtp" Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.851806 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/ce1e7994-3a5d-488d-90a5-115ca4cb7cf3-host-kubelet\") pod \"ovnkube-node-q9k57\" (UID: \"ce1e7994-3a5d-488d-90a5-115ca4cb7cf3\") " pod="openshift-ovn-kubernetes/ovnkube-node-q9k57" Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.851810 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/28eed9d5-26a0-42dd-b2a9-9a86841e2516-os-release\") pod \"multus-additional-cni-plugins-klv95\" (UID: \"28eed9d5-26a0-42dd-b2a9-9a86841e2516\") " pod="openshift-multus/multus-additional-cni-plugins-klv95" Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.851873 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/7e849b2e-7cd7-4e49-acd2-deab139c699a-rootfs\") pod \"machine-config-daemon-wcb2s\" (UID: \"7e849b2e-7cd7-4e49-acd2-deab139c699a\") " pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.851897 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/de09c7d4-952a-405d-9a54-32331c538ee2-host-var-lib-cni-multus\") pod \"multus-4fhtp\" (UID: \"de09c7d4-952a-405d-9a54-32331c538ee2\") " pod="openshift-multus/multus-4fhtp" Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.851956 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ce1e7994-3a5d-488d-90a5-115ca4cb7cf3-var-lib-openvswitch\") pod \"ovnkube-node-q9k57\" (UID: \"ce1e7994-3a5d-488d-90a5-115ca4cb7cf3\") " pod="openshift-ovn-kubernetes/ovnkube-node-q9k57" Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.851982 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ce1e7994-3a5d-488d-90a5-115ca4cb7cf3-host-cni-netd\") pod \"ovnkube-node-q9k57\" (UID: \"ce1e7994-3a5d-488d-90a5-115ca4cb7cf3\") " pod="openshift-ovn-kubernetes/ovnkube-node-q9k57" Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.852109 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/28eed9d5-26a0-42dd-b2a9-9a86841e2516-tuning-conf-dir\") pod \"multus-additional-cni-plugins-klv95\" (UID: \"28eed9d5-26a0-42dd-b2a9-9a86841e2516\") " pod="openshift-multus/multus-additional-cni-plugins-klv95" Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.852120 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: 
\"kubernetes.io/host-path/de09c7d4-952a-405d-9a54-32331c538ee2-host-run-k8s-cni-cncf-io\") pod \"multus-4fhtp\" (UID: \"de09c7d4-952a-405d-9a54-32331c538ee2\") " pod="openshift-multus/multus-4fhtp" Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.852183 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5m6fs\" (UniqueName: \"kubernetes.io/projected/28eed9d5-26a0-42dd-b2a9-9a86841e2516-kube-api-access-5m6fs\") pod \"multus-additional-cni-plugins-klv95\" (UID: \"28eed9d5-26a0-42dd-b2a9-9a86841e2516\") " pod="openshift-multus/multus-additional-cni-plugins-klv95" Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.852197 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/7e849b2e-7cd7-4e49-acd2-deab139c699a-rootfs\") pod \"machine-config-daemon-wcb2s\" (UID: \"7e849b2e-7cd7-4e49-acd2-deab139c699a\") " pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.852220 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ce1e7994-3a5d-488d-90a5-115ca4cb7cf3-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-q9k57\" (UID: \"ce1e7994-3a5d-488d-90a5-115ca4cb7cf3\") " pod="openshift-ovn-kubernetes/ovnkube-node-q9k57" Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.852230 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/de09c7d4-952a-405d-9a54-32331c538ee2-host-var-lib-cni-multus\") pod \"multus-4fhtp\" (UID: \"de09c7d4-952a-405d-9a54-32331c538ee2\") " pod="openshift-multus/multus-4fhtp" Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.852235 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/28eed9d5-26a0-42dd-b2a9-9a86841e2516-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-klv95\" (UID: \"28eed9d5-26a0-42dd-b2a9-9a86841e2516\") " pod="openshift-multus/multus-additional-cni-plugins-klv95" Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.852147 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/de09c7d4-952a-405d-9a54-32331c538ee2-host-run-k8s-cni-cncf-io\") pod \"multus-4fhtp\" (UID: \"de09c7d4-952a-405d-9a54-32331c538ee2\") " pod="openshift-multus/multus-4fhtp" Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.852372 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ce1e7994-3a5d-488d-90a5-115ca4cb7cf3-var-lib-openvswitch\") pod \"ovnkube-node-q9k57\" (UID: \"ce1e7994-3a5d-488d-90a5-115ca4cb7cf3\") " pod="openshift-ovn-kubernetes/ovnkube-node-q9k57" Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.852465 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ce1e7994-3a5d-488d-90a5-115ca4cb7cf3-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-q9k57\" (UID: \"ce1e7994-3a5d-488d-90a5-115ca4cb7cf3\") " pod="openshift-ovn-kubernetes/ovnkube-node-q9k57" Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.852470 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ce1e7994-3a5d-488d-90a5-115ca4cb7cf3-host-cni-netd\") pod \"ovnkube-node-q9k57\" (UID: \"ce1e7994-3a5d-488d-90a5-115ca4cb7cf3\") " pod="openshift-ovn-kubernetes/ovnkube-node-q9k57" Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.853367 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/28eed9d5-26a0-42dd-b2a9-9a86841e2516-cni-binary-copy\") pod \"multus-additional-cni-plugins-klv95\" (UID: \"28eed9d5-26a0-42dd-b2a9-9a86841e2516\") " pod="openshift-multus/multus-additional-cni-plugins-klv95" Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.853763 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ce1e7994-3a5d-488d-90a5-115ca4cb7cf3-env-overrides\") pod \"ovnkube-node-q9k57\" (UID: \"ce1e7994-3a5d-488d-90a5-115ca4cb7cf3\") " pod="openshift-ovn-kubernetes/ovnkube-node-q9k57" Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.853776 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7e849b2e-7cd7-4e49-acd2-deab139c699a-mcd-auth-proxy-config\") pod \"machine-config-daemon-wcb2s\" (UID: \"7e849b2e-7cd7-4e49-acd2-deab139c699a\") " pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.856035 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ce1e7994-3a5d-488d-90a5-115ca4cb7cf3-ovn-node-metrics-cert\") pod \"ovnkube-node-q9k57\" (UID: \"ce1e7994-3a5d-488d-90a5-115ca4cb7cf3\") " pod="openshift-ovn-kubernetes/ovnkube-node-q9k57" Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.856245 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:48:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:48:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:48:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61e3950610f64909e8cac86e375c431913174fe7038ade2f26dffb72452219d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:48:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2156683be56db25c3dbc22e6c7b2cd1e13f57e2daafd647ab754a9243006ba92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:48:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:48:50Z is after 2025-08-24T17:21:41Z" Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.863613 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/7e849b2e-7cd7-4e49-acd2-deab139c699a-proxy-tls\") pod \"machine-config-daemon-wcb2s\" (UID: \"7e849b2e-7cd7-4e49-acd2-deab139c699a\") " pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.868481 5050 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z872c\" (UniqueName: \"kubernetes.io/projected/ce1e7994-3a5d-488d-90a5-115ca4cb7cf3-kube-api-access-z872c\") pod \"ovnkube-node-q9k57\" (UID: \"ce1e7994-3a5d-488d-90a5-115ca4cb7cf3\") " pod="openshift-ovn-kubernetes/ovnkube-node-q9k57" Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.871879 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5m6fs\" (UniqueName: \"kubernetes.io/projected/28eed9d5-26a0-42dd-b2a9-9a86841e2516-kube-api-access-5m6fs\") pod \"multus-additional-cni-plugins-klv95\" (UID: \"28eed9d5-26a0-42dd-b2a9-9a86841e2516\") " pod="openshift-multus/multus-additional-cni-plugins-klv95" Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.872430 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:48:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:48:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:48:50Z is after 2025-08-24T17:21:41Z" Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.874100 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mkxsp\" (UniqueName: \"kubernetes.io/projected/7e849b2e-7cd7-4e49-acd2-deab139c699a-kube-api-access-mkxsp\") pod \"machine-config-daemon-wcb2s\" (UID: \"7e849b2e-7cd7-4e49-acd2-deab139c699a\") " pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.874403 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4p7dg\" (UniqueName: \"kubernetes.io/projected/de09c7d4-952a-405d-9a54-32331c538ee2-kube-api-access-4p7dg\") pod \"multus-4fhtp\" (UID: \"de09c7d4-952a-405d-9a54-32331c538ee2\") " pod="openshift-multus/multus-4fhtp" Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.886493 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-56g5c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c12c3ee1-28bd-431d-91ad-fa053c81a6bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:48:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:48:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:48:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:48:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d14f46f6c31399a2b9bd5f45c0856d783419b24a19a198e186dae2593a551b76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:48:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fknsw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:48:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-56g5c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:48:50Z is after 2025-08-24T17:21:41Z" Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.904927 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:48:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:48:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:48:50Z is after 2025-08-24T17:21:41Z" Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.921744 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-klv95" Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.930671 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-4fhtp" Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.931270 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q9k57" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce1e7994-3a5d-488d-90a5-115ca4cb7cf3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:48:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:48:50Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:48:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:48:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z872c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z872c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z872c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-z872c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z872c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z872c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z872c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z872c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z872c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:48:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-q9k57\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:48:50Z is after 2025-08-24T17:21:41Z" Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.945275 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.956751 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-q9k57" Dec 11 13:48:50 crc kubenswrapper[5050]: I1211 13:48:50.956792 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:48:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:48:48Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:48:50Z is after 2025-08-24T17:21:41Z" Dec 11 13:48:51 crc kubenswrapper[5050]: I1211 13:48:51.002487 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7e849b2e-7cd7-4e49-acd2-deab139c699a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:48:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:48:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:48:50Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:48:50Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkxsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mkxsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:48:50Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-wcb2s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:48:50Z is after 2025-08-24T17:21:41Z" Dec 11 13:48:51 crc kubenswrapper[5050]: I1211 13:48:51.545411 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 13:48:51 crc kubenswrapper[5050]: E1211 13:48:51.545773 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 13:48:51 crc kubenswrapper[5050]: I1211 13:48:51.703654 5050 generic.go:334] "Generic (PLEG): container finished" podID="ce1e7994-3a5d-488d-90a5-115ca4cb7cf3" containerID="01285bd00ac1055e7f57130d4d3e052769346733fd1c2854e0e988581a553f66" exitCode=0 Dec 11 13:48:51 crc kubenswrapper[5050]: I1211 13:48:51.703761 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q9k57" event={"ID":"ce1e7994-3a5d-488d-90a5-115ca4cb7cf3","Type":"ContainerDied","Data":"01285bd00ac1055e7f57130d4d3e052769346733fd1c2854e0e988581a553f66"} Dec 11 13:48:51 crc kubenswrapper[5050]: I1211 13:48:51.703828 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q9k57" event={"ID":"ce1e7994-3a5d-488d-90a5-115ca4cb7cf3","Type":"ContainerStarted","Data":"7a9891925bc898a38965a64c07159961b34a3666d2ded2a3e52a52eba078fcd6"} Dec 11 13:48:51 crc kubenswrapper[5050]: I1211 13:48:51.706669 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" event={"ID":"7e849b2e-7cd7-4e49-acd2-deab139c699a","Type":"ContainerStarted","Data":"b8f6db6e6f98c7ab7de3ee1c5718196e46f5059c59d59733803a8f7a528e4054"} Dec 11 13:48:51 crc kubenswrapper[5050]: I1211 13:48:51.706715 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" event={"ID":"7e849b2e-7cd7-4e49-acd2-deab139c699a","Type":"ContainerStarted","Data":"dd5e3efb6c32fb9d9f76a12a8e8a2e6fcb32ad3cbf663ac6d264ea7b3f858008"} Dec 11 13:48:51 crc kubenswrapper[5050]: I1211 13:48:51.706727 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" event={"ID":"7e849b2e-7cd7-4e49-acd2-deab139c699a","Type":"ContainerStarted","Data":"2bb490df8647f0150547585ce861a12452f22c29dfa4540e215c79972b35375f"} Dec 11 13:48:51 crc kubenswrapper[5050]: I1211 13:48:51.708248 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-4fhtp" event={"ID":"de09c7d4-952a-405d-9a54-32331c538ee2","Type":"ContainerStarted","Data":"06e98b7bca17e966a2ccbdcf16ada0897f369c85d34360def6417eea7581e4f2"} Dec 11 13:48:51 crc kubenswrapper[5050]: I1211 13:48:51.708313 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-4fhtp" event={"ID":"de09c7d4-952a-405d-9a54-32331c538ee2","Type":"ContainerStarted","Data":"baabb290d687699c4314933398018b5028dd59fc45f81660cbb386a6941b07b0"} Dec 11 13:48:51 crc kubenswrapper[5050]: I1211 13:48:51.716069 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Dec 11 13:48:51 crc kubenswrapper[5050]: I1211 13:48:51.718109 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"d6f8c2f45cdfba5b4eb7bdbcb5d48fd2dcf70731aeee14cf3a54dbfc597b8af9"} Dec 11 13:48:51 crc kubenswrapper[5050]: I1211 13:48:51.719811 5050 generic.go:334] "Generic (PLEG): container finished" podID="28eed9d5-26a0-42dd-b2a9-9a86841e2516" containerID="22071b154f385af82abe8bdad0c174d5900440e45fe496f706dcb512548cce4e" exitCode=0 Dec 11 13:48:51 crc kubenswrapper[5050]: I1211 13:48:51.720490 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-klv95" event={"ID":"28eed9d5-26a0-42dd-b2a9-9a86841e2516","Type":"ContainerDied","Data":"22071b154f385af82abe8bdad0c174d5900440e45fe496f706dcb512548cce4e"} Dec 11 13:48:51 crc kubenswrapper[5050]: I1211 13:48:51.720538 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-klv95" event={"ID":"28eed9d5-26a0-42dd-b2a9-9a86841e2516","Type":"ContainerStarted","Data":"426c53ab08a1c199e2b05894d401337029068863e80c511d95f81b31bceef725"} Dec 11 13:48:51 crc kubenswrapper[5050]: I1211 13:48:51.722375 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:48:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:48:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:48:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fea0a82353e1815d59a43f9b649d7675206bae9df75cb4e568b6d7704b4d8271\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:48:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:48:51Z is after 2025-08-24T17:21:41Z" Dec 11 13:48:51 crc kubenswrapper[5050]: I1211 13:48:51.739516 5050 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:48:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:48:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:48:51Z is after 2025-08-24T17:21:41Z" Dec 11 13:48:51 crc kubenswrapper[5050]: I1211 13:48:51.754921 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:48:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:48:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:48:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://61e3950610f64909e8cac86e375c431913174fe7038ade2f26dffb72452219d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:48:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2156683be56db25c3dbc22e6c7b2cd1e13f57e2daafd647ab754a9243006ba92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:48:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:48:51Z is after 2025-08-24T17:21:41Z" Dec 11 13:48:51 crc kubenswrapper[5050]: I1211 13:48:51.769833 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:48:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:48:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:48:51Z is after 2025-08-24T17:21:41Z" Dec 11 13:48:51 crc kubenswrapper[5050]: I1211 13:48:51.779180 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-56g5c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c12c3ee1-28bd-431d-91ad-fa053c81a6bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:48:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:48:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:48:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:48:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d14f46f6c31399a2b9bd5f45c0856d783419b24a19a198e186dae2593a551b76\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T13:48:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fknsw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:48:48Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-56g5c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:48:51Z is after 2025-08-24T17:21:41Z" Dec 11 13:48:51 crc kubenswrapper[5050]: I1211 13:48:51.793172 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-4fhtp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"de09c7d4-952a-405d-9a54-32331c538ee2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:48:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:48:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:48:50Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:48:50Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4p7dg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:48:50Z\\\"}}\" for pod \"openshift-multus\"/\"multus-4fhtp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:48:51Z is after 2025-08-24T17:21:41Z" Dec 11 13:48:51 crc kubenswrapper[5050]: I1211 13:48:51.818526 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q9k57" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce1e7994-3a5d-488d-90a5-115ca4cb7cf3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:48:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:48:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:48:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T13:48:50Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z872c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z872c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z872c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z872c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z872c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z872c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z872c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z872c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01285bd00ac1055e7f57130d4d3e052769346733fd1c2854e0e988581a553f66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://01285bd00ac1055e7f57130d4d3e052769346733fd1c2854e0e988581a553f66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T13:48:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T13:48:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-z872c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T13:48:50Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-q9k57\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:48:51Z is after 2025-08-24T17:21:41Z" Dec 11 13:48:51 crc kubenswrapper[5050]: I1211 13:48:51.845575 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:48:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:48:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:48:51Z is after 2025-08-24T17:21:41Z" Dec 11 13:48:51 crc kubenswrapper[5050]: I1211 13:48:51.861752 5050 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:48:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T13:48:48Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T13:48:51Z is after 2025-08-24T17:21:41Z" Dec 11 13:48:52 crc kubenswrapper[5050]: I1211 13:48:52.051206 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podStartSLOduration=3.0511794 podStartE2EDuration="3.0511794s" podCreationTimestamp="2025-12-11 13:48:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 13:48:52.050985775 +0000 UTC m=+22.894708381" watchObservedRunningTime="2025-12-11 13:48:52.0511794 +0000 UTC m=+22.894901996" Dec 11 13:48:52 crc kubenswrapper[5050]: I1211 13:48:52.066447 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 13:48:52 crc kubenswrapper[5050]: E1211 13:48:52.066623 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 13:48:56.066598127 +0000 UTC m=+26.910320713 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 13:48:52 crc kubenswrapper[5050]: I1211 13:48:52.130914 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=2.130890268 podStartE2EDuration="2.130890268s" podCreationTimestamp="2025-12-11 13:48:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 13:48:52.10805162 +0000 UTC m=+22.951774226" watchObservedRunningTime="2025-12-11 13:48:52.130890268 +0000 UTC m=+22.974612854" Dec 11 13:48:52 crc kubenswrapper[5050]: I1211 13:48:52.167656 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 13:48:52 crc kubenswrapper[5050]: I1211 13:48:52.167699 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 13:48:52 crc kubenswrapper[5050]: I1211 13:48:52.167723 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 13:48:52 crc kubenswrapper[5050]: I1211 13:48:52.167748 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 13:48:52 crc kubenswrapper[5050]: E1211 13:48:52.167824 5050 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 11 13:48:52 crc kubenswrapper[5050]: E1211 13:48:52.167878 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-11 13:48:56.16786289 +0000 UTC m=+27.011585476 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 11 13:48:52 crc kubenswrapper[5050]: E1211 13:48:52.167876 5050 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 11 13:48:52 crc kubenswrapper[5050]: E1211 13:48:52.167903 5050 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 11 13:48:52 crc kubenswrapper[5050]: E1211 13:48:52.167917 5050 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 13:48:52 crc kubenswrapper[5050]: E1211 13:48:52.167909 5050 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 11 13:48:52 crc kubenswrapper[5050]: E1211 13:48:52.167946 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-11 13:48:56.167937752 +0000 UTC m=+27.011660338 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 13:48:52 crc kubenswrapper[5050]: E1211 13:48:52.167993 5050 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 11 13:48:52 crc kubenswrapper[5050]: E1211 13:48:52.168048 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-11 13:48:56.167994983 +0000 UTC m=+27.011717559 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 11 13:48:52 crc kubenswrapper[5050]: E1211 13:48:52.168052 5050 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 11 13:48:52 crc kubenswrapper[5050]: E1211 13:48:52.168071 5050 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 13:48:52 crc kubenswrapper[5050]: E1211 13:48:52.168143 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-11 13:48:56.168121887 +0000 UTC m=+27.011844473 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 13:48:52 crc kubenswrapper[5050]: I1211 13:48:52.218339 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-56g5c" podStartSLOduration=4.218318636 podStartE2EDuration="4.218318636s" podCreationTimestamp="2025-12-11 13:48:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 13:48:52.191950252 +0000 UTC m=+23.035672838" watchObservedRunningTime="2025-12-11 13:48:52.218318636 +0000 UTC m=+23.062041222" Dec 11 13:48:52 crc kubenswrapper[5050]: I1211 13:48:52.264084 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-4fhtp" podStartSLOduration=3.264058015 podStartE2EDuration="3.264058015s" podCreationTimestamp="2025-12-11 13:48:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 13:48:52.223370563 +0000 UTC m=+23.067093149" watchObservedRunningTime="2025-12-11 13:48:52.264058015 +0000 UTC m=+23.107780611" Dec 11 13:48:52 crc kubenswrapper[5050]: I1211 13:48:52.431499 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5k2z5"] Dec 11 13:48:52 crc kubenswrapper[5050]: I1211 13:48:52.432593 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5k2z5" Dec 11 13:48:52 crc kubenswrapper[5050]: I1211 13:48:52.435877 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Dec 11 13:48:52 crc kubenswrapper[5050]: I1211 13:48:52.441426 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Dec 11 13:48:52 crc kubenswrapper[5050]: I1211 13:48:52.444162 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-tm86r"] Dec 11 13:48:52 crc kubenswrapper[5050]: I1211 13:48:52.444825 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-tm86r" Dec 11 13:48:52 crc kubenswrapper[5050]: I1211 13:48:52.446719 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Dec 11 13:48:52 crc kubenswrapper[5050]: I1211 13:48:52.448838 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Dec 11 13:48:52 crc kubenswrapper[5050]: I1211 13:48:52.449321 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Dec 11 13:48:52 crc kubenswrapper[5050]: I1211 13:48:52.453304 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Dec 11 13:48:52 crc kubenswrapper[5050]: I1211 13:48:52.488184 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-lttxf"] Dec 11 13:48:52 crc kubenswrapper[5050]: I1211 13:48:52.488707 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lttxf" Dec 11 13:48:52 crc kubenswrapper[5050]: E1211 13:48:52.488774 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lttxf" podUID="2b16e336-8c81-45d1-a527-599b29a7c070" Dec 11 13:48:52 crc kubenswrapper[5050]: I1211 13:48:52.544954 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 13:48:52 crc kubenswrapper[5050]: I1211 13:48:52.545080 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 13:48:52 crc kubenswrapper[5050]: E1211 13:48:52.545109 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 13:48:52 crc kubenswrapper[5050]: E1211 13:48:52.545292 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 13:48:52 crc kubenswrapper[5050]: I1211 13:48:52.572527 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ec1756d3-37eb-495d-8fe6-34095f734351-env-overrides\") pod \"ovnkube-control-plane-749d76644c-5k2z5\" (UID: \"ec1756d3-37eb-495d-8fe6-34095f734351\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5k2z5" Dec 11 13:48:52 crc kubenswrapper[5050]: I1211 13:48:52.572602 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2b16e336-8c81-45d1-a527-599b29a7c070-metrics-certs\") pod \"network-metrics-daemon-lttxf\" (UID: \"2b16e336-8c81-45d1-a527-599b29a7c070\") " pod="openshift-multus/network-metrics-daemon-lttxf" Dec 11 13:48:52 crc kubenswrapper[5050]: I1211 13:48:52.572659 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4tpf\" (UniqueName: \"kubernetes.io/projected/2b16e336-8c81-45d1-a527-599b29a7c070-kube-api-access-b4tpf\") pod \"network-metrics-daemon-lttxf\" (UID: \"2b16e336-8c81-45d1-a527-599b29a7c070\") " pod="openshift-multus/network-metrics-daemon-lttxf" Dec 11 13:48:52 crc kubenswrapper[5050]: I1211 13:48:52.572689 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27zw4\" (UniqueName: \"kubernetes.io/projected/5dbb9e68-5211-4900-96a3-09ea714dc86b-kube-api-access-27zw4\") pod \"node-ca-tm86r\" (UID: \"5dbb9e68-5211-4900-96a3-09ea714dc86b\") " pod="openshift-image-registry/node-ca-tm86r" Dec 11 13:48:52 crc kubenswrapper[5050]: I1211 13:48:52.572735 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5dbb9e68-5211-4900-96a3-09ea714dc86b-host\") pod \"node-ca-tm86r\" (UID: \"5dbb9e68-5211-4900-96a3-09ea714dc86b\") " pod="openshift-image-registry/node-ca-tm86r" Dec 11 13:48:52 crc kubenswrapper[5050]: I1211 13:48:52.572765 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4qcc\" (UniqueName: \"kubernetes.io/projected/ec1756d3-37eb-495d-8fe6-34095f734351-kube-api-access-t4qcc\") pod \"ovnkube-control-plane-749d76644c-5k2z5\" (UID: \"ec1756d3-37eb-495d-8fe6-34095f734351\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5k2z5" Dec 11 13:48:52 crc kubenswrapper[5050]: I1211 13:48:52.572794 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/5dbb9e68-5211-4900-96a3-09ea714dc86b-serviceca\") pod \"node-ca-tm86r\" (UID: \"5dbb9e68-5211-4900-96a3-09ea714dc86b\") " pod="openshift-image-registry/node-ca-tm86r" Dec 11 13:48:52 crc kubenswrapper[5050]: I1211 
13:48:52.572842 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ec1756d3-37eb-495d-8fe6-34095f734351-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-5k2z5\" (UID: \"ec1756d3-37eb-495d-8fe6-34095f734351\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5k2z5" Dec 11 13:48:52 crc kubenswrapper[5050]: I1211 13:48:52.572881 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ec1756d3-37eb-495d-8fe6-34095f734351-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-5k2z5\" (UID: \"ec1756d3-37eb-495d-8fe6-34095f734351\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5k2z5" Dec 11 13:48:52 crc kubenswrapper[5050]: I1211 13:48:52.619438 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Dec 11 13:48:52 crc kubenswrapper[5050]: I1211 13:48:52.632353 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Dec 11 13:48:52 crc kubenswrapper[5050]: I1211 13:48:52.639198 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Dec 11 13:48:52 crc kubenswrapper[5050]: I1211 13:48:52.673281 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b4tpf\" (UniqueName: \"kubernetes.io/projected/2b16e336-8c81-45d1-a527-599b29a7c070-kube-api-access-b4tpf\") pod \"network-metrics-daemon-lttxf\" (UID: \"2b16e336-8c81-45d1-a527-599b29a7c070\") " pod="openshift-multus/network-metrics-daemon-lttxf" Dec 11 13:48:52 crc kubenswrapper[5050]: I1211 13:48:52.673323 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-27zw4\" (UniqueName: \"kubernetes.io/projected/5dbb9e68-5211-4900-96a3-09ea714dc86b-kube-api-access-27zw4\") pod \"node-ca-tm86r\" (UID: \"5dbb9e68-5211-4900-96a3-09ea714dc86b\") " pod="openshift-image-registry/node-ca-tm86r" Dec 11 13:48:52 crc kubenswrapper[5050]: I1211 13:48:52.673358 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5dbb9e68-5211-4900-96a3-09ea714dc86b-host\") pod \"node-ca-tm86r\" (UID: \"5dbb9e68-5211-4900-96a3-09ea714dc86b\") " pod="openshift-image-registry/node-ca-tm86r" Dec 11 13:48:52 crc kubenswrapper[5050]: I1211 13:48:52.673382 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t4qcc\" (UniqueName: \"kubernetes.io/projected/ec1756d3-37eb-495d-8fe6-34095f734351-kube-api-access-t4qcc\") pod \"ovnkube-control-plane-749d76644c-5k2z5\" (UID: \"ec1756d3-37eb-495d-8fe6-34095f734351\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5k2z5" Dec 11 13:48:52 crc kubenswrapper[5050]: I1211 13:48:52.673404 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/5dbb9e68-5211-4900-96a3-09ea714dc86b-serviceca\") pod \"node-ca-tm86r\" (UID: \"5dbb9e68-5211-4900-96a3-09ea714dc86b\") " pod="openshift-image-registry/node-ca-tm86r" Dec 11 13:48:52 crc kubenswrapper[5050]: I1211 13:48:52.673440 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/ec1756d3-37eb-495d-8fe6-34095f734351-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-5k2z5\" (UID: \"ec1756d3-37eb-495d-8fe6-34095f734351\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5k2z5" Dec 11 13:48:52 crc kubenswrapper[5050]: I1211 13:48:52.673547 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ec1756d3-37eb-495d-8fe6-34095f734351-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-5k2z5\" (UID: \"ec1756d3-37eb-495d-8fe6-34095f734351\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5k2z5" Dec 11 13:48:52 crc kubenswrapper[5050]: I1211 13:48:52.673639 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5dbb9e68-5211-4900-96a3-09ea714dc86b-host\") pod \"node-ca-tm86r\" (UID: \"5dbb9e68-5211-4900-96a3-09ea714dc86b\") " pod="openshift-image-registry/node-ca-tm86r" Dec 11 13:48:52 crc kubenswrapper[5050]: I1211 13:48:52.674082 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ec1756d3-37eb-495d-8fe6-34095f734351-env-overrides\") pod \"ovnkube-control-plane-749d76644c-5k2z5\" (UID: \"ec1756d3-37eb-495d-8fe6-34095f734351\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5k2z5" Dec 11 13:48:52 crc kubenswrapper[5050]: I1211 13:48:52.674155 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2b16e336-8c81-45d1-a527-599b29a7c070-metrics-certs\") pod \"network-metrics-daemon-lttxf\" (UID: \"2b16e336-8c81-45d1-a527-599b29a7c070\") " pod="openshift-multus/network-metrics-daemon-lttxf" Dec 11 13:48:52 crc kubenswrapper[5050]: E1211 13:48:52.674371 5050 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 11 13:48:52 crc kubenswrapper[5050]: E1211 13:48:52.674677 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2b16e336-8c81-45d1-a527-599b29a7c070-metrics-certs podName:2b16e336-8c81-45d1-a527-599b29a7c070 nodeName:}" failed. No retries permitted until 2025-12-11 13:48:53.174447688 +0000 UTC m=+24.018170274 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2b16e336-8c81-45d1-a527-599b29a7c070-metrics-certs") pod "network-metrics-daemon-lttxf" (UID: "2b16e336-8c81-45d1-a527-599b29a7c070") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 11 13:48:52 crc kubenswrapper[5050]: I1211 13:48:52.675132 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ec1756d3-37eb-495d-8fe6-34095f734351-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-5k2z5\" (UID: \"ec1756d3-37eb-495d-8fe6-34095f734351\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5k2z5" Dec 11 13:48:52 crc kubenswrapper[5050]: I1211 13:48:52.675367 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/5dbb9e68-5211-4900-96a3-09ea714dc86b-serviceca\") pod \"node-ca-tm86r\" (UID: \"5dbb9e68-5211-4900-96a3-09ea714dc86b\") " pod="openshift-image-registry/node-ca-tm86r" Dec 11 13:48:52 crc kubenswrapper[5050]: I1211 13:48:52.675664 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ec1756d3-37eb-495d-8fe6-34095f734351-env-overrides\") pod \"ovnkube-control-plane-749d76644c-5k2z5\" (UID: \"ec1756d3-37eb-495d-8fe6-34095f734351\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5k2z5" Dec 11 13:48:52 crc kubenswrapper[5050]: I1211 13:48:52.685047 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ec1756d3-37eb-495d-8fe6-34095f734351-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-5k2z5\" (UID: \"ec1756d3-37eb-495d-8fe6-34095f734351\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5k2z5" Dec 11 13:48:52 crc kubenswrapper[5050]: I1211 13:48:52.696841 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b4tpf\" (UniqueName: \"kubernetes.io/projected/2b16e336-8c81-45d1-a527-599b29a7c070-kube-api-access-b4tpf\") pod \"network-metrics-daemon-lttxf\" (UID: \"2b16e336-8c81-45d1-a527-599b29a7c070\") " pod="openshift-multus/network-metrics-daemon-lttxf" Dec 11 13:48:52 crc kubenswrapper[5050]: I1211 13:48:52.697084 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-27zw4\" (UniqueName: \"kubernetes.io/projected/5dbb9e68-5211-4900-96a3-09ea714dc86b-kube-api-access-27zw4\") pod \"node-ca-tm86r\" (UID: \"5dbb9e68-5211-4900-96a3-09ea714dc86b\") " pod="openshift-image-registry/node-ca-tm86r" Dec 11 13:48:52 crc kubenswrapper[5050]: I1211 13:48:52.697622 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t4qcc\" (UniqueName: \"kubernetes.io/projected/ec1756d3-37eb-495d-8fe6-34095f734351-kube-api-access-t4qcc\") pod \"ovnkube-control-plane-749d76644c-5k2z5\" (UID: \"ec1756d3-37eb-495d-8fe6-34095f734351\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5k2z5" Dec 11 13:48:52 crc kubenswrapper[5050]: I1211 13:48:52.724850 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"7988c20daf7c2d4e8670e6399f68ea6bf584789b21bf4dc4aa97184a7d2312b6"} Dec 11 13:48:52 crc kubenswrapper[5050]: I1211 13:48:52.729615 5050 generic.go:334] 
"Generic (PLEG): container finished" podID="28eed9d5-26a0-42dd-b2a9-9a86841e2516" containerID="d91d8f0a62be2ef32b329c4099b5ea3866c4b5b468b0a0e4282d8b1a83bd7d14" exitCode=0 Dec 11 13:48:52 crc kubenswrapper[5050]: I1211 13:48:52.729747 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-klv95" event={"ID":"28eed9d5-26a0-42dd-b2a9-9a86841e2516","Type":"ContainerDied","Data":"d91d8f0a62be2ef32b329c4099b5ea3866c4b5b468b0a0e4282d8b1a83bd7d14"} Dec 11 13:48:52 crc kubenswrapper[5050]: I1211 13:48:52.733870 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q9k57" event={"ID":"ce1e7994-3a5d-488d-90a5-115ca4cb7cf3","Type":"ContainerStarted","Data":"a3155fb448e8d746437d2814848db17f5b9f4928a65b8d2d5b6554ae4acf4ae7"} Dec 11 13:48:52 crc kubenswrapper[5050]: I1211 13:48:52.733950 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q9k57" event={"ID":"ce1e7994-3a5d-488d-90a5-115ca4cb7cf3","Type":"ContainerStarted","Data":"c2ba379761001eca9453261ca94fd9bcb5a9b85cec51acadd96ce43a4be35f43"} Dec 11 13:48:52 crc kubenswrapper[5050]: I1211 13:48:52.733967 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q9k57" event={"ID":"ce1e7994-3a5d-488d-90a5-115ca4cb7cf3","Type":"ContainerStarted","Data":"ab073247dca519f0c76d97e6a5a6e3de4ef0c58431d92d503a9c6494d9e2316a"} Dec 11 13:48:52 crc kubenswrapper[5050]: I1211 13:48:52.733982 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q9k57" event={"ID":"ce1e7994-3a5d-488d-90a5-115ca4cb7cf3","Type":"ContainerStarted","Data":"b79cca188f2f2740a6f2ada09d6063ae085ca891409ef51df998ad18b121d465"} Dec 11 13:48:52 crc kubenswrapper[5050]: I1211 13:48:52.733995 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q9k57" event={"ID":"ce1e7994-3a5d-488d-90a5-115ca4cb7cf3","Type":"ContainerStarted","Data":"ccf6b22e62a8e428060aa8c1c10e874c12d3410de016017c055dc65c2b66b963"} Dec 11 13:48:52 crc kubenswrapper[5050]: I1211 13:48:52.734525 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 13:48:52 crc kubenswrapper[5050]: E1211 13:48:52.745150 5050 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"etcd-crc\" already exists" pod="openshift-etcd/etcd-crc" Dec 11 13:48:52 crc kubenswrapper[5050]: I1211 13:48:52.751805 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=0.751785692 podStartE2EDuration="751.785692ms" podCreationTimestamp="2025-12-11 13:48:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 13:48:52.748267527 +0000 UTC m=+23.591990113" watchObservedRunningTime="2025-12-11 13:48:52.751785692 +0000 UTC m=+23.595508278" Dec 11 13:48:52 crc kubenswrapper[5050]: I1211 13:48:52.760709 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5k2z5" Dec 11 13:48:52 crc kubenswrapper[5050]: W1211 13:48:52.783777 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podec1756d3_37eb_495d_8fe6_34095f734351.slice/crio-8f58f3cd526a115320ec301bf8b5362abcae8e6a98a52c0ad54b0d4aadb83fb8 WatchSource:0}: Error finding container 8f58f3cd526a115320ec301bf8b5362abcae8e6a98a52c0ad54b0d4aadb83fb8: Status 404 returned error can't find the container with id 8f58f3cd526a115320ec301bf8b5362abcae8e6a98a52c0ad54b0d4aadb83fb8 Dec 11 13:48:52 crc kubenswrapper[5050]: I1211 13:48:52.825951 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 11 13:48:52 crc kubenswrapper[5050]: I1211 13:48:52.829922 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 11 13:48:52 crc kubenswrapper[5050]: I1211 13:48:52.839797 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Dec 11 13:48:52 crc kubenswrapper[5050]: I1211 13:48:52.840042 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-tm86r" Dec 11 13:48:52 crc kubenswrapper[5050]: W1211 13:48:52.853082 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5dbb9e68_5211_4900_96a3_09ea714dc86b.slice/crio-45c18e7cf868c1d4b202c55ce950d9892091c902ea36df8e58473aba5639ece4 WatchSource:0}: Error finding container 45c18e7cf868c1d4b202c55ce950d9892091c902ea36df8e58473aba5639ece4: Status 404 returned error can't find the container with id 45c18e7cf868c1d4b202c55ce950d9892091c902ea36df8e58473aba5639ece4 Dec 11 13:48:53 crc kubenswrapper[5050]: I1211 13:48:53.288425 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2b16e336-8c81-45d1-a527-599b29a7c070-metrics-certs\") pod \"network-metrics-daemon-lttxf\" (UID: \"2b16e336-8c81-45d1-a527-599b29a7c070\") " pod="openshift-multus/network-metrics-daemon-lttxf" Dec 11 13:48:53 crc kubenswrapper[5050]: E1211 13:48:53.288609 5050 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 11 13:48:53 crc kubenswrapper[5050]: E1211 13:48:53.288678 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2b16e336-8c81-45d1-a527-599b29a7c070-metrics-certs podName:2b16e336-8c81-45d1-a527-599b29a7c070 nodeName:}" failed. No retries permitted until 2025-12-11 13:48:54.288656451 +0000 UTC m=+25.132379057 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2b16e336-8c81-45d1-a527-599b29a7c070-metrics-certs") pod "network-metrics-daemon-lttxf" (UID: "2b16e336-8c81-45d1-a527-599b29a7c070") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 11 13:48:53 crc kubenswrapper[5050]: I1211 13:48:53.545563 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-lttxf" Dec 11 13:48:53 crc kubenswrapper[5050]: E1211 13:48:53.545726 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lttxf" podUID="2b16e336-8c81-45d1-a527-599b29a7c070" Dec 11 13:48:53 crc kubenswrapper[5050]: I1211 13:48:53.547540 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 13:48:53 crc kubenswrapper[5050]: E1211 13:48:53.547744 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 13:48:53 crc kubenswrapper[5050]: I1211 13:48:53.740123 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-tm86r" event={"ID":"5dbb9e68-5211-4900-96a3-09ea714dc86b","Type":"ContainerStarted","Data":"b49ca9768e83f4fa73006b814e1684b24fd61a0abd174e5db7caacae46b55f9b"} Dec 11 13:48:53 crc kubenswrapper[5050]: I1211 13:48:53.740177 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-tm86r" event={"ID":"5dbb9e68-5211-4900-96a3-09ea714dc86b","Type":"ContainerStarted","Data":"45c18e7cf868c1d4b202c55ce950d9892091c902ea36df8e58473aba5639ece4"} Dec 11 13:48:53 crc kubenswrapper[5050]: I1211 13:48:53.744188 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q9k57" event={"ID":"ce1e7994-3a5d-488d-90a5-115ca4cb7cf3","Type":"ContainerStarted","Data":"5767ff2d50657f2ed1fd5f7fa8bee945962a258ca0ec0507943a5a7295f65efb"} Dec 11 13:48:53 crc kubenswrapper[5050]: I1211 13:48:53.746726 5050 generic.go:334] "Generic (PLEG): container finished" podID="28eed9d5-26a0-42dd-b2a9-9a86841e2516" containerID="f3e5d60c77cdd9982ff28edc378671d4552068ec714d4afb54d96056236f20bf" exitCode=0 Dec 11 13:48:53 crc kubenswrapper[5050]: I1211 13:48:53.746771 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-klv95" event={"ID":"28eed9d5-26a0-42dd-b2a9-9a86841e2516","Type":"ContainerDied","Data":"f3e5d60c77cdd9982ff28edc378671d4552068ec714d4afb54d96056236f20bf"} Dec 11 13:48:53 crc kubenswrapper[5050]: I1211 13:48:53.750483 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5k2z5" event={"ID":"ec1756d3-37eb-495d-8fe6-34095f734351","Type":"ContainerStarted","Data":"f6bfd6f5c56e100e2f6aad2133fdf170db00fdff680094b0ae6f5ffca8e81f8e"} Dec 11 13:48:53 crc kubenswrapper[5050]: I1211 13:48:53.750553 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5k2z5" event={"ID":"ec1756d3-37eb-495d-8fe6-34095f734351","Type":"ContainerStarted","Data":"9b251ff555156db98e525a89f91804e52d33d3d884fbd16f6aedf098b50b6823"} Dec 11 13:48:53 crc kubenswrapper[5050]: I1211 13:48:53.750568 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5k2z5" event={"ID":"ec1756d3-37eb-495d-8fe6-34095f734351","Type":"ContainerStarted","Data":"8f58f3cd526a115320ec301bf8b5362abcae8e6a98a52c0ad54b0d4aadb83fb8"} Dec 11 13:48:53 crc kubenswrapper[5050]: E1211 13:48:53.757795 5050 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-crc\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 11 13:48:53 crc kubenswrapper[5050]: I1211 13:48:53.767972 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=1.76795058 podStartE2EDuration="1.76795058s" podCreationTimestamp="2025-12-11 13:48:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 13:48:53.767741794 +0000 UTC m=+24.611464390" watchObservedRunningTime="2025-12-11 13:48:53.76795058 +0000 UTC m=+24.611673166" Dec 11 13:48:53 crc kubenswrapper[5050]: I1211 13:48:53.784349 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-tm86r" podStartSLOduration=4.784323223 podStartE2EDuration="4.784323223s" podCreationTimestamp="2025-12-11 13:48:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 13:48:53.783686466 +0000 UTC m=+24.627409062" watchObservedRunningTime="2025-12-11 13:48:53.784323223 +0000 UTC m=+24.628045809" Dec 11 13:48:53 crc kubenswrapper[5050]: I1211 13:48:53.807766 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5k2z5" podStartSLOduration=4.807735717 podStartE2EDuration="4.807735717s" podCreationTimestamp="2025-12-11 13:48:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 13:48:53.806638828 +0000 UTC m=+24.650361444" watchObservedRunningTime="2025-12-11 13:48:53.807735717 +0000 UTC m=+24.651458313" Dec 11 13:48:54 crc kubenswrapper[5050]: I1211 13:48:54.302766 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2b16e336-8c81-45d1-a527-599b29a7c070-metrics-certs\") pod \"network-metrics-daemon-lttxf\" (UID: \"2b16e336-8c81-45d1-a527-599b29a7c070\") " pod="openshift-multus/network-metrics-daemon-lttxf" Dec 11 13:48:54 crc kubenswrapper[5050]: E1211 13:48:54.302987 5050 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 11 13:48:54 crc kubenswrapper[5050]: E1211 13:48:54.303141 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2b16e336-8c81-45d1-a527-599b29a7c070-metrics-certs podName:2b16e336-8c81-45d1-a527-599b29a7c070 nodeName:}" failed. No retries permitted until 2025-12-11 13:48:56.303111982 +0000 UTC m=+27.146834768 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2b16e336-8c81-45d1-a527-599b29a7c070-metrics-certs") pod "network-metrics-daemon-lttxf" (UID: "2b16e336-8c81-45d1-a527-599b29a7c070") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 11 13:48:54 crc kubenswrapper[5050]: I1211 13:48:54.546080 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 13:48:54 crc kubenswrapper[5050]: I1211 13:48:54.546159 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 13:48:54 crc kubenswrapper[5050]: E1211 13:48:54.546230 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 13:48:54 crc kubenswrapper[5050]: E1211 13:48:54.546371 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 13:48:54 crc kubenswrapper[5050]: I1211 13:48:54.756909 5050 generic.go:334] "Generic (PLEG): container finished" podID="28eed9d5-26a0-42dd-b2a9-9a86841e2516" containerID="919cf8d25f65af2405f4426dcdb18691f474b7018cc87ef966353cc1c419907c" exitCode=0 Dec 11 13:48:54 crc kubenswrapper[5050]: I1211 13:48:54.756991 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-klv95" event={"ID":"28eed9d5-26a0-42dd-b2a9-9a86841e2516","Type":"ContainerDied","Data":"919cf8d25f65af2405f4426dcdb18691f474b7018cc87ef966353cc1c419907c"} Dec 11 13:48:54 crc kubenswrapper[5050]: I1211 13:48:54.784659 5050 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 13:48:54 crc kubenswrapper[5050]: I1211 13:48:54.787397 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:48:54 crc kubenswrapper[5050]: I1211 13:48:54.787483 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:48:54 crc kubenswrapper[5050]: I1211 13:48:54.787510 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:48:54 crc kubenswrapper[5050]: I1211 13:48:54.787723 5050 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 11 13:48:54 crc kubenswrapper[5050]: I1211 13:48:54.795130 5050 kubelet_node_status.go:115] "Node was previously registered" node="crc" Dec 11 13:48:54 crc kubenswrapper[5050]: I1211 13:48:54.795515 5050 kubelet_node_status.go:79] "Successfully registered node" node="crc" Dec 11 13:48:54 crc kubenswrapper[5050]: I1211 13:48:54.796977 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 13:48:54 
crc kubenswrapper[5050]: I1211 13:48:54.797055 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 13:48:54 crc kubenswrapper[5050]: I1211 13:48:54.797074 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 13:48:54 crc kubenswrapper[5050]: I1211 13:48:54.797095 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 13:48:54 crc kubenswrapper[5050]: I1211 13:48:54.797116 5050 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T13:48:54Z","lastTransitionTime":"2025-12-11T13:48:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 13:48:54 crc kubenswrapper[5050]: I1211 13:48:54.850835 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-69whn"] Dec 11 13:48:54 crc kubenswrapper[5050]: I1211 13:48:54.851358 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-69whn" Dec 11 13:48:54 crc kubenswrapper[5050]: I1211 13:48:54.853149 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Dec 11 13:48:54 crc kubenswrapper[5050]: I1211 13:48:54.853952 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Dec 11 13:48:54 crc kubenswrapper[5050]: I1211 13:48:54.854190 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Dec 11 13:48:54 crc kubenswrapper[5050]: I1211 13:48:54.855386 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Dec 11 13:48:54 crc kubenswrapper[5050]: I1211 13:48:54.910321 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e23d9768-4acb-4469-be19-f00834ea063f-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-69whn\" (UID: \"e23d9768-4acb-4469-be19-f00834ea063f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-69whn" Dec 11 13:48:54 crc kubenswrapper[5050]: I1211 13:48:54.910509 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/e23d9768-4acb-4469-be19-f00834ea063f-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-69whn\" (UID: \"e23d9768-4acb-4469-be19-f00834ea063f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-69whn" Dec 11 13:48:54 crc kubenswrapper[5050]: I1211 13:48:54.910690 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e23d9768-4acb-4469-be19-f00834ea063f-service-ca\") pod \"cluster-version-operator-5c965bbfc6-69whn\" (UID: \"e23d9768-4acb-4469-be19-f00834ea063f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-69whn" Dec 11 13:48:54 crc kubenswrapper[5050]: I1211 13:48:54.910767 
5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/e23d9768-4acb-4469-be19-f00834ea063f-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-69whn\" (UID: \"e23d9768-4acb-4469-be19-f00834ea063f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-69whn" Dec 11 13:48:54 crc kubenswrapper[5050]: I1211 13:48:54.910858 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e23d9768-4acb-4469-be19-f00834ea063f-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-69whn\" (UID: \"e23d9768-4acb-4469-be19-f00834ea063f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-69whn" Dec 11 13:48:55 crc kubenswrapper[5050]: I1211 13:48:55.012614 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e23d9768-4acb-4469-be19-f00834ea063f-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-69whn\" (UID: \"e23d9768-4acb-4469-be19-f00834ea063f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-69whn" Dec 11 13:48:55 crc kubenswrapper[5050]: I1211 13:48:55.012818 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/e23d9768-4acb-4469-be19-f00834ea063f-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-69whn\" (UID: \"e23d9768-4acb-4469-be19-f00834ea063f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-69whn" Dec 11 13:48:55 crc kubenswrapper[5050]: I1211 13:48:55.012864 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e23d9768-4acb-4469-be19-f00834ea063f-service-ca\") pod \"cluster-version-operator-5c965bbfc6-69whn\" (UID: \"e23d9768-4acb-4469-be19-f00834ea063f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-69whn" Dec 11 13:48:55 crc kubenswrapper[5050]: I1211 13:48:55.012892 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/e23d9768-4acb-4469-be19-f00834ea063f-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-69whn\" (UID: \"e23d9768-4acb-4469-be19-f00834ea063f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-69whn" Dec 11 13:48:55 crc kubenswrapper[5050]: I1211 13:48:55.012910 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e23d9768-4acb-4469-be19-f00834ea063f-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-69whn\" (UID: \"e23d9768-4acb-4469-be19-f00834ea063f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-69whn" Dec 11 13:48:55 crc kubenswrapper[5050]: I1211 13:48:55.013121 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/e23d9768-4acb-4469-be19-f00834ea063f-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-69whn\" (UID: \"e23d9768-4acb-4469-be19-f00834ea063f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-69whn" Dec 11 13:48:55 crc kubenswrapper[5050]: I1211 13:48:55.013142 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/e23d9768-4acb-4469-be19-f00834ea063f-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-69whn\" (UID: \"e23d9768-4acb-4469-be19-f00834ea063f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-69whn" Dec 11 13:48:55 crc kubenswrapper[5050]: I1211 13:48:55.013790 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e23d9768-4acb-4469-be19-f00834ea063f-service-ca\") pod \"cluster-version-operator-5c965bbfc6-69whn\" (UID: \"e23d9768-4acb-4469-be19-f00834ea063f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-69whn" Dec 11 13:48:55 crc kubenswrapper[5050]: I1211 13:48:55.018609 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e23d9768-4acb-4469-be19-f00834ea063f-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-69whn\" (UID: \"e23d9768-4acb-4469-be19-f00834ea063f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-69whn" Dec 11 13:48:55 crc kubenswrapper[5050]: I1211 13:48:55.037356 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e23d9768-4acb-4469-be19-f00834ea063f-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-69whn\" (UID: \"e23d9768-4acb-4469-be19-f00834ea063f\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-69whn" Dec 11 13:48:55 crc kubenswrapper[5050]: I1211 13:48:55.180335 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-69whn" Dec 11 13:48:55 crc kubenswrapper[5050]: W1211 13:48:55.194922 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode23d9768_4acb_4469_be19_f00834ea063f.slice/crio-e1e7d31260fab1dbe2e4c0a282701a16a2799f89cd6e82bd70b699e3e46edeab WatchSource:0}: Error finding container e1e7d31260fab1dbe2e4c0a282701a16a2799f89cd6e82bd70b699e3e46edeab: Status 404 returned error can't find the container with id e1e7d31260fab1dbe2e4c0a282701a16a2799f89cd6e82bd70b699e3e46edeab Dec 11 13:48:55 crc kubenswrapper[5050]: I1211 13:48:55.546799 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lttxf" Dec 11 13:48:55 crc kubenswrapper[5050]: E1211 13:48:55.547675 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lttxf" podUID="2b16e336-8c81-45d1-a527-599b29a7c070" Dec 11 13:48:55 crc kubenswrapper[5050]: I1211 13:48:55.547987 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 13:48:55 crc kubenswrapper[5050]: E1211 13:48:55.548410 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 13:48:55 crc kubenswrapper[5050]: I1211 13:48:55.764046 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-69whn" event={"ID":"e23d9768-4acb-4469-be19-f00834ea063f","Type":"ContainerStarted","Data":"13e4f91668f81ef7948e80c8b5d336565856b5cfdc48855bf0a656acd953d46e"} Dec 11 13:48:55 crc kubenswrapper[5050]: I1211 13:48:55.764124 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-69whn" event={"ID":"e23d9768-4acb-4469-be19-f00834ea063f","Type":"ContainerStarted","Data":"e1e7d31260fab1dbe2e4c0a282701a16a2799f89cd6e82bd70b699e3e46edeab"} Dec 11 13:48:55 crc kubenswrapper[5050]: I1211 13:48:55.779056 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q9k57" event={"ID":"ce1e7994-3a5d-488d-90a5-115ca4cb7cf3","Type":"ContainerStarted","Data":"0d6dc0aec5873226f4eaa2bfbe08e77f854d42624aa47ffb8403123b614fcd9f"} Dec 11 13:48:55 crc kubenswrapper[5050]: I1211 13:48:55.789996 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-69whn" podStartSLOduration=7.789955885 podStartE2EDuration="7.789955885s" podCreationTimestamp="2025-12-11 13:48:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 13:48:55.789610096 +0000 UTC m=+26.633332682" watchObservedRunningTime="2025-12-11 13:48:55.789955885 +0000 UTC m=+26.633678471" Dec 11 13:48:55 crc kubenswrapper[5050]: I1211 13:48:55.794377 5050 generic.go:334] "Generic (PLEG): container finished" podID="28eed9d5-26a0-42dd-b2a9-9a86841e2516" containerID="2b48f625a55a8051018abe9db055218bd7f9f16255b61ac042fda970d823a79c" exitCode=0 Dec 11 13:48:55 crc kubenswrapper[5050]: I1211 13:48:55.794429 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-klv95" event={"ID":"28eed9d5-26a0-42dd-b2a9-9a86841e2516","Type":"ContainerDied","Data":"2b48f625a55a8051018abe9db055218bd7f9f16255b61ac042fda970d823a79c"} Dec 11 13:48:56 crc kubenswrapper[5050]: I1211 13:48:56.126239 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 13:48:56 crc kubenswrapper[5050]: E1211 13:48:56.126397 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 13:49:04.126368275 +0000 UTC m=+34.970090861 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 13:48:56 crc kubenswrapper[5050]: I1211 13:48:56.228150 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 13:48:56 crc kubenswrapper[5050]: I1211 13:48:56.228221 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 13:48:56 crc kubenswrapper[5050]: I1211 13:48:56.228260 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 13:48:56 crc kubenswrapper[5050]: I1211 13:48:56.228298 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 13:48:56 crc kubenswrapper[5050]: E1211 13:48:56.228369 5050 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 11 13:48:56 crc kubenswrapper[5050]: E1211 13:48:56.228408 5050 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 11 13:48:56 crc kubenswrapper[5050]: E1211 13:48:56.228441 5050 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 11 13:48:56 crc kubenswrapper[5050]: E1211 13:48:56.228468 5050 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 11 13:48:56 crc kubenswrapper[5050]: E1211 13:48:56.228483 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-11 13:49:04.22845499 +0000 UTC m=+35.072177576 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 11 13:48:56 crc kubenswrapper[5050]: E1211 13:48:56.228487 5050 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 13:48:56 crc kubenswrapper[5050]: E1211 13:48:56.228512 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-11 13:49:04.228499691 +0000 UTC m=+35.072222347 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 11 13:48:56 crc kubenswrapper[5050]: E1211 13:48:56.228566 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-11 13:49:04.228554422 +0000 UTC m=+35.072277108 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 13:48:56 crc kubenswrapper[5050]: E1211 13:48:56.228544 5050 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 11 13:48:56 crc kubenswrapper[5050]: E1211 13:48:56.228625 5050 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 11 13:48:56 crc kubenswrapper[5050]: E1211 13:48:56.228641 5050 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 13:48:56 crc kubenswrapper[5050]: E1211 13:48:56.228710 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-11 13:49:04.228690606 +0000 UTC m=+35.072413192 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 13:48:56 crc kubenswrapper[5050]: I1211 13:48:56.329379 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2b16e336-8c81-45d1-a527-599b29a7c070-metrics-certs\") pod \"network-metrics-daemon-lttxf\" (UID: \"2b16e336-8c81-45d1-a527-599b29a7c070\") " pod="openshift-multus/network-metrics-daemon-lttxf" Dec 11 13:48:56 crc kubenswrapper[5050]: E1211 13:48:56.329542 5050 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 11 13:48:56 crc kubenswrapper[5050]: E1211 13:48:56.329606 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2b16e336-8c81-45d1-a527-599b29a7c070-metrics-certs podName:2b16e336-8c81-45d1-a527-599b29a7c070 nodeName:}" failed. No retries permitted until 2025-12-11 13:49:00.329590778 +0000 UTC m=+31.173313364 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2b16e336-8c81-45d1-a527-599b29a7c070-metrics-certs") pod "network-metrics-daemon-lttxf" (UID: "2b16e336-8c81-45d1-a527-599b29a7c070") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 11 13:48:56 crc kubenswrapper[5050]: I1211 13:48:56.545645 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 13:48:56 crc kubenswrapper[5050]: I1211 13:48:56.545724 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 13:48:56 crc kubenswrapper[5050]: E1211 13:48:56.545839 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 13:48:56 crc kubenswrapper[5050]: E1211 13:48:56.545966 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 13:48:56 crc kubenswrapper[5050]: I1211 13:48:56.803694 5050 generic.go:334] "Generic (PLEG): container finished" podID="28eed9d5-26a0-42dd-b2a9-9a86841e2516" containerID="93718fc37e04e3952773569c4ca54b4a67a190e545d0de1e844116139a1c6120" exitCode=0 Dec 11 13:48:56 crc kubenswrapper[5050]: I1211 13:48:56.803801 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-klv95" event={"ID":"28eed9d5-26a0-42dd-b2a9-9a86841e2516","Type":"ContainerDied","Data":"93718fc37e04e3952773569c4ca54b4a67a190e545d0de1e844116139a1c6120"} Dec 11 13:48:57 crc kubenswrapper[5050]: I1211 13:48:57.545175 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lttxf" Dec 11 13:48:57 crc kubenswrapper[5050]: E1211 13:48:57.545815 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lttxf" podUID="2b16e336-8c81-45d1-a527-599b29a7c070" Dec 11 13:48:57 crc kubenswrapper[5050]: I1211 13:48:57.545247 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 13:48:57 crc kubenswrapper[5050]: E1211 13:48:57.546504 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 13:48:57 crc kubenswrapper[5050]: I1211 13:48:57.815266 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q9k57" event={"ID":"ce1e7994-3a5d-488d-90a5-115ca4cb7cf3","Type":"ContainerStarted","Data":"8af0c7f049f1a518d307999685e3bee830df9eb2f0c799eb94c8acaa62f548df"} Dec 11 13:48:57 crc kubenswrapper[5050]: I1211 13:48:57.815692 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-q9k57" Dec 11 13:48:57 crc kubenswrapper[5050]: I1211 13:48:57.826580 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-klv95" event={"ID":"28eed9d5-26a0-42dd-b2a9-9a86841e2516","Type":"ContainerStarted","Data":"b2ce76f3cda35228504d36a61ba7239a1aa36020c7f9cf05f3a094fbfa2001f9"} Dec 11 13:48:57 crc kubenswrapper[5050]: I1211 13:48:57.912049 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-q9k57" podStartSLOduration=8.91200355 podStartE2EDuration="8.91200355s" podCreationTimestamp="2025-12-11 13:48:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 13:48:57.847547495 +0000 UTC m=+28.691270101" watchObservedRunningTime="2025-12-11 13:48:57.91200355 +0000 UTC m=+28.755726136" Dec 11 13:48:57 crc kubenswrapper[5050]: I1211 13:48:57.912783 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-q9k57" Dec 11 13:48:57 crc kubenswrapper[5050]: I1211 13:48:57.912882 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-klv95" podStartSLOduration=8.912875664 podStartE2EDuration="8.912875664s" podCreationTimestamp="2025-12-11 13:48:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 13:48:57.911762044 +0000 UTC m=+28.755484640" watchObservedRunningTime="2025-12-11 13:48:57.912875664 +0000 UTC m=+28.756598250" Dec 11 13:48:58 crc kubenswrapper[5050]: I1211 13:48:58.546183 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 13:48:58 crc kubenswrapper[5050]: I1211 13:48:58.546262 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 13:48:58 crc kubenswrapper[5050]: E1211 13:48:58.546384 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 13:48:58 crc kubenswrapper[5050]: E1211 13:48:58.546504 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 13:48:58 crc kubenswrapper[5050]: I1211 13:48:58.831200 5050 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 11 13:48:58 crc kubenswrapper[5050]: I1211 13:48:58.832286 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-q9k57" Dec 11 13:48:58 crc kubenswrapper[5050]: I1211 13:48:58.855169 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-q9k57" Dec 11 13:48:59 crc kubenswrapper[5050]: I1211 13:48:59.545744 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lttxf" Dec 11 13:48:59 crc kubenswrapper[5050]: I1211 13:48:59.545848 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 13:48:59 crc kubenswrapper[5050]: E1211 13:48:59.547401 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lttxf" podUID="2b16e336-8c81-45d1-a527-599b29a7c070" Dec 11 13:48:59 crc kubenswrapper[5050]: E1211 13:48:59.548231 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 13:48:59 crc kubenswrapper[5050]: I1211 13:48:59.668135 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-lttxf"] Dec 11 13:48:59 crc kubenswrapper[5050]: I1211 13:48:59.834676 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lttxf" Dec 11 13:48:59 crc kubenswrapper[5050]: I1211 13:48:59.834729 5050 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 11 13:48:59 crc kubenswrapper[5050]: E1211 13:48:59.834835 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-lttxf" podUID="2b16e336-8c81-45d1-a527-599b29a7c070" Dec 11 13:49:00 crc kubenswrapper[5050]: I1211 13:49:00.379847 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2b16e336-8c81-45d1-a527-599b29a7c070-metrics-certs\") pod \"network-metrics-daemon-lttxf\" (UID: \"2b16e336-8c81-45d1-a527-599b29a7c070\") " pod="openshift-multus/network-metrics-daemon-lttxf" Dec 11 13:49:00 crc kubenswrapper[5050]: E1211 13:49:00.380073 5050 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 11 13:49:00 crc kubenswrapper[5050]: E1211 13:49:00.380210 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2b16e336-8c81-45d1-a527-599b29a7c070-metrics-certs podName:2b16e336-8c81-45d1-a527-599b29a7c070 nodeName:}" failed. No retries permitted until 2025-12-11 13:49:08.380170637 +0000 UTC m=+39.223893353 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2b16e336-8c81-45d1-a527-599b29a7c070-metrics-certs") pod "network-metrics-daemon-lttxf" (UID: "2b16e336-8c81-45d1-a527-599b29a7c070") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 11 13:49:00 crc kubenswrapper[5050]: I1211 13:49:00.545390 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 13:49:00 crc kubenswrapper[5050]: I1211 13:49:00.545462 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 13:49:00 crc kubenswrapper[5050]: E1211 13:49:00.545570 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 13:49:00 crc kubenswrapper[5050]: E1211 13:49:00.545780 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 13:49:00 crc kubenswrapper[5050]: I1211 13:49:00.837795 5050 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 11 13:49:01 crc kubenswrapper[5050]: I1211 13:49:01.545381 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 13:49:01 crc kubenswrapper[5050]: E1211 13:49:01.545531 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 13:49:01 crc kubenswrapper[5050]: I1211 13:49:01.545734 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lttxf" Dec 11 13:49:01 crc kubenswrapper[5050]: E1211 13:49:01.546055 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lttxf" podUID="2b16e336-8c81-45d1-a527-599b29a7c070" Dec 11 13:49:02 crc kubenswrapper[5050]: I1211 13:49:02.545418 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 13:49:02 crc kubenswrapper[5050]: I1211 13:49:02.545498 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 13:49:02 crc kubenswrapper[5050]: E1211 13:49:02.545613 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 13:49:02 crc kubenswrapper[5050]: E1211 13:49:02.545729 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 13:49:02 crc kubenswrapper[5050]: I1211 13:49:02.887245 5050 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Dec 11 13:49:02 crc kubenswrapper[5050]: I1211 13:49:02.887445 5050 kubelet_node_status.go:538] "Fast updating node status as it just became ready" Dec 11 13:49:02 crc kubenswrapper[5050]: I1211 13:49:02.928890 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-bvjdq"] Dec 11 13:49:02 crc kubenswrapper[5050]: I1211 13:49:02.929555 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bvjdq" Dec 11 13:49:02 crc kubenswrapper[5050]: I1211 13:49:02.930043 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-zljtn"] Dec 11 13:49:02 crc kubenswrapper[5050]: I1211 13:49:02.930762 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-zljtn" Dec 11 13:49:02 crc kubenswrapper[5050]: I1211 13:49:02.932312 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vplw7"] Dec 11 13:49:02 crc kubenswrapper[5050]: I1211 13:49:02.932789 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vplw7" Dec 11 13:49:02 crc kubenswrapper[5050]: I1211 13:49:02.947778 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Dec 11 13:49:02 crc kubenswrapper[5050]: I1211 13:49:02.949677 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Dec 11 13:49:02 crc kubenswrapper[5050]: I1211 13:49:02.952415 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gxtkz"] Dec 11 13:49:02 crc kubenswrapper[5050]: I1211 13:49:02.953637 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gxtkz" Dec 11 13:49:02 crc kubenswrapper[5050]: I1211 13:49:02.957827 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Dec 11 13:49:02 crc kubenswrapper[5050]: I1211 13:49:02.957987 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Dec 11 13:49:02 crc kubenswrapper[5050]: I1211 13:49:02.958259 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Dec 11 13:49:02 crc kubenswrapper[5050]: I1211 13:49:02.958367 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-cd66n"] Dec 11 13:49:02 crc kubenswrapper[5050]: I1211 13:49:02.958258 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Dec 11 13:49:02 crc kubenswrapper[5050]: I1211 13:49:02.958657 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Dec 11 13:49:02 crc kubenswrapper[5050]: I1211 13:49:02.958842 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Dec 11 13:49:02 crc kubenswrapper[5050]: I1211 13:49:02.958936 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Dec 11 13:49:02 crc kubenswrapper[5050]: I1211 13:49:02.971975 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-cnp7n"] Dec 11 13:49:02 crc kubenswrapper[5050]: I1211 13:49:02.972467 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-cd66n" Dec 11 13:49:02 crc kubenswrapper[5050]: I1211 13:49:02.973088 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cnp7n" Dec 11 13:49:02 crc kubenswrapper[5050]: I1211 13:49:02.973521 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Dec 11 13:49:02 crc kubenswrapper[5050]: I1211 13:49:02.973772 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Dec 11 13:49:02 crc kubenswrapper[5050]: I1211 13:49:02.974878 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Dec 11 13:49:02 crc kubenswrapper[5050]: I1211 13:49:02.975097 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Dec 11 13:49:02 crc kubenswrapper[5050]: I1211 13:49:02.975183 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-skz5v"] Dec 11 13:49:02 crc kubenswrapper[5050]: I1211 13:49:02.975355 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Dec 11 13:49:02 crc kubenswrapper[5050]: I1211 13:49:02.975606 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-skz5v" Dec 11 13:49:02 crc kubenswrapper[5050]: I1211 13:49:02.976440 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-jmqdr"] Dec 11 13:49:02 crc kubenswrapper[5050]: I1211 13:49:02.977188 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-jmqdr" Dec 11 13:49:02 crc kubenswrapper[5050]: I1211 13:49:02.978063 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Dec 11 13:49:02 crc kubenswrapper[5050]: I1211 13:49:02.978169 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-l4w2d"] Dec 11 13:49:02 crc kubenswrapper[5050]: I1211 13:49:02.978254 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Dec 11 13:49:02 crc kubenswrapper[5050]: I1211 13:49:02.978363 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Dec 11 13:49:02 crc kubenswrapper[5050]: I1211 13:49:02.978480 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Dec 11 13:49:02 crc kubenswrapper[5050]: I1211 13:49:02.978610 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-l4w2d" Dec 11 13:49:02 crc kubenswrapper[5050]: I1211 13:49:02.978686 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Dec 11 13:49:02 crc kubenswrapper[5050]: I1211 13:49:02.978828 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Dec 11 13:49:02 crc kubenswrapper[5050]: I1211 13:49:02.982732 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Dec 11 13:49:02 crc kubenswrapper[5050]: I1211 13:49:02.984276 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-dsrxh"] Dec 11 13:49:02 crc kubenswrapper[5050]: I1211 13:49:02.984879 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-gp9fp"] Dec 11 13:49:02 crc kubenswrapper[5050]: I1211 13:49:02.985232 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-gp9fp" Dec 11 13:49:02 crc kubenswrapper[5050]: I1211 13:49:02.985593 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-dsrxh" Dec 11 13:49:02 crc kubenswrapper[5050]: I1211 13:49:02.985885 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-mv9g5"] Dec 11 13:49:02 crc kubenswrapper[5050]: I1211 13:49:02.986553 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-mv9g5" Dec 11 13:49:02 crc kubenswrapper[5050]: I1211 13:49:02.992418 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-qc97s"] Dec 11 13:49:02 crc kubenswrapper[5050]: I1211 13:49:02.992977 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-qc97s" Dec 11 13:49:02 crc kubenswrapper[5050]: I1211 13:49:02.996057 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Dec 11 13:49:02 crc kubenswrapper[5050]: I1211 13:49:02.996586 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Dec 11 13:49:02 crc kubenswrapper[5050]: I1211 13:49:02.996757 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Dec 11 13:49:02 crc kubenswrapper[5050]: I1211 13:49:02.997385 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Dec 11 13:49:02 crc kubenswrapper[5050]: I1211 13:49:02.997533 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-5zrm6"] Dec 11 13:49:02 crc kubenswrapper[5050]: I1211 13:49:02.997602 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Dec 11 13:49:02 crc kubenswrapper[5050]: I1211 13:49:02.997771 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Dec 11 13:49:02 crc kubenswrapper[5050]: I1211 13:49:02.997922 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Dec 11 13:49:02 crc kubenswrapper[5050]: I1211 13:49:02.998409 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-rstxr"] Dec 11 13:49:02 crc kubenswrapper[5050]: I1211 13:49:02.998835 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Dec 11 13:49:02 crc kubenswrapper[5050]: I1211 13:49:02.998876 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-5zrm6" Dec 11 13:49:02 crc kubenswrapper[5050]: I1211 13:49:02.999134 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Dec 11 13:49:02 crc kubenswrapper[5050]: I1211 13:49:02.999283 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Dec 11 13:49:02 crc kubenswrapper[5050]: I1211 13:49:02.999457 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Dec 11 13:49:02 crc kubenswrapper[5050]: I1211 13:49:02.999628 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-n2j6p"] Dec 11 13:49:02 crc kubenswrapper[5050]: I1211 13:49:02.999705 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Dec 11 13:49:02 crc kubenswrapper[5050]: I1211 13:49:02.998840 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-rstxr" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.000292 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-jmrjt"] Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.000650 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-jmrjt" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.001286 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-n2j6p" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.001417 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-mg97h"] Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.001947 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-mg97h" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.002385 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mrg9f"] Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.002796 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mrg9f" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.005004 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.005156 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.005207 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.005572 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-f6wfx"] Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.005682 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.005845 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.005873 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.006106 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.006166 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.006217 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.006262 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.006349 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.006367 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Dec 
11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.006711 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-krl9b"] Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.006929 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-f6wfx" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.008089 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-9zt5j"] Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.008702 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ckbbn"] Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.009200 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-krl9b" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.009222 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ckbbn" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.009534 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-9zt5j" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.041679 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-dtlb9"] Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.042760 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dfe5cd7c-4c40-4e8a-8d26-f66424569dbe-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-vplw7\" (UID: \"dfe5cd7c-4c40-4e8a-8d26-f66424569dbe\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vplw7" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.042856 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ea884e88-c0df-4212-976a-0d7ce1731fdc-serving-cert\") pod \"apiserver-76f77b778f-cd66n\" (UID: \"ea884e88-c0df-4212-976a-0d7ce1731fdc\") " pod="openshift-apiserver/apiserver-76f77b778f-cd66n" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.042928 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/ea884e88-c0df-4212-976a-0d7ce1731fdc-image-import-ca\") pod \"apiserver-76f77b778f-cd66n\" (UID: \"ea884e88-c0df-4212-976a-0d7ce1731fdc\") " pod="openshift-apiserver/apiserver-76f77b778f-cd66n" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.042983 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ea884e88-c0df-4212-976a-0d7ce1731fdc-audit-dir\") pod \"apiserver-76f77b778f-cd66n\" (UID: \"ea884e88-c0df-4212-976a-0d7ce1731fdc\") " pod="openshift-apiserver/apiserver-76f77b778f-cd66n" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.043040 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/ea884e88-c0df-4212-976a-0d7ce1731fdc-etcd-serving-ca\") pod 
\"apiserver-76f77b778f-cd66n\" (UID: \"ea884e88-c0df-4212-976a-0d7ce1731fdc\") " pod="openshift-apiserver/apiserver-76f77b778f-cd66n" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.043072 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1d89350d-55e9-4ef6-8182-287894b6c14b-serving-cert\") pod \"apiserver-7bbb656c7d-cnp7n\" (UID: \"1d89350d-55e9-4ef6-8182-287894b6c14b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cnp7n" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.043114 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v57tk\" (UniqueName: \"kubernetes.io/projected/3069e571-2923-44a7-ae85-7cc7e64991ef-kube-api-access-v57tk\") pod \"route-controller-manager-6576b87f9c-bvjdq\" (UID: \"3069e571-2923-44a7-ae85-7cc7e64991ef\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bvjdq" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.043150 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/c37f7db2-f363-4ef9-8c5f-3c6d0ed3f75c-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-gxtkz\" (UID: \"c37f7db2-f363-4ef9-8c5f-3c6d0ed3f75c\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gxtkz" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.043222 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3069e571-2923-44a7-ae85-7cc7e64991ef-serving-cert\") pod \"route-controller-manager-6576b87f9c-bvjdq\" (UID: \"3069e571-2923-44a7-ae85-7cc7e64991ef\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bvjdq" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.043279 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/ea884e88-c0df-4212-976a-0d7ce1731fdc-node-pullsecrets\") pod \"apiserver-76f77b778f-cd66n\" (UID: \"ea884e88-c0df-4212-976a-0d7ce1731fdc\") " pod="openshift-apiserver/apiserver-76f77b778f-cd66n" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.043315 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ea884e88-c0df-4212-976a-0d7ce1731fdc-trusted-ca-bundle\") pod \"apiserver-76f77b778f-cd66n\" (UID: \"ea884e88-c0df-4212-976a-0d7ce1731fdc\") " pod="openshift-apiserver/apiserver-76f77b778f-cd66n" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.043346 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1d89350d-55e9-4ef6-8182-287894b6c14b-etcd-client\") pod \"apiserver-7bbb656c7d-cnp7n\" (UID: \"1d89350d-55e9-4ef6-8182-287894b6c14b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cnp7n" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.043381 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/ea884e88-c0df-4212-976a-0d7ce1731fdc-encryption-config\") pod \"apiserver-76f77b778f-cd66n\" (UID: \"ea884e88-c0df-4212-976a-0d7ce1731fdc\") " 
pod="openshift-apiserver/apiserver-76f77b778f-cd66n" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.043429 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1d89350d-55e9-4ef6-8182-287894b6c14b-encryption-config\") pod \"apiserver-7bbb656c7d-cnp7n\" (UID: \"1d89350d-55e9-4ef6-8182-287894b6c14b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cnp7n" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.043506 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1d89350d-55e9-4ef6-8182-287894b6c14b-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-cnp7n\" (UID: \"1d89350d-55e9-4ef6-8182-287894b6c14b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cnp7n" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.043551 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1d89350d-55e9-4ef6-8182-287894b6c14b-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-cnp7n\" (UID: \"1d89350d-55e9-4ef6-8182-287894b6c14b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cnp7n" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.043591 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ea884e88-c0df-4212-976a-0d7ce1731fdc-etcd-client\") pod \"apiserver-76f77b778f-cd66n\" (UID: \"ea884e88-c0df-4212-976a-0d7ce1731fdc\") " pod="openshift-apiserver/apiserver-76f77b778f-cd66n" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.043623 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3069e571-2923-44a7-ae85-7cc7e64991ef-config\") pod \"route-controller-manager-6576b87f9c-bvjdq\" (UID: \"3069e571-2923-44a7-ae85-7cc7e64991ef\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bvjdq" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.043743 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tdwfv\" (UniqueName: \"kubernetes.io/projected/c37f7db2-f363-4ef9-8c5f-3c6d0ed3f75c-kube-api-access-tdwfv\") pod \"cluster-samples-operator-665b6dd947-gxtkz\" (UID: \"c37f7db2-f363-4ef9-8c5f-3c6d0ed3f75c\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gxtkz" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.043789 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ccbcp\" (UniqueName: \"kubernetes.io/projected/1d89350d-55e9-4ef6-8182-287894b6c14b-kube-api-access-ccbcp\") pod \"apiserver-7bbb656c7d-cnp7n\" (UID: \"1d89350d-55e9-4ef6-8182-287894b6c14b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cnp7n" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.043817 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1d89350d-55e9-4ef6-8182-287894b6c14b-audit-dir\") pod \"apiserver-7bbb656c7d-cnp7n\" (UID: \"1d89350d-55e9-4ef6-8182-287894b6c14b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cnp7n" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.043845 
5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45lpd\" (UniqueName: \"kubernetes.io/projected/ea884e88-c0df-4212-976a-0d7ce1731fdc-kube-api-access-45lpd\") pod \"apiserver-76f77b778f-cd66n\" (UID: \"ea884e88-c0df-4212-976a-0d7ce1731fdc\") " pod="openshift-apiserver/apiserver-76f77b778f-cd66n" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.043873 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/465806d7-7b39-4ddf-b098-8bed4c0c5a3a-images\") pod \"machine-api-operator-5694c8668f-zljtn\" (UID: \"465806d7-7b39-4ddf-b098-8bed4c0c5a3a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-zljtn" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.043921 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dfe5cd7c-4c40-4e8a-8d26-f66424569dbe-config\") pod \"openshift-apiserver-operator-796bbdcf4f-vplw7\" (UID: \"dfe5cd7c-4c40-4e8a-8d26-f66424569dbe\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vplw7" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.043991 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/1d89350d-55e9-4ef6-8182-287894b6c14b-audit-policies\") pod \"apiserver-7bbb656c7d-cnp7n\" (UID: \"1d89350d-55e9-4ef6-8182-287894b6c14b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cnp7n" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.048615 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3069e571-2923-44a7-ae85-7cc7e64991ef-client-ca\") pod \"route-controller-manager-6576b87f9c-bvjdq\" (UID: \"3069e571-2923-44a7-ae85-7cc7e64991ef\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bvjdq" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.048677 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/465806d7-7b39-4ddf-b098-8bed4c0c5a3a-config\") pod \"machine-api-operator-5694c8668f-zljtn\" (UID: \"465806d7-7b39-4ddf-b098-8bed4c0c5a3a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-zljtn" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.048715 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/465806d7-7b39-4ddf-b098-8bed4c0c5a3a-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-zljtn\" (UID: \"465806d7-7b39-4ddf-b098-8bed4c0c5a3a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-zljtn" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.048749 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4psgt\" (UniqueName: \"kubernetes.io/projected/dfe5cd7c-4c40-4e8a-8d26-f66424569dbe-kube-api-access-4psgt\") pod \"openshift-apiserver-operator-796bbdcf4f-vplw7\" (UID: \"dfe5cd7c-4c40-4e8a-8d26-f66424569dbe\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vplw7" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.048834 5050 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ea884e88-c0df-4212-976a-0d7ce1731fdc-config\") pod \"apiserver-76f77b778f-cd66n\" (UID: \"ea884e88-c0df-4212-976a-0d7ce1731fdc\") " pod="openshift-apiserver/apiserver-76f77b778f-cd66n" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.048883 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/ea884e88-c0df-4212-976a-0d7ce1731fdc-audit\") pod \"apiserver-76f77b778f-cd66n\" (UID: \"ea884e88-c0df-4212-976a-0d7ce1731fdc\") " pod="openshift-apiserver/apiserver-76f77b778f-cd66n" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.048916 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t8nrf\" (UniqueName: \"kubernetes.io/projected/465806d7-7b39-4ddf-b098-8bed4c0c5a3a-kube-api-access-t8nrf\") pod \"machine-api-operator-5694c8668f-zljtn\" (UID: \"465806d7-7b39-4ddf-b098-8bed4c0c5a3a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-zljtn" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.044847 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-j2zlh"] Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.044997 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-dtlb9" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.045264 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.052980 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-j2zlh" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.053104 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.046793 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.046973 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.047323 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.047555 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.047850 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.048229 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.048300 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.048379 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.048429 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.048005 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.050608 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.050693 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.051250 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.051474 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.051590 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.052083 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.053523 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.053764 5050 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-oauth-apiserver"/"etcd-serving-ca" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.053800 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.054431 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.054853 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.056436 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.057058 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.057240 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.057676 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.057841 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.057982 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.058050 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.057707 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.058266 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.058266 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.058499 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.058609 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.059028 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.059053 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.059230 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.059343 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Dec 11 13:49:03 
crc kubenswrapper[5050]: I1211 13:49:03.059458 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.059641 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.059672 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.059838 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.060312 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-r49rr"] Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.059002 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.058182 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.072100 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.073935 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.074336 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.074824 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.075230 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.075292 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-c6twg"] Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.076081 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.076505 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-c6twg" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.077117 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-r49rr" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.078873 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-bxjjm"] Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.080081 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-bxjjm" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.089999 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r54sd"] Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.094089 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.094760 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.094877 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r54sd" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.094935 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.106731 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-bvjdq"] Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.106837 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.107798 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.108508 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.108590 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9jns7"] Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.111752 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9jns7" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.111865 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-9ttg9"] Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.112737 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-6t4gf"] Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.113237 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-6t4gf" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.112742 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-9ttg9" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.115119 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.116407 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.116674 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.118027 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.120144 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.120994 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-rmlj6"] Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.122521 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-rmlj6" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.126174 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-t75hp"] Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.127085 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-t75hp" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.127398 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-24wkc"] Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.128349 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-24wkc" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.128451 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-tkg2n"] Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.131590 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.135052 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29424345-gxsbc"] Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.135337 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tkg2n" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.135871 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8qmpk"] Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.136094 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29424345-gxsbc" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.136673 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-25p7l"] Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.136809 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8qmpk" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.138403 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gxtkz"] Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.138466 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vplw7"] Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.138484 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-skz5v"] Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.138500 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-zljtn"] Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.138542 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-jmqdr"] Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.138562 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-25p7l" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.138566 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-mv9g5"] Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.139113 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-qc97s"] Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.140309 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-n2j6p"] Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.141803 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-mbbdj"] Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.144211 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-mbbdj" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.145073 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-p9jmx"] Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.147292 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-cnp7n"] Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.147387 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-p9jmx" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.147682 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mrg9f"] Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.149173 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-j2zlh"] Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.150041 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9bwx\" (UniqueName: \"kubernetes.io/projected/d40e12c4-8331-453c-b20b-5bbd5e3c2a9c-kube-api-access-k9bwx\") pod \"controller-manager-879f6c89f-skz5v\" (UID: \"d40e12c4-8331-453c-b20b-5bbd5e3c2a9c\") " pod="openshift-controller-manager/controller-manager-879f6c89f-skz5v" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.150121 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/ea884e88-c0df-4212-976a-0d7ce1731fdc-audit\") pod \"apiserver-76f77b778f-cd66n\" (UID: \"ea884e88-c0df-4212-976a-0d7ce1731fdc\") " pod="openshift-apiserver/apiserver-76f77b778f-cd66n" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.150168 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d40e12c4-8331-453c-b20b-5bbd5e3c2a9c-client-ca\") pod \"controller-manager-879f6c89f-skz5v\" (UID: \"d40e12c4-8331-453c-b20b-5bbd5e3c2a9c\") " pod="openshift-controller-manager/controller-manager-879f6c89f-skz5v" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.150194 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e32c23eb-6cda-4f54-bb1b-d8512e4d30cc-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-ckbbn\" (UID: \"e32c23eb-6cda-4f54-bb1b-d8512e4d30cc\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ckbbn" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.150264 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t8nrf\" (UniqueName: \"kubernetes.io/projected/465806d7-7b39-4ddf-b098-8bed4c0c5a3a-kube-api-access-t8nrf\") pod \"machine-api-operator-5694c8668f-zljtn\" (UID: \"465806d7-7b39-4ddf-b098-8bed4c0c5a3a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-zljtn" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.150293 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/fb05f4f3-f5be-4823-934f-14d5c48b43c1-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-l4w2d\" (UID: \"fb05f4f3-f5be-4823-934f-14d5c48b43c1\") " pod="openshift-authentication/oauth-openshift-558db77b4-l4w2d" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.150328 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-mg97h"] Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.150352 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: 
\"kubernetes.io/secret/fb05f4f3-f5be-4823-934f-14d5c48b43c1-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-l4w2d\" (UID: \"fb05f4f3-f5be-4823-934f-14d5c48b43c1\") " pod="openshift-authentication/oauth-openshift-558db77b4-l4w2d" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.150382 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/f89bbffa-4dbb-4aac-bb26-c146a6037f67-machine-approver-tls\") pod \"machine-approver-56656f9798-dsrxh\" (UID: \"f89bbffa-4dbb-4aac-bb26-c146a6037f67\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-dsrxh" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.150457 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dfe5cd7c-4c40-4e8a-8d26-f66424569dbe-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-vplw7\" (UID: \"dfe5cd7c-4c40-4e8a-8d26-f66424569dbe\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vplw7" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.150511 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ea884e88-c0df-4212-976a-0d7ce1731fdc-serving-cert\") pod \"apiserver-76f77b778f-cd66n\" (UID: \"ea884e88-c0df-4212-976a-0d7ce1731fdc\") " pod="openshift-apiserver/apiserver-76f77b778f-cd66n" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.150542 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9244eaaf-3ab0-426e-a568-aa295b66c246-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-r49rr\" (UID: \"9244eaaf-3ab0-426e-a568-aa295b66c246\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-r49rr" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.150596 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f633d554-794b-4a64-9699-27fbc28a4d7c-service-ca\") pod \"console-f9d7485db-gp9fp\" (UID: \"f633d554-794b-4a64-9699-27fbc28a4d7c\") " pod="openshift-console/console-f9d7485db-gp9fp" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.150629 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/617510ba-d86e-4485-9c02-761a60ec1a90-etcd-ca\") pod \"etcd-operator-b45778765-rstxr\" (UID: \"617510ba-d86e-4485-9c02-761a60ec1a90\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rstxr" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.150684 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/04af6d44-0ead-4b43-8287-b6bdb88c14ea-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-n2j6p\" (UID: \"04af6d44-0ead-4b43-8287-b6bdb88c14ea\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-n2j6p" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.150728 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/81fb21e6-41c1-4e89-b458-5d83efc1eec6-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-jmrjt\" (UID: \"81fb21e6-41c1-4e89-b458-5d83efc1eec6\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-jmrjt" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.150789 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdqpk\" (UniqueName: \"kubernetes.io/projected/617510ba-d86e-4485-9c02-761a60ec1a90-kube-api-access-qdqpk\") pod \"etcd-operator-b45778765-rstxr\" (UID: \"617510ba-d86e-4485-9c02-761a60ec1a90\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rstxr" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.150811 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26b6bd08-d432-4a3b-b739-d575ca32ac6e-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-c6twg\" (UID: \"26b6bd08-d432-4a3b-b739-d575ca32ac6e\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-c6twg" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.150890 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kbjc4\" (UniqueName: \"kubernetes.io/projected/5002875e-cc97-4cd1-a72a-f8611227e58c-kube-api-access-kbjc4\") pod \"dns-operator-744455d44c-5zrm6\" (UID: \"5002875e-cc97-4cd1-a72a-f8611227e58c\") " pod="openshift-dns-operator/dns-operator-744455d44c-5zrm6" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.150950 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/ea884e88-c0df-4212-976a-0d7ce1731fdc-etcd-serving-ca\") pod \"apiserver-76f77b778f-cd66n\" (UID: \"ea884e88-c0df-4212-976a-0d7ce1731fdc\") " pod="openshift-apiserver/apiserver-76f77b778f-cd66n" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.151034 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/ea884e88-c0df-4212-976a-0d7ce1731fdc-image-import-ca\") pod \"apiserver-76f77b778f-cd66n\" (UID: \"ea884e88-c0df-4212-976a-0d7ce1731fdc\") " pod="openshift-apiserver/apiserver-76f77b778f-cd66n" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.151093 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ea884e88-c0df-4212-976a-0d7ce1731fdc-audit-dir\") pod \"apiserver-76f77b778f-cd66n\" (UID: \"ea884e88-c0df-4212-976a-0d7ce1731fdc\") " pod="openshift-apiserver/apiserver-76f77b778f-cd66n" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.151120 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d40e12c4-8331-453c-b20b-5bbd5e3c2a9c-serving-cert\") pod \"controller-manager-879f6c89f-skz5v\" (UID: \"d40e12c4-8331-453c-b20b-5bbd5e3c2a9c\") " pod="openshift-controller-manager/controller-manager-879f6c89f-skz5v" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.151144 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/fb05f4f3-f5be-4823-934f-14d5c48b43c1-v4-0-config-system-service-ca\") pod 
\"oauth-openshift-558db77b4-l4w2d\" (UID: \"fb05f4f3-f5be-4823-934f-14d5c48b43c1\") " pod="openshift-authentication/oauth-openshift-558db77b4-l4w2d" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.151202 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/f633d554-794b-4a64-9699-27fbc28a4d7c-console-serving-cert\") pod \"console-f9d7485db-gp9fp\" (UID: \"f633d554-794b-4a64-9699-27fbc28a4d7c\") " pod="openshift-console/console-f9d7485db-gp9fp" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.151225 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2w8k\" (UniqueName: \"kubernetes.io/projected/f633d554-794b-4a64-9699-27fbc28a4d7c-kube-api-access-l2w8k\") pod \"console-f9d7485db-gp9fp\" (UID: \"f633d554-794b-4a64-9699-27fbc28a4d7c\") " pod="openshift-console/console-f9d7485db-gp9fp" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.151282 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1d89350d-55e9-4ef6-8182-287894b6c14b-serving-cert\") pod \"apiserver-7bbb656c7d-cnp7n\" (UID: \"1d89350d-55e9-4ef6-8182-287894b6c14b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cnp7n" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.151289 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-c6twg"] Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.151308 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/fb05f4f3-f5be-4823-934f-14d5c48b43c1-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-l4w2d\" (UID: \"fb05f4f3-f5be-4823-934f-14d5c48b43c1\") " pod="openshift-authentication/oauth-openshift-558db77b4-l4w2d" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.151367 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v57tk\" (UniqueName: \"kubernetes.io/projected/3069e571-2923-44a7-ae85-7cc7e64991ef-kube-api-access-v57tk\") pod \"route-controller-manager-6576b87f9c-bvjdq\" (UID: \"3069e571-2923-44a7-ae85-7cc7e64991ef\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bvjdq" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.151421 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/c37f7db2-f363-4ef9-8c5f-3c6d0ed3f75c-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-gxtkz\" (UID: \"c37f7db2-f363-4ef9-8c5f-3c6d0ed3f75c\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gxtkz" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.151452 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/fb05f4f3-f5be-4823-934f-14d5c48b43c1-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-l4w2d\" (UID: \"fb05f4f3-f5be-4823-934f-14d5c48b43c1\") " pod="openshift-authentication/oauth-openshift-558db77b4-l4w2d" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.151515 5050 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/e32c23eb-6cda-4f54-bb1b-d8512e4d30cc-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-ckbbn\" (UID: \"e32c23eb-6cda-4f54-bb1b-d8512e4d30cc\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ckbbn" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.151547 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3069e571-2923-44a7-ae85-7cc7e64991ef-serving-cert\") pod \"route-controller-manager-6576b87f9c-bvjdq\" (UID: \"3069e571-2923-44a7-ae85-7cc7e64991ef\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bvjdq" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.151604 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/ea884e88-c0df-4212-976a-0d7ce1731fdc-node-pullsecrets\") pod \"apiserver-76f77b778f-cd66n\" (UID: \"ea884e88-c0df-4212-976a-0d7ce1731fdc\") " pod="openshift-apiserver/apiserver-76f77b778f-cd66n" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.151629 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ea884e88-c0df-4212-976a-0d7ce1731fdc-trusted-ca-bundle\") pod \"apiserver-76f77b778f-cd66n\" (UID: \"ea884e88-c0df-4212-976a-0d7ce1731fdc\") " pod="openshift-apiserver/apiserver-76f77b778f-cd66n" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.151694 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/95cc22f3-bae0-4461-a63e-0adbddd76cbb-images\") pod \"machine-config-operator-74547568cd-j2zlh\" (UID: \"95cc22f3-bae0-4461-a63e-0adbddd76cbb\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-j2zlh" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.151750 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4953940c-67b1-4a85-851f-ad290b9a0d0d-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-mrg9f\" (UID: \"4953940c-67b1-4a85-851f-ad290b9a0d0d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mrg9f" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.151778 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/fb05f4f3-f5be-4823-934f-14d5c48b43c1-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-l4w2d\" (UID: \"fb05f4f3-f5be-4823-934f-14d5c48b43c1\") " pod="openshift-authentication/oauth-openshift-558db77b4-l4w2d" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.151828 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f89bbffa-4dbb-4aac-bb26-c146a6037f67-config\") pod \"machine-approver-56656f9798-dsrxh\" (UID: \"f89bbffa-4dbb-4aac-bb26-c146a6037f67\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-dsrxh" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.151854 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e32c23eb-6cda-4f54-bb1b-d8512e4d30cc-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-ckbbn\" (UID: \"e32c23eb-6cda-4f54-bb1b-d8512e4d30cc\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ckbbn" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.151881 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1d89350d-55e9-4ef6-8182-287894b6c14b-etcd-client\") pod \"apiserver-7bbb656c7d-cnp7n\" (UID: \"1d89350d-55e9-4ef6-8182-287894b6c14b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cnp7n" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.151958 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/ea884e88-c0df-4212-976a-0d7ce1731fdc-encryption-config\") pod \"apiserver-76f77b778f-cd66n\" (UID: \"ea884e88-c0df-4212-976a-0d7ce1731fdc\") " pod="openshift-apiserver/apiserver-76f77b778f-cd66n" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.152046 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/95cc22f3-bae0-4461-a63e-0adbddd76cbb-proxy-tls\") pod \"machine-config-operator-74547568cd-j2zlh\" (UID: \"95cc22f3-bae0-4461-a63e-0adbddd76cbb\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-j2zlh" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.152077 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/26b6bd08-d432-4a3b-b739-d575ca32ac6e-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-c6twg\" (UID: \"26b6bd08-d432-4a3b-b739-d575ca32ac6e\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-c6twg" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.152168 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1d89350d-55e9-4ef6-8182-287894b6c14b-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-cnp7n\" (UID: \"1d89350d-55e9-4ef6-8182-287894b6c14b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cnp7n" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.152195 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1d89350d-55e9-4ef6-8182-287894b6c14b-encryption-config\") pod \"apiserver-7bbb656c7d-cnp7n\" (UID: \"1d89350d-55e9-4ef6-8182-287894b6c14b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cnp7n" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.152251 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/fb05f4f3-f5be-4823-934f-14d5c48b43c1-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-l4w2d\" (UID: \"fb05f4f3-f5be-4823-934f-14d5c48b43c1\") " pod="openshift-authentication/oauth-openshift-558db77b4-l4w2d" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.152286 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: 
\"kubernetes.io/secret/f633d554-794b-4a64-9699-27fbc28a4d7c-console-oauth-config\") pod \"console-f9d7485db-gp9fp\" (UID: \"f633d554-794b-4a64-9699-27fbc28a4d7c\") " pod="openshift-console/console-f9d7485db-gp9fp" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.152335 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f633d554-794b-4a64-9699-27fbc28a4d7c-trusted-ca-bundle\") pod \"console-f9d7485db-gp9fp\" (UID: \"f633d554-794b-4a64-9699-27fbc28a4d7c\") " pod="openshift-console/console-f9d7485db-gp9fp" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.152368 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/112006c5-e3a9-4fbb-813c-f195e98277bc-serving-cert\") pod \"console-operator-58897d9998-mv9g5\" (UID: \"112006c5-e3a9-4fbb-813c-f195e98277bc\") " pod="openshift-console-operator/console-operator-58897d9998-mv9g5" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.152425 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-czz6r\" (UniqueName: \"kubernetes.io/projected/112006c5-e3a9-4fbb-813c-f195e98277bc-kube-api-access-czz6r\") pod \"console-operator-58897d9998-mv9g5\" (UID: \"112006c5-e3a9-4fbb-813c-f195e98277bc\") " pod="openshift-console-operator/console-operator-58897d9998-mv9g5" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.152456 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jj27z\" (UniqueName: \"kubernetes.io/projected/64efd0fc-ec3c-403b-ac98-0546f2affa94-kube-api-access-jj27z\") pod \"openshift-config-operator-7777fb866f-jmqdr\" (UID: \"64efd0fc-ec3c-403b-ac98-0546f2affa94\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-jmqdr" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.152510 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/617510ba-d86e-4485-9c02-761a60ec1a90-config\") pod \"etcd-operator-b45778765-rstxr\" (UID: \"617510ba-d86e-4485-9c02-761a60ec1a90\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rstxr" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.152538 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/5002875e-cc97-4cd1-a72a-f8611227e58c-metrics-tls\") pod \"dns-operator-744455d44c-5zrm6\" (UID: \"5002875e-cc97-4cd1-a72a-f8611227e58c\") " pod="openshift-dns-operator/dns-operator-744455d44c-5zrm6" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.152599 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/fb05f4f3-f5be-4823-934f-14d5c48b43c1-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-l4w2d\" (UID: \"fb05f4f3-f5be-4823-934f-14d5c48b43c1\") " pod="openshift-authentication/oauth-openshift-558db77b4-l4w2d" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.152625 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9dv5f\" (UniqueName: \"kubernetes.io/projected/d9e38eea-202a-4bf1-bb51-1d4a1fc20202-kube-api-access-9dv5f\") pod 
\"downloads-7954f5f757-qc97s\" (UID: \"d9e38eea-202a-4bf1-bb51-1d4a1fc20202\") " pod="openshift-console/downloads-7954f5f757-qc97s" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.152665 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-krl9b"] Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.152687 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvxlj\" (UniqueName: \"kubernetes.io/projected/c6b95f15-c748-43a9-8ca6-2007cd1727e5-kube-api-access-nvxlj\") pod \"control-plane-machine-set-operator-78cbb6b69f-9zt5j\" (UID: \"c6b95f15-c748-43a9-8ca6-2007cd1727e5\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-9zt5j" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.151646 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/ea884e88-c0df-4212-976a-0d7ce1731fdc-audit\") pod \"apiserver-76f77b778f-cd66n\" (UID: \"ea884e88-c0df-4212-976a-0d7ce1731fdc\") " pod="openshift-apiserver/apiserver-76f77b778f-cd66n" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.153533 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/ea884e88-c0df-4212-976a-0d7ce1731fdc-etcd-serving-ca\") pod \"apiserver-76f77b778f-cd66n\" (UID: \"ea884e88-c0df-4212-976a-0d7ce1731fdc\") " pod="openshift-apiserver/apiserver-76f77b778f-cd66n" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.153613 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.154989 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/ea884e88-c0df-4212-976a-0d7ce1731fdc-image-import-ca\") pod \"apiserver-76f77b778f-cd66n\" (UID: \"ea884e88-c0df-4212-976a-0d7ce1731fdc\") " pod="openshift-apiserver/apiserver-76f77b778f-cd66n" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.156215 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ea884e88-c0df-4212-976a-0d7ce1731fdc-audit-dir\") pod \"apiserver-76f77b778f-cd66n\" (UID: \"ea884e88-c0df-4212-976a-0d7ce1731fdc\") " pod="openshift-apiserver/apiserver-76f77b778f-cd66n" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.156431 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1d89350d-55e9-4ef6-8182-287894b6c14b-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-cnp7n\" (UID: \"1d89350d-55e9-4ef6-8182-287894b6c14b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cnp7n" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.157694 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1d89350d-55e9-4ef6-8182-287894b6c14b-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-cnp7n\" (UID: \"1d89350d-55e9-4ef6-8182-287894b6c14b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cnp7n" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.157845 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/dfe5cd7c-4c40-4e8a-8d26-f66424569dbe-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-vplw7\" (UID: \"dfe5cd7c-4c40-4e8a-8d26-f66424569dbe\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vplw7" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.158036 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ea884e88-c0df-4212-976a-0d7ce1731fdc-etcd-client\") pod \"apiserver-76f77b778f-cd66n\" (UID: \"ea884e88-c0df-4212-976a-0d7ce1731fdc\") " pod="openshift-apiserver/apiserver-76f77b778f-cd66n" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.158074 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d33f8d3b-768a-44d3-bdf6-1a885b096055-metrics-tls\") pod \"ingress-operator-5b745b69d9-krl9b\" (UID: \"d33f8d3b-768a-44d3-bdf6-1a885b096055\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-krl9b" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.158108 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qlf7\" (UniqueName: \"kubernetes.io/projected/d33f8d3b-768a-44d3-bdf6-1a885b096055-kube-api-access-2qlf7\") pod \"ingress-operator-5b745b69d9-krl9b\" (UID: \"d33f8d3b-768a-44d3-bdf6-1a885b096055\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-krl9b" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.158140 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/fb05f4f3-f5be-4823-934f-14d5c48b43c1-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-l4w2d\" (UID: \"fb05f4f3-f5be-4823-934f-14d5c48b43c1\") " pod="openshift-authentication/oauth-openshift-558db77b4-l4w2d" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.158198 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/617510ba-d86e-4485-9c02-761a60ec1a90-etcd-service-ca\") pod \"etcd-operator-b45778765-rstxr\" (UID: \"617510ba-d86e-4485-9c02-761a60ec1a90\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rstxr" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.158221 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqsj8\" (UniqueName: \"kubernetes.io/projected/e32c23eb-6cda-4f54-bb1b-d8512e4d30cc-kube-api-access-fqsj8\") pod \"cluster-image-registry-operator-dc59b4c8b-ckbbn\" (UID: \"e32c23eb-6cda-4f54-bb1b-d8512e4d30cc\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ckbbn" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.158248 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d40e12c4-8331-453c-b20b-5bbd5e3c2a9c-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-skz5v\" (UID: \"d40e12c4-8331-453c-b20b-5bbd5e3c2a9c\") " pod="openshift-controller-manager/controller-manager-879f6c89f-skz5v" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.158270 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/ea884e88-c0df-4212-976a-0d7ce1731fdc-trusted-ca-bundle\") pod \"apiserver-76f77b778f-cd66n\" (UID: \"ea884e88-c0df-4212-976a-0d7ce1731fdc\") " pod="openshift-apiserver/apiserver-76f77b778f-cd66n" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.158579 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ea884e88-c0df-4212-976a-0d7ce1731fdc-serving-cert\") pod \"apiserver-76f77b778f-cd66n\" (UID: \"ea884e88-c0df-4212-976a-0d7ce1731fdc\") " pod="openshift-apiserver/apiserver-76f77b778f-cd66n" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.159078 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1d89350d-55e9-4ef6-8182-287894b6c14b-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-cnp7n\" (UID: \"1d89350d-55e9-4ef6-8182-287894b6c14b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cnp7n" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.159688 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/ea884e88-c0df-4212-976a-0d7ce1731fdc-node-pullsecrets\") pod \"apiserver-76f77b778f-cd66n\" (UID: \"ea884e88-c0df-4212-976a-0d7ce1731fdc\") " pod="openshift-apiserver/apiserver-76f77b778f-cd66n" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.160221 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3069e571-2923-44a7-ae85-7cc7e64991ef-config\") pod \"route-controller-manager-6576b87f9c-bvjdq\" (UID: \"3069e571-2923-44a7-ae85-7cc7e64991ef\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bvjdq" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.160832 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r54sd"] Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.161567 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/c37f7db2-f363-4ef9-8c5f-3c6d0ed3f75c-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-gxtkz\" (UID: \"c37f7db2-f363-4ef9-8c5f-3c6d0ed3f75c\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gxtkz" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.162235 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tdwfv\" (UniqueName: \"kubernetes.io/projected/c37f7db2-f363-4ef9-8c5f-3c6d0ed3f75c-kube-api-access-tdwfv\") pod \"cluster-samples-operator-665b6dd947-gxtkz\" (UID: \"c37f7db2-f363-4ef9-8c5f-3c6d0ed3f75c\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gxtkz" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.162296 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4953940c-67b1-4a85-851f-ad290b9a0d0d-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-mrg9f\" (UID: \"4953940c-67b1-4a85-851f-ad290b9a0d0d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mrg9f" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.162437 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/3069e571-2923-44a7-ae85-7cc7e64991ef-serving-cert\") pod \"route-controller-manager-6576b87f9c-bvjdq\" (UID: \"3069e571-2923-44a7-ae85-7cc7e64991ef\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bvjdq" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.162561 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92llc\" (UniqueName: \"kubernetes.io/projected/95cc22f3-bae0-4461-a63e-0adbddd76cbb-kube-api-access-92llc\") pod \"machine-config-operator-74547568cd-j2zlh\" (UID: \"95cc22f3-bae0-4461-a63e-0adbddd76cbb\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-j2zlh" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.162627 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d40e12c4-8331-453c-b20b-5bbd5e3c2a9c-config\") pod \"controller-manager-879f6c89f-skz5v\" (UID: \"d40e12c4-8331-453c-b20b-5bbd5e3c2a9c\") " pod="openshift-controller-manager/controller-manager-879f6c89f-skz5v" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.162807 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-tkg2n"] Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.162894 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/c6b95f15-c748-43a9-8ca6-2007cd1727e5-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-9zt5j\" (UID: \"c6b95f15-c748-43a9-8ca6-2007cd1727e5\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-9zt5j" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.163206 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ccbcp\" (UniqueName: \"kubernetes.io/projected/1d89350d-55e9-4ef6-8182-287894b6c14b-kube-api-access-ccbcp\") pod \"apiserver-7bbb656c7d-cnp7n\" (UID: \"1d89350d-55e9-4ef6-8182-287894b6c14b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cnp7n" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.163326 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ea884e88-c0df-4212-976a-0d7ce1731fdc-etcd-client\") pod \"apiserver-76f77b778f-cd66n\" (UID: \"ea884e88-c0df-4212-976a-0d7ce1731fdc\") " pod="openshift-apiserver/apiserver-76f77b778f-cd66n" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.163514 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/112006c5-e3a9-4fbb-813c-f195e98277bc-config\") pod \"console-operator-58897d9998-mv9g5\" (UID: \"112006c5-e3a9-4fbb-813c-f195e98277bc\") " pod="openshift-console-operator/console-operator-58897d9998-mv9g5" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.163557 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bbl8h\" (UniqueName: \"kubernetes.io/projected/f89bbffa-4dbb-4aac-bb26-c146a6037f67-kube-api-access-bbl8h\") pod \"machine-approver-56656f9798-dsrxh\" (UID: \"f89bbffa-4dbb-4aac-bb26-c146a6037f67\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-dsrxh" Dec 11 13:49:03 crc 
kubenswrapper[5050]: I1211 13:49:03.163934 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1d89350d-55e9-4ef6-8182-287894b6c14b-audit-dir\") pod \"apiserver-7bbb656c7d-cnp7n\" (UID: \"1d89350d-55e9-4ef6-8182-287894b6c14b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cnp7n" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.164023 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-45lpd\" (UniqueName: \"kubernetes.io/projected/ea884e88-c0df-4212-976a-0d7ce1731fdc-kube-api-access-45lpd\") pod \"apiserver-76f77b778f-cd66n\" (UID: \"ea884e88-c0df-4212-976a-0d7ce1731fdc\") " pod="openshift-apiserver/apiserver-76f77b778f-cd66n" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.164178 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4953940c-67b1-4a85-851f-ad290b9a0d0d-config\") pod \"kube-apiserver-operator-766d6c64bb-mrg9f\" (UID: \"4953940c-67b1-4a85-851f-ad290b9a0d0d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mrg9f" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.164228 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/64efd0fc-ec3c-403b-ac98-0546f2affa94-serving-cert\") pod \"openshift-config-operator-7777fb866f-jmqdr\" (UID: \"64efd0fc-ec3c-403b-ac98-0546f2affa94\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-jmqdr" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.164273 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1d89350d-55e9-4ef6-8182-287894b6c14b-serving-cert\") pod \"apiserver-7bbb656c7d-cnp7n\" (UID: \"1d89350d-55e9-4ef6-8182-287894b6c14b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cnp7n" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.164340 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1d89350d-55e9-4ef6-8182-287894b6c14b-audit-dir\") pod \"apiserver-7bbb656c7d-cnp7n\" (UID: \"1d89350d-55e9-4ef6-8182-287894b6c14b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cnp7n" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.164342 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/617510ba-d86e-4485-9c02-761a60ec1a90-serving-cert\") pod \"etcd-operator-b45778765-rstxr\" (UID: \"617510ba-d86e-4485-9c02-761a60ec1a90\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rstxr" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.164503 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/465806d7-7b39-4ddf-b098-8bed4c0c5a3a-images\") pod \"machine-api-operator-5694c8668f-zljtn\" (UID: \"465806d7-7b39-4ddf-b098-8bed4c0c5a3a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-zljtn" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.164631 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dfe5cd7c-4c40-4e8a-8d26-f66424569dbe-config\") pod \"openshift-apiserver-operator-796bbdcf4f-vplw7\" (UID: 
\"dfe5cd7c-4c40-4e8a-8d26-f66424569dbe\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vplw7" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.164701 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/fb05f4f3-f5be-4823-934f-14d5c48b43c1-audit-dir\") pod \"oauth-openshift-558db77b4-l4w2d\" (UID: \"fb05f4f3-f5be-4823-934f-14d5c48b43c1\") " pod="openshift-authentication/oauth-openshift-558db77b4-l4w2d" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.164860 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fb05f4f3-f5be-4823-934f-14d5c48b43c1-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-l4w2d\" (UID: \"fb05f4f3-f5be-4823-934f-14d5c48b43c1\") " pod="openshift-authentication/oauth-openshift-558db77b4-l4w2d" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.165291 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/465806d7-7b39-4ddf-b098-8bed4c0c5a3a-images\") pod \"machine-api-operator-5694c8668f-zljtn\" (UID: \"465806d7-7b39-4ddf-b098-8bed4c0c5a3a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-zljtn" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.165433 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/1d89350d-55e9-4ef6-8182-287894b6c14b-audit-policies\") pod \"apiserver-7bbb656c7d-cnp7n\" (UID: \"1d89350d-55e9-4ef6-8182-287894b6c14b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cnp7n" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.165598 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dfe5cd7c-4c40-4e8a-8d26-f66424569dbe-config\") pod \"openshift-apiserver-operator-796bbdcf4f-vplw7\" (UID: \"dfe5cd7c-4c40-4e8a-8d26-f66424569dbe\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vplw7" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.165499 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfnnk\" (UniqueName: \"kubernetes.io/projected/04af6d44-0ead-4b43-8287-b6bdb88c14ea-kube-api-access-rfnnk\") pod \"openshift-controller-manager-operator-756b6f6bc6-n2j6p\" (UID: \"04af6d44-0ead-4b43-8287-b6bdb88c14ea\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-n2j6p" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.165766 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/fb05f4f3-f5be-4823-934f-14d5c48b43c1-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-l4w2d\" (UID: \"fb05f4f3-f5be-4823-934f-14d5c48b43c1\") " pod="openshift-authentication/oauth-openshift-558db77b4-l4w2d" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.165864 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f89bbffa-4dbb-4aac-bb26-c146a6037f67-auth-proxy-config\") pod \"machine-approver-56656f9798-dsrxh\" (UID: 
\"f89bbffa-4dbb-4aac-bb26-c146a6037f67\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-dsrxh" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.165919 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f633d554-794b-4a64-9699-27fbc28a4d7c-oauth-serving-cert\") pod \"console-f9d7485db-gp9fp\" (UID: \"f633d554-794b-4a64-9699-27fbc28a4d7c\") " pod="openshift-console/console-f9d7485db-gp9fp" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.165981 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/1d89350d-55e9-4ef6-8182-287894b6c14b-audit-policies\") pod \"apiserver-7bbb656c7d-cnp7n\" (UID: \"1d89350d-55e9-4ef6-8182-287894b6c14b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cnp7n" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.166000 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3069e571-2923-44a7-ae85-7cc7e64991ef-client-ca\") pod \"route-controller-manager-6576b87f9c-bvjdq\" (UID: \"3069e571-2923-44a7-ae85-7cc7e64991ef\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bvjdq" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.166052 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b94wm\" (UniqueName: \"kubernetes.io/projected/534031e8-875b-4fd2-91cf-bc969db66c22-kube-api-access-b94wm\") pod \"migrator-59844c95c7-mg97h\" (UID: \"534031e8-875b-4fd2-91cf-bc969db66c22\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-mg97h" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.166144 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/95cc22f3-bae0-4461-a63e-0adbddd76cbb-auth-proxy-config\") pod \"machine-config-operator-74547568cd-j2zlh\" (UID: \"95cc22f3-bae0-4461-a63e-0adbddd76cbb\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-j2zlh" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.166223 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/fb05f4f3-f5be-4823-934f-14d5c48b43c1-audit-policies\") pod \"oauth-openshift-558db77b4-l4w2d\" (UID: \"fb05f4f3-f5be-4823-934f-14d5c48b43c1\") " pod="openshift-authentication/oauth-openshift-558db77b4-l4w2d" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.166265 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/465806d7-7b39-4ddf-b098-8bed4c0c5a3a-config\") pod \"machine-api-operator-5694c8668f-zljtn\" (UID: \"465806d7-7b39-4ddf-b098-8bed4c0c5a3a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-zljtn" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.166326 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/617510ba-d86e-4485-9c02-761a60ec1a90-etcd-client\") pod \"etcd-operator-b45778765-rstxr\" (UID: \"617510ba-d86e-4485-9c02-761a60ec1a90\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rstxr" Dec 11 13:49:03 crc kubenswrapper[5050]: 
I1211 13:49:03.166358 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/26b6bd08-d432-4a3b-b739-d575ca32ac6e-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-c6twg\" (UID: \"26b6bd08-d432-4a3b-b739-d575ca32ac6e\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-c6twg" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.166663 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/465806d7-7b39-4ddf-b098-8bed4c0c5a3a-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-zljtn\" (UID: \"465806d7-7b39-4ddf-b098-8bed4c0c5a3a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-zljtn" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.166992 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4psgt\" (UniqueName: \"kubernetes.io/projected/dfe5cd7c-4c40-4e8a-8d26-f66424569dbe-kube-api-access-4psgt\") pod \"openshift-apiserver-operator-796bbdcf4f-vplw7\" (UID: \"dfe5cd7c-4c40-4e8a-8d26-f66424569dbe\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vplw7" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.167072 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/112006c5-e3a9-4fbb-813c-f195e98277bc-trusted-ca\") pod \"console-operator-58897d9998-mv9g5\" (UID: \"112006c5-e3a9-4fbb-813c-f195e98277bc\") " pod="openshift-console-operator/console-operator-58897d9998-mv9g5" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.167083 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/465806d7-7b39-4ddf-b098-8bed4c0c5a3a-config\") pod \"machine-api-operator-5694c8668f-zljtn\" (UID: \"465806d7-7b39-4ddf-b098-8bed4c0c5a3a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-zljtn" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.167127 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/04af6d44-0ead-4b43-8287-b6bdb88c14ea-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-n2j6p\" (UID: \"04af6d44-0ead-4b43-8287-b6bdb88c14ea\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-n2j6p" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.167154 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9244eaaf-3ab0-426e-a568-aa295b66c246-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-r49rr\" (UID: \"9244eaaf-3ab0-426e-a568-aa295b66c246\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-r49rr" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.167174 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81fb21e6-41c1-4e89-b458-5d83efc1eec6-config\") pod \"kube-controller-manager-operator-78b949d7b-jmrjt\" (UID: \"81fb21e6-41c1-4e89-b458-5d83efc1eec6\") " 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-jmrjt" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.167200 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d33f8d3b-768a-44d3-bdf6-1a885b096055-bound-sa-token\") pod \"ingress-operator-5b745b69d9-krl9b\" (UID: \"d33f8d3b-768a-44d3-bdf6-1a885b096055\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-krl9b" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.167228 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f633d554-794b-4a64-9699-27fbc28a4d7c-console-config\") pod \"console-f9d7485db-gp9fp\" (UID: \"f633d554-794b-4a64-9699-27fbc28a4d7c\") " pod="openshift-console/console-f9d7485db-gp9fp" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.166695 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3069e571-2923-44a7-ae85-7cc7e64991ef-client-ca\") pod \"route-controller-manager-6576b87f9c-bvjdq\" (UID: \"3069e571-2923-44a7-ae85-7cc7e64991ef\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bvjdq" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.167257 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d33f8d3b-768a-44d3-bdf6-1a885b096055-trusted-ca\") pod \"ingress-operator-5b745b69d9-krl9b\" (UID: \"d33f8d3b-768a-44d3-bdf6-1a885b096055\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-krl9b" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.167347 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/81fb21e6-41c1-4e89-b458-5d83efc1eec6-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-jmrjt\" (UID: \"81fb21e6-41c1-4e89-b458-5d83efc1eec6\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-jmrjt" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.167373 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/64efd0fc-ec3c-403b-ac98-0546f2affa94-available-featuregates\") pod \"openshift-config-operator-7777fb866f-jmqdr\" (UID: \"64efd0fc-ec3c-403b-ac98-0546f2affa94\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-jmqdr" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.167456 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ea884e88-c0df-4212-976a-0d7ce1731fdc-config\") pod \"apiserver-76f77b778f-cd66n\" (UID: \"ea884e88-c0df-4212-976a-0d7ce1731fdc\") " pod="openshift-apiserver/apiserver-76f77b778f-cd66n" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.167514 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7dtk\" (UniqueName: \"kubernetes.io/projected/9244eaaf-3ab0-426e-a568-aa295b66c246-kube-api-access-j7dtk\") pod \"kube-storage-version-migrator-operator-b67b599dd-r49rr\" (UID: \"9244eaaf-3ab0-426e-a568-aa295b66c246\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-r49rr" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.167545 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmdlb\" (UniqueName: \"kubernetes.io/projected/fb05f4f3-f5be-4823-934f-14d5c48b43c1-kube-api-access-kmdlb\") pod \"oauth-openshift-558db77b4-l4w2d\" (UID: \"fb05f4f3-f5be-4823-934f-14d5c48b43c1\") " pod="openshift-authentication/oauth-openshift-558db77b4-l4w2d" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.168563 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-gp9fp"] Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.170370 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3069e571-2923-44a7-ae85-7cc7e64991ef-config\") pod \"route-controller-manager-6576b87f9c-bvjdq\" (UID: \"3069e571-2923-44a7-ae85-7cc7e64991ef\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bvjdq" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.172152 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ea884e88-c0df-4212-976a-0d7ce1731fdc-config\") pod \"apiserver-76f77b778f-cd66n\" (UID: \"ea884e88-c0df-4212-976a-0d7ce1731fdc\") " pod="openshift-apiserver/apiserver-76f77b778f-cd66n" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.172500 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.178299 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-cd66n"] Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.178352 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-l4w2d"] Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.184189 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-rstxr"] Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.184274 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-r49rr"] Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.184365 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-vftb9"] Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.185899 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-vftb9" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.187069 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1d89350d-55e9-4ef6-8182-287894b6c14b-etcd-client\") pod \"apiserver-7bbb656c7d-cnp7n\" (UID: \"1d89350d-55e9-4ef6-8182-287894b6c14b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cnp7n" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.190466 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1d89350d-55e9-4ef6-8182-287894b6c14b-encryption-config\") pod \"apiserver-7bbb656c7d-cnp7n\" (UID: \"1d89350d-55e9-4ef6-8182-287894b6c14b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cnp7n" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.191077 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/465806d7-7b39-4ddf-b098-8bed4c0c5a3a-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-zljtn\" (UID: \"465806d7-7b39-4ddf-b098-8bed4c0c5a3a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-zljtn" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.194774 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-bxjjm"] Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.196786 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-6t4gf"] Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.207748 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.207764 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/ea884e88-c0df-4212-976a-0d7ce1731fdc-encryption-config\") pod \"apiserver-76f77b778f-cd66n\" (UID: \"ea884e88-c0df-4212-976a-0d7ce1731fdc\") " pod="openshift-apiserver/apiserver-76f77b778f-cd66n" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.209079 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-pqplb"] Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.210417 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-pqplb" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.210818 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29424345-gxsbc"] Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.211741 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-9ttg9"] Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.211954 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.213162 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9jns7"] Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.213670 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-9zt5j"] Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.214721 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-5zrm6"] Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.215848 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-25p7l"] Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.217223 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-jmrjt"] Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.218201 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-t75hp"] Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.219187 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ckbbn"] Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.220170 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-24wkc"] Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.221179 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-f6wfx"] Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.222270 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-mbbdj"] Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.223180 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-pqplb"] Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.224162 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8qmpk"] Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.225178 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-rmlj6"] Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.232002 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.252305 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Dec 11 13:49:03 crc 
kubenswrapper[5050]: I1211 13:49:03.268192 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/fb05f4f3-f5be-4823-934f-14d5c48b43c1-audit-dir\") pod \"oauth-openshift-558db77b4-l4w2d\" (UID: \"fb05f4f3-f5be-4823-934f-14d5c48b43c1\") " pod="openshift-authentication/oauth-openshift-558db77b4-l4w2d" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.268230 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fb05f4f3-f5be-4823-934f-14d5c48b43c1-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-l4w2d\" (UID: \"fb05f4f3-f5be-4823-934f-14d5c48b43c1\") " pod="openshift-authentication/oauth-openshift-558db77b4-l4w2d" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.268257 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rfnnk\" (UniqueName: \"kubernetes.io/projected/04af6d44-0ead-4b43-8287-b6bdb88c14ea-kube-api-access-rfnnk\") pod \"openshift-controller-manager-operator-756b6f6bc6-n2j6p\" (UID: \"04af6d44-0ead-4b43-8287-b6bdb88c14ea\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-n2j6p" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.268274 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/fb05f4f3-f5be-4823-934f-14d5c48b43c1-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-l4w2d\" (UID: \"fb05f4f3-f5be-4823-934f-14d5c48b43c1\") " pod="openshift-authentication/oauth-openshift-558db77b4-l4w2d" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.268294 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f89bbffa-4dbb-4aac-bb26-c146a6037f67-auth-proxy-config\") pod \"machine-approver-56656f9798-dsrxh\" (UID: \"f89bbffa-4dbb-4aac-bb26-c146a6037f67\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-dsrxh" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.268313 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f633d554-794b-4a64-9699-27fbc28a4d7c-oauth-serving-cert\") pod \"console-f9d7485db-gp9fp\" (UID: \"f633d554-794b-4a64-9699-27fbc28a4d7c\") " pod="openshift-console/console-f9d7485db-gp9fp" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.268345 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b94wm\" (UniqueName: \"kubernetes.io/projected/534031e8-875b-4fd2-91cf-bc969db66c22-kube-api-access-b94wm\") pod \"migrator-59844c95c7-mg97h\" (UID: \"534031e8-875b-4fd2-91cf-bc969db66c22\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-mg97h" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.268345 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/fb05f4f3-f5be-4823-934f-14d5c48b43c1-audit-dir\") pod \"oauth-openshift-558db77b4-l4w2d\" (UID: \"fb05f4f3-f5be-4823-934f-14d5c48b43c1\") " pod="openshift-authentication/oauth-openshift-558db77b4-l4w2d" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.268367 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/95cc22f3-bae0-4461-a63e-0adbddd76cbb-auth-proxy-config\") pod \"machine-config-operator-74547568cd-j2zlh\" (UID: \"95cc22f3-bae0-4461-a63e-0adbddd76cbb\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-j2zlh" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.268388 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/fb05f4f3-f5be-4823-934f-14d5c48b43c1-audit-policies\") pod \"oauth-openshift-558db77b4-l4w2d\" (UID: \"fb05f4f3-f5be-4823-934f-14d5c48b43c1\") " pod="openshift-authentication/oauth-openshift-558db77b4-l4w2d" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.268407 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/617510ba-d86e-4485-9c02-761a60ec1a90-etcd-client\") pod \"etcd-operator-b45778765-rstxr\" (UID: \"617510ba-d86e-4485-9c02-761a60ec1a90\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rstxr" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.268425 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/26b6bd08-d432-4a3b-b739-d575ca32ac6e-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-c6twg\" (UID: \"26b6bd08-d432-4a3b-b739-d575ca32ac6e\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-c6twg" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.268451 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/112006c5-e3a9-4fbb-813c-f195e98277bc-trusted-ca\") pod \"console-operator-58897d9998-mv9g5\" (UID: \"112006c5-e3a9-4fbb-813c-f195e98277bc\") " pod="openshift-console-operator/console-operator-58897d9998-mv9g5" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.268478 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/04af6d44-0ead-4b43-8287-b6bdb88c14ea-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-n2j6p\" (UID: \"04af6d44-0ead-4b43-8287-b6bdb88c14ea\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-n2j6p" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.268495 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9244eaaf-3ab0-426e-a568-aa295b66c246-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-r49rr\" (UID: \"9244eaaf-3ab0-426e-a568-aa295b66c246\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-r49rr" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.268514 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81fb21e6-41c1-4e89-b458-5d83efc1eec6-config\") pod \"kube-controller-manager-operator-78b949d7b-jmrjt\" (UID: \"81fb21e6-41c1-4e89-b458-5d83efc1eec6\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-jmrjt" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.268531 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/d33f8d3b-768a-44d3-bdf6-1a885b096055-bound-sa-token\") pod \"ingress-operator-5b745b69d9-krl9b\" (UID: \"d33f8d3b-768a-44d3-bdf6-1a885b096055\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-krl9b" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.268548 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f633d554-794b-4a64-9699-27fbc28a4d7c-console-config\") pod \"console-f9d7485db-gp9fp\" (UID: \"f633d554-794b-4a64-9699-27fbc28a4d7c\") " pod="openshift-console/console-f9d7485db-gp9fp" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.268566 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d33f8d3b-768a-44d3-bdf6-1a885b096055-trusted-ca\") pod \"ingress-operator-5b745b69d9-krl9b\" (UID: \"d33f8d3b-768a-44d3-bdf6-1a885b096055\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-krl9b" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.268585 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/81fb21e6-41c1-4e89-b458-5d83efc1eec6-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-jmrjt\" (UID: \"81fb21e6-41c1-4e89-b458-5d83efc1eec6\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-jmrjt" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.268606 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/64efd0fc-ec3c-403b-ac98-0546f2affa94-available-featuregates\") pod \"openshift-config-operator-7777fb866f-jmqdr\" (UID: \"64efd0fc-ec3c-403b-ac98-0546f2affa94\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-jmqdr" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.269185 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j7dtk\" (UniqueName: \"kubernetes.io/projected/9244eaaf-3ab0-426e-a568-aa295b66c246-kube-api-access-j7dtk\") pod \"kube-storage-version-migrator-operator-b67b599dd-r49rr\" (UID: \"9244eaaf-3ab0-426e-a568-aa295b66c246\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-r49rr" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.269220 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kmdlb\" (UniqueName: \"kubernetes.io/projected/fb05f4f3-f5be-4823-934f-14d5c48b43c1-kube-api-access-kmdlb\") pod \"oauth-openshift-558db77b4-l4w2d\" (UID: \"fb05f4f3-f5be-4823-934f-14d5c48b43c1\") " pod="openshift-authentication/oauth-openshift-558db77b4-l4w2d" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.269241 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k9bwx\" (UniqueName: \"kubernetes.io/projected/d40e12c4-8331-453c-b20b-5bbd5e3c2a9c-kube-api-access-k9bwx\") pod \"controller-manager-879f6c89f-skz5v\" (UID: \"d40e12c4-8331-453c-b20b-5bbd5e3c2a9c\") " pod="openshift-controller-manager/controller-manager-879f6c89f-skz5v" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.269297 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d40e12c4-8331-453c-b20b-5bbd5e3c2a9c-client-ca\") pod 
\"controller-manager-879f6c89f-skz5v\" (UID: \"d40e12c4-8331-453c-b20b-5bbd5e3c2a9c\") " pod="openshift-controller-manager/controller-manager-879f6c89f-skz5v" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.269316 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e32c23eb-6cda-4f54-bb1b-d8512e4d30cc-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-ckbbn\" (UID: \"e32c23eb-6cda-4f54-bb1b-d8512e4d30cc\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ckbbn" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.269350 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/fb05f4f3-f5be-4823-934f-14d5c48b43c1-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-l4w2d\" (UID: \"fb05f4f3-f5be-4823-934f-14d5c48b43c1\") " pod="openshift-authentication/oauth-openshift-558db77b4-l4w2d" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.269389 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/fb05f4f3-f5be-4823-934f-14d5c48b43c1-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-l4w2d\" (UID: \"fb05f4f3-f5be-4823-934f-14d5c48b43c1\") " pod="openshift-authentication/oauth-openshift-558db77b4-l4w2d" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.269408 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/f89bbffa-4dbb-4aac-bb26-c146a6037f67-machine-approver-tls\") pod \"machine-approver-56656f9798-dsrxh\" (UID: \"f89bbffa-4dbb-4aac-bb26-c146a6037f67\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-dsrxh" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.269427 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9244eaaf-3ab0-426e-a568-aa295b66c246-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-r49rr\" (UID: \"9244eaaf-3ab0-426e-a568-aa295b66c246\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-r49rr" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.269445 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f633d554-794b-4a64-9699-27fbc28a4d7c-service-ca\") pod \"console-f9d7485db-gp9fp\" (UID: \"f633d554-794b-4a64-9699-27fbc28a4d7c\") " pod="openshift-console/console-f9d7485db-gp9fp" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.269461 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/617510ba-d86e-4485-9c02-761a60ec1a90-etcd-ca\") pod \"etcd-operator-b45778765-rstxr\" (UID: \"617510ba-d86e-4485-9c02-761a60ec1a90\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rstxr" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.269479 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/04af6d44-0ead-4b43-8287-b6bdb88c14ea-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-n2j6p\" (UID: \"04af6d44-0ead-4b43-8287-b6bdb88c14ea\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-n2j6p" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.269498 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/81fb21e6-41c1-4e89-b458-5d83efc1eec6-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-jmrjt\" (UID: \"81fb21e6-41c1-4e89-b458-5d83efc1eec6\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-jmrjt" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.269553 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qdqpk\" (UniqueName: \"kubernetes.io/projected/617510ba-d86e-4485-9c02-761a60ec1a90-kube-api-access-qdqpk\") pod \"etcd-operator-b45778765-rstxr\" (UID: \"617510ba-d86e-4485-9c02-761a60ec1a90\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rstxr" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.269572 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26b6bd08-d432-4a3b-b739-d575ca32ac6e-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-c6twg\" (UID: \"26b6bd08-d432-4a3b-b739-d575ca32ac6e\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-c6twg" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.269597 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kbjc4\" (UniqueName: \"kubernetes.io/projected/5002875e-cc97-4cd1-a72a-f8611227e58c-kube-api-access-kbjc4\") pod \"dns-operator-744455d44c-5zrm6\" (UID: \"5002875e-cc97-4cd1-a72a-f8611227e58c\") " pod="openshift-dns-operator/dns-operator-744455d44c-5zrm6" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.269615 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d40e12c4-8331-453c-b20b-5bbd5e3c2a9c-serving-cert\") pod \"controller-manager-879f6c89f-skz5v\" (UID: \"d40e12c4-8331-453c-b20b-5bbd5e3c2a9c\") " pod="openshift-controller-manager/controller-manager-879f6c89f-skz5v" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.269635 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/fb05f4f3-f5be-4823-934f-14d5c48b43c1-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-l4w2d\" (UID: \"fb05f4f3-f5be-4823-934f-14d5c48b43c1\") " pod="openshift-authentication/oauth-openshift-558db77b4-l4w2d" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.270099 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f89bbffa-4dbb-4aac-bb26-c146a6037f67-auth-proxy-config\") pod \"machine-approver-56656f9798-dsrxh\" (UID: \"f89bbffa-4dbb-4aac-bb26-c146a6037f67\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-dsrxh" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.270176 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f633d554-794b-4a64-9699-27fbc28a4d7c-console-config\") pod \"console-f9d7485db-gp9fp\" (UID: \"f633d554-794b-4a64-9699-27fbc28a4d7c\") " pod="openshift-console/console-f9d7485db-gp9fp" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.270245 5050 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f633d554-794b-4a64-9699-27fbc28a4d7c-oauth-serving-cert\") pod \"console-f9d7485db-gp9fp\" (UID: \"f633d554-794b-4a64-9699-27fbc28a4d7c\") " pod="openshift-console/console-f9d7485db-gp9fp" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.270256 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/64efd0fc-ec3c-403b-ac98-0546f2affa94-available-featuregates\") pod \"openshift-config-operator-7777fb866f-jmqdr\" (UID: \"64efd0fc-ec3c-403b-ac98-0546f2affa94\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-jmqdr" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.270577 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/112006c5-e3a9-4fbb-813c-f195e98277bc-trusted-ca\") pod \"console-operator-58897d9998-mv9g5\" (UID: \"112006c5-e3a9-4fbb-813c-f195e98277bc\") " pod="openshift-console-operator/console-operator-58897d9998-mv9g5" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.270948 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f633d554-794b-4a64-9699-27fbc28a4d7c-service-ca\") pod \"console-f9d7485db-gp9fp\" (UID: \"f633d554-794b-4a64-9699-27fbc28a4d7c\") " pod="openshift-console/console-f9d7485db-gp9fp" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.271317 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/f633d554-794b-4a64-9699-27fbc28a4d7c-console-serving-cert\") pod \"console-f9d7485db-gp9fp\" (UID: \"f633d554-794b-4a64-9699-27fbc28a4d7c\") " pod="openshift-console/console-f9d7485db-gp9fp" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.272142 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l2w8k\" (UniqueName: \"kubernetes.io/projected/f633d554-794b-4a64-9699-27fbc28a4d7c-kube-api-access-l2w8k\") pod \"console-f9d7485db-gp9fp\" (UID: \"f633d554-794b-4a64-9699-27fbc28a4d7c\") " pod="openshift-console/console-f9d7485db-gp9fp" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.272283 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/fb05f4f3-f5be-4823-934f-14d5c48b43c1-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-l4w2d\" (UID: \"fb05f4f3-f5be-4823-934f-14d5c48b43c1\") " pod="openshift-authentication/oauth-openshift-558db77b4-l4w2d" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.272323 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/fb05f4f3-f5be-4823-934f-14d5c48b43c1-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-l4w2d\" (UID: \"fb05f4f3-f5be-4823-934f-14d5c48b43c1\") " pod="openshift-authentication/oauth-openshift-558db77b4-l4w2d" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.272346 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/e32c23eb-6cda-4f54-bb1b-d8512e4d30cc-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-ckbbn\" 
(UID: \"e32c23eb-6cda-4f54-bb1b-d8512e4d30cc\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ckbbn" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.272650 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.272817 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d40e12c4-8331-453c-b20b-5bbd5e3c2a9c-client-ca\") pod \"controller-manager-879f6c89f-skz5v\" (UID: \"d40e12c4-8331-453c-b20b-5bbd5e3c2a9c\") " pod="openshift-controller-manager/controller-manager-879f6c89f-skz5v" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.272841 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/617510ba-d86e-4485-9c02-761a60ec1a90-etcd-client\") pod \"etcd-operator-b45778765-rstxr\" (UID: \"617510ba-d86e-4485-9c02-761a60ec1a90\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rstxr" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.273033 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/fb05f4f3-f5be-4823-934f-14d5c48b43c1-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-l4w2d\" (UID: \"fb05f4f3-f5be-4823-934f-14d5c48b43c1\") " pod="openshift-authentication/oauth-openshift-558db77b4-l4w2d" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.273144 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81fb21e6-41c1-4e89-b458-5d83efc1eec6-config\") pod \"kube-controller-manager-operator-78b949d7b-jmrjt\" (UID: \"81fb21e6-41c1-4e89-b458-5d83efc1eec6\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-jmrjt" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.273200 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/95cc22f3-bae0-4461-a63e-0adbddd76cbb-auth-proxy-config\") pod \"machine-config-operator-74547568cd-j2zlh\" (UID: \"95cc22f3-bae0-4461-a63e-0adbddd76cbb\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-j2zlh" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.273254 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/fb05f4f3-f5be-4823-934f-14d5c48b43c1-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-l4w2d\" (UID: \"fb05f4f3-f5be-4823-934f-14d5c48b43c1\") " pod="openshift-authentication/oauth-openshift-558db77b4-l4w2d" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.273378 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/04af6d44-0ead-4b43-8287-b6bdb88c14ea-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-n2j6p\" (UID: \"04af6d44-0ead-4b43-8287-b6bdb88c14ea\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-n2j6p" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.273787 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/617510ba-d86e-4485-9c02-761a60ec1a90-etcd-ca\") pod 
\"etcd-operator-b45778765-rstxr\" (UID: \"617510ba-d86e-4485-9c02-761a60ec1a90\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rstxr" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.273827 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/04af6d44-0ead-4b43-8287-b6bdb88c14ea-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-n2j6p\" (UID: \"04af6d44-0ead-4b43-8287-b6bdb88c14ea\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-n2j6p" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.273800 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/fb05f4f3-f5be-4823-934f-14d5c48b43c1-audit-policies\") pod \"oauth-openshift-558db77b4-l4w2d\" (UID: \"fb05f4f3-f5be-4823-934f-14d5c48b43c1\") " pod="openshift-authentication/oauth-openshift-558db77b4-l4w2d" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.273904 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/95cc22f3-bae0-4461-a63e-0adbddd76cbb-images\") pod \"machine-config-operator-74547568cd-j2zlh\" (UID: \"95cc22f3-bae0-4461-a63e-0adbddd76cbb\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-j2zlh" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.273925 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4953940c-67b1-4a85-851f-ad290b9a0d0d-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-mrg9f\" (UID: \"4953940c-67b1-4a85-851f-ad290b9a0d0d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mrg9f" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.273945 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/fb05f4f3-f5be-4823-934f-14d5c48b43c1-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-l4w2d\" (UID: \"fb05f4f3-f5be-4823-934f-14d5c48b43c1\") " pod="openshift-authentication/oauth-openshift-558db77b4-l4w2d" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.274308 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f89bbffa-4dbb-4aac-bb26-c146a6037f67-config\") pod \"machine-approver-56656f9798-dsrxh\" (UID: \"f89bbffa-4dbb-4aac-bb26-c146a6037f67\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-dsrxh" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.274340 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e32c23eb-6cda-4f54-bb1b-d8512e4d30cc-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-ckbbn\" (UID: \"e32c23eb-6cda-4f54-bb1b-d8512e4d30cc\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ckbbn" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.274362 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/95cc22f3-bae0-4461-a63e-0adbddd76cbb-proxy-tls\") pod \"machine-config-operator-74547568cd-j2zlh\" (UID: \"95cc22f3-bae0-4461-a63e-0adbddd76cbb\") " 
pod="openshift-machine-config-operator/machine-config-operator-74547568cd-j2zlh" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.274384 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/26b6bd08-d432-4a3b-b739-d575ca32ac6e-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-c6twg\" (UID: \"26b6bd08-d432-4a3b-b739-d575ca32ac6e\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-c6twg" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.274409 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/fb05f4f3-f5be-4823-934f-14d5c48b43c1-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-l4w2d\" (UID: \"fb05f4f3-f5be-4823-934f-14d5c48b43c1\") " pod="openshift-authentication/oauth-openshift-558db77b4-l4w2d" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.274430 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f633d554-794b-4a64-9699-27fbc28a4d7c-console-oauth-config\") pod \"console-f9d7485db-gp9fp\" (UID: \"f633d554-794b-4a64-9699-27fbc28a4d7c\") " pod="openshift-console/console-f9d7485db-gp9fp" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.274452 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f633d554-794b-4a64-9699-27fbc28a4d7c-trusted-ca-bundle\") pod \"console-f9d7485db-gp9fp\" (UID: \"f633d554-794b-4a64-9699-27fbc28a4d7c\") " pod="openshift-console/console-f9d7485db-gp9fp" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.274815 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/112006c5-e3a9-4fbb-813c-f195e98277bc-serving-cert\") pod \"console-operator-58897d9998-mv9g5\" (UID: \"112006c5-e3a9-4fbb-813c-f195e98277bc\") " pod="openshift-console-operator/console-operator-58897d9998-mv9g5" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.274836 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-czz6r\" (UniqueName: \"kubernetes.io/projected/112006c5-e3a9-4fbb-813c-f195e98277bc-kube-api-access-czz6r\") pod \"console-operator-58897d9998-mv9g5\" (UID: \"112006c5-e3a9-4fbb-813c-f195e98277bc\") " pod="openshift-console-operator/console-operator-58897d9998-mv9g5" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.274855 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f89bbffa-4dbb-4aac-bb26-c146a6037f67-config\") pod \"machine-approver-56656f9798-dsrxh\" (UID: \"f89bbffa-4dbb-4aac-bb26-c146a6037f67\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-dsrxh" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.274863 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jj27z\" (UniqueName: \"kubernetes.io/projected/64efd0fc-ec3c-403b-ac98-0546f2affa94-kube-api-access-jj27z\") pod \"openshift-config-operator-7777fb866f-jmqdr\" (UID: \"64efd0fc-ec3c-403b-ac98-0546f2affa94\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-jmqdr" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.275075 5050 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/617510ba-d86e-4485-9c02-761a60ec1a90-config\") pod \"etcd-operator-b45778765-rstxr\" (UID: \"617510ba-d86e-4485-9c02-761a60ec1a90\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rstxr" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.275122 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/5002875e-cc97-4cd1-a72a-f8611227e58c-metrics-tls\") pod \"dns-operator-744455d44c-5zrm6\" (UID: \"5002875e-cc97-4cd1-a72a-f8611227e58c\") " pod="openshift-dns-operator/dns-operator-744455d44c-5zrm6" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.275181 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/fb05f4f3-f5be-4823-934f-14d5c48b43c1-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-l4w2d\" (UID: \"fb05f4f3-f5be-4823-934f-14d5c48b43c1\") " pod="openshift-authentication/oauth-openshift-558db77b4-l4w2d" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.275259 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9dv5f\" (UniqueName: \"kubernetes.io/projected/d9e38eea-202a-4bf1-bb51-1d4a1fc20202-kube-api-access-9dv5f\") pod \"downloads-7954f5f757-qc97s\" (UID: \"d9e38eea-202a-4bf1-bb51-1d4a1fc20202\") " pod="openshift-console/downloads-7954f5f757-qc97s" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.275290 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nvxlj\" (UniqueName: \"kubernetes.io/projected/c6b95f15-c748-43a9-8ca6-2007cd1727e5-kube-api-access-nvxlj\") pod \"control-plane-machine-set-operator-78cbb6b69f-9zt5j\" (UID: \"c6b95f15-c748-43a9-8ca6-2007cd1727e5\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-9zt5j" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.275328 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d33f8d3b-768a-44d3-bdf6-1a885b096055-metrics-tls\") pod \"ingress-operator-5b745b69d9-krl9b\" (UID: \"d33f8d3b-768a-44d3-bdf6-1a885b096055\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-krl9b" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.275358 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2qlf7\" (UniqueName: \"kubernetes.io/projected/d33f8d3b-768a-44d3-bdf6-1a885b096055-kube-api-access-2qlf7\") pod \"ingress-operator-5b745b69d9-krl9b\" (UID: \"d33f8d3b-768a-44d3-bdf6-1a885b096055\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-krl9b" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.275384 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/617510ba-d86e-4485-9c02-761a60ec1a90-etcd-service-ca\") pod \"etcd-operator-b45778765-rstxr\" (UID: \"617510ba-d86e-4485-9c02-761a60ec1a90\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rstxr" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.275412 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fqsj8\" (UniqueName: \"kubernetes.io/projected/e32c23eb-6cda-4f54-bb1b-d8512e4d30cc-kube-api-access-fqsj8\") pod 
\"cluster-image-registry-operator-dc59b4c8b-ckbbn\" (UID: \"e32c23eb-6cda-4f54-bb1b-d8512e4d30cc\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ckbbn" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.275432 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fb05f4f3-f5be-4823-934f-14d5c48b43c1-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-l4w2d\" (UID: \"fb05f4f3-f5be-4823-934f-14d5c48b43c1\") " pod="openshift-authentication/oauth-openshift-558db77b4-l4w2d" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.275440 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d40e12c4-8331-453c-b20b-5bbd5e3c2a9c-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-skz5v\" (UID: \"d40e12c4-8331-453c-b20b-5bbd5e3c2a9c\") " pod="openshift-controller-manager/controller-manager-879f6c89f-skz5v" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.276271 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/fb05f4f3-f5be-4823-934f-14d5c48b43c1-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-l4w2d\" (UID: \"fb05f4f3-f5be-4823-934f-14d5c48b43c1\") " pod="openshift-authentication/oauth-openshift-558db77b4-l4w2d" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.276188 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f633d554-794b-4a64-9699-27fbc28a4d7c-trusted-ca-bundle\") pod \"console-f9d7485db-gp9fp\" (UID: \"f633d554-794b-4a64-9699-27fbc28a4d7c\") " pod="openshift-console/console-f9d7485db-gp9fp" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.276972 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/617510ba-d86e-4485-9c02-761a60ec1a90-etcd-service-ca\") pod \"etcd-operator-b45778765-rstxr\" (UID: \"617510ba-d86e-4485-9c02-761a60ec1a90\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rstxr" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.276489 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d40e12c4-8331-453c-b20b-5bbd5e3c2a9c-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-skz5v\" (UID: \"d40e12c4-8331-453c-b20b-5bbd5e3c2a9c\") " pod="openshift-controller-manager/controller-manager-879f6c89f-skz5v" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.275888 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/617510ba-d86e-4485-9c02-761a60ec1a90-config\") pod \"etcd-operator-b45778765-rstxr\" (UID: \"617510ba-d86e-4485-9c02-761a60ec1a90\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rstxr" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.277082 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/fb05f4f3-f5be-4823-934f-14d5c48b43c1-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-l4w2d\" (UID: \"fb05f4f3-f5be-4823-934f-14d5c48b43c1\") " pod="openshift-authentication/oauth-openshift-558db77b4-l4w2d" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 
13:49:03.279449 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4953940c-67b1-4a85-851f-ad290b9a0d0d-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-mrg9f\" (UID: \"4953940c-67b1-4a85-851f-ad290b9a0d0d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mrg9f" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.279520 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-92llc\" (UniqueName: \"kubernetes.io/projected/95cc22f3-bae0-4461-a63e-0adbddd76cbb-kube-api-access-92llc\") pod \"machine-config-operator-74547568cd-j2zlh\" (UID: \"95cc22f3-bae0-4461-a63e-0adbddd76cbb\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-j2zlh" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.279546 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d40e12c4-8331-453c-b20b-5bbd5e3c2a9c-config\") pod \"controller-manager-879f6c89f-skz5v\" (UID: \"d40e12c4-8331-453c-b20b-5bbd5e3c2a9c\") " pod="openshift-controller-manager/controller-manager-879f6c89f-skz5v" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.279594 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/c6b95f15-c748-43a9-8ca6-2007cd1727e5-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-9zt5j\" (UID: \"c6b95f15-c748-43a9-8ca6-2007cd1727e5\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-9zt5j" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.279635 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/112006c5-e3a9-4fbb-813c-f195e98277bc-config\") pod \"console-operator-58897d9998-mv9g5\" (UID: \"112006c5-e3a9-4fbb-813c-f195e98277bc\") " pod="openshift-console-operator/console-operator-58897d9998-mv9g5" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.279661 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bbl8h\" (UniqueName: \"kubernetes.io/projected/f89bbffa-4dbb-4aac-bb26-c146a6037f67-kube-api-access-bbl8h\") pod \"machine-approver-56656f9798-dsrxh\" (UID: \"f89bbffa-4dbb-4aac-bb26-c146a6037f67\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-dsrxh" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.279692 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4953940c-67b1-4a85-851f-ad290b9a0d0d-config\") pod \"kube-apiserver-operator-766d6c64bb-mrg9f\" (UID: \"4953940c-67b1-4a85-851f-ad290b9a0d0d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mrg9f" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.279717 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/64efd0fc-ec3c-403b-ac98-0546f2affa94-serving-cert\") pod \"openshift-config-operator-7777fb866f-jmqdr\" (UID: \"64efd0fc-ec3c-403b-ac98-0546f2affa94\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-jmqdr" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.279747 5050 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/617510ba-d86e-4485-9c02-761a60ec1a90-serving-cert\") pod \"etcd-operator-b45778765-rstxr\" (UID: \"617510ba-d86e-4485-9c02-761a60ec1a90\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rstxr" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.279907 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/fb05f4f3-f5be-4823-934f-14d5c48b43c1-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-l4w2d\" (UID: \"fb05f4f3-f5be-4823-934f-14d5c48b43c1\") " pod="openshift-authentication/oauth-openshift-558db77b4-l4w2d" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.279959 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/fb05f4f3-f5be-4823-934f-14d5c48b43c1-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-l4w2d\" (UID: \"fb05f4f3-f5be-4823-934f-14d5c48b43c1\") " pod="openshift-authentication/oauth-openshift-558db77b4-l4w2d" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.280311 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/f89bbffa-4dbb-4aac-bb26-c146a6037f67-machine-approver-tls\") pod \"machine-approver-56656f9798-dsrxh\" (UID: \"f89bbffa-4dbb-4aac-bb26-c146a6037f67\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-dsrxh" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.280346 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/81fb21e6-41c1-4e89-b458-5d83efc1eec6-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-jmrjt\" (UID: \"81fb21e6-41c1-4e89-b458-5d83efc1eec6\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-jmrjt" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.280470 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d40e12c4-8331-453c-b20b-5bbd5e3c2a9c-serving-cert\") pod \"controller-manager-879f6c89f-skz5v\" (UID: \"d40e12c4-8331-453c-b20b-5bbd5e3c2a9c\") " pod="openshift-controller-manager/controller-manager-879f6c89f-skz5v" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.280561 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/fb05f4f3-f5be-4823-934f-14d5c48b43c1-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-l4w2d\" (UID: \"fb05f4f3-f5be-4823-934f-14d5c48b43c1\") " pod="openshift-authentication/oauth-openshift-558db77b4-l4w2d" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.280621 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/5002875e-cc97-4cd1-a72a-f8611227e58c-metrics-tls\") pod \"dns-operator-744455d44c-5zrm6\" (UID: \"5002875e-cc97-4cd1-a72a-f8611227e58c\") " pod="openshift-dns-operator/dns-operator-744455d44c-5zrm6" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.280700 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/112006c5-e3a9-4fbb-813c-f195e98277bc-config\") pod \"console-operator-58897d9998-mv9g5\" 
(UID: \"112006c5-e3a9-4fbb-813c-f195e98277bc\") " pod="openshift-console-operator/console-operator-58897d9998-mv9g5" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.281779 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d40e12c4-8331-453c-b20b-5bbd5e3c2a9c-config\") pod \"controller-manager-879f6c89f-skz5v\" (UID: \"d40e12c4-8331-453c-b20b-5bbd5e3c2a9c\") " pod="openshift-controller-manager/controller-manager-879f6c89f-skz5v" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.281888 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/fb05f4f3-f5be-4823-934f-14d5c48b43c1-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-l4w2d\" (UID: \"fb05f4f3-f5be-4823-934f-14d5c48b43c1\") " pod="openshift-authentication/oauth-openshift-558db77b4-l4w2d" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.283433 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/617510ba-d86e-4485-9c02-761a60ec1a90-serving-cert\") pod \"etcd-operator-b45778765-rstxr\" (UID: \"617510ba-d86e-4485-9c02-761a60ec1a90\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rstxr" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.284106 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/fb05f4f3-f5be-4823-934f-14d5c48b43c1-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-l4w2d\" (UID: \"fb05f4f3-f5be-4823-934f-14d5c48b43c1\") " pod="openshift-authentication/oauth-openshift-558db77b4-l4w2d" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.284312 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/f633d554-794b-4a64-9699-27fbc28a4d7c-console-serving-cert\") pod \"console-f9d7485db-gp9fp\" (UID: \"f633d554-794b-4a64-9699-27fbc28a4d7c\") " pod="openshift-console/console-f9d7485db-gp9fp" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.284358 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/112006c5-e3a9-4fbb-813c-f195e98277bc-serving-cert\") pod \"console-operator-58897d9998-mv9g5\" (UID: \"112006c5-e3a9-4fbb-813c-f195e98277bc\") " pod="openshift-console-operator/console-operator-58897d9998-mv9g5" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.284642 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/fb05f4f3-f5be-4823-934f-14d5c48b43c1-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-l4w2d\" (UID: \"fb05f4f3-f5be-4823-934f-14d5c48b43c1\") " pod="openshift-authentication/oauth-openshift-558db77b4-l4w2d" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.284789 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/fb05f4f3-f5be-4823-934f-14d5c48b43c1-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-l4w2d\" (UID: \"fb05f4f3-f5be-4823-934f-14d5c48b43c1\") " pod="openshift-authentication/oauth-openshift-558db77b4-l4w2d" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.285296 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/64efd0fc-ec3c-403b-ac98-0546f2affa94-serving-cert\") pod \"openshift-config-operator-7777fb866f-jmqdr\" (UID: \"64efd0fc-ec3c-403b-ac98-0546f2affa94\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-jmqdr" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.285865 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f633d554-794b-4a64-9699-27fbc28a4d7c-console-oauth-config\") pod \"console-f9d7485db-gp9fp\" (UID: \"f633d554-794b-4a64-9699-27fbc28a4d7c\") " pod="openshift-console/console-f9d7485db-gp9fp" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.291503 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.296971 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4953940c-67b1-4a85-851f-ad290b9a0d0d-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-mrg9f\" (UID: \"4953940c-67b1-4a85-851f-ad290b9a0d0d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mrg9f" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.311799 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.321440 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4953940c-67b1-4a85-851f-ad290b9a0d0d-config\") pod \"kube-apiserver-operator-766d6c64bb-mrg9f\" (UID: \"4953940c-67b1-4a85-851f-ad290b9a0d0d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mrg9f" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.338754 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.343162 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e32c23eb-6cda-4f54-bb1b-d8512e4d30cc-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-ckbbn\" (UID: \"e32c23eb-6cda-4f54-bb1b-d8512e4d30cc\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ckbbn" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.351634 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.372716 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.392146 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.421260 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.429845 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d33f8d3b-768a-44d3-bdf6-1a885b096055-trusted-ca\") pod \"ingress-operator-5b745b69d9-krl9b\" (UID: 
\"d33f8d3b-768a-44d3-bdf6-1a885b096055\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-krl9b" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.431982 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.451814 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.459845 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d33f8d3b-768a-44d3-bdf6-1a885b096055-metrics-tls\") pod \"ingress-operator-5b745b69d9-krl9b\" (UID: \"d33f8d3b-768a-44d3-bdf6-1a885b096055\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-krl9b" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.472027 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.492338 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.512367 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.516885 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/e32c23eb-6cda-4f54-bb1b-d8512e4d30cc-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-ckbbn\" (UID: \"e32c23eb-6cda-4f54-bb1b-d8512e4d30cc\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ckbbn" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.531526 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.546155 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lttxf" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.546291 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.552166 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.564725 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/c6b95f15-c748-43a9-8ca6-2007cd1727e5-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-9zt5j\" (UID: \"c6b95f15-c748-43a9-8ca6-2007cd1727e5\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-9zt5j" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.572041 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.590952 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.611669 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.632519 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.652128 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.672192 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.692770 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.712307 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.732080 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.735892 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/95cc22f3-bae0-4461-a63e-0adbddd76cbb-images\") pod \"machine-config-operator-74547568cd-j2zlh\" (UID: \"95cc22f3-bae0-4461-a63e-0adbddd76cbb\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-j2zlh" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.751687 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.771986 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.778663 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/95cc22f3-bae0-4461-a63e-0adbddd76cbb-proxy-tls\") pod \"machine-config-operator-74547568cd-j2zlh\" (UID: \"95cc22f3-bae0-4461-a63e-0adbddd76cbb\") " 
pod="openshift-machine-config-operator/machine-config-operator-74547568cd-j2zlh" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.792462 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.811645 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.822188 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/26b6bd08-d432-4a3b-b739-d575ca32ac6e-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-c6twg\" (UID: \"26b6bd08-d432-4a3b-b739-d575ca32ac6e\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-c6twg" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.831275 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.831944 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26b6bd08-d432-4a3b-b739-d575ca32ac6e-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-c6twg\" (UID: \"26b6bd08-d432-4a3b-b739-d575ca32ac6e\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-c6twg" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.852474 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.872179 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.891381 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.912140 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.925533 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9244eaaf-3ab0-426e-a568-aa295b66c246-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-r49rr\" (UID: \"9244eaaf-3ab0-426e-a568-aa295b66c246\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-r49rr" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.932313 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.941884 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9244eaaf-3ab0-426e-a568-aa295b66c246-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-r49rr\" (UID: \"9244eaaf-3ab0-426e-a568-aa295b66c246\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-r49rr" Dec 11 13:49:03 
crc kubenswrapper[5050]: I1211 13:49:03.951745 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Dec 11 13:49:03 crc kubenswrapper[5050]: I1211 13:49:03.992634 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Dec 11 13:49:04 crc kubenswrapper[5050]: I1211 13:49:04.012914 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Dec 11 13:49:04 crc kubenswrapper[5050]: I1211 13:49:04.031602 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Dec 11 13:49:04 crc kubenswrapper[5050]: I1211 13:49:04.052939 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Dec 11 13:49:04 crc kubenswrapper[5050]: I1211 13:49:04.072549 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Dec 11 13:49:04 crc kubenswrapper[5050]: I1211 13:49:04.091134 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Dec 11 13:49:04 crc kubenswrapper[5050]: I1211 13:49:04.109756 5050 request.go:700] Waited for 1.005132849s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-7954f5f757-qc97s Dec 11 13:49:04 crc kubenswrapper[5050]: I1211 13:49:04.133488 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Dec 11 13:49:04 crc kubenswrapper[5050]: I1211 13:49:04.152465 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Dec 11 13:49:04 crc kubenswrapper[5050]: I1211 13:49:04.173105 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Dec 11 13:49:04 crc kubenswrapper[5050]: I1211 13:49:04.192050 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Dec 11 13:49:04 crc kubenswrapper[5050]: I1211 13:49:04.192650 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 13:49:04 crc kubenswrapper[5050]: E1211 13:49:04.192842 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 13:49:20.192817493 +0000 UTC m=+51.036540099 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 13:49:04 crc kubenswrapper[5050]: I1211 13:49:04.212559 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Dec 11 13:49:04 crc kubenswrapper[5050]: I1211 13:49:04.232417 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Dec 11 13:49:04 crc kubenswrapper[5050]: I1211 13:49:04.252430 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Dec 11 13:49:04 crc kubenswrapper[5050]: I1211 13:49:04.271607 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Dec 11 13:49:04 crc kubenswrapper[5050]: I1211 13:49:04.291991 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Dec 11 13:49:04 crc kubenswrapper[5050]: I1211 13:49:04.294712 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 13:49:04 crc kubenswrapper[5050]: I1211 13:49:04.294781 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 13:49:04 crc kubenswrapper[5050]: I1211 13:49:04.294813 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 13:49:04 crc kubenswrapper[5050]: I1211 13:49:04.294868 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 13:49:04 crc kubenswrapper[5050]: E1211 13:49:04.294920 5050 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 11 13:49:04 crc kubenswrapper[5050]: E1211 13:49:04.295021 5050 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object 
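The UnmountVolume.TearDown failure above reports that kubevirt.io.hostpath-provisioner is not in kubelet's list of registered CSI drivers at this point, presumably because the provisioner's node plugin has not yet come up and registered this early in startup. A minimal client-go sketch, not part of this log and not kubelet code, that prints the drivers recorded on the node's CSINode object; it assumes a kubeconfig at the default path and uses the node name "crc" seen in these records:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: kubeconfig in the default location (~/.kube/config).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// The CSINode object mirrors the drivers whose node plugins have registered with kubelet.
	csiNode, err := cs.StorageV1().CSINodes().Get(context.TODO(), "crc", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, d := range csiNode.Spec.Drivers {
		fmt.Println("registered CSI driver:", d.Name)
	}
}

Until the driver shows up in that list, kubelet keeps rescheduling the pvc-657094db volume operations, as the durationBeforeRetry entries around this point show.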
"openshift-network-console"/"networking-console-plugin" not registered Dec 11 13:49:04 crc kubenswrapper[5050]: E1211 13:49:04.295037 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-11 13:49:20.29498978 +0000 UTC m=+51.138712366 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 11 13:49:04 crc kubenswrapper[5050]: E1211 13:49:04.295105 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-11 13:49:20.295084382 +0000 UTC m=+51.138807148 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 11 13:49:04 crc kubenswrapper[5050]: I1211 13:49:04.311748 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Dec 11 13:49:04 crc kubenswrapper[5050]: I1211 13:49:04.332572 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Dec 11 13:49:04 crc kubenswrapper[5050]: I1211 13:49:04.351854 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Dec 11 13:49:04 crc kubenswrapper[5050]: I1211 13:49:04.372379 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Dec 11 13:49:04 crc kubenswrapper[5050]: I1211 13:49:04.391990 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Dec 11 13:49:04 crc kubenswrapper[5050]: I1211 13:49:04.418329 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Dec 11 13:49:04 crc kubenswrapper[5050]: I1211 13:49:04.431778 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Dec 11 13:49:04 crc kubenswrapper[5050]: I1211 13:49:04.451822 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Dec 11 13:49:04 crc kubenswrapper[5050]: I1211 13:49:04.471564 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Dec 11 13:49:04 crc kubenswrapper[5050]: I1211 13:49:04.491516 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Dec 11 13:49:04 crc kubenswrapper[5050]: I1211 13:49:04.512282 5050 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-authentication-operator"/"openshift-service-ca.crt" Dec 11 13:49:04 crc kubenswrapper[5050]: I1211 13:49:04.540534 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Dec 11 13:49:04 crc kubenswrapper[5050]: I1211 13:49:04.545301 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 13:49:04 crc kubenswrapper[5050]: I1211 13:49:04.545364 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 13:49:04 crc kubenswrapper[5050]: I1211 13:49:04.552208 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Dec 11 13:49:04 crc kubenswrapper[5050]: I1211 13:49:04.572566 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Dec 11 13:49:04 crc kubenswrapper[5050]: I1211 13:49:04.591779 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Dec 11 13:49:04 crc kubenswrapper[5050]: I1211 13:49:04.612668 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Dec 11 13:49:04 crc kubenswrapper[5050]: I1211 13:49:04.630992 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Dec 11 13:49:04 crc kubenswrapper[5050]: I1211 13:49:04.651395 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Dec 11 13:49:04 crc kubenswrapper[5050]: I1211 13:49:04.672065 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 11 13:49:04 crc kubenswrapper[5050]: I1211 13:49:04.691883 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 11 13:49:04 crc kubenswrapper[5050]: I1211 13:49:04.712777 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Dec 11 13:49:04 crc kubenswrapper[5050]: I1211 13:49:04.731554 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Dec 11 13:49:04 crc kubenswrapper[5050]: I1211 13:49:04.752400 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Dec 11 13:49:04 crc kubenswrapper[5050]: I1211 13:49:04.772498 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Dec 11 13:49:04 crc kubenswrapper[5050]: I1211 13:49:04.791423 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Dec 11 13:49:04 crc kubenswrapper[5050]: I1211 13:49:04.811951 5050 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Dec 11 13:49:04 crc kubenswrapper[5050]: I1211 13:49:04.831759 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Dec 11 13:49:04 crc kubenswrapper[5050]: I1211 
13:49:04.853316 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-sysctl-allowlist" Dec 11 13:49:04 crc kubenswrapper[5050]: I1211 13:49:04.898654 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t8nrf\" (UniqueName: \"kubernetes.io/projected/465806d7-7b39-4ddf-b098-8bed4c0c5a3a-kube-api-access-t8nrf\") pod \"machine-api-operator-5694c8668f-zljtn\" (UID: \"465806d7-7b39-4ddf-b098-8bed4c0c5a3a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-zljtn" Dec 11 13:49:04 crc kubenswrapper[5050]: I1211 13:49:04.919885 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v57tk\" (UniqueName: \"kubernetes.io/projected/3069e571-2923-44a7-ae85-7cc7e64991ef-kube-api-access-v57tk\") pod \"route-controller-manager-6576b87f9c-bvjdq\" (UID: \"3069e571-2923-44a7-ae85-7cc7e64991ef\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bvjdq" Dec 11 13:49:04 crc kubenswrapper[5050]: I1211 13:49:04.937950 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tdwfv\" (UniqueName: \"kubernetes.io/projected/c37f7db2-f363-4ef9-8c5f-3c6d0ed3f75c-kube-api-access-tdwfv\") pod \"cluster-samples-operator-665b6dd947-gxtkz\" (UID: \"c37f7db2-f363-4ef9-8c5f-3c6d0ed3f75c\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gxtkz" Dec 11 13:49:04 crc kubenswrapper[5050]: I1211 13:49:04.948879 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ccbcp\" (UniqueName: \"kubernetes.io/projected/1d89350d-55e9-4ef6-8182-287894b6c14b-kube-api-access-ccbcp\") pod \"apiserver-7bbb656c7d-cnp7n\" (UID: \"1d89350d-55e9-4ef6-8182-287894b6c14b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cnp7n" Dec 11 13:49:04 crc kubenswrapper[5050]: I1211 13:49:04.967398 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-45lpd\" (UniqueName: \"kubernetes.io/projected/ea884e88-c0df-4212-976a-0d7ce1731fdc-kube-api-access-45lpd\") pod \"apiserver-76f77b778f-cd66n\" (UID: \"ea884e88-c0df-4212-976a-0d7ce1731fdc\") " pod="openshift-apiserver/apiserver-76f77b778f-cd66n" Dec 11 13:49:04 crc kubenswrapper[5050]: I1211 13:49:04.991065 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Dec 11 13:49:04 crc kubenswrapper[5050]: I1211 13:49:04.991911 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4psgt\" (UniqueName: \"kubernetes.io/projected/dfe5cd7c-4c40-4e8a-8d26-f66424569dbe-kube-api-access-4psgt\") pod \"openshift-apiserver-operator-796bbdcf4f-vplw7\" (UID: \"dfe5cd7c-4c40-4e8a-8d26-f66424569dbe\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vplw7" Dec 11 13:49:05 crc kubenswrapper[5050]: I1211 13:49:05.012432 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Dec 11 13:49:05 crc kubenswrapper[5050]: I1211 13:49:05.031928 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Dec 11 13:49:05 crc kubenswrapper[5050]: I1211 13:49:05.051220 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Dec 11 13:49:05 crc kubenswrapper[5050]: I1211 13:49:05.071925 5050 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Dec 11 13:49:05 crc kubenswrapper[5050]: I1211 13:49:05.083573 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bvjdq" Dec 11 13:49:05 crc kubenswrapper[5050]: I1211 13:49:05.092519 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Dec 11 13:49:05 crc kubenswrapper[5050]: I1211 13:49:05.111558 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-zljtn" Dec 11 13:49:05 crc kubenswrapper[5050]: I1211 13:49:05.112103 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Dec 11 13:49:05 crc kubenswrapper[5050]: I1211 13:49:05.130111 5050 request.go:700] Waited for 1.861347665s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager-operator/serviceaccounts/openshift-controller-manager-operator/token Dec 11 13:49:05 crc kubenswrapper[5050]: I1211 13:49:05.156182 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vplw7" Dec 11 13:49:05 crc kubenswrapper[5050]: I1211 13:49:05.160850 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rfnnk\" (UniqueName: \"kubernetes.io/projected/04af6d44-0ead-4b43-8287-b6bdb88c14ea-kube-api-access-rfnnk\") pod \"openshift-controller-manager-operator-756b6f6bc6-n2j6p\" (UID: \"04af6d44-0ead-4b43-8287-b6bdb88c14ea\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-n2j6p" Dec 11 13:49:05 crc kubenswrapper[5050]: I1211 13:49:05.174903 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d33f8d3b-768a-44d3-bdf6-1a885b096055-bound-sa-token\") pod \"ingress-operator-5b745b69d9-krl9b\" (UID: \"d33f8d3b-768a-44d3-bdf6-1a885b096055\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-krl9b" Dec 11 13:49:05 crc kubenswrapper[5050]: I1211 13:49:05.181811 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gxtkz" Dec 11 13:49:05 crc kubenswrapper[5050]: I1211 13:49:05.203187 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kmdlb\" (UniqueName: \"kubernetes.io/projected/fb05f4f3-f5be-4823-934f-14d5c48b43c1-kube-api-access-kmdlb\") pod \"oauth-openshift-558db77b4-l4w2d\" (UID: \"fb05f4f3-f5be-4823-934f-14d5c48b43c1\") " pod="openshift-authentication/oauth-openshift-558db77b4-l4w2d" Dec 11 13:49:05 crc kubenswrapper[5050]: I1211 13:49:05.207412 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-cd66n" Dec 11 13:49:05 crc kubenswrapper[5050]: I1211 13:49:05.215886 5050 util.go:30] "No sandbox for pod can be found. 
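The request.go "Waited for ... due to client-side throttling, not priority and fairness" records above come from client-go's own token-bucket rate limiter rather than from the API server; with many reflectors and token requests starting at once, the default limits (QPS 5, burst 10) produce these one-to-two-second waits. A sketch of how a client-go consumer raises those limits on its rest.Config; the kubeconfig path and the chosen values are assumptions for illustration:

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	// Left at zero these default to QPS=5 and Burst=10; raising them reduces
	// client-side waits like the ones logged above, at the cost of more API load.
	cfg.QPS = 50
	cfg.Burst = 100
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	sv, err := cs.Discovery().ServerVersion()
	if err != nil {
		panic(err)
	}
	fmt.Println("connected to", sv.GitVersion)
}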
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cnp7n" Dec 11 13:49:05 crc kubenswrapper[5050]: I1211 13:49:05.220358 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/81fb21e6-41c1-4e89-b458-5d83efc1eec6-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-jmrjt\" (UID: \"81fb21e6-41c1-4e89-b458-5d83efc1eec6\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-jmrjt" Dec 11 13:49:05 crc kubenswrapper[5050]: I1211 13:49:05.227529 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b94wm\" (UniqueName: \"kubernetes.io/projected/534031e8-875b-4fd2-91cf-bc969db66c22-kube-api-access-b94wm\") pod \"migrator-59844c95c7-mg97h\" (UID: \"534031e8-875b-4fd2-91cf-bc969db66c22\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-mg97h" Dec 11 13:49:05 crc kubenswrapper[5050]: I1211 13:49:05.249359 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-l4w2d" Dec 11 13:49:05 crc kubenswrapper[5050]: I1211 13:49:05.251461 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j7dtk\" (UniqueName: \"kubernetes.io/projected/9244eaaf-3ab0-426e-a568-aa295b66c246-kube-api-access-j7dtk\") pod \"kube-storage-version-migrator-operator-b67b599dd-r49rr\" (UID: \"9244eaaf-3ab0-426e-a568-aa295b66c246\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-r49rr" Dec 11 13:49:05 crc kubenswrapper[5050]: I1211 13:49:05.272328 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qdqpk\" (UniqueName: \"kubernetes.io/projected/617510ba-d86e-4485-9c02-761a60ec1a90-kube-api-access-qdqpk\") pod \"etcd-operator-b45778765-rstxr\" (UID: \"617510ba-d86e-4485-9c02-761a60ec1a90\") " pod="openshift-etcd-operator/etcd-operator-b45778765-rstxr" Dec 11 13:49:05 crc kubenswrapper[5050]: I1211 13:49:05.286691 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kbjc4\" (UniqueName: \"kubernetes.io/projected/5002875e-cc97-4cd1-a72a-f8611227e58c-kube-api-access-kbjc4\") pod \"dns-operator-744455d44c-5zrm6\" (UID: \"5002875e-cc97-4cd1-a72a-f8611227e58c\") " pod="openshift-dns-operator/dns-operator-744455d44c-5zrm6" Dec 11 13:49:05 crc kubenswrapper[5050]: E1211 13:49:05.296677 5050 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Dec 11 13:49:05 crc kubenswrapper[5050]: E1211 13:49:05.296773 5050 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Dec 11 13:49:05 crc kubenswrapper[5050]: I1211 13:49:05.307192 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k9bwx\" (UniqueName: \"kubernetes.io/projected/d40e12c4-8331-453c-b20b-5bbd5e3c2a9c-kube-api-access-k9bwx\") pod \"controller-manager-879f6c89f-skz5v\" (UID: \"d40e12c4-8331-453c-b20b-5bbd5e3c2a9c\") " pod="openshift-controller-manager/controller-manager-879f6c89f-skz5v" Dec 11 13:49:05 crc kubenswrapper[5050]: I1211 13:49:05.312127 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-bvjdq"] Dec 11 
13:49:05 crc kubenswrapper[5050]: I1211 13:49:05.340806 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l2w8k\" (UniqueName: \"kubernetes.io/projected/f633d554-794b-4a64-9699-27fbc28a4d7c-kube-api-access-l2w8k\") pod \"console-f9d7485db-gp9fp\" (UID: \"f633d554-794b-4a64-9699-27fbc28a4d7c\") " pod="openshift-console/console-f9d7485db-gp9fp" Dec 11 13:49:05 crc kubenswrapper[5050]: I1211 13:49:05.340868 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-5zrm6" Dec 11 13:49:05 crc kubenswrapper[5050]: I1211 13:49:05.347292 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-rstxr" Dec 11 13:49:05 crc kubenswrapper[5050]: I1211 13:49:05.356239 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-n2j6p" Dec 11 13:49:05 crc kubenswrapper[5050]: I1211 13:49:05.360450 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e32c23eb-6cda-4f54-bb1b-d8512e4d30cc-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-ckbbn\" (UID: \"e32c23eb-6cda-4f54-bb1b-d8512e4d30cc\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ckbbn" Dec 11 13:49:05 crc kubenswrapper[5050]: I1211 13:49:05.362677 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-mg97h" Dec 11 13:49:05 crc kubenswrapper[5050]: I1211 13:49:05.368429 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jj27z\" (UniqueName: \"kubernetes.io/projected/64efd0fc-ec3c-403b-ac98-0546f2affa94-kube-api-access-jj27z\") pod \"openshift-config-operator-7777fb866f-jmqdr\" (UID: \"64efd0fc-ec3c-403b-ac98-0546f2affa94\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-jmqdr" Dec 11 13:49:05 crc kubenswrapper[5050]: I1211 13:49:05.377441 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-zljtn"] Dec 11 13:49:05 crc kubenswrapper[5050]: W1211 13:49:05.380717 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3069e571_2923_44a7_ae85_7cc7e64991ef.slice/crio-6938ebda5d0b1451828f34f453f145f03f88f1ef22bd0f91990b2361fe972ee9 WatchSource:0}: Error finding container 6938ebda5d0b1451828f34f453f145f03f88f1ef22bd0f91990b2361fe972ee9: Status 404 returned error can't find the container with id 6938ebda5d0b1451828f34f453f145f03f88f1ef22bd0f91990b2361fe972ee9 Dec 11 13:49:05 crc kubenswrapper[5050]: I1211 13:49:05.381664 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-jmrjt" Dec 11 13:49:05 crc kubenswrapper[5050]: I1211 13:49:05.388677 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9dv5f\" (UniqueName: \"kubernetes.io/projected/d9e38eea-202a-4bf1-bb51-1d4a1fc20202-kube-api-access-9dv5f\") pod \"downloads-7954f5f757-qc97s\" (UID: \"d9e38eea-202a-4bf1-bb51-1d4a1fc20202\") " pod="openshift-console/downloads-7954f5f757-qc97s" Dec 11 13:49:05 crc kubenswrapper[5050]: W1211 13:49:05.415728 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod465806d7_7b39_4ddf_b098_8bed4c0c5a3a.slice/crio-191118702e69165ebdc455079ed558f1528c1583a57674e1cbcb08826dc70c8b WatchSource:0}: Error finding container 191118702e69165ebdc455079ed558f1528c1583a57674e1cbcb08826dc70c8b: Status 404 returned error can't find the container with id 191118702e69165ebdc455079ed558f1528c1583a57674e1cbcb08826dc70c8b Dec 11 13:49:05 crc kubenswrapper[5050]: I1211 13:49:05.432002 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2qlf7\" (UniqueName: \"kubernetes.io/projected/d33f8d3b-768a-44d3-bdf6-1a885b096055-kube-api-access-2qlf7\") pod \"ingress-operator-5b745b69d9-krl9b\" (UID: \"d33f8d3b-768a-44d3-bdf6-1a885b096055\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-krl9b" Dec 11 13:49:05 crc kubenswrapper[5050]: I1211 13:49:05.437244 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nvxlj\" (UniqueName: \"kubernetes.io/projected/c6b95f15-c748-43a9-8ca6-2007cd1727e5-kube-api-access-nvxlj\") pod \"control-plane-machine-set-operator-78cbb6b69f-9zt5j\" (UID: \"c6b95f15-c748-43a9-8ca6-2007cd1727e5\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-9zt5j" Dec 11 13:49:05 crc kubenswrapper[5050]: I1211 13:49:05.452723 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-czz6r\" (UniqueName: \"kubernetes.io/projected/112006c5-e3a9-4fbb-813c-f195e98277bc-kube-api-access-czz6r\") pod \"console-operator-58897d9998-mv9g5\" (UID: \"112006c5-e3a9-4fbb-813c-f195e98277bc\") " pod="openshift-console-operator/console-operator-58897d9998-mv9g5" Dec 11 13:49:05 crc kubenswrapper[5050]: I1211 13:49:05.456950 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-r49rr" Dec 11 13:49:05 crc kubenswrapper[5050]: I1211 13:49:05.485400 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fqsj8\" (UniqueName: \"kubernetes.io/projected/e32c23eb-6cda-4f54-bb1b-d8512e4d30cc-kube-api-access-fqsj8\") pod \"cluster-image-registry-operator-dc59b4c8b-ckbbn\" (UID: \"e32c23eb-6cda-4f54-bb1b-d8512e4d30cc\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ckbbn" Dec 11 13:49:05 crc kubenswrapper[5050]: I1211 13:49:05.489983 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/26b6bd08-d432-4a3b-b739-d575ca32ac6e-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-c6twg\" (UID: \"26b6bd08-d432-4a3b-b739-d575ca32ac6e\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-c6twg" Dec 11 13:49:05 crc kubenswrapper[5050]: I1211 13:49:05.513947 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4953940c-67b1-4a85-851f-ad290b9a0d0d-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-mrg9f\" (UID: \"4953940c-67b1-4a85-851f-ad290b9a0d0d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mrg9f" Dec 11 13:49:05 crc kubenswrapper[5050]: I1211 13:49:05.530376 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-skz5v" Dec 11 13:49:05 crc kubenswrapper[5050]: I1211 13:49:05.533242 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bbl8h\" (UniqueName: \"kubernetes.io/projected/f89bbffa-4dbb-4aac-bb26-c146a6037f67-kube-api-access-bbl8h\") pod \"machine-approver-56656f9798-dsrxh\" (UID: \"f89bbffa-4dbb-4aac-bb26-c146a6037f67\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-dsrxh" Dec 11 13:49:05 crc kubenswrapper[5050]: I1211 13:49:05.542787 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-jmqdr" Dec 11 13:49:05 crc kubenswrapper[5050]: I1211 13:49:05.548458 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-92llc\" (UniqueName: \"kubernetes.io/projected/95cc22f3-bae0-4461-a63e-0adbddd76cbb-kube-api-access-92llc\") pod \"machine-config-operator-74547568cd-j2zlh\" (UID: \"95cc22f3-bae0-4461-a63e-0adbddd76cbb\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-j2zlh" Dec 11 13:49:05 crc kubenswrapper[5050]: I1211 13:49:05.551822 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Dec 11 13:49:05 crc kubenswrapper[5050]: I1211 13:49:05.572614 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Dec 11 13:49:05 crc kubenswrapper[5050]: E1211 13:49:05.580397 5050 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: failed to sync configmap cache: timed out waiting for the condition Dec 11 13:49:05 crc kubenswrapper[5050]: E1211 13:49:05.581661 5050 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: failed to sync configmap cache: timed out waiting for the condition Dec 11 13:49:05 crc kubenswrapper[5050]: E1211 13:49:05.581768 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-11 13:49:21.581738835 +0000 UTC m=+52.425461421 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : failed to sync configmap cache: timed out waiting for the condition Dec 11 13:49:05 crc kubenswrapper[5050]: E1211 13:49:05.581905 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-11 13:49:21.581877459 +0000 UTC m=+52.425600035 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : failed to sync configmap cache: timed out waiting for the condition Dec 11 13:49:05 crc kubenswrapper[5050]: I1211 13:49:05.592767 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Dec 11 13:49:05 crc kubenswrapper[5050]: I1211 13:49:05.593230 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-gp9fp" Dec 11 13:49:05 crc kubenswrapper[5050]: I1211 13:49:05.613176 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Dec 11 13:49:05 crc kubenswrapper[5050]: I1211 13:49:05.617852 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-dsrxh" Dec 11 13:49:05 crc kubenswrapper[5050]: I1211 13:49:05.625138 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-mv9g5" Dec 11 13:49:05 crc kubenswrapper[5050]: I1211 13:49:05.631614 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-qc97s" Dec 11 13:49:05 crc kubenswrapper[5050]: W1211 13:49:05.647090 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf89bbffa_4dbb_4aac_bb26_c146a6037f67.slice/crio-4c1d08f47fa04366e7254cc639dd3cbeb7122e9dcf7b13c640fb4141525cedd9 WatchSource:0}: Error finding container 4c1d08f47fa04366e7254cc639dd3cbeb7122e9dcf7b13c640fb4141525cedd9: Status 404 returned error can't find the container with id 4c1d08f47fa04366e7254cc639dd3cbeb7122e9dcf7b13c640fb4141525cedd9 Dec 11 13:49:05 crc kubenswrapper[5050]: I1211 13:49:05.681283 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Dec 11 13:49:05 crc kubenswrapper[5050]: I1211 13:49:05.691740 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vplw7"] Dec 11 13:49:05 crc kubenswrapper[5050]: I1211 13:49:05.692906 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Dec 11 13:49:05 crc kubenswrapper[5050]: I1211 13:49:05.699741 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gxtkz"] Dec 11 13:49:05 crc kubenswrapper[5050]: I1211 13:49:05.704797 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-cd66n"] Dec 11 13:49:05 crc kubenswrapper[5050]: W1211 13:49:05.752717 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podea884e88_c0df_4212_976a_0d7ce1731fdc.slice/crio-5a19ab833df2883fd5fc2050f34715d2ac0d781c55479518259a31ad9d578213 WatchSource:0}: Error finding container 5a19ab833df2883fd5fc2050f34715d2ac0d781c55479518259a31ad9d578213: Status 404 returned error can't find the container with id 5a19ab833df2883fd5fc2050f34715d2ac0d781c55479518259a31ad9d578213 Dec 11 13:49:05 crc kubenswrapper[5050]: I1211 13:49:05.781526 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-l4w2d"] Dec 11 13:49:05 crc kubenswrapper[5050]: I1211 13:49:05.801531 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-cnp7n"] Dec 11 13:49:05 crc kubenswrapper[5050]: I1211 13:49:05.836710 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-jmqdr"] Dec 11 13:49:05 crc kubenswrapper[5050]: I1211 13:49:05.864995 5050 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-dsrxh" event={"ID":"f89bbffa-4dbb-4aac-bb26-c146a6037f67","Type":"ContainerStarted","Data":"4c1d08f47fa04366e7254cc639dd3cbeb7122e9dcf7b13c640fb4141525cedd9"} Dec 11 13:49:05 crc kubenswrapper[5050]: I1211 13:49:05.866370 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-mg97h"] Dec 11 13:49:05 crc kubenswrapper[5050]: I1211 13:49:05.868994 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-zljtn" event={"ID":"465806d7-7b39-4ddf-b098-8bed4c0c5a3a","Type":"ContainerStarted","Data":"191118702e69165ebdc455079ed558f1528c1583a57674e1cbcb08826dc70c8b"} Dec 11 13:49:05 crc kubenswrapper[5050]: I1211 13:49:05.871769 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-5zrm6"] Dec 11 13:49:05 crc kubenswrapper[5050]: I1211 13:49:05.872151 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-n2j6p"] Dec 11 13:49:05 crc kubenswrapper[5050]: I1211 13:49:05.873081 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bvjdq" event={"ID":"3069e571-2923-44a7-ae85-7cc7e64991ef","Type":"ContainerStarted","Data":"6938ebda5d0b1451828f34f453f145f03f88f1ef22bd0f91990b2361fe972ee9"} Dec 11 13:49:05 crc kubenswrapper[5050]: I1211 13:49:05.883548 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-r49rr"] Dec 11 13:49:05 crc kubenswrapper[5050]: I1211 13:49:05.893850 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-cd66n" event={"ID":"ea884e88-c0df-4212-976a-0d7ce1731fdc","Type":"ContainerStarted","Data":"5a19ab833df2883fd5fc2050f34715d2ac0d781c55479518259a31ad9d578213"} Dec 11 13:49:05 crc kubenswrapper[5050]: I1211 13:49:05.905895 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vplw7" event={"ID":"dfe5cd7c-4c40-4e8a-8d26-f66424569dbe","Type":"ContainerStarted","Data":"e00a13c02b74a1b306c2c54bacf817021c34b130e2e6c1affd22d6bcf689fd61"} Dec 11 13:49:05 crc kubenswrapper[5050]: I1211 13:49:05.908755 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-skz5v"] Dec 11 13:49:05 crc kubenswrapper[5050]: I1211 13:49:05.915499 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-gp9fp"] Dec 11 13:49:05 crc kubenswrapper[5050]: I1211 13:49:05.952440 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-jmrjt"] Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.570491 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mrg9f" Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.571448 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-krl9b" Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.572219 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-j2zlh" Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.572301 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-9zt5j" Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.573222 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ckbbn" Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.578250 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-mv9g5"] Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.578310 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-rstxr"] Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.578409 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-c6twg" Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.579030 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-qc97s"] Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.583452 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/79162d90-14dd-4df9-9bcd-10c2c666cae7-registry-certificates\") pod \"image-registry-697d97f7c8-f6wfx\" (UID: \"79162d90-14dd-4df9-9bcd-10c2c666cae7\") " pod="openshift-image-registry/image-registry-697d97f7c8-f6wfx" Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.583500 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/79162d90-14dd-4df9-9bcd-10c2c666cae7-ca-trust-extracted\") pod \"image-registry-697d97f7c8-f6wfx\" (UID: \"79162d90-14dd-4df9-9bcd-10c2c666cae7\") " pod="openshift-image-registry/image-registry-697d97f7c8-f6wfx" Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.583631 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/79162d90-14dd-4df9-9bcd-10c2c666cae7-installation-pull-secrets\") pod \"image-registry-697d97f7c8-f6wfx\" (UID: \"79162d90-14dd-4df9-9bcd-10c2c666cae7\") " pod="openshift-image-registry/image-registry-697d97f7c8-f6wfx" Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.583797 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/79162d90-14dd-4df9-9bcd-10c2c666cae7-bound-sa-token\") pod \"image-registry-697d97f7c8-f6wfx\" (UID: \"79162d90-14dd-4df9-9bcd-10c2c666cae7\") " pod="openshift-image-registry/image-registry-697d97f7c8-f6wfx" Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.583838 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f6wfx\" (UID: \"79162d90-14dd-4df9-9bcd-10c2c666cae7\") " pod="openshift-image-registry/image-registry-697d97f7c8-f6wfx" Dec 11 13:49:06 crc 
kubenswrapper[5050]: I1211 13:49:06.583879 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/79162d90-14dd-4df9-9bcd-10c2c666cae7-registry-tls\") pod \"image-registry-697d97f7c8-f6wfx\" (UID: \"79162d90-14dd-4df9-9bcd-10c2c666cae7\") " pod="openshift-image-registry/image-registry-697d97f7c8-f6wfx" Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.583958 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/79162d90-14dd-4df9-9bcd-10c2c666cae7-trusted-ca\") pod \"image-registry-697d97f7c8-f6wfx\" (UID: \"79162d90-14dd-4df9-9bcd-10c2c666cae7\") " pod="openshift-image-registry/image-registry-697d97f7c8-f6wfx" Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.583989 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56fqp\" (UniqueName: \"kubernetes.io/projected/79162d90-14dd-4df9-9bcd-10c2c666cae7-kube-api-access-56fqp\") pod \"image-registry-697d97f7c8-f6wfx\" (UID: \"79162d90-14dd-4df9-9bcd-10c2c666cae7\") " pod="openshift-image-registry/image-registry-697d97f7c8-f6wfx" Dec 11 13:49:06 crc kubenswrapper[5050]: E1211 13:49:06.584943 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 13:49:07.08491196 +0000 UTC m=+37.928634546 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-f6wfx" (UID: "79162d90-14dd-4df9-9bcd-10c2c666cae7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 13:49:06 crc kubenswrapper[5050]: W1211 13:49:06.596992 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1d89350d_55e9_4ef6_8182_287894b6c14b.slice/crio-b0d1ed8ee2fff94954760e62ba624fe15d71a8f47a2a405afe16a6af64ae3d26 WatchSource:0}: Error finding container b0d1ed8ee2fff94954760e62ba624fe15d71a8f47a2a405afe16a6af64ae3d26: Status 404 returned error can't find the container with id b0d1ed8ee2fff94954760e62ba624fe15d71a8f47a2a405afe16a6af64ae3d26 Dec 11 13:49:06 crc kubenswrapper[5050]: W1211 13:49:06.597882 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod64efd0fc_ec3c_403b_ac98_0546f2affa94.slice/crio-422e41202726ef29aa924098958a486ba0ac5799df0d562126af27e36e56e508 WatchSource:0}: Error finding container 422e41202726ef29aa924098958a486ba0ac5799df0d562126af27e36e56e508: Status 404 returned error can't find the container with id 422e41202726ef29aa924098958a486ba0ac5799df0d562126af27e36e56e508 Dec 11 13:49:06 crc kubenswrapper[5050]: W1211 13:49:06.604931 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod534031e8_875b_4fd2_91cf_bc969db66c22.slice/crio-e04178dfd23c0fd1dbd9c6e138a92012ce35bda038a0b60c57c9da1e338af642 WatchSource:0}: Error finding container 
e04178dfd23c0fd1dbd9c6e138a92012ce35bda038a0b60c57c9da1e338af642: Status 404 returned error can't find the container with id e04178dfd23c0fd1dbd9c6e138a92012ce35bda038a0b60c57c9da1e338af642 Dec 11 13:49:06 crc kubenswrapper[5050]: W1211 13:49:06.606256 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod04af6d44_0ead_4b43_8287_b6bdb88c14ea.slice/crio-6e04db5a019430008fe914463a2214a5dbae0355288bbb68fc2915fd637cac4b WatchSource:0}: Error finding container 6e04db5a019430008fe914463a2214a5dbae0355288bbb68fc2915fd637cac4b: Status 404 returned error can't find the container with id 6e04db5a019430008fe914463a2214a5dbae0355288bbb68fc2915fd637cac4b Dec 11 13:49:06 crc kubenswrapper[5050]: W1211 13:49:06.609519 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9244eaaf_3ab0_426e_a568_aa295b66c246.slice/crio-3e33022faf478d589ed2a5df54f453aa2509e4918e674c1a0eba050198112a2b WatchSource:0}: Error finding container 3e33022faf478d589ed2a5df54f453aa2509e4918e674c1a0eba050198112a2b: Status 404 returned error can't find the container with id 3e33022faf478d589ed2a5df54f453aa2509e4918e674c1a0eba050198112a2b Dec 11 13:49:06 crc kubenswrapper[5050]: W1211 13:49:06.612224 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd40e12c4_8331_453c_b20b_5bbd5e3c2a9c.slice/crio-4751a1aa5a8b225202fd81acbddffdd316744052a0c04f6559cdb482970ed025 WatchSource:0}: Error finding container 4751a1aa5a8b225202fd81acbddffdd316744052a0c04f6559cdb482970ed025: Status 404 returned error can't find the container with id 4751a1aa5a8b225202fd81acbddffdd316744052a0c04f6559cdb482970ed025 Dec 11 13:49:06 crc kubenswrapper[5050]: W1211 13:49:06.618366 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf633d554_794b_4a64_9699_27fbc28a4d7c.slice/crio-8a92413c75471e031f68d61104b2ee4756a365a30a3b4b75a95669d65bac3c95 WatchSource:0}: Error finding container 8a92413c75471e031f68d61104b2ee4756a365a30a3b4b75a95669d65bac3c95: Status 404 returned error can't find the container with id 8a92413c75471e031f68d61104b2ee4756a365a30a3b4b75a95669d65bac3c95 Dec 11 13:49:06 crc kubenswrapper[5050]: W1211 13:49:06.625951 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod112006c5_e3a9_4fbb_813c_f195e98277bc.slice/crio-e6ee575b4ec38fd5bfccabbf466c8a72c9b453628b3950dafb19bf34c6db910b WatchSource:0}: Error finding container e6ee575b4ec38fd5bfccabbf466c8a72c9b453628b3950dafb19bf34c6db910b: Status 404 returned error can't find the container with id e6ee575b4ec38fd5bfccabbf466c8a72c9b453628b3950dafb19bf34c6db910b Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.684890 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 13:49:06 crc kubenswrapper[5050]: E1211 13:49:06.685003 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b 
nodeName:}" failed. No retries permitted until 2025-12-11 13:49:07.1849822 +0000 UTC m=+38.028704786 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.685470 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8c2n\" (UniqueName: \"kubernetes.io/projected/cef7b97f-083b-44ae-9357-94f97b3eb30c-kube-api-access-w8c2n\") pod \"router-default-5444994796-dtlb9\" (UID: \"cef7b97f-083b-44ae-9357-94f97b3eb30c\") " pod="openshift-ingress/router-default-5444994796-dtlb9" Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.685536 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/6eec0801-2040-47f2-88b7-f39b0498746e-srv-cert\") pod \"olm-operator-6b444d44fb-24wkc\" (UID: \"6eec0801-2040-47f2-88b7-f39b0498746e\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-24wkc" Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.685611 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/18d0253b-d7da-4be9-8fc3-6911f1c92076-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-rmlj6\" (UID: \"18d0253b-d7da-4be9-8fc3-6911f1c92076\") " pod="openshift-marketplace/marketplace-operator-79b997595-rmlj6" Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.685645 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/79162d90-14dd-4df9-9bcd-10c2c666cae7-registry-certificates\") pod \"image-registry-697d97f7c8-f6wfx\" (UID: \"79162d90-14dd-4df9-9bcd-10c2c666cae7\") " pod="openshift-image-registry/image-registry-697d97f7c8-f6wfx" Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.685797 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/79162d90-14dd-4df9-9bcd-10c2c666cae7-ca-trust-extracted\") pod \"image-registry-697d97f7c8-f6wfx\" (UID: \"79162d90-14dd-4df9-9bcd-10c2c666cae7\") " pod="openshift-image-registry/image-registry-697d97f7c8-f6wfx" Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.686216 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/cef7b97f-083b-44ae-9357-94f97b3eb30c-stats-auth\") pod \"router-default-5444994796-dtlb9\" (UID: \"cef7b97f-083b-44ae-9357-94f97b3eb30c\") " pod="openshift-ingress/router-default-5444994796-dtlb9" Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.686387 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/79162d90-14dd-4df9-9bcd-10c2c666cae7-ca-trust-extracted\") pod \"image-registry-697d97f7c8-f6wfx\" (UID: \"79162d90-14dd-4df9-9bcd-10c2c666cae7\") " pod="openshift-image-registry/image-registry-697d97f7c8-f6wfx" 
Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.686572 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/79162d90-14dd-4df9-9bcd-10c2c666cae7-installation-pull-secrets\") pod \"image-registry-697d97f7c8-f6wfx\" (UID: \"79162d90-14dd-4df9-9bcd-10c2c666cae7\") " pod="openshift-image-registry/image-registry-697d97f7c8-f6wfx" Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.687222 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/18d0253b-d7da-4be9-8fc3-6911f1c92076-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-rmlj6\" (UID: \"18d0253b-d7da-4be9-8fc3-6911f1c92076\") " pod="openshift-marketplace/marketplace-operator-79b997595-rmlj6" Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.687773 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/79162d90-14dd-4df9-9bcd-10c2c666cae7-bound-sa-token\") pod \"image-registry-697d97f7c8-f6wfx\" (UID: \"79162d90-14dd-4df9-9bcd-10c2c666cae7\") " pod="openshift-image-registry/image-registry-697d97f7c8-f6wfx" Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.687822 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cef7b97f-083b-44ae-9357-94f97b3eb30c-service-ca-bundle\") pod \"router-default-5444994796-dtlb9\" (UID: \"cef7b97f-083b-44ae-9357-94f97b3eb30c\") " pod="openshift-ingress/router-default-5444994796-dtlb9" Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.689363 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/79162d90-14dd-4df9-9bcd-10c2c666cae7-registry-certificates\") pod \"image-registry-697d97f7c8-f6wfx\" (UID: \"79162d90-14dd-4df9-9bcd-10c2c666cae7\") " pod="openshift-image-registry/image-registry-697d97f7c8-f6wfx" Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.689460 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cef7b97f-083b-44ae-9357-94f97b3eb30c-metrics-certs\") pod \"router-default-5444994796-dtlb9\" (UID: \"cef7b97f-083b-44ae-9357-94f97b3eb30c\") " pod="openshift-ingress/router-default-5444994796-dtlb9" Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.689534 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/6eec0801-2040-47f2-88b7-f39b0498746e-profile-collector-cert\") pod \"olm-operator-6b444d44fb-24wkc\" (UID: \"6eec0801-2040-47f2-88b7-f39b0498746e\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-24wkc" Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.689683 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f6wfx\" (UID: \"79162d90-14dd-4df9-9bcd-10c2c666cae7\") " pod="openshift-image-registry/image-registry-697d97f7c8-f6wfx" Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.689772 5050 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pksc7\" (UniqueName: \"kubernetes.io/projected/18d0253b-d7da-4be9-8fc3-6911f1c92076-kube-api-access-pksc7\") pod \"marketplace-operator-79b997595-rmlj6\" (UID: \"18d0253b-d7da-4be9-8fc3-6911f1c92076\") " pod="openshift-marketplace/marketplace-operator-79b997595-rmlj6" Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.689999 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/79162d90-14dd-4df9-9bcd-10c2c666cae7-registry-tls\") pod \"image-registry-697d97f7c8-f6wfx\" (UID: \"79162d90-14dd-4df9-9bcd-10c2c666cae7\") " pod="openshift-image-registry/image-registry-697d97f7c8-f6wfx" Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.690172 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tc9mz\" (UniqueName: \"kubernetes.io/projected/6eec0801-2040-47f2-88b7-f39b0498746e-kube-api-access-tc9mz\") pod \"olm-operator-6b444d44fb-24wkc\" (UID: \"6eec0801-2040-47f2-88b7-f39b0498746e\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-24wkc" Dec 11 13:49:06 crc kubenswrapper[5050]: E1211 13:49:06.690209 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 13:49:07.190185961 +0000 UTC m=+38.033908747 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-f6wfx" (UID: "79162d90-14dd-4df9-9bcd-10c2c666cae7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.690544 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/cef7b97f-083b-44ae-9357-94f97b3eb30c-default-certificate\") pod \"router-default-5444994796-dtlb9\" (UID: \"cef7b97f-083b-44ae-9357-94f97b3eb30c\") " pod="openshift-ingress/router-default-5444994796-dtlb9" Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.690960 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/79162d90-14dd-4df9-9bcd-10c2c666cae7-trusted-ca\") pod \"image-registry-697d97f7c8-f6wfx\" (UID: \"79162d90-14dd-4df9-9bcd-10c2c666cae7\") " pod="openshift-image-registry/image-registry-697d97f7c8-f6wfx" Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.691405 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-56fqp\" (UniqueName: \"kubernetes.io/projected/79162d90-14dd-4df9-9bcd-10c2c666cae7-kube-api-access-56fqp\") pod \"image-registry-697d97f7c8-f6wfx\" (UID: \"79162d90-14dd-4df9-9bcd-10c2c666cae7\") " pod="openshift-image-registry/image-registry-697d97f7c8-f6wfx" Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.691959 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/79162d90-14dd-4df9-9bcd-10c2c666cae7-trusted-ca\") pod 
\"image-registry-697d97f7c8-f6wfx\" (UID: \"79162d90-14dd-4df9-9bcd-10c2c666cae7\") " pod="openshift-image-registry/image-registry-697d97f7c8-f6wfx" Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.692737 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/79162d90-14dd-4df9-9bcd-10c2c666cae7-installation-pull-secrets\") pod \"image-registry-697d97f7c8-f6wfx\" (UID: \"79162d90-14dd-4df9-9bcd-10c2c666cae7\") " pod="openshift-image-registry/image-registry-697d97f7c8-f6wfx" Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.692805 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/79162d90-14dd-4df9-9bcd-10c2c666cae7-registry-tls\") pod \"image-registry-697d97f7c8-f6wfx\" (UID: \"79162d90-14dd-4df9-9bcd-10c2c666cae7\") " pod="openshift-image-registry/image-registry-697d97f7c8-f6wfx" Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.706409 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/79162d90-14dd-4df9-9bcd-10c2c666cae7-bound-sa-token\") pod \"image-registry-697d97f7c8-f6wfx\" (UID: \"79162d90-14dd-4df9-9bcd-10c2c666cae7\") " pod="openshift-image-registry/image-registry-697d97f7c8-f6wfx" Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.706539 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-56fqp\" (UniqueName: \"kubernetes.io/projected/79162d90-14dd-4df9-9bcd-10c2c666cae7-kube-api-access-56fqp\") pod \"image-registry-697d97f7c8-f6wfx\" (UID: \"79162d90-14dd-4df9-9bcd-10c2c666cae7\") " pod="openshift-image-registry/image-registry-697d97f7c8-f6wfx" Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.793936 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 13:49:06 crc kubenswrapper[5050]: E1211 13:49:06.794203 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 13:49:07.294168757 +0000 UTC m=+38.137891343 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.794265 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/8f687791-2831-4665-bb32-ee48ab6e70be-ready\") pod \"cni-sysctl-allowlist-ds-p9jmx\" (UID: \"8f687791-2831-4665-bb32-ee48ab6e70be\") " pod="openshift-multus/cni-sysctl-allowlist-ds-p9jmx" Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.794308 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/81255772-fddc-4936-8de3-da4649c32d1f-profile-collector-cert\") pod \"catalog-operator-68c6474976-8qmpk\" (UID: \"81255772-fddc-4936-8de3-da4649c32d1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8qmpk" Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.794351 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4rc7k\" (UniqueName: \"kubernetes.io/projected/dbd5b107-5d08-43af-881c-11540f395267-kube-api-access-4rc7k\") pod \"packageserver-d55dfcdfc-r54sd\" (UID: \"dbd5b107-5d08-43af-881c-11540f395267\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r54sd" Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.794404 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/18d0253b-d7da-4be9-8fc3-6911f1c92076-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-rmlj6\" (UID: \"18d0253b-d7da-4be9-8fc3-6911f1c92076\") " pod="openshift-marketplace/marketplace-operator-79b997595-rmlj6" Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.794438 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87bs4\" (UniqueName: \"kubernetes.io/projected/fe7b4bda-d33b-4c5a-a6f7-0f074d2ae3f9-kube-api-access-87bs4\") pod \"machine-config-server-vftb9\" (UID: \"fe7b4bda-d33b-4c5a-a6f7-0f074d2ae3f9\") " pod="openshift-machine-config-operator/machine-config-server-vftb9" Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.794481 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cef7b97f-083b-44ae-9357-94f97b3eb30c-service-ca-bundle\") pod \"router-default-5444994796-dtlb9\" (UID: \"cef7b97f-083b-44ae-9357-94f97b3eb30c\") " pod="openshift-ingress/router-default-5444994796-dtlb9" Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.794502 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cef7b97f-083b-44ae-9357-94f97b3eb30c-metrics-certs\") pod \"router-default-5444994796-dtlb9\" (UID: \"cef7b97f-083b-44ae-9357-94f97b3eb30c\") " pod="openshift-ingress/router-default-5444994796-dtlb9" Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.794523 5050 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/6eec0801-2040-47f2-88b7-f39b0498746e-profile-collector-cert\") pod \"olm-operator-6b444d44fb-24wkc\" (UID: \"6eec0801-2040-47f2-88b7-f39b0498746e\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-24wkc" Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.794545 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/2b344bba-8e1d-415b-9b5f-e21d3144fe42-mountpoint-dir\") pod \"csi-hostpathplugin-mbbdj\" (UID: \"2b344bba-8e1d-415b-9b5f-e21d3144fe42\") " pod="hostpath-provisioner/csi-hostpathplugin-mbbdj" Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.794587 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f6wfx\" (UID: \"79162d90-14dd-4df9-9bcd-10c2c666cae7\") " pod="openshift-image-registry/image-registry-697d97f7c8-f6wfx" Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.794943 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/2b344bba-8e1d-415b-9b5f-e21d3144fe42-csi-data-dir\") pod \"csi-hostpathplugin-mbbdj\" (UID: \"2b344bba-8e1d-415b-9b5f-e21d3144fe42\") " pod="hostpath-provisioner/csi-hostpathplugin-mbbdj" Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.795002 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/fe7b4bda-d33b-4c5a-a6f7-0f074d2ae3f9-certs\") pod \"machine-config-server-vftb9\" (UID: \"fe7b4bda-d33b-4c5a-a6f7-0f074d2ae3f9\") " pod="openshift-machine-config-operator/machine-config-server-vftb9" Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.795087 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pksc7\" (UniqueName: \"kubernetes.io/projected/18d0253b-d7da-4be9-8fc3-6911f1c92076-kube-api-access-pksc7\") pod \"marketplace-operator-79b997595-rmlj6\" (UID: \"18d0253b-d7da-4be9-8fc3-6911f1c92076\") " pod="openshift-marketplace/marketplace-operator-79b997595-rmlj6" Dec 11 13:49:06 crc kubenswrapper[5050]: E1211 13:49:06.795167 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 13:49:07.295142043 +0000 UTC m=+38.138864639 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-f6wfx" (UID: "79162d90-14dd-4df9-9bcd-10c2c666cae7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.795227 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/164f4fce-a190-451c-b056-103eadc5bb6d-signing-cabundle\") pod \"service-ca-9c57cc56f-6t4gf\" (UID: \"164f4fce-a190-451c-b056-103eadc5bb6d\") " pod="openshift-service-ca/service-ca-9c57cc56f-6t4gf" Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.796080 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7btbj\" (UniqueName: \"kubernetes.io/projected/66dfe3f5-9e7a-4e1c-b03a-b6f01dab6ef4-kube-api-access-7btbj\") pod \"dns-default-25p7l\" (UID: \"66dfe3f5-9e7a-4e1c-b03a-b6f01dab6ef4\") " pod="openshift-dns/dns-default-25p7l" Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.796202 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tc9mz\" (UniqueName: \"kubernetes.io/projected/6eec0801-2040-47f2-88b7-f39b0498746e-kube-api-access-tc9mz\") pod \"olm-operator-6b444d44fb-24wkc\" (UID: \"6eec0801-2040-47f2-88b7-f39b0498746e\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-24wkc" Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.796338 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a182f606-2976-4cb9-8175-65b012a06596-serving-cert\") pod \"service-ca-operator-777779d784-9ttg9\" (UID: \"a182f606-2976-4cb9-8175-65b012a06596\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-9ttg9" Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.796441 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/cef7b97f-083b-44ae-9357-94f97b3eb30c-default-certificate\") pod \"router-default-5444994796-dtlb9\" (UID: \"cef7b97f-083b-44ae-9357-94f97b3eb30c\") " pod="openshift-ingress/router-default-5444994796-dtlb9" Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.796582 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4x4qj\" (UniqueName: \"kubernetes.io/projected/2b344bba-8e1d-415b-9b5f-e21d3144fe42-kube-api-access-4x4qj\") pod \"csi-hostpathplugin-mbbdj\" (UID: \"2b344bba-8e1d-415b-9b5f-e21d3144fe42\") " pod="hostpath-provisioner/csi-hostpathplugin-mbbdj" Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.796626 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dfc40369-6f7d-4ab2-89b0-60bfc12e9bfe-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-t75hp\" (UID: \"dfc40369-6f7d-4ab2-89b0-60bfc12e9bfe\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-t75hp" Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.796753 5050 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/cf02ac59-1ff0-46d9-a346-cfdb183dee53-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-tkg2n\" (UID: \"cf02ac59-1ff0-46d9-a346-cfdb183dee53\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tkg2n" Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.796799 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/66dfe3f5-9e7a-4e1c-b03a-b6f01dab6ef4-metrics-tls\") pod \"dns-default-25p7l\" (UID: \"66dfe3f5-9e7a-4e1c-b03a-b6f01dab6ef4\") " pod="openshift-dns/dns-default-25p7l" Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.796879 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cef7b97f-083b-44ae-9357-94f97b3eb30c-service-ca-bundle\") pod \"router-default-5444994796-dtlb9\" (UID: \"cef7b97f-083b-44ae-9357-94f97b3eb30c\") " pod="openshift-ingress/router-default-5444994796-dtlb9" Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.796907 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/dbd5b107-5d08-43af-881c-11540f395267-tmpfs\") pod \"packageserver-d55dfcdfc-r54sd\" (UID: \"dbd5b107-5d08-43af-881c-11540f395267\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r54sd" Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.796977 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w8c2n\" (UniqueName: \"kubernetes.io/projected/cef7b97f-083b-44ae-9357-94f97b3eb30c-kube-api-access-w8c2n\") pod \"router-default-5444994796-dtlb9\" (UID: \"cef7b97f-083b-44ae-9357-94f97b3eb30c\") " pod="openshift-ingress/router-default-5444994796-dtlb9" Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.797025 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/6eec0801-2040-47f2-88b7-f39b0498746e-srv-cert\") pod \"olm-operator-6b444d44fb-24wkc\" (UID: \"6eec0801-2040-47f2-88b7-f39b0498746e\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-24wkc" Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.797056 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/fe7b4bda-d33b-4c5a-a6f7-0f074d2ae3f9-node-bootstrap-token\") pod \"machine-config-server-vftb9\" (UID: \"fe7b4bda-d33b-4c5a-a6f7-0f074d2ae3f9\") " pod="openshift-machine-config-operator/machine-config-server-vftb9" Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.797080 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/81255772-fddc-4936-8de3-da4649c32d1f-srv-cert\") pod \"catalog-operator-68c6474976-8qmpk\" (UID: \"81255772-fddc-4936-8de3-da4649c32d1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8qmpk" Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.797123 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2d6m\" (UniqueName: 
\"kubernetes.io/projected/8f687791-2831-4665-bb32-ee48ab6e70be-kube-api-access-s2d6m\") pod \"cni-sysctl-allowlist-ds-p9jmx\" (UID: \"8f687791-2831-4665-bb32-ee48ab6e70be\") " pod="openshift-multus/cni-sysctl-allowlist-ds-p9jmx" Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.797195 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/8f687791-2831-4665-bb32-ee48ab6e70be-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-p9jmx\" (UID: \"8f687791-2831-4665-bb32-ee48ab6e70be\") " pod="openshift-multus/cni-sysctl-allowlist-ds-p9jmx" Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.797217 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/8f687791-2831-4665-bb32-ee48ab6e70be-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-p9jmx\" (UID: \"8f687791-2831-4665-bb32-ee48ab6e70be\") " pod="openshift-multus/cni-sysctl-allowlist-ds-p9jmx" Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.797354 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjx65\" (UniqueName: \"kubernetes.io/projected/cff9c4b2-d51d-4a2d-b325-522d1b2a4442-kube-api-access-fjx65\") pod \"ingress-canary-pqplb\" (UID: \"cff9c4b2-d51d-4a2d-b325-522d1b2a4442\") " pod="openshift-ingress-canary/ingress-canary-pqplb" Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.797391 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dfc40369-6f7d-4ab2-89b0-60bfc12e9bfe-config\") pod \"authentication-operator-69f744f599-t75hp\" (UID: \"dfc40369-6f7d-4ab2-89b0-60bfc12e9bfe\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-t75hp" Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.797475 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dfc40369-6f7d-4ab2-89b0-60bfc12e9bfe-serving-cert\") pod \"authentication-operator-69f744f599-t75hp\" (UID: \"dfc40369-6f7d-4ab2-89b0-60bfc12e9bfe\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-t75hp" Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.797504 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/18d0253b-d7da-4be9-8fc3-6911f1c92076-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-rmlj6\" (UID: \"18d0253b-d7da-4be9-8fc3-6911f1c92076\") " pod="openshift-marketplace/marketplace-operator-79b997595-rmlj6" Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.797532 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2r6mk\" (UniqueName: \"kubernetes.io/projected/dfc40369-6f7d-4ab2-89b0-60bfc12e9bfe-kube-api-access-2r6mk\") pod \"authentication-operator-69f744f599-t75hp\" (UID: \"dfc40369-6f7d-4ab2-89b0-60bfc12e9bfe\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-t75hp" Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.797594 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rg4g4\" (UniqueName: 
\"kubernetes.io/projected/80709272-78a8-46c2-82af-9109c6f2048f-kube-api-access-rg4g4\") pod \"multus-admission-controller-857f4d67dd-bxjjm\" (UID: \"80709272-78a8-46c2-82af-9109c6f2048f\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-bxjjm" Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.797658 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/d1f35830-d883-4b41-ab97-7c382dec0387-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-9jns7\" (UID: \"d1f35830-d883-4b41-ab97-7c382dec0387\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9jns7" Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.797685 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/80709272-78a8-46c2-82af-9109c6f2048f-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-bxjjm\" (UID: \"80709272-78a8-46c2-82af-9109c6f2048f\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-bxjjm" Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.797780 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/78f80616-d1e0-4152-a7fb-99a512670f27-config-volume\") pod \"collect-profiles-29424345-gxsbc\" (UID: \"78f80616-d1e0-4152-a7fb-99a512670f27\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424345-gxsbc" Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.797841 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/cf02ac59-1ff0-46d9-a346-cfdb183dee53-proxy-tls\") pod \"machine-config-controller-84d6567774-tkg2n\" (UID: \"cf02ac59-1ff0-46d9-a346-cfdb183dee53\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tkg2n" Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.797876 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/66dfe3f5-9e7a-4e1c-b03a-b6f01dab6ef4-config-volume\") pod \"dns-default-25p7l\" (UID: \"66dfe3f5-9e7a-4e1c-b03a-b6f01dab6ef4\") " pod="openshift-dns/dns-default-25p7l" Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.797903 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a182f606-2976-4cb9-8175-65b012a06596-config\") pod \"service-ca-operator-777779d784-9ttg9\" (UID: \"a182f606-2976-4cb9-8175-65b012a06596\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-9ttg9" Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.797969 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/2b344bba-8e1d-415b-9b5f-e21d3144fe42-socket-dir\") pod \"csi-hostpathplugin-mbbdj\" (UID: \"2b344bba-8e1d-415b-9b5f-e21d3144fe42\") " pod="hostpath-provisioner/csi-hostpathplugin-mbbdj" Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.797996 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrxgh\" (UniqueName: 
\"kubernetes.io/projected/164f4fce-a190-451c-b056-103eadc5bb6d-kube-api-access-jrxgh\") pod \"service-ca-9c57cc56f-6t4gf\" (UID: \"164f4fce-a190-451c-b056-103eadc5bb6d\") " pod="openshift-service-ca/service-ca-9c57cc56f-6t4gf" Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.798072 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6hw7n\" (UniqueName: \"kubernetes.io/projected/a182f606-2976-4cb9-8175-65b012a06596-kube-api-access-6hw7n\") pod \"service-ca-operator-777779d784-9ttg9\" (UID: \"a182f606-2976-4cb9-8175-65b012a06596\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-9ttg9" Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.798136 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/cef7b97f-083b-44ae-9357-94f97b3eb30c-stats-auth\") pod \"router-default-5444994796-dtlb9\" (UID: \"cef7b97f-083b-44ae-9357-94f97b3eb30c\") " pod="openshift-ingress/router-default-5444994796-dtlb9" Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.798161 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cff9c4b2-d51d-4a2d-b325-522d1b2a4442-cert\") pod \"ingress-canary-pqplb\" (UID: \"cff9c4b2-d51d-4a2d-b325-522d1b2a4442\") " pod="openshift-ingress-canary/ingress-canary-pqplb" Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.798288 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzwlc\" (UniqueName: \"kubernetes.io/projected/d1f35830-d883-4b41-ab97-7c382dec0387-kube-api-access-fzwlc\") pod \"package-server-manager-789f6589d5-9jns7\" (UID: \"d1f35830-d883-4b41-ab97-7c382dec0387\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9jns7" Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.798331 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/164f4fce-a190-451c-b056-103eadc5bb6d-signing-key\") pod \"service-ca-9c57cc56f-6t4gf\" (UID: \"164f4fce-a190-451c-b056-103eadc5bb6d\") " pod="openshift-service-ca/service-ca-9c57cc56f-6t4gf" Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.798358 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/2b344bba-8e1d-415b-9b5f-e21d3144fe42-registration-dir\") pod \"csi-hostpathplugin-mbbdj\" (UID: \"2b344bba-8e1d-415b-9b5f-e21d3144fe42\") " pod="hostpath-provisioner/csi-hostpathplugin-mbbdj" Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.798554 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/78f80616-d1e0-4152-a7fb-99a512670f27-secret-volume\") pod \"collect-profiles-29424345-gxsbc\" (UID: \"78f80616-d1e0-4152-a7fb-99a512670f27\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424345-gxsbc" Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.798581 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dg672\" (UniqueName: \"kubernetes.io/projected/78f80616-d1e0-4152-a7fb-99a512670f27-kube-api-access-dg672\") pod \"collect-profiles-29424345-gxsbc\" (UID: 
\"78f80616-d1e0-4152-a7fb-99a512670f27\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424345-gxsbc" Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.799388 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96nqr\" (UniqueName: \"kubernetes.io/projected/cf02ac59-1ff0-46d9-a346-cfdb183dee53-kube-api-access-96nqr\") pod \"machine-config-controller-84d6567774-tkg2n\" (UID: \"cf02ac59-1ff0-46d9-a346-cfdb183dee53\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tkg2n" Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.799481 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/dbd5b107-5d08-43af-881c-11540f395267-apiservice-cert\") pod \"packageserver-d55dfcdfc-r54sd\" (UID: \"dbd5b107-5d08-43af-881c-11540f395267\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r54sd" Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.799516 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dfc40369-6f7d-4ab2-89b0-60bfc12e9bfe-service-ca-bundle\") pod \"authentication-operator-69f744f599-t75hp\" (UID: \"dfc40369-6f7d-4ab2-89b0-60bfc12e9bfe\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-t75hp" Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.799610 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/2b344bba-8e1d-415b-9b5f-e21d3144fe42-plugins-dir\") pod \"csi-hostpathplugin-mbbdj\" (UID: \"2b344bba-8e1d-415b-9b5f-e21d3144fe42\") " pod="hostpath-provisioner/csi-hostpathplugin-mbbdj" Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.800233 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6cjz\" (UniqueName: \"kubernetes.io/projected/81255772-fddc-4936-8de3-da4649c32d1f-kube-api-access-t6cjz\") pod \"catalog-operator-68c6474976-8qmpk\" (UID: \"81255772-fddc-4936-8de3-da4649c32d1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8qmpk" Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.800327 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/dbd5b107-5d08-43af-881c-11540f395267-webhook-cert\") pod \"packageserver-d55dfcdfc-r54sd\" (UID: \"dbd5b107-5d08-43af-881c-11540f395267\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r54sd" Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.803348 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/18d0253b-d7da-4be9-8fc3-6911f1c92076-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-rmlj6\" (UID: \"18d0253b-d7da-4be9-8fc3-6911f1c92076\") " pod="openshift-marketplace/marketplace-operator-79b997595-rmlj6" Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.804299 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/18d0253b-d7da-4be9-8fc3-6911f1c92076-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-rmlj6\" (UID: 
\"18d0253b-d7da-4be9-8fc3-6911f1c92076\") " pod="openshift-marketplace/marketplace-operator-79b997595-rmlj6" Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.804523 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/cef7b97f-083b-44ae-9357-94f97b3eb30c-default-certificate\") pod \"router-default-5444994796-dtlb9\" (UID: \"cef7b97f-083b-44ae-9357-94f97b3eb30c\") " pod="openshift-ingress/router-default-5444994796-dtlb9" Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.804642 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cef7b97f-083b-44ae-9357-94f97b3eb30c-metrics-certs\") pod \"router-default-5444994796-dtlb9\" (UID: \"cef7b97f-083b-44ae-9357-94f97b3eb30c\") " pod="openshift-ingress/router-default-5444994796-dtlb9" Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.810588 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/6eec0801-2040-47f2-88b7-f39b0498746e-srv-cert\") pod \"olm-operator-6b444d44fb-24wkc\" (UID: \"6eec0801-2040-47f2-88b7-f39b0498746e\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-24wkc" Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.813318 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/6eec0801-2040-47f2-88b7-f39b0498746e-profile-collector-cert\") pod \"olm-operator-6b444d44fb-24wkc\" (UID: \"6eec0801-2040-47f2-88b7-f39b0498746e\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-24wkc" Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.813646 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/cef7b97f-083b-44ae-9357-94f97b3eb30c-stats-auth\") pod \"router-default-5444994796-dtlb9\" (UID: \"cef7b97f-083b-44ae-9357-94f97b3eb30c\") " pod="openshift-ingress/router-default-5444994796-dtlb9" Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.816248 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w8c2n\" (UniqueName: \"kubernetes.io/projected/cef7b97f-083b-44ae-9357-94f97b3eb30c-kube-api-access-w8c2n\") pod \"router-default-5444994796-dtlb9\" (UID: \"cef7b97f-083b-44ae-9357-94f97b3eb30c\") " pod="openshift-ingress/router-default-5444994796-dtlb9" Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.818495 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pksc7\" (UniqueName: \"kubernetes.io/projected/18d0253b-d7da-4be9-8fc3-6911f1c92076-kube-api-access-pksc7\") pod \"marketplace-operator-79b997595-rmlj6\" (UID: \"18d0253b-d7da-4be9-8fc3-6911f1c92076\") " pod="openshift-marketplace/marketplace-operator-79b997595-rmlj6" Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.902137 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 13:49:06 crc kubenswrapper[5050]: E1211 13:49:06.902340 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 13:49:07.402314445 +0000 UTC m=+38.246037031 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.902990 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4x4qj\" (UniqueName: \"kubernetes.io/projected/2b344bba-8e1d-415b-9b5f-e21d3144fe42-kube-api-access-4x4qj\") pod \"csi-hostpathplugin-mbbdj\" (UID: \"2b344bba-8e1d-415b-9b5f-e21d3144fe42\") " pod="hostpath-provisioner/csi-hostpathplugin-mbbdj" Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.903071 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dfc40369-6f7d-4ab2-89b0-60bfc12e9bfe-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-t75hp\" (UID: \"dfc40369-6f7d-4ab2-89b0-60bfc12e9bfe\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-t75hp" Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.903112 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/cf02ac59-1ff0-46d9-a346-cfdb183dee53-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-tkg2n\" (UID: \"cf02ac59-1ff0-46d9-a346-cfdb183dee53\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tkg2n" Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.903143 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/66dfe3f5-9e7a-4e1c-b03a-b6f01dab6ef4-metrics-tls\") pod \"dns-default-25p7l\" (UID: \"66dfe3f5-9e7a-4e1c-b03a-b6f01dab6ef4\") " pod="openshift-dns/dns-default-25p7l" Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.903171 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/dbd5b107-5d08-43af-881c-11540f395267-tmpfs\") pod \"packageserver-d55dfcdfc-r54sd\" (UID: \"dbd5b107-5d08-43af-881c-11540f395267\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r54sd" Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.903226 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/fe7b4bda-d33b-4c5a-a6f7-0f074d2ae3f9-node-bootstrap-token\") pod \"machine-config-server-vftb9\" (UID: \"fe7b4bda-d33b-4c5a-a6f7-0f074d2ae3f9\") " pod="openshift-machine-config-operator/machine-config-server-vftb9" Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.903512 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/81255772-fddc-4936-8de3-da4649c32d1f-srv-cert\") pod \"catalog-operator-68c6474976-8qmpk\" (UID: \"81255772-fddc-4936-8de3-da4649c32d1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8qmpk" Dec 11 13:49:06 crc 
kubenswrapper[5050]: I1211 13:49:06.903591 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2d6m\" (UniqueName: \"kubernetes.io/projected/8f687791-2831-4665-bb32-ee48ab6e70be-kube-api-access-s2d6m\") pod \"cni-sysctl-allowlist-ds-p9jmx\" (UID: \"8f687791-2831-4665-bb32-ee48ab6e70be\") " pod="openshift-multus/cni-sysctl-allowlist-ds-p9jmx" Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.903640 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/8f687791-2831-4665-bb32-ee48ab6e70be-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-p9jmx\" (UID: \"8f687791-2831-4665-bb32-ee48ab6e70be\") " pod="openshift-multus/cni-sysctl-allowlist-ds-p9jmx" Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.903676 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/8f687791-2831-4665-bb32-ee48ab6e70be-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-p9jmx\" (UID: \"8f687791-2831-4665-bb32-ee48ab6e70be\") " pod="openshift-multus/cni-sysctl-allowlist-ds-p9jmx" Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.903727 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fjx65\" (UniqueName: \"kubernetes.io/projected/cff9c4b2-d51d-4a2d-b325-522d1b2a4442-kube-api-access-fjx65\") pod \"ingress-canary-pqplb\" (UID: \"cff9c4b2-d51d-4a2d-b325-522d1b2a4442\") " pod="openshift-ingress-canary/ingress-canary-pqplb" Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.903779 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dfc40369-6f7d-4ab2-89b0-60bfc12e9bfe-config\") pod \"authentication-operator-69f744f599-t75hp\" (UID: \"dfc40369-6f7d-4ab2-89b0-60bfc12e9bfe\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-t75hp" Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.903828 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dfc40369-6f7d-4ab2-89b0-60bfc12e9bfe-serving-cert\") pod \"authentication-operator-69f744f599-t75hp\" (UID: \"dfc40369-6f7d-4ab2-89b0-60bfc12e9bfe\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-t75hp" Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.903871 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2r6mk\" (UniqueName: \"kubernetes.io/projected/dfc40369-6f7d-4ab2-89b0-60bfc12e9bfe-kube-api-access-2r6mk\") pod \"authentication-operator-69f744f599-t75hp\" (UID: \"dfc40369-6f7d-4ab2-89b0-60bfc12e9bfe\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-t75hp" Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.903950 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rg4g4\" (UniqueName: \"kubernetes.io/projected/80709272-78a8-46c2-82af-9109c6f2048f-kube-api-access-rg4g4\") pod \"multus-admission-controller-857f4d67dd-bxjjm\" (UID: \"80709272-78a8-46c2-82af-9109c6f2048f\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-bxjjm" Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.903999 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/d1f35830-d883-4b41-ab97-7c382dec0387-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-9jns7\" (UID: \"d1f35830-d883-4b41-ab97-7c382dec0387\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9jns7" Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.904080 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/80709272-78a8-46c2-82af-9109c6f2048f-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-bxjjm\" (UID: \"80709272-78a8-46c2-82af-9109c6f2048f\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-bxjjm" Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.904086 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/dbd5b107-5d08-43af-881c-11540f395267-tmpfs\") pod \"packageserver-d55dfcdfc-r54sd\" (UID: \"dbd5b107-5d08-43af-881c-11540f395267\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r54sd" Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.904120 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/78f80616-d1e0-4152-a7fb-99a512670f27-config-volume\") pod \"collect-profiles-29424345-gxsbc\" (UID: \"78f80616-d1e0-4152-a7fb-99a512670f27\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424345-gxsbc" Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.904215 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/66dfe3f5-9e7a-4e1c-b03a-b6f01dab6ef4-config-volume\") pod \"dns-default-25p7l\" (UID: \"66dfe3f5-9e7a-4e1c-b03a-b6f01dab6ef4\") " pod="openshift-dns/dns-default-25p7l" Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.904310 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a182f606-2976-4cb9-8175-65b012a06596-config\") pod \"service-ca-operator-777779d784-9ttg9\" (UID: \"a182f606-2976-4cb9-8175-65b012a06596\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-9ttg9" Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.905105 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/cf02ac59-1ff0-46d9-a346-cfdb183dee53-proxy-tls\") pod \"machine-config-controller-84d6567774-tkg2n\" (UID: \"cf02ac59-1ff0-46d9-a346-cfdb183dee53\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tkg2n" Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.905210 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/2b344bba-8e1d-415b-9b5f-e21d3144fe42-socket-dir\") pod \"csi-hostpathplugin-mbbdj\" (UID: \"2b344bba-8e1d-415b-9b5f-e21d3144fe42\") " pod="hostpath-provisioner/csi-hostpathplugin-mbbdj" Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.905376 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/8f687791-2831-4665-bb32-ee48ab6e70be-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-p9jmx\" (UID: \"8f687791-2831-4665-bb32-ee48ab6e70be\") " pod="openshift-multus/cni-sysctl-allowlist-ds-p9jmx" Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 
13:49:06.905643 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jrxgh\" (UniqueName: \"kubernetes.io/projected/164f4fce-a190-451c-b056-103eadc5bb6d-kube-api-access-jrxgh\") pod \"service-ca-9c57cc56f-6t4gf\" (UID: \"164f4fce-a190-451c-b056-103eadc5bb6d\") " pod="openshift-service-ca/service-ca-9c57cc56f-6t4gf" Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.905739 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/8f687791-2831-4665-bb32-ee48ab6e70be-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-p9jmx\" (UID: \"8f687791-2831-4665-bb32-ee48ab6e70be\") " pod="openshift-multus/cni-sysctl-allowlist-ds-p9jmx" Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.905573 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/2b344bba-8e1d-415b-9b5f-e21d3144fe42-socket-dir\") pod \"csi-hostpathplugin-mbbdj\" (UID: \"2b344bba-8e1d-415b-9b5f-e21d3144fe42\") " pod="hostpath-provisioner/csi-hostpathplugin-mbbdj" Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.905968 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6hw7n\" (UniqueName: \"kubernetes.io/projected/a182f606-2976-4cb9-8175-65b012a06596-kube-api-access-6hw7n\") pod \"service-ca-operator-777779d784-9ttg9\" (UID: \"a182f606-2976-4cb9-8175-65b012a06596\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-9ttg9" Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.906241 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cff9c4b2-d51d-4a2d-b325-522d1b2a4442-cert\") pod \"ingress-canary-pqplb\" (UID: \"cff9c4b2-d51d-4a2d-b325-522d1b2a4442\") " pod="openshift-ingress-canary/ingress-canary-pqplb" Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.906343 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fzwlc\" (UniqueName: \"kubernetes.io/projected/d1f35830-d883-4b41-ab97-7c382dec0387-kube-api-access-fzwlc\") pod \"package-server-manager-789f6589d5-9jns7\" (UID: \"d1f35830-d883-4b41-ab97-7c382dec0387\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9jns7" Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.906421 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/2b344bba-8e1d-415b-9b5f-e21d3144fe42-registration-dir\") pod \"csi-hostpathplugin-mbbdj\" (UID: \"2b344bba-8e1d-415b-9b5f-e21d3144fe42\") " pod="hostpath-provisioner/csi-hostpathplugin-mbbdj" Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.906501 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/164f4fce-a190-451c-b056-103eadc5bb6d-signing-key\") pod \"service-ca-9c57cc56f-6t4gf\" (UID: \"164f4fce-a190-451c-b056-103eadc5bb6d\") " pod="openshift-service-ca/service-ca-9c57cc56f-6t4gf" Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.907128 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dfc40369-6f7d-4ab2-89b0-60bfc12e9bfe-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-t75hp\" (UID: \"dfc40369-6f7d-4ab2-89b0-60bfc12e9bfe\") " 
pod="openshift-authentication-operator/authentication-operator-69f744f599-t75hp" Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.907289 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/2b344bba-8e1d-415b-9b5f-e21d3144fe42-registration-dir\") pod \"csi-hostpathplugin-mbbdj\" (UID: \"2b344bba-8e1d-415b-9b5f-e21d3144fe42\") " pod="hostpath-provisioner/csi-hostpathplugin-mbbdj" Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.907467 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/78f80616-d1e0-4152-a7fb-99a512670f27-secret-volume\") pod \"collect-profiles-29424345-gxsbc\" (UID: \"78f80616-d1e0-4152-a7fb-99a512670f27\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424345-gxsbc" Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.907537 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dg672\" (UniqueName: \"kubernetes.io/projected/78f80616-d1e0-4152-a7fb-99a512670f27-kube-api-access-dg672\") pod \"collect-profiles-29424345-gxsbc\" (UID: \"78f80616-d1e0-4152-a7fb-99a512670f27\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424345-gxsbc" Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.907626 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-96nqr\" (UniqueName: \"kubernetes.io/projected/cf02ac59-1ff0-46d9-a346-cfdb183dee53-kube-api-access-96nqr\") pod \"machine-config-controller-84d6567774-tkg2n\" (UID: \"cf02ac59-1ff0-46d9-a346-cfdb183dee53\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tkg2n" Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.907706 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/dbd5b107-5d08-43af-881c-11540f395267-apiservice-cert\") pod \"packageserver-d55dfcdfc-r54sd\" (UID: \"dbd5b107-5d08-43af-881c-11540f395267\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r54sd" Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.907772 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dfc40369-6f7d-4ab2-89b0-60bfc12e9bfe-service-ca-bundle\") pod \"authentication-operator-69f744f599-t75hp\" (UID: \"dfc40369-6f7d-4ab2-89b0-60bfc12e9bfe\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-t75hp" Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.907879 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/2b344bba-8e1d-415b-9b5f-e21d3144fe42-plugins-dir\") pod \"csi-hostpathplugin-mbbdj\" (UID: \"2b344bba-8e1d-415b-9b5f-e21d3144fe42\") " pod="hostpath-provisioner/csi-hostpathplugin-mbbdj" Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.907938 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t6cjz\" (UniqueName: \"kubernetes.io/projected/81255772-fddc-4936-8de3-da4649c32d1f-kube-api-access-t6cjz\") pod \"catalog-operator-68c6474976-8qmpk\" (UID: \"81255772-fddc-4936-8de3-da4649c32d1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8qmpk" Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.908055 5050 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/dbd5b107-5d08-43af-881c-11540f395267-webhook-cert\") pod \"packageserver-d55dfcdfc-r54sd\" (UID: \"dbd5b107-5d08-43af-881c-11540f395267\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r54sd" Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.908917 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dfc40369-6f7d-4ab2-89b0-60bfc12e9bfe-config\") pod \"authentication-operator-69f744f599-t75hp\" (UID: \"dfc40369-6f7d-4ab2-89b0-60bfc12e9bfe\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-t75hp" Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.909326 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/81255772-fddc-4936-8de3-da4649c32d1f-srv-cert\") pod \"catalog-operator-68c6474976-8qmpk\" (UID: \"81255772-fddc-4936-8de3-da4649c32d1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8qmpk" Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.909441 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/8f687791-2831-4665-bb32-ee48ab6e70be-ready\") pod \"cni-sysctl-allowlist-ds-p9jmx\" (UID: \"8f687791-2831-4665-bb32-ee48ab6e70be\") " pod="openshift-multus/cni-sysctl-allowlist-ds-p9jmx" Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.909506 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/81255772-fddc-4936-8de3-da4649c32d1f-profile-collector-cert\") pod \"catalog-operator-68c6474976-8qmpk\" (UID: \"81255772-fddc-4936-8de3-da4649c32d1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8qmpk" Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.910146 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/78f80616-d1e0-4152-a7fb-99a512670f27-config-volume\") pod \"collect-profiles-29424345-gxsbc\" (UID: \"78f80616-d1e0-4152-a7fb-99a512670f27\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424345-gxsbc" Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.910292 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/fe7b4bda-d33b-4c5a-a6f7-0f074d2ae3f9-node-bootstrap-token\") pod \"machine-config-server-vftb9\" (UID: \"fe7b4bda-d33b-4c5a-a6f7-0f074d2ae3f9\") " pod="openshift-machine-config-operator/machine-config-server-vftb9" Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.910324 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/2b344bba-8e1d-415b-9b5f-e21d3144fe42-plugins-dir\") pod \"csi-hostpathplugin-mbbdj\" (UID: \"2b344bba-8e1d-415b-9b5f-e21d3144fe42\") " pod="hostpath-provisioner/csi-hostpathplugin-mbbdj" Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.910879 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4rc7k\" (UniqueName: \"kubernetes.io/projected/dbd5b107-5d08-43af-881c-11540f395267-kube-api-access-4rc7k\") pod \"packageserver-d55dfcdfc-r54sd\" (UID: \"dbd5b107-5d08-43af-881c-11540f395267\") " 
pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r54sd" Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.910889 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/8f687791-2831-4665-bb32-ee48ab6e70be-ready\") pod \"cni-sysctl-allowlist-ds-p9jmx\" (UID: \"8f687791-2831-4665-bb32-ee48ab6e70be\") " pod="openshift-multus/cni-sysctl-allowlist-ds-p9jmx" Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.913978 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-87bs4\" (UniqueName: \"kubernetes.io/projected/fe7b4bda-d33b-4c5a-a6f7-0f074d2ae3f9-kube-api-access-87bs4\") pod \"machine-config-server-vftb9\" (UID: \"fe7b4bda-d33b-4c5a-a6f7-0f074d2ae3f9\") " pod="openshift-machine-config-operator/machine-config-server-vftb9" Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.911902 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dfc40369-6f7d-4ab2-89b0-60bfc12e9bfe-service-ca-bundle\") pod \"authentication-operator-69f744f599-t75hp\" (UID: \"dfc40369-6f7d-4ab2-89b0-60bfc12e9bfe\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-t75hp" Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.915973 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cff9c4b2-d51d-4a2d-b325-522d1b2a4442-cert\") pod \"ingress-canary-pqplb\" (UID: \"cff9c4b2-d51d-4a2d-b325-522d1b2a4442\") " pod="openshift-ingress-canary/ingress-canary-pqplb" Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.916066 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/2b344bba-8e1d-415b-9b5f-e21d3144fe42-mountpoint-dir\") pod \"csi-hostpathplugin-mbbdj\" (UID: \"2b344bba-8e1d-415b-9b5f-e21d3144fe42\") " pod="hostpath-provisioner/csi-hostpathplugin-mbbdj" Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.916125 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f6wfx\" (UID: \"79162d90-14dd-4df9-9bcd-10c2c666cae7\") " pod="openshift-image-registry/image-registry-697d97f7c8-f6wfx" Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.916155 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/2b344bba-8e1d-415b-9b5f-e21d3144fe42-csi-data-dir\") pod \"csi-hostpathplugin-mbbdj\" (UID: \"2b344bba-8e1d-415b-9b5f-e21d3144fe42\") " pod="hostpath-provisioner/csi-hostpathplugin-mbbdj" Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.916181 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/fe7b4bda-d33b-4c5a-a6f7-0f074d2ae3f9-certs\") pod \"machine-config-server-vftb9\" (UID: \"fe7b4bda-d33b-4c5a-a6f7-0f074d2ae3f9\") " pod="openshift-machine-config-operator/machine-config-server-vftb9" Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.916441 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/80709272-78a8-46c2-82af-9109c6f2048f-webhook-certs\") pod 
\"multus-admission-controller-857f4d67dd-bxjjm\" (UID: \"80709272-78a8-46c2-82af-9109c6f2048f\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-bxjjm" Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.916774 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/164f4fce-a190-451c-b056-103eadc5bb6d-signing-cabundle\") pod \"service-ca-9c57cc56f-6t4gf\" (UID: \"164f4fce-a190-451c-b056-103eadc5bb6d\") " pod="openshift-service-ca/service-ca-9c57cc56f-6t4gf" Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.916857 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7btbj\" (UniqueName: \"kubernetes.io/projected/66dfe3f5-9e7a-4e1c-b03a-b6f01dab6ef4-kube-api-access-7btbj\") pod \"dns-default-25p7l\" (UID: \"66dfe3f5-9e7a-4e1c-b03a-b6f01dab6ef4\") " pod="openshift-dns/dns-default-25p7l" Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.917162 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/2b344bba-8e1d-415b-9b5f-e21d3144fe42-csi-data-dir\") pod \"csi-hostpathplugin-mbbdj\" (UID: \"2b344bba-8e1d-415b-9b5f-e21d3144fe42\") " pod="hostpath-provisioner/csi-hostpathplugin-mbbdj" Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.917218 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/2b344bba-8e1d-415b-9b5f-e21d3144fe42-mountpoint-dir\") pod \"csi-hostpathplugin-mbbdj\" (UID: \"2b344bba-8e1d-415b-9b5f-e21d3144fe42\") " pod="hostpath-provisioner/csi-hostpathplugin-mbbdj" Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.917474 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/78f80616-d1e0-4152-a7fb-99a512670f27-secret-volume\") pod \"collect-profiles-29424345-gxsbc\" (UID: \"78f80616-d1e0-4152-a7fb-99a512670f27\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424345-gxsbc" Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.917671 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a182f606-2976-4cb9-8175-65b012a06596-serving-cert\") pod \"service-ca-operator-777779d784-9ttg9\" (UID: \"a182f606-2976-4cb9-8175-65b012a06596\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-9ttg9" Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.918034 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/dbd5b107-5d08-43af-881c-11540f395267-webhook-cert\") pod \"packageserver-d55dfcdfc-r54sd\" (UID: \"dbd5b107-5d08-43af-881c-11540f395267\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r54sd" Dec 11 13:49:06 crc kubenswrapper[5050]: E1211 13:49:06.918230 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 13:49:07.418199125 +0000 UTC m=+38.261921751 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-f6wfx" (UID: "79162d90-14dd-4df9-9bcd-10c2c666cae7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.918230 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/164f4fce-a190-451c-b056-103eadc5bb6d-signing-key\") pod \"service-ca-9c57cc56f-6t4gf\" (UID: \"164f4fce-a190-451c-b056-103eadc5bb6d\") " pod="openshift-service-ca/service-ca-9c57cc56f-6t4gf" Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.918339 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/cf02ac59-1ff0-46d9-a346-cfdb183dee53-proxy-tls\") pod \"machine-config-controller-84d6567774-tkg2n\" (UID: \"cf02ac59-1ff0-46d9-a346-cfdb183dee53\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tkg2n" Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.918401 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/dbd5b107-5d08-43af-881c-11540f395267-apiservice-cert\") pod \"packageserver-d55dfcdfc-r54sd\" (UID: \"dbd5b107-5d08-43af-881c-11540f395267\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r54sd" Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.918778 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dfc40369-6f7d-4ab2-89b0-60bfc12e9bfe-serving-cert\") pod \"authentication-operator-69f744f599-t75hp\" (UID: \"dfc40369-6f7d-4ab2-89b0-60bfc12e9bfe\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-t75hp" Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.919267 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/81255772-fddc-4936-8de3-da4649c32d1f-profile-collector-cert\") pod \"catalog-operator-68c6474976-8qmpk\" (UID: \"81255772-fddc-4936-8de3-da4649c32d1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8qmpk" Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.919718 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/66dfe3f5-9e7a-4e1c-b03a-b6f01dab6ef4-metrics-tls\") pod \"dns-default-25p7l\" (UID: \"66dfe3f5-9e7a-4e1c-b03a-b6f01dab6ef4\") " pod="openshift-dns/dns-default-25p7l" Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.919898 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/66dfe3f5-9e7a-4e1c-b03a-b6f01dab6ef4-config-volume\") pod \"dns-default-25p7l\" (UID: \"66dfe3f5-9e7a-4e1c-b03a-b6f01dab6ef4\") " pod="openshift-dns/dns-default-25p7l" Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.921285 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/fe7b4bda-d33b-4c5a-a6f7-0f074d2ae3f9-certs\") pod \"machine-config-server-vftb9\" (UID: \"fe7b4bda-d33b-4c5a-a6f7-0f074d2ae3f9\") " 
pod="openshift-machine-config-operator/machine-config-server-vftb9" Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.921356 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/164f4fce-a190-451c-b056-103eadc5bb6d-signing-cabundle\") pod \"service-ca-9c57cc56f-6t4gf\" (UID: \"164f4fce-a190-451c-b056-103eadc5bb6d\") " pod="openshift-service-ca/service-ca-9c57cc56f-6t4gf" Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.926862 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-jmrjt" event={"ID":"81fb21e6-41c1-4e89-b458-5d83efc1eec6","Type":"ContainerStarted","Data":"154a0e016d0333fca83b53c20b24e7608e07e25cdbde79a4bc92f8a70f7525a4"} Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.927391 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-dtlb9" Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.927379 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fjx65\" (UniqueName: \"kubernetes.io/projected/cff9c4b2-d51d-4a2d-b325-522d1b2a4442-kube-api-access-fjx65\") pod \"ingress-canary-pqplb\" (UID: \"cff9c4b2-d51d-4a2d-b325-522d1b2a4442\") " pod="openshift-ingress-canary/ingress-canary-pqplb" Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.927672 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4x4qj\" (UniqueName: \"kubernetes.io/projected/2b344bba-8e1d-415b-9b5f-e21d3144fe42-kube-api-access-4x4qj\") pod \"csi-hostpathplugin-mbbdj\" (UID: \"2b344bba-8e1d-415b-9b5f-e21d3144fe42\") " pod="hostpath-provisioner/csi-hostpathplugin-mbbdj" Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.929283 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-jmqdr" event={"ID":"64efd0fc-ec3c-403b-ac98-0546f2affa94","Type":"ContainerStarted","Data":"422e41202726ef29aa924098958a486ba0ac5799df0d562126af27e36e56e508"} Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.931642 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-rstxr" event={"ID":"617510ba-d86e-4485-9c02-761a60ec1a90","Type":"ContainerStarted","Data":"8593aac66b98c87c3190f68c3b83d2a2a35e0d40de10948446b905c8d9ebf549"} Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.933449 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-n2j6p" event={"ID":"04af6d44-0ead-4b43-8287-b6bdb88c14ea","Type":"ContainerStarted","Data":"6e04db5a019430008fe914463a2214a5dbae0355288bbb68fc2915fd637cac4b"} Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.936704 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-qc97s" event={"ID":"d9e38eea-202a-4bf1-bb51-1d4a1fc20202","Type":"ContainerStarted","Data":"a251df3e998464382a2b0ce0b4d2ad1520ff96b12d00b6656add76b983c2e3f2"} Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.938682 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-mv9g5" event={"ID":"112006c5-e3a9-4fbb-813c-f195e98277bc","Type":"ContainerStarted","Data":"e6ee575b4ec38fd5bfccabbf466c8a72c9b453628b3950dafb19bf34c6db910b"} Dec 11 13:49:06 crc 
kubenswrapper[5050]: I1211 13:49:06.940227 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-gp9fp" event={"ID":"f633d554-794b-4a64-9699-27fbc28a4d7c","Type":"ContainerStarted","Data":"8a92413c75471e031f68d61104b2ee4756a365a30a3b4b75a95669d65bac3c95"} Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.942656 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-mg97h" event={"ID":"534031e8-875b-4fd2-91cf-bc969db66c22","Type":"ContainerStarted","Data":"e04178dfd23c0fd1dbd9c6e138a92012ce35bda038a0b60c57c9da1e338af642"} Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.944090 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-l4w2d" event={"ID":"fb05f4f3-f5be-4823-934f-14d5c48b43c1","Type":"ContainerStarted","Data":"46d98d9627e4081e6cfc4e647640929d9823db15e9bb787f200eb7c0a070447e"} Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.945513 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cnp7n" event={"ID":"1d89350d-55e9-4ef6-8182-287894b6c14b","Type":"ContainerStarted","Data":"b0d1ed8ee2fff94954760e62ba624fe15d71a8f47a2a405afe16a6af64ae3d26"} Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.947162 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-skz5v" event={"ID":"d40e12c4-8331-453c-b20b-5bbd5e3c2a9c","Type":"ContainerStarted","Data":"4751a1aa5a8b225202fd81acbddffdd316744052a0c04f6559cdb482970ed025"} Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.948635 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-r49rr" event={"ID":"9244eaaf-3ab0-426e-a568-aa295b66c246","Type":"ContainerStarted","Data":"3e33022faf478d589ed2a5df54f453aa2509e4918e674c1a0eba050198112a2b"} Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.950601 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-5zrm6" event={"ID":"5002875e-cc97-4cd1-a72a-f8611227e58c","Type":"ContainerStarted","Data":"59f82c12cadf0415afcb789ea6c258966320690ce0df938d8f4299dcdc7a2375"} Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.952722 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jrxgh\" (UniqueName: \"kubernetes.io/projected/164f4fce-a190-451c-b056-103eadc5bb6d-kube-api-access-jrxgh\") pod \"service-ca-9c57cc56f-6t4gf\" (UID: \"164f4fce-a190-451c-b056-103eadc5bb6d\") " pod="openshift-service-ca/service-ca-9c57cc56f-6t4gf" Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.965903 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2r6mk\" (UniqueName: \"kubernetes.io/projected/dfc40369-6f7d-4ab2-89b0-60bfc12e9bfe-kube-api-access-2r6mk\") pod \"authentication-operator-69f744f599-t75hp\" (UID: \"dfc40369-6f7d-4ab2-89b0-60bfc12e9bfe\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-t75hp" Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.974068 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/cf02ac59-1ff0-46d9-a346-cfdb183dee53-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-tkg2n\" (UID: 
\"cf02ac59-1ff0-46d9-a346-cfdb183dee53\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tkg2n" Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.975211 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a182f606-2976-4cb9-8175-65b012a06596-config\") pod \"service-ca-operator-777779d784-9ttg9\" (UID: \"a182f606-2976-4cb9-8175-65b012a06596\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-9ttg9" Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.977763 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a182f606-2976-4cb9-8175-65b012a06596-serving-cert\") pod \"service-ca-operator-777779d784-9ttg9\" (UID: \"a182f606-2976-4cb9-8175-65b012a06596\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-9ttg9" Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.978684 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6hw7n\" (UniqueName: \"kubernetes.io/projected/a182f606-2976-4cb9-8175-65b012a06596-kube-api-access-6hw7n\") pod \"service-ca-operator-777779d784-9ttg9\" (UID: \"a182f606-2976-4cb9-8175-65b012a06596\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-9ttg9" Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.979172 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tc9mz\" (UniqueName: \"kubernetes.io/projected/6eec0801-2040-47f2-88b7-f39b0498746e-kube-api-access-tc9mz\") pod \"olm-operator-6b444d44fb-24wkc\" (UID: \"6eec0801-2040-47f2-88b7-f39b0498746e\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-24wkc" Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.979676 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/d1f35830-d883-4b41-ab97-7c382dec0387-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-9jns7\" (UID: \"d1f35830-d883-4b41-ab97-7c382dec0387\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9jns7" Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.980060 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fzwlc\" (UniqueName: \"kubernetes.io/projected/d1f35830-d883-4b41-ab97-7c382dec0387-kube-api-access-fzwlc\") pod \"package-server-manager-789f6589d5-9jns7\" (UID: \"d1f35830-d883-4b41-ab97-7c382dec0387\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9jns7" Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.985999 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9jns7" Dec 11 13:49:06 crc kubenswrapper[5050]: I1211 13:49:06.991597 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rg4g4\" (UniqueName: \"kubernetes.io/projected/80709272-78a8-46c2-82af-9109c6f2048f-kube-api-access-rg4g4\") pod \"multus-admission-controller-857f4d67dd-bxjjm\" (UID: \"80709272-78a8-46c2-82af-9109c6f2048f\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-bxjjm" Dec 11 13:49:07 crc kubenswrapper[5050]: I1211 13:49:06.998032 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-6t4gf" Dec 11 13:49:07 crc kubenswrapper[5050]: I1211 13:49:07.005970 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-9ttg9" Dec 11 13:49:07 crc kubenswrapper[5050]: I1211 13:49:07.014412 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-96nqr\" (UniqueName: \"kubernetes.io/projected/cf02ac59-1ff0-46d9-a346-cfdb183dee53-kube-api-access-96nqr\") pod \"machine-config-controller-84d6567774-tkg2n\" (UID: \"cf02ac59-1ff0-46d9-a346-cfdb183dee53\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tkg2n" Dec 11 13:49:07 crc kubenswrapper[5050]: I1211 13:49:07.020179 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 13:49:07 crc kubenswrapper[5050]: E1211 13:49:07.020500 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 13:49:07.520458725 +0000 UTC m=+38.364181311 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 13:49:07 crc kubenswrapper[5050]: I1211 13:49:07.021130 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f6wfx\" (UID: \"79162d90-14dd-4df9-9bcd-10c2c666cae7\") " pod="openshift-image-registry/image-registry-697d97f7c8-f6wfx" Dec 11 13:49:07 crc kubenswrapper[5050]: E1211 13:49:07.021547 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 13:49:07.521529474 +0000 UTC m=+38.365252060 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-f6wfx" (UID: "79162d90-14dd-4df9-9bcd-10c2c666cae7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 13:49:07 crc kubenswrapper[5050]: I1211 13:49:07.025193 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-rmlj6" Dec 11 13:49:07 crc kubenswrapper[5050]: I1211 13:49:07.027833 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dg672\" (UniqueName: \"kubernetes.io/projected/78f80616-d1e0-4152-a7fb-99a512670f27-kube-api-access-dg672\") pod \"collect-profiles-29424345-gxsbc\" (UID: \"78f80616-d1e0-4152-a7fb-99a512670f27\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424345-gxsbc" Dec 11 13:49:07 crc kubenswrapper[5050]: I1211 13:49:07.029250 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-t75hp" Dec 11 13:49:07 crc kubenswrapper[5050]: I1211 13:49:07.038772 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-24wkc" Dec 11 13:49:07 crc kubenswrapper[5050]: I1211 13:49:07.046570 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-87bs4\" (UniqueName: \"kubernetes.io/projected/fe7b4bda-d33b-4c5a-a6f7-0f074d2ae3f9-kube-api-access-87bs4\") pod \"machine-config-server-vftb9\" (UID: \"fe7b4bda-d33b-4c5a-a6f7-0f074d2ae3f9\") " pod="openshift-machine-config-operator/machine-config-server-vftb9" Dec 11 13:49:07 crc kubenswrapper[5050]: I1211 13:49:07.047063 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tkg2n" Dec 11 13:49:07 crc kubenswrapper[5050]: I1211 13:49:07.055572 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29424345-gxsbc" Dec 11 13:49:07 crc kubenswrapper[5050]: I1211 13:49:07.076781 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4rc7k\" (UniqueName: \"kubernetes.io/projected/dbd5b107-5d08-43af-881c-11540f395267-kube-api-access-4rc7k\") pod \"packageserver-d55dfcdfc-r54sd\" (UID: \"dbd5b107-5d08-43af-881c-11540f395267\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r54sd" Dec 11 13:49:07 crc kubenswrapper[5050]: I1211 13:49:07.093997 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t6cjz\" (UniqueName: \"kubernetes.io/projected/81255772-fddc-4936-8de3-da4649c32d1f-kube-api-access-t6cjz\") pod \"catalog-operator-68c6474976-8qmpk\" (UID: \"81255772-fddc-4936-8de3-da4649c32d1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8qmpk" Dec 11 13:49:07 crc kubenswrapper[5050]: I1211 13:49:07.094733 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-mbbdj" Dec 11 13:49:07 crc kubenswrapper[5050]: I1211 13:49:07.106945 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7btbj\" (UniqueName: \"kubernetes.io/projected/66dfe3f5-9e7a-4e1c-b03a-b6f01dab6ef4-kube-api-access-7btbj\") pod \"dns-default-25p7l\" (UID: \"66dfe3f5-9e7a-4e1c-b03a-b6f01dab6ef4\") " pod="openshift-dns/dns-default-25p7l" Dec 11 13:49:07 crc kubenswrapper[5050]: I1211 13:49:07.115592 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-vftb9" Dec 11 13:49:07 crc kubenswrapper[5050]: I1211 13:49:07.122121 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 13:49:07 crc kubenswrapper[5050]: E1211 13:49:07.122298 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 13:49:07.622271032 +0000 UTC m=+38.465993618 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 13:49:07 crc kubenswrapper[5050]: I1211 13:49:07.122489 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f6wfx\" (UID: \"79162d90-14dd-4df9-9bcd-10c2c666cae7\") " pod="openshift-image-registry/image-registry-697d97f7c8-f6wfx" Dec 11 13:49:07 crc kubenswrapper[5050]: I1211 13:49:07.122535 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-pqplb" Dec 11 13:49:07 crc kubenswrapper[5050]: E1211 13:49:07.123040 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 13:49:07.622995811 +0000 UTC m=+38.466718397 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-f6wfx" (UID: "79162d90-14dd-4df9-9bcd-10c2c666cae7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 13:49:07 crc kubenswrapper[5050]: I1211 13:49:07.146568 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2d6m\" (UniqueName: \"kubernetes.io/projected/8f687791-2831-4665-bb32-ee48ab6e70be-kube-api-access-s2d6m\") pod \"cni-sysctl-allowlist-ds-p9jmx\" (UID: \"8f687791-2831-4665-bb32-ee48ab6e70be\") " pod="openshift-multus/cni-sysctl-allowlist-ds-p9jmx" Dec 11 13:49:07 crc kubenswrapper[5050]: I1211 13:49:07.224402 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 13:49:07 crc kubenswrapper[5050]: E1211 13:49:07.224594 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 13:49:07.724568032 +0000 UTC m=+38.568290618 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 13:49:07 crc kubenswrapper[5050]: I1211 13:49:07.224954 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f6wfx\" (UID: \"79162d90-14dd-4df9-9bcd-10c2c666cae7\") " pod="openshift-image-registry/image-registry-697d97f7c8-f6wfx" Dec 11 13:49:07 crc kubenswrapper[5050]: E1211 13:49:07.225302 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 13:49:07.725294732 +0000 UTC m=+38.569017318 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-f6wfx" (UID: "79162d90-14dd-4df9-9bcd-10c2c666cae7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 13:49:07 crc kubenswrapper[5050]: I1211 13:49:07.262901 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-bxjjm" Dec 11 13:49:07 crc kubenswrapper[5050]: I1211 13:49:07.277751 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r54sd" Dec 11 13:49:07 crc kubenswrapper[5050]: I1211 13:49:07.294793 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-j2zlh"] Dec 11 13:49:07 crc kubenswrapper[5050]: I1211 13:49:07.325646 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 13:49:07 crc kubenswrapper[5050]: E1211 13:49:07.326108 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 13:49:07.826078321 +0000 UTC m=+38.669800907 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 13:49:07 crc kubenswrapper[5050]: I1211 13:49:07.363354 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8qmpk" Dec 11 13:49:07 crc kubenswrapper[5050]: I1211 13:49:07.373532 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-25p7l" Dec 11 13:49:07 crc kubenswrapper[5050]: I1211 13:49:07.408914 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-p9jmx" Dec 11 13:49:07 crc kubenswrapper[5050]: I1211 13:49:07.427325 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f6wfx\" (UID: \"79162d90-14dd-4df9-9bcd-10c2c666cae7\") " pod="openshift-image-registry/image-registry-697d97f7c8-f6wfx" Dec 11 13:49:07 crc kubenswrapper[5050]: E1211 13:49:07.427715 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 13:49:07.927699853 +0000 UTC m=+38.771422439 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-f6wfx" (UID: "79162d90-14dd-4df9-9bcd-10c2c666cae7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 13:49:07 crc kubenswrapper[5050]: W1211 13:49:07.472091 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod95cc22f3_bae0_4461_a63e_0adbddd76cbb.slice/crio-f41a2b62e643572fcb2484af8da2a72eb5e73ebc6f32d160bcf9f59c07f0b7d2 WatchSource:0}: Error finding container f41a2b62e643572fcb2484af8da2a72eb5e73ebc6f32d160bcf9f59c07f0b7d2: Status 404 returned error can't find the container with id f41a2b62e643572fcb2484af8da2a72eb5e73ebc6f32d160bcf9f59c07f0b7d2 Dec 11 13:49:07 crc kubenswrapper[5050]: I1211 13:49:07.523487 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ckbbn"] Dec 11 13:49:07 crc kubenswrapper[5050]: I1211 13:49:07.528999 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 13:49:07 crc kubenswrapper[5050]: E1211 13:49:07.529185 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 13:49:08.02915228 +0000 UTC m=+38.872874866 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 13:49:07 crc kubenswrapper[5050]: I1211 13:49:07.529374 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f6wfx\" (UID: \"79162d90-14dd-4df9-9bcd-10c2c666cae7\") " pod="openshift-image-registry/image-registry-697d97f7c8-f6wfx" Dec 11 13:49:07 crc kubenswrapper[5050]: E1211 13:49:07.529767 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 13:49:08.029749316 +0000 UTC m=+38.873471902 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-f6wfx" (UID: "79162d90-14dd-4df9-9bcd-10c2c666cae7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 13:49:07 crc kubenswrapper[5050]: I1211 13:49:07.560980 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-krl9b"] Dec 11 13:49:07 crc kubenswrapper[5050]: I1211 13:49:07.631101 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 13:49:07 crc kubenswrapper[5050]: E1211 13:49:07.632053 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 13:49:08.132033336 +0000 UTC m=+38.975755922 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 13:49:07 crc kubenswrapper[5050]: W1211 13:49:07.665951 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode32c23eb_6cda_4f54_bb1b_d8512e4d30cc.slice/crio-b09ab041809c0593f6cfb6e73c64f055dcaf1c9d4c11371565f4f689041f7f3e WatchSource:0}: Error finding container b09ab041809c0593f6cfb6e73c64f055dcaf1c9d4c11371565f4f689041f7f3e: Status 404 returned error can't find the container with id b09ab041809c0593f6cfb6e73c64f055dcaf1c9d4c11371565f4f689041f7f3e Dec 11 13:49:07 crc kubenswrapper[5050]: I1211 13:49:07.713384 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-9zt5j"] Dec 11 13:49:07 crc kubenswrapper[5050]: I1211 13:49:07.733752 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f6wfx\" (UID: \"79162d90-14dd-4df9-9bcd-10c2c666cae7\") " pod="openshift-image-registry/image-registry-697d97f7c8-f6wfx" Dec 11 13:49:07 crc kubenswrapper[5050]: E1211 13:49:07.734437 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 13:49:08.234411068 +0000 UTC m=+39.078133654 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-f6wfx" (UID: "79162d90-14dd-4df9-9bcd-10c2c666cae7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 13:49:07 crc kubenswrapper[5050]: I1211 13:49:07.754857 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mrg9f"] Dec 11 13:49:07 crc kubenswrapper[5050]: I1211 13:49:07.779918 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-c6twg"] Dec 11 13:49:07 crc kubenswrapper[5050]: I1211 13:49:07.835334 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 13:49:07 crc kubenswrapper[5050]: E1211 13:49:07.835531 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 13:49:08.335486216 +0000 UTC m=+39.179208802 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 13:49:07 crc kubenswrapper[5050]: I1211 13:49:07.835848 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f6wfx\" (UID: \"79162d90-14dd-4df9-9bcd-10c2c666cae7\") " pod="openshift-image-registry/image-registry-697d97f7c8-f6wfx" Dec 11 13:49:07 crc kubenswrapper[5050]: E1211 13:49:07.836262 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 13:49:08.336250956 +0000 UTC m=+39.179973542 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-f6wfx" (UID: "79162d90-14dd-4df9-9bcd-10c2c666cae7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 13:49:07 crc kubenswrapper[5050]: I1211 13:49:07.866712 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9jns7"] Dec 11 13:49:07 crc kubenswrapper[5050]: I1211 13:49:07.938449 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 13:49:07 crc kubenswrapper[5050]: E1211 13:49:07.938906 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 13:49:08.438886136 +0000 UTC m=+39.282608712 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 13:49:07 crc kubenswrapper[5050]: I1211 13:49:07.961415 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-krl9b" event={"ID":"d33f8d3b-768a-44d3-bdf6-1a885b096055","Type":"ContainerStarted","Data":"bebb2b74723745e81a9185a6f55871960a9c6f759f90573dab9305c1074ea4f5"} Dec 11 13:49:07 crc kubenswrapper[5050]: I1211 13:49:07.963276 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-j2zlh" event={"ID":"95cc22f3-bae0-4461-a63e-0adbddd76cbb","Type":"ContainerStarted","Data":"f41a2b62e643572fcb2484af8da2a72eb5e73ebc6f32d160bcf9f59c07f0b7d2"} Dec 11 13:49:07 crc kubenswrapper[5050]: I1211 13:49:07.964691 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ckbbn" event={"ID":"e32c23eb-6cda-4f54-bb1b-d8512e4d30cc","Type":"ContainerStarted","Data":"b09ab041809c0593f6cfb6e73c64f055dcaf1c9d4c11371565f4f689041f7f3e"} Dec 11 13:49:07 crc kubenswrapper[5050]: I1211 13:49:07.966730 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bvjdq" event={"ID":"3069e571-2923-44a7-ae85-7cc7e64991ef","Type":"ContainerStarted","Data":"5f39bbd36b3950a89dc4d6758df21536feb19af0dc144cad266650450065aaca"} Dec 11 13:49:07 crc kubenswrapper[5050]: I1211 13:49:07.968301 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bvjdq" Dec 11 13:49:07 crc kubenswrapper[5050]: I1211 
13:49:07.974896 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gxtkz" event={"ID":"c37f7db2-f363-4ef9-8c5f-3c6d0ed3f75c","Type":"ContainerStarted","Data":"0d5dfd9881ad47247444a4a2ca00738fa8ce438c71d5776b8a385dfd13943ba9"} Dec 11 13:49:07 crc kubenswrapper[5050]: I1211 13:49:07.978430 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vplw7" event={"ID":"dfe5cd7c-4c40-4e8a-8d26-f66424569dbe","Type":"ContainerStarted","Data":"adbd25a9c2c8fec89266ff283d37686cb81b2fef26a5317670a92bc4bca5056a"} Dec 11 13:49:07 crc kubenswrapper[5050]: I1211 13:49:07.984114 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bvjdq" podStartSLOduration=18.98409418 podStartE2EDuration="18.98409418s" podCreationTimestamp="2025-12-11 13:48:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 13:49:07.983632777 +0000 UTC m=+38.827355373" watchObservedRunningTime="2025-12-11 13:49:07.98409418 +0000 UTC m=+38.827816766" Dec 11 13:49:08 crc kubenswrapper[5050]: I1211 13:49:08.006567 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vplw7" podStartSLOduration=20.006526377 podStartE2EDuration="20.006526377s" podCreationTimestamp="2025-12-11 13:48:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 13:49:07.99849874 +0000 UTC m=+38.842221326" watchObservedRunningTime="2025-12-11 13:49:08.006526377 +0000 UTC m=+38.850248963" Dec 11 13:49:08 crc kubenswrapper[5050]: I1211 13:49:08.007634 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-dsrxh" event={"ID":"f89bbffa-4dbb-4aac-bb26-c146a6037f67","Type":"ContainerStarted","Data":"f1021c1a8c421d1f34a844e24a7014a86ddf41d1e8de448719230a26c2f9507b"} Dec 11 13:49:08 crc kubenswrapper[5050]: I1211 13:49:08.015155 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-zljtn" event={"ID":"465806d7-7b39-4ddf-b098-8bed4c0c5a3a","Type":"ContainerStarted","Data":"7d1b6556b3d08b94f0147431a1977820b4db4584d85b784ee75d31fd9af1a7f4"} Dec 11 13:49:08 crc kubenswrapper[5050]: I1211 13:49:08.040374 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f6wfx\" (UID: \"79162d90-14dd-4df9-9bcd-10c2c666cae7\") " pod="openshift-image-registry/image-registry-697d97f7c8-f6wfx" Dec 11 13:49:08 crc kubenswrapper[5050]: E1211 13:49:08.041392 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 13:49:08.541363161 +0000 UTC m=+39.385085817 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-f6wfx" (UID: "79162d90-14dd-4df9-9bcd-10c2c666cae7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 13:49:08 crc kubenswrapper[5050]: I1211 13:49:08.142287 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 13:49:08 crc kubenswrapper[5050]: E1211 13:49:08.142626 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 13:49:08.642584242 +0000 UTC m=+39.486307008 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 13:49:08 crc kubenswrapper[5050]: I1211 13:49:08.245281 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f6wfx\" (UID: \"79162d90-14dd-4df9-9bcd-10c2c666cae7\") " pod="openshift-image-registry/image-registry-697d97f7c8-f6wfx" Dec 11 13:49:08 crc kubenswrapper[5050]: E1211 13:49:08.245726 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 13:49:08.745703304 +0000 UTC m=+39.589425890 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-f6wfx" (UID: "79162d90-14dd-4df9-9bcd-10c2c666cae7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 13:49:08 crc kubenswrapper[5050]: I1211 13:49:08.347632 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 13:49:08 crc kubenswrapper[5050]: E1211 13:49:08.349476 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 13:49:08.849393042 +0000 UTC m=+39.693115678 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 13:49:08 crc kubenswrapper[5050]: I1211 13:49:08.353933 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-25p7l"] Dec 11 13:49:08 crc kubenswrapper[5050]: I1211 13:49:08.363518 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-rmlj6"] Dec 11 13:49:08 crc kubenswrapper[5050]: I1211 13:49:08.367285 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-9ttg9"] Dec 11 13:49:08 crc kubenswrapper[5050]: I1211 13:49:08.371489 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-6t4gf"] Dec 11 13:49:08 crc kubenswrapper[5050]: W1211 13:49:08.445690 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod18d0253b_d7da_4be9_8fc3_6911f1c92076.slice/crio-e382accad787c3c82b21882c1f9b59f994472f44e849045646ec38f8d589ec58 WatchSource:0}: Error finding container e382accad787c3c82b21882c1f9b59f994472f44e849045646ec38f8d589ec58: Status 404 returned error can't find the container with id e382accad787c3c82b21882c1f9b59f994472f44e849045646ec38f8d589ec58 Dec 11 13:49:08 crc kubenswrapper[5050]: I1211 13:49:08.450300 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f6wfx\" (UID: \"79162d90-14dd-4df9-9bcd-10c2c666cae7\") " pod="openshift-image-registry/image-registry-697d97f7c8-f6wfx" Dec 11 13:49:08 crc kubenswrapper[5050]: I1211 13:49:08.450353 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2b16e336-8c81-45d1-a527-599b29a7c070-metrics-certs\") pod \"network-metrics-daemon-lttxf\" (UID: \"2b16e336-8c81-45d1-a527-599b29a7c070\") " pod="openshift-multus/network-metrics-daemon-lttxf" Dec 11 13:49:08 crc kubenswrapper[5050]: E1211 13:49:08.451846 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 13:49:08.951821296 +0000 UTC m=+39.795543952 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-f6wfx" (UID: "79162d90-14dd-4df9-9bcd-10c2c666cae7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 13:49:08 crc kubenswrapper[5050]: I1211 13:49:08.458939 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bvjdq" Dec 11 13:49:08 crc kubenswrapper[5050]: I1211 13:49:08.462970 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2b16e336-8c81-45d1-a527-599b29a7c070-metrics-certs\") pod \"network-metrics-daemon-lttxf\" (UID: \"2b16e336-8c81-45d1-a527-599b29a7c070\") " pod="openshift-multus/network-metrics-daemon-lttxf" Dec 11 13:49:08 crc kubenswrapper[5050]: I1211 13:49:08.568756 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 13:49:08 crc kubenswrapper[5050]: E1211 13:49:08.569363 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 13:49:09.069328488 +0000 UTC m=+39.913051074 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 13:49:08 crc kubenswrapper[5050]: I1211 13:49:08.606075 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f6wfx\" (UID: \"79162d90-14dd-4df9-9bcd-10c2c666cae7\") " pod="openshift-image-registry/image-registry-697d97f7c8-f6wfx" Dec 11 13:49:08 crc kubenswrapper[5050]: E1211 13:49:08.607141 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 13:49:09.107122121 +0000 UTC m=+39.950844707 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-f6wfx" (UID: "79162d90-14dd-4df9-9bcd-10c2c666cae7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 13:49:08 crc kubenswrapper[5050]: I1211 13:49:08.662355 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lttxf" Dec 11 13:49:08 crc kubenswrapper[5050]: I1211 13:49:08.707912 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 13:49:08 crc kubenswrapper[5050]: E1211 13:49:08.709051 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 13:49:09.20898696 +0000 UTC m=+40.052709546 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 13:49:08 crc kubenswrapper[5050]: I1211 13:49:08.824220 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f6wfx\" (UID: \"79162d90-14dd-4df9-9bcd-10c2c666cae7\") " pod="openshift-image-registry/image-registry-697d97f7c8-f6wfx" Dec 11 13:49:08 crc kubenswrapper[5050]: E1211 13:49:08.827719 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 13:49:09.327701535 +0000 UTC m=+40.171424121 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-f6wfx" (UID: "79162d90-14dd-4df9-9bcd-10c2c666cae7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 13:49:08 crc kubenswrapper[5050]: I1211 13:49:08.867579 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-t75hp"] Dec 11 13:49:08 crc kubenswrapper[5050]: I1211 13:49:08.896449 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-pqplb"] Dec 11 13:49:08 crc kubenswrapper[5050]: I1211 13:49:08.926262 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 13:49:08 crc kubenswrapper[5050]: E1211 13:49:08.926481 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 13:49:09.426437878 +0000 UTC m=+40.270160464 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 13:49:08 crc kubenswrapper[5050]: I1211 13:49:08.926916 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f6wfx\" (UID: \"79162d90-14dd-4df9-9bcd-10c2c666cae7\") " pod="openshift-image-registry/image-registry-697d97f7c8-f6wfx" Dec 11 13:49:08 crc kubenswrapper[5050]: E1211 13:49:08.927413 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 13:49:09.427400474 +0000 UTC m=+40.271123060 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-f6wfx" (UID: "79162d90-14dd-4df9-9bcd-10c2c666cae7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 13:49:08 crc kubenswrapper[5050]: I1211 13:49:08.940078 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-24wkc"] Dec 11 13:49:08 crc kubenswrapper[5050]: I1211 13:49:08.991628 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-bxjjm"] Dec 11 13:49:09 crc kubenswrapper[5050]: I1211 13:49:09.027747 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 13:49:09 crc kubenswrapper[5050]: E1211 13:49:09.027931 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 13:49:09.527876265 +0000 UTC m=+40.371598851 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 13:49:09 crc kubenswrapper[5050]: I1211 13:49:09.028034 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f6wfx\" (UID: \"79162d90-14dd-4df9-9bcd-10c2c666cae7\") " pod="openshift-image-registry/image-registry-697d97f7c8-f6wfx" Dec 11 13:49:09 crc kubenswrapper[5050]: E1211 13:49:09.028448 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 13:49:09.528440301 +0000 UTC m=+40.372162877 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-f6wfx" (UID: "79162d90-14dd-4df9-9bcd-10c2c666cae7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 13:49:09 crc kubenswrapper[5050]: I1211 13:49:09.028836 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-rstxr" event={"ID":"617510ba-d86e-4485-9c02-761a60ec1a90","Type":"ContainerStarted","Data":"86c10b3715f4175b2a643c1188e9da740c45d223ee945fac4e96281e5a66d81f"} Dec 11 13:49:09 crc kubenswrapper[5050]: I1211 13:49:09.031348 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9jns7" event={"ID":"d1f35830-d883-4b41-ab97-7c382dec0387","Type":"ContainerStarted","Data":"2f8896b7fa04cb653032254be365e3742ecec33ccbbb2c8bea312cba14ba4f19"} Dec 11 13:49:09 crc kubenswrapper[5050]: I1211 13:49:09.032990 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-mv9g5" event={"ID":"112006c5-e3a9-4fbb-813c-f195e98277bc","Type":"ContainerStarted","Data":"24a353b351be021ebd60439ed35b7d3f412e5ad614a28f7c4da190d581f18124"} Dec 11 13:49:09 crc kubenswrapper[5050]: I1211 13:49:09.035394 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-mg97h" event={"ID":"534031e8-875b-4fd2-91cf-bc969db66c22","Type":"ContainerStarted","Data":"d85dc90fbecde58a2e93560fe67a5fdef06a5489fdfed0d6071c10745087ed3b"} Dec 11 13:49:09 crc kubenswrapper[5050]: I1211 13:49:09.038169 5050 generic.go:334] "Generic (PLEG): container finished" podID="1d89350d-55e9-4ef6-8182-287894b6c14b" containerID="f30f1a6a8652fbe92caee16199cc0d3d9451239ae91d9ffc93eab8aa8abdf54c" exitCode=0 Dec 11 13:49:09 crc kubenswrapper[5050]: I1211 13:49:09.038297 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cnp7n" 
event={"ID":"1d89350d-55e9-4ef6-8182-287894b6c14b","Type":"ContainerDied","Data":"f30f1a6a8652fbe92caee16199cc0d3d9451239ae91d9ffc93eab8aa8abdf54c"} Dec 11 13:49:09 crc kubenswrapper[5050]: I1211 13:49:09.039878 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-rmlj6" event={"ID":"18d0253b-d7da-4be9-8fc3-6911f1c92076","Type":"ContainerStarted","Data":"e382accad787c3c82b21882c1f9b59f994472f44e849045646ec38f8d589ec58"} Dec 11 13:49:09 crc kubenswrapper[5050]: I1211 13:49:09.043545 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-qc97s" event={"ID":"d9e38eea-202a-4bf1-bb51-1d4a1fc20202","Type":"ContainerStarted","Data":"1adcd75ccd002e7d20f5710bb8aa074281d59934265e7685e2a194a0faa48266"} Dec 11 13:49:09 crc kubenswrapper[5050]: I1211 13:49:09.045471 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-n2j6p" event={"ID":"04af6d44-0ead-4b43-8287-b6bdb88c14ea","Type":"ContainerStarted","Data":"e0d3e7a23a0b6cfb9c124daba394034e81671f002b4388da9d6d8a5493017742"} Dec 11 13:49:09 crc kubenswrapper[5050]: I1211 13:49:09.046389 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-c6twg" event={"ID":"26b6bd08-d432-4a3b-b739-d575ca32ac6e","Type":"ContainerStarted","Data":"0c155d8c7b0e3f170178d2b4a7cfe34c1bc10cae0db5c17154c546f9199f339c"} Dec 11 13:49:09 crc kubenswrapper[5050]: I1211 13:49:09.048137 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-gp9fp" event={"ID":"f633d554-794b-4a64-9699-27fbc28a4d7c","Type":"ContainerStarted","Data":"88da5d50bd3ba895417abaa05182bd801aecb6ce1fb2eb1562e333fbd9b09a37"} Dec 11 13:49:09 crc kubenswrapper[5050]: I1211 13:49:09.049482 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-vftb9" event={"ID":"fe7b4bda-d33b-4c5a-a6f7-0f074d2ae3f9","Type":"ContainerStarted","Data":"59bf68e0c5c586e39253471feef962be476cd74033002cd608dc5c0f6a3b5117"} Dec 11 13:49:09 crc kubenswrapper[5050]: I1211 13:49:09.050766 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-dtlb9" event={"ID":"cef7b97f-083b-44ae-9357-94f97b3eb30c","Type":"ContainerStarted","Data":"b40c698e4785e92fc938380a6f2c7a06d960451f57035a7c54b6b3d5bdf97584"} Dec 11 13:49:09 crc kubenswrapper[5050]: I1211 13:49:09.051940 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-p9jmx" event={"ID":"8f687791-2831-4665-bb32-ee48ab6e70be","Type":"ContainerStarted","Data":"a06ea0d64ae569e2b4d2d528c3a46baf1a1102780ca5077c0c515d53a8841083"} Dec 11 13:49:09 crc kubenswrapper[5050]: I1211 13:49:09.053867 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-skz5v" event={"ID":"d40e12c4-8331-453c-b20b-5bbd5e3c2a9c","Type":"ContainerStarted","Data":"a26e87b7916b4cf910405aa4d57ebae142c78b0b630a03eb23a81d830c9750ce"} Dec 11 13:49:09 crc kubenswrapper[5050]: I1211 13:49:09.057117 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-9ttg9" event={"ID":"a182f606-2976-4cb9-8175-65b012a06596","Type":"ContainerStarted","Data":"44b1852c7322c2b3e81dfa9aeaba0e45e803787ef50b336421b612ba27626862"} Dec 11 13:49:09 crc 
kubenswrapper[5050]: I1211 13:49:09.058721 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-r49rr" event={"ID":"9244eaaf-3ab0-426e-a568-aa295b66c246","Type":"ContainerStarted","Data":"c0cd82cf8e42f4384589a8db95f9223aceb007001940f8b76ac05da6ab33109b"} Dec 11 13:49:09 crc kubenswrapper[5050]: I1211 13:49:09.062060 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-6t4gf" event={"ID":"164f4fce-a190-451c-b056-103eadc5bb6d","Type":"ContainerStarted","Data":"6891f97ab1b4ff4fb0d0f63fc98079ed0f872bb5eb4e753a805a562aac4dbb86"} Dec 11 13:49:09 crc kubenswrapper[5050]: I1211 13:49:09.063671 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-5zrm6" event={"ID":"5002875e-cc97-4cd1-a72a-f8611227e58c","Type":"ContainerStarted","Data":"a856c158ca4536bf5d80c197ae6bd6211a948c844d22ee32262da80a39f6d002"} Dec 11 13:49:09 crc kubenswrapper[5050]: I1211 13:49:09.065277 5050 generic.go:334] "Generic (PLEG): container finished" podID="64efd0fc-ec3c-403b-ac98-0546f2affa94" containerID="a6a6f8cc2b58dc9e8eeb64c0aa2b26ffc07def29b7a4bf19ff33b72632aba237" exitCode=0 Dec 11 13:49:09 crc kubenswrapper[5050]: I1211 13:49:09.065362 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-jmqdr" event={"ID":"64efd0fc-ec3c-403b-ac98-0546f2affa94","Type":"ContainerDied","Data":"a6a6f8cc2b58dc9e8eeb64c0aa2b26ffc07def29b7a4bf19ff33b72632aba237"} Dec 11 13:49:09 crc kubenswrapper[5050]: I1211 13:49:09.066751 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-l4w2d" event={"ID":"fb05f4f3-f5be-4823-934f-14d5c48b43c1","Type":"ContainerStarted","Data":"d748d9f42dc03e51951c66af131a100bb2dd61db202faab1acd4f50abc2e82ec"} Dec 11 13:49:09 crc kubenswrapper[5050]: I1211 13:49:09.067600 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-25p7l" event={"ID":"66dfe3f5-9e7a-4e1c-b03a-b6f01dab6ef4","Type":"ContainerStarted","Data":"c74ff447064617ab4fbe8e87fcfb5d813d426d0a69780eff4de0ef5d94b1d937"} Dec 11 13:49:09 crc kubenswrapper[5050]: I1211 13:49:09.068933 5050 generic.go:334] "Generic (PLEG): container finished" podID="ea884e88-c0df-4212-976a-0d7ce1731fdc" containerID="97df5cad7f92598e4b9358a8e6027193d215579440e74ad843878ba4226b77e8" exitCode=0 Dec 11 13:49:09 crc kubenswrapper[5050]: I1211 13:49:09.068988 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-cd66n" event={"ID":"ea884e88-c0df-4212-976a-0d7ce1731fdc","Type":"ContainerDied","Data":"97df5cad7f92598e4b9358a8e6027193d215579440e74ad843878ba4226b77e8"} Dec 11 13:49:09 crc kubenswrapper[5050]: I1211 13:49:09.070148 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-9zt5j" event={"ID":"c6b95f15-c748-43a9-8ca6-2007cd1727e5","Type":"ContainerStarted","Data":"f0de2ace08dd24630ddaa304726d3195713f12a1eeaa05527f2fce8dba66d4b4"} Dec 11 13:49:09 crc kubenswrapper[5050]: I1211 13:49:09.074050 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-n2j6p" podStartSLOduration=21.074032565 podStartE2EDuration="21.074032565s" podCreationTimestamp="2025-12-11 13:48:48 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 13:49:09.072584466 +0000 UTC m=+39.916307052" watchObservedRunningTime="2025-12-11 13:49:09.074032565 +0000 UTC m=+39.917755171" Dec 11 13:49:09 crc kubenswrapper[5050]: I1211 13:49:09.074656 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-jmrjt" event={"ID":"81fb21e6-41c1-4e89-b458-5d83efc1eec6","Type":"ContainerStarted","Data":"8b93eea35bc57453d2c8581c2b4284c6a32d615a2a81633cee4c05d69a4f747f"} Dec 11 13:49:09 crc kubenswrapper[5050]: I1211 13:49:09.078986 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mrg9f" event={"ID":"4953940c-67b1-4a85-851f-ad290b9a0d0d","Type":"ContainerStarted","Data":"6f98984df9eaa42a1d21f27cc944829471df94dcabdaaa440b702b95e80de5a3"} Dec 11 13:49:09 crc kubenswrapper[5050]: I1211 13:49:09.091324 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-r49rr" podStartSLOduration=20.091297323 podStartE2EDuration="20.091297323s" podCreationTimestamp="2025-12-11 13:48:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 13:49:09.089998758 +0000 UTC m=+39.933721344" watchObservedRunningTime="2025-12-11 13:49:09.091297323 +0000 UTC m=+39.935019929" Dec 11 13:49:09 crc kubenswrapper[5050]: I1211 13:49:09.129861 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 13:49:09 crc kubenswrapper[5050]: E1211 13:49:09.130138 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 13:49:09.630103274 +0000 UTC m=+40.473825860 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 13:49:09 crc kubenswrapper[5050]: I1211 13:49:09.130248 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f6wfx\" (UID: \"79162d90-14dd-4df9-9bcd-10c2c666cae7\") " pod="openshift-image-registry/image-registry-697d97f7c8-f6wfx" Dec 11 13:49:09 crc kubenswrapper[5050]: E1211 13:49:09.130717 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 13:49:09.63070921 +0000 UTC m=+40.474431796 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-f6wfx" (UID: "79162d90-14dd-4df9-9bcd-10c2c666cae7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 13:49:09 crc kubenswrapper[5050]: I1211 13:49:09.231425 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 13:49:09 crc kubenswrapper[5050]: E1211 13:49:09.231589 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 13:49:09.731554601 +0000 UTC m=+40.575277187 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 13:49:09 crc kubenswrapper[5050]: I1211 13:49:09.231739 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f6wfx\" (UID: \"79162d90-14dd-4df9-9bcd-10c2c666cae7\") " pod="openshift-image-registry/image-registry-697d97f7c8-f6wfx" Dec 11 13:49:09 crc kubenswrapper[5050]: E1211 13:49:09.232075 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 13:49:09.732067495 +0000 UTC m=+40.575790081 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-f6wfx" (UID: "79162d90-14dd-4df9-9bcd-10c2c666cae7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 13:49:09 crc kubenswrapper[5050]: I1211 13:49:09.297754 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 13:49:09 crc kubenswrapper[5050]: W1211 13:49:09.329646 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddfc40369_6f7d_4ab2_89b0_60bfc12e9bfe.slice/crio-4f34b67da751d52e18be9a5bc8bb21b1fd7e5a4c4a238bdfa4b7e60fb2a7b315 WatchSource:0}: Error finding container 4f34b67da751d52e18be9a5bc8bb21b1fd7e5a4c4a238bdfa4b7e60fb2a7b315: Status 404 returned error can't find the container with id 4f34b67da751d52e18be9a5bc8bb21b1fd7e5a4c4a238bdfa4b7e60fb2a7b315 Dec 11 13:49:09 crc kubenswrapper[5050]: I1211 13:49:09.334821 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 13:49:09 crc kubenswrapper[5050]: E1211 13:49:09.335380 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 13:49:09.835357752 +0000 UTC m=+40.679080338 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 13:49:09 crc kubenswrapper[5050]: W1211 13:49:09.344380 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcff9c4b2_d51d_4a2d_b325_522d1b2a4442.slice/crio-9b28635ef97fc7e3c56ab02ef559ab04df4dc26ac82a2b2d7cc45c142eaaccb3 WatchSource:0}: Error finding container 9b28635ef97fc7e3c56ab02ef559ab04df4dc26ac82a2b2d7cc45c142eaaccb3: Status 404 returned error can't find the container with id 9b28635ef97fc7e3c56ab02ef559ab04df4dc26ac82a2b2d7cc45c142eaaccb3 Dec 11 13:49:09 crc kubenswrapper[5050]: I1211 13:49:09.344449 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-tkg2n"] Dec 11 13:49:09 crc kubenswrapper[5050]: I1211 13:49:09.357691 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29424345-gxsbc"] Dec 11 13:49:09 crc kubenswrapper[5050]: I1211 13:49:09.366021 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r54sd"] Dec 11 13:49:09 crc kubenswrapper[5050]: I1211 13:49:09.370456 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-mbbdj"] Dec 11 13:49:09 crc kubenswrapper[5050]: I1211 13:49:09.403330 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8qmpk"] Dec 11 13:49:09 crc kubenswrapper[5050]: I1211 13:49:09.436721 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f6wfx\" (UID: \"79162d90-14dd-4df9-9bcd-10c2c666cae7\") " pod="openshift-image-registry/image-registry-697d97f7c8-f6wfx" Dec 11 13:49:09 crc kubenswrapper[5050]: E1211 13:49:09.438697 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 13:49:09.93866541 +0000 UTC m=+40.782387996 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-f6wfx" (UID: "79162d90-14dd-4df9-9bcd-10c2c666cae7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 13:49:09 crc kubenswrapper[5050]: W1211 13:49:09.448365 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod80709272_78a8_46c2_82af_9109c6f2048f.slice/crio-2991bffba7d909682935b0e36bc8f0137410dae32413fefab620649e9a907bfa WatchSource:0}: Error finding container 2991bffba7d909682935b0e36bc8f0137410dae32413fefab620649e9a907bfa: Status 404 returned error can't find the container with id 2991bffba7d909682935b0e36bc8f0137410dae32413fefab620649e9a907bfa Dec 11 13:49:09 crc kubenswrapper[5050]: I1211 13:49:09.537466 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 13:49:09 crc kubenswrapper[5050]: E1211 13:49:09.537705 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 13:49:10.037665371 +0000 UTC m=+40.881387967 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 13:49:09 crc kubenswrapper[5050]: I1211 13:49:09.538479 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f6wfx\" (UID: \"79162d90-14dd-4df9-9bcd-10c2c666cae7\") " pod="openshift-image-registry/image-registry-697d97f7c8-f6wfx" Dec 11 13:49:09 crc kubenswrapper[5050]: E1211 13:49:09.538849 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 13:49:10.038837072 +0000 UTC m=+40.882559728 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-f6wfx" (UID: "79162d90-14dd-4df9-9bcd-10c2c666cae7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 13:49:09 crc kubenswrapper[5050]: W1211 13:49:09.557874 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddbd5b107_5d08_43af_881c_11540f395267.slice/crio-c805849922fd43258b62b681c62c67d9dcc81b35b6272fbfa3be359310c2b889 WatchSource:0}: Error finding container c805849922fd43258b62b681c62c67d9dcc81b35b6272fbfa3be359310c2b889: Status 404 returned error can't find the container with id c805849922fd43258b62b681c62c67d9dcc81b35b6272fbfa3be359310c2b889 Dec 11 13:49:09 crc kubenswrapper[5050]: W1211 13:49:09.559350 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod78f80616_d1e0_4152_a7fb_99a512670f27.slice/crio-724ea55e5f204f0621a9a9de791cb1b4f412b55a23d60a387abfe8fa28b17be1 WatchSource:0}: Error finding container 724ea55e5f204f0621a9a9de791cb1b4f412b55a23d60a387abfe8fa28b17be1: Status 404 returned error can't find the container with id 724ea55e5f204f0621a9a9de791cb1b4f412b55a23d60a387abfe8fa28b17be1 Dec 11 13:49:09 crc kubenswrapper[5050]: W1211 13:49:09.567207 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2b344bba_8e1d_415b_9b5f_e21d3144fe42.slice/crio-41bbab175201ff09f8047531eeccd83706cae3af36234ebe8263f75bf4de7572 WatchSource:0}: Error finding container 41bbab175201ff09f8047531eeccd83706cae3af36234ebe8263f75bf4de7572: Status 404 returned error can't find the container with id 41bbab175201ff09f8047531eeccd83706cae3af36234ebe8263f75bf4de7572 Dec 11 13:49:09 crc kubenswrapper[5050]: W1211 13:49:09.593785 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcf02ac59_1ff0_46d9_a346_cfdb183dee53.slice/crio-26e3d14d4e6fc7914d8dba2d0ecc245907f684bb5effa78c29d13502e7d7424f WatchSource:0}: Error finding container 26e3d14d4e6fc7914d8dba2d0ecc245907f684bb5effa78c29d13502e7d7424f: Status 404 returned error can't find the container with id 26e3d14d4e6fc7914d8dba2d0ecc245907f684bb5effa78c29d13502e7d7424f Dec 11 13:49:09 crc kubenswrapper[5050]: W1211 13:49:09.600203 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod81255772_fddc_4936_8de3_da4649c32d1f.slice/crio-668e709e005f74fc247d68139d38e74c37542de20c4efa9721a581dd079a3684 WatchSource:0}: Error finding container 668e709e005f74fc247d68139d38e74c37542de20c4efa9721a581dd079a3684: Status 404 returned error can't find the container with id 668e709e005f74fc247d68139d38e74c37542de20c4efa9721a581dd079a3684 Dec 11 13:49:09 crc kubenswrapper[5050]: I1211 13:49:09.641706 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 
11 13:49:09 crc kubenswrapper[5050]: E1211 13:49:09.642314 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 13:49:10.142276723 +0000 UTC m=+40.985999309 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 13:49:09 crc kubenswrapper[5050]: I1211 13:49:09.676554 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-lttxf"] Dec 11 13:49:09 crc kubenswrapper[5050]: I1211 13:49:09.751582 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f6wfx\" (UID: \"79162d90-14dd-4df9-9bcd-10c2c666cae7\") " pod="openshift-image-registry/image-registry-697d97f7c8-f6wfx" Dec 11 13:49:09 crc kubenswrapper[5050]: E1211 13:49:09.752319 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 13:49:10.252301912 +0000 UTC m=+41.096024488 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-f6wfx" (UID: "79162d90-14dd-4df9-9bcd-10c2c666cae7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 13:49:09 crc kubenswrapper[5050]: I1211 13:49:09.854571 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 13:49:09 crc kubenswrapper[5050]: E1211 13:49:09.855353 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 13:49:10.355334662 +0000 UTC m=+41.199057248 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 13:49:09 crc kubenswrapper[5050]: I1211 13:49:09.956995 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f6wfx\" (UID: \"79162d90-14dd-4df9-9bcd-10c2c666cae7\") " pod="openshift-image-registry/image-registry-697d97f7c8-f6wfx" Dec 11 13:49:09 crc kubenswrapper[5050]: E1211 13:49:09.957466 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 13:49:10.457437427 +0000 UTC m=+41.301160013 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-f6wfx" (UID: "79162d90-14dd-4df9-9bcd-10c2c666cae7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 13:49:10 crc kubenswrapper[5050]: I1211 13:49:10.059821 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 13:49:10 crc kubenswrapper[5050]: E1211 13:49:10.060180 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 13:49:10.560156859 +0000 UTC m=+41.403879445 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 13:49:10 crc kubenswrapper[5050]: I1211 13:49:10.126709 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-j2zlh" event={"ID":"95cc22f3-bae0-4461-a63e-0adbddd76cbb","Type":"ContainerStarted","Data":"cddcd660f279b4b6f723c5864e232ae8d556372bcf0197c0ca08d3b032b21a16"} Dec 11 13:49:10 crc kubenswrapper[5050]: I1211 13:49:10.133926 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r54sd" event={"ID":"dbd5b107-5d08-43af-881c-11540f395267","Type":"ContainerStarted","Data":"c805849922fd43258b62b681c62c67d9dcc81b35b6272fbfa3be359310c2b889"} Dec 11 13:49:10 crc kubenswrapper[5050]: I1211 13:49:10.147524 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29424345-gxsbc" event={"ID":"78f80616-d1e0-4152-a7fb-99a512670f27","Type":"ContainerStarted","Data":"724ea55e5f204f0621a9a9de791cb1b4f412b55a23d60a387abfe8fa28b17be1"} Dec 11 13:49:10 crc kubenswrapper[5050]: I1211 13:49:10.148492 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-bxjjm" event={"ID":"80709272-78a8-46c2-82af-9109c6f2048f","Type":"ContainerStarted","Data":"2991bffba7d909682935b0e36bc8f0137410dae32413fefab620649e9a907bfa"} Dec 11 13:49:10 crc kubenswrapper[5050]: I1211 13:49:10.150184 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-krl9b" event={"ID":"d33f8d3b-768a-44d3-bdf6-1a885b096055","Type":"ContainerStarted","Data":"2f251d9783ff20390b3ba84ccc16a4e559d825f1c9e37af0bf888de3412e7026"} Dec 11 13:49:10 crc kubenswrapper[5050]: I1211 13:49:10.152689 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-24wkc" event={"ID":"6eec0801-2040-47f2-88b7-f39b0498746e","Type":"ContainerStarted","Data":"6d61024e2b9fa53670b3548b974c7fdfe271ca6a157fd9031690dc3b51bd5828"} Dec 11 13:49:10 crc kubenswrapper[5050]: I1211 13:49:10.154283 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-mbbdj" event={"ID":"2b344bba-8e1d-415b-9b5f-e21d3144fe42","Type":"ContainerStarted","Data":"41bbab175201ff09f8047531eeccd83706cae3af36234ebe8263f75bf4de7572"} Dec 11 13:49:10 crc kubenswrapper[5050]: I1211 13:49:10.157111 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-t75hp" event={"ID":"dfc40369-6f7d-4ab2-89b0-60bfc12e9bfe","Type":"ContainerStarted","Data":"4f34b67da751d52e18be9a5bc8bb21b1fd7e5a4c4a238bdfa4b7e60fb2a7b315"} Dec 11 13:49:10 crc kubenswrapper[5050]: I1211 13:49:10.158529 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ckbbn" event={"ID":"e32c23eb-6cda-4f54-bb1b-d8512e4d30cc","Type":"ContainerStarted","Data":"5c5f863b48944675341a1c96f236903307c14e4de7b7aa3381e358c2b2c6c792"} Dec 11 13:49:10 crc 
kubenswrapper[5050]: I1211 13:49:10.161127 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f6wfx\" (UID: \"79162d90-14dd-4df9-9bcd-10c2c666cae7\") " pod="openshift-image-registry/image-registry-697d97f7c8-f6wfx" Dec 11 13:49:10 crc kubenswrapper[5050]: E1211 13:49:10.161647 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 13:49:10.661626896 +0000 UTC m=+41.505349482 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-f6wfx" (UID: "79162d90-14dd-4df9-9bcd-10c2c666cae7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 13:49:10 crc kubenswrapper[5050]: I1211 13:49:10.176428 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gxtkz" event={"ID":"c37f7db2-f363-4ef9-8c5f-3c6d0ed3f75c","Type":"ContainerStarted","Data":"386136167fc238c2fd94a179b1424a0cdcfe67d6a2672c548a13e68a7d00f3b5"} Dec 11 13:49:10 crc kubenswrapper[5050]: I1211 13:49:10.184570 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-zljtn" event={"ID":"465806d7-7b39-4ddf-b098-8bed4c0c5a3a","Type":"ContainerStarted","Data":"c47be26f9189a90a414827e2407c23de1311c03af8851cfa1787666e1f506950"} Dec 11 13:49:10 crc kubenswrapper[5050]: I1211 13:49:10.200848 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tkg2n" event={"ID":"cf02ac59-1ff0-46d9-a346-cfdb183dee53","Type":"ContainerStarted","Data":"26e3d14d4e6fc7914d8dba2d0ecc245907f684bb5effa78c29d13502e7d7424f"} Dec 11 13:49:10 crc kubenswrapper[5050]: I1211 13:49:10.202143 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-lttxf" event={"ID":"2b16e336-8c81-45d1-a527-599b29a7c070","Type":"ContainerStarted","Data":"d001937cbefe46ce5585b174ce2a36542d76d6966a0a6135acaea7b652140a7f"} Dec 11 13:49:10 crc kubenswrapper[5050]: I1211 13:49:10.208271 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8qmpk" event={"ID":"81255772-fddc-4936-8de3-da4649c32d1f","Type":"ContainerStarted","Data":"668e709e005f74fc247d68139d38e74c37542de20c4efa9721a581dd079a3684"} Dec 11 13:49:10 crc kubenswrapper[5050]: I1211 13:49:10.211326 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-pqplb" event={"ID":"cff9c4b2-d51d-4a2d-b325-522d1b2a4442","Type":"ContainerStarted","Data":"9b28635ef97fc7e3c56ab02ef559ab04df4dc26ac82a2b2d7cc45c142eaaccb3"} Dec 11 13:49:10 crc kubenswrapper[5050]: I1211 13:49:10.217203 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-dsrxh" 
event={"ID":"f89bbffa-4dbb-4aac-bb26-c146a6037f67","Type":"ContainerStarted","Data":"fc5e5901908072895ce6bc792de570cafa3077d08f6bc457e65be72bdab09732"} Dec 11 13:49:10 crc kubenswrapper[5050]: I1211 13:49:10.217245 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-skz5v" Dec 11 13:49:10 crc kubenswrapper[5050]: I1211 13:49:10.219101 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-l4w2d" Dec 11 13:49:10 crc kubenswrapper[5050]: I1211 13:49:10.228403 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-skz5v" Dec 11 13:49:10 crc kubenswrapper[5050]: I1211 13:49:10.262700 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 13:49:10 crc kubenswrapper[5050]: E1211 13:49:10.266904 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 13:49:10.766869026 +0000 UTC m=+41.610591612 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 13:49:10 crc kubenswrapper[5050]: I1211 13:49:10.280518 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-zljtn" podStartSLOduration=21.280474985 podStartE2EDuration="21.280474985s" podCreationTimestamp="2025-12-11 13:48:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 13:49:10.279592001 +0000 UTC m=+41.123314607" watchObservedRunningTime="2025-12-11 13:49:10.280474985 +0000 UTC m=+41.124197571" Dec 11 13:49:10 crc kubenswrapper[5050]: I1211 13:49:10.303217 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-gp9fp" podStartSLOduration=22.30319255 podStartE2EDuration="22.30319255s" podCreationTimestamp="2025-12-11 13:48:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 13:49:10.302683056 +0000 UTC m=+41.146405642" watchObservedRunningTime="2025-12-11 13:49:10.30319255 +0000 UTC m=+41.146915136" Dec 11 13:49:10 crc kubenswrapper[5050]: I1211 13:49:10.362290 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-rstxr" podStartSLOduration=22.36227017 podStartE2EDuration="22.36227017s" podCreationTimestamp="2025-12-11 13:48:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2025-12-11 13:49:10.332318099 +0000 UTC m=+41.176040685" watchObservedRunningTime="2025-12-11 13:49:10.36227017 +0000 UTC m=+41.205992756" Dec 11 13:49:10 crc kubenswrapper[5050]: I1211 13:49:10.365995 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f6wfx\" (UID: \"79162d90-14dd-4df9-9bcd-10c2c666cae7\") " pod="openshift-image-registry/image-registry-697d97f7c8-f6wfx" Dec 11 13:49:10 crc kubenswrapper[5050]: E1211 13:49:10.366397 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 13:49:10.866385181 +0000 UTC m=+41.710107767 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-f6wfx" (UID: "79162d90-14dd-4df9-9bcd-10c2c666cae7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 13:49:10 crc kubenswrapper[5050]: I1211 13:49:10.388618 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ckbbn" podStartSLOduration=22.388593383 podStartE2EDuration="22.388593383s" podCreationTimestamp="2025-12-11 13:48:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 13:49:10.364964113 +0000 UTC m=+41.208686699" watchObservedRunningTime="2025-12-11 13:49:10.388593383 +0000 UTC m=+41.232315979" Dec 11 13:49:10 crc kubenswrapper[5050]: I1211 13:49:10.413655 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-l4w2d" podStartSLOduration=22.413628511 podStartE2EDuration="22.413628511s" podCreationTimestamp="2025-12-11 13:48:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 13:49:10.411244446 +0000 UTC m=+41.254967032" watchObservedRunningTime="2025-12-11 13:49:10.413628511 +0000 UTC m=+41.257351097" Dec 11 13:49:10 crc kubenswrapper[5050]: I1211 13:49:10.445469 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-dsrxh" podStartSLOduration=22.445446522 podStartE2EDuration="22.445446522s" podCreationTimestamp="2025-12-11 13:48:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 13:49:10.431313289 +0000 UTC m=+41.275035875" watchObservedRunningTime="2025-12-11 13:49:10.445446522 +0000 UTC m=+41.289169108" Dec 11 13:49:10 crc kubenswrapper[5050]: I1211 13:49:10.475329 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 13:49:10 crc kubenswrapper[5050]: E1211 13:49:10.475811 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 13:49:10.975786474 +0000 UTC m=+41.819509060 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 13:49:10 crc kubenswrapper[5050]: I1211 13:49:10.503950 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-skz5v" podStartSLOduration=22.503925816 podStartE2EDuration="22.503925816s" podCreationTimestamp="2025-12-11 13:48:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 13:49:10.501539911 +0000 UTC m=+41.345262527" watchObservedRunningTime="2025-12-11 13:49:10.503925816 +0000 UTC m=+41.347648412" Dec 11 13:49:10 crc kubenswrapper[5050]: I1211 13:49:10.528792 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-jmrjt" podStartSLOduration=22.528773339 podStartE2EDuration="22.528773339s" podCreationTimestamp="2025-12-11 13:48:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 13:49:10.525975673 +0000 UTC m=+41.369698259" watchObservedRunningTime="2025-12-11 13:49:10.528773339 +0000 UTC m=+41.372495925" Dec 11 13:49:10 crc kubenswrapper[5050]: I1211 13:49:10.570313 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-qc97s" podStartSLOduration=22.570290213 podStartE2EDuration="22.570290213s" podCreationTimestamp="2025-12-11 13:48:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 13:49:10.566639304 +0000 UTC m=+41.410361890" watchObservedRunningTime="2025-12-11 13:49:10.570290213 +0000 UTC m=+41.414012819" Dec 11 13:49:10 crc kubenswrapper[5050]: I1211 13:49:10.579406 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f6wfx\" (UID: \"79162d90-14dd-4df9-9bcd-10c2c666cae7\") " pod="openshift-image-registry/image-registry-697d97f7c8-f6wfx" Dec 11 13:49:10 crc kubenswrapper[5050]: E1211 13:49:10.579867 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 13:49:11.079853062 +0000 UTC m=+41.923575648 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-f6wfx" (UID: "79162d90-14dd-4df9-9bcd-10c2c666cae7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 13:49:10 crc kubenswrapper[5050]: I1211 13:49:10.680394 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 13:49:10 crc kubenswrapper[5050]: E1211 13:49:10.681302 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 13:49:11.181284559 +0000 UTC m=+42.025007145 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 13:49:10 crc kubenswrapper[5050]: I1211 13:49:10.784077 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f6wfx\" (UID: \"79162d90-14dd-4df9-9bcd-10c2c666cae7\") " pod="openshift-image-registry/image-registry-697d97f7c8-f6wfx" Dec 11 13:49:10 crc kubenswrapper[5050]: E1211 13:49:10.784954 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 13:49:11.284939006 +0000 UTC m=+42.128661582 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-f6wfx" (UID: "79162d90-14dd-4df9-9bcd-10c2c666cae7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 13:49:10 crc kubenswrapper[5050]: I1211 13:49:10.885615 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 13:49:10 crc kubenswrapper[5050]: E1211 13:49:10.885853 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 13:49:11.385815917 +0000 UTC m=+42.229538503 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 13:49:10 crc kubenswrapper[5050]: I1211 13:49:10.885940 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f6wfx\" (UID: \"79162d90-14dd-4df9-9bcd-10c2c666cae7\") " pod="openshift-image-registry/image-registry-697d97f7c8-f6wfx" Dec 11 13:49:10 crc kubenswrapper[5050]: E1211 13:49:10.886701 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 13:49:11.386681721 +0000 UTC m=+42.230404307 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-f6wfx" (UID: "79162d90-14dd-4df9-9bcd-10c2c666cae7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 13:49:10 crc kubenswrapper[5050]: I1211 13:49:10.987619 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 13:49:10 crc kubenswrapper[5050]: E1211 13:49:10.987974 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 13:49:11.487940133 +0000 UTC m=+42.331662719 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 13:49:10 crc kubenswrapper[5050]: I1211 13:49:10.988221 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f6wfx\" (UID: \"79162d90-14dd-4df9-9bcd-10c2c666cae7\") " pod="openshift-image-registry/image-registry-697d97f7c8-f6wfx" Dec 11 13:49:10 crc kubenswrapper[5050]: E1211 13:49:10.988746 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 13:49:11.488720444 +0000 UTC m=+42.332443100 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-f6wfx" (UID: "79162d90-14dd-4df9-9bcd-10c2c666cae7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 13:49:11 crc kubenswrapper[5050]: I1211 13:49:11.089505 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 13:49:11 crc kubenswrapper[5050]: E1211 13:49:11.089977 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 13:49:11.589956106 +0000 UTC m=+42.433678692 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 13:49:11 crc kubenswrapper[5050]: I1211 13:49:11.191489 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f6wfx\" (UID: \"79162d90-14dd-4df9-9bcd-10c2c666cae7\") " pod="openshift-image-registry/image-registry-697d97f7c8-f6wfx" Dec 11 13:49:11 crc kubenswrapper[5050]: E1211 13:49:11.191898 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 13:49:11.691883466 +0000 UTC m=+42.535606052 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-f6wfx" (UID: "79162d90-14dd-4df9-9bcd-10c2c666cae7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 13:49:11 crc kubenswrapper[5050]: I1211 13:49:11.217753 5050 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-l4w2d container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.26:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 11 13:49:11 crc kubenswrapper[5050]: I1211 13:49:11.218094 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-l4w2d" podUID="fb05f4f3-f5be-4823-934f-14d5c48b43c1" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.26:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 11 13:49:11 crc kubenswrapper[5050]: I1211 13:49:11.223473 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cnp7n" event={"ID":"1d89350d-55e9-4ef6-8182-287894b6c14b","Type":"ContainerStarted","Data":"1159aa4debe407a8fd7477f9060abe7e2e3c2a44e60387647f3e9553a5b988fd"} Dec 11 13:49:11 crc kubenswrapper[5050]: I1211 13:49:11.225376 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-mg97h" event={"ID":"534031e8-875b-4fd2-91cf-bc969db66c22","Type":"ContainerStarted","Data":"9a4d59b929afe06c85c7e7aecba9929e521a69d3e3812606384b54a47b59e924"} Dec 11 13:49:11 crc kubenswrapper[5050]: I1211 13:49:11.226743 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-24wkc" event={"ID":"6eec0801-2040-47f2-88b7-f39b0498746e","Type":"ContainerStarted","Data":"87655f9db789e2aab1bbbeb5b926a974942e84408b18dcaa5803a5c326d3d7a0"} Dec 11 13:49:11 crc kubenswrapper[5050]: I1211 13:49:11.228933 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-j2zlh" event={"ID":"95cc22f3-bae0-4461-a63e-0adbddd76cbb","Type":"ContainerStarted","Data":"8f5510de2f30bf1b17f4d6b9219eecc4f187045c15c02b5430cc38fd45c98d7f"} Dec 11 13:49:11 crc kubenswrapper[5050]: I1211 13:49:11.230559 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9jns7" event={"ID":"d1f35830-d883-4b41-ab97-7c382dec0387","Type":"ContainerStarted","Data":"689a213fb689f3fe9f5a9ed339f34a821581957e330d7fbf10ef1b88d706394e"} Dec 11 13:49:11 crc kubenswrapper[5050]: I1211 13:49:11.231831 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-pqplb" event={"ID":"cff9c4b2-d51d-4a2d-b325-522d1b2a4442","Type":"ContainerStarted","Data":"2d994b4ee6318bb2801313e5c5e9a2388fe384c16896ce010a7d776861126821"} Dec 11 13:49:11 crc kubenswrapper[5050]: I1211 13:49:11.233114 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-p9jmx" 
event={"ID":"8f687791-2831-4665-bb32-ee48ab6e70be","Type":"ContainerStarted","Data":"89e622f1f2a7ec200a5a01f9388e42ff9bc39192cbc98de7d5503694481f631b"} Dec 11 13:49:11 crc kubenswrapper[5050]: I1211 13:49:11.234594 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gxtkz" event={"ID":"c37f7db2-f363-4ef9-8c5f-3c6d0ed3f75c","Type":"ContainerStarted","Data":"6e3b8da7a5f303ed2bf90de0701847c644fc5abb4e5c578702ff58c449e52cd3"} Dec 11 13:49:11 crc kubenswrapper[5050]: I1211 13:49:11.235717 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r54sd" event={"ID":"dbd5b107-5d08-43af-881c-11540f395267","Type":"ContainerStarted","Data":"c4fcd3e0277409e398f4cc4d635cc0accc5ab2df1e1cb5b5780d5fabcb6748cf"} Dec 11 13:49:11 crc kubenswrapper[5050]: I1211 13:49:11.236936 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-6t4gf" event={"ID":"164f4fce-a190-451c-b056-103eadc5bb6d","Type":"ContainerStarted","Data":"7f7b879740fe938adfa0bea02b00d8e1e30e3b12a8c0532e2bed3ccc4def7c38"} Dec 11 13:49:11 crc kubenswrapper[5050]: I1211 13:49:11.239026 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-9zt5j" event={"ID":"c6b95f15-c748-43a9-8ca6-2007cd1727e5","Type":"ContainerStarted","Data":"cdcf360eaeca4a5c787865a45f6e022014f6ba458157725a2d46e40c5a416f92"} Dec 11 13:49:11 crc kubenswrapper[5050]: I1211 13:49:11.240930 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-5zrm6" event={"ID":"5002875e-cc97-4cd1-a72a-f8611227e58c","Type":"ContainerStarted","Data":"4673957648b897a260712e18c1cb3bfd61d79da4145b8a4d43063aeb06a226c1"} Dec 11 13:49:11 crc kubenswrapper[5050]: I1211 13:49:11.242249 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-dtlb9" event={"ID":"cef7b97f-083b-44ae-9357-94f97b3eb30c","Type":"ContainerStarted","Data":"12ea92e05ef5068bc00cef879e5d62acde88d43a07bda6ab527efc4e92c7bde3"} Dec 11 13:49:11 crc kubenswrapper[5050]: I1211 13:49:11.243555 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29424345-gxsbc" event={"ID":"78f80616-d1e0-4152-a7fb-99a512670f27","Type":"ContainerStarted","Data":"a3e97dcb11dcb12023d7aac6301414eada55c698af12daef0dd5afda535de932"} Dec 11 13:49:11 crc kubenswrapper[5050]: I1211 13:49:11.244923 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-rmlj6" event={"ID":"18d0253b-d7da-4be9-8fc3-6911f1c92076","Type":"ContainerStarted","Data":"1b79085b5d7c7cda571d08b9ebfbe11a57da21a8992138208b2e1f41be5d86d2"} Dec 11 13:49:11 crc kubenswrapper[5050]: I1211 13:49:11.246358 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-t75hp" event={"ID":"dfc40369-6f7d-4ab2-89b0-60bfc12e9bfe","Type":"ContainerStarted","Data":"4e890e83a0edde3338670c600a6cebb758ccf26b93f8b64cc55b20b5501c0c7d"} Dec 11 13:49:11 crc kubenswrapper[5050]: I1211 13:49:11.247554 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-vftb9" event={"ID":"fe7b4bda-d33b-4c5a-a6f7-0f074d2ae3f9","Type":"ContainerStarted","Data":"f118b2d876f95b8b67826cea3194e88b634e52f6e8d825adb2f2d934cf9d74f9"} Dec 11 
13:49:11 crc kubenswrapper[5050]: I1211 13:49:11.248754 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-c6twg" event={"ID":"26b6bd08-d432-4a3b-b739-d575ca32ac6e","Type":"ContainerStarted","Data":"e6961dd22103c5ce80c2ef5f9f74a9d652ffb1c32e26b514500ca3a6393218a1"} Dec 11 13:49:11 crc kubenswrapper[5050]: I1211 13:49:11.249839 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-25p7l" event={"ID":"66dfe3f5-9e7a-4e1c-b03a-b6f01dab6ef4","Type":"ContainerStarted","Data":"78ca8a53cf730414ee7a1a74e485c6886883b82801e77094366535ddc23ba09b"} Dec 11 13:49:11 crc kubenswrapper[5050]: I1211 13:49:11.251421 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mrg9f" event={"ID":"4953940c-67b1-4a85-851f-ad290b9a0d0d","Type":"ContainerStarted","Data":"8a972b1f33247f79fa44c064e4b8d3cc00acc806c9c831e07a0894ad551bbd1a"} Dec 11 13:49:11 crc kubenswrapper[5050]: I1211 13:49:11.254696 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-6t4gf" podStartSLOduration=22.254686666 podStartE2EDuration="22.254686666s" podCreationTimestamp="2025-12-11 13:48:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 13:49:11.25298833 +0000 UTC m=+42.096710916" watchObservedRunningTime="2025-12-11 13:49:11.254686666 +0000 UTC m=+42.098409252" Dec 11 13:49:11 crc kubenswrapper[5050]: I1211 13:49:11.254783 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-mv9g5" podStartSLOduration=23.254779249 podStartE2EDuration="23.254779249s" podCreationTimestamp="2025-12-11 13:48:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 13:49:10.605692422 +0000 UTC m=+41.449415008" watchObservedRunningTime="2025-12-11 13:49:11.254779249 +0000 UTC m=+42.098501835" Dec 11 13:49:11 crc kubenswrapper[5050]: I1211 13:49:11.255302 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-9ttg9" event={"ID":"a182f606-2976-4cb9-8175-65b012a06596","Type":"ContainerStarted","Data":"4ce561400459f7f44f81bcd0d755d2d98b7a614120a3473f6c1eb6bc3c37b956"} Dec 11 13:49:11 crc kubenswrapper[5050]: I1211 13:49:11.293099 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 13:49:11 crc kubenswrapper[5050]: E1211 13:49:11.293347 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 13:49:11.793298372 +0000 UTC m=+42.637020958 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 13:49:11 crc kubenswrapper[5050]: I1211 13:49:11.294455 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f6wfx\" (UID: \"79162d90-14dd-4df9-9bcd-10c2c666cae7\") " pod="openshift-image-registry/image-registry-697d97f7c8-f6wfx" Dec 11 13:49:11 crc kubenswrapper[5050]: E1211 13:49:11.294880 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 13:49:11.794857494 +0000 UTC m=+42.638580080 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-f6wfx" (UID: "79162d90-14dd-4df9-9bcd-10c2c666cae7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 13:49:11 crc kubenswrapper[5050]: I1211 13:49:11.360866 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-l4w2d" Dec 11 13:49:11 crc kubenswrapper[5050]: I1211 13:49:11.396159 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 13:49:11 crc kubenswrapper[5050]: E1211 13:49:11.396387 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 13:49:11.896346023 +0000 UTC m=+42.740068609 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 13:49:11 crc kubenswrapper[5050]: I1211 13:49:11.396453 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f6wfx\" (UID: \"79162d90-14dd-4df9-9bcd-10c2c666cae7\") " pod="openshift-image-registry/image-registry-697d97f7c8-f6wfx" Dec 11 13:49:11 crc kubenswrapper[5050]: E1211 13:49:11.397156 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 13:49:11.897147504 +0000 UTC m=+42.740870090 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-f6wfx" (UID: "79162d90-14dd-4df9-9bcd-10c2c666cae7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 13:49:11 crc kubenswrapper[5050]: I1211 13:49:11.497678 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 13:49:11 crc kubenswrapper[5050]: E1211 13:49:11.498439 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 13:49:11.998415777 +0000 UTC m=+42.842138363 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 13:49:11 crc kubenswrapper[5050]: I1211 13:49:11.498530 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f6wfx\" (UID: \"79162d90-14dd-4df9-9bcd-10c2c666cae7\") " pod="openshift-image-registry/image-registry-697d97f7c8-f6wfx" Dec 11 13:49:11 crc kubenswrapper[5050]: E1211 13:49:11.498961 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 13:49:11.998949811 +0000 UTC m=+42.842672397 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-f6wfx" (UID: "79162d90-14dd-4df9-9bcd-10c2c666cae7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 13:49:11 crc kubenswrapper[5050]: I1211 13:49:11.600787 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 13:49:11 crc kubenswrapper[5050]: E1211 13:49:11.601218 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 13:49:12.10119338 +0000 UTC m=+42.944915966 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 13:49:11 crc kubenswrapper[5050]: I1211 13:49:11.704047 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f6wfx\" (UID: \"79162d90-14dd-4df9-9bcd-10c2c666cae7\") " pod="openshift-image-registry/image-registry-697d97f7c8-f6wfx" Dec 11 13:49:11 crc kubenswrapper[5050]: E1211 13:49:11.704795 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 13:49:12.204779325 +0000 UTC m=+43.048501911 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-f6wfx" (UID: "79162d90-14dd-4df9-9bcd-10c2c666cae7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 13:49:11 crc kubenswrapper[5050]: I1211 13:49:11.805121 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 13:49:11 crc kubenswrapper[5050]: E1211 13:49:11.805577 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 13:49:12.305561324 +0000 UTC m=+43.149283910 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 13:49:11 crc kubenswrapper[5050]: I1211 13:49:11.908733 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f6wfx\" (UID: \"79162d90-14dd-4df9-9bcd-10c2c666cae7\") " pod="openshift-image-registry/image-registry-697d97f7c8-f6wfx" Dec 11 13:49:11 crc kubenswrapper[5050]: E1211 13:49:11.909271 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 13:49:12.409252982 +0000 UTC m=+43.252975578 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-f6wfx" (UID: "79162d90-14dd-4df9-9bcd-10c2c666cae7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 13:49:12 crc kubenswrapper[5050]: I1211 13:49:12.009840 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 13:49:12 crc kubenswrapper[5050]: E1211 13:49:12.010390 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 13:49:12.51036691 +0000 UTC m=+43.354089496 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 13:49:12 crc kubenswrapper[5050]: I1211 13:49:12.111937 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f6wfx\" (UID: \"79162d90-14dd-4df9-9bcd-10c2c666cae7\") " pod="openshift-image-registry/image-registry-697d97f7c8-f6wfx" Dec 11 13:49:12 crc kubenswrapper[5050]: E1211 13:49:12.112454 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 13:49:12.612431004 +0000 UTC m=+43.456153590 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-f6wfx" (UID: "79162d90-14dd-4df9-9bcd-10c2c666cae7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 13:49:12 crc kubenswrapper[5050]: I1211 13:49:12.213944 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 13:49:12 crc kubenswrapper[5050]: E1211 13:49:12.214136 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 13:49:12.714105648 +0000 UTC m=+43.557828234 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 13:49:12 crc kubenswrapper[5050]: I1211 13:49:12.214448 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f6wfx\" (UID: \"79162d90-14dd-4df9-9bcd-10c2c666cae7\") " pod="openshift-image-registry/image-registry-697d97f7c8-f6wfx" Dec 11 13:49:12 crc kubenswrapper[5050]: E1211 13:49:12.214877 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 13:49:12.714854348 +0000 UTC m=+43.558576924 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-f6wfx" (UID: "79162d90-14dd-4df9-9bcd-10c2c666cae7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 13:49:12 crc kubenswrapper[5050]: I1211 13:49:12.264285 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-25p7l" event={"ID":"66dfe3f5-9e7a-4e1c-b03a-b6f01dab6ef4","Type":"ContainerStarted","Data":"2740000a610f85e5fbec7217664ac5c536bd5ded946c53a90abc54633e041950"} Dec 11 13:49:12 crc kubenswrapper[5050]: I1211 13:49:12.264625 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-25p7l" Dec 11 13:49:12 crc kubenswrapper[5050]: I1211 13:49:12.289666 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9jns7" event={"ID":"d1f35830-d883-4b41-ab97-7c382dec0387","Type":"ContainerStarted","Data":"9bcb72a7122b2373360a63afde5cbe6a0bd4eb7948390af3be0726b4527fa52b"} Dec 11 13:49:12 crc kubenswrapper[5050]: I1211 13:49:12.290401 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9jns7" Dec 11 13:49:12 crc kubenswrapper[5050]: I1211 13:49:12.300447 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-krl9b" event={"ID":"d33f8d3b-768a-44d3-bdf6-1a885b096055","Type":"ContainerStarted","Data":"c78bce49482fc146dd830ba01e528305b646d002fa94daee3852981d5e6f3de5"} Dec 11 13:49:12 crc kubenswrapper[5050]: I1211 13:49:12.318697 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 13:49:12 crc kubenswrapper[5050]: I1211 
13:49:12.320247 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tkg2n" event={"ID":"cf02ac59-1ff0-46d9-a346-cfdb183dee53","Type":"ContainerStarted","Data":"55b01e74de4da95eca43408e0893eedd4d6a6ec5ed24f3a8f6e6e410569ac183"} Dec 11 13:49:12 crc kubenswrapper[5050]: I1211 13:49:12.320310 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tkg2n" event={"ID":"cf02ac59-1ff0-46d9-a346-cfdb183dee53","Type":"ContainerStarted","Data":"d631afcf7229e4b7eb3d3d4a3e66f663f0ab4b1fb9f06d259cdea319af4c0f62"} Dec 11 13:49:12 crc kubenswrapper[5050]: E1211 13:49:12.320598 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 13:49:12.820576201 +0000 UTC m=+43.664298797 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 13:49:12 crc kubenswrapper[5050]: I1211 13:49:12.333528 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8qmpk" event={"ID":"81255772-fddc-4936-8de3-da4649c32d1f","Type":"ContainerStarted","Data":"737f9cb69bc5ba682bce46695449bf052170034d47a489f229d1389e142c9bb4"} Dec 11 13:49:12 crc kubenswrapper[5050]: I1211 13:49:12.335171 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8qmpk" Dec 11 13:49:12 crc kubenswrapper[5050]: I1211 13:49:12.337084 5050 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-8qmpk container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.42:8443/healthz\": dial tcp 10.217.0.42:8443: connect: connection refused" start-of-body= Dec 11 13:49:12 crc kubenswrapper[5050]: I1211 13:49:12.337143 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8qmpk" podUID="81255772-fddc-4936-8de3-da4649c32d1f" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.42:8443/healthz\": dial tcp 10.217.0.42:8443: connect: connection refused" Dec 11 13:49:12 crc kubenswrapper[5050]: I1211 13:49:12.338258 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-cd66n" event={"ID":"ea884e88-c0df-4212-976a-0d7ce1731fdc","Type":"ContainerStarted","Data":"e18ad153ed4896a86b1cd8f246d28ad89c887a91753c71f96ab6ca3e68ab303c"} Dec 11 13:49:12 crc kubenswrapper[5050]: I1211 13:49:12.338303 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-cd66n" event={"ID":"ea884e88-c0df-4212-976a-0d7ce1731fdc","Type":"ContainerStarted","Data":"870ab51995eb595a4e2d4b1790de952cb62e6bd39b249e1ed5b3a9e636607508"} Dec 11 13:49:12 crc kubenswrapper[5050]: I1211 13:49:12.359657 5050 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-bxjjm" event={"ID":"80709272-78a8-46c2-82af-9109c6f2048f","Type":"ContainerStarted","Data":"47c773d9e981889a7ca38106d91f0a28ccf3994e5dfb8eb0740fe9ff7aa94eb2"} Dec 11 13:49:12 crc kubenswrapper[5050]: I1211 13:49:12.359718 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-bxjjm" event={"ID":"80709272-78a8-46c2-82af-9109c6f2048f","Type":"ContainerStarted","Data":"7916a66cf2022200178bb5b3547f44e78c259e2e9be419cfa0756f00cf75663e"} Dec 11 13:49:12 crc kubenswrapper[5050]: I1211 13:49:12.367266 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-25p7l" podStartSLOduration=10.367245955 podStartE2EDuration="10.367245955s" podCreationTimestamp="2025-12-11 13:49:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 13:49:12.327372615 +0000 UTC m=+43.171095201" watchObservedRunningTime="2025-12-11 13:49:12.367245955 +0000 UTC m=+43.210968541" Dec 11 13:49:12 crc kubenswrapper[5050]: I1211 13:49:12.383914 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-lttxf" event={"ID":"2b16e336-8c81-45d1-a527-599b29a7c070","Type":"ContainerStarted","Data":"4c5892ee40c9c3b521b4207a174dc16e2ac3d584d983a354ffb9328fa834e139"} Dec 11 13:49:12 crc kubenswrapper[5050]: I1211 13:49:12.407311 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-jmqdr" event={"ID":"64efd0fc-ec3c-403b-ac98-0546f2affa94","Type":"ContainerStarted","Data":"5c1a75f017af99f5c8857233a5dcd1380e21f623c0d7074d9609cb97d30af585"} Dec 11 13:49:12 crc kubenswrapper[5050]: I1211 13:49:12.413668 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9jns7" podStartSLOduration=23.413646871 podStartE2EDuration="23.413646871s" podCreationTimestamp="2025-12-11 13:48:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 13:49:12.3796453 +0000 UTC m=+43.223367886" watchObservedRunningTime="2025-12-11 13:49:12.413646871 +0000 UTC m=+43.257369457" Dec 11 13:49:12 crc kubenswrapper[5050]: I1211 13:49:12.424648 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f6wfx\" (UID: \"79162d90-14dd-4df9-9bcd-10c2c666cae7\") " pod="openshift-image-registry/image-registry-697d97f7c8-f6wfx" Dec 11 13:49:12 crc kubenswrapper[5050]: E1211 13:49:12.424965 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 13:49:12.924951817 +0000 UTC m=+43.768674403 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-f6wfx" (UID: "79162d90-14dd-4df9-9bcd-10c2c666cae7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 13:49:12 crc kubenswrapper[5050]: I1211 13:49:12.472840 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-krl9b" podStartSLOduration=24.472804683 podStartE2EDuration="24.472804683s" podCreationTimestamp="2025-12-11 13:48:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 13:49:12.416720574 +0000 UTC m=+43.260443160" watchObservedRunningTime="2025-12-11 13:49:12.472804683 +0000 UTC m=+43.316527259" Dec 11 13:49:12 crc kubenswrapper[5050]: I1211 13:49:12.473725 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tkg2n" podStartSLOduration=23.473715938 podStartE2EDuration="23.473715938s" podCreationTimestamp="2025-12-11 13:48:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 13:49:12.45680638 +0000 UTC m=+43.300528966" watchObservedRunningTime="2025-12-11 13:49:12.473715938 +0000 UTC m=+43.317438524" Dec 11 13:49:12 crc kubenswrapper[5050]: I1211 13:49:12.525296 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 13:49:12 crc kubenswrapper[5050]: E1211 13:49:12.526966 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 13:49:13.026948509 +0000 UTC m=+43.870671095 (durationBeforeRetry 500ms). 
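The pod_startup_latency_tracker entries interleaved here report podStartSLOduration values that, in this excerpt, equal watchObservedRunningTime minus podCreationTimestamp (the pull timestamps are the zero time because no image pull was needed). A quick Go check of that arithmetic using the dns-default-25p7l values logged above; reformatting the timestamps to RFC 3339 is the only change made to the logged values:

    // Verify the podStartSLOduration arithmetic for dns-default-25p7l:
    // watchObservedRunningTime minus podCreationTimestamp, as logged above.
    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        created, err := time.Parse(time.RFC3339Nano, "2025-12-11T13:49:02Z")
        if err != nil {
            panic(err)
        }
        observed, err := time.Parse(time.RFC3339Nano, "2025-12-11T13:49:12.367245955Z")
        if err != nil {
            panic(err)
        }
        // Prints 10.367245955s, matching podStartSLOduration=10.367245955 in the log.
        fmt.Println("podStartSLOduration =", observed.Sub(created))
    }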
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 13:49:12 crc kubenswrapper[5050]: I1211 13:49:12.562450 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8qmpk" podStartSLOduration=23.56242408 podStartE2EDuration="23.56242408s" podCreationTimestamp="2025-12-11 13:48:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 13:49:12.51035605 +0000 UTC m=+43.354078646" watchObservedRunningTime="2025-12-11 13:49:12.56242408 +0000 UTC m=+43.406146666" Dec 11 13:49:12 crc kubenswrapper[5050]: I1211 13:49:12.574864 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29424345-gxsbc" podStartSLOduration=24.574845036 podStartE2EDuration="24.574845036s" podCreationTimestamp="2025-12-11 13:48:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 13:49:12.562390179 +0000 UTC m=+43.406112765" watchObservedRunningTime="2025-12-11 13:49:12.574845036 +0000 UTC m=+43.418567622" Dec 11 13:49:12 crc kubenswrapper[5050]: I1211 13:49:12.608876 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/cni-sysctl-allowlist-ds-p9jmx" podStartSLOduration=9.608848996999999 podStartE2EDuration="9.608848997s" podCreationTimestamp="2025-12-11 13:49:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 13:49:12.604319765 +0000 UTC m=+43.448042351" watchObservedRunningTime="2025-12-11 13:49:12.608848997 +0000 UTC m=+43.452571583" Dec 11 13:49:12 crc kubenswrapper[5050]: I1211 13:49:12.627828 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f6wfx\" (UID: \"79162d90-14dd-4df9-9bcd-10c2c666cae7\") " pod="openshift-image-registry/image-registry-697d97f7c8-f6wfx" Dec 11 13:49:12 crc kubenswrapper[5050]: E1211 13:49:12.628256 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 13:49:13.128241892 +0000 UTC m=+43.971964478 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-f6wfx" (UID: "79162d90-14dd-4df9-9bcd-10c2c666cae7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 13:49:12 crc kubenswrapper[5050]: I1211 13:49:12.698897 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-c6twg" podStartSLOduration=23.698874375 podStartE2EDuration="23.698874375s" podCreationTimestamp="2025-12-11 13:48:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 13:49:12.649309683 +0000 UTC m=+43.493032269" watchObservedRunningTime="2025-12-11 13:49:12.698874375 +0000 UTC m=+43.542596961" Dec 11 13:49:12 crc kubenswrapper[5050]: I1211 13:49:12.699490 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-vftb9" podStartSLOduration=9.699486262 podStartE2EDuration="9.699486262s" podCreationTimestamp="2025-12-11 13:49:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 13:49:12.699404499 +0000 UTC m=+43.543127075" watchObservedRunningTime="2025-12-11 13:49:12.699486262 +0000 UTC m=+43.543208848" Dec 11 13:49:12 crc kubenswrapper[5050]: I1211 13:49:12.730150 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 13:49:12 crc kubenswrapper[5050]: E1211 13:49:12.730599 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 13:49:13.230582314 +0000 UTC m=+44.074304900 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 13:49:12 crc kubenswrapper[5050]: I1211 13:49:12.769042 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-mg97h" podStartSLOduration=23.769023515 podStartE2EDuration="23.769023515s" podCreationTimestamp="2025-12-11 13:48:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 13:49:12.738380535 +0000 UTC m=+43.582103121" watchObservedRunningTime="2025-12-11 13:49:12.769023515 +0000 UTC m=+43.612746101" Dec 11 13:49:12 crc kubenswrapper[5050]: I1211 13:49:12.806061 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-jmqdr" podStartSLOduration=24.806038657 podStartE2EDuration="24.806038657s" podCreationTimestamp="2025-12-11 13:48:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 13:49:12.804327391 +0000 UTC m=+43.648049977" watchObservedRunningTime="2025-12-11 13:49:12.806038657 +0000 UTC m=+43.649761243" Dec 11 13:49:12 crc kubenswrapper[5050]: I1211 13:49:12.806818 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-t75hp" podStartSLOduration=23.806813408 podStartE2EDuration="23.806813408s" podCreationTimestamp="2025-12-11 13:48:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 13:49:12.775382357 +0000 UTC m=+43.619104943" watchObservedRunningTime="2025-12-11 13:49:12.806813408 +0000 UTC m=+43.650535994" Dec 11 13:49:12 crc kubenswrapper[5050]: I1211 13:49:12.834819 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f6wfx\" (UID: \"79162d90-14dd-4df9-9bcd-10c2c666cae7\") " pod="openshift-image-registry/image-registry-697d97f7c8-f6wfx" Dec 11 13:49:12 crc kubenswrapper[5050]: E1211 13:49:12.835165 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 13:49:13.335150326 +0000 UTC m=+44.178872912 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-f6wfx" (UID: "79162d90-14dd-4df9-9bcd-10c2c666cae7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 13:49:12 crc kubenswrapper[5050]: I1211 13:49:12.839399 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-dtlb9" podStartSLOduration=24.83938 podStartE2EDuration="24.83938s" podCreationTimestamp="2025-12-11 13:48:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 13:49:12.837091128 +0000 UTC m=+43.680813714" watchObservedRunningTime="2025-12-11 13:49:12.83938 +0000 UTC m=+43.683102586" Dec 11 13:49:12 crc kubenswrapper[5050]: I1211 13:49:12.900957 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-j2zlh" podStartSLOduration=23.900932397 podStartE2EDuration="23.900932397s" podCreationTimestamp="2025-12-11 13:48:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 13:49:12.876245938 +0000 UTC m=+43.719968534" watchObservedRunningTime="2025-12-11 13:49:12.900932397 +0000 UTC m=+43.744654983" Dec 11 13:49:12 crc kubenswrapper[5050]: I1211 13:49:12.902670 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-24wkc" podStartSLOduration=23.902664104 podStartE2EDuration="23.902664104s" podCreationTimestamp="2025-12-11 13:48:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 13:49:12.900834814 +0000 UTC m=+43.744557410" watchObservedRunningTime="2025-12-11 13:49:12.902664104 +0000 UTC m=+43.746386690" Dec 11 13:49:12 crc kubenswrapper[5050]: I1211 13:49:12.930182 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-dtlb9" Dec 11 13:49:12 crc kubenswrapper[5050]: I1211 13:49:12.936875 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 13:49:12 crc kubenswrapper[5050]: E1211 13:49:12.937396 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 13:49:13.437373944 +0000 UTC m=+44.281096530 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 13:49:12 crc kubenswrapper[5050]: I1211 13:49:12.948280 5050 patch_prober.go:28] interesting pod/router-default-5444994796-dtlb9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 11 13:49:12 crc kubenswrapper[5050]: [-]has-synced failed: reason withheld Dec 11 13:49:12 crc kubenswrapper[5050]: [+]process-running ok Dec 11 13:49:12 crc kubenswrapper[5050]: healthz check failed Dec 11 13:49:12 crc kubenswrapper[5050]: I1211 13:49:12.948359 5050 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dtlb9" podUID="cef7b97f-083b-44ae-9357-94f97b3eb30c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 11 13:49:12 crc kubenswrapper[5050]: I1211 13:49:12.993612 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cnp7n" podStartSLOduration=23.993588146 podStartE2EDuration="23.993588146s" podCreationTimestamp="2025-12-11 13:48:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 13:49:12.961749114 +0000 UTC m=+43.805471720" watchObservedRunningTime="2025-12-11 13:49:12.993588146 +0000 UTC m=+43.837310732" Dec 11 13:49:12 crc kubenswrapper[5050]: I1211 13:49:12.995816 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-5zrm6" podStartSLOduration=24.995808416 podStartE2EDuration="24.995808416s" podCreationTimestamp="2025-12-11 13:48:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 13:49:12.992694962 +0000 UTC m=+43.836417548" watchObservedRunningTime="2025-12-11 13:49:12.995808416 +0000 UTC m=+43.839531002" Dec 11 13:49:13 crc kubenswrapper[5050]: I1211 13:49:13.038472 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-pqplb" podStartSLOduration=10.038452821 podStartE2EDuration="10.038452821s" podCreationTimestamp="2025-12-11 13:49:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 13:49:13.036634262 +0000 UTC m=+43.880356848" watchObservedRunningTime="2025-12-11 13:49:13.038452821 +0000 UTC m=+43.882175417" Dec 11 13:49:13 crc kubenswrapper[5050]: I1211 13:49:13.038699 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f6wfx\" (UID: \"79162d90-14dd-4df9-9bcd-10c2c666cae7\") " pod="openshift-image-registry/image-registry-697d97f7c8-f6wfx" Dec 11 13:49:13 crc kubenswrapper[5050]: E1211 13:49:13.039064 5050 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 13:49:13.539048947 +0000 UTC m=+44.382771533 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-f6wfx" (UID: "79162d90-14dd-4df9-9bcd-10c2c666cae7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 13:49:13 crc kubenswrapper[5050]: I1211 13:49:13.088360 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-9ttg9" podStartSLOduration=24.088342632 podStartE2EDuration="24.088342632s" podCreationTimestamp="2025-12-11 13:48:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 13:49:13.086209914 +0000 UTC m=+43.929932500" watchObservedRunningTime="2025-12-11 13:49:13.088342632 +0000 UTC m=+43.932065218" Dec 11 13:49:13 crc kubenswrapper[5050]: I1211 13:49:13.139657 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 13:49:13 crc kubenswrapper[5050]: E1211 13:49:13.140142 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 13:49:13.640123274 +0000 UTC m=+44.483845860 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 13:49:13 crc kubenswrapper[5050]: I1211 13:49:13.187984 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r54sd" podStartSLOduration=24.18796256 podStartE2EDuration="24.18796256s" podCreationTimestamp="2025-12-11 13:48:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 13:49:13.129226659 +0000 UTC m=+43.972949245" watchObservedRunningTime="2025-12-11 13:49:13.18796256 +0000 UTC m=+44.031685146" Dec 11 13:49:13 crc kubenswrapper[5050]: I1211 13:49:13.241870 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f6wfx\" (UID: \"79162d90-14dd-4df9-9bcd-10c2c666cae7\") " pod="openshift-image-registry/image-registry-697d97f7c8-f6wfx" Dec 11 13:49:13 crc kubenswrapper[5050]: E1211 13:49:13.242268 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 13:49:13.74225394 +0000 UTC m=+44.585976526 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-f6wfx" (UID: "79162d90-14dd-4df9-9bcd-10c2c666cae7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 13:49:13 crc kubenswrapper[5050]: I1211 13:49:13.285998 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-9zt5j" podStartSLOduration=24.285972644 podStartE2EDuration="24.285972644s" podCreationTimestamp="2025-12-11 13:48:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 13:49:13.194806305 +0000 UTC m=+44.038528911" watchObservedRunningTime="2025-12-11 13:49:13.285972644 +0000 UTC m=+44.129695230" Dec 11 13:49:13 crc kubenswrapper[5050]: I1211 13:49:13.287707 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gxtkz" podStartSLOduration=25.28769905 podStartE2EDuration="25.28769905s" podCreationTimestamp="2025-12-11 13:48:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 13:49:13.285403348 +0000 UTC m=+44.129125934" watchObservedRunningTime="2025-12-11 13:49:13.28769905 +0000 UTC m=+44.131421636" Dec 11 13:49:13 crc kubenswrapper[5050]: I1211 13:49:13.338746 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-cd66n" podStartSLOduration=25.338721001 podStartE2EDuration="25.338721001s" podCreationTimestamp="2025-12-11 13:48:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 13:49:13.333560381 +0000 UTC m=+44.177282977" watchObservedRunningTime="2025-12-11 13:49:13.338721001 +0000 UTC m=+44.182443597" Dec 11 13:49:13 crc kubenswrapper[5050]: I1211 13:49:13.342863 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 13:49:13 crc kubenswrapper[5050]: E1211 13:49:13.343246 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 13:49:13.843228353 +0000 UTC m=+44.686950939 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 13:49:13 crc kubenswrapper[5050]: I1211 13:49:13.405602 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-bxjjm" podStartSLOduration=24.405583682 podStartE2EDuration="24.405583682s" podCreationTimestamp="2025-12-11 13:48:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 13:49:13.402742635 +0000 UTC m=+44.246465221" watchObservedRunningTime="2025-12-11 13:49:13.405583682 +0000 UTC m=+44.249306268" Dec 11 13:49:13 crc kubenswrapper[5050]: I1211 13:49:13.415003 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-mbbdj" event={"ID":"2b344bba-8e1d-415b-9b5f-e21d3144fe42","Type":"ContainerStarted","Data":"b5bea0bcdd2e66523511acdf8482695e27c2297dc7d6b7729ac1ce7866fb5763"} Dec 11 13:49:13 crc kubenswrapper[5050]: I1211 13:49:13.418772 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-lttxf" event={"ID":"2b16e336-8c81-45d1-a527-599b29a7c070","Type":"ContainerStarted","Data":"02a61c847968641998f5a560d1d4806903bd7230b1a5e4532c8cc0022183ff3a"} Dec 11 13:49:13 crc kubenswrapper[5050]: I1211 13:49:13.445668 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f6wfx\" (UID: \"79162d90-14dd-4df9-9bcd-10c2c666cae7\") " pod="openshift-image-registry/image-registry-697d97f7c8-f6wfx" Dec 11 13:49:13 crc kubenswrapper[5050]: E1211 13:49:13.445982 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 13:49:13.945965635 +0000 UTC m=+44.789688221 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-f6wfx" (UID: "79162d90-14dd-4df9-9bcd-10c2c666cae7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 13:49:13 crc kubenswrapper[5050]: I1211 13:49:13.447991 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-rmlj6" podStartSLOduration=24.44797576 podStartE2EDuration="24.44797576s" podCreationTimestamp="2025-12-11 13:48:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 13:49:13.442267035 +0000 UTC m=+44.285989621" watchObservedRunningTime="2025-12-11 13:49:13.44797576 +0000 UTC m=+44.291698346" Dec 11 13:49:13 crc kubenswrapper[5050]: I1211 13:49:13.464799 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8qmpk" Dec 11 13:49:13 crc kubenswrapper[5050]: I1211 13:49:13.473759 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-mrg9f" podStartSLOduration=25.473740767 podStartE2EDuration="25.473740767s" podCreationTimestamp="2025-12-11 13:48:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 13:49:13.471703022 +0000 UTC m=+44.315425608" watchObservedRunningTime="2025-12-11 13:49:13.473740767 +0000 UTC m=+44.317463353" Dec 11 13:49:13 crc kubenswrapper[5050]: I1211 13:49:13.547194 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 13:49:13 crc kubenswrapper[5050]: E1211 13:49:13.547404 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 13:49:14.047364141 +0000 UTC m=+44.891086737 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 13:49:13 crc kubenswrapper[5050]: I1211 13:49:13.547600 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f6wfx\" (UID: \"79162d90-14dd-4df9-9bcd-10c2c666cae7\") " pod="openshift-image-registry/image-registry-697d97f7c8-f6wfx" Dec 11 13:49:13 crc kubenswrapper[5050]: E1211 13:49:13.547930 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 13:49:14.047916726 +0000 UTC m=+44.891639312 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-f6wfx" (UID: "79162d90-14dd-4df9-9bcd-10c2c666cae7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 13:49:13 crc kubenswrapper[5050]: I1211 13:49:13.568615 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-lttxf" podStartSLOduration=24.568593946 podStartE2EDuration="24.568593946s" podCreationTimestamp="2025-12-11 13:48:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 13:49:13.567550528 +0000 UTC m=+44.411273124" watchObservedRunningTime="2025-12-11 13:49:13.568593946 +0000 UTC m=+44.412316532" Dec 11 13:49:13 crc kubenswrapper[5050]: I1211 13:49:13.648682 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 13:49:13 crc kubenswrapper[5050]: E1211 13:49:13.649421 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 13:49:14.149401044 +0000 UTC m=+44.993123630 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 13:49:13 crc kubenswrapper[5050]: I1211 13:49:13.756055 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f6wfx\" (UID: \"79162d90-14dd-4df9-9bcd-10c2c666cae7\") " pod="openshift-image-registry/image-registry-697d97f7c8-f6wfx" Dec 11 13:49:13 crc kubenswrapper[5050]: E1211 13:49:13.756520 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 13:49:14.256505245 +0000 UTC m=+45.100227831 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-f6wfx" (UID: "79162d90-14dd-4df9-9bcd-10c2c666cae7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 13:49:13 crc kubenswrapper[5050]: I1211 13:49:13.857691 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 13:49:13 crc kubenswrapper[5050]: E1211 13:49:13.858176 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 13:49:14.358151757 +0000 UTC m=+45.201874353 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 13:49:13 crc kubenswrapper[5050]: I1211 13:49:13.941183 5050 patch_prober.go:28] interesting pod/router-default-5444994796-dtlb9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 11 13:49:13 crc kubenswrapper[5050]: [-]has-synced failed: reason withheld Dec 11 13:49:13 crc kubenswrapper[5050]: [+]process-running ok Dec 11 13:49:13 crc kubenswrapper[5050]: healthz check failed Dec 11 13:49:13 crc kubenswrapper[5050]: I1211 13:49:13.941271 5050 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dtlb9" podUID="cef7b97f-083b-44ae-9357-94f97b3eb30c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 11 13:49:13 crc kubenswrapper[5050]: I1211 13:49:13.959511 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f6wfx\" (UID: \"79162d90-14dd-4df9-9bcd-10c2c666cae7\") " pod="openshift-image-registry/image-registry-697d97f7c8-f6wfx" Dec 11 13:49:13 crc kubenswrapper[5050]: E1211 13:49:13.959927 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 13:49:14.459907733 +0000 UTC m=+45.303630319 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-f6wfx" (UID: "79162d90-14dd-4df9-9bcd-10c2c666cae7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 13:49:14 crc kubenswrapper[5050]: I1211 13:49:14.061900 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 13:49:14 crc kubenswrapper[5050]: E1211 13:49:14.062119 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 13:49:14.56208835 +0000 UTC m=+45.405810926 (durationBeforeRetry 500ms). 
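The router-default startup probe failures above are plain HTTP probes: the kubelet issues a GET against the container's health endpoint, treats any status outside 200-399 as a failure, and logs the response body ("[-]backend-http failed", "[-]has-synced failed", ...) as the probe output. A rough Go sketch of the same kind of check; the URL is an assumed placeholder, since the real endpoint comes from the pod spec rather than from this log:

    // Reproduce what the kubelet's HTTP prober reports: a GET whose non-2xx/3xx
    // status is treated as a failure, with the body kept as the probe output.
    // The endpoint below is an assumption for illustration only.
    package main

    import (
        "fmt"
        "io"
        "log"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{Timeout: 1 * time.Second} // kubelet's default probe timeout is 1s
        resp, err := client.Get("http://10.217.0.43:1936/healthz") // assumed endpoint
        if err != nil {
            // Analogous to the "connect: connection refused" readiness failures above.
            log.Fatalf("probe error: %v", err)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        if resp.StatusCode < 200 || resp.StatusCode >= 400 {
            fmt.Printf("Probe failed: statuscode %d\n%s\n", resp.StatusCode, body)
            return
        }
        fmt.Printf("Probe succeeded: %d\n", resp.StatusCode)
    }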
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 13:49:14 crc kubenswrapper[5050]: I1211 13:49:14.062555 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f6wfx\" (UID: \"79162d90-14dd-4df9-9bcd-10c2c666cae7\") " pod="openshift-image-registry/image-registry-697d97f7c8-f6wfx" Dec 11 13:49:14 crc kubenswrapper[5050]: E1211 13:49:14.062870 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 13:49:14.562861911 +0000 UTC m=+45.406584497 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-f6wfx" (UID: "79162d90-14dd-4df9-9bcd-10c2c666cae7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 13:49:14 crc kubenswrapper[5050]: I1211 13:49:14.163669 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 13:49:14 crc kubenswrapper[5050]: E1211 13:49:14.163924 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 13:49:14.663893337 +0000 UTC m=+45.507615923 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 13:49:14 crc kubenswrapper[5050]: I1211 13:49:14.164503 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f6wfx\" (UID: \"79162d90-14dd-4df9-9bcd-10c2c666cae7\") " pod="openshift-image-registry/image-registry-697d97f7c8-f6wfx" Dec 11 13:49:14 crc kubenswrapper[5050]: E1211 13:49:14.164867 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 13:49:14.664857743 +0000 UTC m=+45.508580329 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-f6wfx" (UID: "79162d90-14dd-4df9-9bcd-10c2c666cae7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 13:49:14 crc kubenswrapper[5050]: I1211 13:49:14.272294 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 13:49:14 crc kubenswrapper[5050]: E1211 13:49:14.273089 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 13:49:14.773067043 +0000 UTC m=+45.616789629 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 13:49:14 crc kubenswrapper[5050]: I1211 13:49:14.374641 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f6wfx\" (UID: \"79162d90-14dd-4df9-9bcd-10c2c666cae7\") " pod="openshift-image-registry/image-registry-697d97f7c8-f6wfx" Dec 11 13:49:14 crc kubenswrapper[5050]: E1211 13:49:14.375160 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 13:49:14.875136887 +0000 UTC m=+45.718859503 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-f6wfx" (UID: "79162d90-14dd-4df9-9bcd-10c2c666cae7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 13:49:14 crc kubenswrapper[5050]: I1211 13:49:14.433380 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-mbbdj" event={"ID":"2b344bba-8e1d-415b-9b5f-e21d3144fe42","Type":"ContainerStarted","Data":"01def08857c1c82157875a735a790993b639cadb0d19fea1692fd95eef870a56"} Dec 11 13:49:14 crc kubenswrapper[5050]: I1211 13:49:14.433834 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-mbbdj" event={"ID":"2b344bba-8e1d-415b-9b5f-e21d3144fe42","Type":"ContainerStarted","Data":"9bf116f674dde3596f17a505dfcd6ae2a63e461f2217be85394d45a9ad186012"} Dec 11 13:49:14 crc kubenswrapper[5050]: I1211 13:49:14.475681 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 13:49:14 crc kubenswrapper[5050]: E1211 13:49:14.476161 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 13:49:14.976139522 +0000 UTC m=+45.819862108 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 13:49:14 crc kubenswrapper[5050]: I1211 13:49:14.476427 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f6wfx\" (UID: \"79162d90-14dd-4df9-9bcd-10c2c666cae7\") " pod="openshift-image-registry/image-registry-697d97f7c8-f6wfx" Dec 11 13:49:14 crc kubenswrapper[5050]: E1211 13:49:14.477801 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 13:49:14.977784737 +0000 UTC m=+45.821507383 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-f6wfx" (UID: "79162d90-14dd-4df9-9bcd-10c2c666cae7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 13:49:14 crc kubenswrapper[5050]: I1211 13:49:14.543517 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-jmqdr" Dec 11 13:49:14 crc kubenswrapper[5050]: I1211 13:49:14.557853 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-jmqdr" Dec 11 13:49:14 crc kubenswrapper[5050]: I1211 13:49:14.577743 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 13:49:14 crc kubenswrapper[5050]: E1211 13:49:14.578254 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 13:49:15.078223867 +0000 UTC m=+45.921946453 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 13:49:14 crc kubenswrapper[5050]: I1211 13:49:14.629903 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-6n98v"] Dec 11 13:49:14 crc kubenswrapper[5050]: I1211 13:49:14.630936 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6n98v" Dec 11 13:49:14 crc kubenswrapper[5050]: I1211 13:49:14.635259 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Dec 11 13:49:14 crc kubenswrapper[5050]: I1211 13:49:14.655904 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Dec 11 13:49:14 crc kubenswrapper[5050]: I1211 13:49:14.656665 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 11 13:49:14 crc kubenswrapper[5050]: I1211 13:49:14.660998 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6n98v"] Dec 11 13:49:14 crc kubenswrapper[5050]: I1211 13:49:14.665730 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Dec 11 13:49:14 crc kubenswrapper[5050]: I1211 13:49:14.665826 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Dec 11 13:49:14 crc kubenswrapper[5050]: I1211 13:49:14.681931 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f6wfx\" (UID: \"79162d90-14dd-4df9-9bcd-10c2c666cae7\") " pod="openshift-image-registry/image-registry-697d97f7c8-f6wfx" Dec 11 13:49:14 crc kubenswrapper[5050]: I1211 13:49:14.682116 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gnn9x\" (UniqueName: \"kubernetes.io/projected/b2ee71a3-392e-442c-aa3b-bec310a86031-kube-api-access-gnn9x\") pod \"community-operators-6n98v\" (UID: \"b2ee71a3-392e-442c-aa3b-bec310a86031\") " pod="openshift-marketplace/community-operators-6n98v" Dec 11 13:49:14 crc kubenswrapper[5050]: I1211 13:49:14.682149 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b2ee71a3-392e-442c-aa3b-bec310a86031-catalog-content\") pod \"community-operators-6n98v\" (UID: \"b2ee71a3-392e-442c-aa3b-bec310a86031\") " pod="openshift-marketplace/community-operators-6n98v" Dec 11 13:49:14 crc kubenswrapper[5050]: I1211 13:49:14.682176 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b2ee71a3-392e-442c-aa3b-bec310a86031-utilities\") pod \"community-operators-6n98v\" (UID: 
\"b2ee71a3-392e-442c-aa3b-bec310a86031\") " pod="openshift-marketplace/community-operators-6n98v" Dec 11 13:49:14 crc kubenswrapper[5050]: E1211 13:49:14.684702 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 13:49:15.18468304 +0000 UTC m=+46.028405626 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-f6wfx" (UID: "79162d90-14dd-4df9-9bcd-10c2c666cae7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 13:49:14 crc kubenswrapper[5050]: I1211 13:49:14.694701 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Dec 11 13:49:14 crc kubenswrapper[5050]: I1211 13:49:14.784677 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 13:49:14 crc kubenswrapper[5050]: I1211 13:49:14.784958 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/02214444-97ce-4ba2-bc85-df0ea02e6945-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"02214444-97ce-4ba2-bc85-df0ea02e6945\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 11 13:49:14 crc kubenswrapper[5050]: I1211 13:49:14.784982 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/02214444-97ce-4ba2-bc85-df0ea02e6945-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"02214444-97ce-4ba2-bc85-df0ea02e6945\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 11 13:49:14 crc kubenswrapper[5050]: I1211 13:49:14.785024 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gnn9x\" (UniqueName: \"kubernetes.io/projected/b2ee71a3-392e-442c-aa3b-bec310a86031-kube-api-access-gnn9x\") pod \"community-operators-6n98v\" (UID: \"b2ee71a3-392e-442c-aa3b-bec310a86031\") " pod="openshift-marketplace/community-operators-6n98v" Dec 11 13:49:14 crc kubenswrapper[5050]: I1211 13:49:14.785049 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b2ee71a3-392e-442c-aa3b-bec310a86031-catalog-content\") pod \"community-operators-6n98v\" (UID: \"b2ee71a3-392e-442c-aa3b-bec310a86031\") " pod="openshift-marketplace/community-operators-6n98v" Dec 11 13:49:14 crc kubenswrapper[5050]: I1211 13:49:14.785070 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b2ee71a3-392e-442c-aa3b-bec310a86031-utilities\") pod \"community-operators-6n98v\" (UID: \"b2ee71a3-392e-442c-aa3b-bec310a86031\") " pod="openshift-marketplace/community-operators-6n98v" Dec 11 13:49:14 crc kubenswrapper[5050]: 
E1211 13:49:14.785470 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 13:49:15.285436078 +0000 UTC m=+46.129158674 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 13:49:14 crc kubenswrapper[5050]: I1211 13:49:14.785976 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b2ee71a3-392e-442c-aa3b-bec310a86031-utilities\") pod \"community-operators-6n98v\" (UID: \"b2ee71a3-392e-442c-aa3b-bec310a86031\") " pod="openshift-marketplace/community-operators-6n98v" Dec 11 13:49:14 crc kubenswrapper[5050]: I1211 13:49:14.791709 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b2ee71a3-392e-442c-aa3b-bec310a86031-catalog-content\") pod \"community-operators-6n98v\" (UID: \"b2ee71a3-392e-442c-aa3b-bec310a86031\") " pod="openshift-marketplace/community-operators-6n98v" Dec 11 13:49:14 crc kubenswrapper[5050]: I1211 13:49:14.817431 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-pvns6"] Dec 11 13:49:14 crc kubenswrapper[5050]: I1211 13:49:14.818701 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-pvns6" Dec 11 13:49:14 crc kubenswrapper[5050]: I1211 13:49:14.823175 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Dec 11 13:49:14 crc kubenswrapper[5050]: I1211 13:49:14.829084 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gnn9x\" (UniqueName: \"kubernetes.io/projected/b2ee71a3-392e-442c-aa3b-bec310a86031-kube-api-access-gnn9x\") pod \"community-operators-6n98v\" (UID: \"b2ee71a3-392e-442c-aa3b-bec310a86031\") " pod="openshift-marketplace/community-operators-6n98v" Dec 11 13:49:14 crc kubenswrapper[5050]: I1211 13:49:14.861932 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pvns6"] Dec 11 13:49:14 crc kubenswrapper[5050]: I1211 13:49:14.886249 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f9d51ca-1ebf-4986-ba4b-08939d025cbd-catalog-content\") pod \"certified-operators-pvns6\" (UID: \"6f9d51ca-1ebf-4986-ba4b-08939d025cbd\") " pod="openshift-marketplace/certified-operators-pvns6" Dec 11 13:49:14 crc kubenswrapper[5050]: I1211 13:49:14.886312 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f6wfx\" (UID: \"79162d90-14dd-4df9-9bcd-10c2c666cae7\") " pod="openshift-image-registry/image-registry-697d97f7c8-f6wfx" Dec 11 13:49:14 crc kubenswrapper[5050]: I1211 13:49:14.886348 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f9d51ca-1ebf-4986-ba4b-08939d025cbd-utilities\") pod \"certified-operators-pvns6\" (UID: \"6f9d51ca-1ebf-4986-ba4b-08939d025cbd\") " pod="openshift-marketplace/certified-operators-pvns6" Dec 11 13:49:14 crc kubenswrapper[5050]: I1211 13:49:14.886545 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/02214444-97ce-4ba2-bc85-df0ea02e6945-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"02214444-97ce-4ba2-bc85-df0ea02e6945\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 11 13:49:14 crc kubenswrapper[5050]: I1211 13:49:14.886616 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/02214444-97ce-4ba2-bc85-df0ea02e6945-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"02214444-97ce-4ba2-bc85-df0ea02e6945\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 11 13:49:14 crc kubenswrapper[5050]: I1211 13:49:14.886663 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jzz5\" (UniqueName: \"kubernetes.io/projected/6f9d51ca-1ebf-4986-ba4b-08939d025cbd-kube-api-access-8jzz5\") pod \"certified-operators-pvns6\" (UID: \"6f9d51ca-1ebf-4986-ba4b-08939d025cbd\") " pod="openshift-marketplace/certified-operators-pvns6" Dec 11 13:49:14 crc kubenswrapper[5050]: E1211 13:49:14.886730 5050 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 13:49:15.386712531 +0000 UTC m=+46.230435167 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-f6wfx" (UID: "79162d90-14dd-4df9-9bcd-10c2c666cae7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 13:49:14 crc kubenswrapper[5050]: I1211 13:49:14.886962 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/02214444-97ce-4ba2-bc85-df0ea02e6945-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"02214444-97ce-4ba2-bc85-df0ea02e6945\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 11 13:49:14 crc kubenswrapper[5050]: I1211 13:49:14.896544 5050 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Dec 11 13:49:14 crc kubenswrapper[5050]: I1211 13:49:14.922435 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/02214444-97ce-4ba2-bc85-df0ea02e6945-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"02214444-97ce-4ba2-bc85-df0ea02e6945\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 11 13:49:14 crc kubenswrapper[5050]: I1211 13:49:14.944497 5050 patch_prober.go:28] interesting pod/router-default-5444994796-dtlb9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 11 13:49:14 crc kubenswrapper[5050]: [-]has-synced failed: reason withheld Dec 11 13:49:14 crc kubenswrapper[5050]: [+]process-running ok Dec 11 13:49:14 crc kubenswrapper[5050]: healthz check failed Dec 11 13:49:14 crc kubenswrapper[5050]: I1211 13:49:14.944569 5050 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dtlb9" podUID="cef7b97f-083b-44ae-9357-94f97b3eb30c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 11 13:49:14 crc kubenswrapper[5050]: I1211 13:49:14.953237 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6n98v" Dec 11 13:49:14 crc kubenswrapper[5050]: I1211 13:49:14.988377 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 13:49:14 crc kubenswrapper[5050]: I1211 13:49:14.988691 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8jzz5\" (UniqueName: \"kubernetes.io/projected/6f9d51ca-1ebf-4986-ba4b-08939d025cbd-kube-api-access-8jzz5\") pod \"certified-operators-pvns6\" (UID: \"6f9d51ca-1ebf-4986-ba4b-08939d025cbd\") " pod="openshift-marketplace/certified-operators-pvns6" Dec 11 13:49:14 crc kubenswrapper[5050]: I1211 13:49:14.988771 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f9d51ca-1ebf-4986-ba4b-08939d025cbd-catalog-content\") pod \"certified-operators-pvns6\" (UID: \"6f9d51ca-1ebf-4986-ba4b-08939d025cbd\") " pod="openshift-marketplace/certified-operators-pvns6" Dec 11 13:49:14 crc kubenswrapper[5050]: I1211 13:49:14.988808 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f9d51ca-1ebf-4986-ba4b-08939d025cbd-utilities\") pod \"certified-operators-pvns6\" (UID: \"6f9d51ca-1ebf-4986-ba4b-08939d025cbd\") " pod="openshift-marketplace/certified-operators-pvns6" Dec 11 13:49:14 crc kubenswrapper[5050]: E1211 13:49:14.989148 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 13:49:15.489121204 +0000 UTC m=+46.332843790 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 13:49:14 crc kubenswrapper[5050]: I1211 13:49:14.989271 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f9d51ca-1ebf-4986-ba4b-08939d025cbd-utilities\") pod \"certified-operators-pvns6\" (UID: \"6f9d51ca-1ebf-4986-ba4b-08939d025cbd\") " pod="openshift-marketplace/certified-operators-pvns6" Dec 11 13:49:14 crc kubenswrapper[5050]: I1211 13:49:14.989479 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f9d51ca-1ebf-4986-ba4b-08939d025cbd-catalog-content\") pod \"certified-operators-pvns6\" (UID: \"6f9d51ca-1ebf-4986-ba4b-08939d025cbd\") " pod="openshift-marketplace/certified-operators-pvns6" Dec 11 13:49:14 crc kubenswrapper[5050]: I1211 13:49:14.994231 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 11 13:49:15 crc kubenswrapper[5050]: I1211 13:49:15.016289 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-klgr9"] Dec 11 13:49:15 crc kubenswrapper[5050]: I1211 13:49:15.017869 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-klgr9" Dec 11 13:49:15 crc kubenswrapper[5050]: I1211 13:49:15.027752 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8jzz5\" (UniqueName: \"kubernetes.io/projected/6f9d51ca-1ebf-4986-ba4b-08939d025cbd-kube-api-access-8jzz5\") pod \"certified-operators-pvns6\" (UID: \"6f9d51ca-1ebf-4986-ba4b-08939d025cbd\") " pod="openshift-marketplace/certified-operators-pvns6" Dec 11 13:49:15 crc kubenswrapper[5050]: I1211 13:49:15.052616 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-klgr9"] Dec 11 13:49:15 crc kubenswrapper[5050]: I1211 13:49:15.091088 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5faa7088-04c1-4d75-abab-1e426f7cd032-catalog-content\") pod \"community-operators-klgr9\" (UID: \"5faa7088-04c1-4d75-abab-1e426f7cd032\") " pod="openshift-marketplace/community-operators-klgr9" Dec 11 13:49:15 crc kubenswrapper[5050]: I1211 13:49:15.091130 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-967wt\" (UniqueName: \"kubernetes.io/projected/5faa7088-04c1-4d75-abab-1e426f7cd032-kube-api-access-967wt\") pod \"community-operators-klgr9\" (UID: \"5faa7088-04c1-4d75-abab-1e426f7cd032\") " pod="openshift-marketplace/community-operators-klgr9" Dec 11 13:49:15 crc kubenswrapper[5050]: I1211 13:49:15.091176 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5faa7088-04c1-4d75-abab-1e426f7cd032-utilities\") pod \"community-operators-klgr9\" (UID: \"5faa7088-04c1-4d75-abab-1e426f7cd032\") " pod="openshift-marketplace/community-operators-klgr9" Dec 11 13:49:15 crc kubenswrapper[5050]: I1211 13:49:15.091215 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f6wfx\" (UID: \"79162d90-14dd-4df9-9bcd-10c2c666cae7\") " pod="openshift-image-registry/image-registry-697d97f7c8-f6wfx" Dec 11 13:49:15 crc kubenswrapper[5050]: E1211 13:49:15.091561 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 13:49:15.591546828 +0000 UTC m=+46.435269414 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-f6wfx" (UID: "79162d90-14dd-4df9-9bcd-10c2c666cae7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 13:49:15 crc kubenswrapper[5050]: I1211 13:49:15.168608 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pvns6" Dec 11 13:49:15 crc kubenswrapper[5050]: I1211 13:49:15.195362 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 13:49:15 crc kubenswrapper[5050]: I1211 13:49:15.195842 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-967wt\" (UniqueName: \"kubernetes.io/projected/5faa7088-04c1-4d75-abab-1e426f7cd032-kube-api-access-967wt\") pod \"community-operators-klgr9\" (UID: \"5faa7088-04c1-4d75-abab-1e426f7cd032\") " pod="openshift-marketplace/community-operators-klgr9" Dec 11 13:49:15 crc kubenswrapper[5050]: I1211 13:49:15.195887 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5faa7088-04c1-4d75-abab-1e426f7cd032-utilities\") pod \"community-operators-klgr9\" (UID: \"5faa7088-04c1-4d75-abab-1e426f7cd032\") " pod="openshift-marketplace/community-operators-klgr9" Dec 11 13:49:15 crc kubenswrapper[5050]: I1211 13:49:15.195979 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5faa7088-04c1-4d75-abab-1e426f7cd032-catalog-content\") pod \"community-operators-klgr9\" (UID: \"5faa7088-04c1-4d75-abab-1e426f7cd032\") " pod="openshift-marketplace/community-operators-klgr9" Dec 11 13:49:15 crc kubenswrapper[5050]: I1211 13:49:15.196702 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5faa7088-04c1-4d75-abab-1e426f7cd032-catalog-content\") pod \"community-operators-klgr9\" (UID: \"5faa7088-04c1-4d75-abab-1e426f7cd032\") " pod="openshift-marketplace/community-operators-klgr9" Dec 11 13:49:15 crc kubenswrapper[5050]: E1211 13:49:15.196772 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 13:49:15.696755607 +0000 UTC m=+46.540478193 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 13:49:15 crc kubenswrapper[5050]: I1211 13:49:15.197267 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5faa7088-04c1-4d75-abab-1e426f7cd032-utilities\") pod \"community-operators-klgr9\" (UID: \"5faa7088-04c1-4d75-abab-1e426f7cd032\") " pod="openshift-marketplace/community-operators-klgr9" Dec 11 13:49:15 crc kubenswrapper[5050]: I1211 13:49:15.216555 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-cd66n" Dec 11 13:49:15 crc kubenswrapper[5050]: I1211 13:49:15.216597 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-cd66n" Dec 11 13:49:15 crc kubenswrapper[5050]: I1211 13:49:15.219485 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cnp7n" Dec 11 13:49:15 crc kubenswrapper[5050]: I1211 13:49:15.219523 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cnp7n" Dec 11 13:49:15 crc kubenswrapper[5050]: I1211 13:49:15.236929 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-967wt\" (UniqueName: \"kubernetes.io/projected/5faa7088-04c1-4d75-abab-1e426f7cd032-kube-api-access-967wt\") pod \"community-operators-klgr9\" (UID: \"5faa7088-04c1-4d75-abab-1e426f7cd032\") " pod="openshift-marketplace/community-operators-klgr9" Dec 11 13:49:15 crc kubenswrapper[5050]: I1211 13:49:15.242040 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-bfjt2"] Dec 11 13:49:15 crc kubenswrapper[5050]: I1211 13:49:15.243088 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bfjt2" Dec 11 13:49:15 crc kubenswrapper[5050]: I1211 13:49:15.247532 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cnp7n" Dec 11 13:49:15 crc kubenswrapper[5050]: I1211 13:49:15.316733 5050 patch_prober.go:28] interesting pod/apiserver-76f77b778f-cd66n container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Dec 11 13:49:15 crc kubenswrapper[5050]: [+]log ok Dec 11 13:49:15 crc kubenswrapper[5050]: [+]etcd ok Dec 11 13:49:15 crc kubenswrapper[5050]: [+]poststarthook/start-apiserver-admission-initializer ok Dec 11 13:49:15 crc kubenswrapper[5050]: [+]poststarthook/generic-apiserver-start-informers ok Dec 11 13:49:15 crc kubenswrapper[5050]: [+]poststarthook/max-in-flight-filter ok Dec 11 13:49:15 crc kubenswrapper[5050]: [+]poststarthook/storage-object-count-tracker-hook ok Dec 11 13:49:15 crc kubenswrapper[5050]: [+]poststarthook/image.openshift.io-apiserver-caches ok Dec 11 13:49:15 crc kubenswrapper[5050]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Dec 11 13:49:15 crc kubenswrapper[5050]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Dec 11 13:49:15 crc kubenswrapper[5050]: [+]poststarthook/project.openshift.io-projectcache ok Dec 11 13:49:15 crc kubenswrapper[5050]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Dec 11 13:49:15 crc kubenswrapper[5050]: [+]poststarthook/openshift.io-startinformers ok Dec 11 13:49:15 crc kubenswrapper[5050]: [+]poststarthook/openshift.io-restmapperupdater ok Dec 11 13:49:15 crc kubenswrapper[5050]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Dec 11 13:49:15 crc kubenswrapper[5050]: livez check failed Dec 11 13:49:15 crc kubenswrapper[5050]: I1211 13:49:15.316807 5050 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-cd66n" podUID="ea884e88-c0df-4212-976a-0d7ce1731fdc" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 11 13:49:15 crc kubenswrapper[5050]: I1211 13:49:15.346026 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f6wfx\" (UID: \"79162d90-14dd-4df9-9bcd-10c2c666cae7\") " pod="openshift-image-registry/image-registry-697d97f7c8-f6wfx" Dec 11 13:49:15 crc kubenswrapper[5050]: I1211 13:49:15.346558 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/345ac559-35e7-487b-859a-e583a6e88c6c-utilities\") pod \"certified-operators-bfjt2\" (UID: \"345ac559-35e7-487b-859a-e583a6e88c6c\") " pod="openshift-marketplace/certified-operators-bfjt2" Dec 11 13:49:15 crc kubenswrapper[5050]: I1211 13:49:15.346613 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/345ac559-35e7-487b-859a-e583a6e88c6c-catalog-content\") pod \"certified-operators-bfjt2\" (UID: \"345ac559-35e7-487b-859a-e583a6e88c6c\") " pod="openshift-marketplace/certified-operators-bfjt2" Dec 11 
13:49:15 crc kubenswrapper[5050]: I1211 13:49:15.346697 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7dv7p\" (UniqueName: \"kubernetes.io/projected/345ac559-35e7-487b-859a-e583a6e88c6c-kube-api-access-7dv7p\") pod \"certified-operators-bfjt2\" (UID: \"345ac559-35e7-487b-859a-e583a6e88c6c\") " pod="openshift-marketplace/certified-operators-bfjt2" Dec 11 13:49:15 crc kubenswrapper[5050]: E1211 13:49:15.352104 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 13:49:15.852079063 +0000 UTC m=+46.695801649 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-f6wfx" (UID: "79162d90-14dd-4df9-9bcd-10c2c666cae7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 13:49:15 crc kubenswrapper[5050]: I1211 13:49:15.355073 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-klgr9" Dec 11 13:49:15 crc kubenswrapper[5050]: I1211 13:49:15.368544 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bfjt2"] Dec 11 13:49:15 crc kubenswrapper[5050]: I1211 13:49:15.448721 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 13:49:15 crc kubenswrapper[5050]: I1211 13:49:15.448977 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/345ac559-35e7-487b-859a-e583a6e88c6c-utilities\") pod \"certified-operators-bfjt2\" (UID: \"345ac559-35e7-487b-859a-e583a6e88c6c\") " pod="openshift-marketplace/certified-operators-bfjt2" Dec 11 13:49:15 crc kubenswrapper[5050]: I1211 13:49:15.449004 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/345ac559-35e7-487b-859a-e583a6e88c6c-catalog-content\") pod \"certified-operators-bfjt2\" (UID: \"345ac559-35e7-487b-859a-e583a6e88c6c\") " pod="openshift-marketplace/certified-operators-bfjt2" Dec 11 13:49:15 crc kubenswrapper[5050]: I1211 13:49:15.449042 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7dv7p\" (UniqueName: \"kubernetes.io/projected/345ac559-35e7-487b-859a-e583a6e88c6c-kube-api-access-7dv7p\") pod \"certified-operators-bfjt2\" (UID: \"345ac559-35e7-487b-859a-e583a6e88c6c\") " pod="openshift-marketplace/certified-operators-bfjt2" Dec 11 13:49:15 crc kubenswrapper[5050]: E1211 13:49:15.449525 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-12-11 13:49:15.949506911 +0000 UTC m=+46.793229487 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 13:49:15 crc kubenswrapper[5050]: I1211 13:49:15.449927 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/345ac559-35e7-487b-859a-e583a6e88c6c-utilities\") pod \"certified-operators-bfjt2\" (UID: \"345ac559-35e7-487b-859a-e583a6e88c6c\") " pod="openshift-marketplace/certified-operators-bfjt2" Dec 11 13:49:15 crc kubenswrapper[5050]: I1211 13:49:15.450166 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/345ac559-35e7-487b-859a-e583a6e88c6c-catalog-content\") pod \"certified-operators-bfjt2\" (UID: \"345ac559-35e7-487b-859a-e583a6e88c6c\") " pod="openshift-marketplace/certified-operators-bfjt2" Dec 11 13:49:15 crc kubenswrapper[5050]: I1211 13:49:15.490536 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7dv7p\" (UniqueName: \"kubernetes.io/projected/345ac559-35e7-487b-859a-e583a6e88c6c-kube-api-access-7dv7p\") pod \"certified-operators-bfjt2\" (UID: \"345ac559-35e7-487b-859a-e583a6e88c6c\") " pod="openshift-marketplace/certified-operators-bfjt2" Dec 11 13:49:15 crc kubenswrapper[5050]: I1211 13:49:15.493611 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-mbbdj" event={"ID":"2b344bba-8e1d-415b-9b5f-e21d3144fe42","Type":"ContainerStarted","Data":"7b9e51720e525e9c301693cbff21aa564f900560e49c2f7c34f0a29c1efe0d37"} Dec 11 13:49:15 crc kubenswrapper[5050]: I1211 13:49:15.516926 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cnp7n" Dec 11 13:49:15 crc kubenswrapper[5050]: I1211 13:49:15.525534 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6n98v"] Dec 11 13:49:15 crc kubenswrapper[5050]: I1211 13:49:15.540257 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-mbbdj" podStartSLOduration=13.540236778 podStartE2EDuration="13.540236778s" podCreationTimestamp="2025-12-11 13:49:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 13:49:15.538330286 +0000 UTC m=+46.382052872" watchObservedRunningTime="2025-12-11 13:49:15.540236778 +0000 UTC m=+46.383959364" Dec 11 13:49:15 crc kubenswrapper[5050]: I1211 13:49:15.551407 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f6wfx\" (UID: \"79162d90-14dd-4df9-9bcd-10c2c666cae7\") " pod="openshift-image-registry/image-registry-697d97f7c8-f6wfx" Dec 11 13:49:15 crc kubenswrapper[5050]: E1211 13:49:15.553371 5050 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 13:49:16.053348863 +0000 UTC m=+46.897071529 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-f6wfx" (UID: "79162d90-14dd-4df9-9bcd-10c2c666cae7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 13:49:15 crc kubenswrapper[5050]: I1211 13:49:15.598029 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-gp9fp" Dec 11 13:49:15 crc kubenswrapper[5050]: I1211 13:49:15.606966 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-gp9fp" Dec 11 13:49:15 crc kubenswrapper[5050]: I1211 13:49:15.609729 5050 patch_prober.go:28] interesting pod/console-f9d7485db-gp9fp container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.13:8443/health\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body= Dec 11 13:49:15 crc kubenswrapper[5050]: I1211 13:49:15.609801 5050 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-gp9fp" podUID="f633d554-794b-4a64-9699-27fbc28a4d7c" containerName="console" probeResult="failure" output="Get \"https://10.217.0.13:8443/health\": dial tcp 10.217.0.13:8443: connect: connection refused" Dec 11 13:49:15 crc kubenswrapper[5050]: I1211 13:49:15.633485 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Dec 11 13:49:15 crc kubenswrapper[5050]: I1211 13:49:15.637517 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-qc97s" Dec 11 13:49:15 crc kubenswrapper[5050]: I1211 13:49:15.644264 5050 patch_prober.go:28] interesting pod/downloads-7954f5f757-qc97s container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Dec 11 13:49:15 crc kubenswrapper[5050]: I1211 13:49:15.644331 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-qc97s" podUID="d9e38eea-202a-4bf1-bb51-1d4a1fc20202" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Dec 11 13:49:15 crc kubenswrapper[5050]: I1211 13:49:15.644711 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-mv9g5" Dec 11 13:49:15 crc kubenswrapper[5050]: I1211 13:49:15.644720 5050 patch_prober.go:28] interesting pod/downloads-7954f5f757-qc97s container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Dec 11 13:49:15 crc kubenswrapper[5050]: I1211 13:49:15.644740 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-qc97s" 
podUID="d9e38eea-202a-4bf1-bb51-1d4a1fc20202" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Dec 11 13:49:15 crc kubenswrapper[5050]: I1211 13:49:15.644791 5050 patch_prober.go:28] interesting pod/downloads-7954f5f757-qc97s container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Dec 11 13:49:15 crc kubenswrapper[5050]: I1211 13:49:15.644818 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-qc97s" podUID="d9e38eea-202a-4bf1-bb51-1d4a1fc20202" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Dec 11 13:49:15 crc kubenswrapper[5050]: I1211 13:49:15.651963 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-mv9g5" Dec 11 13:49:15 crc kubenswrapper[5050]: I1211 13:49:15.652835 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 13:49:15 crc kubenswrapper[5050]: E1211 13:49:15.654432 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 13:49:16.15441164 +0000 UTC m=+46.998134226 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 13:49:15 crc kubenswrapper[5050]: I1211 13:49:15.664317 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bfjt2" Dec 11 13:49:15 crc kubenswrapper[5050]: I1211 13:49:15.667898 5050 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2025-12-11T13:49:14.896580298Z","Handler":null,"Name":""} Dec 11 13:49:15 crc kubenswrapper[5050]: I1211 13:49:15.697973 5050 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Dec 11 13:49:15 crc kubenswrapper[5050]: I1211 13:49:15.698235 5050 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Dec 11 13:49:15 crc kubenswrapper[5050]: I1211 13:49:15.755794 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f6wfx\" (UID: \"79162d90-14dd-4df9-9bcd-10c2c666cae7\") " pod="openshift-image-registry/image-registry-697d97f7c8-f6wfx" Dec 11 13:49:15 crc kubenswrapper[5050]: I1211 13:49:15.757970 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pvns6"] Dec 11 13:49:15 crc kubenswrapper[5050]: I1211 13:49:15.764477 5050 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Dec 11 13:49:15 crc kubenswrapper[5050]: I1211 13:49:15.764532 5050 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f6wfx\" (UID: \"79162d90-14dd-4df9-9bcd-10c2c666cae7\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-f6wfx" Dec 11 13:49:15 crc kubenswrapper[5050]: I1211 13:49:15.813310 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-f6wfx\" (UID: \"79162d90-14dd-4df9-9bcd-10c2c666cae7\") " pod="openshift-image-registry/image-registry-697d97f7c8-f6wfx" Dec 11 13:49:15 crc kubenswrapper[5050]: I1211 13:49:15.840219 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-klgr9"] Dec 11 13:49:15 crc kubenswrapper[5050]: I1211 13:49:15.857040 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 13:49:15 crc kubenswrapper[5050]: I1211 13:49:15.886196 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Dec 11 13:49:15 crc kubenswrapper[5050]: I1211 13:49:15.889130 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-f6wfx" Dec 11 13:49:15 crc kubenswrapper[5050]: I1211 13:49:15.937729 5050 patch_prober.go:28] interesting pod/router-default-5444994796-dtlb9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 11 13:49:15 crc kubenswrapper[5050]: [-]has-synced failed: reason withheld Dec 11 13:49:15 crc kubenswrapper[5050]: [+]process-running ok Dec 11 13:49:15 crc kubenswrapper[5050]: healthz check failed Dec 11 13:49:15 crc kubenswrapper[5050]: I1211 13:49:15.938331 5050 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dtlb9" podUID="cef7b97f-083b-44ae-9357-94f97b3eb30c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 11 13:49:15 crc kubenswrapper[5050]: I1211 13:49:15.995078 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bfjt2"] Dec 11 13:49:16 crc kubenswrapper[5050]: I1211 13:49:16.214319 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-f6wfx"] Dec 11 13:49:16 crc kubenswrapper[5050]: W1211 13:49:16.272990 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod79162d90_14dd_4df9_9bcd_10c2c666cae7.slice/crio-d3e34f50bdd8a4b52036aa7d833fa84972a452c54ee684cc847577f254bfe6e8 WatchSource:0}: Error finding container d3e34f50bdd8a4b52036aa7d833fa84972a452c54ee684cc847577f254bfe6e8: Status 404 returned error can't find the container with id d3e34f50bdd8a4b52036aa7d833fa84972a452c54ee684cc847577f254bfe6e8 Dec 11 13:49:16 crc kubenswrapper[5050]: I1211 13:49:16.497672 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-q9k57" Dec 11 13:49:16 crc kubenswrapper[5050]: I1211 13:49:16.497836 5050 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 11 13:49:16 crc kubenswrapper[5050]: I1211 13:49:16.498275 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-f6wfx" event={"ID":"79162d90-14dd-4df9-9bcd-10c2c666cae7","Type":"ContainerStarted","Data":"d3e34f50bdd8a4b52036aa7d833fa84972a452c54ee684cc847577f254bfe6e8"} Dec 11 13:49:16 crc kubenswrapper[5050]: I1211 13:49:16.500639 5050 generic.go:334] "Generic (PLEG): container finished" podID="78f80616-d1e0-4152-a7fb-99a512670f27" containerID="a3e97dcb11dcb12023d7aac6301414eada55c698af12daef0dd5afda535de932" exitCode=0 Dec 11 13:49:16 crc kubenswrapper[5050]: I1211 13:49:16.500712 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29424345-gxsbc" event={"ID":"78f80616-d1e0-4152-a7fb-99a512670f27","Type":"ContainerDied","Data":"a3e97dcb11dcb12023d7aac6301414eada55c698af12daef0dd5afda535de932"} Dec 11 
13:49:16 crc kubenswrapper[5050]: I1211 13:49:16.502401 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"02214444-97ce-4ba2-bc85-df0ea02e6945","Type":"ContainerStarted","Data":"642966b0abdd7d2e5ecb4ac0520ba63c1bf303bf250313c67d1df2af3b1db33f"} Dec 11 13:49:16 crc kubenswrapper[5050]: I1211 13:49:16.502451 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"02214444-97ce-4ba2-bc85-df0ea02e6945","Type":"ContainerStarted","Data":"8d382d3e2ce055c826fa62878b3ca257c147d5184d130abc3fc91b447eb8eaef"} Dec 11 13:49:16 crc kubenswrapper[5050]: I1211 13:49:16.503705 5050 generic.go:334] "Generic (PLEG): container finished" podID="345ac559-35e7-487b-859a-e583a6e88c6c" containerID="a1b1333ae6f444e6041eb7cf2ec3fea0b56de643dae8170a2360f333b629fe15" exitCode=0 Dec 11 13:49:16 crc kubenswrapper[5050]: I1211 13:49:16.503743 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bfjt2" event={"ID":"345ac559-35e7-487b-859a-e583a6e88c6c","Type":"ContainerDied","Data":"a1b1333ae6f444e6041eb7cf2ec3fea0b56de643dae8170a2360f333b629fe15"} Dec 11 13:49:16 crc kubenswrapper[5050]: I1211 13:49:16.503956 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bfjt2" event={"ID":"345ac559-35e7-487b-859a-e583a6e88c6c","Type":"ContainerStarted","Data":"280d9c487b1a04a11e7f0306857366d64611b16580c0f8711a327b0875c6f2dc"} Dec 11 13:49:16 crc kubenswrapper[5050]: I1211 13:49:16.505120 5050 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 11 13:49:16 crc kubenswrapper[5050]: I1211 13:49:16.505809 5050 generic.go:334] "Generic (PLEG): container finished" podID="6f9d51ca-1ebf-4986-ba4b-08939d025cbd" containerID="67c63d37d60c469331b6535229dfd7849f44b1d981ab7424f9b7e634fb28e562" exitCode=0 Dec 11 13:49:16 crc kubenswrapper[5050]: I1211 13:49:16.505927 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pvns6" event={"ID":"6f9d51ca-1ebf-4986-ba4b-08939d025cbd","Type":"ContainerDied","Data":"67c63d37d60c469331b6535229dfd7849f44b1d981ab7424f9b7e634fb28e562"} Dec 11 13:49:16 crc kubenswrapper[5050]: I1211 13:49:16.506040 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pvns6" event={"ID":"6f9d51ca-1ebf-4986-ba4b-08939d025cbd","Type":"ContainerStarted","Data":"3ab2d98cd21468a2a7a6c4906995e53767611bcced38c6daf72e1cc89655c6b9"} Dec 11 13:49:16 crc kubenswrapper[5050]: I1211 13:49:16.508113 5050 generic.go:334] "Generic (PLEG): container finished" podID="b2ee71a3-392e-442c-aa3b-bec310a86031" containerID="6377df013b1ebe57e0e97327b1811556e09c1daa1fe33d72e24c6a7aa8a121c9" exitCode=0 Dec 11 13:49:16 crc kubenswrapper[5050]: I1211 13:49:16.508181 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6n98v" event={"ID":"b2ee71a3-392e-442c-aa3b-bec310a86031","Type":"ContainerDied","Data":"6377df013b1ebe57e0e97327b1811556e09c1daa1fe33d72e24c6a7aa8a121c9"} Dec 11 13:49:16 crc kubenswrapper[5050]: I1211 13:49:16.508206 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6n98v" event={"ID":"b2ee71a3-392e-442c-aa3b-bec310a86031","Type":"ContainerStarted","Data":"1e4397a2f8804938e58f33dac0e85aff1ed115029e1a95d27202ae10ef582834"} Dec 11 13:49:16 crc 
kubenswrapper[5050]: I1211 13:49:16.509896 5050 generic.go:334] "Generic (PLEG): container finished" podID="5faa7088-04c1-4d75-abab-1e426f7cd032" containerID="be4b0cf2dadf9624690883fa000c62e0340c3aa5e3610baa84f61b2c83b8a5e8" exitCode=0 Dec 11 13:49:16 crc kubenswrapper[5050]: I1211 13:49:16.509992 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-klgr9" event={"ID":"5faa7088-04c1-4d75-abab-1e426f7cd032","Type":"ContainerDied","Data":"be4b0cf2dadf9624690883fa000c62e0340c3aa5e3610baa84f61b2c83b8a5e8"} Dec 11 13:49:16 crc kubenswrapper[5050]: I1211 13:49:16.510089 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-klgr9" event={"ID":"5faa7088-04c1-4d75-abab-1e426f7cd032","Type":"ContainerStarted","Data":"64e34c8b0d592d6d8c407ab5a282525681ef1f3e756d7e0b129a53c04c7ec4d9"} Dec 11 13:49:16 crc kubenswrapper[5050]: I1211 13:49:16.520670 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-q9k57" Dec 11 13:49:16 crc kubenswrapper[5050]: I1211 13:49:16.529222 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=2.529199829 podStartE2EDuration="2.529199829s" podCreationTimestamp="2025-12-11 13:49:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 13:49:16.527868833 +0000 UTC m=+47.371591429" watchObservedRunningTime="2025-12-11 13:49:16.529199829 +0000 UTC m=+47.372922415" Dec 11 13:49:16 crc kubenswrapper[5050]: I1211 13:49:16.601434 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-c2qj7"] Dec 11 13:49:16 crc kubenswrapper[5050]: I1211 13:49:16.602912 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-c2qj7" Dec 11 13:49:16 crc kubenswrapper[5050]: I1211 13:49:16.606132 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Dec 11 13:49:16 crc kubenswrapper[5050]: I1211 13:49:16.612168 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-c2qj7"] Dec 11 13:49:16 crc kubenswrapper[5050]: I1211 13:49:16.675908 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pn95r\" (UniqueName: \"kubernetes.io/projected/ed489b52-31c7-44c8-b634-4a99e1644f65-kube-api-access-pn95r\") pod \"redhat-marketplace-c2qj7\" (UID: \"ed489b52-31c7-44c8-b634-4a99e1644f65\") " pod="openshift-marketplace/redhat-marketplace-c2qj7" Dec 11 13:49:16 crc kubenswrapper[5050]: I1211 13:49:16.676039 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed489b52-31c7-44c8-b634-4a99e1644f65-catalog-content\") pod \"redhat-marketplace-c2qj7\" (UID: \"ed489b52-31c7-44c8-b634-4a99e1644f65\") " pod="openshift-marketplace/redhat-marketplace-c2qj7" Dec 11 13:49:16 crc kubenswrapper[5050]: I1211 13:49:16.676267 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed489b52-31c7-44c8-b634-4a99e1644f65-utilities\") pod \"redhat-marketplace-c2qj7\" (UID: \"ed489b52-31c7-44c8-b634-4a99e1644f65\") " pod="openshift-marketplace/redhat-marketplace-c2qj7" Dec 11 13:49:16 crc kubenswrapper[5050]: I1211 13:49:16.777862 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pn95r\" (UniqueName: \"kubernetes.io/projected/ed489b52-31c7-44c8-b634-4a99e1644f65-kube-api-access-pn95r\") pod \"redhat-marketplace-c2qj7\" (UID: \"ed489b52-31c7-44c8-b634-4a99e1644f65\") " pod="openshift-marketplace/redhat-marketplace-c2qj7" Dec 11 13:49:16 crc kubenswrapper[5050]: I1211 13:49:16.777966 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed489b52-31c7-44c8-b634-4a99e1644f65-catalog-content\") pod \"redhat-marketplace-c2qj7\" (UID: \"ed489b52-31c7-44c8-b634-4a99e1644f65\") " pod="openshift-marketplace/redhat-marketplace-c2qj7" Dec 11 13:49:16 crc kubenswrapper[5050]: I1211 13:49:16.778022 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed489b52-31c7-44c8-b634-4a99e1644f65-utilities\") pod \"redhat-marketplace-c2qj7\" (UID: \"ed489b52-31c7-44c8-b634-4a99e1644f65\") " pod="openshift-marketplace/redhat-marketplace-c2qj7" Dec 11 13:49:16 crc kubenswrapper[5050]: I1211 13:49:16.778538 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed489b52-31c7-44c8-b634-4a99e1644f65-utilities\") pod \"redhat-marketplace-c2qj7\" (UID: \"ed489b52-31c7-44c8-b634-4a99e1644f65\") " pod="openshift-marketplace/redhat-marketplace-c2qj7" Dec 11 13:49:16 crc kubenswrapper[5050]: I1211 13:49:16.778760 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed489b52-31c7-44c8-b634-4a99e1644f65-catalog-content\") pod \"redhat-marketplace-c2qj7\" (UID: 
\"ed489b52-31c7-44c8-b634-4a99e1644f65\") " pod="openshift-marketplace/redhat-marketplace-c2qj7" Dec 11 13:49:16 crc kubenswrapper[5050]: I1211 13:49:16.798870 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pn95r\" (UniqueName: \"kubernetes.io/projected/ed489b52-31c7-44c8-b634-4a99e1644f65-kube-api-access-pn95r\") pod \"redhat-marketplace-c2qj7\" (UID: \"ed489b52-31c7-44c8-b634-4a99e1644f65\") " pod="openshift-marketplace/redhat-marketplace-c2qj7" Dec 11 13:49:16 crc kubenswrapper[5050]: I1211 13:49:16.921643 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-c2qj7" Dec 11 13:49:16 crc kubenswrapper[5050]: I1211 13:49:16.927975 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-dtlb9" Dec 11 13:49:16 crc kubenswrapper[5050]: I1211 13:49:16.931103 5050 patch_prober.go:28] interesting pod/router-default-5444994796-dtlb9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 11 13:49:16 crc kubenswrapper[5050]: [-]has-synced failed: reason withheld Dec 11 13:49:16 crc kubenswrapper[5050]: [+]process-running ok Dec 11 13:49:16 crc kubenswrapper[5050]: healthz check failed Dec 11 13:49:16 crc kubenswrapper[5050]: I1211 13:49:16.931153 5050 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dtlb9" podUID="cef7b97f-083b-44ae-9357-94f97b3eb30c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 11 13:49:17 crc kubenswrapper[5050]: I1211 13:49:17.007781 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-qg6rf"] Dec 11 13:49:17 crc kubenswrapper[5050]: I1211 13:49:17.010124 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qg6rf" Dec 11 13:49:17 crc kubenswrapper[5050]: I1211 13:49:17.026125 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-rmlj6" Dec 11 13:49:17 crc kubenswrapper[5050]: I1211 13:49:17.030723 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qg6rf"] Dec 11 13:49:17 crc kubenswrapper[5050]: I1211 13:49:17.034238 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-rmlj6" Dec 11 13:49:17 crc kubenswrapper[5050]: I1211 13:49:17.040928 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-24wkc" Dec 11 13:49:17 crc kubenswrapper[5050]: I1211 13:49:17.064859 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-24wkc" Dec 11 13:49:17 crc kubenswrapper[5050]: I1211 13:49:17.083395 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4feb3774-888a-4be4-b47a-b929ec6e98dc-utilities\") pod \"redhat-marketplace-qg6rf\" (UID: \"4feb3774-888a-4be4-b47a-b929ec6e98dc\") " pod="openshift-marketplace/redhat-marketplace-qg6rf" Dec 11 13:49:17 crc kubenswrapper[5050]: I1211 13:49:17.083476 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjhqt\" (UniqueName: \"kubernetes.io/projected/4feb3774-888a-4be4-b47a-b929ec6e98dc-kube-api-access-kjhqt\") pod \"redhat-marketplace-qg6rf\" (UID: \"4feb3774-888a-4be4-b47a-b929ec6e98dc\") " pod="openshift-marketplace/redhat-marketplace-qg6rf" Dec 11 13:49:17 crc kubenswrapper[5050]: I1211 13:49:17.083498 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4feb3774-888a-4be4-b47a-b929ec6e98dc-catalog-content\") pod \"redhat-marketplace-qg6rf\" (UID: \"4feb3774-888a-4be4-b47a-b929ec6e98dc\") " pod="openshift-marketplace/redhat-marketplace-qg6rf" Dec 11 13:49:17 crc kubenswrapper[5050]: I1211 13:49:17.184819 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4feb3774-888a-4be4-b47a-b929ec6e98dc-utilities\") pod \"redhat-marketplace-qg6rf\" (UID: \"4feb3774-888a-4be4-b47a-b929ec6e98dc\") " pod="openshift-marketplace/redhat-marketplace-qg6rf" Dec 11 13:49:17 crc kubenswrapper[5050]: I1211 13:49:17.184896 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kjhqt\" (UniqueName: \"kubernetes.io/projected/4feb3774-888a-4be4-b47a-b929ec6e98dc-kube-api-access-kjhqt\") pod \"redhat-marketplace-qg6rf\" (UID: \"4feb3774-888a-4be4-b47a-b929ec6e98dc\") " pod="openshift-marketplace/redhat-marketplace-qg6rf" Dec 11 13:49:17 crc kubenswrapper[5050]: I1211 13:49:17.184929 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4feb3774-888a-4be4-b47a-b929ec6e98dc-catalog-content\") pod \"redhat-marketplace-qg6rf\" (UID: \"4feb3774-888a-4be4-b47a-b929ec6e98dc\") " pod="openshift-marketplace/redhat-marketplace-qg6rf" Dec 11 13:49:17 crc kubenswrapper[5050]: I1211 13:49:17.185633 
5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4feb3774-888a-4be4-b47a-b929ec6e98dc-catalog-content\") pod \"redhat-marketplace-qg6rf\" (UID: \"4feb3774-888a-4be4-b47a-b929ec6e98dc\") " pod="openshift-marketplace/redhat-marketplace-qg6rf" Dec 11 13:49:17 crc kubenswrapper[5050]: I1211 13:49:17.185842 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4feb3774-888a-4be4-b47a-b929ec6e98dc-utilities\") pod \"redhat-marketplace-qg6rf\" (UID: \"4feb3774-888a-4be4-b47a-b929ec6e98dc\") " pod="openshift-marketplace/redhat-marketplace-qg6rf" Dec 11 13:49:17 crc kubenswrapper[5050]: I1211 13:49:17.208352 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kjhqt\" (UniqueName: \"kubernetes.io/projected/4feb3774-888a-4be4-b47a-b929ec6e98dc-kube-api-access-kjhqt\") pod \"redhat-marketplace-qg6rf\" (UID: \"4feb3774-888a-4be4-b47a-b929ec6e98dc\") " pod="openshift-marketplace/redhat-marketplace-qg6rf" Dec 11 13:49:17 crc kubenswrapper[5050]: I1211 13:49:17.227272 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-c2qj7"] Dec 11 13:49:17 crc kubenswrapper[5050]: I1211 13:49:17.281732 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r54sd" Dec 11 13:49:17 crc kubenswrapper[5050]: I1211 13:49:17.294347 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r54sd" Dec 11 13:49:17 crc kubenswrapper[5050]: I1211 13:49:17.362455 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qg6rf" Dec 11 13:49:17 crc kubenswrapper[5050]: I1211 13:49:17.409916 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-multus/cni-sysctl-allowlist-ds-p9jmx" Dec 11 13:49:17 crc kubenswrapper[5050]: I1211 13:49:17.448002 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-multus/cni-sysctl-allowlist-ds-p9jmx" Dec 11 13:49:17 crc kubenswrapper[5050]: I1211 13:49:17.518895 5050 generic.go:334] "Generic (PLEG): container finished" podID="ed489b52-31c7-44c8-b634-4a99e1644f65" containerID="27f485196185adee8bc0a085d247e392dee7b1a6bbe7175fdbd60a09f00d556d" exitCode=0 Dec 11 13:49:17 crc kubenswrapper[5050]: I1211 13:49:17.519277 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c2qj7" event={"ID":"ed489b52-31c7-44c8-b634-4a99e1644f65","Type":"ContainerDied","Data":"27f485196185adee8bc0a085d247e392dee7b1a6bbe7175fdbd60a09f00d556d"} Dec 11 13:49:17 crc kubenswrapper[5050]: I1211 13:49:17.519310 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c2qj7" event={"ID":"ed489b52-31c7-44c8-b634-4a99e1644f65","Type":"ContainerStarted","Data":"8003bc3efdc910447f0660155579af94a999bdf4071aefbddb36aaa63b17eaba"} Dec 11 13:49:17 crc kubenswrapper[5050]: I1211 13:49:17.525726 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-f6wfx" event={"ID":"79162d90-14dd-4df9-9bcd-10c2c666cae7","Type":"ContainerStarted","Data":"26e4ee24352e9ac39e8dce34a5a1fb157e7c079c8875c26bba5453d6ea044440"} Dec 11 13:49:17 crc kubenswrapper[5050]: I1211 13:49:17.525888 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-f6wfx" Dec 11 13:49:17 crc kubenswrapper[5050]: I1211 13:49:17.533508 5050 generic.go:334] "Generic (PLEG): container finished" podID="02214444-97ce-4ba2-bc85-df0ea02e6945" containerID="642966b0abdd7d2e5ecb4ac0520ba63c1bf303bf250313c67d1df2af3b1db33f" exitCode=0 Dec 11 13:49:17 crc kubenswrapper[5050]: I1211 13:49:17.535050 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"02214444-97ce-4ba2-bc85-df0ea02e6945","Type":"ContainerDied","Data":"642966b0abdd7d2e5ecb4ac0520ba63c1bf303bf250313c67d1df2af3b1db33f"} Dec 11 13:49:17 crc kubenswrapper[5050]: I1211 13:49:17.562493 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Dec 11 13:49:17 crc kubenswrapper[5050]: I1211 13:49:17.573380 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-f6wfx" podStartSLOduration=29.573353894 podStartE2EDuration="29.573353894s" podCreationTimestamp="2025-12-11 13:48:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 13:49:17.568832501 +0000 UTC m=+48.412555107" watchObservedRunningTime="2025-12-11 13:49:17.573353894 +0000 UTC m=+48.417076500" Dec 11 13:49:17 crc kubenswrapper[5050]: I1211 13:49:17.614179 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qg6rf"] Dec 11 13:49:17 crc kubenswrapper[5050]: I1211 13:49:17.807220 5050 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-jsc5s"] Dec 11 13:49:17 crc kubenswrapper[5050]: I1211 13:49:17.808640 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jsc5s" Dec 11 13:49:17 crc kubenswrapper[5050]: I1211 13:49:17.815107 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Dec 11 13:49:17 crc kubenswrapper[5050]: I1211 13:49:17.822045 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jsc5s"] Dec 11 13:49:17 crc kubenswrapper[5050]: I1211 13:49:17.842197 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29424345-gxsbc" Dec 11 13:49:17 crc kubenswrapper[5050]: I1211 13:49:17.901519 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dg672\" (UniqueName: \"kubernetes.io/projected/78f80616-d1e0-4152-a7fb-99a512670f27-kube-api-access-dg672\") pod \"78f80616-d1e0-4152-a7fb-99a512670f27\" (UID: \"78f80616-d1e0-4152-a7fb-99a512670f27\") " Dec 11 13:49:17 crc kubenswrapper[5050]: I1211 13:49:17.901578 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/78f80616-d1e0-4152-a7fb-99a512670f27-config-volume\") pod \"78f80616-d1e0-4152-a7fb-99a512670f27\" (UID: \"78f80616-d1e0-4152-a7fb-99a512670f27\") " Dec 11 13:49:17 crc kubenswrapper[5050]: I1211 13:49:17.901656 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/78f80616-d1e0-4152-a7fb-99a512670f27-secret-volume\") pod \"78f80616-d1e0-4152-a7fb-99a512670f27\" (UID: \"78f80616-d1e0-4152-a7fb-99a512670f27\") " Dec 11 13:49:17 crc kubenswrapper[5050]: I1211 13:49:17.901970 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/680b10f8-ff96-43e4-b2e2-22ee05c6d815-utilities\") pod \"redhat-operators-jsc5s\" (UID: \"680b10f8-ff96-43e4-b2e2-22ee05c6d815\") " pod="openshift-marketplace/redhat-operators-jsc5s" Dec 11 13:49:17 crc kubenswrapper[5050]: I1211 13:49:17.902026 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/680b10f8-ff96-43e4-b2e2-22ee05c6d815-catalog-content\") pod \"redhat-operators-jsc5s\" (UID: \"680b10f8-ff96-43e4-b2e2-22ee05c6d815\") " pod="openshift-marketplace/redhat-operators-jsc5s" Dec 11 13:49:17 crc kubenswrapper[5050]: I1211 13:49:17.902149 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xc6lq\" (UniqueName: \"kubernetes.io/projected/680b10f8-ff96-43e4-b2e2-22ee05c6d815-kube-api-access-xc6lq\") pod \"redhat-operators-jsc5s\" (UID: \"680b10f8-ff96-43e4-b2e2-22ee05c6d815\") " pod="openshift-marketplace/redhat-operators-jsc5s" Dec 11 13:49:17 crc kubenswrapper[5050]: I1211 13:49:17.903579 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/78f80616-d1e0-4152-a7fb-99a512670f27-config-volume" (OuterVolumeSpecName: "config-volume") pod "78f80616-d1e0-4152-a7fb-99a512670f27" (UID: "78f80616-d1e0-4152-a7fb-99a512670f27"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 13:49:17 crc kubenswrapper[5050]: I1211 13:49:17.923488 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78f80616-d1e0-4152-a7fb-99a512670f27-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "78f80616-d1e0-4152-a7fb-99a512670f27" (UID: "78f80616-d1e0-4152-a7fb-99a512670f27"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 13:49:17 crc kubenswrapper[5050]: I1211 13:49:17.923880 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78f80616-d1e0-4152-a7fb-99a512670f27-kube-api-access-dg672" (OuterVolumeSpecName: "kube-api-access-dg672") pod "78f80616-d1e0-4152-a7fb-99a512670f27" (UID: "78f80616-d1e0-4152-a7fb-99a512670f27"). InnerVolumeSpecName "kube-api-access-dg672". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 13:49:17 crc kubenswrapper[5050]: I1211 13:49:17.941280 5050 patch_prober.go:28] interesting pod/router-default-5444994796-dtlb9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 11 13:49:17 crc kubenswrapper[5050]: [-]has-synced failed: reason withheld Dec 11 13:49:17 crc kubenswrapper[5050]: [+]process-running ok Dec 11 13:49:17 crc kubenswrapper[5050]: healthz check failed Dec 11 13:49:17 crc kubenswrapper[5050]: I1211 13:49:17.941354 5050 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dtlb9" podUID="cef7b97f-083b-44ae-9357-94f97b3eb30c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 11 13:49:17 crc kubenswrapper[5050]: I1211 13:49:17.982183 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-p9jmx"] Dec 11 13:49:18 crc kubenswrapper[5050]: I1211 13:49:18.003502 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xc6lq\" (UniqueName: \"kubernetes.io/projected/680b10f8-ff96-43e4-b2e2-22ee05c6d815-kube-api-access-xc6lq\") pod \"redhat-operators-jsc5s\" (UID: \"680b10f8-ff96-43e4-b2e2-22ee05c6d815\") " pod="openshift-marketplace/redhat-operators-jsc5s" Dec 11 13:49:18 crc kubenswrapper[5050]: I1211 13:49:18.003641 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/680b10f8-ff96-43e4-b2e2-22ee05c6d815-utilities\") pod \"redhat-operators-jsc5s\" (UID: \"680b10f8-ff96-43e4-b2e2-22ee05c6d815\") " pod="openshift-marketplace/redhat-operators-jsc5s" Dec 11 13:49:18 crc kubenswrapper[5050]: I1211 13:49:18.003675 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/680b10f8-ff96-43e4-b2e2-22ee05c6d815-catalog-content\") pod \"redhat-operators-jsc5s\" (UID: \"680b10f8-ff96-43e4-b2e2-22ee05c6d815\") " pod="openshift-marketplace/redhat-operators-jsc5s" Dec 11 13:49:18 crc kubenswrapper[5050]: I1211 13:49:18.003725 5050 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/78f80616-d1e0-4152-a7fb-99a512670f27-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 11 13:49:18 crc kubenswrapper[5050]: I1211 13:49:18.003772 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dg672\" (UniqueName: 
\"kubernetes.io/projected/78f80616-d1e0-4152-a7fb-99a512670f27-kube-api-access-dg672\") on node \"crc\" DevicePath \"\"" Dec 11 13:49:18 crc kubenswrapper[5050]: I1211 13:49:18.003784 5050 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/78f80616-d1e0-4152-a7fb-99a512670f27-config-volume\") on node \"crc\" DevicePath \"\"" Dec 11 13:49:18 crc kubenswrapper[5050]: I1211 13:49:18.004399 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/680b10f8-ff96-43e4-b2e2-22ee05c6d815-catalog-content\") pod \"redhat-operators-jsc5s\" (UID: \"680b10f8-ff96-43e4-b2e2-22ee05c6d815\") " pod="openshift-marketplace/redhat-operators-jsc5s" Dec 11 13:49:18 crc kubenswrapper[5050]: I1211 13:49:18.004415 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/680b10f8-ff96-43e4-b2e2-22ee05c6d815-utilities\") pod \"redhat-operators-jsc5s\" (UID: \"680b10f8-ff96-43e4-b2e2-22ee05c6d815\") " pod="openshift-marketplace/redhat-operators-jsc5s" Dec 11 13:49:18 crc kubenswrapper[5050]: I1211 13:49:18.020059 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xc6lq\" (UniqueName: \"kubernetes.io/projected/680b10f8-ff96-43e4-b2e2-22ee05c6d815-kube-api-access-xc6lq\") pod \"redhat-operators-jsc5s\" (UID: \"680b10f8-ff96-43e4-b2e2-22ee05c6d815\") " pod="openshift-marketplace/redhat-operators-jsc5s" Dec 11 13:49:18 crc kubenswrapper[5050]: I1211 13:49:18.151861 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jsc5s" Dec 11 13:49:18 crc kubenswrapper[5050]: I1211 13:49:18.151988 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 11 13:49:18 crc kubenswrapper[5050]: I1211 13:49:18.166770 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Dec 11 13:49:18 crc kubenswrapper[5050]: I1211 13:49:18.206782 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-brfnx"] Dec 11 13:49:18 crc kubenswrapper[5050]: E1211 13:49:18.207066 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78f80616-d1e0-4152-a7fb-99a512670f27" containerName="collect-profiles" Dec 11 13:49:18 crc kubenswrapper[5050]: I1211 13:49:18.207080 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="78f80616-d1e0-4152-a7fb-99a512670f27" containerName="collect-profiles" Dec 11 13:49:18 crc kubenswrapper[5050]: I1211 13:49:18.207239 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="78f80616-d1e0-4152-a7fb-99a512670f27" containerName="collect-profiles" Dec 11 13:49:18 crc kubenswrapper[5050]: I1211 13:49:18.211417 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-brfnx" Dec 11 13:49:18 crc kubenswrapper[5050]: I1211 13:49:18.221106 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-brfnx"] Dec 11 13:49:18 crc kubenswrapper[5050]: I1211 13:49:18.237201 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=0.23717824 podStartE2EDuration="237.17824ms" podCreationTimestamp="2025-12-11 13:49:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 13:49:18.23680373 +0000 UTC m=+49.080526336" watchObservedRunningTime="2025-12-11 13:49:18.23717824 +0000 UTC m=+49.080900826" Dec 11 13:49:18 crc kubenswrapper[5050]: I1211 13:49:18.314116 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwcmn\" (UniqueName: \"kubernetes.io/projected/ddbc9fa4-63d4-48f6-b8e0-a2c36815399c-kube-api-access-rwcmn\") pod \"redhat-operators-brfnx\" (UID: \"ddbc9fa4-63d4-48f6-b8e0-a2c36815399c\") " pod="openshift-marketplace/redhat-operators-brfnx" Dec 11 13:49:18 crc kubenswrapper[5050]: I1211 13:49:18.314190 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ddbc9fa4-63d4-48f6-b8e0-a2c36815399c-catalog-content\") pod \"redhat-operators-brfnx\" (UID: \"ddbc9fa4-63d4-48f6-b8e0-a2c36815399c\") " pod="openshift-marketplace/redhat-operators-brfnx" Dec 11 13:49:18 crc kubenswrapper[5050]: I1211 13:49:18.314263 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ddbc9fa4-63d4-48f6-b8e0-a2c36815399c-utilities\") pod \"redhat-operators-brfnx\" (UID: \"ddbc9fa4-63d4-48f6-b8e0-a2c36815399c\") " pod="openshift-marketplace/redhat-operators-brfnx" Dec 11 13:49:18 crc kubenswrapper[5050]: I1211 13:49:18.416292 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rwcmn\" (UniqueName: \"kubernetes.io/projected/ddbc9fa4-63d4-48f6-b8e0-a2c36815399c-kube-api-access-rwcmn\") pod \"redhat-operators-brfnx\" (UID: \"ddbc9fa4-63d4-48f6-b8e0-a2c36815399c\") " pod="openshift-marketplace/redhat-operators-brfnx" Dec 11 13:49:18 crc kubenswrapper[5050]: I1211 13:49:18.416389 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ddbc9fa4-63d4-48f6-b8e0-a2c36815399c-catalog-content\") pod \"redhat-operators-brfnx\" (UID: \"ddbc9fa4-63d4-48f6-b8e0-a2c36815399c\") " pod="openshift-marketplace/redhat-operators-brfnx" Dec 11 13:49:18 crc kubenswrapper[5050]: I1211 13:49:18.416535 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ddbc9fa4-63d4-48f6-b8e0-a2c36815399c-utilities\") pod \"redhat-operators-brfnx\" (UID: \"ddbc9fa4-63d4-48f6-b8e0-a2c36815399c\") " pod="openshift-marketplace/redhat-operators-brfnx" Dec 11 13:49:18 crc kubenswrapper[5050]: I1211 13:49:18.417705 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ddbc9fa4-63d4-48f6-b8e0-a2c36815399c-utilities\") pod \"redhat-operators-brfnx\" (UID: \"ddbc9fa4-63d4-48f6-b8e0-a2c36815399c\") " 
pod="openshift-marketplace/redhat-operators-brfnx" Dec 11 13:49:18 crc kubenswrapper[5050]: I1211 13:49:18.423380 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ddbc9fa4-63d4-48f6-b8e0-a2c36815399c-catalog-content\") pod \"redhat-operators-brfnx\" (UID: \"ddbc9fa4-63d4-48f6-b8e0-a2c36815399c\") " pod="openshift-marketplace/redhat-operators-brfnx" Dec 11 13:49:18 crc kubenswrapper[5050]: I1211 13:49:18.440362 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rwcmn\" (UniqueName: \"kubernetes.io/projected/ddbc9fa4-63d4-48f6-b8e0-a2c36815399c-kube-api-access-rwcmn\") pod \"redhat-operators-brfnx\" (UID: \"ddbc9fa4-63d4-48f6-b8e0-a2c36815399c\") " pod="openshift-marketplace/redhat-operators-brfnx" Dec 11 13:49:18 crc kubenswrapper[5050]: I1211 13:49:18.520967 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jsc5s"] Dec 11 13:49:18 crc kubenswrapper[5050]: I1211 13:49:18.565272 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-brfnx" Dec 11 13:49:18 crc kubenswrapper[5050]: I1211 13:49:18.608119 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29424345-gxsbc" event={"ID":"78f80616-d1e0-4152-a7fb-99a512670f27","Type":"ContainerDied","Data":"724ea55e5f204f0621a9a9de791cb1b4f412b55a23d60a387abfe8fa28b17be1"} Dec 11 13:49:18 crc kubenswrapper[5050]: I1211 13:49:18.608162 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="724ea55e5f204f0621a9a9de791cb1b4f412b55a23d60a387abfe8fa28b17be1" Dec 11 13:49:18 crc kubenswrapper[5050]: I1211 13:49:18.608235 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29424345-gxsbc" Dec 11 13:49:18 crc kubenswrapper[5050]: I1211 13:49:18.625789 5050 generic.go:334] "Generic (PLEG): container finished" podID="4feb3774-888a-4be4-b47a-b929ec6e98dc" containerID="b4b3620bdcd8ef1816412eab1f16d5be51a84910a3ada8edec0ed53d1d51119e" exitCode=0 Dec 11 13:49:18 crc kubenswrapper[5050]: I1211 13:49:18.626213 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qg6rf" event={"ID":"4feb3774-888a-4be4-b47a-b929ec6e98dc","Type":"ContainerDied","Data":"b4b3620bdcd8ef1816412eab1f16d5be51a84910a3ada8edec0ed53d1d51119e"} Dec 11 13:49:18 crc kubenswrapper[5050]: I1211 13:49:18.626270 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qg6rf" event={"ID":"4feb3774-888a-4be4-b47a-b929ec6e98dc","Type":"ContainerStarted","Data":"ee78b515078382bf5f878c959a16396b36a6333b319361d7dfcd70126318fe5e"} Dec 11 13:49:18 crc kubenswrapper[5050]: I1211 13:49:18.627562 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-multus/cni-sysctl-allowlist-ds-p9jmx" podUID="8f687791-2831-4665-bb32-ee48ab6e70be" containerName="kube-multus-additional-cni-plugins" containerID="cri-o://89e622f1f2a7ec200a5a01f9388e42ff9bc39192cbc98de7d5503694481f631b" gracePeriod=30 Dec 11 13:49:18 crc kubenswrapper[5050]: W1211 13:49:18.673270 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod680b10f8_ff96_43e4_b2e2_22ee05c6d815.slice/crio-e12308e356f44667a71fa270f79df39493bcc94358da9ad0f6345566a75653dd WatchSource:0}: Error finding container e12308e356f44667a71fa270f79df39493bcc94358da9ad0f6345566a75653dd: Status 404 returned error can't find the container with id e12308e356f44667a71fa270f79df39493bcc94358da9ad0f6345566a75653dd Dec 11 13:49:18 crc kubenswrapper[5050]: I1211 13:49:18.934465 5050 patch_prober.go:28] interesting pod/router-default-5444994796-dtlb9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 11 13:49:18 crc kubenswrapper[5050]: [-]has-synced failed: reason withheld Dec 11 13:49:18 crc kubenswrapper[5050]: [+]process-running ok Dec 11 13:49:18 crc kubenswrapper[5050]: healthz check failed Dec 11 13:49:18 crc kubenswrapper[5050]: I1211 13:49:18.934831 5050 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dtlb9" podUID="cef7b97f-083b-44ae-9357-94f97b3eb30c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 11 13:49:18 crc kubenswrapper[5050]: I1211 13:49:18.979805 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 11 13:49:18 crc kubenswrapper[5050]: I1211 13:49:18.992598 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-brfnx"] Dec 11 13:49:19 crc kubenswrapper[5050]: I1211 13:49:19.036750 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/02214444-97ce-4ba2-bc85-df0ea02e6945-kubelet-dir\") pod \"02214444-97ce-4ba2-bc85-df0ea02e6945\" (UID: \"02214444-97ce-4ba2-bc85-df0ea02e6945\") " Dec 11 13:49:19 crc kubenswrapper[5050]: I1211 13:49:19.036871 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/02214444-97ce-4ba2-bc85-df0ea02e6945-kube-api-access\") pod \"02214444-97ce-4ba2-bc85-df0ea02e6945\" (UID: \"02214444-97ce-4ba2-bc85-df0ea02e6945\") " Dec 11 13:49:19 crc kubenswrapper[5050]: I1211 13:49:19.036907 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/02214444-97ce-4ba2-bc85-df0ea02e6945-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "02214444-97ce-4ba2-bc85-df0ea02e6945" (UID: "02214444-97ce-4ba2-bc85-df0ea02e6945"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 13:49:19 crc kubenswrapper[5050]: I1211 13:49:19.039345 5050 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/02214444-97ce-4ba2-bc85-df0ea02e6945-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 11 13:49:19 crc kubenswrapper[5050]: I1211 13:49:19.043053 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02214444-97ce-4ba2-bc85-df0ea02e6945-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "02214444-97ce-4ba2-bc85-df0ea02e6945" (UID: "02214444-97ce-4ba2-bc85-df0ea02e6945"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 13:49:19 crc kubenswrapper[5050]: I1211 13:49:19.140465 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/02214444-97ce-4ba2-bc85-df0ea02e6945-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 11 13:49:19 crc kubenswrapper[5050]: I1211 13:49:19.288412 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Dec 11 13:49:19 crc kubenswrapper[5050]: E1211 13:49:19.288851 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02214444-97ce-4ba2-bc85-df0ea02e6945" containerName="pruner" Dec 11 13:49:19 crc kubenswrapper[5050]: I1211 13:49:19.288871 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="02214444-97ce-4ba2-bc85-df0ea02e6945" containerName="pruner" Dec 11 13:49:19 crc kubenswrapper[5050]: I1211 13:49:19.289103 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="02214444-97ce-4ba2-bc85-df0ea02e6945" containerName="pruner" Dec 11 13:49:19 crc kubenswrapper[5050]: I1211 13:49:19.289755 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 11 13:49:19 crc kubenswrapper[5050]: I1211 13:49:19.294751 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Dec 11 13:49:19 crc kubenswrapper[5050]: I1211 13:49:19.301369 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Dec 11 13:49:19 crc kubenswrapper[5050]: I1211 13:49:19.302193 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Dec 11 13:49:19 crc kubenswrapper[5050]: I1211 13:49:19.343279 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d07c2cf7-922f-4910-9438-05be007c7a77-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"d07c2cf7-922f-4910-9438-05be007c7a77\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 11 13:49:19 crc kubenswrapper[5050]: I1211 13:49:19.343545 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d07c2cf7-922f-4910-9438-05be007c7a77-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"d07c2cf7-922f-4910-9438-05be007c7a77\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 11 13:49:19 crc kubenswrapper[5050]: I1211 13:49:19.445107 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d07c2cf7-922f-4910-9438-05be007c7a77-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"d07c2cf7-922f-4910-9438-05be007c7a77\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 11 13:49:19 crc kubenswrapper[5050]: I1211 13:49:19.445188 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d07c2cf7-922f-4910-9438-05be007c7a77-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"d07c2cf7-922f-4910-9438-05be007c7a77\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 11 13:49:19 crc kubenswrapper[5050]: I1211 13:49:19.445288 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d07c2cf7-922f-4910-9438-05be007c7a77-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"d07c2cf7-922f-4910-9438-05be007c7a77\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 11 13:49:19 crc kubenswrapper[5050]: I1211 13:49:19.466048 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d07c2cf7-922f-4910-9438-05be007c7a77-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"d07c2cf7-922f-4910-9438-05be007c7a77\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 11 13:49:19 crc kubenswrapper[5050]: I1211 13:49:19.630666 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 11 13:49:19 crc kubenswrapper[5050]: I1211 13:49:19.641818 5050 generic.go:334] "Generic (PLEG): container finished" podID="680b10f8-ff96-43e4-b2e2-22ee05c6d815" containerID="4725dadd78ab5fd96e93872efee3fa35732ab1e9f26edf5d6e88947ab949d85f" exitCode=0 Dec 11 13:49:19 crc kubenswrapper[5050]: I1211 13:49:19.641952 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jsc5s" event={"ID":"680b10f8-ff96-43e4-b2e2-22ee05c6d815","Type":"ContainerDied","Data":"4725dadd78ab5fd96e93872efee3fa35732ab1e9f26edf5d6e88947ab949d85f"} Dec 11 13:49:19 crc kubenswrapper[5050]: I1211 13:49:19.642064 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jsc5s" event={"ID":"680b10f8-ff96-43e4-b2e2-22ee05c6d815","Type":"ContainerStarted","Data":"e12308e356f44667a71fa270f79df39493bcc94358da9ad0f6345566a75653dd"} Dec 11 13:49:19 crc kubenswrapper[5050]: I1211 13:49:19.650401 5050 generic.go:334] "Generic (PLEG): container finished" podID="ddbc9fa4-63d4-48f6-b8e0-a2c36815399c" containerID="4b9d620346737c5034b375bd4eb516a3bc029b7ff2674357a61ffc2fcd692c95" exitCode=0 Dec 11 13:49:19 crc kubenswrapper[5050]: I1211 13:49:19.650497 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-brfnx" event={"ID":"ddbc9fa4-63d4-48f6-b8e0-a2c36815399c","Type":"ContainerDied","Data":"4b9d620346737c5034b375bd4eb516a3bc029b7ff2674357a61ffc2fcd692c95"} Dec 11 13:49:19 crc kubenswrapper[5050]: I1211 13:49:19.650526 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-brfnx" event={"ID":"ddbc9fa4-63d4-48f6-b8e0-a2c36815399c","Type":"ContainerStarted","Data":"edf7c8ef757f638c196d42e866daf80c659b883fa9673a9128fc31b4f3a6eaa6"} Dec 11 13:49:19 crc kubenswrapper[5050]: I1211 13:49:19.654458 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"02214444-97ce-4ba2-bc85-df0ea02e6945","Type":"ContainerDied","Data":"8d382d3e2ce055c826fa62878b3ca257c147d5184d130abc3fc91b447eb8eaef"} Dec 11 13:49:19 crc kubenswrapper[5050]: I1211 13:49:19.654491 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 11 13:49:19 crc kubenswrapper[5050]: I1211 13:49:19.654503 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8d382d3e2ce055c826fa62878b3ca257c147d5184d130abc3fc91b447eb8eaef" Dec 11 13:49:19 crc kubenswrapper[5050]: I1211 13:49:19.940435 5050 patch_prober.go:28] interesting pod/router-default-5444994796-dtlb9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 11 13:49:19 crc kubenswrapper[5050]: [-]has-synced failed: reason withheld Dec 11 13:49:19 crc kubenswrapper[5050]: [+]process-running ok Dec 11 13:49:19 crc kubenswrapper[5050]: healthz check failed Dec 11 13:49:19 crc kubenswrapper[5050]: I1211 13:49:19.944598 5050 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dtlb9" podUID="cef7b97f-083b-44ae-9357-94f97b3eb30c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 11 13:49:20 crc kubenswrapper[5050]: I1211 13:49:20.066664 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Dec 11 13:49:20 crc kubenswrapper[5050]: W1211 13:49:20.088242 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podd07c2cf7_922f_4910_9438_05be007c7a77.slice/crio-69d7615dfcc732fc106cdd8f18e7cfc35b615042f8c0a4950e2290fe5ee4fc8a WatchSource:0}: Error finding container 69d7615dfcc732fc106cdd8f18e7cfc35b615042f8c0a4950e2290fe5ee4fc8a: Status 404 returned error can't find the container with id 69d7615dfcc732fc106cdd8f18e7cfc35b615042f8c0a4950e2290fe5ee4fc8a Dec 11 13:49:20 crc kubenswrapper[5050]: I1211 13:49:20.213922 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-cd66n" Dec 11 13:49:20 crc kubenswrapper[5050]: I1211 13:49:20.218133 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-cd66n" Dec 11 13:49:20 crc kubenswrapper[5050]: I1211 13:49:20.358098 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 13:49:20 crc kubenswrapper[5050]: I1211 13:49:20.358217 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 13:49:20 crc kubenswrapper[5050]: I1211 13:49:20.374911 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 13:49:20 crc 
kubenswrapper[5050]: I1211 13:49:20.615670 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 13:49:20 crc kubenswrapper[5050]: I1211 13:49:20.664372 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"d07c2cf7-922f-4910-9438-05be007c7a77","Type":"ContainerStarted","Data":"69d7615dfcc732fc106cdd8f18e7cfc35b615042f8c0a4950e2290fe5ee4fc8a"} Dec 11 13:49:20 crc kubenswrapper[5050]: I1211 13:49:20.767316 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 13:49:20 crc kubenswrapper[5050]: I1211 13:49:20.937987 5050 patch_prober.go:28] interesting pod/router-default-5444994796-dtlb9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 11 13:49:20 crc kubenswrapper[5050]: [-]has-synced failed: reason withheld Dec 11 13:49:20 crc kubenswrapper[5050]: [+]process-running ok Dec 11 13:49:20 crc kubenswrapper[5050]: healthz check failed Dec 11 13:49:20 crc kubenswrapper[5050]: I1211 13:49:20.938163 5050 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dtlb9" podUID="cef7b97f-083b-44ae-9357-94f97b3eb30c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 11 13:49:21 crc kubenswrapper[5050]: W1211 13:49:21.230767 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-75ad2c0a879b8077464aa8a8f74562cba6b75075ac8df693dffe61cbbbe7105d WatchSource:0}: Error finding container 75ad2c0a879b8077464aa8a8f74562cba6b75075ac8df693dffe61cbbbe7105d: Status 404 returned error can't find the container with id 75ad2c0a879b8077464aa8a8f74562cba6b75075ac8df693dffe61cbbbe7105d Dec 11 13:49:21 crc kubenswrapper[5050]: I1211 13:49:21.681778 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 13:49:21 crc kubenswrapper[5050]: I1211 13:49:21.681867 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 13:49:21 crc kubenswrapper[5050]: I1211 13:49:21.687647 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 13:49:21 crc kubenswrapper[5050]: I1211 13:49:21.687661 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 13:49:21 crc kubenswrapper[5050]: I1211 13:49:21.804282 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"d07c2cf7-922f-4910-9438-05be007c7a77","Type":"ContainerStarted","Data":"f6b2d34f592040fa481ebb223e953809ac109e71f5f014e56b40a6c8e28ed632"} Dec 11 13:49:21 crc kubenswrapper[5050]: I1211 13:49:21.807260 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"75ad2c0a879b8077464aa8a8f74562cba6b75075ac8df693dffe61cbbbe7105d"} Dec 11 13:49:21 crc kubenswrapper[5050]: I1211 13:49:21.867797 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 13:49:21 crc kubenswrapper[5050]: I1211 13:49:21.931573 5050 patch_prober.go:28] interesting pod/router-default-5444994796-dtlb9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 11 13:49:21 crc kubenswrapper[5050]: [-]has-synced failed: reason withheld Dec 11 13:49:21 crc kubenswrapper[5050]: [+]process-running ok Dec 11 13:49:21 crc kubenswrapper[5050]: healthz check failed Dec 11 13:49:21 crc kubenswrapper[5050]: I1211 13:49:21.931631 5050 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dtlb9" podUID="cef7b97f-083b-44ae-9357-94f97b3eb30c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 11 13:49:21 crc kubenswrapper[5050]: I1211 13:49:21.961027 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 13:49:22 crc kubenswrapper[5050]: W1211 13:49:22.347025 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-29d718a822b6f2be02341b947bae8a5c7bac7afdbaeacc3a80b724bd49d60237 WatchSource:0}: Error finding container 29d718a822b6f2be02341b947bae8a5c7bac7afdbaeacc3a80b724bd49d60237: Status 404 returned error can't find the container with id 29d718a822b6f2be02341b947bae8a5c7bac7afdbaeacc3a80b724bd49d60237 Dec 11 13:49:22 crc kubenswrapper[5050]: I1211 13:49:22.376434 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-25p7l" Dec 11 13:49:22 crc kubenswrapper[5050]: W1211 13:49:22.389903 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b6479f0_333b_4a96_9adf_2099afdc2447.slice/crio-882f90937fe3cc4524b9652009980205788eb04d9b406045f538f4174658e8a2 WatchSource:0}: Error finding container 882f90937fe3cc4524b9652009980205788eb04d9b406045f538f4174658e8a2: Status 404 returned error can't find the container with id 882f90937fe3cc4524b9652009980205788eb04d9b406045f538f4174658e8a2 Dec 11 13:49:22 crc kubenswrapper[5050]: I1211 13:49:22.820269 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"29d718a822b6f2be02341b947bae8a5c7bac7afdbaeacc3a80b724bd49d60237"} Dec 11 13:49:22 crc kubenswrapper[5050]: I1211 13:49:22.828151 5050 generic.go:334] "Generic (PLEG): container finished" podID="d07c2cf7-922f-4910-9438-05be007c7a77" containerID="f6b2d34f592040fa481ebb223e953809ac109e71f5f014e56b40a6c8e28ed632" exitCode=0 Dec 11 13:49:22 crc kubenswrapper[5050]: I1211 13:49:22.828251 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"d07c2cf7-922f-4910-9438-05be007c7a77","Type":"ContainerDied","Data":"f6b2d34f592040fa481ebb223e953809ac109e71f5f014e56b40a6c8e28ed632"} Dec 11 13:49:22 crc kubenswrapper[5050]: I1211 13:49:22.844957 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"fb2900ab5f1a071b4d9a8cc77c98fdd0387ca6d90231515e83a18ba54412b19f"} Dec 11 13:49:22 crc kubenswrapper[5050]: I1211 13:49:22.852151 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"882f90937fe3cc4524b9652009980205788eb04d9b406045f538f4174658e8a2"} Dec 11 13:49:22 crc kubenswrapper[5050]: I1211 13:49:22.932335 5050 patch_prober.go:28] interesting pod/router-default-5444994796-dtlb9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 11 13:49:22 crc kubenswrapper[5050]: [-]has-synced failed: reason withheld Dec 11 13:49:22 crc kubenswrapper[5050]: [+]process-running ok Dec 11 13:49:22 crc kubenswrapper[5050]: healthz check failed Dec 11 13:49:22 crc kubenswrapper[5050]: I1211 13:49:22.932424 5050 prober.go:107] 
"Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dtlb9" podUID="cef7b97f-083b-44ae-9357-94f97b3eb30c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 11 13:49:23 crc kubenswrapper[5050]: I1211 13:49:23.865115 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"194d1ad9f2d48773c7eaeeb063faa02467fa00926a1796298d9b711ed41ae08b"} Dec 11 13:49:23 crc kubenswrapper[5050]: I1211 13:49:23.865536 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 13:49:23 crc kubenswrapper[5050]: I1211 13:49:23.868361 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"47a9827aba1707764ea889761a56d59308274a4a9f28cf4ab18fa25acc1ec357"} Dec 11 13:49:23 crc kubenswrapper[5050]: I1211 13:49:23.931966 5050 patch_prober.go:28] interesting pod/router-default-5444994796-dtlb9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 11 13:49:23 crc kubenswrapper[5050]: [-]has-synced failed: reason withheld Dec 11 13:49:23 crc kubenswrapper[5050]: [+]process-running ok Dec 11 13:49:23 crc kubenswrapper[5050]: healthz check failed Dec 11 13:49:23 crc kubenswrapper[5050]: I1211 13:49:23.932063 5050 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dtlb9" podUID="cef7b97f-083b-44ae-9357-94f97b3eb30c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 11 13:49:24 crc kubenswrapper[5050]: I1211 13:49:24.932172 5050 patch_prober.go:28] interesting pod/router-default-5444994796-dtlb9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 11 13:49:24 crc kubenswrapper[5050]: [-]has-synced failed: reason withheld Dec 11 13:49:24 crc kubenswrapper[5050]: [+]process-running ok Dec 11 13:49:24 crc kubenswrapper[5050]: healthz check failed Dec 11 13:49:24 crc kubenswrapper[5050]: I1211 13:49:24.932314 5050 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dtlb9" podUID="cef7b97f-083b-44ae-9357-94f97b3eb30c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 11 13:49:25 crc kubenswrapper[5050]: I1211 13:49:25.595970 5050 patch_prober.go:28] interesting pod/console-f9d7485db-gp9fp container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.13:8443/health\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body= Dec 11 13:49:25 crc kubenswrapper[5050]: I1211 13:49:25.596773 5050 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-gp9fp" podUID="f633d554-794b-4a64-9699-27fbc28a4d7c" containerName="console" probeResult="failure" output="Get \"https://10.217.0.13:8443/health\": dial tcp 10.217.0.13:8443: connect: connection refused" Dec 11 13:49:25 crc kubenswrapper[5050]: I1211 13:49:25.641591 5050 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-qc97s" Dec 11 13:49:25 crc kubenswrapper[5050]: I1211 13:49:25.931285 5050 patch_prober.go:28] interesting pod/router-default-5444994796-dtlb9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 11 13:49:25 crc kubenswrapper[5050]: [-]has-synced failed: reason withheld Dec 11 13:49:25 crc kubenswrapper[5050]: [+]process-running ok Dec 11 13:49:25 crc kubenswrapper[5050]: healthz check failed Dec 11 13:49:25 crc kubenswrapper[5050]: I1211 13:49:25.931388 5050 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dtlb9" podUID="cef7b97f-083b-44ae-9357-94f97b3eb30c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 11 13:49:26 crc kubenswrapper[5050]: I1211 13:49:26.930550 5050 patch_prober.go:28] interesting pod/router-default-5444994796-dtlb9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 11 13:49:26 crc kubenswrapper[5050]: [-]has-synced failed: reason withheld Dec 11 13:49:26 crc kubenswrapper[5050]: [+]process-running ok Dec 11 13:49:26 crc kubenswrapper[5050]: healthz check failed Dec 11 13:49:26 crc kubenswrapper[5050]: I1211 13:49:26.930624 5050 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dtlb9" podUID="cef7b97f-083b-44ae-9357-94f97b3eb30c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 11 13:49:27 crc kubenswrapper[5050]: E1211 13:49:27.412386 5050 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="89e622f1f2a7ec200a5a01f9388e42ff9bc39192cbc98de7d5503694481f631b" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 11 13:49:27 crc kubenswrapper[5050]: E1211 13:49:27.414487 5050 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="89e622f1f2a7ec200a5a01f9388e42ff9bc39192cbc98de7d5503694481f631b" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 11 13:49:27 crc kubenswrapper[5050]: E1211 13:49:27.417562 5050 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="89e622f1f2a7ec200a5a01f9388e42ff9bc39192cbc98de7d5503694481f631b" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 11 13:49:27 crc kubenswrapper[5050]: E1211 13:49:27.417619 5050 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-p9jmx" podUID="8f687791-2831-4665-bb32-ee48ab6e70be" containerName="kube-multus-additional-cni-plugins" Dec 11 13:49:27 crc kubenswrapper[5050]: I1211 13:49:27.930434 5050 patch_prober.go:28] interesting pod/router-default-5444994796-dtlb9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with 
statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 11 13:49:27 crc kubenswrapper[5050]: [-]has-synced failed: reason withheld Dec 11 13:49:27 crc kubenswrapper[5050]: [+]process-running ok Dec 11 13:49:27 crc kubenswrapper[5050]: healthz check failed Dec 11 13:49:27 crc kubenswrapper[5050]: I1211 13:49:27.930516 5050 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dtlb9" podUID="cef7b97f-083b-44ae-9357-94f97b3eb30c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 11 13:49:28 crc kubenswrapper[5050]: I1211 13:49:28.930306 5050 patch_prober.go:28] interesting pod/router-default-5444994796-dtlb9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 11 13:49:28 crc kubenswrapper[5050]: [-]has-synced failed: reason withheld Dec 11 13:49:28 crc kubenswrapper[5050]: [+]process-running ok Dec 11 13:49:28 crc kubenswrapper[5050]: healthz check failed Dec 11 13:49:28 crc kubenswrapper[5050]: I1211 13:49:28.930384 5050 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dtlb9" podUID="cef7b97f-083b-44ae-9357-94f97b3eb30c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 11 13:49:29 crc kubenswrapper[5050]: I1211 13:49:29.930758 5050 patch_prober.go:28] interesting pod/router-default-5444994796-dtlb9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 11 13:49:29 crc kubenswrapper[5050]: [-]has-synced failed: reason withheld Dec 11 13:49:29 crc kubenswrapper[5050]: [+]process-running ok Dec 11 13:49:29 crc kubenswrapper[5050]: healthz check failed Dec 11 13:49:29 crc kubenswrapper[5050]: I1211 13:49:29.931479 5050 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dtlb9" podUID="cef7b97f-083b-44ae-9357-94f97b3eb30c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 11 13:49:30 crc kubenswrapper[5050]: I1211 13:49:30.936798 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-dtlb9" Dec 11 13:49:30 crc kubenswrapper[5050]: I1211 13:49:30.941100 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-dtlb9" Dec 11 13:49:36 crc kubenswrapper[5050]: I1211 13:49:35.893786 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-f6wfx" Dec 11 13:49:36 crc kubenswrapper[5050]: I1211 13:49:36.333375 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-gp9fp" Dec 11 13:49:36 crc kubenswrapper[5050]: I1211 13:49:36.337588 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-gp9fp" Dec 11 13:49:37 crc kubenswrapper[5050]: E1211 13:49:37.412370 5050 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="89e622f1f2a7ec200a5a01f9388e42ff9bc39192cbc98de7d5503694481f631b" 
cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 11 13:49:37 crc kubenswrapper[5050]: E1211 13:49:37.413832 5050 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="89e622f1f2a7ec200a5a01f9388e42ff9bc39192cbc98de7d5503694481f631b" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 11 13:49:37 crc kubenswrapper[5050]: E1211 13:49:37.415881 5050 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="89e622f1f2a7ec200a5a01f9388e42ff9bc39192cbc98de7d5503694481f631b" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 11 13:49:37 crc kubenswrapper[5050]: E1211 13:49:37.415910 5050 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-p9jmx" podUID="8f687791-2831-4665-bb32-ee48ab6e70be" containerName="kube-multus-additional-cni-plugins" Dec 11 13:49:43 crc kubenswrapper[5050]: I1211 13:49:43.842601 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 11 13:49:43 crc kubenswrapper[5050]: I1211 13:49:43.927890 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d07c2cf7-922f-4910-9438-05be007c7a77-kubelet-dir\") pod \"d07c2cf7-922f-4910-9438-05be007c7a77\" (UID: \"d07c2cf7-922f-4910-9438-05be007c7a77\") " Dec 11 13:49:43 crc kubenswrapper[5050]: I1211 13:49:43.928215 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d07c2cf7-922f-4910-9438-05be007c7a77-kube-api-access\") pod \"d07c2cf7-922f-4910-9438-05be007c7a77\" (UID: \"d07c2cf7-922f-4910-9438-05be007c7a77\") " Dec 11 13:49:43 crc kubenswrapper[5050]: I1211 13:49:43.928121 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d07c2cf7-922f-4910-9438-05be007c7a77-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "d07c2cf7-922f-4910-9438-05be007c7a77" (UID: "d07c2cf7-922f-4910-9438-05be007c7a77"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 13:49:43 crc kubenswrapper[5050]: I1211 13:49:43.928727 5050 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d07c2cf7-922f-4910-9438-05be007c7a77-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 11 13:49:43 crc kubenswrapper[5050]: I1211 13:49:43.933490 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d07c2cf7-922f-4910-9438-05be007c7a77-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "d07c2cf7-922f-4910-9438-05be007c7a77" (UID: "d07c2cf7-922f-4910-9438-05be007c7a77"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 13:49:44 crc kubenswrapper[5050]: I1211 13:49:44.001787 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"d07c2cf7-922f-4910-9438-05be007c7a77","Type":"ContainerDied","Data":"69d7615dfcc732fc106cdd8f18e7cfc35b615042f8c0a4950e2290fe5ee4fc8a"} Dec 11 13:49:44 crc kubenswrapper[5050]: I1211 13:49:44.001831 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="69d7615dfcc732fc106cdd8f18e7cfc35b615042f8c0a4950e2290fe5ee4fc8a" Dec 11 13:49:44 crc kubenswrapper[5050]: I1211 13:49:44.001890 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 11 13:49:44 crc kubenswrapper[5050]: I1211 13:49:44.029693 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d07c2cf7-922f-4910-9438-05be007c7a77-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 11 13:49:46 crc kubenswrapper[5050]: I1211 13:49:46.993629 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9jns7" Dec 11 13:49:47 crc kubenswrapper[5050]: E1211 13:49:47.412454 5050 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="89e622f1f2a7ec200a5a01f9388e42ff9bc39192cbc98de7d5503694481f631b" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 11 13:49:47 crc kubenswrapper[5050]: E1211 13:49:47.414565 5050 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="89e622f1f2a7ec200a5a01f9388e42ff9bc39192cbc98de7d5503694481f631b" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 11 13:49:47 crc kubenswrapper[5050]: E1211 13:49:47.416520 5050 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="89e622f1f2a7ec200a5a01f9388e42ff9bc39192cbc98de7d5503694481f631b" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 11 13:49:47 crc kubenswrapper[5050]: E1211 13:49:47.416588 5050 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-p9jmx" podUID="8f687791-2831-4665-bb32-ee48ab6e70be" containerName="kube-multus-additional-cni-plugins" Dec 11 13:49:50 crc kubenswrapper[5050]: I1211 13:49:50.040944 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-p9jmx_8f687791-2831-4665-bb32-ee48ab6e70be/kube-multus-additional-cni-plugins/0.log" Dec 11 13:49:50 crc kubenswrapper[5050]: I1211 13:49:50.043189 5050 generic.go:334] "Generic (PLEG): container finished" podID="8f687791-2831-4665-bb32-ee48ab6e70be" containerID="89e622f1f2a7ec200a5a01f9388e42ff9bc39192cbc98de7d5503694481f631b" exitCode=137 Dec 11 13:49:50 crc kubenswrapper[5050]: I1211 13:49:50.043270 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-p9jmx" 
event={"ID":"8f687791-2831-4665-bb32-ee48ab6e70be","Type":"ContainerDied","Data":"89e622f1f2a7ec200a5a01f9388e42ff9bc39192cbc98de7d5503694481f631b"} Dec 11 13:49:53 crc kubenswrapper[5050]: I1211 13:49:53.564077 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Dec 11 13:49:57 crc kubenswrapper[5050]: I1211 13:49:57.282079 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Dec 11 13:49:57 crc kubenswrapper[5050]: E1211 13:49:57.282528 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d07c2cf7-922f-4910-9438-05be007c7a77" containerName="pruner" Dec 11 13:49:57 crc kubenswrapper[5050]: I1211 13:49:57.282559 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="d07c2cf7-922f-4910-9438-05be007c7a77" containerName="pruner" Dec 11 13:49:57 crc kubenswrapper[5050]: I1211 13:49:57.282772 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="d07c2cf7-922f-4910-9438-05be007c7a77" containerName="pruner" Dec 11 13:49:57 crc kubenswrapper[5050]: I1211 13:49:57.283549 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 11 13:49:57 crc kubenswrapper[5050]: I1211 13:49:57.286078 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Dec 11 13:49:57 crc kubenswrapper[5050]: I1211 13:49:57.286128 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Dec 11 13:49:57 crc kubenswrapper[5050]: I1211 13:49:57.291313 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Dec 11 13:49:57 crc kubenswrapper[5050]: I1211 13:49:57.326984 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=4.326927822 podStartE2EDuration="4.326927822s" podCreationTimestamp="2025-12-11 13:49:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 13:49:57.323111196 +0000 UTC m=+88.166833812" watchObservedRunningTime="2025-12-11 13:49:57.326927822 +0000 UTC m=+88.170650408" Dec 11 13:49:57 crc kubenswrapper[5050]: E1211 13:49:57.410422 5050 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 89e622f1f2a7ec200a5a01f9388e42ff9bc39192cbc98de7d5503694481f631b is running failed: container process not found" containerID="89e622f1f2a7ec200a5a01f9388e42ff9bc39192cbc98de7d5503694481f631b" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 11 13:49:57 crc kubenswrapper[5050]: E1211 13:49:57.410895 5050 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 89e622f1f2a7ec200a5a01f9388e42ff9bc39192cbc98de7d5503694481f631b is running failed: container process not found" containerID="89e622f1f2a7ec200a5a01f9388e42ff9bc39192cbc98de7d5503694481f631b" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 11 13:49:57 crc kubenswrapper[5050]: E1211 13:49:57.411262 5050 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 
89e622f1f2a7ec200a5a01f9388e42ff9bc39192cbc98de7d5503694481f631b is running failed: container process not found" containerID="89e622f1f2a7ec200a5a01f9388e42ff9bc39192cbc98de7d5503694481f631b" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 11 13:49:57 crc kubenswrapper[5050]: E1211 13:49:57.411294 5050 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 89e622f1f2a7ec200a5a01f9388e42ff9bc39192cbc98de7d5503694481f631b is running failed: container process not found" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-p9jmx" podUID="8f687791-2831-4665-bb32-ee48ab6e70be" containerName="kube-multus-additional-cni-plugins" Dec 11 13:49:57 crc kubenswrapper[5050]: I1211 13:49:57.437916 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9cf72ec8-510f-4f7a-95d7-180ac3d9fd20-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"9cf72ec8-510f-4f7a-95d7-180ac3d9fd20\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 11 13:49:57 crc kubenswrapper[5050]: I1211 13:49:57.438048 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9cf72ec8-510f-4f7a-95d7-180ac3d9fd20-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"9cf72ec8-510f-4f7a-95d7-180ac3d9fd20\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 11 13:49:57 crc kubenswrapper[5050]: I1211 13:49:57.539387 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9cf72ec8-510f-4f7a-95d7-180ac3d9fd20-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"9cf72ec8-510f-4f7a-95d7-180ac3d9fd20\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 11 13:49:57 crc kubenswrapper[5050]: I1211 13:49:57.539765 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9cf72ec8-510f-4f7a-95d7-180ac3d9fd20-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"9cf72ec8-510f-4f7a-95d7-180ac3d9fd20\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 11 13:49:57 crc kubenswrapper[5050]: I1211 13:49:57.539893 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9cf72ec8-510f-4f7a-95d7-180ac3d9fd20-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"9cf72ec8-510f-4f7a-95d7-180ac3d9fd20\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 11 13:49:57 crc kubenswrapper[5050]: I1211 13:49:57.567336 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9cf72ec8-510f-4f7a-95d7-180ac3d9fd20-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"9cf72ec8-510f-4f7a-95d7-180ac3d9fd20\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 11 13:49:57 crc kubenswrapper[5050]: I1211 13:49:57.605127 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 11 13:50:01 crc kubenswrapper[5050]: I1211 13:50:01.287241 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Dec 11 13:50:01 crc kubenswrapper[5050]: I1211 13:50:01.289396 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Dec 11 13:50:01 crc kubenswrapper[5050]: I1211 13:50:01.289627 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 11 13:50:01 crc kubenswrapper[5050]: I1211 13:50:01.395473 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5cc2be88-d194-4626-9f82-a4ccf377ce0d-kube-api-access\") pod \"installer-9-crc\" (UID: \"5cc2be88-d194-4626-9f82-a4ccf377ce0d\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 11 13:50:01 crc kubenswrapper[5050]: I1211 13:50:01.395545 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5cc2be88-d194-4626-9f82-a4ccf377ce0d-var-lock\") pod \"installer-9-crc\" (UID: \"5cc2be88-d194-4626-9f82-a4ccf377ce0d\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 11 13:50:01 crc kubenswrapper[5050]: I1211 13:50:01.395719 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5cc2be88-d194-4626-9f82-a4ccf377ce0d-kubelet-dir\") pod \"installer-9-crc\" (UID: \"5cc2be88-d194-4626-9f82-a4ccf377ce0d\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 11 13:50:01 crc kubenswrapper[5050]: I1211 13:50:01.497330 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5cc2be88-d194-4626-9f82-a4ccf377ce0d-kubelet-dir\") pod \"installer-9-crc\" (UID: \"5cc2be88-d194-4626-9f82-a4ccf377ce0d\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 11 13:50:01 crc kubenswrapper[5050]: I1211 13:50:01.497392 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5cc2be88-d194-4626-9f82-a4ccf377ce0d-kube-api-access\") pod \"installer-9-crc\" (UID: \"5cc2be88-d194-4626-9f82-a4ccf377ce0d\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 11 13:50:01 crc kubenswrapper[5050]: I1211 13:50:01.497425 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5cc2be88-d194-4626-9f82-a4ccf377ce0d-var-lock\") pod \"installer-9-crc\" (UID: \"5cc2be88-d194-4626-9f82-a4ccf377ce0d\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 11 13:50:01 crc kubenswrapper[5050]: I1211 13:50:01.497492 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5cc2be88-d194-4626-9f82-a4ccf377ce0d-kubelet-dir\") pod \"installer-9-crc\" (UID: \"5cc2be88-d194-4626-9f82-a4ccf377ce0d\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 11 13:50:01 crc kubenswrapper[5050]: I1211 13:50:01.497548 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5cc2be88-d194-4626-9f82-a4ccf377ce0d-var-lock\") pod \"installer-9-crc\" (UID: 
\"5cc2be88-d194-4626-9f82-a4ccf377ce0d\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 11 13:50:01 crc kubenswrapper[5050]: I1211 13:50:01.521772 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5cc2be88-d194-4626-9f82-a4ccf377ce0d-kube-api-access\") pod \"installer-9-crc\" (UID: \"5cc2be88-d194-4626-9f82-a4ccf377ce0d\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 11 13:50:01 crc kubenswrapper[5050]: I1211 13:50:01.606645 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 11 13:50:01 crc kubenswrapper[5050]: I1211 13:50:01.982467 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 13:50:02 crc kubenswrapper[5050]: E1211 13:50:02.059919 5050 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: reading blob sha256:e9bc35478da4e272fcc5e4573ebac9535075e1f2d8c613b985ef6e3a3c0c813e: Get \"https://registry.redhat.io/v2/redhat/redhat-operator-index/blobs/sha256:e9bc35478da4e272fcc5e4573ebac9535075e1f2d8c613b985ef6e3a3c0c813e\": context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Dec 11 13:50:02 crc kubenswrapper[5050]: E1211 13:50:02.060116 5050 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rwcmn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-brfnx_openshift-marketplace(ddbc9fa4-63d4-48f6-b8e0-a2c36815399c): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: reading blob sha256:e9bc35478da4e272fcc5e4573ebac9535075e1f2d8c613b985ef6e3a3c0c813e: Get \"https://registry.redhat.io/v2/redhat/redhat-operator-index/blobs/sha256:e9bc35478da4e272fcc5e4573ebac9535075e1f2d8c613b985ef6e3a3c0c813e\": context canceled" logger="UnhandledError" Dec 11 13:50:02 crc kubenswrapper[5050]: E1211 13:50:02.061333 
5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: reading blob sha256:e9bc35478da4e272fcc5e4573ebac9535075e1f2d8c613b985ef6e3a3c0c813e: Get \\\"https://registry.redhat.io/v2/redhat/redhat-operator-index/blobs/sha256:e9bc35478da4e272fcc5e4573ebac9535075e1f2d8c613b985ef6e3a3c0c813e\\\": context canceled\"" pod="openshift-marketplace/redhat-operators-brfnx" podUID="ddbc9fa4-63d4-48f6-b8e0-a2c36815399c" Dec 11 13:50:06 crc kubenswrapper[5050]: E1211 13:50:06.824521 5050 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: reading blob sha256:e9bc35478da4e272fcc5e4573ebac9535075e1f2d8c613b985ef6e3a3c0c813e: Get \"https://registry.redhat.io/v2/redhat/redhat-operator-index/blobs/sha256:e9bc35478da4e272fcc5e4573ebac9535075e1f2d8c613b985ef6e3a3c0c813e\": context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Dec 11 13:50:06 crc kubenswrapper[5050]: E1211 13:50:06.825354 5050 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xc6lq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-jsc5s_openshift-marketplace(680b10f8-ff96-43e4-b2e2-22ee05c6d815): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: reading blob sha256:e9bc35478da4e272fcc5e4573ebac9535075e1f2d8c613b985ef6e3a3c0c813e: Get \"https://registry.redhat.io/v2/redhat/redhat-operator-index/blobs/sha256:e9bc35478da4e272fcc5e4573ebac9535075e1f2d8c613b985ef6e3a3c0c813e\": context canceled" logger="UnhandledError" Dec 11 13:50:06 crc kubenswrapper[5050]: E1211 13:50:06.827240 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: reading blob 
sha256:e9bc35478da4e272fcc5e4573ebac9535075e1f2d8c613b985ef6e3a3c0c813e: Get \\\"https://registry.redhat.io/v2/redhat/redhat-operator-index/blobs/sha256:e9bc35478da4e272fcc5e4573ebac9535075e1f2d8c613b985ef6e3a3c0c813e\\\": context canceled\"" pod="openshift-marketplace/redhat-operators-jsc5s" podUID="680b10f8-ff96-43e4-b2e2-22ee05c6d815" Dec 11 13:50:07 crc kubenswrapper[5050]: E1211 13:50:07.410436 5050 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 89e622f1f2a7ec200a5a01f9388e42ff9bc39192cbc98de7d5503694481f631b is running failed: container process not found" containerID="89e622f1f2a7ec200a5a01f9388e42ff9bc39192cbc98de7d5503694481f631b" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 11 13:50:07 crc kubenswrapper[5050]: E1211 13:50:07.411131 5050 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 89e622f1f2a7ec200a5a01f9388e42ff9bc39192cbc98de7d5503694481f631b is running failed: container process not found" containerID="89e622f1f2a7ec200a5a01f9388e42ff9bc39192cbc98de7d5503694481f631b" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 11 13:50:07 crc kubenswrapper[5050]: E1211 13:50:07.411425 5050 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 89e622f1f2a7ec200a5a01f9388e42ff9bc39192cbc98de7d5503694481f631b is running failed: container process not found" containerID="89e622f1f2a7ec200a5a01f9388e42ff9bc39192cbc98de7d5503694481f631b" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 11 13:50:07 crc kubenswrapper[5050]: E1211 13:50:07.411463 5050 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 89e622f1f2a7ec200a5a01f9388e42ff9bc39192cbc98de7d5503694481f631b is running failed: container process not found" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-p9jmx" podUID="8f687791-2831-4665-bb32-ee48ab6e70be" containerName="kube-multus-additional-cni-plugins" Dec 11 13:50:12 crc kubenswrapper[5050]: E1211 13:50:12.869273 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-jsc5s" podUID="680b10f8-ff96-43e4-b2e2-22ee05c6d815" Dec 11 13:50:12 crc kubenswrapper[5050]: E1211 13:50:12.933131 5050 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Dec 11 13:50:12 crc kubenswrapper[5050]: E1211 13:50:12.933377 5050 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pn95r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-c2qj7_openshift-marketplace(ed489b52-31c7-44c8-b634-4a99e1644f65): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 11 13:50:12 crc kubenswrapper[5050]: E1211 13:50:12.934709 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-c2qj7" podUID="ed489b52-31c7-44c8-b634-4a99e1644f65" Dec 11 13:50:14 crc kubenswrapper[5050]: E1211 13:50:14.077412 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-c2qj7" podUID="ed489b52-31c7-44c8-b634-4a99e1644f65" Dec 11 13:50:14 crc kubenswrapper[5050]: E1211 13:50:14.150196 5050 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Dec 11 13:50:14 crc kubenswrapper[5050]: E1211 13:50:14.150358 5050 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gnn9x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-6n98v_openshift-marketplace(b2ee71a3-392e-442c-aa3b-bec310a86031): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 11 13:50:14 crc kubenswrapper[5050]: E1211 13:50:14.151557 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-6n98v" podUID="b2ee71a3-392e-442c-aa3b-bec310a86031" Dec 11 13:50:15 crc kubenswrapper[5050]: E1211 13:50:15.622195 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-6n98v" podUID="b2ee71a3-392e-442c-aa3b-bec310a86031" Dec 11 13:50:15 crc kubenswrapper[5050]: I1211 13:50:15.693635 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-p9jmx_8f687791-2831-4665-bb32-ee48ab6e70be/kube-multus-additional-cni-plugins/0.log" Dec 11 13:50:15 crc kubenswrapper[5050]: I1211 13:50:15.693861 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-p9jmx" Dec 11 13:50:15 crc kubenswrapper[5050]: E1211 13:50:15.705097 5050 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Dec 11 13:50:15 crc kubenswrapper[5050]: E1211 13:50:15.705261 5050 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8jzz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-pvns6_openshift-marketplace(6f9d51ca-1ebf-4986-ba4b-08939d025cbd): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 11 13:50:15 crc kubenswrapper[5050]: E1211 13:50:15.705765 5050 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Dec 11 13:50:15 crc kubenswrapper[5050]: E1211 13:50:15.706048 5050 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kjhqt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-qg6rf_openshift-marketplace(4feb3774-888a-4be4-b47a-b929ec6e98dc): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 11 13:50:15 crc kubenswrapper[5050]: E1211 13:50:15.707135 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-qg6rf" podUID="4feb3774-888a-4be4-b47a-b929ec6e98dc" Dec 11 13:50:15 crc kubenswrapper[5050]: E1211 13:50:15.707181 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-pvns6" podUID="6f9d51ca-1ebf-4986-ba4b-08939d025cbd" Dec 11 13:50:15 crc kubenswrapper[5050]: E1211 13:50:15.728289 5050 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Dec 11 13:50:15 crc kubenswrapper[5050]: E1211 13:50:15.728424 5050 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-967wt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-klgr9_openshift-marketplace(5faa7088-04c1-4d75-abab-1e426f7cd032): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 11 13:50:15 crc kubenswrapper[5050]: E1211 13:50:15.731492 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-klgr9" podUID="5faa7088-04c1-4d75-abab-1e426f7cd032" Dec 11 13:50:15 crc kubenswrapper[5050]: E1211 13:50:15.754066 5050 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Dec 11 13:50:15 crc kubenswrapper[5050]: E1211 13:50:15.754210 5050 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7dv7p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-bfjt2_openshift-marketplace(345ac559-35e7-487b-859a-e583a6e88c6c): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 11 13:50:15 crc kubenswrapper[5050]: E1211 13:50:15.756116 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-bfjt2" podUID="345ac559-35e7-487b-859a-e583a6e88c6c" Dec 11 13:50:15 crc kubenswrapper[5050]: I1211 13:50:15.815583 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/8f687791-2831-4665-bb32-ee48ab6e70be-cni-sysctl-allowlist\") pod \"8f687791-2831-4665-bb32-ee48ab6e70be\" (UID: \"8f687791-2831-4665-bb32-ee48ab6e70be\") " Dec 11 13:50:15 crc kubenswrapper[5050]: I1211 13:50:15.815644 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/8f687791-2831-4665-bb32-ee48ab6e70be-ready\") pod \"8f687791-2831-4665-bb32-ee48ab6e70be\" (UID: \"8f687791-2831-4665-bb32-ee48ab6e70be\") " Dec 11 13:50:15 crc kubenswrapper[5050]: I1211 13:50:15.815663 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/8f687791-2831-4665-bb32-ee48ab6e70be-tuning-conf-dir\") pod \"8f687791-2831-4665-bb32-ee48ab6e70be\" (UID: \"8f687791-2831-4665-bb32-ee48ab6e70be\") " Dec 11 13:50:15 crc kubenswrapper[5050]: I1211 13:50:15.815759 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s2d6m\" (UniqueName: \"kubernetes.io/projected/8f687791-2831-4665-bb32-ee48ab6e70be-kube-api-access-s2d6m\") pod \"8f687791-2831-4665-bb32-ee48ab6e70be\" (UID: \"8f687791-2831-4665-bb32-ee48ab6e70be\") " Dec 11 13:50:15 crc kubenswrapper[5050]: I1211 13:50:15.815745 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/8f687791-2831-4665-bb32-ee48ab6e70be-tuning-conf-dir" (OuterVolumeSpecName: "tuning-conf-dir") pod "8f687791-2831-4665-bb32-ee48ab6e70be" (UID: "8f687791-2831-4665-bb32-ee48ab6e70be"). InnerVolumeSpecName "tuning-conf-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 13:50:15 crc kubenswrapper[5050]: I1211 13:50:15.816350 5050 reconciler_common.go:293] "Volume detached for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/8f687791-2831-4665-bb32-ee48ab6e70be-tuning-conf-dir\") on node \"crc\" DevicePath \"\"" Dec 11 13:50:15 crc kubenswrapper[5050]: I1211 13:50:15.816371 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f687791-2831-4665-bb32-ee48ab6e70be-ready" (OuterVolumeSpecName: "ready") pod "8f687791-2831-4665-bb32-ee48ab6e70be" (UID: "8f687791-2831-4665-bb32-ee48ab6e70be"). InnerVolumeSpecName "ready". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 13:50:15 crc kubenswrapper[5050]: I1211 13:50:15.816554 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f687791-2831-4665-bb32-ee48ab6e70be-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "8f687791-2831-4665-bb32-ee48ab6e70be" (UID: "8f687791-2831-4665-bb32-ee48ab6e70be"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 13:50:15 crc kubenswrapper[5050]: I1211 13:50:15.822982 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f687791-2831-4665-bb32-ee48ab6e70be-kube-api-access-s2d6m" (OuterVolumeSpecName: "kube-api-access-s2d6m") pod "8f687791-2831-4665-bb32-ee48ab6e70be" (UID: "8f687791-2831-4665-bb32-ee48ab6e70be"). InnerVolumeSpecName "kube-api-access-s2d6m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 13:50:15 crc kubenswrapper[5050]: I1211 13:50:15.917981 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s2d6m\" (UniqueName: \"kubernetes.io/projected/8f687791-2831-4665-bb32-ee48ab6e70be-kube-api-access-s2d6m\") on node \"crc\" DevicePath \"\"" Dec 11 13:50:15 crc kubenswrapper[5050]: I1211 13:50:15.918029 5050 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/8f687791-2831-4665-bb32-ee48ab6e70be-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Dec 11 13:50:15 crc kubenswrapper[5050]: I1211 13:50:15.918039 5050 reconciler_common.go:293] "Volume detached for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/8f687791-2831-4665-bb32-ee48ab6e70be-ready\") on node \"crc\" DevicePath \"\"" Dec 11 13:50:16 crc kubenswrapper[5050]: I1211 13:50:16.041001 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Dec 11 13:50:16 crc kubenswrapper[5050]: I1211 13:50:16.080673 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Dec 11 13:50:16 crc kubenswrapper[5050]: I1211 13:50:16.197540 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"9cf72ec8-510f-4f7a-95d7-180ac3d9fd20","Type":"ContainerStarted","Data":"39d95c8b119af8f3f9cc86d8f0986824b9418739a1a263f0587ecabb12e0cdfc"} Dec 11 13:50:16 crc kubenswrapper[5050]: I1211 13:50:16.199872 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-p9jmx_8f687791-2831-4665-bb32-ee48ab6e70be/kube-multus-additional-cni-plugins/0.log" Dec 11 13:50:16 crc kubenswrapper[5050]: I1211 13:50:16.199973 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-p9jmx" event={"ID":"8f687791-2831-4665-bb32-ee48ab6e70be","Type":"ContainerDied","Data":"a06ea0d64ae569e2b4d2d528c3a46baf1a1102780ca5077c0c515d53a8841083"} Dec 11 13:50:16 crc kubenswrapper[5050]: I1211 13:50:16.200060 5050 scope.go:117] "RemoveContainer" containerID="89e622f1f2a7ec200a5a01f9388e42ff9bc39192cbc98de7d5503694481f631b" Dec 11 13:50:16 crc kubenswrapper[5050]: I1211 13:50:16.200137 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-p9jmx" Dec 11 13:50:16 crc kubenswrapper[5050]: I1211 13:50:16.203300 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"5cc2be88-d194-4626-9f82-a4ccf377ce0d","Type":"ContainerStarted","Data":"6241fcea317d7837cddb5a29b2d8a144737b4d5c061392f56b5d841e9449ab1f"} Dec 11 13:50:16 crc kubenswrapper[5050]: E1211 13:50:16.205067 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-klgr9" podUID="5faa7088-04c1-4d75-abab-1e426f7cd032" Dec 11 13:50:16 crc kubenswrapper[5050]: E1211 13:50:16.205724 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-qg6rf" podUID="4feb3774-888a-4be4-b47a-b929ec6e98dc" Dec 11 13:50:16 crc kubenswrapper[5050]: E1211 13:50:16.205787 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-bfjt2" podUID="345ac559-35e7-487b-859a-e583a6e88c6c" Dec 11 13:50:16 crc kubenswrapper[5050]: E1211 13:50:16.205819 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-pvns6" podUID="6f9d51ca-1ebf-4986-ba4b-08939d025cbd" Dec 11 13:50:16 crc kubenswrapper[5050]: I1211 13:50:16.292268 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-p9jmx"] Dec 11 13:50:16 crc kubenswrapper[5050]: I1211 13:50:16.293146 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-p9jmx"] Dec 11 13:50:17 crc kubenswrapper[5050]: I1211 13:50:17.210198 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"5cc2be88-d194-4626-9f82-a4ccf377ce0d","Type":"ContainerStarted","Data":"ad31ba71f196ede4a4174c658f065f02a45637c6b7edd0ace8a127e232567385"} Dec 11 13:50:17 crc kubenswrapper[5050]: I1211 13:50:17.213454 5050 generic.go:334] "Generic (PLEG): container finished" podID="9cf72ec8-510f-4f7a-95d7-180ac3d9fd20" containerID="3cdbf8555ff74fbecaf466734fa5ab3a609a0c8017feec9b71345dc78055216e" exitCode=0 Dec 11 13:50:17 crc kubenswrapper[5050]: I1211 13:50:17.213510 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"9cf72ec8-510f-4f7a-95d7-180ac3d9fd20","Type":"ContainerDied","Data":"3cdbf8555ff74fbecaf466734fa5ab3a609a0c8017feec9b71345dc78055216e"} Dec 11 13:50:17 crc kubenswrapper[5050]: I1211 13:50:17.230110 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=16.230090882 podStartE2EDuration="16.230090882s" podCreationTimestamp="2025-12-11 13:50:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 
UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 13:50:17.229439142 +0000 UTC m=+108.073161728" watchObservedRunningTime="2025-12-11 13:50:17.230090882 +0000 UTC m=+108.073813458" Dec 11 13:50:17 crc kubenswrapper[5050]: I1211 13:50:17.588736 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f687791-2831-4665-bb32-ee48ab6e70be" path="/var/lib/kubelet/pods/8f687791-2831-4665-bb32-ee48ab6e70be/volumes" Dec 11 13:50:18 crc kubenswrapper[5050]: I1211 13:50:18.468425 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 11 13:50:18 crc kubenswrapper[5050]: I1211 13:50:18.549654 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9cf72ec8-510f-4f7a-95d7-180ac3d9fd20-kube-api-access\") pod \"9cf72ec8-510f-4f7a-95d7-180ac3d9fd20\" (UID: \"9cf72ec8-510f-4f7a-95d7-180ac3d9fd20\") " Dec 11 13:50:18 crc kubenswrapper[5050]: I1211 13:50:18.549737 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9cf72ec8-510f-4f7a-95d7-180ac3d9fd20-kubelet-dir\") pod \"9cf72ec8-510f-4f7a-95d7-180ac3d9fd20\" (UID: \"9cf72ec8-510f-4f7a-95d7-180ac3d9fd20\") " Dec 11 13:50:18 crc kubenswrapper[5050]: I1211 13:50:18.553977 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9cf72ec8-510f-4f7a-95d7-180ac3d9fd20-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "9cf72ec8-510f-4f7a-95d7-180ac3d9fd20" (UID: "9cf72ec8-510f-4f7a-95d7-180ac3d9fd20"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 13:50:18 crc kubenswrapper[5050]: I1211 13:50:18.558122 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9cf72ec8-510f-4f7a-95d7-180ac3d9fd20-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "9cf72ec8-510f-4f7a-95d7-180ac3d9fd20" (UID: "9cf72ec8-510f-4f7a-95d7-180ac3d9fd20"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 13:50:18 crc kubenswrapper[5050]: I1211 13:50:18.651739 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9cf72ec8-510f-4f7a-95d7-180ac3d9fd20-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 11 13:50:18 crc kubenswrapper[5050]: I1211 13:50:18.651787 5050 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9cf72ec8-510f-4f7a-95d7-180ac3d9fd20-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 11 13:50:19 crc kubenswrapper[5050]: I1211 13:50:19.229787 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"9cf72ec8-510f-4f7a-95d7-180ac3d9fd20","Type":"ContainerDied","Data":"39d95c8b119af8f3f9cc86d8f0986824b9418739a1a263f0587ecabb12e0cdfc"} Dec 11 13:50:19 crc kubenswrapper[5050]: I1211 13:50:19.229837 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 11 13:50:19 crc kubenswrapper[5050]: I1211 13:50:19.229840 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="39d95c8b119af8f3f9cc86d8f0986824b9418739a1a263f0587ecabb12e0cdfc" Dec 11 13:50:23 crc kubenswrapper[5050]: I1211 13:50:23.253483 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-brfnx" event={"ID":"ddbc9fa4-63d4-48f6-b8e0-a2c36815399c","Type":"ContainerStarted","Data":"2c409605482c83ca511a448721060531eac0329b59f94fdbcccff56c48275b7c"} Dec 11 13:50:24 crc kubenswrapper[5050]: I1211 13:50:24.260897 5050 generic.go:334] "Generic (PLEG): container finished" podID="ddbc9fa4-63d4-48f6-b8e0-a2c36815399c" containerID="2c409605482c83ca511a448721060531eac0329b59f94fdbcccff56c48275b7c" exitCode=0 Dec 11 13:50:24 crc kubenswrapper[5050]: I1211 13:50:24.260957 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-brfnx" event={"ID":"ddbc9fa4-63d4-48f6-b8e0-a2c36815399c","Type":"ContainerDied","Data":"2c409605482c83ca511a448721060531eac0329b59f94fdbcccff56c48275b7c"} Dec 11 13:50:26 crc kubenswrapper[5050]: I1211 13:50:26.270943 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-brfnx" event={"ID":"ddbc9fa4-63d4-48f6-b8e0-a2c36815399c","Type":"ContainerStarted","Data":"aa6b0c02ea4dd7bd70eca83559c47c4b398c88cb0070aca9a6a6b059dcb03dd8"} Dec 11 13:50:26 crc kubenswrapper[5050]: I1211 13:50:26.292550 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-brfnx" podStartSLOduration=2.202047178 podStartE2EDuration="1m8.292530155s" podCreationTimestamp="2025-12-11 13:49:18 +0000 UTC" firstStartedPulling="2025-12-11 13:49:19.699730876 +0000 UTC m=+50.543453462" lastFinishedPulling="2025-12-11 13:50:25.790213853 +0000 UTC m=+116.633936439" observedRunningTime="2025-12-11 13:50:26.291594267 +0000 UTC m=+117.135316853" watchObservedRunningTime="2025-12-11 13:50:26.292530155 +0000 UTC m=+117.136252741" Dec 11 13:50:28 crc kubenswrapper[5050]: I1211 13:50:28.290408 5050 generic.go:334] "Generic (PLEG): container finished" podID="b2ee71a3-392e-442c-aa3b-bec310a86031" containerID="98ae423705a56d7c67be8f9fcd4ada09e0693bde23c796cc98c7ca9fa573a400" exitCode=0 Dec 11 13:50:28 crc kubenswrapper[5050]: I1211 13:50:28.290479 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6n98v" event={"ID":"b2ee71a3-392e-442c-aa3b-bec310a86031","Type":"ContainerDied","Data":"98ae423705a56d7c67be8f9fcd4ada09e0693bde23c796cc98c7ca9fa573a400"} Dec 11 13:50:28 crc kubenswrapper[5050]: I1211 13:50:28.292433 5050 generic.go:334] "Generic (PLEG): container finished" podID="ed489b52-31c7-44c8-b634-4a99e1644f65" containerID="039e0ec92a9d49248c88597aa940cbe4190a3efea4fa79b28c39e5e6c3475d6a" exitCode=0 Dec 11 13:50:28 crc kubenswrapper[5050]: I1211 13:50:28.292565 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c2qj7" event={"ID":"ed489b52-31c7-44c8-b634-4a99e1644f65","Type":"ContainerDied","Data":"039e0ec92a9d49248c88597aa940cbe4190a3efea4fa79b28c39e5e6c3475d6a"} Dec 11 13:50:28 crc kubenswrapper[5050]: I1211 13:50:28.295541 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jsc5s" 
event={"ID":"680b10f8-ff96-43e4-b2e2-22ee05c6d815","Type":"ContainerStarted","Data":"8b0103cb0a70f15cbc1e6b83960a4ecad55396f52a4bfb3d0309876fd8a20590"} Dec 11 13:50:28 crc kubenswrapper[5050]: I1211 13:50:28.566232 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-brfnx" Dec 11 13:50:28 crc kubenswrapper[5050]: I1211 13:50:28.566289 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-brfnx" Dec 11 13:50:29 crc kubenswrapper[5050]: I1211 13:50:29.301081 5050 generic.go:334] "Generic (PLEG): container finished" podID="680b10f8-ff96-43e4-b2e2-22ee05c6d815" containerID="8b0103cb0a70f15cbc1e6b83960a4ecad55396f52a4bfb3d0309876fd8a20590" exitCode=0 Dec 11 13:50:29 crc kubenswrapper[5050]: I1211 13:50:29.301153 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jsc5s" event={"ID":"680b10f8-ff96-43e4-b2e2-22ee05c6d815","Type":"ContainerDied","Data":"8b0103cb0a70f15cbc1e6b83960a4ecad55396f52a4bfb3d0309876fd8a20590"} Dec 11 13:50:29 crc kubenswrapper[5050]: I1211 13:50:29.846339 5050 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-brfnx" podUID="ddbc9fa4-63d4-48f6-b8e0-a2c36815399c" containerName="registry-server" probeResult="failure" output=< Dec 11 13:50:29 crc kubenswrapper[5050]: timeout: failed to connect service ":50051" within 1s Dec 11 13:50:29 crc kubenswrapper[5050]: > Dec 11 13:50:30 crc kubenswrapper[5050]: I1211 13:50:30.311423 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pvns6" event={"ID":"6f9d51ca-1ebf-4986-ba4b-08939d025cbd","Type":"ContainerStarted","Data":"356266226c3e3e247d874d7cedd227c8c9924a3eeee1038302774a63f8dd156f"} Dec 11 13:50:30 crc kubenswrapper[5050]: I1211 13:50:30.313501 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6n98v" event={"ID":"b2ee71a3-392e-442c-aa3b-bec310a86031","Type":"ContainerStarted","Data":"dfd20eb336193611fa0682d43fce71cb8854edd92d8af8903e3b99c890351872"} Dec 11 13:50:30 crc kubenswrapper[5050]: I1211 13:50:30.315352 5050 generic.go:334] "Generic (PLEG): container finished" podID="5faa7088-04c1-4d75-abab-1e426f7cd032" containerID="6c88c33d12545816768a18a1f15c48e0f04a99bed5dc89024e8ea3a8a816356d" exitCode=0 Dec 11 13:50:30 crc kubenswrapper[5050]: I1211 13:50:30.315408 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-klgr9" event={"ID":"5faa7088-04c1-4d75-abab-1e426f7cd032","Type":"ContainerDied","Data":"6c88c33d12545816768a18a1f15c48e0f04a99bed5dc89024e8ea3a8a816356d"} Dec 11 13:50:30 crc kubenswrapper[5050]: I1211 13:50:30.319373 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qg6rf" event={"ID":"4feb3774-888a-4be4-b47a-b929ec6e98dc","Type":"ContainerStarted","Data":"c97be9f9bc94115dd5ed635c1b25876754c45231b8349986a5ac49b0a7cfdce2"} Dec 11 13:50:30 crc kubenswrapper[5050]: I1211 13:50:30.322252 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c2qj7" event={"ID":"ed489b52-31c7-44c8-b634-4a99e1644f65","Type":"ContainerStarted","Data":"dc830007bd91bbaf0d691bea30b8eccc2bcfd27e6c94b428b50f9493d2ea6b93"} Dec 11 13:50:30 crc kubenswrapper[5050]: I1211 13:50:30.332719 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jsc5s" 
event={"ID":"680b10f8-ff96-43e4-b2e2-22ee05c6d815","Type":"ContainerStarted","Data":"e284d7968a917c91aed620df3cec36c9db27f417a664bc53be9523947ed1bef2"} Dec 11 13:50:30 crc kubenswrapper[5050]: I1211 13:50:30.356538 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-c2qj7" podStartSLOduration=2.130220289 podStartE2EDuration="1m14.35651811s" podCreationTimestamp="2025-12-11 13:49:16 +0000 UTC" firstStartedPulling="2025-12-11 13:49:17.521290574 +0000 UTC m=+48.365013160" lastFinishedPulling="2025-12-11 13:50:29.747588395 +0000 UTC m=+120.591310981" observedRunningTime="2025-12-11 13:50:30.353923331 +0000 UTC m=+121.197645917" watchObservedRunningTime="2025-12-11 13:50:30.35651811 +0000 UTC m=+121.200240696" Dec 11 13:50:30 crc kubenswrapper[5050]: I1211 13:50:30.390507 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-6n98v" podStartSLOduration=3.079501744 podStartE2EDuration="1m16.390488656s" podCreationTimestamp="2025-12-11 13:49:14 +0000 UTC" firstStartedPulling="2025-12-11 13:49:16.509122695 +0000 UTC m=+47.352845281" lastFinishedPulling="2025-12-11 13:50:29.820109607 +0000 UTC m=+120.663832193" observedRunningTime="2025-12-11 13:50:30.388214197 +0000 UTC m=+121.231936783" watchObservedRunningTime="2025-12-11 13:50:30.390488656 +0000 UTC m=+121.234211242" Dec 11 13:50:30 crc kubenswrapper[5050]: I1211 13:50:30.407441 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-jsc5s" podStartSLOduration=3.076264578 podStartE2EDuration="1m13.407418012s" podCreationTimestamp="2025-12-11 13:49:17 +0000 UTC" firstStartedPulling="2025-12-11 13:49:19.697727402 +0000 UTC m=+50.541449988" lastFinishedPulling="2025-12-11 13:50:30.028880836 +0000 UTC m=+120.872603422" observedRunningTime="2025-12-11 13:50:30.406045901 +0000 UTC m=+121.249768497" watchObservedRunningTime="2025-12-11 13:50:30.407418012 +0000 UTC m=+121.251140608" Dec 11 13:50:30 crc kubenswrapper[5050]: E1211 13:50:30.981813 5050 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4feb3774_888a_4be4_b47a_b929ec6e98dc.slice/crio-c97be9f9bc94115dd5ed635c1b25876754c45231b8349986a5ac49b0a7cfdce2.scope\": RecentStats: unable to find data in memory cache]" Dec 11 13:50:32 crc kubenswrapper[5050]: I1211 13:50:32.350190 5050 generic.go:334] "Generic (PLEG): container finished" podID="6f9d51ca-1ebf-4986-ba4b-08939d025cbd" containerID="356266226c3e3e247d874d7cedd227c8c9924a3eeee1038302774a63f8dd156f" exitCode=0 Dec 11 13:50:32 crc kubenswrapper[5050]: I1211 13:50:32.350270 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pvns6" event={"ID":"6f9d51ca-1ebf-4986-ba4b-08939d025cbd","Type":"ContainerDied","Data":"356266226c3e3e247d874d7cedd227c8c9924a3eeee1038302774a63f8dd156f"} Dec 11 13:50:32 crc kubenswrapper[5050]: I1211 13:50:32.355943 5050 generic.go:334] "Generic (PLEG): container finished" podID="4feb3774-888a-4be4-b47a-b929ec6e98dc" containerID="c97be9f9bc94115dd5ed635c1b25876754c45231b8349986a5ac49b0a7cfdce2" exitCode=0 Dec 11 13:50:32 crc kubenswrapper[5050]: I1211 13:50:32.355997 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qg6rf" 
event={"ID":"4feb3774-888a-4be4-b47a-b929ec6e98dc","Type":"ContainerDied","Data":"c97be9f9bc94115dd5ed635c1b25876754c45231b8349986a5ac49b0a7cfdce2"} Dec 11 13:50:33 crc kubenswrapper[5050]: I1211 13:50:33.363846 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-klgr9" event={"ID":"5faa7088-04c1-4d75-abab-1e426f7cd032","Type":"ContainerStarted","Data":"c1d11feca1d0095daddd3422394680e19f240a7953b861267d74c918f945c142"} Dec 11 13:50:33 crc kubenswrapper[5050]: I1211 13:50:33.383672 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-klgr9" podStartSLOduration=3.606687187 podStartE2EDuration="1m19.383630746s" podCreationTimestamp="2025-12-11 13:49:14 +0000 UTC" firstStartedPulling="2025-12-11 13:49:16.511446628 +0000 UTC m=+47.355169214" lastFinishedPulling="2025-12-11 13:50:32.288390187 +0000 UTC m=+123.132112773" observedRunningTime="2025-12-11 13:50:33.383185612 +0000 UTC m=+124.226908198" watchObservedRunningTime="2025-12-11 13:50:33.383630746 +0000 UTC m=+124.227353362" Dec 11 13:50:34 crc kubenswrapper[5050]: I1211 13:50:34.953934 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-6n98v" Dec 11 13:50:34 crc kubenswrapper[5050]: I1211 13:50:34.954284 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-6n98v" Dec 11 13:50:35 crc kubenswrapper[5050]: I1211 13:50:35.001930 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-6n98v" Dec 11 13:50:35 crc kubenswrapper[5050]: I1211 13:50:35.355706 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-klgr9" Dec 11 13:50:35 crc kubenswrapper[5050]: I1211 13:50:35.355767 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-klgr9" Dec 11 13:50:35 crc kubenswrapper[5050]: I1211 13:50:35.401466 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-klgr9" Dec 11 13:50:35 crc kubenswrapper[5050]: I1211 13:50:35.426415 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-6n98v" Dec 11 13:50:36 crc kubenswrapper[5050]: I1211 13:50:36.922023 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-c2qj7" Dec 11 13:50:36 crc kubenswrapper[5050]: I1211 13:50:36.922360 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-c2qj7" Dec 11 13:50:36 crc kubenswrapper[5050]: I1211 13:50:36.966075 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-c2qj7" Dec 11 13:50:37 crc kubenswrapper[5050]: I1211 13:50:37.437189 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-c2qj7" Dec 11 13:50:38 crc kubenswrapper[5050]: I1211 13:50:38.151990 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-jsc5s" Dec 11 13:50:38 crc kubenswrapper[5050]: I1211 13:50:38.152557 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/redhat-operators-jsc5s" Dec 11 13:50:38 crc kubenswrapper[5050]: I1211 13:50:38.196904 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-jsc5s" Dec 11 13:50:38 crc kubenswrapper[5050]: I1211 13:50:38.432845 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-jsc5s" Dec 11 13:50:38 crc kubenswrapper[5050]: I1211 13:50:38.609100 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-brfnx" Dec 11 13:50:38 crc kubenswrapper[5050]: I1211 13:50:38.648316 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-brfnx" Dec 11 13:50:39 crc kubenswrapper[5050]: I1211 13:50:39.399088 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pvns6" event={"ID":"6f9d51ca-1ebf-4986-ba4b-08939d025cbd","Type":"ContainerStarted","Data":"3c6fed5218851381430b7af63dc6db763fa3f0fed877187a918be00508d2b02f"} Dec 11 13:50:39 crc kubenswrapper[5050]: I1211 13:50:39.615914 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-xnlm8"] Dec 11 13:50:39 crc kubenswrapper[5050]: E1211 13:50:39.616348 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f687791-2831-4665-bb32-ee48ab6e70be" containerName="kube-multus-additional-cni-plugins" Dec 11 13:50:39 crc kubenswrapper[5050]: I1211 13:50:39.616367 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f687791-2831-4665-bb32-ee48ab6e70be" containerName="kube-multus-additional-cni-plugins" Dec 11 13:50:39 crc kubenswrapper[5050]: E1211 13:50:39.616381 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9cf72ec8-510f-4f7a-95d7-180ac3d9fd20" containerName="pruner" Dec 11 13:50:39 crc kubenswrapper[5050]: I1211 13:50:39.616387 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="9cf72ec8-510f-4f7a-95d7-180ac3d9fd20" containerName="pruner" Dec 11 13:50:39 crc kubenswrapper[5050]: I1211 13:50:39.616517 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f687791-2831-4665-bb32-ee48ab6e70be" containerName="kube-multus-additional-cni-plugins" Dec 11 13:50:39 crc kubenswrapper[5050]: I1211 13:50:39.616533 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="9cf72ec8-510f-4f7a-95d7-180ac3d9fd20" containerName="pruner" Dec 11 13:50:39 crc kubenswrapper[5050]: I1211 13:50:39.617193 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-xnlm8" Dec 11 13:50:39 crc kubenswrapper[5050]: I1211 13:50:39.639089 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-xnlm8"] Dec 11 13:50:39 crc kubenswrapper[5050]: I1211 13:50:39.685470 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a036509b-fc6d-42a8-98df-36b63a49163a-trusted-ca\") pod \"image-registry-66df7c8f76-xnlm8\" (UID: \"a036509b-fc6d-42a8-98df-36b63a49163a\") " pod="openshift-image-registry/image-registry-66df7c8f76-xnlm8" Dec 11 13:50:39 crc kubenswrapper[5050]: I1211 13:50:39.685556 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/a036509b-fc6d-42a8-98df-36b63a49163a-registry-certificates\") pod \"image-registry-66df7c8f76-xnlm8\" (UID: \"a036509b-fc6d-42a8-98df-36b63a49163a\") " pod="openshift-image-registry/image-registry-66df7c8f76-xnlm8" Dec 11 13:50:39 crc kubenswrapper[5050]: I1211 13:50:39.685586 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vsxdw\" (UniqueName: \"kubernetes.io/projected/a036509b-fc6d-42a8-98df-36b63a49163a-kube-api-access-vsxdw\") pod \"image-registry-66df7c8f76-xnlm8\" (UID: \"a036509b-fc6d-42a8-98df-36b63a49163a\") " pod="openshift-image-registry/image-registry-66df7c8f76-xnlm8" Dec 11 13:50:39 crc kubenswrapper[5050]: I1211 13:50:39.685607 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/a036509b-fc6d-42a8-98df-36b63a49163a-ca-trust-extracted\") pod \"image-registry-66df7c8f76-xnlm8\" (UID: \"a036509b-fc6d-42a8-98df-36b63a49163a\") " pod="openshift-image-registry/image-registry-66df7c8f76-xnlm8" Dec 11 13:50:39 crc kubenswrapper[5050]: I1211 13:50:39.685645 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-xnlm8\" (UID: \"a036509b-fc6d-42a8-98df-36b63a49163a\") " pod="openshift-image-registry/image-registry-66df7c8f76-xnlm8" Dec 11 13:50:39 crc kubenswrapper[5050]: I1211 13:50:39.685677 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/a036509b-fc6d-42a8-98df-36b63a49163a-registry-tls\") pod \"image-registry-66df7c8f76-xnlm8\" (UID: \"a036509b-fc6d-42a8-98df-36b63a49163a\") " pod="openshift-image-registry/image-registry-66df7c8f76-xnlm8" Dec 11 13:50:39 crc kubenswrapper[5050]: I1211 13:50:39.685762 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a036509b-fc6d-42a8-98df-36b63a49163a-bound-sa-token\") pod \"image-registry-66df7c8f76-xnlm8\" (UID: \"a036509b-fc6d-42a8-98df-36b63a49163a\") " pod="openshift-image-registry/image-registry-66df7c8f76-xnlm8" Dec 11 13:50:39 crc kubenswrapper[5050]: I1211 13:50:39.685796 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: 
\"kubernetes.io/secret/a036509b-fc6d-42a8-98df-36b63a49163a-installation-pull-secrets\") pod \"image-registry-66df7c8f76-xnlm8\" (UID: \"a036509b-fc6d-42a8-98df-36b63a49163a\") " pod="openshift-image-registry/image-registry-66df7c8f76-xnlm8" Dec 11 13:50:39 crc kubenswrapper[5050]: I1211 13:50:39.730644 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-xnlm8\" (UID: \"a036509b-fc6d-42a8-98df-36b63a49163a\") " pod="openshift-image-registry/image-registry-66df7c8f76-xnlm8" Dec 11 13:50:39 crc kubenswrapper[5050]: I1211 13:50:39.787765 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a036509b-fc6d-42a8-98df-36b63a49163a-bound-sa-token\") pod \"image-registry-66df7c8f76-xnlm8\" (UID: \"a036509b-fc6d-42a8-98df-36b63a49163a\") " pod="openshift-image-registry/image-registry-66df7c8f76-xnlm8" Dec 11 13:50:39 crc kubenswrapper[5050]: I1211 13:50:39.787830 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/a036509b-fc6d-42a8-98df-36b63a49163a-installation-pull-secrets\") pod \"image-registry-66df7c8f76-xnlm8\" (UID: \"a036509b-fc6d-42a8-98df-36b63a49163a\") " pod="openshift-image-registry/image-registry-66df7c8f76-xnlm8" Dec 11 13:50:39 crc kubenswrapper[5050]: I1211 13:50:39.788864 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a036509b-fc6d-42a8-98df-36b63a49163a-trusted-ca\") pod \"image-registry-66df7c8f76-xnlm8\" (UID: \"a036509b-fc6d-42a8-98df-36b63a49163a\") " pod="openshift-image-registry/image-registry-66df7c8f76-xnlm8" Dec 11 13:50:39 crc kubenswrapper[5050]: I1211 13:50:39.788900 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/a036509b-fc6d-42a8-98df-36b63a49163a-registry-certificates\") pod \"image-registry-66df7c8f76-xnlm8\" (UID: \"a036509b-fc6d-42a8-98df-36b63a49163a\") " pod="openshift-image-registry/image-registry-66df7c8f76-xnlm8" Dec 11 13:50:39 crc kubenswrapper[5050]: I1211 13:50:39.788919 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vsxdw\" (UniqueName: \"kubernetes.io/projected/a036509b-fc6d-42a8-98df-36b63a49163a-kube-api-access-vsxdw\") pod \"image-registry-66df7c8f76-xnlm8\" (UID: \"a036509b-fc6d-42a8-98df-36b63a49163a\") " pod="openshift-image-registry/image-registry-66df7c8f76-xnlm8" Dec 11 13:50:39 crc kubenswrapper[5050]: I1211 13:50:39.788940 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/a036509b-fc6d-42a8-98df-36b63a49163a-ca-trust-extracted\") pod \"image-registry-66df7c8f76-xnlm8\" (UID: \"a036509b-fc6d-42a8-98df-36b63a49163a\") " pod="openshift-image-registry/image-registry-66df7c8f76-xnlm8" Dec 11 13:50:39 crc kubenswrapper[5050]: I1211 13:50:39.788966 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/a036509b-fc6d-42a8-98df-36b63a49163a-registry-tls\") pod \"image-registry-66df7c8f76-xnlm8\" (UID: \"a036509b-fc6d-42a8-98df-36b63a49163a\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-xnlm8" Dec 11 13:50:39 crc kubenswrapper[5050]: I1211 13:50:39.790728 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/a036509b-fc6d-42a8-98df-36b63a49163a-registry-certificates\") pod \"image-registry-66df7c8f76-xnlm8\" (UID: \"a036509b-fc6d-42a8-98df-36b63a49163a\") " pod="openshift-image-registry/image-registry-66df7c8f76-xnlm8" Dec 11 13:50:39 crc kubenswrapper[5050]: I1211 13:50:39.791583 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a036509b-fc6d-42a8-98df-36b63a49163a-trusted-ca\") pod \"image-registry-66df7c8f76-xnlm8\" (UID: \"a036509b-fc6d-42a8-98df-36b63a49163a\") " pod="openshift-image-registry/image-registry-66df7c8f76-xnlm8" Dec 11 13:50:39 crc kubenswrapper[5050]: I1211 13:50:39.792116 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/a036509b-fc6d-42a8-98df-36b63a49163a-ca-trust-extracted\") pod \"image-registry-66df7c8f76-xnlm8\" (UID: \"a036509b-fc6d-42a8-98df-36b63a49163a\") " pod="openshift-image-registry/image-registry-66df7c8f76-xnlm8" Dec 11 13:50:39 crc kubenswrapper[5050]: I1211 13:50:39.800613 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/a036509b-fc6d-42a8-98df-36b63a49163a-installation-pull-secrets\") pod \"image-registry-66df7c8f76-xnlm8\" (UID: \"a036509b-fc6d-42a8-98df-36b63a49163a\") " pod="openshift-image-registry/image-registry-66df7c8f76-xnlm8" Dec 11 13:50:39 crc kubenswrapper[5050]: I1211 13:50:39.801027 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/a036509b-fc6d-42a8-98df-36b63a49163a-registry-tls\") pod \"image-registry-66df7c8f76-xnlm8\" (UID: \"a036509b-fc6d-42a8-98df-36b63a49163a\") " pod="openshift-image-registry/image-registry-66df7c8f76-xnlm8" Dec 11 13:50:39 crc kubenswrapper[5050]: I1211 13:50:39.807284 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a036509b-fc6d-42a8-98df-36b63a49163a-bound-sa-token\") pod \"image-registry-66df7c8f76-xnlm8\" (UID: \"a036509b-fc6d-42a8-98df-36b63a49163a\") " pod="openshift-image-registry/image-registry-66df7c8f76-xnlm8" Dec 11 13:50:39 crc kubenswrapper[5050]: I1211 13:50:39.807364 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vsxdw\" (UniqueName: \"kubernetes.io/projected/a036509b-fc6d-42a8-98df-36b63a49163a-kube-api-access-vsxdw\") pod \"image-registry-66df7c8f76-xnlm8\" (UID: \"a036509b-fc6d-42a8-98df-36b63a49163a\") " pod="openshift-image-registry/image-registry-66df7c8f76-xnlm8" Dec 11 13:50:39 crc kubenswrapper[5050]: I1211 13:50:39.936150 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-xnlm8" Dec 11 13:50:40 crc kubenswrapper[5050]: I1211 13:50:40.505693 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-brfnx"] Dec 11 13:50:40 crc kubenswrapper[5050]: I1211 13:50:40.506061 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-brfnx" podUID="ddbc9fa4-63d4-48f6-b8e0-a2c36815399c" containerName="registry-server" containerID="cri-o://aa6b0c02ea4dd7bd70eca83559c47c4b398c88cb0070aca9a6a6b059dcb03dd8" gracePeriod=2 Dec 11 13:50:42 crc kubenswrapper[5050]: I1211 13:50:42.418675 5050 generic.go:334] "Generic (PLEG): container finished" podID="ddbc9fa4-63d4-48f6-b8e0-a2c36815399c" containerID="aa6b0c02ea4dd7bd70eca83559c47c4b398c88cb0070aca9a6a6b059dcb03dd8" exitCode=0 Dec 11 13:50:42 crc kubenswrapper[5050]: I1211 13:50:42.418772 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-brfnx" event={"ID":"ddbc9fa4-63d4-48f6-b8e0-a2c36815399c","Type":"ContainerDied","Data":"aa6b0c02ea4dd7bd70eca83559c47c4b398c88cb0070aca9a6a6b059dcb03dd8"} Dec 11 13:50:42 crc kubenswrapper[5050]: I1211 13:50:42.441611 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-pvns6" podStartSLOduration=7.9106802720000005 podStartE2EDuration="1m28.441592398s" podCreationTimestamp="2025-12-11 13:49:14 +0000 UTC" firstStartedPulling="2025-12-11 13:49:16.507227834 +0000 UTC m=+47.350950420" lastFinishedPulling="2025-12-11 13:50:37.03813996 +0000 UTC m=+127.881862546" observedRunningTime="2025-12-11 13:50:42.439147763 +0000 UTC m=+133.282870349" watchObservedRunningTime="2025-12-11 13:50:42.441592398 +0000 UTC m=+133.285314994" Dec 11 13:50:43 crc kubenswrapper[5050]: I1211 13:50:43.219655 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-brfnx" Dec 11 13:50:43 crc kubenswrapper[5050]: I1211 13:50:43.235846 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rwcmn\" (UniqueName: \"kubernetes.io/projected/ddbc9fa4-63d4-48f6-b8e0-a2c36815399c-kube-api-access-rwcmn\") pod \"ddbc9fa4-63d4-48f6-b8e0-a2c36815399c\" (UID: \"ddbc9fa4-63d4-48f6-b8e0-a2c36815399c\") " Dec 11 13:50:43 crc kubenswrapper[5050]: I1211 13:50:43.235965 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ddbc9fa4-63d4-48f6-b8e0-a2c36815399c-catalog-content\") pod \"ddbc9fa4-63d4-48f6-b8e0-a2c36815399c\" (UID: \"ddbc9fa4-63d4-48f6-b8e0-a2c36815399c\") " Dec 11 13:50:43 crc kubenswrapper[5050]: I1211 13:50:43.236000 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ddbc9fa4-63d4-48f6-b8e0-a2c36815399c-utilities\") pod \"ddbc9fa4-63d4-48f6-b8e0-a2c36815399c\" (UID: \"ddbc9fa4-63d4-48f6-b8e0-a2c36815399c\") " Dec 11 13:50:43 crc kubenswrapper[5050]: I1211 13:50:43.236961 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ddbc9fa4-63d4-48f6-b8e0-a2c36815399c-utilities" (OuterVolumeSpecName: "utilities") pod "ddbc9fa4-63d4-48f6-b8e0-a2c36815399c" (UID: "ddbc9fa4-63d4-48f6-b8e0-a2c36815399c"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 13:50:43 crc kubenswrapper[5050]: I1211 13:50:43.244485 5050 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ddbc9fa4-63d4-48f6-b8e0-a2c36815399c-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 13:50:43 crc kubenswrapper[5050]: I1211 13:50:43.248978 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ddbc9fa4-63d4-48f6-b8e0-a2c36815399c-kube-api-access-rwcmn" (OuterVolumeSpecName: "kube-api-access-rwcmn") pod "ddbc9fa4-63d4-48f6-b8e0-a2c36815399c" (UID: "ddbc9fa4-63d4-48f6-b8e0-a2c36815399c"). InnerVolumeSpecName "kube-api-access-rwcmn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 13:50:43 crc kubenswrapper[5050]: I1211 13:50:43.345973 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rwcmn\" (UniqueName: \"kubernetes.io/projected/ddbc9fa4-63d4-48f6-b8e0-a2c36815399c-kube-api-access-rwcmn\") on node \"crc\" DevicePath \"\"" Dec 11 13:50:43 crc kubenswrapper[5050]: I1211 13:50:43.366300 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ddbc9fa4-63d4-48f6-b8e0-a2c36815399c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ddbc9fa4-63d4-48f6-b8e0-a2c36815399c" (UID: "ddbc9fa4-63d4-48f6-b8e0-a2c36815399c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 13:50:43 crc kubenswrapper[5050]: I1211 13:50:43.431939 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-brfnx" event={"ID":"ddbc9fa4-63d4-48f6-b8e0-a2c36815399c","Type":"ContainerDied","Data":"edf7c8ef757f638c196d42e866daf80c659b883fa9673a9128fc31b4f3a6eaa6"} Dec 11 13:50:43 crc kubenswrapper[5050]: I1211 13:50:43.431990 5050 scope.go:117] "RemoveContainer" containerID="aa6b0c02ea4dd7bd70eca83559c47c4b398c88cb0070aca9a6a6b059dcb03dd8" Dec 11 13:50:43 crc kubenswrapper[5050]: I1211 13:50:43.432106 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-brfnx" Dec 11 13:50:43 crc kubenswrapper[5050]: I1211 13:50:43.447938 5050 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ddbc9fa4-63d4-48f6-b8e0-a2c36815399c-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 13:50:43 crc kubenswrapper[5050]: I1211 13:50:43.460902 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-brfnx"] Dec 11 13:50:43 crc kubenswrapper[5050]: I1211 13:50:43.463779 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-brfnx"] Dec 11 13:50:43 crc kubenswrapper[5050]: I1211 13:50:43.553072 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ddbc9fa4-63d4-48f6-b8e0-a2c36815399c" path="/var/lib/kubelet/pods/ddbc9fa4-63d4-48f6-b8e0-a2c36815399c/volumes" Dec 11 13:50:43 crc kubenswrapper[5050]: I1211 13:50:43.902411 5050 scope.go:117] "RemoveContainer" containerID="2c409605482c83ca511a448721060531eac0329b59f94fdbcccff56c48275b7c" Dec 11 13:50:45 crc kubenswrapper[5050]: I1211 13:50:45.169962 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-pvns6" Dec 11 13:50:45 crc kubenswrapper[5050]: I1211 13:50:45.170695 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-pvns6" Dec 11 13:50:45 crc kubenswrapper[5050]: I1211 13:50:45.216037 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-pvns6" Dec 11 13:50:45 crc kubenswrapper[5050]: I1211 13:50:45.396879 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-klgr9" Dec 11 13:50:45 crc kubenswrapper[5050]: I1211 13:50:45.481948 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-pvns6" Dec 11 13:50:46 crc kubenswrapper[5050]: I1211 13:50:46.493585 5050 scope.go:117] "RemoveContainer" containerID="4b9d620346737c5034b375bd4eb516a3bc029b7ff2674357a61ffc2fcd692c95" Dec 11 13:50:47 crc kubenswrapper[5050]: I1211 13:50:47.212055 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-l4w2d"] Dec 11 13:50:47 crc kubenswrapper[5050]: I1211 13:50:47.307470 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-klgr9"] Dec 11 13:50:47 crc kubenswrapper[5050]: I1211 13:50:47.308079 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-klgr9" podUID="5faa7088-04c1-4d75-abab-1e426f7cd032" containerName="registry-server" containerID="cri-o://c1d11feca1d0095daddd3422394680e19f240a7953b861267d74c918f945c142" gracePeriod=2 Dec 11 13:50:47 crc kubenswrapper[5050]: I1211 13:50:47.961417 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-xnlm8"] Dec 11 13:50:47 crc kubenswrapper[5050]: W1211 13:50:47.966230 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda036509b_fc6d_42a8_98df_36b63a49163a.slice/crio-67fb309fefcc23c4e8b3dfb6518c303ab492a2a18b38f3ff8747fe957165e176 WatchSource:0}: Error finding container 
67fb309fefcc23c4e8b3dfb6518c303ab492a2a18b38f3ff8747fe957165e176: Status 404 returned error can't find the container with id 67fb309fefcc23c4e8b3dfb6518c303ab492a2a18b38f3ff8747fe957165e176 Dec 11 13:50:48 crc kubenswrapper[5050]: I1211 13:50:48.468765 5050 generic.go:334] "Generic (PLEG): container finished" podID="5faa7088-04c1-4d75-abab-1e426f7cd032" containerID="c1d11feca1d0095daddd3422394680e19f240a7953b861267d74c918f945c142" exitCode=0 Dec 11 13:50:48 crc kubenswrapper[5050]: I1211 13:50:48.468856 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-klgr9" event={"ID":"5faa7088-04c1-4d75-abab-1e426f7cd032","Type":"ContainerDied","Data":"c1d11feca1d0095daddd3422394680e19f240a7953b861267d74c918f945c142"} Dec 11 13:50:48 crc kubenswrapper[5050]: I1211 13:50:48.471188 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-xnlm8" event={"ID":"a036509b-fc6d-42a8-98df-36b63a49163a","Type":"ContainerStarted","Data":"67fb309fefcc23c4e8b3dfb6518c303ab492a2a18b38f3ff8747fe957165e176"} Dec 11 13:50:48 crc kubenswrapper[5050]: I1211 13:50:48.564731 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bfjt2"] Dec 11 13:50:48 crc kubenswrapper[5050]: I1211 13:50:48.584661 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-pvns6"] Dec 11 13:50:48 crc kubenswrapper[5050]: I1211 13:50:48.585491 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-pvns6" podUID="6f9d51ca-1ebf-4986-ba4b-08939d025cbd" containerName="registry-server" containerID="cri-o://3c6fed5218851381430b7af63dc6db763fa3f0fed877187a918be00508d2b02f" gracePeriod=30 Dec 11 13:50:48 crc kubenswrapper[5050]: I1211 13:50:48.589747 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6n98v"] Dec 11 13:50:48 crc kubenswrapper[5050]: I1211 13:50:48.590049 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-6n98v" podUID="b2ee71a3-392e-442c-aa3b-bec310a86031" containerName="registry-server" containerID="cri-o://dfd20eb336193611fa0682d43fce71cb8854edd92d8af8903e3b99c890351872" gracePeriod=30 Dec 11 13:50:48 crc kubenswrapper[5050]: I1211 13:50:48.597430 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-rmlj6"] Dec 11 13:50:48 crc kubenswrapper[5050]: I1211 13:50:48.597694 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-rmlj6" podUID="18d0253b-d7da-4be9-8fc3-6911f1c92076" containerName="marketplace-operator" containerID="cri-o://1b79085b5d7c7cda571d08b9ebfbe11a57da21a8992138208b2e1f41be5d86d2" gracePeriod=30 Dec 11 13:50:48 crc kubenswrapper[5050]: I1211 13:50:48.603555 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-c2qj7"] Dec 11 13:50:48 crc kubenswrapper[5050]: I1211 13:50:48.603809 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-c2qj7" podUID="ed489b52-31c7-44c8-b634-4a99e1644f65" containerName="registry-server" containerID="cri-o://dc830007bd91bbaf0d691bea30b8eccc2bcfd27e6c94b428b50f9493d2ea6b93" gracePeriod=30 Dec 11 13:50:48 crc kubenswrapper[5050]: I1211 13:50:48.615686 5050 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qg6rf"] Dec 11 13:50:48 crc kubenswrapper[5050]: I1211 13:50:48.620598 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jsc5s"] Dec 11 13:50:48 crc kubenswrapper[5050]: I1211 13:50:48.620928 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-jsc5s" podUID="680b10f8-ff96-43e4-b2e2-22ee05c6d815" containerName="registry-server" containerID="cri-o://e284d7968a917c91aed620df3cec36c9db27f417a664bc53be9523947ed1bef2" gracePeriod=30 Dec 11 13:50:48 crc kubenswrapper[5050]: I1211 13:50:48.654763 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-cgxqx"] Dec 11 13:50:48 crc kubenswrapper[5050]: E1211 13:50:48.654983 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ddbc9fa4-63d4-48f6-b8e0-a2c36815399c" containerName="extract-content" Dec 11 13:50:48 crc kubenswrapper[5050]: I1211 13:50:48.655001 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="ddbc9fa4-63d4-48f6-b8e0-a2c36815399c" containerName="extract-content" Dec 11 13:50:48 crc kubenswrapper[5050]: E1211 13:50:48.655127 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ddbc9fa4-63d4-48f6-b8e0-a2c36815399c" containerName="extract-utilities" Dec 11 13:50:48 crc kubenswrapper[5050]: I1211 13:50:48.655134 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="ddbc9fa4-63d4-48f6-b8e0-a2c36815399c" containerName="extract-utilities" Dec 11 13:50:48 crc kubenswrapper[5050]: E1211 13:50:48.655166 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ddbc9fa4-63d4-48f6-b8e0-a2c36815399c" containerName="registry-server" Dec 11 13:50:48 crc kubenswrapper[5050]: I1211 13:50:48.655173 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="ddbc9fa4-63d4-48f6-b8e0-a2c36815399c" containerName="registry-server" Dec 11 13:50:48 crc kubenswrapper[5050]: I1211 13:50:48.655274 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="ddbc9fa4-63d4-48f6-b8e0-a2c36815399c" containerName="registry-server" Dec 11 13:50:48 crc kubenswrapper[5050]: I1211 13:50:48.655685 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-cgxqx" Dec 11 13:50:48 crc kubenswrapper[5050]: I1211 13:50:48.670965 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-cgxqx"] Dec 11 13:50:48 crc kubenswrapper[5050]: I1211 13:50:48.740991 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5bgjs\" (UniqueName: \"kubernetes.io/projected/fd564500-1ab5-401f-84a8-79c80dfe50ab-kube-api-access-5bgjs\") pod \"marketplace-operator-79b997595-cgxqx\" (UID: \"fd564500-1ab5-401f-84a8-79c80dfe50ab\") " pod="openshift-marketplace/marketplace-operator-79b997595-cgxqx" Dec 11 13:50:48 crc kubenswrapper[5050]: I1211 13:50:48.741052 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/fd564500-1ab5-401f-84a8-79c80dfe50ab-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-cgxqx\" (UID: \"fd564500-1ab5-401f-84a8-79c80dfe50ab\") " pod="openshift-marketplace/marketplace-operator-79b997595-cgxqx" Dec 11 13:50:48 crc kubenswrapper[5050]: I1211 13:50:48.741089 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fd564500-1ab5-401f-84a8-79c80dfe50ab-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-cgxqx\" (UID: \"fd564500-1ab5-401f-84a8-79c80dfe50ab\") " pod="openshift-marketplace/marketplace-operator-79b997595-cgxqx" Dec 11 13:50:48 crc kubenswrapper[5050]: I1211 13:50:48.842291 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5bgjs\" (UniqueName: \"kubernetes.io/projected/fd564500-1ab5-401f-84a8-79c80dfe50ab-kube-api-access-5bgjs\") pod \"marketplace-operator-79b997595-cgxqx\" (UID: \"fd564500-1ab5-401f-84a8-79c80dfe50ab\") " pod="openshift-marketplace/marketplace-operator-79b997595-cgxqx" Dec 11 13:50:48 crc kubenswrapper[5050]: I1211 13:50:48.842341 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/fd564500-1ab5-401f-84a8-79c80dfe50ab-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-cgxqx\" (UID: \"fd564500-1ab5-401f-84a8-79c80dfe50ab\") " pod="openshift-marketplace/marketplace-operator-79b997595-cgxqx" Dec 11 13:50:48 crc kubenswrapper[5050]: I1211 13:50:48.842388 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fd564500-1ab5-401f-84a8-79c80dfe50ab-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-cgxqx\" (UID: \"fd564500-1ab5-401f-84a8-79c80dfe50ab\") " pod="openshift-marketplace/marketplace-operator-79b997595-cgxqx" Dec 11 13:50:48 crc kubenswrapper[5050]: I1211 13:50:48.843874 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fd564500-1ab5-401f-84a8-79c80dfe50ab-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-cgxqx\" (UID: \"fd564500-1ab5-401f-84a8-79c80dfe50ab\") " pod="openshift-marketplace/marketplace-operator-79b997595-cgxqx" Dec 11 13:50:48 crc kubenswrapper[5050]: I1211 13:50:48.848925 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: 
\"kubernetes.io/secret/fd564500-1ab5-401f-84a8-79c80dfe50ab-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-cgxqx\" (UID: \"fd564500-1ab5-401f-84a8-79c80dfe50ab\") " pod="openshift-marketplace/marketplace-operator-79b997595-cgxqx" Dec 11 13:50:48 crc kubenswrapper[5050]: I1211 13:50:48.858236 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5bgjs\" (UniqueName: \"kubernetes.io/projected/fd564500-1ab5-401f-84a8-79c80dfe50ab-kube-api-access-5bgjs\") pod \"marketplace-operator-79b997595-cgxqx\" (UID: \"fd564500-1ab5-401f-84a8-79c80dfe50ab\") " pod="openshift-marketplace/marketplace-operator-79b997595-cgxqx" Dec 11 13:50:48 crc kubenswrapper[5050]: I1211 13:50:48.896765 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-klgr9" Dec 11 13:50:48 crc kubenswrapper[5050]: I1211 13:50:48.944380 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5faa7088-04c1-4d75-abab-1e426f7cd032-utilities\") pod \"5faa7088-04c1-4d75-abab-1e426f7cd032\" (UID: \"5faa7088-04c1-4d75-abab-1e426f7cd032\") " Dec 11 13:50:48 crc kubenswrapper[5050]: I1211 13:50:48.944455 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5faa7088-04c1-4d75-abab-1e426f7cd032-catalog-content\") pod \"5faa7088-04c1-4d75-abab-1e426f7cd032\" (UID: \"5faa7088-04c1-4d75-abab-1e426f7cd032\") " Dec 11 13:50:48 crc kubenswrapper[5050]: I1211 13:50:48.944483 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-967wt\" (UniqueName: \"kubernetes.io/projected/5faa7088-04c1-4d75-abab-1e426f7cd032-kube-api-access-967wt\") pod \"5faa7088-04c1-4d75-abab-1e426f7cd032\" (UID: \"5faa7088-04c1-4d75-abab-1e426f7cd032\") " Dec 11 13:50:48 crc kubenswrapper[5050]: I1211 13:50:48.945383 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5faa7088-04c1-4d75-abab-1e426f7cd032-utilities" (OuterVolumeSpecName: "utilities") pod "5faa7088-04c1-4d75-abab-1e426f7cd032" (UID: "5faa7088-04c1-4d75-abab-1e426f7cd032"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 13:50:48 crc kubenswrapper[5050]: I1211 13:50:48.947353 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5faa7088-04c1-4d75-abab-1e426f7cd032-kube-api-access-967wt" (OuterVolumeSpecName: "kube-api-access-967wt") pod "5faa7088-04c1-4d75-abab-1e426f7cd032" (UID: "5faa7088-04c1-4d75-abab-1e426f7cd032"). InnerVolumeSpecName "kube-api-access-967wt". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 13:50:48 crc kubenswrapper[5050]: I1211 13:50:48.980629 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-cgxqx" Dec 11 13:50:48 crc kubenswrapper[5050]: I1211 13:50:48.995396 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5faa7088-04c1-4d75-abab-1e426f7cd032-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5faa7088-04c1-4d75-abab-1e426f7cd032" (UID: "5faa7088-04c1-4d75-abab-1e426f7cd032"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 13:50:49 crc kubenswrapper[5050]: I1211 13:50:49.046282 5050 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5faa7088-04c1-4d75-abab-1e426f7cd032-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 13:50:49 crc kubenswrapper[5050]: I1211 13:50:49.046344 5050 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5faa7088-04c1-4d75-abab-1e426f7cd032-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 13:50:49 crc kubenswrapper[5050]: I1211 13:50:49.046358 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-967wt\" (UniqueName: \"kubernetes.io/projected/5faa7088-04c1-4d75-abab-1e426f7cd032-kube-api-access-967wt\") on node \"crc\" DevicePath \"\"" Dec 11 13:50:49 crc kubenswrapper[5050]: I1211 13:50:49.192923 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-cgxqx"] Dec 11 13:50:49 crc kubenswrapper[5050]: W1211 13:50:49.217766 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfd564500_1ab5_401f_84a8_79c80dfe50ab.slice/crio-65b61b6f105e2e64dd3b5cc541558bdafc230df61ca561a6c65ec440170f31a9 WatchSource:0}: Error finding container 65b61b6f105e2e64dd3b5cc541558bdafc230df61ca561a6c65ec440170f31a9: Status 404 returned error can't find the container with id 65b61b6f105e2e64dd3b5cc541558bdafc230df61ca561a6c65ec440170f31a9 Dec 11 13:50:49 crc kubenswrapper[5050]: I1211 13:50:49.481198 5050 generic.go:334] "Generic (PLEG): container finished" podID="6f9d51ca-1ebf-4986-ba4b-08939d025cbd" containerID="3c6fed5218851381430b7af63dc6db763fa3f0fed877187a918be00508d2b02f" exitCode=0 Dec 11 13:50:49 crc kubenswrapper[5050]: I1211 13:50:49.481288 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pvns6" event={"ID":"6f9d51ca-1ebf-4986-ba4b-08939d025cbd","Type":"ContainerDied","Data":"3c6fed5218851381430b7af63dc6db763fa3f0fed877187a918be00508d2b02f"} Dec 11 13:50:49 crc kubenswrapper[5050]: I1211 13:50:49.484294 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-klgr9" event={"ID":"5faa7088-04c1-4d75-abab-1e426f7cd032","Type":"ContainerDied","Data":"64e34c8b0d592d6d8c407ab5a282525681ef1f3e756d7e0b129a53c04c7ec4d9"} Dec 11 13:50:49 crc kubenswrapper[5050]: I1211 13:50:49.484389 5050 scope.go:117] "RemoveContainer" containerID="c1d11feca1d0095daddd3422394680e19f240a7953b861267d74c918f945c142" Dec 11 13:50:49 crc kubenswrapper[5050]: I1211 13:50:49.484325 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-klgr9" Dec 11 13:50:49 crc kubenswrapper[5050]: I1211 13:50:49.486358 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qg6rf" event={"ID":"4feb3774-888a-4be4-b47a-b929ec6e98dc","Type":"ContainerStarted","Data":"6e859335514bf6abe68d3832dd99b4d01e6a93c97a5650896c28944e07889652"} Dec 11 13:50:49 crc kubenswrapper[5050]: I1211 13:50:49.486478 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-qg6rf" podUID="4feb3774-888a-4be4-b47a-b929ec6e98dc" containerName="registry-server" containerID="cri-o://6e859335514bf6abe68d3832dd99b4d01e6a93c97a5650896c28944e07889652" gracePeriod=30 Dec 11 13:50:49 crc kubenswrapper[5050]: I1211 13:50:49.490692 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-xnlm8" event={"ID":"a036509b-fc6d-42a8-98df-36b63a49163a","Type":"ContainerStarted","Data":"f31ccb7da4a5e4db5069360ca094f74f419cf59152581a92c8acae85394cd5be"} Dec 11 13:50:49 crc kubenswrapper[5050]: I1211 13:50:49.490847 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-xnlm8" Dec 11 13:50:49 crc kubenswrapper[5050]: I1211 13:50:49.493333 5050 generic.go:334] "Generic (PLEG): container finished" podID="345ac559-35e7-487b-859a-e583a6e88c6c" containerID="cebace4781b247657a72e3ee17203e22a4044f9a2514ffbc0bd35eba65363cce" exitCode=0 Dec 11 13:50:49 crc kubenswrapper[5050]: I1211 13:50:49.493451 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bfjt2" event={"ID":"345ac559-35e7-487b-859a-e583a6e88c6c","Type":"ContainerDied","Data":"cebace4781b247657a72e3ee17203e22a4044f9a2514ffbc0bd35eba65363cce"} Dec 11 13:50:49 crc kubenswrapper[5050]: I1211 13:50:49.494719 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-cgxqx" event={"ID":"fd564500-1ab5-401f-84a8-79c80dfe50ab","Type":"ContainerStarted","Data":"65b61b6f105e2e64dd3b5cc541558bdafc230df61ca561a6c65ec440170f31a9"} Dec 11 13:50:49 crc kubenswrapper[5050]: I1211 13:50:49.497143 5050 generic.go:334] "Generic (PLEG): container finished" podID="18d0253b-d7da-4be9-8fc3-6911f1c92076" containerID="1b79085b5d7c7cda571d08b9ebfbe11a57da21a8992138208b2e1f41be5d86d2" exitCode=0 Dec 11 13:50:49 crc kubenswrapper[5050]: I1211 13:50:49.497171 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-rmlj6" event={"ID":"18d0253b-d7da-4be9-8fc3-6911f1c92076","Type":"ContainerDied","Data":"1b79085b5d7c7cda571d08b9ebfbe11a57da21a8992138208b2e1f41be5d86d2"} Dec 11 13:50:49 crc kubenswrapper[5050]: I1211 13:50:49.510769 5050 scope.go:117] "RemoveContainer" containerID="6c88c33d12545816768a18a1f15c48e0f04a99bed5dc89024e8ea3a8a816356d" Dec 11 13:50:49 crc kubenswrapper[5050]: I1211 13:50:49.523579 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-qg6rf" podStartSLOduration=4.61768881 podStartE2EDuration="1m33.523564098s" podCreationTimestamp="2025-12-11 13:49:16 +0000 UTC" firstStartedPulling="2025-12-11 13:49:18.68031692 +0000 UTC m=+49.524039496" lastFinishedPulling="2025-12-11 13:50:47.586192198 +0000 UTC m=+138.429914784" observedRunningTime="2025-12-11 13:50:49.521583395 +0000 UTC m=+140.365305981" 
watchObservedRunningTime="2025-12-11 13:50:49.523564098 +0000 UTC m=+140.367286674" Dec 11 13:50:49 crc kubenswrapper[5050]: I1211 13:50:49.560162 5050 scope.go:117] "RemoveContainer" containerID="be4b0cf2dadf9624690883fa000c62e0340c3aa5e3610baa84f61b2c83b8a5e8" Dec 11 13:50:49 crc kubenswrapper[5050]: I1211 13:50:49.576383 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-xnlm8" podStartSLOduration=10.576365529 podStartE2EDuration="10.576365529s" podCreationTimestamp="2025-12-11 13:50:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 13:50:49.572983418 +0000 UTC m=+140.416706024" watchObservedRunningTime="2025-12-11 13:50:49.576365529 +0000 UTC m=+140.420088115" Dec 11 13:50:49 crc kubenswrapper[5050]: I1211 13:50:49.601639 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-klgr9"] Dec 11 13:50:49 crc kubenswrapper[5050]: I1211 13:50:49.605996 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-klgr9"] Dec 11 13:50:49 crc kubenswrapper[5050]: I1211 13:50:49.823999 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bfjt2" Dec 11 13:50:49 crc kubenswrapper[5050]: I1211 13:50:49.856146 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/345ac559-35e7-487b-859a-e583a6e88c6c-catalog-content\") pod \"345ac559-35e7-487b-859a-e583a6e88c6c\" (UID: \"345ac559-35e7-487b-859a-e583a6e88c6c\") " Dec 11 13:50:49 crc kubenswrapper[5050]: I1211 13:50:49.856220 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/345ac559-35e7-487b-859a-e583a6e88c6c-utilities\") pod \"345ac559-35e7-487b-859a-e583a6e88c6c\" (UID: \"345ac559-35e7-487b-859a-e583a6e88c6c\") " Dec 11 13:50:49 crc kubenswrapper[5050]: I1211 13:50:49.856273 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7dv7p\" (UniqueName: \"kubernetes.io/projected/345ac559-35e7-487b-859a-e583a6e88c6c-kube-api-access-7dv7p\") pod \"345ac559-35e7-487b-859a-e583a6e88c6c\" (UID: \"345ac559-35e7-487b-859a-e583a6e88c6c\") " Dec 11 13:50:49 crc kubenswrapper[5050]: I1211 13:50:49.857716 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/345ac559-35e7-487b-859a-e583a6e88c6c-utilities" (OuterVolumeSpecName: "utilities") pod "345ac559-35e7-487b-859a-e583a6e88c6c" (UID: "345ac559-35e7-487b-859a-e583a6e88c6c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 13:50:49 crc kubenswrapper[5050]: I1211 13:50:49.869976 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/345ac559-35e7-487b-859a-e583a6e88c6c-kube-api-access-7dv7p" (OuterVolumeSpecName: "kube-api-access-7dv7p") pod "345ac559-35e7-487b-859a-e583a6e88c6c" (UID: "345ac559-35e7-487b-859a-e583a6e88c6c"). InnerVolumeSpecName "kube-api-access-7dv7p". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 13:50:49 crc kubenswrapper[5050]: I1211 13:50:49.914512 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/345ac559-35e7-487b-859a-e583a6e88c6c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "345ac559-35e7-487b-859a-e583a6e88c6c" (UID: "345ac559-35e7-487b-859a-e583a6e88c6c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 13:50:49 crc kubenswrapper[5050]: I1211 13:50:49.958124 5050 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/345ac559-35e7-487b-859a-e583a6e88c6c-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 13:50:49 crc kubenswrapper[5050]: I1211 13:50:49.958170 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7dv7p\" (UniqueName: \"kubernetes.io/projected/345ac559-35e7-487b-859a-e583a6e88c6c-kube-api-access-7dv7p\") on node \"crc\" DevicePath \"\"" Dec 11 13:50:49 crc kubenswrapper[5050]: I1211 13:50:49.958180 5050 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/345ac559-35e7-487b-859a-e583a6e88c6c-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 13:50:50 crc kubenswrapper[5050]: I1211 13:50:50.506433 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bfjt2" event={"ID":"345ac559-35e7-487b-859a-e583a6e88c6c","Type":"ContainerDied","Data":"280d9c487b1a04a11e7f0306857366d64611b16580c0f8711a327b0875c6f2dc"} Dec 11 13:50:50 crc kubenswrapper[5050]: I1211 13:50:50.506806 5050 scope.go:117] "RemoveContainer" containerID="cebace4781b247657a72e3ee17203e22a4044f9a2514ffbc0bd35eba65363cce" Dec 11 13:50:50 crc kubenswrapper[5050]: I1211 13:50:50.506468 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bfjt2" Dec 11 13:50:50 crc kubenswrapper[5050]: I1211 13:50:50.515138 5050 generic.go:334] "Generic (PLEG): container finished" podID="680b10f8-ff96-43e4-b2e2-22ee05c6d815" containerID="e284d7968a917c91aed620df3cec36c9db27f417a664bc53be9523947ed1bef2" exitCode=0 Dec 11 13:50:50 crc kubenswrapper[5050]: I1211 13:50:50.515238 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jsc5s" event={"ID":"680b10f8-ff96-43e4-b2e2-22ee05c6d815","Type":"ContainerDied","Data":"e284d7968a917c91aed620df3cec36c9db27f417a664bc53be9523947ed1bef2"} Dec 11 13:50:50 crc kubenswrapper[5050]: I1211 13:50:50.565467 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bfjt2"] Dec 11 13:50:50 crc kubenswrapper[5050]: I1211 13:50:50.568029 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-bfjt2"] Dec 11 13:50:50 crc kubenswrapper[5050]: I1211 13:50:50.693524 5050 scope.go:117] "RemoveContainer" containerID="a1b1333ae6f444e6041eb7cf2ec3fea0b56de643dae8170a2360f333b629fe15" Dec 11 13:50:51 crc kubenswrapper[5050]: I1211 13:50:51.098288 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pvns6" Dec 11 13:50:51 crc kubenswrapper[5050]: I1211 13:50:51.101361 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-rmlj6" Dec 11 13:50:51 crc kubenswrapper[5050]: I1211 13:50:51.108276 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jsc5s" Dec 11 13:50:51 crc kubenswrapper[5050]: I1211 13:50:51.179400 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/680b10f8-ff96-43e4-b2e2-22ee05c6d815-utilities\") pod \"680b10f8-ff96-43e4-b2e2-22ee05c6d815\" (UID: \"680b10f8-ff96-43e4-b2e2-22ee05c6d815\") " Dec 11 13:50:51 crc kubenswrapper[5050]: I1211 13:50:51.179725 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/680b10f8-ff96-43e4-b2e2-22ee05c6d815-catalog-content\") pod \"680b10f8-ff96-43e4-b2e2-22ee05c6d815\" (UID: \"680b10f8-ff96-43e4-b2e2-22ee05c6d815\") " Dec 11 13:50:51 crc kubenswrapper[5050]: I1211 13:50:51.179855 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8jzz5\" (UniqueName: \"kubernetes.io/projected/6f9d51ca-1ebf-4986-ba4b-08939d025cbd-kube-api-access-8jzz5\") pod \"6f9d51ca-1ebf-4986-ba4b-08939d025cbd\" (UID: \"6f9d51ca-1ebf-4986-ba4b-08939d025cbd\") " Dec 11 13:50:51 crc kubenswrapper[5050]: I1211 13:50:51.179962 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/18d0253b-d7da-4be9-8fc3-6911f1c92076-marketplace-operator-metrics\") pod \"18d0253b-d7da-4be9-8fc3-6911f1c92076\" (UID: \"18d0253b-d7da-4be9-8fc3-6911f1c92076\") " Dec 11 13:50:51 crc kubenswrapper[5050]: I1211 13:50:51.180087 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xc6lq\" (UniqueName: \"kubernetes.io/projected/680b10f8-ff96-43e4-b2e2-22ee05c6d815-kube-api-access-xc6lq\") pod \"680b10f8-ff96-43e4-b2e2-22ee05c6d815\" (UID: \"680b10f8-ff96-43e4-b2e2-22ee05c6d815\") " Dec 11 13:50:51 crc kubenswrapper[5050]: I1211 13:50:51.180192 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f9d51ca-1ebf-4986-ba4b-08939d025cbd-catalog-content\") pod \"6f9d51ca-1ebf-4986-ba4b-08939d025cbd\" (UID: \"6f9d51ca-1ebf-4986-ba4b-08939d025cbd\") " Dec 11 13:50:51 crc kubenswrapper[5050]: I1211 13:50:51.180336 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pksc7\" (UniqueName: \"kubernetes.io/projected/18d0253b-d7da-4be9-8fc3-6911f1c92076-kube-api-access-pksc7\") pod \"18d0253b-d7da-4be9-8fc3-6911f1c92076\" (UID: \"18d0253b-d7da-4be9-8fc3-6911f1c92076\") " Dec 11 13:50:51 crc kubenswrapper[5050]: I1211 13:50:51.180457 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/18d0253b-d7da-4be9-8fc3-6911f1c92076-marketplace-trusted-ca\") pod \"18d0253b-d7da-4be9-8fc3-6911f1c92076\" (UID: \"18d0253b-d7da-4be9-8fc3-6911f1c92076\") " Dec 11 13:50:51 crc kubenswrapper[5050]: I1211 13:50:51.180557 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f9d51ca-1ebf-4986-ba4b-08939d025cbd-utilities\") pod \"6f9d51ca-1ebf-4986-ba4b-08939d025cbd\" (UID: \"6f9d51ca-1ebf-4986-ba4b-08939d025cbd\") " Dec 11 13:50:51 
crc kubenswrapper[5050]: I1211 13:50:51.180348 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/680b10f8-ff96-43e4-b2e2-22ee05c6d815-utilities" (OuterVolumeSpecName: "utilities") pod "680b10f8-ff96-43e4-b2e2-22ee05c6d815" (UID: "680b10f8-ff96-43e4-b2e2-22ee05c6d815"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 13:50:51 crc kubenswrapper[5050]: I1211 13:50:51.181033 5050 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/680b10f8-ff96-43e4-b2e2-22ee05c6d815-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 13:50:51 crc kubenswrapper[5050]: I1211 13:50:51.181245 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18d0253b-d7da-4be9-8fc3-6911f1c92076-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "18d0253b-d7da-4be9-8fc3-6911f1c92076" (UID: "18d0253b-d7da-4be9-8fc3-6911f1c92076"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 13:50:51 crc kubenswrapper[5050]: I1211 13:50:51.181450 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6f9d51ca-1ebf-4986-ba4b-08939d025cbd-utilities" (OuterVolumeSpecName: "utilities") pod "6f9d51ca-1ebf-4986-ba4b-08939d025cbd" (UID: "6f9d51ca-1ebf-4986-ba4b-08939d025cbd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 13:50:51 crc kubenswrapper[5050]: I1211 13:50:51.186248 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18d0253b-d7da-4be9-8fc3-6911f1c92076-kube-api-access-pksc7" (OuterVolumeSpecName: "kube-api-access-pksc7") pod "18d0253b-d7da-4be9-8fc3-6911f1c92076" (UID: "18d0253b-d7da-4be9-8fc3-6911f1c92076"). InnerVolumeSpecName "kube-api-access-pksc7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 13:50:51 crc kubenswrapper[5050]: I1211 13:50:51.186352 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/680b10f8-ff96-43e4-b2e2-22ee05c6d815-kube-api-access-xc6lq" (OuterVolumeSpecName: "kube-api-access-xc6lq") pod "680b10f8-ff96-43e4-b2e2-22ee05c6d815" (UID: "680b10f8-ff96-43e4-b2e2-22ee05c6d815"). InnerVolumeSpecName "kube-api-access-xc6lq". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 13:50:51 crc kubenswrapper[5050]: I1211 13:50:51.187103 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18d0253b-d7da-4be9-8fc3-6911f1c92076-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "18d0253b-d7da-4be9-8fc3-6911f1c92076" (UID: "18d0253b-d7da-4be9-8fc3-6911f1c92076"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 13:50:51 crc kubenswrapper[5050]: I1211 13:50:51.191663 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f9d51ca-1ebf-4986-ba4b-08939d025cbd-kube-api-access-8jzz5" (OuterVolumeSpecName: "kube-api-access-8jzz5") pod "6f9d51ca-1ebf-4986-ba4b-08939d025cbd" (UID: "6f9d51ca-1ebf-4986-ba4b-08939d025cbd"). InnerVolumeSpecName "kube-api-access-8jzz5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 13:50:51 crc kubenswrapper[5050]: I1211 13:50:51.273607 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6f9d51ca-1ebf-4986-ba4b-08939d025cbd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6f9d51ca-1ebf-4986-ba4b-08939d025cbd" (UID: "6f9d51ca-1ebf-4986-ba4b-08939d025cbd"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 13:50:51 crc kubenswrapper[5050]: I1211 13:50:51.282395 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pksc7\" (UniqueName: \"kubernetes.io/projected/18d0253b-d7da-4be9-8fc3-6911f1c92076-kube-api-access-pksc7\") on node \"crc\" DevicePath \"\"" Dec 11 13:50:51 crc kubenswrapper[5050]: I1211 13:50:51.282766 5050 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/18d0253b-d7da-4be9-8fc3-6911f1c92076-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 11 13:50:51 crc kubenswrapper[5050]: I1211 13:50:51.282779 5050 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6f9d51ca-1ebf-4986-ba4b-08939d025cbd-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 13:50:51 crc kubenswrapper[5050]: I1211 13:50:51.282794 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8jzz5\" (UniqueName: \"kubernetes.io/projected/6f9d51ca-1ebf-4986-ba4b-08939d025cbd-kube-api-access-8jzz5\") on node \"crc\" DevicePath \"\"" Dec 11 13:50:51 crc kubenswrapper[5050]: I1211 13:50:51.282807 5050 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/18d0253b-d7da-4be9-8fc3-6911f1c92076-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Dec 11 13:50:51 crc kubenswrapper[5050]: I1211 13:50:51.282819 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xc6lq\" (UniqueName: \"kubernetes.io/projected/680b10f8-ff96-43e4-b2e2-22ee05c6d815-kube-api-access-xc6lq\") on node \"crc\" DevicePath \"\"" Dec 11 13:50:51 crc kubenswrapper[5050]: I1211 13:50:51.282831 5050 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6f9d51ca-1ebf-4986-ba4b-08939d025cbd-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 13:50:51 crc kubenswrapper[5050]: I1211 13:50:51.291081 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-qg6rf_4feb3774-888a-4be4-b47a-b929ec6e98dc/registry-server/0.log" Dec 11 13:50:51 crc kubenswrapper[5050]: I1211 13:50:51.291888 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qg6rf" Dec 11 13:50:51 crc kubenswrapper[5050]: I1211 13:50:51.344394 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/680b10f8-ff96-43e4-b2e2-22ee05c6d815-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "680b10f8-ff96-43e4-b2e2-22ee05c6d815" (UID: "680b10f8-ff96-43e4-b2e2-22ee05c6d815"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 13:50:51 crc kubenswrapper[5050]: I1211 13:50:51.383288 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4feb3774-888a-4be4-b47a-b929ec6e98dc-utilities\") pod \"4feb3774-888a-4be4-b47a-b929ec6e98dc\" (UID: \"4feb3774-888a-4be4-b47a-b929ec6e98dc\") " Dec 11 13:50:51 crc kubenswrapper[5050]: I1211 13:50:51.383431 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kjhqt\" (UniqueName: \"kubernetes.io/projected/4feb3774-888a-4be4-b47a-b929ec6e98dc-kube-api-access-kjhqt\") pod \"4feb3774-888a-4be4-b47a-b929ec6e98dc\" (UID: \"4feb3774-888a-4be4-b47a-b929ec6e98dc\") " Dec 11 13:50:51 crc kubenswrapper[5050]: I1211 13:50:51.383571 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4feb3774-888a-4be4-b47a-b929ec6e98dc-catalog-content\") pod \"4feb3774-888a-4be4-b47a-b929ec6e98dc\" (UID: \"4feb3774-888a-4be4-b47a-b929ec6e98dc\") " Dec 11 13:50:51 crc kubenswrapper[5050]: I1211 13:50:51.383846 5050 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/680b10f8-ff96-43e4-b2e2-22ee05c6d815-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 13:50:51 crc kubenswrapper[5050]: I1211 13:50:51.385059 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4feb3774-888a-4be4-b47a-b929ec6e98dc-utilities" (OuterVolumeSpecName: "utilities") pod "4feb3774-888a-4be4-b47a-b929ec6e98dc" (UID: "4feb3774-888a-4be4-b47a-b929ec6e98dc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 13:50:51 crc kubenswrapper[5050]: I1211 13:50:51.389879 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4feb3774-888a-4be4-b47a-b929ec6e98dc-kube-api-access-kjhqt" (OuterVolumeSpecName: "kube-api-access-kjhqt") pod "4feb3774-888a-4be4-b47a-b929ec6e98dc" (UID: "4feb3774-888a-4be4-b47a-b929ec6e98dc"). InnerVolumeSpecName "kube-api-access-kjhqt". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 13:50:51 crc kubenswrapper[5050]: I1211 13:50:51.405249 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4feb3774-888a-4be4-b47a-b929ec6e98dc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4feb3774-888a-4be4-b47a-b929ec6e98dc" (UID: "4feb3774-888a-4be4-b47a-b929ec6e98dc"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 13:50:51 crc kubenswrapper[5050]: I1211 13:50:51.485657 5050 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4feb3774-888a-4be4-b47a-b929ec6e98dc-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 13:50:51 crc kubenswrapper[5050]: I1211 13:50:51.485697 5050 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4feb3774-888a-4be4-b47a-b929ec6e98dc-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 13:50:51 crc kubenswrapper[5050]: I1211 13:50:51.485714 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kjhqt\" (UniqueName: \"kubernetes.io/projected/4feb3774-888a-4be4-b47a-b929ec6e98dc-kube-api-access-kjhqt\") on node \"crc\" DevicePath \"\"" Dec 11 13:50:51 crc kubenswrapper[5050]: I1211 13:50:51.529296 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-rmlj6" Dec 11 13:50:51 crc kubenswrapper[5050]: I1211 13:50:51.529332 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-rmlj6" event={"ID":"18d0253b-d7da-4be9-8fc3-6911f1c92076","Type":"ContainerDied","Data":"e382accad787c3c82b21882c1f9b59f994472f44e849045646ec38f8d589ec58"} Dec 11 13:50:51 crc kubenswrapper[5050]: I1211 13:50:51.529418 5050 scope.go:117] "RemoveContainer" containerID="1b79085b5d7c7cda571d08b9ebfbe11a57da21a8992138208b2e1f41be5d86d2" Dec 11 13:50:51 crc kubenswrapper[5050]: I1211 13:50:51.534959 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pvns6" event={"ID":"6f9d51ca-1ebf-4986-ba4b-08939d025cbd","Type":"ContainerDied","Data":"3ab2d98cd21468a2a7a6c4906995e53767611bcced38c6daf72e1cc89655c6b9"} Dec 11 13:50:51 crc kubenswrapper[5050]: I1211 13:50:51.535095 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pvns6" Dec 11 13:50:51 crc kubenswrapper[5050]: I1211 13:50:51.541687 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-qg6rf_4feb3774-888a-4be4-b47a-b929ec6e98dc/registry-server/0.log" Dec 11 13:50:51 crc kubenswrapper[5050]: I1211 13:50:51.546056 5050 generic.go:334] "Generic (PLEG): container finished" podID="4feb3774-888a-4be4-b47a-b929ec6e98dc" containerID="6e859335514bf6abe68d3832dd99b4d01e6a93c97a5650896c28944e07889652" exitCode=1 Dec 11 13:50:51 crc kubenswrapper[5050]: I1211 13:50:51.546233 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qg6rf" Dec 11 13:50:51 crc kubenswrapper[5050]: I1211 13:50:51.565206 5050 generic.go:334] "Generic (PLEG): container finished" podID="ed489b52-31c7-44c8-b634-4a99e1644f65" containerID="dc830007bd91bbaf0d691bea30b8eccc2bcfd27e6c94b428b50f9493d2ea6b93" exitCode=0 Dec 11 13:50:51 crc kubenswrapper[5050]: I1211 13:50:51.567162 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="345ac559-35e7-487b-859a-e583a6e88c6c" path="/var/lib/kubelet/pods/345ac559-35e7-487b-859a-e583a6e88c6c/volumes" Dec 11 13:50:51 crc kubenswrapper[5050]: I1211 13:50:51.567795 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5faa7088-04c1-4d75-abab-1e426f7cd032" path="/var/lib/kubelet/pods/5faa7088-04c1-4d75-abab-1e426f7cd032/volumes" Dec 11 13:50:51 crc kubenswrapper[5050]: I1211 13:50:51.568943 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-rmlj6"] Dec 11 13:50:51 crc kubenswrapper[5050]: I1211 13:50:51.568977 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qg6rf" event={"ID":"4feb3774-888a-4be4-b47a-b929ec6e98dc","Type":"ContainerDied","Data":"6e859335514bf6abe68d3832dd99b4d01e6a93c97a5650896c28944e07889652"} Dec 11 13:50:51 crc kubenswrapper[5050]: I1211 13:50:51.569073 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-rmlj6"] Dec 11 13:50:51 crc kubenswrapper[5050]: I1211 13:50:51.569093 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qg6rf" event={"ID":"4feb3774-888a-4be4-b47a-b929ec6e98dc","Type":"ContainerDied","Data":"ee78b515078382bf5f878c959a16396b36a6333b319361d7dfcd70126318fe5e"} Dec 11 13:50:51 crc kubenswrapper[5050]: I1211 13:50:51.569107 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c2qj7" event={"ID":"ed489b52-31c7-44c8-b634-4a99e1644f65","Type":"ContainerDied","Data":"dc830007bd91bbaf0d691bea30b8eccc2bcfd27e6c94b428b50f9493d2ea6b93"} Dec 11 13:50:51 crc kubenswrapper[5050]: I1211 13:50:51.570091 5050 scope.go:117] "RemoveContainer" containerID="3c6fed5218851381430b7af63dc6db763fa3f0fed877187a918be00508d2b02f" Dec 11 13:50:51 crc kubenswrapper[5050]: I1211 13:50:51.573798 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jsc5s" Dec 11 13:50:51 crc kubenswrapper[5050]: I1211 13:50:51.573802 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jsc5s" event={"ID":"680b10f8-ff96-43e4-b2e2-22ee05c6d815","Type":"ContainerDied","Data":"e12308e356f44667a71fa270f79df39493bcc94358da9ad0f6345566a75653dd"} Dec 11 13:50:51 crc kubenswrapper[5050]: I1211 13:50:51.579046 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-cgxqx" event={"ID":"fd564500-1ab5-401f-84a8-79c80dfe50ab","Type":"ContainerStarted","Data":"8524af2e52e0f2f7c5a578351b1c48256af5c75dc11b8e910edae9209aea1b48"} Dec 11 13:50:51 crc kubenswrapper[5050]: I1211 13:50:51.579587 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-cgxqx" Dec 11 13:50:51 crc kubenswrapper[5050]: I1211 13:50:51.580778 5050 generic.go:334] "Generic (PLEG): container finished" podID="b2ee71a3-392e-442c-aa3b-bec310a86031" containerID="dfd20eb336193611fa0682d43fce71cb8854edd92d8af8903e3b99c890351872" exitCode=0 Dec 11 13:50:51 crc kubenswrapper[5050]: I1211 13:50:51.580814 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6n98v" event={"ID":"b2ee71a3-392e-442c-aa3b-bec310a86031","Type":"ContainerDied","Data":"dfd20eb336193611fa0682d43fce71cb8854edd92d8af8903e3b99c890351872"} Dec 11 13:50:51 crc kubenswrapper[5050]: I1211 13:50:51.593314 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-pvns6"] Dec 11 13:50:51 crc kubenswrapper[5050]: I1211 13:50:51.601437 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-pvns6"] Dec 11 13:50:51 crc kubenswrapper[5050]: I1211 13:50:51.608341 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-cgxqx" podStartSLOduration=3.608299864 podStartE2EDuration="3.608299864s" podCreationTimestamp="2025-12-11 13:50:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 13:50:51.605750456 +0000 UTC m=+142.449473052" watchObservedRunningTime="2025-12-11 13:50:51.608299864 +0000 UTC m=+142.452022450" Dec 11 13:50:51 crc kubenswrapper[5050]: I1211 13:50:51.611547 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-cgxqx" Dec 11 13:50:51 crc kubenswrapper[5050]: I1211 13:50:51.626314 5050 scope.go:117] "RemoveContainer" containerID="356266226c3e3e247d874d7cedd227c8c9924a3eeee1038302774a63f8dd156f" Dec 11 13:50:51 crc kubenswrapper[5050]: I1211 13:50:51.657482 5050 scope.go:117] "RemoveContainer" containerID="67c63d37d60c469331b6535229dfd7849f44b1d981ab7424f9b7e634fb28e562" Dec 11 13:50:51 crc kubenswrapper[5050]: I1211 13:50:51.672233 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jsc5s"] Dec 11 13:50:51 crc kubenswrapper[5050]: I1211 13:50:51.679215 5050 scope.go:117] "RemoveContainer" containerID="6e859335514bf6abe68d3832dd99b4d01e6a93c97a5650896c28944e07889652" Dec 11 13:50:51 crc kubenswrapper[5050]: I1211 13:50:51.679321 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-jsc5s"] Dec 11 13:50:51 crc 
kubenswrapper[5050]: I1211 13:50:51.689425 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qg6rf"] Dec 11 13:50:51 crc kubenswrapper[5050]: I1211 13:50:51.691722 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-qg6rf"] Dec 11 13:50:51 crc kubenswrapper[5050]: I1211 13:50:51.699128 5050 scope.go:117] "RemoveContainer" containerID="c97be9f9bc94115dd5ed635c1b25876754c45231b8349986a5ac49b0a7cfdce2" Dec 11 13:50:51 crc kubenswrapper[5050]: I1211 13:50:51.711410 5050 scope.go:117] "RemoveContainer" containerID="b4b3620bdcd8ef1816412eab1f16d5be51a84910a3ada8edec0ed53d1d51119e" Dec 11 13:50:51 crc kubenswrapper[5050]: I1211 13:50:51.740194 5050 scope.go:117] "RemoveContainer" containerID="6e859335514bf6abe68d3832dd99b4d01e6a93c97a5650896c28944e07889652" Dec 11 13:50:51 crc kubenswrapper[5050]: E1211 13:50:51.740691 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6e859335514bf6abe68d3832dd99b4d01e6a93c97a5650896c28944e07889652\": container with ID starting with 6e859335514bf6abe68d3832dd99b4d01e6a93c97a5650896c28944e07889652 not found: ID does not exist" containerID="6e859335514bf6abe68d3832dd99b4d01e6a93c97a5650896c28944e07889652" Dec 11 13:50:51 crc kubenswrapper[5050]: I1211 13:50:51.740720 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6e859335514bf6abe68d3832dd99b4d01e6a93c97a5650896c28944e07889652"} err="failed to get container status \"6e859335514bf6abe68d3832dd99b4d01e6a93c97a5650896c28944e07889652\": rpc error: code = NotFound desc = could not find container \"6e859335514bf6abe68d3832dd99b4d01e6a93c97a5650896c28944e07889652\": container with ID starting with 6e859335514bf6abe68d3832dd99b4d01e6a93c97a5650896c28944e07889652 not found: ID does not exist" Dec 11 13:50:51 crc kubenswrapper[5050]: I1211 13:50:51.740758 5050 scope.go:117] "RemoveContainer" containerID="c97be9f9bc94115dd5ed635c1b25876754c45231b8349986a5ac49b0a7cfdce2" Dec 11 13:50:51 crc kubenswrapper[5050]: E1211 13:50:51.741164 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c97be9f9bc94115dd5ed635c1b25876754c45231b8349986a5ac49b0a7cfdce2\": container with ID starting with c97be9f9bc94115dd5ed635c1b25876754c45231b8349986a5ac49b0a7cfdce2 not found: ID does not exist" containerID="c97be9f9bc94115dd5ed635c1b25876754c45231b8349986a5ac49b0a7cfdce2" Dec 11 13:50:51 crc kubenswrapper[5050]: I1211 13:50:51.741205 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c97be9f9bc94115dd5ed635c1b25876754c45231b8349986a5ac49b0a7cfdce2"} err="failed to get container status \"c97be9f9bc94115dd5ed635c1b25876754c45231b8349986a5ac49b0a7cfdce2\": rpc error: code = NotFound desc = could not find container \"c97be9f9bc94115dd5ed635c1b25876754c45231b8349986a5ac49b0a7cfdce2\": container with ID starting with c97be9f9bc94115dd5ed635c1b25876754c45231b8349986a5ac49b0a7cfdce2 not found: ID does not exist" Dec 11 13:50:51 crc kubenswrapper[5050]: I1211 13:50:51.741219 5050 scope.go:117] "RemoveContainer" containerID="b4b3620bdcd8ef1816412eab1f16d5be51a84910a3ada8edec0ed53d1d51119e" Dec 11 13:50:51 crc kubenswrapper[5050]: E1211 13:50:51.741507 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"b4b3620bdcd8ef1816412eab1f16d5be51a84910a3ada8edec0ed53d1d51119e\": container with ID starting with b4b3620bdcd8ef1816412eab1f16d5be51a84910a3ada8edec0ed53d1d51119e not found: ID does not exist" containerID="b4b3620bdcd8ef1816412eab1f16d5be51a84910a3ada8edec0ed53d1d51119e" Dec 11 13:50:51 crc kubenswrapper[5050]: I1211 13:50:51.741525 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b4b3620bdcd8ef1816412eab1f16d5be51a84910a3ada8edec0ed53d1d51119e"} err="failed to get container status \"b4b3620bdcd8ef1816412eab1f16d5be51a84910a3ada8edec0ed53d1d51119e\": rpc error: code = NotFound desc = could not find container \"b4b3620bdcd8ef1816412eab1f16d5be51a84910a3ada8edec0ed53d1d51119e\": container with ID starting with b4b3620bdcd8ef1816412eab1f16d5be51a84910a3ada8edec0ed53d1d51119e not found: ID does not exist" Dec 11 13:50:51 crc kubenswrapper[5050]: I1211 13:50:51.741537 5050 scope.go:117] "RemoveContainer" containerID="e284d7968a917c91aed620df3cec36c9db27f417a664bc53be9523947ed1bef2" Dec 11 13:50:51 crc kubenswrapper[5050]: I1211 13:50:51.759869 5050 scope.go:117] "RemoveContainer" containerID="8b0103cb0a70f15cbc1e6b83960a4ecad55396f52a4bfb3d0309876fd8a20590" Dec 11 13:50:51 crc kubenswrapper[5050]: I1211 13:50:51.775948 5050 scope.go:117] "RemoveContainer" containerID="4725dadd78ab5fd96e93872efee3fa35732ab1e9f26edf5d6e88947ab949d85f" Dec 11 13:50:53 crc kubenswrapper[5050]: I1211 13:50:53.124831 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-qr26j"] Dec 11 13:50:53 crc kubenswrapper[5050]: E1211 13:50:53.126783 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="680b10f8-ff96-43e4-b2e2-22ee05c6d815" containerName="extract-utilities" Dec 11 13:50:53 crc kubenswrapper[5050]: I1211 13:50:53.126941 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="680b10f8-ff96-43e4-b2e2-22ee05c6d815" containerName="extract-utilities" Dec 11 13:50:53 crc kubenswrapper[5050]: E1211 13:50:53.127162 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="680b10f8-ff96-43e4-b2e2-22ee05c6d815" containerName="extract-content" Dec 11 13:50:53 crc kubenswrapper[5050]: I1211 13:50:53.127317 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="680b10f8-ff96-43e4-b2e2-22ee05c6d815" containerName="extract-content" Dec 11 13:50:53 crc kubenswrapper[5050]: E1211 13:50:53.127502 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f9d51ca-1ebf-4986-ba4b-08939d025cbd" containerName="registry-server" Dec 11 13:50:53 crc kubenswrapper[5050]: I1211 13:50:53.127671 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f9d51ca-1ebf-4986-ba4b-08939d025cbd" containerName="registry-server" Dec 11 13:50:53 crc kubenswrapper[5050]: E1211 13:50:53.127850 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="345ac559-35e7-487b-859a-e583a6e88c6c" containerName="extract-utilities" Dec 11 13:50:53 crc kubenswrapper[5050]: I1211 13:50:53.128066 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="345ac559-35e7-487b-859a-e583a6e88c6c" containerName="extract-utilities" Dec 11 13:50:53 crc kubenswrapper[5050]: E1211 13:50:53.129618 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="345ac559-35e7-487b-859a-e583a6e88c6c" containerName="extract-content" Dec 11 13:50:53 crc kubenswrapper[5050]: I1211 13:50:53.129910 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="345ac559-35e7-487b-859a-e583a6e88c6c" 
containerName="extract-content" Dec 11 13:50:53 crc kubenswrapper[5050]: E1211 13:50:53.130163 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f9d51ca-1ebf-4986-ba4b-08939d025cbd" containerName="extract-utilities" Dec 11 13:50:53 crc kubenswrapper[5050]: I1211 13:50:53.130391 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f9d51ca-1ebf-4986-ba4b-08939d025cbd" containerName="extract-utilities" Dec 11 13:50:53 crc kubenswrapper[5050]: E1211 13:50:53.131178 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4feb3774-888a-4be4-b47a-b929ec6e98dc" containerName="extract-content" Dec 11 13:50:53 crc kubenswrapper[5050]: I1211 13:50:53.131438 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="4feb3774-888a-4be4-b47a-b929ec6e98dc" containerName="extract-content" Dec 11 13:50:53 crc kubenswrapper[5050]: E1211 13:50:53.131720 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18d0253b-d7da-4be9-8fc3-6911f1c92076" containerName="marketplace-operator" Dec 11 13:50:53 crc kubenswrapper[5050]: I1211 13:50:53.131997 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="18d0253b-d7da-4be9-8fc3-6911f1c92076" containerName="marketplace-operator" Dec 11 13:50:53 crc kubenswrapper[5050]: E1211 13:50:53.132254 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5faa7088-04c1-4d75-abab-1e426f7cd032" containerName="registry-server" Dec 11 13:50:53 crc kubenswrapper[5050]: I1211 13:50:53.132411 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="5faa7088-04c1-4d75-abab-1e426f7cd032" containerName="registry-server" Dec 11 13:50:53 crc kubenswrapper[5050]: E1211 13:50:53.132589 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5faa7088-04c1-4d75-abab-1e426f7cd032" containerName="extract-utilities" Dec 11 13:50:53 crc kubenswrapper[5050]: I1211 13:50:53.132788 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="5faa7088-04c1-4d75-abab-1e426f7cd032" containerName="extract-utilities" Dec 11 13:50:53 crc kubenswrapper[5050]: E1211 13:50:53.132921 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4feb3774-888a-4be4-b47a-b929ec6e98dc" containerName="registry-server" Dec 11 13:50:53 crc kubenswrapper[5050]: I1211 13:50:53.133067 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="4feb3774-888a-4be4-b47a-b929ec6e98dc" containerName="registry-server" Dec 11 13:50:53 crc kubenswrapper[5050]: E1211 13:50:53.133439 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4feb3774-888a-4be4-b47a-b929ec6e98dc" containerName="extract-utilities" Dec 11 13:50:53 crc kubenswrapper[5050]: I1211 13:50:53.133596 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="4feb3774-888a-4be4-b47a-b929ec6e98dc" containerName="extract-utilities" Dec 11 13:50:53 crc kubenswrapper[5050]: E1211 13:50:53.133731 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f9d51ca-1ebf-4986-ba4b-08939d025cbd" containerName="extract-content" Dec 11 13:50:53 crc kubenswrapper[5050]: I1211 13:50:53.133845 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f9d51ca-1ebf-4986-ba4b-08939d025cbd" containerName="extract-content" Dec 11 13:50:53 crc kubenswrapper[5050]: E1211 13:50:53.134038 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5faa7088-04c1-4d75-abab-1e426f7cd032" containerName="extract-content" Dec 11 13:50:53 crc kubenswrapper[5050]: I1211 13:50:53.134245 5050 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="5faa7088-04c1-4d75-abab-1e426f7cd032" containerName="extract-content" Dec 11 13:50:53 crc kubenswrapper[5050]: E1211 13:50:53.134413 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="680b10f8-ff96-43e4-b2e2-22ee05c6d815" containerName="registry-server" Dec 11 13:50:53 crc kubenswrapper[5050]: I1211 13:50:53.134577 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="680b10f8-ff96-43e4-b2e2-22ee05c6d815" containerName="registry-server" Dec 11 13:50:53 crc kubenswrapper[5050]: I1211 13:50:53.134970 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f9d51ca-1ebf-4986-ba4b-08939d025cbd" containerName="registry-server" Dec 11 13:50:53 crc kubenswrapper[5050]: I1211 13:50:53.135214 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="5faa7088-04c1-4d75-abab-1e426f7cd032" containerName="registry-server" Dec 11 13:50:53 crc kubenswrapper[5050]: I1211 13:50:53.135388 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="680b10f8-ff96-43e4-b2e2-22ee05c6d815" containerName="registry-server" Dec 11 13:50:53 crc kubenswrapper[5050]: I1211 13:50:53.135712 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="345ac559-35e7-487b-859a-e583a6e88c6c" containerName="extract-content" Dec 11 13:50:53 crc kubenswrapper[5050]: I1211 13:50:53.136069 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="18d0253b-d7da-4be9-8fc3-6911f1c92076" containerName="marketplace-operator" Dec 11 13:50:53 crc kubenswrapper[5050]: I1211 13:50:53.139989 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="4feb3774-888a-4be4-b47a-b929ec6e98dc" containerName="registry-server" Dec 11 13:50:53 crc kubenswrapper[5050]: I1211 13:50:53.141822 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-qr26j" Dec 11 13:50:53 crc kubenswrapper[5050]: I1211 13:50:53.142550 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qr26j"] Dec 11 13:50:53 crc kubenswrapper[5050]: I1211 13:50:53.144873 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Dec 11 13:50:53 crc kubenswrapper[5050]: I1211 13:50:53.210125 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5zdt\" (UniqueName: \"kubernetes.io/projected/19a435e3-6f05-43af-af8d-6216a0306a47-kube-api-access-w5zdt\") pod \"certified-operators-qr26j\" (UID: \"19a435e3-6f05-43af-af8d-6216a0306a47\") " pod="openshift-marketplace/certified-operators-qr26j" Dec 11 13:50:53 crc kubenswrapper[5050]: I1211 13:50:53.210245 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19a435e3-6f05-43af-af8d-6216a0306a47-utilities\") pod \"certified-operators-qr26j\" (UID: \"19a435e3-6f05-43af-af8d-6216a0306a47\") " pod="openshift-marketplace/certified-operators-qr26j" Dec 11 13:50:53 crc kubenswrapper[5050]: I1211 13:50:53.210316 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19a435e3-6f05-43af-af8d-6216a0306a47-catalog-content\") pod \"certified-operators-qr26j\" (UID: \"19a435e3-6f05-43af-af8d-6216a0306a47\") " pod="openshift-marketplace/certified-operators-qr26j" Dec 11 13:50:53 crc kubenswrapper[5050]: I1211 13:50:53.312185 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w5zdt\" (UniqueName: \"kubernetes.io/projected/19a435e3-6f05-43af-af8d-6216a0306a47-kube-api-access-w5zdt\") pod \"certified-operators-qr26j\" (UID: \"19a435e3-6f05-43af-af8d-6216a0306a47\") " pod="openshift-marketplace/certified-operators-qr26j" Dec 11 13:50:53 crc kubenswrapper[5050]: I1211 13:50:53.312666 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19a435e3-6f05-43af-af8d-6216a0306a47-utilities\") pod \"certified-operators-qr26j\" (UID: \"19a435e3-6f05-43af-af8d-6216a0306a47\") " pod="openshift-marketplace/certified-operators-qr26j" Dec 11 13:50:53 crc kubenswrapper[5050]: I1211 13:50:53.312937 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19a435e3-6f05-43af-af8d-6216a0306a47-catalog-content\") pod \"certified-operators-qr26j\" (UID: \"19a435e3-6f05-43af-af8d-6216a0306a47\") " pod="openshift-marketplace/certified-operators-qr26j" Dec 11 13:50:53 crc kubenswrapper[5050]: I1211 13:50:53.313419 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19a435e3-6f05-43af-af8d-6216a0306a47-utilities\") pod \"certified-operators-qr26j\" (UID: \"19a435e3-6f05-43af-af8d-6216a0306a47\") " pod="openshift-marketplace/certified-operators-qr26j" Dec 11 13:50:53 crc kubenswrapper[5050]: I1211 13:50:53.313745 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19a435e3-6f05-43af-af8d-6216a0306a47-catalog-content\") pod \"certified-operators-qr26j\" (UID: 
\"19a435e3-6f05-43af-af8d-6216a0306a47\") " pod="openshift-marketplace/certified-operators-qr26j" Dec 11 13:50:53 crc kubenswrapper[5050]: I1211 13:50:53.344829 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w5zdt\" (UniqueName: \"kubernetes.io/projected/19a435e3-6f05-43af-af8d-6216a0306a47-kube-api-access-w5zdt\") pod \"certified-operators-qr26j\" (UID: \"19a435e3-6f05-43af-af8d-6216a0306a47\") " pod="openshift-marketplace/certified-operators-qr26j" Dec 11 13:50:53 crc kubenswrapper[5050]: I1211 13:50:53.471617 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qr26j" Dec 11 13:50:53 crc kubenswrapper[5050]: I1211 13:50:53.565300 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18d0253b-d7da-4be9-8fc3-6911f1c92076" path="/var/lib/kubelet/pods/18d0253b-d7da-4be9-8fc3-6911f1c92076/volumes" Dec 11 13:50:53 crc kubenswrapper[5050]: I1211 13:50:53.565911 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4feb3774-888a-4be4-b47a-b929ec6e98dc" path="/var/lib/kubelet/pods/4feb3774-888a-4be4-b47a-b929ec6e98dc/volumes" Dec 11 13:50:53 crc kubenswrapper[5050]: I1211 13:50:53.566663 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="680b10f8-ff96-43e4-b2e2-22ee05c6d815" path="/var/lib/kubelet/pods/680b10f8-ff96-43e4-b2e2-22ee05c6d815/volumes" Dec 11 13:50:53 crc kubenswrapper[5050]: I1211 13:50:53.568201 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f9d51ca-1ebf-4986-ba4b-08939d025cbd" path="/var/lib/kubelet/pods/6f9d51ca-1ebf-4986-ba4b-08939d025cbd/volumes" Dec 11 13:50:53 crc kubenswrapper[5050]: I1211 13:50:53.672426 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qr26j"] Dec 11 13:50:53 crc kubenswrapper[5050]: W1211 13:50:53.681466 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod19a435e3_6f05_43af_af8d_6216a0306a47.slice/crio-ae45ca8fb70eab61a1df6b31f373404396849343d6caa1da76a76b57f5cf2f5d WatchSource:0}: Error finding container ae45ca8fb70eab61a1df6b31f373404396849343d6caa1da76a76b57f5cf2f5d: Status 404 returned error can't find the container with id ae45ca8fb70eab61a1df6b31f373404396849343d6caa1da76a76b57f5cf2f5d Dec 11 13:50:54 crc kubenswrapper[5050]: I1211 13:50:54.088121 5050 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 11 13:50:54 crc kubenswrapper[5050]: I1211 13:50:54.091053 5050 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 11 13:50:54 crc kubenswrapper[5050]: I1211 13:50:54.091117 5050 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 11 13:50:54 crc kubenswrapper[5050]: I1211 13:50:54.091293 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 11 13:50:54 crc kubenswrapper[5050]: I1211 13:50:54.091381 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://a8ed77421db2de6a328f1925acda12a41ca2a56cf101a962c541cf04afa16205" gracePeriod=15 Dec 11 13:50:54 crc kubenswrapper[5050]: I1211 13:50:54.091792 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://d6f8c2f45cdfba5b4eb7bdbcb5d48fd2dcf70731aeee14cf3a54dbfc597b8af9" gracePeriod=15 Dec 11 13:50:54 crc kubenswrapper[5050]: I1211 13:50:54.091904 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://dc076730e23713ac357165823064125008a54267d6bdfc7f63dfe6f59fda1d8f" gracePeriod=15 Dec 11 13:50:54 crc kubenswrapper[5050]: I1211 13:50:54.092052 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://4ae4bb2bd785bbc626814e84d88ab77f2705e63cd987329e5da3806265c35256" gracePeriod=15 Dec 11 13:50:54 crc kubenswrapper[5050]: I1211 13:50:54.092098 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://5c2283a6b4d7e02755ef3c112c65859a852db517e3fd1f33779935a45d11db1f" gracePeriod=15 Dec 11 13:50:54 crc kubenswrapper[5050]: E1211 13:50:54.093943 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 11 13:50:54 crc kubenswrapper[5050]: I1211 13:50:54.093990 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 11 13:50:54 crc kubenswrapper[5050]: E1211 13:50:54.094023 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 11 13:50:54 crc kubenswrapper[5050]: I1211 13:50:54.094030 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 11 13:50:54 crc kubenswrapper[5050]: E1211 13:50:54.094046 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Dec 11 13:50:54 crc kubenswrapper[5050]: I1211 13:50:54.094051 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Dec 11 13:50:54 crc kubenswrapper[5050]: E1211 13:50:54.094063 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Dec 11 13:50:54 crc kubenswrapper[5050]: I1211 13:50:54.094069 5050 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Dec 11 13:50:54 crc kubenswrapper[5050]: E1211 13:50:54.094083 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Dec 11 13:50:54 crc kubenswrapper[5050]: I1211 13:50:54.094089 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Dec 11 13:50:54 crc kubenswrapper[5050]: E1211 13:50:54.094097 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Dec 11 13:50:54 crc kubenswrapper[5050]: I1211 13:50:54.094103 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Dec 11 13:50:54 crc kubenswrapper[5050]: E1211 13:50:54.094113 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Dec 11 13:50:54 crc kubenswrapper[5050]: I1211 13:50:54.094120 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Dec 11 13:50:54 crc kubenswrapper[5050]: I1211 13:50:54.094220 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Dec 11 13:50:54 crc kubenswrapper[5050]: I1211 13:50:54.094230 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Dec 11 13:50:54 crc kubenswrapper[5050]: I1211 13:50:54.094238 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Dec 11 13:50:54 crc kubenswrapper[5050]: I1211 13:50:54.094248 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 11 13:50:54 crc kubenswrapper[5050]: I1211 13:50:54.094255 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Dec 11 13:50:54 crc kubenswrapper[5050]: I1211 13:50:54.094263 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 11 13:50:54 crc kubenswrapper[5050]: I1211 13:50:54.097968 5050 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="f4b27818a5e8e43d0dc095d08835c792" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" Dec 11 13:50:54 crc kubenswrapper[5050]: I1211 13:50:54.124752 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 11 13:50:54 crc kubenswrapper[5050]: I1211 13:50:54.125345 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 13:50:54 crc kubenswrapper[5050]: I1211 13:50:54.125415 5050 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 11 13:50:54 crc kubenswrapper[5050]: I1211 13:50:54.125480 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 11 13:50:54 crc kubenswrapper[5050]: I1211 13:50:54.125508 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 11 13:50:54 crc kubenswrapper[5050]: I1211 13:50:54.125539 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 11 13:50:54 crc kubenswrapper[5050]: I1211 13:50:54.125571 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 13:50:54 crc kubenswrapper[5050]: I1211 13:50:54.125621 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 13:50:54 crc kubenswrapper[5050]: I1211 13:50:54.125703 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 11 13:50:54 crc kubenswrapper[5050]: I1211 13:50:54.226571 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 13:50:54 crc kubenswrapper[5050]: I1211 13:50:54.226625 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 11 
13:50:54 crc kubenswrapper[5050]: I1211 13:50:54.226650 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 11 13:50:54 crc kubenswrapper[5050]: I1211 13:50:54.226700 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 13:50:54 crc kubenswrapper[5050]: I1211 13:50:54.226722 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 11 13:50:54 crc kubenswrapper[5050]: I1211 13:50:54.226757 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 11 13:50:54 crc kubenswrapper[5050]: I1211 13:50:54.226755 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 11 13:50:54 crc kubenswrapper[5050]: I1211 13:50:54.226789 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 11 13:50:54 crc kubenswrapper[5050]: I1211 13:50:54.226815 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 11 13:50:54 crc kubenswrapper[5050]: I1211 13:50:54.226838 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 13:50:54 crc kubenswrapper[5050]: I1211 13:50:54.226882 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 13:50:54 crc kubenswrapper[5050]: I1211 13:50:54.226897 5050 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 11 13:50:54 crc kubenswrapper[5050]: I1211 13:50:54.226904 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 11 13:50:54 crc kubenswrapper[5050]: I1211 13:50:54.226949 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 13:50:54 crc kubenswrapper[5050]: I1211 13:50:54.226950 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 13:50:54 crc kubenswrapper[5050]: I1211 13:50:54.227065 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 11 13:50:54 crc kubenswrapper[5050]: I1211 13:50:54.424767 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 11 13:50:54 crc kubenswrapper[5050]: W1211 13:50:54.456955 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-193dd1ca5e9643d4629a312627b8ccb04b80ceb1ce50240cbe2b506fc05caca2 WatchSource:0}: Error finding container 193dd1ca5e9643d4629a312627b8ccb04b80ceb1ce50240cbe2b506fc05caca2: Status 404 returned error can't find the container with id 193dd1ca5e9643d4629a312627b8ccb04b80ceb1ce50240cbe2b506fc05caca2 Dec 11 13:50:54 crc kubenswrapper[5050]: E1211 13:50:54.461388 5050 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.147:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.18802d79a7dda0f5 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 13:50:54.459846901 +0000 UTC m=+145.303569487,LastTimestamp:2025-12-11 13:50:54.459846901 +0000 UTC m=+145.303569487,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 11 13:50:54 crc kubenswrapper[5050]: I1211 13:50:54.626031 5050 generic.go:334] "Generic (PLEG): container finished" podID="19a435e3-6f05-43af-af8d-6216a0306a47" containerID="e0a4b81865d1f44ae0f473341ed7173b82a9f6fbebab6a9398a80f26b96a7ba7" exitCode=0 Dec 11 13:50:54 crc kubenswrapper[5050]: I1211 13:50:54.626294 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qr26j" event={"ID":"19a435e3-6f05-43af-af8d-6216a0306a47","Type":"ContainerDied","Data":"e0a4b81865d1f44ae0f473341ed7173b82a9f6fbebab6a9398a80f26b96a7ba7"} Dec 11 13:50:54 crc kubenswrapper[5050]: I1211 13:50:54.626505 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qr26j" event={"ID":"19a435e3-6f05-43af-af8d-6216a0306a47","Type":"ContainerStarted","Data":"ae45ca8fb70eab61a1df6b31f373404396849343d6caa1da76a76b57f5cf2f5d"} Dec 11 13:50:54 crc kubenswrapper[5050]: I1211 13:50:54.627060 5050 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Dec 11 13:50:54 crc kubenswrapper[5050]: I1211 13:50:54.627261 5050 status_manager.go:851] "Failed to get status for pod" podUID="19a435e3-6f05-43af-af8d-6216a0306a47" pod="openshift-marketplace/certified-operators-qr26j" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-qr26j\": dial tcp 38.102.83.147:6443: connect: connection refused" 
Dec 11 13:50:54 crc kubenswrapper[5050]: I1211 13:50:54.629244 5050 generic.go:334] "Generic (PLEG): container finished" podID="5cc2be88-d194-4626-9f82-a4ccf377ce0d" containerID="ad31ba71f196ede4a4174c658f065f02a45637c6b7edd0ace8a127e232567385" exitCode=0 Dec 11 13:50:54 crc kubenswrapper[5050]: I1211 13:50:54.629306 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"5cc2be88-d194-4626-9f82-a4ccf377ce0d","Type":"ContainerDied","Data":"ad31ba71f196ede4a4174c658f065f02a45637c6b7edd0ace8a127e232567385"} Dec 11 13:50:54 crc kubenswrapper[5050]: I1211 13:50:54.629764 5050 status_manager.go:851] "Failed to get status for pod" podUID="19a435e3-6f05-43af-af8d-6216a0306a47" pod="openshift-marketplace/certified-operators-qr26j" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-qr26j\": dial tcp 38.102.83.147:6443: connect: connection refused" Dec 11 13:50:54 crc kubenswrapper[5050]: I1211 13:50:54.629955 5050 status_manager.go:851] "Failed to get status for pod" podUID="5cc2be88-d194-4626-9f82-a4ccf377ce0d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Dec 11 13:50:54 crc kubenswrapper[5050]: I1211 13:50:54.630329 5050 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Dec 11 13:50:54 crc kubenswrapper[5050]: I1211 13:50:54.631370 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"193dd1ca5e9643d4629a312627b8ccb04b80ceb1ce50240cbe2b506fc05caca2"} Dec 11 13:50:54 crc kubenswrapper[5050]: I1211 13:50:54.637534 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Dec 11 13:50:54 crc kubenswrapper[5050]: I1211 13:50:54.643312 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 11 13:50:54 crc kubenswrapper[5050]: I1211 13:50:54.644692 5050 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="d6f8c2f45cdfba5b4eb7bdbcb5d48fd2dcf70731aeee14cf3a54dbfc597b8af9" exitCode=0 Dec 11 13:50:54 crc kubenswrapper[5050]: I1211 13:50:54.644725 5050 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="4ae4bb2bd785bbc626814e84d88ab77f2705e63cd987329e5da3806265c35256" exitCode=0 Dec 11 13:50:54 crc kubenswrapper[5050]: I1211 13:50:54.644735 5050 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="5c2283a6b4d7e02755ef3c112c65859a852db517e3fd1f33779935a45d11db1f" exitCode=0 Dec 11 13:50:54 crc kubenswrapper[5050]: I1211 13:50:54.644746 5050 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" 
containerID="dc076730e23713ac357165823064125008a54267d6bdfc7f63dfe6f59fda1d8f" exitCode=2 Dec 11 13:50:54 crc kubenswrapper[5050]: I1211 13:50:54.644820 5050 scope.go:117] "RemoveContainer" containerID="dfb942fdbd9d669df2b809fbb38479ab9e6cfcf013fb67c19a2155335b5018e3" Dec 11 13:50:54 crc kubenswrapper[5050]: I1211 13:50:54.768666 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6n98v" Dec 11 13:50:54 crc kubenswrapper[5050]: I1211 13:50:54.769176 5050 status_manager.go:851] "Failed to get status for pod" podUID="b2ee71a3-392e-442c-aa3b-bec310a86031" pod="openshift-marketplace/community-operators-6n98v" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6n98v\": dial tcp 38.102.83.147:6443: connect: connection refused" Dec 11 13:50:54 crc kubenswrapper[5050]: I1211 13:50:54.769481 5050 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Dec 11 13:50:54 crc kubenswrapper[5050]: I1211 13:50:54.769864 5050 status_manager.go:851] "Failed to get status for pod" podUID="19a435e3-6f05-43af-af8d-6216a0306a47" pod="openshift-marketplace/certified-operators-qr26j" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-qr26j\": dial tcp 38.102.83.147:6443: connect: connection refused" Dec 11 13:50:54 crc kubenswrapper[5050]: I1211 13:50:54.770091 5050 status_manager.go:851] "Failed to get status for pod" podUID="5cc2be88-d194-4626-9f82-a4ccf377ce0d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Dec 11 13:50:54 crc kubenswrapper[5050]: I1211 13:50:54.772775 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-c2qj7" Dec 11 13:50:54 crc kubenswrapper[5050]: I1211 13:50:54.773092 5050 status_manager.go:851] "Failed to get status for pod" podUID="b2ee71a3-392e-442c-aa3b-bec310a86031" pod="openshift-marketplace/community-operators-6n98v" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6n98v\": dial tcp 38.102.83.147:6443: connect: connection refused" Dec 11 13:50:54 crc kubenswrapper[5050]: I1211 13:50:54.773293 5050 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Dec 11 13:50:54 crc kubenswrapper[5050]: I1211 13:50:54.773482 5050 status_manager.go:851] "Failed to get status for pod" podUID="19a435e3-6f05-43af-af8d-6216a0306a47" pod="openshift-marketplace/certified-operators-qr26j" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-qr26j\": dial tcp 38.102.83.147:6443: connect: connection refused" Dec 11 13:50:54 crc kubenswrapper[5050]: I1211 13:50:54.773714 5050 status_manager.go:851] "Failed to get status for pod" podUID="5cc2be88-d194-4626-9f82-a4ccf377ce0d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Dec 11 13:50:54 crc kubenswrapper[5050]: I1211 13:50:54.773984 5050 status_manager.go:851] "Failed to get status for pod" podUID="ed489b52-31c7-44c8-b634-4a99e1644f65" pod="openshift-marketplace/redhat-marketplace-c2qj7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-c2qj7\": dial tcp 38.102.83.147:6443: connect: connection refused" Dec 11 13:50:54 crc kubenswrapper[5050]: I1211 13:50:54.838591 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed489b52-31c7-44c8-b634-4a99e1644f65-utilities\") pod \"ed489b52-31c7-44c8-b634-4a99e1644f65\" (UID: \"ed489b52-31c7-44c8-b634-4a99e1644f65\") " Dec 11 13:50:54 crc kubenswrapper[5050]: I1211 13:50:54.838657 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b2ee71a3-392e-442c-aa3b-bec310a86031-utilities\") pod \"b2ee71a3-392e-442c-aa3b-bec310a86031\" (UID: \"b2ee71a3-392e-442c-aa3b-bec310a86031\") " Dec 11 13:50:54 crc kubenswrapper[5050]: I1211 13:50:54.838772 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b2ee71a3-392e-442c-aa3b-bec310a86031-catalog-content\") pod \"b2ee71a3-392e-442c-aa3b-bec310a86031\" (UID: \"b2ee71a3-392e-442c-aa3b-bec310a86031\") " Dec 11 13:50:54 crc kubenswrapper[5050]: I1211 13:50:54.838902 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed489b52-31c7-44c8-b634-4a99e1644f65-catalog-content\") pod \"ed489b52-31c7-44c8-b634-4a99e1644f65\" (UID: \"ed489b52-31c7-44c8-b634-4a99e1644f65\") " Dec 11 13:50:54 crc kubenswrapper[5050]: I1211 13:50:54.838945 5050 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gnn9x\" (UniqueName: \"kubernetes.io/projected/b2ee71a3-392e-442c-aa3b-bec310a86031-kube-api-access-gnn9x\") pod \"b2ee71a3-392e-442c-aa3b-bec310a86031\" (UID: \"b2ee71a3-392e-442c-aa3b-bec310a86031\") " Dec 11 13:50:54 crc kubenswrapper[5050]: I1211 13:50:54.839075 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pn95r\" (UniqueName: \"kubernetes.io/projected/ed489b52-31c7-44c8-b634-4a99e1644f65-kube-api-access-pn95r\") pod \"ed489b52-31c7-44c8-b634-4a99e1644f65\" (UID: \"ed489b52-31c7-44c8-b634-4a99e1644f65\") " Dec 11 13:50:54 crc kubenswrapper[5050]: I1211 13:50:54.842098 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ed489b52-31c7-44c8-b634-4a99e1644f65-utilities" (OuterVolumeSpecName: "utilities") pod "ed489b52-31c7-44c8-b634-4a99e1644f65" (UID: "ed489b52-31c7-44c8-b634-4a99e1644f65"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 13:50:54 crc kubenswrapper[5050]: I1211 13:50:54.845620 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed489b52-31c7-44c8-b634-4a99e1644f65-kube-api-access-pn95r" (OuterVolumeSpecName: "kube-api-access-pn95r") pod "ed489b52-31c7-44c8-b634-4a99e1644f65" (UID: "ed489b52-31c7-44c8-b634-4a99e1644f65"). InnerVolumeSpecName "kube-api-access-pn95r". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 13:50:54 crc kubenswrapper[5050]: I1211 13:50:54.848424 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2ee71a3-392e-442c-aa3b-bec310a86031-kube-api-access-gnn9x" (OuterVolumeSpecName: "kube-api-access-gnn9x") pod "b2ee71a3-392e-442c-aa3b-bec310a86031" (UID: "b2ee71a3-392e-442c-aa3b-bec310a86031"). InnerVolumeSpecName "kube-api-access-gnn9x". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 13:50:54 crc kubenswrapper[5050]: I1211 13:50:54.853891 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b2ee71a3-392e-442c-aa3b-bec310a86031-utilities" (OuterVolumeSpecName: "utilities") pod "b2ee71a3-392e-442c-aa3b-bec310a86031" (UID: "b2ee71a3-392e-442c-aa3b-bec310a86031"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 13:50:54 crc kubenswrapper[5050]: I1211 13:50:54.867961 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ed489b52-31c7-44c8-b634-4a99e1644f65-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ed489b52-31c7-44c8-b634-4a99e1644f65" (UID: "ed489b52-31c7-44c8-b634-4a99e1644f65"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 13:50:54 crc kubenswrapper[5050]: I1211 13:50:54.894977 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b2ee71a3-392e-442c-aa3b-bec310a86031-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b2ee71a3-392e-442c-aa3b-bec310a86031" (UID: "b2ee71a3-392e-442c-aa3b-bec310a86031"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 13:50:54 crc kubenswrapper[5050]: I1211 13:50:54.943283 5050 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed489b52-31c7-44c8-b634-4a99e1644f65-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 13:50:54 crc kubenswrapper[5050]: I1211 13:50:54.943803 5050 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b2ee71a3-392e-442c-aa3b-bec310a86031-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 13:50:54 crc kubenswrapper[5050]: I1211 13:50:54.943813 5050 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b2ee71a3-392e-442c-aa3b-bec310a86031-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 13:50:54 crc kubenswrapper[5050]: I1211 13:50:54.943829 5050 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed489b52-31c7-44c8-b634-4a99e1644f65-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 13:50:54 crc kubenswrapper[5050]: I1211 13:50:54.943841 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gnn9x\" (UniqueName: \"kubernetes.io/projected/b2ee71a3-392e-442c-aa3b-bec310a86031-kube-api-access-gnn9x\") on node \"crc\" DevicePath \"\"" Dec 11 13:50:54 crc kubenswrapper[5050]: I1211 13:50:54.943854 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pn95r\" (UniqueName: \"kubernetes.io/projected/ed489b52-31c7-44c8-b634-4a99e1644f65-kube-api-access-pn95r\") on node \"crc\" DevicePath \"\"" Dec 11 13:50:55 crc kubenswrapper[5050]: I1211 13:50:55.652816 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6n98v" event={"ID":"b2ee71a3-392e-442c-aa3b-bec310a86031","Type":"ContainerDied","Data":"1e4397a2f8804938e58f33dac0e85aff1ed115029e1a95d27202ae10ef582834"} Dec 11 13:50:55 crc kubenswrapper[5050]: I1211 13:50:55.652884 5050 scope.go:117] "RemoveContainer" containerID="dfd20eb336193611fa0682d43fce71cb8854edd92d8af8903e3b99c890351872" Dec 11 13:50:55 crc kubenswrapper[5050]: I1211 13:50:55.654717 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"ec4865b13a5739e01019f3a5ccd3d4af2398f0dfccd3926e65b992410cae09bc"} Dec 11 13:50:55 crc kubenswrapper[5050]: I1211 13:50:55.655505 5050 status_manager.go:851] "Failed to get status for pod" podUID="b2ee71a3-392e-442c-aa3b-bec310a86031" pod="openshift-marketplace/community-operators-6n98v" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6n98v\": dial tcp 38.102.83.147:6443: connect: connection refused" Dec 11 13:50:55 crc kubenswrapper[5050]: I1211 13:50:55.655795 5050 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Dec 11 13:50:55 crc kubenswrapper[5050]: I1211 13:50:55.656193 5050 status_manager.go:851] "Failed to get status for pod" podUID="19a435e3-6f05-43af-af8d-6216a0306a47" pod="openshift-marketplace/certified-operators-qr26j" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-qr26j\": dial tcp 38.102.83.147:6443: connect: connection refused" Dec 11 13:50:55 crc kubenswrapper[5050]: I1211 13:50:55.656245 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6n98v" Dec 11 13:50:55 crc kubenswrapper[5050]: I1211 13:50:55.656432 5050 status_manager.go:851] "Failed to get status for pod" podUID="ed489b52-31c7-44c8-b634-4a99e1644f65" pod="openshift-marketplace/redhat-marketplace-c2qj7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-c2qj7\": dial tcp 38.102.83.147:6443: connect: connection refused" Dec 11 13:50:55 crc kubenswrapper[5050]: I1211 13:50:55.656681 5050 status_manager.go:851] "Failed to get status for pod" podUID="5cc2be88-d194-4626-9f82-a4ccf377ce0d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Dec 11 13:50:55 crc kubenswrapper[5050]: I1211 13:50:55.656822 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c2qj7" event={"ID":"ed489b52-31c7-44c8-b634-4a99e1644f65","Type":"ContainerDied","Data":"8003bc3efdc910447f0660155579af94a999bdf4071aefbddb36aaa63b17eaba"} Dec 11 13:50:55 crc kubenswrapper[5050]: I1211 13:50:55.656869 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-c2qj7" Dec 11 13:50:55 crc kubenswrapper[5050]: I1211 13:50:55.656946 5050 status_manager.go:851] "Failed to get status for pod" podUID="b2ee71a3-392e-442c-aa3b-bec310a86031" pod="openshift-marketplace/community-operators-6n98v" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6n98v\": dial tcp 38.102.83.147:6443: connect: connection refused" Dec 11 13:50:55 crc kubenswrapper[5050]: I1211 13:50:55.657173 5050 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Dec 11 13:50:55 crc kubenswrapper[5050]: I1211 13:50:55.657350 5050 status_manager.go:851] "Failed to get status for pod" podUID="19a435e3-6f05-43af-af8d-6216a0306a47" pod="openshift-marketplace/certified-operators-qr26j" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-qr26j\": dial tcp 38.102.83.147:6443: connect: connection refused" Dec 11 13:50:55 crc kubenswrapper[5050]: I1211 13:50:55.657523 5050 status_manager.go:851] "Failed to get status for pod" podUID="ed489b52-31c7-44c8-b634-4a99e1644f65" pod="openshift-marketplace/redhat-marketplace-c2qj7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-c2qj7\": dial tcp 38.102.83.147:6443: connect: connection refused" Dec 11 13:50:55 crc kubenswrapper[5050]: I1211 13:50:55.657693 5050 status_manager.go:851] "Failed to get status for pod" podUID="5cc2be88-d194-4626-9f82-a4ccf377ce0d" pod="openshift-kube-apiserver/installer-9-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Dec 11 13:50:55 crc kubenswrapper[5050]: I1211 13:50:55.657891 5050 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Dec 11 13:50:55 crc kubenswrapper[5050]: I1211 13:50:55.658225 5050 status_manager.go:851] "Failed to get status for pod" podUID="19a435e3-6f05-43af-af8d-6216a0306a47" pod="openshift-marketplace/certified-operators-qr26j" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-qr26j\": dial tcp 38.102.83.147:6443: connect: connection refused" Dec 11 13:50:55 crc kubenswrapper[5050]: I1211 13:50:55.658460 5050 status_manager.go:851] "Failed to get status for pod" podUID="5cc2be88-d194-4626-9f82-a4ccf377ce0d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Dec 11 13:50:55 crc kubenswrapper[5050]: I1211 13:50:55.658634 5050 status_manager.go:851] "Failed to get status for pod" podUID="ed489b52-31c7-44c8-b634-4a99e1644f65" pod="openshift-marketplace/redhat-marketplace-c2qj7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-c2qj7\": dial tcp 38.102.83.147:6443: connect: connection refused" Dec 11 13:50:55 crc kubenswrapper[5050]: I1211 13:50:55.658785 5050 status_manager.go:851] "Failed to get status for pod" podUID="b2ee71a3-392e-442c-aa3b-bec310a86031" pod="openshift-marketplace/community-operators-6n98v" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6n98v\": dial tcp 38.102.83.147:6443: connect: connection refused" Dec 11 13:50:55 crc kubenswrapper[5050]: I1211 13:50:55.661294 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 11 13:50:55 crc kubenswrapper[5050]: I1211 13:50:55.661526 5050 status_manager.go:851] "Failed to get status for pod" podUID="b2ee71a3-392e-442c-aa3b-bec310a86031" pod="openshift-marketplace/community-operators-6n98v" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6n98v\": dial tcp 38.102.83.147:6443: connect: connection refused" Dec 11 13:50:55 crc kubenswrapper[5050]: I1211 13:50:55.661712 5050 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Dec 11 13:50:55 crc kubenswrapper[5050]: I1211 13:50:55.661936 5050 status_manager.go:851] "Failed to get status for pod" podUID="19a435e3-6f05-43af-af8d-6216a0306a47" pod="openshift-marketplace/certified-operators-qr26j" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-qr26j\": dial 
tcp 38.102.83.147:6443: connect: connection refused" Dec 11 13:50:55 crc kubenswrapper[5050]: I1211 13:50:55.662130 5050 status_manager.go:851] "Failed to get status for pod" podUID="ed489b52-31c7-44c8-b634-4a99e1644f65" pod="openshift-marketplace/redhat-marketplace-c2qj7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-c2qj7\": dial tcp 38.102.83.147:6443: connect: connection refused" Dec 11 13:50:55 crc kubenswrapper[5050]: I1211 13:50:55.662352 5050 status_manager.go:851] "Failed to get status for pod" podUID="5cc2be88-d194-4626-9f82-a4ccf377ce0d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Dec 11 13:50:55 crc kubenswrapper[5050]: I1211 13:50:55.662585 5050 status_manager.go:851] "Failed to get status for pod" podUID="ed489b52-31c7-44c8-b634-4a99e1644f65" pod="openshift-marketplace/redhat-marketplace-c2qj7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-c2qj7\": dial tcp 38.102.83.147:6443: connect: connection refused" Dec 11 13:50:55 crc kubenswrapper[5050]: I1211 13:50:55.662744 5050 status_manager.go:851] "Failed to get status for pod" podUID="5cc2be88-d194-4626-9f82-a4ccf377ce0d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Dec 11 13:50:55 crc kubenswrapper[5050]: I1211 13:50:55.662947 5050 status_manager.go:851] "Failed to get status for pod" podUID="b2ee71a3-392e-442c-aa3b-bec310a86031" pod="openshift-marketplace/community-operators-6n98v" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6n98v\": dial tcp 38.102.83.147:6443: connect: connection refused" Dec 11 13:50:55 crc kubenswrapper[5050]: I1211 13:50:55.663157 5050 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Dec 11 13:50:55 crc kubenswrapper[5050]: I1211 13:50:55.663357 5050 status_manager.go:851] "Failed to get status for pod" podUID="19a435e3-6f05-43af-af8d-6216a0306a47" pod="openshift-marketplace/certified-operators-qr26j" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-qr26j\": dial tcp 38.102.83.147:6443: connect: connection refused" Dec 11 13:50:55 crc kubenswrapper[5050]: I1211 13:50:55.664490 5050 generic.go:334] "Generic (PLEG): container finished" podID="19a435e3-6f05-43af-af8d-6216a0306a47" containerID="7476eda9d1b33a6f09624048413a2266ecc57d3c400d0956fe17b6f429429ca8" exitCode=0 Dec 11 13:50:55 crc kubenswrapper[5050]: I1211 13:50:55.664511 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qr26j" event={"ID":"19a435e3-6f05-43af-af8d-6216a0306a47","Type":"ContainerDied","Data":"7476eda9d1b33a6f09624048413a2266ecc57d3c400d0956fe17b6f429429ca8"} Dec 11 13:50:55 crc kubenswrapper[5050]: I1211 13:50:55.665104 5050 status_manager.go:851] "Failed to get status for pod" 
podUID="ed489b52-31c7-44c8-b634-4a99e1644f65" pod="openshift-marketplace/redhat-marketplace-c2qj7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-c2qj7\": dial tcp 38.102.83.147:6443: connect: connection refused" Dec 11 13:50:55 crc kubenswrapper[5050]: I1211 13:50:55.665354 5050 status_manager.go:851] "Failed to get status for pod" podUID="5cc2be88-d194-4626-9f82-a4ccf377ce0d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Dec 11 13:50:55 crc kubenswrapper[5050]: I1211 13:50:55.665673 5050 status_manager.go:851] "Failed to get status for pod" podUID="b2ee71a3-392e-442c-aa3b-bec310a86031" pod="openshift-marketplace/community-operators-6n98v" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6n98v\": dial tcp 38.102.83.147:6443: connect: connection refused" Dec 11 13:50:55 crc kubenswrapper[5050]: I1211 13:50:55.665899 5050 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Dec 11 13:50:55 crc kubenswrapper[5050]: I1211 13:50:55.665929 5050 scope.go:117] "RemoveContainer" containerID="98ae423705a56d7c67be8f9fcd4ada09e0693bde23c796cc98c7ca9fa573a400" Dec 11 13:50:55 crc kubenswrapper[5050]: I1211 13:50:55.666117 5050 status_manager.go:851] "Failed to get status for pod" podUID="19a435e3-6f05-43af-af8d-6216a0306a47" pod="openshift-marketplace/certified-operators-qr26j" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-qr26j\": dial tcp 38.102.83.147:6443: connect: connection refused" Dec 11 13:50:55 crc kubenswrapper[5050]: I1211 13:50:55.680871 5050 scope.go:117] "RemoveContainer" containerID="6377df013b1ebe57e0e97327b1811556e09c1daa1fe33d72e24c6a7aa8a121c9" Dec 11 13:50:55 crc kubenswrapper[5050]: I1211 13:50:55.704269 5050 scope.go:117] "RemoveContainer" containerID="dc830007bd91bbaf0d691bea30b8eccc2bcfd27e6c94b428b50f9493d2ea6b93" Dec 11 13:50:55 crc kubenswrapper[5050]: I1211 13:50:55.718523 5050 scope.go:117] "RemoveContainer" containerID="039e0ec92a9d49248c88597aa940cbe4190a3efea4fa79b28c39e5e6c3475d6a" Dec 11 13:50:55 crc kubenswrapper[5050]: I1211 13:50:55.733848 5050 scope.go:117] "RemoveContainer" containerID="27f485196185adee8bc0a085d247e392dee7b1a6bbe7175fdbd60a09f00d556d" Dec 11 13:50:55 crc kubenswrapper[5050]: I1211 13:50:55.883538 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 11 13:50:55 crc kubenswrapper[5050]: I1211 13:50:55.884869 5050 status_manager.go:851] "Failed to get status for pod" podUID="b2ee71a3-392e-442c-aa3b-bec310a86031" pod="openshift-marketplace/community-operators-6n98v" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6n98v\": dial tcp 38.102.83.147:6443: connect: connection refused" Dec 11 13:50:55 crc kubenswrapper[5050]: I1211 13:50:55.885253 5050 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Dec 11 13:50:55 crc kubenswrapper[5050]: I1211 13:50:55.885746 5050 status_manager.go:851] "Failed to get status for pod" podUID="19a435e3-6f05-43af-af8d-6216a0306a47" pod="openshift-marketplace/certified-operators-qr26j" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-qr26j\": dial tcp 38.102.83.147:6443: connect: connection refused" Dec 11 13:50:55 crc kubenswrapper[5050]: I1211 13:50:55.886049 5050 status_manager.go:851] "Failed to get status for pod" podUID="ed489b52-31c7-44c8-b634-4a99e1644f65" pod="openshift-marketplace/redhat-marketplace-c2qj7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-c2qj7\": dial tcp 38.102.83.147:6443: connect: connection refused" Dec 11 13:50:55 crc kubenswrapper[5050]: I1211 13:50:55.886316 5050 status_manager.go:851] "Failed to get status for pod" podUID="5cc2be88-d194-4626-9f82-a4ccf377ce0d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Dec 11 13:50:55 crc kubenswrapper[5050]: I1211 13:50:55.954691 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5cc2be88-d194-4626-9f82-a4ccf377ce0d-kube-api-access\") pod \"5cc2be88-d194-4626-9f82-a4ccf377ce0d\" (UID: \"5cc2be88-d194-4626-9f82-a4ccf377ce0d\") " Dec 11 13:50:55 crc kubenswrapper[5050]: I1211 13:50:55.954729 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5cc2be88-d194-4626-9f82-a4ccf377ce0d-var-lock\") pod \"5cc2be88-d194-4626-9f82-a4ccf377ce0d\" (UID: \"5cc2be88-d194-4626-9f82-a4ccf377ce0d\") " Dec 11 13:50:55 crc kubenswrapper[5050]: I1211 13:50:55.954863 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5cc2be88-d194-4626-9f82-a4ccf377ce0d-kubelet-dir\") pod \"5cc2be88-d194-4626-9f82-a4ccf377ce0d\" (UID: \"5cc2be88-d194-4626-9f82-a4ccf377ce0d\") " Dec 11 13:50:55 crc kubenswrapper[5050]: I1211 13:50:55.954868 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5cc2be88-d194-4626-9f82-a4ccf377ce0d-var-lock" (OuterVolumeSpecName: "var-lock") pod "5cc2be88-d194-4626-9f82-a4ccf377ce0d" (UID: "5cc2be88-d194-4626-9f82-a4ccf377ce0d"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 13:50:55 crc kubenswrapper[5050]: I1211 13:50:55.954985 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5cc2be88-d194-4626-9f82-a4ccf377ce0d-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "5cc2be88-d194-4626-9f82-a4ccf377ce0d" (UID: "5cc2be88-d194-4626-9f82-a4ccf377ce0d"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 13:50:55 crc kubenswrapper[5050]: I1211 13:50:55.955195 5050 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5cc2be88-d194-4626-9f82-a4ccf377ce0d-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 11 13:50:55 crc kubenswrapper[5050]: I1211 13:50:55.955219 5050 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5cc2be88-d194-4626-9f82-a4ccf377ce0d-var-lock\") on node \"crc\" DevicePath \"\"" Dec 11 13:50:55 crc kubenswrapper[5050]: I1211 13:50:55.958877 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5cc2be88-d194-4626-9f82-a4ccf377ce0d-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "5cc2be88-d194-4626-9f82-a4ccf377ce0d" (UID: "5cc2be88-d194-4626-9f82-a4ccf377ce0d"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 13:50:56 crc kubenswrapper[5050]: I1211 13:50:56.056163 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5cc2be88-d194-4626-9f82-a4ccf377ce0d-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 11 13:50:56 crc kubenswrapper[5050]: I1211 13:50:56.472598 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 11 13:50:56 crc kubenswrapper[5050]: I1211 13:50:56.473992 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 13:50:56 crc kubenswrapper[5050]: I1211 13:50:56.475133 5050 status_manager.go:851] "Failed to get status for pod" podUID="b2ee71a3-392e-442c-aa3b-bec310a86031" pod="openshift-marketplace/community-operators-6n98v" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6n98v\": dial tcp 38.102.83.147:6443: connect: connection refused" Dec 11 13:50:56 crc kubenswrapper[5050]: I1211 13:50:56.475378 5050 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Dec 11 13:50:56 crc kubenswrapper[5050]: I1211 13:50:56.475683 5050 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Dec 11 13:50:56 crc kubenswrapper[5050]: I1211 13:50:56.476135 5050 status_manager.go:851] "Failed to get status for pod" podUID="19a435e3-6f05-43af-af8d-6216a0306a47" pod="openshift-marketplace/certified-operators-qr26j" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-qr26j\": dial tcp 38.102.83.147:6443: connect: connection refused" Dec 11 13:50:56 crc kubenswrapper[5050]: I1211 13:50:56.476526 5050 status_manager.go:851] "Failed to get status for pod" podUID="ed489b52-31c7-44c8-b634-4a99e1644f65" pod="openshift-marketplace/redhat-marketplace-c2qj7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-c2qj7\": dial tcp 38.102.83.147:6443: connect: connection refused" Dec 11 13:50:56 crc kubenswrapper[5050]: I1211 13:50:56.476713 5050 status_manager.go:851] "Failed to get status for pod" podUID="5cc2be88-d194-4626-9f82-a4ccf377ce0d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Dec 11 13:50:56 crc kubenswrapper[5050]: I1211 13:50:56.561274 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Dec 11 13:50:56 crc kubenswrapper[5050]: I1211 13:50:56.561333 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Dec 11 13:50:56 crc kubenswrapper[5050]: I1211 13:50:56.561403 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Dec 11 13:50:56 crc kubenswrapper[5050]: I1211 13:50:56.561408 5050 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 13:50:56 crc kubenswrapper[5050]: I1211 13:50:56.561470 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 13:50:56 crc kubenswrapper[5050]: I1211 13:50:56.561495 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 13:50:56 crc kubenswrapper[5050]: I1211 13:50:56.561648 5050 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Dec 11 13:50:56 crc kubenswrapper[5050]: I1211 13:50:56.561666 5050 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 11 13:50:56 crc kubenswrapper[5050]: I1211 13:50:56.561674 5050 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Dec 11 13:50:56 crc kubenswrapper[5050]: I1211 13:50:56.675444 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 11 13:50:56 crc kubenswrapper[5050]: I1211 13:50:56.676171 5050 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="a8ed77421db2de6a328f1925acda12a41ca2a56cf101a962c541cf04afa16205" exitCode=0 Dec 11 13:50:56 crc kubenswrapper[5050]: I1211 13:50:56.676266 5050 scope.go:117] "RemoveContainer" containerID="d6f8c2f45cdfba5b4eb7bdbcb5d48fd2dcf70731aeee14cf3a54dbfc597b8af9" Dec 11 13:50:56 crc kubenswrapper[5050]: I1211 13:50:56.676275 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 13:50:56 crc kubenswrapper[5050]: I1211 13:50:56.679109 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qr26j" event={"ID":"19a435e3-6f05-43af-af8d-6216a0306a47","Type":"ContainerStarted","Data":"f5f769558566de6e907bac6258e77eb84f0af74ea3ded0f14dc2aea20d3a2903"} Dec 11 13:50:56 crc kubenswrapper[5050]: I1211 13:50:56.679611 5050 status_manager.go:851] "Failed to get status for pod" podUID="19a435e3-6f05-43af-af8d-6216a0306a47" pod="openshift-marketplace/certified-operators-qr26j" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-qr26j\": dial tcp 38.102.83.147:6443: connect: connection refused" Dec 11 13:50:56 crc kubenswrapper[5050]: I1211 13:50:56.680044 5050 status_manager.go:851] "Failed to get status for pod" podUID="5cc2be88-d194-4626-9f82-a4ccf377ce0d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Dec 11 13:50:56 crc kubenswrapper[5050]: I1211 13:50:56.680406 5050 status_manager.go:851] "Failed to get status for pod" podUID="ed489b52-31c7-44c8-b634-4a99e1644f65" pod="openshift-marketplace/redhat-marketplace-c2qj7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-c2qj7\": dial tcp 38.102.83.147:6443: connect: connection refused" Dec 11 13:50:56 crc kubenswrapper[5050]: I1211 13:50:56.680615 5050 status_manager.go:851] "Failed to get status for pod" podUID="b2ee71a3-392e-442c-aa3b-bec310a86031" pod="openshift-marketplace/community-operators-6n98v" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6n98v\": dial tcp 38.102.83.147:6443: connect: connection refused" Dec 11 13:50:56 crc kubenswrapper[5050]: I1211 13:50:56.680942 5050 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Dec 11 13:50:56 crc kubenswrapper[5050]: I1211 13:50:56.681201 5050 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Dec 11 13:50:56 crc kubenswrapper[5050]: I1211 13:50:56.681704 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"5cc2be88-d194-4626-9f82-a4ccf377ce0d","Type":"ContainerDied","Data":"6241fcea317d7837cddb5a29b2d8a144737b4d5c061392f56b5d841e9449ab1f"} Dec 11 13:50:56 crc kubenswrapper[5050]: I1211 13:50:56.681725 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 11 13:50:56 crc kubenswrapper[5050]: I1211 13:50:56.681736 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6241fcea317d7837cddb5a29b2d8a144737b4d5c061392f56b5d841e9449ab1f" Dec 11 13:50:56 crc kubenswrapper[5050]: I1211 13:50:56.704263 5050 status_manager.go:851] "Failed to get status for pod" podUID="b2ee71a3-392e-442c-aa3b-bec310a86031" pod="openshift-marketplace/community-operators-6n98v" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6n98v\": dial tcp 38.102.83.147:6443: connect: connection refused" Dec 11 13:50:56 crc kubenswrapper[5050]: I1211 13:50:56.704659 5050 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Dec 11 13:50:56 crc kubenswrapper[5050]: I1211 13:50:56.704996 5050 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Dec 11 13:50:56 crc kubenswrapper[5050]: I1211 13:50:56.705056 5050 scope.go:117] "RemoveContainer" containerID="4ae4bb2bd785bbc626814e84d88ab77f2705e63cd987329e5da3806265c35256" Dec 11 13:50:56 crc kubenswrapper[5050]: I1211 13:50:56.705459 5050 status_manager.go:851] "Failed to get status for pod" podUID="19a435e3-6f05-43af-af8d-6216a0306a47" pod="openshift-marketplace/certified-operators-qr26j" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-qr26j\": dial tcp 38.102.83.147:6443: connect: connection refused" Dec 11 13:50:56 crc kubenswrapper[5050]: I1211 13:50:56.705953 5050 status_manager.go:851] "Failed to get status for pod" podUID="ed489b52-31c7-44c8-b634-4a99e1644f65" pod="openshift-marketplace/redhat-marketplace-c2qj7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-c2qj7\": dial tcp 38.102.83.147:6443: connect: connection refused" Dec 11 13:50:56 crc kubenswrapper[5050]: I1211 13:50:56.706315 5050 status_manager.go:851] "Failed to get status for pod" podUID="5cc2be88-d194-4626-9f82-a4ccf377ce0d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Dec 11 13:50:56 crc kubenswrapper[5050]: I1211 13:50:56.706691 5050 status_manager.go:851] "Failed to get status for pod" podUID="b2ee71a3-392e-442c-aa3b-bec310a86031" pod="openshift-marketplace/community-operators-6n98v" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6n98v\": dial tcp 38.102.83.147:6443: connect: connection refused" Dec 11 13:50:56 crc kubenswrapper[5050]: I1211 13:50:56.706954 5050 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Dec 11 13:50:56 crc kubenswrapper[5050]: I1211 13:50:56.707221 5050 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Dec 11 13:50:56 crc kubenswrapper[5050]: I1211 13:50:56.707451 5050 status_manager.go:851] "Failed to get status for pod" podUID="19a435e3-6f05-43af-af8d-6216a0306a47" pod="openshift-marketplace/certified-operators-qr26j" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-qr26j\": dial tcp 38.102.83.147:6443: connect: connection refused" Dec 11 13:50:56 crc kubenswrapper[5050]: I1211 13:50:56.707633 5050 status_manager.go:851] "Failed to get status for pod" podUID="ed489b52-31c7-44c8-b634-4a99e1644f65" pod="openshift-marketplace/redhat-marketplace-c2qj7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-c2qj7\": dial tcp 38.102.83.147:6443: connect: connection refused" Dec 11 13:50:56 crc kubenswrapper[5050]: I1211 13:50:56.707788 5050 status_manager.go:851] "Failed to get status for pod" podUID="5cc2be88-d194-4626-9f82-a4ccf377ce0d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Dec 11 13:50:56 crc kubenswrapper[5050]: I1211 13:50:56.717939 5050 scope.go:117] "RemoveContainer" containerID="5c2283a6b4d7e02755ef3c112c65859a852db517e3fd1f33779935a45d11db1f" Dec 11 13:50:56 crc kubenswrapper[5050]: I1211 13:50:56.730810 5050 scope.go:117] "RemoveContainer" containerID="dc076730e23713ac357165823064125008a54267d6bdfc7f63dfe6f59fda1d8f" Dec 11 13:50:56 crc kubenswrapper[5050]: I1211 13:50:56.741923 5050 scope.go:117] "RemoveContainer" containerID="a8ed77421db2de6a328f1925acda12a41ca2a56cf101a962c541cf04afa16205" Dec 11 13:50:56 crc kubenswrapper[5050]: I1211 13:50:56.755077 5050 scope.go:117] "RemoveContainer" containerID="d61208f7fc4b2f939d418793e7d1e0f9b12ce4a9c4fd5be0d223050a428f4fad" Dec 11 13:50:56 crc kubenswrapper[5050]: I1211 13:50:56.771734 5050 scope.go:117] "RemoveContainer" containerID="d6f8c2f45cdfba5b4eb7bdbcb5d48fd2dcf70731aeee14cf3a54dbfc597b8af9" Dec 11 13:50:56 crc kubenswrapper[5050]: E1211 13:50:56.773361 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d6f8c2f45cdfba5b4eb7bdbcb5d48fd2dcf70731aeee14cf3a54dbfc597b8af9\": container with ID starting with d6f8c2f45cdfba5b4eb7bdbcb5d48fd2dcf70731aeee14cf3a54dbfc597b8af9 not found: ID does not exist" containerID="d6f8c2f45cdfba5b4eb7bdbcb5d48fd2dcf70731aeee14cf3a54dbfc597b8af9" Dec 11 13:50:56 crc kubenswrapper[5050]: I1211 13:50:56.773408 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d6f8c2f45cdfba5b4eb7bdbcb5d48fd2dcf70731aeee14cf3a54dbfc597b8af9"} err="failed to get container status \"d6f8c2f45cdfba5b4eb7bdbcb5d48fd2dcf70731aeee14cf3a54dbfc597b8af9\": rpc error: code = NotFound desc = could not find container 
\"d6f8c2f45cdfba5b4eb7bdbcb5d48fd2dcf70731aeee14cf3a54dbfc597b8af9\": container with ID starting with d6f8c2f45cdfba5b4eb7bdbcb5d48fd2dcf70731aeee14cf3a54dbfc597b8af9 not found: ID does not exist" Dec 11 13:50:56 crc kubenswrapper[5050]: I1211 13:50:56.773437 5050 scope.go:117] "RemoveContainer" containerID="4ae4bb2bd785bbc626814e84d88ab77f2705e63cd987329e5da3806265c35256" Dec 11 13:50:56 crc kubenswrapper[5050]: E1211 13:50:56.773994 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4ae4bb2bd785bbc626814e84d88ab77f2705e63cd987329e5da3806265c35256\": container with ID starting with 4ae4bb2bd785bbc626814e84d88ab77f2705e63cd987329e5da3806265c35256 not found: ID does not exist" containerID="4ae4bb2bd785bbc626814e84d88ab77f2705e63cd987329e5da3806265c35256" Dec 11 13:50:56 crc kubenswrapper[5050]: I1211 13:50:56.774078 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ae4bb2bd785bbc626814e84d88ab77f2705e63cd987329e5da3806265c35256"} err="failed to get container status \"4ae4bb2bd785bbc626814e84d88ab77f2705e63cd987329e5da3806265c35256\": rpc error: code = NotFound desc = could not find container \"4ae4bb2bd785bbc626814e84d88ab77f2705e63cd987329e5da3806265c35256\": container with ID starting with 4ae4bb2bd785bbc626814e84d88ab77f2705e63cd987329e5da3806265c35256 not found: ID does not exist" Dec 11 13:50:56 crc kubenswrapper[5050]: I1211 13:50:56.774102 5050 scope.go:117] "RemoveContainer" containerID="5c2283a6b4d7e02755ef3c112c65859a852db517e3fd1f33779935a45d11db1f" Dec 11 13:50:56 crc kubenswrapper[5050]: E1211 13:50:56.774395 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5c2283a6b4d7e02755ef3c112c65859a852db517e3fd1f33779935a45d11db1f\": container with ID starting with 5c2283a6b4d7e02755ef3c112c65859a852db517e3fd1f33779935a45d11db1f not found: ID does not exist" containerID="5c2283a6b4d7e02755ef3c112c65859a852db517e3fd1f33779935a45d11db1f" Dec 11 13:50:56 crc kubenswrapper[5050]: I1211 13:50:56.774416 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c2283a6b4d7e02755ef3c112c65859a852db517e3fd1f33779935a45d11db1f"} err="failed to get container status \"5c2283a6b4d7e02755ef3c112c65859a852db517e3fd1f33779935a45d11db1f\": rpc error: code = NotFound desc = could not find container \"5c2283a6b4d7e02755ef3c112c65859a852db517e3fd1f33779935a45d11db1f\": container with ID starting with 5c2283a6b4d7e02755ef3c112c65859a852db517e3fd1f33779935a45d11db1f not found: ID does not exist" Dec 11 13:50:56 crc kubenswrapper[5050]: I1211 13:50:56.774434 5050 scope.go:117] "RemoveContainer" containerID="dc076730e23713ac357165823064125008a54267d6bdfc7f63dfe6f59fda1d8f" Dec 11 13:50:56 crc kubenswrapper[5050]: E1211 13:50:56.774691 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc076730e23713ac357165823064125008a54267d6bdfc7f63dfe6f59fda1d8f\": container with ID starting with dc076730e23713ac357165823064125008a54267d6bdfc7f63dfe6f59fda1d8f not found: ID does not exist" containerID="dc076730e23713ac357165823064125008a54267d6bdfc7f63dfe6f59fda1d8f" Dec 11 13:50:56 crc kubenswrapper[5050]: I1211 13:50:56.774711 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc076730e23713ac357165823064125008a54267d6bdfc7f63dfe6f59fda1d8f"} 
err="failed to get container status \"dc076730e23713ac357165823064125008a54267d6bdfc7f63dfe6f59fda1d8f\": rpc error: code = NotFound desc = could not find container \"dc076730e23713ac357165823064125008a54267d6bdfc7f63dfe6f59fda1d8f\": container with ID starting with dc076730e23713ac357165823064125008a54267d6bdfc7f63dfe6f59fda1d8f not found: ID does not exist" Dec 11 13:50:56 crc kubenswrapper[5050]: I1211 13:50:56.774724 5050 scope.go:117] "RemoveContainer" containerID="a8ed77421db2de6a328f1925acda12a41ca2a56cf101a962c541cf04afa16205" Dec 11 13:50:56 crc kubenswrapper[5050]: E1211 13:50:56.774933 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a8ed77421db2de6a328f1925acda12a41ca2a56cf101a962c541cf04afa16205\": container with ID starting with a8ed77421db2de6a328f1925acda12a41ca2a56cf101a962c541cf04afa16205 not found: ID does not exist" containerID="a8ed77421db2de6a328f1925acda12a41ca2a56cf101a962c541cf04afa16205" Dec 11 13:50:56 crc kubenswrapper[5050]: I1211 13:50:56.774954 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a8ed77421db2de6a328f1925acda12a41ca2a56cf101a962c541cf04afa16205"} err="failed to get container status \"a8ed77421db2de6a328f1925acda12a41ca2a56cf101a962c541cf04afa16205\": rpc error: code = NotFound desc = could not find container \"a8ed77421db2de6a328f1925acda12a41ca2a56cf101a962c541cf04afa16205\": container with ID starting with a8ed77421db2de6a328f1925acda12a41ca2a56cf101a962c541cf04afa16205 not found: ID does not exist" Dec 11 13:50:56 crc kubenswrapper[5050]: I1211 13:50:56.774969 5050 scope.go:117] "RemoveContainer" containerID="d61208f7fc4b2f939d418793e7d1e0f9b12ce4a9c4fd5be0d223050a428f4fad" Dec 11 13:50:56 crc kubenswrapper[5050]: E1211 13:50:56.775336 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d61208f7fc4b2f939d418793e7d1e0f9b12ce4a9c4fd5be0d223050a428f4fad\": container with ID starting with d61208f7fc4b2f939d418793e7d1e0f9b12ce4a9c4fd5be0d223050a428f4fad not found: ID does not exist" containerID="d61208f7fc4b2f939d418793e7d1e0f9b12ce4a9c4fd5be0d223050a428f4fad" Dec 11 13:50:56 crc kubenswrapper[5050]: I1211 13:50:56.775354 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d61208f7fc4b2f939d418793e7d1e0f9b12ce4a9c4fd5be0d223050a428f4fad"} err="failed to get container status \"d61208f7fc4b2f939d418793e7d1e0f9b12ce4a9c4fd5be0d223050a428f4fad\": rpc error: code = NotFound desc = could not find container \"d61208f7fc4b2f939d418793e7d1e0f9b12ce4a9c4fd5be0d223050a428f4fad\": container with ID starting with d61208f7fc4b2f939d418793e7d1e0f9b12ce4a9c4fd5be0d223050a428f4fad not found: ID does not exist" Dec 11 13:50:56 crc kubenswrapper[5050]: E1211 13:50:56.791186 5050 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.147:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.18802d79a7dda0f5 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image 
\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 13:50:54.459846901 +0000 UTC m=+145.303569487,LastTimestamp:2025-12-11 13:50:54.459846901 +0000 UTC m=+145.303569487,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 11 13:50:57 crc kubenswrapper[5050]: I1211 13:50:57.554002 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Dec 11 13:50:57 crc kubenswrapper[5050]: E1211 13:50:57.562302 5050 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T13:50:57Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T13:50:57Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T13:50:57Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T13:50:57Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:34f522750c260aee8d7d3d8c16bba58727f5dfb964b4aecc8b09e3e6f7056f12\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:9acec1ab208005d77c0ac2722e15bf8620aff3b5c4ab7910d45b05a66d2bb912\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1628955991},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:280527b88ffb9a3722a8575a09953fdf0ffded772ca59c8ebce3a4cd2c62d7cd\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:9c58f6c7c4b4317092e82d86d8cc80efd47c4982299f9bbdb4e8444d4d3df9ca\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1234628436},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:04ccbfd75344536604a32b67f586e94cdcd8de3f756189e2f5b8e26a203d0423\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d1fb80806f091a0f5bb1f602d8de38f67c4a42b5076e43f559fa77b8ca880d37\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1202228571},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:be25e28aabd5a6e06b4df55e58fa4be426c96c57e3387969e0070e6058149d04\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:e6f1bca5d60a93ec9f9bd8ae305cd4ded3f62b2a
51bbfdf59e056ea57c0c5b9f\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1154573130},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7
866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBy
tes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.147:6443: connect: connection refused" Dec 11 13:50:57 crc kubenswrapper[5050]: E1211 13:50:57.562755 5050 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.147:6443: connect: connection refused" Dec 11 13:50:57 crc kubenswrapper[5050]: E1211 13:50:57.563108 5050 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.147:6443: connect: connection refused" Dec 11 13:50:57 crc kubenswrapper[5050]: E1211 13:50:57.563422 5050 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.147:6443: connect: connection refused" Dec 11 13:50:57 crc kubenswrapper[5050]: E1211 13:50:57.563857 5050 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.147:6443: connect: connection refused" Dec 11 13:50:57 crc kubenswrapper[5050]: E1211 13:50:57.563904 5050 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 11 13:50:57 crc kubenswrapper[5050]: E1211 13:50:57.711304 5050 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.147:6443: connect: connection refused" Dec 11 13:50:57 crc kubenswrapper[5050]: E1211 13:50:57.711975 5050 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.147:6443: connect: connection refused" Dec 11 13:50:57 crc kubenswrapper[5050]: E1211 13:50:57.712343 5050 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.147:6443: connect: connection refused" Dec 11 13:50:57 crc kubenswrapper[5050]: E1211 13:50:57.712610 5050 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial 
tcp 38.102.83.147:6443: connect: connection refused" Dec 11 13:50:57 crc kubenswrapper[5050]: E1211 13:50:57.712846 5050 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.147:6443: connect: connection refused" Dec 11 13:50:57 crc kubenswrapper[5050]: I1211 13:50:57.712927 5050 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Dec 11 13:50:57 crc kubenswrapper[5050]: E1211 13:50:57.713159 5050 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.147:6443: connect: connection refused" interval="200ms" Dec 11 13:50:57 crc kubenswrapper[5050]: E1211 13:50:57.913654 5050 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.147:6443: connect: connection refused" interval="400ms" Dec 11 13:50:58 crc kubenswrapper[5050]: E1211 13:50:58.314449 5050 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.147:6443: connect: connection refused" interval="800ms" Dec 11 13:50:59 crc kubenswrapper[5050]: E1211 13:50:59.115761 5050 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.147:6443: connect: connection refused" interval="1.6s" Dec 11 13:50:59 crc kubenswrapper[5050]: I1211 13:50:59.549765 5050 status_manager.go:851] "Failed to get status for pod" podUID="ed489b52-31c7-44c8-b634-4a99e1644f65" pod="openshift-marketplace/redhat-marketplace-c2qj7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-c2qj7\": dial tcp 38.102.83.147:6443: connect: connection refused" Dec 11 13:50:59 crc kubenswrapper[5050]: I1211 13:50:59.550115 5050 status_manager.go:851] "Failed to get status for pod" podUID="5cc2be88-d194-4626-9f82-a4ccf377ce0d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Dec 11 13:50:59 crc kubenswrapper[5050]: I1211 13:50:59.550548 5050 status_manager.go:851] "Failed to get status for pod" podUID="b2ee71a3-392e-442c-aa3b-bec310a86031" pod="openshift-marketplace/community-operators-6n98v" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6n98v\": dial tcp 38.102.83.147:6443: connect: connection refused" Dec 11 13:50:59 crc kubenswrapper[5050]: I1211 13:50:59.550907 5050 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Dec 11 13:50:59 crc kubenswrapper[5050]: I1211 13:50:59.551193 5050 status_manager.go:851] 
"Failed to get status for pod" podUID="19a435e3-6f05-43af-af8d-6216a0306a47" pod="openshift-marketplace/certified-operators-qr26j" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-qr26j\": dial tcp 38.102.83.147:6443: connect: connection refused" Dec 11 13:51:00 crc kubenswrapper[5050]: E1211 13:51:00.716585 5050 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.147:6443: connect: connection refused" interval="3.2s" Dec 11 13:51:03 crc kubenswrapper[5050]: I1211 13:51:03.472539 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-qr26j" Dec 11 13:51:03 crc kubenswrapper[5050]: I1211 13:51:03.472887 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-qr26j" Dec 11 13:51:03 crc kubenswrapper[5050]: I1211 13:51:03.512593 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-qr26j" Dec 11 13:51:03 crc kubenswrapper[5050]: I1211 13:51:03.513273 5050 status_manager.go:851] "Failed to get status for pod" podUID="ed489b52-31c7-44c8-b634-4a99e1644f65" pod="openshift-marketplace/redhat-marketplace-c2qj7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-c2qj7\": dial tcp 38.102.83.147:6443: connect: connection refused" Dec 11 13:51:03 crc kubenswrapper[5050]: I1211 13:51:03.513703 5050 status_manager.go:851] "Failed to get status for pod" podUID="5cc2be88-d194-4626-9f82-a4ccf377ce0d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Dec 11 13:51:03 crc kubenswrapper[5050]: I1211 13:51:03.514070 5050 status_manager.go:851] "Failed to get status for pod" podUID="b2ee71a3-392e-442c-aa3b-bec310a86031" pod="openshift-marketplace/community-operators-6n98v" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6n98v\": dial tcp 38.102.83.147:6443: connect: connection refused" Dec 11 13:51:03 crc kubenswrapper[5050]: I1211 13:51:03.514649 5050 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Dec 11 13:51:03 crc kubenswrapper[5050]: I1211 13:51:03.514920 5050 status_manager.go:851] "Failed to get status for pod" podUID="19a435e3-6f05-43af-af8d-6216a0306a47" pod="openshift-marketplace/certified-operators-qr26j" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-qr26j\": dial tcp 38.102.83.147:6443: connect: connection refused" Dec 11 13:51:03 crc kubenswrapper[5050]: I1211 13:51:03.777197 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-qr26j" Dec 11 13:51:03 crc kubenswrapper[5050]: I1211 13:51:03.778812 5050 status_manager.go:851] "Failed to get status for pod" 
podUID="ed489b52-31c7-44c8-b634-4a99e1644f65" pod="openshift-marketplace/redhat-marketplace-c2qj7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-c2qj7\": dial tcp 38.102.83.147:6443: connect: connection refused" Dec 11 13:51:03 crc kubenswrapper[5050]: I1211 13:51:03.779384 5050 status_manager.go:851] "Failed to get status for pod" podUID="5cc2be88-d194-4626-9f82-a4ccf377ce0d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Dec 11 13:51:03 crc kubenswrapper[5050]: I1211 13:51:03.780757 5050 status_manager.go:851] "Failed to get status for pod" podUID="b2ee71a3-392e-442c-aa3b-bec310a86031" pod="openshift-marketplace/community-operators-6n98v" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6n98v\": dial tcp 38.102.83.147:6443: connect: connection refused" Dec 11 13:51:03 crc kubenswrapper[5050]: I1211 13:51:03.781396 5050 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Dec 11 13:51:03 crc kubenswrapper[5050]: I1211 13:51:03.781703 5050 status_manager.go:851] "Failed to get status for pod" podUID="19a435e3-6f05-43af-af8d-6216a0306a47" pod="openshift-marketplace/certified-operators-qr26j" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-qr26j\": dial tcp 38.102.83.147:6443: connect: connection refused" Dec 11 13:51:03 crc kubenswrapper[5050]: E1211 13:51:03.917995 5050 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.147:6443: connect: connection refused" interval="6.4s" Dec 11 13:51:05 crc kubenswrapper[5050]: I1211 13:51:05.547183 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 13:51:05 crc kubenswrapper[5050]: I1211 13:51:05.548735 5050 status_manager.go:851] "Failed to get status for pod" podUID="ed489b52-31c7-44c8-b634-4a99e1644f65" pod="openshift-marketplace/redhat-marketplace-c2qj7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-c2qj7\": dial tcp 38.102.83.147:6443: connect: connection refused" Dec 11 13:51:05 crc kubenswrapper[5050]: I1211 13:51:05.549118 5050 status_manager.go:851] "Failed to get status for pod" podUID="5cc2be88-d194-4626-9f82-a4ccf377ce0d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Dec 11 13:51:05 crc kubenswrapper[5050]: I1211 13:51:05.549332 5050 status_manager.go:851] "Failed to get status for pod" podUID="b2ee71a3-392e-442c-aa3b-bec310a86031" pod="openshift-marketplace/community-operators-6n98v" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6n98v\": dial tcp 38.102.83.147:6443: connect: connection refused" Dec 11 13:51:05 crc kubenswrapper[5050]: I1211 13:51:05.549509 5050 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Dec 11 13:51:05 crc kubenswrapper[5050]: I1211 13:51:05.549664 5050 status_manager.go:851] "Failed to get status for pod" podUID="19a435e3-6f05-43af-af8d-6216a0306a47" pod="openshift-marketplace/certified-operators-qr26j" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-qr26j\": dial tcp 38.102.83.147:6443: connect: connection refused" Dec 11 13:51:05 crc kubenswrapper[5050]: I1211 13:51:05.561538 5050 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="8faf83f9-4f21-437e-89d4-28a1f993604a" Dec 11 13:51:05 crc kubenswrapper[5050]: I1211 13:51:05.561578 5050 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="8faf83f9-4f21-437e-89d4-28a1f993604a" Dec 11 13:51:05 crc kubenswrapper[5050]: E1211 13:51:05.562104 5050 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 13:51:05 crc kubenswrapper[5050]: I1211 13:51:05.562909 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 13:51:05 crc kubenswrapper[5050]: W1211 13:51:05.589977 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-e4c055dec48ca5a5eb48bcc1465964e54d9d29cc8cac7dfa83844481633241c3 WatchSource:0}: Error finding container e4c055dec48ca5a5eb48bcc1465964e54d9d29cc8cac7dfa83844481633241c3: Status 404 returned error can't find the container with id e4c055dec48ca5a5eb48bcc1465964e54d9d29cc8cac7dfa83844481633241c3 Dec 11 13:51:05 crc kubenswrapper[5050]: I1211 13:51:05.733605 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"e4c055dec48ca5a5eb48bcc1465964e54d9d29cc8cac7dfa83844481633241c3"} Dec 11 13:51:06 crc kubenswrapper[5050]: I1211 13:51:06.742629 5050 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="c911ab1aeef4e08246e5444bf1dac8ddfe77fccc617e5b9e75eca7d123ae4d00" exitCode=0 Dec 11 13:51:06 crc kubenswrapper[5050]: I1211 13:51:06.742892 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"c911ab1aeef4e08246e5444bf1dac8ddfe77fccc617e5b9e75eca7d123ae4d00"} Dec 11 13:51:06 crc kubenswrapper[5050]: I1211 13:51:06.743258 5050 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="8faf83f9-4f21-437e-89d4-28a1f993604a" Dec 11 13:51:06 crc kubenswrapper[5050]: I1211 13:51:06.743276 5050 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="8faf83f9-4f21-437e-89d4-28a1f993604a" Dec 11 13:51:06 crc kubenswrapper[5050]: I1211 13:51:06.743885 5050 status_manager.go:851] "Failed to get status for pod" podUID="ed489b52-31c7-44c8-b634-4a99e1644f65" pod="openshift-marketplace/redhat-marketplace-c2qj7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-c2qj7\": dial tcp 38.102.83.147:6443: connect: connection refused" Dec 11 13:51:06 crc kubenswrapper[5050]: E1211 13:51:06.743896 5050 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 13:51:06 crc kubenswrapper[5050]: I1211 13:51:06.744091 5050 status_manager.go:851] "Failed to get status for pod" podUID="5cc2be88-d194-4626-9f82-a4ccf377ce0d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Dec 11 13:51:06 crc kubenswrapper[5050]: I1211 13:51:06.744474 5050 status_manager.go:851] "Failed to get status for pod" podUID="b2ee71a3-392e-442c-aa3b-bec310a86031" pod="openshift-marketplace/community-operators-6n98v" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-6n98v\": dial tcp 38.102.83.147:6443: connect: connection refused" Dec 11 13:51:06 crc kubenswrapper[5050]: I1211 13:51:06.745040 5050 status_manager.go:851] "Failed to get status for pod" 
podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Dec 11 13:51:06 crc kubenswrapper[5050]: I1211 13:51:06.745301 5050 status_manager.go:851] "Failed to get status for pod" podUID="19a435e3-6f05-43af-af8d-6216a0306a47" pod="openshift-marketplace/certified-operators-qr26j" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-qr26j\": dial tcp 38.102.83.147:6443: connect: connection refused" Dec 11 13:51:06 crc kubenswrapper[5050]: E1211 13:51:06.792921 5050 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.147:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.18802d79a7dda0f5 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 13:50:54.459846901 +0000 UTC m=+145.303569487,LastTimestamp:2025-12-11 13:50:54.459846901 +0000 UTC m=+145.303569487,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 11 13:51:07 crc kubenswrapper[5050]: I1211 13:51:07.754069 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Dec 11 13:51:07 crc kubenswrapper[5050]: I1211 13:51:07.754384 5050 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="a5990aa25c5e6fd0c4328adb0efd594ff4a31f1ee1734928beaa2608b6f16ccf" exitCode=1 Dec 11 13:51:07 crc kubenswrapper[5050]: I1211 13:51:07.754441 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"a5990aa25c5e6fd0c4328adb0efd594ff4a31f1ee1734928beaa2608b6f16ccf"} Dec 11 13:51:07 crc kubenswrapper[5050]: I1211 13:51:07.754904 5050 scope.go:117] "RemoveContainer" containerID="a5990aa25c5e6fd0c4328adb0efd594ff4a31f1ee1734928beaa2608b6f16ccf" Dec 11 13:51:07 crc kubenswrapper[5050]: I1211 13:51:07.759520 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"6f05160aeeaa71c3b7be23b2e4cd6b761ffbab4d63139a3d7e57408cd0a5f4d8"} Dec 11 13:51:07 crc kubenswrapper[5050]: I1211 13:51:07.759553 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"bdc5023a5b80cbc534e9fae8c924add56e2bae71eb5c0725fc0e14f4b3419495"} 
Dec 11 13:51:08 crc kubenswrapper[5050]: I1211 13:51:08.767113 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Dec 11 13:51:08 crc kubenswrapper[5050]: I1211 13:51:08.767415 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"134dcca79c4eddbde6843383841b57eedfeb7b3ef8680a39b2a851f7ca914c11"} Dec 11 13:51:08 crc kubenswrapper[5050]: I1211 13:51:08.771106 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"4d6e5891cba16d7e722dc6d36f1bc0efba85af34f2cddb3b6ec44ecf80f30c8c"} Dec 11 13:51:08 crc kubenswrapper[5050]: I1211 13:51:08.771134 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"025643cdc6b639ff6be4545a52a775fdb061b704aafb617ad4832f99aaab9c49"} Dec 11 13:51:08 crc kubenswrapper[5050]: I1211 13:51:08.771146 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"93551d09e7dc2ccc7fc70e424b46c49b182b58739d72238debaf3ef2d24e3b26"} Dec 11 13:51:08 crc kubenswrapper[5050]: I1211 13:51:08.771567 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 13:51:08 crc kubenswrapper[5050]: I1211 13:51:08.771777 5050 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="8faf83f9-4f21-437e-89d4-28a1f993604a" Dec 11 13:51:08 crc kubenswrapper[5050]: I1211 13:51:08.771902 5050 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="8faf83f9-4f21-437e-89d4-28a1f993604a" Dec 11 13:51:09 crc kubenswrapper[5050]: I1211 13:51:09.275390 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 11 13:51:09 crc kubenswrapper[5050]: I1211 13:51:09.275919 5050 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Dec 11 13:51:09 crc kubenswrapper[5050]: I1211 13:51:09.275957 5050 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Dec 11 13:51:09 crc kubenswrapper[5050]: I1211 13:51:09.941830 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-xnlm8" Dec 11 13:51:10 crc kubenswrapper[5050]: I1211 13:51:10.563561 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 13:51:10 crc kubenswrapper[5050]: I1211 
13:51:10.563878 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 13:51:10 crc kubenswrapper[5050]: I1211 13:51:10.571413 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 13:51:10 crc kubenswrapper[5050]: I1211 13:51:10.797061 5050 patch_prober.go:28] interesting pod/machine-config-daemon-wcb2s container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 13:51:10 crc kubenswrapper[5050]: I1211 13:51:10.797131 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 13:51:12 crc kubenswrapper[5050]: I1211 13:51:12.244430 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-l4w2d" podUID="fb05f4f3-f5be-4823-934f-14d5c48b43c1" containerName="oauth-openshift" containerID="cri-o://d748d9f42dc03e51951c66af131a100bb2dd61db202faab1acd4f50abc2e82ec" gracePeriod=15 Dec 11 13:51:12 crc kubenswrapper[5050]: I1211 13:51:12.604243 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-l4w2d" Dec 11 13:51:12 crc kubenswrapper[5050]: I1211 13:51:12.795368 5050 generic.go:334] "Generic (PLEG): container finished" podID="fb05f4f3-f5be-4823-934f-14d5c48b43c1" containerID="d748d9f42dc03e51951c66af131a100bb2dd61db202faab1acd4f50abc2e82ec" exitCode=0 Dec 11 13:51:12 crc kubenswrapper[5050]: I1211 13:51:12.795423 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-l4w2d" event={"ID":"fb05f4f3-f5be-4823-934f-14d5c48b43c1","Type":"ContainerDied","Data":"d748d9f42dc03e51951c66af131a100bb2dd61db202faab1acd4f50abc2e82ec"} Dec 11 13:51:12 crc kubenswrapper[5050]: I1211 13:51:12.795459 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-l4w2d" Dec 11 13:51:12 crc kubenswrapper[5050]: I1211 13:51:12.795483 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-l4w2d" event={"ID":"fb05f4f3-f5be-4823-934f-14d5c48b43c1","Type":"ContainerDied","Data":"46d98d9627e4081e6cfc4e647640929d9823db15e9bb787f200eb7c0a070447e"} Dec 11 13:51:12 crc kubenswrapper[5050]: I1211 13:51:12.795502 5050 scope.go:117] "RemoveContainer" containerID="d748d9f42dc03e51951c66af131a100bb2dd61db202faab1acd4f50abc2e82ec" Dec 11 13:51:12 crc kubenswrapper[5050]: I1211 13:51:12.814720 5050 scope.go:117] "RemoveContainer" containerID="d748d9f42dc03e51951c66af131a100bb2dd61db202faab1acd4f50abc2e82ec" Dec 11 13:51:12 crc kubenswrapper[5050]: E1211 13:51:12.815686 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d748d9f42dc03e51951c66af131a100bb2dd61db202faab1acd4f50abc2e82ec\": container with ID starting with d748d9f42dc03e51951c66af131a100bb2dd61db202faab1acd4f50abc2e82ec not found: ID does not exist" containerID="d748d9f42dc03e51951c66af131a100bb2dd61db202faab1acd4f50abc2e82ec" Dec 11 13:51:12 crc kubenswrapper[5050]: I1211 13:51:12.815753 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d748d9f42dc03e51951c66af131a100bb2dd61db202faab1acd4f50abc2e82ec"} err="failed to get container status \"d748d9f42dc03e51951c66af131a100bb2dd61db202faab1acd4f50abc2e82ec\": rpc error: code = NotFound desc = could not find container \"d748d9f42dc03e51951c66af131a100bb2dd61db202faab1acd4f50abc2e82ec\": container with ID starting with d748d9f42dc03e51951c66af131a100bb2dd61db202faab1acd4f50abc2e82ec not found: ID does not exist" Dec 11 13:51:13 crc kubenswrapper[5050]: I1211 13:51:13.782825 5050 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 13:51:13 crc kubenswrapper[5050]: I1211 13:51:13.801705 5050 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="8faf83f9-4f21-437e-89d4-28a1f993604a" Dec 11 13:51:13 crc kubenswrapper[5050]: I1211 13:51:13.801734 5050 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="8faf83f9-4f21-437e-89d4-28a1f993604a" Dec 11 13:51:13 crc kubenswrapper[5050]: I1211 13:51:13.805365 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 13:51:13 crc kubenswrapper[5050]: I1211 13:51:13.888741 5050 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="c5b4feb0-bb2d-462a-a457-ab1a7e8ae042" Dec 11 13:51:14 crc kubenswrapper[5050]: I1211 13:51:14.500824 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 11 13:51:14 crc kubenswrapper[5050]: I1211 13:51:14.806742 5050 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="8faf83f9-4f21-437e-89d4-28a1f993604a" Dec 11 13:51:14 crc kubenswrapper[5050]: I1211 13:51:14.806777 5050 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="8faf83f9-4f21-437e-89d4-28a1f993604a" Dec 11 13:51:14 crc 
kubenswrapper[5050]: I1211 13:51:14.810463 5050 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="c5b4feb0-bb2d-462a-a457-ab1a7e8ae042" Dec 11 13:51:15 crc kubenswrapper[5050]: I1211 13:51:15.027296 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/fb05f4f3-f5be-4823-934f-14d5c48b43c1-v4-0-config-system-service-ca\") pod \"fb05f4f3-f5be-4823-934f-14d5c48b43c1\" (UID: \"fb05f4f3-f5be-4823-934f-14d5c48b43c1\") " Dec 11 13:51:15 crc kubenswrapper[5050]: I1211 13:51:15.027343 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/fb05f4f3-f5be-4823-934f-14d5c48b43c1-v4-0-config-system-serving-cert\") pod \"fb05f4f3-f5be-4823-934f-14d5c48b43c1\" (UID: \"fb05f4f3-f5be-4823-934f-14d5c48b43c1\") " Dec 11 13:51:15 crc kubenswrapper[5050]: I1211 13:51:15.027360 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/fb05f4f3-f5be-4823-934f-14d5c48b43c1-audit-dir\") pod \"fb05f4f3-f5be-4823-934f-14d5c48b43c1\" (UID: \"fb05f4f3-f5be-4823-934f-14d5c48b43c1\") " Dec 11 13:51:15 crc kubenswrapper[5050]: I1211 13:51:15.027376 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/fb05f4f3-f5be-4823-934f-14d5c48b43c1-audit-policies\") pod \"fb05f4f3-f5be-4823-934f-14d5c48b43c1\" (UID: \"fb05f4f3-f5be-4823-934f-14d5c48b43c1\") " Dec 11 13:51:15 crc kubenswrapper[5050]: I1211 13:51:15.027422 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/fb05f4f3-f5be-4823-934f-14d5c48b43c1-v4-0-config-system-session\") pod \"fb05f4f3-f5be-4823-934f-14d5c48b43c1\" (UID: \"fb05f4f3-f5be-4823-934f-14d5c48b43c1\") " Dec 11 13:51:15 crc kubenswrapper[5050]: I1211 13:51:15.027458 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/fb05f4f3-f5be-4823-934f-14d5c48b43c1-v4-0-config-user-template-provider-selection\") pod \"fb05f4f3-f5be-4823-934f-14d5c48b43c1\" (UID: \"fb05f4f3-f5be-4823-934f-14d5c48b43c1\") " Dec 11 13:51:15 crc kubenswrapper[5050]: I1211 13:51:15.027486 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/fb05f4f3-f5be-4823-934f-14d5c48b43c1-v4-0-config-user-idp-0-file-data\") pod \"fb05f4f3-f5be-4823-934f-14d5c48b43c1\" (UID: \"fb05f4f3-f5be-4823-934f-14d5c48b43c1\") " Dec 11 13:51:15 crc kubenswrapper[5050]: I1211 13:51:15.027502 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/fb05f4f3-f5be-4823-934f-14d5c48b43c1-v4-0-config-system-router-certs\") pod \"fb05f4f3-f5be-4823-934f-14d5c48b43c1\" (UID: \"fb05f4f3-f5be-4823-934f-14d5c48b43c1\") " Dec 11 13:51:15 crc kubenswrapper[5050]: I1211 13:51:15.027508 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fb05f4f3-f5be-4823-934f-14d5c48b43c1-audit-dir" (OuterVolumeSpecName: "audit-dir") pod 
"fb05f4f3-f5be-4823-934f-14d5c48b43c1" (UID: "fb05f4f3-f5be-4823-934f-14d5c48b43c1"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 13:51:15 crc kubenswrapper[5050]: I1211 13:51:15.027542 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kmdlb\" (UniqueName: \"kubernetes.io/projected/fb05f4f3-f5be-4823-934f-14d5c48b43c1-kube-api-access-kmdlb\") pod \"fb05f4f3-f5be-4823-934f-14d5c48b43c1\" (UID: \"fb05f4f3-f5be-4823-934f-14d5c48b43c1\") " Dec 11 13:51:15 crc kubenswrapper[5050]: I1211 13:51:15.027564 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/fb05f4f3-f5be-4823-934f-14d5c48b43c1-v4-0-config-user-template-error\") pod \"fb05f4f3-f5be-4823-934f-14d5c48b43c1\" (UID: \"fb05f4f3-f5be-4823-934f-14d5c48b43c1\") " Dec 11 13:51:15 crc kubenswrapper[5050]: I1211 13:51:15.027585 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/fb05f4f3-f5be-4823-934f-14d5c48b43c1-v4-0-config-system-cliconfig\") pod \"fb05f4f3-f5be-4823-934f-14d5c48b43c1\" (UID: \"fb05f4f3-f5be-4823-934f-14d5c48b43c1\") " Dec 11 13:51:15 crc kubenswrapper[5050]: I1211 13:51:15.028323 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fb05f4f3-f5be-4823-934f-14d5c48b43c1-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "fb05f4f3-f5be-4823-934f-14d5c48b43c1" (UID: "fb05f4f3-f5be-4823-934f-14d5c48b43c1"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 13:51:15 crc kubenswrapper[5050]: I1211 13:51:15.028348 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fb05f4f3-f5be-4823-934f-14d5c48b43c1-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "fb05f4f3-f5be-4823-934f-14d5c48b43c1" (UID: "fb05f4f3-f5be-4823-934f-14d5c48b43c1"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 13:51:15 crc kubenswrapper[5050]: I1211 13:51:15.027616 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/fb05f4f3-f5be-4823-934f-14d5c48b43c1-v4-0-config-system-ocp-branding-template\") pod \"fb05f4f3-f5be-4823-934f-14d5c48b43c1\" (UID: \"fb05f4f3-f5be-4823-934f-14d5c48b43c1\") " Dec 11 13:51:15 crc kubenswrapper[5050]: I1211 13:51:15.028676 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fb05f4f3-f5be-4823-934f-14d5c48b43c1-v4-0-config-system-trusted-ca-bundle\") pod \"fb05f4f3-f5be-4823-934f-14d5c48b43c1\" (UID: \"fb05f4f3-f5be-4823-934f-14d5c48b43c1\") " Dec 11 13:51:15 crc kubenswrapper[5050]: I1211 13:51:15.028355 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fb05f4f3-f5be-4823-934f-14d5c48b43c1-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "fb05f4f3-f5be-4823-934f-14d5c48b43c1" (UID: "fb05f4f3-f5be-4823-934f-14d5c48b43c1"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 13:51:15 crc kubenswrapper[5050]: I1211 13:51:15.029076 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fb05f4f3-f5be-4823-934f-14d5c48b43c1-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "fb05f4f3-f5be-4823-934f-14d5c48b43c1" (UID: "fb05f4f3-f5be-4823-934f-14d5c48b43c1"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 13:51:15 crc kubenswrapper[5050]: I1211 13:51:15.028703 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/fb05f4f3-f5be-4823-934f-14d5c48b43c1-v4-0-config-user-template-login\") pod \"fb05f4f3-f5be-4823-934f-14d5c48b43c1\" (UID: \"fb05f4f3-f5be-4823-934f-14d5c48b43c1\") " Dec 11 13:51:15 crc kubenswrapper[5050]: I1211 13:51:15.030278 5050 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/fb05f4f3-f5be-4823-934f-14d5c48b43c1-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Dec 11 13:51:15 crc kubenswrapper[5050]: I1211 13:51:15.030307 5050 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fb05f4f3-f5be-4823-934f-14d5c48b43c1-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 13:51:15 crc kubenswrapper[5050]: I1211 13:51:15.030319 5050 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/fb05f4f3-f5be-4823-934f-14d5c48b43c1-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Dec 11 13:51:15 crc kubenswrapper[5050]: I1211 13:51:15.030328 5050 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/fb05f4f3-f5be-4823-934f-14d5c48b43c1-audit-dir\") on node \"crc\" DevicePath \"\"" Dec 11 13:51:15 crc kubenswrapper[5050]: I1211 13:51:15.030336 5050 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/fb05f4f3-f5be-4823-934f-14d5c48b43c1-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 11 13:51:15 crc kubenswrapper[5050]: I1211 13:51:15.038276 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb05f4f3-f5be-4823-934f-14d5c48b43c1-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "fb05f4f3-f5be-4823-934f-14d5c48b43c1" (UID: "fb05f4f3-f5be-4823-934f-14d5c48b43c1"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 13:51:15 crc kubenswrapper[5050]: I1211 13:51:15.038539 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb05f4f3-f5be-4823-934f-14d5c48b43c1-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "fb05f4f3-f5be-4823-934f-14d5c48b43c1" (UID: "fb05f4f3-f5be-4823-934f-14d5c48b43c1"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 13:51:15 crc kubenswrapper[5050]: I1211 13:51:15.038844 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb05f4f3-f5be-4823-934f-14d5c48b43c1-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "fb05f4f3-f5be-4823-934f-14d5c48b43c1" (UID: "fb05f4f3-f5be-4823-934f-14d5c48b43c1"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 13:51:15 crc kubenswrapper[5050]: I1211 13:51:15.039039 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb05f4f3-f5be-4823-934f-14d5c48b43c1-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "fb05f4f3-f5be-4823-934f-14d5c48b43c1" (UID: "fb05f4f3-f5be-4823-934f-14d5c48b43c1"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 13:51:15 crc kubenswrapper[5050]: I1211 13:51:15.039213 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb05f4f3-f5be-4823-934f-14d5c48b43c1-kube-api-access-kmdlb" (OuterVolumeSpecName: "kube-api-access-kmdlb") pod "fb05f4f3-f5be-4823-934f-14d5c48b43c1" (UID: "fb05f4f3-f5be-4823-934f-14d5c48b43c1"). InnerVolumeSpecName "kube-api-access-kmdlb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 13:51:15 crc kubenswrapper[5050]: I1211 13:51:15.039615 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb05f4f3-f5be-4823-934f-14d5c48b43c1-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "fb05f4f3-f5be-4823-934f-14d5c48b43c1" (UID: "fb05f4f3-f5be-4823-934f-14d5c48b43c1"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 13:51:15 crc kubenswrapper[5050]: I1211 13:51:15.039782 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb05f4f3-f5be-4823-934f-14d5c48b43c1-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "fb05f4f3-f5be-4823-934f-14d5c48b43c1" (UID: "fb05f4f3-f5be-4823-934f-14d5c48b43c1"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 13:51:15 crc kubenswrapper[5050]: I1211 13:51:15.040169 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb05f4f3-f5be-4823-934f-14d5c48b43c1-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "fb05f4f3-f5be-4823-934f-14d5c48b43c1" (UID: "fb05f4f3-f5be-4823-934f-14d5c48b43c1"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 13:51:15 crc kubenswrapper[5050]: I1211 13:51:15.040341 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb05f4f3-f5be-4823-934f-14d5c48b43c1-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "fb05f4f3-f5be-4823-934f-14d5c48b43c1" (UID: "fb05f4f3-f5be-4823-934f-14d5c48b43c1"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 13:51:15 crc kubenswrapper[5050]: I1211 13:51:15.132660 5050 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/fb05f4f3-f5be-4823-934f-14d5c48b43c1-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Dec 11 13:51:15 crc kubenswrapper[5050]: I1211 13:51:15.132710 5050 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/fb05f4f3-f5be-4823-934f-14d5c48b43c1-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Dec 11 13:51:15 crc kubenswrapper[5050]: I1211 13:51:15.132725 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kmdlb\" (UniqueName: \"kubernetes.io/projected/fb05f4f3-f5be-4823-934f-14d5c48b43c1-kube-api-access-kmdlb\") on node \"crc\" DevicePath \"\"" Dec 11 13:51:15 crc kubenswrapper[5050]: I1211 13:51:15.132738 5050 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/fb05f4f3-f5be-4823-934f-14d5c48b43c1-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Dec 11 13:51:15 crc kubenswrapper[5050]: I1211 13:51:15.132754 5050 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/fb05f4f3-f5be-4823-934f-14d5c48b43c1-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Dec 11 13:51:15 crc kubenswrapper[5050]: I1211 13:51:15.132766 5050 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/fb05f4f3-f5be-4823-934f-14d5c48b43c1-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Dec 11 13:51:15 crc kubenswrapper[5050]: I1211 13:51:15.132778 5050 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/fb05f4f3-f5be-4823-934f-14d5c48b43c1-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 11 13:51:15 crc kubenswrapper[5050]: I1211 13:51:15.132790 5050 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/fb05f4f3-f5be-4823-934f-14d5c48b43c1-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Dec 11 13:51:15 crc kubenswrapper[5050]: I1211 13:51:15.132803 5050 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/fb05f4f3-f5be-4823-934f-14d5c48b43c1-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Dec 11 13:51:19 crc kubenswrapper[5050]: I1211 13:51:19.275136 5050 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Dec 11 13:51:19 crc kubenswrapper[5050]: I1211 13:51:19.276170 5050 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Dec 11 13:51:24 
crc kubenswrapper[5050]: I1211 13:51:24.120913 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Dec 11 13:51:24 crc kubenswrapper[5050]: I1211 13:51:24.386317 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Dec 11 13:51:24 crc kubenswrapper[5050]: I1211 13:51:24.472166 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Dec 11 13:51:24 crc kubenswrapper[5050]: I1211 13:51:24.697184 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Dec 11 13:51:25 crc kubenswrapper[5050]: I1211 13:51:25.342109 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Dec 11 13:51:25 crc kubenswrapper[5050]: I1211 13:51:25.403267 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Dec 11 13:51:25 crc kubenswrapper[5050]: I1211 13:51:25.736554 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Dec 11 13:51:26 crc kubenswrapper[5050]: I1211 13:51:26.094251 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Dec 11 13:51:26 crc kubenswrapper[5050]: I1211 13:51:26.448163 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Dec 11 13:51:26 crc kubenswrapper[5050]: I1211 13:51:26.579570 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Dec 11 13:51:26 crc kubenswrapper[5050]: I1211 13:51:26.643216 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Dec 11 13:51:26 crc kubenswrapper[5050]: I1211 13:51:26.773590 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Dec 11 13:51:26 crc kubenswrapper[5050]: I1211 13:51:26.825793 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Dec 11 13:51:26 crc kubenswrapper[5050]: I1211 13:51:26.830233 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Dec 11 13:51:26 crc kubenswrapper[5050]: I1211 13:51:26.839048 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Dec 11 13:51:27 crc kubenswrapper[5050]: I1211 13:51:27.113729 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Dec 11 13:51:27 crc kubenswrapper[5050]: I1211 13:51:27.156445 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Dec 11 13:51:27 crc kubenswrapper[5050]: I1211 13:51:27.209815 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Dec 11 13:51:27 crc kubenswrapper[5050]: I1211 13:51:27.255719 5050 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Dec 11 13:51:27 crc kubenswrapper[5050]: I1211 13:51:27.410397 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Dec 11 13:51:27 crc kubenswrapper[5050]: I1211 13:51:27.476306 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Dec 11 13:51:27 crc kubenswrapper[5050]: I1211 13:51:27.541561 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Dec 11 13:51:27 crc kubenswrapper[5050]: I1211 13:51:27.731200 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Dec 11 13:51:27 crc kubenswrapper[5050]: I1211 13:51:27.801074 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Dec 11 13:51:27 crc kubenswrapper[5050]: I1211 13:51:27.902315 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Dec 11 13:51:27 crc kubenswrapper[5050]: I1211 13:51:27.984984 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Dec 11 13:51:28 crc kubenswrapper[5050]: I1211 13:51:28.013975 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Dec 11 13:51:28 crc kubenswrapper[5050]: I1211 13:51:28.057300 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Dec 11 13:51:28 crc kubenswrapper[5050]: I1211 13:51:28.075856 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Dec 11 13:51:28 crc kubenswrapper[5050]: I1211 13:51:28.108340 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Dec 11 13:51:28 crc kubenswrapper[5050]: I1211 13:51:28.197670 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Dec 11 13:51:28 crc kubenswrapper[5050]: I1211 13:51:28.334798 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Dec 11 13:51:28 crc kubenswrapper[5050]: I1211 13:51:28.445594 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Dec 11 13:51:28 crc kubenswrapper[5050]: I1211 13:51:28.621452 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Dec 11 13:51:28 crc kubenswrapper[5050]: I1211 13:51:28.677904 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Dec 11 13:51:28 crc kubenswrapper[5050]: I1211 13:51:28.680350 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Dec 11 13:51:28 crc kubenswrapper[5050]: I1211 13:51:28.816084 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Dec 11 13:51:28 crc kubenswrapper[5050]: I1211 13:51:28.865957 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Dec 11 13:51:28 crc kubenswrapper[5050]: I1211 13:51:28.914659 5050 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Dec 11 13:51:29 crc kubenswrapper[5050]: I1211 13:51:29.092686 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Dec 11 13:51:29 crc kubenswrapper[5050]: I1211 13:51:29.258985 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Dec 11 13:51:29 crc kubenswrapper[5050]: I1211 13:51:29.276245 5050 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Dec 11 13:51:29 crc kubenswrapper[5050]: I1211 13:51:29.276303 5050 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Dec 11 13:51:29 crc kubenswrapper[5050]: I1211 13:51:29.276374 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 11 13:51:29 crc kubenswrapper[5050]: I1211 13:51:29.276974 5050 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"134dcca79c4eddbde6843383841b57eedfeb7b3ef8680a39b2a851f7ca914c11"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container kube-controller-manager failed startup probe, will be restarted" Dec 11 13:51:29 crc kubenswrapper[5050]: I1211 13:51:29.277098 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" containerID="cri-o://134dcca79c4eddbde6843383841b57eedfeb7b3ef8680a39b2a851f7ca914c11" gracePeriod=30 Dec 11 13:51:29 crc kubenswrapper[5050]: I1211 13:51:29.313920 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Dec 11 13:51:29 crc kubenswrapper[5050]: I1211 13:51:29.390856 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Dec 11 13:51:29 crc kubenswrapper[5050]: I1211 13:51:29.405808 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Dec 11 13:51:29 crc kubenswrapper[5050]: I1211 13:51:29.516745 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Dec 11 13:51:29 crc kubenswrapper[5050]: I1211 13:51:29.528222 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Dec 11 13:51:29 crc kubenswrapper[5050]: I1211 13:51:29.535868 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Dec 11 13:51:29 crc kubenswrapper[5050]: I1211 13:51:29.537280 5050 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Dec 11 13:51:29 crc kubenswrapper[5050]: I1211 13:51:29.626816 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Dec 11 13:51:29 crc kubenswrapper[5050]: I1211 13:51:29.640529 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Dec 11 13:51:29 crc kubenswrapper[5050]: I1211 13:51:29.709258 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Dec 11 13:51:29 crc kubenswrapper[5050]: I1211 13:51:29.725390 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Dec 11 13:51:29 crc kubenswrapper[5050]: I1211 13:51:29.857941 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Dec 11 13:51:29 crc kubenswrapper[5050]: I1211 13:51:29.881958 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Dec 11 13:51:29 crc kubenswrapper[5050]: I1211 13:51:29.956127 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Dec 11 13:51:29 crc kubenswrapper[5050]: I1211 13:51:29.987780 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Dec 11 13:51:30 crc kubenswrapper[5050]: I1211 13:51:30.078181 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Dec 11 13:51:30 crc kubenswrapper[5050]: I1211 13:51:30.180773 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Dec 11 13:51:30 crc kubenswrapper[5050]: I1211 13:51:30.212462 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Dec 11 13:51:30 crc kubenswrapper[5050]: I1211 13:51:30.242332 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Dec 11 13:51:30 crc kubenswrapper[5050]: I1211 13:51:30.271418 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Dec 11 13:51:30 crc kubenswrapper[5050]: I1211 13:51:30.322318 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Dec 11 13:51:30 crc kubenswrapper[5050]: I1211 13:51:30.358951 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Dec 11 13:51:30 crc kubenswrapper[5050]: I1211 13:51:30.533750 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Dec 11 13:51:30 crc kubenswrapper[5050]: I1211 13:51:30.735963 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Dec 11 13:51:30 crc kubenswrapper[5050]: I1211 13:51:30.749593 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Dec 11 13:51:30 
crc kubenswrapper[5050]: I1211 13:51:30.784956 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Dec 11 13:51:30 crc kubenswrapper[5050]: I1211 13:51:30.856124 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Dec 11 13:51:30 crc kubenswrapper[5050]: I1211 13:51:30.926516 5050 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Dec 11 13:51:30 crc kubenswrapper[5050]: I1211 13:51:30.928534 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=36.928511625 podStartE2EDuration="36.928511625s" podCreationTimestamp="2025-12-11 13:50:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 13:51:13.837965281 +0000 UTC m=+164.681687887" watchObservedRunningTime="2025-12-11 13:51:30.928511625 +0000 UTC m=+181.772234201" Dec 11 13:51:30 crc kubenswrapper[5050]: I1211 13:51:30.940751 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-qr26j" podStartSLOduration=36.430015533 podStartE2EDuration="37.940711631s" podCreationTimestamp="2025-12-11 13:50:53 +0000 UTC" firstStartedPulling="2025-12-11 13:50:54.628132958 +0000 UTC m=+145.471855544" lastFinishedPulling="2025-12-11 13:50:56.138829036 +0000 UTC m=+146.982551642" observedRunningTime="2025-12-11 13:51:13.851891804 +0000 UTC m=+164.695614390" watchObservedRunningTime="2025-12-11 13:51:30.940711631 +0000 UTC m=+181.784434217" Dec 11 13:51:30 crc kubenswrapper[5050]: I1211 13:51:30.940989 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Dec 11 13:51:30 crc kubenswrapper[5050]: I1211 13:51:30.943055 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-l4w2d","openshift-kube-apiserver/kube-apiserver-crc","openshift-marketplace/community-operators-6n98v","openshift-marketplace/redhat-marketplace-c2qj7"] Dec 11 13:51:30 crc kubenswrapper[5050]: I1211 13:51:30.943130 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-authentication/oauth-openshift-d878cb77-dmcvf"] Dec 11 13:51:30 crc kubenswrapper[5050]: I1211 13:51:30.943719 5050 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="8faf83f9-4f21-437e-89d4-28a1f993604a" Dec 11 13:51:30 crc kubenswrapper[5050]: I1211 13:51:30.943766 5050 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="8faf83f9-4f21-437e-89d4-28a1f993604a" Dec 11 13:51:30 crc kubenswrapper[5050]: E1211 13:51:30.943766 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2ee71a3-392e-442c-aa3b-bec310a86031" containerName="extract-utilities" Dec 11 13:51:30 crc kubenswrapper[5050]: I1211 13:51:30.943795 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2ee71a3-392e-442c-aa3b-bec310a86031" containerName="extract-utilities" Dec 11 13:51:30 crc kubenswrapper[5050]: E1211 13:51:30.943809 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed489b52-31c7-44c8-b634-4a99e1644f65" containerName="registry-server" Dec 11 13:51:30 crc kubenswrapper[5050]: I1211 13:51:30.943816 5050 
state_mem.go:107] "Deleted CPUSet assignment" podUID="ed489b52-31c7-44c8-b634-4a99e1644f65" containerName="registry-server" Dec 11 13:51:30 crc kubenswrapper[5050]: E1211 13:51:30.943823 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb05f4f3-f5be-4823-934f-14d5c48b43c1" containerName="oauth-openshift" Dec 11 13:51:30 crc kubenswrapper[5050]: I1211 13:51:30.943830 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb05f4f3-f5be-4823-934f-14d5c48b43c1" containerName="oauth-openshift" Dec 11 13:51:30 crc kubenswrapper[5050]: E1211 13:51:30.943845 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2ee71a3-392e-442c-aa3b-bec310a86031" containerName="extract-content" Dec 11 13:51:30 crc kubenswrapper[5050]: I1211 13:51:30.943850 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2ee71a3-392e-442c-aa3b-bec310a86031" containerName="extract-content" Dec 11 13:51:30 crc kubenswrapper[5050]: E1211 13:51:30.943861 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5cc2be88-d194-4626-9f82-a4ccf377ce0d" containerName="installer" Dec 11 13:51:30 crc kubenswrapper[5050]: I1211 13:51:30.943867 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="5cc2be88-d194-4626-9f82-a4ccf377ce0d" containerName="installer" Dec 11 13:51:30 crc kubenswrapper[5050]: E1211 13:51:30.943881 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed489b52-31c7-44c8-b634-4a99e1644f65" containerName="extract-utilities" Dec 11 13:51:30 crc kubenswrapper[5050]: I1211 13:51:30.943887 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed489b52-31c7-44c8-b634-4a99e1644f65" containerName="extract-utilities" Dec 11 13:51:30 crc kubenswrapper[5050]: E1211 13:51:30.943895 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2ee71a3-392e-442c-aa3b-bec310a86031" containerName="registry-server" Dec 11 13:51:30 crc kubenswrapper[5050]: I1211 13:51:30.943901 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2ee71a3-392e-442c-aa3b-bec310a86031" containerName="registry-server" Dec 11 13:51:30 crc kubenswrapper[5050]: E1211 13:51:30.943910 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed489b52-31c7-44c8-b634-4a99e1644f65" containerName="extract-content" Dec 11 13:51:30 crc kubenswrapper[5050]: I1211 13:51:30.943915 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed489b52-31c7-44c8-b634-4a99e1644f65" containerName="extract-content" Dec 11 13:51:30 crc kubenswrapper[5050]: I1211 13:51:30.944303 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2ee71a3-392e-442c-aa3b-bec310a86031" containerName="registry-server" Dec 11 13:51:30 crc kubenswrapper[5050]: I1211 13:51:30.944320 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed489b52-31c7-44c8-b634-4a99e1644f65" containerName="registry-server" Dec 11 13:51:30 crc kubenswrapper[5050]: I1211 13:51:30.944329 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb05f4f3-f5be-4823-934f-14d5c48b43c1" containerName="oauth-openshift" Dec 11 13:51:30 crc kubenswrapper[5050]: I1211 13:51:30.944339 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="5cc2be88-d194-4626-9f82-a4ccf377ce0d" containerName="installer" Dec 11 13:51:30 crc kubenswrapper[5050]: I1211 13:51:30.944962 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-d878cb77-dmcvf" Dec 11 13:51:30 crc kubenswrapper[5050]: I1211 13:51:30.947333 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Dec 11 13:51:30 crc kubenswrapper[5050]: I1211 13:51:30.948529 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Dec 11 13:51:30 crc kubenswrapper[5050]: I1211 13:51:30.948885 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Dec 11 13:51:30 crc kubenswrapper[5050]: I1211 13:51:30.948960 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Dec 11 13:51:30 crc kubenswrapper[5050]: I1211 13:51:30.949318 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Dec 11 13:51:30 crc kubenswrapper[5050]: I1211 13:51:30.949686 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Dec 11 13:51:30 crc kubenswrapper[5050]: I1211 13:51:30.949695 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Dec 11 13:51:30 crc kubenswrapper[5050]: I1211 13:51:30.950083 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Dec 11 13:51:30 crc kubenswrapper[5050]: I1211 13:51:30.950337 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Dec 11 13:51:30 crc kubenswrapper[5050]: I1211 13:51:30.950606 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Dec 11 13:51:30 crc kubenswrapper[5050]: I1211 13:51:30.951543 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Dec 11 13:51:30 crc kubenswrapper[5050]: I1211 13:51:30.951562 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Dec 11 13:51:30 crc kubenswrapper[5050]: I1211 13:51:30.954540 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Dec 11 13:51:30 crc kubenswrapper[5050]: I1211 13:51:30.955287 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 13:51:30 crc kubenswrapper[5050]: I1211 13:51:30.960149 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Dec 11 13:51:30 crc kubenswrapper[5050]: I1211 13:51:30.963386 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Dec 11 13:51:30 crc kubenswrapper[5050]: I1211 13:51:30.994532 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=17.994498628 podStartE2EDuration="17.994498628s" podCreationTimestamp="2025-12-11 13:51:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-12-11 13:51:30.992880105 +0000 UTC m=+181.836602691" watchObservedRunningTime="2025-12-11 13:51:30.994498628 +0000 UTC m=+181.838221214" Dec 11 13:51:31 crc kubenswrapper[5050]: I1211 13:51:31.004914 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Dec 11 13:51:31 crc kubenswrapper[5050]: I1211 13:51:31.033964 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Dec 11 13:51:31 crc kubenswrapper[5050]: I1211 13:51:31.052243 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/effe4522-49a7-4d34-b64d-6ab0012f5548-v4-0-config-system-session\") pod \"oauth-openshift-d878cb77-dmcvf\" (UID: \"effe4522-49a7-4d34-b64d-6ab0012f5548\") " pod="openshift-authentication/oauth-openshift-d878cb77-dmcvf" Dec 11 13:51:31 crc kubenswrapper[5050]: I1211 13:51:31.052328 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/effe4522-49a7-4d34-b64d-6ab0012f5548-v4-0-config-system-router-certs\") pod \"oauth-openshift-d878cb77-dmcvf\" (UID: \"effe4522-49a7-4d34-b64d-6ab0012f5548\") " pod="openshift-authentication/oauth-openshift-d878cb77-dmcvf" Dec 11 13:51:31 crc kubenswrapper[5050]: I1211 13:51:31.052362 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/effe4522-49a7-4d34-b64d-6ab0012f5548-audit-dir\") pod \"oauth-openshift-d878cb77-dmcvf\" (UID: \"effe4522-49a7-4d34-b64d-6ab0012f5548\") " pod="openshift-authentication/oauth-openshift-d878cb77-dmcvf" Dec 11 13:51:31 crc kubenswrapper[5050]: I1211 13:51:31.052421 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/effe4522-49a7-4d34-b64d-6ab0012f5548-v4-0-config-system-service-ca\") pod \"oauth-openshift-d878cb77-dmcvf\" (UID: \"effe4522-49a7-4d34-b64d-6ab0012f5548\") " pod="openshift-authentication/oauth-openshift-d878cb77-dmcvf" Dec 11 13:51:31 crc kubenswrapper[5050]: I1211 13:51:31.052439 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/effe4522-49a7-4d34-b64d-6ab0012f5548-v4-0-config-user-template-error\") pod \"oauth-openshift-d878cb77-dmcvf\" (UID: \"effe4522-49a7-4d34-b64d-6ab0012f5548\") " pod="openshift-authentication/oauth-openshift-d878cb77-dmcvf" Dec 11 13:51:31 crc kubenswrapper[5050]: I1211 13:51:31.052466 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/effe4522-49a7-4d34-b64d-6ab0012f5548-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-d878cb77-dmcvf\" (UID: \"effe4522-49a7-4d34-b64d-6ab0012f5548\") " pod="openshift-authentication/oauth-openshift-d878cb77-dmcvf" Dec 11 13:51:31 crc kubenswrapper[5050]: I1211 13:51:31.052488 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/effe4522-49a7-4d34-b64d-6ab0012f5548-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-d878cb77-dmcvf\" (UID: \"effe4522-49a7-4d34-b64d-6ab0012f5548\") " pod="openshift-authentication/oauth-openshift-d878cb77-dmcvf" Dec 11 13:51:31 crc kubenswrapper[5050]: I1211 13:51:31.052504 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vc85w\" (UniqueName: \"kubernetes.io/projected/effe4522-49a7-4d34-b64d-6ab0012f5548-kube-api-access-vc85w\") pod \"oauth-openshift-d878cb77-dmcvf\" (UID: \"effe4522-49a7-4d34-b64d-6ab0012f5548\") " pod="openshift-authentication/oauth-openshift-d878cb77-dmcvf" Dec 11 13:51:31 crc kubenswrapper[5050]: I1211 13:51:31.052536 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/effe4522-49a7-4d34-b64d-6ab0012f5548-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-d878cb77-dmcvf\" (UID: \"effe4522-49a7-4d34-b64d-6ab0012f5548\") " pod="openshift-authentication/oauth-openshift-d878cb77-dmcvf" Dec 11 13:51:31 crc kubenswrapper[5050]: I1211 13:51:31.052554 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/effe4522-49a7-4d34-b64d-6ab0012f5548-v4-0-config-system-cliconfig\") pod \"oauth-openshift-d878cb77-dmcvf\" (UID: \"effe4522-49a7-4d34-b64d-6ab0012f5548\") " pod="openshift-authentication/oauth-openshift-d878cb77-dmcvf" Dec 11 13:51:31 crc kubenswrapper[5050]: I1211 13:51:31.052576 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/effe4522-49a7-4d34-b64d-6ab0012f5548-v4-0-config-user-template-login\") pod \"oauth-openshift-d878cb77-dmcvf\" (UID: \"effe4522-49a7-4d34-b64d-6ab0012f5548\") " pod="openshift-authentication/oauth-openshift-d878cb77-dmcvf" Dec 11 13:51:31 crc kubenswrapper[5050]: I1211 13:51:31.052600 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/effe4522-49a7-4d34-b64d-6ab0012f5548-audit-policies\") pod \"oauth-openshift-d878cb77-dmcvf\" (UID: \"effe4522-49a7-4d34-b64d-6ab0012f5548\") " pod="openshift-authentication/oauth-openshift-d878cb77-dmcvf" Dec 11 13:51:31 crc kubenswrapper[5050]: I1211 13:51:31.052616 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/effe4522-49a7-4d34-b64d-6ab0012f5548-v4-0-config-system-serving-cert\") pod \"oauth-openshift-d878cb77-dmcvf\" (UID: \"effe4522-49a7-4d34-b64d-6ab0012f5548\") " pod="openshift-authentication/oauth-openshift-d878cb77-dmcvf" Dec 11 13:51:31 crc kubenswrapper[5050]: I1211 13:51:31.052634 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/effe4522-49a7-4d34-b64d-6ab0012f5548-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-d878cb77-dmcvf\" (UID: \"effe4522-49a7-4d34-b64d-6ab0012f5548\") " pod="openshift-authentication/oauth-openshift-d878cb77-dmcvf" Dec 11 13:51:31 crc kubenswrapper[5050]: I1211 13:51:31.154344 5050 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/effe4522-49a7-4d34-b64d-6ab0012f5548-v4-0-config-system-service-ca\") pod \"oauth-openshift-d878cb77-dmcvf\" (UID: \"effe4522-49a7-4d34-b64d-6ab0012f5548\") " pod="openshift-authentication/oauth-openshift-d878cb77-dmcvf" Dec 11 13:51:31 crc kubenswrapper[5050]: I1211 13:51:31.154400 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/effe4522-49a7-4d34-b64d-6ab0012f5548-v4-0-config-user-template-error\") pod \"oauth-openshift-d878cb77-dmcvf\" (UID: \"effe4522-49a7-4d34-b64d-6ab0012f5548\") " pod="openshift-authentication/oauth-openshift-d878cb77-dmcvf" Dec 11 13:51:31 crc kubenswrapper[5050]: I1211 13:51:31.154431 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/effe4522-49a7-4d34-b64d-6ab0012f5548-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-d878cb77-dmcvf\" (UID: \"effe4522-49a7-4d34-b64d-6ab0012f5548\") " pod="openshift-authentication/oauth-openshift-d878cb77-dmcvf" Dec 11 13:51:31 crc kubenswrapper[5050]: I1211 13:51:31.154455 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/effe4522-49a7-4d34-b64d-6ab0012f5548-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-d878cb77-dmcvf\" (UID: \"effe4522-49a7-4d34-b64d-6ab0012f5548\") " pod="openshift-authentication/oauth-openshift-d878cb77-dmcvf" Dec 11 13:51:31 crc kubenswrapper[5050]: I1211 13:51:31.154476 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vc85w\" (UniqueName: \"kubernetes.io/projected/effe4522-49a7-4d34-b64d-6ab0012f5548-kube-api-access-vc85w\") pod \"oauth-openshift-d878cb77-dmcvf\" (UID: \"effe4522-49a7-4d34-b64d-6ab0012f5548\") " pod="openshift-authentication/oauth-openshift-d878cb77-dmcvf" Dec 11 13:51:31 crc kubenswrapper[5050]: I1211 13:51:31.154509 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/effe4522-49a7-4d34-b64d-6ab0012f5548-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-d878cb77-dmcvf\" (UID: \"effe4522-49a7-4d34-b64d-6ab0012f5548\") " pod="openshift-authentication/oauth-openshift-d878cb77-dmcvf" Dec 11 13:51:31 crc kubenswrapper[5050]: I1211 13:51:31.154527 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/effe4522-49a7-4d34-b64d-6ab0012f5548-v4-0-config-system-cliconfig\") pod \"oauth-openshift-d878cb77-dmcvf\" (UID: \"effe4522-49a7-4d34-b64d-6ab0012f5548\") " pod="openshift-authentication/oauth-openshift-d878cb77-dmcvf" Dec 11 13:51:31 crc kubenswrapper[5050]: I1211 13:51:31.154557 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/effe4522-49a7-4d34-b64d-6ab0012f5548-v4-0-config-user-template-login\") pod \"oauth-openshift-d878cb77-dmcvf\" (UID: \"effe4522-49a7-4d34-b64d-6ab0012f5548\") " pod="openshift-authentication/oauth-openshift-d878cb77-dmcvf" Dec 11 13:51:31 crc kubenswrapper[5050]: I1211 13:51:31.154580 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/effe4522-49a7-4d34-b64d-6ab0012f5548-audit-policies\") pod \"oauth-openshift-d878cb77-dmcvf\" (UID: \"effe4522-49a7-4d34-b64d-6ab0012f5548\") " pod="openshift-authentication/oauth-openshift-d878cb77-dmcvf" Dec 11 13:51:31 crc kubenswrapper[5050]: I1211 13:51:31.154598 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/effe4522-49a7-4d34-b64d-6ab0012f5548-v4-0-config-system-serving-cert\") pod \"oauth-openshift-d878cb77-dmcvf\" (UID: \"effe4522-49a7-4d34-b64d-6ab0012f5548\") " pod="openshift-authentication/oauth-openshift-d878cb77-dmcvf" Dec 11 13:51:31 crc kubenswrapper[5050]: I1211 13:51:31.154615 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/effe4522-49a7-4d34-b64d-6ab0012f5548-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-d878cb77-dmcvf\" (UID: \"effe4522-49a7-4d34-b64d-6ab0012f5548\") " pod="openshift-authentication/oauth-openshift-d878cb77-dmcvf" Dec 11 13:51:31 crc kubenswrapper[5050]: I1211 13:51:31.154638 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/effe4522-49a7-4d34-b64d-6ab0012f5548-v4-0-config-system-session\") pod \"oauth-openshift-d878cb77-dmcvf\" (UID: \"effe4522-49a7-4d34-b64d-6ab0012f5548\") " pod="openshift-authentication/oauth-openshift-d878cb77-dmcvf" Dec 11 13:51:31 crc kubenswrapper[5050]: I1211 13:51:31.154661 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/effe4522-49a7-4d34-b64d-6ab0012f5548-v4-0-config-system-router-certs\") pod \"oauth-openshift-d878cb77-dmcvf\" (UID: \"effe4522-49a7-4d34-b64d-6ab0012f5548\") " pod="openshift-authentication/oauth-openshift-d878cb77-dmcvf" Dec 11 13:51:31 crc kubenswrapper[5050]: I1211 13:51:31.154679 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/effe4522-49a7-4d34-b64d-6ab0012f5548-audit-dir\") pod \"oauth-openshift-d878cb77-dmcvf\" (UID: \"effe4522-49a7-4d34-b64d-6ab0012f5548\") " pod="openshift-authentication/oauth-openshift-d878cb77-dmcvf" Dec 11 13:51:31 crc kubenswrapper[5050]: I1211 13:51:31.154761 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/effe4522-49a7-4d34-b64d-6ab0012f5548-audit-dir\") pod \"oauth-openshift-d878cb77-dmcvf\" (UID: \"effe4522-49a7-4d34-b64d-6ab0012f5548\") " pod="openshift-authentication/oauth-openshift-d878cb77-dmcvf" Dec 11 13:51:31 crc kubenswrapper[5050]: I1211 13:51:31.156382 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/effe4522-49a7-4d34-b64d-6ab0012f5548-audit-policies\") pod \"oauth-openshift-d878cb77-dmcvf\" (UID: \"effe4522-49a7-4d34-b64d-6ab0012f5548\") " pod="openshift-authentication/oauth-openshift-d878cb77-dmcvf" Dec 11 13:51:31 crc kubenswrapper[5050]: I1211 13:51:31.156394 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/effe4522-49a7-4d34-b64d-6ab0012f5548-v4-0-config-system-cliconfig\") pod \"oauth-openshift-d878cb77-dmcvf\" (UID: 
\"effe4522-49a7-4d34-b64d-6ab0012f5548\") " pod="openshift-authentication/oauth-openshift-d878cb77-dmcvf" Dec 11 13:51:31 crc kubenswrapper[5050]: I1211 13:51:31.156646 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/effe4522-49a7-4d34-b64d-6ab0012f5548-v4-0-config-system-service-ca\") pod \"oauth-openshift-d878cb77-dmcvf\" (UID: \"effe4522-49a7-4d34-b64d-6ab0012f5548\") " pod="openshift-authentication/oauth-openshift-d878cb77-dmcvf" Dec 11 13:51:31 crc kubenswrapper[5050]: I1211 13:51:31.157112 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/effe4522-49a7-4d34-b64d-6ab0012f5548-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-d878cb77-dmcvf\" (UID: \"effe4522-49a7-4d34-b64d-6ab0012f5548\") " pod="openshift-authentication/oauth-openshift-d878cb77-dmcvf" Dec 11 13:51:31 crc kubenswrapper[5050]: I1211 13:51:31.160128 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/effe4522-49a7-4d34-b64d-6ab0012f5548-v4-0-config-user-template-error\") pod \"oauth-openshift-d878cb77-dmcvf\" (UID: \"effe4522-49a7-4d34-b64d-6ab0012f5548\") " pod="openshift-authentication/oauth-openshift-d878cb77-dmcvf" Dec 11 13:51:31 crc kubenswrapper[5050]: I1211 13:51:31.160193 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/effe4522-49a7-4d34-b64d-6ab0012f5548-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-d878cb77-dmcvf\" (UID: \"effe4522-49a7-4d34-b64d-6ab0012f5548\") " pod="openshift-authentication/oauth-openshift-d878cb77-dmcvf" Dec 11 13:51:31 crc kubenswrapper[5050]: I1211 13:51:31.160409 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/effe4522-49a7-4d34-b64d-6ab0012f5548-v4-0-config-system-serving-cert\") pod \"oauth-openshift-d878cb77-dmcvf\" (UID: \"effe4522-49a7-4d34-b64d-6ab0012f5548\") " pod="openshift-authentication/oauth-openshift-d878cb77-dmcvf" Dec 11 13:51:31 crc kubenswrapper[5050]: I1211 13:51:31.161387 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/effe4522-49a7-4d34-b64d-6ab0012f5548-v4-0-config-system-session\") pod \"oauth-openshift-d878cb77-dmcvf\" (UID: \"effe4522-49a7-4d34-b64d-6ab0012f5548\") " pod="openshift-authentication/oauth-openshift-d878cb77-dmcvf" Dec 11 13:51:31 crc kubenswrapper[5050]: I1211 13:51:31.161543 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/effe4522-49a7-4d34-b64d-6ab0012f5548-v4-0-config-system-router-certs\") pod \"oauth-openshift-d878cb77-dmcvf\" (UID: \"effe4522-49a7-4d34-b64d-6ab0012f5548\") " pod="openshift-authentication/oauth-openshift-d878cb77-dmcvf" Dec 11 13:51:31 crc kubenswrapper[5050]: I1211 13:51:31.162699 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/effe4522-49a7-4d34-b64d-6ab0012f5548-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-d878cb77-dmcvf\" (UID: \"effe4522-49a7-4d34-b64d-6ab0012f5548\") " 
pod="openshift-authentication/oauth-openshift-d878cb77-dmcvf" Dec 11 13:51:31 crc kubenswrapper[5050]: I1211 13:51:31.163056 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/effe4522-49a7-4d34-b64d-6ab0012f5548-v4-0-config-user-template-login\") pod \"oauth-openshift-d878cb77-dmcvf\" (UID: \"effe4522-49a7-4d34-b64d-6ab0012f5548\") " pod="openshift-authentication/oauth-openshift-d878cb77-dmcvf" Dec 11 13:51:31 crc kubenswrapper[5050]: I1211 13:51:31.163293 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/effe4522-49a7-4d34-b64d-6ab0012f5548-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-d878cb77-dmcvf\" (UID: \"effe4522-49a7-4d34-b64d-6ab0012f5548\") " pod="openshift-authentication/oauth-openshift-d878cb77-dmcvf" Dec 11 13:51:31 crc kubenswrapper[5050]: I1211 13:51:31.178390 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vc85w\" (UniqueName: \"kubernetes.io/projected/effe4522-49a7-4d34-b64d-6ab0012f5548-kube-api-access-vc85w\") pod \"oauth-openshift-d878cb77-dmcvf\" (UID: \"effe4522-49a7-4d34-b64d-6ab0012f5548\") " pod="openshift-authentication/oauth-openshift-d878cb77-dmcvf" Dec 11 13:51:31 crc kubenswrapper[5050]: I1211 13:51:31.273498 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-d878cb77-dmcvf" Dec 11 13:51:31 crc kubenswrapper[5050]: I1211 13:51:31.375737 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Dec 11 13:51:31 crc kubenswrapper[5050]: I1211 13:51:31.403458 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Dec 11 13:51:31 crc kubenswrapper[5050]: I1211 13:51:31.475212 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Dec 11 13:51:31 crc kubenswrapper[5050]: I1211 13:51:31.556560 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b2ee71a3-392e-442c-aa3b-bec310a86031" path="/var/lib/kubelet/pods/b2ee71a3-392e-442c-aa3b-bec310a86031/volumes" Dec 11 13:51:31 crc kubenswrapper[5050]: I1211 13:51:31.557885 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ed489b52-31c7-44c8-b634-4a99e1644f65" path="/var/lib/kubelet/pods/ed489b52-31c7-44c8-b634-4a99e1644f65/volumes" Dec 11 13:51:31 crc kubenswrapper[5050]: I1211 13:51:31.558620 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fb05f4f3-f5be-4823-934f-14d5c48b43c1" path="/var/lib/kubelet/pods/fb05f4f3-f5be-4823-934f-14d5c48b43c1/volumes" Dec 11 13:51:31 crc kubenswrapper[5050]: I1211 13:51:31.579193 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Dec 11 13:51:31 crc kubenswrapper[5050]: I1211 13:51:31.586031 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Dec 11 13:51:31 crc kubenswrapper[5050]: I1211 13:51:31.686579 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Dec 11 13:51:31 crc kubenswrapper[5050]: I1211 13:51:31.693628 5050 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-ingress"/"router-stats-default" Dec 11 13:51:31 crc kubenswrapper[5050]: I1211 13:51:31.693633 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Dec 11 13:51:31 crc kubenswrapper[5050]: I1211 13:51:31.701138 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Dec 11 13:51:31 crc kubenswrapper[5050]: I1211 13:51:31.778841 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Dec 11 13:51:31 crc kubenswrapper[5050]: I1211 13:51:31.881869 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Dec 11 13:51:31 crc kubenswrapper[5050]: I1211 13:51:31.886696 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Dec 11 13:51:32 crc kubenswrapper[5050]: I1211 13:51:32.103539 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Dec 11 13:51:32 crc kubenswrapper[5050]: I1211 13:51:32.116942 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Dec 11 13:51:32 crc kubenswrapper[5050]: I1211 13:51:32.139093 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Dec 11 13:51:32 crc kubenswrapper[5050]: I1211 13:51:32.189854 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Dec 11 13:51:32 crc kubenswrapper[5050]: I1211 13:51:32.329903 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Dec 11 13:51:32 crc kubenswrapper[5050]: I1211 13:51:32.367638 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Dec 11 13:51:32 crc kubenswrapper[5050]: I1211 13:51:32.469294 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Dec 11 13:51:32 crc kubenswrapper[5050]: I1211 13:51:32.653562 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Dec 11 13:51:32 crc kubenswrapper[5050]: I1211 13:51:32.671686 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Dec 11 13:51:32 crc kubenswrapper[5050]: I1211 13:51:32.695398 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Dec 11 13:51:32 crc kubenswrapper[5050]: I1211 13:51:32.713473 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Dec 11 13:51:32 crc kubenswrapper[5050]: I1211 13:51:32.723519 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Dec 11 13:51:32 crc kubenswrapper[5050]: I1211 13:51:32.730041 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Dec 11 13:51:32 crc kubenswrapper[5050]: I1211 13:51:32.769370 5050 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-service-ca"/"signing-cabundle" Dec 11 13:51:32 crc kubenswrapper[5050]: I1211 13:51:32.817854 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Dec 11 13:51:32 crc kubenswrapper[5050]: I1211 13:51:32.904920 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Dec 11 13:51:32 crc kubenswrapper[5050]: I1211 13:51:32.924321 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Dec 11 13:51:32 crc kubenswrapper[5050]: I1211 13:51:32.954815 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Dec 11 13:51:32 crc kubenswrapper[5050]: I1211 13:51:32.971820 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Dec 11 13:51:33 crc kubenswrapper[5050]: I1211 13:51:33.069290 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Dec 11 13:51:33 crc kubenswrapper[5050]: I1211 13:51:33.319970 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Dec 11 13:51:33 crc kubenswrapper[5050]: I1211 13:51:33.646151 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Dec 11 13:51:33 crc kubenswrapper[5050]: I1211 13:51:33.683557 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Dec 11 13:51:33 crc kubenswrapper[5050]: I1211 13:51:33.716512 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Dec 11 13:51:33 crc kubenswrapper[5050]: I1211 13:51:33.843129 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Dec 11 13:51:33 crc kubenswrapper[5050]: I1211 13:51:33.851272 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Dec 11 13:51:33 crc kubenswrapper[5050]: I1211 13:51:33.867716 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Dec 11 13:51:33 crc kubenswrapper[5050]: I1211 13:51:33.927396 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Dec 11 13:51:33 crc kubenswrapper[5050]: I1211 13:51:33.950945 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Dec 11 13:51:33 crc kubenswrapper[5050]: I1211 13:51:33.953742 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Dec 11 13:51:34 crc kubenswrapper[5050]: I1211 13:51:34.044837 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Dec 11 13:51:34 crc kubenswrapper[5050]: I1211 13:51:34.053200 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Dec 11 13:51:34 crc kubenswrapper[5050]: I1211 13:51:34.120714 5050 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-dns"/"kube-root-ca.crt" Dec 11 13:51:34 crc kubenswrapper[5050]: I1211 13:51:34.188513 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Dec 11 13:51:34 crc kubenswrapper[5050]: I1211 13:51:34.248699 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Dec 11 13:51:34 crc kubenswrapper[5050]: I1211 13:51:34.264751 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Dec 11 13:51:34 crc kubenswrapper[5050]: I1211 13:51:34.266041 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Dec 11 13:51:34 crc kubenswrapper[5050]: I1211 13:51:34.387979 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Dec 11 13:51:34 crc kubenswrapper[5050]: I1211 13:51:34.440359 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Dec 11 13:51:34 crc kubenswrapper[5050]: I1211 13:51:34.488288 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Dec 11 13:51:34 crc kubenswrapper[5050]: I1211 13:51:34.492743 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Dec 11 13:51:34 crc kubenswrapper[5050]: I1211 13:51:34.519269 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Dec 11 13:51:34 crc kubenswrapper[5050]: I1211 13:51:34.672652 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Dec 11 13:51:34 crc kubenswrapper[5050]: I1211 13:51:34.788122 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Dec 11 13:51:34 crc kubenswrapper[5050]: I1211 13:51:34.844501 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Dec 11 13:51:34 crc kubenswrapper[5050]: I1211 13:51:34.860362 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Dec 11 13:51:35 crc kubenswrapper[5050]: I1211 13:51:35.045368 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Dec 11 13:51:35 crc kubenswrapper[5050]: I1211 13:51:35.048317 5050 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Dec 11 13:51:35 crc kubenswrapper[5050]: I1211 13:51:35.079526 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Dec 11 13:51:35 crc kubenswrapper[5050]: I1211 13:51:35.198790 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-d878cb77-dmcvf"] Dec 11 13:51:35 crc kubenswrapper[5050]: I1211 13:51:35.344260 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Dec 11 13:51:35 crc kubenswrapper[5050]: I1211 13:51:35.451118 5050 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-multus"/"multus-admission-controller-secret" Dec 11 13:51:35 crc kubenswrapper[5050]: I1211 13:51:35.524100 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-d878cb77-dmcvf"] Dec 11 13:51:35 crc kubenswrapper[5050]: I1211 13:51:35.528959 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Dec 11 13:51:35 crc kubenswrapper[5050]: I1211 13:51:35.529249 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Dec 11 13:51:35 crc kubenswrapper[5050]: I1211 13:51:35.529424 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Dec 11 13:51:35 crc kubenswrapper[5050]: I1211 13:51:35.549802 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Dec 11 13:51:35 crc kubenswrapper[5050]: I1211 13:51:35.614255 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Dec 11 13:51:35 crc kubenswrapper[5050]: I1211 13:51:35.625190 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Dec 11 13:51:35 crc kubenswrapper[5050]: I1211 13:51:35.628892 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Dec 11 13:51:35 crc kubenswrapper[5050]: I1211 13:51:35.641192 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Dec 11 13:51:35 crc kubenswrapper[5050]: I1211 13:51:35.776085 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Dec 11 13:51:35 crc kubenswrapper[5050]: I1211 13:51:35.824797 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Dec 11 13:51:35 crc kubenswrapper[5050]: I1211 13:51:35.934620 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Dec 11 13:51:35 crc kubenswrapper[5050]: I1211 13:51:35.942757 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-d878cb77-dmcvf" event={"ID":"effe4522-49a7-4d34-b64d-6ab0012f5548","Type":"ContainerStarted","Data":"f69ece235426f2d56583702038cafcec531810dbd2c457246b91befdd1da4c4d"} Dec 11 13:51:35 crc kubenswrapper[5050]: I1211 13:51:35.960820 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Dec 11 13:51:35 crc kubenswrapper[5050]: I1211 13:51:35.966115 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Dec 11 13:51:35 crc kubenswrapper[5050]: I1211 13:51:35.975216 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Dec 11 13:51:36 crc kubenswrapper[5050]: I1211 13:51:36.015148 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Dec 11 13:51:36 crc kubenswrapper[5050]: I1211 13:51:36.051207 5050 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-service-ca-operator"/"kube-root-ca.crt" Dec 11 13:51:36 crc kubenswrapper[5050]: I1211 13:51:36.135654 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Dec 11 13:51:36 crc kubenswrapper[5050]: I1211 13:51:36.270613 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Dec 11 13:51:36 crc kubenswrapper[5050]: I1211 13:51:36.357349 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Dec 11 13:51:36 crc kubenswrapper[5050]: I1211 13:51:36.413264 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Dec 11 13:51:36 crc kubenswrapper[5050]: I1211 13:51:36.440033 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Dec 11 13:51:36 crc kubenswrapper[5050]: I1211 13:51:36.459435 5050 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 11 13:51:36 crc kubenswrapper[5050]: I1211 13:51:36.459705 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://ec4865b13a5739e01019f3a5ccd3d4af2398f0dfccd3926e65b992410cae09bc" gracePeriod=5 Dec 11 13:51:36 crc kubenswrapper[5050]: I1211 13:51:36.515357 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Dec 11 13:51:36 crc kubenswrapper[5050]: I1211 13:51:36.620179 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Dec 11 13:51:36 crc kubenswrapper[5050]: I1211 13:51:36.641143 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Dec 11 13:51:36 crc kubenswrapper[5050]: I1211 13:51:36.650468 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Dec 11 13:51:36 crc kubenswrapper[5050]: I1211 13:51:36.711102 5050 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Dec 11 13:51:36 crc kubenswrapper[5050]: I1211 13:51:36.787695 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Dec 11 13:51:36 crc kubenswrapper[5050]: I1211 13:51:36.812237 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Dec 11 13:51:36 crc kubenswrapper[5050]: I1211 13:51:36.814588 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Dec 11 13:51:36 crc kubenswrapper[5050]: I1211 13:51:36.916657 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Dec 11 13:51:36 crc kubenswrapper[5050]: I1211 13:51:36.917701 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Dec 11 13:51:36 crc kubenswrapper[5050]: I1211 13:51:36.968981 5050 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-d878cb77-dmcvf" event={"ID":"effe4522-49a7-4d34-b64d-6ab0012f5548","Type":"ContainerStarted","Data":"9ec87625c63bb19608c5fab2a9d7e38a82ac0120b7f7afb4ed2fe01cd46ecae8"} Dec 11 13:51:36 crc kubenswrapper[5050]: I1211 13:51:36.970299 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-d878cb77-dmcvf" Dec 11 13:51:36 crc kubenswrapper[5050]: I1211 13:51:36.974899 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-d878cb77-dmcvf" Dec 11 13:51:37 crc kubenswrapper[5050]: I1211 13:51:37.022889 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-d878cb77-dmcvf" podStartSLOduration=50.022873914 podStartE2EDuration="50.022873914s" podCreationTimestamp="2025-12-11 13:50:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 13:51:36.999319804 +0000 UTC m=+187.843042390" watchObservedRunningTime="2025-12-11 13:51:37.022873914 +0000 UTC m=+187.866596500" Dec 11 13:51:37 crc kubenswrapper[5050]: I1211 13:51:37.033029 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Dec 11 13:51:37 crc kubenswrapper[5050]: I1211 13:51:37.176293 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Dec 11 13:51:37 crc kubenswrapper[5050]: I1211 13:51:37.222276 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Dec 11 13:51:37 crc kubenswrapper[5050]: I1211 13:51:37.228718 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Dec 11 13:51:37 crc kubenswrapper[5050]: I1211 13:51:37.239208 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Dec 11 13:51:37 crc kubenswrapper[5050]: I1211 13:51:37.250060 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Dec 11 13:51:37 crc kubenswrapper[5050]: I1211 13:51:37.325686 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Dec 11 13:51:37 crc kubenswrapper[5050]: I1211 13:51:37.367999 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Dec 11 13:51:37 crc kubenswrapper[5050]: I1211 13:51:37.419931 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Dec 11 13:51:37 crc kubenswrapper[5050]: I1211 13:51:37.427064 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Dec 11 13:51:37 crc kubenswrapper[5050]: I1211 13:51:37.502687 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Dec 11 13:51:37 crc kubenswrapper[5050]: I1211 13:51:37.528608 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Dec 11 13:51:37 crc kubenswrapper[5050]: 
I1211 13:51:37.685858 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Dec 11 13:51:37 crc kubenswrapper[5050]: I1211 13:51:37.687752 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Dec 11 13:51:37 crc kubenswrapper[5050]: I1211 13:51:37.774437 5050 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Dec 11 13:51:37 crc kubenswrapper[5050]: I1211 13:51:37.854902 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Dec 11 13:51:37 crc kubenswrapper[5050]: I1211 13:51:37.873956 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Dec 11 13:51:37 crc kubenswrapper[5050]: I1211 13:51:37.905746 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Dec 11 13:51:37 crc kubenswrapper[5050]: I1211 13:51:37.993959 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Dec 11 13:51:38 crc kubenswrapper[5050]: I1211 13:51:38.031098 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Dec 11 13:51:38 crc kubenswrapper[5050]: I1211 13:51:38.094851 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Dec 11 13:51:38 crc kubenswrapper[5050]: I1211 13:51:38.155950 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Dec 11 13:51:38 crc kubenswrapper[5050]: I1211 13:51:38.222245 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Dec 11 13:51:38 crc kubenswrapper[5050]: I1211 13:51:38.323730 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Dec 11 13:51:38 crc kubenswrapper[5050]: I1211 13:51:38.325162 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Dec 11 13:51:38 crc kubenswrapper[5050]: I1211 13:51:38.350158 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Dec 11 13:51:38 crc kubenswrapper[5050]: I1211 13:51:38.398636 5050 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Dec 11 13:51:38 crc kubenswrapper[5050]: I1211 13:51:38.476514 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Dec 11 13:51:38 crc kubenswrapper[5050]: I1211 13:51:38.625401 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Dec 11 13:51:38 crc kubenswrapper[5050]: I1211 13:51:38.640990 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Dec 11 13:51:38 crc kubenswrapper[5050]: I1211 13:51:38.784789 5050 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Dec 11 13:51:38 crc kubenswrapper[5050]: I1211 13:51:38.817758 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Dec 11 13:51:38 crc kubenswrapper[5050]: I1211 13:51:38.857330 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Dec 11 13:51:38 crc kubenswrapper[5050]: I1211 13:51:38.922199 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Dec 11 13:51:38 crc kubenswrapper[5050]: I1211 13:51:38.957815 5050 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Dec 11 13:51:38 crc kubenswrapper[5050]: I1211 13:51:38.964684 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Dec 11 13:51:38 crc kubenswrapper[5050]: I1211 13:51:38.985996 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Dec 11 13:51:39 crc kubenswrapper[5050]: I1211 13:51:39.069221 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Dec 11 13:51:39 crc kubenswrapper[5050]: I1211 13:51:39.093521 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Dec 11 13:51:39 crc kubenswrapper[5050]: I1211 13:51:39.104200 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Dec 11 13:51:39 crc kubenswrapper[5050]: I1211 13:51:39.152586 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Dec 11 13:51:39 crc kubenswrapper[5050]: I1211 13:51:39.158143 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Dec 11 13:51:39 crc kubenswrapper[5050]: I1211 13:51:39.204755 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Dec 11 13:51:39 crc kubenswrapper[5050]: I1211 13:51:39.284564 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Dec 11 13:51:39 crc kubenswrapper[5050]: I1211 13:51:39.367720 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Dec 11 13:51:39 crc kubenswrapper[5050]: I1211 13:51:39.367909 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Dec 11 13:51:39 crc kubenswrapper[5050]: I1211 13:51:39.659103 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Dec 11 13:51:39 crc kubenswrapper[5050]: I1211 13:51:39.729160 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Dec 11 13:51:39 crc kubenswrapper[5050]: I1211 13:51:39.875164 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Dec 11 13:51:39 crc kubenswrapper[5050]: I1211 13:51:39.911408 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Dec 
11 13:51:39 crc kubenswrapper[5050]: I1211 13:51:39.972626 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Dec 11 13:51:40 crc kubenswrapper[5050]: I1211 13:51:40.069351 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Dec 11 13:51:40 crc kubenswrapper[5050]: I1211 13:51:40.260427 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Dec 11 13:51:40 crc kubenswrapper[5050]: I1211 13:51:40.286768 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Dec 11 13:51:40 crc kubenswrapper[5050]: I1211 13:51:40.516574 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Dec 11 13:51:40 crc kubenswrapper[5050]: I1211 13:51:40.536578 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Dec 11 13:51:40 crc kubenswrapper[5050]: I1211 13:51:40.796873 5050 patch_prober.go:28] interesting pod/machine-config-daemon-wcb2s container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 13:51:40 crc kubenswrapper[5050]: I1211 13:51:40.796928 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 13:51:40 crc kubenswrapper[5050]: I1211 13:51:40.864440 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Dec 11 13:51:41 crc kubenswrapper[5050]: I1211 13:51:41.133388 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Dec 11 13:51:41 crc kubenswrapper[5050]: I1211 13:51:41.223496 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Dec 11 13:51:41 crc kubenswrapper[5050]: I1211 13:51:41.244692 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Dec 11 13:51:41 crc kubenswrapper[5050]: I1211 13:51:41.497926 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Dec 11 13:51:41 crc kubenswrapper[5050]: I1211 13:51:41.514135 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Dec 11 13:51:41 crc kubenswrapper[5050]: I1211 13:51:41.703510 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Dec 11 13:51:41 crc kubenswrapper[5050]: I1211 13:51:41.762661 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Dec 11 13:51:41 crc kubenswrapper[5050]: I1211 13:51:41.764919 5050 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-image-registry"/"openshift-service-ca.crt" Dec 11 13:51:42 crc kubenswrapper[5050]: I1211 13:51:42.684389 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Dec 11 13:51:42 crc kubenswrapper[5050]: I1211 13:51:42.684746 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 11 13:51:42 crc kubenswrapper[5050]: I1211 13:51:42.740670 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Dec 11 13:51:42 crc kubenswrapper[5050]: I1211 13:51:42.846726 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 11 13:51:42 crc kubenswrapper[5050]: I1211 13:51:42.846787 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 11 13:51:42 crc kubenswrapper[5050]: I1211 13:51:42.846849 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 11 13:51:42 crc kubenswrapper[5050]: I1211 13:51:42.846887 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 11 13:51:42 crc kubenswrapper[5050]: I1211 13:51:42.846909 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 13:51:42 crc kubenswrapper[5050]: I1211 13:51:42.846945 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 11 13:51:42 crc kubenswrapper[5050]: I1211 13:51:42.846963 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 13:51:42 crc kubenswrapper[5050]: I1211 13:51:42.846975 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). 
InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 13:51:42 crc kubenswrapper[5050]: I1211 13:51:42.847044 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 13:51:42 crc kubenswrapper[5050]: I1211 13:51:42.847258 5050 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Dec 11 13:51:42 crc kubenswrapper[5050]: I1211 13:51:42.847276 5050 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Dec 11 13:51:42 crc kubenswrapper[5050]: I1211 13:51:42.847291 5050 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Dec 11 13:51:42 crc kubenswrapper[5050]: I1211 13:51:42.847303 5050 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 11 13:51:42 crc kubenswrapper[5050]: I1211 13:51:42.855931 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 13:51:42 crc kubenswrapper[5050]: I1211 13:51:42.948374 5050 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 11 13:51:42 crc kubenswrapper[5050]: I1211 13:51:42.987420 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Dec 11 13:51:43 crc kubenswrapper[5050]: I1211 13:51:43.006437 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Dec 11 13:51:43 crc kubenswrapper[5050]: I1211 13:51:43.006581 5050 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="ec4865b13a5739e01019f3a5ccd3d4af2398f0dfccd3926e65b992410cae09bc" exitCode=137 Dec 11 13:51:43 crc kubenswrapper[5050]: I1211 13:51:43.006685 5050 scope.go:117] "RemoveContainer" containerID="ec4865b13a5739e01019f3a5ccd3d4af2398f0dfccd3926e65b992410cae09bc" Dec 11 13:51:43 crc kubenswrapper[5050]: I1211 13:51:43.006720 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 11 13:51:43 crc kubenswrapper[5050]: I1211 13:51:43.023379 5050 scope.go:117] "RemoveContainer" containerID="ec4865b13a5739e01019f3a5ccd3d4af2398f0dfccd3926e65b992410cae09bc" Dec 11 13:51:43 crc kubenswrapper[5050]: E1211 13:51:43.024228 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ec4865b13a5739e01019f3a5ccd3d4af2398f0dfccd3926e65b992410cae09bc\": container with ID starting with ec4865b13a5739e01019f3a5ccd3d4af2398f0dfccd3926e65b992410cae09bc not found: ID does not exist" containerID="ec4865b13a5739e01019f3a5ccd3d4af2398f0dfccd3926e65b992410cae09bc" Dec 11 13:51:43 crc kubenswrapper[5050]: I1211 13:51:43.024327 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec4865b13a5739e01019f3a5ccd3d4af2398f0dfccd3926e65b992410cae09bc"} err="failed to get container status \"ec4865b13a5739e01019f3a5ccd3d4af2398f0dfccd3926e65b992410cae09bc\": rpc error: code = NotFound desc = could not find container \"ec4865b13a5739e01019f3a5ccd3d4af2398f0dfccd3926e65b992410cae09bc\": container with ID starting with ec4865b13a5739e01019f3a5ccd3d4af2398f0dfccd3926e65b992410cae09bc not found: ID does not exist" Dec 11 13:51:43 crc kubenswrapper[5050]: I1211 13:51:43.553324 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Dec 11 13:51:43 crc kubenswrapper[5050]: I1211 13:51:43.553939 5050 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Dec 11 13:51:43 crc kubenswrapper[5050]: I1211 13:51:43.564781 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 11 13:51:43 crc kubenswrapper[5050]: I1211 13:51:43.564854 5050 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="567a0207-b007-4e1e-b52a-886b1b7af108" Dec 11 13:51:43 crc kubenswrapper[5050]: I1211 13:51:43.568129 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Dec 11 13:51:43 crc kubenswrapper[5050]: I1211 13:51:43.570337 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 11 13:51:43 crc kubenswrapper[5050]: I1211 13:51:43.570374 5050 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="567a0207-b007-4e1e-b52a-886b1b7af108" Dec 11 13:52:00 crc kubenswrapper[5050]: I1211 13:52:00.104956 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Dec 11 13:52:00 crc kubenswrapper[5050]: I1211 13:52:00.106804 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Dec 11 13:52:00 crc kubenswrapper[5050]: I1211 13:52:00.106866 5050 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" 
containerID="134dcca79c4eddbde6843383841b57eedfeb7b3ef8680a39b2a851f7ca914c11" exitCode=137 Dec 11 13:52:00 crc kubenswrapper[5050]: I1211 13:52:00.106907 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"134dcca79c4eddbde6843383841b57eedfeb7b3ef8680a39b2a851f7ca914c11"} Dec 11 13:52:00 crc kubenswrapper[5050]: I1211 13:52:00.106945 5050 scope.go:117] "RemoveContainer" containerID="a5990aa25c5e6fd0c4328adb0efd594ff4a31f1ee1734928beaa2608b6f16ccf" Dec 11 13:52:01 crc kubenswrapper[5050]: I1211 13:52:01.115493 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Dec 11 13:52:01 crc kubenswrapper[5050]: I1211 13:52:01.116895 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"696de0133c65c6e6ea70d6299312593ddfc638a01a5f4783ed9082195fb6fb31"} Dec 11 13:52:04 crc kubenswrapper[5050]: I1211 13:52:04.500595 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 11 13:52:09 crc kubenswrapper[5050]: I1211 13:52:09.275691 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 11 13:52:09 crc kubenswrapper[5050]: I1211 13:52:09.281379 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 11 13:52:10 crc kubenswrapper[5050]: I1211 13:52:10.167940 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 11 13:52:10 crc kubenswrapper[5050]: I1211 13:52:10.796789 5050 patch_prober.go:28] interesting pod/machine-config-daemon-wcb2s container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 13:52:10 crc kubenswrapper[5050]: I1211 13:52:10.796849 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 13:52:10 crc kubenswrapper[5050]: I1211 13:52:10.796917 5050 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" Dec 11 13:52:10 crc kubenswrapper[5050]: I1211 13:52:10.797609 5050 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"dd5e3efb6c32fb9d9f76a12a8e8a2e6fcb32ad3cbf663ac6d264ea7b3f858008"} pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 11 13:52:10 crc kubenswrapper[5050]: I1211 13:52:10.797717 5050 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" containerName="machine-config-daemon" containerID="cri-o://dd5e3efb6c32fb9d9f76a12a8e8a2e6fcb32ad3cbf663ac6d264ea7b3f858008" gracePeriod=600 Dec 11 13:52:11 crc kubenswrapper[5050]: I1211 13:52:11.169782 5050 generic.go:334] "Generic (PLEG): container finished" podID="7e849b2e-7cd7-4e49-acd2-deab139c699a" containerID="dd5e3efb6c32fb9d9f76a12a8e8a2e6fcb32ad3cbf663ac6d264ea7b3f858008" exitCode=0 Dec 11 13:52:11 crc kubenswrapper[5050]: I1211 13:52:11.169864 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" event={"ID":"7e849b2e-7cd7-4e49-acd2-deab139c699a","Type":"ContainerDied","Data":"dd5e3efb6c32fb9d9f76a12a8e8a2e6fcb32ad3cbf663ac6d264ea7b3f858008"} Dec 11 13:52:11 crc kubenswrapper[5050]: I1211 13:52:11.170210 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" event={"ID":"7e849b2e-7cd7-4e49-acd2-deab139c699a","Type":"ContainerStarted","Data":"a5e8cac0339bdaff46aa1a1eb16e7989125087ebc73eb9a1b34470494c0f3f9d"} Dec 11 13:52:12 crc kubenswrapper[5050]: I1211 13:52:12.238781 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-77wtp"] Dec 11 13:52:12 crc kubenswrapper[5050]: E1211 13:52:12.239324 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Dec 11 13:52:12 crc kubenswrapper[5050]: I1211 13:52:12.239341 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Dec 11 13:52:12 crc kubenswrapper[5050]: I1211 13:52:12.239465 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Dec 11 13:52:12 crc kubenswrapper[5050]: I1211 13:52:12.240223 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-77wtp" Dec 11 13:52:12 crc kubenswrapper[5050]: I1211 13:52:12.242306 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Dec 11 13:52:12 crc kubenswrapper[5050]: I1211 13:52:12.246714 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-77wtp"] Dec 11 13:52:12 crc kubenswrapper[5050]: I1211 13:52:12.315075 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e9fee45-23cf-40a5-9291-6b92c496035a-catalog-content\") pod \"redhat-marketplace-77wtp\" (UID: \"3e9fee45-23cf-40a5-9291-6b92c496035a\") " pod="openshift-marketplace/redhat-marketplace-77wtp" Dec 11 13:52:12 crc kubenswrapper[5050]: I1211 13:52:12.315137 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8d7z7\" (UniqueName: \"kubernetes.io/projected/3e9fee45-23cf-40a5-9291-6b92c496035a-kube-api-access-8d7z7\") pod \"redhat-marketplace-77wtp\" (UID: \"3e9fee45-23cf-40a5-9291-6b92c496035a\") " pod="openshift-marketplace/redhat-marketplace-77wtp" Dec 11 13:52:12 crc kubenswrapper[5050]: I1211 13:52:12.315296 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e9fee45-23cf-40a5-9291-6b92c496035a-utilities\") pod \"redhat-marketplace-77wtp\" (UID: \"3e9fee45-23cf-40a5-9291-6b92c496035a\") " pod="openshift-marketplace/redhat-marketplace-77wtp" Dec 11 13:52:12 crc kubenswrapper[5050]: I1211 13:52:12.416065 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e9fee45-23cf-40a5-9291-6b92c496035a-utilities\") pod \"redhat-marketplace-77wtp\" (UID: \"3e9fee45-23cf-40a5-9291-6b92c496035a\") " pod="openshift-marketplace/redhat-marketplace-77wtp" Dec 11 13:52:12 crc kubenswrapper[5050]: I1211 13:52:12.416145 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e9fee45-23cf-40a5-9291-6b92c496035a-catalog-content\") pod \"redhat-marketplace-77wtp\" (UID: \"3e9fee45-23cf-40a5-9291-6b92c496035a\") " pod="openshift-marketplace/redhat-marketplace-77wtp" Dec 11 13:52:12 crc kubenswrapper[5050]: I1211 13:52:12.416177 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8d7z7\" (UniqueName: \"kubernetes.io/projected/3e9fee45-23cf-40a5-9291-6b92c496035a-kube-api-access-8d7z7\") pod \"redhat-marketplace-77wtp\" (UID: \"3e9fee45-23cf-40a5-9291-6b92c496035a\") " pod="openshift-marketplace/redhat-marketplace-77wtp" Dec 11 13:52:12 crc kubenswrapper[5050]: I1211 13:52:12.416609 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e9fee45-23cf-40a5-9291-6b92c496035a-utilities\") pod \"redhat-marketplace-77wtp\" (UID: \"3e9fee45-23cf-40a5-9291-6b92c496035a\") " pod="openshift-marketplace/redhat-marketplace-77wtp" Dec 11 13:52:12 crc kubenswrapper[5050]: I1211 13:52:12.416620 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e9fee45-23cf-40a5-9291-6b92c496035a-catalog-content\") pod \"redhat-marketplace-77wtp\" (UID: 
\"3e9fee45-23cf-40a5-9291-6b92c496035a\") " pod="openshift-marketplace/redhat-marketplace-77wtp" Dec 11 13:52:12 crc kubenswrapper[5050]: I1211 13:52:12.425535 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-7kjsc"] Dec 11 13:52:12 crc kubenswrapper[5050]: I1211 13:52:12.426605 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7kjsc" Dec 11 13:52:12 crc kubenswrapper[5050]: I1211 13:52:12.429050 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Dec 11 13:52:12 crc kubenswrapper[5050]: I1211 13:52:12.434860 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7kjsc"] Dec 11 13:52:12 crc kubenswrapper[5050]: I1211 13:52:12.453668 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8d7z7\" (UniqueName: \"kubernetes.io/projected/3e9fee45-23cf-40a5-9291-6b92c496035a-kube-api-access-8d7z7\") pod \"redhat-marketplace-77wtp\" (UID: \"3e9fee45-23cf-40a5-9291-6b92c496035a\") " pod="openshift-marketplace/redhat-marketplace-77wtp" Dec 11 13:52:12 crc kubenswrapper[5050]: I1211 13:52:12.557894 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-77wtp" Dec 11 13:52:12 crc kubenswrapper[5050]: I1211 13:52:12.619327 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f44abb7-49c6-4244-9a69-309876fe3215-catalog-content\") pod \"redhat-operators-7kjsc\" (UID: \"2f44abb7-49c6-4244-9a69-309876fe3215\") " pod="openshift-marketplace/redhat-operators-7kjsc" Dec 11 13:52:12 crc kubenswrapper[5050]: I1211 13:52:12.619771 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f44abb7-49c6-4244-9a69-309876fe3215-utilities\") pod \"redhat-operators-7kjsc\" (UID: \"2f44abb7-49c6-4244-9a69-309876fe3215\") " pod="openshift-marketplace/redhat-operators-7kjsc" Dec 11 13:52:12 crc kubenswrapper[5050]: I1211 13:52:12.619809 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85lkz\" (UniqueName: \"kubernetes.io/projected/2f44abb7-49c6-4244-9a69-309876fe3215-kube-api-access-85lkz\") pod \"redhat-operators-7kjsc\" (UID: \"2f44abb7-49c6-4244-9a69-309876fe3215\") " pod="openshift-marketplace/redhat-operators-7kjsc" Dec 11 13:52:12 crc kubenswrapper[5050]: I1211 13:52:12.720391 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f44abb7-49c6-4244-9a69-309876fe3215-utilities\") pod \"redhat-operators-7kjsc\" (UID: \"2f44abb7-49c6-4244-9a69-309876fe3215\") " pod="openshift-marketplace/redhat-operators-7kjsc" Dec 11 13:52:12 crc kubenswrapper[5050]: I1211 13:52:12.720467 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-85lkz\" (UniqueName: \"kubernetes.io/projected/2f44abb7-49c6-4244-9a69-309876fe3215-kube-api-access-85lkz\") pod \"redhat-operators-7kjsc\" (UID: \"2f44abb7-49c6-4244-9a69-309876fe3215\") " pod="openshift-marketplace/redhat-operators-7kjsc" Dec 11 13:52:12 crc kubenswrapper[5050]: I1211 13:52:12.720538 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f44abb7-49c6-4244-9a69-309876fe3215-catalog-content\") pod \"redhat-operators-7kjsc\" (UID: \"2f44abb7-49c6-4244-9a69-309876fe3215\") " pod="openshift-marketplace/redhat-operators-7kjsc" Dec 11 13:52:12 crc kubenswrapper[5050]: I1211 13:52:12.721249 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f44abb7-49c6-4244-9a69-309876fe3215-utilities\") pod \"redhat-operators-7kjsc\" (UID: \"2f44abb7-49c6-4244-9a69-309876fe3215\") " pod="openshift-marketplace/redhat-operators-7kjsc" Dec 11 13:52:12 crc kubenswrapper[5050]: I1211 13:52:12.721757 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f44abb7-49c6-4244-9a69-309876fe3215-catalog-content\") pod \"redhat-operators-7kjsc\" (UID: \"2f44abb7-49c6-4244-9a69-309876fe3215\") " pod="openshift-marketplace/redhat-operators-7kjsc" Dec 11 13:52:12 crc kubenswrapper[5050]: I1211 13:52:12.742453 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-77wtp"] Dec 11 13:52:12 crc kubenswrapper[5050]: I1211 13:52:12.762481 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-85lkz\" (UniqueName: \"kubernetes.io/projected/2f44abb7-49c6-4244-9a69-309876fe3215-kube-api-access-85lkz\") pod \"redhat-operators-7kjsc\" (UID: \"2f44abb7-49c6-4244-9a69-309876fe3215\") " pod="openshift-marketplace/redhat-operators-7kjsc" Dec 11 13:52:13 crc kubenswrapper[5050]: I1211 13:52:13.039804 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7kjsc" Dec 11 13:52:13 crc kubenswrapper[5050]: I1211 13:52:13.183389 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-77wtp" event={"ID":"3e9fee45-23cf-40a5-9291-6b92c496035a","Type":"ContainerStarted","Data":"97dbfa2bffcf6ce5f8b48ea37928039ca8a4cfa77ff7e3eb55077ea76bd354ac"} Dec 11 13:52:13 crc kubenswrapper[5050]: I1211 13:52:13.183654 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-77wtp" event={"ID":"3e9fee45-23cf-40a5-9291-6b92c496035a","Type":"ContainerStarted","Data":"3737b4dcea5e23f617ef7051866ff04c8155386a8963a2be45a451ec9cfef88f"} Dec 11 13:52:13 crc kubenswrapper[5050]: I1211 13:52:13.204062 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7kjsc"] Dec 11 13:52:13 crc kubenswrapper[5050]: W1211 13:52:13.209975 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2f44abb7_49c6_4244_9a69_309876fe3215.slice/crio-c0231efe495739b0bff814650c65b38c96e38cb53146a56abb8a428b393a3fcc WatchSource:0}: Error finding container c0231efe495739b0bff814650c65b38c96e38cb53146a56abb8a428b393a3fcc: Status 404 returned error can't find the container with id c0231efe495739b0bff814650c65b38c96e38cb53146a56abb8a428b393a3fcc Dec 11 13:52:14 crc kubenswrapper[5050]: I1211 13:52:14.189823 5050 generic.go:334] "Generic (PLEG): container finished" podID="2f44abb7-49c6-4244-9a69-309876fe3215" containerID="b8057aa76b3cf8c210c5ce30bf935274f7040762e0bfa9e6f5fb9ff13ba68b76" exitCode=0 Dec 11 13:52:14 crc kubenswrapper[5050]: I1211 13:52:14.189883 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7kjsc" 
event={"ID":"2f44abb7-49c6-4244-9a69-309876fe3215","Type":"ContainerDied","Data":"b8057aa76b3cf8c210c5ce30bf935274f7040762e0bfa9e6f5fb9ff13ba68b76"} Dec 11 13:52:14 crc kubenswrapper[5050]: I1211 13:52:14.189910 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7kjsc" event={"ID":"2f44abb7-49c6-4244-9a69-309876fe3215","Type":"ContainerStarted","Data":"c0231efe495739b0bff814650c65b38c96e38cb53146a56abb8a428b393a3fcc"} Dec 11 13:52:14 crc kubenswrapper[5050]: I1211 13:52:14.193418 5050 generic.go:334] "Generic (PLEG): container finished" podID="3e9fee45-23cf-40a5-9291-6b92c496035a" containerID="97dbfa2bffcf6ce5f8b48ea37928039ca8a4cfa77ff7e3eb55077ea76bd354ac" exitCode=0 Dec 11 13:52:14 crc kubenswrapper[5050]: I1211 13:52:14.193475 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-77wtp" event={"ID":"3e9fee45-23cf-40a5-9291-6b92c496035a","Type":"ContainerDied","Data":"97dbfa2bffcf6ce5f8b48ea37928039ca8a4cfa77ff7e3eb55077ea76bd354ac"} Dec 11 13:52:14 crc kubenswrapper[5050]: I1211 13:52:14.630388 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-sfsfn"] Dec 11 13:52:14 crc kubenswrapper[5050]: I1211 13:52:14.631684 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sfsfn" Dec 11 13:52:14 crc kubenswrapper[5050]: I1211 13:52:14.634449 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Dec 11 13:52:14 crc kubenswrapper[5050]: I1211 13:52:14.642118 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-sfsfn"] Dec 11 13:52:14 crc kubenswrapper[5050]: I1211 13:52:14.751357 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qgfd7\" (UniqueName: \"kubernetes.io/projected/3682492e-0e7c-4b11-818a-e0612a9fc292-kube-api-access-qgfd7\") pod \"community-operators-sfsfn\" (UID: \"3682492e-0e7c-4b11-818a-e0612a9fc292\") " pod="openshift-marketplace/community-operators-sfsfn" Dec 11 13:52:14 crc kubenswrapper[5050]: I1211 13:52:14.751513 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3682492e-0e7c-4b11-818a-e0612a9fc292-catalog-content\") pod \"community-operators-sfsfn\" (UID: \"3682492e-0e7c-4b11-818a-e0612a9fc292\") " pod="openshift-marketplace/community-operators-sfsfn" Dec 11 13:52:14 crc kubenswrapper[5050]: I1211 13:52:14.751578 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3682492e-0e7c-4b11-818a-e0612a9fc292-utilities\") pod \"community-operators-sfsfn\" (UID: \"3682492e-0e7c-4b11-818a-e0612a9fc292\") " pod="openshift-marketplace/community-operators-sfsfn" Dec 11 13:52:14 crc kubenswrapper[5050]: I1211 13:52:14.852978 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qgfd7\" (UniqueName: \"kubernetes.io/projected/3682492e-0e7c-4b11-818a-e0612a9fc292-kube-api-access-qgfd7\") pod \"community-operators-sfsfn\" (UID: \"3682492e-0e7c-4b11-818a-e0612a9fc292\") " pod="openshift-marketplace/community-operators-sfsfn" Dec 11 13:52:14 crc kubenswrapper[5050]: I1211 13:52:14.853055 5050 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3682492e-0e7c-4b11-818a-e0612a9fc292-catalog-content\") pod \"community-operators-sfsfn\" (UID: \"3682492e-0e7c-4b11-818a-e0612a9fc292\") " pod="openshift-marketplace/community-operators-sfsfn" Dec 11 13:52:14 crc kubenswrapper[5050]: I1211 13:52:14.853094 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3682492e-0e7c-4b11-818a-e0612a9fc292-utilities\") pod \"community-operators-sfsfn\" (UID: \"3682492e-0e7c-4b11-818a-e0612a9fc292\") " pod="openshift-marketplace/community-operators-sfsfn" Dec 11 13:52:14 crc kubenswrapper[5050]: I1211 13:52:14.853564 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3682492e-0e7c-4b11-818a-e0612a9fc292-catalog-content\") pod \"community-operators-sfsfn\" (UID: \"3682492e-0e7c-4b11-818a-e0612a9fc292\") " pod="openshift-marketplace/community-operators-sfsfn" Dec 11 13:52:14 crc kubenswrapper[5050]: I1211 13:52:14.853600 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3682492e-0e7c-4b11-818a-e0612a9fc292-utilities\") pod \"community-operators-sfsfn\" (UID: \"3682492e-0e7c-4b11-818a-e0612a9fc292\") " pod="openshift-marketplace/community-operators-sfsfn" Dec 11 13:52:14 crc kubenswrapper[5050]: I1211 13:52:14.871785 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qgfd7\" (UniqueName: \"kubernetes.io/projected/3682492e-0e7c-4b11-818a-e0612a9fc292-kube-api-access-qgfd7\") pod \"community-operators-sfsfn\" (UID: \"3682492e-0e7c-4b11-818a-e0612a9fc292\") " pod="openshift-marketplace/community-operators-sfsfn" Dec 11 13:52:14 crc kubenswrapper[5050]: I1211 13:52:14.960795 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-sfsfn" Dec 11 13:52:15 crc kubenswrapper[5050]: I1211 13:52:15.131758 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-sfsfn"] Dec 11 13:52:15 crc kubenswrapper[5050]: W1211 13:52:15.134761 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3682492e_0e7c_4b11_818a_e0612a9fc292.slice/crio-ee88b5398e467c3f964bcab02a2070c256041b24f2ab4741789823aa4d33bd96 WatchSource:0}: Error finding container ee88b5398e467c3f964bcab02a2070c256041b24f2ab4741789823aa4d33bd96: Status 404 returned error can't find the container with id ee88b5398e467c3f964bcab02a2070c256041b24f2ab4741789823aa4d33bd96 Dec 11 13:52:15 crc kubenswrapper[5050]: I1211 13:52:15.199693 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sfsfn" event={"ID":"3682492e-0e7c-4b11-818a-e0612a9fc292","Type":"ContainerStarted","Data":"ee88b5398e467c3f964bcab02a2070c256041b24f2ab4741789823aa4d33bd96"} Dec 11 13:52:16 crc kubenswrapper[5050]: I1211 13:52:16.207241 5050 generic.go:334] "Generic (PLEG): container finished" podID="3e9fee45-23cf-40a5-9291-6b92c496035a" containerID="82e193220fc18fea41a124c1d0686ce9af2d89eb5f81b5bfabd5559cedc767fd" exitCode=0 Dec 11 13:52:16 crc kubenswrapper[5050]: I1211 13:52:16.207339 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-77wtp" event={"ID":"3e9fee45-23cf-40a5-9291-6b92c496035a","Type":"ContainerDied","Data":"82e193220fc18fea41a124c1d0686ce9af2d89eb5f81b5bfabd5559cedc767fd"} Dec 11 13:52:16 crc kubenswrapper[5050]: I1211 13:52:16.209920 5050 generic.go:334] "Generic (PLEG): container finished" podID="2f44abb7-49c6-4244-9a69-309876fe3215" containerID="75afbd52823f82c9a6df1a3f529de63347c49cd0697253058d9ef3c27264f2ca" exitCode=0 Dec 11 13:52:16 crc kubenswrapper[5050]: I1211 13:52:16.209991 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7kjsc" event={"ID":"2f44abb7-49c6-4244-9a69-309876fe3215","Type":"ContainerDied","Data":"75afbd52823f82c9a6df1a3f529de63347c49cd0697253058d9ef3c27264f2ca"} Dec 11 13:52:16 crc kubenswrapper[5050]: I1211 13:52:16.217057 5050 generic.go:334] "Generic (PLEG): container finished" podID="3682492e-0e7c-4b11-818a-e0612a9fc292" containerID="0aae3cf4829bf4a557dcbe4f13f4c5b03efaf5b512cba47c682cc385b5e668ce" exitCode=0 Dec 11 13:52:16 crc kubenswrapper[5050]: I1211 13:52:16.217113 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sfsfn" event={"ID":"3682492e-0e7c-4b11-818a-e0612a9fc292","Type":"ContainerDied","Data":"0aae3cf4829bf4a557dcbe4f13f4c5b03efaf5b512cba47c682cc385b5e668ce"} Dec 11 13:52:17 crc kubenswrapper[5050]: I1211 13:52:17.225317 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-77wtp" event={"ID":"3e9fee45-23cf-40a5-9291-6b92c496035a","Type":"ContainerStarted","Data":"83e9702ae8a5c71e51a7594bf8e54542b671d5d9504fab1b98ed03e455ef3029"} Dec 11 13:52:17 crc kubenswrapper[5050]: I1211 13:52:17.227871 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7kjsc" event={"ID":"2f44abb7-49c6-4244-9a69-309876fe3215","Type":"ContainerStarted","Data":"e074530b0d4da97fcaefbc59cb64d7c912012148bd9038b67f9390ef6a6645fb"} Dec 11 13:52:17 crc kubenswrapper[5050]: I1211 13:52:17.229544 5050 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sfsfn" event={"ID":"3682492e-0e7c-4b11-818a-e0612a9fc292","Type":"ContainerStarted","Data":"67c2ab1ec3d2d85b07b1aa58097bce910a37116d7e7abd7b858d497e0e29f38a"} Dec 11 13:52:17 crc kubenswrapper[5050]: I1211 13:52:17.249942 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-77wtp" podStartSLOduration=2.598889275 podStartE2EDuration="5.249922003s" podCreationTimestamp="2025-12-11 13:52:12 +0000 UTC" firstStartedPulling="2025-12-11 13:52:14.195254009 +0000 UTC m=+225.038976595" lastFinishedPulling="2025-12-11 13:52:16.846286737 +0000 UTC m=+227.690009323" observedRunningTime="2025-12-11 13:52:17.24757076 +0000 UTC m=+228.091293346" watchObservedRunningTime="2025-12-11 13:52:17.249922003 +0000 UTC m=+228.093644589" Dec 11 13:52:17 crc kubenswrapper[5050]: I1211 13:52:17.267377 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-7kjsc" podStartSLOduration=2.743119228 podStartE2EDuration="5.267356389s" podCreationTimestamp="2025-12-11 13:52:12 +0000 UTC" firstStartedPulling="2025-12-11 13:52:14.19193373 +0000 UTC m=+225.035656316" lastFinishedPulling="2025-12-11 13:52:16.716170891 +0000 UTC m=+227.559893477" observedRunningTime="2025-12-11 13:52:17.265183151 +0000 UTC m=+228.108905767" watchObservedRunningTime="2025-12-11 13:52:17.267356389 +0000 UTC m=+228.111078975" Dec 11 13:52:18 crc kubenswrapper[5050]: I1211 13:52:18.236836 5050 generic.go:334] "Generic (PLEG): container finished" podID="3682492e-0e7c-4b11-818a-e0612a9fc292" containerID="67c2ab1ec3d2d85b07b1aa58097bce910a37116d7e7abd7b858d497e0e29f38a" exitCode=0 Dec 11 13:52:18 crc kubenswrapper[5050]: I1211 13:52:18.236877 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sfsfn" event={"ID":"3682492e-0e7c-4b11-818a-e0612a9fc292","Type":"ContainerDied","Data":"67c2ab1ec3d2d85b07b1aa58097bce910a37116d7e7abd7b858d497e0e29f38a"} Dec 11 13:52:19 crc kubenswrapper[5050]: I1211 13:52:19.244859 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sfsfn" event={"ID":"3682492e-0e7c-4b11-818a-e0612a9fc292","Type":"ContainerStarted","Data":"3e82168c51fecb1b40b18c3b1acc35aab38bf3e750a147260026efaa4535f055"} Dec 11 13:52:19 crc kubenswrapper[5050]: I1211 13:52:19.265229 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-sfsfn" podStartSLOduration=2.490784368 podStartE2EDuration="5.265210894s" podCreationTimestamp="2025-12-11 13:52:14 +0000 UTC" firstStartedPulling="2025-12-11 13:52:16.218927634 +0000 UTC m=+227.062650220" lastFinishedPulling="2025-12-11 13:52:18.99335416 +0000 UTC m=+229.837076746" observedRunningTime="2025-12-11 13:52:19.264105635 +0000 UTC m=+230.107828241" watchObservedRunningTime="2025-12-11 13:52:19.265210894 +0000 UTC m=+230.108933480" Dec 11 13:52:20 crc kubenswrapper[5050]: I1211 13:52:20.473921 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-f6wfx"] Dec 11 13:52:20 crc kubenswrapper[5050]: I1211 13:52:20.496720 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-bvjdq"] Dec 11 13:52:20 crc kubenswrapper[5050]: I1211 13:52:20.496973 5050 kuberuntime_container.go:808] "Killing container with a 
grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bvjdq" podUID="3069e571-2923-44a7-ae85-7cc7e64991ef" containerName="route-controller-manager" containerID="cri-o://5f39bbd36b3950a89dc4d6758df21536feb19af0dc144cad266650450065aaca" gracePeriod=30 Dec 11 13:52:20 crc kubenswrapper[5050]: I1211 13:52:20.574891 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-skz5v"] Dec 11 13:52:20 crc kubenswrapper[5050]: I1211 13:52:20.575152 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-skz5v" podUID="d40e12c4-8331-453c-b20b-5bbd5e3c2a9c" containerName="controller-manager" containerID="cri-o://a26e87b7916b4cf910405aa4d57ebae142c78b0b630a03eb23a81d830c9750ce" gracePeriod=30 Dec 11 13:52:21 crc kubenswrapper[5050]: I1211 13:52:21.256530 5050 generic.go:334] "Generic (PLEG): container finished" podID="3069e571-2923-44a7-ae85-7cc7e64991ef" containerID="5f39bbd36b3950a89dc4d6758df21536feb19af0dc144cad266650450065aaca" exitCode=0 Dec 11 13:52:21 crc kubenswrapper[5050]: I1211 13:52:21.256628 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bvjdq" event={"ID":"3069e571-2923-44a7-ae85-7cc7e64991ef","Type":"ContainerDied","Data":"5f39bbd36b3950a89dc4d6758df21536feb19af0dc144cad266650450065aaca"} Dec 11 13:52:21 crc kubenswrapper[5050]: I1211 13:52:21.258644 5050 generic.go:334] "Generic (PLEG): container finished" podID="d40e12c4-8331-453c-b20b-5bbd5e3c2a9c" containerID="a26e87b7916b4cf910405aa4d57ebae142c78b0b630a03eb23a81d830c9750ce" exitCode=0 Dec 11 13:52:21 crc kubenswrapper[5050]: I1211 13:52:21.258676 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-skz5v" event={"ID":"d40e12c4-8331-453c-b20b-5bbd5e3c2a9c","Type":"ContainerDied","Data":"a26e87b7916b4cf910405aa4d57ebae142c78b0b630a03eb23a81d830c9750ce"} Dec 11 13:52:21 crc kubenswrapper[5050]: I1211 13:52:21.681519 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bvjdq" Dec 11 13:52:21 crc kubenswrapper[5050]: I1211 13:52:21.709440 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-767f6d799d-cv7mn"] Dec 11 13:52:21 crc kubenswrapper[5050]: E1211 13:52:21.709668 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3069e571-2923-44a7-ae85-7cc7e64991ef" containerName="route-controller-manager" Dec 11 13:52:21 crc kubenswrapper[5050]: I1211 13:52:21.709683 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="3069e571-2923-44a7-ae85-7cc7e64991ef" containerName="route-controller-manager" Dec 11 13:52:21 crc kubenswrapper[5050]: I1211 13:52:21.709818 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="3069e571-2923-44a7-ae85-7cc7e64991ef" containerName="route-controller-manager" Dec 11 13:52:21 crc kubenswrapper[5050]: I1211 13:52:21.710252 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-767f6d799d-cv7mn" Dec 11 13:52:21 crc kubenswrapper[5050]: I1211 13:52:21.726953 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-767f6d799d-cv7mn"] Dec 11 13:52:21 crc kubenswrapper[5050]: I1211 13:52:21.844722 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3069e571-2923-44a7-ae85-7cc7e64991ef-serving-cert\") pod \"3069e571-2923-44a7-ae85-7cc7e64991ef\" (UID: \"3069e571-2923-44a7-ae85-7cc7e64991ef\") " Dec 11 13:52:21 crc kubenswrapper[5050]: I1211 13:52:21.844792 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3069e571-2923-44a7-ae85-7cc7e64991ef-config\") pod \"3069e571-2923-44a7-ae85-7cc7e64991ef\" (UID: \"3069e571-2923-44a7-ae85-7cc7e64991ef\") " Dec 11 13:52:21 crc kubenswrapper[5050]: I1211 13:52:21.844835 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3069e571-2923-44a7-ae85-7cc7e64991ef-client-ca\") pod \"3069e571-2923-44a7-ae85-7cc7e64991ef\" (UID: \"3069e571-2923-44a7-ae85-7cc7e64991ef\") " Dec 11 13:52:21 crc kubenswrapper[5050]: I1211 13:52:21.844880 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v57tk\" (UniqueName: \"kubernetes.io/projected/3069e571-2923-44a7-ae85-7cc7e64991ef-kube-api-access-v57tk\") pod \"3069e571-2923-44a7-ae85-7cc7e64991ef\" (UID: \"3069e571-2923-44a7-ae85-7cc7e64991ef\") " Dec 11 13:52:21 crc kubenswrapper[5050]: I1211 13:52:21.845280 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lhrrj\" (UniqueName: \"kubernetes.io/projected/5b7ceea3-4e92-46ee-81de-5b8f932144ad-kube-api-access-lhrrj\") pod \"route-controller-manager-767f6d799d-cv7mn\" (UID: \"5b7ceea3-4e92-46ee-81de-5b8f932144ad\") " pod="openshift-route-controller-manager/route-controller-manager-767f6d799d-cv7mn" Dec 11 13:52:21 crc kubenswrapper[5050]: I1211 13:52:21.845347 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b7ceea3-4e92-46ee-81de-5b8f932144ad-config\") pod \"route-controller-manager-767f6d799d-cv7mn\" (UID: \"5b7ceea3-4e92-46ee-81de-5b8f932144ad\") " pod="openshift-route-controller-manager/route-controller-manager-767f6d799d-cv7mn" Dec 11 13:52:21 crc kubenswrapper[5050]: I1211 13:52:21.845373 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5b7ceea3-4e92-46ee-81de-5b8f932144ad-client-ca\") pod \"route-controller-manager-767f6d799d-cv7mn\" (UID: \"5b7ceea3-4e92-46ee-81de-5b8f932144ad\") " pod="openshift-route-controller-manager/route-controller-manager-767f6d799d-cv7mn" Dec 11 13:52:21 crc kubenswrapper[5050]: I1211 13:52:21.845414 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5b7ceea3-4e92-46ee-81de-5b8f932144ad-serving-cert\") pod \"route-controller-manager-767f6d799d-cv7mn\" (UID: \"5b7ceea3-4e92-46ee-81de-5b8f932144ad\") " pod="openshift-route-controller-manager/route-controller-manager-767f6d799d-cv7mn" Dec 11 13:52:21 crc 
kubenswrapper[5050]: I1211 13:52:21.845727 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3069e571-2923-44a7-ae85-7cc7e64991ef-client-ca" (OuterVolumeSpecName: "client-ca") pod "3069e571-2923-44a7-ae85-7cc7e64991ef" (UID: "3069e571-2923-44a7-ae85-7cc7e64991ef"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 13:52:21 crc kubenswrapper[5050]: I1211 13:52:21.846381 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3069e571-2923-44a7-ae85-7cc7e64991ef-config" (OuterVolumeSpecName: "config") pod "3069e571-2923-44a7-ae85-7cc7e64991ef" (UID: "3069e571-2923-44a7-ae85-7cc7e64991ef"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 13:52:21 crc kubenswrapper[5050]: I1211 13:52:21.865277 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3069e571-2923-44a7-ae85-7cc7e64991ef-kube-api-access-v57tk" (OuterVolumeSpecName: "kube-api-access-v57tk") pod "3069e571-2923-44a7-ae85-7cc7e64991ef" (UID: "3069e571-2923-44a7-ae85-7cc7e64991ef"). InnerVolumeSpecName "kube-api-access-v57tk". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 13:52:21 crc kubenswrapper[5050]: I1211 13:52:21.865621 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3069e571-2923-44a7-ae85-7cc7e64991ef-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "3069e571-2923-44a7-ae85-7cc7e64991ef" (UID: "3069e571-2923-44a7-ae85-7cc7e64991ef"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 13:52:21 crc kubenswrapper[5050]: I1211 13:52:21.919693 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-skz5v" Dec 11 13:52:21 crc kubenswrapper[5050]: I1211 13:52:21.946700 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5b7ceea3-4e92-46ee-81de-5b8f932144ad-serving-cert\") pod \"route-controller-manager-767f6d799d-cv7mn\" (UID: \"5b7ceea3-4e92-46ee-81de-5b8f932144ad\") " pod="openshift-route-controller-manager/route-controller-manager-767f6d799d-cv7mn" Dec 11 13:52:21 crc kubenswrapper[5050]: I1211 13:52:21.946851 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lhrrj\" (UniqueName: \"kubernetes.io/projected/5b7ceea3-4e92-46ee-81de-5b8f932144ad-kube-api-access-lhrrj\") pod \"route-controller-manager-767f6d799d-cv7mn\" (UID: \"5b7ceea3-4e92-46ee-81de-5b8f932144ad\") " pod="openshift-route-controller-manager/route-controller-manager-767f6d799d-cv7mn" Dec 11 13:52:21 crc kubenswrapper[5050]: I1211 13:52:21.946902 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b7ceea3-4e92-46ee-81de-5b8f932144ad-config\") pod \"route-controller-manager-767f6d799d-cv7mn\" (UID: \"5b7ceea3-4e92-46ee-81de-5b8f932144ad\") " pod="openshift-route-controller-manager/route-controller-manager-767f6d799d-cv7mn" Dec 11 13:52:21 crc kubenswrapper[5050]: I1211 13:52:21.946923 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5b7ceea3-4e92-46ee-81de-5b8f932144ad-client-ca\") pod \"route-controller-manager-767f6d799d-cv7mn\" (UID: \"5b7ceea3-4e92-46ee-81de-5b8f932144ad\") " pod="openshift-route-controller-manager/route-controller-manager-767f6d799d-cv7mn" Dec 11 13:52:21 crc kubenswrapper[5050]: I1211 13:52:21.946965 5050 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3069e571-2923-44a7-ae85-7cc7e64991ef-config\") on node \"crc\" DevicePath \"\"" Dec 11 13:52:21 crc kubenswrapper[5050]: I1211 13:52:21.946976 5050 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3069e571-2923-44a7-ae85-7cc7e64991ef-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 11 13:52:21 crc kubenswrapper[5050]: I1211 13:52:21.946986 5050 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3069e571-2923-44a7-ae85-7cc7e64991ef-client-ca\") on node \"crc\" DevicePath \"\"" Dec 11 13:52:21 crc kubenswrapper[5050]: I1211 13:52:21.946995 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v57tk\" (UniqueName: \"kubernetes.io/projected/3069e571-2923-44a7-ae85-7cc7e64991ef-kube-api-access-v57tk\") on node \"crc\" DevicePath \"\"" Dec 11 13:52:21 crc kubenswrapper[5050]: I1211 13:52:21.947976 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5b7ceea3-4e92-46ee-81de-5b8f932144ad-client-ca\") pod \"route-controller-manager-767f6d799d-cv7mn\" (UID: \"5b7ceea3-4e92-46ee-81de-5b8f932144ad\") " pod="openshift-route-controller-manager/route-controller-manager-767f6d799d-cv7mn" Dec 11 13:52:21 crc kubenswrapper[5050]: I1211 13:52:21.948768 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b7ceea3-4e92-46ee-81de-5b8f932144ad-config\") pod 
\"route-controller-manager-767f6d799d-cv7mn\" (UID: \"5b7ceea3-4e92-46ee-81de-5b8f932144ad\") " pod="openshift-route-controller-manager/route-controller-manager-767f6d799d-cv7mn" Dec 11 13:52:21 crc kubenswrapper[5050]: I1211 13:52:21.951911 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5b7ceea3-4e92-46ee-81de-5b8f932144ad-serving-cert\") pod \"route-controller-manager-767f6d799d-cv7mn\" (UID: \"5b7ceea3-4e92-46ee-81de-5b8f932144ad\") " pod="openshift-route-controller-manager/route-controller-manager-767f6d799d-cv7mn" Dec 11 13:52:21 crc kubenswrapper[5050]: I1211 13:52:21.966943 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lhrrj\" (UniqueName: \"kubernetes.io/projected/5b7ceea3-4e92-46ee-81de-5b8f932144ad-kube-api-access-lhrrj\") pod \"route-controller-manager-767f6d799d-cv7mn\" (UID: \"5b7ceea3-4e92-46ee-81de-5b8f932144ad\") " pod="openshift-route-controller-manager/route-controller-manager-767f6d799d-cv7mn" Dec 11 13:52:22 crc kubenswrapper[5050]: I1211 13:52:22.027534 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-767f6d799d-cv7mn" Dec 11 13:52:22 crc kubenswrapper[5050]: I1211 13:52:22.047711 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k9bwx\" (UniqueName: \"kubernetes.io/projected/d40e12c4-8331-453c-b20b-5bbd5e3c2a9c-kube-api-access-k9bwx\") pod \"d40e12c4-8331-453c-b20b-5bbd5e3c2a9c\" (UID: \"d40e12c4-8331-453c-b20b-5bbd5e3c2a9c\") " Dec 11 13:52:22 crc kubenswrapper[5050]: I1211 13:52:22.047786 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d40e12c4-8331-453c-b20b-5bbd5e3c2a9c-serving-cert\") pod \"d40e12c4-8331-453c-b20b-5bbd5e3c2a9c\" (UID: \"d40e12c4-8331-453c-b20b-5bbd5e3c2a9c\") " Dec 11 13:52:22 crc kubenswrapper[5050]: I1211 13:52:22.047922 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d40e12c4-8331-453c-b20b-5bbd5e3c2a9c-client-ca\") pod \"d40e12c4-8331-453c-b20b-5bbd5e3c2a9c\" (UID: \"d40e12c4-8331-453c-b20b-5bbd5e3c2a9c\") " Dec 11 13:52:22 crc kubenswrapper[5050]: I1211 13:52:22.047947 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d40e12c4-8331-453c-b20b-5bbd5e3c2a9c-proxy-ca-bundles\") pod \"d40e12c4-8331-453c-b20b-5bbd5e3c2a9c\" (UID: \"d40e12c4-8331-453c-b20b-5bbd5e3c2a9c\") " Dec 11 13:52:22 crc kubenswrapper[5050]: I1211 13:52:22.047967 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d40e12c4-8331-453c-b20b-5bbd5e3c2a9c-config\") pod \"d40e12c4-8331-453c-b20b-5bbd5e3c2a9c\" (UID: \"d40e12c4-8331-453c-b20b-5bbd5e3c2a9c\") " Dec 11 13:52:22 crc kubenswrapper[5050]: I1211 13:52:22.049180 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d40e12c4-8331-453c-b20b-5bbd5e3c2a9c-client-ca" (OuterVolumeSpecName: "client-ca") pod "d40e12c4-8331-453c-b20b-5bbd5e3c2a9c" (UID: "d40e12c4-8331-453c-b20b-5bbd5e3c2a9c"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 13:52:22 crc kubenswrapper[5050]: I1211 13:52:22.049200 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d40e12c4-8331-453c-b20b-5bbd5e3c2a9c-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "d40e12c4-8331-453c-b20b-5bbd5e3c2a9c" (UID: "d40e12c4-8331-453c-b20b-5bbd5e3c2a9c"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 13:52:22 crc kubenswrapper[5050]: I1211 13:52:22.049263 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d40e12c4-8331-453c-b20b-5bbd5e3c2a9c-config" (OuterVolumeSpecName: "config") pod "d40e12c4-8331-453c-b20b-5bbd5e3c2a9c" (UID: "d40e12c4-8331-453c-b20b-5bbd5e3c2a9c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 13:52:22 crc kubenswrapper[5050]: I1211 13:52:22.051754 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d40e12c4-8331-453c-b20b-5bbd5e3c2a9c-kube-api-access-k9bwx" (OuterVolumeSpecName: "kube-api-access-k9bwx") pod "d40e12c4-8331-453c-b20b-5bbd5e3c2a9c" (UID: "d40e12c4-8331-453c-b20b-5bbd5e3c2a9c"). InnerVolumeSpecName "kube-api-access-k9bwx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 13:52:22 crc kubenswrapper[5050]: I1211 13:52:22.053343 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d40e12c4-8331-453c-b20b-5bbd5e3c2a9c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d40e12c4-8331-453c-b20b-5bbd5e3c2a9c" (UID: "d40e12c4-8331-453c-b20b-5bbd5e3c2a9c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 13:52:22 crc kubenswrapper[5050]: I1211 13:52:22.149598 5050 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d40e12c4-8331-453c-b20b-5bbd5e3c2a9c-client-ca\") on node \"crc\" DevicePath \"\"" Dec 11 13:52:22 crc kubenswrapper[5050]: I1211 13:52:22.149636 5050 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d40e12c4-8331-453c-b20b-5bbd5e3c2a9c-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 11 13:52:22 crc kubenswrapper[5050]: I1211 13:52:22.149647 5050 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d40e12c4-8331-453c-b20b-5bbd5e3c2a9c-config\") on node \"crc\" DevicePath \"\"" Dec 11 13:52:22 crc kubenswrapper[5050]: I1211 13:52:22.149658 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k9bwx\" (UniqueName: \"kubernetes.io/projected/d40e12c4-8331-453c-b20b-5bbd5e3c2a9c-kube-api-access-k9bwx\") on node \"crc\" DevicePath \"\"" Dec 11 13:52:22 crc kubenswrapper[5050]: I1211 13:52:22.149667 5050 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d40e12c4-8331-453c-b20b-5bbd5e3c2a9c-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 11 13:52:22 crc kubenswrapper[5050]: I1211 13:52:22.264264 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-skz5v" event={"ID":"d40e12c4-8331-453c-b20b-5bbd5e3c2a9c","Type":"ContainerDied","Data":"4751a1aa5a8b225202fd81acbddffdd316744052a0c04f6559cdb482970ed025"} Dec 11 13:52:22 crc kubenswrapper[5050]: I1211 13:52:22.264324 5050 
scope.go:117] "RemoveContainer" containerID="a26e87b7916b4cf910405aa4d57ebae142c78b0b630a03eb23a81d830c9750ce" Dec 11 13:52:22 crc kubenswrapper[5050]: I1211 13:52:22.264435 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-skz5v" Dec 11 13:52:22 crc kubenswrapper[5050]: I1211 13:52:22.271900 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bvjdq" event={"ID":"3069e571-2923-44a7-ae85-7cc7e64991ef","Type":"ContainerDied","Data":"6938ebda5d0b1451828f34f453f145f03f88f1ef22bd0f91990b2361fe972ee9"} Dec 11 13:52:22 crc kubenswrapper[5050]: I1211 13:52:22.272067 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-bvjdq" Dec 11 13:52:22 crc kubenswrapper[5050]: I1211 13:52:22.283419 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-767f6d799d-cv7mn"] Dec 11 13:52:22 crc kubenswrapper[5050]: I1211 13:52:22.301335 5050 scope.go:117] "RemoveContainer" containerID="5f39bbd36b3950a89dc4d6758df21536feb19af0dc144cad266650450065aaca" Dec 11 13:52:22 crc kubenswrapper[5050]: I1211 13:52:22.313171 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-skz5v"] Dec 11 13:52:22 crc kubenswrapper[5050]: I1211 13:52:22.326277 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-skz5v"] Dec 11 13:52:22 crc kubenswrapper[5050]: I1211 13:52:22.333129 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-bvjdq"] Dec 11 13:52:22 crc kubenswrapper[5050]: E1211 13:52:22.335795 5050 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3069e571_2923_44a7_ae85_7cc7e64991ef.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd40e12c4_8331_453c_b20b_5bbd5e3c2a9c.slice/crio-4751a1aa5a8b225202fd81acbddffdd316744052a0c04f6559cdb482970ed025\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3069e571_2923_44a7_ae85_7cc7e64991ef.slice/crio-6938ebda5d0b1451828f34f453f145f03f88f1ef22bd0f91990b2361fe972ee9\": RecentStats: unable to find data in memory cache]" Dec 11 13:52:22 crc kubenswrapper[5050]: I1211 13:52:22.337310 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-bvjdq"] Dec 11 13:52:22 crc kubenswrapper[5050]: I1211 13:52:22.558781 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-77wtp" Dec 11 13:52:22 crc kubenswrapper[5050]: I1211 13:52:22.559142 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-77wtp" Dec 11 13:52:22 crc kubenswrapper[5050]: I1211 13:52:22.601371 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-77wtp" Dec 11 13:52:23 crc kubenswrapper[5050]: I1211 13:52:23.040299 5050 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-7kjsc" Dec 11 13:52:23 crc kubenswrapper[5050]: I1211 13:52:23.041417 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-7kjsc" Dec 11 13:52:23 crc kubenswrapper[5050]: I1211 13:52:23.078444 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-7kjsc" Dec 11 13:52:23 crc kubenswrapper[5050]: I1211 13:52:23.279155 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-767f6d799d-cv7mn" event={"ID":"5b7ceea3-4e92-46ee-81de-5b8f932144ad","Type":"ContainerStarted","Data":"a7e786628a712403d39b639bbf553240366744f8bd8280155308fbff198ed283"} Dec 11 13:52:23 crc kubenswrapper[5050]: I1211 13:52:23.321162 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-77wtp" Dec 11 13:52:23 crc kubenswrapper[5050]: I1211 13:52:23.323651 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-7kjsc" Dec 11 13:52:23 crc kubenswrapper[5050]: I1211 13:52:23.553077 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3069e571-2923-44a7-ae85-7cc7e64991ef" path="/var/lib/kubelet/pods/3069e571-2923-44a7-ae85-7cc7e64991ef/volumes" Dec 11 13:52:23 crc kubenswrapper[5050]: I1211 13:52:23.553800 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d40e12c4-8331-453c-b20b-5bbd5e3c2a9c" path="/var/lib/kubelet/pods/d40e12c4-8331-453c-b20b-5bbd5e3c2a9c/volumes" Dec 11 13:52:23 crc kubenswrapper[5050]: I1211 13:52:23.807395 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-69cffd76bd-8bkp6"] Dec 11 13:52:23 crc kubenswrapper[5050]: E1211 13:52:23.808049 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d40e12c4-8331-453c-b20b-5bbd5e3c2a9c" containerName="controller-manager" Dec 11 13:52:23 crc kubenswrapper[5050]: I1211 13:52:23.808084 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="d40e12c4-8331-453c-b20b-5bbd5e3c2a9c" containerName="controller-manager" Dec 11 13:52:23 crc kubenswrapper[5050]: I1211 13:52:23.808272 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="d40e12c4-8331-453c-b20b-5bbd5e3c2a9c" containerName="controller-manager" Dec 11 13:52:23 crc kubenswrapper[5050]: I1211 13:52:23.808841 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-69cffd76bd-8bkp6" Dec 11 13:52:23 crc kubenswrapper[5050]: I1211 13:52:23.813250 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Dec 11 13:52:23 crc kubenswrapper[5050]: I1211 13:52:23.813250 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Dec 11 13:52:23 crc kubenswrapper[5050]: I1211 13:52:23.816297 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Dec 11 13:52:23 crc kubenswrapper[5050]: I1211 13:52:23.816475 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Dec 11 13:52:23 crc kubenswrapper[5050]: I1211 13:52:23.816913 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Dec 11 13:52:23 crc kubenswrapper[5050]: I1211 13:52:23.817130 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Dec 11 13:52:23 crc kubenswrapper[5050]: I1211 13:52:23.817963 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-69cffd76bd-8bkp6"] Dec 11 13:52:23 crc kubenswrapper[5050]: I1211 13:52:23.824514 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Dec 11 13:52:23 crc kubenswrapper[5050]: I1211 13:52:23.986812 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6a956942-e4db-4c66-a7c2-1c370c1569f4-client-ca\") pod \"controller-manager-69cffd76bd-8bkp6\" (UID: \"6a956942-e4db-4c66-a7c2-1c370c1569f4\") " pod="openshift-controller-manager/controller-manager-69cffd76bd-8bkp6" Dec 11 13:52:23 crc kubenswrapper[5050]: I1211 13:52:23.986863 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g22wd\" (UniqueName: \"kubernetes.io/projected/6a956942-e4db-4c66-a7c2-1c370c1569f4-kube-api-access-g22wd\") pod \"controller-manager-69cffd76bd-8bkp6\" (UID: \"6a956942-e4db-4c66-a7c2-1c370c1569f4\") " pod="openshift-controller-manager/controller-manager-69cffd76bd-8bkp6" Dec 11 13:52:23 crc kubenswrapper[5050]: I1211 13:52:23.986930 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a956942-e4db-4c66-a7c2-1c370c1569f4-config\") pod \"controller-manager-69cffd76bd-8bkp6\" (UID: \"6a956942-e4db-4c66-a7c2-1c370c1569f4\") " pod="openshift-controller-manager/controller-manager-69cffd76bd-8bkp6" Dec 11 13:52:23 crc kubenswrapper[5050]: I1211 13:52:23.986953 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6a956942-e4db-4c66-a7c2-1c370c1569f4-serving-cert\") pod \"controller-manager-69cffd76bd-8bkp6\" (UID: \"6a956942-e4db-4c66-a7c2-1c370c1569f4\") " pod="openshift-controller-manager/controller-manager-69cffd76bd-8bkp6" Dec 11 13:52:23 crc kubenswrapper[5050]: I1211 13:52:23.986969 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/6a956942-e4db-4c66-a7c2-1c370c1569f4-proxy-ca-bundles\") pod \"controller-manager-69cffd76bd-8bkp6\" (UID: \"6a956942-e4db-4c66-a7c2-1c370c1569f4\") " pod="openshift-controller-manager/controller-manager-69cffd76bd-8bkp6" Dec 11 13:52:24 crc kubenswrapper[5050]: I1211 13:52:24.088171 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6a956942-e4db-4c66-a7c2-1c370c1569f4-client-ca\") pod \"controller-manager-69cffd76bd-8bkp6\" (UID: \"6a956942-e4db-4c66-a7c2-1c370c1569f4\") " pod="openshift-controller-manager/controller-manager-69cffd76bd-8bkp6" Dec 11 13:52:24 crc kubenswrapper[5050]: I1211 13:52:24.089188 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g22wd\" (UniqueName: \"kubernetes.io/projected/6a956942-e4db-4c66-a7c2-1c370c1569f4-kube-api-access-g22wd\") pod \"controller-manager-69cffd76bd-8bkp6\" (UID: \"6a956942-e4db-4c66-a7c2-1c370c1569f4\") " pod="openshift-controller-manager/controller-manager-69cffd76bd-8bkp6" Dec 11 13:52:24 crc kubenswrapper[5050]: I1211 13:52:24.089358 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a956942-e4db-4c66-a7c2-1c370c1569f4-config\") pod \"controller-manager-69cffd76bd-8bkp6\" (UID: \"6a956942-e4db-4c66-a7c2-1c370c1569f4\") " pod="openshift-controller-manager/controller-manager-69cffd76bd-8bkp6" Dec 11 13:52:24 crc kubenswrapper[5050]: I1211 13:52:24.089458 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6a956942-e4db-4c66-a7c2-1c370c1569f4-serving-cert\") pod \"controller-manager-69cffd76bd-8bkp6\" (UID: \"6a956942-e4db-4c66-a7c2-1c370c1569f4\") " pod="openshift-controller-manager/controller-manager-69cffd76bd-8bkp6" Dec 11 13:52:24 crc kubenswrapper[5050]: I1211 13:52:24.089541 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6a956942-e4db-4c66-a7c2-1c370c1569f4-proxy-ca-bundles\") pod \"controller-manager-69cffd76bd-8bkp6\" (UID: \"6a956942-e4db-4c66-a7c2-1c370c1569f4\") " pod="openshift-controller-manager/controller-manager-69cffd76bd-8bkp6" Dec 11 13:52:24 crc kubenswrapper[5050]: I1211 13:52:24.089695 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6a956942-e4db-4c66-a7c2-1c370c1569f4-client-ca\") pod \"controller-manager-69cffd76bd-8bkp6\" (UID: \"6a956942-e4db-4c66-a7c2-1c370c1569f4\") " pod="openshift-controller-manager/controller-manager-69cffd76bd-8bkp6" Dec 11 13:52:24 crc kubenswrapper[5050]: I1211 13:52:24.090667 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6a956942-e4db-4c66-a7c2-1c370c1569f4-proxy-ca-bundles\") pod \"controller-manager-69cffd76bd-8bkp6\" (UID: \"6a956942-e4db-4c66-a7c2-1c370c1569f4\") " pod="openshift-controller-manager/controller-manager-69cffd76bd-8bkp6" Dec 11 13:52:24 crc kubenswrapper[5050]: I1211 13:52:24.091045 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a956942-e4db-4c66-a7c2-1c370c1569f4-config\") pod \"controller-manager-69cffd76bd-8bkp6\" (UID: \"6a956942-e4db-4c66-a7c2-1c370c1569f4\") " 
pod="openshift-controller-manager/controller-manager-69cffd76bd-8bkp6" Dec 11 13:52:24 crc kubenswrapper[5050]: I1211 13:52:24.103943 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6a956942-e4db-4c66-a7c2-1c370c1569f4-serving-cert\") pod \"controller-manager-69cffd76bd-8bkp6\" (UID: \"6a956942-e4db-4c66-a7c2-1c370c1569f4\") " pod="openshift-controller-manager/controller-manager-69cffd76bd-8bkp6" Dec 11 13:52:24 crc kubenswrapper[5050]: I1211 13:52:24.111951 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g22wd\" (UniqueName: \"kubernetes.io/projected/6a956942-e4db-4c66-a7c2-1c370c1569f4-kube-api-access-g22wd\") pod \"controller-manager-69cffd76bd-8bkp6\" (UID: \"6a956942-e4db-4c66-a7c2-1c370c1569f4\") " pod="openshift-controller-manager/controller-manager-69cffd76bd-8bkp6" Dec 11 13:52:24 crc kubenswrapper[5050]: I1211 13:52:24.126731 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-69cffd76bd-8bkp6" Dec 11 13:52:24 crc kubenswrapper[5050]: I1211 13:52:24.303586 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-69cffd76bd-8bkp6"] Dec 11 13:52:24 crc kubenswrapper[5050]: I1211 13:52:24.961651 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-sfsfn" Dec 11 13:52:24 crc kubenswrapper[5050]: I1211 13:52:24.961704 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-sfsfn" Dec 11 13:52:24 crc kubenswrapper[5050]: I1211 13:52:24.999527 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-sfsfn" Dec 11 13:52:25 crc kubenswrapper[5050]: I1211 13:52:25.295289 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-767f6d799d-cv7mn" event={"ID":"5b7ceea3-4e92-46ee-81de-5b8f932144ad","Type":"ContainerStarted","Data":"c5f542dfbbbe579335c7c9dd39dbf3a87a4d70edea814f820b53757fc41f7607"} Dec 11 13:52:25 crc kubenswrapper[5050]: I1211 13:52:25.295656 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-767f6d799d-cv7mn" Dec 11 13:52:25 crc kubenswrapper[5050]: I1211 13:52:25.296294 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-69cffd76bd-8bkp6" event={"ID":"6a956942-e4db-4c66-a7c2-1c370c1569f4","Type":"ContainerStarted","Data":"abbb98db2a4a22273080da7e82e528f04ecd9efd37eee52f71d3c5fe7d719895"} Dec 11 13:52:25 crc kubenswrapper[5050]: I1211 13:52:25.296349 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-69cffd76bd-8bkp6" event={"ID":"6a956942-e4db-4c66-a7c2-1c370c1569f4","Type":"ContainerStarted","Data":"8f09a787dc3d167589eaa29e14c00431f4f35f169b447ab2c21424b4bddc41c4"} Dec 11 13:52:25 crc kubenswrapper[5050]: I1211 13:52:25.302302 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-767f6d799d-cv7mn" Dec 11 13:52:25 crc kubenswrapper[5050]: I1211 13:52:25.327766 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-route-controller-manager/route-controller-manager-767f6d799d-cv7mn" podStartSLOduration=5.327738071 podStartE2EDuration="5.327738071s" podCreationTimestamp="2025-12-11 13:52:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 13:52:25.322945518 +0000 UTC m=+236.166668104" watchObservedRunningTime="2025-12-11 13:52:25.327738071 +0000 UTC m=+236.171460657" Dec 11 13:52:25 crc kubenswrapper[5050]: I1211 13:52:25.352414 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-69cffd76bd-8bkp6" podStartSLOduration=5.352395128 podStartE2EDuration="5.352395128s" podCreationTimestamp="2025-12-11 13:52:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 13:52:25.351700089 +0000 UTC m=+236.195422685" watchObservedRunningTime="2025-12-11 13:52:25.352395128 +0000 UTC m=+236.196117714" Dec 11 13:52:25 crc kubenswrapper[5050]: I1211 13:52:25.360242 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-sfsfn" Dec 11 13:52:26 crc kubenswrapper[5050]: I1211 13:52:26.301400 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-69cffd76bd-8bkp6" Dec 11 13:52:26 crc kubenswrapper[5050]: I1211 13:52:26.305478 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-69cffd76bd-8bkp6" Dec 11 13:52:31 crc kubenswrapper[5050]: I1211 13:52:31.556153 5050 trace.go:236] Trace[1365132251]: "Calculate volume metrics of catalog-content for pod openshift-marketplace/redhat-marketplace-77wtp" (11-Dec-2025 13:52:29.461) (total time: 2094ms): Dec 11 13:52:31 crc kubenswrapper[5050]: Trace[1365132251]: [2.094996513s] [2.094996513s] END Dec 11 13:52:31 crc kubenswrapper[5050]: I1211 13:52:31.564148 5050 trace.go:236] Trace[1288683877]: "Calculate volume metrics of catalog-content for pod openshift-marketplace/redhat-operators-7kjsc" (11-Dec-2025 13:52:29.461) (total time: 2103ms): Dec 11 13:52:31 crc kubenswrapper[5050]: Trace[1288683877]: [2.103054697s] [2.103054697s] END Dec 11 13:52:45 crc kubenswrapper[5050]: I1211 13:52:45.523799 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-f6wfx" podUID="79162d90-14dd-4df9-9bcd-10c2c666cae7" containerName="registry" containerID="cri-o://26e4ee24352e9ac39e8dce34a5a1fb157e7c079c8875c26bba5453d6ea044440" gracePeriod=30 Dec 11 13:52:46 crc kubenswrapper[5050]: I1211 13:52:46.015179 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-f6wfx" Dec 11 13:52:46 crc kubenswrapper[5050]: I1211 13:52:46.095002 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/79162d90-14dd-4df9-9bcd-10c2c666cae7-registry-certificates\") pod \"79162d90-14dd-4df9-9bcd-10c2c666cae7\" (UID: \"79162d90-14dd-4df9-9bcd-10c2c666cae7\") " Dec 11 13:52:46 crc kubenswrapper[5050]: I1211 13:52:46.095099 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/79162d90-14dd-4df9-9bcd-10c2c666cae7-bound-sa-token\") pod \"79162d90-14dd-4df9-9bcd-10c2c666cae7\" (UID: \"79162d90-14dd-4df9-9bcd-10c2c666cae7\") " Dec 11 13:52:46 crc kubenswrapper[5050]: I1211 13:52:46.095131 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-56fqp\" (UniqueName: \"kubernetes.io/projected/79162d90-14dd-4df9-9bcd-10c2c666cae7-kube-api-access-56fqp\") pod \"79162d90-14dd-4df9-9bcd-10c2c666cae7\" (UID: \"79162d90-14dd-4df9-9bcd-10c2c666cae7\") " Dec 11 13:52:46 crc kubenswrapper[5050]: I1211 13:52:46.095153 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/79162d90-14dd-4df9-9bcd-10c2c666cae7-installation-pull-secrets\") pod \"79162d90-14dd-4df9-9bcd-10c2c666cae7\" (UID: \"79162d90-14dd-4df9-9bcd-10c2c666cae7\") " Dec 11 13:52:46 crc kubenswrapper[5050]: I1211 13:52:46.095173 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/79162d90-14dd-4df9-9bcd-10c2c666cae7-trusted-ca\") pod \"79162d90-14dd-4df9-9bcd-10c2c666cae7\" (UID: \"79162d90-14dd-4df9-9bcd-10c2c666cae7\") " Dec 11 13:52:46 crc kubenswrapper[5050]: I1211 13:52:46.095392 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"79162d90-14dd-4df9-9bcd-10c2c666cae7\" (UID: \"79162d90-14dd-4df9-9bcd-10c2c666cae7\") " Dec 11 13:52:46 crc kubenswrapper[5050]: I1211 13:52:46.095419 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/79162d90-14dd-4df9-9bcd-10c2c666cae7-ca-trust-extracted\") pod \"79162d90-14dd-4df9-9bcd-10c2c666cae7\" (UID: \"79162d90-14dd-4df9-9bcd-10c2c666cae7\") " Dec 11 13:52:46 crc kubenswrapper[5050]: I1211 13:52:46.095491 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/79162d90-14dd-4df9-9bcd-10c2c666cae7-registry-tls\") pod \"79162d90-14dd-4df9-9bcd-10c2c666cae7\" (UID: \"79162d90-14dd-4df9-9bcd-10c2c666cae7\") " Dec 11 13:52:46 crc kubenswrapper[5050]: I1211 13:52:46.097859 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/79162d90-14dd-4df9-9bcd-10c2c666cae7-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "79162d90-14dd-4df9-9bcd-10c2c666cae7" (UID: "79162d90-14dd-4df9-9bcd-10c2c666cae7"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 13:52:46 crc kubenswrapper[5050]: I1211 13:52:46.098931 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/79162d90-14dd-4df9-9bcd-10c2c666cae7-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "79162d90-14dd-4df9-9bcd-10c2c666cae7" (UID: "79162d90-14dd-4df9-9bcd-10c2c666cae7"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 13:52:46 crc kubenswrapper[5050]: I1211 13:52:46.100852 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79162d90-14dd-4df9-9bcd-10c2c666cae7-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "79162d90-14dd-4df9-9bcd-10c2c666cae7" (UID: "79162d90-14dd-4df9-9bcd-10c2c666cae7"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 13:52:46 crc kubenswrapper[5050]: I1211 13:52:46.101740 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79162d90-14dd-4df9-9bcd-10c2c666cae7-kube-api-access-56fqp" (OuterVolumeSpecName: "kube-api-access-56fqp") pod "79162d90-14dd-4df9-9bcd-10c2c666cae7" (UID: "79162d90-14dd-4df9-9bcd-10c2c666cae7"). InnerVolumeSpecName "kube-api-access-56fqp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 13:52:46 crc kubenswrapper[5050]: I1211 13:52:46.107991 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79162d90-14dd-4df9-9bcd-10c2c666cae7-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "79162d90-14dd-4df9-9bcd-10c2c666cae7" (UID: "79162d90-14dd-4df9-9bcd-10c2c666cae7"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 13:52:46 crc kubenswrapper[5050]: I1211 13:52:46.109294 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "79162d90-14dd-4df9-9bcd-10c2c666cae7" (UID: "79162d90-14dd-4df9-9bcd-10c2c666cae7"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Dec 11 13:52:46 crc kubenswrapper[5050]: I1211 13:52:46.111494 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79162d90-14dd-4df9-9bcd-10c2c666cae7-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "79162d90-14dd-4df9-9bcd-10c2c666cae7" (UID: "79162d90-14dd-4df9-9bcd-10c2c666cae7"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 13:52:46 crc kubenswrapper[5050]: I1211 13:52:46.114988 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/79162d90-14dd-4df9-9bcd-10c2c666cae7-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "79162d90-14dd-4df9-9bcd-10c2c666cae7" (UID: "79162d90-14dd-4df9-9bcd-10c2c666cae7"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 13:52:46 crc kubenswrapper[5050]: I1211 13:52:46.197348 5050 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/79162d90-14dd-4df9-9bcd-10c2c666cae7-registry-tls\") on node \"crc\" DevicePath \"\"" Dec 11 13:52:46 crc kubenswrapper[5050]: I1211 13:52:46.197385 5050 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/79162d90-14dd-4df9-9bcd-10c2c666cae7-registry-certificates\") on node \"crc\" DevicePath \"\"" Dec 11 13:52:46 crc kubenswrapper[5050]: I1211 13:52:46.197396 5050 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/79162d90-14dd-4df9-9bcd-10c2c666cae7-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 11 13:52:46 crc kubenswrapper[5050]: I1211 13:52:46.197407 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-56fqp\" (UniqueName: \"kubernetes.io/projected/79162d90-14dd-4df9-9bcd-10c2c666cae7-kube-api-access-56fqp\") on node \"crc\" DevicePath \"\"" Dec 11 13:52:46 crc kubenswrapper[5050]: I1211 13:52:46.197417 5050 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/79162d90-14dd-4df9-9bcd-10c2c666cae7-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Dec 11 13:52:46 crc kubenswrapper[5050]: I1211 13:52:46.197424 5050 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/79162d90-14dd-4df9-9bcd-10c2c666cae7-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 11 13:52:46 crc kubenswrapper[5050]: I1211 13:52:46.197432 5050 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/79162d90-14dd-4df9-9bcd-10c2c666cae7-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Dec 11 13:52:46 crc kubenswrapper[5050]: I1211 13:52:46.408685 5050 generic.go:334] "Generic (PLEG): container finished" podID="79162d90-14dd-4df9-9bcd-10c2c666cae7" containerID="26e4ee24352e9ac39e8dce34a5a1fb157e7c079c8875c26bba5453d6ea044440" exitCode=0 Dec 11 13:52:46 crc kubenswrapper[5050]: I1211 13:52:46.408759 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-f6wfx" event={"ID":"79162d90-14dd-4df9-9bcd-10c2c666cae7","Type":"ContainerDied","Data":"26e4ee24352e9ac39e8dce34a5a1fb157e7c079c8875c26bba5453d6ea044440"} Dec 11 13:52:46 crc kubenswrapper[5050]: I1211 13:52:46.408793 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-f6wfx" Dec 11 13:52:46 crc kubenswrapper[5050]: I1211 13:52:46.408814 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-f6wfx" event={"ID":"79162d90-14dd-4df9-9bcd-10c2c666cae7","Type":"ContainerDied","Data":"d3e34f50bdd8a4b52036aa7d833fa84972a452c54ee684cc847577f254bfe6e8"} Dec 11 13:52:46 crc kubenswrapper[5050]: I1211 13:52:46.408837 5050 scope.go:117] "RemoveContainer" containerID="26e4ee24352e9ac39e8dce34a5a1fb157e7c079c8875c26bba5453d6ea044440" Dec 11 13:52:46 crc kubenswrapper[5050]: I1211 13:52:46.429173 5050 scope.go:117] "RemoveContainer" containerID="26e4ee24352e9ac39e8dce34a5a1fb157e7c079c8875c26bba5453d6ea044440" Dec 11 13:52:46 crc kubenswrapper[5050]: E1211 13:52:46.429728 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"26e4ee24352e9ac39e8dce34a5a1fb157e7c079c8875c26bba5453d6ea044440\": container with ID starting with 26e4ee24352e9ac39e8dce34a5a1fb157e7c079c8875c26bba5453d6ea044440 not found: ID does not exist" containerID="26e4ee24352e9ac39e8dce34a5a1fb157e7c079c8875c26bba5453d6ea044440" Dec 11 13:52:46 crc kubenswrapper[5050]: I1211 13:52:46.429795 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"26e4ee24352e9ac39e8dce34a5a1fb157e7c079c8875c26bba5453d6ea044440"} err="failed to get container status \"26e4ee24352e9ac39e8dce34a5a1fb157e7c079c8875c26bba5453d6ea044440\": rpc error: code = NotFound desc = could not find container \"26e4ee24352e9ac39e8dce34a5a1fb157e7c079c8875c26bba5453d6ea044440\": container with ID starting with 26e4ee24352e9ac39e8dce34a5a1fb157e7c079c8875c26bba5453d6ea044440 not found: ID does not exist" Dec 11 13:52:46 crc kubenswrapper[5050]: I1211 13:52:46.447316 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-f6wfx"] Dec 11 13:52:46 crc kubenswrapper[5050]: I1211 13:52:46.455366 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-f6wfx"] Dec 11 13:52:47 crc kubenswrapper[5050]: I1211 13:52:47.552939 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="79162d90-14dd-4df9-9bcd-10c2c666cae7" path="/var/lib/kubelet/pods/79162d90-14dd-4df9-9bcd-10c2c666cae7/volumes" Dec 11 13:52:50 crc kubenswrapper[5050]: I1211 13:52:50.890567 5050 patch_prober.go:28] interesting pod/image-registry-697d97f7c8-f6wfx container/registry namespace/openshift-image-registry: Readiness probe status=failure output="Get \"https://10.217.0.36:5000/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 11 13:52:50 crc kubenswrapper[5050]: I1211 13:52:50.890956 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-image-registry/image-registry-697d97f7c8-f6wfx" podUID="79162d90-14dd-4df9-9bcd-10c2c666cae7" containerName="registry" probeResult="failure" output="Get \"https://10.217.0.36:5000/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 11 13:54:40 crc kubenswrapper[5050]: I1211 13:54:40.796248 5050 patch_prober.go:28] interesting pod/machine-config-daemon-wcb2s container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 13:54:40 crc kubenswrapper[5050]: I1211 13:54:40.796720 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 13:55:10 crc kubenswrapper[5050]: I1211 13:55:10.796170 5050 patch_prober.go:28] interesting pod/machine-config-daemon-wcb2s container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 13:55:10 crc kubenswrapper[5050]: I1211 13:55:10.796957 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 13:55:40 crc kubenswrapper[5050]: I1211 13:55:40.797170 5050 patch_prober.go:28] interesting pod/machine-config-daemon-wcb2s container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 13:55:40 crc kubenswrapper[5050]: I1211 13:55:40.797808 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 13:55:40 crc kubenswrapper[5050]: I1211 13:55:40.797876 5050 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" Dec 11 13:55:40 crc kubenswrapper[5050]: I1211 13:55:40.798661 5050 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a5e8cac0339bdaff46aa1a1eb16e7989125087ebc73eb9a1b34470494c0f3f9d"} pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 11 13:55:40 crc kubenswrapper[5050]: I1211 13:55:40.798744 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" containerName="machine-config-daemon" containerID="cri-o://a5e8cac0339bdaff46aa1a1eb16e7989125087ebc73eb9a1b34470494c0f3f9d" gracePeriod=600 Dec 11 13:55:41 crc kubenswrapper[5050]: I1211 13:55:41.302353 5050 generic.go:334] "Generic (PLEG): container finished" podID="7e849b2e-7cd7-4e49-acd2-deab139c699a" containerID="a5e8cac0339bdaff46aa1a1eb16e7989125087ebc73eb9a1b34470494c0f3f9d" exitCode=0 Dec 11 13:55:41 crc kubenswrapper[5050]: I1211 13:55:41.302434 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" 
event={"ID":"7e849b2e-7cd7-4e49-acd2-deab139c699a","Type":"ContainerDied","Data":"a5e8cac0339bdaff46aa1a1eb16e7989125087ebc73eb9a1b34470494c0f3f9d"} Dec 11 13:55:41 crc kubenswrapper[5050]: I1211 13:55:41.302928 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" event={"ID":"7e849b2e-7cd7-4e49-acd2-deab139c699a","Type":"ContainerStarted","Data":"6bf16e768e2e41dab60a5957da98ce41ef5df422f328d1b4578bc9e09da04537"} Dec 11 13:55:41 crc kubenswrapper[5050]: I1211 13:55:41.302952 5050 scope.go:117] "RemoveContainer" containerID="dd5e3efb6c32fb9d9f76a12a8e8a2e6fcb32ad3cbf663ac6d264ea7b3f858008" Dec 11 13:58:10 crc kubenswrapper[5050]: I1211 13:58:10.796218 5050 patch_prober.go:28] interesting pod/machine-config-daemon-wcb2s container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 13:58:10 crc kubenswrapper[5050]: I1211 13:58:10.796749 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 13:58:40 crc kubenswrapper[5050]: I1211 13:58:40.796958 5050 patch_prober.go:28] interesting pod/machine-config-daemon-wcb2s container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 13:58:40 crc kubenswrapper[5050]: I1211 13:58:40.797533 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 13:59:10 crc kubenswrapper[5050]: I1211 13:59:10.796703 5050 patch_prober.go:28] interesting pod/machine-config-daemon-wcb2s container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 13:59:10 crc kubenswrapper[5050]: I1211 13:59:10.797300 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 13:59:10 crc kubenswrapper[5050]: I1211 13:59:10.797356 5050 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" Dec 11 13:59:10 crc kubenswrapper[5050]: I1211 13:59:10.798066 5050 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6bf16e768e2e41dab60a5957da98ce41ef5df422f328d1b4578bc9e09da04537"} pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 11 
13:59:10 crc kubenswrapper[5050]: I1211 13:59:10.798129 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" containerName="machine-config-daemon" containerID="cri-o://6bf16e768e2e41dab60a5957da98ce41ef5df422f328d1b4578bc9e09da04537" gracePeriod=600 Dec 11 13:59:11 crc kubenswrapper[5050]: I1211 13:59:11.416047 5050 generic.go:334] "Generic (PLEG): container finished" podID="7e849b2e-7cd7-4e49-acd2-deab139c699a" containerID="6bf16e768e2e41dab60a5957da98ce41ef5df422f328d1b4578bc9e09da04537" exitCode=0 Dec 11 13:59:11 crc kubenswrapper[5050]: I1211 13:59:11.416119 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" event={"ID":"7e849b2e-7cd7-4e49-acd2-deab139c699a","Type":"ContainerDied","Data":"6bf16e768e2e41dab60a5957da98ce41ef5df422f328d1b4578bc9e09da04537"} Dec 11 13:59:11 crc kubenswrapper[5050]: I1211 13:59:11.416651 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" event={"ID":"7e849b2e-7cd7-4e49-acd2-deab139c699a","Type":"ContainerStarted","Data":"d1f524fdfc663274504f05a5df8397287ddcfa403493dc19698f5ccd6febdfcc"} Dec 11 13:59:11 crc kubenswrapper[5050]: I1211 13:59:11.416675 5050 scope.go:117] "RemoveContainer" containerID="a5e8cac0339bdaff46aa1a1eb16e7989125087ebc73eb9a1b34470494c0f3f9d" Dec 11 13:59:19 crc kubenswrapper[5050]: I1211 13:59:19.604949 5050 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Dec 11 13:59:25 crc kubenswrapper[5050]: I1211 13:59:25.521954 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-q9k57"] Dec 11 13:59:25 crc kubenswrapper[5050]: I1211 13:59:25.522972 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-q9k57" podUID="ce1e7994-3a5d-488d-90a5-115ca4cb7cf3" containerName="ovn-controller" containerID="cri-o://ccf6b22e62a8e428060aa8c1c10e874c12d3410de016017c055dc65c2b66b963" gracePeriod=30 Dec 11 13:59:25 crc kubenswrapper[5050]: I1211 13:59:25.523104 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-q9k57" podUID="ce1e7994-3a5d-488d-90a5-115ca4cb7cf3" containerName="ovn-acl-logging" containerID="cri-o://b79cca188f2f2740a6f2ada09d6063ae085ca891409ef51df998ad18b121d465" gracePeriod=30 Dec 11 13:59:25 crc kubenswrapper[5050]: I1211 13:59:25.523107 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-q9k57" podUID="ce1e7994-3a5d-488d-90a5-115ca4cb7cf3" containerName="northd" containerID="cri-o://a3155fb448e8d746437d2814848db17f5b9f4928a65b8d2d5b6554ae4acf4ae7" gracePeriod=30 Dec 11 13:59:25 crc kubenswrapper[5050]: I1211 13:59:25.523127 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-q9k57" podUID="ce1e7994-3a5d-488d-90a5-115ca4cb7cf3" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://c2ba379761001eca9453261ca94fd9bcb5a9b85cec51acadd96ce43a4be35f43" gracePeriod=30 Dec 11 13:59:25 crc kubenswrapper[5050]: I1211 13:59:25.523206 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-q9k57" 
podUID="ce1e7994-3a5d-488d-90a5-115ca4cb7cf3" containerName="nbdb" containerID="cri-o://5767ff2d50657f2ed1fd5f7fa8bee945962a258ca0ec0507943a5a7295f65efb" gracePeriod=30 Dec 11 13:59:25 crc kubenswrapper[5050]: I1211 13:59:25.523233 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-q9k57" podUID="ce1e7994-3a5d-488d-90a5-115ca4cb7cf3" containerName="sbdb" containerID="cri-o://0d6dc0aec5873226f4eaa2bfbe08e77f854d42624aa47ffb8403123b614fcd9f" gracePeriod=30 Dec 11 13:59:25 crc kubenswrapper[5050]: I1211 13:59:25.523171 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-q9k57" podUID="ce1e7994-3a5d-488d-90a5-115ca4cb7cf3" containerName="kube-rbac-proxy-node" containerID="cri-o://ab073247dca519f0c76d97e6a5a6e3de4ef0c58431d92d503a9c6494d9e2316a" gracePeriod=30 Dec 11 13:59:25 crc kubenswrapper[5050]: I1211 13:59:25.561507 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-q9k57" podUID="ce1e7994-3a5d-488d-90a5-115ca4cb7cf3" containerName="ovnkube-controller" containerID="cri-o://8af0c7f049f1a518d307999685e3bee830df9eb2f0c799eb94c8acaa62f548df" gracePeriod=30 Dec 11 13:59:25 crc kubenswrapper[5050]: I1211 13:59:25.861270 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-q9k57_ce1e7994-3a5d-488d-90a5-115ca4cb7cf3/ovn-acl-logging/0.log" Dec 11 13:59:25 crc kubenswrapper[5050]: I1211 13:59:25.862064 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-q9k57_ce1e7994-3a5d-488d-90a5-115ca4cb7cf3/ovn-controller/0.log" Dec 11 13:59:25 crc kubenswrapper[5050]: I1211 13:59:25.862542 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-q9k57" Dec 11 13:59:25 crc kubenswrapper[5050]: I1211 13:59:25.914727 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-bfxvp"] Dec 11 13:59:25 crc kubenswrapper[5050]: E1211 13:59:25.915174 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce1e7994-3a5d-488d-90a5-115ca4cb7cf3" containerName="ovn-acl-logging" Dec 11 13:59:25 crc kubenswrapper[5050]: I1211 13:59:25.915263 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce1e7994-3a5d-488d-90a5-115ca4cb7cf3" containerName="ovn-acl-logging" Dec 11 13:59:25 crc kubenswrapper[5050]: E1211 13:59:25.915357 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79162d90-14dd-4df9-9bcd-10c2c666cae7" containerName="registry" Dec 11 13:59:25 crc kubenswrapper[5050]: I1211 13:59:25.915431 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="79162d90-14dd-4df9-9bcd-10c2c666cae7" containerName="registry" Dec 11 13:59:25 crc kubenswrapper[5050]: E1211 13:59:25.915504 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce1e7994-3a5d-488d-90a5-115ca4cb7cf3" containerName="sbdb" Dec 11 13:59:25 crc kubenswrapper[5050]: I1211 13:59:25.915629 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce1e7994-3a5d-488d-90a5-115ca4cb7cf3" containerName="sbdb" Dec 11 13:59:25 crc kubenswrapper[5050]: E1211 13:59:25.915705 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce1e7994-3a5d-488d-90a5-115ca4cb7cf3" containerName="northd" Dec 11 13:59:25 crc kubenswrapper[5050]: I1211 13:59:25.915768 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce1e7994-3a5d-488d-90a5-115ca4cb7cf3" containerName="northd" Dec 11 13:59:25 crc kubenswrapper[5050]: E1211 13:59:25.915836 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce1e7994-3a5d-488d-90a5-115ca4cb7cf3" containerName="ovn-controller" Dec 11 13:59:25 crc kubenswrapper[5050]: I1211 13:59:25.915900 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce1e7994-3a5d-488d-90a5-115ca4cb7cf3" containerName="ovn-controller" Dec 11 13:59:25 crc kubenswrapper[5050]: E1211 13:59:25.915971 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce1e7994-3a5d-488d-90a5-115ca4cb7cf3" containerName="nbdb" Dec 11 13:59:25 crc kubenswrapper[5050]: I1211 13:59:25.916062 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce1e7994-3a5d-488d-90a5-115ca4cb7cf3" containerName="nbdb" Dec 11 13:59:25 crc kubenswrapper[5050]: E1211 13:59:25.916143 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce1e7994-3a5d-488d-90a5-115ca4cb7cf3" containerName="ovnkube-controller" Dec 11 13:59:25 crc kubenswrapper[5050]: I1211 13:59:25.916207 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce1e7994-3a5d-488d-90a5-115ca4cb7cf3" containerName="ovnkube-controller" Dec 11 13:59:25 crc kubenswrapper[5050]: E1211 13:59:25.916271 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce1e7994-3a5d-488d-90a5-115ca4cb7cf3" containerName="kubecfg-setup" Dec 11 13:59:25 crc kubenswrapper[5050]: I1211 13:59:25.916335 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce1e7994-3a5d-488d-90a5-115ca4cb7cf3" containerName="kubecfg-setup" Dec 11 13:59:25 crc kubenswrapper[5050]: E1211 13:59:25.916395 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce1e7994-3a5d-488d-90a5-115ca4cb7cf3" 
containerName="kube-rbac-proxy-ovn-metrics" Dec 11 13:59:25 crc kubenswrapper[5050]: I1211 13:59:25.916460 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce1e7994-3a5d-488d-90a5-115ca4cb7cf3" containerName="kube-rbac-proxy-ovn-metrics" Dec 11 13:59:25 crc kubenswrapper[5050]: E1211 13:59:25.916527 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce1e7994-3a5d-488d-90a5-115ca4cb7cf3" containerName="kube-rbac-proxy-node" Dec 11 13:59:25 crc kubenswrapper[5050]: I1211 13:59:25.916589 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce1e7994-3a5d-488d-90a5-115ca4cb7cf3" containerName="kube-rbac-proxy-node" Dec 11 13:59:25 crc kubenswrapper[5050]: I1211 13:59:25.916758 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce1e7994-3a5d-488d-90a5-115ca4cb7cf3" containerName="kube-rbac-proxy-ovn-metrics" Dec 11 13:59:25 crc kubenswrapper[5050]: I1211 13:59:25.916824 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce1e7994-3a5d-488d-90a5-115ca4cb7cf3" containerName="ovn-controller" Dec 11 13:59:25 crc kubenswrapper[5050]: I1211 13:59:25.916904 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce1e7994-3a5d-488d-90a5-115ca4cb7cf3" containerName="northd" Dec 11 13:59:25 crc kubenswrapper[5050]: I1211 13:59:25.916973 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce1e7994-3a5d-488d-90a5-115ca4cb7cf3" containerName="kube-rbac-proxy-node" Dec 11 13:59:25 crc kubenswrapper[5050]: I1211 13:59:25.917067 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce1e7994-3a5d-488d-90a5-115ca4cb7cf3" containerName="nbdb" Dec 11 13:59:25 crc kubenswrapper[5050]: I1211 13:59:25.917133 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce1e7994-3a5d-488d-90a5-115ca4cb7cf3" containerName="ovnkube-controller" Dec 11 13:59:25 crc kubenswrapper[5050]: I1211 13:59:25.917197 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce1e7994-3a5d-488d-90a5-115ca4cb7cf3" containerName="sbdb" Dec 11 13:59:25 crc kubenswrapper[5050]: I1211 13:59:25.917260 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce1e7994-3a5d-488d-90a5-115ca4cb7cf3" containerName="ovn-acl-logging" Dec 11 13:59:25 crc kubenswrapper[5050]: I1211 13:59:25.917324 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="79162d90-14dd-4df9-9bcd-10c2c666cae7" containerName="registry" Dec 11 13:59:25 crc kubenswrapper[5050]: I1211 13:59:25.919658 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-bfxvp" Dec 11 13:59:25 crc kubenswrapper[5050]: I1211 13:59:25.931042 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/ce1e7994-3a5d-488d-90a5-115ca4cb7cf3-host-slash\") pod \"ce1e7994-3a5d-488d-90a5-115ca4cb7cf3\" (UID: \"ce1e7994-3a5d-488d-90a5-115ca4cb7cf3\") " Dec 11 13:59:25 crc kubenswrapper[5050]: I1211 13:59:25.931084 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/ce1e7994-3a5d-488d-90a5-115ca4cb7cf3-systemd-units\") pod \"ce1e7994-3a5d-488d-90a5-115ca4cb7cf3\" (UID: \"ce1e7994-3a5d-488d-90a5-115ca4cb7cf3\") " Dec 11 13:59:25 crc kubenswrapper[5050]: I1211 13:59:25.931122 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/ce1e7994-3a5d-488d-90a5-115ca4cb7cf3-run-systemd\") pod \"ce1e7994-3a5d-488d-90a5-115ca4cb7cf3\" (UID: \"ce1e7994-3a5d-488d-90a5-115ca4cb7cf3\") " Dec 11 13:59:25 crc kubenswrapper[5050]: I1211 13:59:25.931137 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ce1e7994-3a5d-488d-90a5-115ca4cb7cf3-host-run-netns\") pod \"ce1e7994-3a5d-488d-90a5-115ca4cb7cf3\" (UID: \"ce1e7994-3a5d-488d-90a5-115ca4cb7cf3\") " Dec 11 13:59:25 crc kubenswrapper[5050]: I1211 13:59:25.931167 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ce1e7994-3a5d-488d-90a5-115ca4cb7cf3-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ce1e7994-3a5d-488d-90a5-115ca4cb7cf3\" (UID: \"ce1e7994-3a5d-488d-90a5-115ca4cb7cf3\") " Dec 11 13:59:25 crc kubenswrapper[5050]: I1211 13:59:25.931193 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ce1e7994-3a5d-488d-90a5-115ca4cb7cf3-host-cni-bin\") pod \"ce1e7994-3a5d-488d-90a5-115ca4cb7cf3\" (UID: \"ce1e7994-3a5d-488d-90a5-115ca4cb7cf3\") " Dec 11 13:59:25 crc kubenswrapper[5050]: I1211 13:59:25.931208 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ce1e7994-3a5d-488d-90a5-115ca4cb7cf3-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "ce1e7994-3a5d-488d-90a5-115ca4cb7cf3" (UID: "ce1e7994-3a5d-488d-90a5-115ca4cb7cf3"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 13:59:25 crc kubenswrapper[5050]: I1211 13:59:25.931235 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ce1e7994-3a5d-488d-90a5-115ca4cb7cf3-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "ce1e7994-3a5d-488d-90a5-115ca4cb7cf3" (UID: "ce1e7994-3a5d-488d-90a5-115ca4cb7cf3"). InnerVolumeSpecName "host-run-netns". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 13:59:25 crc kubenswrapper[5050]: I1211 13:59:25.931228 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ce1e7994-3a5d-488d-90a5-115ca4cb7cf3-etc-openvswitch\") pod \"ce1e7994-3a5d-488d-90a5-115ca4cb7cf3\" (UID: \"ce1e7994-3a5d-488d-90a5-115ca4cb7cf3\") " Dec 11 13:59:25 crc kubenswrapper[5050]: I1211 13:59:25.931223 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ce1e7994-3a5d-488d-90a5-115ca4cb7cf3-host-slash" (OuterVolumeSpecName: "host-slash") pod "ce1e7994-3a5d-488d-90a5-115ca4cb7cf3" (UID: "ce1e7994-3a5d-488d-90a5-115ca4cb7cf3"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 13:59:25 crc kubenswrapper[5050]: I1211 13:59:25.931284 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ce1e7994-3a5d-488d-90a5-115ca4cb7cf3-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "ce1e7994-3a5d-488d-90a5-115ca4cb7cf3" (UID: "ce1e7994-3a5d-488d-90a5-115ca4cb7cf3"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 13:59:25 crc kubenswrapper[5050]: I1211 13:59:25.931305 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/ce1e7994-3a5d-488d-90a5-115ca4cb7cf3-node-log\") pod \"ce1e7994-3a5d-488d-90a5-115ca4cb7cf3\" (UID: \"ce1e7994-3a5d-488d-90a5-115ca4cb7cf3\") " Dec 11 13:59:25 crc kubenswrapper[5050]: I1211 13:59:25.931393 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ce1e7994-3a5d-488d-90a5-115ca4cb7cf3-env-overrides\") pod \"ce1e7994-3a5d-488d-90a5-115ca4cb7cf3\" (UID: \"ce1e7994-3a5d-488d-90a5-115ca4cb7cf3\") " Dec 11 13:59:25 crc kubenswrapper[5050]: I1211 13:59:25.931427 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z872c\" (UniqueName: \"kubernetes.io/projected/ce1e7994-3a5d-488d-90a5-115ca4cb7cf3-kube-api-access-z872c\") pod \"ce1e7994-3a5d-488d-90a5-115ca4cb7cf3\" (UID: \"ce1e7994-3a5d-488d-90a5-115ca4cb7cf3\") " Dec 11 13:59:25 crc kubenswrapper[5050]: I1211 13:59:25.931478 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ce1e7994-3a5d-488d-90a5-115ca4cb7cf3-var-lib-openvswitch\") pod \"ce1e7994-3a5d-488d-90a5-115ca4cb7cf3\" (UID: \"ce1e7994-3a5d-488d-90a5-115ca4cb7cf3\") " Dec 11 13:59:25 crc kubenswrapper[5050]: I1211 13:59:25.931329 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ce1e7994-3a5d-488d-90a5-115ca4cb7cf3-node-log" (OuterVolumeSpecName: "node-log") pod "ce1e7994-3a5d-488d-90a5-115ca4cb7cf3" (UID: "ce1e7994-3a5d-488d-90a5-115ca4cb7cf3"). InnerVolumeSpecName "node-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 13:59:25 crc kubenswrapper[5050]: I1211 13:59:25.931264 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ce1e7994-3a5d-488d-90a5-115ca4cb7cf3-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "ce1e7994-3a5d-488d-90a5-115ca4cb7cf3" (UID: "ce1e7994-3a5d-488d-90a5-115ca4cb7cf3"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 13:59:25 crc kubenswrapper[5050]: I1211 13:59:25.931542 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ce1e7994-3a5d-488d-90a5-115ca4cb7cf3-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "ce1e7994-3a5d-488d-90a5-115ca4cb7cf3" (UID: "ce1e7994-3a5d-488d-90a5-115ca4cb7cf3"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 13:59:25 crc kubenswrapper[5050]: I1211 13:59:25.931277 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ce1e7994-3a5d-488d-90a5-115ca4cb7cf3-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "ce1e7994-3a5d-488d-90a5-115ca4cb7cf3" (UID: "ce1e7994-3a5d-488d-90a5-115ca4cb7cf3"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 13:59:25 crc kubenswrapper[5050]: I1211 13:59:25.931514 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/ce1e7994-3a5d-488d-90a5-115ca4cb7cf3-run-ovn\") pod \"ce1e7994-3a5d-488d-90a5-115ca4cb7cf3\" (UID: \"ce1e7994-3a5d-488d-90a5-115ca4cb7cf3\") " Dec 11 13:59:25 crc kubenswrapper[5050]: I1211 13:59:25.931575 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ce1e7994-3a5d-488d-90a5-115ca4cb7cf3-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "ce1e7994-3a5d-488d-90a5-115ca4cb7cf3" (UID: "ce1e7994-3a5d-488d-90a5-115ca4cb7cf3"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 13:59:25 crc kubenswrapper[5050]: I1211 13:59:25.931612 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ce1e7994-3a5d-488d-90a5-115ca4cb7cf3-run-openvswitch\") pod \"ce1e7994-3a5d-488d-90a5-115ca4cb7cf3\" (UID: \"ce1e7994-3a5d-488d-90a5-115ca4cb7cf3\") " Dec 11 13:59:25 crc kubenswrapper[5050]: I1211 13:59:25.931638 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ce1e7994-3a5d-488d-90a5-115ca4cb7cf3-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "ce1e7994-3a5d-488d-90a5-115ca4cb7cf3" (UID: "ce1e7994-3a5d-488d-90a5-115ca4cb7cf3"). InnerVolumeSpecName "run-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 13:59:25 crc kubenswrapper[5050]: I1211 13:59:25.931646 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ce1e7994-3a5d-488d-90a5-115ca4cb7cf3-ovnkube-config\") pod \"ce1e7994-3a5d-488d-90a5-115ca4cb7cf3\" (UID: \"ce1e7994-3a5d-488d-90a5-115ca4cb7cf3\") " Dec 11 13:59:25 crc kubenswrapper[5050]: I1211 13:59:25.931686 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ce1e7994-3a5d-488d-90a5-115ca4cb7cf3-ovn-node-metrics-cert\") pod \"ce1e7994-3a5d-488d-90a5-115ca4cb7cf3\" (UID: \"ce1e7994-3a5d-488d-90a5-115ca4cb7cf3\") " Dec 11 13:59:25 crc kubenswrapper[5050]: I1211 13:59:25.931730 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ce1e7994-3a5d-488d-90a5-115ca4cb7cf3-host-run-ovn-kubernetes\") pod \"ce1e7994-3a5d-488d-90a5-115ca4cb7cf3\" (UID: \"ce1e7994-3a5d-488d-90a5-115ca4cb7cf3\") " Dec 11 13:59:25 crc kubenswrapper[5050]: I1211 13:59:25.931758 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/ce1e7994-3a5d-488d-90a5-115ca4cb7cf3-ovnkube-script-lib\") pod \"ce1e7994-3a5d-488d-90a5-115ca4cb7cf3\" (UID: \"ce1e7994-3a5d-488d-90a5-115ca4cb7cf3\") " Dec 11 13:59:25 crc kubenswrapper[5050]: I1211 13:59:25.931786 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/ce1e7994-3a5d-488d-90a5-115ca4cb7cf3-log-socket\") pod \"ce1e7994-3a5d-488d-90a5-115ca4cb7cf3\" (UID: \"ce1e7994-3a5d-488d-90a5-115ca4cb7cf3\") " Dec 11 13:59:25 crc kubenswrapper[5050]: I1211 13:59:25.931820 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ce1e7994-3a5d-488d-90a5-115ca4cb7cf3-host-cni-netd\") pod \"ce1e7994-3a5d-488d-90a5-115ca4cb7cf3\" (UID: \"ce1e7994-3a5d-488d-90a5-115ca4cb7cf3\") " Dec 11 13:59:25 crc kubenswrapper[5050]: I1211 13:59:25.931850 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/ce1e7994-3a5d-488d-90a5-115ca4cb7cf3-host-kubelet\") pod \"ce1e7994-3a5d-488d-90a5-115ca4cb7cf3\" (UID: \"ce1e7994-3a5d-488d-90a5-115ca4cb7cf3\") " Dec 11 13:59:25 crc kubenswrapper[5050]: I1211 13:59:25.931992 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce1e7994-3a5d-488d-90a5-115ca4cb7cf3-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "ce1e7994-3a5d-488d-90a5-115ca4cb7cf3" (UID: "ce1e7994-3a5d-488d-90a5-115ca4cb7cf3"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 13:59:25 crc kubenswrapper[5050]: I1211 13:59:25.932149 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ce1e7994-3a5d-488d-90a5-115ca4cb7cf3-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "ce1e7994-3a5d-488d-90a5-115ca4cb7cf3" (UID: "ce1e7994-3a5d-488d-90a5-115ca4cb7cf3"). InnerVolumeSpecName "host-kubelet". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 13:59:25 crc kubenswrapper[5050]: I1211 13:59:25.932173 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ce1e7994-3a5d-488d-90a5-115ca4cb7cf3-log-socket" (OuterVolumeSpecName: "log-socket") pod "ce1e7994-3a5d-488d-90a5-115ca4cb7cf3" (UID: "ce1e7994-3a5d-488d-90a5-115ca4cb7cf3"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 13:59:25 crc kubenswrapper[5050]: I1211 13:59:25.932001 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce1e7994-3a5d-488d-90a5-115ca4cb7cf3-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "ce1e7994-3a5d-488d-90a5-115ca4cb7cf3" (UID: "ce1e7994-3a5d-488d-90a5-115ca4cb7cf3"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 13:59:25 crc kubenswrapper[5050]: I1211 13:59:25.932196 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ce1e7994-3a5d-488d-90a5-115ca4cb7cf3-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "ce1e7994-3a5d-488d-90a5-115ca4cb7cf3" (UID: "ce1e7994-3a5d-488d-90a5-115ca4cb7cf3"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 13:59:25 crc kubenswrapper[5050]: I1211 13:59:25.932249 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ce1e7994-3a5d-488d-90a5-115ca4cb7cf3-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "ce1e7994-3a5d-488d-90a5-115ca4cb7cf3" (UID: "ce1e7994-3a5d-488d-90a5-115ca4cb7cf3"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 13:59:25 crc kubenswrapper[5050]: I1211 13:59:25.932275 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce1e7994-3a5d-488d-90a5-115ca4cb7cf3-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "ce1e7994-3a5d-488d-90a5-115ca4cb7cf3" (UID: "ce1e7994-3a5d-488d-90a5-115ca4cb7cf3"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 13:59:25 crc kubenswrapper[5050]: I1211 13:59:25.932411 5050 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ce1e7994-3a5d-488d-90a5-115ca4cb7cf3-host-cni-netd\") on node \"crc\" DevicePath \"\"" Dec 11 13:59:25 crc kubenswrapper[5050]: I1211 13:59:25.932436 5050 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/ce1e7994-3a5d-488d-90a5-115ca4cb7cf3-host-kubelet\") on node \"crc\" DevicePath \"\"" Dec 11 13:59:25 crc kubenswrapper[5050]: I1211 13:59:25.932449 5050 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/ce1e7994-3a5d-488d-90a5-115ca4cb7cf3-host-slash\") on node \"crc\" DevicePath \"\"" Dec 11 13:59:25 crc kubenswrapper[5050]: I1211 13:59:25.932461 5050 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/ce1e7994-3a5d-488d-90a5-115ca4cb7cf3-systemd-units\") on node \"crc\" DevicePath \"\"" Dec 11 13:59:25 crc kubenswrapper[5050]: I1211 13:59:25.932472 5050 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ce1e7994-3a5d-488d-90a5-115ca4cb7cf3-host-run-netns\") on node \"crc\" DevicePath \"\"" Dec 11 13:59:25 crc kubenswrapper[5050]: I1211 13:59:25.932484 5050 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ce1e7994-3a5d-488d-90a5-115ca4cb7cf3-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Dec 11 13:59:25 crc kubenswrapper[5050]: I1211 13:59:25.932496 5050 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ce1e7994-3a5d-488d-90a5-115ca4cb7cf3-host-cni-bin\") on node \"crc\" DevicePath \"\"" Dec 11 13:59:25 crc kubenswrapper[5050]: I1211 13:59:25.932508 5050 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ce1e7994-3a5d-488d-90a5-115ca4cb7cf3-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 11 13:59:25 crc kubenswrapper[5050]: I1211 13:59:25.932519 5050 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/ce1e7994-3a5d-488d-90a5-115ca4cb7cf3-node-log\") on node \"crc\" DevicePath \"\"" Dec 11 13:59:25 crc kubenswrapper[5050]: I1211 13:59:25.932532 5050 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ce1e7994-3a5d-488d-90a5-115ca4cb7cf3-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 11 13:59:25 crc kubenswrapper[5050]: I1211 13:59:25.932545 5050 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ce1e7994-3a5d-488d-90a5-115ca4cb7cf3-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 11 13:59:25 crc kubenswrapper[5050]: I1211 13:59:25.932557 5050 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/ce1e7994-3a5d-488d-90a5-115ca4cb7cf3-run-ovn\") on node \"crc\" DevicePath \"\"" Dec 11 13:59:25 crc kubenswrapper[5050]: I1211 13:59:25.932568 5050 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ce1e7994-3a5d-488d-90a5-115ca4cb7cf3-run-openvswitch\") on node \"crc\" DevicePath 
\"\"" Dec 11 13:59:25 crc kubenswrapper[5050]: I1211 13:59:25.932581 5050 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ce1e7994-3a5d-488d-90a5-115ca4cb7cf3-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 11 13:59:25 crc kubenswrapper[5050]: I1211 13:59:25.932593 5050 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ce1e7994-3a5d-488d-90a5-115ca4cb7cf3-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Dec 11 13:59:25 crc kubenswrapper[5050]: I1211 13:59:25.932605 5050 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/ce1e7994-3a5d-488d-90a5-115ca4cb7cf3-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Dec 11 13:59:25 crc kubenswrapper[5050]: I1211 13:59:25.932616 5050 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/ce1e7994-3a5d-488d-90a5-115ca4cb7cf3-log-socket\") on node \"crc\" DevicePath \"\"" Dec 11 13:59:25 crc kubenswrapper[5050]: I1211 13:59:25.938404 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce1e7994-3a5d-488d-90a5-115ca4cb7cf3-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "ce1e7994-3a5d-488d-90a5-115ca4cb7cf3" (UID: "ce1e7994-3a5d-488d-90a5-115ca4cb7cf3"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 13:59:25 crc kubenswrapper[5050]: I1211 13:59:25.938475 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce1e7994-3a5d-488d-90a5-115ca4cb7cf3-kube-api-access-z872c" (OuterVolumeSpecName: "kube-api-access-z872c") pod "ce1e7994-3a5d-488d-90a5-115ca4cb7cf3" (UID: "ce1e7994-3a5d-488d-90a5-115ca4cb7cf3"). InnerVolumeSpecName "kube-api-access-z872c". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 13:59:25 crc kubenswrapper[5050]: I1211 13:59:25.954582 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ce1e7994-3a5d-488d-90a5-115ca4cb7cf3-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "ce1e7994-3a5d-488d-90a5-115ca4cb7cf3" (UID: "ce1e7994-3a5d-488d-90a5-115ca4cb7cf3"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.033324 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/a13dee96-3971-4939-9565-8b6f4507a197-systemd-units\") pod \"ovnkube-node-bfxvp\" (UID: \"a13dee96-3971-4939-9565-8b6f4507a197\") " pod="openshift-ovn-kubernetes/ovnkube-node-bfxvp" Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.033380 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/a13dee96-3971-4939-9565-8b6f4507a197-node-log\") pod \"ovnkube-node-bfxvp\" (UID: \"a13dee96-3971-4939-9565-8b6f4507a197\") " pod="openshift-ovn-kubernetes/ovnkube-node-bfxvp" Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.033395 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a13dee96-3971-4939-9565-8b6f4507a197-host-cni-bin\") pod \"ovnkube-node-bfxvp\" (UID: \"a13dee96-3971-4939-9565-8b6f4507a197\") " pod="openshift-ovn-kubernetes/ovnkube-node-bfxvp" Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.033418 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/a13dee96-3971-4939-9565-8b6f4507a197-host-kubelet\") pod \"ovnkube-node-bfxvp\" (UID: \"a13dee96-3971-4939-9565-8b6f4507a197\") " pod="openshift-ovn-kubernetes/ovnkube-node-bfxvp" Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.033451 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/a13dee96-3971-4939-9565-8b6f4507a197-log-socket\") pod \"ovnkube-node-bfxvp\" (UID: \"a13dee96-3971-4939-9565-8b6f4507a197\") " pod="openshift-ovn-kubernetes/ovnkube-node-bfxvp" Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.033476 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/a13dee96-3971-4939-9565-8b6f4507a197-run-systemd\") pod \"ovnkube-node-bfxvp\" (UID: \"a13dee96-3971-4939-9565-8b6f4507a197\") " pod="openshift-ovn-kubernetes/ovnkube-node-bfxvp" Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.033495 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a13dee96-3971-4939-9565-8b6f4507a197-var-lib-openvswitch\") pod \"ovnkube-node-bfxvp\" (UID: \"a13dee96-3971-4939-9565-8b6f4507a197\") " pod="openshift-ovn-kubernetes/ovnkube-node-bfxvp" Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.033514 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a13dee96-3971-4939-9565-8b6f4507a197-run-openvswitch\") pod \"ovnkube-node-bfxvp\" (UID: \"a13dee96-3971-4939-9565-8b6f4507a197\") " pod="openshift-ovn-kubernetes/ovnkube-node-bfxvp" Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.033532 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/a13dee96-3971-4939-9565-8b6f4507a197-host-slash\") pod \"ovnkube-node-bfxvp\" 
(UID: \"a13dee96-3971-4939-9565-8b6f4507a197\") " pod="openshift-ovn-kubernetes/ovnkube-node-bfxvp" Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.033549 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a13dee96-3971-4939-9565-8b6f4507a197-etc-openvswitch\") pod \"ovnkube-node-bfxvp\" (UID: \"a13dee96-3971-4939-9565-8b6f4507a197\") " pod="openshift-ovn-kubernetes/ovnkube-node-bfxvp" Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.033567 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a13dee96-3971-4939-9565-8b6f4507a197-ovn-node-metrics-cert\") pod \"ovnkube-node-bfxvp\" (UID: \"a13dee96-3971-4939-9565-8b6f4507a197\") " pod="openshift-ovn-kubernetes/ovnkube-node-bfxvp" Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.033585 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a13dee96-3971-4939-9565-8b6f4507a197-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-bfxvp\" (UID: \"a13dee96-3971-4939-9565-8b6f4507a197\") " pod="openshift-ovn-kubernetes/ovnkube-node-bfxvp" Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.033603 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bbblx\" (UniqueName: \"kubernetes.io/projected/a13dee96-3971-4939-9565-8b6f4507a197-kube-api-access-bbblx\") pod \"ovnkube-node-bfxvp\" (UID: \"a13dee96-3971-4939-9565-8b6f4507a197\") " pod="openshift-ovn-kubernetes/ovnkube-node-bfxvp" Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.033620 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a13dee96-3971-4939-9565-8b6f4507a197-host-cni-netd\") pod \"ovnkube-node-bfxvp\" (UID: \"a13dee96-3971-4939-9565-8b6f4507a197\") " pod="openshift-ovn-kubernetes/ovnkube-node-bfxvp" Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.033634 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/a13dee96-3971-4939-9565-8b6f4507a197-ovnkube-script-lib\") pod \"ovnkube-node-bfxvp\" (UID: \"a13dee96-3971-4939-9565-8b6f4507a197\") " pod="openshift-ovn-kubernetes/ovnkube-node-bfxvp" Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.033658 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a13dee96-3971-4939-9565-8b6f4507a197-ovnkube-config\") pod \"ovnkube-node-bfxvp\" (UID: \"a13dee96-3971-4939-9565-8b6f4507a197\") " pod="openshift-ovn-kubernetes/ovnkube-node-bfxvp" Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.033674 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a13dee96-3971-4939-9565-8b6f4507a197-env-overrides\") pod \"ovnkube-node-bfxvp\" (UID: \"a13dee96-3971-4939-9565-8b6f4507a197\") " pod="openshift-ovn-kubernetes/ovnkube-node-bfxvp" Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.033690 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/a13dee96-3971-4939-9565-8b6f4507a197-run-ovn\") pod \"ovnkube-node-bfxvp\" (UID: \"a13dee96-3971-4939-9565-8b6f4507a197\") " pod="openshift-ovn-kubernetes/ovnkube-node-bfxvp" Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.033710 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a13dee96-3971-4939-9565-8b6f4507a197-host-run-netns\") pod \"ovnkube-node-bfxvp\" (UID: \"a13dee96-3971-4939-9565-8b6f4507a197\") " pod="openshift-ovn-kubernetes/ovnkube-node-bfxvp" Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.033735 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a13dee96-3971-4939-9565-8b6f4507a197-host-run-ovn-kubernetes\") pod \"ovnkube-node-bfxvp\" (UID: \"a13dee96-3971-4939-9565-8b6f4507a197\") " pod="openshift-ovn-kubernetes/ovnkube-node-bfxvp" Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.033767 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z872c\" (UniqueName: \"kubernetes.io/projected/ce1e7994-3a5d-488d-90a5-115ca4cb7cf3-kube-api-access-z872c\") on node \"crc\" DevicePath \"\"" Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.033777 5050 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ce1e7994-3a5d-488d-90a5-115ca4cb7cf3-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.033789 5050 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/ce1e7994-3a5d-488d-90a5-115ca4cb7cf3-run-systemd\") on node \"crc\" DevicePath \"\"" Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.135418 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a13dee96-3971-4939-9565-8b6f4507a197-ovn-node-metrics-cert\") pod \"ovnkube-node-bfxvp\" (UID: \"a13dee96-3971-4939-9565-8b6f4507a197\") " pod="openshift-ovn-kubernetes/ovnkube-node-bfxvp" Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.135467 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a13dee96-3971-4939-9565-8b6f4507a197-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-bfxvp\" (UID: \"a13dee96-3971-4939-9565-8b6f4507a197\") " pod="openshift-ovn-kubernetes/ovnkube-node-bfxvp" Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.135491 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bbblx\" (UniqueName: \"kubernetes.io/projected/a13dee96-3971-4939-9565-8b6f4507a197-kube-api-access-bbblx\") pod \"ovnkube-node-bfxvp\" (UID: \"a13dee96-3971-4939-9565-8b6f4507a197\") " pod="openshift-ovn-kubernetes/ovnkube-node-bfxvp" Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.135511 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a13dee96-3971-4939-9565-8b6f4507a197-host-cni-netd\") pod \"ovnkube-node-bfxvp\" (UID: \"a13dee96-3971-4939-9565-8b6f4507a197\") " pod="openshift-ovn-kubernetes/ovnkube-node-bfxvp" Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 
13:59:26.135530 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/a13dee96-3971-4939-9565-8b6f4507a197-ovnkube-script-lib\") pod \"ovnkube-node-bfxvp\" (UID: \"a13dee96-3971-4939-9565-8b6f4507a197\") " pod="openshift-ovn-kubernetes/ovnkube-node-bfxvp" Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.135550 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a13dee96-3971-4939-9565-8b6f4507a197-ovnkube-config\") pod \"ovnkube-node-bfxvp\" (UID: \"a13dee96-3971-4939-9565-8b6f4507a197\") " pod="openshift-ovn-kubernetes/ovnkube-node-bfxvp" Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.135565 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a13dee96-3971-4939-9565-8b6f4507a197-env-overrides\") pod \"ovnkube-node-bfxvp\" (UID: \"a13dee96-3971-4939-9565-8b6f4507a197\") " pod="openshift-ovn-kubernetes/ovnkube-node-bfxvp" Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.135583 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/a13dee96-3971-4939-9565-8b6f4507a197-run-ovn\") pod \"ovnkube-node-bfxvp\" (UID: \"a13dee96-3971-4939-9565-8b6f4507a197\") " pod="openshift-ovn-kubernetes/ovnkube-node-bfxvp" Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.135599 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a13dee96-3971-4939-9565-8b6f4507a197-host-run-netns\") pod \"ovnkube-node-bfxvp\" (UID: \"a13dee96-3971-4939-9565-8b6f4507a197\") " pod="openshift-ovn-kubernetes/ovnkube-node-bfxvp" Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.135623 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a13dee96-3971-4939-9565-8b6f4507a197-host-run-ovn-kubernetes\") pod \"ovnkube-node-bfxvp\" (UID: \"a13dee96-3971-4939-9565-8b6f4507a197\") " pod="openshift-ovn-kubernetes/ovnkube-node-bfxvp" Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.135641 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/a13dee96-3971-4939-9565-8b6f4507a197-systemd-units\") pod \"ovnkube-node-bfxvp\" (UID: \"a13dee96-3971-4939-9565-8b6f4507a197\") " pod="openshift-ovn-kubernetes/ovnkube-node-bfxvp" Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.135657 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/a13dee96-3971-4939-9565-8b6f4507a197-node-log\") pod \"ovnkube-node-bfxvp\" (UID: \"a13dee96-3971-4939-9565-8b6f4507a197\") " pod="openshift-ovn-kubernetes/ovnkube-node-bfxvp" Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.135801 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a13dee96-3971-4939-9565-8b6f4507a197-host-cni-bin\") pod \"ovnkube-node-bfxvp\" (UID: \"a13dee96-3971-4939-9565-8b6f4507a197\") " pod="openshift-ovn-kubernetes/ovnkube-node-bfxvp" Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.135841 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/a13dee96-3971-4939-9565-8b6f4507a197-host-kubelet\") pod \"ovnkube-node-bfxvp\" (UID: \"a13dee96-3971-4939-9565-8b6f4507a197\") " pod="openshift-ovn-kubernetes/ovnkube-node-bfxvp" Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.135904 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/a13dee96-3971-4939-9565-8b6f4507a197-log-socket\") pod \"ovnkube-node-bfxvp\" (UID: \"a13dee96-3971-4939-9565-8b6f4507a197\") " pod="openshift-ovn-kubernetes/ovnkube-node-bfxvp" Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.135929 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/a13dee96-3971-4939-9565-8b6f4507a197-run-systemd\") pod \"ovnkube-node-bfxvp\" (UID: \"a13dee96-3971-4939-9565-8b6f4507a197\") " pod="openshift-ovn-kubernetes/ovnkube-node-bfxvp" Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.135948 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a13dee96-3971-4939-9565-8b6f4507a197-var-lib-openvswitch\") pod \"ovnkube-node-bfxvp\" (UID: \"a13dee96-3971-4939-9565-8b6f4507a197\") " pod="openshift-ovn-kubernetes/ovnkube-node-bfxvp" Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.135964 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a13dee96-3971-4939-9565-8b6f4507a197-run-openvswitch\") pod \"ovnkube-node-bfxvp\" (UID: \"a13dee96-3971-4939-9565-8b6f4507a197\") " pod="openshift-ovn-kubernetes/ovnkube-node-bfxvp" Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.135989 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/a13dee96-3971-4939-9565-8b6f4507a197-host-slash\") pod \"ovnkube-node-bfxvp\" (UID: \"a13dee96-3971-4939-9565-8b6f4507a197\") " pod="openshift-ovn-kubernetes/ovnkube-node-bfxvp" Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.136005 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a13dee96-3971-4939-9565-8b6f4507a197-etc-openvswitch\") pod \"ovnkube-node-bfxvp\" (UID: \"a13dee96-3971-4939-9565-8b6f4507a197\") " pod="openshift-ovn-kubernetes/ovnkube-node-bfxvp" Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.136062 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a13dee96-3971-4939-9565-8b6f4507a197-host-run-ovn-kubernetes\") pod \"ovnkube-node-bfxvp\" (UID: \"a13dee96-3971-4939-9565-8b6f4507a197\") " pod="openshift-ovn-kubernetes/ovnkube-node-bfxvp" Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.136083 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/a13dee96-3971-4939-9565-8b6f4507a197-systemd-units\") pod \"ovnkube-node-bfxvp\" (UID: \"a13dee96-3971-4939-9565-8b6f4507a197\") " pod="openshift-ovn-kubernetes/ovnkube-node-bfxvp" Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.136105 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/a13dee96-3971-4939-9565-8b6f4507a197-node-log\") pod 
\"ovnkube-node-bfxvp\" (UID: \"a13dee96-3971-4939-9565-8b6f4507a197\") " pod="openshift-ovn-kubernetes/ovnkube-node-bfxvp" Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.136124 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a13dee96-3971-4939-9565-8b6f4507a197-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-bfxvp\" (UID: \"a13dee96-3971-4939-9565-8b6f4507a197\") " pod="openshift-ovn-kubernetes/ovnkube-node-bfxvp" Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.136130 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a13dee96-3971-4939-9565-8b6f4507a197-host-cni-bin\") pod \"ovnkube-node-bfxvp\" (UID: \"a13dee96-3971-4939-9565-8b6f4507a197\") " pod="openshift-ovn-kubernetes/ovnkube-node-bfxvp" Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.136146 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/a13dee96-3971-4939-9565-8b6f4507a197-host-kubelet\") pod \"ovnkube-node-bfxvp\" (UID: \"a13dee96-3971-4939-9565-8b6f4507a197\") " pod="openshift-ovn-kubernetes/ovnkube-node-bfxvp" Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.136169 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/a13dee96-3971-4939-9565-8b6f4507a197-log-socket\") pod \"ovnkube-node-bfxvp\" (UID: \"a13dee96-3971-4939-9565-8b6f4507a197\") " pod="openshift-ovn-kubernetes/ovnkube-node-bfxvp" Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.136205 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/a13dee96-3971-4939-9565-8b6f4507a197-run-systemd\") pod \"ovnkube-node-bfxvp\" (UID: \"a13dee96-3971-4939-9565-8b6f4507a197\") " pod="openshift-ovn-kubernetes/ovnkube-node-bfxvp" Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.136224 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a13dee96-3971-4939-9565-8b6f4507a197-var-lib-openvswitch\") pod \"ovnkube-node-bfxvp\" (UID: \"a13dee96-3971-4939-9565-8b6f4507a197\") " pod="openshift-ovn-kubernetes/ovnkube-node-bfxvp" Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.136244 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a13dee96-3971-4939-9565-8b6f4507a197-run-openvswitch\") pod \"ovnkube-node-bfxvp\" (UID: \"a13dee96-3971-4939-9565-8b6f4507a197\") " pod="openshift-ovn-kubernetes/ovnkube-node-bfxvp" Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.136263 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/a13dee96-3971-4939-9565-8b6f4507a197-host-slash\") pod \"ovnkube-node-bfxvp\" (UID: \"a13dee96-3971-4939-9565-8b6f4507a197\") " pod="openshift-ovn-kubernetes/ovnkube-node-bfxvp" Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.136484 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/a13dee96-3971-4939-9565-8b6f4507a197-run-ovn\") pod \"ovnkube-node-bfxvp\" (UID: \"a13dee96-3971-4939-9565-8b6f4507a197\") " pod="openshift-ovn-kubernetes/ovnkube-node-bfxvp" Dec 11 13:59:26 crc 
kubenswrapper[5050]: I1211 13:59:26.136514 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a13dee96-3971-4939-9565-8b6f4507a197-host-run-netns\") pod \"ovnkube-node-bfxvp\" (UID: \"a13dee96-3971-4939-9565-8b6f4507a197\") " pod="openshift-ovn-kubernetes/ovnkube-node-bfxvp" Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.136067 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/a13dee96-3971-4939-9565-8b6f4507a197-etc-openvswitch\") pod \"ovnkube-node-bfxvp\" (UID: \"a13dee96-3971-4939-9565-8b6f4507a197\") " pod="openshift-ovn-kubernetes/ovnkube-node-bfxvp" Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.136670 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a13dee96-3971-4939-9565-8b6f4507a197-env-overrides\") pod \"ovnkube-node-bfxvp\" (UID: \"a13dee96-3971-4939-9565-8b6f4507a197\") " pod="openshift-ovn-kubernetes/ovnkube-node-bfxvp" Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.136941 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a13dee96-3971-4939-9565-8b6f4507a197-host-cni-netd\") pod \"ovnkube-node-bfxvp\" (UID: \"a13dee96-3971-4939-9565-8b6f4507a197\") " pod="openshift-ovn-kubernetes/ovnkube-node-bfxvp" Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.137049 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/a13dee96-3971-4939-9565-8b6f4507a197-ovnkube-script-lib\") pod \"ovnkube-node-bfxvp\" (UID: \"a13dee96-3971-4939-9565-8b6f4507a197\") " pod="openshift-ovn-kubernetes/ovnkube-node-bfxvp" Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.137145 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a13dee96-3971-4939-9565-8b6f4507a197-ovnkube-config\") pod \"ovnkube-node-bfxvp\" (UID: \"a13dee96-3971-4939-9565-8b6f4507a197\") " pod="openshift-ovn-kubernetes/ovnkube-node-bfxvp" Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.138876 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a13dee96-3971-4939-9565-8b6f4507a197-ovn-node-metrics-cert\") pod \"ovnkube-node-bfxvp\" (UID: \"a13dee96-3971-4939-9565-8b6f4507a197\") " pod="openshift-ovn-kubernetes/ovnkube-node-bfxvp" Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.152247 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bbblx\" (UniqueName: \"kubernetes.io/projected/a13dee96-3971-4939-9565-8b6f4507a197-kube-api-access-bbblx\") pod \"ovnkube-node-bfxvp\" (UID: \"a13dee96-3971-4939-9565-8b6f4507a197\") " pod="openshift-ovn-kubernetes/ovnkube-node-bfxvp" Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.234215 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-bfxvp" Dec 11 13:59:26 crc kubenswrapper[5050]: W1211 13:59:26.268315 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda13dee96_3971_4939_9565_8b6f4507a197.slice/crio-8432de07720e6b1794c40671f0c9e8e063c45f0be8b05c6d52e5e4e5e8a83ade WatchSource:0}: Error finding container 8432de07720e6b1794c40671f0c9e8e063c45f0be8b05c6d52e5e4e5e8a83ade: Status 404 returned error can't find the container with id 8432de07720e6b1794c40671f0c9e8e063c45f0be8b05c6d52e5e4e5e8a83ade Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.492256 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4fhtp_de09c7d4-952a-405d-9a54-32331c538ee2/kube-multus/0.log" Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.492507 5050 generic.go:334] "Generic (PLEG): container finished" podID="de09c7d4-952a-405d-9a54-32331c538ee2" containerID="06e98b7bca17e966a2ccbdcf16ada0897f369c85d34360def6417eea7581e4f2" exitCode=2 Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.492564 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-4fhtp" event={"ID":"de09c7d4-952a-405d-9a54-32331c538ee2","Type":"ContainerDied","Data":"06e98b7bca17e966a2ccbdcf16ada0897f369c85d34360def6417eea7581e4f2"} Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.493116 5050 scope.go:117] "RemoveContainer" containerID="06e98b7bca17e966a2ccbdcf16ada0897f369c85d34360def6417eea7581e4f2" Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.501255 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-q9k57_ce1e7994-3a5d-488d-90a5-115ca4cb7cf3/ovn-acl-logging/0.log" Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.502109 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-q9k57_ce1e7994-3a5d-488d-90a5-115ca4cb7cf3/ovn-controller/0.log" Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.502642 5050 generic.go:334] "Generic (PLEG): container finished" podID="ce1e7994-3a5d-488d-90a5-115ca4cb7cf3" containerID="8af0c7f049f1a518d307999685e3bee830df9eb2f0c799eb94c8acaa62f548df" exitCode=0 Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.502665 5050 generic.go:334] "Generic (PLEG): container finished" podID="ce1e7994-3a5d-488d-90a5-115ca4cb7cf3" containerID="0d6dc0aec5873226f4eaa2bfbe08e77f854d42624aa47ffb8403123b614fcd9f" exitCode=0 Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.502673 5050 generic.go:334] "Generic (PLEG): container finished" podID="ce1e7994-3a5d-488d-90a5-115ca4cb7cf3" containerID="5767ff2d50657f2ed1fd5f7fa8bee945962a258ca0ec0507943a5a7295f65efb" exitCode=0 Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.502680 5050 generic.go:334] "Generic (PLEG): container finished" podID="ce1e7994-3a5d-488d-90a5-115ca4cb7cf3" containerID="a3155fb448e8d746437d2814848db17f5b9f4928a65b8d2d5b6554ae4acf4ae7" exitCode=0 Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.502688 5050 generic.go:334] "Generic (PLEG): container finished" podID="ce1e7994-3a5d-488d-90a5-115ca4cb7cf3" containerID="c2ba379761001eca9453261ca94fd9bcb5a9b85cec51acadd96ce43a4be35f43" exitCode=0 Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.502694 5050 generic.go:334] "Generic (PLEG): container finished" podID="ce1e7994-3a5d-488d-90a5-115ca4cb7cf3" containerID="ab073247dca519f0c76d97e6a5a6e3de4ef0c58431d92d503a9c6494d9e2316a" 
exitCode=0 Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.502701 5050 generic.go:334] "Generic (PLEG): container finished" podID="ce1e7994-3a5d-488d-90a5-115ca4cb7cf3" containerID="b79cca188f2f2740a6f2ada09d6063ae085ca891409ef51df998ad18b121d465" exitCode=143 Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.502709 5050 generic.go:334] "Generic (PLEG): container finished" podID="ce1e7994-3a5d-488d-90a5-115ca4cb7cf3" containerID="ccf6b22e62a8e428060aa8c1c10e874c12d3410de016017c055dc65c2b66b963" exitCode=143 Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.502687 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q9k57" event={"ID":"ce1e7994-3a5d-488d-90a5-115ca4cb7cf3","Type":"ContainerDied","Data":"8af0c7f049f1a518d307999685e3bee830df9eb2f0c799eb94c8acaa62f548df"} Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.502760 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-q9k57" Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.502782 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q9k57" event={"ID":"ce1e7994-3a5d-488d-90a5-115ca4cb7cf3","Type":"ContainerDied","Data":"0d6dc0aec5873226f4eaa2bfbe08e77f854d42624aa47ffb8403123b614fcd9f"} Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.502796 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q9k57" event={"ID":"ce1e7994-3a5d-488d-90a5-115ca4cb7cf3","Type":"ContainerDied","Data":"5767ff2d50657f2ed1fd5f7fa8bee945962a258ca0ec0507943a5a7295f65efb"} Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.502807 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q9k57" event={"ID":"ce1e7994-3a5d-488d-90a5-115ca4cb7cf3","Type":"ContainerDied","Data":"a3155fb448e8d746437d2814848db17f5b9f4928a65b8d2d5b6554ae4acf4ae7"} Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.502816 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q9k57" event={"ID":"ce1e7994-3a5d-488d-90a5-115ca4cb7cf3","Type":"ContainerDied","Data":"c2ba379761001eca9453261ca94fd9bcb5a9b85cec51acadd96ce43a4be35f43"} Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.502826 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q9k57" event={"ID":"ce1e7994-3a5d-488d-90a5-115ca4cb7cf3","Type":"ContainerDied","Data":"ab073247dca519f0c76d97e6a5a6e3de4ef0c58431d92d503a9c6494d9e2316a"} Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.502837 5050 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b79cca188f2f2740a6f2ada09d6063ae085ca891409ef51df998ad18b121d465"} Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.502847 5050 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ccf6b22e62a8e428060aa8c1c10e874c12d3410de016017c055dc65c2b66b963"} Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.502854 5050 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"01285bd00ac1055e7f57130d4d3e052769346733fd1c2854e0e988581a553f66"} Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.502861 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q9k57" 
event={"ID":"ce1e7994-3a5d-488d-90a5-115ca4cb7cf3","Type":"ContainerDied","Data":"b79cca188f2f2740a6f2ada09d6063ae085ca891409ef51df998ad18b121d465"} Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.502868 5050 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8af0c7f049f1a518d307999685e3bee830df9eb2f0c799eb94c8acaa62f548df"} Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.502874 5050 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0d6dc0aec5873226f4eaa2bfbe08e77f854d42624aa47ffb8403123b614fcd9f"} Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.502880 5050 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5767ff2d50657f2ed1fd5f7fa8bee945962a258ca0ec0507943a5a7295f65efb"} Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.502886 5050 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a3155fb448e8d746437d2814848db17f5b9f4928a65b8d2d5b6554ae4acf4ae7"} Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.502891 5050 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c2ba379761001eca9453261ca94fd9bcb5a9b85cec51acadd96ce43a4be35f43"} Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.502898 5050 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ab073247dca519f0c76d97e6a5a6e3de4ef0c58431d92d503a9c6494d9e2316a"} Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.502905 5050 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b79cca188f2f2740a6f2ada09d6063ae085ca891409ef51df998ad18b121d465"} Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.502911 5050 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ccf6b22e62a8e428060aa8c1c10e874c12d3410de016017c055dc65c2b66b963"} Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.502917 5050 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"01285bd00ac1055e7f57130d4d3e052769346733fd1c2854e0e988581a553f66"} Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.502925 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q9k57" event={"ID":"ce1e7994-3a5d-488d-90a5-115ca4cb7cf3","Type":"ContainerDied","Data":"ccf6b22e62a8e428060aa8c1c10e874c12d3410de016017c055dc65c2b66b963"} Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.502935 5050 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8af0c7f049f1a518d307999685e3bee830df9eb2f0c799eb94c8acaa62f548df"} Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.502942 5050 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0d6dc0aec5873226f4eaa2bfbe08e77f854d42624aa47ffb8403123b614fcd9f"} Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.502952 5050 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5767ff2d50657f2ed1fd5f7fa8bee945962a258ca0ec0507943a5a7295f65efb"} Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.502959 5050 pod_container_deletor.go:114] 
"Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a3155fb448e8d746437d2814848db17f5b9f4928a65b8d2d5b6554ae4acf4ae7"} Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.502966 5050 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c2ba379761001eca9453261ca94fd9bcb5a9b85cec51acadd96ce43a4be35f43"} Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.502972 5050 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ab073247dca519f0c76d97e6a5a6e3de4ef0c58431d92d503a9c6494d9e2316a"} Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.502979 5050 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b79cca188f2f2740a6f2ada09d6063ae085ca891409ef51df998ad18b121d465"} Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.502985 5050 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ccf6b22e62a8e428060aa8c1c10e874c12d3410de016017c055dc65c2b66b963"} Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.502992 5050 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"01285bd00ac1055e7f57130d4d3e052769346733fd1c2854e0e988581a553f66"} Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.503002 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-q9k57" event={"ID":"ce1e7994-3a5d-488d-90a5-115ca4cb7cf3","Type":"ContainerDied","Data":"7a9891925bc898a38965a64c07159961b34a3666d2ded2a3e52a52eba078fcd6"} Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.503037 5050 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8af0c7f049f1a518d307999685e3bee830df9eb2f0c799eb94c8acaa62f548df"} Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.503045 5050 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0d6dc0aec5873226f4eaa2bfbe08e77f854d42624aa47ffb8403123b614fcd9f"} Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.503052 5050 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5767ff2d50657f2ed1fd5f7fa8bee945962a258ca0ec0507943a5a7295f65efb"} Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.503059 5050 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a3155fb448e8d746437d2814848db17f5b9f4928a65b8d2d5b6554ae4acf4ae7"} Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.503065 5050 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c2ba379761001eca9453261ca94fd9bcb5a9b85cec51acadd96ce43a4be35f43"} Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.503071 5050 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ab073247dca519f0c76d97e6a5a6e3de4ef0c58431d92d503a9c6494d9e2316a"} Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.503078 5050 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b79cca188f2f2740a6f2ada09d6063ae085ca891409ef51df998ad18b121d465"} Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.503084 5050 pod_container_deletor.go:114] 
"Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ccf6b22e62a8e428060aa8c1c10e874c12d3410de016017c055dc65c2b66b963"} Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.503090 5050 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"01285bd00ac1055e7f57130d4d3e052769346733fd1c2854e0e988581a553f66"} Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.502871 5050 scope.go:117] "RemoveContainer" containerID="8af0c7f049f1a518d307999685e3bee830df9eb2f0c799eb94c8acaa62f548df" Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.512895 5050 generic.go:334] "Generic (PLEG): container finished" podID="a13dee96-3971-4939-9565-8b6f4507a197" containerID="876407a3867c470e7ad7638c109e671c46ebd867dde4e448c38c552a4e5ac014" exitCode=0 Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.512961 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bfxvp" event={"ID":"a13dee96-3971-4939-9565-8b6f4507a197","Type":"ContainerDied","Data":"876407a3867c470e7ad7638c109e671c46ebd867dde4e448c38c552a4e5ac014"} Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.513004 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bfxvp" event={"ID":"a13dee96-3971-4939-9565-8b6f4507a197","Type":"ContainerStarted","Data":"8432de07720e6b1794c40671f0c9e8e063c45f0be8b05c6d52e5e4e5e8a83ade"} Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.530971 5050 scope.go:117] "RemoveContainer" containerID="0d6dc0aec5873226f4eaa2bfbe08e77f854d42624aa47ffb8403123b614fcd9f" Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.552971 5050 scope.go:117] "RemoveContainer" containerID="5767ff2d50657f2ed1fd5f7fa8bee945962a258ca0ec0507943a5a7295f65efb" Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.572209 5050 scope.go:117] "RemoveContainer" containerID="a3155fb448e8d746437d2814848db17f5b9f4928a65b8d2d5b6554ae4acf4ae7" Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.583131 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-q9k57"] Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.585171 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-q9k57"] Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.604118 5050 scope.go:117] "RemoveContainer" containerID="c2ba379761001eca9453261ca94fd9bcb5a9b85cec51acadd96ce43a4be35f43" Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.619837 5050 scope.go:117] "RemoveContainer" containerID="ab073247dca519f0c76d97e6a5a6e3de4ef0c58431d92d503a9c6494d9e2316a" Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.638709 5050 scope.go:117] "RemoveContainer" containerID="b79cca188f2f2740a6f2ada09d6063ae085ca891409ef51df998ad18b121d465" Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.681709 5050 scope.go:117] "RemoveContainer" containerID="ccf6b22e62a8e428060aa8c1c10e874c12d3410de016017c055dc65c2b66b963" Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.709035 5050 scope.go:117] "RemoveContainer" containerID="01285bd00ac1055e7f57130d4d3e052769346733fd1c2854e0e988581a553f66" Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.733680 5050 scope.go:117] "RemoveContainer" containerID="8af0c7f049f1a518d307999685e3bee830df9eb2f0c799eb94c8acaa62f548df" Dec 11 13:59:26 crc kubenswrapper[5050]: E1211 13:59:26.734154 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc 
error: code = NotFound desc = could not find container \"8af0c7f049f1a518d307999685e3bee830df9eb2f0c799eb94c8acaa62f548df\": container with ID starting with 8af0c7f049f1a518d307999685e3bee830df9eb2f0c799eb94c8acaa62f548df not found: ID does not exist" containerID="8af0c7f049f1a518d307999685e3bee830df9eb2f0c799eb94c8acaa62f548df" Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.734193 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8af0c7f049f1a518d307999685e3bee830df9eb2f0c799eb94c8acaa62f548df"} err="failed to get container status \"8af0c7f049f1a518d307999685e3bee830df9eb2f0c799eb94c8acaa62f548df\": rpc error: code = NotFound desc = could not find container \"8af0c7f049f1a518d307999685e3bee830df9eb2f0c799eb94c8acaa62f548df\": container with ID starting with 8af0c7f049f1a518d307999685e3bee830df9eb2f0c799eb94c8acaa62f548df not found: ID does not exist" Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.734246 5050 scope.go:117] "RemoveContainer" containerID="0d6dc0aec5873226f4eaa2bfbe08e77f854d42624aa47ffb8403123b614fcd9f" Dec 11 13:59:26 crc kubenswrapper[5050]: E1211 13:59:26.734835 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0d6dc0aec5873226f4eaa2bfbe08e77f854d42624aa47ffb8403123b614fcd9f\": container with ID starting with 0d6dc0aec5873226f4eaa2bfbe08e77f854d42624aa47ffb8403123b614fcd9f not found: ID does not exist" containerID="0d6dc0aec5873226f4eaa2bfbe08e77f854d42624aa47ffb8403123b614fcd9f" Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.734862 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d6dc0aec5873226f4eaa2bfbe08e77f854d42624aa47ffb8403123b614fcd9f"} err="failed to get container status \"0d6dc0aec5873226f4eaa2bfbe08e77f854d42624aa47ffb8403123b614fcd9f\": rpc error: code = NotFound desc = could not find container \"0d6dc0aec5873226f4eaa2bfbe08e77f854d42624aa47ffb8403123b614fcd9f\": container with ID starting with 0d6dc0aec5873226f4eaa2bfbe08e77f854d42624aa47ffb8403123b614fcd9f not found: ID does not exist" Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.734879 5050 scope.go:117] "RemoveContainer" containerID="5767ff2d50657f2ed1fd5f7fa8bee945962a258ca0ec0507943a5a7295f65efb" Dec 11 13:59:26 crc kubenswrapper[5050]: E1211 13:59:26.735210 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5767ff2d50657f2ed1fd5f7fa8bee945962a258ca0ec0507943a5a7295f65efb\": container with ID starting with 5767ff2d50657f2ed1fd5f7fa8bee945962a258ca0ec0507943a5a7295f65efb not found: ID does not exist" containerID="5767ff2d50657f2ed1fd5f7fa8bee945962a258ca0ec0507943a5a7295f65efb" Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.735233 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5767ff2d50657f2ed1fd5f7fa8bee945962a258ca0ec0507943a5a7295f65efb"} err="failed to get container status \"5767ff2d50657f2ed1fd5f7fa8bee945962a258ca0ec0507943a5a7295f65efb\": rpc error: code = NotFound desc = could not find container \"5767ff2d50657f2ed1fd5f7fa8bee945962a258ca0ec0507943a5a7295f65efb\": container with ID starting with 5767ff2d50657f2ed1fd5f7fa8bee945962a258ca0ec0507943a5a7295f65efb not found: ID does not exist" Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.735248 5050 scope.go:117] "RemoveContainer" 
containerID="a3155fb448e8d746437d2814848db17f5b9f4928a65b8d2d5b6554ae4acf4ae7" Dec 11 13:59:26 crc kubenswrapper[5050]: E1211 13:59:26.735637 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a3155fb448e8d746437d2814848db17f5b9f4928a65b8d2d5b6554ae4acf4ae7\": container with ID starting with a3155fb448e8d746437d2814848db17f5b9f4928a65b8d2d5b6554ae4acf4ae7 not found: ID does not exist" containerID="a3155fb448e8d746437d2814848db17f5b9f4928a65b8d2d5b6554ae4acf4ae7" Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.735674 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a3155fb448e8d746437d2814848db17f5b9f4928a65b8d2d5b6554ae4acf4ae7"} err="failed to get container status \"a3155fb448e8d746437d2814848db17f5b9f4928a65b8d2d5b6554ae4acf4ae7\": rpc error: code = NotFound desc = could not find container \"a3155fb448e8d746437d2814848db17f5b9f4928a65b8d2d5b6554ae4acf4ae7\": container with ID starting with a3155fb448e8d746437d2814848db17f5b9f4928a65b8d2d5b6554ae4acf4ae7 not found: ID does not exist" Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.735700 5050 scope.go:117] "RemoveContainer" containerID="c2ba379761001eca9453261ca94fd9bcb5a9b85cec51acadd96ce43a4be35f43" Dec 11 13:59:26 crc kubenswrapper[5050]: E1211 13:59:26.736020 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c2ba379761001eca9453261ca94fd9bcb5a9b85cec51acadd96ce43a4be35f43\": container with ID starting with c2ba379761001eca9453261ca94fd9bcb5a9b85cec51acadd96ce43a4be35f43 not found: ID does not exist" containerID="c2ba379761001eca9453261ca94fd9bcb5a9b85cec51acadd96ce43a4be35f43" Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.736041 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c2ba379761001eca9453261ca94fd9bcb5a9b85cec51acadd96ce43a4be35f43"} err="failed to get container status \"c2ba379761001eca9453261ca94fd9bcb5a9b85cec51acadd96ce43a4be35f43\": rpc error: code = NotFound desc = could not find container \"c2ba379761001eca9453261ca94fd9bcb5a9b85cec51acadd96ce43a4be35f43\": container with ID starting with c2ba379761001eca9453261ca94fd9bcb5a9b85cec51acadd96ce43a4be35f43 not found: ID does not exist" Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.736053 5050 scope.go:117] "RemoveContainer" containerID="ab073247dca519f0c76d97e6a5a6e3de4ef0c58431d92d503a9c6494d9e2316a" Dec 11 13:59:26 crc kubenswrapper[5050]: E1211 13:59:26.736284 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab073247dca519f0c76d97e6a5a6e3de4ef0c58431d92d503a9c6494d9e2316a\": container with ID starting with ab073247dca519f0c76d97e6a5a6e3de4ef0c58431d92d503a9c6494d9e2316a not found: ID does not exist" containerID="ab073247dca519f0c76d97e6a5a6e3de4ef0c58431d92d503a9c6494d9e2316a" Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.736305 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab073247dca519f0c76d97e6a5a6e3de4ef0c58431d92d503a9c6494d9e2316a"} err="failed to get container status \"ab073247dca519f0c76d97e6a5a6e3de4ef0c58431d92d503a9c6494d9e2316a\": rpc error: code = NotFound desc = could not find container \"ab073247dca519f0c76d97e6a5a6e3de4ef0c58431d92d503a9c6494d9e2316a\": container with ID starting with 
ab073247dca519f0c76d97e6a5a6e3de4ef0c58431d92d503a9c6494d9e2316a not found: ID does not exist" Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.736319 5050 scope.go:117] "RemoveContainer" containerID="b79cca188f2f2740a6f2ada09d6063ae085ca891409ef51df998ad18b121d465" Dec 11 13:59:26 crc kubenswrapper[5050]: E1211 13:59:26.736531 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b79cca188f2f2740a6f2ada09d6063ae085ca891409ef51df998ad18b121d465\": container with ID starting with b79cca188f2f2740a6f2ada09d6063ae085ca891409ef51df998ad18b121d465 not found: ID does not exist" containerID="b79cca188f2f2740a6f2ada09d6063ae085ca891409ef51df998ad18b121d465" Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.736557 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b79cca188f2f2740a6f2ada09d6063ae085ca891409ef51df998ad18b121d465"} err="failed to get container status \"b79cca188f2f2740a6f2ada09d6063ae085ca891409ef51df998ad18b121d465\": rpc error: code = NotFound desc = could not find container \"b79cca188f2f2740a6f2ada09d6063ae085ca891409ef51df998ad18b121d465\": container with ID starting with b79cca188f2f2740a6f2ada09d6063ae085ca891409ef51df998ad18b121d465 not found: ID does not exist" Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.736572 5050 scope.go:117] "RemoveContainer" containerID="ccf6b22e62a8e428060aa8c1c10e874c12d3410de016017c055dc65c2b66b963" Dec 11 13:59:26 crc kubenswrapper[5050]: E1211 13:59:26.736972 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ccf6b22e62a8e428060aa8c1c10e874c12d3410de016017c055dc65c2b66b963\": container with ID starting with ccf6b22e62a8e428060aa8c1c10e874c12d3410de016017c055dc65c2b66b963 not found: ID does not exist" containerID="ccf6b22e62a8e428060aa8c1c10e874c12d3410de016017c055dc65c2b66b963" Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.736995 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ccf6b22e62a8e428060aa8c1c10e874c12d3410de016017c055dc65c2b66b963"} err="failed to get container status \"ccf6b22e62a8e428060aa8c1c10e874c12d3410de016017c055dc65c2b66b963\": rpc error: code = NotFound desc = could not find container \"ccf6b22e62a8e428060aa8c1c10e874c12d3410de016017c055dc65c2b66b963\": container with ID starting with ccf6b22e62a8e428060aa8c1c10e874c12d3410de016017c055dc65c2b66b963 not found: ID does not exist" Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.737033 5050 scope.go:117] "RemoveContainer" containerID="01285bd00ac1055e7f57130d4d3e052769346733fd1c2854e0e988581a553f66" Dec 11 13:59:26 crc kubenswrapper[5050]: E1211 13:59:26.737263 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"01285bd00ac1055e7f57130d4d3e052769346733fd1c2854e0e988581a553f66\": container with ID starting with 01285bd00ac1055e7f57130d4d3e052769346733fd1c2854e0e988581a553f66 not found: ID does not exist" containerID="01285bd00ac1055e7f57130d4d3e052769346733fd1c2854e0e988581a553f66" Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.737290 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01285bd00ac1055e7f57130d4d3e052769346733fd1c2854e0e988581a553f66"} err="failed to get container status \"01285bd00ac1055e7f57130d4d3e052769346733fd1c2854e0e988581a553f66\": rpc 
error: code = NotFound desc = could not find container \"01285bd00ac1055e7f57130d4d3e052769346733fd1c2854e0e988581a553f66\": container with ID starting with 01285bd00ac1055e7f57130d4d3e052769346733fd1c2854e0e988581a553f66 not found: ID does not exist" Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.737305 5050 scope.go:117] "RemoveContainer" containerID="8af0c7f049f1a518d307999685e3bee830df9eb2f0c799eb94c8acaa62f548df" Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.737493 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8af0c7f049f1a518d307999685e3bee830df9eb2f0c799eb94c8acaa62f548df"} err="failed to get container status \"8af0c7f049f1a518d307999685e3bee830df9eb2f0c799eb94c8acaa62f548df\": rpc error: code = NotFound desc = could not find container \"8af0c7f049f1a518d307999685e3bee830df9eb2f0c799eb94c8acaa62f548df\": container with ID starting with 8af0c7f049f1a518d307999685e3bee830df9eb2f0c799eb94c8acaa62f548df not found: ID does not exist" Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.737515 5050 scope.go:117] "RemoveContainer" containerID="0d6dc0aec5873226f4eaa2bfbe08e77f854d42624aa47ffb8403123b614fcd9f" Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.737887 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d6dc0aec5873226f4eaa2bfbe08e77f854d42624aa47ffb8403123b614fcd9f"} err="failed to get container status \"0d6dc0aec5873226f4eaa2bfbe08e77f854d42624aa47ffb8403123b614fcd9f\": rpc error: code = NotFound desc = could not find container \"0d6dc0aec5873226f4eaa2bfbe08e77f854d42624aa47ffb8403123b614fcd9f\": container with ID starting with 0d6dc0aec5873226f4eaa2bfbe08e77f854d42624aa47ffb8403123b614fcd9f not found: ID does not exist" Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.737905 5050 scope.go:117] "RemoveContainer" containerID="5767ff2d50657f2ed1fd5f7fa8bee945962a258ca0ec0507943a5a7295f65efb" Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.738276 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5767ff2d50657f2ed1fd5f7fa8bee945962a258ca0ec0507943a5a7295f65efb"} err="failed to get container status \"5767ff2d50657f2ed1fd5f7fa8bee945962a258ca0ec0507943a5a7295f65efb\": rpc error: code = NotFound desc = could not find container \"5767ff2d50657f2ed1fd5f7fa8bee945962a258ca0ec0507943a5a7295f65efb\": container with ID starting with 5767ff2d50657f2ed1fd5f7fa8bee945962a258ca0ec0507943a5a7295f65efb not found: ID does not exist" Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.738332 5050 scope.go:117] "RemoveContainer" containerID="a3155fb448e8d746437d2814848db17f5b9f4928a65b8d2d5b6554ae4acf4ae7" Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.738567 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a3155fb448e8d746437d2814848db17f5b9f4928a65b8d2d5b6554ae4acf4ae7"} err="failed to get container status \"a3155fb448e8d746437d2814848db17f5b9f4928a65b8d2d5b6554ae4acf4ae7\": rpc error: code = NotFound desc = could not find container \"a3155fb448e8d746437d2814848db17f5b9f4928a65b8d2d5b6554ae4acf4ae7\": container with ID starting with a3155fb448e8d746437d2814848db17f5b9f4928a65b8d2d5b6554ae4acf4ae7 not found: ID does not exist" Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.738594 5050 scope.go:117] "RemoveContainer" containerID="c2ba379761001eca9453261ca94fd9bcb5a9b85cec51acadd96ce43a4be35f43" Dec 11 13:59:26 crc 
kubenswrapper[5050]: I1211 13:59:26.738831 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c2ba379761001eca9453261ca94fd9bcb5a9b85cec51acadd96ce43a4be35f43"} err="failed to get container status \"c2ba379761001eca9453261ca94fd9bcb5a9b85cec51acadd96ce43a4be35f43\": rpc error: code = NotFound desc = could not find container \"c2ba379761001eca9453261ca94fd9bcb5a9b85cec51acadd96ce43a4be35f43\": container with ID starting with c2ba379761001eca9453261ca94fd9bcb5a9b85cec51acadd96ce43a4be35f43 not found: ID does not exist" Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.738852 5050 scope.go:117] "RemoveContainer" containerID="ab073247dca519f0c76d97e6a5a6e3de4ef0c58431d92d503a9c6494d9e2316a" Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.739119 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab073247dca519f0c76d97e6a5a6e3de4ef0c58431d92d503a9c6494d9e2316a"} err="failed to get container status \"ab073247dca519f0c76d97e6a5a6e3de4ef0c58431d92d503a9c6494d9e2316a\": rpc error: code = NotFound desc = could not find container \"ab073247dca519f0c76d97e6a5a6e3de4ef0c58431d92d503a9c6494d9e2316a\": container with ID starting with ab073247dca519f0c76d97e6a5a6e3de4ef0c58431d92d503a9c6494d9e2316a not found: ID does not exist" Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.739146 5050 scope.go:117] "RemoveContainer" containerID="b79cca188f2f2740a6f2ada09d6063ae085ca891409ef51df998ad18b121d465" Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.739386 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b79cca188f2f2740a6f2ada09d6063ae085ca891409ef51df998ad18b121d465"} err="failed to get container status \"b79cca188f2f2740a6f2ada09d6063ae085ca891409ef51df998ad18b121d465\": rpc error: code = NotFound desc = could not find container \"b79cca188f2f2740a6f2ada09d6063ae085ca891409ef51df998ad18b121d465\": container with ID starting with b79cca188f2f2740a6f2ada09d6063ae085ca891409ef51df998ad18b121d465 not found: ID does not exist" Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.739405 5050 scope.go:117] "RemoveContainer" containerID="ccf6b22e62a8e428060aa8c1c10e874c12d3410de016017c055dc65c2b66b963" Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.739573 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ccf6b22e62a8e428060aa8c1c10e874c12d3410de016017c055dc65c2b66b963"} err="failed to get container status \"ccf6b22e62a8e428060aa8c1c10e874c12d3410de016017c055dc65c2b66b963\": rpc error: code = NotFound desc = could not find container \"ccf6b22e62a8e428060aa8c1c10e874c12d3410de016017c055dc65c2b66b963\": container with ID starting with ccf6b22e62a8e428060aa8c1c10e874c12d3410de016017c055dc65c2b66b963 not found: ID does not exist" Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.739589 5050 scope.go:117] "RemoveContainer" containerID="01285bd00ac1055e7f57130d4d3e052769346733fd1c2854e0e988581a553f66" Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.739771 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01285bd00ac1055e7f57130d4d3e052769346733fd1c2854e0e988581a553f66"} err="failed to get container status \"01285bd00ac1055e7f57130d4d3e052769346733fd1c2854e0e988581a553f66\": rpc error: code = NotFound desc = could not find container \"01285bd00ac1055e7f57130d4d3e052769346733fd1c2854e0e988581a553f66\": container with ID 
starting with 01285bd00ac1055e7f57130d4d3e052769346733fd1c2854e0e988581a553f66 not found: ID does not exist" Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.739788 5050 scope.go:117] "RemoveContainer" containerID="8af0c7f049f1a518d307999685e3bee830df9eb2f0c799eb94c8acaa62f548df" Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.739965 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8af0c7f049f1a518d307999685e3bee830df9eb2f0c799eb94c8acaa62f548df"} err="failed to get container status \"8af0c7f049f1a518d307999685e3bee830df9eb2f0c799eb94c8acaa62f548df\": rpc error: code = NotFound desc = could not find container \"8af0c7f049f1a518d307999685e3bee830df9eb2f0c799eb94c8acaa62f548df\": container with ID starting with 8af0c7f049f1a518d307999685e3bee830df9eb2f0c799eb94c8acaa62f548df not found: ID does not exist" Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.739984 5050 scope.go:117] "RemoveContainer" containerID="0d6dc0aec5873226f4eaa2bfbe08e77f854d42624aa47ffb8403123b614fcd9f" Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.740176 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d6dc0aec5873226f4eaa2bfbe08e77f854d42624aa47ffb8403123b614fcd9f"} err="failed to get container status \"0d6dc0aec5873226f4eaa2bfbe08e77f854d42624aa47ffb8403123b614fcd9f\": rpc error: code = NotFound desc = could not find container \"0d6dc0aec5873226f4eaa2bfbe08e77f854d42624aa47ffb8403123b614fcd9f\": container with ID starting with 0d6dc0aec5873226f4eaa2bfbe08e77f854d42624aa47ffb8403123b614fcd9f not found: ID does not exist" Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.740194 5050 scope.go:117] "RemoveContainer" containerID="5767ff2d50657f2ed1fd5f7fa8bee945962a258ca0ec0507943a5a7295f65efb" Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.740464 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5767ff2d50657f2ed1fd5f7fa8bee945962a258ca0ec0507943a5a7295f65efb"} err="failed to get container status \"5767ff2d50657f2ed1fd5f7fa8bee945962a258ca0ec0507943a5a7295f65efb\": rpc error: code = NotFound desc = could not find container \"5767ff2d50657f2ed1fd5f7fa8bee945962a258ca0ec0507943a5a7295f65efb\": container with ID starting with 5767ff2d50657f2ed1fd5f7fa8bee945962a258ca0ec0507943a5a7295f65efb not found: ID does not exist" Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.740482 5050 scope.go:117] "RemoveContainer" containerID="a3155fb448e8d746437d2814848db17f5b9f4928a65b8d2d5b6554ae4acf4ae7" Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.740639 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a3155fb448e8d746437d2814848db17f5b9f4928a65b8d2d5b6554ae4acf4ae7"} err="failed to get container status \"a3155fb448e8d746437d2814848db17f5b9f4928a65b8d2d5b6554ae4acf4ae7\": rpc error: code = NotFound desc = could not find container \"a3155fb448e8d746437d2814848db17f5b9f4928a65b8d2d5b6554ae4acf4ae7\": container with ID starting with a3155fb448e8d746437d2814848db17f5b9f4928a65b8d2d5b6554ae4acf4ae7 not found: ID does not exist" Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.740664 5050 scope.go:117] "RemoveContainer" containerID="c2ba379761001eca9453261ca94fd9bcb5a9b85cec51acadd96ce43a4be35f43" Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.740828 5050 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"c2ba379761001eca9453261ca94fd9bcb5a9b85cec51acadd96ce43a4be35f43"} err="failed to get container status \"c2ba379761001eca9453261ca94fd9bcb5a9b85cec51acadd96ce43a4be35f43\": rpc error: code = NotFound desc = could not find container \"c2ba379761001eca9453261ca94fd9bcb5a9b85cec51acadd96ce43a4be35f43\": container with ID starting with c2ba379761001eca9453261ca94fd9bcb5a9b85cec51acadd96ce43a4be35f43 not found: ID does not exist" Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.740847 5050 scope.go:117] "RemoveContainer" containerID="ab073247dca519f0c76d97e6a5a6e3de4ef0c58431d92d503a9c6494d9e2316a" Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.741060 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab073247dca519f0c76d97e6a5a6e3de4ef0c58431d92d503a9c6494d9e2316a"} err="failed to get container status \"ab073247dca519f0c76d97e6a5a6e3de4ef0c58431d92d503a9c6494d9e2316a\": rpc error: code = NotFound desc = could not find container \"ab073247dca519f0c76d97e6a5a6e3de4ef0c58431d92d503a9c6494d9e2316a\": container with ID starting with ab073247dca519f0c76d97e6a5a6e3de4ef0c58431d92d503a9c6494d9e2316a not found: ID does not exist" Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.741081 5050 scope.go:117] "RemoveContainer" containerID="b79cca188f2f2740a6f2ada09d6063ae085ca891409ef51df998ad18b121d465" Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.741256 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b79cca188f2f2740a6f2ada09d6063ae085ca891409ef51df998ad18b121d465"} err="failed to get container status \"b79cca188f2f2740a6f2ada09d6063ae085ca891409ef51df998ad18b121d465\": rpc error: code = NotFound desc = could not find container \"b79cca188f2f2740a6f2ada09d6063ae085ca891409ef51df998ad18b121d465\": container with ID starting with b79cca188f2f2740a6f2ada09d6063ae085ca891409ef51df998ad18b121d465 not found: ID does not exist" Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.741282 5050 scope.go:117] "RemoveContainer" containerID="ccf6b22e62a8e428060aa8c1c10e874c12d3410de016017c055dc65c2b66b963" Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.741510 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ccf6b22e62a8e428060aa8c1c10e874c12d3410de016017c055dc65c2b66b963"} err="failed to get container status \"ccf6b22e62a8e428060aa8c1c10e874c12d3410de016017c055dc65c2b66b963\": rpc error: code = NotFound desc = could not find container \"ccf6b22e62a8e428060aa8c1c10e874c12d3410de016017c055dc65c2b66b963\": container with ID starting with ccf6b22e62a8e428060aa8c1c10e874c12d3410de016017c055dc65c2b66b963 not found: ID does not exist" Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.741528 5050 scope.go:117] "RemoveContainer" containerID="01285bd00ac1055e7f57130d4d3e052769346733fd1c2854e0e988581a553f66" Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.741698 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01285bd00ac1055e7f57130d4d3e052769346733fd1c2854e0e988581a553f66"} err="failed to get container status \"01285bd00ac1055e7f57130d4d3e052769346733fd1c2854e0e988581a553f66\": rpc error: code = NotFound desc = could not find container \"01285bd00ac1055e7f57130d4d3e052769346733fd1c2854e0e988581a553f66\": container with ID starting with 01285bd00ac1055e7f57130d4d3e052769346733fd1c2854e0e988581a553f66 not found: ID does not exist" Dec 
11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.741715 5050 scope.go:117] "RemoveContainer" containerID="8af0c7f049f1a518d307999685e3bee830df9eb2f0c799eb94c8acaa62f548df" Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.742000 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8af0c7f049f1a518d307999685e3bee830df9eb2f0c799eb94c8acaa62f548df"} err="failed to get container status \"8af0c7f049f1a518d307999685e3bee830df9eb2f0c799eb94c8acaa62f548df\": rpc error: code = NotFound desc = could not find container \"8af0c7f049f1a518d307999685e3bee830df9eb2f0c799eb94c8acaa62f548df\": container with ID starting with 8af0c7f049f1a518d307999685e3bee830df9eb2f0c799eb94c8acaa62f548df not found: ID does not exist" Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.742031 5050 scope.go:117] "RemoveContainer" containerID="0d6dc0aec5873226f4eaa2bfbe08e77f854d42624aa47ffb8403123b614fcd9f" Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.742447 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d6dc0aec5873226f4eaa2bfbe08e77f854d42624aa47ffb8403123b614fcd9f"} err="failed to get container status \"0d6dc0aec5873226f4eaa2bfbe08e77f854d42624aa47ffb8403123b614fcd9f\": rpc error: code = NotFound desc = could not find container \"0d6dc0aec5873226f4eaa2bfbe08e77f854d42624aa47ffb8403123b614fcd9f\": container with ID starting with 0d6dc0aec5873226f4eaa2bfbe08e77f854d42624aa47ffb8403123b614fcd9f not found: ID does not exist" Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.742468 5050 scope.go:117] "RemoveContainer" containerID="5767ff2d50657f2ed1fd5f7fa8bee945962a258ca0ec0507943a5a7295f65efb" Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.742752 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5767ff2d50657f2ed1fd5f7fa8bee945962a258ca0ec0507943a5a7295f65efb"} err="failed to get container status \"5767ff2d50657f2ed1fd5f7fa8bee945962a258ca0ec0507943a5a7295f65efb\": rpc error: code = NotFound desc = could not find container \"5767ff2d50657f2ed1fd5f7fa8bee945962a258ca0ec0507943a5a7295f65efb\": container with ID starting with 5767ff2d50657f2ed1fd5f7fa8bee945962a258ca0ec0507943a5a7295f65efb not found: ID does not exist" Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.742784 5050 scope.go:117] "RemoveContainer" containerID="a3155fb448e8d746437d2814848db17f5b9f4928a65b8d2d5b6554ae4acf4ae7" Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.743159 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a3155fb448e8d746437d2814848db17f5b9f4928a65b8d2d5b6554ae4acf4ae7"} err="failed to get container status \"a3155fb448e8d746437d2814848db17f5b9f4928a65b8d2d5b6554ae4acf4ae7\": rpc error: code = NotFound desc = could not find container \"a3155fb448e8d746437d2814848db17f5b9f4928a65b8d2d5b6554ae4acf4ae7\": container with ID starting with a3155fb448e8d746437d2814848db17f5b9f4928a65b8d2d5b6554ae4acf4ae7 not found: ID does not exist" Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.743183 5050 scope.go:117] "RemoveContainer" containerID="c2ba379761001eca9453261ca94fd9bcb5a9b85cec51acadd96ce43a4be35f43" Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.743559 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c2ba379761001eca9453261ca94fd9bcb5a9b85cec51acadd96ce43a4be35f43"} err="failed to get container status 
\"c2ba379761001eca9453261ca94fd9bcb5a9b85cec51acadd96ce43a4be35f43\": rpc error: code = NotFound desc = could not find container \"c2ba379761001eca9453261ca94fd9bcb5a9b85cec51acadd96ce43a4be35f43\": container with ID starting with c2ba379761001eca9453261ca94fd9bcb5a9b85cec51acadd96ce43a4be35f43 not found: ID does not exist" Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.743590 5050 scope.go:117] "RemoveContainer" containerID="ab073247dca519f0c76d97e6a5a6e3de4ef0c58431d92d503a9c6494d9e2316a" Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.743868 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab073247dca519f0c76d97e6a5a6e3de4ef0c58431d92d503a9c6494d9e2316a"} err="failed to get container status \"ab073247dca519f0c76d97e6a5a6e3de4ef0c58431d92d503a9c6494d9e2316a\": rpc error: code = NotFound desc = could not find container \"ab073247dca519f0c76d97e6a5a6e3de4ef0c58431d92d503a9c6494d9e2316a\": container with ID starting with ab073247dca519f0c76d97e6a5a6e3de4ef0c58431d92d503a9c6494d9e2316a not found: ID does not exist" Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.743889 5050 scope.go:117] "RemoveContainer" containerID="b79cca188f2f2740a6f2ada09d6063ae085ca891409ef51df998ad18b121d465" Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.744213 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b79cca188f2f2740a6f2ada09d6063ae085ca891409ef51df998ad18b121d465"} err="failed to get container status \"b79cca188f2f2740a6f2ada09d6063ae085ca891409ef51df998ad18b121d465\": rpc error: code = NotFound desc = could not find container \"b79cca188f2f2740a6f2ada09d6063ae085ca891409ef51df998ad18b121d465\": container with ID starting with b79cca188f2f2740a6f2ada09d6063ae085ca891409ef51df998ad18b121d465 not found: ID does not exist" Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.744243 5050 scope.go:117] "RemoveContainer" containerID="ccf6b22e62a8e428060aa8c1c10e874c12d3410de016017c055dc65c2b66b963" Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.744492 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ccf6b22e62a8e428060aa8c1c10e874c12d3410de016017c055dc65c2b66b963"} err="failed to get container status \"ccf6b22e62a8e428060aa8c1c10e874c12d3410de016017c055dc65c2b66b963\": rpc error: code = NotFound desc = could not find container \"ccf6b22e62a8e428060aa8c1c10e874c12d3410de016017c055dc65c2b66b963\": container with ID starting with ccf6b22e62a8e428060aa8c1c10e874c12d3410de016017c055dc65c2b66b963 not found: ID does not exist" Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.744512 5050 scope.go:117] "RemoveContainer" containerID="01285bd00ac1055e7f57130d4d3e052769346733fd1c2854e0e988581a553f66" Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.744766 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01285bd00ac1055e7f57130d4d3e052769346733fd1c2854e0e988581a553f66"} err="failed to get container status \"01285bd00ac1055e7f57130d4d3e052769346733fd1c2854e0e988581a553f66\": rpc error: code = NotFound desc = could not find container \"01285bd00ac1055e7f57130d4d3e052769346733fd1c2854e0e988581a553f66\": container with ID starting with 01285bd00ac1055e7f57130d4d3e052769346733fd1c2854e0e988581a553f66 not found: ID does not exist" Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.744785 5050 scope.go:117] "RemoveContainer" 
containerID="8af0c7f049f1a518d307999685e3bee830df9eb2f0c799eb94c8acaa62f548df" Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.745124 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8af0c7f049f1a518d307999685e3bee830df9eb2f0c799eb94c8acaa62f548df"} err="failed to get container status \"8af0c7f049f1a518d307999685e3bee830df9eb2f0c799eb94c8acaa62f548df\": rpc error: code = NotFound desc = could not find container \"8af0c7f049f1a518d307999685e3bee830df9eb2f0c799eb94c8acaa62f548df\": container with ID starting with 8af0c7f049f1a518d307999685e3bee830df9eb2f0c799eb94c8acaa62f548df not found: ID does not exist" Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.745142 5050 scope.go:117] "RemoveContainer" containerID="0d6dc0aec5873226f4eaa2bfbe08e77f854d42624aa47ffb8403123b614fcd9f" Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.745542 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d6dc0aec5873226f4eaa2bfbe08e77f854d42624aa47ffb8403123b614fcd9f"} err="failed to get container status \"0d6dc0aec5873226f4eaa2bfbe08e77f854d42624aa47ffb8403123b614fcd9f\": rpc error: code = NotFound desc = could not find container \"0d6dc0aec5873226f4eaa2bfbe08e77f854d42624aa47ffb8403123b614fcd9f\": container with ID starting with 0d6dc0aec5873226f4eaa2bfbe08e77f854d42624aa47ffb8403123b614fcd9f not found: ID does not exist" Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.745591 5050 scope.go:117] "RemoveContainer" containerID="5767ff2d50657f2ed1fd5f7fa8bee945962a258ca0ec0507943a5a7295f65efb" Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.745895 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5767ff2d50657f2ed1fd5f7fa8bee945962a258ca0ec0507943a5a7295f65efb"} err="failed to get container status \"5767ff2d50657f2ed1fd5f7fa8bee945962a258ca0ec0507943a5a7295f65efb\": rpc error: code = NotFound desc = could not find container \"5767ff2d50657f2ed1fd5f7fa8bee945962a258ca0ec0507943a5a7295f65efb\": container with ID starting with 5767ff2d50657f2ed1fd5f7fa8bee945962a258ca0ec0507943a5a7295f65efb not found: ID does not exist" Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.745917 5050 scope.go:117] "RemoveContainer" containerID="a3155fb448e8d746437d2814848db17f5b9f4928a65b8d2d5b6554ae4acf4ae7" Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.746239 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a3155fb448e8d746437d2814848db17f5b9f4928a65b8d2d5b6554ae4acf4ae7"} err="failed to get container status \"a3155fb448e8d746437d2814848db17f5b9f4928a65b8d2d5b6554ae4acf4ae7\": rpc error: code = NotFound desc = could not find container \"a3155fb448e8d746437d2814848db17f5b9f4928a65b8d2d5b6554ae4acf4ae7\": container with ID starting with a3155fb448e8d746437d2814848db17f5b9f4928a65b8d2d5b6554ae4acf4ae7 not found: ID does not exist" Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.746257 5050 scope.go:117] "RemoveContainer" containerID="c2ba379761001eca9453261ca94fd9bcb5a9b85cec51acadd96ce43a4be35f43" Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.746755 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c2ba379761001eca9453261ca94fd9bcb5a9b85cec51acadd96ce43a4be35f43"} err="failed to get container status \"c2ba379761001eca9453261ca94fd9bcb5a9b85cec51acadd96ce43a4be35f43\": rpc error: code = NotFound desc = could not find 
container \"c2ba379761001eca9453261ca94fd9bcb5a9b85cec51acadd96ce43a4be35f43\": container with ID starting with c2ba379761001eca9453261ca94fd9bcb5a9b85cec51acadd96ce43a4be35f43 not found: ID does not exist" Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.746787 5050 scope.go:117] "RemoveContainer" containerID="ab073247dca519f0c76d97e6a5a6e3de4ef0c58431d92d503a9c6494d9e2316a" Dec 11 13:59:26 crc kubenswrapper[5050]: I1211 13:59:26.747077 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab073247dca519f0c76d97e6a5a6e3de4ef0c58431d92d503a9c6494d9e2316a"} err="failed to get container status \"ab073247dca519f0c76d97e6a5a6e3de4ef0c58431d92d503a9c6494d9e2316a\": rpc error: code = NotFound desc = could not find container \"ab073247dca519f0c76d97e6a5a6e3de4ef0c58431d92d503a9c6494d9e2316a\": container with ID starting with ab073247dca519f0c76d97e6a5a6e3de4ef0c58431d92d503a9c6494d9e2316a not found: ID does not exist" Dec 11 13:59:27 crc kubenswrapper[5050]: I1211 13:59:27.525724 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bfxvp" event={"ID":"a13dee96-3971-4939-9565-8b6f4507a197","Type":"ContainerStarted","Data":"73341f50b11a1ae42fca86d843e4f5bfd19e0a5cebd456e70a6a13f3595a909d"} Dec 11 13:59:27 crc kubenswrapper[5050]: I1211 13:59:27.526326 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bfxvp" event={"ID":"a13dee96-3971-4939-9565-8b6f4507a197","Type":"ContainerStarted","Data":"98f1772af88846f64e6b1d88fc261366db9d2c0b3877e6e38dba75515557bb6b"} Dec 11 13:59:27 crc kubenswrapper[5050]: I1211 13:59:27.526354 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bfxvp" event={"ID":"a13dee96-3971-4939-9565-8b6f4507a197","Type":"ContainerStarted","Data":"a8a83cc4bfd51685cbb0c2815cb3d68f94949f3cfdb4816da56b127cf8debf8d"} Dec 11 13:59:27 crc kubenswrapper[5050]: I1211 13:59:27.526364 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bfxvp" event={"ID":"a13dee96-3971-4939-9565-8b6f4507a197","Type":"ContainerStarted","Data":"c19baa5e40f3c300121bc2adfcf1fd34e0dfbe3a61596448358cb964bd3b7382"} Dec 11 13:59:27 crc kubenswrapper[5050]: I1211 13:59:27.526373 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bfxvp" event={"ID":"a13dee96-3971-4939-9565-8b6f4507a197","Type":"ContainerStarted","Data":"fe57f9e30cb1f5b30ee5ce195111395764d7eda820159f25a239abaa73534a07"} Dec 11 13:59:27 crc kubenswrapper[5050]: I1211 13:59:27.526381 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bfxvp" event={"ID":"a13dee96-3971-4939-9565-8b6f4507a197","Type":"ContainerStarted","Data":"568f2a628edbb013ca73a1bb81e02a2f9f7381df6df10e5d86c3d51f9c1b395f"} Dec 11 13:59:27 crc kubenswrapper[5050]: I1211 13:59:27.528213 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-4fhtp_de09c7d4-952a-405d-9a54-32331c538ee2/kube-multus/0.log" Dec 11 13:59:27 crc kubenswrapper[5050]: I1211 13:59:27.528250 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-4fhtp" event={"ID":"de09c7d4-952a-405d-9a54-32331c538ee2","Type":"ContainerStarted","Data":"2956f0e8cbbb4b45861bdacfe1832a7607be554d4a25ee20df5c79594bad2290"} Dec 11 13:59:27 crc kubenswrapper[5050]: I1211 13:59:27.554665 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="ce1e7994-3a5d-488d-90a5-115ca4cb7cf3" path="/var/lib/kubelet/pods/ce1e7994-3a5d-488d-90a5-115ca4cb7cf3/volumes" Dec 11 13:59:29 crc kubenswrapper[5050]: I1211 13:59:29.542591 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bfxvp" event={"ID":"a13dee96-3971-4939-9565-8b6f4507a197","Type":"ContainerStarted","Data":"e519f87acf1c7f626f5a6e118e47f615c61dc28bea8e4d9848202ec319c328fa"} Dec 11 13:59:29 crc kubenswrapper[5050]: I1211 13:59:29.710418 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["crc-storage/crc-storage-crc-2hvjg"] Dec 11 13:59:29 crc kubenswrapper[5050]: I1211 13:59:29.711323 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-2hvjg" Dec 11 13:59:29 crc kubenswrapper[5050]: I1211 13:59:29.713132 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"crc-storage" Dec 11 13:59:29 crc kubenswrapper[5050]: I1211 13:59:29.713148 5050 reflector.go:368] Caches populated for *v1.Secret from object-"crc-storage"/"crc-storage-dockercfg-5d95f" Dec 11 13:59:29 crc kubenswrapper[5050]: I1211 13:59:29.715492 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"openshift-service-ca.crt" Dec 11 13:59:29 crc kubenswrapper[5050]: I1211 13:59:29.715785 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"kube-root-ca.crt" Dec 11 13:59:29 crc kubenswrapper[5050]: I1211 13:59:29.782071 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/805a0aa2-b76d-42f2-8b65-8ffcdd30e32d-crc-storage\") pod \"crc-storage-crc-2hvjg\" (UID: \"805a0aa2-b76d-42f2-8b65-8ffcdd30e32d\") " pod="crc-storage/crc-storage-crc-2hvjg" Dec 11 13:59:29 crc kubenswrapper[5050]: I1211 13:59:29.782222 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/805a0aa2-b76d-42f2-8b65-8ffcdd30e32d-node-mnt\") pod \"crc-storage-crc-2hvjg\" (UID: \"805a0aa2-b76d-42f2-8b65-8ffcdd30e32d\") " pod="crc-storage/crc-storage-crc-2hvjg" Dec 11 13:59:29 crc kubenswrapper[5050]: I1211 13:59:29.782722 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v55wm\" (UniqueName: \"kubernetes.io/projected/805a0aa2-b76d-42f2-8b65-8ffcdd30e32d-kube-api-access-v55wm\") pod \"crc-storage-crc-2hvjg\" (UID: \"805a0aa2-b76d-42f2-8b65-8ffcdd30e32d\") " pod="crc-storage/crc-storage-crc-2hvjg" Dec 11 13:59:29 crc kubenswrapper[5050]: I1211 13:59:29.883391 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/805a0aa2-b76d-42f2-8b65-8ffcdd30e32d-node-mnt\") pod \"crc-storage-crc-2hvjg\" (UID: \"805a0aa2-b76d-42f2-8b65-8ffcdd30e32d\") " pod="crc-storage/crc-storage-crc-2hvjg" Dec 11 13:59:29 crc kubenswrapper[5050]: I1211 13:59:29.883454 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v55wm\" (UniqueName: \"kubernetes.io/projected/805a0aa2-b76d-42f2-8b65-8ffcdd30e32d-kube-api-access-v55wm\") pod \"crc-storage-crc-2hvjg\" (UID: \"805a0aa2-b76d-42f2-8b65-8ffcdd30e32d\") " pod="crc-storage/crc-storage-crc-2hvjg" Dec 11 13:59:29 crc kubenswrapper[5050]: I1211 13:59:29.883481 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"crc-storage\" 
(UniqueName: \"kubernetes.io/configmap/805a0aa2-b76d-42f2-8b65-8ffcdd30e32d-crc-storage\") pod \"crc-storage-crc-2hvjg\" (UID: \"805a0aa2-b76d-42f2-8b65-8ffcdd30e32d\") " pod="crc-storage/crc-storage-crc-2hvjg" Dec 11 13:59:29 crc kubenswrapper[5050]: I1211 13:59:29.883698 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/805a0aa2-b76d-42f2-8b65-8ffcdd30e32d-node-mnt\") pod \"crc-storage-crc-2hvjg\" (UID: \"805a0aa2-b76d-42f2-8b65-8ffcdd30e32d\") " pod="crc-storage/crc-storage-crc-2hvjg" Dec 11 13:59:29 crc kubenswrapper[5050]: I1211 13:59:29.884387 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/805a0aa2-b76d-42f2-8b65-8ffcdd30e32d-crc-storage\") pod \"crc-storage-crc-2hvjg\" (UID: \"805a0aa2-b76d-42f2-8b65-8ffcdd30e32d\") " pod="crc-storage/crc-storage-crc-2hvjg" Dec 11 13:59:29 crc kubenswrapper[5050]: I1211 13:59:29.901049 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v55wm\" (UniqueName: \"kubernetes.io/projected/805a0aa2-b76d-42f2-8b65-8ffcdd30e32d-kube-api-access-v55wm\") pod \"crc-storage-crc-2hvjg\" (UID: \"805a0aa2-b76d-42f2-8b65-8ffcdd30e32d\") " pod="crc-storage/crc-storage-crc-2hvjg" Dec 11 13:59:30 crc kubenswrapper[5050]: I1211 13:59:30.030567 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-2hvjg" Dec 11 13:59:30 crc kubenswrapper[5050]: E1211 13:59:30.054573 5050 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-2hvjg_crc-storage_805a0aa2-b76d-42f2-8b65-8ffcdd30e32d_0(c70050e022ffe22f7a43fcc01d08b7bb55a56c1d5d2b59273afdf9e1b11bad73): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 11 13:59:30 crc kubenswrapper[5050]: E1211 13:59:30.054650 5050 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-2hvjg_crc-storage_805a0aa2-b76d-42f2-8b65-8ffcdd30e32d_0(c70050e022ffe22f7a43fcc01d08b7bb55a56c1d5d2b59273afdf9e1b11bad73): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-2hvjg" Dec 11 13:59:30 crc kubenswrapper[5050]: E1211 13:59:30.054670 5050 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-2hvjg_crc-storage_805a0aa2-b76d-42f2-8b65-8ffcdd30e32d_0(c70050e022ffe22f7a43fcc01d08b7bb55a56c1d5d2b59273afdf9e1b11bad73): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-2hvjg" Dec 11 13:59:30 crc kubenswrapper[5050]: E1211 13:59:30.054865 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"crc-storage-crc-2hvjg_crc-storage(805a0aa2-b76d-42f2-8b65-8ffcdd30e32d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"crc-storage-crc-2hvjg_crc-storage(805a0aa2-b76d-42f2-8b65-8ffcdd30e32d)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-2hvjg_crc-storage_805a0aa2-b76d-42f2-8b65-8ffcdd30e32d_0(c70050e022ffe22f7a43fcc01d08b7bb55a56c1d5d2b59273afdf9e1b11bad73): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="crc-storage/crc-storage-crc-2hvjg" podUID="805a0aa2-b76d-42f2-8b65-8ffcdd30e32d" Dec 11 13:59:32 crc kubenswrapper[5050]: I1211 13:59:32.301663 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-2hvjg"] Dec 11 13:59:32 crc kubenswrapper[5050]: I1211 13:59:32.302093 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-2hvjg" Dec 11 13:59:32 crc kubenswrapper[5050]: I1211 13:59:32.302567 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-2hvjg" Dec 11 13:59:32 crc kubenswrapper[5050]: E1211 13:59:32.323394 5050 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-2hvjg_crc-storage_805a0aa2-b76d-42f2-8b65-8ffcdd30e32d_0(be102bc17c19faea5ecb1a344c3f36f32d16ad5aed3ccbad806581ce04203f85): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 11 13:59:32 crc kubenswrapper[5050]: E1211 13:59:32.323474 5050 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-2hvjg_crc-storage_805a0aa2-b76d-42f2-8b65-8ffcdd30e32d_0(be102bc17c19faea5ecb1a344c3f36f32d16ad5aed3ccbad806581ce04203f85): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-2hvjg" Dec 11 13:59:32 crc kubenswrapper[5050]: E1211 13:59:32.323499 5050 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-2hvjg_crc-storage_805a0aa2-b76d-42f2-8b65-8ffcdd30e32d_0(be102bc17c19faea5ecb1a344c3f36f32d16ad5aed3ccbad806581ce04203f85): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-2hvjg" Dec 11 13:59:32 crc kubenswrapper[5050]: E1211 13:59:32.323551 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"crc-storage-crc-2hvjg_crc-storage(805a0aa2-b76d-42f2-8b65-8ffcdd30e32d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"crc-storage-crc-2hvjg_crc-storage(805a0aa2-b76d-42f2-8b65-8ffcdd30e32d)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-2hvjg_crc-storage_805a0aa2-b76d-42f2-8b65-8ffcdd30e32d_0(be102bc17c19faea5ecb1a344c3f36f32d16ad5aed3ccbad806581ce04203f85): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="crc-storage/crc-storage-crc-2hvjg" podUID="805a0aa2-b76d-42f2-8b65-8ffcdd30e32d" Dec 11 13:59:32 crc kubenswrapper[5050]: I1211 13:59:32.564080 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bfxvp" event={"ID":"a13dee96-3971-4939-9565-8b6f4507a197","Type":"ContainerStarted","Data":"03c20875a12fa7367ab891bd18d4ba7cbf0b2f62633c17d4d6b1d46e646c602a"} Dec 11 13:59:32 crc kubenswrapper[5050]: I1211 13:59:32.564324 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-bfxvp" Dec 11 13:59:32 crc kubenswrapper[5050]: I1211 13:59:32.587173 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-bfxvp" Dec 11 13:59:32 crc kubenswrapper[5050]: I1211 13:59:32.592638 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-bfxvp" podStartSLOduration=7.592618974 podStartE2EDuration="7.592618974s" podCreationTimestamp="2025-12-11 13:59:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 13:59:32.592179039 +0000 UTC m=+663.435901645" watchObservedRunningTime="2025-12-11 13:59:32.592618974 +0000 UTC m=+663.436341560" Dec 11 13:59:33 crc kubenswrapper[5050]: I1211 13:59:33.568350 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-bfxvp" Dec 11 13:59:33 crc kubenswrapper[5050]: I1211 13:59:33.568653 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-bfxvp" Dec 11 13:59:33 crc kubenswrapper[5050]: I1211 13:59:33.591653 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-bfxvp" Dec 11 13:59:45 crc kubenswrapper[5050]: I1211 13:59:45.545677 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-2hvjg" Dec 11 13:59:45 crc kubenswrapper[5050]: I1211 13:59:45.546799 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-2hvjg" Dec 11 13:59:45 crc kubenswrapper[5050]: I1211 13:59:45.751190 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-2hvjg"] Dec 11 13:59:45 crc kubenswrapper[5050]: I1211 13:59:45.757736 5050 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 11 13:59:46 crc kubenswrapper[5050]: I1211 13:59:46.632930 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-2hvjg" event={"ID":"805a0aa2-b76d-42f2-8b65-8ffcdd30e32d","Type":"ContainerStarted","Data":"2be0900dbd4344e507ca6c43832af636526edbc6d65f77ebfa33b7b643039d3e"} Dec 11 13:59:47 crc kubenswrapper[5050]: I1211 13:59:47.640494 5050 generic.go:334] "Generic (PLEG): container finished" podID="805a0aa2-b76d-42f2-8b65-8ffcdd30e32d" containerID="d216b65a928c99a67fed82e452c22653fec2716e6d9bfbd46f24c3dfa2efb0dd" exitCode=0 Dec 11 13:59:47 crc kubenswrapper[5050]: I1211 13:59:47.640568 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-2hvjg" event={"ID":"805a0aa2-b76d-42f2-8b65-8ffcdd30e32d","Type":"ContainerDied","Data":"d216b65a928c99a67fed82e452c22653fec2716e6d9bfbd46f24c3dfa2efb0dd"} Dec 11 13:59:48 crc kubenswrapper[5050]: I1211 13:59:48.839205 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-2hvjg" Dec 11 13:59:48 crc kubenswrapper[5050]: I1211 13:59:48.921581 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v55wm\" (UniqueName: \"kubernetes.io/projected/805a0aa2-b76d-42f2-8b65-8ffcdd30e32d-kube-api-access-v55wm\") pod \"805a0aa2-b76d-42f2-8b65-8ffcdd30e32d\" (UID: \"805a0aa2-b76d-42f2-8b65-8ffcdd30e32d\") " Dec 11 13:59:48 crc kubenswrapper[5050]: I1211 13:59:48.921646 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/805a0aa2-b76d-42f2-8b65-8ffcdd30e32d-crc-storage\") pod \"805a0aa2-b76d-42f2-8b65-8ffcdd30e32d\" (UID: \"805a0aa2-b76d-42f2-8b65-8ffcdd30e32d\") " Dec 11 13:59:48 crc kubenswrapper[5050]: I1211 13:59:48.921709 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/805a0aa2-b76d-42f2-8b65-8ffcdd30e32d-node-mnt\") pod \"805a0aa2-b76d-42f2-8b65-8ffcdd30e32d\" (UID: \"805a0aa2-b76d-42f2-8b65-8ffcdd30e32d\") " Dec 11 13:59:48 crc kubenswrapper[5050]: I1211 13:59:48.921962 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/805a0aa2-b76d-42f2-8b65-8ffcdd30e32d-node-mnt" (OuterVolumeSpecName: "node-mnt") pod "805a0aa2-b76d-42f2-8b65-8ffcdd30e32d" (UID: "805a0aa2-b76d-42f2-8b65-8ffcdd30e32d"). InnerVolumeSpecName "node-mnt". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 13:59:48 crc kubenswrapper[5050]: I1211 13:59:48.926643 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/805a0aa2-b76d-42f2-8b65-8ffcdd30e32d-kube-api-access-v55wm" (OuterVolumeSpecName: "kube-api-access-v55wm") pod "805a0aa2-b76d-42f2-8b65-8ffcdd30e32d" (UID: "805a0aa2-b76d-42f2-8b65-8ffcdd30e32d"). InnerVolumeSpecName "kube-api-access-v55wm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 13:59:48 crc kubenswrapper[5050]: I1211 13:59:48.935790 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/805a0aa2-b76d-42f2-8b65-8ffcdd30e32d-crc-storage" (OuterVolumeSpecName: "crc-storage") pod "805a0aa2-b76d-42f2-8b65-8ffcdd30e32d" (UID: "805a0aa2-b76d-42f2-8b65-8ffcdd30e32d"). InnerVolumeSpecName "crc-storage". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 13:59:49 crc kubenswrapper[5050]: I1211 13:59:49.023114 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v55wm\" (UniqueName: \"kubernetes.io/projected/805a0aa2-b76d-42f2-8b65-8ffcdd30e32d-kube-api-access-v55wm\") on node \"crc\" DevicePath \"\"" Dec 11 13:59:49 crc kubenswrapper[5050]: I1211 13:59:49.023149 5050 reconciler_common.go:293] "Volume detached for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/805a0aa2-b76d-42f2-8b65-8ffcdd30e32d-crc-storage\") on node \"crc\" DevicePath \"\"" Dec 11 13:59:49 crc kubenswrapper[5050]: I1211 13:59:49.023159 5050 reconciler_common.go:293] "Volume detached for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/805a0aa2-b76d-42f2-8b65-8ffcdd30e32d-node-mnt\") on node \"crc\" DevicePath \"\"" Dec 11 13:59:49 crc kubenswrapper[5050]: I1211 13:59:49.650624 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-2hvjg" event={"ID":"805a0aa2-b76d-42f2-8b65-8ffcdd30e32d","Type":"ContainerDied","Data":"2be0900dbd4344e507ca6c43832af636526edbc6d65f77ebfa33b7b643039d3e"} Dec 11 13:59:49 crc kubenswrapper[5050]: I1211 13:59:49.650663 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2be0900dbd4344e507ca6c43832af636526edbc6d65f77ebfa33b7b643039d3e" Dec 11 13:59:49 crc kubenswrapper[5050]: I1211 13:59:49.650743 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-2hvjg" Dec 11 13:59:56 crc kubenswrapper[5050]: I1211 13:59:56.076892 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8x4f86"] Dec 11 13:59:56 crc kubenswrapper[5050]: E1211 13:59:56.078374 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="805a0aa2-b76d-42f2-8b65-8ffcdd30e32d" containerName="storage" Dec 11 13:59:56 crc kubenswrapper[5050]: I1211 13:59:56.078445 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="805a0aa2-b76d-42f2-8b65-8ffcdd30e32d" containerName="storage" Dec 11 13:59:56 crc kubenswrapper[5050]: I1211 13:59:56.078586 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="805a0aa2-b76d-42f2-8b65-8ffcdd30e32d" containerName="storage" Dec 11 13:59:56 crc kubenswrapper[5050]: I1211 13:59:56.079365 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8x4f86" Dec 11 13:59:56 crc kubenswrapper[5050]: I1211 13:59:56.081523 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Dec 11 13:59:56 crc kubenswrapper[5050]: I1211 13:59:56.098577 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8x4f86"] Dec 11 13:59:56 crc kubenswrapper[5050]: I1211 13:59:56.114161 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qvgf\" (UniqueName: \"kubernetes.io/projected/df6dc81d-e08c-4c8d-a97d-911a18545768-kube-api-access-9qvgf\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8x4f86\" (UID: \"df6dc81d-e08c-4c8d-a97d-911a18545768\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8x4f86" Dec 11 13:59:56 crc kubenswrapper[5050]: I1211 13:59:56.114228 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/df6dc81d-e08c-4c8d-a97d-911a18545768-util\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8x4f86\" (UID: \"df6dc81d-e08c-4c8d-a97d-911a18545768\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8x4f86" Dec 11 13:59:56 crc kubenswrapper[5050]: I1211 13:59:56.114276 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/df6dc81d-e08c-4c8d-a97d-911a18545768-bundle\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8x4f86\" (UID: \"df6dc81d-e08c-4c8d-a97d-911a18545768\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8x4f86" Dec 11 13:59:56 crc kubenswrapper[5050]: I1211 13:59:56.215084 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9qvgf\" (UniqueName: \"kubernetes.io/projected/df6dc81d-e08c-4c8d-a97d-911a18545768-kube-api-access-9qvgf\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8x4f86\" (UID: \"df6dc81d-e08c-4c8d-a97d-911a18545768\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8x4f86" Dec 11 13:59:56 crc kubenswrapper[5050]: I1211 13:59:56.215136 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/df6dc81d-e08c-4c8d-a97d-911a18545768-util\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8x4f86\" (UID: \"df6dc81d-e08c-4c8d-a97d-911a18545768\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8x4f86" Dec 11 13:59:56 crc kubenswrapper[5050]: I1211 13:59:56.215169 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/df6dc81d-e08c-4c8d-a97d-911a18545768-bundle\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8x4f86\" (UID: \"df6dc81d-e08c-4c8d-a97d-911a18545768\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8x4f86" Dec 11 13:59:56 crc kubenswrapper[5050]: I1211 13:59:56.215723 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/df6dc81d-e08c-4c8d-a97d-911a18545768-bundle\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8x4f86\" (UID: \"df6dc81d-e08c-4c8d-a97d-911a18545768\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8x4f86" Dec 11 13:59:56 crc kubenswrapper[5050]: I1211 13:59:56.215907 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/df6dc81d-e08c-4c8d-a97d-911a18545768-util\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8x4f86\" (UID: \"df6dc81d-e08c-4c8d-a97d-911a18545768\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8x4f86" Dec 11 13:59:56 crc kubenswrapper[5050]: I1211 13:59:56.238713 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9qvgf\" (UniqueName: \"kubernetes.io/projected/df6dc81d-e08c-4c8d-a97d-911a18545768-kube-api-access-9qvgf\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8x4f86\" (UID: \"df6dc81d-e08c-4c8d-a97d-911a18545768\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8x4f86" Dec 11 13:59:56 crc kubenswrapper[5050]: I1211 13:59:56.260066 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-bfxvp" Dec 11 13:59:56 crc kubenswrapper[5050]: I1211 13:59:56.407355 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8x4f86" Dec 11 13:59:56 crc kubenswrapper[5050]: I1211 13:59:56.588553 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8x4f86"] Dec 11 13:59:56 crc kubenswrapper[5050]: I1211 13:59:56.684105 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8x4f86" event={"ID":"df6dc81d-e08c-4c8d-a97d-911a18545768","Type":"ContainerStarted","Data":"6e692c8c329b4434a08a309347b3d81c3a8d279b946a108ed7f2e9679565c066"} Dec 11 13:59:57 crc kubenswrapper[5050]: I1211 13:59:57.689396 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8x4f86" event={"ID":"df6dc81d-e08c-4c8d-a97d-911a18545768","Type":"ContainerStarted","Data":"f2e19148608fd1035430bf8031f9e994ed9275a73fa5b56cdba4075ff2b083f8"} Dec 11 13:59:58 crc kubenswrapper[5050]: I1211 13:59:58.438891 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-bhn66"] Dec 11 13:59:58 crc kubenswrapper[5050]: I1211 13:59:58.439912 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-bhn66" Dec 11 13:59:58 crc kubenswrapper[5050]: I1211 13:59:58.450598 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bhn66"] Dec 11 13:59:58 crc kubenswrapper[5050]: I1211 13:59:58.545066 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/051dc05c-79dc-4ea0-b678-3275d37dcc87-catalog-content\") pod \"redhat-operators-bhn66\" (UID: \"051dc05c-79dc-4ea0-b678-3275d37dcc87\") " pod="openshift-marketplace/redhat-operators-bhn66" Dec 11 13:59:58 crc kubenswrapper[5050]: I1211 13:59:58.545138 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zd9w5\" (UniqueName: \"kubernetes.io/projected/051dc05c-79dc-4ea0-b678-3275d37dcc87-kube-api-access-zd9w5\") pod \"redhat-operators-bhn66\" (UID: \"051dc05c-79dc-4ea0-b678-3275d37dcc87\") " pod="openshift-marketplace/redhat-operators-bhn66" Dec 11 13:59:58 crc kubenswrapper[5050]: I1211 13:59:58.545167 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/051dc05c-79dc-4ea0-b678-3275d37dcc87-utilities\") pod \"redhat-operators-bhn66\" (UID: \"051dc05c-79dc-4ea0-b678-3275d37dcc87\") " pod="openshift-marketplace/redhat-operators-bhn66" Dec 11 13:59:58 crc kubenswrapper[5050]: I1211 13:59:58.645895 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zd9w5\" (UniqueName: \"kubernetes.io/projected/051dc05c-79dc-4ea0-b678-3275d37dcc87-kube-api-access-zd9w5\") pod \"redhat-operators-bhn66\" (UID: \"051dc05c-79dc-4ea0-b678-3275d37dcc87\") " pod="openshift-marketplace/redhat-operators-bhn66" Dec 11 13:59:58 crc kubenswrapper[5050]: I1211 13:59:58.645933 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/051dc05c-79dc-4ea0-b678-3275d37dcc87-utilities\") pod \"redhat-operators-bhn66\" (UID: \"051dc05c-79dc-4ea0-b678-3275d37dcc87\") " pod="openshift-marketplace/redhat-operators-bhn66" Dec 11 13:59:58 crc kubenswrapper[5050]: I1211 13:59:58.646004 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/051dc05c-79dc-4ea0-b678-3275d37dcc87-catalog-content\") pod \"redhat-operators-bhn66\" (UID: \"051dc05c-79dc-4ea0-b678-3275d37dcc87\") " pod="openshift-marketplace/redhat-operators-bhn66" Dec 11 13:59:58 crc kubenswrapper[5050]: I1211 13:59:58.646420 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/051dc05c-79dc-4ea0-b678-3275d37dcc87-catalog-content\") pod \"redhat-operators-bhn66\" (UID: \"051dc05c-79dc-4ea0-b678-3275d37dcc87\") " pod="openshift-marketplace/redhat-operators-bhn66" Dec 11 13:59:58 crc kubenswrapper[5050]: I1211 13:59:58.646530 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/051dc05c-79dc-4ea0-b678-3275d37dcc87-utilities\") pod \"redhat-operators-bhn66\" (UID: \"051dc05c-79dc-4ea0-b678-3275d37dcc87\") " pod="openshift-marketplace/redhat-operators-bhn66" Dec 11 13:59:58 crc kubenswrapper[5050]: I1211 13:59:58.673912 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-zd9w5\" (UniqueName: \"kubernetes.io/projected/051dc05c-79dc-4ea0-b678-3275d37dcc87-kube-api-access-zd9w5\") pod \"redhat-operators-bhn66\" (UID: \"051dc05c-79dc-4ea0-b678-3275d37dcc87\") " pod="openshift-marketplace/redhat-operators-bhn66" Dec 11 13:59:58 crc kubenswrapper[5050]: I1211 13:59:58.694635 5050 generic.go:334] "Generic (PLEG): container finished" podID="df6dc81d-e08c-4c8d-a97d-911a18545768" containerID="f2e19148608fd1035430bf8031f9e994ed9275a73fa5b56cdba4075ff2b083f8" exitCode=0 Dec 11 13:59:58 crc kubenswrapper[5050]: I1211 13:59:58.694681 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8x4f86" event={"ID":"df6dc81d-e08c-4c8d-a97d-911a18545768","Type":"ContainerDied","Data":"f2e19148608fd1035430bf8031f9e994ed9275a73fa5b56cdba4075ff2b083f8"} Dec 11 13:59:58 crc kubenswrapper[5050]: I1211 13:59:58.769316 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bhn66" Dec 11 13:59:58 crc kubenswrapper[5050]: I1211 13:59:58.997029 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bhn66"] Dec 11 13:59:59 crc kubenswrapper[5050]: W1211 13:59:59.006346 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod051dc05c_79dc_4ea0_b678_3275d37dcc87.slice/crio-1d54091c78e84d35425eafbeaabe2a149da5d12902c058aa79868b7fbff15e48 WatchSource:0}: Error finding container 1d54091c78e84d35425eafbeaabe2a149da5d12902c058aa79868b7fbff15e48: Status 404 returned error can't find the container with id 1d54091c78e84d35425eafbeaabe2a149da5d12902c058aa79868b7fbff15e48 Dec 11 13:59:59 crc kubenswrapper[5050]: I1211 13:59:59.701103 5050 generic.go:334] "Generic (PLEG): container finished" podID="051dc05c-79dc-4ea0-b678-3275d37dcc87" containerID="4b3c1f5a697dc04eda3b5196e49b246c78da411dea0d1b7b4f66b13a8206e4f2" exitCode=0 Dec 11 13:59:59 crc kubenswrapper[5050]: I1211 13:59:59.701208 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bhn66" event={"ID":"051dc05c-79dc-4ea0-b678-3275d37dcc87","Type":"ContainerDied","Data":"4b3c1f5a697dc04eda3b5196e49b246c78da411dea0d1b7b4f66b13a8206e4f2"} Dec 11 13:59:59 crc kubenswrapper[5050]: I1211 13:59:59.701402 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bhn66" event={"ID":"051dc05c-79dc-4ea0-b678-3275d37dcc87","Type":"ContainerStarted","Data":"1d54091c78e84d35425eafbeaabe2a149da5d12902c058aa79868b7fbff15e48"} Dec 11 14:00:00 crc kubenswrapper[5050]: I1211 14:00:00.162129 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29424360-8hl85"] Dec 11 14:00:00 crc kubenswrapper[5050]: I1211 14:00:00.163077 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29424360-8hl85" Dec 11 14:00:00 crc kubenswrapper[5050]: I1211 14:00:00.168122 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 11 14:00:00 crc kubenswrapper[5050]: I1211 14:00:00.172161 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 11 14:00:00 crc kubenswrapper[5050]: I1211 14:00:00.172242 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29424360-8hl85"] Dec 11 14:00:00 crc kubenswrapper[5050]: I1211 14:00:00.269798 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e7e93fea-aeee-42f1-8cc5-204a7365d883-config-volume\") pod \"collect-profiles-29424360-8hl85\" (UID: \"e7e93fea-aeee-42f1-8cc5-204a7365d883\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424360-8hl85" Dec 11 14:00:00 crc kubenswrapper[5050]: I1211 14:00:00.269838 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6bps\" (UniqueName: \"kubernetes.io/projected/e7e93fea-aeee-42f1-8cc5-204a7365d883-kube-api-access-m6bps\") pod \"collect-profiles-29424360-8hl85\" (UID: \"e7e93fea-aeee-42f1-8cc5-204a7365d883\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424360-8hl85" Dec 11 14:00:00 crc kubenswrapper[5050]: I1211 14:00:00.269879 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e7e93fea-aeee-42f1-8cc5-204a7365d883-secret-volume\") pod \"collect-profiles-29424360-8hl85\" (UID: \"e7e93fea-aeee-42f1-8cc5-204a7365d883\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424360-8hl85" Dec 11 14:00:00 crc kubenswrapper[5050]: I1211 14:00:00.370848 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e7e93fea-aeee-42f1-8cc5-204a7365d883-config-volume\") pod \"collect-profiles-29424360-8hl85\" (UID: \"e7e93fea-aeee-42f1-8cc5-204a7365d883\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424360-8hl85" Dec 11 14:00:00 crc kubenswrapper[5050]: I1211 14:00:00.371158 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m6bps\" (UniqueName: \"kubernetes.io/projected/e7e93fea-aeee-42f1-8cc5-204a7365d883-kube-api-access-m6bps\") pod \"collect-profiles-29424360-8hl85\" (UID: \"e7e93fea-aeee-42f1-8cc5-204a7365d883\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424360-8hl85" Dec 11 14:00:00 crc kubenswrapper[5050]: I1211 14:00:00.371201 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e7e93fea-aeee-42f1-8cc5-204a7365d883-secret-volume\") pod \"collect-profiles-29424360-8hl85\" (UID: \"e7e93fea-aeee-42f1-8cc5-204a7365d883\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424360-8hl85" Dec 11 14:00:00 crc kubenswrapper[5050]: I1211 14:00:00.371785 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e7e93fea-aeee-42f1-8cc5-204a7365d883-config-volume\") pod 
\"collect-profiles-29424360-8hl85\" (UID: \"e7e93fea-aeee-42f1-8cc5-204a7365d883\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424360-8hl85" Dec 11 14:00:00 crc kubenswrapper[5050]: I1211 14:00:00.378186 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e7e93fea-aeee-42f1-8cc5-204a7365d883-secret-volume\") pod \"collect-profiles-29424360-8hl85\" (UID: \"e7e93fea-aeee-42f1-8cc5-204a7365d883\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424360-8hl85" Dec 11 14:00:00 crc kubenswrapper[5050]: I1211 14:00:00.386526 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m6bps\" (UniqueName: \"kubernetes.io/projected/e7e93fea-aeee-42f1-8cc5-204a7365d883-kube-api-access-m6bps\") pod \"collect-profiles-29424360-8hl85\" (UID: \"e7e93fea-aeee-42f1-8cc5-204a7365d883\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424360-8hl85" Dec 11 14:00:00 crc kubenswrapper[5050]: I1211 14:00:00.493082 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29424360-8hl85" Dec 11 14:00:00 crc kubenswrapper[5050]: I1211 14:00:00.653396 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29424360-8hl85"] Dec 11 14:00:00 crc kubenswrapper[5050]: W1211 14:00:00.658370 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode7e93fea_aeee_42f1_8cc5_204a7365d883.slice/crio-57fbd45e93ce387ec55271b50cd6fe34279dd9edba60c4758d30d102fe889b64 WatchSource:0}: Error finding container 57fbd45e93ce387ec55271b50cd6fe34279dd9edba60c4758d30d102fe889b64: Status 404 returned error can't find the container with id 57fbd45e93ce387ec55271b50cd6fe34279dd9edba60c4758d30d102fe889b64 Dec 11 14:00:00 crc kubenswrapper[5050]: I1211 14:00:00.707850 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29424360-8hl85" event={"ID":"e7e93fea-aeee-42f1-8cc5-204a7365d883","Type":"ContainerStarted","Data":"57fbd45e93ce387ec55271b50cd6fe34279dd9edba60c4758d30d102fe889b64"} Dec 11 14:00:00 crc kubenswrapper[5050]: I1211 14:00:00.709380 5050 generic.go:334] "Generic (PLEG): container finished" podID="df6dc81d-e08c-4c8d-a97d-911a18545768" containerID="40dad23d4c5e5784818e0e090727083981f6401ebc3c3691ce3e178af3ecddbb" exitCode=0 Dec 11 14:00:00 crc kubenswrapper[5050]: I1211 14:00:00.709427 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8x4f86" event={"ID":"df6dc81d-e08c-4c8d-a97d-911a18545768","Type":"ContainerDied","Data":"40dad23d4c5e5784818e0e090727083981f6401ebc3c3691ce3e178af3ecddbb"} Dec 11 14:00:00 crc kubenswrapper[5050]: I1211 14:00:00.712942 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bhn66" event={"ID":"051dc05c-79dc-4ea0-b678-3275d37dcc87","Type":"ContainerStarted","Data":"5f4231e8d3f0e985d1f25e14791afe76c95abe4bae699f86323336417f087ff5"} Dec 11 14:00:01 crc kubenswrapper[5050]: I1211 14:00:01.719110 5050 generic.go:334] "Generic (PLEG): container finished" podID="051dc05c-79dc-4ea0-b678-3275d37dcc87" containerID="5f4231e8d3f0e985d1f25e14791afe76c95abe4bae699f86323336417f087ff5" exitCode=0 Dec 11 14:00:01 crc kubenswrapper[5050]: I1211 14:00:01.719185 5050 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bhn66" event={"ID":"051dc05c-79dc-4ea0-b678-3275d37dcc87","Type":"ContainerDied","Data":"5f4231e8d3f0e985d1f25e14791afe76c95abe4bae699f86323336417f087ff5"} Dec 11 14:00:01 crc kubenswrapper[5050]: I1211 14:00:01.720459 5050 generic.go:334] "Generic (PLEG): container finished" podID="e7e93fea-aeee-42f1-8cc5-204a7365d883" containerID="778610a6f99edc88e656495bd5e22549a8285191df977232d12b751af22a5717" exitCode=0 Dec 11 14:00:01 crc kubenswrapper[5050]: I1211 14:00:01.720536 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29424360-8hl85" event={"ID":"e7e93fea-aeee-42f1-8cc5-204a7365d883","Type":"ContainerDied","Data":"778610a6f99edc88e656495bd5e22549a8285191df977232d12b751af22a5717"} Dec 11 14:00:01 crc kubenswrapper[5050]: I1211 14:00:01.724096 5050 generic.go:334] "Generic (PLEG): container finished" podID="df6dc81d-e08c-4c8d-a97d-911a18545768" containerID="e3e6091a99432f63cf21d944932a22f390d344f8e5614bdc8789caff21e47a5c" exitCode=0 Dec 11 14:00:01 crc kubenswrapper[5050]: I1211 14:00:01.724140 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8x4f86" event={"ID":"df6dc81d-e08c-4c8d-a97d-911a18545768","Type":"ContainerDied","Data":"e3e6091a99432f63cf21d944932a22f390d344f8e5614bdc8789caff21e47a5c"} Dec 11 14:00:02 crc kubenswrapper[5050]: I1211 14:00:02.940563 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8x4f86" Dec 11 14:00:02 crc kubenswrapper[5050]: I1211 14:00:02.984183 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29424360-8hl85" Dec 11 14:00:03 crc kubenswrapper[5050]: I1211 14:00:03.018431 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e7e93fea-aeee-42f1-8cc5-204a7365d883-config-volume\") pod \"e7e93fea-aeee-42f1-8cc5-204a7365d883\" (UID: \"e7e93fea-aeee-42f1-8cc5-204a7365d883\") " Dec 11 14:00:03 crc kubenswrapper[5050]: I1211 14:00:03.018810 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e7e93fea-aeee-42f1-8cc5-204a7365d883-secret-volume\") pod \"e7e93fea-aeee-42f1-8cc5-204a7365d883\" (UID: \"e7e93fea-aeee-42f1-8cc5-204a7365d883\") " Dec 11 14:00:03 crc kubenswrapper[5050]: I1211 14:00:03.018953 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/df6dc81d-e08c-4c8d-a97d-911a18545768-bundle\") pod \"df6dc81d-e08c-4c8d-a97d-911a18545768\" (UID: \"df6dc81d-e08c-4c8d-a97d-911a18545768\") " Dec 11 14:00:03 crc kubenswrapper[5050]: I1211 14:00:03.019339 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m6bps\" (UniqueName: \"kubernetes.io/projected/e7e93fea-aeee-42f1-8cc5-204a7365d883-kube-api-access-m6bps\") pod \"e7e93fea-aeee-42f1-8cc5-204a7365d883\" (UID: \"e7e93fea-aeee-42f1-8cc5-204a7365d883\") " Dec 11 14:00:03 crc kubenswrapper[5050]: I1211 14:00:03.019421 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9qvgf\" (UniqueName: \"kubernetes.io/projected/df6dc81d-e08c-4c8d-a97d-911a18545768-kube-api-access-9qvgf\") pod \"df6dc81d-e08c-4c8d-a97d-911a18545768\" (UID: \"df6dc81d-e08c-4c8d-a97d-911a18545768\") " Dec 11 14:00:03 crc kubenswrapper[5050]: I1211 14:00:03.019526 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/df6dc81d-e08c-4c8d-a97d-911a18545768-util\") pod \"df6dc81d-e08c-4c8d-a97d-911a18545768\" (UID: \"df6dc81d-e08c-4c8d-a97d-911a18545768\") " Dec 11 14:00:03 crc kubenswrapper[5050]: I1211 14:00:03.019580 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e93fea-aeee-42f1-8cc5-204a7365d883-config-volume" (OuterVolumeSpecName: "config-volume") pod "e7e93fea-aeee-42f1-8cc5-204a7365d883" (UID: "e7e93fea-aeee-42f1-8cc5-204a7365d883"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 14:00:03 crc kubenswrapper[5050]: I1211 14:00:03.020339 5050 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e7e93fea-aeee-42f1-8cc5-204a7365d883-config-volume\") on node \"crc\" DevicePath \"\"" Dec 11 14:00:03 crc kubenswrapper[5050]: I1211 14:00:03.022151 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/df6dc81d-e08c-4c8d-a97d-911a18545768-bundle" (OuterVolumeSpecName: "bundle") pod "df6dc81d-e08c-4c8d-a97d-911a18545768" (UID: "df6dc81d-e08c-4c8d-a97d-911a18545768"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 14:00:03 crc kubenswrapper[5050]: I1211 14:00:03.026134 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e93fea-aeee-42f1-8cc5-204a7365d883-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "e7e93fea-aeee-42f1-8cc5-204a7365d883" (UID: "e7e93fea-aeee-42f1-8cc5-204a7365d883"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:00:03 crc kubenswrapper[5050]: I1211 14:00:03.026895 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e93fea-aeee-42f1-8cc5-204a7365d883-kube-api-access-m6bps" (OuterVolumeSpecName: "kube-api-access-m6bps") pod "e7e93fea-aeee-42f1-8cc5-204a7365d883" (UID: "e7e93fea-aeee-42f1-8cc5-204a7365d883"). InnerVolumeSpecName "kube-api-access-m6bps". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 14:00:03 crc kubenswrapper[5050]: I1211 14:00:03.030202 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df6dc81d-e08c-4c8d-a97d-911a18545768-kube-api-access-9qvgf" (OuterVolumeSpecName: "kube-api-access-9qvgf") pod "df6dc81d-e08c-4c8d-a97d-911a18545768" (UID: "df6dc81d-e08c-4c8d-a97d-911a18545768"). InnerVolumeSpecName "kube-api-access-9qvgf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 14:00:03 crc kubenswrapper[5050]: I1211 14:00:03.031998 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/df6dc81d-e08c-4c8d-a97d-911a18545768-util" (OuterVolumeSpecName: "util") pod "df6dc81d-e08c-4c8d-a97d-911a18545768" (UID: "df6dc81d-e08c-4c8d-a97d-911a18545768"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 14:00:03 crc kubenswrapper[5050]: I1211 14:00:03.121744 5050 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e7e93fea-aeee-42f1-8cc5-204a7365d883-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 11 14:00:03 crc kubenswrapper[5050]: I1211 14:00:03.121789 5050 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/df6dc81d-e08c-4c8d-a97d-911a18545768-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 14:00:03 crc kubenswrapper[5050]: I1211 14:00:03.121803 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m6bps\" (UniqueName: \"kubernetes.io/projected/e7e93fea-aeee-42f1-8cc5-204a7365d883-kube-api-access-m6bps\") on node \"crc\" DevicePath \"\"" Dec 11 14:00:03 crc kubenswrapper[5050]: I1211 14:00:03.121816 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9qvgf\" (UniqueName: \"kubernetes.io/projected/df6dc81d-e08c-4c8d-a97d-911a18545768-kube-api-access-9qvgf\") on node \"crc\" DevicePath \"\"" Dec 11 14:00:03 crc kubenswrapper[5050]: I1211 14:00:03.121827 5050 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/df6dc81d-e08c-4c8d-a97d-911a18545768-util\") on node \"crc\" DevicePath \"\"" Dec 11 14:00:03 crc kubenswrapper[5050]: I1211 14:00:03.733731 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29424360-8hl85" Dec 11 14:00:03 crc kubenswrapper[5050]: I1211 14:00:03.733729 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29424360-8hl85" event={"ID":"e7e93fea-aeee-42f1-8cc5-204a7365d883","Type":"ContainerDied","Data":"57fbd45e93ce387ec55271b50cd6fe34279dd9edba60c4758d30d102fe889b64"} Dec 11 14:00:03 crc kubenswrapper[5050]: I1211 14:00:03.734287 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="57fbd45e93ce387ec55271b50cd6fe34279dd9edba60c4758d30d102fe889b64" Dec 11 14:00:03 crc kubenswrapper[5050]: I1211 14:00:03.735872 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8x4f86" event={"ID":"df6dc81d-e08c-4c8d-a97d-911a18545768","Type":"ContainerDied","Data":"6e692c8c329b4434a08a309347b3d81c3a8d279b946a108ed7f2e9679565c066"} Dec 11 14:00:03 crc kubenswrapper[5050]: I1211 14:00:03.735917 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6e692c8c329b4434a08a309347b3d81c3a8d279b946a108ed7f2e9679565c066" Dec 11 14:00:03 crc kubenswrapper[5050]: I1211 14:00:03.735985 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8x4f86" Dec 11 14:00:03 crc kubenswrapper[5050]: I1211 14:00:03.737608 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bhn66" event={"ID":"051dc05c-79dc-4ea0-b678-3275d37dcc87","Type":"ContainerStarted","Data":"07263d08691c3a60bc9f5fa1d1408812d70977159b2fada09970140a7bdfae4b"} Dec 11 14:00:03 crc kubenswrapper[5050]: I1211 14:00:03.757202 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-bhn66" podStartSLOduration=2.797818501 podStartE2EDuration="5.75718476s" podCreationTimestamp="2025-12-11 13:59:58 +0000 UTC" firstStartedPulling="2025-12-11 13:59:59.702632859 +0000 UTC m=+690.546355445" lastFinishedPulling="2025-12-11 14:00:02.661999118 +0000 UTC m=+693.505721704" observedRunningTime="2025-12-11 14:00:03.753259901 +0000 UTC m=+694.596982487" watchObservedRunningTime="2025-12-11 14:00:03.75718476 +0000 UTC m=+694.600907346" Dec 11 14:00:07 crc kubenswrapper[5050]: I1211 14:00:07.561492 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-6769fb99d-jfbvl"] Dec 11 14:00:07 crc kubenswrapper[5050]: E1211 14:00:07.563364 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df6dc81d-e08c-4c8d-a97d-911a18545768" containerName="util" Dec 11 14:00:07 crc kubenswrapper[5050]: I1211 14:00:07.563402 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="df6dc81d-e08c-4c8d-a97d-911a18545768" containerName="util" Dec 11 14:00:07 crc kubenswrapper[5050]: E1211 14:00:07.563425 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7e93fea-aeee-42f1-8cc5-204a7365d883" containerName="collect-profiles" Dec 11 14:00:07 crc kubenswrapper[5050]: I1211 14:00:07.563435 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7e93fea-aeee-42f1-8cc5-204a7365d883" containerName="collect-profiles" Dec 11 14:00:07 crc kubenswrapper[5050]: E1211 14:00:07.563444 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df6dc81d-e08c-4c8d-a97d-911a18545768" 
containerName="extract" Dec 11 14:00:07 crc kubenswrapper[5050]: I1211 14:00:07.563456 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="df6dc81d-e08c-4c8d-a97d-911a18545768" containerName="extract" Dec 11 14:00:07 crc kubenswrapper[5050]: E1211 14:00:07.563470 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df6dc81d-e08c-4c8d-a97d-911a18545768" containerName="pull" Dec 11 14:00:07 crc kubenswrapper[5050]: I1211 14:00:07.563477 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="df6dc81d-e08c-4c8d-a97d-911a18545768" containerName="pull" Dec 11 14:00:07 crc kubenswrapper[5050]: I1211 14:00:07.563598 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="df6dc81d-e08c-4c8d-a97d-911a18545768" containerName="extract" Dec 11 14:00:07 crc kubenswrapper[5050]: I1211 14:00:07.563616 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7e93fea-aeee-42f1-8cc5-204a7365d883" containerName="collect-profiles" Dec 11 14:00:07 crc kubenswrapper[5050]: I1211 14:00:07.564125 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-6769fb99d-jfbvl" Dec 11 14:00:07 crc kubenswrapper[5050]: I1211 14:00:07.566070 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-b8vjd" Dec 11 14:00:07 crc kubenswrapper[5050]: I1211 14:00:07.566109 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Dec 11 14:00:07 crc kubenswrapper[5050]: I1211 14:00:07.566223 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Dec 11 14:00:07 crc kubenswrapper[5050]: I1211 14:00:07.582198 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-6769fb99d-jfbvl"] Dec 11 14:00:07 crc kubenswrapper[5050]: I1211 14:00:07.676581 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-222zk\" (UniqueName: \"kubernetes.io/projected/6db92812-9f29-4bde-9013-18611df4ded4-kube-api-access-222zk\") pod \"nmstate-operator-6769fb99d-jfbvl\" (UID: \"6db92812-9f29-4bde-9013-18611df4ded4\") " pod="openshift-nmstate/nmstate-operator-6769fb99d-jfbvl" Dec 11 14:00:07 crc kubenswrapper[5050]: I1211 14:00:07.777370 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-222zk\" (UniqueName: \"kubernetes.io/projected/6db92812-9f29-4bde-9013-18611df4ded4-kube-api-access-222zk\") pod \"nmstate-operator-6769fb99d-jfbvl\" (UID: \"6db92812-9f29-4bde-9013-18611df4ded4\") " pod="openshift-nmstate/nmstate-operator-6769fb99d-jfbvl" Dec 11 14:00:07 crc kubenswrapper[5050]: I1211 14:00:07.794031 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-222zk\" (UniqueName: \"kubernetes.io/projected/6db92812-9f29-4bde-9013-18611df4ded4-kube-api-access-222zk\") pod \"nmstate-operator-6769fb99d-jfbvl\" (UID: \"6db92812-9f29-4bde-9013-18611df4ded4\") " pod="openshift-nmstate/nmstate-operator-6769fb99d-jfbvl" Dec 11 14:00:07 crc kubenswrapper[5050]: I1211 14:00:07.878124 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-6769fb99d-jfbvl" Dec 11 14:00:08 crc kubenswrapper[5050]: I1211 14:00:08.066939 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-6769fb99d-jfbvl"] Dec 11 14:00:08 crc kubenswrapper[5050]: I1211 14:00:08.762073 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-6769fb99d-jfbvl" event={"ID":"6db92812-9f29-4bde-9013-18611df4ded4","Type":"ContainerStarted","Data":"4d015e6f39765c2e5d3ebc3ae8e09ce548de51371de6cf43b118d7f975978b4d"} Dec 11 14:00:08 crc kubenswrapper[5050]: I1211 14:00:08.770381 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-bhn66" Dec 11 14:00:08 crc kubenswrapper[5050]: I1211 14:00:08.770461 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-bhn66" Dec 11 14:00:08 crc kubenswrapper[5050]: I1211 14:00:08.812425 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-bhn66" Dec 11 14:00:09 crc kubenswrapper[5050]: I1211 14:00:09.813913 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-bhn66" Dec 11 14:00:11 crc kubenswrapper[5050]: I1211 14:00:11.786065 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-6769fb99d-jfbvl" event={"ID":"6db92812-9f29-4bde-9013-18611df4ded4","Type":"ContainerStarted","Data":"3375b76147fededadcb61c22fbcff8affc3c6eb28ce536e928411495a32e3331"} Dec 11 14:00:12 crc kubenswrapper[5050]: I1211 14:00:12.431117 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-6769fb99d-jfbvl" podStartSLOduration=2.450732051 podStartE2EDuration="5.431093477s" podCreationTimestamp="2025-12-11 14:00:07 +0000 UTC" firstStartedPulling="2025-12-11 14:00:08.076237057 +0000 UTC m=+698.919959643" lastFinishedPulling="2025-12-11 14:00:11.056598483 +0000 UTC m=+701.900321069" observedRunningTime="2025-12-11 14:00:11.805707493 +0000 UTC m=+702.649430109" watchObservedRunningTime="2025-12-11 14:00:12.431093477 +0000 UTC m=+703.274816063" Dec 11 14:00:12 crc kubenswrapper[5050]: I1211 14:00:12.432637 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bhn66"] Dec 11 14:00:12 crc kubenswrapper[5050]: I1211 14:00:12.790609 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-bhn66" podUID="051dc05c-79dc-4ea0-b678-3275d37dcc87" containerName="registry-server" containerID="cri-o://07263d08691c3a60bc9f5fa1d1408812d70977159b2fada09970140a7bdfae4b" gracePeriod=2 Dec 11 14:00:14 crc kubenswrapper[5050]: I1211 14:00:14.807415 5050 generic.go:334] "Generic (PLEG): container finished" podID="051dc05c-79dc-4ea0-b678-3275d37dcc87" containerID="07263d08691c3a60bc9f5fa1d1408812d70977159b2fada09970140a7bdfae4b" exitCode=0 Dec 11 14:00:14 crc kubenswrapper[5050]: I1211 14:00:14.807459 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bhn66" event={"ID":"051dc05c-79dc-4ea0-b678-3275d37dcc87","Type":"ContainerDied","Data":"07263d08691c3a60bc9f5fa1d1408812d70977159b2fada09970140a7bdfae4b"} Dec 11 14:00:14 crc kubenswrapper[5050]: I1211 14:00:14.928341 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-bhn66" Dec 11 14:00:14 crc kubenswrapper[5050]: I1211 14:00:14.971332 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/051dc05c-79dc-4ea0-b678-3275d37dcc87-catalog-content\") pod \"051dc05c-79dc-4ea0-b678-3275d37dcc87\" (UID: \"051dc05c-79dc-4ea0-b678-3275d37dcc87\") " Dec 11 14:00:14 crc kubenswrapper[5050]: I1211 14:00:14.971523 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zd9w5\" (UniqueName: \"kubernetes.io/projected/051dc05c-79dc-4ea0-b678-3275d37dcc87-kube-api-access-zd9w5\") pod \"051dc05c-79dc-4ea0-b678-3275d37dcc87\" (UID: \"051dc05c-79dc-4ea0-b678-3275d37dcc87\") " Dec 11 14:00:14 crc kubenswrapper[5050]: I1211 14:00:14.971586 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/051dc05c-79dc-4ea0-b678-3275d37dcc87-utilities\") pod \"051dc05c-79dc-4ea0-b678-3275d37dcc87\" (UID: \"051dc05c-79dc-4ea0-b678-3275d37dcc87\") " Dec 11 14:00:14 crc kubenswrapper[5050]: I1211 14:00:14.972595 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/051dc05c-79dc-4ea0-b678-3275d37dcc87-utilities" (OuterVolumeSpecName: "utilities") pod "051dc05c-79dc-4ea0-b678-3275d37dcc87" (UID: "051dc05c-79dc-4ea0-b678-3275d37dcc87"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 14:00:14 crc kubenswrapper[5050]: I1211 14:00:14.978455 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/051dc05c-79dc-4ea0-b678-3275d37dcc87-kube-api-access-zd9w5" (OuterVolumeSpecName: "kube-api-access-zd9w5") pod "051dc05c-79dc-4ea0-b678-3275d37dcc87" (UID: "051dc05c-79dc-4ea0-b678-3275d37dcc87"). InnerVolumeSpecName "kube-api-access-zd9w5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 14:00:15 crc kubenswrapper[5050]: I1211 14:00:15.074189 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zd9w5\" (UniqueName: \"kubernetes.io/projected/051dc05c-79dc-4ea0-b678-3275d37dcc87-kube-api-access-zd9w5\") on node \"crc\" DevicePath \"\"" Dec 11 14:00:15 crc kubenswrapper[5050]: I1211 14:00:15.074251 5050 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/051dc05c-79dc-4ea0-b678-3275d37dcc87-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 14:00:15 crc kubenswrapper[5050]: I1211 14:00:15.075521 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/051dc05c-79dc-4ea0-b678-3275d37dcc87-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "051dc05c-79dc-4ea0-b678-3275d37dcc87" (UID: "051dc05c-79dc-4ea0-b678-3275d37dcc87"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 14:00:15 crc kubenswrapper[5050]: I1211 14:00:15.175757 5050 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/051dc05c-79dc-4ea0-b678-3275d37dcc87-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 14:00:15 crc kubenswrapper[5050]: I1211 14:00:15.817686 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bhn66" event={"ID":"051dc05c-79dc-4ea0-b678-3275d37dcc87","Type":"ContainerDied","Data":"1d54091c78e84d35425eafbeaabe2a149da5d12902c058aa79868b7fbff15e48"} Dec 11 14:00:15 crc kubenswrapper[5050]: I1211 14:00:15.817753 5050 scope.go:117] "RemoveContainer" containerID="07263d08691c3a60bc9f5fa1d1408812d70977159b2fada09970140a7bdfae4b" Dec 11 14:00:15 crc kubenswrapper[5050]: I1211 14:00:15.817846 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bhn66" Dec 11 14:00:15 crc kubenswrapper[5050]: I1211 14:00:15.852259 5050 scope.go:117] "RemoveContainer" containerID="5f4231e8d3f0e985d1f25e14791afe76c95abe4bae699f86323336417f087ff5" Dec 11 14:00:15 crc kubenswrapper[5050]: I1211 14:00:15.879726 5050 scope.go:117] "RemoveContainer" containerID="4b3c1f5a697dc04eda3b5196e49b246c78da411dea0d1b7b4f66b13a8206e4f2" Dec 11 14:00:15 crc kubenswrapper[5050]: I1211 14:00:15.881753 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bhn66"] Dec 11 14:00:15 crc kubenswrapper[5050]: I1211 14:00:15.886609 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-bhn66"] Dec 11 14:00:16 crc kubenswrapper[5050]: I1211 14:00:16.269909 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-7f7f7578db-6p8v4"] Dec 11 14:00:16 crc kubenswrapper[5050]: E1211 14:00:16.270219 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="051dc05c-79dc-4ea0-b678-3275d37dcc87" containerName="registry-server" Dec 11 14:00:16 crc kubenswrapper[5050]: I1211 14:00:16.270241 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="051dc05c-79dc-4ea0-b678-3275d37dcc87" containerName="registry-server" Dec 11 14:00:16 crc kubenswrapper[5050]: E1211 14:00:16.270265 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="051dc05c-79dc-4ea0-b678-3275d37dcc87" containerName="extract-content" Dec 11 14:00:16 crc kubenswrapper[5050]: I1211 14:00:16.270273 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="051dc05c-79dc-4ea0-b678-3275d37dcc87" containerName="extract-content" Dec 11 14:00:16 crc kubenswrapper[5050]: E1211 14:00:16.270283 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="051dc05c-79dc-4ea0-b678-3275d37dcc87" containerName="extract-utilities" Dec 11 14:00:16 crc kubenswrapper[5050]: I1211 14:00:16.270292 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="051dc05c-79dc-4ea0-b678-3275d37dcc87" containerName="extract-utilities" Dec 11 14:00:16 crc kubenswrapper[5050]: I1211 14:00:16.270406 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="051dc05c-79dc-4ea0-b678-3275d37dcc87" containerName="registry-server" Dec 11 14:00:16 crc kubenswrapper[5050]: I1211 14:00:16.271141 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-metrics-7f7f7578db-6p8v4" Dec 11 14:00:16 crc kubenswrapper[5050]: I1211 14:00:16.275117 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-94hht" Dec 11 14:00:16 crc kubenswrapper[5050]: I1211 14:00:16.287131 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-7f7f7578db-6p8v4"] Dec 11 14:00:16 crc kubenswrapper[5050]: I1211 14:00:16.291399 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cn65m\" (UniqueName: \"kubernetes.io/projected/424275cd-78db-4916-9383-4c09b2be2d0a-kube-api-access-cn65m\") pod \"nmstate-metrics-7f7f7578db-6p8v4\" (UID: \"424275cd-78db-4916-9383-4c09b2be2d0a\") " pod="openshift-nmstate/nmstate-metrics-7f7f7578db-6p8v4" Dec 11 14:00:16 crc kubenswrapper[5050]: I1211 14:00:16.292825 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-f8fb84555-jjj6z"] Dec 11 14:00:16 crc kubenswrapper[5050]: I1211 14:00:16.294048 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-f8fb84555-jjj6z" Dec 11 14:00:16 crc kubenswrapper[5050]: I1211 14:00:16.295715 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Dec 11 14:00:16 crc kubenswrapper[5050]: I1211 14:00:16.317128 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-f8fb84555-jjj6z"] Dec 11 14:00:16 crc kubenswrapper[5050]: I1211 14:00:16.319628 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-m25g7"] Dec 11 14:00:16 crc kubenswrapper[5050]: I1211 14:00:16.320581 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-m25g7" Dec 11 14:00:16 crc kubenswrapper[5050]: I1211 14:00:16.394720 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/d75adeef-08d9-44b8-bf22-aea5e71dd392-nmstate-lock\") pod \"nmstate-handler-m25g7\" (UID: \"d75adeef-08d9-44b8-bf22-aea5e71dd392\") " pod="openshift-nmstate/nmstate-handler-m25g7" Dec 11 14:00:16 crc kubenswrapper[5050]: I1211 14:00:16.394816 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cn65m\" (UniqueName: \"kubernetes.io/projected/424275cd-78db-4916-9383-4c09b2be2d0a-kube-api-access-cn65m\") pod \"nmstate-metrics-7f7f7578db-6p8v4\" (UID: \"424275cd-78db-4916-9383-4c09b2be2d0a\") " pod="openshift-nmstate/nmstate-metrics-7f7f7578db-6p8v4" Dec 11 14:00:16 crc kubenswrapper[5050]: I1211 14:00:16.394859 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/3921dc89-0902-4125-9b83-ff0a3c1c486c-tls-key-pair\") pod \"nmstate-webhook-f8fb84555-jjj6z\" (UID: \"3921dc89-0902-4125-9b83-ff0a3c1c486c\") " pod="openshift-nmstate/nmstate-webhook-f8fb84555-jjj6z" Dec 11 14:00:16 crc kubenswrapper[5050]: I1211 14:00:16.394884 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/d75adeef-08d9-44b8-bf22-aea5e71dd392-ovs-socket\") pod \"nmstate-handler-m25g7\" (UID: \"d75adeef-08d9-44b8-bf22-aea5e71dd392\") " pod="openshift-nmstate/nmstate-handler-m25g7" Dec 11 14:00:16 crc kubenswrapper[5050]: I1211 14:00:16.394913 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txbl2\" (UniqueName: \"kubernetes.io/projected/d75adeef-08d9-44b8-bf22-aea5e71dd392-kube-api-access-txbl2\") pod \"nmstate-handler-m25g7\" (UID: \"d75adeef-08d9-44b8-bf22-aea5e71dd392\") " pod="openshift-nmstate/nmstate-handler-m25g7" Dec 11 14:00:16 crc kubenswrapper[5050]: I1211 14:00:16.394935 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9jwcd\" (UniqueName: \"kubernetes.io/projected/3921dc89-0902-4125-9b83-ff0a3c1c486c-kube-api-access-9jwcd\") pod \"nmstate-webhook-f8fb84555-jjj6z\" (UID: \"3921dc89-0902-4125-9b83-ff0a3c1c486c\") " pod="openshift-nmstate/nmstate-webhook-f8fb84555-jjj6z" Dec 11 14:00:16 crc kubenswrapper[5050]: I1211 14:00:16.394957 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/d75adeef-08d9-44b8-bf22-aea5e71dd392-dbus-socket\") pod \"nmstate-handler-m25g7\" (UID: \"d75adeef-08d9-44b8-bf22-aea5e71dd392\") " pod="openshift-nmstate/nmstate-handler-m25g7" Dec 11 14:00:16 crc kubenswrapper[5050]: I1211 14:00:16.443391 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cn65m\" (UniqueName: \"kubernetes.io/projected/424275cd-78db-4916-9383-4c09b2be2d0a-kube-api-access-cn65m\") pod \"nmstate-metrics-7f7f7578db-6p8v4\" (UID: \"424275cd-78db-4916-9383-4c09b2be2d0a\") " pod="openshift-nmstate/nmstate-metrics-7f7f7578db-6p8v4" Dec 11 14:00:16 crc kubenswrapper[5050]: I1211 14:00:16.482115 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-6ff7998486-ttsfn"] Dec 11 14:00:16 
crc kubenswrapper[5050]: I1211 14:00:16.482914 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-ttsfn" Dec 11 14:00:16 crc kubenswrapper[5050]: I1211 14:00:16.491970 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Dec 11 14:00:16 crc kubenswrapper[5050]: I1211 14:00:16.492228 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Dec 11 14:00:16 crc kubenswrapper[5050]: I1211 14:00:16.493088 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-vtnxn" Dec 11 14:00:16 crc kubenswrapper[5050]: I1211 14:00:16.495678 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-txbl2\" (UniqueName: \"kubernetes.io/projected/d75adeef-08d9-44b8-bf22-aea5e71dd392-kube-api-access-txbl2\") pod \"nmstate-handler-m25g7\" (UID: \"d75adeef-08d9-44b8-bf22-aea5e71dd392\") " pod="openshift-nmstate/nmstate-handler-m25g7" Dec 11 14:00:16 crc kubenswrapper[5050]: I1211 14:00:16.495719 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9jwcd\" (UniqueName: \"kubernetes.io/projected/3921dc89-0902-4125-9b83-ff0a3c1c486c-kube-api-access-9jwcd\") pod \"nmstate-webhook-f8fb84555-jjj6z\" (UID: \"3921dc89-0902-4125-9b83-ff0a3c1c486c\") " pod="openshift-nmstate/nmstate-webhook-f8fb84555-jjj6z" Dec 11 14:00:16 crc kubenswrapper[5050]: I1211 14:00:16.495741 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/d75adeef-08d9-44b8-bf22-aea5e71dd392-dbus-socket\") pod \"nmstate-handler-m25g7\" (UID: \"d75adeef-08d9-44b8-bf22-aea5e71dd392\") " pod="openshift-nmstate/nmstate-handler-m25g7" Dec 11 14:00:16 crc kubenswrapper[5050]: I1211 14:00:16.495788 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/d75adeef-08d9-44b8-bf22-aea5e71dd392-nmstate-lock\") pod \"nmstate-handler-m25g7\" (UID: \"d75adeef-08d9-44b8-bf22-aea5e71dd392\") " pod="openshift-nmstate/nmstate-handler-m25g7" Dec 11 14:00:16 crc kubenswrapper[5050]: I1211 14:00:16.495813 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5l49\" (UniqueName: \"kubernetes.io/projected/3bb6dc4d-9fa6-4169-a4bf-0a6aa983c7eb-kube-api-access-t5l49\") pod \"nmstate-console-plugin-6ff7998486-ttsfn\" (UID: \"3bb6dc4d-9fa6-4169-a4bf-0a6aa983c7eb\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-ttsfn" Dec 11 14:00:16 crc kubenswrapper[5050]: I1211 14:00:16.495838 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/3bb6dc4d-9fa6-4169-a4bf-0a6aa983c7eb-nginx-conf\") pod \"nmstate-console-plugin-6ff7998486-ttsfn\" (UID: \"3bb6dc4d-9fa6-4169-a4bf-0a6aa983c7eb\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-ttsfn" Dec 11 14:00:16 crc kubenswrapper[5050]: I1211 14:00:16.495858 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/3bb6dc4d-9fa6-4169-a4bf-0a6aa983c7eb-plugin-serving-cert\") pod \"nmstate-console-plugin-6ff7998486-ttsfn\" (UID: \"3bb6dc4d-9fa6-4169-a4bf-0a6aa983c7eb\") " 
pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-ttsfn" Dec 11 14:00:16 crc kubenswrapper[5050]: I1211 14:00:16.495889 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/3921dc89-0902-4125-9b83-ff0a3c1c486c-tls-key-pair\") pod \"nmstate-webhook-f8fb84555-jjj6z\" (UID: \"3921dc89-0902-4125-9b83-ff0a3c1c486c\") " pod="openshift-nmstate/nmstate-webhook-f8fb84555-jjj6z" Dec 11 14:00:16 crc kubenswrapper[5050]: I1211 14:00:16.495911 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/d75adeef-08d9-44b8-bf22-aea5e71dd392-ovs-socket\") pod \"nmstate-handler-m25g7\" (UID: \"d75adeef-08d9-44b8-bf22-aea5e71dd392\") " pod="openshift-nmstate/nmstate-handler-m25g7" Dec 11 14:00:16 crc kubenswrapper[5050]: I1211 14:00:16.495973 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/d75adeef-08d9-44b8-bf22-aea5e71dd392-ovs-socket\") pod \"nmstate-handler-m25g7\" (UID: \"d75adeef-08d9-44b8-bf22-aea5e71dd392\") " pod="openshift-nmstate/nmstate-handler-m25g7" Dec 11 14:00:16 crc kubenswrapper[5050]: I1211 14:00:16.496195 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/d75adeef-08d9-44b8-bf22-aea5e71dd392-nmstate-lock\") pod \"nmstate-handler-m25g7\" (UID: \"d75adeef-08d9-44b8-bf22-aea5e71dd392\") " pod="openshift-nmstate/nmstate-handler-m25g7" Dec 11 14:00:16 crc kubenswrapper[5050]: E1211 14:00:16.496287 5050 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Dec 11 14:00:16 crc kubenswrapper[5050]: E1211 14:00:16.496345 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3921dc89-0902-4125-9b83-ff0a3c1c486c-tls-key-pair podName:3921dc89-0902-4125-9b83-ff0a3c1c486c nodeName:}" failed. No retries permitted until 2025-12-11 14:00:16.996324961 +0000 UTC m=+707.840047547 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/3921dc89-0902-4125-9b83-ff0a3c1c486c-tls-key-pair") pod "nmstate-webhook-f8fb84555-jjj6z" (UID: "3921dc89-0902-4125-9b83-ff0a3c1c486c") : secret "openshift-nmstate-webhook" not found Dec 11 14:00:16 crc kubenswrapper[5050]: I1211 14:00:16.496452 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/d75adeef-08d9-44b8-bf22-aea5e71dd392-dbus-socket\") pod \"nmstate-handler-m25g7\" (UID: \"d75adeef-08d9-44b8-bf22-aea5e71dd392\") " pod="openshift-nmstate/nmstate-handler-m25g7" Dec 11 14:00:16 crc kubenswrapper[5050]: I1211 14:00:16.504118 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-6ff7998486-ttsfn"] Dec 11 14:00:16 crc kubenswrapper[5050]: I1211 14:00:16.517289 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-txbl2\" (UniqueName: \"kubernetes.io/projected/d75adeef-08d9-44b8-bf22-aea5e71dd392-kube-api-access-txbl2\") pod \"nmstate-handler-m25g7\" (UID: \"d75adeef-08d9-44b8-bf22-aea5e71dd392\") " pod="openshift-nmstate/nmstate-handler-m25g7" Dec 11 14:00:16 crc kubenswrapper[5050]: I1211 14:00:16.522073 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9jwcd\" (UniqueName: \"kubernetes.io/projected/3921dc89-0902-4125-9b83-ff0a3c1c486c-kube-api-access-9jwcd\") pod \"nmstate-webhook-f8fb84555-jjj6z\" (UID: \"3921dc89-0902-4125-9b83-ff0a3c1c486c\") " pod="openshift-nmstate/nmstate-webhook-f8fb84555-jjj6z" Dec 11 14:00:16 crc kubenswrapper[5050]: I1211 14:00:16.590362 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-7f7f7578db-6p8v4" Dec 11 14:00:16 crc kubenswrapper[5050]: I1211 14:00:16.596710 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t5l49\" (UniqueName: \"kubernetes.io/projected/3bb6dc4d-9fa6-4169-a4bf-0a6aa983c7eb-kube-api-access-t5l49\") pod \"nmstate-console-plugin-6ff7998486-ttsfn\" (UID: \"3bb6dc4d-9fa6-4169-a4bf-0a6aa983c7eb\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-ttsfn" Dec 11 14:00:16 crc kubenswrapper[5050]: I1211 14:00:16.596757 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/3bb6dc4d-9fa6-4169-a4bf-0a6aa983c7eb-nginx-conf\") pod \"nmstate-console-plugin-6ff7998486-ttsfn\" (UID: \"3bb6dc4d-9fa6-4169-a4bf-0a6aa983c7eb\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-ttsfn" Dec 11 14:00:16 crc kubenswrapper[5050]: I1211 14:00:16.596784 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/3bb6dc4d-9fa6-4169-a4bf-0a6aa983c7eb-plugin-serving-cert\") pod \"nmstate-console-plugin-6ff7998486-ttsfn\" (UID: \"3bb6dc4d-9fa6-4169-a4bf-0a6aa983c7eb\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-ttsfn" Dec 11 14:00:16 crc kubenswrapper[5050]: E1211 14:00:16.597791 5050 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found Dec 11 14:00:16 crc kubenswrapper[5050]: E1211 14:00:16.597864 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3bb6dc4d-9fa6-4169-a4bf-0a6aa983c7eb-plugin-serving-cert podName:3bb6dc4d-9fa6-4169-a4bf-0a6aa983c7eb nodeName:}" failed. 
No retries permitted until 2025-12-11 14:00:17.097846624 +0000 UTC m=+707.941569210 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/3bb6dc4d-9fa6-4169-a4bf-0a6aa983c7eb-plugin-serving-cert") pod "nmstate-console-plugin-6ff7998486-ttsfn" (UID: "3bb6dc4d-9fa6-4169-a4bf-0a6aa983c7eb") : secret "plugin-serving-cert" not found Dec 11 14:00:16 crc kubenswrapper[5050]: I1211 14:00:16.598762 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/3bb6dc4d-9fa6-4169-a4bf-0a6aa983c7eb-nginx-conf\") pod \"nmstate-console-plugin-6ff7998486-ttsfn\" (UID: \"3bb6dc4d-9fa6-4169-a4bf-0a6aa983c7eb\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-ttsfn" Dec 11 14:00:16 crc kubenswrapper[5050]: I1211 14:00:16.618896 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t5l49\" (UniqueName: \"kubernetes.io/projected/3bb6dc4d-9fa6-4169-a4bf-0a6aa983c7eb-kube-api-access-t5l49\") pod \"nmstate-console-plugin-6ff7998486-ttsfn\" (UID: \"3bb6dc4d-9fa6-4169-a4bf-0a6aa983c7eb\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-ttsfn" Dec 11 14:00:16 crc kubenswrapper[5050]: I1211 14:00:16.636667 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-m25g7" Dec 11 14:00:16 crc kubenswrapper[5050]: I1211 14:00:16.685031 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-6758fcc465-5n5wb"] Dec 11 14:00:16 crc kubenswrapper[5050]: I1211 14:00:16.685880 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6758fcc465-5n5wb" Dec 11 14:00:16 crc kubenswrapper[5050]: I1211 14:00:16.697607 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a71bf2e0-2e1a-4591-8e3b-7db34508a3cd-console-config\") pod \"console-6758fcc465-5n5wb\" (UID: \"a71bf2e0-2e1a-4591-8e3b-7db34508a3cd\") " pod="openshift-console/console-6758fcc465-5n5wb" Dec 11 14:00:16 crc kubenswrapper[5050]: I1211 14:00:16.697678 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tf46r\" (UniqueName: \"kubernetes.io/projected/a71bf2e0-2e1a-4591-8e3b-7db34508a3cd-kube-api-access-tf46r\") pod \"console-6758fcc465-5n5wb\" (UID: \"a71bf2e0-2e1a-4591-8e3b-7db34508a3cd\") " pod="openshift-console/console-6758fcc465-5n5wb" Dec 11 14:00:16 crc kubenswrapper[5050]: I1211 14:00:16.697701 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a71bf2e0-2e1a-4591-8e3b-7db34508a3cd-oauth-serving-cert\") pod \"console-6758fcc465-5n5wb\" (UID: \"a71bf2e0-2e1a-4591-8e3b-7db34508a3cd\") " pod="openshift-console/console-6758fcc465-5n5wb" Dec 11 14:00:16 crc kubenswrapper[5050]: I1211 14:00:16.697725 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a71bf2e0-2e1a-4591-8e3b-7db34508a3cd-service-ca\") pod \"console-6758fcc465-5n5wb\" (UID: \"a71bf2e0-2e1a-4591-8e3b-7db34508a3cd\") " pod="openshift-console/console-6758fcc465-5n5wb" Dec 11 14:00:16 crc kubenswrapper[5050]: I1211 14:00:16.697755 5050 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a71bf2e0-2e1a-4591-8e3b-7db34508a3cd-console-serving-cert\") pod \"console-6758fcc465-5n5wb\" (UID: \"a71bf2e0-2e1a-4591-8e3b-7db34508a3cd\") " pod="openshift-console/console-6758fcc465-5n5wb" Dec 11 14:00:16 crc kubenswrapper[5050]: I1211 14:00:16.697787 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a71bf2e0-2e1a-4591-8e3b-7db34508a3cd-trusted-ca-bundle\") pod \"console-6758fcc465-5n5wb\" (UID: \"a71bf2e0-2e1a-4591-8e3b-7db34508a3cd\") " pod="openshift-console/console-6758fcc465-5n5wb" Dec 11 14:00:16 crc kubenswrapper[5050]: I1211 14:00:16.697805 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a71bf2e0-2e1a-4591-8e3b-7db34508a3cd-console-oauth-config\") pod \"console-6758fcc465-5n5wb\" (UID: \"a71bf2e0-2e1a-4591-8e3b-7db34508a3cd\") " pod="openshift-console/console-6758fcc465-5n5wb" Dec 11 14:00:16 crc kubenswrapper[5050]: I1211 14:00:16.699603 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6758fcc465-5n5wb"] Dec 11 14:00:16 crc kubenswrapper[5050]: I1211 14:00:16.800860 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tf46r\" (UniqueName: \"kubernetes.io/projected/a71bf2e0-2e1a-4591-8e3b-7db34508a3cd-kube-api-access-tf46r\") pod \"console-6758fcc465-5n5wb\" (UID: \"a71bf2e0-2e1a-4591-8e3b-7db34508a3cd\") " pod="openshift-console/console-6758fcc465-5n5wb" Dec 11 14:00:16 crc kubenswrapper[5050]: I1211 14:00:16.801993 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a71bf2e0-2e1a-4591-8e3b-7db34508a3cd-oauth-serving-cert\") pod \"console-6758fcc465-5n5wb\" (UID: \"a71bf2e0-2e1a-4591-8e3b-7db34508a3cd\") " pod="openshift-console/console-6758fcc465-5n5wb" Dec 11 14:00:16 crc kubenswrapper[5050]: I1211 14:00:16.802128 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a71bf2e0-2e1a-4591-8e3b-7db34508a3cd-service-ca\") pod \"console-6758fcc465-5n5wb\" (UID: \"a71bf2e0-2e1a-4591-8e3b-7db34508a3cd\") " pod="openshift-console/console-6758fcc465-5n5wb" Dec 11 14:00:16 crc kubenswrapper[5050]: I1211 14:00:16.802216 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a71bf2e0-2e1a-4591-8e3b-7db34508a3cd-console-serving-cert\") pod \"console-6758fcc465-5n5wb\" (UID: \"a71bf2e0-2e1a-4591-8e3b-7db34508a3cd\") " pod="openshift-console/console-6758fcc465-5n5wb" Dec 11 14:00:16 crc kubenswrapper[5050]: I1211 14:00:16.802288 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a71bf2e0-2e1a-4591-8e3b-7db34508a3cd-trusted-ca-bundle\") pod \"console-6758fcc465-5n5wb\" (UID: \"a71bf2e0-2e1a-4591-8e3b-7db34508a3cd\") " pod="openshift-console/console-6758fcc465-5n5wb" Dec 11 14:00:16 crc kubenswrapper[5050]: I1211 14:00:16.802331 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: 
\"kubernetes.io/secret/a71bf2e0-2e1a-4591-8e3b-7db34508a3cd-console-oauth-config\") pod \"console-6758fcc465-5n5wb\" (UID: \"a71bf2e0-2e1a-4591-8e3b-7db34508a3cd\") " pod="openshift-console/console-6758fcc465-5n5wb" Dec 11 14:00:16 crc kubenswrapper[5050]: I1211 14:00:16.803187 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a71bf2e0-2e1a-4591-8e3b-7db34508a3cd-oauth-serving-cert\") pod \"console-6758fcc465-5n5wb\" (UID: \"a71bf2e0-2e1a-4591-8e3b-7db34508a3cd\") " pod="openshift-console/console-6758fcc465-5n5wb" Dec 11 14:00:16 crc kubenswrapper[5050]: I1211 14:00:16.802451 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a71bf2e0-2e1a-4591-8e3b-7db34508a3cd-console-config\") pod \"console-6758fcc465-5n5wb\" (UID: \"a71bf2e0-2e1a-4591-8e3b-7db34508a3cd\") " pod="openshift-console/console-6758fcc465-5n5wb" Dec 11 14:00:16 crc kubenswrapper[5050]: I1211 14:00:16.803826 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a71bf2e0-2e1a-4591-8e3b-7db34508a3cd-console-config\") pod \"console-6758fcc465-5n5wb\" (UID: \"a71bf2e0-2e1a-4591-8e3b-7db34508a3cd\") " pod="openshift-console/console-6758fcc465-5n5wb" Dec 11 14:00:16 crc kubenswrapper[5050]: I1211 14:00:16.804172 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a71bf2e0-2e1a-4591-8e3b-7db34508a3cd-service-ca\") pod \"console-6758fcc465-5n5wb\" (UID: \"a71bf2e0-2e1a-4591-8e3b-7db34508a3cd\") " pod="openshift-console/console-6758fcc465-5n5wb" Dec 11 14:00:16 crc kubenswrapper[5050]: I1211 14:00:16.806028 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a71bf2e0-2e1a-4591-8e3b-7db34508a3cd-trusted-ca-bundle\") pod \"console-6758fcc465-5n5wb\" (UID: \"a71bf2e0-2e1a-4591-8e3b-7db34508a3cd\") " pod="openshift-console/console-6758fcc465-5n5wb" Dec 11 14:00:16 crc kubenswrapper[5050]: I1211 14:00:16.812489 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a71bf2e0-2e1a-4591-8e3b-7db34508a3cd-console-oauth-config\") pod \"console-6758fcc465-5n5wb\" (UID: \"a71bf2e0-2e1a-4591-8e3b-7db34508a3cd\") " pod="openshift-console/console-6758fcc465-5n5wb" Dec 11 14:00:16 crc kubenswrapper[5050]: I1211 14:00:16.817593 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a71bf2e0-2e1a-4591-8e3b-7db34508a3cd-console-serving-cert\") pod \"console-6758fcc465-5n5wb\" (UID: \"a71bf2e0-2e1a-4591-8e3b-7db34508a3cd\") " pod="openshift-console/console-6758fcc465-5n5wb" Dec 11 14:00:16 crc kubenswrapper[5050]: I1211 14:00:16.820276 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tf46r\" (UniqueName: \"kubernetes.io/projected/a71bf2e0-2e1a-4591-8e3b-7db34508a3cd-kube-api-access-tf46r\") pod \"console-6758fcc465-5n5wb\" (UID: \"a71bf2e0-2e1a-4591-8e3b-7db34508a3cd\") " pod="openshift-console/console-6758fcc465-5n5wb" Dec 11 14:00:16 crc kubenswrapper[5050]: I1211 14:00:16.826156 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-m25g7" 
event={"ID":"d75adeef-08d9-44b8-bf22-aea5e71dd392","Type":"ContainerStarted","Data":"8e9ad3f21b8670f24117f7abd74c0f5b0a2bbccbeb2678a79e222085e9cd6aa6"} Dec 11 14:00:16 crc kubenswrapper[5050]: I1211 14:00:16.826505 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-7f7f7578db-6p8v4"] Dec 11 14:00:16 crc kubenswrapper[5050]: W1211 14:00:16.836201 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod424275cd_78db_4916_9383_4c09b2be2d0a.slice/crio-1c9b78c764866d338bac223a60b1d3ae56971ca55fc7634dae34e0a69d20c027 WatchSource:0}: Error finding container 1c9b78c764866d338bac223a60b1d3ae56971ca55fc7634dae34e0a69d20c027: Status 404 returned error can't find the container with id 1c9b78c764866d338bac223a60b1d3ae56971ca55fc7634dae34e0a69d20c027 Dec 11 14:00:17 crc kubenswrapper[5050]: I1211 14:00:17.001678 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6758fcc465-5n5wb" Dec 11 14:00:17 crc kubenswrapper[5050]: I1211 14:00:17.005672 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/3921dc89-0902-4125-9b83-ff0a3c1c486c-tls-key-pair\") pod \"nmstate-webhook-f8fb84555-jjj6z\" (UID: \"3921dc89-0902-4125-9b83-ff0a3c1c486c\") " pod="openshift-nmstate/nmstate-webhook-f8fb84555-jjj6z" Dec 11 14:00:17 crc kubenswrapper[5050]: I1211 14:00:17.009237 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/3921dc89-0902-4125-9b83-ff0a3c1c486c-tls-key-pair\") pod \"nmstate-webhook-f8fb84555-jjj6z\" (UID: \"3921dc89-0902-4125-9b83-ff0a3c1c486c\") " pod="openshift-nmstate/nmstate-webhook-f8fb84555-jjj6z" Dec 11 14:00:17 crc kubenswrapper[5050]: I1211 14:00:17.106630 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/3bb6dc4d-9fa6-4169-a4bf-0a6aa983c7eb-plugin-serving-cert\") pod \"nmstate-console-plugin-6ff7998486-ttsfn\" (UID: \"3bb6dc4d-9fa6-4169-a4bf-0a6aa983c7eb\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-ttsfn" Dec 11 14:00:17 crc kubenswrapper[5050]: I1211 14:00:17.109619 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/3bb6dc4d-9fa6-4169-a4bf-0a6aa983c7eb-plugin-serving-cert\") pod \"nmstate-console-plugin-6ff7998486-ttsfn\" (UID: \"3bb6dc4d-9fa6-4169-a4bf-0a6aa983c7eb\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-ttsfn" Dec 11 14:00:17 crc kubenswrapper[5050]: I1211 14:00:17.175599 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6758fcc465-5n5wb"] Dec 11 14:00:17 crc kubenswrapper[5050]: I1211 14:00:17.212890 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-f8fb84555-jjj6z" Dec 11 14:00:17 crc kubenswrapper[5050]: I1211 14:00:17.368634 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-f8fb84555-jjj6z"] Dec 11 14:00:17 crc kubenswrapper[5050]: W1211 14:00:17.375452 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3921dc89_0902_4125_9b83_ff0a3c1c486c.slice/crio-b271d0cc9067604a99aee0bbef9f1b136919dfa29964ed11d79e8b56e18568a5 WatchSource:0}: Error finding container b271d0cc9067604a99aee0bbef9f1b136919dfa29964ed11d79e8b56e18568a5: Status 404 returned error can't find the container with id b271d0cc9067604a99aee0bbef9f1b136919dfa29964ed11d79e8b56e18568a5 Dec 11 14:00:17 crc kubenswrapper[5050]: I1211 14:00:17.398400 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-ttsfn" Dec 11 14:00:17 crc kubenswrapper[5050]: I1211 14:00:17.554194 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="051dc05c-79dc-4ea0-b678-3275d37dcc87" path="/var/lib/kubelet/pods/051dc05c-79dc-4ea0-b678-3275d37dcc87/volumes" Dec 11 14:00:17 crc kubenswrapper[5050]: I1211 14:00:17.585427 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-6ff7998486-ttsfn"] Dec 11 14:00:17 crc kubenswrapper[5050]: I1211 14:00:17.832524 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-ttsfn" event={"ID":"3bb6dc4d-9fa6-4169-a4bf-0a6aa983c7eb","Type":"ContainerStarted","Data":"68d8bc55dc2c8b37075a519b0a4ee4a2bdeb81e79a5c755f564a8bdb9c498402"} Dec 11 14:00:17 crc kubenswrapper[5050]: I1211 14:00:17.833592 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f7f7578db-6p8v4" event={"ID":"424275cd-78db-4916-9383-4c09b2be2d0a","Type":"ContainerStarted","Data":"1c9b78c764866d338bac223a60b1d3ae56971ca55fc7634dae34e0a69d20c027"} Dec 11 14:00:17 crc kubenswrapper[5050]: I1211 14:00:17.835490 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6758fcc465-5n5wb" event={"ID":"a71bf2e0-2e1a-4591-8e3b-7db34508a3cd","Type":"ContainerStarted","Data":"dde471ed3e728accc3580a2ad04c939ee98c0011e4a8c7ff26975d9629d333dd"} Dec 11 14:00:17 crc kubenswrapper[5050]: I1211 14:00:17.835534 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6758fcc465-5n5wb" event={"ID":"a71bf2e0-2e1a-4591-8e3b-7db34508a3cd","Type":"ContainerStarted","Data":"b873ec98fa4da3cf4ec35b6581d6fd1c622259aa4ca2e411e689278dd22095e8"} Dec 11 14:00:17 crc kubenswrapper[5050]: I1211 14:00:17.837043 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-f8fb84555-jjj6z" event={"ID":"3921dc89-0902-4125-9b83-ff0a3c1c486c","Type":"ContainerStarted","Data":"b271d0cc9067604a99aee0bbef9f1b136919dfa29964ed11d79e8b56e18568a5"} Dec 11 14:00:17 crc kubenswrapper[5050]: I1211 14:00:17.857581 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-6758fcc465-5n5wb" podStartSLOduration=1.857559922 podStartE2EDuration="1.857559922s" podCreationTimestamp="2025-12-11 14:00:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 14:00:17.852651517 +0000 UTC m=+708.696374153" 
watchObservedRunningTime="2025-12-11 14:00:17.857559922 +0000 UTC m=+708.701282508" Dec 11 14:00:19 crc kubenswrapper[5050]: I1211 14:00:19.850734 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-f8fb84555-jjj6z" event={"ID":"3921dc89-0902-4125-9b83-ff0a3c1c486c","Type":"ContainerStarted","Data":"5b011b0c158fe676237751662ac0b40dc6ebcbe27d0452ef5e0d3f254e253c80"} Dec 11 14:00:19 crc kubenswrapper[5050]: I1211 14:00:19.851367 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-f8fb84555-jjj6z" Dec 11 14:00:19 crc kubenswrapper[5050]: I1211 14:00:19.853113 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-m25g7" event={"ID":"d75adeef-08d9-44b8-bf22-aea5e71dd392","Type":"ContainerStarted","Data":"40f10c4121267f29a4dd4a58102b1ec3ea360e67961265b07ed9014a03c4e683"} Dec 11 14:00:19 crc kubenswrapper[5050]: I1211 14:00:19.853247 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-m25g7" Dec 11 14:00:19 crc kubenswrapper[5050]: I1211 14:00:19.857426 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-ttsfn" event={"ID":"3bb6dc4d-9fa6-4169-a4bf-0a6aa983c7eb","Type":"ContainerStarted","Data":"a36ab7d5755e298535313ba191a3db3c831206e6ed8e0d644dddababe39b6cce"} Dec 11 14:00:19 crc kubenswrapper[5050]: I1211 14:00:19.858743 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f7f7578db-6p8v4" event={"ID":"424275cd-78db-4916-9383-4c09b2be2d0a","Type":"ContainerStarted","Data":"7a925458976acdaa62fc15116ba82e8a33c42482422146b9d49ea86f50eff246"} Dec 11 14:00:19 crc kubenswrapper[5050]: I1211 14:00:19.866803 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-f8fb84555-jjj6z" podStartSLOduration=1.650975858 podStartE2EDuration="3.86678886s" podCreationTimestamp="2025-12-11 14:00:16 +0000 UTC" firstStartedPulling="2025-12-11 14:00:17.377862684 +0000 UTC m=+708.221585270" lastFinishedPulling="2025-12-11 14:00:19.593675686 +0000 UTC m=+710.437398272" observedRunningTime="2025-12-11 14:00:19.866279046 +0000 UTC m=+710.710001632" watchObservedRunningTime="2025-12-11 14:00:19.86678886 +0000 UTC m=+710.710511446" Dec 11 14:00:19 crc kubenswrapper[5050]: I1211 14:00:19.882460 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-ttsfn" podStartSLOduration=1.902149759 podStartE2EDuration="3.882445491s" podCreationTimestamp="2025-12-11 14:00:16 +0000 UTC" firstStartedPulling="2025-12-11 14:00:17.593330272 +0000 UTC m=+708.437052858" lastFinishedPulling="2025-12-11 14:00:19.573626014 +0000 UTC m=+710.417348590" observedRunningTime="2025-12-11 14:00:19.880185718 +0000 UTC m=+710.723908304" watchObservedRunningTime="2025-12-11 14:00:19.882445491 +0000 UTC m=+710.726168077" Dec 11 14:00:19 crc kubenswrapper[5050]: I1211 14:00:19.898962 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-m25g7" podStartSLOduration=1.009240083 podStartE2EDuration="3.898947635s" podCreationTimestamp="2025-12-11 14:00:16 +0000 UTC" firstStartedPulling="2025-12-11 14:00:16.690566405 +0000 UTC m=+707.534288991" lastFinishedPulling="2025-12-11 14:00:19.580273957 +0000 UTC m=+710.423996543" observedRunningTime="2025-12-11 14:00:19.895785198 +0000 UTC 
m=+710.739507784" watchObservedRunningTime="2025-12-11 14:00:19.898947635 +0000 UTC m=+710.742670221" Dec 11 14:00:22 crc kubenswrapper[5050]: I1211 14:00:22.877846 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f7f7578db-6p8v4" event={"ID":"424275cd-78db-4916-9383-4c09b2be2d0a","Type":"ContainerStarted","Data":"abd559d5ee733fec949ff8ee123c17150b5bebebfdd2e47d1afc0fb67bab9e53"} Dec 11 14:00:22 crc kubenswrapper[5050]: I1211 14:00:22.897406 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-7f7f7578db-6p8v4" podStartSLOduration=1.640392976 podStartE2EDuration="6.897386008s" podCreationTimestamp="2025-12-11 14:00:16 +0000 UTC" firstStartedPulling="2025-12-11 14:00:16.839344749 +0000 UTC m=+707.683067335" lastFinishedPulling="2025-12-11 14:00:22.096337781 +0000 UTC m=+712.940060367" observedRunningTime="2025-12-11 14:00:22.892488524 +0000 UTC m=+713.736211150" watchObservedRunningTime="2025-12-11 14:00:22.897386008 +0000 UTC m=+713.741108614" Dec 11 14:00:26 crc kubenswrapper[5050]: I1211 14:00:26.663449 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-m25g7" Dec 11 14:00:27 crc kubenswrapper[5050]: I1211 14:00:27.002936 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-6758fcc465-5n5wb" Dec 11 14:00:27 crc kubenswrapper[5050]: I1211 14:00:27.003070 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-6758fcc465-5n5wb" Dec 11 14:00:27 crc kubenswrapper[5050]: I1211 14:00:27.007640 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-6758fcc465-5n5wb" Dec 11 14:00:27 crc kubenswrapper[5050]: I1211 14:00:27.910576 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-6758fcc465-5n5wb" Dec 11 14:00:27 crc kubenswrapper[5050]: I1211 14:00:27.969792 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-gp9fp"] Dec 11 14:00:37 crc kubenswrapper[5050]: I1211 14:00:37.219992 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-f8fb84555-jjj6z" Dec 11 14:00:47 crc kubenswrapper[5050]: I1211 14:00:47.844870 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4k5j98"] Dec 11 14:00:47 crc kubenswrapper[5050]: I1211 14:00:47.847666 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4k5j98" Dec 11 14:00:47 crc kubenswrapper[5050]: I1211 14:00:47.854498 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Dec 11 14:00:47 crc kubenswrapper[5050]: I1211 14:00:47.876865 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4k5j98"] Dec 11 14:00:47 crc kubenswrapper[5050]: I1211 14:00:47.985414 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68f7c\" (UniqueName: \"kubernetes.io/projected/af397408-00ed-432d-b207-91bb1cb086d3-kube-api-access-68f7c\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4k5j98\" (UID: \"af397408-00ed-432d-b207-91bb1cb086d3\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4k5j98" Dec 11 14:00:47 crc kubenswrapper[5050]: I1211 14:00:47.985490 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/af397408-00ed-432d-b207-91bb1cb086d3-util\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4k5j98\" (UID: \"af397408-00ed-432d-b207-91bb1cb086d3\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4k5j98" Dec 11 14:00:47 crc kubenswrapper[5050]: I1211 14:00:47.985518 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/af397408-00ed-432d-b207-91bb1cb086d3-bundle\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4k5j98\" (UID: \"af397408-00ed-432d-b207-91bb1cb086d3\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4k5j98" Dec 11 14:00:48 crc kubenswrapper[5050]: I1211 14:00:48.086946 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/af397408-00ed-432d-b207-91bb1cb086d3-util\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4k5j98\" (UID: \"af397408-00ed-432d-b207-91bb1cb086d3\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4k5j98" Dec 11 14:00:48 crc kubenswrapper[5050]: I1211 14:00:48.087041 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/af397408-00ed-432d-b207-91bb1cb086d3-bundle\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4k5j98\" (UID: \"af397408-00ed-432d-b207-91bb1cb086d3\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4k5j98" Dec 11 14:00:48 crc kubenswrapper[5050]: I1211 14:00:48.087489 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/af397408-00ed-432d-b207-91bb1cb086d3-util\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4k5j98\" (UID: \"af397408-00ed-432d-b207-91bb1cb086d3\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4k5j98" Dec 11 14:00:48 crc kubenswrapper[5050]: I1211 14:00:48.087504 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/af397408-00ed-432d-b207-91bb1cb086d3-bundle\") pod 
\"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4k5j98\" (UID: \"af397408-00ed-432d-b207-91bb1cb086d3\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4k5j98" Dec 11 14:00:48 crc kubenswrapper[5050]: I1211 14:00:48.087603 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-68f7c\" (UniqueName: \"kubernetes.io/projected/af397408-00ed-432d-b207-91bb1cb086d3-kube-api-access-68f7c\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4k5j98\" (UID: \"af397408-00ed-432d-b207-91bb1cb086d3\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4k5j98" Dec 11 14:00:48 crc kubenswrapper[5050]: I1211 14:00:48.108485 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-68f7c\" (UniqueName: \"kubernetes.io/projected/af397408-00ed-432d-b207-91bb1cb086d3-kube-api-access-68f7c\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4k5j98\" (UID: \"af397408-00ed-432d-b207-91bb1cb086d3\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4k5j98" Dec 11 14:00:48 crc kubenswrapper[5050]: I1211 14:00:48.165291 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4k5j98" Dec 11 14:00:48 crc kubenswrapper[5050]: I1211 14:00:48.344396 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4k5j98"] Dec 11 14:00:49 crc kubenswrapper[5050]: I1211 14:00:49.056840 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4k5j98" event={"ID":"af397408-00ed-432d-b207-91bb1cb086d3","Type":"ContainerStarted","Data":"13aae567fe29869832eaaf0ef82ca3499bdafe7b598e6b5ae196b5a017dfec33"} Dec 11 14:00:50 crc kubenswrapper[5050]: I1211 14:00:50.063218 5050 generic.go:334] "Generic (PLEG): container finished" podID="af397408-00ed-432d-b207-91bb1cb086d3" containerID="e21678c6a27041781da969915b52b35cb1f82050c5ba1940a92fb76cd1955856" exitCode=0 Dec 11 14:00:50 crc kubenswrapper[5050]: I1211 14:00:50.063275 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4k5j98" event={"ID":"af397408-00ed-432d-b207-91bb1cb086d3","Type":"ContainerDied","Data":"e21678c6a27041781da969915b52b35cb1f82050c5ba1940a92fb76cd1955856"} Dec 11 14:00:52 crc kubenswrapper[5050]: I1211 14:00:52.075311 5050 generic.go:334] "Generic (PLEG): container finished" podID="af397408-00ed-432d-b207-91bb1cb086d3" containerID="30c7577ae00dc9f6f8d7a44b9215f3225a19ecb1f4871caf0ba4b5195ea8fd56" exitCode=0 Dec 11 14:00:52 crc kubenswrapper[5050]: I1211 14:00:52.075380 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4k5j98" event={"ID":"af397408-00ed-432d-b207-91bb1cb086d3","Type":"ContainerDied","Data":"30c7577ae00dc9f6f8d7a44b9215f3225a19ecb1f4871caf0ba4b5195ea8fd56"} Dec 11 14:00:53 crc kubenswrapper[5050]: I1211 14:00:53.014743 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-gp9fp" podUID="f633d554-794b-4a64-9699-27fbc28a4d7c" containerName="console" containerID="cri-o://88da5d50bd3ba895417abaa05182bd801aecb6ce1fb2eb1562e333fbd9b09a37" 
gracePeriod=15 Dec 11 14:00:53 crc kubenswrapper[5050]: I1211 14:00:53.083822 5050 generic.go:334] "Generic (PLEG): container finished" podID="af397408-00ed-432d-b207-91bb1cb086d3" containerID="4ce37497d48bb8140e442def7548879dfb7fd40022f0b61dd1c1b243f244dd22" exitCode=0 Dec 11 14:00:53 crc kubenswrapper[5050]: I1211 14:00:53.083864 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4k5j98" event={"ID":"af397408-00ed-432d-b207-91bb1cb086d3","Type":"ContainerDied","Data":"4ce37497d48bb8140e442def7548879dfb7fd40022f0b61dd1c1b243f244dd22"} Dec 11 14:00:53 crc kubenswrapper[5050]: I1211 14:00:53.355224 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-gp9fp_f633d554-794b-4a64-9699-27fbc28a4d7c/console/0.log" Dec 11 14:00:53 crc kubenswrapper[5050]: I1211 14:00:53.355288 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-gp9fp" Dec 11 14:00:53 crc kubenswrapper[5050]: I1211 14:00:53.455692 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/f633d554-794b-4a64-9699-27fbc28a4d7c-console-serving-cert\") pod \"f633d554-794b-4a64-9699-27fbc28a4d7c\" (UID: \"f633d554-794b-4a64-9699-27fbc28a4d7c\") " Dec 11 14:00:53 crc kubenswrapper[5050]: I1211 14:00:53.455781 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f633d554-794b-4a64-9699-27fbc28a4d7c-oauth-serving-cert\") pod \"f633d554-794b-4a64-9699-27fbc28a4d7c\" (UID: \"f633d554-794b-4a64-9699-27fbc28a4d7c\") " Dec 11 14:00:53 crc kubenswrapper[5050]: I1211 14:00:53.455870 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f633d554-794b-4a64-9699-27fbc28a4d7c-trusted-ca-bundle\") pod \"f633d554-794b-4a64-9699-27fbc28a4d7c\" (UID: \"f633d554-794b-4a64-9699-27fbc28a4d7c\") " Dec 11 14:00:53 crc kubenswrapper[5050]: I1211 14:00:53.455906 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l2w8k\" (UniqueName: \"kubernetes.io/projected/f633d554-794b-4a64-9699-27fbc28a4d7c-kube-api-access-l2w8k\") pod \"f633d554-794b-4a64-9699-27fbc28a4d7c\" (UID: \"f633d554-794b-4a64-9699-27fbc28a4d7c\") " Dec 11 14:00:53 crc kubenswrapper[5050]: I1211 14:00:53.455965 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f633d554-794b-4a64-9699-27fbc28a4d7c-console-oauth-config\") pod \"f633d554-794b-4a64-9699-27fbc28a4d7c\" (UID: \"f633d554-794b-4a64-9699-27fbc28a4d7c\") " Dec 11 14:00:53 crc kubenswrapper[5050]: I1211 14:00:53.456646 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f633d554-794b-4a64-9699-27fbc28a4d7c-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "f633d554-794b-4a64-9699-27fbc28a4d7c" (UID: "f633d554-794b-4a64-9699-27fbc28a4d7c"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 14:00:53 crc kubenswrapper[5050]: I1211 14:00:53.456687 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f633d554-794b-4a64-9699-27fbc28a4d7c-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "f633d554-794b-4a64-9699-27fbc28a4d7c" (UID: "f633d554-794b-4a64-9699-27fbc28a4d7c"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 14:00:53 crc kubenswrapper[5050]: I1211 14:00:53.456795 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f633d554-794b-4a64-9699-27fbc28a4d7c-console-config\") pod \"f633d554-794b-4a64-9699-27fbc28a4d7c\" (UID: \"f633d554-794b-4a64-9699-27fbc28a4d7c\") " Dec 11 14:00:53 crc kubenswrapper[5050]: I1211 14:00:53.456868 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f633d554-794b-4a64-9699-27fbc28a4d7c-service-ca\") pod \"f633d554-794b-4a64-9699-27fbc28a4d7c\" (UID: \"f633d554-794b-4a64-9699-27fbc28a4d7c\") " Dec 11 14:00:53 crc kubenswrapper[5050]: I1211 14:00:53.457209 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f633d554-794b-4a64-9699-27fbc28a4d7c-service-ca" (OuterVolumeSpecName: "service-ca") pod "f633d554-794b-4a64-9699-27fbc28a4d7c" (UID: "f633d554-794b-4a64-9699-27fbc28a4d7c"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 14:00:53 crc kubenswrapper[5050]: I1211 14:00:53.457224 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f633d554-794b-4a64-9699-27fbc28a4d7c-console-config" (OuterVolumeSpecName: "console-config") pod "f633d554-794b-4a64-9699-27fbc28a4d7c" (UID: "f633d554-794b-4a64-9699-27fbc28a4d7c"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 14:00:53 crc kubenswrapper[5050]: I1211 14:00:53.457754 5050 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f633d554-794b-4a64-9699-27fbc28a4d7c-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 11 14:00:53 crc kubenswrapper[5050]: I1211 14:00:53.457774 5050 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f633d554-794b-4a64-9699-27fbc28a4d7c-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 14:00:53 crc kubenswrapper[5050]: I1211 14:00:53.457785 5050 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f633d554-794b-4a64-9699-27fbc28a4d7c-console-config\") on node \"crc\" DevicePath \"\"" Dec 11 14:00:53 crc kubenswrapper[5050]: I1211 14:00:53.457794 5050 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f633d554-794b-4a64-9699-27fbc28a4d7c-service-ca\") on node \"crc\" DevicePath \"\"" Dec 11 14:00:53 crc kubenswrapper[5050]: I1211 14:00:53.462253 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f633d554-794b-4a64-9699-27fbc28a4d7c-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "f633d554-794b-4a64-9699-27fbc28a4d7c" (UID: "f633d554-794b-4a64-9699-27fbc28a4d7c"). 
InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:00:53 crc kubenswrapper[5050]: I1211 14:00:53.462281 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f633d554-794b-4a64-9699-27fbc28a4d7c-kube-api-access-l2w8k" (OuterVolumeSpecName: "kube-api-access-l2w8k") pod "f633d554-794b-4a64-9699-27fbc28a4d7c" (UID: "f633d554-794b-4a64-9699-27fbc28a4d7c"). InnerVolumeSpecName "kube-api-access-l2w8k". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 14:00:53 crc kubenswrapper[5050]: I1211 14:00:53.462855 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f633d554-794b-4a64-9699-27fbc28a4d7c-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "f633d554-794b-4a64-9699-27fbc28a4d7c" (UID: "f633d554-794b-4a64-9699-27fbc28a4d7c"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:00:53 crc kubenswrapper[5050]: I1211 14:00:53.558699 5050 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/f633d554-794b-4a64-9699-27fbc28a4d7c-console-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 11 14:00:53 crc kubenswrapper[5050]: I1211 14:00:53.559119 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l2w8k\" (UniqueName: \"kubernetes.io/projected/f633d554-794b-4a64-9699-27fbc28a4d7c-kube-api-access-l2w8k\") on node \"crc\" DevicePath \"\"" Dec 11 14:00:53 crc kubenswrapper[5050]: I1211 14:00:53.559135 5050 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f633d554-794b-4a64-9699-27fbc28a4d7c-console-oauth-config\") on node \"crc\" DevicePath \"\"" Dec 11 14:00:54 crc kubenswrapper[5050]: I1211 14:00:54.104213 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-gp9fp_f633d554-794b-4a64-9699-27fbc28a4d7c/console/0.log" Dec 11 14:00:54 crc kubenswrapper[5050]: I1211 14:00:54.105228 5050 generic.go:334] "Generic (PLEG): container finished" podID="f633d554-794b-4a64-9699-27fbc28a4d7c" containerID="88da5d50bd3ba895417abaa05182bd801aecb6ce1fb2eb1562e333fbd9b09a37" exitCode=2 Dec 11 14:00:54 crc kubenswrapper[5050]: I1211 14:00:54.105315 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-gp9fp" event={"ID":"f633d554-794b-4a64-9699-27fbc28a4d7c","Type":"ContainerDied","Data":"88da5d50bd3ba895417abaa05182bd801aecb6ce1fb2eb1562e333fbd9b09a37"} Dec 11 14:00:54 crc kubenswrapper[5050]: I1211 14:00:54.105375 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-gp9fp" event={"ID":"f633d554-794b-4a64-9699-27fbc28a4d7c","Type":"ContainerDied","Data":"8a92413c75471e031f68d61104b2ee4756a365a30a3b4b75a95669d65bac3c95"} Dec 11 14:00:54 crc kubenswrapper[5050]: I1211 14:00:54.105394 5050 scope.go:117] "RemoveContainer" containerID="88da5d50bd3ba895417abaa05182bd801aecb6ce1fb2eb1562e333fbd9b09a37" Dec 11 14:00:54 crc kubenswrapper[5050]: I1211 14:00:54.105823 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-gp9fp" Dec 11 14:00:54 crc kubenswrapper[5050]: I1211 14:00:54.128321 5050 scope.go:117] "RemoveContainer" containerID="88da5d50bd3ba895417abaa05182bd801aecb6ce1fb2eb1562e333fbd9b09a37" Dec 11 14:00:54 crc kubenswrapper[5050]: E1211 14:00:54.133412 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"88da5d50bd3ba895417abaa05182bd801aecb6ce1fb2eb1562e333fbd9b09a37\": container with ID starting with 88da5d50bd3ba895417abaa05182bd801aecb6ce1fb2eb1562e333fbd9b09a37 not found: ID does not exist" containerID="88da5d50bd3ba895417abaa05182bd801aecb6ce1fb2eb1562e333fbd9b09a37" Dec 11 14:00:54 crc kubenswrapper[5050]: I1211 14:00:54.133482 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88da5d50bd3ba895417abaa05182bd801aecb6ce1fb2eb1562e333fbd9b09a37"} err="failed to get container status \"88da5d50bd3ba895417abaa05182bd801aecb6ce1fb2eb1562e333fbd9b09a37\": rpc error: code = NotFound desc = could not find container \"88da5d50bd3ba895417abaa05182bd801aecb6ce1fb2eb1562e333fbd9b09a37\": container with ID starting with 88da5d50bd3ba895417abaa05182bd801aecb6ce1fb2eb1562e333fbd9b09a37 not found: ID does not exist" Dec 11 14:00:54 crc kubenswrapper[5050]: I1211 14:00:54.136170 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-gp9fp"] Dec 11 14:00:54 crc kubenswrapper[5050]: I1211 14:00:54.140061 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-gp9fp"] Dec 11 14:00:54 crc kubenswrapper[5050]: I1211 14:00:54.320224 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4k5j98" Dec 11 14:00:54 crc kubenswrapper[5050]: I1211 14:00:54.471951 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/af397408-00ed-432d-b207-91bb1cb086d3-util\") pod \"af397408-00ed-432d-b207-91bb1cb086d3\" (UID: \"af397408-00ed-432d-b207-91bb1cb086d3\") " Dec 11 14:00:54 crc kubenswrapper[5050]: I1211 14:00:54.472123 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-68f7c\" (UniqueName: \"kubernetes.io/projected/af397408-00ed-432d-b207-91bb1cb086d3-kube-api-access-68f7c\") pod \"af397408-00ed-432d-b207-91bb1cb086d3\" (UID: \"af397408-00ed-432d-b207-91bb1cb086d3\") " Dec 11 14:00:54 crc kubenswrapper[5050]: I1211 14:00:54.472178 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/af397408-00ed-432d-b207-91bb1cb086d3-bundle\") pod \"af397408-00ed-432d-b207-91bb1cb086d3\" (UID: \"af397408-00ed-432d-b207-91bb1cb086d3\") " Dec 11 14:00:54 crc kubenswrapper[5050]: I1211 14:00:54.473128 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/af397408-00ed-432d-b207-91bb1cb086d3-bundle" (OuterVolumeSpecName: "bundle") pod "af397408-00ed-432d-b207-91bb1cb086d3" (UID: "af397408-00ed-432d-b207-91bb1cb086d3"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 14:00:54 crc kubenswrapper[5050]: I1211 14:00:54.477249 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af397408-00ed-432d-b207-91bb1cb086d3-kube-api-access-68f7c" (OuterVolumeSpecName: "kube-api-access-68f7c") pod "af397408-00ed-432d-b207-91bb1cb086d3" (UID: "af397408-00ed-432d-b207-91bb1cb086d3"). InnerVolumeSpecName "kube-api-access-68f7c". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 14:00:54 crc kubenswrapper[5050]: I1211 14:00:54.485317 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/af397408-00ed-432d-b207-91bb1cb086d3-util" (OuterVolumeSpecName: "util") pod "af397408-00ed-432d-b207-91bb1cb086d3" (UID: "af397408-00ed-432d-b207-91bb1cb086d3"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 14:00:54 crc kubenswrapper[5050]: I1211 14:00:54.573709 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-68f7c\" (UniqueName: \"kubernetes.io/projected/af397408-00ed-432d-b207-91bb1cb086d3-kube-api-access-68f7c\") on node \"crc\" DevicePath \"\"" Dec 11 14:00:54 crc kubenswrapper[5050]: I1211 14:00:54.573742 5050 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/af397408-00ed-432d-b207-91bb1cb086d3-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 14:00:54 crc kubenswrapper[5050]: I1211 14:00:54.573753 5050 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/af397408-00ed-432d-b207-91bb1cb086d3-util\") on node \"crc\" DevicePath \"\"" Dec 11 14:00:55 crc kubenswrapper[5050]: I1211 14:00:55.113092 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4k5j98" Dec 11 14:00:55 crc kubenswrapper[5050]: I1211 14:00:55.113074 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d4k5j98" event={"ID":"af397408-00ed-432d-b207-91bb1cb086d3","Type":"ContainerDied","Data":"13aae567fe29869832eaaf0ef82ca3499bdafe7b598e6b5ae196b5a017dfec33"} Dec 11 14:00:55 crc kubenswrapper[5050]: I1211 14:00:55.113243 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="13aae567fe29869832eaaf0ef82ca3499bdafe7b598e6b5ae196b5a017dfec33" Dec 11 14:00:55 crc kubenswrapper[5050]: I1211 14:00:55.552163 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f633d554-794b-4a64-9699-27fbc28a4d7c" path="/var/lib/kubelet/pods/f633d554-794b-4a64-9699-27fbc28a4d7c/volumes" Dec 11 14:01:04 crc kubenswrapper[5050]: I1211 14:01:04.040155 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-5d666c4679-krnwf"] Dec 11 14:01:04 crc kubenswrapper[5050]: E1211 14:01:04.041929 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af397408-00ed-432d-b207-91bb1cb086d3" containerName="pull" Dec 11 14:01:04 crc kubenswrapper[5050]: I1211 14:01:04.041999 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="af397408-00ed-432d-b207-91bb1cb086d3" containerName="pull" Dec 11 14:01:04 crc kubenswrapper[5050]: E1211 14:01:04.042084 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f633d554-794b-4a64-9699-27fbc28a4d7c" containerName="console" Dec 11 14:01:04 crc kubenswrapper[5050]: I1211 14:01:04.042141 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="f633d554-794b-4a64-9699-27fbc28a4d7c" containerName="console" Dec 11 14:01:04 crc kubenswrapper[5050]: E1211 14:01:04.042202 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af397408-00ed-432d-b207-91bb1cb086d3" containerName="util" Dec 11 14:01:04 crc kubenswrapper[5050]: I1211 14:01:04.042252 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="af397408-00ed-432d-b207-91bb1cb086d3" containerName="util" Dec 11 14:01:04 crc kubenswrapper[5050]: E1211 14:01:04.042304 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af397408-00ed-432d-b207-91bb1cb086d3" containerName="extract" Dec 11 14:01:04 crc kubenswrapper[5050]: I1211 14:01:04.042354 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="af397408-00ed-432d-b207-91bb1cb086d3" containerName="extract" Dec 11 14:01:04 crc kubenswrapper[5050]: I1211 14:01:04.042503 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="af397408-00ed-432d-b207-91bb1cb086d3" containerName="extract" Dec 11 14:01:04 crc kubenswrapper[5050]: I1211 14:01:04.042561 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="f633d554-794b-4a64-9699-27fbc28a4d7c" containerName="console" Dec 11 14:01:04 crc kubenswrapper[5050]: I1211 14:01:04.042989 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-5d666c4679-krnwf" Dec 11 14:01:04 crc kubenswrapper[5050]: I1211 14:01:04.045105 5050 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Dec 11 14:01:04 crc kubenswrapper[5050]: I1211 14:01:04.045894 5050 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Dec 11 14:01:04 crc kubenswrapper[5050]: I1211 14:01:04.046412 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Dec 11 14:01:04 crc kubenswrapper[5050]: I1211 14:01:04.047303 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Dec 11 14:01:04 crc kubenswrapper[5050]: I1211 14:01:04.059139 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-5d666c4679-krnwf"] Dec 11 14:01:04 crc kubenswrapper[5050]: I1211 14:01:04.060864 5050 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-m6zt9" Dec 11 14:01:04 crc kubenswrapper[5050]: I1211 14:01:04.216288 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a6130f1a-c95b-445f-8235-e57fdcb270fe-webhook-cert\") pod \"metallb-operator-controller-manager-5d666c4679-krnwf\" (UID: \"a6130f1a-c95b-445f-8235-e57fdcb270fe\") " pod="metallb-system/metallb-operator-controller-manager-5d666c4679-krnwf" Dec 11 14:01:04 crc kubenswrapper[5050]: I1211 14:01:04.216342 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbj7l\" (UniqueName: \"kubernetes.io/projected/a6130f1a-c95b-445f-8235-e57fdcb270fe-kube-api-access-cbj7l\") pod \"metallb-operator-controller-manager-5d666c4679-krnwf\" (UID: \"a6130f1a-c95b-445f-8235-e57fdcb270fe\") " pod="metallb-system/metallb-operator-controller-manager-5d666c4679-krnwf" Dec 11 14:01:04 crc kubenswrapper[5050]: I1211 14:01:04.216385 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a6130f1a-c95b-445f-8235-e57fdcb270fe-apiservice-cert\") pod \"metallb-operator-controller-manager-5d666c4679-krnwf\" (UID: \"a6130f1a-c95b-445f-8235-e57fdcb270fe\") " pod="metallb-system/metallb-operator-controller-manager-5d666c4679-krnwf" Dec 11 14:01:04 crc kubenswrapper[5050]: I1211 14:01:04.317667 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a6130f1a-c95b-445f-8235-e57fdcb270fe-webhook-cert\") pod \"metallb-operator-controller-manager-5d666c4679-krnwf\" (UID: \"a6130f1a-c95b-445f-8235-e57fdcb270fe\") " pod="metallb-system/metallb-operator-controller-manager-5d666c4679-krnwf" Dec 11 14:01:04 crc kubenswrapper[5050]: I1211 14:01:04.317724 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbj7l\" (UniqueName: \"kubernetes.io/projected/a6130f1a-c95b-445f-8235-e57fdcb270fe-kube-api-access-cbj7l\") pod \"metallb-operator-controller-manager-5d666c4679-krnwf\" (UID: \"a6130f1a-c95b-445f-8235-e57fdcb270fe\") " pod="metallb-system/metallb-operator-controller-manager-5d666c4679-krnwf" Dec 11 14:01:04 crc kubenswrapper[5050]: I1211 14:01:04.317767 5050 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a6130f1a-c95b-445f-8235-e57fdcb270fe-apiservice-cert\") pod \"metallb-operator-controller-manager-5d666c4679-krnwf\" (UID: \"a6130f1a-c95b-445f-8235-e57fdcb270fe\") " pod="metallb-system/metallb-operator-controller-manager-5d666c4679-krnwf" Dec 11 14:01:04 crc kubenswrapper[5050]: I1211 14:01:04.325795 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a6130f1a-c95b-445f-8235-e57fdcb270fe-webhook-cert\") pod \"metallb-operator-controller-manager-5d666c4679-krnwf\" (UID: \"a6130f1a-c95b-445f-8235-e57fdcb270fe\") " pod="metallb-system/metallb-operator-controller-manager-5d666c4679-krnwf" Dec 11 14:01:04 crc kubenswrapper[5050]: I1211 14:01:04.334074 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cbj7l\" (UniqueName: \"kubernetes.io/projected/a6130f1a-c95b-445f-8235-e57fdcb270fe-kube-api-access-cbj7l\") pod \"metallb-operator-controller-manager-5d666c4679-krnwf\" (UID: \"a6130f1a-c95b-445f-8235-e57fdcb270fe\") " pod="metallb-system/metallb-operator-controller-manager-5d666c4679-krnwf" Dec 11 14:01:04 crc kubenswrapper[5050]: I1211 14:01:04.346598 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a6130f1a-c95b-445f-8235-e57fdcb270fe-apiservice-cert\") pod \"metallb-operator-controller-manager-5d666c4679-krnwf\" (UID: \"a6130f1a-c95b-445f-8235-e57fdcb270fe\") " pod="metallb-system/metallb-operator-controller-manager-5d666c4679-krnwf" Dec 11 14:01:04 crc kubenswrapper[5050]: I1211 14:01:04.358572 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-5d666c4679-krnwf" Dec 11 14:01:04 crc kubenswrapper[5050]: I1211 14:01:04.433715 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-6d84b54c84-jdvv4"] Dec 11 14:01:04 crc kubenswrapper[5050]: I1211 14:01:04.434613 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-6d84b54c84-jdvv4" Dec 11 14:01:04 crc kubenswrapper[5050]: I1211 14:01:04.440033 5050 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Dec 11 14:01:04 crc kubenswrapper[5050]: I1211 14:01:04.440309 5050 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Dec 11 14:01:04 crc kubenswrapper[5050]: I1211 14:01:04.444753 5050 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-5zwsv" Dec 11 14:01:04 crc kubenswrapper[5050]: I1211 14:01:04.522932 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-6d84b54c84-jdvv4"] Dec 11 14:01:04 crc kubenswrapper[5050]: I1211 14:01:04.621161 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mg2mf\" (UniqueName: \"kubernetes.io/projected/4b3aef76-30fd-451a-841c-0941caac69ce-kube-api-access-mg2mf\") pod \"metallb-operator-webhook-server-6d84b54c84-jdvv4\" (UID: \"4b3aef76-30fd-451a-841c-0941caac69ce\") " pod="metallb-system/metallb-operator-webhook-server-6d84b54c84-jdvv4" Dec 11 14:01:04 crc kubenswrapper[5050]: I1211 14:01:04.621238 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4b3aef76-30fd-451a-841c-0941caac69ce-apiservice-cert\") pod \"metallb-operator-webhook-server-6d84b54c84-jdvv4\" (UID: \"4b3aef76-30fd-451a-841c-0941caac69ce\") " pod="metallb-system/metallb-operator-webhook-server-6d84b54c84-jdvv4" Dec 11 14:01:04 crc kubenswrapper[5050]: I1211 14:01:04.621258 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4b3aef76-30fd-451a-841c-0941caac69ce-webhook-cert\") pod \"metallb-operator-webhook-server-6d84b54c84-jdvv4\" (UID: \"4b3aef76-30fd-451a-841c-0941caac69ce\") " pod="metallb-system/metallb-operator-webhook-server-6d84b54c84-jdvv4" Dec 11 14:01:04 crc kubenswrapper[5050]: I1211 14:01:04.673592 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-5d666c4679-krnwf"] Dec 11 14:01:04 crc kubenswrapper[5050]: W1211 14:01:04.682173 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda6130f1a_c95b_445f_8235_e57fdcb270fe.slice/crio-3fddb29d79a49ed9291aa9b2eba01a3e0e45b6a22ee26b6e287bfc36a7c22bd5 WatchSource:0}: Error finding container 3fddb29d79a49ed9291aa9b2eba01a3e0e45b6a22ee26b6e287bfc36a7c22bd5: Status 404 returned error can't find the container with id 3fddb29d79a49ed9291aa9b2eba01a3e0e45b6a22ee26b6e287bfc36a7c22bd5 Dec 11 14:01:04 crc kubenswrapper[5050]: I1211 14:01:04.722819 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mg2mf\" (UniqueName: \"kubernetes.io/projected/4b3aef76-30fd-451a-841c-0941caac69ce-kube-api-access-mg2mf\") pod \"metallb-operator-webhook-server-6d84b54c84-jdvv4\" (UID: \"4b3aef76-30fd-451a-841c-0941caac69ce\") " pod="metallb-system/metallb-operator-webhook-server-6d84b54c84-jdvv4" Dec 11 14:01:04 crc kubenswrapper[5050]: I1211 14:01:04.722926 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: 
\"kubernetes.io/secret/4b3aef76-30fd-451a-841c-0941caac69ce-apiservice-cert\") pod \"metallb-operator-webhook-server-6d84b54c84-jdvv4\" (UID: \"4b3aef76-30fd-451a-841c-0941caac69ce\") " pod="metallb-system/metallb-operator-webhook-server-6d84b54c84-jdvv4" Dec 11 14:01:04 crc kubenswrapper[5050]: I1211 14:01:04.722973 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4b3aef76-30fd-451a-841c-0941caac69ce-webhook-cert\") pod \"metallb-operator-webhook-server-6d84b54c84-jdvv4\" (UID: \"4b3aef76-30fd-451a-841c-0941caac69ce\") " pod="metallb-system/metallb-operator-webhook-server-6d84b54c84-jdvv4" Dec 11 14:01:04 crc kubenswrapper[5050]: I1211 14:01:04.728197 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4b3aef76-30fd-451a-841c-0941caac69ce-webhook-cert\") pod \"metallb-operator-webhook-server-6d84b54c84-jdvv4\" (UID: \"4b3aef76-30fd-451a-841c-0941caac69ce\") " pod="metallb-system/metallb-operator-webhook-server-6d84b54c84-jdvv4" Dec 11 14:01:04 crc kubenswrapper[5050]: I1211 14:01:04.729699 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4b3aef76-30fd-451a-841c-0941caac69ce-apiservice-cert\") pod \"metallb-operator-webhook-server-6d84b54c84-jdvv4\" (UID: \"4b3aef76-30fd-451a-841c-0941caac69ce\") " pod="metallb-system/metallb-operator-webhook-server-6d84b54c84-jdvv4" Dec 11 14:01:04 crc kubenswrapper[5050]: I1211 14:01:04.739937 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mg2mf\" (UniqueName: \"kubernetes.io/projected/4b3aef76-30fd-451a-841c-0941caac69ce-kube-api-access-mg2mf\") pod \"metallb-operator-webhook-server-6d84b54c84-jdvv4\" (UID: \"4b3aef76-30fd-451a-841c-0941caac69ce\") " pod="metallb-system/metallb-operator-webhook-server-6d84b54c84-jdvv4" Dec 11 14:01:04 crc kubenswrapper[5050]: I1211 14:01:04.753629 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-6d84b54c84-jdvv4" Dec 11 14:01:04 crc kubenswrapper[5050]: I1211 14:01:04.928420 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-6d84b54c84-jdvv4"] Dec 11 14:01:04 crc kubenswrapper[5050]: W1211 14:01:04.950215 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4b3aef76_30fd_451a_841c_0941caac69ce.slice/crio-c2cc8fce34fb221e01613b35254731ffb8e968e109b4e764be25b1f488252e9b WatchSource:0}: Error finding container c2cc8fce34fb221e01613b35254731ffb8e968e109b4e764be25b1f488252e9b: Status 404 returned error can't find the container with id c2cc8fce34fb221e01613b35254731ffb8e968e109b4e764be25b1f488252e9b Dec 11 14:01:05 crc kubenswrapper[5050]: I1211 14:01:05.165350 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-5d666c4679-krnwf" event={"ID":"a6130f1a-c95b-445f-8235-e57fdcb270fe","Type":"ContainerStarted","Data":"3fddb29d79a49ed9291aa9b2eba01a3e0e45b6a22ee26b6e287bfc36a7c22bd5"} Dec 11 14:01:05 crc kubenswrapper[5050]: I1211 14:01:05.166671 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-6d84b54c84-jdvv4" event={"ID":"4b3aef76-30fd-451a-841c-0941caac69ce","Type":"ContainerStarted","Data":"c2cc8fce34fb221e01613b35254731ffb8e968e109b4e764be25b1f488252e9b"} Dec 11 14:01:10 crc kubenswrapper[5050]: I1211 14:01:10.210923 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-6d84b54c84-jdvv4" event={"ID":"4b3aef76-30fd-451a-841c-0941caac69ce","Type":"ContainerStarted","Data":"c3671baa8dd533145d25dff9c3ceff882f50a15311e4a83bca522ac8a67dc2fe"} Dec 11 14:01:10 crc kubenswrapper[5050]: I1211 14:01:10.212090 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-6d84b54c84-jdvv4" Dec 11 14:01:10 crc kubenswrapper[5050]: I1211 14:01:10.213799 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-5d666c4679-krnwf" event={"ID":"a6130f1a-c95b-445f-8235-e57fdcb270fe","Type":"ContainerStarted","Data":"d17d46308a7245c9596076d60060e727d0919075ec397f55611d87cd27d538dc"} Dec 11 14:01:10 crc kubenswrapper[5050]: I1211 14:01:10.214337 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-5d666c4679-krnwf" Dec 11 14:01:10 crc kubenswrapper[5050]: I1211 14:01:10.239964 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-6d84b54c84-jdvv4" podStartSLOduration=1.328780713 podStartE2EDuration="6.239948651s" podCreationTimestamp="2025-12-11 14:01:04 +0000 UTC" firstStartedPulling="2025-12-11 14:01:04.952580422 +0000 UTC m=+755.796303008" lastFinishedPulling="2025-12-11 14:01:09.86374836 +0000 UTC m=+760.707470946" observedRunningTime="2025-12-11 14:01:10.236214696 +0000 UTC m=+761.079937282" watchObservedRunningTime="2025-12-11 14:01:10.239948651 +0000 UTC m=+761.083671237" Dec 11 14:01:10 crc kubenswrapper[5050]: I1211 14:01:10.274679 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-5d666c4679-krnwf" podStartSLOduration=1.11205406 podStartE2EDuration="6.274659976s" 
podCreationTimestamp="2025-12-11 14:01:04 +0000 UTC" firstStartedPulling="2025-12-11 14:01:04.684936778 +0000 UTC m=+755.528659364" lastFinishedPulling="2025-12-11 14:01:09.847542694 +0000 UTC m=+760.691265280" observedRunningTime="2025-12-11 14:01:10.27084245 +0000 UTC m=+761.114565036" watchObservedRunningTime="2025-12-11 14:01:10.274659976 +0000 UTC m=+761.118382562" Dec 11 14:01:24 crc kubenswrapper[5050]: I1211 14:01:24.758448 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-6d84b54c84-jdvv4" Dec 11 14:01:40 crc kubenswrapper[5050]: I1211 14:01:40.796382 5050 patch_prober.go:28] interesting pod/machine-config-daemon-wcb2s container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 14:01:40 crc kubenswrapper[5050]: I1211 14:01:40.797133 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 14:01:44 crc kubenswrapper[5050]: I1211 14:01:44.361443 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-5d666c4679-krnwf" Dec 11 14:01:45 crc kubenswrapper[5050]: I1211 14:01:45.014977 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-z5lp7"] Dec 11 14:01:45 crc kubenswrapper[5050]: I1211 14:01:45.017745 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-z5lp7" Dec 11 14:01:45 crc kubenswrapper[5050]: I1211 14:01:45.019335 5050 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Dec 11 14:01:45 crc kubenswrapper[5050]: I1211 14:01:45.019896 5050 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-wks79" Dec 11 14:01:45 crc kubenswrapper[5050]: I1211 14:01:45.020321 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Dec 11 14:01:45 crc kubenswrapper[5050]: I1211 14:01:45.028570 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7784b6fcf-dsj8r"] Dec 11 14:01:45 crc kubenswrapper[5050]: I1211 14:01:45.029575 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-dsj8r" Dec 11 14:01:45 crc kubenswrapper[5050]: I1211 14:01:45.032344 5050 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Dec 11 14:01:45 crc kubenswrapper[5050]: I1211 14:01:45.041505 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7784b6fcf-dsj8r"] Dec 11 14:01:45 crc kubenswrapper[5050]: I1211 14:01:45.075247 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/ae0e0a97-d8ea-4a16-82b4-b2b0348fdf3b-metrics\") pod \"frr-k8s-z5lp7\" (UID: \"ae0e0a97-d8ea-4a16-82b4-b2b0348fdf3b\") " pod="metallb-system/frr-k8s-z5lp7" Dec 11 14:01:45 crc kubenswrapper[5050]: I1211 14:01:45.075304 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/ae0e0a97-d8ea-4a16-82b4-b2b0348fdf3b-frr-sockets\") pod \"frr-k8s-z5lp7\" (UID: \"ae0e0a97-d8ea-4a16-82b4-b2b0348fdf3b\") " pod="metallb-system/frr-k8s-z5lp7" Dec 11 14:01:45 crc kubenswrapper[5050]: I1211 14:01:45.075333 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ae0e0a97-d8ea-4a16-82b4-b2b0348fdf3b-metrics-certs\") pod \"frr-k8s-z5lp7\" (UID: \"ae0e0a97-d8ea-4a16-82b4-b2b0348fdf3b\") " pod="metallb-system/frr-k8s-z5lp7" Dec 11 14:01:45 crc kubenswrapper[5050]: I1211 14:01:45.075366 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhfbl\" (UniqueName: \"kubernetes.io/projected/3b1cdf6a-950e-42ce-bc50-7fa14d5ad5f0-kube-api-access-nhfbl\") pod \"frr-k8s-webhook-server-7784b6fcf-dsj8r\" (UID: \"3b1cdf6a-950e-42ce-bc50-7fa14d5ad5f0\") " pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-dsj8r" Dec 11 14:01:45 crc kubenswrapper[5050]: I1211 14:01:45.075390 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3b1cdf6a-950e-42ce-bc50-7fa14d5ad5f0-cert\") pod \"frr-k8s-webhook-server-7784b6fcf-dsj8r\" (UID: \"3b1cdf6a-950e-42ce-bc50-7fa14d5ad5f0\") " pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-dsj8r" Dec 11 14:01:45 crc kubenswrapper[5050]: I1211 14:01:45.075454 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/ae0e0a97-d8ea-4a16-82b4-b2b0348fdf3b-frr-conf\") pod \"frr-k8s-z5lp7\" (UID: \"ae0e0a97-d8ea-4a16-82b4-b2b0348fdf3b\") " pod="metallb-system/frr-k8s-z5lp7" Dec 11 14:01:45 crc kubenswrapper[5050]: I1211 14:01:45.075524 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dv8gw\" (UniqueName: \"kubernetes.io/projected/ae0e0a97-d8ea-4a16-82b4-b2b0348fdf3b-kube-api-access-dv8gw\") pod \"frr-k8s-z5lp7\" (UID: \"ae0e0a97-d8ea-4a16-82b4-b2b0348fdf3b\") " pod="metallb-system/frr-k8s-z5lp7" Dec 11 14:01:45 crc kubenswrapper[5050]: I1211 14:01:45.075565 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/ae0e0a97-d8ea-4a16-82b4-b2b0348fdf3b-frr-startup\") pod \"frr-k8s-z5lp7\" (UID: \"ae0e0a97-d8ea-4a16-82b4-b2b0348fdf3b\") " 
pod="metallb-system/frr-k8s-z5lp7" Dec 11 14:01:45 crc kubenswrapper[5050]: I1211 14:01:45.075584 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/ae0e0a97-d8ea-4a16-82b4-b2b0348fdf3b-reloader\") pod \"frr-k8s-z5lp7\" (UID: \"ae0e0a97-d8ea-4a16-82b4-b2b0348fdf3b\") " pod="metallb-system/frr-k8s-z5lp7" Dec 11 14:01:45 crc kubenswrapper[5050]: I1211 14:01:45.111462 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-w4tzc"] Dec 11 14:01:45 crc kubenswrapper[5050]: I1211 14:01:45.112346 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-w4tzc" Dec 11 14:01:45 crc kubenswrapper[5050]: I1211 14:01:45.114862 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Dec 11 14:01:45 crc kubenswrapper[5050]: I1211 14:01:45.114881 5050 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Dec 11 14:01:45 crc kubenswrapper[5050]: I1211 14:01:45.114971 5050 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-p2rzt" Dec 11 14:01:45 crc kubenswrapper[5050]: I1211 14:01:45.114870 5050 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Dec 11 14:01:45 crc kubenswrapper[5050]: I1211 14:01:45.136405 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-5bddd4b946-644bs"] Dec 11 14:01:45 crc kubenswrapper[5050]: I1211 14:01:45.137521 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-5bddd4b946-644bs" Dec 11 14:01:45 crc kubenswrapper[5050]: I1211 14:01:45.139862 5050 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Dec 11 14:01:45 crc kubenswrapper[5050]: I1211 14:01:45.176781 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/54e98831-cd88-4dee-90db-e8fbb006e9c3-memberlist\") pod \"speaker-w4tzc\" (UID: \"54e98831-cd88-4dee-90db-e8fbb006e9c3\") " pod="metallb-system/speaker-w4tzc" Dec 11 14:01:45 crc kubenswrapper[5050]: I1211 14:01:45.176840 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/ae0e0a97-d8ea-4a16-82b4-b2b0348fdf3b-metrics\") pod \"frr-k8s-z5lp7\" (UID: \"ae0e0a97-d8ea-4a16-82b4-b2b0348fdf3b\") " pod="metallb-system/frr-k8s-z5lp7" Dec 11 14:01:45 crc kubenswrapper[5050]: I1211 14:01:45.176860 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7s9ql\" (UniqueName: \"kubernetes.io/projected/54e98831-cd88-4dee-90db-e8fbb006e9c3-kube-api-access-7s9ql\") pod \"speaker-w4tzc\" (UID: \"54e98831-cd88-4dee-90db-e8fbb006e9c3\") " pod="metallb-system/speaker-w4tzc" Dec 11 14:01:45 crc kubenswrapper[5050]: I1211 14:01:45.176884 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/ae0e0a97-d8ea-4a16-82b4-b2b0348fdf3b-frr-sockets\") pod \"frr-k8s-z5lp7\" (UID: \"ae0e0a97-d8ea-4a16-82b4-b2b0348fdf3b\") " pod="metallb-system/frr-k8s-z5lp7" Dec 11 14:01:45 crc kubenswrapper[5050]: I1211 14:01:45.176911 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ae0e0a97-d8ea-4a16-82b4-b2b0348fdf3b-metrics-certs\") pod \"frr-k8s-z5lp7\" (UID: \"ae0e0a97-d8ea-4a16-82b4-b2b0348fdf3b\") " pod="metallb-system/frr-k8s-z5lp7" Dec 11 14:01:45 crc kubenswrapper[5050]: I1211 14:01:45.176941 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nhfbl\" (UniqueName: \"kubernetes.io/projected/3b1cdf6a-950e-42ce-bc50-7fa14d5ad5f0-kube-api-access-nhfbl\") pod \"frr-k8s-webhook-server-7784b6fcf-dsj8r\" (UID: \"3b1cdf6a-950e-42ce-bc50-7fa14d5ad5f0\") " pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-dsj8r" Dec 11 14:01:45 crc kubenswrapper[5050]: I1211 14:01:45.176961 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3b1cdf6a-950e-42ce-bc50-7fa14d5ad5f0-cert\") pod \"frr-k8s-webhook-server-7784b6fcf-dsj8r\" (UID: \"3b1cdf6a-950e-42ce-bc50-7fa14d5ad5f0\") " pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-dsj8r" Dec 11 14:01:45 crc kubenswrapper[5050]: I1211 14:01:45.176982 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5f0d9e74-baf0-4759-bb9f-3ff0a25e95d9-cert\") pod \"controller-5bddd4b946-644bs\" (UID: \"5f0d9e74-baf0-4759-bb9f-3ff0a25e95d9\") " pod="metallb-system/controller-5bddd4b946-644bs" Dec 11 14:01:45 crc kubenswrapper[5050]: I1211 14:01:45.177026 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/54e98831-cd88-4dee-90db-e8fbb006e9c3-metallb-excludel2\") pod \"speaker-w4tzc\" (UID: \"54e98831-cd88-4dee-90db-e8fbb006e9c3\") " pod="metallb-system/speaker-w4tzc" Dec 11 14:01:45 crc kubenswrapper[5050]: I1211 14:01:45.177056 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/54e98831-cd88-4dee-90db-e8fbb006e9c3-metrics-certs\") pod \"speaker-w4tzc\" (UID: \"54e98831-cd88-4dee-90db-e8fbb006e9c3\") " pod="metallb-system/speaker-w4tzc" Dec 11 14:01:45 crc kubenswrapper[5050]: I1211 14:01:45.177079 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/ae0e0a97-d8ea-4a16-82b4-b2b0348fdf3b-frr-conf\") pod \"frr-k8s-z5lp7\" (UID: \"ae0e0a97-d8ea-4a16-82b4-b2b0348fdf3b\") " pod="metallb-system/frr-k8s-z5lp7" Dec 11 14:01:45 crc kubenswrapper[5050]: I1211 14:01:45.177115 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dv8gw\" (UniqueName: \"kubernetes.io/projected/ae0e0a97-d8ea-4a16-82b4-b2b0348fdf3b-kube-api-access-dv8gw\") pod \"frr-k8s-z5lp7\" (UID: \"ae0e0a97-d8ea-4a16-82b4-b2b0348fdf3b\") " pod="metallb-system/frr-k8s-z5lp7" Dec 11 14:01:45 crc kubenswrapper[5050]: I1211 14:01:45.177137 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5f0d9e74-baf0-4759-bb9f-3ff0a25e95d9-metrics-certs\") pod \"controller-5bddd4b946-644bs\" (UID: \"5f0d9e74-baf0-4759-bb9f-3ff0a25e95d9\") " pod="metallb-system/controller-5bddd4b946-644bs" Dec 11 14:01:45 crc kubenswrapper[5050]: I1211 14:01:45.177162 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fgv5q\" (UniqueName: 
\"kubernetes.io/projected/5f0d9e74-baf0-4759-bb9f-3ff0a25e95d9-kube-api-access-fgv5q\") pod \"controller-5bddd4b946-644bs\" (UID: \"5f0d9e74-baf0-4759-bb9f-3ff0a25e95d9\") " pod="metallb-system/controller-5bddd4b946-644bs" Dec 11 14:01:45 crc kubenswrapper[5050]: I1211 14:01:45.177187 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/ae0e0a97-d8ea-4a16-82b4-b2b0348fdf3b-frr-startup\") pod \"frr-k8s-z5lp7\" (UID: \"ae0e0a97-d8ea-4a16-82b4-b2b0348fdf3b\") " pod="metallb-system/frr-k8s-z5lp7" Dec 11 14:01:45 crc kubenswrapper[5050]: I1211 14:01:45.177204 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/ae0e0a97-d8ea-4a16-82b4-b2b0348fdf3b-reloader\") pod \"frr-k8s-z5lp7\" (UID: \"ae0e0a97-d8ea-4a16-82b4-b2b0348fdf3b\") " pod="metallb-system/frr-k8s-z5lp7" Dec 11 14:01:45 crc kubenswrapper[5050]: I1211 14:01:45.177628 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/ae0e0a97-d8ea-4a16-82b4-b2b0348fdf3b-reloader\") pod \"frr-k8s-z5lp7\" (UID: \"ae0e0a97-d8ea-4a16-82b4-b2b0348fdf3b\") " pod="metallb-system/frr-k8s-z5lp7" Dec 11 14:01:45 crc kubenswrapper[5050]: I1211 14:01:45.177859 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/ae0e0a97-d8ea-4a16-82b4-b2b0348fdf3b-frr-sockets\") pod \"frr-k8s-z5lp7\" (UID: \"ae0e0a97-d8ea-4a16-82b4-b2b0348fdf3b\") " pod="metallb-system/frr-k8s-z5lp7" Dec 11 14:01:45 crc kubenswrapper[5050]: I1211 14:01:45.177974 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/ae0e0a97-d8ea-4a16-82b4-b2b0348fdf3b-metrics\") pod \"frr-k8s-z5lp7\" (UID: \"ae0e0a97-d8ea-4a16-82b4-b2b0348fdf3b\") " pod="metallb-system/frr-k8s-z5lp7" Dec 11 14:01:45 crc kubenswrapper[5050]: I1211 14:01:45.178331 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/ae0e0a97-d8ea-4a16-82b4-b2b0348fdf3b-frr-conf\") pod \"frr-k8s-z5lp7\" (UID: \"ae0e0a97-d8ea-4a16-82b4-b2b0348fdf3b\") " pod="metallb-system/frr-k8s-z5lp7" Dec 11 14:01:45 crc kubenswrapper[5050]: E1211 14:01:45.178440 5050 secret.go:188] Couldn't get secret metallb-system/frr-k8s-webhook-server-cert: secret "frr-k8s-webhook-server-cert" not found Dec 11 14:01:45 crc kubenswrapper[5050]: E1211 14:01:45.178488 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3b1cdf6a-950e-42ce-bc50-7fa14d5ad5f0-cert podName:3b1cdf6a-950e-42ce-bc50-7fa14d5ad5f0 nodeName:}" failed. No retries permitted until 2025-12-11 14:01:45.678473883 +0000 UTC m=+796.522196469 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/3b1cdf6a-950e-42ce-bc50-7fa14d5ad5f0-cert") pod "frr-k8s-webhook-server-7784b6fcf-dsj8r" (UID: "3b1cdf6a-950e-42ce-bc50-7fa14d5ad5f0") : secret "frr-k8s-webhook-server-cert" not found Dec 11 14:01:45 crc kubenswrapper[5050]: I1211 14:01:45.179399 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/ae0e0a97-d8ea-4a16-82b4-b2b0348fdf3b-frr-startup\") pod \"frr-k8s-z5lp7\" (UID: \"ae0e0a97-d8ea-4a16-82b4-b2b0348fdf3b\") " pod="metallb-system/frr-k8s-z5lp7" Dec 11 14:01:45 crc kubenswrapper[5050]: I1211 14:01:45.184598 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ae0e0a97-d8ea-4a16-82b4-b2b0348fdf3b-metrics-certs\") pod \"frr-k8s-z5lp7\" (UID: \"ae0e0a97-d8ea-4a16-82b4-b2b0348fdf3b\") " pod="metallb-system/frr-k8s-z5lp7" Dec 11 14:01:45 crc kubenswrapper[5050]: I1211 14:01:45.190128 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-5bddd4b946-644bs"] Dec 11 14:01:45 crc kubenswrapper[5050]: I1211 14:01:45.197043 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nhfbl\" (UniqueName: \"kubernetes.io/projected/3b1cdf6a-950e-42ce-bc50-7fa14d5ad5f0-kube-api-access-nhfbl\") pod \"frr-k8s-webhook-server-7784b6fcf-dsj8r\" (UID: \"3b1cdf6a-950e-42ce-bc50-7fa14d5ad5f0\") " pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-dsj8r" Dec 11 14:01:45 crc kubenswrapper[5050]: I1211 14:01:45.210424 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dv8gw\" (UniqueName: \"kubernetes.io/projected/ae0e0a97-d8ea-4a16-82b4-b2b0348fdf3b-kube-api-access-dv8gw\") pod \"frr-k8s-z5lp7\" (UID: \"ae0e0a97-d8ea-4a16-82b4-b2b0348fdf3b\") " pod="metallb-system/frr-k8s-z5lp7" Dec 11 14:01:45 crc kubenswrapper[5050]: I1211 14:01:45.278380 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/54e98831-cd88-4dee-90db-e8fbb006e9c3-metrics-certs\") pod \"speaker-w4tzc\" (UID: \"54e98831-cd88-4dee-90db-e8fbb006e9c3\") " pod="metallb-system/speaker-w4tzc" Dec 11 14:01:45 crc kubenswrapper[5050]: I1211 14:01:45.278455 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5f0d9e74-baf0-4759-bb9f-3ff0a25e95d9-metrics-certs\") pod \"controller-5bddd4b946-644bs\" (UID: \"5f0d9e74-baf0-4759-bb9f-3ff0a25e95d9\") " pod="metallb-system/controller-5bddd4b946-644bs" Dec 11 14:01:45 crc kubenswrapper[5050]: I1211 14:01:45.278483 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fgv5q\" (UniqueName: \"kubernetes.io/projected/5f0d9e74-baf0-4759-bb9f-3ff0a25e95d9-kube-api-access-fgv5q\") pod \"controller-5bddd4b946-644bs\" (UID: \"5f0d9e74-baf0-4759-bb9f-3ff0a25e95d9\") " pod="metallb-system/controller-5bddd4b946-644bs" Dec 11 14:01:45 crc kubenswrapper[5050]: I1211 14:01:45.278515 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/54e98831-cd88-4dee-90db-e8fbb006e9c3-memberlist\") pod \"speaker-w4tzc\" (UID: \"54e98831-cd88-4dee-90db-e8fbb006e9c3\") " pod="metallb-system/speaker-w4tzc" Dec 11 14:01:45 crc kubenswrapper[5050]: I1211 14:01:45.278541 5050 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-7s9ql\" (UniqueName: \"kubernetes.io/projected/54e98831-cd88-4dee-90db-e8fbb006e9c3-kube-api-access-7s9ql\") pod \"speaker-w4tzc\" (UID: \"54e98831-cd88-4dee-90db-e8fbb006e9c3\") " pod="metallb-system/speaker-w4tzc" Dec 11 14:01:45 crc kubenswrapper[5050]: I1211 14:01:45.278600 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5f0d9e74-baf0-4759-bb9f-3ff0a25e95d9-cert\") pod \"controller-5bddd4b946-644bs\" (UID: \"5f0d9e74-baf0-4759-bb9f-3ff0a25e95d9\") " pod="metallb-system/controller-5bddd4b946-644bs" Dec 11 14:01:45 crc kubenswrapper[5050]: I1211 14:01:45.278630 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/54e98831-cd88-4dee-90db-e8fbb006e9c3-metallb-excludel2\") pod \"speaker-w4tzc\" (UID: \"54e98831-cd88-4dee-90db-e8fbb006e9c3\") " pod="metallb-system/speaker-w4tzc" Dec 11 14:01:45 crc kubenswrapper[5050]: E1211 14:01:45.278750 5050 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Dec 11 14:01:45 crc kubenswrapper[5050]: E1211 14:01:45.278848 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/54e98831-cd88-4dee-90db-e8fbb006e9c3-memberlist podName:54e98831-cd88-4dee-90db-e8fbb006e9c3 nodeName:}" failed. No retries permitted until 2025-12-11 14:01:45.778825442 +0000 UTC m=+796.622548098 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/54e98831-cd88-4dee-90db-e8fbb006e9c3-memberlist") pod "speaker-w4tzc" (UID: "54e98831-cd88-4dee-90db-e8fbb006e9c3") : secret "metallb-memberlist" not found Dec 11 14:01:45 crc kubenswrapper[5050]: I1211 14:01:45.279431 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/54e98831-cd88-4dee-90db-e8fbb006e9c3-metallb-excludel2\") pod \"speaker-w4tzc\" (UID: \"54e98831-cd88-4dee-90db-e8fbb006e9c3\") " pod="metallb-system/speaker-w4tzc" Dec 11 14:01:45 crc kubenswrapper[5050]: I1211 14:01:45.281395 5050 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Dec 11 14:01:45 crc kubenswrapper[5050]: I1211 14:01:45.281655 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/54e98831-cd88-4dee-90db-e8fbb006e9c3-metrics-certs\") pod \"speaker-w4tzc\" (UID: \"54e98831-cd88-4dee-90db-e8fbb006e9c3\") " pod="metallb-system/speaker-w4tzc" Dec 11 14:01:45 crc kubenswrapper[5050]: I1211 14:01:45.281688 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5f0d9e74-baf0-4759-bb9f-3ff0a25e95d9-metrics-certs\") pod \"controller-5bddd4b946-644bs\" (UID: \"5f0d9e74-baf0-4759-bb9f-3ff0a25e95d9\") " pod="metallb-system/controller-5bddd4b946-644bs" Dec 11 14:01:45 crc kubenswrapper[5050]: I1211 14:01:45.293649 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5f0d9e74-baf0-4759-bb9f-3ff0a25e95d9-cert\") pod \"controller-5bddd4b946-644bs\" (UID: \"5f0d9e74-baf0-4759-bb9f-3ff0a25e95d9\") " pod="metallb-system/controller-5bddd4b946-644bs" Dec 11 14:01:45 crc kubenswrapper[5050]: I1211 14:01:45.295409 5050 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-fgv5q\" (UniqueName: \"kubernetes.io/projected/5f0d9e74-baf0-4759-bb9f-3ff0a25e95d9-kube-api-access-fgv5q\") pod \"controller-5bddd4b946-644bs\" (UID: \"5f0d9e74-baf0-4759-bb9f-3ff0a25e95d9\") " pod="metallb-system/controller-5bddd4b946-644bs" Dec 11 14:01:45 crc kubenswrapper[5050]: I1211 14:01:45.297308 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7s9ql\" (UniqueName: \"kubernetes.io/projected/54e98831-cd88-4dee-90db-e8fbb006e9c3-kube-api-access-7s9ql\") pod \"speaker-w4tzc\" (UID: \"54e98831-cd88-4dee-90db-e8fbb006e9c3\") " pod="metallb-system/speaker-w4tzc" Dec 11 14:01:45 crc kubenswrapper[5050]: I1211 14:01:45.337480 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-z5lp7" Dec 11 14:01:45 crc kubenswrapper[5050]: I1211 14:01:45.456193 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-5bddd4b946-644bs" Dec 11 14:01:45 crc kubenswrapper[5050]: I1211 14:01:45.644785 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-5bddd4b946-644bs"] Dec 11 14:01:45 crc kubenswrapper[5050]: I1211 14:01:45.685325 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3b1cdf6a-950e-42ce-bc50-7fa14d5ad5f0-cert\") pod \"frr-k8s-webhook-server-7784b6fcf-dsj8r\" (UID: \"3b1cdf6a-950e-42ce-bc50-7fa14d5ad5f0\") " pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-dsj8r" Dec 11 14:01:45 crc kubenswrapper[5050]: I1211 14:01:45.691719 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3b1cdf6a-950e-42ce-bc50-7fa14d5ad5f0-cert\") pod \"frr-k8s-webhook-server-7784b6fcf-dsj8r\" (UID: \"3b1cdf6a-950e-42ce-bc50-7fa14d5ad5f0\") " pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-dsj8r" Dec 11 14:01:45 crc kubenswrapper[5050]: I1211 14:01:45.786221 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/54e98831-cd88-4dee-90db-e8fbb006e9c3-memberlist\") pod \"speaker-w4tzc\" (UID: \"54e98831-cd88-4dee-90db-e8fbb006e9c3\") " pod="metallb-system/speaker-w4tzc" Dec 11 14:01:45 crc kubenswrapper[5050]: E1211 14:01:45.786395 5050 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Dec 11 14:01:45 crc kubenswrapper[5050]: E1211 14:01:45.786508 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/54e98831-cd88-4dee-90db-e8fbb006e9c3-memberlist podName:54e98831-cd88-4dee-90db-e8fbb006e9c3 nodeName:}" failed. No retries permitted until 2025-12-11 14:01:46.786486043 +0000 UTC m=+797.630208629 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/54e98831-cd88-4dee-90db-e8fbb006e9c3-memberlist") pod "speaker-w4tzc" (UID: "54e98831-cd88-4dee-90db-e8fbb006e9c3") : secret "metallb-memberlist" not found Dec 11 14:01:45 crc kubenswrapper[5050]: I1211 14:01:45.944355 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-dsj8r" Dec 11 14:01:46 crc kubenswrapper[5050]: I1211 14:01:46.143645 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7784b6fcf-dsj8r"] Dec 11 14:01:46 crc kubenswrapper[5050]: W1211 14:01:46.150262 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3b1cdf6a_950e_42ce_bc50_7fa14d5ad5f0.slice/crio-2743f7ae678cca1f69e0af81ca813206a816fce0c96840eba9902912548407dd WatchSource:0}: Error finding container 2743f7ae678cca1f69e0af81ca813206a816fce0c96840eba9902912548407dd: Status 404 returned error can't find the container with id 2743f7ae678cca1f69e0af81ca813206a816fce0c96840eba9902912548407dd Dec 11 14:01:46 crc kubenswrapper[5050]: I1211 14:01:46.413652 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-dsj8r" event={"ID":"3b1cdf6a-950e-42ce-bc50-7fa14d5ad5f0","Type":"ContainerStarted","Data":"2743f7ae678cca1f69e0af81ca813206a816fce0c96840eba9902912548407dd"} Dec 11 14:01:46 crc kubenswrapper[5050]: I1211 14:01:46.415950 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-5bddd4b946-644bs" event={"ID":"5f0d9e74-baf0-4759-bb9f-3ff0a25e95d9","Type":"ContainerStarted","Data":"298d74a55c6c2a58e86597c042e75339641cf805b1971bb10cab51b4e5245421"} Dec 11 14:01:46 crc kubenswrapper[5050]: I1211 14:01:46.416152 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-5bddd4b946-644bs" event={"ID":"5f0d9e74-baf0-4759-bb9f-3ff0a25e95d9","Type":"ContainerStarted","Data":"7b149ca03af79599c589fd2f4fdb66f546df4bed4b08b5f36d2295048b304426"} Dec 11 14:01:46 crc kubenswrapper[5050]: I1211 14:01:46.416172 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-5bddd4b946-644bs" event={"ID":"5f0d9e74-baf0-4759-bb9f-3ff0a25e95d9","Type":"ContainerStarted","Data":"a3990eb88488fc0f124e432d95b6238fd51cdfd01acb5ce2095890fa64b597ff"} Dec 11 14:01:46 crc kubenswrapper[5050]: I1211 14:01:46.416227 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-5bddd4b946-644bs" Dec 11 14:01:46 crc kubenswrapper[5050]: I1211 14:01:46.417087 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-z5lp7" event={"ID":"ae0e0a97-d8ea-4a16-82b4-b2b0348fdf3b","Type":"ContainerStarted","Data":"52a7ab3caddc4ea802c9c89781f4337152e5b6cfcbb6d901af31ee80d7d568c9"} Dec 11 14:01:46 crc kubenswrapper[5050]: I1211 14:01:46.802810 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/54e98831-cd88-4dee-90db-e8fbb006e9c3-memberlist\") pod \"speaker-w4tzc\" (UID: \"54e98831-cd88-4dee-90db-e8fbb006e9c3\") " pod="metallb-system/speaker-w4tzc" Dec 11 14:01:46 crc kubenswrapper[5050]: I1211 14:01:46.807918 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/54e98831-cd88-4dee-90db-e8fbb006e9c3-memberlist\") pod \"speaker-w4tzc\" (UID: \"54e98831-cd88-4dee-90db-e8fbb006e9c3\") " pod="metallb-system/speaker-w4tzc" Dec 11 14:01:46 crc kubenswrapper[5050]: I1211 14:01:46.925873 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-w4tzc" Dec 11 14:01:46 crc kubenswrapper[5050]: W1211 14:01:46.947182 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod54e98831_cd88_4dee_90db_e8fbb006e9c3.slice/crio-c0f4214f78d493a00fe82ce75729287c6e39815d6ed261382fe5f8a13664c835 WatchSource:0}: Error finding container c0f4214f78d493a00fe82ce75729287c6e39815d6ed261382fe5f8a13664c835: Status 404 returned error can't find the container with id c0f4214f78d493a00fe82ce75729287c6e39815d6ed261382fe5f8a13664c835 Dec 11 14:01:47 crc kubenswrapper[5050]: I1211 14:01:47.424901 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-w4tzc" event={"ID":"54e98831-cd88-4dee-90db-e8fbb006e9c3","Type":"ContainerStarted","Data":"dc143bce1375c19fe135afda15f93779f6bf099de8cfba514d7efadf417d3fa5"} Dec 11 14:01:47 crc kubenswrapper[5050]: I1211 14:01:47.424992 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-w4tzc" event={"ID":"54e98831-cd88-4dee-90db-e8fbb006e9c3","Type":"ContainerStarted","Data":"c0f4214f78d493a00fe82ce75729287c6e39815d6ed261382fe5f8a13664c835"} Dec 11 14:01:48 crc kubenswrapper[5050]: I1211 14:01:48.445431 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-w4tzc" event={"ID":"54e98831-cd88-4dee-90db-e8fbb006e9c3","Type":"ContainerStarted","Data":"5efea33740df07556c6ecdc31b64175749a8a467c4adcf414f739294dace7684"} Dec 11 14:01:48 crc kubenswrapper[5050]: I1211 14:01:48.445879 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-w4tzc" Dec 11 14:01:48 crc kubenswrapper[5050]: I1211 14:01:48.478462 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-5bddd4b946-644bs" podStartSLOduration=3.478445065 podStartE2EDuration="3.478445065s" podCreationTimestamp="2025-12-11 14:01:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 14:01:46.445739705 +0000 UTC m=+797.289462291" watchObservedRunningTime="2025-12-11 14:01:48.478445065 +0000 UTC m=+799.322167651" Dec 11 14:01:48 crc kubenswrapper[5050]: I1211 14:01:48.720793 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-w4tzc" podStartSLOduration=3.720770914 podStartE2EDuration="3.720770914s" podCreationTimestamp="2025-12-11 14:01:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 14:01:48.480453239 +0000 UTC m=+799.324175825" watchObservedRunningTime="2025-12-11 14:01:48.720770914 +0000 UTC m=+799.564493500" Dec 11 14:01:48 crc kubenswrapper[5050]: I1211 14:01:48.726608 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-7gp2c"] Dec 11 14:01:48 crc kubenswrapper[5050]: I1211 14:01:48.728404 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7gp2c" Dec 11 14:01:48 crc kubenswrapper[5050]: I1211 14:01:48.734361 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5164adc3-549c-4851-9d2f-e964a5f7137c-utilities\") pod \"certified-operators-7gp2c\" (UID: \"5164adc3-549c-4851-9d2f-e964a5f7137c\") " pod="openshift-marketplace/certified-operators-7gp2c" Dec 11 14:01:48 crc kubenswrapper[5050]: I1211 14:01:48.734421 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k68pq\" (UniqueName: \"kubernetes.io/projected/5164adc3-549c-4851-9d2f-e964a5f7137c-kube-api-access-k68pq\") pod \"certified-operators-7gp2c\" (UID: \"5164adc3-549c-4851-9d2f-e964a5f7137c\") " pod="openshift-marketplace/certified-operators-7gp2c" Dec 11 14:01:48 crc kubenswrapper[5050]: I1211 14:01:48.734464 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5164adc3-549c-4851-9d2f-e964a5f7137c-catalog-content\") pod \"certified-operators-7gp2c\" (UID: \"5164adc3-549c-4851-9d2f-e964a5f7137c\") " pod="openshift-marketplace/certified-operators-7gp2c" Dec 11 14:01:48 crc kubenswrapper[5050]: I1211 14:01:48.749125 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7gp2c"] Dec 11 14:01:48 crc kubenswrapper[5050]: I1211 14:01:48.835595 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5164adc3-549c-4851-9d2f-e964a5f7137c-utilities\") pod \"certified-operators-7gp2c\" (UID: \"5164adc3-549c-4851-9d2f-e964a5f7137c\") " pod="openshift-marketplace/certified-operators-7gp2c" Dec 11 14:01:48 crc kubenswrapper[5050]: I1211 14:01:48.835827 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k68pq\" (UniqueName: \"kubernetes.io/projected/5164adc3-549c-4851-9d2f-e964a5f7137c-kube-api-access-k68pq\") pod \"certified-operators-7gp2c\" (UID: \"5164adc3-549c-4851-9d2f-e964a5f7137c\") " pod="openshift-marketplace/certified-operators-7gp2c" Dec 11 14:01:48 crc kubenswrapper[5050]: I1211 14:01:48.835925 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5164adc3-549c-4851-9d2f-e964a5f7137c-catalog-content\") pod \"certified-operators-7gp2c\" (UID: \"5164adc3-549c-4851-9d2f-e964a5f7137c\") " pod="openshift-marketplace/certified-operators-7gp2c" Dec 11 14:01:48 crc kubenswrapper[5050]: I1211 14:01:48.836270 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5164adc3-549c-4851-9d2f-e964a5f7137c-utilities\") pod \"certified-operators-7gp2c\" (UID: \"5164adc3-549c-4851-9d2f-e964a5f7137c\") " pod="openshift-marketplace/certified-operators-7gp2c" Dec 11 14:01:48 crc kubenswrapper[5050]: I1211 14:01:48.836355 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5164adc3-549c-4851-9d2f-e964a5f7137c-catalog-content\") pod \"certified-operators-7gp2c\" (UID: \"5164adc3-549c-4851-9d2f-e964a5f7137c\") " pod="openshift-marketplace/certified-operators-7gp2c" Dec 11 14:01:48 crc kubenswrapper[5050]: I1211 14:01:48.871032 5050 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-k68pq\" (UniqueName: \"kubernetes.io/projected/5164adc3-549c-4851-9d2f-e964a5f7137c-kube-api-access-k68pq\") pod \"certified-operators-7gp2c\" (UID: \"5164adc3-549c-4851-9d2f-e964a5f7137c\") " pod="openshift-marketplace/certified-operators-7gp2c" Dec 11 14:01:49 crc kubenswrapper[5050]: I1211 14:01:49.053387 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7gp2c" Dec 11 14:01:49 crc kubenswrapper[5050]: I1211 14:01:49.402128 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7gp2c"] Dec 11 14:01:49 crc kubenswrapper[5050]: I1211 14:01:49.491931 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7gp2c" event={"ID":"5164adc3-549c-4851-9d2f-e964a5f7137c","Type":"ContainerStarted","Data":"5b1b1aa40933238937b7eeca771d50b787754b67971b105097132d8c111ac111"} Dec 11 14:01:50 crc kubenswrapper[5050]: I1211 14:01:50.503070 5050 generic.go:334] "Generic (PLEG): container finished" podID="5164adc3-549c-4851-9d2f-e964a5f7137c" containerID="4a30c286e45f997ea4119277144f8dd14a40546c08aa2330ab68fb1fee5cb9b3" exitCode=0 Dec 11 14:01:50 crc kubenswrapper[5050]: I1211 14:01:50.503129 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7gp2c" event={"ID":"5164adc3-549c-4851-9d2f-e964a5f7137c","Type":"ContainerDied","Data":"4a30c286e45f997ea4119277144f8dd14a40546c08aa2330ab68fb1fee5cb9b3"} Dec 11 14:01:51 crc kubenswrapper[5050]: I1211 14:01:51.512111 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7gp2c" event={"ID":"5164adc3-549c-4851-9d2f-e964a5f7137c","Type":"ContainerStarted","Data":"0b749ec1a916e21504673e8569e805f06c7bf3756ac8389249072b8affdb9213"} Dec 11 14:01:52 crc kubenswrapper[5050]: I1211 14:01:52.520884 5050 generic.go:334] "Generic (PLEG): container finished" podID="5164adc3-549c-4851-9d2f-e964a5f7137c" containerID="0b749ec1a916e21504673e8569e805f06c7bf3756ac8389249072b8affdb9213" exitCode=0 Dec 11 14:01:52 crc kubenswrapper[5050]: I1211 14:01:52.520924 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7gp2c" event={"ID":"5164adc3-549c-4851-9d2f-e964a5f7137c","Type":"ContainerDied","Data":"0b749ec1a916e21504673e8569e805f06c7bf3756ac8389249072b8affdb9213"} Dec 11 14:01:54 crc kubenswrapper[5050]: I1211 14:01:54.535417 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-dsj8r" event={"ID":"3b1cdf6a-950e-42ce-bc50-7fa14d5ad5f0","Type":"ContainerStarted","Data":"761054cc96d3fba50cbeba94290098590a3c117cb0d42cd3580a68449798e29d"} Dec 11 14:01:54 crc kubenswrapper[5050]: I1211 14:01:54.535820 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-dsj8r" Dec 11 14:01:54 crc kubenswrapper[5050]: I1211 14:01:54.538351 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7gp2c" event={"ID":"5164adc3-549c-4851-9d2f-e964a5f7137c","Type":"ContainerStarted","Data":"0a954ce3f007ae5175f0939589d300c90ff8a7534066d0d6bb274c439f5c1be0"} Dec 11 14:01:54 crc kubenswrapper[5050]: I1211 14:01:54.540707 5050 generic.go:334] "Generic (PLEG): container finished" podID="ae0e0a97-d8ea-4a16-82b4-b2b0348fdf3b" 
containerID="04f3bed443c875e6fad5c9f494d2956693cf1bba29e5495eda77a10b1e9f8038" exitCode=0 Dec 11 14:01:54 crc kubenswrapper[5050]: I1211 14:01:54.540753 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-z5lp7" event={"ID":"ae0e0a97-d8ea-4a16-82b4-b2b0348fdf3b","Type":"ContainerDied","Data":"04f3bed443c875e6fad5c9f494d2956693cf1bba29e5495eda77a10b1e9f8038"} Dec 11 14:01:54 crc kubenswrapper[5050]: I1211 14:01:54.552163 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-dsj8r" podStartSLOduration=2.020093716 podStartE2EDuration="9.552140504s" podCreationTimestamp="2025-12-11 14:01:45 +0000 UTC" firstStartedPulling="2025-12-11 14:01:46.153098867 +0000 UTC m=+796.996821453" lastFinishedPulling="2025-12-11 14:01:53.685145655 +0000 UTC m=+804.528868241" observedRunningTime="2025-12-11 14:01:54.551764444 +0000 UTC m=+805.395487030" watchObservedRunningTime="2025-12-11 14:01:54.552140504 +0000 UTC m=+805.395863110" Dec 11 14:01:54 crc kubenswrapper[5050]: I1211 14:01:54.610236 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-7gp2c" podStartSLOduration=2.9057242629999998 podStartE2EDuration="6.610209781s" podCreationTimestamp="2025-12-11 14:01:48 +0000 UTC" firstStartedPulling="2025-12-11 14:01:50.504774332 +0000 UTC m=+801.348496918" lastFinishedPulling="2025-12-11 14:01:54.20925985 +0000 UTC m=+805.052982436" observedRunningTime="2025-12-11 14:01:54.606589673 +0000 UTC m=+805.450312259" watchObservedRunningTime="2025-12-11 14:01:54.610209781 +0000 UTC m=+805.453932367" Dec 11 14:01:55 crc kubenswrapper[5050]: I1211 14:01:55.460902 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-5bddd4b946-644bs" Dec 11 14:01:55 crc kubenswrapper[5050]: I1211 14:01:55.549875 5050 generic.go:334] "Generic (PLEG): container finished" podID="ae0e0a97-d8ea-4a16-82b4-b2b0348fdf3b" containerID="74e18e018fbfae81afc5564099d2022c99fa196089cfd6431855e6250632ee88" exitCode=0 Dec 11 14:01:55 crc kubenswrapper[5050]: I1211 14:01:55.554220 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-z5lp7" event={"ID":"ae0e0a97-d8ea-4a16-82b4-b2b0348fdf3b","Type":"ContainerDied","Data":"74e18e018fbfae81afc5564099d2022c99fa196089cfd6431855e6250632ee88"} Dec 11 14:01:56 crc kubenswrapper[5050]: I1211 14:01:56.557895 5050 generic.go:334] "Generic (PLEG): container finished" podID="ae0e0a97-d8ea-4a16-82b4-b2b0348fdf3b" containerID="c0b1336ffe241dc55d577bb1ed94e5054810216f1aa79020fb841085ecd0ac08" exitCode=0 Dec 11 14:01:56 crc kubenswrapper[5050]: I1211 14:01:56.557994 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-z5lp7" event={"ID":"ae0e0a97-d8ea-4a16-82b4-b2b0348fdf3b","Type":"ContainerDied","Data":"c0b1336ffe241dc55d577bb1ed94e5054810216f1aa79020fb841085ecd0ac08"} Dec 11 14:01:57 crc kubenswrapper[5050]: I1211 14:01:57.596655 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-6jk6w"] Dec 11 14:01:57 crc kubenswrapper[5050]: I1211 14:01:57.600167 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6jk6w" Dec 11 14:01:57 crc kubenswrapper[5050]: I1211 14:01:57.650381 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-z5lp7" event={"ID":"ae0e0a97-d8ea-4a16-82b4-b2b0348fdf3b","Type":"ContainerStarted","Data":"36d1714c8cfbd4d77d6d70a8da5a5b45fe04998a0c571f21b0b60c821f3c66c4"} Dec 11 14:01:57 crc kubenswrapper[5050]: I1211 14:01:57.650422 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-z5lp7" event={"ID":"ae0e0a97-d8ea-4a16-82b4-b2b0348fdf3b","Type":"ContainerStarted","Data":"a02ed2406166c103b53e40f7033820b10c77994647fb017283edf37082d6645e"} Dec 11 14:01:57 crc kubenswrapper[5050]: I1211 14:01:57.654926 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6jk6w"] Dec 11 14:01:57 crc kubenswrapper[5050]: I1211 14:01:57.686504 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6stk2\" (UniqueName: \"kubernetes.io/projected/170382f0-7db3-4ae0-af81-40e31dfef6a4-kube-api-access-6stk2\") pod \"redhat-marketplace-6jk6w\" (UID: \"170382f0-7db3-4ae0-af81-40e31dfef6a4\") " pod="openshift-marketplace/redhat-marketplace-6jk6w" Dec 11 14:01:57 crc kubenswrapper[5050]: I1211 14:01:57.686740 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/170382f0-7db3-4ae0-af81-40e31dfef6a4-utilities\") pod \"redhat-marketplace-6jk6w\" (UID: \"170382f0-7db3-4ae0-af81-40e31dfef6a4\") " pod="openshift-marketplace/redhat-marketplace-6jk6w" Dec 11 14:01:57 crc kubenswrapper[5050]: I1211 14:01:57.686790 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/170382f0-7db3-4ae0-af81-40e31dfef6a4-catalog-content\") pod \"redhat-marketplace-6jk6w\" (UID: \"170382f0-7db3-4ae0-af81-40e31dfef6a4\") " pod="openshift-marketplace/redhat-marketplace-6jk6w" Dec 11 14:01:57 crc kubenswrapper[5050]: I1211 14:01:57.787796 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/170382f0-7db3-4ae0-af81-40e31dfef6a4-catalog-content\") pod \"redhat-marketplace-6jk6w\" (UID: \"170382f0-7db3-4ae0-af81-40e31dfef6a4\") " pod="openshift-marketplace/redhat-marketplace-6jk6w" Dec 11 14:01:57 crc kubenswrapper[5050]: I1211 14:01:57.787845 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6stk2\" (UniqueName: \"kubernetes.io/projected/170382f0-7db3-4ae0-af81-40e31dfef6a4-kube-api-access-6stk2\") pod \"redhat-marketplace-6jk6w\" (UID: \"170382f0-7db3-4ae0-af81-40e31dfef6a4\") " pod="openshift-marketplace/redhat-marketplace-6jk6w" Dec 11 14:01:57 crc kubenswrapper[5050]: I1211 14:01:57.787919 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/170382f0-7db3-4ae0-af81-40e31dfef6a4-utilities\") pod \"redhat-marketplace-6jk6w\" (UID: \"170382f0-7db3-4ae0-af81-40e31dfef6a4\") " pod="openshift-marketplace/redhat-marketplace-6jk6w" Dec 11 14:01:57 crc kubenswrapper[5050]: I1211 14:01:57.788454 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/170382f0-7db3-4ae0-af81-40e31dfef6a4-catalog-content\") pod 
\"redhat-marketplace-6jk6w\" (UID: \"170382f0-7db3-4ae0-af81-40e31dfef6a4\") " pod="openshift-marketplace/redhat-marketplace-6jk6w" Dec 11 14:01:57 crc kubenswrapper[5050]: I1211 14:01:57.788465 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/170382f0-7db3-4ae0-af81-40e31dfef6a4-utilities\") pod \"redhat-marketplace-6jk6w\" (UID: \"170382f0-7db3-4ae0-af81-40e31dfef6a4\") " pod="openshift-marketplace/redhat-marketplace-6jk6w" Dec 11 14:01:57 crc kubenswrapper[5050]: I1211 14:01:57.811661 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6stk2\" (UniqueName: \"kubernetes.io/projected/170382f0-7db3-4ae0-af81-40e31dfef6a4-kube-api-access-6stk2\") pod \"redhat-marketplace-6jk6w\" (UID: \"170382f0-7db3-4ae0-af81-40e31dfef6a4\") " pod="openshift-marketplace/redhat-marketplace-6jk6w" Dec 11 14:01:57 crc kubenswrapper[5050]: I1211 14:01:57.975453 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6jk6w" Dec 11 14:01:58 crc kubenswrapper[5050]: I1211 14:01:58.196361 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6jk6w"] Dec 11 14:01:58 crc kubenswrapper[5050]: W1211 14:01:58.212172 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod170382f0_7db3_4ae0_af81_40e31dfef6a4.slice/crio-a1295049164420c063d33ad5935ede3bbf0dc2015110825e8e22576a0e8ef912 WatchSource:0}: Error finding container a1295049164420c063d33ad5935ede3bbf0dc2015110825e8e22576a0e8ef912: Status 404 returned error can't find the container with id a1295049164420c063d33ad5935ede3bbf0dc2015110825e8e22576a0e8ef912 Dec 11 14:01:58 crc kubenswrapper[5050]: I1211 14:01:58.660963 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-z5lp7" event={"ID":"ae0e0a97-d8ea-4a16-82b4-b2b0348fdf3b","Type":"ContainerStarted","Data":"22cfd6bbf9d078a762453dfe15cab5854f52b2951deddbf1314ab89e27978596"} Dec 11 14:01:58 crc kubenswrapper[5050]: I1211 14:01:58.661384 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-z5lp7" event={"ID":"ae0e0a97-d8ea-4a16-82b4-b2b0348fdf3b","Type":"ContainerStarted","Data":"8b5b06b139ef0e5899f03155bba75cb287ccb4f2bbf929e875f14ab2b7415adb"} Dec 11 14:01:58 crc kubenswrapper[5050]: I1211 14:01:58.661396 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-z5lp7" event={"ID":"ae0e0a97-d8ea-4a16-82b4-b2b0348fdf3b","Type":"ContainerStarted","Data":"b412957807f1a867e06f724ea39a402f0b48aa69e09034e3ee2310a171361a32"} Dec 11 14:01:58 crc kubenswrapper[5050]: I1211 14:01:58.661405 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-z5lp7" event={"ID":"ae0e0a97-d8ea-4a16-82b4-b2b0348fdf3b","Type":"ContainerStarted","Data":"ea24b9f14f19931180284176c760a2c0c4441b692919e0724383a45d5ef08c70"} Dec 11 14:01:58 crc kubenswrapper[5050]: I1211 14:01:58.662723 5050 generic.go:334] "Generic (PLEG): container finished" podID="170382f0-7db3-4ae0-af81-40e31dfef6a4" containerID="1ecd53db8669b81d8e75cf0dc69b1744873cb308d897fbb4bb45d793bc1f3588" exitCode=0 Dec 11 14:01:58 crc kubenswrapper[5050]: I1211 14:01:58.662746 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6jk6w" 
event={"ID":"170382f0-7db3-4ae0-af81-40e31dfef6a4","Type":"ContainerDied","Data":"1ecd53db8669b81d8e75cf0dc69b1744873cb308d897fbb4bb45d793bc1f3588"} Dec 11 14:01:58 crc kubenswrapper[5050]: I1211 14:01:58.662760 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6jk6w" event={"ID":"170382f0-7db3-4ae0-af81-40e31dfef6a4","Type":"ContainerStarted","Data":"a1295049164420c063d33ad5935ede3bbf0dc2015110825e8e22576a0e8ef912"} Dec 11 14:01:58 crc kubenswrapper[5050]: I1211 14:01:58.663025 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-z5lp7" Dec 11 14:01:58 crc kubenswrapper[5050]: I1211 14:01:58.686681 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-z5lp7" podStartSLOduration=6.636878467 podStartE2EDuration="14.686659878s" podCreationTimestamp="2025-12-11 14:01:44 +0000 UTC" firstStartedPulling="2025-12-11 14:01:45.661968642 +0000 UTC m=+796.505691218" lastFinishedPulling="2025-12-11 14:01:53.711750043 +0000 UTC m=+804.555472629" observedRunningTime="2025-12-11 14:01:58.682126386 +0000 UTC m=+809.525848982" watchObservedRunningTime="2025-12-11 14:01:58.686659878 +0000 UTC m=+809.530382464" Dec 11 14:01:59 crc kubenswrapper[5050]: I1211 14:01:59.053573 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-7gp2c" Dec 11 14:01:59 crc kubenswrapper[5050]: I1211 14:01:59.053714 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-7gp2c" Dec 11 14:01:59 crc kubenswrapper[5050]: I1211 14:01:59.103081 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-7gp2c" Dec 11 14:01:59 crc kubenswrapper[5050]: I1211 14:01:59.671044 5050 generic.go:334] "Generic (PLEG): container finished" podID="170382f0-7db3-4ae0-af81-40e31dfef6a4" containerID="2697ff7bd2de3f5c69688d20bd52758c5cf64240f03ec7cf1dad59b1536440bb" exitCode=0 Dec 11 14:01:59 crc kubenswrapper[5050]: I1211 14:01:59.671095 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6jk6w" event={"ID":"170382f0-7db3-4ae0-af81-40e31dfef6a4","Type":"ContainerDied","Data":"2697ff7bd2de3f5c69688d20bd52758c5cf64240f03ec7cf1dad59b1536440bb"} Dec 11 14:01:59 crc kubenswrapper[5050]: I1211 14:01:59.712805 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-7gp2c" Dec 11 14:02:00 crc kubenswrapper[5050]: I1211 14:02:00.338060 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-z5lp7" Dec 11 14:02:00 crc kubenswrapper[5050]: I1211 14:02:00.377169 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-z5lp7" Dec 11 14:02:00 crc kubenswrapper[5050]: I1211 14:02:00.680841 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6jk6w" event={"ID":"170382f0-7db3-4ae0-af81-40e31dfef6a4","Type":"ContainerStarted","Data":"25ad9da73250977bf5bdbc8333dac7f932625751d1406b459e5c87ba6432c030"} Dec 11 14:02:00 crc kubenswrapper[5050]: I1211 14:02:00.702059 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-6jk6w" podStartSLOduration=2.217737392 podStartE2EDuration="3.70203921s" podCreationTimestamp="2025-12-11 14:01:57 
+0000 UTC" firstStartedPulling="2025-12-11 14:01:58.664369577 +0000 UTC m=+809.508092163" lastFinishedPulling="2025-12-11 14:02:00.148671395 +0000 UTC m=+810.992393981" observedRunningTime="2025-12-11 14:02:00.700391665 +0000 UTC m=+811.544114251" watchObservedRunningTime="2025-12-11 14:02:00.70203921 +0000 UTC m=+811.545761796" Dec 11 14:02:01 crc kubenswrapper[5050]: I1211 14:02:01.351183 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7gp2c"] Dec 11 14:02:02 crc kubenswrapper[5050]: I1211 14:02:02.693883 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-7gp2c" podUID="5164adc3-549c-4851-9d2f-e964a5f7137c" containerName="registry-server" containerID="cri-o://0a954ce3f007ae5175f0939589d300c90ff8a7534066d0d6bb274c439f5c1be0" gracePeriod=2 Dec 11 14:02:03 crc kubenswrapper[5050]: I1211 14:02:03.703503 5050 generic.go:334] "Generic (PLEG): container finished" podID="5164adc3-549c-4851-9d2f-e964a5f7137c" containerID="0a954ce3f007ae5175f0939589d300c90ff8a7534066d0d6bb274c439f5c1be0" exitCode=0 Dec 11 14:02:03 crc kubenswrapper[5050]: I1211 14:02:03.703590 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7gp2c" event={"ID":"5164adc3-549c-4851-9d2f-e964a5f7137c","Type":"ContainerDied","Data":"0a954ce3f007ae5175f0939589d300c90ff8a7534066d0d6bb274c439f5c1be0"} Dec 11 14:02:05 crc kubenswrapper[5050]: I1211 14:02:05.952658 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-dsj8r" Dec 11 14:02:06 crc kubenswrapper[5050]: I1211 14:02:06.015976 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7gp2c" Dec 11 14:02:06 crc kubenswrapper[5050]: I1211 14:02:06.042060 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5164adc3-549c-4851-9d2f-e964a5f7137c-utilities\") pod \"5164adc3-549c-4851-9d2f-e964a5f7137c\" (UID: \"5164adc3-549c-4851-9d2f-e964a5f7137c\") " Dec 11 14:02:06 crc kubenswrapper[5050]: I1211 14:02:06.042154 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5164adc3-549c-4851-9d2f-e964a5f7137c-catalog-content\") pod \"5164adc3-549c-4851-9d2f-e964a5f7137c\" (UID: \"5164adc3-549c-4851-9d2f-e964a5f7137c\") " Dec 11 14:02:06 crc kubenswrapper[5050]: I1211 14:02:06.042270 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k68pq\" (UniqueName: \"kubernetes.io/projected/5164adc3-549c-4851-9d2f-e964a5f7137c-kube-api-access-k68pq\") pod \"5164adc3-549c-4851-9d2f-e964a5f7137c\" (UID: \"5164adc3-549c-4851-9d2f-e964a5f7137c\") " Dec 11 14:02:06 crc kubenswrapper[5050]: I1211 14:02:06.044994 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5164adc3-549c-4851-9d2f-e964a5f7137c-utilities" (OuterVolumeSpecName: "utilities") pod "5164adc3-549c-4851-9d2f-e964a5f7137c" (UID: "5164adc3-549c-4851-9d2f-e964a5f7137c"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 14:02:06 crc kubenswrapper[5050]: I1211 14:02:06.047475 5050 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5164adc3-549c-4851-9d2f-e964a5f7137c-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 14:02:06 crc kubenswrapper[5050]: I1211 14:02:06.052138 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5164adc3-549c-4851-9d2f-e964a5f7137c-kube-api-access-k68pq" (OuterVolumeSpecName: "kube-api-access-k68pq") pod "5164adc3-549c-4851-9d2f-e964a5f7137c" (UID: "5164adc3-549c-4851-9d2f-e964a5f7137c"). InnerVolumeSpecName "kube-api-access-k68pq". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 14:02:06 crc kubenswrapper[5050]: I1211 14:02:06.091402 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5164adc3-549c-4851-9d2f-e964a5f7137c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5164adc3-549c-4851-9d2f-e964a5f7137c" (UID: "5164adc3-549c-4851-9d2f-e964a5f7137c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 14:02:06 crc kubenswrapper[5050]: I1211 14:02:06.148698 5050 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5164adc3-549c-4851-9d2f-e964a5f7137c-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 14:02:06 crc kubenswrapper[5050]: I1211 14:02:06.148733 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k68pq\" (UniqueName: \"kubernetes.io/projected/5164adc3-549c-4851-9d2f-e964a5f7137c-kube-api-access-k68pq\") on node \"crc\" DevicePath \"\"" Dec 11 14:02:06 crc kubenswrapper[5050]: I1211 14:02:06.152048 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-rhcfr"] Dec 11 14:02:06 crc kubenswrapper[5050]: E1211 14:02:06.152309 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5164adc3-549c-4851-9d2f-e964a5f7137c" containerName="extract-utilities" Dec 11 14:02:06 crc kubenswrapper[5050]: I1211 14:02:06.152326 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="5164adc3-549c-4851-9d2f-e964a5f7137c" containerName="extract-utilities" Dec 11 14:02:06 crc kubenswrapper[5050]: E1211 14:02:06.152351 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5164adc3-549c-4851-9d2f-e964a5f7137c" containerName="extract-content" Dec 11 14:02:06 crc kubenswrapper[5050]: I1211 14:02:06.152358 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="5164adc3-549c-4851-9d2f-e964a5f7137c" containerName="extract-content" Dec 11 14:02:06 crc kubenswrapper[5050]: E1211 14:02:06.152374 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5164adc3-549c-4851-9d2f-e964a5f7137c" containerName="registry-server" Dec 11 14:02:06 crc kubenswrapper[5050]: I1211 14:02:06.152380 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="5164adc3-549c-4851-9d2f-e964a5f7137c" containerName="registry-server" Dec 11 14:02:06 crc kubenswrapper[5050]: I1211 14:02:06.152478 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="5164adc3-549c-4851-9d2f-e964a5f7137c" containerName="registry-server" Dec 11 14:02:06 crc kubenswrapper[5050]: I1211 14:02:06.153281 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-rhcfr" Dec 11 14:02:06 crc kubenswrapper[5050]: I1211 14:02:06.164055 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rhcfr"] Dec 11 14:02:06 crc kubenswrapper[5050]: I1211 14:02:06.250396 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-462dg\" (UniqueName: \"kubernetes.io/projected/94cd9a09-8820-445b-9da3-1a1b90391048-kube-api-access-462dg\") pod \"community-operators-rhcfr\" (UID: \"94cd9a09-8820-445b-9da3-1a1b90391048\") " pod="openshift-marketplace/community-operators-rhcfr" Dec 11 14:02:06 crc kubenswrapper[5050]: I1211 14:02:06.250450 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94cd9a09-8820-445b-9da3-1a1b90391048-utilities\") pod \"community-operators-rhcfr\" (UID: \"94cd9a09-8820-445b-9da3-1a1b90391048\") " pod="openshift-marketplace/community-operators-rhcfr" Dec 11 14:02:06 crc kubenswrapper[5050]: I1211 14:02:06.250541 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94cd9a09-8820-445b-9da3-1a1b90391048-catalog-content\") pod \"community-operators-rhcfr\" (UID: \"94cd9a09-8820-445b-9da3-1a1b90391048\") " pod="openshift-marketplace/community-operators-rhcfr" Dec 11 14:02:06 crc kubenswrapper[5050]: I1211 14:02:06.351920 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94cd9a09-8820-445b-9da3-1a1b90391048-catalog-content\") pod \"community-operators-rhcfr\" (UID: \"94cd9a09-8820-445b-9da3-1a1b90391048\") " pod="openshift-marketplace/community-operators-rhcfr" Dec 11 14:02:06 crc kubenswrapper[5050]: I1211 14:02:06.351987 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-462dg\" (UniqueName: \"kubernetes.io/projected/94cd9a09-8820-445b-9da3-1a1b90391048-kube-api-access-462dg\") pod \"community-operators-rhcfr\" (UID: \"94cd9a09-8820-445b-9da3-1a1b90391048\") " pod="openshift-marketplace/community-operators-rhcfr" Dec 11 14:02:06 crc kubenswrapper[5050]: I1211 14:02:06.352015 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94cd9a09-8820-445b-9da3-1a1b90391048-utilities\") pod \"community-operators-rhcfr\" (UID: \"94cd9a09-8820-445b-9da3-1a1b90391048\") " pod="openshift-marketplace/community-operators-rhcfr" Dec 11 14:02:06 crc kubenswrapper[5050]: I1211 14:02:06.352571 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94cd9a09-8820-445b-9da3-1a1b90391048-catalog-content\") pod \"community-operators-rhcfr\" (UID: \"94cd9a09-8820-445b-9da3-1a1b90391048\") " pod="openshift-marketplace/community-operators-rhcfr" Dec 11 14:02:06 crc kubenswrapper[5050]: I1211 14:02:06.352593 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94cd9a09-8820-445b-9da3-1a1b90391048-utilities\") pod \"community-operators-rhcfr\" (UID: \"94cd9a09-8820-445b-9da3-1a1b90391048\") " pod="openshift-marketplace/community-operators-rhcfr" Dec 11 14:02:06 crc kubenswrapper[5050]: I1211 14:02:06.369497 5050 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-462dg\" (UniqueName: \"kubernetes.io/projected/94cd9a09-8820-445b-9da3-1a1b90391048-kube-api-access-462dg\") pod \"community-operators-rhcfr\" (UID: \"94cd9a09-8820-445b-9da3-1a1b90391048\") " pod="openshift-marketplace/community-operators-rhcfr" Dec 11 14:02:06 crc kubenswrapper[5050]: I1211 14:02:06.468971 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rhcfr" Dec 11 14:02:06 crc kubenswrapper[5050]: I1211 14:02:06.727934 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rhcfr"] Dec 11 14:02:06 crc kubenswrapper[5050]: I1211 14:02:06.729521 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7gp2c" event={"ID":"5164adc3-549c-4851-9d2f-e964a5f7137c","Type":"ContainerDied","Data":"5b1b1aa40933238937b7eeca771d50b787754b67971b105097132d8c111ac111"} Dec 11 14:02:06 crc kubenswrapper[5050]: I1211 14:02:06.729577 5050 scope.go:117] "RemoveContainer" containerID="0a954ce3f007ae5175f0939589d300c90ff8a7534066d0d6bb274c439f5c1be0" Dec 11 14:02:06 crc kubenswrapper[5050]: I1211 14:02:06.729713 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7gp2c" Dec 11 14:02:06 crc kubenswrapper[5050]: I1211 14:02:06.755395 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7gp2c"] Dec 11 14:02:06 crc kubenswrapper[5050]: I1211 14:02:06.757291 5050 scope.go:117] "RemoveContainer" containerID="0b749ec1a916e21504673e8569e805f06c7bf3756ac8389249072b8affdb9213" Dec 11 14:02:06 crc kubenswrapper[5050]: I1211 14:02:06.761777 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-7gp2c"] Dec 11 14:02:06 crc kubenswrapper[5050]: I1211 14:02:06.779379 5050 scope.go:117] "RemoveContainer" containerID="4a30c286e45f997ea4119277144f8dd14a40546c08aa2330ab68fb1fee5cb9b3" Dec 11 14:02:06 crc kubenswrapper[5050]: I1211 14:02:06.929824 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-w4tzc" Dec 11 14:02:07 crc kubenswrapper[5050]: I1211 14:02:07.552896 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5164adc3-549c-4851-9d2f-e964a5f7137c" path="/var/lib/kubelet/pods/5164adc3-549c-4851-9d2f-e964a5f7137c/volumes" Dec 11 14:02:07 crc kubenswrapper[5050]: I1211 14:02:07.737911 5050 generic.go:334] "Generic (PLEG): container finished" podID="94cd9a09-8820-445b-9da3-1a1b90391048" containerID="993d911c980036d208bb43e8f3d925fac09571919f8d54f11a21e0b00f3f0c03" exitCode=0 Dec 11 14:02:07 crc kubenswrapper[5050]: I1211 14:02:07.737952 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rhcfr" event={"ID":"94cd9a09-8820-445b-9da3-1a1b90391048","Type":"ContainerDied","Data":"993d911c980036d208bb43e8f3d925fac09571919f8d54f11a21e0b00f3f0c03"} Dec 11 14:02:07 crc kubenswrapper[5050]: I1211 14:02:07.737987 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rhcfr" event={"ID":"94cd9a09-8820-445b-9da3-1a1b90391048","Type":"ContainerStarted","Data":"520d831c731a08bb7c6a428993f0f92b2f642be2a1235974b383d1fc50bfa6ab"} Dec 11 14:02:07 crc kubenswrapper[5050]: I1211 14:02:07.975943 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/redhat-marketplace-6jk6w" Dec 11 14:02:07 crc kubenswrapper[5050]: I1211 14:02:07.975981 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-6jk6w" Dec 11 14:02:08 crc kubenswrapper[5050]: I1211 14:02:08.029473 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-6jk6w" Dec 11 14:02:08 crc kubenswrapper[5050]: I1211 14:02:08.613373 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931atlqxq"] Dec 11 14:02:08 crc kubenswrapper[5050]: I1211 14:02:08.615284 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931atlqxq" Dec 11 14:02:08 crc kubenswrapper[5050]: I1211 14:02:08.616951 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Dec 11 14:02:08 crc kubenswrapper[5050]: I1211 14:02:08.625975 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931atlqxq"] Dec 11 14:02:08 crc kubenswrapper[5050]: I1211 14:02:08.685451 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/65c19a4d-591e-4386-abed-2d56ae8107a9-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931atlqxq\" (UID: \"65c19a4d-591e-4386-abed-2d56ae8107a9\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931atlqxq" Dec 11 14:02:08 crc kubenswrapper[5050]: I1211 14:02:08.685559 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tgjfk\" (UniqueName: \"kubernetes.io/projected/65c19a4d-591e-4386-abed-2d56ae8107a9-kube-api-access-tgjfk\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931atlqxq\" (UID: \"65c19a4d-591e-4386-abed-2d56ae8107a9\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931atlqxq" Dec 11 14:02:08 crc kubenswrapper[5050]: I1211 14:02:08.685616 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/65c19a4d-591e-4386-abed-2d56ae8107a9-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931atlqxq\" (UID: \"65c19a4d-591e-4386-abed-2d56ae8107a9\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931atlqxq" Dec 11 14:02:08 crc kubenswrapper[5050]: I1211 14:02:08.755081 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rhcfr" event={"ID":"94cd9a09-8820-445b-9da3-1a1b90391048","Type":"ContainerStarted","Data":"74c41a97889d92132feda08ab6a5729831720e8134e7834c085c9e1058917abf"} Dec 11 14:02:08 crc kubenswrapper[5050]: I1211 14:02:08.786715 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/65c19a4d-591e-4386-abed-2d56ae8107a9-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931atlqxq\" (UID: \"65c19a4d-591e-4386-abed-2d56ae8107a9\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931atlqxq" Dec 11 14:02:08 crc kubenswrapper[5050]: I1211 14:02:08.786811 5050 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tgjfk\" (UniqueName: \"kubernetes.io/projected/65c19a4d-591e-4386-abed-2d56ae8107a9-kube-api-access-tgjfk\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931atlqxq\" (UID: \"65c19a4d-591e-4386-abed-2d56ae8107a9\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931atlqxq" Dec 11 14:02:08 crc kubenswrapper[5050]: I1211 14:02:08.786836 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/65c19a4d-591e-4386-abed-2d56ae8107a9-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931atlqxq\" (UID: \"65c19a4d-591e-4386-abed-2d56ae8107a9\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931atlqxq" Dec 11 14:02:08 crc kubenswrapper[5050]: I1211 14:02:08.787223 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/65c19a4d-591e-4386-abed-2d56ae8107a9-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931atlqxq\" (UID: \"65c19a4d-591e-4386-abed-2d56ae8107a9\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931atlqxq" Dec 11 14:02:08 crc kubenswrapper[5050]: I1211 14:02:08.787444 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/65c19a4d-591e-4386-abed-2d56ae8107a9-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931atlqxq\" (UID: \"65c19a4d-591e-4386-abed-2d56ae8107a9\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931atlqxq" Dec 11 14:02:08 crc kubenswrapper[5050]: I1211 14:02:08.802726 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-6jk6w" Dec 11 14:02:08 crc kubenswrapper[5050]: I1211 14:02:08.806433 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tgjfk\" (UniqueName: \"kubernetes.io/projected/65c19a4d-591e-4386-abed-2d56ae8107a9-kube-api-access-tgjfk\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931atlqxq\" (UID: \"65c19a4d-591e-4386-abed-2d56ae8107a9\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931atlqxq" Dec 11 14:02:08 crc kubenswrapper[5050]: I1211 14:02:08.931142 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931atlqxq" Dec 11 14:02:09 crc kubenswrapper[5050]: I1211 14:02:09.114963 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931atlqxq"] Dec 11 14:02:09 crc kubenswrapper[5050]: I1211 14:02:09.761803 5050 generic.go:334] "Generic (PLEG): container finished" podID="94cd9a09-8820-445b-9da3-1a1b90391048" containerID="74c41a97889d92132feda08ab6a5729831720e8134e7834c085c9e1058917abf" exitCode=0 Dec 11 14:02:09 crc kubenswrapper[5050]: I1211 14:02:09.761867 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rhcfr" event={"ID":"94cd9a09-8820-445b-9da3-1a1b90391048","Type":"ContainerDied","Data":"74c41a97889d92132feda08ab6a5729831720e8134e7834c085c9e1058917abf"} Dec 11 14:02:09 crc kubenswrapper[5050]: I1211 14:02:09.766600 5050 generic.go:334] "Generic (PLEG): container finished" podID="65c19a4d-591e-4386-abed-2d56ae8107a9" containerID="16f1ae03d017ba407be0fc5fe6bf6831d00f3f7acafe6eff82d222d4342701d3" exitCode=0 Dec 11 14:02:09 crc kubenswrapper[5050]: I1211 14:02:09.767503 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931atlqxq" event={"ID":"65c19a4d-591e-4386-abed-2d56ae8107a9","Type":"ContainerDied","Data":"16f1ae03d017ba407be0fc5fe6bf6831d00f3f7acafe6eff82d222d4342701d3"} Dec 11 14:02:09 crc kubenswrapper[5050]: I1211 14:02:09.767686 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931atlqxq" event={"ID":"65c19a4d-591e-4386-abed-2d56ae8107a9","Type":"ContainerStarted","Data":"861324a588e2e9cd9878a5f8ef472580c7a5e412ac658e4a20489396aafb3e64"} Dec 11 14:02:10 crc kubenswrapper[5050]: I1211 14:02:10.776575 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rhcfr" event={"ID":"94cd9a09-8820-445b-9da3-1a1b90391048","Type":"ContainerStarted","Data":"ad4ba044fa6dbade1690045169970d381960da101f135047d13699a43c175b1a"} Dec 11 14:02:10 crc kubenswrapper[5050]: I1211 14:02:10.797054 5050 patch_prober.go:28] interesting pod/machine-config-daemon-wcb2s container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 14:02:10 crc kubenswrapper[5050]: I1211 14:02:10.797165 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 14:02:10 crc kubenswrapper[5050]: I1211 14:02:10.799810 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-rhcfr" podStartSLOduration=2.2301899020000002 podStartE2EDuration="4.799790701s" podCreationTimestamp="2025-12-11 14:02:06 +0000 UTC" firstStartedPulling="2025-12-11 14:02:07.741206415 +0000 UTC m=+818.584929001" lastFinishedPulling="2025-12-11 14:02:10.310807214 +0000 UTC m=+821.154529800" observedRunningTime="2025-12-11 14:02:10.794854518 +0000 UTC m=+821.638577104" watchObservedRunningTime="2025-12-11 
14:02:10.799790701 +0000 UTC m=+821.643513287" Dec 11 14:02:11 crc kubenswrapper[5050]: I1211 14:02:11.960919 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6jk6w"] Dec 11 14:02:11 crc kubenswrapper[5050]: I1211 14:02:11.961557 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-6jk6w" podUID="170382f0-7db3-4ae0-af81-40e31dfef6a4" containerName="registry-server" containerID="cri-o://25ad9da73250977bf5bdbc8333dac7f932625751d1406b459e5c87ba6432c030" gracePeriod=2 Dec 11 14:02:12 crc kubenswrapper[5050]: I1211 14:02:12.792827 5050 generic.go:334] "Generic (PLEG): container finished" podID="170382f0-7db3-4ae0-af81-40e31dfef6a4" containerID="25ad9da73250977bf5bdbc8333dac7f932625751d1406b459e5c87ba6432c030" exitCode=0 Dec 11 14:02:12 crc kubenswrapper[5050]: I1211 14:02:12.792898 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6jk6w" event={"ID":"170382f0-7db3-4ae0-af81-40e31dfef6a4","Type":"ContainerDied","Data":"25ad9da73250977bf5bdbc8333dac7f932625751d1406b459e5c87ba6432c030"} Dec 11 14:02:13 crc kubenswrapper[5050]: I1211 14:02:13.572937 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6jk6w" Dec 11 14:02:13 crc kubenswrapper[5050]: I1211 14:02:13.659047 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/170382f0-7db3-4ae0-af81-40e31dfef6a4-utilities\") pod \"170382f0-7db3-4ae0-af81-40e31dfef6a4\" (UID: \"170382f0-7db3-4ae0-af81-40e31dfef6a4\") " Dec 11 14:02:13 crc kubenswrapper[5050]: I1211 14:02:13.659101 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6stk2\" (UniqueName: \"kubernetes.io/projected/170382f0-7db3-4ae0-af81-40e31dfef6a4-kube-api-access-6stk2\") pod \"170382f0-7db3-4ae0-af81-40e31dfef6a4\" (UID: \"170382f0-7db3-4ae0-af81-40e31dfef6a4\") " Dec 11 14:02:13 crc kubenswrapper[5050]: I1211 14:02:13.659122 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/170382f0-7db3-4ae0-af81-40e31dfef6a4-catalog-content\") pod \"170382f0-7db3-4ae0-af81-40e31dfef6a4\" (UID: \"170382f0-7db3-4ae0-af81-40e31dfef6a4\") " Dec 11 14:02:13 crc kubenswrapper[5050]: I1211 14:02:13.660075 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/170382f0-7db3-4ae0-af81-40e31dfef6a4-utilities" (OuterVolumeSpecName: "utilities") pod "170382f0-7db3-4ae0-af81-40e31dfef6a4" (UID: "170382f0-7db3-4ae0-af81-40e31dfef6a4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 14:02:13 crc kubenswrapper[5050]: I1211 14:02:13.667295 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/170382f0-7db3-4ae0-af81-40e31dfef6a4-kube-api-access-6stk2" (OuterVolumeSpecName: "kube-api-access-6stk2") pod "170382f0-7db3-4ae0-af81-40e31dfef6a4" (UID: "170382f0-7db3-4ae0-af81-40e31dfef6a4"). InnerVolumeSpecName "kube-api-access-6stk2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 14:02:13 crc kubenswrapper[5050]: I1211 14:02:13.679631 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/170382f0-7db3-4ae0-af81-40e31dfef6a4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "170382f0-7db3-4ae0-af81-40e31dfef6a4" (UID: "170382f0-7db3-4ae0-af81-40e31dfef6a4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 14:02:13 crc kubenswrapper[5050]: I1211 14:02:13.760736 5050 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/170382f0-7db3-4ae0-af81-40e31dfef6a4-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 14:02:13 crc kubenswrapper[5050]: I1211 14:02:13.760792 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6stk2\" (UniqueName: \"kubernetes.io/projected/170382f0-7db3-4ae0-af81-40e31dfef6a4-kube-api-access-6stk2\") on node \"crc\" DevicePath \"\"" Dec 11 14:02:13 crc kubenswrapper[5050]: I1211 14:02:13.760809 5050 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/170382f0-7db3-4ae0-af81-40e31dfef6a4-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 14:02:13 crc kubenswrapper[5050]: I1211 14:02:13.802102 5050 generic.go:334] "Generic (PLEG): container finished" podID="65c19a4d-591e-4386-abed-2d56ae8107a9" containerID="8ca64590ac32b8809f32546292d49599655a0eda39a6ae7564b2f4aa50df5df9" exitCode=0 Dec 11 14:02:13 crc kubenswrapper[5050]: I1211 14:02:13.802200 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931atlqxq" event={"ID":"65c19a4d-591e-4386-abed-2d56ae8107a9","Type":"ContainerDied","Data":"8ca64590ac32b8809f32546292d49599655a0eda39a6ae7564b2f4aa50df5df9"} Dec 11 14:02:13 crc kubenswrapper[5050]: I1211 14:02:13.804806 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6jk6w" event={"ID":"170382f0-7db3-4ae0-af81-40e31dfef6a4","Type":"ContainerDied","Data":"a1295049164420c063d33ad5935ede3bbf0dc2015110825e8e22576a0e8ef912"} Dec 11 14:02:13 crc kubenswrapper[5050]: I1211 14:02:13.804862 5050 scope.go:117] "RemoveContainer" containerID="25ad9da73250977bf5bdbc8333dac7f932625751d1406b459e5c87ba6432c030" Dec 11 14:02:13 crc kubenswrapper[5050]: I1211 14:02:13.804881 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6jk6w" Dec 11 14:02:13 crc kubenswrapper[5050]: I1211 14:02:13.829717 5050 scope.go:117] "RemoveContainer" containerID="2697ff7bd2de3f5c69688d20bd52758c5cf64240f03ec7cf1dad59b1536440bb" Dec 11 14:02:13 crc kubenswrapper[5050]: I1211 14:02:13.856104 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6jk6w"] Dec 11 14:02:13 crc kubenswrapper[5050]: I1211 14:02:13.861415 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-6jk6w"] Dec 11 14:02:13 crc kubenswrapper[5050]: I1211 14:02:13.864683 5050 scope.go:117] "RemoveContainer" containerID="1ecd53db8669b81d8e75cf0dc69b1744873cb308d897fbb4bb45d793bc1f3588" Dec 11 14:02:14 crc kubenswrapper[5050]: I1211 14:02:14.813242 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931atlqxq" event={"ID":"65c19a4d-591e-4386-abed-2d56ae8107a9","Type":"ContainerStarted","Data":"2fb71e7b6e1c6db9d8d5ffaa374823ae6ef66c5b4755ddb67cedad6bd7b5a78a"} Dec 11 14:02:14 crc kubenswrapper[5050]: I1211 14:02:14.836385 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931atlqxq" podStartSLOduration=3.277692479 podStartE2EDuration="6.836334451s" podCreationTimestamp="2025-12-11 14:02:08 +0000 UTC" firstStartedPulling="2025-12-11 14:02:09.768472668 +0000 UTC m=+820.612195254" lastFinishedPulling="2025-12-11 14:02:13.32711464 +0000 UTC m=+824.170837226" observedRunningTime="2025-12-11 14:02:14.835724804 +0000 UTC m=+825.679447430" watchObservedRunningTime="2025-12-11 14:02:14.836334451 +0000 UTC m=+825.680057047" Dec 11 14:02:15 crc kubenswrapper[5050]: I1211 14:02:15.343406 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-z5lp7" Dec 11 14:02:15 crc kubenswrapper[5050]: I1211 14:02:15.554079 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="170382f0-7db3-4ae0-af81-40e31dfef6a4" path="/var/lib/kubelet/pods/170382f0-7db3-4ae0-af81-40e31dfef6a4/volumes" Dec 11 14:02:15 crc kubenswrapper[5050]: I1211 14:02:15.824833 5050 generic.go:334] "Generic (PLEG): container finished" podID="65c19a4d-591e-4386-abed-2d56ae8107a9" containerID="2fb71e7b6e1c6db9d8d5ffaa374823ae6ef66c5b4755ddb67cedad6bd7b5a78a" exitCode=0 Dec 11 14:02:15 crc kubenswrapper[5050]: I1211 14:02:15.824950 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931atlqxq" event={"ID":"65c19a4d-591e-4386-abed-2d56ae8107a9","Type":"ContainerDied","Data":"2fb71e7b6e1c6db9d8d5ffaa374823ae6ef66c5b4755ddb67cedad6bd7b5a78a"} Dec 11 14:02:16 crc kubenswrapper[5050]: I1211 14:02:16.469480 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-rhcfr" Dec 11 14:02:16 crc kubenswrapper[5050]: I1211 14:02:16.470068 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-rhcfr" Dec 11 14:02:16 crc kubenswrapper[5050]: I1211 14:02:16.512317 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-rhcfr" Dec 11 14:02:16 crc kubenswrapper[5050]: I1211 14:02:16.881048 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/community-operators-rhcfr" Dec 11 14:02:17 crc kubenswrapper[5050]: I1211 14:02:17.054980 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931atlqxq" Dec 11 14:02:17 crc kubenswrapper[5050]: I1211 14:02:17.107871 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/65c19a4d-591e-4386-abed-2d56ae8107a9-util\") pod \"65c19a4d-591e-4386-abed-2d56ae8107a9\" (UID: \"65c19a4d-591e-4386-abed-2d56ae8107a9\") " Dec 11 14:02:17 crc kubenswrapper[5050]: I1211 14:02:17.108217 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tgjfk\" (UniqueName: \"kubernetes.io/projected/65c19a4d-591e-4386-abed-2d56ae8107a9-kube-api-access-tgjfk\") pod \"65c19a4d-591e-4386-abed-2d56ae8107a9\" (UID: \"65c19a4d-591e-4386-abed-2d56ae8107a9\") " Dec 11 14:02:17 crc kubenswrapper[5050]: I1211 14:02:17.108278 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/65c19a4d-591e-4386-abed-2d56ae8107a9-bundle\") pod \"65c19a4d-591e-4386-abed-2d56ae8107a9\" (UID: \"65c19a4d-591e-4386-abed-2d56ae8107a9\") " Dec 11 14:02:17 crc kubenswrapper[5050]: I1211 14:02:17.110722 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/65c19a4d-591e-4386-abed-2d56ae8107a9-bundle" (OuterVolumeSpecName: "bundle") pod "65c19a4d-591e-4386-abed-2d56ae8107a9" (UID: "65c19a4d-591e-4386-abed-2d56ae8107a9"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 14:02:17 crc kubenswrapper[5050]: I1211 14:02:17.115969 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65c19a4d-591e-4386-abed-2d56ae8107a9-kube-api-access-tgjfk" (OuterVolumeSpecName: "kube-api-access-tgjfk") pod "65c19a4d-591e-4386-abed-2d56ae8107a9" (UID: "65c19a4d-591e-4386-abed-2d56ae8107a9"). InnerVolumeSpecName "kube-api-access-tgjfk". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 14:02:17 crc kubenswrapper[5050]: I1211 14:02:17.118523 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/65c19a4d-591e-4386-abed-2d56ae8107a9-util" (OuterVolumeSpecName: "util") pod "65c19a4d-591e-4386-abed-2d56ae8107a9" (UID: "65c19a4d-591e-4386-abed-2d56ae8107a9"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 14:02:17 crc kubenswrapper[5050]: I1211 14:02:17.210394 5050 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/65c19a4d-591e-4386-abed-2d56ae8107a9-util\") on node \"crc\" DevicePath \"\"" Dec 11 14:02:17 crc kubenswrapper[5050]: I1211 14:02:17.210429 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tgjfk\" (UniqueName: \"kubernetes.io/projected/65c19a4d-591e-4386-abed-2d56ae8107a9-kube-api-access-tgjfk\") on node \"crc\" DevicePath \"\"" Dec 11 14:02:17 crc kubenswrapper[5050]: I1211 14:02:17.210444 5050 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/65c19a4d-591e-4386-abed-2d56ae8107a9-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 14:02:17 crc kubenswrapper[5050]: I1211 14:02:17.840528 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931atlqxq" event={"ID":"65c19a4d-591e-4386-abed-2d56ae8107a9","Type":"ContainerDied","Data":"861324a588e2e9cd9878a5f8ef472580c7a5e412ac658e4a20489396aafb3e64"} Dec 11 14:02:17 crc kubenswrapper[5050]: I1211 14:02:17.840616 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="861324a588e2e9cd9878a5f8ef472580c7a5e412ac658e4a20489396aafb3e64" Dec 11 14:02:17 crc kubenswrapper[5050]: I1211 14:02:17.840550 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931atlqxq" Dec 11 14:02:18 crc kubenswrapper[5050]: I1211 14:02:18.346526 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rhcfr"] Dec 11 14:02:19 crc kubenswrapper[5050]: I1211 14:02:19.849497 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-rhcfr" podUID="94cd9a09-8820-445b-9da3-1a1b90391048" containerName="registry-server" containerID="cri-o://ad4ba044fa6dbade1690045169970d381960da101f135047d13699a43c175b1a" gracePeriod=2 Dec 11 14:02:20 crc kubenswrapper[5050]: I1211 14:02:20.394335 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-rgsmz"] Dec 11 14:02:20 crc kubenswrapper[5050]: E1211 14:02:20.394575 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65c19a4d-591e-4386-abed-2d56ae8107a9" containerName="util" Dec 11 14:02:20 crc kubenswrapper[5050]: I1211 14:02:20.394587 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="65c19a4d-591e-4386-abed-2d56ae8107a9" containerName="util" Dec 11 14:02:20 crc kubenswrapper[5050]: E1211 14:02:20.394595 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65c19a4d-591e-4386-abed-2d56ae8107a9" containerName="pull" Dec 11 14:02:20 crc kubenswrapper[5050]: I1211 14:02:20.394600 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="65c19a4d-591e-4386-abed-2d56ae8107a9" containerName="pull" Dec 11 14:02:20 crc kubenswrapper[5050]: E1211 14:02:20.394608 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="170382f0-7db3-4ae0-af81-40e31dfef6a4" containerName="extract-content" Dec 11 14:02:20 crc kubenswrapper[5050]: I1211 14:02:20.394614 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="170382f0-7db3-4ae0-af81-40e31dfef6a4" containerName="extract-content" Dec 11 
14:02:20 crc kubenswrapper[5050]: E1211 14:02:20.394622 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="170382f0-7db3-4ae0-af81-40e31dfef6a4" containerName="extract-utilities" Dec 11 14:02:20 crc kubenswrapper[5050]: I1211 14:02:20.394628 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="170382f0-7db3-4ae0-af81-40e31dfef6a4" containerName="extract-utilities" Dec 11 14:02:20 crc kubenswrapper[5050]: E1211 14:02:20.394638 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65c19a4d-591e-4386-abed-2d56ae8107a9" containerName="extract" Dec 11 14:02:20 crc kubenswrapper[5050]: I1211 14:02:20.394643 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="65c19a4d-591e-4386-abed-2d56ae8107a9" containerName="extract" Dec 11 14:02:20 crc kubenswrapper[5050]: E1211 14:02:20.394660 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="170382f0-7db3-4ae0-af81-40e31dfef6a4" containerName="registry-server" Dec 11 14:02:20 crc kubenswrapper[5050]: I1211 14:02:20.394666 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="170382f0-7db3-4ae0-af81-40e31dfef6a4" containerName="registry-server" Dec 11 14:02:20 crc kubenswrapper[5050]: I1211 14:02:20.394780 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="170382f0-7db3-4ae0-af81-40e31dfef6a4" containerName="registry-server" Dec 11 14:02:20 crc kubenswrapper[5050]: I1211 14:02:20.394794 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="65c19a4d-591e-4386-abed-2d56ae8107a9" containerName="extract" Dec 11 14:02:20 crc kubenswrapper[5050]: I1211 14:02:20.395217 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-rgsmz" Dec 11 14:02:20 crc kubenswrapper[5050]: I1211 14:02:20.397144 5050 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager-operator"/"cert-manager-operator-controller-manager-dockercfg-p54cz" Dec 11 14:02:20 crc kubenswrapper[5050]: I1211 14:02:20.397372 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"openshift-service-ca.crt" Dec 11 14:02:20 crc kubenswrapper[5050]: I1211 14:02:20.398117 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"kube-root-ca.crt" Dec 11 14:02:20 crc kubenswrapper[5050]: I1211 14:02:20.421029 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-rgsmz"] Dec 11 14:02:20 crc kubenswrapper[5050]: I1211 14:02:20.452472 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/5bfc8ad2-08e1-4051-a1d0-2c9dc26c98d4-tmp\") pod \"cert-manager-operator-controller-manager-64cf6dff88-rgsmz\" (UID: \"5bfc8ad2-08e1-4051-a1d0-2c9dc26c98d4\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-rgsmz" Dec 11 14:02:20 crc kubenswrapper[5050]: I1211 14:02:20.452618 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9gpk\" (UniqueName: \"kubernetes.io/projected/5bfc8ad2-08e1-4051-a1d0-2c9dc26c98d4-kube-api-access-l9gpk\") pod \"cert-manager-operator-controller-manager-64cf6dff88-rgsmz\" (UID: \"5bfc8ad2-08e1-4051-a1d0-2c9dc26c98d4\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-rgsmz" Dec 11 14:02:20 crc 
kubenswrapper[5050]: I1211 14:02:20.554135 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l9gpk\" (UniqueName: \"kubernetes.io/projected/5bfc8ad2-08e1-4051-a1d0-2c9dc26c98d4-kube-api-access-l9gpk\") pod \"cert-manager-operator-controller-manager-64cf6dff88-rgsmz\" (UID: \"5bfc8ad2-08e1-4051-a1d0-2c9dc26c98d4\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-rgsmz" Dec 11 14:02:20 crc kubenswrapper[5050]: I1211 14:02:20.554224 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/5bfc8ad2-08e1-4051-a1d0-2c9dc26c98d4-tmp\") pod \"cert-manager-operator-controller-manager-64cf6dff88-rgsmz\" (UID: \"5bfc8ad2-08e1-4051-a1d0-2c9dc26c98d4\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-rgsmz" Dec 11 14:02:20 crc kubenswrapper[5050]: I1211 14:02:20.554755 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/5bfc8ad2-08e1-4051-a1d0-2c9dc26c98d4-tmp\") pod \"cert-manager-operator-controller-manager-64cf6dff88-rgsmz\" (UID: \"5bfc8ad2-08e1-4051-a1d0-2c9dc26c98d4\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-rgsmz" Dec 11 14:02:20 crc kubenswrapper[5050]: I1211 14:02:20.598154 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l9gpk\" (UniqueName: \"kubernetes.io/projected/5bfc8ad2-08e1-4051-a1d0-2c9dc26c98d4-kube-api-access-l9gpk\") pod \"cert-manager-operator-controller-manager-64cf6dff88-rgsmz\" (UID: \"5bfc8ad2-08e1-4051-a1d0-2c9dc26c98d4\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-rgsmz" Dec 11 14:02:20 crc kubenswrapper[5050]: I1211 14:02:20.710764 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-rgsmz" Dec 11 14:02:20 crc kubenswrapper[5050]: I1211 14:02:20.794842 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rhcfr" Dec 11 14:02:20 crc kubenswrapper[5050]: I1211 14:02:20.867474 5050 generic.go:334] "Generic (PLEG): container finished" podID="94cd9a09-8820-445b-9da3-1a1b90391048" containerID="ad4ba044fa6dbade1690045169970d381960da101f135047d13699a43c175b1a" exitCode=0 Dec 11 14:02:20 crc kubenswrapper[5050]: I1211 14:02:20.867520 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rhcfr" event={"ID":"94cd9a09-8820-445b-9da3-1a1b90391048","Type":"ContainerDied","Data":"ad4ba044fa6dbade1690045169970d381960da101f135047d13699a43c175b1a"} Dec 11 14:02:20 crc kubenswrapper[5050]: I1211 14:02:20.867554 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rhcfr" event={"ID":"94cd9a09-8820-445b-9da3-1a1b90391048","Type":"ContainerDied","Data":"520d831c731a08bb7c6a428993f0f92b2f642be2a1235974b383d1fc50bfa6ab"} Dec 11 14:02:20 crc kubenswrapper[5050]: I1211 14:02:20.867577 5050 scope.go:117] "RemoveContainer" containerID="ad4ba044fa6dbade1690045169970d381960da101f135047d13699a43c175b1a" Dec 11 14:02:20 crc kubenswrapper[5050]: I1211 14:02:20.867729 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-rhcfr" Dec 11 14:02:20 crc kubenswrapper[5050]: I1211 14:02:20.898024 5050 scope.go:117] "RemoveContainer" containerID="74c41a97889d92132feda08ab6a5729831720e8134e7834c085c9e1058917abf" Dec 11 14:02:20 crc kubenswrapper[5050]: I1211 14:02:20.928355 5050 scope.go:117] "RemoveContainer" containerID="993d911c980036d208bb43e8f3d925fac09571919f8d54f11a21e0b00f3f0c03" Dec 11 14:02:20 crc kubenswrapper[5050]: I1211 14:02:20.955588 5050 scope.go:117] "RemoveContainer" containerID="ad4ba044fa6dbade1690045169970d381960da101f135047d13699a43c175b1a" Dec 11 14:02:20 crc kubenswrapper[5050]: E1211 14:02:20.972342 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ad4ba044fa6dbade1690045169970d381960da101f135047d13699a43c175b1a\": container with ID starting with ad4ba044fa6dbade1690045169970d381960da101f135047d13699a43c175b1a not found: ID does not exist" containerID="ad4ba044fa6dbade1690045169970d381960da101f135047d13699a43c175b1a" Dec 11 14:02:20 crc kubenswrapper[5050]: I1211 14:02:20.972385 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad4ba044fa6dbade1690045169970d381960da101f135047d13699a43c175b1a"} err="failed to get container status \"ad4ba044fa6dbade1690045169970d381960da101f135047d13699a43c175b1a\": rpc error: code = NotFound desc = could not find container \"ad4ba044fa6dbade1690045169970d381960da101f135047d13699a43c175b1a\": container with ID starting with ad4ba044fa6dbade1690045169970d381960da101f135047d13699a43c175b1a not found: ID does not exist" Dec 11 14:02:20 crc kubenswrapper[5050]: I1211 14:02:20.972412 5050 scope.go:117] "RemoveContainer" containerID="74c41a97889d92132feda08ab6a5729831720e8134e7834c085c9e1058917abf" Dec 11 14:02:20 crc kubenswrapper[5050]: I1211 14:02:20.973048 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94cd9a09-8820-445b-9da3-1a1b90391048-utilities\") pod \"94cd9a09-8820-445b-9da3-1a1b90391048\" (UID: \"94cd9a09-8820-445b-9da3-1a1b90391048\") " Dec 11 14:02:20 crc kubenswrapper[5050]: I1211 14:02:20.973162 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94cd9a09-8820-445b-9da3-1a1b90391048-catalog-content\") pod \"94cd9a09-8820-445b-9da3-1a1b90391048\" (UID: \"94cd9a09-8820-445b-9da3-1a1b90391048\") " Dec 11 14:02:20 crc kubenswrapper[5050]: I1211 14:02:20.973195 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-462dg\" (UniqueName: \"kubernetes.io/projected/94cd9a09-8820-445b-9da3-1a1b90391048-kube-api-access-462dg\") pod \"94cd9a09-8820-445b-9da3-1a1b90391048\" (UID: \"94cd9a09-8820-445b-9da3-1a1b90391048\") " Dec 11 14:02:20 crc kubenswrapper[5050]: I1211 14:02:20.973532 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-rgsmz"] Dec 11 14:02:20 crc kubenswrapper[5050]: E1211 14:02:20.975161 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"74c41a97889d92132feda08ab6a5729831720e8134e7834c085c9e1058917abf\": container with ID starting with 74c41a97889d92132feda08ab6a5729831720e8134e7834c085c9e1058917abf not found: ID does not exist" 
containerID="74c41a97889d92132feda08ab6a5729831720e8134e7834c085c9e1058917abf" Dec 11 14:02:20 crc kubenswrapper[5050]: I1211 14:02:20.975209 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"74c41a97889d92132feda08ab6a5729831720e8134e7834c085c9e1058917abf"} err="failed to get container status \"74c41a97889d92132feda08ab6a5729831720e8134e7834c085c9e1058917abf\": rpc error: code = NotFound desc = could not find container \"74c41a97889d92132feda08ab6a5729831720e8134e7834c085c9e1058917abf\": container with ID starting with 74c41a97889d92132feda08ab6a5729831720e8134e7834c085c9e1058917abf not found: ID does not exist" Dec 11 14:02:20 crc kubenswrapper[5050]: I1211 14:02:20.975241 5050 scope.go:117] "RemoveContainer" containerID="993d911c980036d208bb43e8f3d925fac09571919f8d54f11a21e0b00f3f0c03" Dec 11 14:02:20 crc kubenswrapper[5050]: I1211 14:02:20.976106 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94cd9a09-8820-445b-9da3-1a1b90391048-utilities" (OuterVolumeSpecName: "utilities") pod "94cd9a09-8820-445b-9da3-1a1b90391048" (UID: "94cd9a09-8820-445b-9da3-1a1b90391048"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 14:02:20 crc kubenswrapper[5050]: E1211 14:02:20.979171 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"993d911c980036d208bb43e8f3d925fac09571919f8d54f11a21e0b00f3f0c03\": container with ID starting with 993d911c980036d208bb43e8f3d925fac09571919f8d54f11a21e0b00f3f0c03 not found: ID does not exist" containerID="993d911c980036d208bb43e8f3d925fac09571919f8d54f11a21e0b00f3f0c03" Dec 11 14:02:20 crc kubenswrapper[5050]: I1211 14:02:20.979205 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"993d911c980036d208bb43e8f3d925fac09571919f8d54f11a21e0b00f3f0c03"} err="failed to get container status \"993d911c980036d208bb43e8f3d925fac09571919f8d54f11a21e0b00f3f0c03\": rpc error: code = NotFound desc = could not find container \"993d911c980036d208bb43e8f3d925fac09571919f8d54f11a21e0b00f3f0c03\": container with ID starting with 993d911c980036d208bb43e8f3d925fac09571919f8d54f11a21e0b00f3f0c03 not found: ID does not exist" Dec 11 14:02:20 crc kubenswrapper[5050]: I1211 14:02:20.979373 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94cd9a09-8820-445b-9da3-1a1b90391048-kube-api-access-462dg" (OuterVolumeSpecName: "kube-api-access-462dg") pod "94cd9a09-8820-445b-9da3-1a1b90391048" (UID: "94cd9a09-8820-445b-9da3-1a1b90391048"). InnerVolumeSpecName "kube-api-access-462dg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 14:02:21 crc kubenswrapper[5050]: I1211 14:02:21.039680 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94cd9a09-8820-445b-9da3-1a1b90391048-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "94cd9a09-8820-445b-9da3-1a1b90391048" (UID: "94cd9a09-8820-445b-9da3-1a1b90391048"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 14:02:21 crc kubenswrapper[5050]: I1211 14:02:21.075273 5050 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94cd9a09-8820-445b-9da3-1a1b90391048-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 14:02:21 crc kubenswrapper[5050]: I1211 14:02:21.075357 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-462dg\" (UniqueName: \"kubernetes.io/projected/94cd9a09-8820-445b-9da3-1a1b90391048-kube-api-access-462dg\") on node \"crc\" DevicePath \"\"" Dec 11 14:02:21 crc kubenswrapper[5050]: I1211 14:02:21.075376 5050 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94cd9a09-8820-445b-9da3-1a1b90391048-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 14:02:21 crc kubenswrapper[5050]: I1211 14:02:21.199558 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rhcfr"] Dec 11 14:02:21 crc kubenswrapper[5050]: I1211 14:02:21.205703 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-rhcfr"] Dec 11 14:02:21 crc kubenswrapper[5050]: I1211 14:02:21.554190 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94cd9a09-8820-445b-9da3-1a1b90391048" path="/var/lib/kubelet/pods/94cd9a09-8820-445b-9da3-1a1b90391048/volumes" Dec 11 14:02:21 crc kubenswrapper[5050]: I1211 14:02:21.874549 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-rgsmz" event={"ID":"5bfc8ad2-08e1-4051-a1d0-2c9dc26c98d4","Type":"ContainerStarted","Data":"f02ddfa4960768216807e2636c418bdb162188f863b192dda9f2200e9093e613"} Dec 11 14:02:28 crc kubenswrapper[5050]: I1211 14:02:28.946915 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-rgsmz" event={"ID":"5bfc8ad2-08e1-4051-a1d0-2c9dc26c98d4","Type":"ContainerStarted","Data":"c435c8e654b1d54374ed2dcfb27f0fc65cefda85dd453a9e9c5cbe401f5d2010"} Dec 11 14:02:28 crc kubenswrapper[5050]: I1211 14:02:28.970829 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-rgsmz" podStartSLOduration=1.312473942 podStartE2EDuration="8.970804858s" podCreationTimestamp="2025-12-11 14:02:20 +0000 UTC" firstStartedPulling="2025-12-11 14:02:20.980951474 +0000 UTC m=+831.824674060" lastFinishedPulling="2025-12-11 14:02:28.63928239 +0000 UTC m=+839.483004976" observedRunningTime="2025-12-11 14:02:28.969805381 +0000 UTC m=+839.813527967" watchObservedRunningTime="2025-12-11 14:02:28.970804858 +0000 UTC m=+839.814527444" Dec 11 14:02:32 crc kubenswrapper[5050]: I1211 14:02:32.836360 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-jblnx"] Dec 11 14:02:32 crc kubenswrapper[5050]: E1211 14:02:32.837728 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94cd9a09-8820-445b-9da3-1a1b90391048" containerName="registry-server" Dec 11 14:02:32 crc kubenswrapper[5050]: I1211 14:02:32.837746 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="94cd9a09-8820-445b-9da3-1a1b90391048" containerName="registry-server" Dec 11 14:02:32 crc kubenswrapper[5050]: E1211 14:02:32.837768 5050 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="94cd9a09-8820-445b-9da3-1a1b90391048" containerName="extract-utilities" Dec 11 14:02:32 crc kubenswrapper[5050]: I1211 14:02:32.837776 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="94cd9a09-8820-445b-9da3-1a1b90391048" containerName="extract-utilities" Dec 11 14:02:32 crc kubenswrapper[5050]: E1211 14:02:32.837796 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94cd9a09-8820-445b-9da3-1a1b90391048" containerName="extract-content" Dec 11 14:02:32 crc kubenswrapper[5050]: I1211 14:02:32.837804 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="94cd9a09-8820-445b-9da3-1a1b90391048" containerName="extract-content" Dec 11 14:02:32 crc kubenswrapper[5050]: I1211 14:02:32.838076 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="94cd9a09-8820-445b-9da3-1a1b90391048" containerName="registry-server" Dec 11 14:02:32 crc kubenswrapper[5050]: I1211 14:02:32.839159 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-f4fb5df64-jblnx" Dec 11 14:02:32 crc kubenswrapper[5050]: I1211 14:02:32.849820 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Dec 11 14:02:32 crc kubenswrapper[5050]: I1211 14:02:32.850064 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Dec 11 14:02:32 crc kubenswrapper[5050]: I1211 14:02:32.851117 5050 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-kbhwz" Dec 11 14:02:32 crc kubenswrapper[5050]: I1211 14:02:32.867584 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-jblnx"] Dec 11 14:02:32 crc kubenswrapper[5050]: I1211 14:02:32.971287 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2c12a493-edc8-4747-93ca-ac8b510cb7a3-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-jblnx\" (UID: \"2c12a493-edc8-4747-93ca-ac8b510cb7a3\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-jblnx" Dec 11 14:02:32 crc kubenswrapper[5050]: I1211 14:02:32.971335 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gtqgt\" (UniqueName: \"kubernetes.io/projected/2c12a493-edc8-4747-93ca-ac8b510cb7a3-kube-api-access-gtqgt\") pod \"cert-manager-webhook-f4fb5df64-jblnx\" (UID: \"2c12a493-edc8-4747-93ca-ac8b510cb7a3\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-jblnx" Dec 11 14:02:33 crc kubenswrapper[5050]: I1211 14:02:33.072810 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2c12a493-edc8-4747-93ca-ac8b510cb7a3-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-jblnx\" (UID: \"2c12a493-edc8-4747-93ca-ac8b510cb7a3\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-jblnx" Dec 11 14:02:33 crc kubenswrapper[5050]: I1211 14:02:33.072871 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gtqgt\" (UniqueName: \"kubernetes.io/projected/2c12a493-edc8-4747-93ca-ac8b510cb7a3-kube-api-access-gtqgt\") pod \"cert-manager-webhook-f4fb5df64-jblnx\" (UID: \"2c12a493-edc8-4747-93ca-ac8b510cb7a3\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-jblnx" Dec 11 14:02:33 crc kubenswrapper[5050]: I1211 14:02:33.091848 5050 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gtqgt\" (UniqueName: \"kubernetes.io/projected/2c12a493-edc8-4747-93ca-ac8b510cb7a3-kube-api-access-gtqgt\") pod \"cert-manager-webhook-f4fb5df64-jblnx\" (UID: \"2c12a493-edc8-4747-93ca-ac8b510cb7a3\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-jblnx" Dec 11 14:02:33 crc kubenswrapper[5050]: I1211 14:02:33.092320 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2c12a493-edc8-4747-93ca-ac8b510cb7a3-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-jblnx\" (UID: \"2c12a493-edc8-4747-93ca-ac8b510cb7a3\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-jblnx" Dec 11 14:02:33 crc kubenswrapper[5050]: I1211 14:02:33.167915 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-f4fb5df64-jblnx" Dec 11 14:02:33 crc kubenswrapper[5050]: I1211 14:02:33.378522 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-jblnx"] Dec 11 14:02:33 crc kubenswrapper[5050]: I1211 14:02:33.981095 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-f4fb5df64-jblnx" event={"ID":"2c12a493-edc8-4747-93ca-ac8b510cb7a3","Type":"ContainerStarted","Data":"31824bacc18cabfc9b18367579d5a2067f5ff053e5b8dd9a9e5243c96bd1d546"} Dec 11 14:02:35 crc kubenswrapper[5050]: I1211 14:02:35.404336 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-wksw5"] Dec 11 14:02:35 crc kubenswrapper[5050]: I1211 14:02:35.405508 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-855d9ccff4-wksw5" Dec 11 14:02:35 crc kubenswrapper[5050]: I1211 14:02:35.407864 5050 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-sxrxp" Dec 11 14:02:35 crc kubenswrapper[5050]: I1211 14:02:35.410683 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2ebcfee9-160d-4440-b885-66ae4d5d66a7-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-wksw5\" (UID: \"2ebcfee9-160d-4440-b885-66ae4d5d66a7\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-wksw5" Dec 11 14:02:35 crc kubenswrapper[5050]: I1211 14:02:35.410764 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mn65l\" (UniqueName: \"kubernetes.io/projected/2ebcfee9-160d-4440-b885-66ae4d5d66a7-kube-api-access-mn65l\") pod \"cert-manager-cainjector-855d9ccff4-wksw5\" (UID: \"2ebcfee9-160d-4440-b885-66ae4d5d66a7\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-wksw5" Dec 11 14:02:35 crc kubenswrapper[5050]: I1211 14:02:35.415406 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-wksw5"] Dec 11 14:02:35 crc kubenswrapper[5050]: I1211 14:02:35.512583 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2ebcfee9-160d-4440-b885-66ae4d5d66a7-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-wksw5\" (UID: \"2ebcfee9-160d-4440-b885-66ae4d5d66a7\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-wksw5" Dec 11 14:02:35 crc kubenswrapper[5050]: I1211 14:02:35.512664 5050 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mn65l\" (UniqueName: \"kubernetes.io/projected/2ebcfee9-160d-4440-b885-66ae4d5d66a7-kube-api-access-mn65l\") pod \"cert-manager-cainjector-855d9ccff4-wksw5\" (UID: \"2ebcfee9-160d-4440-b885-66ae4d5d66a7\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-wksw5" Dec 11 14:02:35 crc kubenswrapper[5050]: I1211 14:02:35.531418 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2ebcfee9-160d-4440-b885-66ae4d5d66a7-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-wksw5\" (UID: \"2ebcfee9-160d-4440-b885-66ae4d5d66a7\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-wksw5" Dec 11 14:02:35 crc kubenswrapper[5050]: I1211 14:02:35.531568 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mn65l\" (UniqueName: \"kubernetes.io/projected/2ebcfee9-160d-4440-b885-66ae4d5d66a7-kube-api-access-mn65l\") pod \"cert-manager-cainjector-855d9ccff4-wksw5\" (UID: \"2ebcfee9-160d-4440-b885-66ae4d5d66a7\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-wksw5" Dec 11 14:02:35 crc kubenswrapper[5050]: I1211 14:02:35.732067 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-855d9ccff4-wksw5" Dec 11 14:02:36 crc kubenswrapper[5050]: I1211 14:02:36.171328 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-wksw5"] Dec 11 14:02:36 crc kubenswrapper[5050]: W1211 14:02:36.182888 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2ebcfee9_160d_4440_b885_66ae4d5d66a7.slice/crio-815dbafa079f94fa27613e735d2aa7617b3c51d0d9dadd34627c43f95585689f WatchSource:0}: Error finding container 815dbafa079f94fa27613e735d2aa7617b3c51d0d9dadd34627c43f95585689f: Status 404 returned error can't find the container with id 815dbafa079f94fa27613e735d2aa7617b3c51d0d9dadd34627c43f95585689f Dec 11 14:02:37 crc kubenswrapper[5050]: I1211 14:02:37.004874 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-855d9ccff4-wksw5" event={"ID":"2ebcfee9-160d-4440-b885-66ae4d5d66a7","Type":"ContainerStarted","Data":"815dbafa079f94fa27613e735d2aa7617b3c51d0d9dadd34627c43f95585689f"} Dec 11 14:02:40 crc kubenswrapper[5050]: I1211 14:02:40.797587 5050 patch_prober.go:28] interesting pod/machine-config-daemon-wcb2s container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 14:02:40 crc kubenswrapper[5050]: I1211 14:02:40.798079 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 14:02:40 crc kubenswrapper[5050]: I1211 14:02:40.798124 5050 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" Dec 11 14:02:40 crc kubenswrapper[5050]: I1211 14:02:40.798652 5050 kuberuntime_manager.go:1027] "Message for Container of pod" 
containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d1f524fdfc663274504f05a5df8397287ddcfa403493dc19698f5ccd6febdfcc"} pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 11 14:02:40 crc kubenswrapper[5050]: I1211 14:02:40.798705 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" containerName="machine-config-daemon" containerID="cri-o://d1f524fdfc663274504f05a5df8397287ddcfa403493dc19698f5ccd6febdfcc" gracePeriod=600 Dec 11 14:02:42 crc kubenswrapper[5050]: I1211 14:02:42.042387 5050 generic.go:334] "Generic (PLEG): container finished" podID="7e849b2e-7cd7-4e49-acd2-deab139c699a" containerID="d1f524fdfc663274504f05a5df8397287ddcfa403493dc19698f5ccd6febdfcc" exitCode=0 Dec 11 14:02:42 crc kubenswrapper[5050]: I1211 14:02:42.042603 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" event={"ID":"7e849b2e-7cd7-4e49-acd2-deab139c699a","Type":"ContainerDied","Data":"d1f524fdfc663274504f05a5df8397287ddcfa403493dc19698f5ccd6febdfcc"} Dec 11 14:02:42 crc kubenswrapper[5050]: I1211 14:02:42.043037 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" event={"ID":"7e849b2e-7cd7-4e49-acd2-deab139c699a","Type":"ContainerStarted","Data":"9bb2e8f5b00b062d127a620edb8af7fea8c346ac42e77621e28b4dafad3e5aa5"} Dec 11 14:02:42 crc kubenswrapper[5050]: I1211 14:02:42.043070 5050 scope.go:117] "RemoveContainer" containerID="6bf16e768e2e41dab60a5957da98ce41ef5df422f328d1b4578bc9e09da04537" Dec 11 14:02:43 crc kubenswrapper[5050]: I1211 14:02:43.051187 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-855d9ccff4-wksw5" event={"ID":"2ebcfee9-160d-4440-b885-66ae4d5d66a7","Type":"ContainerStarted","Data":"b5b238e11deb76d8ada2567393d84664750165ff357bd9ffde0f805ab53433f1"} Dec 11 14:02:43 crc kubenswrapper[5050]: I1211 14:02:43.053438 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-f4fb5df64-jblnx" event={"ID":"2c12a493-edc8-4747-93ca-ac8b510cb7a3","Type":"ContainerStarted","Data":"9ab5106146b2b6ac8de3b6fbe9fc6eb58aab15fe9939fdd75963208ac5465a8a"} Dec 11 14:02:43 crc kubenswrapper[5050]: I1211 14:02:43.053643 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-f4fb5df64-jblnx" Dec 11 14:02:43 crc kubenswrapper[5050]: I1211 14:02:43.068480 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-855d9ccff4-wksw5" podStartSLOduration=2.403022639 podStartE2EDuration="8.06845252s" podCreationTimestamp="2025-12-11 14:02:35 +0000 UTC" firstStartedPulling="2025-12-11 14:02:36.185607634 +0000 UTC m=+847.029330220" lastFinishedPulling="2025-12-11 14:02:41.851037515 +0000 UTC m=+852.694760101" observedRunningTime="2025-12-11 14:02:43.065993484 +0000 UTC m=+853.909716070" watchObservedRunningTime="2025-12-11 14:02:43.06845252 +0000 UTC m=+853.912175106" Dec 11 14:02:43 crc kubenswrapper[5050]: I1211 14:02:43.087108 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-f4fb5df64-jblnx" podStartSLOduration=2.58776246 podStartE2EDuration="11.087091333s" 
podCreationTimestamp="2025-12-11 14:02:32 +0000 UTC" firstStartedPulling="2025-12-11 14:02:33.400968201 +0000 UTC m=+844.244690787" lastFinishedPulling="2025-12-11 14:02:41.900297074 +0000 UTC m=+852.744019660" observedRunningTime="2025-12-11 14:02:43.086017694 +0000 UTC m=+853.929740280" watchObservedRunningTime="2025-12-11 14:02:43.087091333 +0000 UTC m=+853.930813919" Dec 11 14:02:48 crc kubenswrapper[5050]: I1211 14:02:48.170987 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-f4fb5df64-jblnx" Dec 11 14:02:51 crc kubenswrapper[5050]: I1211 14:02:51.855220 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-86cb77c54b-9sn7j"] Dec 11 14:02:51 crc kubenswrapper[5050]: I1211 14:02:51.856699 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-86cb77c54b-9sn7j" Dec 11 14:02:51 crc kubenswrapper[5050]: I1211 14:02:51.864930 5050 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-8z6ch" Dec 11 14:02:51 crc kubenswrapper[5050]: I1211 14:02:51.866769 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-86cb77c54b-9sn7j"] Dec 11 14:02:51 crc kubenswrapper[5050]: I1211 14:02:51.944353 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/03c178a1-6fd8-4e37-8894-bcde36cef2b5-bound-sa-token\") pod \"cert-manager-86cb77c54b-9sn7j\" (UID: \"03c178a1-6fd8-4e37-8894-bcde36cef2b5\") " pod="cert-manager/cert-manager-86cb77c54b-9sn7j" Dec 11 14:02:51 crc kubenswrapper[5050]: I1211 14:02:51.944423 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ls55\" (UniqueName: \"kubernetes.io/projected/03c178a1-6fd8-4e37-8894-bcde36cef2b5-kube-api-access-6ls55\") pod \"cert-manager-86cb77c54b-9sn7j\" (UID: \"03c178a1-6fd8-4e37-8894-bcde36cef2b5\") " pod="cert-manager/cert-manager-86cb77c54b-9sn7j" Dec 11 14:02:52 crc kubenswrapper[5050]: I1211 14:02:52.045976 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6ls55\" (UniqueName: \"kubernetes.io/projected/03c178a1-6fd8-4e37-8894-bcde36cef2b5-kube-api-access-6ls55\") pod \"cert-manager-86cb77c54b-9sn7j\" (UID: \"03c178a1-6fd8-4e37-8894-bcde36cef2b5\") " pod="cert-manager/cert-manager-86cb77c54b-9sn7j" Dec 11 14:02:52 crc kubenswrapper[5050]: I1211 14:02:52.046129 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/03c178a1-6fd8-4e37-8894-bcde36cef2b5-bound-sa-token\") pod \"cert-manager-86cb77c54b-9sn7j\" (UID: \"03c178a1-6fd8-4e37-8894-bcde36cef2b5\") " pod="cert-manager/cert-manager-86cb77c54b-9sn7j" Dec 11 14:02:52 crc kubenswrapper[5050]: I1211 14:02:52.076742 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6ls55\" (UniqueName: \"kubernetes.io/projected/03c178a1-6fd8-4e37-8894-bcde36cef2b5-kube-api-access-6ls55\") pod \"cert-manager-86cb77c54b-9sn7j\" (UID: \"03c178a1-6fd8-4e37-8894-bcde36cef2b5\") " pod="cert-manager/cert-manager-86cb77c54b-9sn7j" Dec 11 14:02:52 crc kubenswrapper[5050]: I1211 14:02:52.076833 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/03c178a1-6fd8-4e37-8894-bcde36cef2b5-bound-sa-token\") 
pod \"cert-manager-86cb77c54b-9sn7j\" (UID: \"03c178a1-6fd8-4e37-8894-bcde36cef2b5\") " pod="cert-manager/cert-manager-86cb77c54b-9sn7j" Dec 11 14:02:52 crc kubenswrapper[5050]: I1211 14:02:52.181870 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-86cb77c54b-9sn7j" Dec 11 14:02:52 crc kubenswrapper[5050]: I1211 14:02:52.410742 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-86cb77c54b-9sn7j"] Dec 11 14:02:52 crc kubenswrapper[5050]: W1211 14:02:52.420294 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod03c178a1_6fd8_4e37_8894_bcde36cef2b5.slice/crio-83c33a40f9b1bd6b422b053d14a9c918032bdc15ef2a56fc737197aedeed8f86 WatchSource:0}: Error finding container 83c33a40f9b1bd6b422b053d14a9c918032bdc15ef2a56fc737197aedeed8f86: Status 404 returned error can't find the container with id 83c33a40f9b1bd6b422b053d14a9c918032bdc15ef2a56fc737197aedeed8f86 Dec 11 14:02:53 crc kubenswrapper[5050]: I1211 14:02:53.120724 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-86cb77c54b-9sn7j" event={"ID":"03c178a1-6fd8-4e37-8894-bcde36cef2b5","Type":"ContainerStarted","Data":"fdd20fe3de48744d9d76bc3a7bc81e20bbb24203abfebe2b9b047ef24da55d83"} Dec 11 14:02:53 crc kubenswrapper[5050]: I1211 14:02:53.121233 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-86cb77c54b-9sn7j" event={"ID":"03c178a1-6fd8-4e37-8894-bcde36cef2b5","Type":"ContainerStarted","Data":"83c33a40f9b1bd6b422b053d14a9c918032bdc15ef2a56fc737197aedeed8f86"} Dec 11 14:02:53 crc kubenswrapper[5050]: I1211 14:02:53.141756 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-86cb77c54b-9sn7j" podStartSLOduration=2.141733051 podStartE2EDuration="2.141733051s" podCreationTimestamp="2025-12-11 14:02:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 14:02:53.136164821 +0000 UTC m=+863.979887407" watchObservedRunningTime="2025-12-11 14:02:53.141733051 +0000 UTC m=+863.985455637" Dec 11 14:03:01 crc kubenswrapper[5050]: I1211 14:03:01.556454 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-cf689"] Dec 11 14:03:01 crc kubenswrapper[5050]: I1211 14:03:01.558192 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-cf689" Dec 11 14:03:01 crc kubenswrapper[5050]: I1211 14:03:01.560622 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Dec 11 14:03:01 crc kubenswrapper[5050]: I1211 14:03:01.560642 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-f86tg" Dec 11 14:03:01 crc kubenswrapper[5050]: I1211 14:03:01.565052 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Dec 11 14:03:01 crc kubenswrapper[5050]: I1211 14:03:01.575063 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-cf689"] Dec 11 14:03:01 crc kubenswrapper[5050]: I1211 14:03:01.607507 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zx5q\" (UniqueName: \"kubernetes.io/projected/6a811eff-10cc-423b-bdd3-c4b02e227507-kube-api-access-6zx5q\") pod \"openstack-operator-index-cf689\" (UID: \"6a811eff-10cc-423b-bdd3-c4b02e227507\") " pod="openstack-operators/openstack-operator-index-cf689" Dec 11 14:03:01 crc kubenswrapper[5050]: I1211 14:03:01.709904 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6zx5q\" (UniqueName: \"kubernetes.io/projected/6a811eff-10cc-423b-bdd3-c4b02e227507-kube-api-access-6zx5q\") pod \"openstack-operator-index-cf689\" (UID: \"6a811eff-10cc-423b-bdd3-c4b02e227507\") " pod="openstack-operators/openstack-operator-index-cf689" Dec 11 14:03:01 crc kubenswrapper[5050]: I1211 14:03:01.741638 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6zx5q\" (UniqueName: \"kubernetes.io/projected/6a811eff-10cc-423b-bdd3-c4b02e227507-kube-api-access-6zx5q\") pod \"openstack-operator-index-cf689\" (UID: \"6a811eff-10cc-423b-bdd3-c4b02e227507\") " pod="openstack-operators/openstack-operator-index-cf689" Dec 11 14:03:01 crc kubenswrapper[5050]: I1211 14:03:01.884479 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-cf689" Dec 11 14:03:02 crc kubenswrapper[5050]: I1211 14:03:02.158039 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-cf689"] Dec 11 14:03:02 crc kubenswrapper[5050]: I1211 14:03:02.186409 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-cf689" event={"ID":"6a811eff-10cc-423b-bdd3-c4b02e227507","Type":"ContainerStarted","Data":"2ae2144086839c9e1db0a5a68451d6112b3e937ce8a0ff2c209c5bd42446fae7"} Dec 11 14:03:04 crc kubenswrapper[5050]: I1211 14:03:04.209657 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-cf689" event={"ID":"6a811eff-10cc-423b-bdd3-c4b02e227507","Type":"ContainerStarted","Data":"e040c6bdb9b28ba75ba3760d823ce699d30661b8c4a34209639f93fdff1bba24"} Dec 11 14:03:04 crc kubenswrapper[5050]: I1211 14:03:04.226884 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-cf689" podStartSLOduration=1.633534571 podStartE2EDuration="3.226864202s" podCreationTimestamp="2025-12-11 14:03:01 +0000 UTC" firstStartedPulling="2025-12-11 14:03:02.171057319 +0000 UTC m=+873.014779925" lastFinishedPulling="2025-12-11 14:03:03.76438697 +0000 UTC m=+874.608109556" observedRunningTime="2025-12-11 14:03:04.222499054 +0000 UTC m=+875.066221640" watchObservedRunningTime="2025-12-11 14:03:04.226864202 +0000 UTC m=+875.070586788" Dec 11 14:03:05 crc kubenswrapper[5050]: I1211 14:03:05.050255 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-cf689"] Dec 11 14:03:05 crc kubenswrapper[5050]: I1211 14:03:05.663261 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-phq7c"] Dec 11 14:03:05 crc kubenswrapper[5050]: I1211 14:03:05.665259 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-phq7c" Dec 11 14:03:05 crc kubenswrapper[5050]: I1211 14:03:05.670565 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-phq7c"] Dec 11 14:03:05 crc kubenswrapper[5050]: I1211 14:03:05.767526 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58css\" (UniqueName: \"kubernetes.io/projected/f86ca167-987a-4dfb-8964-bca468fa2994-kube-api-access-58css\") pod \"openstack-operator-index-phq7c\" (UID: \"f86ca167-987a-4dfb-8964-bca468fa2994\") " pod="openstack-operators/openstack-operator-index-phq7c" Dec 11 14:03:05 crc kubenswrapper[5050]: I1211 14:03:05.868728 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-58css\" (UniqueName: \"kubernetes.io/projected/f86ca167-987a-4dfb-8964-bca468fa2994-kube-api-access-58css\") pod \"openstack-operator-index-phq7c\" (UID: \"f86ca167-987a-4dfb-8964-bca468fa2994\") " pod="openstack-operators/openstack-operator-index-phq7c" Dec 11 14:03:05 crc kubenswrapper[5050]: I1211 14:03:05.898710 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-58css\" (UniqueName: \"kubernetes.io/projected/f86ca167-987a-4dfb-8964-bca468fa2994-kube-api-access-58css\") pod \"openstack-operator-index-phq7c\" (UID: \"f86ca167-987a-4dfb-8964-bca468fa2994\") " pod="openstack-operators/openstack-operator-index-phq7c" Dec 11 14:03:05 crc kubenswrapper[5050]: I1211 14:03:05.988887 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-phq7c" Dec 11 14:03:06 crc kubenswrapper[5050]: I1211 14:03:06.221662 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-cf689" podUID="6a811eff-10cc-423b-bdd3-c4b02e227507" containerName="registry-server" containerID="cri-o://e040c6bdb9b28ba75ba3760d823ce699d30661b8c4a34209639f93fdff1bba24" gracePeriod=2 Dec 11 14:03:06 crc kubenswrapper[5050]: I1211 14:03:06.413215 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-phq7c"] Dec 11 14:03:06 crc kubenswrapper[5050]: I1211 14:03:06.567678 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-cf689" Dec 11 14:03:06 crc kubenswrapper[5050]: I1211 14:03:06.679716 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6zx5q\" (UniqueName: \"kubernetes.io/projected/6a811eff-10cc-423b-bdd3-c4b02e227507-kube-api-access-6zx5q\") pod \"6a811eff-10cc-423b-bdd3-c4b02e227507\" (UID: \"6a811eff-10cc-423b-bdd3-c4b02e227507\") " Dec 11 14:03:06 crc kubenswrapper[5050]: I1211 14:03:06.687315 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a811eff-10cc-423b-bdd3-c4b02e227507-kube-api-access-6zx5q" (OuterVolumeSpecName: "kube-api-access-6zx5q") pod "6a811eff-10cc-423b-bdd3-c4b02e227507" (UID: "6a811eff-10cc-423b-bdd3-c4b02e227507"). InnerVolumeSpecName "kube-api-access-6zx5q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 14:03:06 crc kubenswrapper[5050]: I1211 14:03:06.782125 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6zx5q\" (UniqueName: \"kubernetes.io/projected/6a811eff-10cc-423b-bdd3-c4b02e227507-kube-api-access-6zx5q\") on node \"crc\" DevicePath \"\"" Dec 11 14:03:07 crc kubenswrapper[5050]: I1211 14:03:07.229760 5050 generic.go:334] "Generic (PLEG): container finished" podID="6a811eff-10cc-423b-bdd3-c4b02e227507" containerID="e040c6bdb9b28ba75ba3760d823ce699d30661b8c4a34209639f93fdff1bba24" exitCode=0 Dec 11 14:03:07 crc kubenswrapper[5050]: I1211 14:03:07.229829 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-cf689" event={"ID":"6a811eff-10cc-423b-bdd3-c4b02e227507","Type":"ContainerDied","Data":"e040c6bdb9b28ba75ba3760d823ce699d30661b8c4a34209639f93fdff1bba24"} Dec 11 14:03:07 crc kubenswrapper[5050]: I1211 14:03:07.230243 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-cf689" event={"ID":"6a811eff-10cc-423b-bdd3-c4b02e227507","Type":"ContainerDied","Data":"2ae2144086839c9e1db0a5a68451d6112b3e937ce8a0ff2c209c5bd42446fae7"} Dec 11 14:03:07 crc kubenswrapper[5050]: I1211 14:03:07.229922 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-cf689" Dec 11 14:03:07 crc kubenswrapper[5050]: I1211 14:03:07.230268 5050 scope.go:117] "RemoveContainer" containerID="e040c6bdb9b28ba75ba3760d823ce699d30661b8c4a34209639f93fdff1bba24" Dec 11 14:03:07 crc kubenswrapper[5050]: I1211 14:03:07.234158 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-phq7c" event={"ID":"f86ca167-987a-4dfb-8964-bca468fa2994","Type":"ContainerStarted","Data":"26c17df01cbe3aad03817bb7b45cc58c5cbf2bf6fff09293efb59f0beb868136"} Dec 11 14:03:07 crc kubenswrapper[5050]: I1211 14:03:07.234187 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-phq7c" event={"ID":"f86ca167-987a-4dfb-8964-bca468fa2994","Type":"ContainerStarted","Data":"4430a01d1af86e759257c5e6bdf98a756dc2b5aa6118ee82d012ff58fb117adb"} Dec 11 14:03:07 crc kubenswrapper[5050]: I1211 14:03:07.248175 5050 scope.go:117] "RemoveContainer" containerID="e040c6bdb9b28ba75ba3760d823ce699d30661b8c4a34209639f93fdff1bba24" Dec 11 14:03:07 crc kubenswrapper[5050]: E1211 14:03:07.248614 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e040c6bdb9b28ba75ba3760d823ce699d30661b8c4a34209639f93fdff1bba24\": container with ID starting with e040c6bdb9b28ba75ba3760d823ce699d30661b8c4a34209639f93fdff1bba24 not found: ID does not exist" containerID="e040c6bdb9b28ba75ba3760d823ce699d30661b8c4a34209639f93fdff1bba24" Dec 11 14:03:07 crc kubenswrapper[5050]: I1211 14:03:07.248651 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e040c6bdb9b28ba75ba3760d823ce699d30661b8c4a34209639f93fdff1bba24"} err="failed to get container status \"e040c6bdb9b28ba75ba3760d823ce699d30661b8c4a34209639f93fdff1bba24\": rpc error: code = NotFound desc = could not find container \"e040c6bdb9b28ba75ba3760d823ce699d30661b8c4a34209639f93fdff1bba24\": container with ID starting with e040c6bdb9b28ba75ba3760d823ce699d30661b8c4a34209639f93fdff1bba24 not found: ID does not exist" Dec 11 14:03:07 crc kubenswrapper[5050]: 
I1211 14:03:07.269862 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-phq7c" podStartSLOduration=1.801874117 podStartE2EDuration="2.269837497s" podCreationTimestamp="2025-12-11 14:03:05 +0000 UTC" firstStartedPulling="2025-12-11 14:03:06.446588459 +0000 UTC m=+877.290311045" lastFinishedPulling="2025-12-11 14:03:06.914551839 +0000 UTC m=+877.758274425" observedRunningTime="2025-12-11 14:03:07.257877225 +0000 UTC m=+878.101599831" watchObservedRunningTime="2025-12-11 14:03:07.269837497 +0000 UTC m=+878.113560093" Dec 11 14:03:07 crc kubenswrapper[5050]: I1211 14:03:07.271525 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-cf689"] Dec 11 14:03:07 crc kubenswrapper[5050]: I1211 14:03:07.277133 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-cf689"] Dec 11 14:03:07 crc kubenswrapper[5050]: I1211 14:03:07.554249 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a811eff-10cc-423b-bdd3-c4b02e227507" path="/var/lib/kubelet/pods/6a811eff-10cc-423b-bdd3-c4b02e227507/volumes" Dec 11 14:03:15 crc kubenswrapper[5050]: I1211 14:03:15.989963 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-phq7c" Dec 11 14:03:15 crc kubenswrapper[5050]: I1211 14:03:15.991385 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-phq7c" Dec 11 14:03:16 crc kubenswrapper[5050]: I1211 14:03:16.025393 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-phq7c" Dec 11 14:03:16 crc kubenswrapper[5050]: I1211 14:03:16.327301 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-phq7c" Dec 11 14:03:17 crc kubenswrapper[5050]: I1211 14:03:17.095715 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/65ebaa037d815792c1ca03416167368b18d08b26771bfdbda948c8e9d1pxp9z"] Dec 11 14:03:17 crc kubenswrapper[5050]: E1211 14:03:17.096053 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a811eff-10cc-423b-bdd3-c4b02e227507" containerName="registry-server" Dec 11 14:03:17 crc kubenswrapper[5050]: I1211 14:03:17.096067 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a811eff-10cc-423b-bdd3-c4b02e227507" containerName="registry-server" Dec 11 14:03:17 crc kubenswrapper[5050]: I1211 14:03:17.096226 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a811eff-10cc-423b-bdd3-c4b02e227507" containerName="registry-server" Dec 11 14:03:17 crc kubenswrapper[5050]: I1211 14:03:17.097262 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/65ebaa037d815792c1ca03416167368b18d08b26771bfdbda948c8e9d1pxp9z" Dec 11 14:03:17 crc kubenswrapper[5050]: I1211 14:03:17.100602 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-nxz9v" Dec 11 14:03:17 crc kubenswrapper[5050]: I1211 14:03:17.111329 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/65ebaa037d815792c1ca03416167368b18d08b26771bfdbda948c8e9d1pxp9z"] Dec 11 14:03:17 crc kubenswrapper[5050]: I1211 14:03:17.128588 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xsq9\" (UniqueName: \"kubernetes.io/projected/4bb370da-cbc6-4de2-9097-187f35450436-kube-api-access-7xsq9\") pod \"65ebaa037d815792c1ca03416167368b18d08b26771bfdbda948c8e9d1pxp9z\" (UID: \"4bb370da-cbc6-4de2-9097-187f35450436\") " pod="openstack-operators/65ebaa037d815792c1ca03416167368b18d08b26771bfdbda948c8e9d1pxp9z" Dec 11 14:03:17 crc kubenswrapper[5050]: I1211 14:03:17.129053 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4bb370da-cbc6-4de2-9097-187f35450436-util\") pod \"65ebaa037d815792c1ca03416167368b18d08b26771bfdbda948c8e9d1pxp9z\" (UID: \"4bb370da-cbc6-4de2-9097-187f35450436\") " pod="openstack-operators/65ebaa037d815792c1ca03416167368b18d08b26771bfdbda948c8e9d1pxp9z" Dec 11 14:03:17 crc kubenswrapper[5050]: I1211 14:03:17.129190 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4bb370da-cbc6-4de2-9097-187f35450436-bundle\") pod \"65ebaa037d815792c1ca03416167368b18d08b26771bfdbda948c8e9d1pxp9z\" (UID: \"4bb370da-cbc6-4de2-9097-187f35450436\") " pod="openstack-operators/65ebaa037d815792c1ca03416167368b18d08b26771bfdbda948c8e9d1pxp9z" Dec 11 14:03:17 crc kubenswrapper[5050]: I1211 14:03:17.229820 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7xsq9\" (UniqueName: \"kubernetes.io/projected/4bb370da-cbc6-4de2-9097-187f35450436-kube-api-access-7xsq9\") pod \"65ebaa037d815792c1ca03416167368b18d08b26771bfdbda948c8e9d1pxp9z\" (UID: \"4bb370da-cbc6-4de2-9097-187f35450436\") " pod="openstack-operators/65ebaa037d815792c1ca03416167368b18d08b26771bfdbda948c8e9d1pxp9z" Dec 11 14:03:17 crc kubenswrapper[5050]: I1211 14:03:17.229902 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4bb370da-cbc6-4de2-9097-187f35450436-util\") pod \"65ebaa037d815792c1ca03416167368b18d08b26771bfdbda948c8e9d1pxp9z\" (UID: \"4bb370da-cbc6-4de2-9097-187f35450436\") " pod="openstack-operators/65ebaa037d815792c1ca03416167368b18d08b26771bfdbda948c8e9d1pxp9z" Dec 11 14:03:17 crc kubenswrapper[5050]: I1211 14:03:17.229956 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4bb370da-cbc6-4de2-9097-187f35450436-bundle\") pod \"65ebaa037d815792c1ca03416167368b18d08b26771bfdbda948c8e9d1pxp9z\" (UID: \"4bb370da-cbc6-4de2-9097-187f35450436\") " pod="openstack-operators/65ebaa037d815792c1ca03416167368b18d08b26771bfdbda948c8e9d1pxp9z" Dec 11 14:03:17 crc kubenswrapper[5050]: I1211 14:03:17.230560 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/4bb370da-cbc6-4de2-9097-187f35450436-bundle\") pod \"65ebaa037d815792c1ca03416167368b18d08b26771bfdbda948c8e9d1pxp9z\" (UID: \"4bb370da-cbc6-4de2-9097-187f35450436\") " pod="openstack-operators/65ebaa037d815792c1ca03416167368b18d08b26771bfdbda948c8e9d1pxp9z" Dec 11 14:03:17 crc kubenswrapper[5050]: I1211 14:03:17.230841 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4bb370da-cbc6-4de2-9097-187f35450436-util\") pod \"65ebaa037d815792c1ca03416167368b18d08b26771bfdbda948c8e9d1pxp9z\" (UID: \"4bb370da-cbc6-4de2-9097-187f35450436\") " pod="openstack-operators/65ebaa037d815792c1ca03416167368b18d08b26771bfdbda948c8e9d1pxp9z" Dec 11 14:03:17 crc kubenswrapper[5050]: I1211 14:03:17.257934 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7xsq9\" (UniqueName: \"kubernetes.io/projected/4bb370da-cbc6-4de2-9097-187f35450436-kube-api-access-7xsq9\") pod \"65ebaa037d815792c1ca03416167368b18d08b26771bfdbda948c8e9d1pxp9z\" (UID: \"4bb370da-cbc6-4de2-9097-187f35450436\") " pod="openstack-operators/65ebaa037d815792c1ca03416167368b18d08b26771bfdbda948c8e9d1pxp9z" Dec 11 14:03:17 crc kubenswrapper[5050]: I1211 14:03:17.421490 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/65ebaa037d815792c1ca03416167368b18d08b26771bfdbda948c8e9d1pxp9z" Dec 11 14:03:17 crc kubenswrapper[5050]: I1211 14:03:17.622804 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/65ebaa037d815792c1ca03416167368b18d08b26771bfdbda948c8e9d1pxp9z"] Dec 11 14:03:18 crc kubenswrapper[5050]: I1211 14:03:18.310682 5050 generic.go:334] "Generic (PLEG): container finished" podID="4bb370da-cbc6-4de2-9097-187f35450436" containerID="1d33f718b9a0d03a88c63abcf59f75bda4a05700b008469f9403af03f40696fe" exitCode=0 Dec 11 14:03:18 crc kubenswrapper[5050]: I1211 14:03:18.310725 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/65ebaa037d815792c1ca03416167368b18d08b26771bfdbda948c8e9d1pxp9z" event={"ID":"4bb370da-cbc6-4de2-9097-187f35450436","Type":"ContainerDied","Data":"1d33f718b9a0d03a88c63abcf59f75bda4a05700b008469f9403af03f40696fe"} Dec 11 14:03:18 crc kubenswrapper[5050]: I1211 14:03:18.310751 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/65ebaa037d815792c1ca03416167368b18d08b26771bfdbda948c8e9d1pxp9z" event={"ID":"4bb370da-cbc6-4de2-9097-187f35450436","Type":"ContainerStarted","Data":"5116c9f851e3e4075d15385ec31a658a6de30260b5b18dd2ea0ecd2329a2a850"} Dec 11 14:03:20 crc kubenswrapper[5050]: I1211 14:03:20.327215 5050 generic.go:334] "Generic (PLEG): container finished" podID="4bb370da-cbc6-4de2-9097-187f35450436" containerID="da20abe58568f9684decf910aeb6f5865d14c909d811fe6807be175e9ace743d" exitCode=0 Dec 11 14:03:20 crc kubenswrapper[5050]: I1211 14:03:20.327289 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/65ebaa037d815792c1ca03416167368b18d08b26771bfdbda948c8e9d1pxp9z" event={"ID":"4bb370da-cbc6-4de2-9097-187f35450436","Type":"ContainerDied","Data":"da20abe58568f9684decf910aeb6f5865d14c909d811fe6807be175e9ace743d"} Dec 11 14:03:21 crc kubenswrapper[5050]: I1211 14:03:21.338183 5050 generic.go:334] "Generic (PLEG): container finished" podID="4bb370da-cbc6-4de2-9097-187f35450436" containerID="748245fcd6c1e7e9a069e777d63696f3b0d5c3c61654f045417bd97f9f8bea49" exitCode=0 Dec 11 14:03:21 crc kubenswrapper[5050]: I1211 14:03:21.338346 5050 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/65ebaa037d815792c1ca03416167368b18d08b26771bfdbda948c8e9d1pxp9z" event={"ID":"4bb370da-cbc6-4de2-9097-187f35450436","Type":"ContainerDied","Data":"748245fcd6c1e7e9a069e777d63696f3b0d5c3c61654f045417bd97f9f8bea49"} Dec 11 14:03:22 crc kubenswrapper[5050]: I1211 14:03:22.572686 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/65ebaa037d815792c1ca03416167368b18d08b26771bfdbda948c8e9d1pxp9z" Dec 11 14:03:22 crc kubenswrapper[5050]: I1211 14:03:22.705407 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4bb370da-cbc6-4de2-9097-187f35450436-bundle\") pod \"4bb370da-cbc6-4de2-9097-187f35450436\" (UID: \"4bb370da-cbc6-4de2-9097-187f35450436\") " Dec 11 14:03:22 crc kubenswrapper[5050]: I1211 14:03:22.705493 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4bb370da-cbc6-4de2-9097-187f35450436-util\") pod \"4bb370da-cbc6-4de2-9097-187f35450436\" (UID: \"4bb370da-cbc6-4de2-9097-187f35450436\") " Dec 11 14:03:22 crc kubenswrapper[5050]: I1211 14:03:22.705563 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7xsq9\" (UniqueName: \"kubernetes.io/projected/4bb370da-cbc6-4de2-9097-187f35450436-kube-api-access-7xsq9\") pod \"4bb370da-cbc6-4de2-9097-187f35450436\" (UID: \"4bb370da-cbc6-4de2-9097-187f35450436\") " Dec 11 14:03:22 crc kubenswrapper[5050]: I1211 14:03:22.706793 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4bb370da-cbc6-4de2-9097-187f35450436-bundle" (OuterVolumeSpecName: "bundle") pod "4bb370da-cbc6-4de2-9097-187f35450436" (UID: "4bb370da-cbc6-4de2-9097-187f35450436"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 14:03:22 crc kubenswrapper[5050]: I1211 14:03:22.712152 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb370da-cbc6-4de2-9097-187f35450436-kube-api-access-7xsq9" (OuterVolumeSpecName: "kube-api-access-7xsq9") pod "4bb370da-cbc6-4de2-9097-187f35450436" (UID: "4bb370da-cbc6-4de2-9097-187f35450436"). InnerVolumeSpecName "kube-api-access-7xsq9". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 14:03:22 crc kubenswrapper[5050]: I1211 14:03:22.727089 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4bb370da-cbc6-4de2-9097-187f35450436-util" (OuterVolumeSpecName: "util") pod "4bb370da-cbc6-4de2-9097-187f35450436" (UID: "4bb370da-cbc6-4de2-9097-187f35450436"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 14:03:22 crc kubenswrapper[5050]: I1211 14:03:22.807763 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7xsq9\" (UniqueName: \"kubernetes.io/projected/4bb370da-cbc6-4de2-9097-187f35450436-kube-api-access-7xsq9\") on node \"crc\" DevicePath \"\"" Dec 11 14:03:22 crc kubenswrapper[5050]: I1211 14:03:22.807818 5050 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4bb370da-cbc6-4de2-9097-187f35450436-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 14:03:22 crc kubenswrapper[5050]: I1211 14:03:22.807837 5050 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4bb370da-cbc6-4de2-9097-187f35450436-util\") on node \"crc\" DevicePath \"\"" Dec 11 14:03:23 crc kubenswrapper[5050]: I1211 14:03:23.352494 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/65ebaa037d815792c1ca03416167368b18d08b26771bfdbda948c8e9d1pxp9z" event={"ID":"4bb370da-cbc6-4de2-9097-187f35450436","Type":"ContainerDied","Data":"5116c9f851e3e4075d15385ec31a658a6de30260b5b18dd2ea0ecd2329a2a850"} Dec 11 14:03:23 crc kubenswrapper[5050]: I1211 14:03:23.352843 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5116c9f851e3e4075d15385ec31a658a6de30260b5b18dd2ea0ecd2329a2a850" Dec 11 14:03:23 crc kubenswrapper[5050]: I1211 14:03:23.352581 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/65ebaa037d815792c1ca03416167368b18d08b26771bfdbda948c8e9d1pxp9z" Dec 11 14:03:29 crc kubenswrapper[5050]: I1211 14:03:29.266219 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-operator-799b66f579-tqs2v"] Dec 11 14:03:29 crc kubenswrapper[5050]: E1211 14:03:29.267077 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bb370da-cbc6-4de2-9097-187f35450436" containerName="extract" Dec 11 14:03:29 crc kubenswrapper[5050]: I1211 14:03:29.267089 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bb370da-cbc6-4de2-9097-187f35450436" containerName="extract" Dec 11 14:03:29 crc kubenswrapper[5050]: E1211 14:03:29.267105 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bb370da-cbc6-4de2-9097-187f35450436" containerName="util" Dec 11 14:03:29 crc kubenswrapper[5050]: I1211 14:03:29.267111 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bb370da-cbc6-4de2-9097-187f35450436" containerName="util" Dec 11 14:03:29 crc kubenswrapper[5050]: E1211 14:03:29.267124 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bb370da-cbc6-4de2-9097-187f35450436" containerName="pull" Dec 11 14:03:29 crc kubenswrapper[5050]: I1211 14:03:29.267130 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bb370da-cbc6-4de2-9097-187f35450436" containerName="pull" Dec 11 14:03:29 crc kubenswrapper[5050]: I1211 14:03:29.267248 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="4bb370da-cbc6-4de2-9097-187f35450436" containerName="extract" Dec 11 14:03:29 crc kubenswrapper[5050]: I1211 14:03:29.267628 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-799b66f579-tqs2v" Dec 11 14:03:29 crc kubenswrapper[5050]: I1211 14:03:29.271222 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-operator-dockercfg-68tnd" Dec 11 14:03:29 crc kubenswrapper[5050]: I1211 14:03:29.294366 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-799b66f579-tqs2v"] Dec 11 14:03:29 crc kubenswrapper[5050]: I1211 14:03:29.392122 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9ml4s\" (UniqueName: \"kubernetes.io/projected/53a1a6d1-6999-4fa1-a0ce-a20b83f1f347-kube-api-access-9ml4s\") pod \"openstack-operator-controller-operator-799b66f579-tqs2v\" (UID: \"53a1a6d1-6999-4fa1-a0ce-a20b83f1f347\") " pod="openstack-operators/openstack-operator-controller-operator-799b66f579-tqs2v" Dec 11 14:03:29 crc kubenswrapper[5050]: I1211 14:03:29.493048 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9ml4s\" (UniqueName: \"kubernetes.io/projected/53a1a6d1-6999-4fa1-a0ce-a20b83f1f347-kube-api-access-9ml4s\") pod \"openstack-operator-controller-operator-799b66f579-tqs2v\" (UID: \"53a1a6d1-6999-4fa1-a0ce-a20b83f1f347\") " pod="openstack-operators/openstack-operator-controller-operator-799b66f579-tqs2v" Dec 11 14:03:29 crc kubenswrapper[5050]: I1211 14:03:29.513667 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9ml4s\" (UniqueName: \"kubernetes.io/projected/53a1a6d1-6999-4fa1-a0ce-a20b83f1f347-kube-api-access-9ml4s\") pod \"openstack-operator-controller-operator-799b66f579-tqs2v\" (UID: \"53a1a6d1-6999-4fa1-a0ce-a20b83f1f347\") " pod="openstack-operators/openstack-operator-controller-operator-799b66f579-tqs2v" Dec 11 14:03:29 crc kubenswrapper[5050]: I1211 14:03:29.591951 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-operator-dockercfg-68tnd" Dec 11 14:03:29 crc kubenswrapper[5050]: I1211 14:03:29.600452 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-799b66f579-tqs2v" Dec 11 14:03:30 crc kubenswrapper[5050]: I1211 14:03:30.040040 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-799b66f579-tqs2v"] Dec 11 14:03:30 crc kubenswrapper[5050]: W1211 14:03:30.045411 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod53a1a6d1_6999_4fa1_a0ce_a20b83f1f347.slice/crio-c925a647f0e1c415b7d9b101caa55ae782ef048d0c4b35001eb567f1d68a4fcb WatchSource:0}: Error finding container c925a647f0e1c415b7d9b101caa55ae782ef048d0c4b35001eb567f1d68a4fcb: Status 404 returned error can't find the container with id c925a647f0e1c415b7d9b101caa55ae782ef048d0c4b35001eb567f1d68a4fcb Dec 11 14:03:30 crc kubenswrapper[5050]: I1211 14:03:30.400921 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-799b66f579-tqs2v" event={"ID":"53a1a6d1-6999-4fa1-a0ce-a20b83f1f347","Type":"ContainerStarted","Data":"c925a647f0e1c415b7d9b101caa55ae782ef048d0c4b35001eb567f1d68a4fcb"} Dec 11 14:03:36 crc kubenswrapper[5050]: I1211 14:03:36.444829 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-799b66f579-tqs2v" event={"ID":"53a1a6d1-6999-4fa1-a0ce-a20b83f1f347","Type":"ContainerStarted","Data":"5c7d365dce07000662fe39681396cb0bff613093313821486caa669cf3a6ba43"} Dec 11 14:03:36 crc kubenswrapper[5050]: I1211 14:03:36.445711 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-operator-799b66f579-tqs2v" Dec 11 14:03:36 crc kubenswrapper[5050]: I1211 14:03:36.475875 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-operator-799b66f579-tqs2v" podStartSLOduration=1.235318517 podStartE2EDuration="7.475858869s" podCreationTimestamp="2025-12-11 14:03:29 +0000 UTC" firstStartedPulling="2025-12-11 14:03:30.0479486 +0000 UTC m=+900.891671196" lastFinishedPulling="2025-12-11 14:03:36.288488952 +0000 UTC m=+907.132211548" observedRunningTime="2025-12-11 14:03:36.473735631 +0000 UTC m=+907.317458237" watchObservedRunningTime="2025-12-11 14:03:36.475858869 +0000 UTC m=+907.319581445" Dec 11 14:03:49 crc kubenswrapper[5050]: I1211 14:03:49.602866 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-operator-799b66f579-tqs2v" Dec 11 14:04:11 crc kubenswrapper[5050]: I1211 14:04:11.192965 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7d9dfd778-qdrgd"] Dec 11 14:04:11 crc kubenswrapper[5050]: I1211 14:04:11.195241 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-qdrgd" Dec 11 14:04:11 crc kubenswrapper[5050]: I1211 14:04:11.204813 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-zlz4m" Dec 11 14:04:11 crc kubenswrapper[5050]: I1211 14:04:11.213193 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-6c677c69b-n7crp"] Dec 11 14:04:11 crc kubenswrapper[5050]: I1211 14:04:11.239087 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7d9dfd778-qdrgd"] Dec 11 14:04:11 crc kubenswrapper[5050]: I1211 14:04:11.239225 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-n7crp" Dec 11 14:04:11 crc kubenswrapper[5050]: I1211 14:04:11.262844 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-8nf9c" Dec 11 14:04:11 crc kubenswrapper[5050]: I1211 14:04:11.271660 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-6c677c69b-n7crp"] Dec 11 14:04:11 crc kubenswrapper[5050]: I1211 14:04:11.286117 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-697fb699cf-sqjhh"] Dec 11 14:04:11 crc kubenswrapper[5050]: I1211 14:04:11.292784 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-697fb699cf-sqjhh" Dec 11 14:04:11 crc kubenswrapper[5050]: I1211 14:04:11.322691 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-mc6vn" Dec 11 14:04:11 crc kubenswrapper[5050]: I1211 14:04:11.327792 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28gfs\" (UniqueName: \"kubernetes.io/projected/4b4e8aa9-a0f6-4bd2-92d2-c524aedf6389-kube-api-access-28gfs\") pod \"barbican-operator-controller-manager-7d9dfd778-qdrgd\" (UID: \"4b4e8aa9-a0f6-4bd2-92d2-c524aedf6389\") " pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-qdrgd" Dec 11 14:04:11 crc kubenswrapper[5050]: I1211 14:04:11.333537 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-697fb699cf-sqjhh"] Dec 11 14:04:11 crc kubenswrapper[5050]: I1211 14:04:11.376456 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-5697bb5779-9tcm2"] Dec 11 14:04:11 crc kubenswrapper[5050]: I1211 14:04:11.377667 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-5697bb5779-9tcm2" Dec 11 14:04:11 crc kubenswrapper[5050]: I1211 14:04:11.384861 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-nffdg" Dec 11 14:04:11 crc kubenswrapper[5050]: I1211 14:04:11.390106 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-5697bb5779-9tcm2"] Dec 11 14:04:11 crc kubenswrapper[5050]: I1211 14:04:11.404501 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-5f64f6f8bb-xl9wl"] Dec 11 14:04:11 crc kubenswrapper[5050]: I1211 14:04:11.406115 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-xl9wl" Dec 11 14:04:11 crc kubenswrapper[5050]: I1211 14:04:11.424592 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-glgrh" Dec 11 14:04:11 crc kubenswrapper[5050]: I1211 14:04:11.429179 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ntd8d\" (UniqueName: \"kubernetes.io/projected/85683dfb-37fb-4301-8c7a-fbb7453b303d-kube-api-access-ntd8d\") pod \"cinder-operator-controller-manager-6c677c69b-n7crp\" (UID: \"85683dfb-37fb-4301-8c7a-fbb7453b303d\") " pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-n7crp" Dec 11 14:04:11 crc kubenswrapper[5050]: I1211 14:04:11.429280 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25g6t\" (UniqueName: \"kubernetes.io/projected/0aa7657b-dbca-4b2b-ac62-7000681a918a-kube-api-access-25g6t\") pod \"designate-operator-controller-manager-697fb699cf-sqjhh\" (UID: \"0aa7657b-dbca-4b2b-ac62-7000681a918a\") " pod="openstack-operators/designate-operator-controller-manager-697fb699cf-sqjhh" Dec 11 14:04:11 crc kubenswrapper[5050]: I1211 14:04:11.429314 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-28gfs\" (UniqueName: \"kubernetes.io/projected/4b4e8aa9-a0f6-4bd2-92d2-c524aedf6389-kube-api-access-28gfs\") pod \"barbican-operator-controller-manager-7d9dfd778-qdrgd\" (UID: \"4b4e8aa9-a0f6-4bd2-92d2-c524aedf6389\") " pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-qdrgd" Dec 11 14:04:11 crc kubenswrapper[5050]: I1211 14:04:11.443983 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c6d99b8f-qbc2f"] Dec 11 14:04:11 crc kubenswrapper[5050]: I1211 14:04:11.447714 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-qbc2f" Dec 11 14:04:11 crc kubenswrapper[5050]: I1211 14:04:11.451823 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-5f64f6f8bb-xl9wl"] Dec 11 14:04:11 crc kubenswrapper[5050]: I1211 14:04:11.464717 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c6d99b8f-qbc2f"] Dec 11 14:04:11 crc kubenswrapper[5050]: I1211 14:04:11.470282 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-d5dn6" Dec 11 14:04:11 crc kubenswrapper[5050]: I1211 14:04:11.481689 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-78d48bff9d-5g8lw"] Dec 11 14:04:11 crc kubenswrapper[5050]: I1211 14:04:11.493468 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-5g8lw" Dec 11 14:04:11 crc kubenswrapper[5050]: I1211 14:04:11.504262 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-25r77" Dec 11 14:04:11 crc kubenswrapper[5050]: I1211 14:04:11.504540 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Dec 11 14:04:11 crc kubenswrapper[5050]: I1211 14:04:11.526109 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-28gfs\" (UniqueName: \"kubernetes.io/projected/4b4e8aa9-a0f6-4bd2-92d2-c524aedf6389-kube-api-access-28gfs\") pod \"barbican-operator-controller-manager-7d9dfd778-qdrgd\" (UID: \"4b4e8aa9-a0f6-4bd2-92d2-c524aedf6389\") " pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-qdrgd" Dec 11 14:04:11 crc kubenswrapper[5050]: I1211 14:04:11.533793 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ntd8d\" (UniqueName: \"kubernetes.io/projected/85683dfb-37fb-4301-8c7a-fbb7453b303d-kube-api-access-ntd8d\") pod \"cinder-operator-controller-manager-6c677c69b-n7crp\" (UID: \"85683dfb-37fb-4301-8c7a-fbb7453b303d\") " pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-n7crp" Dec 11 14:04:11 crc kubenswrapper[5050]: I1211 14:04:11.533865 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sc8wh\" (UniqueName: \"kubernetes.io/projected/048e17a7-0123-45a2-b698-02def3db74fe-kube-api-access-sc8wh\") pod \"glance-operator-controller-manager-5697bb5779-9tcm2\" (UID: \"048e17a7-0123-45a2-b698-02def3db74fe\") " pod="openstack-operators/glance-operator-controller-manager-5697bb5779-9tcm2" Dec 11 14:04:11 crc kubenswrapper[5050]: I1211 14:04:11.533897 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-25g6t\" (UniqueName: \"kubernetes.io/projected/0aa7657b-dbca-4b2b-ac62-7000681a918a-kube-api-access-25g6t\") pod \"designate-operator-controller-manager-697fb699cf-sqjhh\" (UID: \"0aa7657b-dbca-4b2b-ac62-7000681a918a\") " pod="openstack-operators/designate-operator-controller-manager-697fb699cf-sqjhh" Dec 11 14:04:11 crc kubenswrapper[5050]: I1211 14:04:11.533925 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4z8jr\" 
(UniqueName: \"kubernetes.io/projected/105854f4-5cc1-491f-983a-50864b37893f-kube-api-access-4z8jr\") pod \"heat-operator-controller-manager-5f64f6f8bb-xl9wl\" (UID: \"105854f4-5cc1-491f-983a-50864b37893f\") " pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-xl9wl" Dec 11 14:04:11 crc kubenswrapper[5050]: I1211 14:04:11.549103 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-78d48bff9d-5g8lw"] Dec 11 14:04:11 crc kubenswrapper[5050]: I1211 14:04:11.555873 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-qdrgd" Dec 11 14:04:11 crc kubenswrapper[5050]: I1211 14:04:11.578028 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-25g6t\" (UniqueName: \"kubernetes.io/projected/0aa7657b-dbca-4b2b-ac62-7000681a918a-kube-api-access-25g6t\") pod \"designate-operator-controller-manager-697fb699cf-sqjhh\" (UID: \"0aa7657b-dbca-4b2b-ac62-7000681a918a\") " pod="openstack-operators/designate-operator-controller-manager-697fb699cf-sqjhh" Dec 11 14:04:11 crc kubenswrapper[5050]: I1211 14:04:11.585897 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ntd8d\" (UniqueName: \"kubernetes.io/projected/85683dfb-37fb-4301-8c7a-fbb7453b303d-kube-api-access-ntd8d\") pod \"cinder-operator-controller-manager-6c677c69b-n7crp\" (UID: \"85683dfb-37fb-4301-8c7a-fbb7453b303d\") " pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-n7crp" Dec 11 14:04:11 crc kubenswrapper[5050]: I1211 14:04:11.623660 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-967d97867-7stc2"] Dec 11 14:04:11 crc kubenswrapper[5050]: I1211 14:04:11.624553 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7765d96ddf-xgbp2"] Dec 11 14:04:11 crc kubenswrapper[5050]: I1211 14:04:11.627149 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-xgbp2" Dec 11 14:04:11 crc kubenswrapper[5050]: I1211 14:04:11.629053 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-n7crp" Dec 11 14:04:11 crc kubenswrapper[5050]: I1211 14:04:11.629545 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-967d97867-7stc2" Dec 11 14:04:11 crc kubenswrapper[5050]: I1211 14:04:11.640120 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4z8jr\" (UniqueName: \"kubernetes.io/projected/105854f4-5cc1-491f-983a-50864b37893f-kube-api-access-4z8jr\") pod \"heat-operator-controller-manager-5f64f6f8bb-xl9wl\" (UID: \"105854f4-5cc1-491f-983a-50864b37893f\") " pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-xl9wl" Dec 11 14:04:11 crc kubenswrapper[5050]: I1211 14:04:11.640202 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dhdm\" (UniqueName: \"kubernetes.io/projected/3477354d-838b-48cc-a6c3-612088d82640-kube-api-access-2dhdm\") pod \"infra-operator-controller-manager-78d48bff9d-5g8lw\" (UID: \"3477354d-838b-48cc-a6c3-612088d82640\") " pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-5g8lw" Dec 11 14:04:11 crc kubenswrapper[5050]: I1211 14:04:11.640264 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pz6sx\" (UniqueName: \"kubernetes.io/projected/9726c3f9-bcae-4722-a054-5a66c161953b-kube-api-access-pz6sx\") pod \"horizon-operator-controller-manager-68c6d99b8f-qbc2f\" (UID: \"9726c3f9-bcae-4722-a054-5a66c161953b\") " pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-qbc2f" Dec 11 14:04:11 crc kubenswrapper[5050]: I1211 14:04:11.640314 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3477354d-838b-48cc-a6c3-612088d82640-cert\") pod \"infra-operator-controller-manager-78d48bff9d-5g8lw\" (UID: \"3477354d-838b-48cc-a6c3-612088d82640\") " pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-5g8lw" Dec 11 14:04:11 crc kubenswrapper[5050]: I1211 14:04:11.640339 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sc8wh\" (UniqueName: \"kubernetes.io/projected/048e17a7-0123-45a2-b698-02def3db74fe-kube-api-access-sc8wh\") pod \"glance-operator-controller-manager-5697bb5779-9tcm2\" (UID: \"048e17a7-0123-45a2-b698-02def3db74fe\") " pod="openstack-operators/glance-operator-controller-manager-5697bb5779-9tcm2" Dec 11 14:04:11 crc kubenswrapper[5050]: I1211 14:04:11.647360 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-967d97867-7stc2"] Dec 11 14:04:11 crc kubenswrapper[5050]: I1211 14:04:11.659631 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-x9p8j" Dec 11 14:04:11 crc kubenswrapper[5050]: I1211 14:04:11.679196 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-697fb699cf-sqjhh" Dec 11 14:04:11 crc kubenswrapper[5050]: I1211 14:04:11.686704 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-qd9ll" Dec 11 14:04:11 crc kubenswrapper[5050]: I1211 14:04:11.698525 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7765d96ddf-xgbp2"] Dec 11 14:04:11 crc kubenswrapper[5050]: I1211 14:04:11.741149 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-5b5fd79c9c-jq9vt"] Dec 11 14:04:11 crc kubenswrapper[5050]: I1211 14:04:11.741958 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pz6sx\" (UniqueName: \"kubernetes.io/projected/9726c3f9-bcae-4722-a054-5a66c161953b-kube-api-access-pz6sx\") pod \"horizon-operator-controller-manager-68c6d99b8f-qbc2f\" (UID: \"9726c3f9-bcae-4722-a054-5a66c161953b\") " pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-qbc2f" Dec 11 14:04:11 crc kubenswrapper[5050]: I1211 14:04:11.742915 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-jq9vt" Dec 11 14:04:11 crc kubenswrapper[5050]: I1211 14:04:11.750749 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3477354d-838b-48cc-a6c3-612088d82640-cert\") pod \"infra-operator-controller-manager-78d48bff9d-5g8lw\" (UID: \"3477354d-838b-48cc-a6c3-612088d82640\") " pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-5g8lw" Dec 11 14:04:11 crc kubenswrapper[5050]: I1211 14:04:11.750916 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kf4nm\" (UniqueName: \"kubernetes.io/projected/06df6c8c-640d-431b-b216-78345a9054e1-kube-api-access-kf4nm\") pod \"keystone-operator-controller-manager-7765d96ddf-xgbp2\" (UID: \"06df6c8c-640d-431b-b216-78345a9054e1\") " pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-xgbp2" Dec 11 14:04:11 crc kubenswrapper[5050]: I1211 14:04:11.751111 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2cn45\" (UniqueName: \"kubernetes.io/projected/d7b70b3b-5481-4ac2-8e60-256e2690752f-kube-api-access-2cn45\") pod \"ironic-operator-controller-manager-967d97867-7stc2\" (UID: \"d7b70b3b-5481-4ac2-8e60-256e2690752f\") " pod="openstack-operators/ironic-operator-controller-manager-967d97867-7stc2" Dec 11 14:04:11 crc kubenswrapper[5050]: I1211 14:04:11.751160 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2dhdm\" (UniqueName: \"kubernetes.io/projected/3477354d-838b-48cc-a6c3-612088d82640-kube-api-access-2dhdm\") pod \"infra-operator-controller-manager-78d48bff9d-5g8lw\" (UID: \"3477354d-838b-48cc-a6c3-612088d82640\") " pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-5g8lw" Dec 11 14:04:11 crc kubenswrapper[5050]: E1211 14:04:11.751722 5050 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 11 14:04:11 crc kubenswrapper[5050]: E1211 14:04:11.751787 5050 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/3477354d-838b-48cc-a6c3-612088d82640-cert podName:3477354d-838b-48cc-a6c3-612088d82640 nodeName:}" failed. No retries permitted until 2025-12-11 14:04:12.251758202 +0000 UTC m=+943.095480788 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/3477354d-838b-48cc-a6c3-612088d82640-cert") pod "infra-operator-controller-manager-78d48bff9d-5g8lw" (UID: "3477354d-838b-48cc-a6c3-612088d82640") : secret "infra-operator-webhook-server-cert" not found Dec 11 14:04:11 crc kubenswrapper[5050]: I1211 14:04:11.755699 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-gkldc" Dec 11 14:04:11 crc kubenswrapper[5050]: I1211 14:04:11.762410 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sc8wh\" (UniqueName: \"kubernetes.io/projected/048e17a7-0123-45a2-b698-02def3db74fe-kube-api-access-sc8wh\") pod \"glance-operator-controller-manager-5697bb5779-9tcm2\" (UID: \"048e17a7-0123-45a2-b698-02def3db74fe\") " pod="openstack-operators/glance-operator-controller-manager-5697bb5779-9tcm2" Dec 11 14:04:11 crc kubenswrapper[5050]: I1211 14:04:11.793050 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-5b5fd79c9c-jq9vt"] Dec 11 14:04:11 crc kubenswrapper[5050]: I1211 14:04:11.803973 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4z8jr\" (UniqueName: \"kubernetes.io/projected/105854f4-5cc1-491f-983a-50864b37893f-kube-api-access-4z8jr\") pod \"heat-operator-controller-manager-5f64f6f8bb-xl9wl\" (UID: \"105854f4-5cc1-491f-983a-50864b37893f\") " pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-xl9wl" Dec 11 14:04:11 crc kubenswrapper[5050]: I1211 14:04:11.811461 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-79c8c4686c-65swh"] Dec 11 14:04:11 crc kubenswrapper[5050]: I1211 14:04:11.813026 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-65swh" Dec 11 14:04:11 crc kubenswrapper[5050]: I1211 14:04:11.824868 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-7zqpj" Dec 11 14:04:11 crc kubenswrapper[5050]: I1211 14:04:11.855445 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2dhdm\" (UniqueName: \"kubernetes.io/projected/3477354d-838b-48cc-a6c3-612088d82640-kube-api-access-2dhdm\") pod \"infra-operator-controller-manager-78d48bff9d-5g8lw\" (UID: \"3477354d-838b-48cc-a6c3-612088d82640\") " pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-5g8lw" Dec 11 14:04:11 crc kubenswrapper[5050]: I1211 14:04:11.856367 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pz6sx\" (UniqueName: \"kubernetes.io/projected/9726c3f9-bcae-4722-a054-5a66c161953b-kube-api-access-pz6sx\") pod \"horizon-operator-controller-manager-68c6d99b8f-qbc2f\" (UID: \"9726c3f9-bcae-4722-a054-5a66c161953b\") " pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-qbc2f" Dec 11 14:04:11 crc kubenswrapper[5050]: I1211 14:04:11.872260 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kf4nm\" (UniqueName: \"kubernetes.io/projected/06df6c8c-640d-431b-b216-78345a9054e1-kube-api-access-kf4nm\") pod \"keystone-operator-controller-manager-7765d96ddf-xgbp2\" (UID: \"06df6c8c-640d-431b-b216-78345a9054e1\") " pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-xgbp2" Dec 11 14:04:11 crc kubenswrapper[5050]: I1211 14:04:11.872306 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fv76z\" (UniqueName: \"kubernetes.io/projected/5e11a0d1-4179-4621-803d-839196fb940b-kube-api-access-fv76z\") pod \"manila-operator-controller-manager-5b5fd79c9c-jq9vt\" (UID: \"5e11a0d1-4179-4621-803d-839196fb940b\") " pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-jq9vt" Dec 11 14:04:11 crc kubenswrapper[5050]: I1211 14:04:11.872374 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2cn45\" (UniqueName: \"kubernetes.io/projected/d7b70b3b-5481-4ac2-8e60-256e2690752f-kube-api-access-2cn45\") pod \"ironic-operator-controller-manager-967d97867-7stc2\" (UID: \"d7b70b3b-5481-4ac2-8e60-256e2690752f\") " pod="openstack-operators/ironic-operator-controller-manager-967d97867-7stc2" Dec 11 14:04:11 crc kubenswrapper[5050]: I1211 14:04:11.872470 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwr46\" (UniqueName: \"kubernetes.io/projected/58a60a6f-fcf5-4aa0-b6e4-97d7df2b2bd2-kube-api-access-fwr46\") pod \"mariadb-operator-controller-manager-79c8c4686c-65swh\" (UID: \"58a60a6f-fcf5-4aa0-b6e4-97d7df2b2bd2\") " pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-65swh" Dec 11 14:04:11 crc kubenswrapper[5050]: I1211 14:04:11.974407 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fwr46\" (UniqueName: \"kubernetes.io/projected/58a60a6f-fcf5-4aa0-b6e4-97d7df2b2bd2-kube-api-access-fwr46\") pod \"mariadb-operator-controller-manager-79c8c4686c-65swh\" (UID: \"58a60a6f-fcf5-4aa0-b6e4-97d7df2b2bd2\") " pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-65swh" Dec 
11 14:04:11 crc kubenswrapper[5050]: I1211 14:04:11.975355 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fv76z\" (UniqueName: \"kubernetes.io/projected/5e11a0d1-4179-4621-803d-839196fb940b-kube-api-access-fv76z\") pod \"manila-operator-controller-manager-5b5fd79c9c-jq9vt\" (UID: \"5e11a0d1-4179-4621-803d-839196fb940b\") " pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-jq9vt" Dec 11 14:04:11 crc kubenswrapper[5050]: I1211 14:04:11.990775 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-pjhfc"] Dec 11 14:04:11 crc kubenswrapper[5050]: I1211 14:04:11.999393 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-qbc2f" Dec 11 14:04:12 crc kubenswrapper[5050]: I1211 14:04:12.003925 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2cn45\" (UniqueName: \"kubernetes.io/projected/d7b70b3b-5481-4ac2-8e60-256e2690752f-kube-api-access-2cn45\") pod \"ironic-operator-controller-manager-967d97867-7stc2\" (UID: \"d7b70b3b-5481-4ac2-8e60-256e2690752f\") " pod="openstack-operators/ironic-operator-controller-manager-967d97867-7stc2" Dec 11 14:04:12 crc kubenswrapper[5050]: I1211 14:04:12.010749 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-5697bb5779-9tcm2" Dec 11 14:04:12 crc kubenswrapper[5050]: I1211 14:04:12.021460 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fv76z\" (UniqueName: \"kubernetes.io/projected/5e11a0d1-4179-4621-803d-839196fb940b-kube-api-access-fv76z\") pod \"manila-operator-controller-manager-5b5fd79c9c-jq9vt\" (UID: \"5e11a0d1-4179-4621-803d-839196fb940b\") " pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-jq9vt" Dec 11 14:04:12 crc kubenswrapper[5050]: I1211 14:04:12.025918 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kf4nm\" (UniqueName: \"kubernetes.io/projected/06df6c8c-640d-431b-b216-78345a9054e1-kube-api-access-kf4nm\") pod \"keystone-operator-controller-manager-7765d96ddf-xgbp2\" (UID: \"06df6c8c-640d-431b-b216-78345a9054e1\") " pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-xgbp2" Dec 11 14:04:12 crc kubenswrapper[5050]: I1211 14:04:12.048885 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-697bc559fc-h4w7p"] Dec 11 14:04:12 crc kubenswrapper[5050]: I1211 14:04:12.051826 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-pjhfc" Dec 11 14:04:12 crc kubenswrapper[5050]: I1211 14:04:12.067928 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fwr46\" (UniqueName: \"kubernetes.io/projected/58a60a6f-fcf5-4aa0-b6e4-97d7df2b2bd2-kube-api-access-fwr46\") pod \"mariadb-operator-controller-manager-79c8c4686c-65swh\" (UID: \"58a60a6f-fcf5-4aa0-b6e4-97d7df2b2bd2\") " pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-65swh" Dec 11 14:04:12 crc kubenswrapper[5050]: I1211 14:04:12.087686 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-xl9wl" Dec 11 14:04:12 crc kubenswrapper[5050]: I1211 14:04:12.093022 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-79c8c4686c-65swh"] Dec 11 14:04:12 crc kubenswrapper[5050]: I1211 14:04:12.093211 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-h4w7p" Dec 11 14:04:12 crc kubenswrapper[5050]: I1211 14:04:12.097712 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-n57x7" Dec 11 14:04:12 crc kubenswrapper[5050]: I1211 14:04:12.099360 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-2bw5c" Dec 11 14:04:12 crc kubenswrapper[5050]: I1211 14:04:12.104137 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-pjhfc"] Dec 11 14:04:12 crc kubenswrapper[5050]: I1211 14:04:12.111332 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-697bc559fc-h4w7p"] Dec 11 14:04:12 crc kubenswrapper[5050]: I1211 14:04:12.130089 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-998648c74-lvbdb"] Dec 11 14:04:12 crc kubenswrapper[5050]: I1211 14:04:12.131224 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-998648c74-lvbdb" Dec 11 14:04:12 crc kubenswrapper[5050]: I1211 14:04:12.136316 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-xgbp2" Dec 11 14:04:12 crc kubenswrapper[5050]: I1211 14:04:12.136752 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-djswv" Dec 11 14:04:12 crc kubenswrapper[5050]: I1211 14:04:12.139385 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-5c747c4-l2hpx"] Dec 11 14:04:12 crc kubenswrapper[5050]: I1211 14:04:12.150461 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-b6456fdb6-ttg8w"] Dec 11 14:04:12 crc kubenswrapper[5050]: I1211 14:04:12.150667 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5c747c4-l2hpx" Dec 11 14:04:12 crc kubenswrapper[5050]: I1211 14:04:12.152619 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-78f8948974-wsc2k"] Dec 11 14:04:12 crc kubenswrapper[5050]: I1211 14:04:12.153596 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-78f8948974-wsc2k" Dec 11 14:04:12 crc kubenswrapper[5050]: I1211 14:04:12.153666 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Dec 11 14:04:12 crc kubenswrapper[5050]: I1211 14:04:12.154045 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-ttg8w" Dec 11 14:04:12 crc kubenswrapper[5050]: I1211 14:04:12.160578 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-jbnlr" Dec 11 14:04:12 crc kubenswrapper[5050]: I1211 14:04:12.168354 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-967d97867-7stc2" Dec 11 14:04:12 crc kubenswrapper[5050]: I1211 14:04:12.170057 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-4x88l" Dec 11 14:04:12 crc kubenswrapper[5050]: I1211 14:04:12.170950 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-6tg85" Dec 11 14:04:12 crc kubenswrapper[5050]: I1211 14:04:12.192071 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-998648c74-lvbdb"] Dec 11 14:04:12 crc kubenswrapper[5050]: I1211 14:04:12.198492 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9s5nk\" (UniqueName: \"kubernetes.io/projected/3c9e825c-0aee-42b9-a7a5-3191486f301d-kube-api-access-9s5nk\") pod \"neutron-operator-controller-manager-5fdfd5b6b5-pjhfc\" (UID: \"3c9e825c-0aee-42b9-a7a5-3191486f301d\") " pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-pjhfc" Dec 11 14:04:12 crc kubenswrapper[5050]: I1211 14:04:12.204253 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dlx75\" (UniqueName: \"kubernetes.io/projected/9c82f51b-e2a0-49e6-bc0e-d7679e439a6f-kube-api-access-dlx75\") pod \"nova-operator-controller-manager-697bc559fc-h4w7p\" (UID: \"9c82f51b-e2a0-49e6-bc0e-d7679e439a6f\") " pod="openstack-operators/nova-operator-controller-manager-697bc559fc-h4w7p" Dec 11 14:04:12 crc kubenswrapper[5050]: I1211 14:04:12.235046 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-jq9vt" Dec 11 14:04:12 crc kubenswrapper[5050]: I1211 14:04:12.238386 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-b6456fdb6-ttg8w"] Dec 11 14:04:12 crc kubenswrapper[5050]: I1211 14:04:12.279922 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-78f8948974-wsc2k"] Dec 11 14:04:12 crc kubenswrapper[5050]: I1211 14:04:12.308364 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-65swh" Dec 11 14:04:12 crc kubenswrapper[5050]: I1211 14:04:12.308742 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4l96h\" (UniqueName: \"kubernetes.io/projected/f5bec9f7-072c-4c21-80ea-af9f59313eef-kube-api-access-4l96h\") pod \"openstack-baremetal-operator-controller-manager-5c747c4-l2hpx\" (UID: \"f5bec9f7-072c-4c21-80ea-af9f59313eef\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5c747c4-l2hpx" Dec 11 14:04:12 crc kubenswrapper[5050]: I1211 14:04:12.308818 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zks8\" (UniqueName: \"kubernetes.io/projected/b885fa10-3ed3-41fd-94ae-2b7442519450-kube-api-access-9zks8\") pod \"ovn-operator-controller-manager-b6456fdb6-ttg8w\" (UID: \"b885fa10-3ed3-41fd-94ae-2b7442519450\") " pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-ttg8w" Dec 11 14:04:12 crc kubenswrapper[5050]: I1211 14:04:12.308858 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3477354d-838b-48cc-a6c3-612088d82640-cert\") pod \"infra-operator-controller-manager-78d48bff9d-5g8lw\" (UID: \"3477354d-838b-48cc-a6c3-612088d82640\") " pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-5g8lw" Dec 11 14:04:12 crc kubenswrapper[5050]: I1211 14:04:12.308887 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9s5nk\" (UniqueName: \"kubernetes.io/projected/3c9e825c-0aee-42b9-a7a5-3191486f301d-kube-api-access-9s5nk\") pod \"neutron-operator-controller-manager-5fdfd5b6b5-pjhfc\" (UID: \"3c9e825c-0aee-42b9-a7a5-3191486f301d\") " pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-pjhfc" Dec 11 14:04:12 crc kubenswrapper[5050]: I1211 14:04:12.308906 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dlx75\" (UniqueName: \"kubernetes.io/projected/9c82f51b-e2a0-49e6-bc0e-d7679e439a6f-kube-api-access-dlx75\") pod \"nova-operator-controller-manager-697bc559fc-h4w7p\" (UID: \"9c82f51b-e2a0-49e6-bc0e-d7679e439a6f\") " pod="openstack-operators/nova-operator-controller-manager-697bc559fc-h4w7p" Dec 11 14:04:12 crc kubenswrapper[5050]: I1211 14:04:12.308932 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7ffv\" (UniqueName: \"kubernetes.io/projected/fa0985f7-7d87-41b3-9916-f22375a0489c-kube-api-access-p7ffv\") pod \"octavia-operator-controller-manager-998648c74-lvbdb\" (UID: \"fa0985f7-7d87-41b3-9916-f22375a0489c\") " pod="openstack-operators/octavia-operator-controller-manager-998648c74-lvbdb" Dec 11 14:04:12 crc kubenswrapper[5050]: I1211 14:04:12.308979 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g2lxv\" (UniqueName: \"kubernetes.io/projected/cedca20e-aaaa-4190-944d-8f18bd93f737-kube-api-access-g2lxv\") pod \"placement-operator-controller-manager-78f8948974-wsc2k\" (UID: \"cedca20e-aaaa-4190-944d-8f18bd93f737\") " pod="openstack-operators/placement-operator-controller-manager-78f8948974-wsc2k" Dec 11 14:04:12 crc kubenswrapper[5050]: I1211 14:04:12.309046 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/f5bec9f7-072c-4c21-80ea-af9f59313eef-cert\") pod \"openstack-baremetal-operator-controller-manager-5c747c4-l2hpx\" (UID: \"f5bec9f7-072c-4c21-80ea-af9f59313eef\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5c747c4-l2hpx" Dec 11 14:04:12 crc kubenswrapper[5050]: E1211 14:04:12.309664 5050 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 11 14:04:12 crc kubenswrapper[5050]: E1211 14:04:12.309730 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3477354d-838b-48cc-a6c3-612088d82640-cert podName:3477354d-838b-48cc-a6c3-612088d82640 nodeName:}" failed. No retries permitted until 2025-12-11 14:04:13.30970698 +0000 UTC m=+944.153429736 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/3477354d-838b-48cc-a6c3-612088d82640-cert") pod "infra-operator-controller-manager-78d48bff9d-5g8lw" (UID: "3477354d-838b-48cc-a6c3-612088d82640") : secret "infra-operator-webhook-server-cert" not found Dec 11 14:04:12 crc kubenswrapper[5050]: I1211 14:04:12.310321 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-5c747c4-l2hpx"] Dec 11 14:04:12 crc kubenswrapper[5050]: I1211 14:04:12.336812 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-5854674fcc-qqb7f"] Dec 11 14:04:12 crc kubenswrapper[5050]: I1211 14:04:12.338679 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5854674fcc-qqb7f" Dec 11 14:04:12 crc kubenswrapper[5050]: I1211 14:04:12.341964 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9s5nk\" (UniqueName: \"kubernetes.io/projected/3c9e825c-0aee-42b9-a7a5-3191486f301d-kube-api-access-9s5nk\") pod \"neutron-operator-controller-manager-5fdfd5b6b5-pjhfc\" (UID: \"3c9e825c-0aee-42b9-a7a5-3191486f301d\") " pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-pjhfc" Dec 11 14:04:12 crc kubenswrapper[5050]: I1211 14:04:12.342464 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-cclxg" Dec 11 14:04:12 crc kubenswrapper[5050]: I1211 14:04:12.357883 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dlx75\" (UniqueName: \"kubernetes.io/projected/9c82f51b-e2a0-49e6-bc0e-d7679e439a6f-kube-api-access-dlx75\") pod \"nova-operator-controller-manager-697bc559fc-h4w7p\" (UID: \"9c82f51b-e2a0-49e6-bc0e-d7679e439a6f\") " pod="openstack-operators/nova-operator-controller-manager-697bc559fc-h4w7p" Dec 11 14:04:12 crc kubenswrapper[5050]: I1211 14:04:12.395113 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5854674fcc-qqb7f"] Dec 11 14:04:12 crc kubenswrapper[5050]: I1211 14:04:12.415265 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f5bec9f7-072c-4c21-80ea-af9f59313eef-cert\") pod \"openstack-baremetal-operator-controller-manager-5c747c4-l2hpx\" (UID: \"f5bec9f7-072c-4c21-80ea-af9f59313eef\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5c747c4-l2hpx" Dec 11 14:04:12 crc kubenswrapper[5050]: I1211 
14:04:12.415507 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4l96h\" (UniqueName: \"kubernetes.io/projected/f5bec9f7-072c-4c21-80ea-af9f59313eef-kube-api-access-4l96h\") pod \"openstack-baremetal-operator-controller-manager-5c747c4-l2hpx\" (UID: \"f5bec9f7-072c-4c21-80ea-af9f59313eef\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5c747c4-l2hpx" Dec 11 14:04:12 crc kubenswrapper[5050]: I1211 14:04:12.415690 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hklrh\" (UniqueName: \"kubernetes.io/projected/12778398-2baa-44cb-9fd1-f2034870e9fc-kube-api-access-hklrh\") pod \"test-operator-controller-manager-5854674fcc-qqb7f\" (UID: \"12778398-2baa-44cb-9fd1-f2034870e9fc\") " pod="openstack-operators/test-operator-controller-manager-5854674fcc-qqb7f" Dec 11 14:04:12 crc kubenswrapper[5050]: I1211 14:04:12.415824 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9zks8\" (UniqueName: \"kubernetes.io/projected/b885fa10-3ed3-41fd-94ae-2b7442519450-kube-api-access-9zks8\") pod \"ovn-operator-controller-manager-b6456fdb6-ttg8w\" (UID: \"b885fa10-3ed3-41fd-94ae-2b7442519450\") " pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-ttg8w" Dec 11 14:04:12 crc kubenswrapper[5050]: I1211 14:04:12.415953 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7ffv\" (UniqueName: \"kubernetes.io/projected/fa0985f7-7d87-41b3-9916-f22375a0489c-kube-api-access-p7ffv\") pod \"octavia-operator-controller-manager-998648c74-lvbdb\" (UID: \"fa0985f7-7d87-41b3-9916-f22375a0489c\") " pod="openstack-operators/octavia-operator-controller-manager-998648c74-lvbdb" Dec 11 14:04:12 crc kubenswrapper[5050]: I1211 14:04:12.416172 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g2lxv\" (UniqueName: \"kubernetes.io/projected/cedca20e-aaaa-4190-944d-8f18bd93f737-kube-api-access-g2lxv\") pod \"placement-operator-controller-manager-78f8948974-wsc2k\" (UID: \"cedca20e-aaaa-4190-944d-8f18bd93f737\") " pod="openstack-operators/placement-operator-controller-manager-78f8948974-wsc2k" Dec 11 14:04:12 crc kubenswrapper[5050]: E1211 14:04:12.416631 5050 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 11 14:04:12 crc kubenswrapper[5050]: E1211 14:04:12.416762 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f5bec9f7-072c-4c21-80ea-af9f59313eef-cert podName:f5bec9f7-072c-4c21-80ea-af9f59313eef nodeName:}" failed. No retries permitted until 2025-12-11 14:04:12.916746363 +0000 UTC m=+943.760468949 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/f5bec9f7-072c-4c21-80ea-af9f59313eef-cert") pod "openstack-baremetal-operator-controller-manager-5c747c4-l2hpx" (UID: "f5bec9f7-072c-4c21-80ea-af9f59313eef") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 11 14:04:12 crc kubenswrapper[5050]: I1211 14:04:12.436461 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-9d58d64bc-8jnzj"] Dec 11 14:04:12 crc kubenswrapper[5050]: I1211 14:04:12.440137 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-8jnzj" Dec 11 14:04:12 crc kubenswrapper[5050]: I1211 14:04:12.440744 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-58d5ff84df-ssfnh"] Dec 11 14:04:12 crc kubenswrapper[5050]: I1211 14:04:12.445971 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-58d5ff84df-ssfnh" Dec 11 14:04:12 crc kubenswrapper[5050]: I1211 14:04:12.446243 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-7bf58" Dec 11 14:04:12 crc kubenswrapper[5050]: I1211 14:04:12.448179 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-f52rf" Dec 11 14:04:12 crc kubenswrapper[5050]: I1211 14:04:12.449215 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-75944c9b7-grhdp"] Dec 11 14:04:12 crc kubenswrapper[5050]: I1211 14:04:12.450872 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-75944c9b7-grhdp" Dec 11 14:04:12 crc kubenswrapper[5050]: I1211 14:04:12.464506 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-9d58d64bc-8jnzj"] Dec 11 14:04:12 crc kubenswrapper[5050]: I1211 14:04:12.471147 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p7ffv\" (UniqueName: \"kubernetes.io/projected/fa0985f7-7d87-41b3-9916-f22375a0489c-kube-api-access-p7ffv\") pod \"octavia-operator-controller-manager-998648c74-lvbdb\" (UID: \"fa0985f7-7d87-41b3-9916-f22375a0489c\") " pod="openstack-operators/octavia-operator-controller-manager-998648c74-lvbdb" Dec 11 14:04:12 crc kubenswrapper[5050]: I1211 14:04:12.471563 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g2lxv\" (UniqueName: \"kubernetes.io/projected/cedca20e-aaaa-4190-944d-8f18bd93f737-kube-api-access-g2lxv\") pod \"placement-operator-controller-manager-78f8948974-wsc2k\" (UID: \"cedca20e-aaaa-4190-944d-8f18bd93f737\") " pod="openstack-operators/placement-operator-controller-manager-78f8948974-wsc2k" Dec 11 14:04:12 crc kubenswrapper[5050]: I1211 14:04:12.471982 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-w847r" Dec 11 14:04:12 crc kubenswrapper[5050]: I1211 14:04:12.475631 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9zks8\" (UniqueName: \"kubernetes.io/projected/b885fa10-3ed3-41fd-94ae-2b7442519450-kube-api-access-9zks8\") pod \"ovn-operator-controller-manager-b6456fdb6-ttg8w\" (UID: \"b885fa10-3ed3-41fd-94ae-2b7442519450\") " pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-ttg8w" Dec 11 14:04:12 crc kubenswrapper[5050]: I1211 14:04:12.477538 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4l96h\" (UniqueName: \"kubernetes.io/projected/f5bec9f7-072c-4c21-80ea-af9f59313eef-kube-api-access-4l96h\") pod \"openstack-baremetal-operator-controller-manager-5c747c4-l2hpx\" (UID: \"f5bec9f7-072c-4c21-80ea-af9f59313eef\") " 
pod="openstack-operators/openstack-baremetal-operator-controller-manager-5c747c4-l2hpx" Dec 11 14:04:12 crc kubenswrapper[5050]: I1211 14:04:12.486612 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-58d5ff84df-ssfnh"] Dec 11 14:04:12 crc kubenswrapper[5050]: I1211 14:04:12.492536 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-75944c9b7-grhdp"] Dec 11 14:04:12 crc kubenswrapper[5050]: I1211 14:04:12.519100 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-5b74fbd87-zqsjt"] Dec 11 14:04:12 crc kubenswrapper[5050]: I1211 14:04:12.520432 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-5b74fbd87-zqsjt" Dec 11 14:04:12 crc kubenswrapper[5050]: I1211 14:04:12.520490 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfqbt\" (UniqueName: \"kubernetes.io/projected/712f888b-7c45-4c1f-95d8-ccc464b7c15f-kube-api-access-xfqbt\") pod \"watcher-operator-controller-manager-75944c9b7-grhdp\" (UID: \"712f888b-7c45-4c1f-95d8-ccc464b7c15f\") " pod="openstack-operators/watcher-operator-controller-manager-75944c9b7-grhdp" Dec 11 14:04:12 crc kubenswrapper[5050]: I1211 14:04:12.520641 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hklrh\" (UniqueName: \"kubernetes.io/projected/12778398-2baa-44cb-9fd1-f2034870e9fc-kube-api-access-hklrh\") pod \"test-operator-controller-manager-5854674fcc-qqb7f\" (UID: \"12778398-2baa-44cb-9fd1-f2034870e9fc\") " pod="openstack-operators/test-operator-controller-manager-5854674fcc-qqb7f" Dec 11 14:04:12 crc kubenswrapper[5050]: I1211 14:04:12.520679 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2srj\" (UniqueName: \"kubernetes.io/projected/bff5d533-0728-4436-bdeb-c725bf04bdb3-kube-api-access-j2srj\") pod \"telemetry-operator-controller-manager-58d5ff84df-ssfnh\" (UID: \"bff5d533-0728-4436-bdeb-c725bf04bdb3\") " pod="openstack-operators/telemetry-operator-controller-manager-58d5ff84df-ssfnh" Dec 11 14:04:12 crc kubenswrapper[5050]: I1211 14:04:12.520724 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vh69p\" (UniqueName: \"kubernetes.io/projected/a6345cf8-abc2-4c9a-bfe6-8b65187ada2d-kube-api-access-vh69p\") pod \"swift-operator-controller-manager-9d58d64bc-8jnzj\" (UID: \"a6345cf8-abc2-4c9a-bfe6-8b65187ada2d\") " pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-8jnzj" Dec 11 14:04:12 crc kubenswrapper[5050]: I1211 14:04:12.523802 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Dec 11 14:04:12 crc kubenswrapper[5050]: I1211 14:04:12.526777 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-mz95s" Dec 11 14:04:12 crc kubenswrapper[5050]: I1211 14:04:12.527051 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Dec 11 14:04:12 crc kubenswrapper[5050]: I1211 14:04:12.540721 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-pjhfc" Dec 11 14:04:12 crc kubenswrapper[5050]: I1211 14:04:12.549885 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hklrh\" (UniqueName: \"kubernetes.io/projected/12778398-2baa-44cb-9fd1-f2034870e9fc-kube-api-access-hklrh\") pod \"test-operator-controller-manager-5854674fcc-qqb7f\" (UID: \"12778398-2baa-44cb-9fd1-f2034870e9fc\") " pod="openstack-operators/test-operator-controller-manager-5854674fcc-qqb7f" Dec 11 14:04:12 crc kubenswrapper[5050]: I1211 14:04:12.556051 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-5b74fbd87-zqsjt"] Dec 11 14:04:12 crc kubenswrapper[5050]: I1211 14:04:12.582759 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-h4w7p" Dec 11 14:04:12 crc kubenswrapper[5050]: I1211 14:04:12.593784 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-998648c74-lvbdb" Dec 11 14:04:12 crc kubenswrapper[5050]: I1211 14:04:12.631328 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xfqbt\" (UniqueName: \"kubernetes.io/projected/712f888b-7c45-4c1f-95d8-ccc464b7c15f-kube-api-access-xfqbt\") pod \"watcher-operator-controller-manager-75944c9b7-grhdp\" (UID: \"712f888b-7c45-4c1f-95d8-ccc464b7c15f\") " pod="openstack-operators/watcher-operator-controller-manager-75944c9b7-grhdp" Dec 11 14:04:12 crc kubenswrapper[5050]: I1211 14:04:12.631444 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b3b941f1-576d-4b49-871b-3666eda635ff-webhook-certs\") pod \"openstack-operator-controller-manager-5b74fbd87-zqsjt\" (UID: \"b3b941f1-576d-4b49-871b-3666eda635ff\") " pod="openstack-operators/openstack-operator-controller-manager-5b74fbd87-zqsjt" Dec 11 14:04:12 crc kubenswrapper[5050]: I1211 14:04:12.631567 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j2srj\" (UniqueName: \"kubernetes.io/projected/bff5d533-0728-4436-bdeb-c725bf04bdb3-kube-api-access-j2srj\") pod \"telemetry-operator-controller-manager-58d5ff84df-ssfnh\" (UID: \"bff5d533-0728-4436-bdeb-c725bf04bdb3\") " pod="openstack-operators/telemetry-operator-controller-manager-58d5ff84df-ssfnh" Dec 11 14:04:12 crc kubenswrapper[5050]: I1211 14:04:12.631634 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vh69p\" (UniqueName: \"kubernetes.io/projected/a6345cf8-abc2-4c9a-bfe6-8b65187ada2d-kube-api-access-vh69p\") pod \"swift-operator-controller-manager-9d58d64bc-8jnzj\" (UID: \"a6345cf8-abc2-4c9a-bfe6-8b65187ada2d\") " pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-8jnzj" Dec 11 14:04:12 crc kubenswrapper[5050]: I1211 14:04:12.631684 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b3b941f1-576d-4b49-871b-3666eda635ff-metrics-certs\") pod \"openstack-operator-controller-manager-5b74fbd87-zqsjt\" (UID: \"b3b941f1-576d-4b49-871b-3666eda635ff\") " pod="openstack-operators/openstack-operator-controller-manager-5b74fbd87-zqsjt" Dec 11 14:04:12 crc kubenswrapper[5050]: I1211 14:04:12.631702 5050 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rh6c4\" (UniqueName: \"kubernetes.io/projected/b3b941f1-576d-4b49-871b-3666eda635ff-kube-api-access-rh6c4\") pod \"openstack-operator-controller-manager-5b74fbd87-zqsjt\" (UID: \"b3b941f1-576d-4b49-871b-3666eda635ff\") " pod="openstack-operators/openstack-operator-controller-manager-5b74fbd87-zqsjt" Dec 11 14:04:12 crc kubenswrapper[5050]: I1211 14:04:12.650631 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-78f8948974-wsc2k" Dec 11 14:04:12 crc kubenswrapper[5050]: I1211 14:04:12.718737 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-ttg8w" Dec 11 14:04:12 crc kubenswrapper[5050]: I1211 14:04:12.733941 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b3b941f1-576d-4b49-871b-3666eda635ff-metrics-certs\") pod \"openstack-operator-controller-manager-5b74fbd87-zqsjt\" (UID: \"b3b941f1-576d-4b49-871b-3666eda635ff\") " pod="openstack-operators/openstack-operator-controller-manager-5b74fbd87-zqsjt" Dec 11 14:04:12 crc kubenswrapper[5050]: I1211 14:04:12.734023 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rh6c4\" (UniqueName: \"kubernetes.io/projected/b3b941f1-576d-4b49-871b-3666eda635ff-kube-api-access-rh6c4\") pod \"openstack-operator-controller-manager-5b74fbd87-zqsjt\" (UID: \"b3b941f1-576d-4b49-871b-3666eda635ff\") " pod="openstack-operators/openstack-operator-controller-manager-5b74fbd87-zqsjt" Dec 11 14:04:12 crc kubenswrapper[5050]: I1211 14:04:12.734139 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b3b941f1-576d-4b49-871b-3666eda635ff-webhook-certs\") pod \"openstack-operator-controller-manager-5b74fbd87-zqsjt\" (UID: \"b3b941f1-576d-4b49-871b-3666eda635ff\") " pod="openstack-operators/openstack-operator-controller-manager-5b74fbd87-zqsjt" Dec 11 14:04:12 crc kubenswrapper[5050]: I1211 14:04:12.734178 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-mcj5n"] Dec 11 14:04:12 crc kubenswrapper[5050]: E1211 14:04:12.734400 5050 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 11 14:04:12 crc kubenswrapper[5050]: E1211 14:04:12.734455 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3b941f1-576d-4b49-871b-3666eda635ff-webhook-certs podName:b3b941f1-576d-4b49-871b-3666eda635ff nodeName:}" failed. No retries permitted until 2025-12-11 14:04:13.23443863 +0000 UTC m=+944.078161206 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/b3b941f1-576d-4b49-871b-3666eda635ff-webhook-certs") pod "openstack-operator-controller-manager-5b74fbd87-zqsjt" (UID: "b3b941f1-576d-4b49-871b-3666eda635ff") : secret "webhook-server-cert" not found Dec 11 14:04:12 crc kubenswrapper[5050]: E1211 14:04:12.734754 5050 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 11 14:04:12 crc kubenswrapper[5050]: E1211 14:04:12.734791 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3b941f1-576d-4b49-871b-3666eda635ff-metrics-certs podName:b3b941f1-576d-4b49-871b-3666eda635ff nodeName:}" failed. No retries permitted until 2025-12-11 14:04:13.234783309 +0000 UTC m=+944.078505895 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b3b941f1-576d-4b49-871b-3666eda635ff-metrics-certs") pod "openstack-operator-controller-manager-5b74fbd87-zqsjt" (UID: "b3b941f1-576d-4b49-871b-3666eda635ff") : secret "metrics-server-cert" not found Dec 11 14:04:12 crc kubenswrapper[5050]: I1211 14:04:12.736204 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-mcj5n" Dec 11 14:04:12 crc kubenswrapper[5050]: I1211 14:04:12.737511 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j2srj\" (UniqueName: \"kubernetes.io/projected/bff5d533-0728-4436-bdeb-c725bf04bdb3-kube-api-access-j2srj\") pod \"telemetry-operator-controller-manager-58d5ff84df-ssfnh\" (UID: \"bff5d533-0728-4436-bdeb-c725bf04bdb3\") " pod="openstack-operators/telemetry-operator-controller-manager-58d5ff84df-ssfnh" Dec 11 14:04:12 crc kubenswrapper[5050]: I1211 14:04:12.738101 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vh69p\" (UniqueName: \"kubernetes.io/projected/a6345cf8-abc2-4c9a-bfe6-8b65187ada2d-kube-api-access-vh69p\") pod \"swift-operator-controller-manager-9d58d64bc-8jnzj\" (UID: \"a6345cf8-abc2-4c9a-bfe6-8b65187ada2d\") " pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-8jnzj" Dec 11 14:04:12 crc kubenswrapper[5050]: I1211 14:04:12.740584 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-crgwv" Dec 11 14:04:12 crc kubenswrapper[5050]: I1211 14:04:12.741911 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5854674fcc-qqb7f" Dec 11 14:04:12 crc kubenswrapper[5050]: I1211 14:04:12.744287 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-mcj5n"] Dec 11 14:04:12 crc kubenswrapper[5050]: I1211 14:04:12.744888 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xfqbt\" (UniqueName: \"kubernetes.io/projected/712f888b-7c45-4c1f-95d8-ccc464b7c15f-kube-api-access-xfqbt\") pod \"watcher-operator-controller-manager-75944c9b7-grhdp\" (UID: \"712f888b-7c45-4c1f-95d8-ccc464b7c15f\") " pod="openstack-operators/watcher-operator-controller-manager-75944c9b7-grhdp" Dec 11 14:04:12 crc kubenswrapper[5050]: I1211 14:04:12.751498 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-qdrgd" event={"ID":"4b4e8aa9-a0f6-4bd2-92d2-c524aedf6389","Type":"ContainerStarted","Data":"71c5fac1f45855c16ae2bc5f3efdd90078e670b116eb3d17f3f93b26f1d94f13"} Dec 11 14:04:12 crc kubenswrapper[5050]: I1211 14:04:12.761262 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7d9dfd778-qdrgd"] Dec 11 14:04:12 crc kubenswrapper[5050]: I1211 14:04:12.774218 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rh6c4\" (UniqueName: \"kubernetes.io/projected/b3b941f1-576d-4b49-871b-3666eda635ff-kube-api-access-rh6c4\") pod \"openstack-operator-controller-manager-5b74fbd87-zqsjt\" (UID: \"b3b941f1-576d-4b49-871b-3666eda635ff\") " pod="openstack-operators/openstack-operator-controller-manager-5b74fbd87-zqsjt" Dec 11 14:04:12 crc kubenswrapper[5050]: I1211 14:04:12.804311 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-8jnzj" Dec 11 14:04:12 crc kubenswrapper[5050]: I1211 14:04:12.815066 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-75944c9b7-grhdp" Dec 11 14:04:12 crc kubenswrapper[5050]: I1211 14:04:12.836199 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dgv2z\" (UniqueName: \"kubernetes.io/projected/dc74a2ef-5885-462e-a5b8-7b50454df35b-kube-api-access-dgv2z\") pod \"rabbitmq-cluster-operator-manager-668c99d594-mcj5n\" (UID: \"dc74a2ef-5885-462e-a5b8-7b50454df35b\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-mcj5n" Dec 11 14:04:12 crc kubenswrapper[5050]: I1211 14:04:12.865206 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-697fb699cf-sqjhh"] Dec 11 14:04:12 crc kubenswrapper[5050]: I1211 14:04:12.938527 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f5bec9f7-072c-4c21-80ea-af9f59313eef-cert\") pod \"openstack-baremetal-operator-controller-manager-5c747c4-l2hpx\" (UID: \"f5bec9f7-072c-4c21-80ea-af9f59313eef\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5c747c4-l2hpx" Dec 11 14:04:12 crc kubenswrapper[5050]: I1211 14:04:12.938760 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dgv2z\" (UniqueName: \"kubernetes.io/projected/dc74a2ef-5885-462e-a5b8-7b50454df35b-kube-api-access-dgv2z\") pod \"rabbitmq-cluster-operator-manager-668c99d594-mcj5n\" (UID: \"dc74a2ef-5885-462e-a5b8-7b50454df35b\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-mcj5n" Dec 11 14:04:12 crc kubenswrapper[5050]: E1211 14:04:12.939751 5050 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 11 14:04:12 crc kubenswrapper[5050]: E1211 14:04:12.939849 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f5bec9f7-072c-4c21-80ea-af9f59313eef-cert podName:f5bec9f7-072c-4c21-80ea-af9f59313eef nodeName:}" failed. No retries permitted until 2025-12-11 14:04:13.939813131 +0000 UTC m=+944.783535717 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/f5bec9f7-072c-4c21-80ea-af9f59313eef-cert") pod "openstack-baremetal-operator-controller-manager-5c747c4-l2hpx" (UID: "f5bec9f7-072c-4c21-80ea-af9f59313eef") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 11 14:04:12 crc kubenswrapper[5050]: I1211 14:04:12.960577 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dgv2z\" (UniqueName: \"kubernetes.io/projected/dc74a2ef-5885-462e-a5b8-7b50454df35b-kube-api-access-dgv2z\") pod \"rabbitmq-cluster-operator-manager-668c99d594-mcj5n\" (UID: \"dc74a2ef-5885-462e-a5b8-7b50454df35b\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-mcj5n" Dec 11 14:04:12 crc kubenswrapper[5050]: I1211 14:04:12.966507 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-58d5ff84df-ssfnh" Dec 11 14:04:13 crc kubenswrapper[5050]: I1211 14:04:13.177573 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c6d99b8f-qbc2f"] Dec 11 14:04:13 crc kubenswrapper[5050]: I1211 14:04:13.186502 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-mcj5n" Dec 11 14:04:13 crc kubenswrapper[5050]: I1211 14:04:13.194701 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-6c677c69b-n7crp"] Dec 11 14:04:13 crc kubenswrapper[5050]: I1211 14:04:13.216767 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7765d96ddf-xgbp2"] Dec 11 14:04:13 crc kubenswrapper[5050]: I1211 14:04:13.245712 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b3b941f1-576d-4b49-871b-3666eda635ff-webhook-certs\") pod \"openstack-operator-controller-manager-5b74fbd87-zqsjt\" (UID: \"b3b941f1-576d-4b49-871b-3666eda635ff\") " pod="openstack-operators/openstack-operator-controller-manager-5b74fbd87-zqsjt" Dec 11 14:04:13 crc kubenswrapper[5050]: I1211 14:04:13.245877 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b3b941f1-576d-4b49-871b-3666eda635ff-metrics-certs\") pod \"openstack-operator-controller-manager-5b74fbd87-zqsjt\" (UID: \"b3b941f1-576d-4b49-871b-3666eda635ff\") " pod="openstack-operators/openstack-operator-controller-manager-5b74fbd87-zqsjt" Dec 11 14:04:13 crc kubenswrapper[5050]: E1211 14:04:13.245904 5050 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 11 14:04:13 crc kubenswrapper[5050]: E1211 14:04:13.246092 5050 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 11 14:04:13 crc kubenswrapper[5050]: E1211 14:04:13.246335 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3b941f1-576d-4b49-871b-3666eda635ff-webhook-certs podName:b3b941f1-576d-4b49-871b-3666eda635ff nodeName:}" failed. No retries permitted until 2025-12-11 14:04:14.245962887 +0000 UTC m=+945.089685473 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/b3b941f1-576d-4b49-871b-3666eda635ff-webhook-certs") pod "openstack-operator-controller-manager-5b74fbd87-zqsjt" (UID: "b3b941f1-576d-4b49-871b-3666eda635ff") : secret "webhook-server-cert" not found Dec 11 14:04:13 crc kubenswrapper[5050]: E1211 14:04:13.246375 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3b941f1-576d-4b49-871b-3666eda635ff-metrics-certs podName:b3b941f1-576d-4b49-871b-3666eda635ff nodeName:}" failed. No retries permitted until 2025-12-11 14:04:14.246365948 +0000 UTC m=+945.090088534 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b3b941f1-576d-4b49-871b-3666eda635ff-metrics-certs") pod "openstack-operator-controller-manager-5b74fbd87-zqsjt" (UID: "b3b941f1-576d-4b49-871b-3666eda635ff") : secret "metrics-server-cert" not found Dec 11 14:04:13 crc kubenswrapper[5050]: I1211 14:04:13.347959 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3477354d-838b-48cc-a6c3-612088d82640-cert\") pod \"infra-operator-controller-manager-78d48bff9d-5g8lw\" (UID: \"3477354d-838b-48cc-a6c3-612088d82640\") " pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-5g8lw" Dec 11 14:04:13 crc kubenswrapper[5050]: E1211 14:04:13.348309 5050 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 11 14:04:13 crc kubenswrapper[5050]: E1211 14:04:13.348388 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3477354d-838b-48cc-a6c3-612088d82640-cert podName:3477354d-838b-48cc-a6c3-612088d82640 nodeName:}" failed. No retries permitted until 2025-12-11 14:04:15.348363955 +0000 UTC m=+946.192086541 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/3477354d-838b-48cc-a6c3-612088d82640-cert") pod "infra-operator-controller-manager-78d48bff9d-5g8lw" (UID: "3477354d-838b-48cc-a6c3-612088d82640") : secret "infra-operator-webhook-server-cert" not found Dec 11 14:04:13 crc kubenswrapper[5050]: I1211 14:04:13.575497 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-967d97867-7stc2"] Dec 11 14:04:13 crc kubenswrapper[5050]: I1211 14:04:13.583335 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-5f64f6f8bb-xl9wl"] Dec 11 14:04:13 crc kubenswrapper[5050]: W1211 14:04:13.590040 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd7b70b3b_5481_4ac2_8e60_256e2690752f.slice/crio-12a041e294eefffa37497fd673cdae0e3125c3dfbe9090f397ce5350892287f1 WatchSource:0}: Error finding container 12a041e294eefffa37497fd673cdae0e3125c3dfbe9090f397ce5350892287f1: Status 404 returned error can't find the container with id 12a041e294eefffa37497fd673cdae0e3125c3dfbe9090f397ce5350892287f1 Dec 11 14:04:13 crc kubenswrapper[5050]: I1211 14:04:13.767170 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-qbc2f" event={"ID":"9726c3f9-bcae-4722-a054-5a66c161953b","Type":"ContainerStarted","Data":"6750a02b9502e26ab0d35528039aebad4f6016b940504fe992fc74d397bb7b23"} Dec 11 14:04:13 crc kubenswrapper[5050]: I1211 14:04:13.772415 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-xl9wl" event={"ID":"105854f4-5cc1-491f-983a-50864b37893f","Type":"ContainerStarted","Data":"c63ba182cf5688efaef1753edd4f0bd738e6dc6dc91e24601c750e9570750725"} Dec 11 14:04:13 crc kubenswrapper[5050]: I1211 14:04:13.774390 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-n7crp" event={"ID":"85683dfb-37fb-4301-8c7a-fbb7453b303d","Type":"ContainerStarted","Data":"7c479b7ff027f5b0c4ea529d15ac2634bc552130f8c4de1b3043faf91bef4483"} 
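[annotation] The MountVolume.SetUp failures recorded around this point are retried on a doubling schedule: for each secret volume the log shows durationBeforeRetry 500ms, then 1s, then 2s, and later 4s, i.e. a per-volume exponential backoff while the referenced webhook/metrics secrets do not yet exist. A minimal, illustrative Go sketch of that schedule follows; the 500ms starting delay is taken from the entries above, while the upper bound and the loop itself are assumptions for illustration, not the kubelet's actual implementation.

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        delay := 500 * time.Millisecond // first retry interval seen in the log
        maxDelay := 2 * time.Minute     // assumed cap, not taken from the log
        for attempt := 1; attempt <= 6; attempt++ {
            fmt.Printf("attempt %d: wait %v before retrying MountVolume.SetUp\n", attempt, delay)
            delay *= 2 // double the wait after each failure (500ms -> 1s -> 2s -> 4s ...)
            if delay > maxDelay {
                delay = maxDelay
            }
        }
    }

Once the missing secret (for example infra-operator-webhook-server-cert) is created, the next scheduled retry succeeds and the pod proceeds to start, which is consistent with the later ContainerStarted events in this log.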
Dec 11 14:04:13 crc kubenswrapper[5050]: I1211 14:04:13.777189 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-xgbp2" event={"ID":"06df6c8c-640d-431b-b216-78345a9054e1","Type":"ContainerStarted","Data":"28c69a07e490738f10df7d4b5bc1d265ed2e4634f5074ce8abfb6f4e0540c440"} Dec 11 14:04:13 crc kubenswrapper[5050]: I1211 14:04:13.778319 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-967d97867-7stc2" event={"ID":"d7b70b3b-5481-4ac2-8e60-256e2690752f","Type":"ContainerStarted","Data":"12a041e294eefffa37497fd673cdae0e3125c3dfbe9090f397ce5350892287f1"} Dec 11 14:04:13 crc kubenswrapper[5050]: I1211 14:04:13.779977 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-697fb699cf-sqjhh" event={"ID":"0aa7657b-dbca-4b2b-ac62-7000681a918a","Type":"ContainerStarted","Data":"7eeacd6c19dbeda0e241f66ddae9a008d5b2754d5b13a5838fc0dddd43a62f9a"} Dec 11 14:04:13 crc kubenswrapper[5050]: I1211 14:04:13.963222 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f5bec9f7-072c-4c21-80ea-af9f59313eef-cert\") pod \"openstack-baremetal-operator-controller-manager-5c747c4-l2hpx\" (UID: \"f5bec9f7-072c-4c21-80ea-af9f59313eef\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5c747c4-l2hpx" Dec 11 14:04:13 crc kubenswrapper[5050]: E1211 14:04:13.963465 5050 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 11 14:04:13 crc kubenswrapper[5050]: E1211 14:04:13.963549 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f5bec9f7-072c-4c21-80ea-af9f59313eef-cert podName:f5bec9f7-072c-4c21-80ea-af9f59313eef nodeName:}" failed. No retries permitted until 2025-12-11 14:04:15.963527455 +0000 UTC m=+946.807250041 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/f5bec9f7-072c-4c21-80ea-af9f59313eef-cert") pod "openstack-baremetal-operator-controller-manager-5c747c4-l2hpx" (UID: "f5bec9f7-072c-4c21-80ea-af9f59313eef") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 11 14:04:13 crc kubenswrapper[5050]: I1211 14:04:13.986064 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-79c8c4686c-65swh"] Dec 11 14:04:14 crc kubenswrapper[5050]: I1211 14:04:14.013443 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-pjhfc"] Dec 11 14:04:14 crc kubenswrapper[5050]: I1211 14:04:14.015551 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-5697bb5779-9tcm2"] Dec 11 14:04:14 crc kubenswrapper[5050]: I1211 14:04:14.039103 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-78f8948974-wsc2k"] Dec 11 14:04:14 crc kubenswrapper[5050]: I1211 14:04:14.040994 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-9d58d64bc-8jnzj"] Dec 11 14:04:14 crc kubenswrapper[5050]: W1211 14:04:14.043300 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3c9e825c_0aee_42b9_a7a5_3191486f301d.slice/crio-3bbfbfed3c042409ac28dbe2bfed3c30a53d9e70e9b2a2895ef15d6427a0eb3a WatchSource:0}: Error finding container 3bbfbfed3c042409ac28dbe2bfed3c30a53d9e70e9b2a2895ef15d6427a0eb3a: Status 404 returned error can't find the container with id 3bbfbfed3c042409ac28dbe2bfed3c30a53d9e70e9b2a2895ef15d6427a0eb3a Dec 11 14:04:14 crc kubenswrapper[5050]: I1211 14:04:14.056384 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-5b5fd79c9c-jq9vt"] Dec 11 14:04:14 crc kubenswrapper[5050]: I1211 14:04:14.071927 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-697bc559fc-h4w7p"] Dec 11 14:04:14 crc kubenswrapper[5050]: W1211 14:04:14.095722 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9c82f51b_e2a0_49e6_bc0e_d7679e439a6f.slice/crio-f3fee1e29872b85d76f4f6a26a6d05f7060041f0b17f22b155c385ccab7c1257 WatchSource:0}: Error finding container f3fee1e29872b85d76f4f6a26a6d05f7060041f0b17f22b155c385ccab7c1257: Status 404 returned error can't find the container with id f3fee1e29872b85d76f4f6a26a6d05f7060041f0b17f22b155c385ccab7c1257 Dec 11 14:04:14 crc kubenswrapper[5050]: I1211 14:04:14.096517 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5854674fcc-qqb7f"] Dec 11 14:04:14 crc kubenswrapper[5050]: W1211 14:04:14.101413 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod12778398_2baa_44cb_9fd1_f2034870e9fc.slice/crio-18eba7b2f2e21a3fa9823970a42fdf43696ad0b1314d45e3bd2877b54c51c4ce WatchSource:0}: Error finding container 18eba7b2f2e21a3fa9823970a42fdf43696ad0b1314d45e3bd2877b54c51c4ce: Status 404 returned error can't find the container with id 18eba7b2f2e21a3fa9823970a42fdf43696ad0b1314d45e3bd2877b54c51c4ce Dec 11 14:04:14 crc kubenswrapper[5050]: I1211 
14:04:14.107411 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-75944c9b7-grhdp"] Dec 11 14:04:14 crc kubenswrapper[5050]: I1211 14:04:14.120727 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-998648c74-lvbdb"] Dec 11 14:04:14 crc kubenswrapper[5050]: I1211 14:04:14.129448 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-b6456fdb6-ttg8w"] Dec 11 14:04:14 crc kubenswrapper[5050]: I1211 14:04:14.139273 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-58d5ff84df-ssfnh"] Dec 11 14:04:14 crc kubenswrapper[5050]: I1211 14:04:14.149205 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-mcj5n"] Dec 11 14:04:14 crc kubenswrapper[5050]: E1211 14:04:14.153496 5050 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:101b3e007d8c9f2e183262d7712f986ad51256448099069bc14f1ea5f997ab94,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hklrh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-5854674fcc-qqb7f_openstack-operators(12778398-2baa-44cb-9fd1-f2034870e9fc): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 11 14:04:14 crc kubenswrapper[5050]: E1211 14:04:14.162319 5050 
kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hklrh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-5854674fcc-qqb7f_openstack-operators(12778398-2baa-44cb-9fd1-f2034870e9fc): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 11 14:04:14 crc kubenswrapper[5050]: E1211 14:04:14.163722 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/test-operator-controller-manager-5854674fcc-qqb7f" podUID="12778398-2baa-44cb-9fd1-f2034870e9fc" Dec 11 14:04:14 crc kubenswrapper[5050]: E1211 14:04:14.172676 5050 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:635a4aef9d6f0b799e8ec91333dbb312160c001d05b3c63f614c124e0b67cb59,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9zks8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-b6456fdb6-ttg8w_openstack-operators(b885fa10-3ed3-41fd-94ae-2b7442519450): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 11 14:04:14 crc kubenswrapper[5050]: E1211 14:04:14.174320 5050 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:f27e732ec1faee765461bf137d9be81278b2fa39675019a73622755e1e610b6f,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-j2srj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-58d5ff84df-ssfnh_openstack-operators(bff5d533-0728-4436-bdeb-c725bf04bdb3): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 11 14:04:14 crc kubenswrapper[5050]: E1211 14:04:14.179509 5050 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9zks8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-b6456fdb6-ttg8w_openstack-operators(b885fa10-3ed3-41fd-94ae-2b7442519450): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 11 14:04:14 crc kubenswrapper[5050]: E1211 14:04:14.179704 5050 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dgv2z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-mcj5n_openstack-operators(dc74a2ef-5885-462e-a5b8-7b50454df35b): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 11 14:04:14 crc kubenswrapper[5050]: E1211 14:04:14.182142 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-mcj5n" podUID="dc74a2ef-5885-462e-a5b8-7b50454df35b" Dec 11 14:04:14 crc kubenswrapper[5050]: E1211 14:04:14.182236 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-ttg8w" podUID="b885fa10-3ed3-41fd-94ae-2b7442519450" Dec 11 14:04:14 crc kubenswrapper[5050]: E1211 14:04:14.182132 5050 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-j2srj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-58d5ff84df-ssfnh_openstack-operators(bff5d533-0728-4436-bdeb-c725bf04bdb3): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 11 14:04:14 crc kubenswrapper[5050]: E1211 14:04:14.187260 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/telemetry-operator-controller-manager-58d5ff84df-ssfnh" podUID="bff5d533-0728-4436-bdeb-c725bf04bdb3" Dec 11 14:04:14 crc kubenswrapper[5050]: I1211 14:04:14.277445 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b3b941f1-576d-4b49-871b-3666eda635ff-webhook-certs\") pod \"openstack-operator-controller-manager-5b74fbd87-zqsjt\" (UID: \"b3b941f1-576d-4b49-871b-3666eda635ff\") " pod="openstack-operators/openstack-operator-controller-manager-5b74fbd87-zqsjt" Dec 11 14:04:14 crc kubenswrapper[5050]: I1211 14:04:14.277592 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b3b941f1-576d-4b49-871b-3666eda635ff-metrics-certs\") pod \"openstack-operator-controller-manager-5b74fbd87-zqsjt\" (UID: \"b3b941f1-576d-4b49-871b-3666eda635ff\") " pod="openstack-operators/openstack-operator-controller-manager-5b74fbd87-zqsjt" Dec 11 14:04:14 crc kubenswrapper[5050]: E1211 14:04:14.277678 5050 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 11 14:04:14 crc kubenswrapper[5050]: E1211 14:04:14.277800 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3b941f1-576d-4b49-871b-3666eda635ff-webhook-certs podName:b3b941f1-576d-4b49-871b-3666eda635ff nodeName:}" failed. No retries permitted until 2025-12-11 14:04:16.277780559 +0000 UTC m=+947.121503135 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/b3b941f1-576d-4b49-871b-3666eda635ff-webhook-certs") pod "openstack-operator-controller-manager-5b74fbd87-zqsjt" (UID: "b3b941f1-576d-4b49-871b-3666eda635ff") : secret "webhook-server-cert" not found Dec 11 14:04:14 crc kubenswrapper[5050]: E1211 14:04:14.277916 5050 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 11 14:04:14 crc kubenswrapper[5050]: E1211 14:04:14.277991 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3b941f1-576d-4b49-871b-3666eda635ff-metrics-certs podName:b3b941f1-576d-4b49-871b-3666eda635ff nodeName:}" failed. No retries permitted until 2025-12-11 14:04:16.277970134 +0000 UTC m=+947.121692710 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b3b941f1-576d-4b49-871b-3666eda635ff-metrics-certs") pod "openstack-operator-controller-manager-5b74fbd87-zqsjt" (UID: "b3b941f1-576d-4b49-871b-3666eda635ff") : secret "metrics-server-cert" not found Dec 11 14:04:14 crc kubenswrapper[5050]: I1211 14:04:14.796248 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-8jnzj" event={"ID":"a6345cf8-abc2-4c9a-bfe6-8b65187ada2d","Type":"ContainerStarted","Data":"36bc4a97e9f4eef6084465be6a78e6298e47dcebbf226a9f792ca5f476a197c4"} Dec 11 14:04:14 crc kubenswrapper[5050]: I1211 14:04:14.799376 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-75944c9b7-grhdp" event={"ID":"712f888b-7c45-4c1f-95d8-ccc464b7c15f","Type":"ContainerStarted","Data":"287cfbce9aec0ef19bd933878e588becde310732e61866d9bafb5b0c131d4253"} Dec 11 14:04:14 crc kubenswrapper[5050]: I1211 14:04:14.818557 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5854674fcc-qqb7f" event={"ID":"12778398-2baa-44cb-9fd1-f2034870e9fc","Type":"ContainerStarted","Data":"18eba7b2f2e21a3fa9823970a42fdf43696ad0b1314d45e3bd2877b54c51c4ce"} Dec 11 14:04:14 crc kubenswrapper[5050]: I1211 14:04:14.821083 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-ttg8w" event={"ID":"b885fa10-3ed3-41fd-94ae-2b7442519450","Type":"ContainerStarted","Data":"4ff0a537aa1571b97ba523a05c0a78cffac73beb29182ab25562951ee595ea74"} Dec 11 14:04:14 crc kubenswrapper[5050]: E1211 14:04:14.824369 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:101b3e007d8c9f2e183262d7712f986ad51256448099069bc14f1ea5f997ab94\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/test-operator-controller-manager-5854674fcc-qqb7f" podUID="12778398-2baa-44cb-9fd1-f2034870e9fc" Dec 11 14:04:14 crc kubenswrapper[5050]: E1211 14:04:14.824555 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:635a4aef9d6f0b799e8ec91333dbb312160c001d05b3c63f614c124e0b67cb59\\\"\", failed to \"StartContainer\" for 
\"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-ttg8w" podUID="b885fa10-3ed3-41fd-94ae-2b7442519450" Dec 11 14:04:14 crc kubenswrapper[5050]: I1211 14:04:14.825747 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-5697bb5779-9tcm2" event={"ID":"048e17a7-0123-45a2-b698-02def3db74fe","Type":"ContainerStarted","Data":"cc1860b15a928da635f75756f081d5c05b2ec693f0d58fc500c01710585f6c8f"} Dec 11 14:04:14 crc kubenswrapper[5050]: I1211 14:04:14.829799 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-h4w7p" event={"ID":"9c82f51b-e2a0-49e6-bc0e-d7679e439a6f","Type":"ContainerStarted","Data":"f3fee1e29872b85d76f4f6a26a6d05f7060041f0b17f22b155c385ccab7c1257"} Dec 11 14:04:14 crc kubenswrapper[5050]: I1211 14:04:14.833148 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-jq9vt" event={"ID":"5e11a0d1-4179-4621-803d-839196fb940b","Type":"ContainerStarted","Data":"d5391f28ebdfdc7d03076197c0ff47eddc45e66ef79dde3e68e1ed60f5c2d7c0"} Dec 11 14:04:14 crc kubenswrapper[5050]: I1211 14:04:14.835502 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-mcj5n" event={"ID":"dc74a2ef-5885-462e-a5b8-7b50454df35b","Type":"ContainerStarted","Data":"53808a54d67eae8a975bfb95a6c95bda23e21b760a526d394f67a1343720735a"} Dec 11 14:04:14 crc kubenswrapper[5050]: E1211 14:04:14.839498 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-mcj5n" podUID="dc74a2ef-5885-462e-a5b8-7b50454df35b" Dec 11 14:04:14 crc kubenswrapper[5050]: I1211 14:04:14.844703 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-pjhfc" event={"ID":"3c9e825c-0aee-42b9-a7a5-3191486f301d","Type":"ContainerStarted","Data":"3bbfbfed3c042409ac28dbe2bfed3c30a53d9e70e9b2a2895ef15d6427a0eb3a"} Dec 11 14:04:14 crc kubenswrapper[5050]: I1211 14:04:14.847437 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-78f8948974-wsc2k" event={"ID":"cedca20e-aaaa-4190-944d-8f18bd93f737","Type":"ContainerStarted","Data":"712119c5ae79bf39ab8f77750540893eed6c4f6f1b6eab64cf83e7b9cabcbe82"} Dec 11 14:04:14 crc kubenswrapper[5050]: I1211 14:04:14.848794 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-998648c74-lvbdb" event={"ID":"fa0985f7-7d87-41b3-9916-f22375a0489c","Type":"ContainerStarted","Data":"6139ed061c0a3ba30288cebdce65f9a029f07ce8d2a7b70782dc58b75a9f4fc9"} Dec 11 14:04:14 crc kubenswrapper[5050]: I1211 14:04:14.852109 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-58d5ff84df-ssfnh" event={"ID":"bff5d533-0728-4436-bdeb-c725bf04bdb3","Type":"ContainerStarted","Data":"4cc1219b0f03b8520e47ea015e3d6202a0f1fd4b7873672d8f888b692102fcfc"} Dec 11 14:04:14 crc 
kubenswrapper[5050]: E1211 14:04:14.867773 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:f27e732ec1faee765461bf137d9be81278b2fa39675019a73622755e1e610b6f\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/telemetry-operator-controller-manager-58d5ff84df-ssfnh" podUID="bff5d533-0728-4436-bdeb-c725bf04bdb3" Dec 11 14:04:14 crc kubenswrapper[5050]: I1211 14:04:14.869582 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-65swh" event={"ID":"58a60a6f-fcf5-4aa0-b6e4-97d7df2b2bd2","Type":"ContainerStarted","Data":"37f8a196fe7e6ebcdb2935c5a8264fc4beb33038685f2fd3ac4524265dab42a7"} Dec 11 14:04:15 crc kubenswrapper[5050]: I1211 14:04:15.416801 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3477354d-838b-48cc-a6c3-612088d82640-cert\") pod \"infra-operator-controller-manager-78d48bff9d-5g8lw\" (UID: \"3477354d-838b-48cc-a6c3-612088d82640\") " pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-5g8lw" Dec 11 14:04:15 crc kubenswrapper[5050]: E1211 14:04:15.417153 5050 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 11 14:04:15 crc kubenswrapper[5050]: E1211 14:04:15.417616 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3477354d-838b-48cc-a6c3-612088d82640-cert podName:3477354d-838b-48cc-a6c3-612088d82640 nodeName:}" failed. No retries permitted until 2025-12-11 14:04:19.417572868 +0000 UTC m=+950.261295494 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/3477354d-838b-48cc-a6c3-612088d82640-cert") pod "infra-operator-controller-manager-78d48bff9d-5g8lw" (UID: "3477354d-838b-48cc-a6c3-612088d82640") : secret "infra-operator-webhook-server-cert" not found Dec 11 14:04:15 crc kubenswrapper[5050]: E1211 14:04:15.888797 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-mcj5n" podUID="dc74a2ef-5885-462e-a5b8-7b50454df35b" Dec 11 14:04:15 crc kubenswrapper[5050]: E1211 14:04:15.904096 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:f27e732ec1faee765461bf137d9be81278b2fa39675019a73622755e1e610b6f\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/telemetry-operator-controller-manager-58d5ff84df-ssfnh" podUID="bff5d533-0728-4436-bdeb-c725bf04bdb3" Dec 11 14:04:15 crc kubenswrapper[5050]: E1211 14:04:15.911389 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:635a4aef9d6f0b799e8ec91333dbb312160c001d05b3c63f614c124e0b67cb59\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-ttg8w" podUID="b885fa10-3ed3-41fd-94ae-2b7442519450" Dec 11 14:04:15 crc kubenswrapper[5050]: E1211 14:04:15.912764 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:101b3e007d8c9f2e183262d7712f986ad51256448099069bc14f1ea5f997ab94\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/test-operator-controller-manager-5854674fcc-qqb7f" podUID="12778398-2baa-44cb-9fd1-f2034870e9fc" Dec 11 14:04:16 crc kubenswrapper[5050]: E1211 14:04:16.035207 5050 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 11 14:04:16 crc kubenswrapper[5050]: E1211 14:04:16.035318 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f5bec9f7-072c-4c21-80ea-af9f59313eef-cert podName:f5bec9f7-072c-4c21-80ea-af9f59313eef nodeName:}" failed. No retries permitted until 2025-12-11 14:04:20.035295586 +0000 UTC m=+950.879018172 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/f5bec9f7-072c-4c21-80ea-af9f59313eef-cert") pod "openstack-baremetal-operator-controller-manager-5c747c4-l2hpx" (UID: "f5bec9f7-072c-4c21-80ea-af9f59313eef") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 11 14:04:16 crc kubenswrapper[5050]: I1211 14:04:16.035728 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f5bec9f7-072c-4c21-80ea-af9f59313eef-cert\") pod \"openstack-baremetal-operator-controller-manager-5c747c4-l2hpx\" (UID: \"f5bec9f7-072c-4c21-80ea-af9f59313eef\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5c747c4-l2hpx" Dec 11 14:04:16 crc kubenswrapper[5050]: I1211 14:04:16.344060 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b3b941f1-576d-4b49-871b-3666eda635ff-webhook-certs\") pod \"openstack-operator-controller-manager-5b74fbd87-zqsjt\" (UID: \"b3b941f1-576d-4b49-871b-3666eda635ff\") " pod="openstack-operators/openstack-operator-controller-manager-5b74fbd87-zqsjt" Dec 11 14:04:16 crc kubenswrapper[5050]: E1211 14:04:16.344229 5050 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 11 14:04:16 crc kubenswrapper[5050]: I1211 14:04:16.344253 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b3b941f1-576d-4b49-871b-3666eda635ff-metrics-certs\") pod \"openstack-operator-controller-manager-5b74fbd87-zqsjt\" (UID: \"b3b941f1-576d-4b49-871b-3666eda635ff\") " pod="openstack-operators/openstack-operator-controller-manager-5b74fbd87-zqsjt" Dec 11 14:04:16 crc kubenswrapper[5050]: E1211 14:04:16.344346 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3b941f1-576d-4b49-871b-3666eda635ff-webhook-certs podName:b3b941f1-576d-4b49-871b-3666eda635ff nodeName:}" failed. No retries permitted until 2025-12-11 14:04:20.344307469 +0000 UTC m=+951.188030055 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/b3b941f1-576d-4b49-871b-3666eda635ff-webhook-certs") pod "openstack-operator-controller-manager-5b74fbd87-zqsjt" (UID: "b3b941f1-576d-4b49-871b-3666eda635ff") : secret "webhook-server-cert" not found Dec 11 14:04:16 crc kubenswrapper[5050]: E1211 14:04:16.344380 5050 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 11 14:04:16 crc kubenswrapper[5050]: E1211 14:04:16.344420 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3b941f1-576d-4b49-871b-3666eda635ff-metrics-certs podName:b3b941f1-576d-4b49-871b-3666eda635ff nodeName:}" failed. No retries permitted until 2025-12-11 14:04:20.344408762 +0000 UTC m=+951.188131348 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b3b941f1-576d-4b49-871b-3666eda635ff-metrics-certs") pod "openstack-operator-controller-manager-5b74fbd87-zqsjt" (UID: "b3b941f1-576d-4b49-871b-3666eda635ff") : secret "metrics-server-cert" not found Dec 11 14:04:19 crc kubenswrapper[5050]: I1211 14:04:19.420460 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3477354d-838b-48cc-a6c3-612088d82640-cert\") pod \"infra-operator-controller-manager-78d48bff9d-5g8lw\" (UID: \"3477354d-838b-48cc-a6c3-612088d82640\") " pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-5g8lw" Dec 11 14:04:19 crc kubenswrapper[5050]: E1211 14:04:19.420759 5050 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 11 14:04:19 crc kubenswrapper[5050]: E1211 14:04:19.421261 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3477354d-838b-48cc-a6c3-612088d82640-cert podName:3477354d-838b-48cc-a6c3-612088d82640 nodeName:}" failed. No retries permitted until 2025-12-11 14:04:27.421233464 +0000 UTC m=+958.264956050 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/3477354d-838b-48cc-a6c3-612088d82640-cert") pod "infra-operator-controller-manager-78d48bff9d-5g8lw" (UID: "3477354d-838b-48cc-a6c3-612088d82640") : secret "infra-operator-webhook-server-cert" not found Dec 11 14:04:20 crc kubenswrapper[5050]: I1211 14:04:20.036959 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f5bec9f7-072c-4c21-80ea-af9f59313eef-cert\") pod \"openstack-baremetal-operator-controller-manager-5c747c4-l2hpx\" (UID: \"f5bec9f7-072c-4c21-80ea-af9f59313eef\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5c747c4-l2hpx" Dec 11 14:04:20 crc kubenswrapper[5050]: E1211 14:04:20.037806 5050 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 11 14:04:20 crc kubenswrapper[5050]: E1211 14:04:20.037883 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f5bec9f7-072c-4c21-80ea-af9f59313eef-cert podName:f5bec9f7-072c-4c21-80ea-af9f59313eef nodeName:}" failed. No retries permitted until 2025-12-11 14:04:28.037854752 +0000 UTC m=+958.881577328 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/f5bec9f7-072c-4c21-80ea-af9f59313eef-cert") pod "openstack-baremetal-operator-controller-manager-5c747c4-l2hpx" (UID: "f5bec9f7-072c-4c21-80ea-af9f59313eef") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 11 14:04:20 crc kubenswrapper[5050]: I1211 14:04:20.346183 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b3b941f1-576d-4b49-871b-3666eda635ff-metrics-certs\") pod \"openstack-operator-controller-manager-5b74fbd87-zqsjt\" (UID: \"b3b941f1-576d-4b49-871b-3666eda635ff\") " pod="openstack-operators/openstack-operator-controller-manager-5b74fbd87-zqsjt" Dec 11 14:04:20 crc kubenswrapper[5050]: I1211 14:04:20.346301 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b3b941f1-576d-4b49-871b-3666eda635ff-webhook-certs\") pod \"openstack-operator-controller-manager-5b74fbd87-zqsjt\" (UID: \"b3b941f1-576d-4b49-871b-3666eda635ff\") " pod="openstack-operators/openstack-operator-controller-manager-5b74fbd87-zqsjt" Dec 11 14:04:20 crc kubenswrapper[5050]: E1211 14:04:20.346549 5050 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 11 14:04:20 crc kubenswrapper[5050]: E1211 14:04:20.346630 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3b941f1-576d-4b49-871b-3666eda635ff-webhook-certs podName:b3b941f1-576d-4b49-871b-3666eda635ff nodeName:}" failed. No retries permitted until 2025-12-11 14:04:28.346602788 +0000 UTC m=+959.190325384 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/b3b941f1-576d-4b49-871b-3666eda635ff-webhook-certs") pod "openstack-operator-controller-manager-5b74fbd87-zqsjt" (UID: "b3b941f1-576d-4b49-871b-3666eda635ff") : secret "webhook-server-cert" not found Dec 11 14:04:20 crc kubenswrapper[5050]: E1211 14:04:20.347227 5050 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 11 14:04:20 crc kubenswrapper[5050]: E1211 14:04:20.347364 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3b941f1-576d-4b49-871b-3666eda635ff-metrics-certs podName:b3b941f1-576d-4b49-871b-3666eda635ff nodeName:}" failed. No retries permitted until 2025-12-11 14:04:28.347332808 +0000 UTC m=+959.191055434 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b3b941f1-576d-4b49-871b-3666eda635ff-metrics-certs") pod "openstack-operator-controller-manager-5b74fbd87-zqsjt" (UID: "b3b941f1-576d-4b49-871b-3666eda635ff") : secret "metrics-server-cert" not found Dec 11 14:04:27 crc kubenswrapper[5050]: E1211 14:04:27.193444 5050 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ironic-operator@sha256:5bdb3685be3ddc1efd62e16aaf2fa96ead64315e26d52b1b2a7d8ac01baa1e87" Dec 11 14:04:27 crc kubenswrapper[5050]: E1211 14:04:27.194622 5050 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ironic-operator@sha256:5bdb3685be3ddc1efd62e16aaf2fa96ead64315e26d52b1b2a7d8ac01baa1e87,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2cn45,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ironic-operator-controller-manager-967d97867-7stc2_openstack-operators(d7b70b3b-5481-4ac2-8e60-256e2690752f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 11 14:04:27 crc kubenswrapper[5050]: I1211 14:04:27.488045 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3477354d-838b-48cc-a6c3-612088d82640-cert\") pod \"infra-operator-controller-manager-78d48bff9d-5g8lw\" (UID: \"3477354d-838b-48cc-a6c3-612088d82640\") " 
pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-5g8lw" Dec 11 14:04:27 crc kubenswrapper[5050]: E1211 14:04:27.488241 5050 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 11 14:04:27 crc kubenswrapper[5050]: E1211 14:04:27.488327 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3477354d-838b-48cc-a6c3-612088d82640-cert podName:3477354d-838b-48cc-a6c3-612088d82640 nodeName:}" failed. No retries permitted until 2025-12-11 14:04:43.488308615 +0000 UTC m=+974.332031201 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/3477354d-838b-48cc-a6c3-612088d82640-cert") pod "infra-operator-controller-manager-78d48bff9d-5g8lw" (UID: "3477354d-838b-48cc-a6c3-612088d82640") : secret "infra-operator-webhook-server-cert" not found Dec 11 14:04:28 crc kubenswrapper[5050]: I1211 14:04:28.098770 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f5bec9f7-072c-4c21-80ea-af9f59313eef-cert\") pod \"openstack-baremetal-operator-controller-manager-5c747c4-l2hpx\" (UID: \"f5bec9f7-072c-4c21-80ea-af9f59313eef\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5c747c4-l2hpx" Dec 11 14:04:28 crc kubenswrapper[5050]: I1211 14:04:28.108125 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f5bec9f7-072c-4c21-80ea-af9f59313eef-cert\") pod \"openstack-baremetal-operator-controller-manager-5c747c4-l2hpx\" (UID: \"f5bec9f7-072c-4c21-80ea-af9f59313eef\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5c747c4-l2hpx" Dec 11 14:04:28 crc kubenswrapper[5050]: I1211 14:04:28.230110 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5c747c4-l2hpx" Dec 11 14:04:28 crc kubenswrapper[5050]: I1211 14:04:28.405053 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b3b941f1-576d-4b49-871b-3666eda635ff-metrics-certs\") pod \"openstack-operator-controller-manager-5b74fbd87-zqsjt\" (UID: \"b3b941f1-576d-4b49-871b-3666eda635ff\") " pod="openstack-operators/openstack-operator-controller-manager-5b74fbd87-zqsjt" Dec 11 14:04:28 crc kubenswrapper[5050]: I1211 14:04:28.405281 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b3b941f1-576d-4b49-871b-3666eda635ff-webhook-certs\") pod \"openstack-operator-controller-manager-5b74fbd87-zqsjt\" (UID: \"b3b941f1-576d-4b49-871b-3666eda635ff\") " pod="openstack-operators/openstack-operator-controller-manager-5b74fbd87-zqsjt" Dec 11 14:04:28 crc kubenswrapper[5050]: E1211 14:04:28.405367 5050 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 11 14:04:28 crc kubenswrapper[5050]: E1211 14:04:28.405504 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3b941f1-576d-4b49-871b-3666eda635ff-metrics-certs podName:b3b941f1-576d-4b49-871b-3666eda635ff nodeName:}" failed. No retries permitted until 2025-12-11 14:04:44.405470208 +0000 UTC m=+975.249192794 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b3b941f1-576d-4b49-871b-3666eda635ff-metrics-certs") pod "openstack-operator-controller-manager-5b74fbd87-zqsjt" (UID: "b3b941f1-576d-4b49-871b-3666eda635ff") : secret "metrics-server-cert" not found Dec 11 14:04:28 crc kubenswrapper[5050]: E1211 14:04:28.405537 5050 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 11 14:04:28 crc kubenswrapper[5050]: E1211 14:04:28.405636 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3b941f1-576d-4b49-871b-3666eda635ff-webhook-certs podName:b3b941f1-576d-4b49-871b-3666eda635ff nodeName:}" failed. No retries permitted until 2025-12-11 14:04:44.405606482 +0000 UTC m=+975.249329098 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/b3b941f1-576d-4b49-871b-3666eda635ff-webhook-certs") pod "openstack-operator-controller-manager-5b74fbd87-zqsjt" (UID: "b3b941f1-576d-4b49-871b-3666eda635ff") : secret "webhook-server-cert" not found Dec 11 14:04:28 crc kubenswrapper[5050]: E1211 14:04:28.519782 5050 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/mariadb-operator@sha256:424da951f13f1fbe9083215dc9f5088f90676dd813f01fdf3c1a8639b61cbaad" Dec 11 14:04:28 crc kubenswrapper[5050]: E1211 14:04:28.520494 5050 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:424da951f13f1fbe9083215dc9f5088f90676dd813f01fdf3c1a8639b61cbaad,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fwr46,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-79c8c4686c-65swh_openstack-operators(58a60a6f-fcf5-4aa0-b6e4-97d7df2b2bd2): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 11 14:04:30 crc kubenswrapper[5050]: E1211 14:04:30.320697 5050 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:72ad6517987f674af0d0ae092cbb874aeae909c8b8b60188099c311762ebc8f7" Dec 11 14:04:30 crc kubenswrapper[5050]: E1211 14:04:30.320915 5050 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:72ad6517987f674af0d0ae092cbb874aeae909c8b8b60188099c311762ebc8f7,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kf4nm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-7765d96ddf-xgbp2_openstack-operators(06df6c8c-640d-431b-b216-78345a9054e1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 11 14:04:31 crc kubenswrapper[5050]: E1211 14:04:31.142280 5050 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/cinder-operator@sha256:981b6a8f95934a86c5f10ef6e198b07265aeba7f11cf84b9ccd13dfaf06f3ca3" Dec 11 14:04:31 crc kubenswrapper[5050]: E1211 14:04:31.142928 5050 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/cinder-operator@sha256:981b6a8f95934a86c5f10ef6e198b07265aeba7f11cf84b9ccd13dfaf06f3ca3,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ntd8d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-operator-controller-manager-6c677c69b-n7crp_openstack-operators(85683dfb-37fb-4301-8c7a-fbb7453b303d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 11 14:04:31 crc kubenswrapper[5050]: E1211 14:04:31.992152 5050 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/watcher-operator@sha256:961417d59f527d925ac48ff6a11de747d0493315e496e34dc83d76a1a1fff58a" Dec 11 14:04:31 crc kubenswrapper[5050]: E1211 14:04:31.992372 5050 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:961417d59f527d925ac48ff6a11de747d0493315e496e34dc83d76a1a1fff58a,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xfqbt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-75944c9b7-grhdp_openstack-operators(712f888b-7c45-4c1f-95d8-ccc464b7c15f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 11 14:04:32 crc kubenswrapper[5050]: E1211 14:04:32.634951 5050 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/placement-operator@sha256:d29650b006da97eb9178fcc58f2eb9fead8c2b414fac18f86a3c3a1507488c4f" Dec 11 14:04:32 crc kubenswrapper[5050]: E1211 14:04:32.635732 5050 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:d29650b006da97eb9178fcc58f2eb9fead8c2b414fac18f86a3c3a1507488c4f,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-g2lxv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-78f8948974-wsc2k_openstack-operators(cedca20e-aaaa-4190-944d-8f18bd93f737): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 11 14:04:33 crc kubenswrapper[5050]: E1211 14:04:33.365071 5050 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/manila-operator@sha256:44126f9c6b1d2bf752ddf989e20a4fc4cc1c07723d4fcb78465ccb2f55da6b3a" Dec 11 14:04:33 crc kubenswrapper[5050]: E1211 14:04:33.368084 5050 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:44126f9c6b1d2bf752ddf989e20a4fc4cc1c07723d4fcb78465ccb2f55da6b3a,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fv76z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-5b5fd79c9c-jq9vt_openstack-operators(5e11a0d1-4179-4621-803d-839196fb940b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 11 14:04:34 crc kubenswrapper[5050]: E1211 14:04:34.133792 5050 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/glance-operator@sha256:5370dc4a8e776923eec00bb50cbdb2e390e9dde50be26bdc04a216bd2d6b5027" Dec 11 14:04:34 crc kubenswrapper[5050]: E1211 14:04:34.134025 5050 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/glance-operator@sha256:5370dc4a8e776923eec00bb50cbdb2e390e9dde50be26bdc04a216bd2d6b5027,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-sc8wh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-operator-controller-manager-5697bb5779-9tcm2_openstack-operators(048e17a7-0123-45a2-b698-02def3db74fe): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 11 14:04:34 crc kubenswrapper[5050]: E1211 14:04:34.707732 5050 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/octavia-operator@sha256:d9a3694865a7d54ee96397add18c3898886e98d079aa20876a0f4de1fa7a7168" Dec 11 14:04:34 crc kubenswrapper[5050]: E1211 14:04:34.707960 5050 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:d9a3694865a7d54ee96397add18c3898886e98d079aa20876a0f4de1fa7a7168,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-p7ffv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-998648c74-lvbdb_openstack-operators(fa0985f7-7d87-41b3-9916-f22375a0489c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 11 14:04:35 crc kubenswrapper[5050]: E1211 14:04:35.202565 5050 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/horizon-operator@sha256:9e847f4dbdea19ab997f32a02b3680a9bd966f9c705911645c3866a19fda9ea5" Dec 11 14:04:35 crc kubenswrapper[5050]: E1211 14:04:35.202846 5050 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/horizon-operator@sha256:9e847f4dbdea19ab997f32a02b3680a9bd966f9c705911645c3866a19fda9ea5,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pz6sx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-operator-controller-manager-68c6d99b8f-qbc2f_openstack-operators(9726c3f9-bcae-4722-a054-5a66c161953b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 11 14:04:35 crc kubenswrapper[5050]: E1211 14:04:35.753269 5050 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/swift-operator@sha256:3aa109bb973253ae9dcf339b9b65abbd1176cdb4be672c93e538a5f113816991" Dec 11 14:04:35 crc kubenswrapper[5050]: E1211 14:04:35.754092 5050 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:3aa109bb973253ae9dcf339b9b65abbd1176cdb4be672c93e538a5f113816991,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vh69p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-9d58d64bc-8jnzj_openstack-operators(a6345cf8-abc2-4c9a-bfe6-8b65187ada2d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 11 14:04:39 crc kubenswrapper[5050]: E1211 14:04:39.440273 5050 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:779f0cee6024d0fb8f259b036fe790e62aa5a3b0431ea9bf15a6e7d02e2e5670" Dec 11 14:04:39 crc kubenswrapper[5050]: E1211 14:04:39.440893 5050 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:779f0cee6024d0fb8f259b036fe790e62aa5a3b0431ea9bf15a6e7d02e2e5670,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dlx75,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-697bc559fc-h4w7p_openstack-operators(9c82f51b-e2a0-49e6-bc0e-d7679e439a6f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 11 14:04:40 crc kubenswrapper[5050]: I1211 14:04:40.316805 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-5c747c4-l2hpx"] Dec 11 14:04:40 crc kubenswrapper[5050]: W1211 14:04:40.542907 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf5bec9f7_072c_4c21_80ea_af9f59313eef.slice/crio-49b0d136f078d5a6dfa8ce9b91c3c3c69d3339d5cb2cc54a9b8e28d228ef79c2 WatchSource:0}: Error finding container 49b0d136f078d5a6dfa8ce9b91c3c3c69d3339d5cb2cc54a9b8e28d228ef79c2: Status 404 returned error can't find the container with id 49b0d136f078d5a6dfa8ce9b91c3c3c69d3339d5cb2cc54a9b8e28d228ef79c2 Dec 11 14:04:41 crc kubenswrapper[5050]: I1211 14:04:41.146660 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-qdrgd" event={"ID":"4b4e8aa9-a0f6-4bd2-92d2-c524aedf6389","Type":"ContainerStarted","Data":"af7a54f4263f021d4aff8a9a4ae17f2163472123dd835db7a0edb1c97d4ed3a2"} Dec 11 14:04:41 crc kubenswrapper[5050]: I1211 14:04:41.149424 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-xl9wl" event={"ID":"105854f4-5cc1-491f-983a-50864b37893f","Type":"ContainerStarted","Data":"559fe69c268927c75729b6de3c99a83913eae1a1967823238e78bb9a9e507a28"} Dec 11 14:04:41 crc kubenswrapper[5050]: I1211 14:04:41.151108 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5c747c4-l2hpx" event={"ID":"f5bec9f7-072c-4c21-80ea-af9f59313eef","Type":"ContainerStarted","Data":"49b0d136f078d5a6dfa8ce9b91c3c3c69d3339d5cb2cc54a9b8e28d228ef79c2"} Dec 11 14:04:42 crc kubenswrapper[5050]: I1211 14:04:42.166858 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-697fb699cf-sqjhh" event={"ID":"0aa7657b-dbca-4b2b-ac62-7000681a918a","Type":"ContainerStarted","Data":"759eebf464b9b81e3e452ececdef6061836e4d6a44710cd6dbbb7f8042ffb464"} Dec 11 14:04:42 crc kubenswrapper[5050]: I1211 14:04:42.189888 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-ttg8w" event={"ID":"b885fa10-3ed3-41fd-94ae-2b7442519450","Type":"ContainerStarted","Data":"aec394cc63296e9f55ffee2ce659881063d43cd980faccaee47e6fbb8456acd5"} Dec 11 14:04:42 crc 
kubenswrapper[5050]: I1211 14:04:42.211404 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-58d5ff84df-ssfnh" event={"ID":"bff5d533-0728-4436-bdeb-c725bf04bdb3","Type":"ContainerStarted","Data":"cc1aebcc94f6c4f1c94befd20ad91bc84f12d60166bde82ab44388d1cde4d3bb"} Dec 11 14:04:42 crc kubenswrapper[5050]: I1211 14:04:42.222545 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-pjhfc" event={"ID":"3c9e825c-0aee-42b9-a7a5-3191486f301d","Type":"ContainerStarted","Data":"ce2bc8fd07e25246673f9423055ae960e439b06b2f99f0e8154eb011bf21074d"} Dec 11 14:04:42 crc kubenswrapper[5050]: E1211 14:04:42.239929 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/glance-operator-controller-manager-5697bb5779-9tcm2" podUID="048e17a7-0123-45a2-b698-02def3db74fe" Dec 11 14:04:42 crc kubenswrapper[5050]: E1211 14:04:42.329638 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-h4w7p" podUID="9c82f51b-e2a0-49e6-bc0e-d7679e439a6f" Dec 11 14:04:42 crc kubenswrapper[5050]: E1211 14:04:42.672163 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/octavia-operator-controller-manager-998648c74-lvbdb" podUID="fa0985f7-7d87-41b3-9916-f22375a0489c" Dec 11 14:04:42 crc kubenswrapper[5050]: E1211 14:04:42.869981 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-xgbp2" podUID="06df6c8c-640d-431b-b216-78345a9054e1" Dec 11 14:04:42 crc kubenswrapper[5050]: E1211 14:04:42.870538 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-qbc2f" podUID="9726c3f9-bcae-4722-a054-5a66c161953b" Dec 11 14:04:42 crc kubenswrapper[5050]: E1211 14:04:42.999380 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-8jnzj" podUID="a6345cf8-abc2-4c9a-bfe6-8b65187ada2d" Dec 11 14:04:43 crc kubenswrapper[5050]: E1211 14:04:43.001291 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-65swh" podUID="58a60a6f-fcf5-4aa0-b6e4-97d7df2b2bd2" Dec 11 14:04:43 crc kubenswrapper[5050]: I1211 14:04:43.267691 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-ttg8w" event={"ID":"b885fa10-3ed3-41fd-94ae-2b7442519450","Type":"ContainerStarted","Data":"e3f2e85201889c50372323c4c3f2f0236d230250e8d2e571d991df4b59bc7559"} Dec 11 14:04:43 crc kubenswrapper[5050]: I1211 14:04:43.269073 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-ttg8w" Dec 11 14:04:43 crc kubenswrapper[5050]: E1211 14:04:43.288790 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/placement-operator-controller-manager-78f8948974-wsc2k" podUID="cedca20e-aaaa-4190-944d-8f18bd93f737" Dec 11 14:04:43 crc kubenswrapper[5050]: I1211 14:04:43.296861 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-xl9wl" event={"ID":"105854f4-5cc1-491f-983a-50864b37893f","Type":"ContainerStarted","Data":"6d1057ad4bd8c873d43651ebdc8f66d433ba07fb3cfde1ef69dca2364be331e3"} Dec 11 14:04:43 crc kubenswrapper[5050]: I1211 14:04:43.297198 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-xl9wl" Dec 11 14:04:43 crc kubenswrapper[5050]: E1211 14:04:43.301769 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-n7crp" podUID="85683dfb-37fb-4301-8c7a-fbb7453b303d" Dec 11 14:04:43 crc kubenswrapper[5050]: E1211 14:04:43.306419 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ironic-operator-controller-manager-967d97867-7stc2" podUID="d7b70b3b-5481-4ac2-8e60-256e2690752f" Dec 11 14:04:43 crc kubenswrapper[5050]: E1211 14:04:43.306574 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-jq9vt" podUID="5e11a0d1-4179-4621-803d-839196fb940b" Dec 11 14:04:43 crc kubenswrapper[5050]: I1211 14:04:43.315716 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-ttg8w" podStartSLOduration=6.440990907 podStartE2EDuration="32.315688594s" podCreationTimestamp="2025-12-11 14:04:11 +0000 UTC" firstStartedPulling="2025-12-11 14:04:14.172423741 +0000 UTC m=+945.016146327" lastFinishedPulling="2025-12-11 14:04:40.047121438 +0000 UTC m=+970.890844014" observedRunningTime="2025-12-11 14:04:43.301547633 +0000 UTC m=+974.145270219" watchObservedRunningTime="2025-12-11 14:04:43.315688594 +0000 UTC m=+974.159411170" Dec 11 14:04:43 crc kubenswrapper[5050]: I1211 14:04:43.329274 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-pjhfc" event={"ID":"3c9e825c-0aee-42b9-a7a5-3191486f301d","Type":"ContainerStarted","Data":"b3db1b735846889d2d13d2c2ed63e289238f6cd3c1f68d8cb27c64967ac83f90"} Dec 11 14:04:43 crc 
kubenswrapper[5050]: I1211 14:04:43.330221 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-pjhfc" Dec 11 14:04:43 crc kubenswrapper[5050]: I1211 14:04:43.344217 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5854674fcc-qqb7f" event={"ID":"12778398-2baa-44cb-9fd1-f2034870e9fc","Type":"ContainerStarted","Data":"1b15c8702fd3fc0d08a1478c75411ef07b0172464f8b56ce188a16b911b5eab1"} Dec 11 14:04:43 crc kubenswrapper[5050]: I1211 14:04:43.344283 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5854674fcc-qqb7f" event={"ID":"12778398-2baa-44cb-9fd1-f2034870e9fc","Type":"ContainerStarted","Data":"d01989086e6e233a8a7e550489bb5ac0e0797c26c266f1d28706afc6f3accdda"} Dec 11 14:04:43 crc kubenswrapper[5050]: I1211 14:04:43.344542 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-5854674fcc-qqb7f" Dec 11 14:04:43 crc kubenswrapper[5050]: I1211 14:04:43.353920 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-58d5ff84df-ssfnh" event={"ID":"bff5d533-0728-4436-bdeb-c725bf04bdb3","Type":"ContainerStarted","Data":"7df5bb03260c3c3773bf46eb541235f898aa4152cfdc8ea648e524a37e3eba55"} Dec 11 14:04:43 crc kubenswrapper[5050]: I1211 14:04:43.354919 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-58d5ff84df-ssfnh" Dec 11 14:04:43 crc kubenswrapper[5050]: I1211 14:04:43.364486 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-65swh" event={"ID":"58a60a6f-fcf5-4aa0-b6e4-97d7df2b2bd2","Type":"ContainerStarted","Data":"1c3bc94ecbf0523061e87c9a192bfc530e25c2a49b43699f6035170ed395c3a1"} Dec 11 14:04:43 crc kubenswrapper[5050]: I1211 14:04:43.366714 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-pjhfc" podStartSLOduration=6.300806372 podStartE2EDuration="32.366685998s" podCreationTimestamp="2025-12-11 14:04:11 +0000 UTC" firstStartedPulling="2025-12-11 14:04:14.078544773 +0000 UTC m=+944.922267359" lastFinishedPulling="2025-12-11 14:04:40.144424399 +0000 UTC m=+970.988146985" observedRunningTime="2025-12-11 14:04:43.364445357 +0000 UTC m=+974.208167953" watchObservedRunningTime="2025-12-11 14:04:43.366685998 +0000 UTC m=+974.210408594" Dec 11 14:04:43 crc kubenswrapper[5050]: I1211 14:04:43.398148 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-xl9wl" podStartSLOduration=4.119608754 podStartE2EDuration="32.398123635s" podCreationTimestamp="2025-12-11 14:04:11 +0000 UTC" firstStartedPulling="2025-12-11 14:04:13.603264941 +0000 UTC m=+944.446987527" lastFinishedPulling="2025-12-11 14:04:41.881779822 +0000 UTC m=+972.725502408" observedRunningTime="2025-12-11 14:04:43.33482773 +0000 UTC m=+974.178550336" watchObservedRunningTime="2025-12-11 14:04:43.398123635 +0000 UTC m=+974.241846231" Dec 11 14:04:43 crc kubenswrapper[5050]: I1211 14:04:43.401391 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-5854674fcc-qqb7f" 
podStartSLOduration=6.409273252 podStartE2EDuration="32.401381872s" podCreationTimestamp="2025-12-11 14:04:11 +0000 UTC" firstStartedPulling="2025-12-11 14:04:14.153275795 +0000 UTC m=+944.996998381" lastFinishedPulling="2025-12-11 14:04:40.145384415 +0000 UTC m=+970.989107001" observedRunningTime="2025-12-11 14:04:43.390743836 +0000 UTC m=+974.234466432" watchObservedRunningTime="2025-12-11 14:04:43.401381872 +0000 UTC m=+974.245104458" Dec 11 14:04:43 crc kubenswrapper[5050]: I1211 14:04:43.417856 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-8jnzj" event={"ID":"a6345cf8-abc2-4c9a-bfe6-8b65187ada2d","Type":"ContainerStarted","Data":"332bf07ae0196caae58f0c2060c2bf1c81fd64095cf988b5235fe70bd28286f8"} Dec 11 14:04:43 crc kubenswrapper[5050]: E1211 14:04:43.422430 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:3aa109bb973253ae9dcf339b9b65abbd1176cdb4be672c93e538a5f113816991\\\"\"" pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-8jnzj" podUID="a6345cf8-abc2-4c9a-bfe6-8b65187ada2d" Dec 11 14:04:43 crc kubenswrapper[5050]: I1211 14:04:43.428830 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-xgbp2" event={"ID":"06df6c8c-640d-431b-b216-78345a9054e1","Type":"ContainerStarted","Data":"5cb94e0fe28e91387a1ec6d73b38cbe146c0db1ba54767a555f9295dd6340185"} Dec 11 14:04:43 crc kubenswrapper[5050]: I1211 14:04:43.432422 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-58d5ff84df-ssfnh" podStartSLOduration=6.460945004 podStartE2EDuration="32.432392857s" podCreationTimestamp="2025-12-11 14:04:11 +0000 UTC" firstStartedPulling="2025-12-11 14:04:14.173799068 +0000 UTC m=+945.017521654" lastFinishedPulling="2025-12-11 14:04:40.145246921 +0000 UTC m=+970.988969507" observedRunningTime="2025-12-11 14:04:43.430213149 +0000 UTC m=+974.273935745" watchObservedRunningTime="2025-12-11 14:04:43.432392857 +0000 UTC m=+974.276115443" Dec 11 14:04:43 crc kubenswrapper[5050]: E1211 14:04:43.435260 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/watcher-operator-controller-manager-75944c9b7-grhdp" podUID="712f888b-7c45-4c1f-95d8-ccc464b7c15f" Dec 11 14:04:43 crc kubenswrapper[5050]: I1211 14:04:43.454179 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-998648c74-lvbdb" event={"ID":"fa0985f7-7d87-41b3-9916-f22375a0489c","Type":"ContainerStarted","Data":"3e57655f31d10f58fc08f9d0580b5aad435833ab7367933377dbacdde2d97616"} Dec 11 14:04:43 crc kubenswrapper[5050]: E1211 14:04:43.466676 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:d9a3694865a7d54ee96397add18c3898886e98d079aa20876a0f4de1fa7a7168\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-998648c74-lvbdb" podUID="fa0985f7-7d87-41b3-9916-f22375a0489c" Dec 11 14:04:43 crc kubenswrapper[5050]: I1211 14:04:43.469804 5050 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-697fb699cf-sqjhh" event={"ID":"0aa7657b-dbca-4b2b-ac62-7000681a918a","Type":"ContainerStarted","Data":"db8947ecd3bf1ba162edcbe9f90af5a112f0bbde0dc2d4ca6fd92e316773abc9"} Dec 11 14:04:43 crc kubenswrapper[5050]: I1211 14:04:43.470727 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-697fb699cf-sqjhh" Dec 11 14:04:43 crc kubenswrapper[5050]: I1211 14:04:43.473752 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-h4w7p" event={"ID":"9c82f51b-e2a0-49e6-bc0e-d7679e439a6f","Type":"ContainerStarted","Data":"ad6a4873295db7102d9d3ab42de7605668aebade42c02fddd342a6d594741987"} Dec 11 14:04:43 crc kubenswrapper[5050]: E1211 14:04:43.478547 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:779f0cee6024d0fb8f259b036fe790e62aa5a3b0431ea9bf15a6e7d02e2e5670\\\"\"" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-h4w7p" podUID="9c82f51b-e2a0-49e6-bc0e-d7679e439a6f" Dec 11 14:04:43 crc kubenswrapper[5050]: I1211 14:04:43.507471 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-mcj5n" event={"ID":"dc74a2ef-5885-462e-a5b8-7b50454df35b","Type":"ContainerStarted","Data":"dcfa17f99a1c1948ff38bd67fedb10de764c36a7bd5d77919ea42d684801b05d"} Dec 11 14:04:43 crc kubenswrapper[5050]: I1211 14:04:43.540271 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-5697bb5779-9tcm2" event={"ID":"048e17a7-0123-45a2-b698-02def3db74fe","Type":"ContainerStarted","Data":"382a5474bea62c6281d40b205a70af28bb5e539a481e87624129c67693292142"} Dec 11 14:04:43 crc kubenswrapper[5050]: E1211 14:04:43.542581 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/glance-operator@sha256:5370dc4a8e776923eec00bb50cbdb2e390e9dde50be26bdc04a216bd2d6b5027\\\"\"" pod="openstack-operators/glance-operator-controller-manager-5697bb5779-9tcm2" podUID="048e17a7-0123-45a2-b698-02def3db74fe" Dec 11 14:04:43 crc kubenswrapper[5050]: I1211 14:04:43.564716 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3477354d-838b-48cc-a6c3-612088d82640-cert\") pod \"infra-operator-controller-manager-78d48bff9d-5g8lw\" (UID: \"3477354d-838b-48cc-a6c3-612088d82640\") " pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-5g8lw" Dec 11 14:04:43 crc kubenswrapper[5050]: I1211 14:04:43.567700 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-qbc2f" event={"ID":"9726c3f9-bcae-4722-a054-5a66c161953b","Type":"ContainerStarted","Data":"feafc2f4a5ea02bcde50809918f0ffd87ec44b46b7eb00bf7337f9986ee88e3a"} Dec 11 14:04:43 crc kubenswrapper[5050]: E1211 14:04:43.573383 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/horizon-operator@sha256:9e847f4dbdea19ab997f32a02b3680a9bd966f9c705911645c3866a19fda9ea5\\\"\"" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-qbc2f" podUID="9726c3f9-bcae-4722-a054-5a66c161953b" Dec 11 14:04:43 crc kubenswrapper[5050]: I1211 14:04:43.609179 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3477354d-838b-48cc-a6c3-612088d82640-cert\") pod \"infra-operator-controller-manager-78d48bff9d-5g8lw\" (UID: \"3477354d-838b-48cc-a6c3-612088d82640\") " pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-5g8lw" Dec 11 14:04:43 crc kubenswrapper[5050]: I1211 14:04:43.644849 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-697fb699cf-sqjhh" podStartSLOduration=3.542205241 podStartE2EDuration="32.644814259s" podCreationTimestamp="2025-12-11 14:04:11 +0000 UTC" firstStartedPulling="2025-12-11 14:04:12.819348277 +0000 UTC m=+943.663070863" lastFinishedPulling="2025-12-11 14:04:41.921957295 +0000 UTC m=+972.765679881" observedRunningTime="2025-12-11 14:04:43.636951137 +0000 UTC m=+974.480673723" watchObservedRunningTime="2025-12-11 14:04:43.644814259 +0000 UTC m=+974.488536845" Dec 11 14:04:43 crc kubenswrapper[5050]: I1211 14:04:43.765309 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-mcj5n" podStartSLOduration=5.25355705 podStartE2EDuration="31.765291684s" podCreationTimestamp="2025-12-11 14:04:12 +0000 UTC" firstStartedPulling="2025-12-11 14:04:14.179610785 +0000 UTC m=+945.023333371" lastFinishedPulling="2025-12-11 14:04:40.691345419 +0000 UTC m=+971.535068005" observedRunningTime="2025-12-11 14:04:43.726620012 +0000 UTC m=+974.570342608" watchObservedRunningTime="2025-12-11 14:04:43.765291684 +0000 UTC m=+974.609014270" Dec 11 14:04:43 crc kubenswrapper[5050]: I1211 14:04:43.861159 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-25r77" Dec 11 14:04:43 crc kubenswrapper[5050]: I1211 14:04:43.869142 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-5g8lw" Dec 11 14:04:44 crc kubenswrapper[5050]: I1211 14:04:44.250884 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-78d48bff9d-5g8lw"] Dec 11 14:04:44 crc kubenswrapper[5050]: W1211 14:04:44.278469 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3477354d_838b_48cc_a6c3_612088d82640.slice/crio-995b2b723c7694415d55c94b56046ebf5fc5176490f2b22fa370ddb3ae5a5371 WatchSource:0}: Error finding container 995b2b723c7694415d55c94b56046ebf5fc5176490f2b22fa370ddb3ae5a5371: Status 404 returned error can't find the container with id 995b2b723c7694415d55c94b56046ebf5fc5176490f2b22fa370ddb3ae5a5371 Dec 11 14:04:44 crc kubenswrapper[5050]: I1211 14:04:44.482132 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b3b941f1-576d-4b49-871b-3666eda635ff-metrics-certs\") pod \"openstack-operator-controller-manager-5b74fbd87-zqsjt\" (UID: \"b3b941f1-576d-4b49-871b-3666eda635ff\") " pod="openstack-operators/openstack-operator-controller-manager-5b74fbd87-zqsjt" Dec 11 14:04:44 crc kubenswrapper[5050]: I1211 14:04:44.482226 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b3b941f1-576d-4b49-871b-3666eda635ff-webhook-certs\") pod \"openstack-operator-controller-manager-5b74fbd87-zqsjt\" (UID: \"b3b941f1-576d-4b49-871b-3666eda635ff\") " pod="openstack-operators/openstack-operator-controller-manager-5b74fbd87-zqsjt" Dec 11 14:04:44 crc kubenswrapper[5050]: I1211 14:04:44.489793 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b3b941f1-576d-4b49-871b-3666eda635ff-webhook-certs\") pod \"openstack-operator-controller-manager-5b74fbd87-zqsjt\" (UID: \"b3b941f1-576d-4b49-871b-3666eda635ff\") " pod="openstack-operators/openstack-operator-controller-manager-5b74fbd87-zqsjt" Dec 11 14:04:44 crc kubenswrapper[5050]: I1211 14:04:44.491663 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b3b941f1-576d-4b49-871b-3666eda635ff-metrics-certs\") pod \"openstack-operator-controller-manager-5b74fbd87-zqsjt\" (UID: \"b3b941f1-576d-4b49-871b-3666eda635ff\") " pod="openstack-operators/openstack-operator-controller-manager-5b74fbd87-zqsjt" Dec 11 14:04:44 crc kubenswrapper[5050]: I1211 14:04:44.579325 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-65swh" event={"ID":"58a60a6f-fcf5-4aa0-b6e4-97d7df2b2bd2","Type":"ContainerStarted","Data":"979e0deb887552cefe316acef526be5df838912215e620a28e6177f1a500c441"} Dec 11 14:04:44 crc kubenswrapper[5050]: I1211 14:04:44.580268 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-65swh" Dec 11 14:04:44 crc kubenswrapper[5050]: I1211 14:04:44.587508 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-jq9vt" event={"ID":"5e11a0d1-4179-4621-803d-839196fb940b","Type":"ContainerStarted","Data":"fa27b631f2f679667d3b702adc8a7acb37ae079cfc5d32e7b385b8a6591d70b4"} Dec 11 14:04:44 crc kubenswrapper[5050]: I1211 
14:04:44.595097 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-78f8948974-wsc2k" event={"ID":"cedca20e-aaaa-4190-944d-8f18bd93f737","Type":"ContainerStarted","Data":"c855449671da00b22c2fb6e6fd2021d84167ac051c6e7dff4fa092594b39f8eb"} Dec 11 14:04:44 crc kubenswrapper[5050]: I1211 14:04:44.604740 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-65swh" podStartSLOduration=3.630080467 podStartE2EDuration="33.604713762s" podCreationTimestamp="2025-12-11 14:04:11 +0000 UTC" firstStartedPulling="2025-12-11 14:04:14.08216806 +0000 UTC m=+944.925890646" lastFinishedPulling="2025-12-11 14:04:44.056801355 +0000 UTC m=+974.900523941" observedRunningTime="2025-12-11 14:04:44.601951658 +0000 UTC m=+975.445674244" watchObservedRunningTime="2025-12-11 14:04:44.604713762 +0000 UTC m=+975.448436358" Dec 11 14:04:44 crc kubenswrapper[5050]: I1211 14:04:44.612854 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-967d97867-7stc2" event={"ID":"d7b70b3b-5481-4ac2-8e60-256e2690752f","Type":"ContainerStarted","Data":"12388a94320bca85664e5b47356d8f6b17a33c9c115782f4d4c0001e9547d761"} Dec 11 14:04:44 crc kubenswrapper[5050]: I1211 14:04:44.618245 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-75944c9b7-grhdp" event={"ID":"712f888b-7c45-4c1f-95d8-ccc464b7c15f","Type":"ContainerStarted","Data":"3fd6d90e5ad6b5de02f78860c7c8cc354b34de8772fa71103f893ecf37ea1260"} Dec 11 14:04:44 crc kubenswrapper[5050]: I1211 14:04:44.621103 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-5g8lw" event={"ID":"3477354d-838b-48cc-a6c3-612088d82640","Type":"ContainerStarted","Data":"995b2b723c7694415d55c94b56046ebf5fc5176490f2b22fa370ddb3ae5a5371"} Dec 11 14:04:44 crc kubenswrapper[5050]: I1211 14:04:44.623228 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-qdrgd" event={"ID":"4b4e8aa9-a0f6-4bd2-92d2-c524aedf6389","Type":"ContainerStarted","Data":"4f3fbc874218a8af672ab0e8db405d1b7124bba91352b328e58653a3f3149282"} Dec 11 14:04:44 crc kubenswrapper[5050]: I1211 14:04:44.623738 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-qdrgd" Dec 11 14:04:44 crc kubenswrapper[5050]: I1211 14:04:44.626648 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-n7crp" event={"ID":"85683dfb-37fb-4301-8c7a-fbb7453b303d","Type":"ContainerStarted","Data":"138c785bb604ea7821eda6280369509be4221403ac36ef70fdff7c7796670c93"} Dec 11 14:04:44 crc kubenswrapper[5050]: I1211 14:04:44.632376 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-xgbp2" event={"ID":"06df6c8c-640d-431b-b216-78345a9054e1","Type":"ContainerStarted","Data":"9285260a37630061624ba74a7c92327ead3d7a69163e896c20c90b3da8d7a4b6"} Dec 11 14:04:44 crc kubenswrapper[5050]: E1211 14:04:44.637821 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/horizon-operator@sha256:9e847f4dbdea19ab997f32a02b3680a9bd966f9c705911645c3866a19fda9ea5\\\"\"" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-qbc2f" podUID="9726c3f9-bcae-4722-a054-5a66c161953b" Dec 11 14:04:44 crc kubenswrapper[5050]: E1211 14:04:44.638139 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:d9a3694865a7d54ee96397add18c3898886e98d079aa20876a0f4de1fa7a7168\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-998648c74-lvbdb" podUID="fa0985f7-7d87-41b3-9916-f22375a0489c" Dec 11 14:04:44 crc kubenswrapper[5050]: E1211 14:04:44.638214 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:779f0cee6024d0fb8f259b036fe790e62aa5a3b0431ea9bf15a6e7d02e2e5670\\\"\"" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-h4w7p" podUID="9c82f51b-e2a0-49e6-bc0e-d7679e439a6f" Dec 11 14:04:44 crc kubenswrapper[5050]: E1211 14:04:44.638340 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:3aa109bb973253ae9dcf339b9b65abbd1176cdb4be672c93e538a5f113816991\\\"\"" pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-8jnzj" podUID="a6345cf8-abc2-4c9a-bfe6-8b65187ada2d" Dec 11 14:04:44 crc kubenswrapper[5050]: I1211 14:04:44.640240 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-mz95s" Dec 11 14:04:44 crc kubenswrapper[5050]: I1211 14:04:44.649650 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-5b74fbd87-zqsjt" Dec 11 14:04:44 crc kubenswrapper[5050]: I1211 14:04:44.673891 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-xgbp2" podStartSLOduration=2.787723299 podStartE2EDuration="33.673858075s" podCreationTimestamp="2025-12-11 14:04:11 +0000 UTC" firstStartedPulling="2025-12-11 14:04:13.303331032 +0000 UTC m=+944.147053618" lastFinishedPulling="2025-12-11 14:04:44.189465808 +0000 UTC m=+975.033188394" observedRunningTime="2025-12-11 14:04:44.665565801 +0000 UTC m=+975.509288387" watchObservedRunningTime="2025-12-11 14:04:44.673858075 +0000 UTC m=+975.517580661" Dec 11 14:04:44 crc kubenswrapper[5050]: I1211 14:04:44.869576 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-qdrgd" podStartSLOduration=3.2762905079999998 podStartE2EDuration="33.869554976s" podCreationTimestamp="2025-12-11 14:04:11 +0000 UTC" firstStartedPulling="2025-12-11 14:04:12.531518554 +0000 UTC m=+943.375241150" lastFinishedPulling="2025-12-11 14:04:43.124783032 +0000 UTC m=+973.968505618" observedRunningTime="2025-12-11 14:04:44.86638238 +0000 UTC m=+975.710104966" watchObservedRunningTime="2025-12-11 14:04:44.869554976 +0000 UTC m=+975.713277562" Dec 11 14:04:44 crc kubenswrapper[5050]: I1211 14:04:44.996225 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-5b74fbd87-zqsjt"] Dec 11 14:04:45 crc kubenswrapper[5050]: I1211 14:04:45.662423 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-xgbp2" Dec 11 14:04:45 crc kubenswrapper[5050]: I1211 14:04:45.664891 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-qdrgd" Dec 11 14:04:45 crc kubenswrapper[5050]: W1211 14:04:45.761006 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb3b941f1_576d_4b49_871b_3666eda635ff.slice/crio-046f95eab8184bf36e0c7e29c3307e64a0a6856e9bf7aa15351ef7cfbc136ff9 WatchSource:0}: Error finding container 046f95eab8184bf36e0c7e29c3307e64a0a6856e9bf7aa15351ef7cfbc136ff9: Status 404 returned error can't find the container with id 046f95eab8184bf36e0c7e29c3307e64a0a6856e9bf7aa15351ef7cfbc136ff9 Dec 11 14:04:46 crc kubenswrapper[5050]: I1211 14:04:46.669063 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-5b74fbd87-zqsjt" event={"ID":"b3b941f1-576d-4b49-871b-3666eda635ff","Type":"ContainerStarted","Data":"046f95eab8184bf36e0c7e29c3307e64a0a6856e9bf7aa15351ef7cfbc136ff9"} Dec 11 14:04:48 crc kubenswrapper[5050]: I1211 14:04:48.710681 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-5697bb5779-9tcm2" event={"ID":"048e17a7-0123-45a2-b698-02def3db74fe","Type":"ContainerStarted","Data":"90984eafc9bf0a2dfd6de44205765a78318573224fe9dac04cbc812f5a363bc3"} Dec 11 14:04:48 crc kubenswrapper[5050]: I1211 14:04:48.711655 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-5697bb5779-9tcm2" Dec 11 14:04:48 crc kubenswrapper[5050]: I1211 
14:04:48.718897 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5c747c4-l2hpx" event={"ID":"f5bec9f7-072c-4c21-80ea-af9f59313eef","Type":"ContainerStarted","Data":"f01e89cc8a0c33752a723a1e08042cee2c7882d75ad6f74fc2adb84f7939c81f"} Dec 11 14:04:48 crc kubenswrapper[5050]: I1211 14:04:48.730431 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-78f8948974-wsc2k" event={"ID":"cedca20e-aaaa-4190-944d-8f18bd93f737","Type":"ContainerStarted","Data":"a8d9d63a9e63134c857b78bccb377dd075cf8e58cb3004339094bafbbee23e50"} Dec 11 14:04:48 crc kubenswrapper[5050]: I1211 14:04:48.731617 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-78f8948974-wsc2k" Dec 11 14:04:48 crc kubenswrapper[5050]: I1211 14:04:48.752822 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-5697bb5779-9tcm2" podStartSLOduration=3.705373076 podStartE2EDuration="37.752795568s" podCreationTimestamp="2025-12-11 14:04:11 +0000 UTC" firstStartedPulling="2025-12-11 14:04:14.084989176 +0000 UTC m=+944.928711762" lastFinishedPulling="2025-12-11 14:04:48.132411678 +0000 UTC m=+978.976134254" observedRunningTime="2025-12-11 14:04:48.751876973 +0000 UTC m=+979.595599559" watchObservedRunningTime="2025-12-11 14:04:48.752795568 +0000 UTC m=+979.596518154" Dec 11 14:04:48 crc kubenswrapper[5050]: I1211 14:04:48.758270 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-n7crp" event={"ID":"85683dfb-37fb-4301-8c7a-fbb7453b303d","Type":"ContainerStarted","Data":"17ce3bb0fb08a6af9be92adfec958eef19d6b30d1b82e27337ec77e555a96524"} Dec 11 14:04:48 crc kubenswrapper[5050]: I1211 14:04:48.758724 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-n7crp" Dec 11 14:04:48 crc kubenswrapper[5050]: I1211 14:04:48.775384 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-967d97867-7stc2" event={"ID":"d7b70b3b-5481-4ac2-8e60-256e2690752f","Type":"ContainerStarted","Data":"91f1d18ce3c391a2d8cfcf26b53b446461d57ed0ef6059ae81ecc6c004531c75"} Dec 11 14:04:48 crc kubenswrapper[5050]: I1211 14:04:48.777345 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-967d97867-7stc2" Dec 11 14:04:48 crc kubenswrapper[5050]: I1211 14:04:48.805047 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-75944c9b7-grhdp" event={"ID":"712f888b-7c45-4c1f-95d8-ccc464b7c15f","Type":"ContainerStarted","Data":"cee7a07a8266303d97a8acfa2e535e8bc4b66e8fe482d34d7f556f927bc90d01"} Dec 11 14:04:48 crc kubenswrapper[5050]: I1211 14:04:48.806236 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-75944c9b7-grhdp" Dec 11 14:04:48 crc kubenswrapper[5050]: I1211 14:04:48.815465 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-78f8948974-wsc2k" podStartSLOduration=3.741116479 podStartE2EDuration="37.815445356s" podCreationTimestamp="2025-12-11 14:04:11 +0000 UTC" 
firstStartedPulling="2025-12-11 14:04:14.084239196 +0000 UTC m=+944.927961782" lastFinishedPulling="2025-12-11 14:04:48.158568073 +0000 UTC m=+979.002290659" observedRunningTime="2025-12-11 14:04:48.812565848 +0000 UTC m=+979.656288434" watchObservedRunningTime="2025-12-11 14:04:48.815445356 +0000 UTC m=+979.659167932" Dec 11 14:04:48 crc kubenswrapper[5050]: I1211 14:04:48.827654 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-5b74fbd87-zqsjt" event={"ID":"b3b941f1-576d-4b49-871b-3666eda635ff","Type":"ContainerStarted","Data":"2c79f39871dcc870bbb0bf089f57ee1c74e03d8f6076ab4cbe7a430584e5b026"} Dec 11 14:04:48 crc kubenswrapper[5050]: I1211 14:04:48.828695 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-5b74fbd87-zqsjt" Dec 11 14:04:48 crc kubenswrapper[5050]: I1211 14:04:48.840817 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-5g8lw" event={"ID":"3477354d-838b-48cc-a6c3-612088d82640","Type":"ContainerStarted","Data":"466b93fe028fca07c950e859d36217259a41f4f7bfb1b3eeba0ddc9b195b96a1"} Dec 11 14:04:48 crc kubenswrapper[5050]: I1211 14:04:48.847663 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-75944c9b7-grhdp" podStartSLOduration=3.776167183 podStartE2EDuration="37.847639443s" podCreationTimestamp="2025-12-11 14:04:11 +0000 UTC" firstStartedPulling="2025-12-11 14:04:14.117690357 +0000 UTC m=+944.961412953" lastFinishedPulling="2025-12-11 14:04:48.189162627 +0000 UTC m=+979.032885213" observedRunningTime="2025-12-11 14:04:48.841511988 +0000 UTC m=+979.685234574" watchObservedRunningTime="2025-12-11 14:04:48.847639443 +0000 UTC m=+979.691362029" Dec 11 14:04:48 crc kubenswrapper[5050]: I1211 14:04:48.866294 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-jq9vt" event={"ID":"5e11a0d1-4179-4621-803d-839196fb940b","Type":"ContainerStarted","Data":"0227acc6f5c529de7ffe8cf29065a76cc73e2ec49c7d56ced235fe90d0badca7"} Dec 11 14:04:48 crc kubenswrapper[5050]: I1211 14:04:48.866540 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-jq9vt" Dec 11 14:04:48 crc kubenswrapper[5050]: I1211 14:04:48.892929 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-967d97867-7stc2" podStartSLOduration=3.3950760779999998 podStartE2EDuration="37.892907422s" podCreationTimestamp="2025-12-11 14:04:11 +0000 UTC" firstStartedPulling="2025-12-11 14:04:13.604678309 +0000 UTC m=+944.448400895" lastFinishedPulling="2025-12-11 14:04:48.102509633 +0000 UTC m=+978.946232239" observedRunningTime="2025-12-11 14:04:48.886682034 +0000 UTC m=+979.730404630" watchObservedRunningTime="2025-12-11 14:04:48.892907422 +0000 UTC m=+979.736630018" Dec 11 14:04:48 crc kubenswrapper[5050]: I1211 14:04:48.949202 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-n7crp" podStartSLOduration=3.054869044 podStartE2EDuration="37.949169387s" podCreationTimestamp="2025-12-11 14:04:11 +0000 UTC" firstStartedPulling="2025-12-11 14:04:13.293699603 +0000 UTC m=+944.137422189" lastFinishedPulling="2025-12-11 
14:04:48.187999946 +0000 UTC m=+979.031722532" observedRunningTime="2025-12-11 14:04:48.944052979 +0000 UTC m=+979.787775565" watchObservedRunningTime="2025-12-11 14:04:48.949169387 +0000 UTC m=+979.792891973" Dec 11 14:04:49 crc kubenswrapper[5050]: I1211 14:04:49.002078 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-5b74fbd87-zqsjt" podStartSLOduration=37.002048422 podStartE2EDuration="37.002048422s" podCreationTimestamp="2025-12-11 14:04:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 14:04:48.992632148 +0000 UTC m=+979.836354734" watchObservedRunningTime="2025-12-11 14:04:49.002048422 +0000 UTC m=+979.845771008" Dec 11 14:04:49 crc kubenswrapper[5050]: I1211 14:04:49.037277 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-jq9vt" podStartSLOduration=3.984407461 podStartE2EDuration="38.037222749s" podCreationTimestamp="2025-12-11 14:04:11 +0000 UTC" firstStartedPulling="2025-12-11 14:04:14.085172021 +0000 UTC m=+944.928894607" lastFinishedPulling="2025-12-11 14:04:48.137987309 +0000 UTC m=+978.981709895" observedRunningTime="2025-12-11 14:04:49.027862167 +0000 UTC m=+979.871584753" watchObservedRunningTime="2025-12-11 14:04:49.037222749 +0000 UTC m=+979.880945345" Dec 11 14:04:49 crc kubenswrapper[5050]: I1211 14:04:49.877370 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-5g8lw" event={"ID":"3477354d-838b-48cc-a6c3-612088d82640","Type":"ContainerStarted","Data":"4d2c154a17e7350ef6c4f8f316d2ae12fd533c6b243b9ab626c4734249f74bc3"} Dec 11 14:04:49 crc kubenswrapper[5050]: I1211 14:04:49.877561 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-5g8lw" Dec 11 14:04:49 crc kubenswrapper[5050]: I1211 14:04:49.880547 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5c747c4-l2hpx" event={"ID":"f5bec9f7-072c-4c21-80ea-af9f59313eef","Type":"ContainerStarted","Data":"ed2020ea35036ebce0233713d55053b9cdf448cf4ff45cd67f4f9620c64b357f"} Dec 11 14:04:49 crc kubenswrapper[5050]: I1211 14:04:49.912524 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-5g8lw" podStartSLOduration=35.010674851 podStartE2EDuration="38.912497134s" podCreationTimestamp="2025-12-11 14:04:11 +0000 UTC" firstStartedPulling="2025-12-11 14:04:44.287337524 +0000 UTC m=+975.131060110" lastFinishedPulling="2025-12-11 14:04:48.189159807 +0000 UTC m=+979.032882393" observedRunningTime="2025-12-11 14:04:49.906702648 +0000 UTC m=+980.750425244" watchObservedRunningTime="2025-12-11 14:04:49.912497134 +0000 UTC m=+980.756219720" Dec 11 14:04:49 crc kubenswrapper[5050]: I1211 14:04:49.940061 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5c747c4-l2hpx" podStartSLOduration=31.28564387 podStartE2EDuration="38.940008795s" podCreationTimestamp="2025-12-11 14:04:11 +0000 UTC" firstStartedPulling="2025-12-11 14:04:40.54586809 +0000 UTC m=+971.389590676" lastFinishedPulling="2025-12-11 14:04:48.200233015 +0000 UTC m=+979.043955601" 
observedRunningTime="2025-12-11 14:04:49.937812626 +0000 UTC m=+980.781535212" watchObservedRunningTime="2025-12-11 14:04:49.940008795 +0000 UTC m=+980.783731381" Dec 11 14:04:50 crc kubenswrapper[5050]: I1211 14:04:50.888890 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5c747c4-l2hpx" Dec 11 14:04:51 crc kubenswrapper[5050]: I1211 14:04:51.683921 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-697fb699cf-sqjhh" Dec 11 14:04:52 crc kubenswrapper[5050]: I1211 14:04:52.106613 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-xl9wl" Dec 11 14:04:52 crc kubenswrapper[5050]: I1211 14:04:52.139844 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-xgbp2" Dec 11 14:04:52 crc kubenswrapper[5050]: I1211 14:04:52.312624 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-65swh" Dec 11 14:04:52 crc kubenswrapper[5050]: I1211 14:04:52.546581 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-pjhfc" Dec 11 14:04:52 crc kubenswrapper[5050]: I1211 14:04:52.727498 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-ttg8w" Dec 11 14:04:52 crc kubenswrapper[5050]: I1211 14:04:52.750751 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-5854674fcc-qqb7f" Dec 11 14:04:52 crc kubenswrapper[5050]: I1211 14:04:52.971730 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-58d5ff84df-ssfnh" Dec 11 14:04:53 crc kubenswrapper[5050]: I1211 14:04:53.877965 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-5g8lw" Dec 11 14:04:54 crc kubenswrapper[5050]: I1211 14:04:54.656987 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-5b74fbd87-zqsjt" Dec 11 14:04:55 crc kubenswrapper[5050]: I1211 14:04:55.551273 5050 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 11 14:04:58 crc kubenswrapper[5050]: I1211 14:04:58.239769 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5c747c4-l2hpx" Dec 11 14:05:01 crc kubenswrapper[5050]: I1211 14:05:01.635974 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-n7crp" Dec 11 14:05:02 crc kubenswrapper[5050]: I1211 14:05:02.015183 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-5697bb5779-9tcm2" Dec 11 14:05:02 crc kubenswrapper[5050]: I1211 14:05:02.174187 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/ironic-operator-controller-manager-967d97867-7stc2" Dec 11 14:05:02 crc kubenswrapper[5050]: I1211 14:05:02.257590 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-jq9vt" Dec 11 14:05:02 crc kubenswrapper[5050]: I1211 14:05:02.656359 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-78f8948974-wsc2k" Dec 11 14:05:02 crc kubenswrapper[5050]: I1211 14:05:02.822397 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-75944c9b7-grhdp" Dec 11 14:05:07 crc kubenswrapper[5050]: I1211 14:05:07.026549 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-8jnzj" event={"ID":"a6345cf8-abc2-4c9a-bfe6-8b65187ada2d","Type":"ContainerStarted","Data":"05e1cfed054359e4690695e0e19eb3250041fc5e66353373c649dc422c3083c9"} Dec 11 14:05:07 crc kubenswrapper[5050]: I1211 14:05:07.028767 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-qbc2f" event={"ID":"9726c3f9-bcae-4722-a054-5a66c161953b","Type":"ContainerStarted","Data":"67f5e275e8639848b24d74b4f8b7bbeea49770d73edebe643bd6731490eadeaf"} Dec 11 14:05:07 crc kubenswrapper[5050]: I1211 14:05:07.029066 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-8jnzj" Dec 11 14:05:07 crc kubenswrapper[5050]: I1211 14:05:07.030358 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-998648c74-lvbdb" event={"ID":"fa0985f7-7d87-41b3-9916-f22375a0489c","Type":"ContainerStarted","Data":"0d0410cf581b6a3bf74633d4810a438a458442f5365cab80578272c2419bfbe3"} Dec 11 14:05:07 crc kubenswrapper[5050]: I1211 14:05:07.031390 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-998648c74-lvbdb" Dec 11 14:05:07 crc kubenswrapper[5050]: I1211 14:05:07.033272 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-h4w7p" event={"ID":"9c82f51b-e2a0-49e6-bc0e-d7679e439a6f","Type":"ContainerStarted","Data":"bfed92f2b27b2071868d25582796802cd9202ad55e3580db605a4a3f8e78aa24"} Dec 11 14:05:07 crc kubenswrapper[5050]: I1211 14:05:07.033469 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-h4w7p" Dec 11 14:05:07 crc kubenswrapper[5050]: I1211 14:05:07.073355 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-8jnzj" podStartSLOduration=3.606560574 podStartE2EDuration="56.073321378s" podCreationTimestamp="2025-12-11 14:04:11 +0000 UTC" firstStartedPulling="2025-12-11 14:04:14.103428603 +0000 UTC m=+944.947151189" lastFinishedPulling="2025-12-11 14:05:06.570189407 +0000 UTC m=+997.413911993" observedRunningTime="2025-12-11 14:05:07.063133124 +0000 UTC m=+997.906855730" watchObservedRunningTime="2025-12-11 14:05:07.073321378 +0000 UTC m=+997.917043974" Dec 11 14:05:07 crc kubenswrapper[5050]: I1211 14:05:07.103524 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-qbc2f" podStartSLOduration=2.8835818509999998 podStartE2EDuration="56.103497141s" podCreationTimestamp="2025-12-11 14:04:11 +0000 UTC" firstStartedPulling="2025-12-11 14:04:13.349974369 +0000 UTC m=+944.193696955" lastFinishedPulling="2025-12-11 14:05:06.569889659 +0000 UTC m=+997.413612245" observedRunningTime="2025-12-11 14:05:07.092880205 +0000 UTC m=+997.936602791" watchObservedRunningTime="2025-12-11 14:05:07.103497141 +0000 UTC m=+997.947219737" Dec 11 14:05:07 crc kubenswrapper[5050]: I1211 14:05:07.129382 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-h4w7p" podStartSLOduration=3.672058548 podStartE2EDuration="56.129351237s" podCreationTimestamp="2025-12-11 14:04:11 +0000 UTC" firstStartedPulling="2025-12-11 14:04:14.112840246 +0000 UTC m=+944.956562832" lastFinishedPulling="2025-12-11 14:05:06.570132935 +0000 UTC m=+997.413855521" observedRunningTime="2025-12-11 14:05:07.121462895 +0000 UTC m=+997.965185481" watchObservedRunningTime="2025-12-11 14:05:07.129351237 +0000 UTC m=+997.973073823" Dec 11 14:05:07 crc kubenswrapper[5050]: I1211 14:05:07.160754 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-998648c74-lvbdb" podStartSLOduration=3.713346411 podStartE2EDuration="56.160718702s" podCreationTimestamp="2025-12-11 14:04:11 +0000 UTC" firstStartedPulling="2025-12-11 14:04:14.122513467 +0000 UTC m=+944.966236053" lastFinishedPulling="2025-12-11 14:05:06.569885758 +0000 UTC m=+997.413608344" observedRunningTime="2025-12-11 14:05:07.15245738 +0000 UTC m=+997.996179966" watchObservedRunningTime="2025-12-11 14:05:07.160718702 +0000 UTC m=+998.004441298" Dec 11 14:05:10 crc kubenswrapper[5050]: I1211 14:05:10.796525 5050 patch_prober.go:28] interesting pod/machine-config-daemon-wcb2s container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 14:05:10 crc kubenswrapper[5050]: I1211 14:05:10.796964 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 14:05:12 crc kubenswrapper[5050]: I1211 14:05:11.999917 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-qbc2f" Dec 11 14:05:12 crc kubenswrapper[5050]: I1211 14:05:12.003986 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-qbc2f" Dec 11 14:05:12 crc kubenswrapper[5050]: I1211 14:05:12.586735 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-h4w7p" Dec 11 14:05:12 crc kubenswrapper[5050]: I1211 14:05:12.597194 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-998648c74-lvbdb" Dec 11 14:05:12 crc kubenswrapper[5050]: I1211 14:05:12.808851 5050 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-8jnzj" Dec 11 14:05:26 crc kubenswrapper[5050]: I1211 14:05:26.380747 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-84bb9d8bd9-88cc6"] Dec 11 14:05:26 crc kubenswrapper[5050]: I1211 14:05:26.382930 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84bb9d8bd9-88cc6" Dec 11 14:05:26 crc kubenswrapper[5050]: I1211 14:05:26.385543 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-6zndk" Dec 11 14:05:26 crc kubenswrapper[5050]: I1211 14:05:26.389239 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Dec 11 14:05:26 crc kubenswrapper[5050]: I1211 14:05:26.389372 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Dec 11 14:05:26 crc kubenswrapper[5050]: I1211 14:05:26.389449 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Dec 11 14:05:26 crc kubenswrapper[5050]: I1211 14:05:26.407843 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-84bb9d8bd9-88cc6"] Dec 11 14:05:26 crc kubenswrapper[5050]: I1211 14:05:26.467950 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5f854695bc-zv5n4"] Dec 11 14:05:26 crc kubenswrapper[5050]: I1211 14:05:26.469881 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f854695bc-zv5n4" Dec 11 14:05:26 crc kubenswrapper[5050]: I1211 14:05:26.473846 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Dec 11 14:05:26 crc kubenswrapper[5050]: I1211 14:05:26.483457 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f854695bc-zv5n4"] Dec 11 14:05:26 crc kubenswrapper[5050]: I1211 14:05:26.509228 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vkzsn\" (UniqueName: \"kubernetes.io/projected/ac0ab827-d14b-4fa6-b93c-1e71237fbaef-kube-api-access-vkzsn\") pod \"dnsmasq-dns-84bb9d8bd9-88cc6\" (UID: \"ac0ab827-d14b-4fa6-b93c-1e71237fbaef\") " pod="openstack/dnsmasq-dns-84bb9d8bd9-88cc6" Dec 11 14:05:26 crc kubenswrapper[5050]: I1211 14:05:26.509353 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac0ab827-d14b-4fa6-b93c-1e71237fbaef-config\") pod \"dnsmasq-dns-84bb9d8bd9-88cc6\" (UID: \"ac0ab827-d14b-4fa6-b93c-1e71237fbaef\") " pod="openstack/dnsmasq-dns-84bb9d8bd9-88cc6" Dec 11 14:05:26 crc kubenswrapper[5050]: I1211 14:05:26.611613 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac0ab827-d14b-4fa6-b93c-1e71237fbaef-config\") pod \"dnsmasq-dns-84bb9d8bd9-88cc6\" (UID: \"ac0ab827-d14b-4fa6-b93c-1e71237fbaef\") " pod="openstack/dnsmasq-dns-84bb9d8bd9-88cc6" Dec 11 14:05:26 crc kubenswrapper[5050]: I1211 14:05:26.611708 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/743c85da-99ca-4ac8-8d19-edf69c27b90f-dns-svc\") pod \"dnsmasq-dns-5f854695bc-zv5n4\" (UID: \"743c85da-99ca-4ac8-8d19-edf69c27b90f\") " pod="openstack/dnsmasq-dns-5f854695bc-zv5n4" Dec 11 14:05:26 crc 
kubenswrapper[5050]: I1211 14:05:26.611771 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vkzsn\" (UniqueName: \"kubernetes.io/projected/ac0ab827-d14b-4fa6-b93c-1e71237fbaef-kube-api-access-vkzsn\") pod \"dnsmasq-dns-84bb9d8bd9-88cc6\" (UID: \"ac0ab827-d14b-4fa6-b93c-1e71237fbaef\") " pod="openstack/dnsmasq-dns-84bb9d8bd9-88cc6" Dec 11 14:05:26 crc kubenswrapper[5050]: I1211 14:05:26.612304 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gplgl\" (UniqueName: \"kubernetes.io/projected/743c85da-99ca-4ac8-8d19-edf69c27b90f-kube-api-access-gplgl\") pod \"dnsmasq-dns-5f854695bc-zv5n4\" (UID: \"743c85da-99ca-4ac8-8d19-edf69c27b90f\") " pod="openstack/dnsmasq-dns-5f854695bc-zv5n4" Dec 11 14:05:26 crc kubenswrapper[5050]: I1211 14:05:26.612347 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/743c85da-99ca-4ac8-8d19-edf69c27b90f-config\") pod \"dnsmasq-dns-5f854695bc-zv5n4\" (UID: \"743c85da-99ca-4ac8-8d19-edf69c27b90f\") " pod="openstack/dnsmasq-dns-5f854695bc-zv5n4" Dec 11 14:05:26 crc kubenswrapper[5050]: I1211 14:05:26.612879 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac0ab827-d14b-4fa6-b93c-1e71237fbaef-config\") pod \"dnsmasq-dns-84bb9d8bd9-88cc6\" (UID: \"ac0ab827-d14b-4fa6-b93c-1e71237fbaef\") " pod="openstack/dnsmasq-dns-84bb9d8bd9-88cc6" Dec 11 14:05:26 crc kubenswrapper[5050]: I1211 14:05:26.647259 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vkzsn\" (UniqueName: \"kubernetes.io/projected/ac0ab827-d14b-4fa6-b93c-1e71237fbaef-kube-api-access-vkzsn\") pod \"dnsmasq-dns-84bb9d8bd9-88cc6\" (UID: \"ac0ab827-d14b-4fa6-b93c-1e71237fbaef\") " pod="openstack/dnsmasq-dns-84bb9d8bd9-88cc6" Dec 11 14:05:26 crc kubenswrapper[5050]: I1211 14:05:26.711925 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-84bb9d8bd9-88cc6" Dec 11 14:05:26 crc kubenswrapper[5050]: I1211 14:05:26.713513 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/743c85da-99ca-4ac8-8d19-edf69c27b90f-dns-svc\") pod \"dnsmasq-dns-5f854695bc-zv5n4\" (UID: \"743c85da-99ca-4ac8-8d19-edf69c27b90f\") " pod="openstack/dnsmasq-dns-5f854695bc-zv5n4" Dec 11 14:05:26 crc kubenswrapper[5050]: I1211 14:05:26.713624 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gplgl\" (UniqueName: \"kubernetes.io/projected/743c85da-99ca-4ac8-8d19-edf69c27b90f-kube-api-access-gplgl\") pod \"dnsmasq-dns-5f854695bc-zv5n4\" (UID: \"743c85da-99ca-4ac8-8d19-edf69c27b90f\") " pod="openstack/dnsmasq-dns-5f854695bc-zv5n4" Dec 11 14:05:26 crc kubenswrapper[5050]: I1211 14:05:26.713659 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/743c85da-99ca-4ac8-8d19-edf69c27b90f-config\") pod \"dnsmasq-dns-5f854695bc-zv5n4\" (UID: \"743c85da-99ca-4ac8-8d19-edf69c27b90f\") " pod="openstack/dnsmasq-dns-5f854695bc-zv5n4" Dec 11 14:05:26 crc kubenswrapper[5050]: I1211 14:05:26.714396 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/743c85da-99ca-4ac8-8d19-edf69c27b90f-dns-svc\") pod \"dnsmasq-dns-5f854695bc-zv5n4\" (UID: \"743c85da-99ca-4ac8-8d19-edf69c27b90f\") " pod="openstack/dnsmasq-dns-5f854695bc-zv5n4" Dec 11 14:05:26 crc kubenswrapper[5050]: I1211 14:05:26.714683 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/743c85da-99ca-4ac8-8d19-edf69c27b90f-config\") pod \"dnsmasq-dns-5f854695bc-zv5n4\" (UID: \"743c85da-99ca-4ac8-8d19-edf69c27b90f\") " pod="openstack/dnsmasq-dns-5f854695bc-zv5n4" Dec 11 14:05:26 crc kubenswrapper[5050]: I1211 14:05:26.734969 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gplgl\" (UniqueName: \"kubernetes.io/projected/743c85da-99ca-4ac8-8d19-edf69c27b90f-kube-api-access-gplgl\") pod \"dnsmasq-dns-5f854695bc-zv5n4\" (UID: \"743c85da-99ca-4ac8-8d19-edf69c27b90f\") " pod="openstack/dnsmasq-dns-5f854695bc-zv5n4" Dec 11 14:05:26 crc kubenswrapper[5050]: I1211 14:05:26.789047 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5f854695bc-zv5n4" Dec 11 14:05:27 crc kubenswrapper[5050]: I1211 14:05:27.250697 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-84bb9d8bd9-88cc6"] Dec 11 14:05:27 crc kubenswrapper[5050]: W1211 14:05:27.256755 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podac0ab827_d14b_4fa6_b93c_1e71237fbaef.slice/crio-429d43df0473a247a68fbff34e96f33f8705d7a3bbd25c8f3678a7f8ed14d1f8 WatchSource:0}: Error finding container 429d43df0473a247a68fbff34e96f33f8705d7a3bbd25c8f3678a7f8ed14d1f8: Status 404 returned error can't find the container with id 429d43df0473a247a68fbff34e96f33f8705d7a3bbd25c8f3678a7f8ed14d1f8 Dec 11 14:05:27 crc kubenswrapper[5050]: I1211 14:05:27.319223 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f854695bc-zv5n4"] Dec 11 14:05:28 crc kubenswrapper[5050]: I1211 14:05:28.200302 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84bb9d8bd9-88cc6" event={"ID":"ac0ab827-d14b-4fa6-b93c-1e71237fbaef","Type":"ContainerStarted","Data":"429d43df0473a247a68fbff34e96f33f8705d7a3bbd25c8f3678a7f8ed14d1f8"} Dec 11 14:05:28 crc kubenswrapper[5050]: I1211 14:05:28.201939 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f854695bc-zv5n4" event={"ID":"743c85da-99ca-4ac8-8d19-edf69c27b90f","Type":"ContainerStarted","Data":"ebc703a776fcf343cd2b3d72b8878c551ed63e2f40bf5bbbd5c0f5cf4e431dbe"} Dec 11 14:05:29 crc kubenswrapper[5050]: I1211 14:05:29.210056 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f854695bc-zv5n4"] Dec 11 14:05:29 crc kubenswrapper[5050]: I1211 14:05:29.245162 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-c7cbb8f79-vlqm4"] Dec 11 14:05:29 crc kubenswrapper[5050]: I1211 14:05:29.258652 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-c7cbb8f79-vlqm4" Dec 11 14:05:29 crc kubenswrapper[5050]: I1211 14:05:29.288617 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-c7cbb8f79-vlqm4"] Dec 11 14:05:29 crc kubenswrapper[5050]: I1211 14:05:29.359226 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7gd7t\" (UniqueName: \"kubernetes.io/projected/4b68c591-2f4c-41c3-9ab1-372deed0e388-kube-api-access-7gd7t\") pod \"dnsmasq-dns-c7cbb8f79-vlqm4\" (UID: \"4b68c591-2f4c-41c3-9ab1-372deed0e388\") " pod="openstack/dnsmasq-dns-c7cbb8f79-vlqm4" Dec 11 14:05:29 crc kubenswrapper[5050]: I1211 14:05:29.359376 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4b68c591-2f4c-41c3-9ab1-372deed0e388-config\") pod \"dnsmasq-dns-c7cbb8f79-vlqm4\" (UID: \"4b68c591-2f4c-41c3-9ab1-372deed0e388\") " pod="openstack/dnsmasq-dns-c7cbb8f79-vlqm4" Dec 11 14:05:29 crc kubenswrapper[5050]: I1211 14:05:29.359405 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4b68c591-2f4c-41c3-9ab1-372deed0e388-dns-svc\") pod \"dnsmasq-dns-c7cbb8f79-vlqm4\" (UID: \"4b68c591-2f4c-41c3-9ab1-372deed0e388\") " pod="openstack/dnsmasq-dns-c7cbb8f79-vlqm4" Dec 11 14:05:29 crc kubenswrapper[5050]: I1211 14:05:29.461714 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4b68c591-2f4c-41c3-9ab1-372deed0e388-config\") pod \"dnsmasq-dns-c7cbb8f79-vlqm4\" (UID: \"4b68c591-2f4c-41c3-9ab1-372deed0e388\") " pod="openstack/dnsmasq-dns-c7cbb8f79-vlqm4" Dec 11 14:05:29 crc kubenswrapper[5050]: I1211 14:05:29.461779 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4b68c591-2f4c-41c3-9ab1-372deed0e388-dns-svc\") pod \"dnsmasq-dns-c7cbb8f79-vlqm4\" (UID: \"4b68c591-2f4c-41c3-9ab1-372deed0e388\") " pod="openstack/dnsmasq-dns-c7cbb8f79-vlqm4" Dec 11 14:05:29 crc kubenswrapper[5050]: I1211 14:05:29.461885 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7gd7t\" (UniqueName: \"kubernetes.io/projected/4b68c591-2f4c-41c3-9ab1-372deed0e388-kube-api-access-7gd7t\") pod \"dnsmasq-dns-c7cbb8f79-vlqm4\" (UID: \"4b68c591-2f4c-41c3-9ab1-372deed0e388\") " pod="openstack/dnsmasq-dns-c7cbb8f79-vlqm4" Dec 11 14:05:29 crc kubenswrapper[5050]: I1211 14:05:29.463972 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4b68c591-2f4c-41c3-9ab1-372deed0e388-config\") pod \"dnsmasq-dns-c7cbb8f79-vlqm4\" (UID: \"4b68c591-2f4c-41c3-9ab1-372deed0e388\") " pod="openstack/dnsmasq-dns-c7cbb8f79-vlqm4" Dec 11 14:05:29 crc kubenswrapper[5050]: I1211 14:05:29.464630 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4b68c591-2f4c-41c3-9ab1-372deed0e388-dns-svc\") pod \"dnsmasq-dns-c7cbb8f79-vlqm4\" (UID: \"4b68c591-2f4c-41c3-9ab1-372deed0e388\") " pod="openstack/dnsmasq-dns-c7cbb8f79-vlqm4" Dec 11 14:05:29 crc kubenswrapper[5050]: I1211 14:05:29.492055 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7gd7t\" (UniqueName: 
\"kubernetes.io/projected/4b68c591-2f4c-41c3-9ab1-372deed0e388-kube-api-access-7gd7t\") pod \"dnsmasq-dns-c7cbb8f79-vlqm4\" (UID: \"4b68c591-2f4c-41c3-9ab1-372deed0e388\") " pod="openstack/dnsmasq-dns-c7cbb8f79-vlqm4" Dec 11 14:05:29 crc kubenswrapper[5050]: I1211 14:05:29.589219 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-c7cbb8f79-vlqm4" Dec 11 14:05:29 crc kubenswrapper[5050]: I1211 14:05:29.589868 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-84bb9d8bd9-88cc6"] Dec 11 14:05:29 crc kubenswrapper[5050]: I1211 14:05:29.630656 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-95f5f6995-clprb"] Dec 11 14:05:29 crc kubenswrapper[5050]: I1211 14:05:29.632685 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-95f5f6995-clprb" Dec 11 14:05:29 crc kubenswrapper[5050]: I1211 14:05:29.650529 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-95f5f6995-clprb"] Dec 11 14:05:29 crc kubenswrapper[5050]: I1211 14:05:29.768784 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/63c71212-7318-45f1-94f9-235d861faf86-config\") pod \"dnsmasq-dns-95f5f6995-clprb\" (UID: \"63c71212-7318-45f1-94f9-235d861faf86\") " pod="openstack/dnsmasq-dns-95f5f6995-clprb" Dec 11 14:05:29 crc kubenswrapper[5050]: I1211 14:05:29.769434 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/63c71212-7318-45f1-94f9-235d861faf86-dns-svc\") pod \"dnsmasq-dns-95f5f6995-clprb\" (UID: \"63c71212-7318-45f1-94f9-235d861faf86\") " pod="openstack/dnsmasq-dns-95f5f6995-clprb" Dec 11 14:05:29 crc kubenswrapper[5050]: I1211 14:05:29.769478 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkfqd\" (UniqueName: \"kubernetes.io/projected/63c71212-7318-45f1-94f9-235d861faf86-kube-api-access-pkfqd\") pod \"dnsmasq-dns-95f5f6995-clprb\" (UID: \"63c71212-7318-45f1-94f9-235d861faf86\") " pod="openstack/dnsmasq-dns-95f5f6995-clprb" Dec 11 14:05:29 crc kubenswrapper[5050]: I1211 14:05:29.873314 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/63c71212-7318-45f1-94f9-235d861faf86-dns-svc\") pod \"dnsmasq-dns-95f5f6995-clprb\" (UID: \"63c71212-7318-45f1-94f9-235d861faf86\") " pod="openstack/dnsmasq-dns-95f5f6995-clprb" Dec 11 14:05:29 crc kubenswrapper[5050]: I1211 14:05:29.873376 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pkfqd\" (UniqueName: \"kubernetes.io/projected/63c71212-7318-45f1-94f9-235d861faf86-kube-api-access-pkfqd\") pod \"dnsmasq-dns-95f5f6995-clprb\" (UID: \"63c71212-7318-45f1-94f9-235d861faf86\") " pod="openstack/dnsmasq-dns-95f5f6995-clprb" Dec 11 14:05:29 crc kubenswrapper[5050]: I1211 14:05:29.873456 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/63c71212-7318-45f1-94f9-235d861faf86-config\") pod \"dnsmasq-dns-95f5f6995-clprb\" (UID: \"63c71212-7318-45f1-94f9-235d861faf86\") " pod="openstack/dnsmasq-dns-95f5f6995-clprb" Dec 11 14:05:29 crc kubenswrapper[5050]: I1211 14:05:29.874500 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/63c71212-7318-45f1-94f9-235d861faf86-config\") pod \"dnsmasq-dns-95f5f6995-clprb\" (UID: \"63c71212-7318-45f1-94f9-235d861faf86\") " pod="openstack/dnsmasq-dns-95f5f6995-clprb" Dec 11 14:05:29 crc kubenswrapper[5050]: I1211 14:05:29.875290 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/63c71212-7318-45f1-94f9-235d861faf86-dns-svc\") pod \"dnsmasq-dns-95f5f6995-clprb\" (UID: \"63c71212-7318-45f1-94f9-235d861faf86\") " pod="openstack/dnsmasq-dns-95f5f6995-clprb" Dec 11 14:05:29 crc kubenswrapper[5050]: I1211 14:05:29.924413 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pkfqd\" (UniqueName: \"kubernetes.io/projected/63c71212-7318-45f1-94f9-235d861faf86-kube-api-access-pkfqd\") pod \"dnsmasq-dns-95f5f6995-clprb\" (UID: \"63c71212-7318-45f1-94f9-235d861faf86\") " pod="openstack/dnsmasq-dns-95f5f6995-clprb" Dec 11 14:05:30 crc kubenswrapper[5050]: I1211 14:05:30.041827 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-95f5f6995-clprb" Dec 11 14:05:30 crc kubenswrapper[5050]: I1211 14:05:30.269998 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-c7cbb8f79-vlqm4"] Dec 11 14:05:30 crc kubenswrapper[5050]: W1211 14:05:30.311588 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4b68c591_2f4c_41c3_9ab1_372deed0e388.slice/crio-359def1b79483f54be63282959561882ca79b96cce95b707c6ab7c3fb2c8a436 WatchSource:0}: Error finding container 359def1b79483f54be63282959561882ca79b96cce95b707c6ab7c3fb2c8a436: Status 404 returned error can't find the container with id 359def1b79483f54be63282959561882ca79b96cce95b707c6ab7c3fb2c8a436 Dec 11 14:05:30 crc kubenswrapper[5050]: I1211 14:05:30.449811 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 11 14:05:30 crc kubenswrapper[5050]: I1211 14:05:30.452198 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 11 14:05:30 crc kubenswrapper[5050]: I1211 14:05:30.454888 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Dec 11 14:05:30 crc kubenswrapper[5050]: I1211 14:05:30.455293 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Dec 11 14:05:30 crc kubenswrapper[5050]: I1211 14:05:30.455716 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-cwcsm" Dec 11 14:05:30 crc kubenswrapper[5050]: I1211 14:05:30.461193 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 11 14:05:30 crc kubenswrapper[5050]: I1211 14:05:30.464208 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Dec 11 14:05:30 crc kubenswrapper[5050]: I1211 14:05:30.465228 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Dec 11 14:05:30 crc kubenswrapper[5050]: I1211 14:05:30.465389 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Dec 11 14:05:30 crc kubenswrapper[5050]: I1211 14:05:30.465557 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Dec 11 14:05:30 crc kubenswrapper[5050]: I1211 14:05:30.590561 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/458f05be-2fd6-44d9-8034-f077356964ce-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"458f05be-2fd6-44d9-8034-f077356964ce\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 14:05:30 crc kubenswrapper[5050]: I1211 14:05:30.590640 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"458f05be-2fd6-44d9-8034-f077356964ce\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 14:05:30 crc kubenswrapper[5050]: I1211 14:05:30.590738 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6p5z\" (UniqueName: \"kubernetes.io/projected/458f05be-2fd6-44d9-8034-f077356964ce-kube-api-access-z6p5z\") pod \"rabbitmq-cell1-server-0\" (UID: \"458f05be-2fd6-44d9-8034-f077356964ce\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 14:05:30 crc kubenswrapper[5050]: I1211 14:05:30.590780 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/458f05be-2fd6-44d9-8034-f077356964ce-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"458f05be-2fd6-44d9-8034-f077356964ce\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 14:05:30 crc kubenswrapper[5050]: I1211 14:05:30.590835 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/458f05be-2fd6-44d9-8034-f077356964ce-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"458f05be-2fd6-44d9-8034-f077356964ce\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 14:05:30 crc kubenswrapper[5050]: I1211 14:05:30.590892 5050 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/458f05be-2fd6-44d9-8034-f077356964ce-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"458f05be-2fd6-44d9-8034-f077356964ce\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 14:05:30 crc kubenswrapper[5050]: I1211 14:05:30.590927 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/458f05be-2fd6-44d9-8034-f077356964ce-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"458f05be-2fd6-44d9-8034-f077356964ce\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 14:05:30 crc kubenswrapper[5050]: I1211 14:05:30.590954 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/458f05be-2fd6-44d9-8034-f077356964ce-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"458f05be-2fd6-44d9-8034-f077356964ce\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 14:05:30 crc kubenswrapper[5050]: I1211 14:05:30.590983 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/458f05be-2fd6-44d9-8034-f077356964ce-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"458f05be-2fd6-44d9-8034-f077356964ce\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 14:05:30 crc kubenswrapper[5050]: I1211 14:05:30.591027 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/458f05be-2fd6-44d9-8034-f077356964ce-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"458f05be-2fd6-44d9-8034-f077356964ce\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 14:05:30 crc kubenswrapper[5050]: I1211 14:05:30.591096 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/458f05be-2fd6-44d9-8034-f077356964ce-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"458f05be-2fd6-44d9-8034-f077356964ce\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 14:05:30 crc kubenswrapper[5050]: I1211 14:05:30.605191 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-95f5f6995-clprb"] Dec 11 14:05:30 crc kubenswrapper[5050]: I1211 14:05:30.700391 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/458f05be-2fd6-44d9-8034-f077356964ce-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"458f05be-2fd6-44d9-8034-f077356964ce\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 14:05:30 crc kubenswrapper[5050]: I1211 14:05:30.700457 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/458f05be-2fd6-44d9-8034-f077356964ce-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"458f05be-2fd6-44d9-8034-f077356964ce\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 14:05:30 crc kubenswrapper[5050]: I1211 14:05:30.700485 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"458f05be-2fd6-44d9-8034-f077356964ce\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 
14:05:30 crc kubenswrapper[5050]: I1211 14:05:30.700517 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z6p5z\" (UniqueName: \"kubernetes.io/projected/458f05be-2fd6-44d9-8034-f077356964ce-kube-api-access-z6p5z\") pod \"rabbitmq-cell1-server-0\" (UID: \"458f05be-2fd6-44d9-8034-f077356964ce\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 14:05:30 crc kubenswrapper[5050]: I1211 14:05:30.700547 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/458f05be-2fd6-44d9-8034-f077356964ce-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"458f05be-2fd6-44d9-8034-f077356964ce\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 14:05:30 crc kubenswrapper[5050]: I1211 14:05:30.700591 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/458f05be-2fd6-44d9-8034-f077356964ce-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"458f05be-2fd6-44d9-8034-f077356964ce\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 14:05:30 crc kubenswrapper[5050]: I1211 14:05:30.700618 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/458f05be-2fd6-44d9-8034-f077356964ce-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"458f05be-2fd6-44d9-8034-f077356964ce\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 14:05:30 crc kubenswrapper[5050]: I1211 14:05:30.700644 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/458f05be-2fd6-44d9-8034-f077356964ce-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"458f05be-2fd6-44d9-8034-f077356964ce\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 14:05:30 crc kubenswrapper[5050]: I1211 14:05:30.700666 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/458f05be-2fd6-44d9-8034-f077356964ce-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"458f05be-2fd6-44d9-8034-f077356964ce\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 14:05:30 crc kubenswrapper[5050]: I1211 14:05:30.700699 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/458f05be-2fd6-44d9-8034-f077356964ce-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"458f05be-2fd6-44d9-8034-f077356964ce\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 14:05:30 crc kubenswrapper[5050]: I1211 14:05:30.700730 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/458f05be-2fd6-44d9-8034-f077356964ce-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"458f05be-2fd6-44d9-8034-f077356964ce\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 14:05:30 crc kubenswrapper[5050]: I1211 14:05:30.702646 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/458f05be-2fd6-44d9-8034-f077356964ce-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"458f05be-2fd6-44d9-8034-f077356964ce\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 14:05:30 crc kubenswrapper[5050]: I1211 14:05:30.703497 5050 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/458f05be-2fd6-44d9-8034-f077356964ce-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"458f05be-2fd6-44d9-8034-f077356964ce\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 14:05:30 crc kubenswrapper[5050]: I1211 14:05:30.703752 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/458f05be-2fd6-44d9-8034-f077356964ce-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"458f05be-2fd6-44d9-8034-f077356964ce\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 14:05:30 crc kubenswrapper[5050]: I1211 14:05:30.705022 5050 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"458f05be-2fd6-44d9-8034-f077356964ce\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/rabbitmq-cell1-server-0" Dec 11 14:05:30 crc kubenswrapper[5050]: I1211 14:05:30.708927 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/458f05be-2fd6-44d9-8034-f077356964ce-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"458f05be-2fd6-44d9-8034-f077356964ce\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 14:05:30 crc kubenswrapper[5050]: I1211 14:05:30.734863 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/458f05be-2fd6-44d9-8034-f077356964ce-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"458f05be-2fd6-44d9-8034-f077356964ce\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 14:05:30 crc kubenswrapper[5050]: I1211 14:05:30.735612 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/458f05be-2fd6-44d9-8034-f077356964ce-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"458f05be-2fd6-44d9-8034-f077356964ce\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 14:05:30 crc kubenswrapper[5050]: I1211 14:05:30.740852 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/458f05be-2fd6-44d9-8034-f077356964ce-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"458f05be-2fd6-44d9-8034-f077356964ce\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 14:05:30 crc kubenswrapper[5050]: I1211 14:05:30.746773 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/458f05be-2fd6-44d9-8034-f077356964ce-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"458f05be-2fd6-44d9-8034-f077356964ce\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 14:05:30 crc kubenswrapper[5050]: I1211 14:05:30.763983 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/458f05be-2fd6-44d9-8034-f077356964ce-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"458f05be-2fd6-44d9-8034-f077356964ce\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 14:05:30 crc kubenswrapper[5050]: I1211 14:05:30.768260 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z6p5z\" (UniqueName: \"kubernetes.io/projected/458f05be-2fd6-44d9-8034-f077356964ce-kube-api-access-z6p5z\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"458f05be-2fd6-44d9-8034-f077356964ce\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 14:05:30 crc kubenswrapper[5050]: I1211 14:05:30.819139 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Dec 11 14:05:30 crc kubenswrapper[5050]: I1211 14:05:30.823087 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 11 14:05:30 crc kubenswrapper[5050]: I1211 14:05:30.824153 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"458f05be-2fd6-44d9-8034-f077356964ce\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 14:05:30 crc kubenswrapper[5050]: I1211 14:05:30.825986 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Dec 11 14:05:30 crc kubenswrapper[5050]: I1211 14:05:30.826903 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Dec 11 14:05:30 crc kubenswrapper[5050]: I1211 14:05:30.828203 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Dec 11 14:05:30 crc kubenswrapper[5050]: I1211 14:05:30.828700 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Dec 11 14:05:30 crc kubenswrapper[5050]: I1211 14:05:30.828865 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Dec 11 14:05:30 crc kubenswrapper[5050]: I1211 14:05:30.833111 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-2q7tr" Dec 11 14:05:30 crc kubenswrapper[5050]: I1211 14:05:30.833279 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Dec 11 14:05:30 crc kubenswrapper[5050]: I1211 14:05:30.842305 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 11 14:05:30 crc kubenswrapper[5050]: I1211 14:05:30.853698 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 11 14:05:30 crc kubenswrapper[5050]: I1211 14:05:30.903658 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0891f075-8101-475b-b844-e7cb42a4990b-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"0891f075-8101-475b-b844-e7cb42a4990b\") " pod="openstack/rabbitmq-server-0" Dec 11 14:05:30 crc kubenswrapper[5050]: I1211 14:05:30.903710 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0891f075-8101-475b-b844-e7cb42a4990b-pod-info\") pod \"rabbitmq-server-0\" (UID: \"0891f075-8101-475b-b844-e7cb42a4990b\") " pod="openstack/rabbitmq-server-0" Dec 11 14:05:30 crc kubenswrapper[5050]: I1211 14:05:30.903781 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-548z5\" (UniqueName: \"kubernetes.io/projected/0891f075-8101-475b-b844-e7cb42a4990b-kube-api-access-548z5\") pod \"rabbitmq-server-0\" (UID: \"0891f075-8101-475b-b844-e7cb42a4990b\") " pod="openstack/rabbitmq-server-0" Dec 11 14:05:30 crc kubenswrapper[5050]: I1211 14:05:30.903823 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0891f075-8101-475b-b844-e7cb42a4990b-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"0891f075-8101-475b-b844-e7cb42a4990b\") " pod="openstack/rabbitmq-server-0" Dec 11 14:05:30 crc kubenswrapper[5050]: I1211 14:05:30.903860 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0891f075-8101-475b-b844-e7cb42a4990b-config-data\") pod \"rabbitmq-server-0\" (UID: \"0891f075-8101-475b-b844-e7cb42a4990b\") " pod="openstack/rabbitmq-server-0" Dec 11 14:05:30 crc kubenswrapper[5050]: I1211 14:05:30.903880 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0891f075-8101-475b-b844-e7cb42a4990b-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"0891f075-8101-475b-b844-e7cb42a4990b\") " pod="openstack/rabbitmq-server-0" Dec 11 14:05:30 crc kubenswrapper[5050]: I1211 14:05:30.903902 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0891f075-8101-475b-b844-e7cb42a4990b-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"0891f075-8101-475b-b844-e7cb42a4990b\") " pod="openstack/rabbitmq-server-0" Dec 11 14:05:30 crc kubenswrapper[5050]: I1211 14:05:30.903929 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"0891f075-8101-475b-b844-e7cb42a4990b\") " pod="openstack/rabbitmq-server-0" Dec 11 14:05:30 crc kubenswrapper[5050]: I1211 14:05:30.904085 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/0891f075-8101-475b-b844-e7cb42a4990b-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"0891f075-8101-475b-b844-e7cb42a4990b\") " pod="openstack/rabbitmq-server-0" Dec 11 14:05:30 crc kubenswrapper[5050]: I1211 14:05:30.904118 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0891f075-8101-475b-b844-e7cb42a4990b-server-conf\") pod \"rabbitmq-server-0\" (UID: \"0891f075-8101-475b-b844-e7cb42a4990b\") " pod="openstack/rabbitmq-server-0" Dec 11 14:05:30 crc kubenswrapper[5050]: I1211 14:05:30.904158 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0891f075-8101-475b-b844-e7cb42a4990b-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"0891f075-8101-475b-b844-e7cb42a4990b\") " pod="openstack/rabbitmq-server-0" Dec 11 14:05:31 crc kubenswrapper[5050]: I1211 14:05:31.005418 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0891f075-8101-475b-b844-e7cb42a4990b-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"0891f075-8101-475b-b844-e7cb42a4990b\") " pod="openstack/rabbitmq-server-0" Dec 11 14:05:31 crc kubenswrapper[5050]: I1211 14:05:31.005862 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0891f075-8101-475b-b844-e7cb42a4990b-pod-info\") pod \"rabbitmq-server-0\" (UID: \"0891f075-8101-475b-b844-e7cb42a4990b\") " pod="openstack/rabbitmq-server-0" Dec 11 14:05:31 crc kubenswrapper[5050]: I1211 14:05:31.005896 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-548z5\" (UniqueName: \"kubernetes.io/projected/0891f075-8101-475b-b844-e7cb42a4990b-kube-api-access-548z5\") pod \"rabbitmq-server-0\" (UID: \"0891f075-8101-475b-b844-e7cb42a4990b\") " pod="openstack/rabbitmq-server-0" Dec 11 14:05:31 crc kubenswrapper[5050]: I1211 14:05:31.005935 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0891f075-8101-475b-b844-e7cb42a4990b-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"0891f075-8101-475b-b844-e7cb42a4990b\") " pod="openstack/rabbitmq-server-0" Dec 11 14:05:31 crc kubenswrapper[5050]: I1211 14:05:31.005964 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0891f075-8101-475b-b844-e7cb42a4990b-config-data\") pod \"rabbitmq-server-0\" (UID: \"0891f075-8101-475b-b844-e7cb42a4990b\") " pod="openstack/rabbitmq-server-0" Dec 11 14:05:31 crc kubenswrapper[5050]: I1211 14:05:31.005984 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0891f075-8101-475b-b844-e7cb42a4990b-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"0891f075-8101-475b-b844-e7cb42a4990b\") " pod="openstack/rabbitmq-server-0" Dec 11 14:05:31 crc kubenswrapper[5050]: I1211 14:05:31.006021 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0891f075-8101-475b-b844-e7cb42a4990b-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"0891f075-8101-475b-b844-e7cb42a4990b\") " pod="openstack/rabbitmq-server-0" Dec 11 14:05:31 
crc kubenswrapper[5050]: I1211 14:05:31.006055 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"0891f075-8101-475b-b844-e7cb42a4990b\") " pod="openstack/rabbitmq-server-0" Dec 11 14:05:31 crc kubenswrapper[5050]: I1211 14:05:31.006090 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0891f075-8101-475b-b844-e7cb42a4990b-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"0891f075-8101-475b-b844-e7cb42a4990b\") " pod="openstack/rabbitmq-server-0" Dec 11 14:05:31 crc kubenswrapper[5050]: I1211 14:05:31.006111 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0891f075-8101-475b-b844-e7cb42a4990b-server-conf\") pod \"rabbitmq-server-0\" (UID: \"0891f075-8101-475b-b844-e7cb42a4990b\") " pod="openstack/rabbitmq-server-0" Dec 11 14:05:31 crc kubenswrapper[5050]: I1211 14:05:31.006166 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0891f075-8101-475b-b844-e7cb42a4990b-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"0891f075-8101-475b-b844-e7cb42a4990b\") " pod="openstack/rabbitmq-server-0" Dec 11 14:05:31 crc kubenswrapper[5050]: I1211 14:05:31.006997 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0891f075-8101-475b-b844-e7cb42a4990b-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"0891f075-8101-475b-b844-e7cb42a4990b\") " pod="openstack/rabbitmq-server-0" Dec 11 14:05:31 crc kubenswrapper[5050]: I1211 14:05:31.007200 5050 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"0891f075-8101-475b-b844-e7cb42a4990b\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/rabbitmq-server-0" Dec 11 14:05:31 crc kubenswrapper[5050]: I1211 14:05:31.007386 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0891f075-8101-475b-b844-e7cb42a4990b-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"0891f075-8101-475b-b844-e7cb42a4990b\") " pod="openstack/rabbitmq-server-0" Dec 11 14:05:31 crc kubenswrapper[5050]: I1211 14:05:31.007918 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0891f075-8101-475b-b844-e7cb42a4990b-config-data\") pod \"rabbitmq-server-0\" (UID: \"0891f075-8101-475b-b844-e7cb42a4990b\") " pod="openstack/rabbitmq-server-0" Dec 11 14:05:31 crc kubenswrapper[5050]: I1211 14:05:31.008881 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0891f075-8101-475b-b844-e7cb42a4990b-server-conf\") pod \"rabbitmq-server-0\" (UID: \"0891f075-8101-475b-b844-e7cb42a4990b\") " pod="openstack/rabbitmq-server-0" Dec 11 14:05:31 crc kubenswrapper[5050]: I1211 14:05:31.014435 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0891f075-8101-475b-b844-e7cb42a4990b-plugins-conf\") pod \"rabbitmq-server-0\" (UID: 
\"0891f075-8101-475b-b844-e7cb42a4990b\") " pod="openstack/rabbitmq-server-0" Dec 11 14:05:31 crc kubenswrapper[5050]: I1211 14:05:31.014957 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0891f075-8101-475b-b844-e7cb42a4990b-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"0891f075-8101-475b-b844-e7cb42a4990b\") " pod="openstack/rabbitmq-server-0" Dec 11 14:05:31 crc kubenswrapper[5050]: I1211 14:05:31.024562 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0891f075-8101-475b-b844-e7cb42a4990b-pod-info\") pod \"rabbitmq-server-0\" (UID: \"0891f075-8101-475b-b844-e7cb42a4990b\") " pod="openstack/rabbitmq-server-0" Dec 11 14:05:31 crc kubenswrapper[5050]: I1211 14:05:31.039445 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-548z5\" (UniqueName: \"kubernetes.io/projected/0891f075-8101-475b-b844-e7cb42a4990b-kube-api-access-548z5\") pod \"rabbitmq-server-0\" (UID: \"0891f075-8101-475b-b844-e7cb42a4990b\") " pod="openstack/rabbitmq-server-0" Dec 11 14:05:31 crc kubenswrapper[5050]: I1211 14:05:31.070715 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0891f075-8101-475b-b844-e7cb42a4990b-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"0891f075-8101-475b-b844-e7cb42a4990b\") " pod="openstack/rabbitmq-server-0" Dec 11 14:05:31 crc kubenswrapper[5050]: I1211 14:05:31.076720 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0891f075-8101-475b-b844-e7cb42a4990b-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"0891f075-8101-475b-b844-e7cb42a4990b\") " pod="openstack/rabbitmq-server-0" Dec 11 14:05:31 crc kubenswrapper[5050]: I1211 14:05:31.079680 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-server-0\" (UID: \"0891f075-8101-475b-b844-e7cb42a4990b\") " pod="openstack/rabbitmq-server-0" Dec 11 14:05:31 crc kubenswrapper[5050]: I1211 14:05:31.164953 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 11 14:05:31 crc kubenswrapper[5050]: I1211 14:05:31.242971 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-95f5f6995-clprb" event={"ID":"63c71212-7318-45f1-94f9-235d861faf86","Type":"ContainerStarted","Data":"02666ca0c6f5f2991c2db0a3550951e8fc3eedf2fe4cbb24ce8ae903c174cc93"} Dec 11 14:05:31 crc kubenswrapper[5050]: I1211 14:05:31.247244 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-c7cbb8f79-vlqm4" event={"ID":"4b68c591-2f4c-41c3-9ab1-372deed0e388","Type":"ContainerStarted","Data":"359def1b79483f54be63282959561882ca79b96cce95b707c6ab7c3fb2c8a436"} Dec 11 14:05:31 crc kubenswrapper[5050]: I1211 14:05:31.389279 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 11 14:05:31 crc kubenswrapper[5050]: I1211 14:05:31.765954 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Dec 11 14:05:31 crc kubenswrapper[5050]: I1211 14:05:31.768103 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Dec 11 14:05:31 crc kubenswrapper[5050]: I1211 14:05:31.771487 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Dec 11 14:05:31 crc kubenswrapper[5050]: I1211 14:05:31.775785 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Dec 11 14:05:31 crc kubenswrapper[5050]: I1211 14:05:31.777052 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-mgt2f" Dec 11 14:05:31 crc kubenswrapper[5050]: I1211 14:05:31.778583 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Dec 11 14:05:31 crc kubenswrapper[5050]: I1211 14:05:31.779514 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Dec 11 14:05:31 crc kubenswrapper[5050]: I1211 14:05:31.795311 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Dec 11 14:05:31 crc kubenswrapper[5050]: I1211 14:05:31.825984 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrsfm\" (UniqueName: \"kubernetes.io/projected/55b7e535-46f6-403b-9cdf-bf172dba97b6-kube-api-access-lrsfm\") pod \"openstack-galera-0\" (UID: \"55b7e535-46f6-403b-9cdf-bf172dba97b6\") " pod="openstack/openstack-galera-0" Dec 11 14:05:31 crc kubenswrapper[5050]: I1211 14:05:31.826082 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-galera-0\" (UID: \"55b7e535-46f6-403b-9cdf-bf172dba97b6\") " pod="openstack/openstack-galera-0" Dec 11 14:05:31 crc kubenswrapper[5050]: I1211 14:05:31.826123 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/55b7e535-46f6-403b-9cdf-bf172dba97b6-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"55b7e535-46f6-403b-9cdf-bf172dba97b6\") " pod="openstack/openstack-galera-0" Dec 11 14:05:31 crc kubenswrapper[5050]: I1211 14:05:31.826150 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/55b7e535-46f6-403b-9cdf-bf172dba97b6-config-data-default\") pod \"openstack-galera-0\" (UID: \"55b7e535-46f6-403b-9cdf-bf172dba97b6\") " pod="openstack/openstack-galera-0" Dec 11 14:05:31 crc kubenswrapper[5050]: I1211 14:05:31.826178 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/55b7e535-46f6-403b-9cdf-bf172dba97b6-operator-scripts\") pod \"openstack-galera-0\" (UID: \"55b7e535-46f6-403b-9cdf-bf172dba97b6\") " pod="openstack/openstack-galera-0" Dec 11 14:05:31 crc kubenswrapper[5050]: I1211 14:05:31.826198 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/55b7e535-46f6-403b-9cdf-bf172dba97b6-kolla-config\") pod \"openstack-galera-0\" (UID: \"55b7e535-46f6-403b-9cdf-bf172dba97b6\") " pod="openstack/openstack-galera-0" Dec 11 14:05:31 crc kubenswrapper[5050]: I1211 14:05:31.826226 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/55b7e535-46f6-403b-9cdf-bf172dba97b6-config-data-generated\") pod \"openstack-galera-0\" (UID: \"55b7e535-46f6-403b-9cdf-bf172dba97b6\") " pod="openstack/openstack-galera-0" Dec 11 14:05:31 crc kubenswrapper[5050]: I1211 14:05:31.826242 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55b7e535-46f6-403b-9cdf-bf172dba97b6-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"55b7e535-46f6-403b-9cdf-bf172dba97b6\") " pod="openstack/openstack-galera-0" Dec 11 14:05:31 crc kubenswrapper[5050]: I1211 14:05:31.903453 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 11 14:05:31 crc kubenswrapper[5050]: W1211 14:05:31.922693 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0891f075_8101_475b_b844_e7cb42a4990b.slice/crio-76671437bcb30dc3b3007bb03f71b029d9481abc8524be43f277c131fcc5cd96 WatchSource:0}: Error finding container 76671437bcb30dc3b3007bb03f71b029d9481abc8524be43f277c131fcc5cd96: Status 404 returned error can't find the container with id 76671437bcb30dc3b3007bb03f71b029d9481abc8524be43f277c131fcc5cd96 Dec 11 14:05:31 crc kubenswrapper[5050]: I1211 14:05:31.927474 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lrsfm\" (UniqueName: \"kubernetes.io/projected/55b7e535-46f6-403b-9cdf-bf172dba97b6-kube-api-access-lrsfm\") pod \"openstack-galera-0\" (UID: \"55b7e535-46f6-403b-9cdf-bf172dba97b6\") " pod="openstack/openstack-galera-0" Dec 11 14:05:31 crc kubenswrapper[5050]: I1211 14:05:31.927535 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-galera-0\" (UID: \"55b7e535-46f6-403b-9cdf-bf172dba97b6\") " pod="openstack/openstack-galera-0" Dec 11 14:05:31 crc kubenswrapper[5050]: I1211 14:05:31.927570 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/55b7e535-46f6-403b-9cdf-bf172dba97b6-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"55b7e535-46f6-403b-9cdf-bf172dba97b6\") " pod="openstack/openstack-galera-0" Dec 11 14:05:31 crc kubenswrapper[5050]: I1211 14:05:31.927599 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/55b7e535-46f6-403b-9cdf-bf172dba97b6-config-data-default\") pod \"openstack-galera-0\" (UID: \"55b7e535-46f6-403b-9cdf-bf172dba97b6\") " pod="openstack/openstack-galera-0" Dec 11 14:05:31 crc kubenswrapper[5050]: I1211 14:05:31.927629 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/55b7e535-46f6-403b-9cdf-bf172dba97b6-operator-scripts\") pod \"openstack-galera-0\" (UID: \"55b7e535-46f6-403b-9cdf-bf172dba97b6\") " pod="openstack/openstack-galera-0" Dec 11 14:05:31 crc kubenswrapper[5050]: I1211 14:05:31.927648 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/55b7e535-46f6-403b-9cdf-bf172dba97b6-kolla-config\") pod \"openstack-galera-0\" (UID: \"55b7e535-46f6-403b-9cdf-bf172dba97b6\") " pod="openstack/openstack-galera-0" Dec 11 14:05:31 
crc kubenswrapper[5050]: I1211 14:05:31.927678 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/55b7e535-46f6-403b-9cdf-bf172dba97b6-config-data-generated\") pod \"openstack-galera-0\" (UID: \"55b7e535-46f6-403b-9cdf-bf172dba97b6\") " pod="openstack/openstack-galera-0" Dec 11 14:05:31 crc kubenswrapper[5050]: I1211 14:05:31.927696 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55b7e535-46f6-403b-9cdf-bf172dba97b6-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"55b7e535-46f6-403b-9cdf-bf172dba97b6\") " pod="openstack/openstack-galera-0" Dec 11 14:05:31 crc kubenswrapper[5050]: I1211 14:05:31.929696 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/55b7e535-46f6-403b-9cdf-bf172dba97b6-config-data-default\") pod \"openstack-galera-0\" (UID: \"55b7e535-46f6-403b-9cdf-bf172dba97b6\") " pod="openstack/openstack-galera-0" Dec 11 14:05:31 crc kubenswrapper[5050]: I1211 14:05:31.930104 5050 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-galera-0\" (UID: \"55b7e535-46f6-403b-9cdf-bf172dba97b6\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/openstack-galera-0" Dec 11 14:05:31 crc kubenswrapper[5050]: I1211 14:05:31.931389 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/55b7e535-46f6-403b-9cdf-bf172dba97b6-operator-scripts\") pod \"openstack-galera-0\" (UID: \"55b7e535-46f6-403b-9cdf-bf172dba97b6\") " pod="openstack/openstack-galera-0" Dec 11 14:05:31 crc kubenswrapper[5050]: I1211 14:05:31.932064 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/55b7e535-46f6-403b-9cdf-bf172dba97b6-kolla-config\") pod \"openstack-galera-0\" (UID: \"55b7e535-46f6-403b-9cdf-bf172dba97b6\") " pod="openstack/openstack-galera-0" Dec 11 14:05:31 crc kubenswrapper[5050]: I1211 14:05:31.938607 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/55b7e535-46f6-403b-9cdf-bf172dba97b6-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"55b7e535-46f6-403b-9cdf-bf172dba97b6\") " pod="openstack/openstack-galera-0" Dec 11 14:05:31 crc kubenswrapper[5050]: I1211 14:05:31.939451 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/55b7e535-46f6-403b-9cdf-bf172dba97b6-config-data-generated\") pod \"openstack-galera-0\" (UID: \"55b7e535-46f6-403b-9cdf-bf172dba97b6\") " pod="openstack/openstack-galera-0" Dec 11 14:05:31 crc kubenswrapper[5050]: I1211 14:05:31.944308 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55b7e535-46f6-403b-9cdf-bf172dba97b6-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"55b7e535-46f6-403b-9cdf-bf172dba97b6\") " pod="openstack/openstack-galera-0" Dec 11 14:05:31 crc kubenswrapper[5050]: I1211 14:05:31.952783 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lrsfm\" (UniqueName: 
\"kubernetes.io/projected/55b7e535-46f6-403b-9cdf-bf172dba97b6-kube-api-access-lrsfm\") pod \"openstack-galera-0\" (UID: \"55b7e535-46f6-403b-9cdf-bf172dba97b6\") " pod="openstack/openstack-galera-0" Dec 11 14:05:31 crc kubenswrapper[5050]: I1211 14:05:31.981022 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-galera-0\" (UID: \"55b7e535-46f6-403b-9cdf-bf172dba97b6\") " pod="openstack/openstack-galera-0" Dec 11 14:05:32 crc kubenswrapper[5050]: I1211 14:05:32.115632 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Dec 11 14:05:32 crc kubenswrapper[5050]: I1211 14:05:32.275447 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"458f05be-2fd6-44d9-8034-f077356964ce","Type":"ContainerStarted","Data":"2d0451eab14fb448dcdc0b7ce30cc6a358bc5517d182993f5b4b8a3785edf30b"} Dec 11 14:05:32 crc kubenswrapper[5050]: I1211 14:05:32.310521 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"0891f075-8101-475b-b844-e7cb42a4990b","Type":"ContainerStarted","Data":"76671437bcb30dc3b3007bb03f71b029d9481abc8524be43f277c131fcc5cd96"} Dec 11 14:05:32 crc kubenswrapper[5050]: I1211 14:05:32.639313 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Dec 11 14:05:33 crc kubenswrapper[5050]: I1211 14:05:33.256394 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Dec 11 14:05:33 crc kubenswrapper[5050]: I1211 14:05:33.258663 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Dec 11 14:05:33 crc kubenswrapper[5050]: I1211 14:05:33.283587 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Dec 11 14:05:33 crc kubenswrapper[5050]: I1211 14:05:33.284288 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Dec 11 14:05:33 crc kubenswrapper[5050]: I1211 14:05:33.284552 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-68256" Dec 11 14:05:33 crc kubenswrapper[5050]: I1211 14:05:33.287731 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Dec 11 14:05:33 crc kubenswrapper[5050]: I1211 14:05:33.288795 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Dec 11 14:05:33 crc kubenswrapper[5050]: I1211 14:05:33.339243 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"55b7e535-46f6-403b-9cdf-bf172dba97b6","Type":"ContainerStarted","Data":"40067b474165e1cb78f1321cfd27b60fed349c9f2208ca7671993f05feb28cf8"} Dec 11 14:05:33 crc kubenswrapper[5050]: I1211 14:05:33.450832 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Dec 11 14:05:33 crc kubenswrapper[5050]: I1211 14:05:33.467660 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/c8b3d8cd-9278-4639-86fe-1aa7696fecca-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"c8b3d8cd-9278-4639-86fe-1aa7696fecca\") " pod="openstack/openstack-cell1-galera-0" Dec 11 14:05:33 crc 
kubenswrapper[5050]: I1211 14:05:33.467733 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8b3d8cd-9278-4639-86fe-1aa7696fecca-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"c8b3d8cd-9278-4639-86fe-1aa7696fecca\") " pod="openstack/openstack-cell1-galera-0" Dec 11 14:05:33 crc kubenswrapper[5050]: I1211 14:05:33.468300 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/c8b3d8cd-9278-4639-86fe-1aa7696fecca-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"c8b3d8cd-9278-4639-86fe-1aa7696fecca\") " pod="openstack/openstack-cell1-galera-0" Dec 11 14:05:33 crc kubenswrapper[5050]: I1211 14:05:33.468385 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c8b3d8cd-9278-4639-86fe-1aa7696fecca-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"c8b3d8cd-9278-4639-86fe-1aa7696fecca\") " pod="openstack/openstack-cell1-galera-0" Dec 11 14:05:33 crc kubenswrapper[5050]: I1211 14:05:33.468437 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Dec 11 14:05:33 crc kubenswrapper[5050]: I1211 14:05:33.468606 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Dec 11 14:05:33 crc kubenswrapper[5050]: I1211 14:05:33.468652 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/c8b3d8cd-9278-4639-86fe-1aa7696fecca-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"c8b3d8cd-9278-4639-86fe-1aa7696fecca\") " pod="openstack/openstack-cell1-galera-0" Dec 11 14:05:33 crc kubenswrapper[5050]: I1211 14:05:33.468747 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-cell1-galera-0\" (UID: \"c8b3d8cd-9278-4639-86fe-1aa7696fecca\") " pod="openstack/openstack-cell1-galera-0" Dec 11 14:05:33 crc kubenswrapper[5050]: I1211 14:05:33.468780 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-twplg\" (UniqueName: \"kubernetes.io/projected/c8b3d8cd-9278-4639-86fe-1aa7696fecca-kube-api-access-twplg\") pod \"openstack-cell1-galera-0\" (UID: \"c8b3d8cd-9278-4639-86fe-1aa7696fecca\") " pod="openstack/openstack-cell1-galera-0" Dec 11 14:05:33 crc kubenswrapper[5050]: I1211 14:05:33.469273 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c8b3d8cd-9278-4639-86fe-1aa7696fecca-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"c8b3d8cd-9278-4639-86fe-1aa7696fecca\") " pod="openstack/openstack-cell1-galera-0" Dec 11 14:05:33 crc kubenswrapper[5050]: I1211 14:05:33.473590 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-vlq7v" Dec 11 14:05:33 crc kubenswrapper[5050]: I1211 14:05:33.473887 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Dec 11 14:05:33 crc kubenswrapper[5050]: I1211 14:05:33.475531 5050 reflector.go:368] Caches 
populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Dec 11 14:05:33 crc kubenswrapper[5050]: I1211 14:05:33.572275 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/defedffb-9310-4b18-b7ee-b54040aa5447-memcached-tls-certs\") pod \"memcached-0\" (UID: \"defedffb-9310-4b18-b7ee-b54040aa5447\") " pod="openstack/memcached-0" Dec 11 14:05:33 crc kubenswrapper[5050]: I1211 14:05:33.572391 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c8b3d8cd-9278-4639-86fe-1aa7696fecca-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"c8b3d8cd-9278-4639-86fe-1aa7696fecca\") " pod="openstack/openstack-cell1-galera-0" Dec 11 14:05:33 crc kubenswrapper[5050]: I1211 14:05:33.572432 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/c8b3d8cd-9278-4639-86fe-1aa7696fecca-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"c8b3d8cd-9278-4639-86fe-1aa7696fecca\") " pod="openstack/openstack-cell1-galera-0" Dec 11 14:05:33 crc kubenswrapper[5050]: I1211 14:05:33.572461 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8b3d8cd-9278-4639-86fe-1aa7696fecca-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"c8b3d8cd-9278-4639-86fe-1aa7696fecca\") " pod="openstack/openstack-cell1-galera-0" Dec 11 14:05:33 crc kubenswrapper[5050]: I1211 14:05:33.572519 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/defedffb-9310-4b18-b7ee-b54040aa5447-combined-ca-bundle\") pod \"memcached-0\" (UID: \"defedffb-9310-4b18-b7ee-b54040aa5447\") " pod="openstack/memcached-0" Dec 11 14:05:33 crc kubenswrapper[5050]: I1211 14:05:33.572550 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/c8b3d8cd-9278-4639-86fe-1aa7696fecca-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"c8b3d8cd-9278-4639-86fe-1aa7696fecca\") " pod="openstack/openstack-cell1-galera-0" Dec 11 14:05:33 crc kubenswrapper[5050]: I1211 14:05:33.572573 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c8b3d8cd-9278-4639-86fe-1aa7696fecca-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"c8b3d8cd-9278-4639-86fe-1aa7696fecca\") " pod="openstack/openstack-cell1-galera-0" Dec 11 14:05:33 crc kubenswrapper[5050]: I1211 14:05:33.572622 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/defedffb-9310-4b18-b7ee-b54040aa5447-config-data\") pod \"memcached-0\" (UID: \"defedffb-9310-4b18-b7ee-b54040aa5447\") " pod="openstack/memcached-0" Dec 11 14:05:33 crc kubenswrapper[5050]: I1211 14:05:33.572651 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/c8b3d8cd-9278-4639-86fe-1aa7696fecca-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"c8b3d8cd-9278-4639-86fe-1aa7696fecca\") " pod="openstack/openstack-cell1-galera-0" Dec 11 14:05:33 crc kubenswrapper[5050]: 
I1211 14:05:33.572674 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/defedffb-9310-4b18-b7ee-b54040aa5447-kolla-config\") pod \"memcached-0\" (UID: \"defedffb-9310-4b18-b7ee-b54040aa5447\") " pod="openstack/memcached-0" Dec 11 14:05:33 crc kubenswrapper[5050]: I1211 14:05:33.572706 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-cell1-galera-0\" (UID: \"c8b3d8cd-9278-4639-86fe-1aa7696fecca\") " pod="openstack/openstack-cell1-galera-0" Dec 11 14:05:33 crc kubenswrapper[5050]: I1211 14:05:33.572730 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-twplg\" (UniqueName: \"kubernetes.io/projected/c8b3d8cd-9278-4639-86fe-1aa7696fecca-kube-api-access-twplg\") pod \"openstack-cell1-galera-0\" (UID: \"c8b3d8cd-9278-4639-86fe-1aa7696fecca\") " pod="openstack/openstack-cell1-galera-0" Dec 11 14:05:33 crc kubenswrapper[5050]: I1211 14:05:33.572776 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lcwl9\" (UniqueName: \"kubernetes.io/projected/defedffb-9310-4b18-b7ee-b54040aa5447-kube-api-access-lcwl9\") pod \"memcached-0\" (UID: \"defedffb-9310-4b18-b7ee-b54040aa5447\") " pod="openstack/memcached-0" Dec 11 14:05:33 crc kubenswrapper[5050]: I1211 14:05:33.575092 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/c8b3d8cd-9278-4639-86fe-1aa7696fecca-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"c8b3d8cd-9278-4639-86fe-1aa7696fecca\") " pod="openstack/openstack-cell1-galera-0" Dec 11 14:05:33 crc kubenswrapper[5050]: I1211 14:05:33.575982 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c8b3d8cd-9278-4639-86fe-1aa7696fecca-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"c8b3d8cd-9278-4639-86fe-1aa7696fecca\") " pod="openstack/openstack-cell1-galera-0" Dec 11 14:05:33 crc kubenswrapper[5050]: I1211 14:05:33.577116 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/c8b3d8cd-9278-4639-86fe-1aa7696fecca-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"c8b3d8cd-9278-4639-86fe-1aa7696fecca\") " pod="openstack/openstack-cell1-galera-0" Dec 11 14:05:33 crc kubenswrapper[5050]: I1211 14:05:33.580183 5050 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-cell1-galera-0\" (UID: \"c8b3d8cd-9278-4639-86fe-1aa7696fecca\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/openstack-cell1-galera-0" Dec 11 14:05:33 crc kubenswrapper[5050]: I1211 14:05:33.587914 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c8b3d8cd-9278-4639-86fe-1aa7696fecca-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"c8b3d8cd-9278-4639-86fe-1aa7696fecca\") " pod="openstack/openstack-cell1-galera-0" Dec 11 14:05:33 crc kubenswrapper[5050]: I1211 14:05:33.601685 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/c8b3d8cd-9278-4639-86fe-1aa7696fecca-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"c8b3d8cd-9278-4639-86fe-1aa7696fecca\") " pod="openstack/openstack-cell1-galera-0" Dec 11 14:05:33 crc kubenswrapper[5050]: I1211 14:05:33.615556 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8b3d8cd-9278-4639-86fe-1aa7696fecca-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"c8b3d8cd-9278-4639-86fe-1aa7696fecca\") " pod="openstack/openstack-cell1-galera-0" Dec 11 14:05:33 crc kubenswrapper[5050]: I1211 14:05:33.619193 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-twplg\" (UniqueName: \"kubernetes.io/projected/c8b3d8cd-9278-4639-86fe-1aa7696fecca-kube-api-access-twplg\") pod \"openstack-cell1-galera-0\" (UID: \"c8b3d8cd-9278-4639-86fe-1aa7696fecca\") " pod="openstack/openstack-cell1-galera-0" Dec 11 14:05:33 crc kubenswrapper[5050]: I1211 14:05:33.657075 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-cell1-galera-0\" (UID: \"c8b3d8cd-9278-4639-86fe-1aa7696fecca\") " pod="openstack/openstack-cell1-galera-0" Dec 11 14:05:33 crc kubenswrapper[5050]: I1211 14:05:33.676245 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/defedffb-9310-4b18-b7ee-b54040aa5447-combined-ca-bundle\") pod \"memcached-0\" (UID: \"defedffb-9310-4b18-b7ee-b54040aa5447\") " pod="openstack/memcached-0" Dec 11 14:05:33 crc kubenswrapper[5050]: I1211 14:05:33.676339 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/defedffb-9310-4b18-b7ee-b54040aa5447-config-data\") pod \"memcached-0\" (UID: \"defedffb-9310-4b18-b7ee-b54040aa5447\") " pod="openstack/memcached-0" Dec 11 14:05:33 crc kubenswrapper[5050]: I1211 14:05:33.676374 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/defedffb-9310-4b18-b7ee-b54040aa5447-kolla-config\") pod \"memcached-0\" (UID: \"defedffb-9310-4b18-b7ee-b54040aa5447\") " pod="openstack/memcached-0" Dec 11 14:05:33 crc kubenswrapper[5050]: I1211 14:05:33.676438 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lcwl9\" (UniqueName: \"kubernetes.io/projected/defedffb-9310-4b18-b7ee-b54040aa5447-kube-api-access-lcwl9\") pod \"memcached-0\" (UID: \"defedffb-9310-4b18-b7ee-b54040aa5447\") " pod="openstack/memcached-0" Dec 11 14:05:33 crc kubenswrapper[5050]: I1211 14:05:33.676465 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/defedffb-9310-4b18-b7ee-b54040aa5447-memcached-tls-certs\") pod \"memcached-0\" (UID: \"defedffb-9310-4b18-b7ee-b54040aa5447\") " pod="openstack/memcached-0" Dec 11 14:05:33 crc kubenswrapper[5050]: I1211 14:05:33.680423 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/defedffb-9310-4b18-b7ee-b54040aa5447-config-data\") pod \"memcached-0\" (UID: \"defedffb-9310-4b18-b7ee-b54040aa5447\") " pod="openstack/memcached-0" Dec 11 14:05:33 crc kubenswrapper[5050]: I1211 14:05:33.696508 5050 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/defedffb-9310-4b18-b7ee-b54040aa5447-kolla-config\") pod \"memcached-0\" (UID: \"defedffb-9310-4b18-b7ee-b54040aa5447\") " pod="openstack/memcached-0" Dec 11 14:05:33 crc kubenswrapper[5050]: I1211 14:05:33.697301 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/defedffb-9310-4b18-b7ee-b54040aa5447-memcached-tls-certs\") pod \"memcached-0\" (UID: \"defedffb-9310-4b18-b7ee-b54040aa5447\") " pod="openstack/memcached-0" Dec 11 14:05:33 crc kubenswrapper[5050]: I1211 14:05:33.717499 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/defedffb-9310-4b18-b7ee-b54040aa5447-combined-ca-bundle\") pod \"memcached-0\" (UID: \"defedffb-9310-4b18-b7ee-b54040aa5447\") " pod="openstack/memcached-0" Dec 11 14:05:33 crc kubenswrapper[5050]: I1211 14:05:33.724835 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lcwl9\" (UniqueName: \"kubernetes.io/projected/defedffb-9310-4b18-b7ee-b54040aa5447-kube-api-access-lcwl9\") pod \"memcached-0\" (UID: \"defedffb-9310-4b18-b7ee-b54040aa5447\") " pod="openstack/memcached-0" Dec 11 14:05:33 crc kubenswrapper[5050]: I1211 14:05:33.813912 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Dec 11 14:05:33 crc kubenswrapper[5050]: I1211 14:05:33.906245 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Dec 11 14:05:34 crc kubenswrapper[5050]: I1211 14:05:34.503375 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Dec 11 14:05:34 crc kubenswrapper[5050]: I1211 14:05:34.637807 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Dec 11 14:05:34 crc kubenswrapper[5050]: W1211 14:05:34.651116 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc8b3d8cd_9278_4639_86fe_1aa7696fecca.slice/crio-60fc2ae7bd55a26f15c78cb791549aa82b5904e02f97e0b62c1c2744d47e12eb WatchSource:0}: Error finding container 60fc2ae7bd55a26f15c78cb791549aa82b5904e02f97e0b62c1c2744d47e12eb: Status 404 returned error can't find the container with id 60fc2ae7bd55a26f15c78cb791549aa82b5904e02f97e0b62c1c2744d47e12eb Dec 11 14:05:35 crc kubenswrapper[5050]: I1211 14:05:35.187814 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Dec 11 14:05:35 crc kubenswrapper[5050]: I1211 14:05:35.189740 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 11 14:05:35 crc kubenswrapper[5050]: I1211 14:05:35.201219 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 11 14:05:35 crc kubenswrapper[5050]: I1211 14:05:35.207063 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-vm4fg" Dec 11 14:05:35 crc kubenswrapper[5050]: I1211 14:05:35.309865 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-twq74\" (UniqueName: \"kubernetes.io/projected/8e60c3c2-6055-4e50-99b6-4a5f08728b17-kube-api-access-twq74\") pod \"kube-state-metrics-0\" (UID: \"8e60c3c2-6055-4e50-99b6-4a5f08728b17\") " pod="openstack/kube-state-metrics-0" Dec 11 14:05:35 crc kubenswrapper[5050]: I1211 14:05:35.411521 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-twq74\" (UniqueName: \"kubernetes.io/projected/8e60c3c2-6055-4e50-99b6-4a5f08728b17-kube-api-access-twq74\") pod \"kube-state-metrics-0\" (UID: \"8e60c3c2-6055-4e50-99b6-4a5f08728b17\") " pod="openstack/kube-state-metrics-0" Dec 11 14:05:35 crc kubenswrapper[5050]: I1211 14:05:35.421787 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"defedffb-9310-4b18-b7ee-b54040aa5447","Type":"ContainerStarted","Data":"aa551191fb3a0ea98347fca4525dc93cee1f4c93fbca070cbdc38382a4dcbbc2"} Dec 11 14:05:35 crc kubenswrapper[5050]: I1211 14:05:35.426977 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"c8b3d8cd-9278-4639-86fe-1aa7696fecca","Type":"ContainerStarted","Data":"60fc2ae7bd55a26f15c78cb791549aa82b5904e02f97e0b62c1c2744d47e12eb"} Dec 11 14:05:35 crc kubenswrapper[5050]: I1211 14:05:35.450835 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-twq74\" (UniqueName: \"kubernetes.io/projected/8e60c3c2-6055-4e50-99b6-4a5f08728b17-kube-api-access-twq74\") pod \"kube-state-metrics-0\" (UID: \"8e60c3c2-6055-4e50-99b6-4a5f08728b17\") " pod="openstack/kube-state-metrics-0" Dec 11 14:05:35 crc kubenswrapper[5050]: I1211 14:05:35.571438 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 11 14:05:36 crc kubenswrapper[5050]: I1211 14:05:36.392890 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 11 14:05:36 crc kubenswrapper[5050]: W1211 14:05:36.475378 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8e60c3c2_6055_4e50_99b6_4a5f08728b17.slice/crio-8f63bf38a8ce475fd832f14043c670eee1242e8c25f318fc0a497be6c8dad4aa WatchSource:0}: Error finding container 8f63bf38a8ce475fd832f14043c670eee1242e8c25f318fc0a497be6c8dad4aa: Status 404 returned error can't find the container with id 8f63bf38a8ce475fd832f14043c670eee1242e8c25f318fc0a497be6c8dad4aa Dec 11 14:05:37 crc kubenswrapper[5050]: I1211 14:05:37.456843 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"8e60c3c2-6055-4e50-99b6-4a5f08728b17","Type":"ContainerStarted","Data":"8f63bf38a8ce475fd832f14043c670eee1242e8c25f318fc0a497be6c8dad4aa"} Dec 11 14:05:38 crc kubenswrapper[5050]: I1211 14:05:38.565997 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-47tvr"] Dec 11 14:05:38 crc kubenswrapper[5050]: I1211 14:05:38.568301 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-47tvr" Dec 11 14:05:38 crc kubenswrapper[5050]: I1211 14:05:38.572379 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-f5pcs" Dec 11 14:05:38 crc kubenswrapper[5050]: I1211 14:05:38.572859 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Dec 11 14:05:38 crc kubenswrapper[5050]: I1211 14:05:38.580662 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Dec 11 14:05:38 crc kubenswrapper[5050]: I1211 14:05:38.604445 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-pjzpq"] Dec 11 14:05:38 crc kubenswrapper[5050]: I1211 14:05:38.606353 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-pjzpq" Dec 11 14:05:38 crc kubenswrapper[5050]: I1211 14:05:38.643863 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-47tvr"] Dec 11 14:05:38 crc kubenswrapper[5050]: I1211 14:05:38.654691 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-pjzpq"] Dec 11 14:05:38 crc kubenswrapper[5050]: I1211 14:05:38.689102 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01fa4d89-aae5-451a-8798-2700053fe3d4-combined-ca-bundle\") pod \"ovn-controller-47tvr\" (UID: \"01fa4d89-aae5-451a-8798-2700053fe3d4\") " pod="openstack/ovn-controller-47tvr" Dec 11 14:05:38 crc kubenswrapper[5050]: I1211 14:05:38.689202 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/88b4966d-124b-4cf4-b52b-704955059220-var-run\") pod \"ovn-controller-ovs-pjzpq\" (UID: \"88b4966d-124b-4cf4-b52b-704955059220\") " pod="openstack/ovn-controller-ovs-pjzpq" Dec 11 14:05:38 crc kubenswrapper[5050]: I1211 14:05:38.689240 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/88b4966d-124b-4cf4-b52b-704955059220-var-lib\") pod \"ovn-controller-ovs-pjzpq\" (UID: \"88b4966d-124b-4cf4-b52b-704955059220\") " pod="openstack/ovn-controller-ovs-pjzpq" Dec 11 14:05:38 crc kubenswrapper[5050]: I1211 14:05:38.689300 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/01fa4d89-aae5-451a-8798-2700053fe3d4-scripts\") pod \"ovn-controller-47tvr\" (UID: \"01fa4d89-aae5-451a-8798-2700053fe3d4\") " pod="openstack/ovn-controller-47tvr" Dec 11 14:05:38 crc kubenswrapper[5050]: I1211 14:05:38.689325 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/88b4966d-124b-4cf4-b52b-704955059220-etc-ovs\") pod \"ovn-controller-ovs-pjzpq\" (UID: \"88b4966d-124b-4cf4-b52b-704955059220\") " pod="openstack/ovn-controller-ovs-pjzpq" Dec 11 14:05:38 crc kubenswrapper[5050]: I1211 14:05:38.690684 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fcvpj\" (UniqueName: \"kubernetes.io/projected/88b4966d-124b-4cf4-b52b-704955059220-kube-api-access-fcvpj\") pod \"ovn-controller-ovs-pjzpq\" (UID: \"88b4966d-124b-4cf4-b52b-704955059220\") " pod="openstack/ovn-controller-ovs-pjzpq" Dec 11 14:05:38 crc kubenswrapper[5050]: I1211 14:05:38.690755 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/88b4966d-124b-4cf4-b52b-704955059220-scripts\") pod \"ovn-controller-ovs-pjzpq\" (UID: \"88b4966d-124b-4cf4-b52b-704955059220\") " pod="openstack/ovn-controller-ovs-pjzpq" Dec 11 14:05:38 crc kubenswrapper[5050]: I1211 14:05:38.690834 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/88b4966d-124b-4cf4-b52b-704955059220-var-log\") pod \"ovn-controller-ovs-pjzpq\" (UID: \"88b4966d-124b-4cf4-b52b-704955059220\") " pod="openstack/ovn-controller-ovs-pjzpq" Dec 11 14:05:38 crc kubenswrapper[5050]: I1211 
14:05:38.690942 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/01fa4d89-aae5-451a-8798-2700053fe3d4-var-run-ovn\") pod \"ovn-controller-47tvr\" (UID: \"01fa4d89-aae5-451a-8798-2700053fe3d4\") " pod="openstack/ovn-controller-47tvr" Dec 11 14:05:38 crc kubenswrapper[5050]: I1211 14:05:38.691060 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxnd5\" (UniqueName: \"kubernetes.io/projected/01fa4d89-aae5-451a-8798-2700053fe3d4-kube-api-access-kxnd5\") pod \"ovn-controller-47tvr\" (UID: \"01fa4d89-aae5-451a-8798-2700053fe3d4\") " pod="openstack/ovn-controller-47tvr" Dec 11 14:05:38 crc kubenswrapper[5050]: I1211 14:05:38.691178 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/01fa4d89-aae5-451a-8798-2700053fe3d4-ovn-controller-tls-certs\") pod \"ovn-controller-47tvr\" (UID: \"01fa4d89-aae5-451a-8798-2700053fe3d4\") " pod="openstack/ovn-controller-47tvr" Dec 11 14:05:38 crc kubenswrapper[5050]: I1211 14:05:38.691238 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/01fa4d89-aae5-451a-8798-2700053fe3d4-var-run\") pod \"ovn-controller-47tvr\" (UID: \"01fa4d89-aae5-451a-8798-2700053fe3d4\") " pod="openstack/ovn-controller-47tvr" Dec 11 14:05:38 crc kubenswrapper[5050]: I1211 14:05:38.691341 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/01fa4d89-aae5-451a-8798-2700053fe3d4-var-log-ovn\") pod \"ovn-controller-47tvr\" (UID: \"01fa4d89-aae5-451a-8798-2700053fe3d4\") " pod="openstack/ovn-controller-47tvr" Dec 11 14:05:38 crc kubenswrapper[5050]: I1211 14:05:38.794035 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/01fa4d89-aae5-451a-8798-2700053fe3d4-var-run-ovn\") pod \"ovn-controller-47tvr\" (UID: \"01fa4d89-aae5-451a-8798-2700053fe3d4\") " pod="openstack/ovn-controller-47tvr" Dec 11 14:05:38 crc kubenswrapper[5050]: I1211 14:05:38.794478 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kxnd5\" (UniqueName: \"kubernetes.io/projected/01fa4d89-aae5-451a-8798-2700053fe3d4-kube-api-access-kxnd5\") pod \"ovn-controller-47tvr\" (UID: \"01fa4d89-aae5-451a-8798-2700053fe3d4\") " pod="openstack/ovn-controller-47tvr" Dec 11 14:05:38 crc kubenswrapper[5050]: I1211 14:05:38.794729 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/01fa4d89-aae5-451a-8798-2700053fe3d4-ovn-controller-tls-certs\") pod \"ovn-controller-47tvr\" (UID: \"01fa4d89-aae5-451a-8798-2700053fe3d4\") " pod="openstack/ovn-controller-47tvr" Dec 11 14:05:38 crc kubenswrapper[5050]: I1211 14:05:38.794824 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/01fa4d89-aae5-451a-8798-2700053fe3d4-var-run-ovn\") pod \"ovn-controller-47tvr\" (UID: \"01fa4d89-aae5-451a-8798-2700053fe3d4\") " pod="openstack/ovn-controller-47tvr" Dec 11 14:05:38 crc kubenswrapper[5050]: I1211 14:05:38.794769 5050 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/01fa4d89-aae5-451a-8798-2700053fe3d4-var-run\") pod \"ovn-controller-47tvr\" (UID: \"01fa4d89-aae5-451a-8798-2700053fe3d4\") " pod="openstack/ovn-controller-47tvr" Dec 11 14:05:38 crc kubenswrapper[5050]: I1211 14:05:38.795149 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/01fa4d89-aae5-451a-8798-2700053fe3d4-var-log-ovn\") pod \"ovn-controller-47tvr\" (UID: \"01fa4d89-aae5-451a-8798-2700053fe3d4\") " pod="openstack/ovn-controller-47tvr" Dec 11 14:05:38 crc kubenswrapper[5050]: I1211 14:05:38.795346 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01fa4d89-aae5-451a-8798-2700053fe3d4-combined-ca-bundle\") pod \"ovn-controller-47tvr\" (UID: \"01fa4d89-aae5-451a-8798-2700053fe3d4\") " pod="openstack/ovn-controller-47tvr" Dec 11 14:05:38 crc kubenswrapper[5050]: I1211 14:05:38.795499 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/88b4966d-124b-4cf4-b52b-704955059220-var-run\") pod \"ovn-controller-ovs-pjzpq\" (UID: \"88b4966d-124b-4cf4-b52b-704955059220\") " pod="openstack/ovn-controller-ovs-pjzpq" Dec 11 14:05:38 crc kubenswrapper[5050]: I1211 14:05:38.795532 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/88b4966d-124b-4cf4-b52b-704955059220-var-lib\") pod \"ovn-controller-ovs-pjzpq\" (UID: \"88b4966d-124b-4cf4-b52b-704955059220\") " pod="openstack/ovn-controller-ovs-pjzpq" Dec 11 14:05:38 crc kubenswrapper[5050]: I1211 14:05:38.795576 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/01fa4d89-aae5-451a-8798-2700053fe3d4-scripts\") pod \"ovn-controller-47tvr\" (UID: \"01fa4d89-aae5-451a-8798-2700053fe3d4\") " pod="openstack/ovn-controller-47tvr" Dec 11 14:05:38 crc kubenswrapper[5050]: I1211 14:05:38.795598 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/88b4966d-124b-4cf4-b52b-704955059220-etc-ovs\") pod \"ovn-controller-ovs-pjzpq\" (UID: \"88b4966d-124b-4cf4-b52b-704955059220\") " pod="openstack/ovn-controller-ovs-pjzpq" Dec 11 14:05:38 crc kubenswrapper[5050]: I1211 14:05:38.795636 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fcvpj\" (UniqueName: \"kubernetes.io/projected/88b4966d-124b-4cf4-b52b-704955059220-kube-api-access-fcvpj\") pod \"ovn-controller-ovs-pjzpq\" (UID: \"88b4966d-124b-4cf4-b52b-704955059220\") " pod="openstack/ovn-controller-ovs-pjzpq" Dec 11 14:05:38 crc kubenswrapper[5050]: I1211 14:05:38.795665 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/88b4966d-124b-4cf4-b52b-704955059220-scripts\") pod \"ovn-controller-ovs-pjzpq\" (UID: \"88b4966d-124b-4cf4-b52b-704955059220\") " pod="openstack/ovn-controller-ovs-pjzpq" Dec 11 14:05:38 crc kubenswrapper[5050]: I1211 14:05:38.795708 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/88b4966d-124b-4cf4-b52b-704955059220-var-log\") pod \"ovn-controller-ovs-pjzpq\" (UID: \"88b4966d-124b-4cf4-b52b-704955059220\") " 
pod="openstack/ovn-controller-ovs-pjzpq" Dec 11 14:05:38 crc kubenswrapper[5050]: I1211 14:05:38.795732 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/01fa4d89-aae5-451a-8798-2700053fe3d4-var-run\") pod \"ovn-controller-47tvr\" (UID: \"01fa4d89-aae5-451a-8798-2700053fe3d4\") " pod="openstack/ovn-controller-47tvr" Dec 11 14:05:38 crc kubenswrapper[5050]: I1211 14:05:38.795927 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/88b4966d-124b-4cf4-b52b-704955059220-etc-ovs\") pod \"ovn-controller-ovs-pjzpq\" (UID: \"88b4966d-124b-4cf4-b52b-704955059220\") " pod="openstack/ovn-controller-ovs-pjzpq" Dec 11 14:05:38 crc kubenswrapper[5050]: I1211 14:05:38.796323 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/88b4966d-124b-4cf4-b52b-704955059220-var-log\") pod \"ovn-controller-ovs-pjzpq\" (UID: \"88b4966d-124b-4cf4-b52b-704955059220\") " pod="openstack/ovn-controller-ovs-pjzpq" Dec 11 14:05:38 crc kubenswrapper[5050]: I1211 14:05:38.797027 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/88b4966d-124b-4cf4-b52b-704955059220-var-lib\") pod \"ovn-controller-ovs-pjzpq\" (UID: \"88b4966d-124b-4cf4-b52b-704955059220\") " pod="openstack/ovn-controller-ovs-pjzpq" Dec 11 14:05:38 crc kubenswrapper[5050]: I1211 14:05:38.797098 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/88b4966d-124b-4cf4-b52b-704955059220-var-run\") pod \"ovn-controller-ovs-pjzpq\" (UID: \"88b4966d-124b-4cf4-b52b-704955059220\") " pod="openstack/ovn-controller-ovs-pjzpq" Dec 11 14:05:38 crc kubenswrapper[5050]: I1211 14:05:38.797116 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/01fa4d89-aae5-451a-8798-2700053fe3d4-var-log-ovn\") pod \"ovn-controller-47tvr\" (UID: \"01fa4d89-aae5-451a-8798-2700053fe3d4\") " pod="openstack/ovn-controller-47tvr" Dec 11 14:05:38 crc kubenswrapper[5050]: I1211 14:05:38.800742 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/88b4966d-124b-4cf4-b52b-704955059220-scripts\") pod \"ovn-controller-ovs-pjzpq\" (UID: \"88b4966d-124b-4cf4-b52b-704955059220\") " pod="openstack/ovn-controller-ovs-pjzpq" Dec 11 14:05:38 crc kubenswrapper[5050]: I1211 14:05:38.803581 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/01fa4d89-aae5-451a-8798-2700053fe3d4-scripts\") pod \"ovn-controller-47tvr\" (UID: \"01fa4d89-aae5-451a-8798-2700053fe3d4\") " pod="openstack/ovn-controller-47tvr" Dec 11 14:05:38 crc kubenswrapper[5050]: I1211 14:05:38.811305 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/01fa4d89-aae5-451a-8798-2700053fe3d4-ovn-controller-tls-certs\") pod \"ovn-controller-47tvr\" (UID: \"01fa4d89-aae5-451a-8798-2700053fe3d4\") " pod="openstack/ovn-controller-47tvr" Dec 11 14:05:38 crc kubenswrapper[5050]: I1211 14:05:38.817464 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kxnd5\" (UniqueName: \"kubernetes.io/projected/01fa4d89-aae5-451a-8798-2700053fe3d4-kube-api-access-kxnd5\") pod 
\"ovn-controller-47tvr\" (UID: \"01fa4d89-aae5-451a-8798-2700053fe3d4\") " pod="openstack/ovn-controller-47tvr" Dec 11 14:05:38 crc kubenswrapper[5050]: I1211 14:05:38.818657 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fcvpj\" (UniqueName: \"kubernetes.io/projected/88b4966d-124b-4cf4-b52b-704955059220-kube-api-access-fcvpj\") pod \"ovn-controller-ovs-pjzpq\" (UID: \"88b4966d-124b-4cf4-b52b-704955059220\") " pod="openstack/ovn-controller-ovs-pjzpq" Dec 11 14:05:38 crc kubenswrapper[5050]: I1211 14:05:38.829916 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01fa4d89-aae5-451a-8798-2700053fe3d4-combined-ca-bundle\") pod \"ovn-controller-47tvr\" (UID: \"01fa4d89-aae5-451a-8798-2700053fe3d4\") " pod="openstack/ovn-controller-47tvr" Dec 11 14:05:38 crc kubenswrapper[5050]: I1211 14:05:38.945606 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-pjzpq" Dec 11 14:05:38 crc kubenswrapper[5050]: I1211 14:05:38.894952 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-47tvr" Dec 11 14:05:39 crc kubenswrapper[5050]: I1211 14:05:39.465940 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Dec 11 14:05:39 crc kubenswrapper[5050]: I1211 14:05:39.468503 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Dec 11 14:05:39 crc kubenswrapper[5050]: I1211 14:05:39.472986 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Dec 11 14:05:39 crc kubenswrapper[5050]: I1211 14:05:39.473913 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Dec 11 14:05:39 crc kubenswrapper[5050]: I1211 14:05:39.474257 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Dec 11 14:05:39 crc kubenswrapper[5050]: I1211 14:05:39.475841 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-7qkw4" Dec 11 14:05:39 crc kubenswrapper[5050]: I1211 14:05:39.475993 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Dec 11 14:05:39 crc kubenswrapper[5050]: I1211 14:05:39.488318 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Dec 11 14:05:39 crc kubenswrapper[5050]: I1211 14:05:39.625686 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-nb-0\" (UID: \"e96be66c-07f2-47c0-a784-6af473c8a2a8\") " pod="openstack/ovsdbserver-nb-0" Dec 11 14:05:39 crc kubenswrapper[5050]: I1211 14:05:39.625899 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/e96be66c-07f2-47c0-a784-6af473c8a2a8-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"e96be66c-07f2-47c0-a784-6af473c8a2a8\") " pod="openstack/ovsdbserver-nb-0" Dec 11 14:05:39 crc kubenswrapper[5050]: I1211 14:05:39.625921 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/e96be66c-07f2-47c0-a784-6af473c8a2a8-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"e96be66c-07f2-47c0-a784-6af473c8a2a8\") " pod="openstack/ovsdbserver-nb-0" Dec 11 14:05:39 crc kubenswrapper[5050]: I1211 14:05:39.626031 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e96be66c-07f2-47c0-a784-6af473c8a2a8-config\") pod \"ovsdbserver-nb-0\" (UID: \"e96be66c-07f2-47c0-a784-6af473c8a2a8\") " pod="openstack/ovsdbserver-nb-0" Dec 11 14:05:39 crc kubenswrapper[5050]: I1211 14:05:39.626167 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/e96be66c-07f2-47c0-a784-6af473c8a2a8-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"e96be66c-07f2-47c0-a784-6af473c8a2a8\") " pod="openstack/ovsdbserver-nb-0" Dec 11 14:05:39 crc kubenswrapper[5050]: I1211 14:05:39.626218 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e96be66c-07f2-47c0-a784-6af473c8a2a8-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"e96be66c-07f2-47c0-a784-6af473c8a2a8\") " pod="openstack/ovsdbserver-nb-0" Dec 11 14:05:39 crc kubenswrapper[5050]: I1211 14:05:39.626299 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e96be66c-07f2-47c0-a784-6af473c8a2a8-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"e96be66c-07f2-47c0-a784-6af473c8a2a8\") " pod="openstack/ovsdbserver-nb-0" Dec 11 14:05:39 crc kubenswrapper[5050]: I1211 14:05:39.626422 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vhkkd\" (UniqueName: \"kubernetes.io/projected/e96be66c-07f2-47c0-a784-6af473c8a2a8-kube-api-access-vhkkd\") pod \"ovsdbserver-nb-0\" (UID: \"e96be66c-07f2-47c0-a784-6af473c8a2a8\") " pod="openstack/ovsdbserver-nb-0" Dec 11 14:05:39 crc kubenswrapper[5050]: I1211 14:05:39.729048 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vhkkd\" (UniqueName: \"kubernetes.io/projected/e96be66c-07f2-47c0-a784-6af473c8a2a8-kube-api-access-vhkkd\") pod \"ovsdbserver-nb-0\" (UID: \"e96be66c-07f2-47c0-a784-6af473c8a2a8\") " pod="openstack/ovsdbserver-nb-0" Dec 11 14:05:39 crc kubenswrapper[5050]: I1211 14:05:39.729128 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-nb-0\" (UID: \"e96be66c-07f2-47c0-a784-6af473c8a2a8\") " pod="openstack/ovsdbserver-nb-0" Dec 11 14:05:39 crc kubenswrapper[5050]: I1211 14:05:39.729186 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/e96be66c-07f2-47c0-a784-6af473c8a2a8-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"e96be66c-07f2-47c0-a784-6af473c8a2a8\") " pod="openstack/ovsdbserver-nb-0" Dec 11 14:05:39 crc kubenswrapper[5050]: I1211 14:05:39.729206 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e96be66c-07f2-47c0-a784-6af473c8a2a8-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"e96be66c-07f2-47c0-a784-6af473c8a2a8\") " 
pod="openstack/ovsdbserver-nb-0" Dec 11 14:05:39 crc kubenswrapper[5050]: I1211 14:05:39.729316 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e96be66c-07f2-47c0-a784-6af473c8a2a8-config\") pod \"ovsdbserver-nb-0\" (UID: \"e96be66c-07f2-47c0-a784-6af473c8a2a8\") " pod="openstack/ovsdbserver-nb-0" Dec 11 14:05:39 crc kubenswrapper[5050]: I1211 14:05:39.729367 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/e96be66c-07f2-47c0-a784-6af473c8a2a8-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"e96be66c-07f2-47c0-a784-6af473c8a2a8\") " pod="openstack/ovsdbserver-nb-0" Dec 11 14:05:39 crc kubenswrapper[5050]: I1211 14:05:39.729395 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e96be66c-07f2-47c0-a784-6af473c8a2a8-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"e96be66c-07f2-47c0-a784-6af473c8a2a8\") " pod="openstack/ovsdbserver-nb-0" Dec 11 14:05:39 crc kubenswrapper[5050]: I1211 14:05:39.729446 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e96be66c-07f2-47c0-a784-6af473c8a2a8-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"e96be66c-07f2-47c0-a784-6af473c8a2a8\") " pod="openstack/ovsdbserver-nb-0" Dec 11 14:05:39 crc kubenswrapper[5050]: I1211 14:05:39.729502 5050 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-nb-0\" (UID: \"e96be66c-07f2-47c0-a784-6af473c8a2a8\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/ovsdbserver-nb-0" Dec 11 14:05:39 crc kubenswrapper[5050]: I1211 14:05:39.730483 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/e96be66c-07f2-47c0-a784-6af473c8a2a8-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"e96be66c-07f2-47c0-a784-6af473c8a2a8\") " pod="openstack/ovsdbserver-nb-0" Dec 11 14:05:39 crc kubenswrapper[5050]: I1211 14:05:39.732215 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e96be66c-07f2-47c0-a784-6af473c8a2a8-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"e96be66c-07f2-47c0-a784-6af473c8a2a8\") " pod="openstack/ovsdbserver-nb-0" Dec 11 14:05:39 crc kubenswrapper[5050]: I1211 14:05:39.734193 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e96be66c-07f2-47c0-a784-6af473c8a2a8-config\") pod \"ovsdbserver-nb-0\" (UID: \"e96be66c-07f2-47c0-a784-6af473c8a2a8\") " pod="openstack/ovsdbserver-nb-0" Dec 11 14:05:39 crc kubenswrapper[5050]: I1211 14:05:39.736174 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/e96be66c-07f2-47c0-a784-6af473c8a2a8-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"e96be66c-07f2-47c0-a784-6af473c8a2a8\") " pod="openstack/ovsdbserver-nb-0" Dec 11 14:05:39 crc kubenswrapper[5050]: I1211 14:05:39.742030 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e96be66c-07f2-47c0-a784-6af473c8a2a8-combined-ca-bundle\") pod 
\"ovsdbserver-nb-0\" (UID: \"e96be66c-07f2-47c0-a784-6af473c8a2a8\") " pod="openstack/ovsdbserver-nb-0" Dec 11 14:05:39 crc kubenswrapper[5050]: I1211 14:05:39.752278 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e96be66c-07f2-47c0-a784-6af473c8a2a8-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"e96be66c-07f2-47c0-a784-6af473c8a2a8\") " pod="openstack/ovsdbserver-nb-0" Dec 11 14:05:39 crc kubenswrapper[5050]: I1211 14:05:39.752354 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-nb-0\" (UID: \"e96be66c-07f2-47c0-a784-6af473c8a2a8\") " pod="openstack/ovsdbserver-nb-0" Dec 11 14:05:39 crc kubenswrapper[5050]: I1211 14:05:39.753448 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vhkkd\" (UniqueName: \"kubernetes.io/projected/e96be66c-07f2-47c0-a784-6af473c8a2a8-kube-api-access-vhkkd\") pod \"ovsdbserver-nb-0\" (UID: \"e96be66c-07f2-47c0-a784-6af473c8a2a8\") " pod="openstack/ovsdbserver-nb-0" Dec 11 14:05:39 crc kubenswrapper[5050]: I1211 14:05:39.812547 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Dec 11 14:05:40 crc kubenswrapper[5050]: I1211 14:05:40.797351 5050 patch_prober.go:28] interesting pod/machine-config-daemon-wcb2s container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 14:05:40 crc kubenswrapper[5050]: I1211 14:05:40.797430 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 14:05:42 crc kubenswrapper[5050]: I1211 14:05:42.781480 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Dec 11 14:05:42 crc kubenswrapper[5050]: I1211 14:05:42.785456 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Dec 11 14:05:42 crc kubenswrapper[5050]: I1211 14:05:42.788222 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Dec 11 14:05:42 crc kubenswrapper[5050]: I1211 14:05:42.788390 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Dec 11 14:05:42 crc kubenswrapper[5050]: I1211 14:05:42.789048 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Dec 11 14:05:42 crc kubenswrapper[5050]: I1211 14:05:42.789217 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-7fk8g" Dec 11 14:05:42 crc kubenswrapper[5050]: I1211 14:05:42.794618 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Dec 11 14:05:42 crc kubenswrapper[5050]: I1211 14:05:42.893731 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c928931c-d49d-41dc-9181-11d856ed3bd0-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"c928931c-d49d-41dc-9181-11d856ed3bd0\") " pod="openstack/ovsdbserver-sb-0" Dec 11 14:05:42 crc kubenswrapper[5050]: I1211 14:05:42.893788 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c928931c-d49d-41dc-9181-11d856ed3bd0-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"c928931c-d49d-41dc-9181-11d856ed3bd0\") " pod="openstack/ovsdbserver-sb-0" Dec 11 14:05:42 crc kubenswrapper[5050]: I1211 14:05:42.893828 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-sb-0\" (UID: \"c928931c-d49d-41dc-9181-11d856ed3bd0\") " pod="openstack/ovsdbserver-sb-0" Dec 11 14:05:42 crc kubenswrapper[5050]: I1211 14:05:42.893998 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c928931c-d49d-41dc-9181-11d856ed3bd0-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"c928931c-d49d-41dc-9181-11d856ed3bd0\") " pod="openstack/ovsdbserver-sb-0" Dec 11 14:05:42 crc kubenswrapper[5050]: I1211 14:05:42.894076 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9gv9\" (UniqueName: \"kubernetes.io/projected/c928931c-d49d-41dc-9181-11d856ed3bd0-kube-api-access-c9gv9\") pod \"ovsdbserver-sb-0\" (UID: \"c928931c-d49d-41dc-9181-11d856ed3bd0\") " pod="openstack/ovsdbserver-sb-0" Dec 11 14:05:42 crc kubenswrapper[5050]: I1211 14:05:42.894382 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c928931c-d49d-41dc-9181-11d856ed3bd0-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"c928931c-d49d-41dc-9181-11d856ed3bd0\") " pod="openstack/ovsdbserver-sb-0" Dec 11 14:05:42 crc kubenswrapper[5050]: I1211 14:05:42.894469 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c928931c-d49d-41dc-9181-11d856ed3bd0-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: 
\"c928931c-d49d-41dc-9181-11d856ed3bd0\") " pod="openstack/ovsdbserver-sb-0" Dec 11 14:05:42 crc kubenswrapper[5050]: I1211 14:05:42.894593 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c928931c-d49d-41dc-9181-11d856ed3bd0-config\") pod \"ovsdbserver-sb-0\" (UID: \"c928931c-d49d-41dc-9181-11d856ed3bd0\") " pod="openstack/ovsdbserver-sb-0" Dec 11 14:05:42 crc kubenswrapper[5050]: I1211 14:05:42.997933 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c928931c-d49d-41dc-9181-11d856ed3bd0-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"c928931c-d49d-41dc-9181-11d856ed3bd0\") " pod="openstack/ovsdbserver-sb-0" Dec 11 14:05:42 crc kubenswrapper[5050]: I1211 14:05:42.997985 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c928931c-d49d-41dc-9181-11d856ed3bd0-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"c928931c-d49d-41dc-9181-11d856ed3bd0\") " pod="openstack/ovsdbserver-sb-0" Dec 11 14:05:42 crc kubenswrapper[5050]: I1211 14:05:42.998045 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-sb-0\" (UID: \"c928931c-d49d-41dc-9181-11d856ed3bd0\") " pod="openstack/ovsdbserver-sb-0" Dec 11 14:05:42 crc kubenswrapper[5050]: I1211 14:05:42.998093 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c928931c-d49d-41dc-9181-11d856ed3bd0-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"c928931c-d49d-41dc-9181-11d856ed3bd0\") " pod="openstack/ovsdbserver-sb-0" Dec 11 14:05:42 crc kubenswrapper[5050]: I1211 14:05:42.998114 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c9gv9\" (UniqueName: \"kubernetes.io/projected/c928931c-d49d-41dc-9181-11d856ed3bd0-kube-api-access-c9gv9\") pod \"ovsdbserver-sb-0\" (UID: \"c928931c-d49d-41dc-9181-11d856ed3bd0\") " pod="openstack/ovsdbserver-sb-0" Dec 11 14:05:42 crc kubenswrapper[5050]: I1211 14:05:42.998192 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c928931c-d49d-41dc-9181-11d856ed3bd0-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"c928931c-d49d-41dc-9181-11d856ed3bd0\") " pod="openstack/ovsdbserver-sb-0" Dec 11 14:05:42 crc kubenswrapper[5050]: I1211 14:05:42.998209 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c928931c-d49d-41dc-9181-11d856ed3bd0-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"c928931c-d49d-41dc-9181-11d856ed3bd0\") " pod="openstack/ovsdbserver-sb-0" Dec 11 14:05:42 crc kubenswrapper[5050]: I1211 14:05:42.998252 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c928931c-d49d-41dc-9181-11d856ed3bd0-config\") pod \"ovsdbserver-sb-0\" (UID: \"c928931c-d49d-41dc-9181-11d856ed3bd0\") " pod="openstack/ovsdbserver-sb-0" Dec 11 14:05:42 crc kubenswrapper[5050]: I1211 14:05:42.998923 5050 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-sb-0\" (UID: \"c928931c-d49d-41dc-9181-11d856ed3bd0\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/ovsdbserver-sb-0" Dec 11 14:05:42 crc kubenswrapper[5050]: I1211 14:05:42.999104 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c928931c-d49d-41dc-9181-11d856ed3bd0-config\") pod \"ovsdbserver-sb-0\" (UID: \"c928931c-d49d-41dc-9181-11d856ed3bd0\") " pod="openstack/ovsdbserver-sb-0" Dec 11 14:05:42 crc kubenswrapper[5050]: I1211 14:05:42.999147 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c928931c-d49d-41dc-9181-11d856ed3bd0-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"c928931c-d49d-41dc-9181-11d856ed3bd0\") " pod="openstack/ovsdbserver-sb-0" Dec 11 14:05:43 crc kubenswrapper[5050]: I1211 14:05:43.000112 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c928931c-d49d-41dc-9181-11d856ed3bd0-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"c928931c-d49d-41dc-9181-11d856ed3bd0\") " pod="openstack/ovsdbserver-sb-0" Dec 11 14:05:43 crc kubenswrapper[5050]: I1211 14:05:43.012124 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c928931c-d49d-41dc-9181-11d856ed3bd0-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"c928931c-d49d-41dc-9181-11d856ed3bd0\") " pod="openstack/ovsdbserver-sb-0" Dec 11 14:05:43 crc kubenswrapper[5050]: I1211 14:05:43.028049 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c928931c-d49d-41dc-9181-11d856ed3bd0-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"c928931c-d49d-41dc-9181-11d856ed3bd0\") " pod="openstack/ovsdbserver-sb-0" Dec 11 14:05:43 crc kubenswrapper[5050]: I1211 14:05:43.028880 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c928931c-d49d-41dc-9181-11d856ed3bd0-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"c928931c-d49d-41dc-9181-11d856ed3bd0\") " pod="openstack/ovsdbserver-sb-0" Dec 11 14:05:43 crc kubenswrapper[5050]: I1211 14:05:43.045048 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c9gv9\" (UniqueName: \"kubernetes.io/projected/c928931c-d49d-41dc-9181-11d856ed3bd0-kube-api-access-c9gv9\") pod \"ovsdbserver-sb-0\" (UID: \"c928931c-d49d-41dc-9181-11d856ed3bd0\") " pod="openstack/ovsdbserver-sb-0" Dec 11 14:05:43 crc kubenswrapper[5050]: I1211 14:05:43.053446 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-sb-0\" (UID: \"c928931c-d49d-41dc-9181-11d856ed3bd0\") " pod="openstack/ovsdbserver-sb-0" Dec 11 14:05:43 crc kubenswrapper[5050]: I1211 14:05:43.122870 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Dec 11 14:05:54 crc kubenswrapper[5050]: E1211 14:05:54.983301 5050 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-memcached@sha256:e47191ba776414b781b3e27b856ab45a03b9480c7dc2b1addb939608794882dc" Dec 11 14:05:54 crc kubenswrapper[5050]: E1211 14:05:54.984187 5050 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:memcached,Image:quay.io/podified-antelope-centos9/openstack-memcached@sha256:e47191ba776414b781b3e27b856ab45a03b9480c7dc2b1addb939608794882dc,Command:[/usr/bin/dumb-init -- /usr/local/bin/kolla_start],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:memcached,HostPort:0,ContainerPort:11211,Protocol:TCP,HostIP:,},ContainerPort{Name:memcached-tls,HostPort:0,ContainerPort:11212,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:POD_IPS,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIPs,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:CONFIG_HASH,Value:n5c4h548h5fbh5b5h57h66dh76h675h56fh5b6h85h5c4hddh9bh5dfhd9h568h56bh5c7h5dfh78h664h55hd5h544h689h66fh657h55dh675hf9h5b7q,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/src,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/certs/memcached.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/private/memcached.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lcwl9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 
},Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42457,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42457,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod memcached-0_openstack(defedffb-9310-4b18-b7ee-b54040aa5447): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 11 14:05:54 crc kubenswrapper[5050]: E1211 14:05:54.985392 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/memcached-0" podUID="defedffb-9310-4b18-b7ee-b54040aa5447" Dec 11 14:05:55 crc kubenswrapper[5050]: E1211 14:05:55.657151 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-memcached@sha256:e47191ba776414b781b3e27b856ab45a03b9480c7dc2b1addb939608794882dc\\\"\"" pod="openstack/memcached-0" podUID="defedffb-9310-4b18-b7ee-b54040aa5447" Dec 11 14:05:56 crc kubenswrapper[5050]: E1211 14:05:56.858397 5050 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13" Dec 11 14:05:56 crc kubenswrapper[5050]: E1211 14:05:56.859188 5050 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13,Command:[bash 
/var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-twplg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-cell1-galera-0_openstack(c8b3d8cd-9278-4639-86fe-1aa7696fecca): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 11 14:05:56 crc kubenswrapper[5050]: E1211 14:05:56.860405 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-cell1-galera-0" podUID="c8b3d8cd-9278-4639-86fe-1aa7696fecca" Dec 11 14:05:57 crc kubenswrapper[5050]: E1211 14:05:57.673370 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13\\\"\"" pod="openstack/openstack-cell1-galera-0" podUID="c8b3d8cd-9278-4639-86fe-1aa7696fecca" Dec 11 14:06:02 crc kubenswrapper[5050]: E1211 14:06:02.126637 5050 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:e733252aab7f4bc0efbdd712bcd88e44c5498bf1773dba843bc9dcfac324fe3d" Dec 11 14:06:02 crc kubenswrapper[5050]: E1211 14:06:02.127077 5050 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:e733252aab7f4bc0efbdd712bcd88e44c5498bf1773dba843bc9dcfac324fe3d,Command:[sh -c cp 
/tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-548z5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-0_openstack(0891f075-8101-475b-b844-e7cb42a4990b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 11 14:06:02 crc kubenswrapper[5050]: E1211 14:06:02.128304 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-server-0" podUID="0891f075-8101-475b-b844-e7cb42a4990b" Dec 11 14:06:02 crc kubenswrapper[5050]: E1211 14:06:02.228276 5050 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13" Dec 11 14:06:02 crc kubenswrapper[5050]: E1211 14:06:02.228466 5050 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13,Command:[bash /var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lrsfm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-galera-0_openstack(55b7e535-46f6-403b-9cdf-bf172dba97b6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 11 14:06:02 crc kubenswrapper[5050]: E1211 14:06:02.229660 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-galera-0" podUID="55b7e535-46f6-403b-9cdf-bf172dba97b6" Dec 11 14:06:02 crc kubenswrapper[5050]: E1211 14:06:02.238883 5050 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:e733252aab7f4bc0efbdd712bcd88e44c5498bf1773dba843bc9dcfac324fe3d" Dec 11 14:06:02 crc kubenswrapper[5050]: E1211 14:06:02.239240 5050 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:e733252aab7f4bc0efbdd712bcd88e44c5498bf1773dba843bc9dcfac324fe3d,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' 
-e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z6p5z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cell1-server-0_openstack(458f05be-2fd6-44d9-8034-f077356964ce): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 11 14:06:02 crc kubenswrapper[5050]: E1211 14:06:02.240866 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-cell1-server-0" podUID="458f05be-2fd6-44d9-8034-f077356964ce" Dec 11 14:06:02 crc kubenswrapper[5050]: E1211 14:06:02.725860 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:e733252aab7f4bc0efbdd712bcd88e44c5498bf1773dba843bc9dcfac324fe3d\\\"\"" pod="openstack/rabbitmq-cell1-server-0" podUID="458f05be-2fd6-44d9-8034-f077356964ce" Dec 11 14:06:02 crc kubenswrapper[5050]: E1211 14:06:02.725865 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb@sha256:ed0f8ba03f3ce47a32006d730c3049455325eb2c3b98b9fd6b3fb9901004df13\\\"\"" 
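Each of the pull failures above (memcached at 14:05:54, the mariadb bootstrap containers and the rabbitmq setup containers at 14:05:56-14:06:02) ends in ErrImagePull with "context canceled" and is then retried under ImagePullBackOff ("Back-off pulling image ..."). The kubelet waits an increasing interval between attempts; the 10s base and 5m cap below are commonly cited defaults, not values taken from this log, so treat them as assumptions. A minimal Go sketch of that style of capped exponential back-off:

    package main

    // Illustrative capped exponential back-off, in the spirit of ImagePullBackOff.
    // The 10s base and 5m cap are assumptions (commonly cited kubelet defaults),
    // not values taken from this log.

    import (
        "fmt"
        "time"
    )

    func backoff(attempt int, base, max time.Duration) time.Duration {
        d := base
        for i := 0; i < attempt; i++ {
            d *= 2
            if d >= max {
                return max
            }
        }
        return d
    }

    func main() {
        for attempt := 0; attempt < 7; attempt++ {
            fmt.Printf("attempt %d: wait %s before retrying the pull\n",
                attempt+1, backoff(attempt, 10*time.Second, 5*time.Minute))
        }
    }

With those assumed constants the delays grow 10s, 20s, 40s, 1m20s, 2m40s and then stay pinned at 5m, which is why the same "Back-off pulling image" message keeps reappearing until a pull finally succeeds.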
pod="openstack/openstack-galera-0" podUID="55b7e535-46f6-403b-9cdf-bf172dba97b6" Dec 11 14:06:02 crc kubenswrapper[5050]: E1211 14:06:02.725966 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq@sha256:e733252aab7f4bc0efbdd712bcd88e44c5498bf1773dba843bc9dcfac324fe3d\\\"\"" pod="openstack/rabbitmq-server-0" podUID="0891f075-8101-475b-b844-e7cb42a4990b" Dec 11 14:06:03 crc kubenswrapper[5050]: E1211 14:06:03.316819 5050 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33" Dec 11 14:06:03 crc kubenswrapper[5050]: E1211 14:06:03.317092 5050 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pkfqd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-95f5f6995-clprb_openstack(63c71212-7318-45f1-94f9-235d861faf86): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 11 14:06:03 crc kubenswrapper[5050]: E1211 14:06:03.319282 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying 
config: context canceled\"" pod="openstack/dnsmasq-dns-95f5f6995-clprb" podUID="63c71212-7318-45f1-94f9-235d861faf86" Dec 11 14:06:03 crc kubenswrapper[5050]: E1211 14:06:03.333194 5050 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33" Dec 11 14:06:03 crc kubenswrapper[5050]: E1211 14:06:03.333403 5050 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gplgl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-5f854695bc-zv5n4_openstack(743c85da-99ca-4ac8-8d19-edf69c27b90f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 11 14:06:03 crc kubenswrapper[5050]: E1211 14:06:03.334865 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-5f854695bc-zv5n4" podUID="743c85da-99ca-4ac8-8d19-edf69c27b90f" Dec 11 14:06:03 crc kubenswrapper[5050]: E1211 14:06:03.335526 5050 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33" Dec 11 14:06:03 crc kubenswrapper[5050]: E1211 14:06:03.335850 5050 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vkzsn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-84bb9d8bd9-88cc6_openstack(ac0ab827-d14b-4fa6-b93c-1e71237fbaef): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 11 14:06:03 crc kubenswrapper[5050]: E1211 14:06:03.337262 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-84bb9d8bd9-88cc6" podUID="ac0ab827-d14b-4fa6-b93c-1e71237fbaef" Dec 11 14:06:03 crc kubenswrapper[5050]: E1211 14:06:03.583655 5050 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33" Dec 11 14:06:03 crc kubenswrapper[5050]: E1211 14:06:03.583908 5050 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts 
--keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nfdh5dfhb6h64h676hc4h78h97h669h54chfbh696hb5h54bh5d4h6bh64h644h677h584h5cbh698h9dh5bbh5f8h5b8hcdh644h5c7h694hbfh589q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7gd7t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-c7cbb8f79-vlqm4_openstack(4b68c591-2f4c-41c3-9ab1-372deed0e388): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 11 14:06:03 crc kubenswrapper[5050]: E1211 14:06:03.585437 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-c7cbb8f79-vlqm4" podUID="4b68c591-2f4c-41c3-9ab1-372deed0e388" Dec 11 14:06:03 crc kubenswrapper[5050]: E1211 14:06:03.732914 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33\\\"\"" pod="openstack/dnsmasq-dns-95f5f6995-clprb" podUID="63c71212-7318-45f1-94f9-235d861faf86" Dec 11 14:06:03 crc kubenswrapper[5050]: E1211 14:06:03.738443 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33\\\"\"" pod="openstack/dnsmasq-dns-c7cbb8f79-vlqm4" podUID="4b68c591-2f4c-41c3-9ab1-372deed0e388" Dec 11 14:06:04 crc kubenswrapper[5050]: I1211 14:06:04.156403 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Dec 11 14:06:04 crc kubenswrapper[5050]: I1211 14:06:04.469444 5050 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84bb9d8bd9-88cc6" Dec 11 14:06:04 crc kubenswrapper[5050]: I1211 14:06:04.507635 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-47tvr"] Dec 11 14:06:04 crc kubenswrapper[5050]: I1211 14:06:04.507875 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac0ab827-d14b-4fa6-b93c-1e71237fbaef-config\") pod \"ac0ab827-d14b-4fa6-b93c-1e71237fbaef\" (UID: \"ac0ab827-d14b-4fa6-b93c-1e71237fbaef\") " Dec 11 14:06:04 crc kubenswrapper[5050]: I1211 14:06:04.508151 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vkzsn\" (UniqueName: \"kubernetes.io/projected/ac0ab827-d14b-4fa6-b93c-1e71237fbaef-kube-api-access-vkzsn\") pod \"ac0ab827-d14b-4fa6-b93c-1e71237fbaef\" (UID: \"ac0ab827-d14b-4fa6-b93c-1e71237fbaef\") " Dec 11 14:06:04 crc kubenswrapper[5050]: I1211 14:06:04.508523 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac0ab827-d14b-4fa6-b93c-1e71237fbaef-config" (OuterVolumeSpecName: "config") pod "ac0ab827-d14b-4fa6-b93c-1e71237fbaef" (UID: "ac0ab827-d14b-4fa6-b93c-1e71237fbaef"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 14:06:04 crc kubenswrapper[5050]: I1211 14:06:04.508722 5050 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac0ab827-d14b-4fa6-b93c-1e71237fbaef-config\") on node \"crc\" DevicePath \"\"" Dec 11 14:06:04 crc kubenswrapper[5050]: I1211 14:06:04.516283 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac0ab827-d14b-4fa6-b93c-1e71237fbaef-kube-api-access-vkzsn" (OuterVolumeSpecName: "kube-api-access-vkzsn") pod "ac0ab827-d14b-4fa6-b93c-1e71237fbaef" (UID: "ac0ab827-d14b-4fa6-b93c-1e71237fbaef"). InnerVolumeSpecName "kube-api-access-vkzsn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 14:06:04 crc kubenswrapper[5050]: I1211 14:06:04.610748 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vkzsn\" (UniqueName: \"kubernetes.io/projected/ac0ab827-d14b-4fa6-b93c-1e71237fbaef-kube-api-access-vkzsn\") on node \"crc\" DevicePath \"\"" Dec 11 14:06:04 crc kubenswrapper[5050]: W1211 14:06:04.652087 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod01fa4d89_aae5_451a_8798_2700053fe3d4.slice/crio-1ade635b19d02c0aef64a1546e17b2e6fa10bdb422590899fc1a126ab22d4372 WatchSource:0}: Error finding container 1ade635b19d02c0aef64a1546e17b2e6fa10bdb422590899fc1a126ab22d4372: Status 404 returned error can't find the container with id 1ade635b19d02c0aef64a1546e17b2e6fa10bdb422590899fc1a126ab22d4372 Dec 11 14:06:04 crc kubenswrapper[5050]: I1211 14:06:04.653948 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Dec 11 14:06:04 crc kubenswrapper[5050]: I1211 14:06:04.703421 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f854695bc-zv5n4" Dec 11 14:06:04 crc kubenswrapper[5050]: I1211 14:06:04.746843 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-84bb9d8bd9-88cc6" Dec 11 14:06:04 crc kubenswrapper[5050]: I1211 14:06:04.746845 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84bb9d8bd9-88cc6" event={"ID":"ac0ab827-d14b-4fa6-b93c-1e71237fbaef","Type":"ContainerDied","Data":"429d43df0473a247a68fbff34e96f33f8705d7a3bbd25c8f3678a7f8ed14d1f8"} Dec 11 14:06:04 crc kubenswrapper[5050]: I1211 14:06:04.753041 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f854695bc-zv5n4" event={"ID":"743c85da-99ca-4ac8-8d19-edf69c27b90f","Type":"ContainerDied","Data":"ebc703a776fcf343cd2b3d72b8878c551ed63e2f40bf5bbbd5c0f5cf4e431dbe"} Dec 11 14:06:04 crc kubenswrapper[5050]: I1211 14:06:04.753090 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f854695bc-zv5n4" Dec 11 14:06:04 crc kubenswrapper[5050]: I1211 14:06:04.757643 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-47tvr" event={"ID":"01fa4d89-aae5-451a-8798-2700053fe3d4","Type":"ContainerStarted","Data":"1ade635b19d02c0aef64a1546e17b2e6fa10bdb422590899fc1a126ab22d4372"} Dec 11 14:06:04 crc kubenswrapper[5050]: I1211 14:06:04.760180 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"e96be66c-07f2-47c0-a784-6af473c8a2a8","Type":"ContainerStarted","Data":"cfb0a77aeeda20dcd88cfbfdd07fabd66839af79463ba0d77f3fc7604c35e830"} Dec 11 14:06:04 crc kubenswrapper[5050]: I1211 14:06:04.815645 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-84bb9d8bd9-88cc6"] Dec 11 14:06:04 crc kubenswrapper[5050]: I1211 14:06:04.820889 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-84bb9d8bd9-88cc6"] Dec 11 14:06:04 crc kubenswrapper[5050]: I1211 14:06:04.834887 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/743c85da-99ca-4ac8-8d19-edf69c27b90f-config\") pod \"743c85da-99ca-4ac8-8d19-edf69c27b90f\" (UID: \"743c85da-99ca-4ac8-8d19-edf69c27b90f\") " Dec 11 14:06:04 crc kubenswrapper[5050]: I1211 14:06:04.834987 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/743c85da-99ca-4ac8-8d19-edf69c27b90f-dns-svc\") pod \"743c85da-99ca-4ac8-8d19-edf69c27b90f\" (UID: \"743c85da-99ca-4ac8-8d19-edf69c27b90f\") " Dec 11 14:06:04 crc kubenswrapper[5050]: I1211 14:06:04.835041 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gplgl\" (UniqueName: \"kubernetes.io/projected/743c85da-99ca-4ac8-8d19-edf69c27b90f-kube-api-access-gplgl\") pod \"743c85da-99ca-4ac8-8d19-edf69c27b90f\" (UID: \"743c85da-99ca-4ac8-8d19-edf69c27b90f\") " Dec 11 14:06:04 crc kubenswrapper[5050]: I1211 14:06:04.835878 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/743c85da-99ca-4ac8-8d19-edf69c27b90f-config" (OuterVolumeSpecName: "config") pod "743c85da-99ca-4ac8-8d19-edf69c27b90f" (UID: "743c85da-99ca-4ac8-8d19-edf69c27b90f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 14:06:04 crc kubenswrapper[5050]: I1211 14:06:04.835898 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/743c85da-99ca-4ac8-8d19-edf69c27b90f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "743c85da-99ca-4ac8-8d19-edf69c27b90f" (UID: "743c85da-99ca-4ac8-8d19-edf69c27b90f"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 14:06:04 crc kubenswrapper[5050]: I1211 14:06:04.841638 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/743c85da-99ca-4ac8-8d19-edf69c27b90f-kube-api-access-gplgl" (OuterVolumeSpecName: "kube-api-access-gplgl") pod "743c85da-99ca-4ac8-8d19-edf69c27b90f" (UID: "743c85da-99ca-4ac8-8d19-edf69c27b90f"). InnerVolumeSpecName "kube-api-access-gplgl". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 14:06:04 crc kubenswrapper[5050]: I1211 14:06:04.937256 5050 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/743c85da-99ca-4ac8-8d19-edf69c27b90f-config\") on node \"crc\" DevicePath \"\"" Dec 11 14:06:04 crc kubenswrapper[5050]: I1211 14:06:04.937295 5050 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/743c85da-99ca-4ac8-8d19-edf69c27b90f-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 11 14:06:04 crc kubenswrapper[5050]: I1211 14:06:04.937307 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gplgl\" (UniqueName: \"kubernetes.io/projected/743c85da-99ca-4ac8-8d19-edf69c27b90f-kube-api-access-gplgl\") on node \"crc\" DevicePath \"\"" Dec 11 14:06:04 crc kubenswrapper[5050]: W1211 14:06:04.951625 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc928931c_d49d_41dc_9181_11d856ed3bd0.slice/crio-1cea5ee03c4b98e2dd708e2afdf4fa98c7bbe1a680f5dc49724ce2f1716f81ee WatchSource:0}: Error finding container 1cea5ee03c4b98e2dd708e2afdf4fa98c7bbe1a680f5dc49724ce2f1716f81ee: Status 404 returned error can't find the container with id 1cea5ee03c4b98e2dd708e2afdf4fa98c7bbe1a680f5dc49724ce2f1716f81ee Dec 11 14:06:05 crc kubenswrapper[5050]: I1211 14:06:05.118105 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f854695bc-zv5n4"] Dec 11 14:06:05 crc kubenswrapper[5050]: I1211 14:06:05.124400 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5f854695bc-zv5n4"] Dec 11 14:06:05 crc kubenswrapper[5050]: E1211 14:06:05.363993 5050 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying layer: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics@sha256:db384bf43222b066c378e77027a675d4cd9911107adba46c2922b3a55e10d6fb" Dec 11 14:06:05 crc kubenswrapper[5050]: E1211 14:06:05.364080 5050 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying system image from manifest list: copying layer: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics@sha256:db384bf43222b066c378e77027a675d4cd9911107adba46c2922b3a55e10d6fb" Dec 11 14:06:05 crc kubenswrapper[5050]: E1211 14:06:05.364260 5050 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:kube-state-metrics,Image:registry.k8s.io/kube-state-metrics/kube-state-metrics@sha256:db384bf43222b066c378e77027a675d4cd9911107adba46c2922b3a55e10d6fb,Command:[],Args:[--resources=pods --namespaces=openstack],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http-metrics,HostPort:0,ContainerPort:8080,Protocol:TCP,HostIP:,},ContainerPort{Name:telemetry,HostPort:0,ContainerPort:8081,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-twq74,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{0 8080 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-state-metrics-0_openstack(8e60c3c2-6055-4e50-99b6-4a5f08728b17): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying layer: context canceled" logger="UnhandledError" Dec 11 14:06:05 crc kubenswrapper[5050]: E1211 14:06:05.365496 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying layer: context canceled\"" pod="openstack/kube-state-metrics-0" podUID="8e60c3c2-6055-4e50-99b6-4a5f08728b17" Dec 11 14:06:05 crc kubenswrapper[5050]: I1211 14:06:05.559522 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="743c85da-99ca-4ac8-8d19-edf69c27b90f" path="/var/lib/kubelet/pods/743c85da-99ca-4ac8-8d19-edf69c27b90f/volumes" Dec 11 14:06:05 crc kubenswrapper[5050]: I1211 14:06:05.560324 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac0ab827-d14b-4fa6-b93c-1e71237fbaef" path="/var/lib/kubelet/pods/ac0ab827-d14b-4fa6-b93c-1e71237fbaef/volumes" Dec 11 14:06:05 crc kubenswrapper[5050]: W1211 14:06:05.755584 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod88b4966d_124b_4cf4_b52b_704955059220.slice/crio-31794c1b02c4feda1b83378dcdbbca471105b28edc8768160a171cede3872d9f WatchSource:0}: Error finding container 
31794c1b02c4feda1b83378dcdbbca471105b28edc8768160a171cede3872d9f: Status 404 returned error can't find the container with id 31794c1b02c4feda1b83378dcdbbca471105b28edc8768160a171cede3872d9f Dec 11 14:06:05 crc kubenswrapper[5050]: I1211 14:06:05.760375 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-pjzpq"] Dec 11 14:06:05 crc kubenswrapper[5050]: I1211 14:06:05.770228 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-pjzpq" event={"ID":"88b4966d-124b-4cf4-b52b-704955059220","Type":"ContainerStarted","Data":"31794c1b02c4feda1b83378dcdbbca471105b28edc8768160a171cede3872d9f"} Dec 11 14:06:05 crc kubenswrapper[5050]: I1211 14:06:05.781621 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"c928931c-d49d-41dc-9181-11d856ed3bd0","Type":"ContainerStarted","Data":"1cea5ee03c4b98e2dd708e2afdf4fa98c7bbe1a680f5dc49724ce2f1716f81ee"} Dec 11 14:06:05 crc kubenswrapper[5050]: E1211 14:06:05.784286 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/kube-state-metrics/kube-state-metrics@sha256:db384bf43222b066c378e77027a675d4cd9911107adba46c2922b3a55e10d6fb\\\"\"" pod="openstack/kube-state-metrics-0" podUID="8e60c3c2-6055-4e50-99b6-4a5f08728b17" Dec 11 14:06:09 crc kubenswrapper[5050]: I1211 14:06:09.817361 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-47tvr" event={"ID":"01fa4d89-aae5-451a-8798-2700053fe3d4","Type":"ContainerStarted","Data":"3dc7b12cce104d7f7c7d469a1c36d591c390e0b69268044a2dd0bba6d7255d70"} Dec 11 14:06:09 crc kubenswrapper[5050]: I1211 14:06:09.817895 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-47tvr" Dec 11 14:06:09 crc kubenswrapper[5050]: I1211 14:06:09.820735 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"e96be66c-07f2-47c0-a784-6af473c8a2a8","Type":"ContainerStarted","Data":"c612f0ddd6bdd2093ab8086d9662d761b9c9ab6d418a4c7403db013b23c7a269"} Dec 11 14:06:09 crc kubenswrapper[5050]: I1211 14:06:09.823065 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"c928931c-d49d-41dc-9181-11d856ed3bd0","Type":"ContainerStarted","Data":"3768c3d6cf415867973810bdb14c5966684aab657f8614b9d4062545081db44d"} Dec 11 14:06:09 crc kubenswrapper[5050]: I1211 14:06:09.825381 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-pjzpq" event={"ID":"88b4966d-124b-4cf4-b52b-704955059220","Type":"ContainerStarted","Data":"869a4abfd180a5e436ede22ec0513f606237aa2e6ae7e715fd1ae502f1b97492"} Dec 11 14:06:09 crc kubenswrapper[5050]: I1211 14:06:09.843763 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-47tvr" podStartSLOduration=27.419805846 podStartE2EDuration="31.843741051s" podCreationTimestamp="2025-12-11 14:05:38 +0000 UTC" firstStartedPulling="2025-12-11 14:06:04.661254585 +0000 UTC m=+1055.504977171" lastFinishedPulling="2025-12-11 14:06:09.08518979 +0000 UTC m=+1059.928912376" observedRunningTime="2025-12-11 14:06:09.84039472 +0000 UTC m=+1060.684117306" watchObservedRunningTime="2025-12-11 14:06:09.843741051 +0000 UTC m=+1060.687463637" Dec 11 14:06:10 crc kubenswrapper[5050]: I1211 14:06:10.796727 5050 patch_prober.go:28] interesting pod/machine-config-daemon-wcb2s 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 14:06:10 crc kubenswrapper[5050]: I1211 14:06:10.797098 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 14:06:10 crc kubenswrapper[5050]: I1211 14:06:10.797158 5050 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" Dec 11 14:06:10 crc kubenswrapper[5050]: I1211 14:06:10.798253 5050 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9bb2e8f5b00b062d127a620edb8af7fea8c346ac42e77621e28b4dafad3e5aa5"} pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 11 14:06:10 crc kubenswrapper[5050]: I1211 14:06:10.798393 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" containerName="machine-config-daemon" containerID="cri-o://9bb2e8f5b00b062d127a620edb8af7fea8c346ac42e77621e28b4dafad3e5aa5" gracePeriod=600 Dec 11 14:06:10 crc kubenswrapper[5050]: I1211 14:06:10.837721 5050 generic.go:334] "Generic (PLEG): container finished" podID="88b4966d-124b-4cf4-b52b-704955059220" containerID="869a4abfd180a5e436ede22ec0513f606237aa2e6ae7e715fd1ae502f1b97492" exitCode=0 Dec 11 14:06:10 crc kubenswrapper[5050]: I1211 14:06:10.839436 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-pjzpq" event={"ID":"88b4966d-124b-4cf4-b52b-704955059220","Type":"ContainerDied","Data":"869a4abfd180a5e436ede22ec0513f606237aa2e6ae7e715fd1ae502f1b97492"} Dec 11 14:06:11 crc kubenswrapper[5050]: I1211 14:06:11.868415 5050 generic.go:334] "Generic (PLEG): container finished" podID="7e849b2e-7cd7-4e49-acd2-deab139c699a" containerID="9bb2e8f5b00b062d127a620edb8af7fea8c346ac42e77621e28b4dafad3e5aa5" exitCode=0 Dec 11 14:06:11 crc kubenswrapper[5050]: I1211 14:06:11.868520 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" event={"ID":"7e849b2e-7cd7-4e49-acd2-deab139c699a","Type":"ContainerDied","Data":"9bb2e8f5b00b062d127a620edb8af7fea8c346ac42e77621e28b4dafad3e5aa5"} Dec 11 14:06:11 crc kubenswrapper[5050]: I1211 14:06:11.869192 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" event={"ID":"7e849b2e-7cd7-4e49-acd2-deab139c699a","Type":"ContainerStarted","Data":"9aa9a97e2005f8a9ae70c2d72cef618f936c2d3673f7194a8c95ac3ad3519511"} Dec 11 14:06:11 crc kubenswrapper[5050]: I1211 14:06:11.869230 5050 scope.go:117] "RemoveContainer" containerID="d1f524fdfc663274504f05a5df8397287ddcfa403493dc19698f5ccd6febdfcc" Dec 11 14:06:11 crc kubenswrapper[5050]: I1211 14:06:11.874568 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" 
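At 14:06:10 the liveness probe for machine-config-daemon, an HTTP GET against http://127.0.0.1:8798/health, fails with "connection refused"; the kubelet records the failure, kills the container with a 600s grace period, and the following PLEG events show the old container exiting and a replacement starting. A minimal Go sketch of an equivalent HTTP health check, assuming the same URL and a short probe timeout; it mirrors the probe semantics (a connection error or a status outside 200-399 counts as failure) and is not the kubelet's prober code:

    package main

    // Illustrative HTTP liveness-style check: GET the health endpoint with a short
    // timeout and treat connection errors or out-of-range statuses as failures.

    import (
        "fmt"
        "net/http"
        "time"
    )

    func probe(url string, timeout time.Duration) error {
        client := &http.Client{Timeout: timeout}
        resp, err := client.Get(url)
        if err != nil {
            return err // e.g. "connect: connection refused", as in the entry above
        }
        defer resp.Body.Close()
        if resp.StatusCode < 200 || resp.StatusCode >= 400 {
            return fmt.Errorf("unexpected status %d", resp.StatusCode)
        }
        return nil
    }

    func main() {
        if err := probe("http://127.0.0.1:8798/health", 5*time.Second); err != nil {
            fmt.Println("Liveness probe failed:", err)
            return
        }
        fmt.Println("Liveness probe succeeded")
    }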
event={"ID":"defedffb-9310-4b18-b7ee-b54040aa5447","Type":"ContainerStarted","Data":"2a9d3c07c9884ff572de5d859d886e5e90497f2bc5adb397f3f64151ee6e7fd3"} Dec 11 14:06:11 crc kubenswrapper[5050]: I1211 14:06:11.875262 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Dec 11 14:06:11 crc kubenswrapper[5050]: I1211 14:06:11.879580 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-pjzpq" event={"ID":"88b4966d-124b-4cf4-b52b-704955059220","Type":"ContainerStarted","Data":"3713fc357ee4f2a6f119e91ece32b1dd727d67185d218ebb887b345bbd9015d1"} Dec 11 14:06:11 crc kubenswrapper[5050]: I1211 14:06:11.879629 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-pjzpq" event={"ID":"88b4966d-124b-4cf4-b52b-704955059220","Type":"ContainerStarted","Data":"5ce7ebe2679d23b41ab50f0b7d865ea6308f725963cbd7e3ed9e1c34f506375c"} Dec 11 14:06:11 crc kubenswrapper[5050]: I1211 14:06:11.880058 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-pjzpq" Dec 11 14:06:11 crc kubenswrapper[5050]: I1211 14:06:11.880170 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-pjzpq" Dec 11 14:06:11 crc kubenswrapper[5050]: I1211 14:06:11.919041 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-pjzpq" podStartSLOduration=30.61118081 podStartE2EDuration="33.918999223s" podCreationTimestamp="2025-12-11 14:05:38 +0000 UTC" firstStartedPulling="2025-12-11 14:06:05.759420843 +0000 UTC m=+1056.603143429" lastFinishedPulling="2025-12-11 14:06:09.067239256 +0000 UTC m=+1059.910961842" observedRunningTime="2025-12-11 14:06:11.911548352 +0000 UTC m=+1062.755270938" watchObservedRunningTime="2025-12-11 14:06:11.918999223 +0000 UTC m=+1062.762721809" Dec 11 14:06:11 crc kubenswrapper[5050]: I1211 14:06:11.934365 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=2.332186511 podStartE2EDuration="38.934291315s" podCreationTimestamp="2025-12-11 14:05:33 +0000 UTC" firstStartedPulling="2025-12-11 14:05:34.57403353 +0000 UTC m=+1025.417756106" lastFinishedPulling="2025-12-11 14:06:11.176138324 +0000 UTC m=+1062.019860910" observedRunningTime="2025-12-11 14:06:11.932541318 +0000 UTC m=+1062.776263924" watchObservedRunningTime="2025-12-11 14:06:11.934291315 +0000 UTC m=+1062.778013891" Dec 11 14:06:14 crc kubenswrapper[5050]: I1211 14:06:14.918807 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"c8b3d8cd-9278-4639-86fe-1aa7696fecca","Type":"ContainerStarted","Data":"3f6ab43d7c44f6f5b8c73954b9c98393b51e2f88daf2fc69efb6768d87c72dd3"} Dec 11 14:06:14 crc kubenswrapper[5050]: I1211 14:06:14.921054 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"e96be66c-07f2-47c0-a784-6af473c8a2a8","Type":"ContainerStarted","Data":"93fdd4d5510bd166ab18bedc5b8b3ae3ef5a610b722c510e999ff481726b0345"} Dec 11 14:06:14 crc kubenswrapper[5050]: I1211 14:06:14.924607 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"c928931c-d49d-41dc-9181-11d856ed3bd0","Type":"ContainerStarted","Data":"3111270185c2d5c0f18227cafa21198528d2d2e6ed0667e53d97a8edfd2d1860"} Dec 11 14:06:14 crc kubenswrapper[5050]: I1211 14:06:14.963288 5050 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=27.866336595 podStartE2EDuration="36.96326897s" podCreationTimestamp="2025-12-11 14:05:38 +0000 UTC" firstStartedPulling="2025-12-11 14:06:04.651555603 +0000 UTC m=+1055.495278179" lastFinishedPulling="2025-12-11 14:06:13.748487968 +0000 UTC m=+1064.592210554" observedRunningTime="2025-12-11 14:06:14.961603395 +0000 UTC m=+1065.805325991" watchObservedRunningTime="2025-12-11 14:06:14.96326897 +0000 UTC m=+1065.806991556" Dec 11 14:06:14 crc kubenswrapper[5050]: I1211 14:06:14.995704 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=25.181286967 podStartE2EDuration="33.995686444s" podCreationTimestamp="2025-12-11 14:05:41 +0000 UTC" firstStartedPulling="2025-12-11 14:06:04.954288475 +0000 UTC m=+1055.798011061" lastFinishedPulling="2025-12-11 14:06:13.768687952 +0000 UTC m=+1064.612410538" observedRunningTime="2025-12-11 14:06:14.98773423 +0000 UTC m=+1065.831456816" watchObservedRunningTime="2025-12-11 14:06:14.995686444 +0000 UTC m=+1065.839409030" Dec 11 14:06:15 crc kubenswrapper[5050]: I1211 14:06:15.813707 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Dec 11 14:06:15 crc kubenswrapper[5050]: I1211 14:06:15.854692 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Dec 11 14:06:15 crc kubenswrapper[5050]: I1211 14:06:15.932145 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Dec 11 14:06:15 crc kubenswrapper[5050]: I1211 14:06:15.970611 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Dec 11 14:06:16 crc kubenswrapper[5050]: I1211 14:06:16.124495 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Dec 11 14:06:16 crc kubenswrapper[5050]: I1211 14:06:16.181137 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Dec 11 14:06:16 crc kubenswrapper[5050]: I1211 14:06:16.372074 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-95f5f6995-clprb"] Dec 11 14:06:16 crc kubenswrapper[5050]: I1211 14:06:16.439773 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7878659675-8x5rp"] Dec 11 14:06:16 crc kubenswrapper[5050]: I1211 14:06:16.441534 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7878659675-8x5rp" Dec 11 14:06:16 crc kubenswrapper[5050]: I1211 14:06:16.448870 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Dec 11 14:06:16 crc kubenswrapper[5050]: I1211 14:06:16.471544 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-7gmrp"] Dec 11 14:06:16 crc kubenswrapper[5050]: I1211 14:06:16.480329 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-7gmrp"] Dec 11 14:06:16 crc kubenswrapper[5050]: I1211 14:06:16.480450 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-7gmrp" Dec 11 14:06:16 crc kubenswrapper[5050]: I1211 14:06:16.487149 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Dec 11 14:06:16 crc kubenswrapper[5050]: I1211 14:06:16.514925 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7878659675-8x5rp"] Dec 11 14:06:16 crc kubenswrapper[5050]: I1211 14:06:16.606783 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/58cdcd05-e81a-4ed4-8357-249649b17449-ovs-rundir\") pod \"ovn-controller-metrics-7gmrp\" (UID: \"58cdcd05-e81a-4ed4-8357-249649b17449\") " pod="openstack/ovn-controller-metrics-7gmrp" Dec 11 14:06:16 crc kubenswrapper[5050]: I1211 14:06:16.607321 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxgn8\" (UniqueName: \"kubernetes.io/projected/58cdcd05-e81a-4ed4-8357-249649b17449-kube-api-access-lxgn8\") pod \"ovn-controller-metrics-7gmrp\" (UID: \"58cdcd05-e81a-4ed4-8357-249649b17449\") " pod="openstack/ovn-controller-metrics-7gmrp" Dec 11 14:06:16 crc kubenswrapper[5050]: I1211 14:06:16.607365 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bn4b\" (UniqueName: \"kubernetes.io/projected/ede85a3e-dba0-4946-a12d-3dd38485c815-kube-api-access-6bn4b\") pod \"dnsmasq-dns-7878659675-8x5rp\" (UID: \"ede85a3e-dba0-4946-a12d-3dd38485c815\") " pod="openstack/dnsmasq-dns-7878659675-8x5rp" Dec 11 14:06:16 crc kubenswrapper[5050]: I1211 14:06:16.607410 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/58cdcd05-e81a-4ed4-8357-249649b17449-config\") pod \"ovn-controller-metrics-7gmrp\" (UID: \"58cdcd05-e81a-4ed4-8357-249649b17449\") " pod="openstack/ovn-controller-metrics-7gmrp" Dec 11 14:06:16 crc kubenswrapper[5050]: I1211 14:06:16.607460 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/58cdcd05-e81a-4ed4-8357-249649b17449-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-7gmrp\" (UID: \"58cdcd05-e81a-4ed4-8357-249649b17449\") " pod="openstack/ovn-controller-metrics-7gmrp" Dec 11 14:06:16 crc kubenswrapper[5050]: I1211 14:06:16.607550 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ede85a3e-dba0-4946-a12d-3dd38485c815-dns-svc\") pod \"dnsmasq-dns-7878659675-8x5rp\" (UID: \"ede85a3e-dba0-4946-a12d-3dd38485c815\") " pod="openstack/dnsmasq-dns-7878659675-8x5rp" Dec 11 14:06:16 crc kubenswrapper[5050]: I1211 14:06:16.607599 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58cdcd05-e81a-4ed4-8357-249649b17449-combined-ca-bundle\") pod \"ovn-controller-metrics-7gmrp\" (UID: \"58cdcd05-e81a-4ed4-8357-249649b17449\") " pod="openstack/ovn-controller-metrics-7gmrp" Dec 11 14:06:16 crc kubenswrapper[5050]: I1211 14:06:16.607626 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ede85a3e-dba0-4946-a12d-3dd38485c815-ovsdbserver-nb\") 
pod \"dnsmasq-dns-7878659675-8x5rp\" (UID: \"ede85a3e-dba0-4946-a12d-3dd38485c815\") " pod="openstack/dnsmasq-dns-7878659675-8x5rp" Dec 11 14:06:16 crc kubenswrapper[5050]: I1211 14:06:16.607687 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ede85a3e-dba0-4946-a12d-3dd38485c815-config\") pod \"dnsmasq-dns-7878659675-8x5rp\" (UID: \"ede85a3e-dba0-4946-a12d-3dd38485c815\") " pod="openstack/dnsmasq-dns-7878659675-8x5rp" Dec 11 14:06:16 crc kubenswrapper[5050]: I1211 14:06:16.607712 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/58cdcd05-e81a-4ed4-8357-249649b17449-ovn-rundir\") pod \"ovn-controller-metrics-7gmrp\" (UID: \"58cdcd05-e81a-4ed4-8357-249649b17449\") " pod="openstack/ovn-controller-metrics-7gmrp" Dec 11 14:06:16 crc kubenswrapper[5050]: I1211 14:06:16.698253 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-c7cbb8f79-vlqm4"] Dec 11 14:06:16 crc kubenswrapper[5050]: I1211 14:06:16.711703 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ede85a3e-dba0-4946-a12d-3dd38485c815-config\") pod \"dnsmasq-dns-7878659675-8x5rp\" (UID: \"ede85a3e-dba0-4946-a12d-3dd38485c815\") " pod="openstack/dnsmasq-dns-7878659675-8x5rp" Dec 11 14:06:16 crc kubenswrapper[5050]: I1211 14:06:16.711758 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/58cdcd05-e81a-4ed4-8357-249649b17449-ovn-rundir\") pod \"ovn-controller-metrics-7gmrp\" (UID: \"58cdcd05-e81a-4ed4-8357-249649b17449\") " pod="openstack/ovn-controller-metrics-7gmrp" Dec 11 14:06:16 crc kubenswrapper[5050]: I1211 14:06:16.711819 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/58cdcd05-e81a-4ed4-8357-249649b17449-ovs-rundir\") pod \"ovn-controller-metrics-7gmrp\" (UID: \"58cdcd05-e81a-4ed4-8357-249649b17449\") " pod="openstack/ovn-controller-metrics-7gmrp" Dec 11 14:06:16 crc kubenswrapper[5050]: I1211 14:06:16.711858 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lxgn8\" (UniqueName: \"kubernetes.io/projected/58cdcd05-e81a-4ed4-8357-249649b17449-kube-api-access-lxgn8\") pod \"ovn-controller-metrics-7gmrp\" (UID: \"58cdcd05-e81a-4ed4-8357-249649b17449\") " pod="openstack/ovn-controller-metrics-7gmrp" Dec 11 14:06:16 crc kubenswrapper[5050]: I1211 14:06:16.711878 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6bn4b\" (UniqueName: \"kubernetes.io/projected/ede85a3e-dba0-4946-a12d-3dd38485c815-kube-api-access-6bn4b\") pod \"dnsmasq-dns-7878659675-8x5rp\" (UID: \"ede85a3e-dba0-4946-a12d-3dd38485c815\") " pod="openstack/dnsmasq-dns-7878659675-8x5rp" Dec 11 14:06:16 crc kubenswrapper[5050]: I1211 14:06:16.711929 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/58cdcd05-e81a-4ed4-8357-249649b17449-config\") pod \"ovn-controller-metrics-7gmrp\" (UID: \"58cdcd05-e81a-4ed4-8357-249649b17449\") " pod="openstack/ovn-controller-metrics-7gmrp" Dec 11 14:06:16 crc kubenswrapper[5050]: I1211 14:06:16.711952 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/58cdcd05-e81a-4ed4-8357-249649b17449-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-7gmrp\" (UID: \"58cdcd05-e81a-4ed4-8357-249649b17449\") " pod="openstack/ovn-controller-metrics-7gmrp" Dec 11 14:06:16 crc kubenswrapper[5050]: I1211 14:06:16.712087 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ede85a3e-dba0-4946-a12d-3dd38485c815-dns-svc\") pod \"dnsmasq-dns-7878659675-8x5rp\" (UID: \"ede85a3e-dba0-4946-a12d-3dd38485c815\") " pod="openstack/dnsmasq-dns-7878659675-8x5rp" Dec 11 14:06:16 crc kubenswrapper[5050]: I1211 14:06:16.712109 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58cdcd05-e81a-4ed4-8357-249649b17449-combined-ca-bundle\") pod \"ovn-controller-metrics-7gmrp\" (UID: \"58cdcd05-e81a-4ed4-8357-249649b17449\") " pod="openstack/ovn-controller-metrics-7gmrp" Dec 11 14:06:16 crc kubenswrapper[5050]: I1211 14:06:16.712136 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ede85a3e-dba0-4946-a12d-3dd38485c815-ovsdbserver-nb\") pod \"dnsmasq-dns-7878659675-8x5rp\" (UID: \"ede85a3e-dba0-4946-a12d-3dd38485c815\") " pod="openstack/dnsmasq-dns-7878659675-8x5rp" Dec 11 14:06:16 crc kubenswrapper[5050]: I1211 14:06:16.713364 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ede85a3e-dba0-4946-a12d-3dd38485c815-config\") pod \"dnsmasq-dns-7878659675-8x5rp\" (UID: \"ede85a3e-dba0-4946-a12d-3dd38485c815\") " pod="openstack/dnsmasq-dns-7878659675-8x5rp" Dec 11 14:06:16 crc kubenswrapper[5050]: I1211 14:06:16.713733 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/58cdcd05-e81a-4ed4-8357-249649b17449-ovn-rundir\") pod \"ovn-controller-metrics-7gmrp\" (UID: \"58cdcd05-e81a-4ed4-8357-249649b17449\") " pod="openstack/ovn-controller-metrics-7gmrp" Dec 11 14:06:16 crc kubenswrapper[5050]: I1211 14:06:16.713907 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ede85a3e-dba0-4946-a12d-3dd38485c815-dns-svc\") pod \"dnsmasq-dns-7878659675-8x5rp\" (UID: \"ede85a3e-dba0-4946-a12d-3dd38485c815\") " pod="openstack/dnsmasq-dns-7878659675-8x5rp" Dec 11 14:06:16 crc kubenswrapper[5050]: I1211 14:06:16.714753 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/58cdcd05-e81a-4ed4-8357-249649b17449-ovs-rundir\") pod \"ovn-controller-metrics-7gmrp\" (UID: \"58cdcd05-e81a-4ed4-8357-249649b17449\") " pod="openstack/ovn-controller-metrics-7gmrp" Dec 11 14:06:16 crc kubenswrapper[5050]: I1211 14:06:16.715121 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/58cdcd05-e81a-4ed4-8357-249649b17449-config\") pod \"ovn-controller-metrics-7gmrp\" (UID: \"58cdcd05-e81a-4ed4-8357-249649b17449\") " pod="openstack/ovn-controller-metrics-7gmrp" Dec 11 14:06:16 crc kubenswrapper[5050]: I1211 14:06:16.715494 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ede85a3e-dba0-4946-a12d-3dd38485c815-ovsdbserver-nb\") pod \"dnsmasq-dns-7878659675-8x5rp\" (UID: 
\"ede85a3e-dba0-4946-a12d-3dd38485c815\") " pod="openstack/dnsmasq-dns-7878659675-8x5rp" Dec 11 14:06:16 crc kubenswrapper[5050]: I1211 14:06:16.720077 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-586b989cdc-6rpgw"] Dec 11 14:06:16 crc kubenswrapper[5050]: I1211 14:06:16.721292 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58cdcd05-e81a-4ed4-8357-249649b17449-combined-ca-bundle\") pod \"ovn-controller-metrics-7gmrp\" (UID: \"58cdcd05-e81a-4ed4-8357-249649b17449\") " pod="openstack/ovn-controller-metrics-7gmrp" Dec 11 14:06:16 crc kubenswrapper[5050]: I1211 14:06:16.721825 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/58cdcd05-e81a-4ed4-8357-249649b17449-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-7gmrp\" (UID: \"58cdcd05-e81a-4ed4-8357-249649b17449\") " pod="openstack/ovn-controller-metrics-7gmrp" Dec 11 14:06:16 crc kubenswrapper[5050]: I1211 14:06:16.723999 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-586b989cdc-6rpgw" Dec 11 14:06:16 crc kubenswrapper[5050]: I1211 14:06:16.729235 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Dec 11 14:06:16 crc kubenswrapper[5050]: I1211 14:06:16.747950 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-586b989cdc-6rpgw"] Dec 11 14:06:16 crc kubenswrapper[5050]: I1211 14:06:16.748717 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lxgn8\" (UniqueName: \"kubernetes.io/projected/58cdcd05-e81a-4ed4-8357-249649b17449-kube-api-access-lxgn8\") pod \"ovn-controller-metrics-7gmrp\" (UID: \"58cdcd05-e81a-4ed4-8357-249649b17449\") " pod="openstack/ovn-controller-metrics-7gmrp" Dec 11 14:06:16 crc kubenswrapper[5050]: I1211 14:06:16.751816 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6bn4b\" (UniqueName: \"kubernetes.io/projected/ede85a3e-dba0-4946-a12d-3dd38485c815-kube-api-access-6bn4b\") pod \"dnsmasq-dns-7878659675-8x5rp\" (UID: \"ede85a3e-dba0-4946-a12d-3dd38485c815\") " pod="openstack/dnsmasq-dns-7878659675-8x5rp" Dec 11 14:06:16 crc kubenswrapper[5050]: I1211 14:06:16.917581 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7878659675-8x5rp" Dec 11 14:06:16 crc kubenswrapper[5050]: I1211 14:06:16.919616 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zw64\" (UniqueName: \"kubernetes.io/projected/e1f95cdc-064f-479e-af77-4520647ba58a-kube-api-access-8zw64\") pod \"dnsmasq-dns-586b989cdc-6rpgw\" (UID: \"e1f95cdc-064f-479e-af77-4520647ba58a\") " pod="openstack/dnsmasq-dns-586b989cdc-6rpgw" Dec 11 14:06:16 crc kubenswrapper[5050]: I1211 14:06:16.919791 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e1f95cdc-064f-479e-af77-4520647ba58a-ovsdbserver-sb\") pod \"dnsmasq-dns-586b989cdc-6rpgw\" (UID: \"e1f95cdc-064f-479e-af77-4520647ba58a\") " pod="openstack/dnsmasq-dns-586b989cdc-6rpgw" Dec 11 14:06:16 crc kubenswrapper[5050]: I1211 14:06:16.919949 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1f95cdc-064f-479e-af77-4520647ba58a-config\") pod \"dnsmasq-dns-586b989cdc-6rpgw\" (UID: \"e1f95cdc-064f-479e-af77-4520647ba58a\") " pod="openstack/dnsmasq-dns-586b989cdc-6rpgw" Dec 11 14:06:16 crc kubenswrapper[5050]: I1211 14:06:16.920099 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e1f95cdc-064f-479e-af77-4520647ba58a-dns-svc\") pod \"dnsmasq-dns-586b989cdc-6rpgw\" (UID: \"e1f95cdc-064f-479e-af77-4520647ba58a\") " pod="openstack/dnsmasq-dns-586b989cdc-6rpgw" Dec 11 14:06:16 crc kubenswrapper[5050]: I1211 14:06:16.920277 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e1f95cdc-064f-479e-af77-4520647ba58a-ovsdbserver-nb\") pod \"dnsmasq-dns-586b989cdc-6rpgw\" (UID: \"e1f95cdc-064f-479e-af77-4520647ba58a\") " pod="openstack/dnsmasq-dns-586b989cdc-6rpgw" Dec 11 14:06:16 crc kubenswrapper[5050]: I1211 14:06:16.949717 5050 generic.go:334] "Generic (PLEG): container finished" podID="4b68c591-2f4c-41c3-9ab1-372deed0e388" containerID="cde80b8090bc6179a7fd540ca9c1007335eac9f83960d33874c18a605c7a4ac9" exitCode=0 Dec 11 14:06:16 crc kubenswrapper[5050]: I1211 14:06:16.949826 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-c7cbb8f79-vlqm4" event={"ID":"4b68c591-2f4c-41c3-9ab1-372deed0e388","Type":"ContainerDied","Data":"cde80b8090bc6179a7fd540ca9c1007335eac9f83960d33874c18a605c7a4ac9"} Dec 11 14:06:16 crc kubenswrapper[5050]: I1211 14:06:16.969417 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-7gmrp" Dec 11 14:06:16 crc kubenswrapper[5050]: I1211 14:06:16.979076 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"55b7e535-46f6-403b-9cdf-bf172dba97b6","Type":"ContainerStarted","Data":"48b2599eb07fbfc258247376195b9fee5c9054df813653e7696ec82ef8a7ca4b"} Dec 11 14:06:16 crc kubenswrapper[5050]: I1211 14:06:16.980549 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Dec 11 14:06:17 crc kubenswrapper[5050]: I1211 14:06:17.025574 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8zw64\" (UniqueName: \"kubernetes.io/projected/e1f95cdc-064f-479e-af77-4520647ba58a-kube-api-access-8zw64\") pod \"dnsmasq-dns-586b989cdc-6rpgw\" (UID: \"e1f95cdc-064f-479e-af77-4520647ba58a\") " pod="openstack/dnsmasq-dns-586b989cdc-6rpgw" Dec 11 14:06:17 crc kubenswrapper[5050]: I1211 14:06:17.025660 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e1f95cdc-064f-479e-af77-4520647ba58a-ovsdbserver-sb\") pod \"dnsmasq-dns-586b989cdc-6rpgw\" (UID: \"e1f95cdc-064f-479e-af77-4520647ba58a\") " pod="openstack/dnsmasq-dns-586b989cdc-6rpgw" Dec 11 14:06:17 crc kubenswrapper[5050]: I1211 14:06:17.025715 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1f95cdc-064f-479e-af77-4520647ba58a-config\") pod \"dnsmasq-dns-586b989cdc-6rpgw\" (UID: \"e1f95cdc-064f-479e-af77-4520647ba58a\") " pod="openstack/dnsmasq-dns-586b989cdc-6rpgw" Dec 11 14:06:17 crc kubenswrapper[5050]: I1211 14:06:17.025747 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e1f95cdc-064f-479e-af77-4520647ba58a-dns-svc\") pod \"dnsmasq-dns-586b989cdc-6rpgw\" (UID: \"e1f95cdc-064f-479e-af77-4520647ba58a\") " pod="openstack/dnsmasq-dns-586b989cdc-6rpgw" Dec 11 14:06:17 crc kubenswrapper[5050]: I1211 14:06:17.025811 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e1f95cdc-064f-479e-af77-4520647ba58a-ovsdbserver-nb\") pod \"dnsmasq-dns-586b989cdc-6rpgw\" (UID: \"e1f95cdc-064f-479e-af77-4520647ba58a\") " pod="openstack/dnsmasq-dns-586b989cdc-6rpgw" Dec 11 14:06:17 crc kubenswrapper[5050]: I1211 14:06:17.027131 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e1f95cdc-064f-479e-af77-4520647ba58a-ovsdbserver-nb\") pod \"dnsmasq-dns-586b989cdc-6rpgw\" (UID: \"e1f95cdc-064f-479e-af77-4520647ba58a\") " pod="openstack/dnsmasq-dns-586b989cdc-6rpgw" Dec 11 14:06:17 crc kubenswrapper[5050]: I1211 14:06:17.030180 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1f95cdc-064f-479e-af77-4520647ba58a-config\") pod \"dnsmasq-dns-586b989cdc-6rpgw\" (UID: \"e1f95cdc-064f-479e-af77-4520647ba58a\") " pod="openstack/dnsmasq-dns-586b989cdc-6rpgw" Dec 11 14:06:17 crc kubenswrapper[5050]: I1211 14:06:17.031127 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e1f95cdc-064f-479e-af77-4520647ba58a-dns-svc\") pod \"dnsmasq-dns-586b989cdc-6rpgw\" (UID: \"e1f95cdc-064f-479e-af77-4520647ba58a\") " 
pod="openstack/dnsmasq-dns-586b989cdc-6rpgw" Dec 11 14:06:17 crc kubenswrapper[5050]: I1211 14:06:17.040348 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e1f95cdc-064f-479e-af77-4520647ba58a-ovsdbserver-sb\") pod \"dnsmasq-dns-586b989cdc-6rpgw\" (UID: \"e1f95cdc-064f-479e-af77-4520647ba58a\") " pod="openstack/dnsmasq-dns-586b989cdc-6rpgw" Dec 11 14:06:17 crc kubenswrapper[5050]: I1211 14:06:17.071063 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8zw64\" (UniqueName: \"kubernetes.io/projected/e1f95cdc-064f-479e-af77-4520647ba58a-kube-api-access-8zw64\") pod \"dnsmasq-dns-586b989cdc-6rpgw\" (UID: \"e1f95cdc-064f-479e-af77-4520647ba58a\") " pod="openstack/dnsmasq-dns-586b989cdc-6rpgw" Dec 11 14:06:17 crc kubenswrapper[5050]: I1211 14:06:17.079447 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-586b989cdc-6rpgw" Dec 11 14:06:17 crc kubenswrapper[5050]: I1211 14:06:17.120474 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Dec 11 14:06:17 crc kubenswrapper[5050]: I1211 14:06:17.370787 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Dec 11 14:06:17 crc kubenswrapper[5050]: I1211 14:06:17.373881 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Dec 11 14:06:17 crc kubenswrapper[5050]: I1211 14:06:17.383908 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Dec 11 14:06:17 crc kubenswrapper[5050]: I1211 14:06:17.384250 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Dec 11 14:06:17 crc kubenswrapper[5050]: I1211 14:06:17.384439 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Dec 11 14:06:17 crc kubenswrapper[5050]: I1211 14:06:17.384689 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-wdk84" Dec 11 14:06:17 crc kubenswrapper[5050]: I1211 14:06:17.410353 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Dec 11 14:06:17 crc kubenswrapper[5050]: I1211 14:06:17.437834 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/5371a32d-3998-4ddc-93d6-27e9afdb9712-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"5371a32d-3998-4ddc-93d6-27e9afdb9712\") " pod="openstack/ovn-northd-0" Dec 11 14:06:17 crc kubenswrapper[5050]: I1211 14:06:17.438417 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5371a32d-3998-4ddc-93d6-27e9afdb9712-scripts\") pod \"ovn-northd-0\" (UID: \"5371a32d-3998-4ddc-93d6-27e9afdb9712\") " pod="openstack/ovn-northd-0" Dec 11 14:06:17 crc kubenswrapper[5050]: I1211 14:06:17.438562 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/5371a32d-3998-4ddc-93d6-27e9afdb9712-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"5371a32d-3998-4ddc-93d6-27e9afdb9712\") " pod="openstack/ovn-northd-0" Dec 11 14:06:17 crc kubenswrapper[5050]: I1211 14:06:17.438667 5050 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5371a32d-3998-4ddc-93d6-27e9afdb9712-config\") pod \"ovn-northd-0\" (UID: \"5371a32d-3998-4ddc-93d6-27e9afdb9712\") " pod="openstack/ovn-northd-0" Dec 11 14:06:17 crc kubenswrapper[5050]: I1211 14:06:17.438807 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jdmv\" (UniqueName: \"kubernetes.io/projected/5371a32d-3998-4ddc-93d6-27e9afdb9712-kube-api-access-8jdmv\") pod \"ovn-northd-0\" (UID: \"5371a32d-3998-4ddc-93d6-27e9afdb9712\") " pod="openstack/ovn-northd-0" Dec 11 14:06:17 crc kubenswrapper[5050]: I1211 14:06:17.439092 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/5371a32d-3998-4ddc-93d6-27e9afdb9712-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"5371a32d-3998-4ddc-93d6-27e9afdb9712\") " pod="openstack/ovn-northd-0" Dec 11 14:06:17 crc kubenswrapper[5050]: I1211 14:06:17.439272 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5371a32d-3998-4ddc-93d6-27e9afdb9712-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"5371a32d-3998-4ddc-93d6-27e9afdb9712\") " pod="openstack/ovn-northd-0" Dec 11 14:06:17 crc kubenswrapper[5050]: I1211 14:06:17.541823 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8jdmv\" (UniqueName: \"kubernetes.io/projected/5371a32d-3998-4ddc-93d6-27e9afdb9712-kube-api-access-8jdmv\") pod \"ovn-northd-0\" (UID: \"5371a32d-3998-4ddc-93d6-27e9afdb9712\") " pod="openstack/ovn-northd-0" Dec 11 14:06:17 crc kubenswrapper[5050]: I1211 14:06:17.541905 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/5371a32d-3998-4ddc-93d6-27e9afdb9712-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"5371a32d-3998-4ddc-93d6-27e9afdb9712\") " pod="openstack/ovn-northd-0" Dec 11 14:06:17 crc kubenswrapper[5050]: I1211 14:06:17.541946 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5371a32d-3998-4ddc-93d6-27e9afdb9712-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"5371a32d-3998-4ddc-93d6-27e9afdb9712\") " pod="openstack/ovn-northd-0" Dec 11 14:06:17 crc kubenswrapper[5050]: I1211 14:06:17.541981 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/5371a32d-3998-4ddc-93d6-27e9afdb9712-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"5371a32d-3998-4ddc-93d6-27e9afdb9712\") " pod="openstack/ovn-northd-0" Dec 11 14:06:17 crc kubenswrapper[5050]: I1211 14:06:17.542112 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5371a32d-3998-4ddc-93d6-27e9afdb9712-scripts\") pod \"ovn-northd-0\" (UID: \"5371a32d-3998-4ddc-93d6-27e9afdb9712\") " pod="openstack/ovn-northd-0" Dec 11 14:06:17 crc kubenswrapper[5050]: I1211 14:06:17.542150 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/5371a32d-3998-4ddc-93d6-27e9afdb9712-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: 
\"5371a32d-3998-4ddc-93d6-27e9afdb9712\") " pod="openstack/ovn-northd-0" Dec 11 14:06:17 crc kubenswrapper[5050]: I1211 14:06:17.542170 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5371a32d-3998-4ddc-93d6-27e9afdb9712-config\") pod \"ovn-northd-0\" (UID: \"5371a32d-3998-4ddc-93d6-27e9afdb9712\") " pod="openstack/ovn-northd-0" Dec 11 14:06:17 crc kubenswrapper[5050]: I1211 14:06:17.543080 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/5371a32d-3998-4ddc-93d6-27e9afdb9712-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"5371a32d-3998-4ddc-93d6-27e9afdb9712\") " pod="openstack/ovn-northd-0" Dec 11 14:06:17 crc kubenswrapper[5050]: I1211 14:06:17.543178 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5371a32d-3998-4ddc-93d6-27e9afdb9712-config\") pod \"ovn-northd-0\" (UID: \"5371a32d-3998-4ddc-93d6-27e9afdb9712\") " pod="openstack/ovn-northd-0" Dec 11 14:06:17 crc kubenswrapper[5050]: I1211 14:06:17.543360 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5371a32d-3998-4ddc-93d6-27e9afdb9712-scripts\") pod \"ovn-northd-0\" (UID: \"5371a32d-3998-4ddc-93d6-27e9afdb9712\") " pod="openstack/ovn-northd-0" Dec 11 14:06:17 crc kubenswrapper[5050]: I1211 14:06:17.572034 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/5371a32d-3998-4ddc-93d6-27e9afdb9712-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"5371a32d-3998-4ddc-93d6-27e9afdb9712\") " pod="openstack/ovn-northd-0" Dec 11 14:06:17 crc kubenswrapper[5050]: I1211 14:06:17.572416 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/5371a32d-3998-4ddc-93d6-27e9afdb9712-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"5371a32d-3998-4ddc-93d6-27e9afdb9712\") " pod="openstack/ovn-northd-0" Dec 11 14:06:17 crc kubenswrapper[5050]: I1211 14:06:17.573605 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5371a32d-3998-4ddc-93d6-27e9afdb9712-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"5371a32d-3998-4ddc-93d6-27e9afdb9712\") " pod="openstack/ovn-northd-0" Dec 11 14:06:17 crc kubenswrapper[5050]: I1211 14:06:17.575625 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8jdmv\" (UniqueName: \"kubernetes.io/projected/5371a32d-3998-4ddc-93d6-27e9afdb9712-kube-api-access-8jdmv\") pod \"ovn-northd-0\" (UID: \"5371a32d-3998-4ddc-93d6-27e9afdb9712\") " pod="openstack/ovn-northd-0" Dec 11 14:06:17 crc kubenswrapper[5050]: I1211 14:06:17.662620 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-c7cbb8f79-vlqm4" Dec 11 14:06:17 crc kubenswrapper[5050]: I1211 14:06:17.712968 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Dec 11 14:06:17 crc kubenswrapper[5050]: I1211 14:06:17.747612 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7gd7t\" (UniqueName: \"kubernetes.io/projected/4b68c591-2f4c-41c3-9ab1-372deed0e388-kube-api-access-7gd7t\") pod \"4b68c591-2f4c-41c3-9ab1-372deed0e388\" (UID: \"4b68c591-2f4c-41c3-9ab1-372deed0e388\") " Dec 11 14:06:17 crc kubenswrapper[5050]: I1211 14:06:17.747741 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4b68c591-2f4c-41c3-9ab1-372deed0e388-dns-svc\") pod \"4b68c591-2f4c-41c3-9ab1-372deed0e388\" (UID: \"4b68c591-2f4c-41c3-9ab1-372deed0e388\") " Dec 11 14:06:17 crc kubenswrapper[5050]: I1211 14:06:17.747764 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4b68c591-2f4c-41c3-9ab1-372deed0e388-config\") pod \"4b68c591-2f4c-41c3-9ab1-372deed0e388\" (UID: \"4b68c591-2f4c-41c3-9ab1-372deed0e388\") " Dec 11 14:06:17 crc kubenswrapper[5050]: I1211 14:06:17.761185 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7878659675-8x5rp"] Dec 11 14:06:17 crc kubenswrapper[5050]: I1211 14:06:17.766875 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b68c591-2f4c-41c3-9ab1-372deed0e388-kube-api-access-7gd7t" (OuterVolumeSpecName: "kube-api-access-7gd7t") pod "4b68c591-2f4c-41c3-9ab1-372deed0e388" (UID: "4b68c591-2f4c-41c3-9ab1-372deed0e388"). InnerVolumeSpecName "kube-api-access-7gd7t". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 14:06:17 crc kubenswrapper[5050]: I1211 14:06:17.771222 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-7gmrp"] Dec 11 14:06:17 crc kubenswrapper[5050]: W1211 14:06:17.780263 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podede85a3e_dba0_4946_a12d_3dd38485c815.slice/crio-6184f3f1f403c89a14fae481a6a3119d9673487edecc15d51b4b24ae11f85e38 WatchSource:0}: Error finding container 6184f3f1f403c89a14fae481a6a3119d9673487edecc15d51b4b24ae11f85e38: Status 404 returned error can't find the container with id 6184f3f1f403c89a14fae481a6a3119d9673487edecc15d51b4b24ae11f85e38 Dec 11 14:06:17 crc kubenswrapper[5050]: I1211 14:06:17.780890 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4b68c591-2f4c-41c3-9ab1-372deed0e388-config" (OuterVolumeSpecName: "config") pod "4b68c591-2f4c-41c3-9ab1-372deed0e388" (UID: "4b68c591-2f4c-41c3-9ab1-372deed0e388"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 14:06:17 crc kubenswrapper[5050]: I1211 14:06:17.787749 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4b68c591-2f4c-41c3-9ab1-372deed0e388-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4b68c591-2f4c-41c3-9ab1-372deed0e388" (UID: "4b68c591-2f4c-41c3-9ab1-372deed0e388"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 14:06:17 crc kubenswrapper[5050]: I1211 14:06:17.850145 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7gd7t\" (UniqueName: \"kubernetes.io/projected/4b68c591-2f4c-41c3-9ab1-372deed0e388-kube-api-access-7gd7t\") on node \"crc\" DevicePath \"\"" Dec 11 14:06:17 crc kubenswrapper[5050]: I1211 14:06:17.850558 5050 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4b68c591-2f4c-41c3-9ab1-372deed0e388-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 11 14:06:17 crc kubenswrapper[5050]: I1211 14:06:17.850590 5050 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4b68c591-2f4c-41c3-9ab1-372deed0e388-config\") on node \"crc\" DevicePath \"\"" Dec 11 14:06:17 crc kubenswrapper[5050]: I1211 14:06:17.938659 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-586b989cdc-6rpgw"] Dec 11 14:06:18 crc kubenswrapper[5050]: I1211 14:06:18.005287 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7878659675-8x5rp" event={"ID":"ede85a3e-dba0-4946-a12d-3dd38485c815","Type":"ContainerStarted","Data":"6184f3f1f403c89a14fae481a6a3119d9673487edecc15d51b4b24ae11f85e38"} Dec 11 14:06:18 crc kubenswrapper[5050]: I1211 14:06:18.011812 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"0891f075-8101-475b-b844-e7cb42a4990b","Type":"ContainerStarted","Data":"b030d9ea1d520c633a941cacbfc01b8167a3e4ea9d95d099c782876dc0ce6862"} Dec 11 14:06:18 crc kubenswrapper[5050]: I1211 14:06:18.017600 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-c7cbb8f79-vlqm4" event={"ID":"4b68c591-2f4c-41c3-9ab1-372deed0e388","Type":"ContainerDied","Data":"359def1b79483f54be63282959561882ca79b96cce95b707c6ab7c3fb2c8a436"} Dec 11 14:06:18 crc kubenswrapper[5050]: I1211 14:06:18.017654 5050 scope.go:117] "RemoveContainer" containerID="cde80b8090bc6179a7fd540ca9c1007335eac9f83960d33874c18a605c7a4ac9" Dec 11 14:06:18 crc kubenswrapper[5050]: I1211 14:06:18.017736 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-c7cbb8f79-vlqm4" Dec 11 14:06:18 crc kubenswrapper[5050]: I1211 14:06:18.026566 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Dec 11 14:06:18 crc kubenswrapper[5050]: I1211 14:06:18.028680 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-586b989cdc-6rpgw" event={"ID":"e1f95cdc-064f-479e-af77-4520647ba58a","Type":"ContainerStarted","Data":"693dc6e68daa06b3ec7d24b236971e221f918c458e74634a0be300c852b231a4"} Dec 11 14:06:18 crc kubenswrapper[5050]: I1211 14:06:18.032157 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"458f05be-2fd6-44d9-8034-f077356964ce","Type":"ContainerStarted","Data":"510949b6fa4514794979cb46d1baa4411178e70e74985dbfb206b0b3da3f4cc4"} Dec 11 14:06:18 crc kubenswrapper[5050]: I1211 14:06:18.036884 5050 generic.go:334] "Generic (PLEG): container finished" podID="63c71212-7318-45f1-94f9-235d861faf86" containerID="310c33ff08c85c39ad9141c243fe9a546dbf86fe7a5e13fe2c70e00c5cb998b0" exitCode=0 Dec 11 14:06:18 crc kubenswrapper[5050]: I1211 14:06:18.036986 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-95f5f6995-clprb" event={"ID":"63c71212-7318-45f1-94f9-235d861faf86","Type":"ContainerDied","Data":"310c33ff08c85c39ad9141c243fe9a546dbf86fe7a5e13fe2c70e00c5cb998b0"} Dec 11 14:06:18 crc kubenswrapper[5050]: I1211 14:06:18.043578 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-7gmrp" event={"ID":"58cdcd05-e81a-4ed4-8357-249649b17449","Type":"ContainerStarted","Data":"70b268aba91e4e02de538365adb4c126705681e130601bd8739cdafa467c2a68"} Dec 11 14:06:18 crc kubenswrapper[5050]: I1211 14:06:18.124731 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-c7cbb8f79-vlqm4"] Dec 11 14:06:18 crc kubenswrapper[5050]: I1211 14:06:18.136138 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-c7cbb8f79-vlqm4"] Dec 11 14:06:18 crc kubenswrapper[5050]: I1211 14:06:18.397295 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-95f5f6995-clprb" Dec 11 14:06:18 crc kubenswrapper[5050]: I1211 14:06:18.482348 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/63c71212-7318-45f1-94f9-235d861faf86-dns-svc\") pod \"63c71212-7318-45f1-94f9-235d861faf86\" (UID: \"63c71212-7318-45f1-94f9-235d861faf86\") " Dec 11 14:06:18 crc kubenswrapper[5050]: I1211 14:06:18.482504 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/63c71212-7318-45f1-94f9-235d861faf86-config\") pod \"63c71212-7318-45f1-94f9-235d861faf86\" (UID: \"63c71212-7318-45f1-94f9-235d861faf86\") " Dec 11 14:06:18 crc kubenswrapper[5050]: I1211 14:06:18.482533 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pkfqd\" (UniqueName: \"kubernetes.io/projected/63c71212-7318-45f1-94f9-235d861faf86-kube-api-access-pkfqd\") pod \"63c71212-7318-45f1-94f9-235d861faf86\" (UID: \"63c71212-7318-45f1-94f9-235d861faf86\") " Dec 11 14:06:18 crc kubenswrapper[5050]: I1211 14:06:18.489037 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63c71212-7318-45f1-94f9-235d861faf86-kube-api-access-pkfqd" (OuterVolumeSpecName: "kube-api-access-pkfqd") pod "63c71212-7318-45f1-94f9-235d861faf86" (UID: "63c71212-7318-45f1-94f9-235d861faf86"). InnerVolumeSpecName "kube-api-access-pkfqd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 14:06:18 crc kubenswrapper[5050]: I1211 14:06:18.505241 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/63c71212-7318-45f1-94f9-235d861faf86-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "63c71212-7318-45f1-94f9-235d861faf86" (UID: "63c71212-7318-45f1-94f9-235d861faf86"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 14:06:18 crc kubenswrapper[5050]: I1211 14:06:18.506613 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/63c71212-7318-45f1-94f9-235d861faf86-config" (OuterVolumeSpecName: "config") pod "63c71212-7318-45f1-94f9-235d861faf86" (UID: "63c71212-7318-45f1-94f9-235d861faf86"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 14:06:18 crc kubenswrapper[5050]: I1211 14:06:18.584785 5050 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/63c71212-7318-45f1-94f9-235d861faf86-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 11 14:06:18 crc kubenswrapper[5050]: I1211 14:06:18.584824 5050 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/63c71212-7318-45f1-94f9-235d861faf86-config\") on node \"crc\" DevicePath \"\"" Dec 11 14:06:18 crc kubenswrapper[5050]: I1211 14:06:18.584833 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pkfqd\" (UniqueName: \"kubernetes.io/projected/63c71212-7318-45f1-94f9-235d861faf86-kube-api-access-pkfqd\") on node \"crc\" DevicePath \"\"" Dec 11 14:06:18 crc kubenswrapper[5050]: I1211 14:06:18.815750 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Dec 11 14:06:19 crc kubenswrapper[5050]: I1211 14:06:19.053173 5050 generic.go:334] "Generic (PLEG): container finished" podID="ede85a3e-dba0-4946-a12d-3dd38485c815" containerID="014bfd854f87afffdd6f06a10aed570e95644be987cd5f116d9c3f0cd18b1004" exitCode=0 Dec 11 14:06:19 crc kubenswrapper[5050]: I1211 14:06:19.053432 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7878659675-8x5rp" event={"ID":"ede85a3e-dba0-4946-a12d-3dd38485c815","Type":"ContainerDied","Data":"014bfd854f87afffdd6f06a10aed570e95644be987cd5f116d9c3f0cd18b1004"} Dec 11 14:06:19 crc kubenswrapper[5050]: I1211 14:06:19.061729 5050 generic.go:334] "Generic (PLEG): container finished" podID="e1f95cdc-064f-479e-af77-4520647ba58a" containerID="13e5044b8f3009d9cb81b52c6226f5a3afb295d4e0a5f46e1a876049d0afbccb" exitCode=0 Dec 11 14:06:19 crc kubenswrapper[5050]: I1211 14:06:19.062206 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-586b989cdc-6rpgw" event={"ID":"e1f95cdc-064f-479e-af77-4520647ba58a","Type":"ContainerDied","Data":"13e5044b8f3009d9cb81b52c6226f5a3afb295d4e0a5f46e1a876049d0afbccb"} Dec 11 14:06:19 crc kubenswrapper[5050]: I1211 14:06:19.067277 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-95f5f6995-clprb" Dec 11 14:06:19 crc kubenswrapper[5050]: I1211 14:06:19.067486 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-95f5f6995-clprb" event={"ID":"63c71212-7318-45f1-94f9-235d861faf86","Type":"ContainerDied","Data":"02666ca0c6f5f2991c2db0a3550951e8fc3eedf2fe4cbb24ce8ae903c174cc93"} Dec 11 14:06:19 crc kubenswrapper[5050]: I1211 14:06:19.067630 5050 scope.go:117] "RemoveContainer" containerID="310c33ff08c85c39ad9141c243fe9a546dbf86fe7a5e13fe2c70e00c5cb998b0" Dec 11 14:06:19 crc kubenswrapper[5050]: I1211 14:06:19.071614 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-7gmrp" event={"ID":"58cdcd05-e81a-4ed4-8357-249649b17449","Type":"ContainerStarted","Data":"c4c46a2b59720049a8e8b5a6204d154b1116f9503accccc382e625d7853d1369"} Dec 11 14:06:19 crc kubenswrapper[5050]: I1211 14:06:19.088101 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"5371a32d-3998-4ddc-93d6-27e9afdb9712","Type":"ContainerStarted","Data":"9060d643ec7e71ef960f3be115d349c7b177a56db9bd83b2fa67c2629f764c76"} Dec 11 14:06:19 crc kubenswrapper[5050]: I1211 14:06:19.136491 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-7gmrp" podStartSLOduration=3.1364519140000002 podStartE2EDuration="3.136451914s" podCreationTimestamp="2025-12-11 14:06:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 14:06:19.132636282 +0000 UTC m=+1069.976358868" watchObservedRunningTime="2025-12-11 14:06:19.136451914 +0000 UTC m=+1069.980174500" Dec 11 14:06:19 crc kubenswrapper[5050]: I1211 14:06:19.198500 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-95f5f6995-clprb"] Dec 11 14:06:19 crc kubenswrapper[5050]: I1211 14:06:19.243311 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-95f5f6995-clprb"] Dec 11 14:06:19 crc kubenswrapper[5050]: I1211 14:06:19.562933 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b68c591-2f4c-41c3-9ab1-372deed0e388" path="/var/lib/kubelet/pods/4b68c591-2f4c-41c3-9ab1-372deed0e388/volumes" Dec 11 14:06:19 crc kubenswrapper[5050]: I1211 14:06:19.564025 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="63c71212-7318-45f1-94f9-235d861faf86" path="/var/lib/kubelet/pods/63c71212-7318-45f1-94f9-235d861faf86/volumes" Dec 11 14:06:20 crc kubenswrapper[5050]: I1211 14:06:20.097002 5050 generic.go:334] "Generic (PLEG): container finished" podID="c8b3d8cd-9278-4639-86fe-1aa7696fecca" containerID="3f6ab43d7c44f6f5b8c73954b9c98393b51e2f88daf2fc69efb6768d87c72dd3" exitCode=0 Dec 11 14:06:20 crc kubenswrapper[5050]: I1211 14:06:20.097055 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"c8b3d8cd-9278-4639-86fe-1aa7696fecca","Type":"ContainerDied","Data":"3f6ab43d7c44f6f5b8c73954b9c98393b51e2f88daf2fc69efb6768d87c72dd3"} Dec 11 14:06:21 crc kubenswrapper[5050]: I1211 14:06:21.116374 5050 generic.go:334] "Generic (PLEG): container finished" podID="55b7e535-46f6-403b-9cdf-bf172dba97b6" containerID="48b2599eb07fbfc258247376195b9fee5c9054df813653e7696ec82ef8a7ca4b" exitCode=0 Dec 11 14:06:21 crc kubenswrapper[5050]: I1211 14:06:21.116879 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" 
event={"ID":"55b7e535-46f6-403b-9cdf-bf172dba97b6","Type":"ContainerDied","Data":"48b2599eb07fbfc258247376195b9fee5c9054df813653e7696ec82ef8a7ca4b"} Dec 11 14:06:21 crc kubenswrapper[5050]: I1211 14:06:21.125430 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-586b989cdc-6rpgw" event={"ID":"e1f95cdc-064f-479e-af77-4520647ba58a","Type":"ContainerStarted","Data":"06f60c8ff27471025ffbba71d2692f6c21c38b9265b9bfea67a492f45db55f86"} Dec 11 14:06:21 crc kubenswrapper[5050]: I1211 14:06:21.125616 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-586b989cdc-6rpgw" Dec 11 14:06:21 crc kubenswrapper[5050]: I1211 14:06:21.128424 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"5371a32d-3998-4ddc-93d6-27e9afdb9712","Type":"ContainerStarted","Data":"7f95d994fd5fc97f391f6f15efe0c185c18faac91b7536e24f460feb81c83897"} Dec 11 14:06:21 crc kubenswrapper[5050]: I1211 14:06:21.128466 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"5371a32d-3998-4ddc-93d6-27e9afdb9712","Type":"ContainerStarted","Data":"43b0ab8e8f0e3806e99b599e730fb8eac2bff4d265f8afa1bea527256b44c445"} Dec 11 14:06:21 crc kubenswrapper[5050]: I1211 14:06:21.129488 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Dec 11 14:06:21 crc kubenswrapper[5050]: I1211 14:06:21.136597 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7878659675-8x5rp" event={"ID":"ede85a3e-dba0-4946-a12d-3dd38485c815","Type":"ContainerStarted","Data":"875da1a66cf4f9872f5bfa21153f6bf14de20de2c8334794f580a8e13f2166ea"} Dec 11 14:06:21 crc kubenswrapper[5050]: I1211 14:06:21.136685 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7878659675-8x5rp" Dec 11 14:06:21 crc kubenswrapper[5050]: I1211 14:06:21.144030 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"c8b3d8cd-9278-4639-86fe-1aa7696fecca","Type":"ContainerStarted","Data":"8498e742367424482ed9a44ca42a11a58844241a90788c1a5e431a1e93f23131"} Dec 11 14:06:21 crc kubenswrapper[5050]: I1211 14:06:21.169146 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7878659675-8x5rp" podStartSLOduration=5.169120697 podStartE2EDuration="5.169120697s" podCreationTimestamp="2025-12-11 14:06:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 14:06:21.164584215 +0000 UTC m=+1072.008306821" watchObservedRunningTime="2025-12-11 14:06:21.169120697 +0000 UTC m=+1072.012843293" Dec 11 14:06:21 crc kubenswrapper[5050]: I1211 14:06:21.193457 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-586b989cdc-6rpgw" podStartSLOduration=5.193436613 podStartE2EDuration="5.193436613s" podCreationTimestamp="2025-12-11 14:06:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 14:06:21.18960647 +0000 UTC m=+1072.033329076" watchObservedRunningTime="2025-12-11 14:06:21.193436613 +0000 UTC m=+1072.037159199" Dec 11 14:06:21 crc kubenswrapper[5050]: I1211 14:06:21.246523 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.5045385380000003 
podStartE2EDuration="4.246504954s" podCreationTimestamp="2025-12-11 14:06:17 +0000 UTC" firstStartedPulling="2025-12-11 14:06:18.031203885 +0000 UTC m=+1068.874926471" lastFinishedPulling="2025-12-11 14:06:19.773170301 +0000 UTC m=+1070.616892887" observedRunningTime="2025-12-11 14:06:21.242688811 +0000 UTC m=+1072.086411397" watchObservedRunningTime="2025-12-11 14:06:21.246504954 +0000 UTC m=+1072.090227540" Dec 11 14:06:21 crc kubenswrapper[5050]: I1211 14:06:21.286423 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=10.110948142 podStartE2EDuration="49.286403369s" podCreationTimestamp="2025-12-11 14:05:32 +0000 UTC" firstStartedPulling="2025-12-11 14:05:34.655103193 +0000 UTC m=+1025.498825779" lastFinishedPulling="2025-12-11 14:06:13.83055842 +0000 UTC m=+1064.674281006" observedRunningTime="2025-12-11 14:06:21.28457255 +0000 UTC m=+1072.128295136" watchObservedRunningTime="2025-12-11 14:06:21.286403369 +0000 UTC m=+1072.130125955" Dec 11 14:06:22 crc kubenswrapper[5050]: I1211 14:06:22.161369 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"55b7e535-46f6-403b-9cdf-bf172dba97b6","Type":"ContainerStarted","Data":"867f3427f68bf3282e541c138203522b401306d30e252d2e6f7d84ff42514106"} Dec 11 14:06:22 crc kubenswrapper[5050]: I1211 14:06:22.190830 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=-9223371984.663967 podStartE2EDuration="52.190809144s" podCreationTimestamp="2025-12-11 14:05:30 +0000 UTC" firstStartedPulling="2025-12-11 14:05:32.665827184 +0000 UTC m=+1023.509549770" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 14:06:22.190236938 +0000 UTC m=+1073.033959524" watchObservedRunningTime="2025-12-11 14:06:22.190809144 +0000 UTC m=+1073.034531730" Dec 11 14:06:23 crc kubenswrapper[5050]: I1211 14:06:23.178862 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"8e60c3c2-6055-4e50-99b6-4a5f08728b17","Type":"ContainerStarted","Data":"54ab5edbcc14c67a1717bfd1d05ad6d09f2905446ab6d06cdf66777d774f523a"} Dec 11 14:06:23 crc kubenswrapper[5050]: I1211 14:06:23.180392 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Dec 11 14:06:23 crc kubenswrapper[5050]: I1211 14:06:23.205746 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.590629399 podStartE2EDuration="48.205725157s" podCreationTimestamp="2025-12-11 14:05:35 +0000 UTC" firstStartedPulling="2025-12-11 14:05:36.483168971 +0000 UTC m=+1027.326891567" lastFinishedPulling="2025-12-11 14:06:22.098264739 +0000 UTC m=+1072.941987325" observedRunningTime="2025-12-11 14:06:23.201208725 +0000 UTC m=+1074.044931311" watchObservedRunningTime="2025-12-11 14:06:23.205725157 +0000 UTC m=+1074.049447753" Dec 11 14:06:23 crc kubenswrapper[5050]: I1211 14:06:23.906890 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Dec 11 14:06:23 crc kubenswrapper[5050]: I1211 14:06:23.906955 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Dec 11 14:06:25 crc kubenswrapper[5050]: I1211 14:06:25.580330 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-586b989cdc-6rpgw"] Dec 11 
Dec 11 14:06:25 crc kubenswrapper[5050]: I1211 14:06:25.581087 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-586b989cdc-6rpgw" podUID="e1f95cdc-064f-479e-af77-4520647ba58a" containerName="dnsmasq-dns" containerID="cri-o://06f60c8ff27471025ffbba71d2692f6c21c38b9265b9bfea67a492f45db55f86" gracePeriod=10 Dec 11 14:06:25 crc kubenswrapper[5050]: I1211 14:06:25.582285 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-586b989cdc-6rpgw" Dec 11 14:06:25 crc kubenswrapper[5050]: I1211 14:06:25.671744 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-67fdf7998c-9s7vd"] Dec 11 14:06:25 crc kubenswrapper[5050]: E1211 14:06:25.672593 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b68c591-2f4c-41c3-9ab1-372deed0e388" containerName="init" Dec 11 14:06:25 crc kubenswrapper[5050]: I1211 14:06:25.672613 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b68c591-2f4c-41c3-9ab1-372deed0e388" containerName="init" Dec 11 14:06:25 crc kubenswrapper[5050]: E1211 14:06:25.672625 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63c71212-7318-45f1-94f9-235d861faf86" containerName="init" Dec 11 14:06:25 crc kubenswrapper[5050]: I1211 14:06:25.672631 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="63c71212-7318-45f1-94f9-235d861faf86" containerName="init" Dec 11 14:06:25 crc kubenswrapper[5050]: I1211 14:06:25.672805 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b68c591-2f4c-41c3-9ab1-372deed0e388" containerName="init" Dec 11 14:06:25 crc kubenswrapper[5050]: I1211 14:06:25.672836 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="63c71212-7318-45f1-94f9-235d861faf86" containerName="init" Dec 11 14:06:25 crc kubenswrapper[5050]: I1211 14:06:25.674343 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-67fdf7998c-9s7vd" Dec 11 14:06:25 crc kubenswrapper[5050]: I1211 14:06:25.700328 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-67fdf7998c-9s7vd"] Dec 11 14:06:25 crc kubenswrapper[5050]: I1211 14:06:25.724093 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/25f92ac7-0732-460f-bf9a-1e9947e71977-config\") pod \"dnsmasq-dns-67fdf7998c-9s7vd\" (UID: \"25f92ac7-0732-460f-bf9a-1e9947e71977\") " pod="openstack/dnsmasq-dns-67fdf7998c-9s7vd" Dec 11 14:06:25 crc kubenswrapper[5050]: I1211 14:06:25.724195 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/25f92ac7-0732-460f-bf9a-1e9947e71977-ovsdbserver-nb\") pod \"dnsmasq-dns-67fdf7998c-9s7vd\" (UID: \"25f92ac7-0732-460f-bf9a-1e9947e71977\") " pod="openstack/dnsmasq-dns-67fdf7998c-9s7vd" Dec 11 14:06:25 crc kubenswrapper[5050]: I1211 14:06:25.724239 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/25f92ac7-0732-460f-bf9a-1e9947e71977-ovsdbserver-sb\") pod \"dnsmasq-dns-67fdf7998c-9s7vd\" (UID: \"25f92ac7-0732-460f-bf9a-1e9947e71977\") " pod="openstack/dnsmasq-dns-67fdf7998c-9s7vd" Dec 11 14:06:25 crc kubenswrapper[5050]: I1211 14:06:25.724353 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvgmm\" (UniqueName: \"kubernetes.io/projected/25f92ac7-0732-460f-bf9a-1e9947e71977-kube-api-access-wvgmm\") pod \"dnsmasq-dns-67fdf7998c-9s7vd\" (UID: \"25f92ac7-0732-460f-bf9a-1e9947e71977\") " pod="openstack/dnsmasq-dns-67fdf7998c-9s7vd" Dec 11 14:06:25 crc kubenswrapper[5050]: I1211 14:06:25.724589 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/25f92ac7-0732-460f-bf9a-1e9947e71977-dns-svc\") pod \"dnsmasq-dns-67fdf7998c-9s7vd\" (UID: \"25f92ac7-0732-460f-bf9a-1e9947e71977\") " pod="openstack/dnsmasq-dns-67fdf7998c-9s7vd" Dec 11 14:06:25 crc kubenswrapper[5050]: I1211 14:06:25.837946 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/25f92ac7-0732-460f-bf9a-1e9947e71977-ovsdbserver-nb\") pod \"dnsmasq-dns-67fdf7998c-9s7vd\" (UID: \"25f92ac7-0732-460f-bf9a-1e9947e71977\") " pod="openstack/dnsmasq-dns-67fdf7998c-9s7vd" Dec 11 14:06:25 crc kubenswrapper[5050]: I1211 14:06:25.838130 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/25f92ac7-0732-460f-bf9a-1e9947e71977-ovsdbserver-sb\") pod \"dnsmasq-dns-67fdf7998c-9s7vd\" (UID: \"25f92ac7-0732-460f-bf9a-1e9947e71977\") " pod="openstack/dnsmasq-dns-67fdf7998c-9s7vd" Dec 11 14:06:25 crc kubenswrapper[5050]: I1211 14:06:25.838434 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wvgmm\" (UniqueName: \"kubernetes.io/projected/25f92ac7-0732-460f-bf9a-1e9947e71977-kube-api-access-wvgmm\") pod \"dnsmasq-dns-67fdf7998c-9s7vd\" (UID: \"25f92ac7-0732-460f-bf9a-1e9947e71977\") " pod="openstack/dnsmasq-dns-67fdf7998c-9s7vd" Dec 11 14:06:25 crc kubenswrapper[5050]: I1211 14:06:25.838694 5050 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/25f92ac7-0732-460f-bf9a-1e9947e71977-dns-svc\") pod \"dnsmasq-dns-67fdf7998c-9s7vd\" (UID: \"25f92ac7-0732-460f-bf9a-1e9947e71977\") " pod="openstack/dnsmasq-dns-67fdf7998c-9s7vd" Dec 11 14:06:25 crc kubenswrapper[5050]: I1211 14:06:25.838785 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/25f92ac7-0732-460f-bf9a-1e9947e71977-config\") pod \"dnsmasq-dns-67fdf7998c-9s7vd\" (UID: \"25f92ac7-0732-460f-bf9a-1e9947e71977\") " pod="openstack/dnsmasq-dns-67fdf7998c-9s7vd" Dec 11 14:06:25 crc kubenswrapper[5050]: I1211 14:06:25.839496 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/25f92ac7-0732-460f-bf9a-1e9947e71977-ovsdbserver-nb\") pod \"dnsmasq-dns-67fdf7998c-9s7vd\" (UID: \"25f92ac7-0732-460f-bf9a-1e9947e71977\") " pod="openstack/dnsmasq-dns-67fdf7998c-9s7vd" Dec 11 14:06:25 crc kubenswrapper[5050]: I1211 14:06:25.839503 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/25f92ac7-0732-460f-bf9a-1e9947e71977-ovsdbserver-sb\") pod \"dnsmasq-dns-67fdf7998c-9s7vd\" (UID: \"25f92ac7-0732-460f-bf9a-1e9947e71977\") " pod="openstack/dnsmasq-dns-67fdf7998c-9s7vd" Dec 11 14:06:25 crc kubenswrapper[5050]: I1211 14:06:25.844904 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/25f92ac7-0732-460f-bf9a-1e9947e71977-config\") pod \"dnsmasq-dns-67fdf7998c-9s7vd\" (UID: \"25f92ac7-0732-460f-bf9a-1e9947e71977\") " pod="openstack/dnsmasq-dns-67fdf7998c-9s7vd" Dec 11 14:06:25 crc kubenswrapper[5050]: I1211 14:06:25.848239 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/25f92ac7-0732-460f-bf9a-1e9947e71977-dns-svc\") pod \"dnsmasq-dns-67fdf7998c-9s7vd\" (UID: \"25f92ac7-0732-460f-bf9a-1e9947e71977\") " pod="openstack/dnsmasq-dns-67fdf7998c-9s7vd" Dec 11 14:06:25 crc kubenswrapper[5050]: I1211 14:06:25.877110 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wvgmm\" (UniqueName: \"kubernetes.io/projected/25f92ac7-0732-460f-bf9a-1e9947e71977-kube-api-access-wvgmm\") pod \"dnsmasq-dns-67fdf7998c-9s7vd\" (UID: \"25f92ac7-0732-460f-bf9a-1e9947e71977\") " pod="openstack/dnsmasq-dns-67fdf7998c-9s7vd" Dec 11 14:06:26 crc kubenswrapper[5050]: I1211 14:06:26.008222 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-67fdf7998c-9s7vd" Dec 11 14:06:26 crc kubenswrapper[5050]: I1211 14:06:26.676985 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-67fdf7998c-9s7vd"] Dec 11 14:06:26 crc kubenswrapper[5050]: W1211 14:06:26.683183 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod25f92ac7_0732_460f_bf9a_1e9947e71977.slice/crio-b74f69406476557351174e3857a610a7abc48ac645f77b21d85afdc9c412fb43 WatchSource:0}: Error finding container b74f69406476557351174e3857a610a7abc48ac645f77b21d85afdc9c412fb43: Status 404 returned error can't find the container with id b74f69406476557351174e3857a610a7abc48ac645f77b21d85afdc9c412fb43 Dec 11 14:06:26 crc kubenswrapper[5050]: I1211 14:06:26.766470 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Dec 11 14:06:26 crc kubenswrapper[5050]: I1211 14:06:26.773190 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Dec 11 14:06:26 crc kubenswrapper[5050]: I1211 14:06:26.778323 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Dec 11 14:06:26 crc kubenswrapper[5050]: I1211 14:06:26.778615 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Dec 11 14:06:26 crc kubenswrapper[5050]: I1211 14:06:26.778804 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-n94qm" Dec 11 14:06:26 crc kubenswrapper[5050]: I1211 14:06:26.779106 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Dec 11 14:06:26 crc kubenswrapper[5050]: I1211 14:06:26.784451 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Dec 11 14:06:26 crc kubenswrapper[5050]: I1211 14:06:26.861161 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"swift-storage-0\" (UID: \"a5dabf50-534b-45cb-87db-45373930fe82\") " pod="openstack/swift-storage-0" Dec 11 14:06:26 crc kubenswrapper[5050]: I1211 14:06:26.861592 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/a5dabf50-534b-45cb-87db-45373930fe82-etc-swift\") pod \"swift-storage-0\" (UID: \"a5dabf50-534b-45cb-87db-45373930fe82\") " pod="openstack/swift-storage-0" Dec 11 14:06:26 crc kubenswrapper[5050]: I1211 14:06:26.861723 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jtxbm\" (UniqueName: \"kubernetes.io/projected/a5dabf50-534b-45cb-87db-45373930fe82-kube-api-access-jtxbm\") pod \"swift-storage-0\" (UID: \"a5dabf50-534b-45cb-87db-45373930fe82\") " pod="openstack/swift-storage-0" Dec 11 14:06:26 crc kubenswrapper[5050]: I1211 14:06:26.861808 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/a5dabf50-534b-45cb-87db-45373930fe82-cache\") pod \"swift-storage-0\" (UID: \"a5dabf50-534b-45cb-87db-45373930fe82\") " pod="openstack/swift-storage-0" Dec 11 14:06:26 crc kubenswrapper[5050]: I1211 14:06:26.861899 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/a5dabf50-534b-45cb-87db-45373930fe82-lock\") pod \"swift-storage-0\" (UID: \"a5dabf50-534b-45cb-87db-45373930fe82\") " pod="openstack/swift-storage-0" Dec 11 14:06:26 crc kubenswrapper[5050]: I1211 14:06:26.921238 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7878659675-8x5rp" Dec 11 14:06:26 crc kubenswrapper[5050]: I1211 14:06:26.963325 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/a5dabf50-534b-45cb-87db-45373930fe82-etc-swift\") pod \"swift-storage-0\" (UID: \"a5dabf50-534b-45cb-87db-45373930fe82\") " pod="openstack/swift-storage-0" Dec 11 14:06:26 crc kubenswrapper[5050]: I1211 14:06:26.963433 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jtxbm\" (UniqueName: \"kubernetes.io/projected/a5dabf50-534b-45cb-87db-45373930fe82-kube-api-access-jtxbm\") pod \"swift-storage-0\" (UID: \"a5dabf50-534b-45cb-87db-45373930fe82\") " pod="openstack/swift-storage-0" Dec 11 14:06:26 crc kubenswrapper[5050]: I1211 14:06:26.963470 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/a5dabf50-534b-45cb-87db-45373930fe82-cache\") pod \"swift-storage-0\" (UID: \"a5dabf50-534b-45cb-87db-45373930fe82\") " pod="openstack/swift-storage-0" Dec 11 14:06:26 crc kubenswrapper[5050]: I1211 14:06:26.963524 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/a5dabf50-534b-45cb-87db-45373930fe82-lock\") pod \"swift-storage-0\" (UID: \"a5dabf50-534b-45cb-87db-45373930fe82\") " pod="openstack/swift-storage-0" Dec 11 14:06:26 crc kubenswrapper[5050]: I1211 14:06:26.963628 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"swift-storage-0\" (UID: \"a5dabf50-534b-45cb-87db-45373930fe82\") " pod="openstack/swift-storage-0" Dec 11 14:06:26 crc kubenswrapper[5050]: E1211 14:06:26.963624 5050 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Dec 11 14:06:26 crc kubenswrapper[5050]: E1211 14:06:26.963666 5050 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Dec 11 14:06:26 crc kubenswrapper[5050]: E1211 14:06:26.963746 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a5dabf50-534b-45cb-87db-45373930fe82-etc-swift podName:a5dabf50-534b-45cb-87db-45373930fe82 nodeName:}" failed. No retries permitted until 2025-12-11 14:06:27.463719658 +0000 UTC m=+1078.307442244 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/a5dabf50-534b-45cb-87db-45373930fe82-etc-swift") pod "swift-storage-0" (UID: "a5dabf50-534b-45cb-87db-45373930fe82") : configmap "swift-ring-files" not found Dec 11 14:06:26 crc kubenswrapper[5050]: I1211 14:06:26.964421 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/a5dabf50-534b-45cb-87db-45373930fe82-lock\") pod \"swift-storage-0\" (UID: \"a5dabf50-534b-45cb-87db-45373930fe82\") " pod="openstack/swift-storage-0" Dec 11 14:06:26 crc kubenswrapper[5050]: I1211 14:06:26.964549 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/a5dabf50-534b-45cb-87db-45373930fe82-cache\") pod \"swift-storage-0\" (UID: \"a5dabf50-534b-45cb-87db-45373930fe82\") " pod="openstack/swift-storage-0" Dec 11 14:06:26 crc kubenswrapper[5050]: I1211 14:06:26.966099 5050 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"swift-storage-0\" (UID: \"a5dabf50-534b-45cb-87db-45373930fe82\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/swift-storage-0" Dec 11 14:06:27 crc kubenswrapper[5050]: I1211 14:06:27.005813 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jtxbm\" (UniqueName: \"kubernetes.io/projected/a5dabf50-534b-45cb-87db-45373930fe82-kube-api-access-jtxbm\") pod \"swift-storage-0\" (UID: \"a5dabf50-534b-45cb-87db-45373930fe82\") " pod="openstack/swift-storage-0" Dec 11 14:06:27 crc kubenswrapper[5050]: I1211 14:06:27.019038 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"swift-storage-0\" (UID: \"a5dabf50-534b-45cb-87db-45373930fe82\") " pod="openstack/swift-storage-0" Dec 11 14:06:27 crc kubenswrapper[5050]: I1211 14:06:27.081449 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-586b989cdc-6rpgw" podUID="e1f95cdc-064f-479e-af77-4520647ba58a" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.114:5353: connect: connection refused" Dec 11 14:06:27 crc kubenswrapper[5050]: I1211 14:06:27.234444 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67fdf7998c-9s7vd" event={"ID":"25f92ac7-0732-460f-bf9a-1e9947e71977","Type":"ContainerStarted","Data":"b74f69406476557351174e3857a610a7abc48ac645f77b21d85afdc9c412fb43"} Dec 11 14:06:27 crc kubenswrapper[5050]: I1211 14:06:27.473666 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/a5dabf50-534b-45cb-87db-45373930fe82-etc-swift\") pod \"swift-storage-0\" (UID: \"a5dabf50-534b-45cb-87db-45373930fe82\") " pod="openstack/swift-storage-0" Dec 11 14:06:27 crc kubenswrapper[5050]: E1211 14:06:27.473986 5050 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Dec 11 14:06:27 crc kubenswrapper[5050]: E1211 14:06:27.474049 5050 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Dec 11 14:06:27 crc kubenswrapper[5050]: E1211 14:06:27.474134 5050 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/a5dabf50-534b-45cb-87db-45373930fe82-etc-swift podName:a5dabf50-534b-45cb-87db-45373930fe82 nodeName:}" failed. No retries permitted until 2025-12-11 14:06:28.474108688 +0000 UTC m=+1079.317831274 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/a5dabf50-534b-45cb-87db-45373930fe82-etc-swift") pod "swift-storage-0" (UID: "a5dabf50-534b-45cb-87db-45373930fe82") : configmap "swift-ring-files" not found Dec 11 14:06:28 crc kubenswrapper[5050]: I1211 14:06:28.115035 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-586b989cdc-6rpgw" Dec 11 14:06:28 crc kubenswrapper[5050]: I1211 14:06:28.187071 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1f95cdc-064f-479e-af77-4520647ba58a-config\") pod \"e1f95cdc-064f-479e-af77-4520647ba58a\" (UID: \"e1f95cdc-064f-479e-af77-4520647ba58a\") " Dec 11 14:06:28 crc kubenswrapper[5050]: I1211 14:06:28.187147 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e1f95cdc-064f-479e-af77-4520647ba58a-ovsdbserver-sb\") pod \"e1f95cdc-064f-479e-af77-4520647ba58a\" (UID: \"e1f95cdc-064f-479e-af77-4520647ba58a\") " Dec 11 14:06:28 crc kubenswrapper[5050]: I1211 14:06:28.187219 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8zw64\" (UniqueName: \"kubernetes.io/projected/e1f95cdc-064f-479e-af77-4520647ba58a-kube-api-access-8zw64\") pod \"e1f95cdc-064f-479e-af77-4520647ba58a\" (UID: \"e1f95cdc-064f-479e-af77-4520647ba58a\") " Dec 11 14:06:28 crc kubenswrapper[5050]: I1211 14:06:28.187277 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e1f95cdc-064f-479e-af77-4520647ba58a-dns-svc\") pod \"e1f95cdc-064f-479e-af77-4520647ba58a\" (UID: \"e1f95cdc-064f-479e-af77-4520647ba58a\") " Dec 11 14:06:28 crc kubenswrapper[5050]: I1211 14:06:28.187304 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e1f95cdc-064f-479e-af77-4520647ba58a-ovsdbserver-nb\") pod \"e1f95cdc-064f-479e-af77-4520647ba58a\" (UID: \"e1f95cdc-064f-479e-af77-4520647ba58a\") " Dec 11 14:06:28 crc kubenswrapper[5050]: I1211 14:06:28.214876 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1f95cdc-064f-479e-af77-4520647ba58a-kube-api-access-8zw64" (OuterVolumeSpecName: "kube-api-access-8zw64") pod "e1f95cdc-064f-479e-af77-4520647ba58a" (UID: "e1f95cdc-064f-479e-af77-4520647ba58a"). InnerVolumeSpecName "kube-api-access-8zw64". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 14:06:28 crc kubenswrapper[5050]: I1211 14:06:28.245354 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1f95cdc-064f-479e-af77-4520647ba58a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e1f95cdc-064f-479e-af77-4520647ba58a" (UID: "e1f95cdc-064f-479e-af77-4520647ba58a"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 14:06:28 crc kubenswrapper[5050]: I1211 14:06:28.245656 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1f95cdc-064f-479e-af77-4520647ba58a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e1f95cdc-064f-479e-af77-4520647ba58a" (UID: "e1f95cdc-064f-479e-af77-4520647ba58a"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 14:06:28 crc kubenswrapper[5050]: I1211 14:06:28.247221 5050 generic.go:334] "Generic (PLEG): container finished" podID="25f92ac7-0732-460f-bf9a-1e9947e71977" containerID="120c3e0ab30f8cb2dc9dda0bd8ed8b1425fd55a5f7189314bd3f7b5ff0ffb1e8" exitCode=0 Dec 11 14:06:28 crc kubenswrapper[5050]: I1211 14:06:28.247299 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67fdf7998c-9s7vd" event={"ID":"25f92ac7-0732-460f-bf9a-1e9947e71977","Type":"ContainerDied","Data":"120c3e0ab30f8cb2dc9dda0bd8ed8b1425fd55a5f7189314bd3f7b5ff0ffb1e8"} Dec 11 14:06:28 crc kubenswrapper[5050]: I1211 14:06:28.249874 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-586b989cdc-6rpgw" Dec 11 14:06:28 crc kubenswrapper[5050]: I1211 14:06:28.249892 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-586b989cdc-6rpgw" event={"ID":"e1f95cdc-064f-479e-af77-4520647ba58a","Type":"ContainerDied","Data":"06f60c8ff27471025ffbba71d2692f6c21c38b9265b9bfea67a492f45db55f86"} Dec 11 14:06:28 crc kubenswrapper[5050]: I1211 14:06:28.249972 5050 scope.go:117] "RemoveContainer" containerID="06f60c8ff27471025ffbba71d2692f6c21c38b9265b9bfea67a492f45db55f86" Dec 11 14:06:28 crc kubenswrapper[5050]: I1211 14:06:28.249822 5050 generic.go:334] "Generic (PLEG): container finished" podID="e1f95cdc-064f-479e-af77-4520647ba58a" containerID="06f60c8ff27471025ffbba71d2692f6c21c38b9265b9bfea67a492f45db55f86" exitCode=0 Dec 11 14:06:28 crc kubenswrapper[5050]: I1211 14:06:28.250198 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-586b989cdc-6rpgw" event={"ID":"e1f95cdc-064f-479e-af77-4520647ba58a","Type":"ContainerDied","Data":"693dc6e68daa06b3ec7d24b236971e221f918c458e74634a0be300c852b231a4"} Dec 11 14:06:28 crc kubenswrapper[5050]: I1211 14:06:28.262865 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1f95cdc-064f-479e-af77-4520647ba58a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e1f95cdc-064f-479e-af77-4520647ba58a" (UID: "e1f95cdc-064f-479e-af77-4520647ba58a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 14:06:28 crc kubenswrapper[5050]: I1211 14:06:28.265971 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1f95cdc-064f-479e-af77-4520647ba58a-config" (OuterVolumeSpecName: "config") pod "e1f95cdc-064f-479e-af77-4520647ba58a" (UID: "e1f95cdc-064f-479e-af77-4520647ba58a"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 14:06:28 crc kubenswrapper[5050]: I1211 14:06:28.289732 5050 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1f95cdc-064f-479e-af77-4520647ba58a-config\") on node \"crc\" DevicePath \"\"" Dec 11 14:06:28 crc kubenswrapper[5050]: I1211 14:06:28.289778 5050 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e1f95cdc-064f-479e-af77-4520647ba58a-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 11 14:06:28 crc kubenswrapper[5050]: I1211 14:06:28.289790 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8zw64\" (UniqueName: \"kubernetes.io/projected/e1f95cdc-064f-479e-af77-4520647ba58a-kube-api-access-8zw64\") on node \"crc\" DevicePath \"\"" Dec 11 14:06:28 crc kubenswrapper[5050]: I1211 14:06:28.289801 5050 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e1f95cdc-064f-479e-af77-4520647ba58a-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 11 14:06:28 crc kubenswrapper[5050]: I1211 14:06:28.289813 5050 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e1f95cdc-064f-479e-af77-4520647ba58a-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 11 14:06:28 crc kubenswrapper[5050]: I1211 14:06:28.353675 5050 scope.go:117] "RemoveContainer" containerID="13e5044b8f3009d9cb81b52c6226f5a3afb295d4e0a5f46e1a876049d0afbccb" Dec 11 14:06:28 crc kubenswrapper[5050]: I1211 14:06:28.373856 5050 scope.go:117] "RemoveContainer" containerID="06f60c8ff27471025ffbba71d2692f6c21c38b9265b9bfea67a492f45db55f86" Dec 11 14:06:28 crc kubenswrapper[5050]: E1211 14:06:28.374745 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"06f60c8ff27471025ffbba71d2692f6c21c38b9265b9bfea67a492f45db55f86\": container with ID starting with 06f60c8ff27471025ffbba71d2692f6c21c38b9265b9bfea67a492f45db55f86 not found: ID does not exist" containerID="06f60c8ff27471025ffbba71d2692f6c21c38b9265b9bfea67a492f45db55f86" Dec 11 14:06:28 crc kubenswrapper[5050]: I1211 14:06:28.374803 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06f60c8ff27471025ffbba71d2692f6c21c38b9265b9bfea67a492f45db55f86"} err="failed to get container status \"06f60c8ff27471025ffbba71d2692f6c21c38b9265b9bfea67a492f45db55f86\": rpc error: code = NotFound desc = could not find container \"06f60c8ff27471025ffbba71d2692f6c21c38b9265b9bfea67a492f45db55f86\": container with ID starting with 06f60c8ff27471025ffbba71d2692f6c21c38b9265b9bfea67a492f45db55f86 not found: ID does not exist" Dec 11 14:06:28 crc kubenswrapper[5050]: I1211 14:06:28.374848 5050 scope.go:117] "RemoveContainer" containerID="13e5044b8f3009d9cb81b52c6226f5a3afb295d4e0a5f46e1a876049d0afbccb" Dec 11 14:06:28 crc kubenswrapper[5050]: E1211 14:06:28.375305 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"13e5044b8f3009d9cb81b52c6226f5a3afb295d4e0a5f46e1a876049d0afbccb\": container with ID starting with 13e5044b8f3009d9cb81b52c6226f5a3afb295d4e0a5f46e1a876049d0afbccb not found: ID does not exist" containerID="13e5044b8f3009d9cb81b52c6226f5a3afb295d4e0a5f46e1a876049d0afbccb" Dec 11 14:06:28 crc kubenswrapper[5050]: I1211 14:06:28.375353 5050 pod_container_deletor.go:53] "DeleteContainer 
returned error" containerID={"Type":"cri-o","ID":"13e5044b8f3009d9cb81b52c6226f5a3afb295d4e0a5f46e1a876049d0afbccb"} err="failed to get container status \"13e5044b8f3009d9cb81b52c6226f5a3afb295d4e0a5f46e1a876049d0afbccb\": rpc error: code = NotFound desc = could not find container \"13e5044b8f3009d9cb81b52c6226f5a3afb295d4e0a5f46e1a876049d0afbccb\": container with ID starting with 13e5044b8f3009d9cb81b52c6226f5a3afb295d4e0a5f46e1a876049d0afbccb not found: ID does not exist" Dec 11 14:06:28 crc kubenswrapper[5050]: I1211 14:06:28.494942 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/a5dabf50-534b-45cb-87db-45373930fe82-etc-swift\") pod \"swift-storage-0\" (UID: \"a5dabf50-534b-45cb-87db-45373930fe82\") " pod="openstack/swift-storage-0" Dec 11 14:06:28 crc kubenswrapper[5050]: E1211 14:06:28.495308 5050 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Dec 11 14:06:28 crc kubenswrapper[5050]: E1211 14:06:28.495341 5050 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Dec 11 14:06:28 crc kubenswrapper[5050]: E1211 14:06:28.495401 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a5dabf50-534b-45cb-87db-45373930fe82-etc-swift podName:a5dabf50-534b-45cb-87db-45373930fe82 nodeName:}" failed. No retries permitted until 2025-12-11 14:06:30.495383523 +0000 UTC m=+1081.339106109 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/a5dabf50-534b-45cb-87db-45373930fe82-etc-swift") pod "swift-storage-0" (UID: "a5dabf50-534b-45cb-87db-45373930fe82") : configmap "swift-ring-files" not found Dec 11 14:06:28 crc kubenswrapper[5050]: I1211 14:06:28.592297 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-586b989cdc-6rpgw"] Dec 11 14:06:28 crc kubenswrapper[5050]: I1211 14:06:28.621681 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-586b989cdc-6rpgw"] Dec 11 14:06:29 crc kubenswrapper[5050]: I1211 14:06:29.260497 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67fdf7998c-9s7vd" event={"ID":"25f92ac7-0732-460f-bf9a-1e9947e71977","Type":"ContainerStarted","Data":"03a312c5ccc632709f5ff1d78cc0cc48e700ed20a1bdc843cbafee17867af835"} Dec 11 14:06:29 crc kubenswrapper[5050]: I1211 14:06:29.261358 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-67fdf7998c-9s7vd" Dec 11 14:06:29 crc kubenswrapper[5050]: I1211 14:06:29.283217 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-67fdf7998c-9s7vd" podStartSLOduration=4.283192083 podStartE2EDuration="4.283192083s" podCreationTimestamp="2025-12-11 14:06:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 14:06:29.281138728 +0000 UTC m=+1080.124861314" watchObservedRunningTime="2025-12-11 14:06:29.283192083 +0000 UTC m=+1080.126914669" Dec 11 14:06:29 crc kubenswrapper[5050]: I1211 14:06:29.557943 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1f95cdc-064f-479e-af77-4520647ba58a" path="/var/lib/kubelet/pods/e1f95cdc-064f-479e-af77-4520647ba58a/volumes" Dec 11 14:06:30 crc kubenswrapper[5050]: I1211 14:06:30.536719 5050 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/a5dabf50-534b-45cb-87db-45373930fe82-etc-swift\") pod \"swift-storage-0\" (UID: \"a5dabf50-534b-45cb-87db-45373930fe82\") " pod="openstack/swift-storage-0" Dec 11 14:06:30 crc kubenswrapper[5050]: E1211 14:06:30.537036 5050 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Dec 11 14:06:30 crc kubenswrapper[5050]: E1211 14:06:30.537404 5050 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Dec 11 14:06:30 crc kubenswrapper[5050]: E1211 14:06:30.537496 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a5dabf50-534b-45cb-87db-45373930fe82-etc-swift podName:a5dabf50-534b-45cb-87db-45373930fe82 nodeName:}" failed. No retries permitted until 2025-12-11 14:06:34.537468861 +0000 UTC m=+1085.381191457 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/a5dabf50-534b-45cb-87db-45373930fe82-etc-swift") pod "swift-storage-0" (UID: "a5dabf50-534b-45cb-87db-45373930fe82") : configmap "swift-ring-files" not found Dec 11 14:06:30 crc kubenswrapper[5050]: I1211 14:06:30.675255 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-dl7tx"] Dec 11 14:06:30 crc kubenswrapper[5050]: E1211 14:06:30.676003 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1f95cdc-064f-479e-af77-4520647ba58a" containerName="init" Dec 11 14:06:30 crc kubenswrapper[5050]: I1211 14:06:30.676157 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1f95cdc-064f-479e-af77-4520647ba58a" containerName="init" Dec 11 14:06:30 crc kubenswrapper[5050]: E1211 14:06:30.676705 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1f95cdc-064f-479e-af77-4520647ba58a" containerName="dnsmasq-dns" Dec 11 14:06:30 crc kubenswrapper[5050]: I1211 14:06:30.676795 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1f95cdc-064f-479e-af77-4520647ba58a" containerName="dnsmasq-dns" Dec 11 14:06:30 crc kubenswrapper[5050]: I1211 14:06:30.677117 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1f95cdc-064f-479e-af77-4520647ba58a" containerName="dnsmasq-dns" Dec 11 14:06:30 crc kubenswrapper[5050]: I1211 14:06:30.677951 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-dl7tx" Dec 11 14:06:30 crc kubenswrapper[5050]: I1211 14:06:30.680491 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Dec 11 14:06:30 crc kubenswrapper[5050]: I1211 14:06:30.681906 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Dec 11 14:06:30 crc kubenswrapper[5050]: I1211 14:06:30.693826 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Dec 11 14:06:30 crc kubenswrapper[5050]: I1211 14:06:30.699961 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-dl7tx"] Dec 11 14:06:30 crc kubenswrapper[5050]: I1211 14:06:30.742161 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/401d393f-46fc-4150-b785-313d42022d95-combined-ca-bundle\") pod \"swift-ring-rebalance-dl7tx\" (UID: \"401d393f-46fc-4150-b785-313d42022d95\") " pod="openstack/swift-ring-rebalance-dl7tx" Dec 11 14:06:30 crc kubenswrapper[5050]: I1211 14:06:30.742250 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/401d393f-46fc-4150-b785-313d42022d95-swiftconf\") pod \"swift-ring-rebalance-dl7tx\" (UID: \"401d393f-46fc-4150-b785-313d42022d95\") " pod="openstack/swift-ring-rebalance-dl7tx" Dec 11 14:06:30 crc kubenswrapper[5050]: I1211 14:06:30.742274 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/401d393f-46fc-4150-b785-313d42022d95-scripts\") pod \"swift-ring-rebalance-dl7tx\" (UID: \"401d393f-46fc-4150-b785-313d42022d95\") " pod="openstack/swift-ring-rebalance-dl7tx" Dec 11 14:06:30 crc kubenswrapper[5050]: I1211 14:06:30.742532 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/401d393f-46fc-4150-b785-313d42022d95-etc-swift\") pod \"swift-ring-rebalance-dl7tx\" (UID: \"401d393f-46fc-4150-b785-313d42022d95\") " pod="openstack/swift-ring-rebalance-dl7tx" Dec 11 14:06:30 crc kubenswrapper[5050]: I1211 14:06:30.743112 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/401d393f-46fc-4150-b785-313d42022d95-dispersionconf\") pod \"swift-ring-rebalance-dl7tx\" (UID: \"401d393f-46fc-4150-b785-313d42022d95\") " pod="openstack/swift-ring-rebalance-dl7tx" Dec 11 14:06:30 crc kubenswrapper[5050]: I1211 14:06:30.743165 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rtpjw\" (UniqueName: \"kubernetes.io/projected/401d393f-46fc-4150-b785-313d42022d95-kube-api-access-rtpjw\") pod \"swift-ring-rebalance-dl7tx\" (UID: \"401d393f-46fc-4150-b785-313d42022d95\") " pod="openstack/swift-ring-rebalance-dl7tx" Dec 11 14:06:30 crc kubenswrapper[5050]: I1211 14:06:30.743200 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/401d393f-46fc-4150-b785-313d42022d95-ring-data-devices\") pod \"swift-ring-rebalance-dl7tx\" (UID: \"401d393f-46fc-4150-b785-313d42022d95\") " pod="openstack/swift-ring-rebalance-dl7tx" Dec 11 
14:06:30 crc kubenswrapper[5050]: I1211 14:06:30.844926 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/401d393f-46fc-4150-b785-313d42022d95-etc-swift\") pod \"swift-ring-rebalance-dl7tx\" (UID: \"401d393f-46fc-4150-b785-313d42022d95\") " pod="openstack/swift-ring-rebalance-dl7tx" Dec 11 14:06:30 crc kubenswrapper[5050]: I1211 14:06:30.845091 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/401d393f-46fc-4150-b785-313d42022d95-dispersionconf\") pod \"swift-ring-rebalance-dl7tx\" (UID: \"401d393f-46fc-4150-b785-313d42022d95\") " pod="openstack/swift-ring-rebalance-dl7tx" Dec 11 14:06:30 crc kubenswrapper[5050]: I1211 14:06:30.845123 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rtpjw\" (UniqueName: \"kubernetes.io/projected/401d393f-46fc-4150-b785-313d42022d95-kube-api-access-rtpjw\") pod \"swift-ring-rebalance-dl7tx\" (UID: \"401d393f-46fc-4150-b785-313d42022d95\") " pod="openstack/swift-ring-rebalance-dl7tx" Dec 11 14:06:30 crc kubenswrapper[5050]: I1211 14:06:30.845153 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/401d393f-46fc-4150-b785-313d42022d95-ring-data-devices\") pod \"swift-ring-rebalance-dl7tx\" (UID: \"401d393f-46fc-4150-b785-313d42022d95\") " pod="openstack/swift-ring-rebalance-dl7tx" Dec 11 14:06:30 crc kubenswrapper[5050]: I1211 14:06:30.845187 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/401d393f-46fc-4150-b785-313d42022d95-combined-ca-bundle\") pod \"swift-ring-rebalance-dl7tx\" (UID: \"401d393f-46fc-4150-b785-313d42022d95\") " pod="openstack/swift-ring-rebalance-dl7tx" Dec 11 14:06:30 crc kubenswrapper[5050]: I1211 14:06:30.845233 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/401d393f-46fc-4150-b785-313d42022d95-swiftconf\") pod \"swift-ring-rebalance-dl7tx\" (UID: \"401d393f-46fc-4150-b785-313d42022d95\") " pod="openstack/swift-ring-rebalance-dl7tx" Dec 11 14:06:30 crc kubenswrapper[5050]: I1211 14:06:30.845258 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/401d393f-46fc-4150-b785-313d42022d95-scripts\") pod \"swift-ring-rebalance-dl7tx\" (UID: \"401d393f-46fc-4150-b785-313d42022d95\") " pod="openstack/swift-ring-rebalance-dl7tx" Dec 11 14:06:30 crc kubenswrapper[5050]: I1211 14:06:30.846259 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/401d393f-46fc-4150-b785-313d42022d95-scripts\") pod \"swift-ring-rebalance-dl7tx\" (UID: \"401d393f-46fc-4150-b785-313d42022d95\") " pod="openstack/swift-ring-rebalance-dl7tx" Dec 11 14:06:30 crc kubenswrapper[5050]: I1211 14:06:30.846592 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/401d393f-46fc-4150-b785-313d42022d95-etc-swift\") pod \"swift-ring-rebalance-dl7tx\" (UID: \"401d393f-46fc-4150-b785-313d42022d95\") " pod="openstack/swift-ring-rebalance-dl7tx" Dec 11 14:06:30 crc kubenswrapper[5050]: I1211 14:06:30.847295 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/401d393f-46fc-4150-b785-313d42022d95-ring-data-devices\") pod \"swift-ring-rebalance-dl7tx\" (UID: \"401d393f-46fc-4150-b785-313d42022d95\") " pod="openstack/swift-ring-rebalance-dl7tx" Dec 11 14:06:30 crc kubenswrapper[5050]: I1211 14:06:30.854572 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/401d393f-46fc-4150-b785-313d42022d95-combined-ca-bundle\") pod \"swift-ring-rebalance-dl7tx\" (UID: \"401d393f-46fc-4150-b785-313d42022d95\") " pod="openstack/swift-ring-rebalance-dl7tx" Dec 11 14:06:30 crc kubenswrapper[5050]: I1211 14:06:30.855712 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/401d393f-46fc-4150-b785-313d42022d95-swiftconf\") pod \"swift-ring-rebalance-dl7tx\" (UID: \"401d393f-46fc-4150-b785-313d42022d95\") " pod="openstack/swift-ring-rebalance-dl7tx" Dec 11 14:06:30 crc kubenswrapper[5050]: I1211 14:06:30.856164 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/401d393f-46fc-4150-b785-313d42022d95-dispersionconf\") pod \"swift-ring-rebalance-dl7tx\" (UID: \"401d393f-46fc-4150-b785-313d42022d95\") " pod="openstack/swift-ring-rebalance-dl7tx" Dec 11 14:06:30 crc kubenswrapper[5050]: I1211 14:06:30.873381 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rtpjw\" (UniqueName: \"kubernetes.io/projected/401d393f-46fc-4150-b785-313d42022d95-kube-api-access-rtpjw\") pod \"swift-ring-rebalance-dl7tx\" (UID: \"401d393f-46fc-4150-b785-313d42022d95\") " pod="openstack/swift-ring-rebalance-dl7tx" Dec 11 14:06:30 crc kubenswrapper[5050]: I1211 14:06:30.999434 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-n94qm" Dec 11 14:06:31 crc kubenswrapper[5050]: I1211 14:06:31.007801 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-dl7tx" Dec 11 14:06:31 crc kubenswrapper[5050]: I1211 14:06:31.679462 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-dl7tx"] Dec 11 14:06:32 crc kubenswrapper[5050]: I1211 14:06:32.117450 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Dec 11 14:06:32 crc kubenswrapper[5050]: I1211 14:06:32.118537 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Dec 11 14:06:32 crc kubenswrapper[5050]: I1211 14:06:32.211244 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Dec 11 14:06:32 crc kubenswrapper[5050]: I1211 14:06:32.295588 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-dl7tx" event={"ID":"401d393f-46fc-4150-b785-313d42022d95","Type":"ContainerStarted","Data":"5f5081475aa41be0949f86f409cae25a2a1444f4fde56c7ccd5a08d694245172"} Dec 11 14:06:32 crc kubenswrapper[5050]: I1211 14:06:32.371252 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Dec 11 14:06:32 crc kubenswrapper[5050]: I1211 14:06:32.782992 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Dec 11 14:06:33 crc kubenswrapper[5050]: I1211 14:06:33.461052 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-4d4b-account-create-update-4tzvj"] Dec 11 14:06:33 crc kubenswrapper[5050]: I1211 14:06:33.462749 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-4d4b-account-create-update-4tzvj" Dec 11 14:06:33 crc kubenswrapper[5050]: I1211 14:06:33.465903 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Dec 11 14:06:33 crc kubenswrapper[5050]: I1211 14:06:33.479988 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-j9vlr"] Dec 11 14:06:33 crc kubenswrapper[5050]: I1211 14:06:33.481583 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-j9vlr" Dec 11 14:06:33 crc kubenswrapper[5050]: I1211 14:06:33.499165 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-4d4b-account-create-update-4tzvj"] Dec 11 14:06:33 crc kubenswrapper[5050]: I1211 14:06:33.510349 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-j9vlr"] Dec 11 14:06:33 crc kubenswrapper[5050]: I1211 14:06:33.627747 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gck57\" (UniqueName: \"kubernetes.io/projected/1bff77bb-7533-45dd-9c1c-d20368964bc6-kube-api-access-gck57\") pod \"keystone-db-create-j9vlr\" (UID: \"1bff77bb-7533-45dd-9c1c-d20368964bc6\") " pod="openstack/keystone-db-create-j9vlr" Dec 11 14:06:33 crc kubenswrapper[5050]: I1211 14:06:33.627814 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ab97d90f-f85d-4d2b-8b8e-6c62d74b7a07-operator-scripts\") pod \"keystone-4d4b-account-create-update-4tzvj\" (UID: \"ab97d90f-f85d-4d2b-8b8e-6c62d74b7a07\") " pod="openstack/keystone-4d4b-account-create-update-4tzvj" Dec 11 14:06:33 crc kubenswrapper[5050]: I1211 14:06:33.627894 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1bff77bb-7533-45dd-9c1c-d20368964bc6-operator-scripts\") pod \"keystone-db-create-j9vlr\" (UID: \"1bff77bb-7533-45dd-9c1c-d20368964bc6\") " pod="openstack/keystone-db-create-j9vlr" Dec 11 14:06:33 crc kubenswrapper[5050]: I1211 14:06:33.627923 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-klvhz\" (UniqueName: \"kubernetes.io/projected/ab97d90f-f85d-4d2b-8b8e-6c62d74b7a07-kube-api-access-klvhz\") pod \"keystone-4d4b-account-create-update-4tzvj\" (UID: \"ab97d90f-f85d-4d2b-8b8e-6c62d74b7a07\") " pod="openstack/keystone-4d4b-account-create-update-4tzvj" Dec 11 14:06:33 crc kubenswrapper[5050]: I1211 14:06:33.676671 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-p6ttc"] Dec 11 14:06:33 crc kubenswrapper[5050]: I1211 14:06:33.678048 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-p6ttc" Dec 11 14:06:33 crc kubenswrapper[5050]: I1211 14:06:33.685756 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-p6ttc"] Dec 11 14:06:33 crc kubenswrapper[5050]: I1211 14:06:33.730451 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gck57\" (UniqueName: \"kubernetes.io/projected/1bff77bb-7533-45dd-9c1c-d20368964bc6-kube-api-access-gck57\") pod \"keystone-db-create-j9vlr\" (UID: \"1bff77bb-7533-45dd-9c1c-d20368964bc6\") " pod="openstack/keystone-db-create-j9vlr" Dec 11 14:06:33 crc kubenswrapper[5050]: I1211 14:06:33.730508 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ab97d90f-f85d-4d2b-8b8e-6c62d74b7a07-operator-scripts\") pod \"keystone-4d4b-account-create-update-4tzvj\" (UID: \"ab97d90f-f85d-4d2b-8b8e-6c62d74b7a07\") " pod="openstack/keystone-4d4b-account-create-update-4tzvj" Dec 11 14:06:33 crc kubenswrapper[5050]: I1211 14:06:33.730582 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1bff77bb-7533-45dd-9c1c-d20368964bc6-operator-scripts\") pod \"keystone-db-create-j9vlr\" (UID: \"1bff77bb-7533-45dd-9c1c-d20368964bc6\") " pod="openstack/keystone-db-create-j9vlr" Dec 11 14:06:33 crc kubenswrapper[5050]: I1211 14:06:33.730609 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-klvhz\" (UniqueName: \"kubernetes.io/projected/ab97d90f-f85d-4d2b-8b8e-6c62d74b7a07-kube-api-access-klvhz\") pod \"keystone-4d4b-account-create-update-4tzvj\" (UID: \"ab97d90f-f85d-4d2b-8b8e-6c62d74b7a07\") " pod="openstack/keystone-4d4b-account-create-update-4tzvj" Dec 11 14:06:33 crc kubenswrapper[5050]: I1211 14:06:33.732029 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1bff77bb-7533-45dd-9c1c-d20368964bc6-operator-scripts\") pod \"keystone-db-create-j9vlr\" (UID: \"1bff77bb-7533-45dd-9c1c-d20368964bc6\") " pod="openstack/keystone-db-create-j9vlr" Dec 11 14:06:33 crc kubenswrapper[5050]: I1211 14:06:33.732283 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ab97d90f-f85d-4d2b-8b8e-6c62d74b7a07-operator-scripts\") pod \"keystone-4d4b-account-create-update-4tzvj\" (UID: \"ab97d90f-f85d-4d2b-8b8e-6c62d74b7a07\") " pod="openstack/keystone-4d4b-account-create-update-4tzvj" Dec 11 14:06:33 crc kubenswrapper[5050]: I1211 14:06:33.754664 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-klvhz\" (UniqueName: \"kubernetes.io/projected/ab97d90f-f85d-4d2b-8b8e-6c62d74b7a07-kube-api-access-klvhz\") pod \"keystone-4d4b-account-create-update-4tzvj\" (UID: \"ab97d90f-f85d-4d2b-8b8e-6c62d74b7a07\") " pod="openstack/keystone-4d4b-account-create-update-4tzvj" Dec 11 14:06:33 crc kubenswrapper[5050]: I1211 14:06:33.754861 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gck57\" (UniqueName: \"kubernetes.io/projected/1bff77bb-7533-45dd-9c1c-d20368964bc6-kube-api-access-gck57\") pod \"keystone-db-create-j9vlr\" (UID: \"1bff77bb-7533-45dd-9c1c-d20368964bc6\") " pod="openstack/keystone-db-create-j9vlr" Dec 11 14:06:33 crc kubenswrapper[5050]: I1211 14:06:33.796832 5050 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openstack/keystone-4d4b-account-create-update-4tzvj" Dec 11 14:06:33 crc kubenswrapper[5050]: I1211 14:06:33.807928 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-j9vlr" Dec 11 14:06:33 crc kubenswrapper[5050]: I1211 14:06:33.832072 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bf2cdf1b-29dd-484e-ad40-e287454d8534-operator-scripts\") pod \"placement-db-create-p6ttc\" (UID: \"bf2cdf1b-29dd-484e-ad40-e287454d8534\") " pod="openstack/placement-db-create-p6ttc" Dec 11 14:06:33 crc kubenswrapper[5050]: I1211 14:06:33.832182 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6vgs\" (UniqueName: \"kubernetes.io/projected/bf2cdf1b-29dd-484e-ad40-e287454d8534-kube-api-access-j6vgs\") pod \"placement-db-create-p6ttc\" (UID: \"bf2cdf1b-29dd-484e-ad40-e287454d8534\") " pod="openstack/placement-db-create-p6ttc" Dec 11 14:06:33 crc kubenswrapper[5050]: I1211 14:06:33.866749 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-d2f2-account-create-update-v2nsg"] Dec 11 14:06:33 crc kubenswrapper[5050]: I1211 14:06:33.868205 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-d2f2-account-create-update-v2nsg" Dec 11 14:06:33 crc kubenswrapper[5050]: I1211 14:06:33.873996 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Dec 11 14:06:33 crc kubenswrapper[5050]: I1211 14:06:33.879093 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-d2f2-account-create-update-v2nsg"] Dec 11 14:06:33 crc kubenswrapper[5050]: I1211 14:06:33.934255 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bf2cdf1b-29dd-484e-ad40-e287454d8534-operator-scripts\") pod \"placement-db-create-p6ttc\" (UID: \"bf2cdf1b-29dd-484e-ad40-e287454d8534\") " pod="openstack/placement-db-create-p6ttc" Dec 11 14:06:33 crc kubenswrapper[5050]: I1211 14:06:33.934777 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j6vgs\" (UniqueName: \"kubernetes.io/projected/bf2cdf1b-29dd-484e-ad40-e287454d8534-kube-api-access-j6vgs\") pod \"placement-db-create-p6ttc\" (UID: \"bf2cdf1b-29dd-484e-ad40-e287454d8534\") " pod="openstack/placement-db-create-p6ttc" Dec 11 14:06:33 crc kubenswrapper[5050]: I1211 14:06:33.935170 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bf2cdf1b-29dd-484e-ad40-e287454d8534-operator-scripts\") pod \"placement-db-create-p6ttc\" (UID: \"bf2cdf1b-29dd-484e-ad40-e287454d8534\") " pod="openstack/placement-db-create-p6ttc" Dec 11 14:06:33 crc kubenswrapper[5050]: I1211 14:06:33.956500 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j6vgs\" (UniqueName: \"kubernetes.io/projected/bf2cdf1b-29dd-484e-ad40-e287454d8534-kube-api-access-j6vgs\") pod \"placement-db-create-p6ttc\" (UID: \"bf2cdf1b-29dd-484e-ad40-e287454d8534\") " pod="openstack/placement-db-create-p6ttc" Dec 11 14:06:34 crc kubenswrapper[5050]: I1211 14:06:34.001224 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-p6ttc" Dec 11 14:06:34 crc kubenswrapper[5050]: I1211 14:06:34.036176 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ae8fd88c-6bf8-483c-950f-1466ea49c607-operator-scripts\") pod \"placement-d2f2-account-create-update-v2nsg\" (UID: \"ae8fd88c-6bf8-483c-950f-1466ea49c607\") " pod="openstack/placement-d2f2-account-create-update-v2nsg" Dec 11 14:06:34 crc kubenswrapper[5050]: I1211 14:06:34.036289 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b2snr\" (UniqueName: \"kubernetes.io/projected/ae8fd88c-6bf8-483c-950f-1466ea49c607-kube-api-access-b2snr\") pod \"placement-d2f2-account-create-update-v2nsg\" (UID: \"ae8fd88c-6bf8-483c-950f-1466ea49c607\") " pod="openstack/placement-d2f2-account-create-update-v2nsg" Dec 11 14:06:34 crc kubenswrapper[5050]: I1211 14:06:34.105118 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Dec 11 14:06:34 crc kubenswrapper[5050]: I1211 14:06:34.138707 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ae8fd88c-6bf8-483c-950f-1466ea49c607-operator-scripts\") pod \"placement-d2f2-account-create-update-v2nsg\" (UID: \"ae8fd88c-6bf8-483c-950f-1466ea49c607\") " pod="openstack/placement-d2f2-account-create-update-v2nsg" Dec 11 14:06:34 crc kubenswrapper[5050]: I1211 14:06:34.138782 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b2snr\" (UniqueName: \"kubernetes.io/projected/ae8fd88c-6bf8-483c-950f-1466ea49c607-kube-api-access-b2snr\") pod \"placement-d2f2-account-create-update-v2nsg\" (UID: \"ae8fd88c-6bf8-483c-950f-1466ea49c607\") " pod="openstack/placement-d2f2-account-create-update-v2nsg" Dec 11 14:06:34 crc kubenswrapper[5050]: I1211 14:06:34.140379 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ae8fd88c-6bf8-483c-950f-1466ea49c607-operator-scripts\") pod \"placement-d2f2-account-create-update-v2nsg\" (UID: \"ae8fd88c-6bf8-483c-950f-1466ea49c607\") " pod="openstack/placement-d2f2-account-create-update-v2nsg" Dec 11 14:06:34 crc kubenswrapper[5050]: I1211 14:06:34.168654 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b2snr\" (UniqueName: \"kubernetes.io/projected/ae8fd88c-6bf8-483c-950f-1466ea49c607-kube-api-access-b2snr\") pod \"placement-d2f2-account-create-update-v2nsg\" (UID: \"ae8fd88c-6bf8-483c-950f-1466ea49c607\") " pod="openstack/placement-d2f2-account-create-update-v2nsg" Dec 11 14:06:34 crc kubenswrapper[5050]: I1211 14:06:34.237922 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Dec 11 14:06:34 crc kubenswrapper[5050]: I1211 14:06:34.275935 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-d2f2-account-create-update-v2nsg" Dec 11 14:06:34 crc kubenswrapper[5050]: I1211 14:06:34.370344 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-4d4b-account-create-update-4tzvj"] Dec 11 14:06:34 crc kubenswrapper[5050]: W1211 14:06:34.372477 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podab97d90f_f85d_4d2b_8b8e_6c62d74b7a07.slice/crio-a0ce2ca3dbbbb876aee6af604262321abd19980518548435ea12aea70268614e WatchSource:0}: Error finding container a0ce2ca3dbbbb876aee6af604262321abd19980518548435ea12aea70268614e: Status 404 returned error can't find the container with id a0ce2ca3dbbbb876aee6af604262321abd19980518548435ea12aea70268614e Dec 11 14:06:34 crc kubenswrapper[5050]: I1211 14:06:34.441396 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-j9vlr"] Dec 11 14:06:34 crc kubenswrapper[5050]: W1211 14:06:34.443394 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1bff77bb_7533_45dd_9c1c_d20368964bc6.slice/crio-fcd226252c02107d2038a23c2469e1451d3b38d8f0199a2a7be16c387c9d5cda WatchSource:0}: Error finding container fcd226252c02107d2038a23c2469e1451d3b38d8f0199a2a7be16c387c9d5cda: Status 404 returned error can't find the container with id fcd226252c02107d2038a23c2469e1451d3b38d8f0199a2a7be16c387c9d5cda Dec 11 14:06:34 crc kubenswrapper[5050]: I1211 14:06:34.548754 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/a5dabf50-534b-45cb-87db-45373930fe82-etc-swift\") pod \"swift-storage-0\" (UID: \"a5dabf50-534b-45cb-87db-45373930fe82\") " pod="openstack/swift-storage-0" Dec 11 14:06:34 crc kubenswrapper[5050]: E1211 14:06:34.548962 5050 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Dec 11 14:06:34 crc kubenswrapper[5050]: E1211 14:06:34.548981 5050 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Dec 11 14:06:34 crc kubenswrapper[5050]: E1211 14:06:34.549054 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a5dabf50-534b-45cb-87db-45373930fe82-etc-swift podName:a5dabf50-534b-45cb-87db-45373930fe82 nodeName:}" failed. No retries permitted until 2025-12-11 14:06:42.549035529 +0000 UTC m=+1093.392758115 (durationBeforeRetry 8s). 
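
The etc-swift projected volume for swift-storage-0 keeps failing to set up because the swift-ring-files ConfigMap it projects does not exist yet (presumably it is what the swift-ring-rebalance-dl7tx job started at 14:06:32 is meant to publish), and each failure is rescheduled with a doubling durationBeforeRetry: 500ms, 1s, 2s, 4s and now 8s. A minimal sketch of that retry cadence follows; the starting delay matches the log, while the ceiling is an assumption since no cap is visible in this excerpt (this is an illustration, not the kubelet's own backoff code):

    // Illustration of the doubling retry delay visible in the durationBeforeRetry fields above.
    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        delay := 500 * time.Millisecond // first retry interval seen in the log
        maxDelay := 2 * time.Minute     // assumed ceiling, not taken from this log
        for attempt := 1; attempt <= 6; attempt++ {
            fmt.Printf("retry %d scheduled in %v\n", attempt, delay)
            delay *= 2
            if delay > maxDelay {
                delay = maxDelay
            }
        }
    }
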
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/a5dabf50-534b-45cb-87db-45373930fe82-etc-swift") pod "swift-storage-0" (UID: "a5dabf50-534b-45cb-87db-45373930fe82") : configmap "swift-ring-files" not found Dec 11 14:06:34 crc kubenswrapper[5050]: I1211 14:06:34.566787 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-p6ttc"] Dec 11 14:06:34 crc kubenswrapper[5050]: W1211 14:06:34.578578 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbf2cdf1b_29dd_484e_ad40_e287454d8534.slice/crio-a573a1bb9b8d5ae294204674a3acfdfe550195908846616e24a7b47ea627d247 WatchSource:0}: Error finding container a573a1bb9b8d5ae294204674a3acfdfe550195908846616e24a7b47ea627d247: Status 404 returned error can't find the container with id a573a1bb9b8d5ae294204674a3acfdfe550195908846616e24a7b47ea627d247 Dec 11 14:06:34 crc kubenswrapper[5050]: I1211 14:06:34.746702 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-d2f2-account-create-update-v2nsg"] Dec 11 14:06:35 crc kubenswrapper[5050]: I1211 14:06:35.323396 5050 generic.go:334] "Generic (PLEG): container finished" podID="bf2cdf1b-29dd-484e-ad40-e287454d8534" containerID="c4d83d8bcd5be1a638da2b5e58c918cebe164f68a9a419b211a09b8c18d559ca" exitCode=0 Dec 11 14:06:35 crc kubenswrapper[5050]: I1211 14:06:35.323484 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-p6ttc" event={"ID":"bf2cdf1b-29dd-484e-ad40-e287454d8534","Type":"ContainerDied","Data":"c4d83d8bcd5be1a638da2b5e58c918cebe164f68a9a419b211a09b8c18d559ca"} Dec 11 14:06:35 crc kubenswrapper[5050]: I1211 14:06:35.323980 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-p6ttc" event={"ID":"bf2cdf1b-29dd-484e-ad40-e287454d8534","Type":"ContainerStarted","Data":"a573a1bb9b8d5ae294204674a3acfdfe550195908846616e24a7b47ea627d247"} Dec 11 14:06:35 crc kubenswrapper[5050]: I1211 14:06:35.326719 5050 generic.go:334] "Generic (PLEG): container finished" podID="ae8fd88c-6bf8-483c-950f-1466ea49c607" containerID="0dcfe8c85171116ddfd570f8fd726877506b5485bc64bfae0b8fa6e75c5ea7d8" exitCode=0 Dec 11 14:06:35 crc kubenswrapper[5050]: I1211 14:06:35.326820 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-d2f2-account-create-update-v2nsg" event={"ID":"ae8fd88c-6bf8-483c-950f-1466ea49c607","Type":"ContainerDied","Data":"0dcfe8c85171116ddfd570f8fd726877506b5485bc64bfae0b8fa6e75c5ea7d8"} Dec 11 14:06:35 crc kubenswrapper[5050]: I1211 14:06:35.326867 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-d2f2-account-create-update-v2nsg" event={"ID":"ae8fd88c-6bf8-483c-950f-1466ea49c607","Type":"ContainerStarted","Data":"a2d831bbd26d3a1c16377a9e6e4b992058a5132bdec470f8c06026607e1ee3d1"} Dec 11 14:06:35 crc kubenswrapper[5050]: I1211 14:06:35.330987 5050 generic.go:334] "Generic (PLEG): container finished" podID="1bff77bb-7533-45dd-9c1c-d20368964bc6" containerID="0815f59a5ea8b9a1ce5a7cc867a781d8ee6b9ccda7be00873eebb4be9026b907" exitCode=0 Dec 11 14:06:35 crc kubenswrapper[5050]: I1211 14:06:35.331074 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-j9vlr" event={"ID":"1bff77bb-7533-45dd-9c1c-d20368964bc6","Type":"ContainerDied","Data":"0815f59a5ea8b9a1ce5a7cc867a781d8ee6b9ccda7be00873eebb4be9026b907"} Dec 11 14:06:35 crc kubenswrapper[5050]: I1211 14:06:35.331105 5050 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-j9vlr" event={"ID":"1bff77bb-7533-45dd-9c1c-d20368964bc6","Type":"ContainerStarted","Data":"fcd226252c02107d2038a23c2469e1451d3b38d8f0199a2a7be16c387c9d5cda"} Dec 11 14:06:35 crc kubenswrapper[5050]: I1211 14:06:35.333142 5050 generic.go:334] "Generic (PLEG): container finished" podID="ab97d90f-f85d-4d2b-8b8e-6c62d74b7a07" containerID="c1583bcb8328969c751cd4b4397c74eb88bd573926b6bcd1b686432e9ee9696e" exitCode=0 Dec 11 14:06:35 crc kubenswrapper[5050]: I1211 14:06:35.333176 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-4d4b-account-create-update-4tzvj" event={"ID":"ab97d90f-f85d-4d2b-8b8e-6c62d74b7a07","Type":"ContainerDied","Data":"c1583bcb8328969c751cd4b4397c74eb88bd573926b6bcd1b686432e9ee9696e"} Dec 11 14:06:35 crc kubenswrapper[5050]: I1211 14:06:35.333191 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-4d4b-account-create-update-4tzvj" event={"ID":"ab97d90f-f85d-4d2b-8b8e-6c62d74b7a07","Type":"ContainerStarted","Data":"a0ce2ca3dbbbb876aee6af604262321abd19980518548435ea12aea70268614e"} Dec 11 14:06:35 crc kubenswrapper[5050]: I1211 14:06:35.577614 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Dec 11 14:06:36 crc kubenswrapper[5050]: I1211 14:06:36.012803 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-67fdf7998c-9s7vd" Dec 11 14:06:36 crc kubenswrapper[5050]: I1211 14:06:36.083937 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7878659675-8x5rp"] Dec 11 14:06:36 crc kubenswrapper[5050]: I1211 14:06:36.084744 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7878659675-8x5rp" podUID="ede85a3e-dba0-4946-a12d-3dd38485c815" containerName="dnsmasq-dns" containerID="cri-o://875da1a66cf4f9872f5bfa21153f6bf14de20de2c8334794f580a8e13f2166ea" gracePeriod=10 Dec 11 14:06:36 crc kubenswrapper[5050]: I1211 14:06:36.347357 5050 generic.go:334] "Generic (PLEG): container finished" podID="ede85a3e-dba0-4946-a12d-3dd38485c815" containerID="875da1a66cf4f9872f5bfa21153f6bf14de20de2c8334794f580a8e13f2166ea" exitCode=0 Dec 11 14:06:36 crc kubenswrapper[5050]: I1211 14:06:36.347442 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7878659675-8x5rp" event={"ID":"ede85a3e-dba0-4946-a12d-3dd38485c815","Type":"ContainerDied","Data":"875da1a66cf4f9872f5bfa21153f6bf14de20de2c8334794f580a8e13f2166ea"} Dec 11 14:06:36 crc kubenswrapper[5050]: I1211 14:06:36.918600 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-7878659675-8x5rp" podUID="ede85a3e-dba0-4946-a12d-3dd38485c815" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.112:5353: connect: connection refused" Dec 11 14:06:38 crc kubenswrapper[5050]: I1211 14:06:38.250504 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-d2f2-account-create-update-v2nsg" Dec 11 14:06:38 crc kubenswrapper[5050]: I1211 14:06:38.258800 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-p6ttc" Dec 11 14:06:38 crc kubenswrapper[5050]: I1211 14:06:38.288998 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-4d4b-account-create-update-4tzvj" Dec 11 14:06:38 crc kubenswrapper[5050]: I1211 14:06:38.298710 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-j9vlr" Dec 11 14:06:38 crc kubenswrapper[5050]: I1211 14:06:38.348850 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-klvhz\" (UniqueName: \"kubernetes.io/projected/ab97d90f-f85d-4d2b-8b8e-6c62d74b7a07-kube-api-access-klvhz\") pod \"ab97d90f-f85d-4d2b-8b8e-6c62d74b7a07\" (UID: \"ab97d90f-f85d-4d2b-8b8e-6c62d74b7a07\") " Dec 11 14:06:38 crc kubenswrapper[5050]: I1211 14:06:38.348899 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ae8fd88c-6bf8-483c-950f-1466ea49c607-operator-scripts\") pod \"ae8fd88c-6bf8-483c-950f-1466ea49c607\" (UID: \"ae8fd88c-6bf8-483c-950f-1466ea49c607\") " Dec 11 14:06:38 crc kubenswrapper[5050]: I1211 14:06:38.348967 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b2snr\" (UniqueName: \"kubernetes.io/projected/ae8fd88c-6bf8-483c-950f-1466ea49c607-kube-api-access-b2snr\") pod \"ae8fd88c-6bf8-483c-950f-1466ea49c607\" (UID: \"ae8fd88c-6bf8-483c-950f-1466ea49c607\") " Dec 11 14:06:38 crc kubenswrapper[5050]: I1211 14:06:38.350471 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ab97d90f-f85d-4d2b-8b8e-6c62d74b7a07-operator-scripts\") pod \"ab97d90f-f85d-4d2b-8b8e-6c62d74b7a07\" (UID: \"ab97d90f-f85d-4d2b-8b8e-6c62d74b7a07\") " Dec 11 14:06:38 crc kubenswrapper[5050]: I1211 14:06:38.350547 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae8fd88c-6bf8-483c-950f-1466ea49c607-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ae8fd88c-6bf8-483c-950f-1466ea49c607" (UID: "ae8fd88c-6bf8-483c-950f-1466ea49c607"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 14:06:38 crc kubenswrapper[5050]: I1211 14:06:38.350567 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bf2cdf1b-29dd-484e-ad40-e287454d8534-operator-scripts\") pod \"bf2cdf1b-29dd-484e-ad40-e287454d8534\" (UID: \"bf2cdf1b-29dd-484e-ad40-e287454d8534\") " Dec 11 14:06:38 crc kubenswrapper[5050]: I1211 14:06:38.350895 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j6vgs\" (UniqueName: \"kubernetes.io/projected/bf2cdf1b-29dd-484e-ad40-e287454d8534-kube-api-access-j6vgs\") pod \"bf2cdf1b-29dd-484e-ad40-e287454d8534\" (UID: \"bf2cdf1b-29dd-484e-ad40-e287454d8534\") " Dec 11 14:06:38 crc kubenswrapper[5050]: I1211 14:06:38.351238 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab97d90f-f85d-4d2b-8b8e-6c62d74b7a07-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ab97d90f-f85d-4d2b-8b8e-6c62d74b7a07" (UID: "ab97d90f-f85d-4d2b-8b8e-6c62d74b7a07"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 14:06:38 crc kubenswrapper[5050]: I1211 14:06:38.351649 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf2cdf1b-29dd-484e-ad40-e287454d8534-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bf2cdf1b-29dd-484e-ad40-e287454d8534" (UID: "bf2cdf1b-29dd-484e-ad40-e287454d8534"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 14:06:38 crc kubenswrapper[5050]: I1211 14:06:38.352020 5050 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ab97d90f-f85d-4d2b-8b8e-6c62d74b7a07-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 14:06:38 crc kubenswrapper[5050]: I1211 14:06:38.352045 5050 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bf2cdf1b-29dd-484e-ad40-e287454d8534-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 14:06:38 crc kubenswrapper[5050]: I1211 14:06:38.352057 5050 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ae8fd88c-6bf8-483c-950f-1466ea49c607-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 14:06:38 crc kubenswrapper[5050]: I1211 14:06:38.352261 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7878659675-8x5rp" Dec 11 14:06:38 crc kubenswrapper[5050]: I1211 14:06:38.355869 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae8fd88c-6bf8-483c-950f-1466ea49c607-kube-api-access-b2snr" (OuterVolumeSpecName: "kube-api-access-b2snr") pod "ae8fd88c-6bf8-483c-950f-1466ea49c607" (UID: "ae8fd88c-6bf8-483c-950f-1466ea49c607"). InnerVolumeSpecName "kube-api-access-b2snr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 14:06:38 crc kubenswrapper[5050]: I1211 14:06:38.356081 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab97d90f-f85d-4d2b-8b8e-6c62d74b7a07-kube-api-access-klvhz" (OuterVolumeSpecName: "kube-api-access-klvhz") pod "ab97d90f-f85d-4d2b-8b8e-6c62d74b7a07" (UID: "ab97d90f-f85d-4d2b-8b8e-6c62d74b7a07"). InnerVolumeSpecName "kube-api-access-klvhz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 14:06:38 crc kubenswrapper[5050]: I1211 14:06:38.356244 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf2cdf1b-29dd-484e-ad40-e287454d8534-kube-api-access-j6vgs" (OuterVolumeSpecName: "kube-api-access-j6vgs") pod "bf2cdf1b-29dd-484e-ad40-e287454d8534" (UID: "bf2cdf1b-29dd-484e-ad40-e287454d8534"). InnerVolumeSpecName "kube-api-access-j6vgs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 14:06:38 crc kubenswrapper[5050]: I1211 14:06:38.375767 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-4d4b-account-create-update-4tzvj" event={"ID":"ab97d90f-f85d-4d2b-8b8e-6c62d74b7a07","Type":"ContainerDied","Data":"a0ce2ca3dbbbb876aee6af604262321abd19980518548435ea12aea70268614e"} Dec 11 14:06:38 crc kubenswrapper[5050]: I1211 14:06:38.375870 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a0ce2ca3dbbbb876aee6af604262321abd19980518548435ea12aea70268614e" Dec 11 14:06:38 crc kubenswrapper[5050]: I1211 14:06:38.375807 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-4d4b-account-create-update-4tzvj" Dec 11 14:06:38 crc kubenswrapper[5050]: I1211 14:06:38.378390 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-p6ttc" event={"ID":"bf2cdf1b-29dd-484e-ad40-e287454d8534","Type":"ContainerDied","Data":"a573a1bb9b8d5ae294204674a3acfdfe550195908846616e24a7b47ea627d247"} Dec 11 14:06:38 crc kubenswrapper[5050]: I1211 14:06:38.380597 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a573a1bb9b8d5ae294204674a3acfdfe550195908846616e24a7b47ea627d247" Dec 11 14:06:38 crc kubenswrapper[5050]: I1211 14:06:38.380655 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-p6ttc" Dec 11 14:06:38 crc kubenswrapper[5050]: I1211 14:06:38.383189 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7878659675-8x5rp" event={"ID":"ede85a3e-dba0-4946-a12d-3dd38485c815","Type":"ContainerDied","Data":"6184f3f1f403c89a14fae481a6a3119d9673487edecc15d51b4b24ae11f85e38"} Dec 11 14:06:38 crc kubenswrapper[5050]: I1211 14:06:38.383257 5050 scope.go:117] "RemoveContainer" containerID="875da1a66cf4f9872f5bfa21153f6bf14de20de2c8334794f580a8e13f2166ea" Dec 11 14:06:38 crc kubenswrapper[5050]: I1211 14:06:38.383411 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7878659675-8x5rp" Dec 11 14:06:38 crc kubenswrapper[5050]: I1211 14:06:38.385725 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-dl7tx" event={"ID":"401d393f-46fc-4150-b785-313d42022d95","Type":"ContainerStarted","Data":"9363f944bb00bf65a18a77d105a5c3acb2935d1c6a51699593ad0beb061d83c4"} Dec 11 14:06:38 crc kubenswrapper[5050]: I1211 14:06:38.392102 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-d2f2-account-create-update-v2nsg" event={"ID":"ae8fd88c-6bf8-483c-950f-1466ea49c607","Type":"ContainerDied","Data":"a2d831bbd26d3a1c16377a9e6e4b992058a5132bdec470f8c06026607e1ee3d1"} Dec 11 14:06:38 crc kubenswrapper[5050]: I1211 14:06:38.392148 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a2d831bbd26d3a1c16377a9e6e4b992058a5132bdec470f8c06026607e1ee3d1" Dec 11 14:06:38 crc kubenswrapper[5050]: I1211 14:06:38.392349 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-d2f2-account-create-update-v2nsg" Dec 11 14:06:38 crc kubenswrapper[5050]: I1211 14:06:38.400634 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-j9vlr" event={"ID":"1bff77bb-7533-45dd-9c1c-d20368964bc6","Type":"ContainerDied","Data":"fcd226252c02107d2038a23c2469e1451d3b38d8f0199a2a7be16c387c9d5cda"} Dec 11 14:06:38 crc kubenswrapper[5050]: I1211 14:06:38.400712 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fcd226252c02107d2038a23c2469e1451d3b38d8f0199a2a7be16c387c9d5cda" Dec 11 14:06:38 crc kubenswrapper[5050]: I1211 14:06:38.400804 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-j9vlr" Dec 11 14:06:38 crc kubenswrapper[5050]: I1211 14:06:38.431506 5050 scope.go:117] "RemoveContainer" containerID="014bfd854f87afffdd6f06a10aed570e95644be987cd5f116d9c3f0cd18b1004" Dec 11 14:06:38 crc kubenswrapper[5050]: I1211 14:06:38.453627 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ede85a3e-dba0-4946-a12d-3dd38485c815-dns-svc\") pod \"ede85a3e-dba0-4946-a12d-3dd38485c815\" (UID: \"ede85a3e-dba0-4946-a12d-3dd38485c815\") " Dec 11 14:06:38 crc kubenswrapper[5050]: I1211 14:06:38.453673 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ede85a3e-dba0-4946-a12d-3dd38485c815-ovsdbserver-nb\") pod \"ede85a3e-dba0-4946-a12d-3dd38485c815\" (UID: \"ede85a3e-dba0-4946-a12d-3dd38485c815\") " Dec 11 14:06:38 crc kubenswrapper[5050]: I1211 14:06:38.453729 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ede85a3e-dba0-4946-a12d-3dd38485c815-config\") pod \"ede85a3e-dba0-4946-a12d-3dd38485c815\" (UID: \"ede85a3e-dba0-4946-a12d-3dd38485c815\") " Dec 11 14:06:38 crc kubenswrapper[5050]: I1211 14:06:38.453756 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6bn4b\" (UniqueName: \"kubernetes.io/projected/ede85a3e-dba0-4946-a12d-3dd38485c815-kube-api-access-6bn4b\") pod \"ede85a3e-dba0-4946-a12d-3dd38485c815\" (UID: \"ede85a3e-dba0-4946-a12d-3dd38485c815\") " Dec 11 14:06:38 crc kubenswrapper[5050]: I1211 14:06:38.453797 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gck57\" (UniqueName: \"kubernetes.io/projected/1bff77bb-7533-45dd-9c1c-d20368964bc6-kube-api-access-gck57\") pod \"1bff77bb-7533-45dd-9c1c-d20368964bc6\" (UID: \"1bff77bb-7533-45dd-9c1c-d20368964bc6\") " Dec 11 14:06:38 crc kubenswrapper[5050]: I1211 14:06:38.453875 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1bff77bb-7533-45dd-9c1c-d20368964bc6-operator-scripts\") pod \"1bff77bb-7533-45dd-9c1c-d20368964bc6\" (UID: \"1bff77bb-7533-45dd-9c1c-d20368964bc6\") " Dec 11 14:06:38 crc kubenswrapper[5050]: I1211 14:06:38.454256 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j6vgs\" (UniqueName: \"kubernetes.io/projected/bf2cdf1b-29dd-484e-ad40-e287454d8534-kube-api-access-j6vgs\") on node \"crc\" DevicePath \"\"" Dec 11 14:06:38 crc kubenswrapper[5050]: I1211 14:06:38.454269 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-klvhz\" (UniqueName: 
\"kubernetes.io/projected/ab97d90f-f85d-4d2b-8b8e-6c62d74b7a07-kube-api-access-klvhz\") on node \"crc\" DevicePath \"\"" Dec 11 14:06:38 crc kubenswrapper[5050]: I1211 14:06:38.454279 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b2snr\" (UniqueName: \"kubernetes.io/projected/ae8fd88c-6bf8-483c-950f-1466ea49c607-kube-api-access-b2snr\") on node \"crc\" DevicePath \"\"" Dec 11 14:06:38 crc kubenswrapper[5050]: I1211 14:06:38.454730 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bff77bb-7533-45dd-9c1c-d20368964bc6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1bff77bb-7533-45dd-9c1c-d20368964bc6" (UID: "1bff77bb-7533-45dd-9c1c-d20368964bc6"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 14:06:38 crc kubenswrapper[5050]: I1211 14:06:38.458955 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ede85a3e-dba0-4946-a12d-3dd38485c815-kube-api-access-6bn4b" (OuterVolumeSpecName: "kube-api-access-6bn4b") pod "ede85a3e-dba0-4946-a12d-3dd38485c815" (UID: "ede85a3e-dba0-4946-a12d-3dd38485c815"). InnerVolumeSpecName "kube-api-access-6bn4b". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 14:06:38 crc kubenswrapper[5050]: I1211 14:06:38.459289 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bff77bb-7533-45dd-9c1c-d20368964bc6-kube-api-access-gck57" (OuterVolumeSpecName: "kube-api-access-gck57") pod "1bff77bb-7533-45dd-9c1c-d20368964bc6" (UID: "1bff77bb-7533-45dd-9c1c-d20368964bc6"). InnerVolumeSpecName "kube-api-access-gck57". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 14:06:38 crc kubenswrapper[5050]: I1211 14:06:38.506310 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ede85a3e-dba0-4946-a12d-3dd38485c815-config" (OuterVolumeSpecName: "config") pod "ede85a3e-dba0-4946-a12d-3dd38485c815" (UID: "ede85a3e-dba0-4946-a12d-3dd38485c815"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 14:06:38 crc kubenswrapper[5050]: I1211 14:06:38.509550 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ede85a3e-dba0-4946-a12d-3dd38485c815-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ede85a3e-dba0-4946-a12d-3dd38485c815" (UID: "ede85a3e-dba0-4946-a12d-3dd38485c815"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 14:06:38 crc kubenswrapper[5050]: I1211 14:06:38.516662 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ede85a3e-dba0-4946-a12d-3dd38485c815-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ede85a3e-dba0-4946-a12d-3dd38485c815" (UID: "ede85a3e-dba0-4946-a12d-3dd38485c815"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 14:06:38 crc kubenswrapper[5050]: I1211 14:06:38.556488 5050 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ede85a3e-dba0-4946-a12d-3dd38485c815-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 11 14:06:38 crc kubenswrapper[5050]: I1211 14:06:38.556527 5050 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ede85a3e-dba0-4946-a12d-3dd38485c815-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 11 14:06:38 crc kubenswrapper[5050]: I1211 14:06:38.556540 5050 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ede85a3e-dba0-4946-a12d-3dd38485c815-config\") on node \"crc\" DevicePath \"\"" Dec 11 14:06:38 crc kubenswrapper[5050]: I1211 14:06:38.556557 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6bn4b\" (UniqueName: \"kubernetes.io/projected/ede85a3e-dba0-4946-a12d-3dd38485c815-kube-api-access-6bn4b\") on node \"crc\" DevicePath \"\"" Dec 11 14:06:38 crc kubenswrapper[5050]: I1211 14:06:38.556569 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gck57\" (UniqueName: \"kubernetes.io/projected/1bff77bb-7533-45dd-9c1c-d20368964bc6-kube-api-access-gck57\") on node \"crc\" DevicePath \"\"" Dec 11 14:06:38 crc kubenswrapper[5050]: I1211 14:06:38.556577 5050 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1bff77bb-7533-45dd-9c1c-d20368964bc6-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 14:06:38 crc kubenswrapper[5050]: I1211 14:06:38.718705 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7878659675-8x5rp"] Dec 11 14:06:38 crc kubenswrapper[5050]: I1211 14:06:38.738252 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7878659675-8x5rp"] Dec 11 14:06:38 crc kubenswrapper[5050]: I1211 14:06:38.989877 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-ql2cg"] Dec 11 14:06:38 crc kubenswrapper[5050]: E1211 14:06:38.990734 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab97d90f-f85d-4d2b-8b8e-6c62d74b7a07" containerName="mariadb-account-create-update" Dec 11 14:06:38 crc kubenswrapper[5050]: I1211 14:06:38.990761 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab97d90f-f85d-4d2b-8b8e-6c62d74b7a07" containerName="mariadb-account-create-update" Dec 11 14:06:38 crc kubenswrapper[5050]: E1211 14:06:38.990786 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1bff77bb-7533-45dd-9c1c-d20368964bc6" containerName="mariadb-database-create" Dec 11 14:06:38 crc kubenswrapper[5050]: I1211 14:06:38.990795 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="1bff77bb-7533-45dd-9c1c-d20368964bc6" containerName="mariadb-database-create" Dec 11 14:06:38 crc kubenswrapper[5050]: E1211 14:06:38.990819 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae8fd88c-6bf8-483c-950f-1466ea49c607" containerName="mariadb-account-create-update" Dec 11 14:06:38 crc kubenswrapper[5050]: I1211 14:06:38.990828 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae8fd88c-6bf8-483c-950f-1466ea49c607" containerName="mariadb-account-create-update" Dec 11 14:06:38 crc kubenswrapper[5050]: E1211 14:06:38.990847 5050 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="ede85a3e-dba0-4946-a12d-3dd38485c815" containerName="init" Dec 11 14:06:38 crc kubenswrapper[5050]: I1211 14:06:38.990856 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="ede85a3e-dba0-4946-a12d-3dd38485c815" containerName="init" Dec 11 14:06:38 crc kubenswrapper[5050]: E1211 14:06:38.990865 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf2cdf1b-29dd-484e-ad40-e287454d8534" containerName="mariadb-database-create" Dec 11 14:06:38 crc kubenswrapper[5050]: I1211 14:06:38.990874 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf2cdf1b-29dd-484e-ad40-e287454d8534" containerName="mariadb-database-create" Dec 11 14:06:38 crc kubenswrapper[5050]: E1211 14:06:38.990900 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ede85a3e-dba0-4946-a12d-3dd38485c815" containerName="dnsmasq-dns" Dec 11 14:06:38 crc kubenswrapper[5050]: I1211 14:06:38.990910 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="ede85a3e-dba0-4946-a12d-3dd38485c815" containerName="dnsmasq-dns" Dec 11 14:06:38 crc kubenswrapper[5050]: I1211 14:06:38.991122 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf2cdf1b-29dd-484e-ad40-e287454d8534" containerName="mariadb-database-create" Dec 11 14:06:38 crc kubenswrapper[5050]: I1211 14:06:38.991141 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae8fd88c-6bf8-483c-950f-1466ea49c607" containerName="mariadb-account-create-update" Dec 11 14:06:38 crc kubenswrapper[5050]: I1211 14:06:38.991156 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="ede85a3e-dba0-4946-a12d-3dd38485c815" containerName="dnsmasq-dns" Dec 11 14:06:38 crc kubenswrapper[5050]: I1211 14:06:38.991171 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab97d90f-f85d-4d2b-8b8e-6c62d74b7a07" containerName="mariadb-account-create-update" Dec 11 14:06:38 crc kubenswrapper[5050]: I1211 14:06:38.991183 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="1bff77bb-7533-45dd-9c1c-d20368964bc6" containerName="mariadb-database-create" Dec 11 14:06:38 crc kubenswrapper[5050]: I1211 14:06:38.991859 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-ql2cg" Dec 11 14:06:39 crc kubenswrapper[5050]: I1211 14:06:39.005705 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-ql2cg"] Dec 11 14:06:39 crc kubenswrapper[5050]: I1211 14:06:39.077176 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mpc89\" (UniqueName: \"kubernetes.io/projected/392d68e3-ec0f-4e16-b58f-d1bbdbce674f-kube-api-access-mpc89\") pod \"glance-db-create-ql2cg\" (UID: \"392d68e3-ec0f-4e16-b58f-d1bbdbce674f\") " pod="openstack/glance-db-create-ql2cg" Dec 11 14:06:39 crc kubenswrapper[5050]: I1211 14:06:39.077259 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/392d68e3-ec0f-4e16-b58f-d1bbdbce674f-operator-scripts\") pod \"glance-db-create-ql2cg\" (UID: \"392d68e3-ec0f-4e16-b58f-d1bbdbce674f\") " pod="openstack/glance-db-create-ql2cg" Dec 11 14:06:39 crc kubenswrapper[5050]: I1211 14:06:39.080867 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-2653-account-create-update-gzdx8"] Dec 11 14:06:39 crc kubenswrapper[5050]: I1211 14:06:39.082251 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-2653-account-create-update-gzdx8" Dec 11 14:06:39 crc kubenswrapper[5050]: I1211 14:06:39.086443 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Dec 11 14:06:39 crc kubenswrapper[5050]: I1211 14:06:39.096454 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-2653-account-create-update-gzdx8"] Dec 11 14:06:39 crc kubenswrapper[5050]: I1211 14:06:39.179080 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mpc89\" (UniqueName: \"kubernetes.io/projected/392d68e3-ec0f-4e16-b58f-d1bbdbce674f-kube-api-access-mpc89\") pod \"glance-db-create-ql2cg\" (UID: \"392d68e3-ec0f-4e16-b58f-d1bbdbce674f\") " pod="openstack/glance-db-create-ql2cg" Dec 11 14:06:39 crc kubenswrapper[5050]: I1211 14:06:39.179158 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/392d68e3-ec0f-4e16-b58f-d1bbdbce674f-operator-scripts\") pod \"glance-db-create-ql2cg\" (UID: \"392d68e3-ec0f-4e16-b58f-d1bbdbce674f\") " pod="openstack/glance-db-create-ql2cg" Dec 11 14:06:39 crc kubenswrapper[5050]: I1211 14:06:39.179230 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prs4p\" (UniqueName: \"kubernetes.io/projected/f9d5e3e1-bee1-4425-a1ed-a6234cf3db49-kube-api-access-prs4p\") pod \"glance-2653-account-create-update-gzdx8\" (UID: \"f9d5e3e1-bee1-4425-a1ed-a6234cf3db49\") " pod="openstack/glance-2653-account-create-update-gzdx8" Dec 11 14:06:39 crc kubenswrapper[5050]: I1211 14:06:39.179298 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f9d5e3e1-bee1-4425-a1ed-a6234cf3db49-operator-scripts\") pod \"glance-2653-account-create-update-gzdx8\" (UID: \"f9d5e3e1-bee1-4425-a1ed-a6234cf3db49\") " pod="openstack/glance-2653-account-create-update-gzdx8" Dec 11 14:06:39 crc kubenswrapper[5050]: I1211 14:06:39.180508 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/392d68e3-ec0f-4e16-b58f-d1bbdbce674f-operator-scripts\") pod \"glance-db-create-ql2cg\" (UID: \"392d68e3-ec0f-4e16-b58f-d1bbdbce674f\") " pod="openstack/glance-db-create-ql2cg" Dec 11 14:06:39 crc kubenswrapper[5050]: I1211 14:06:39.209573 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mpc89\" (UniqueName: \"kubernetes.io/projected/392d68e3-ec0f-4e16-b58f-d1bbdbce674f-kube-api-access-mpc89\") pod \"glance-db-create-ql2cg\" (UID: \"392d68e3-ec0f-4e16-b58f-d1bbdbce674f\") " pod="openstack/glance-db-create-ql2cg" Dec 11 14:06:39 crc kubenswrapper[5050]: I1211 14:06:39.281983 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f9d5e3e1-bee1-4425-a1ed-a6234cf3db49-operator-scripts\") pod \"glance-2653-account-create-update-gzdx8\" (UID: \"f9d5e3e1-bee1-4425-a1ed-a6234cf3db49\") " pod="openstack/glance-2653-account-create-update-gzdx8" Dec 11 14:06:39 crc kubenswrapper[5050]: I1211 14:06:39.282210 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-prs4p\" (UniqueName: \"kubernetes.io/projected/f9d5e3e1-bee1-4425-a1ed-a6234cf3db49-kube-api-access-prs4p\") pod 
\"glance-2653-account-create-update-gzdx8\" (UID: \"f9d5e3e1-bee1-4425-a1ed-a6234cf3db49\") " pod="openstack/glance-2653-account-create-update-gzdx8" Dec 11 14:06:39 crc kubenswrapper[5050]: I1211 14:06:39.284029 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f9d5e3e1-bee1-4425-a1ed-a6234cf3db49-operator-scripts\") pod \"glance-2653-account-create-update-gzdx8\" (UID: \"f9d5e3e1-bee1-4425-a1ed-a6234cf3db49\") " pod="openstack/glance-2653-account-create-update-gzdx8" Dec 11 14:06:39 crc kubenswrapper[5050]: I1211 14:06:39.312721 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-ql2cg" Dec 11 14:06:39 crc kubenswrapper[5050]: I1211 14:06:39.315374 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-prs4p\" (UniqueName: \"kubernetes.io/projected/f9d5e3e1-bee1-4425-a1ed-a6234cf3db49-kube-api-access-prs4p\") pod \"glance-2653-account-create-update-gzdx8\" (UID: \"f9d5e3e1-bee1-4425-a1ed-a6234cf3db49\") " pod="openstack/glance-2653-account-create-update-gzdx8" Dec 11 14:06:39 crc kubenswrapper[5050]: I1211 14:06:39.398501 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-2653-account-create-update-gzdx8" Dec 11 14:06:39 crc kubenswrapper[5050]: I1211 14:06:39.436344 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-dl7tx" podStartSLOduration=2.953453698 podStartE2EDuration="9.436311957s" podCreationTimestamp="2025-12-11 14:06:30 +0000 UTC" firstStartedPulling="2025-12-11 14:06:31.605482945 +0000 UTC m=+1082.449205521" lastFinishedPulling="2025-12-11 14:06:38.088341194 +0000 UTC m=+1088.932063780" observedRunningTime="2025-12-11 14:06:39.432439932 +0000 UTC m=+1090.276162518" watchObservedRunningTime="2025-12-11 14:06:39.436311957 +0000 UTC m=+1090.280034543" Dec 11 14:06:39 crc kubenswrapper[5050]: I1211 14:06:39.558367 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ede85a3e-dba0-4946-a12d-3dd38485c815" path="/var/lib/kubelet/pods/ede85a3e-dba0-4946-a12d-3dd38485c815/volumes" Dec 11 14:06:39 crc kubenswrapper[5050]: I1211 14:06:39.685884 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-2653-account-create-update-gzdx8"] Dec 11 14:06:39 crc kubenswrapper[5050]: W1211 14:06:39.695227 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf9d5e3e1_bee1_4425_a1ed_a6234cf3db49.slice/crio-a945fd5b1af6b7e75a12c1675e7d5addf6674efefe657ffe2b8cf83a3d061c7c WatchSource:0}: Error finding container a945fd5b1af6b7e75a12c1675e7d5addf6674efefe657ffe2b8cf83a3d061c7c: Status 404 returned error can't find the container with id a945fd5b1af6b7e75a12c1675e7d5addf6674efefe657ffe2b8cf83a3d061c7c Dec 11 14:06:39 crc kubenswrapper[5050]: I1211 14:06:39.805638 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-ql2cg"] Dec 11 14:06:39 crc kubenswrapper[5050]: W1211 14:06:39.808044 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod392d68e3_ec0f_4e16_b58f_d1bbdbce674f.slice/crio-0b80fd3cb343aabed65ae85b2ed4c5c165adf8daff6b615b69783d0e58a6a4b0 WatchSource:0}: Error finding container 0b80fd3cb343aabed65ae85b2ed4c5c165adf8daff6b615b69783d0e58a6a4b0: Status 404 returned error can't find the 
container with id 0b80fd3cb343aabed65ae85b2ed4c5c165adf8daff6b615b69783d0e58a6a4b0 Dec 11 14:06:40 crc kubenswrapper[5050]: I1211 14:06:40.424599 5050 generic.go:334] "Generic (PLEG): container finished" podID="392d68e3-ec0f-4e16-b58f-d1bbdbce674f" containerID="3624e6393f8a6eadd5c4286428ab748ba1155fa0943854327019eb997eadc689" exitCode=0 Dec 11 14:06:40 crc kubenswrapper[5050]: I1211 14:06:40.424727 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-ql2cg" event={"ID":"392d68e3-ec0f-4e16-b58f-d1bbdbce674f","Type":"ContainerDied","Data":"3624e6393f8a6eadd5c4286428ab748ba1155fa0943854327019eb997eadc689"} Dec 11 14:06:40 crc kubenswrapper[5050]: I1211 14:06:40.424977 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-ql2cg" event={"ID":"392d68e3-ec0f-4e16-b58f-d1bbdbce674f","Type":"ContainerStarted","Data":"0b80fd3cb343aabed65ae85b2ed4c5c165adf8daff6b615b69783d0e58a6a4b0"} Dec 11 14:06:40 crc kubenswrapper[5050]: I1211 14:06:40.428410 5050 generic.go:334] "Generic (PLEG): container finished" podID="f9d5e3e1-bee1-4425-a1ed-a6234cf3db49" containerID="38e653b3d3373170f5f490629772c2956f706a2d09203fe68bfcbb06130e8f4e" exitCode=0 Dec 11 14:06:40 crc kubenswrapper[5050]: I1211 14:06:40.428525 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-2653-account-create-update-gzdx8" event={"ID":"f9d5e3e1-bee1-4425-a1ed-a6234cf3db49","Type":"ContainerDied","Data":"38e653b3d3373170f5f490629772c2956f706a2d09203fe68bfcbb06130e8f4e"} Dec 11 14:06:40 crc kubenswrapper[5050]: I1211 14:06:40.428546 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-2653-account-create-update-gzdx8" event={"ID":"f9d5e3e1-bee1-4425-a1ed-a6234cf3db49","Type":"ContainerStarted","Data":"a945fd5b1af6b7e75a12c1675e7d5addf6674efefe657ffe2b8cf83a3d061c7c"} Dec 11 14:06:41 crc kubenswrapper[5050]: I1211 14:06:41.877075 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-2653-account-create-update-gzdx8" Dec 11 14:06:41 crc kubenswrapper[5050]: I1211 14:06:41.884177 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-ql2cg" Dec 11 14:06:42 crc kubenswrapper[5050]: I1211 14:06:42.036817 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mpc89\" (UniqueName: \"kubernetes.io/projected/392d68e3-ec0f-4e16-b58f-d1bbdbce674f-kube-api-access-mpc89\") pod \"392d68e3-ec0f-4e16-b58f-d1bbdbce674f\" (UID: \"392d68e3-ec0f-4e16-b58f-d1bbdbce674f\") " Dec 11 14:06:42 crc kubenswrapper[5050]: I1211 14:06:42.036974 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/392d68e3-ec0f-4e16-b58f-d1bbdbce674f-operator-scripts\") pod \"392d68e3-ec0f-4e16-b58f-d1bbdbce674f\" (UID: \"392d68e3-ec0f-4e16-b58f-d1bbdbce674f\") " Dec 11 14:06:42 crc kubenswrapper[5050]: I1211 14:06:42.037225 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-prs4p\" (UniqueName: \"kubernetes.io/projected/f9d5e3e1-bee1-4425-a1ed-a6234cf3db49-kube-api-access-prs4p\") pod \"f9d5e3e1-bee1-4425-a1ed-a6234cf3db49\" (UID: \"f9d5e3e1-bee1-4425-a1ed-a6234cf3db49\") " Dec 11 14:06:42 crc kubenswrapper[5050]: I1211 14:06:42.037265 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f9d5e3e1-bee1-4425-a1ed-a6234cf3db49-operator-scripts\") pod \"f9d5e3e1-bee1-4425-a1ed-a6234cf3db49\" (UID: \"f9d5e3e1-bee1-4425-a1ed-a6234cf3db49\") " Dec 11 14:06:42 crc kubenswrapper[5050]: I1211 14:06:42.037872 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/392d68e3-ec0f-4e16-b58f-d1bbdbce674f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "392d68e3-ec0f-4e16-b58f-d1bbdbce674f" (UID: "392d68e3-ec0f-4e16-b58f-d1bbdbce674f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 14:06:42 crc kubenswrapper[5050]: I1211 14:06:42.038180 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f9d5e3e1-bee1-4425-a1ed-a6234cf3db49-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f9d5e3e1-bee1-4425-a1ed-a6234cf3db49" (UID: "f9d5e3e1-bee1-4425-a1ed-a6234cf3db49"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 14:06:42 crc kubenswrapper[5050]: I1211 14:06:42.051853 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9d5e3e1-bee1-4425-a1ed-a6234cf3db49-kube-api-access-prs4p" (OuterVolumeSpecName: "kube-api-access-prs4p") pod "f9d5e3e1-bee1-4425-a1ed-a6234cf3db49" (UID: "f9d5e3e1-bee1-4425-a1ed-a6234cf3db49"). InnerVolumeSpecName "kube-api-access-prs4p". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 14:06:42 crc kubenswrapper[5050]: I1211 14:06:42.051988 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/392d68e3-ec0f-4e16-b58f-d1bbdbce674f-kube-api-access-mpc89" (OuterVolumeSpecName: "kube-api-access-mpc89") pod "392d68e3-ec0f-4e16-b58f-d1bbdbce674f" (UID: "392d68e3-ec0f-4e16-b58f-d1bbdbce674f"). InnerVolumeSpecName "kube-api-access-mpc89". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 14:06:42 crc kubenswrapper[5050]: I1211 14:06:42.139150 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-prs4p\" (UniqueName: \"kubernetes.io/projected/f9d5e3e1-bee1-4425-a1ed-a6234cf3db49-kube-api-access-prs4p\") on node \"crc\" DevicePath \"\"" Dec 11 14:06:42 crc kubenswrapper[5050]: I1211 14:06:42.139190 5050 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f9d5e3e1-bee1-4425-a1ed-a6234cf3db49-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 14:06:42 crc kubenswrapper[5050]: I1211 14:06:42.139200 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mpc89\" (UniqueName: \"kubernetes.io/projected/392d68e3-ec0f-4e16-b58f-d1bbdbce674f-kube-api-access-mpc89\") on node \"crc\" DevicePath \"\"" Dec 11 14:06:42 crc kubenswrapper[5050]: I1211 14:06:42.139211 5050 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/392d68e3-ec0f-4e16-b58f-d1bbdbce674f-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 14:06:42 crc kubenswrapper[5050]: I1211 14:06:42.448777 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-ql2cg" Dec 11 14:06:42 crc kubenswrapper[5050]: I1211 14:06:42.448778 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-ql2cg" event={"ID":"392d68e3-ec0f-4e16-b58f-d1bbdbce674f","Type":"ContainerDied","Data":"0b80fd3cb343aabed65ae85b2ed4c5c165adf8daff6b615b69783d0e58a6a4b0"} Dec 11 14:06:42 crc kubenswrapper[5050]: I1211 14:06:42.448889 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0b80fd3cb343aabed65ae85b2ed4c5c165adf8daff6b615b69783d0e58a6a4b0" Dec 11 14:06:42 crc kubenswrapper[5050]: I1211 14:06:42.451325 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-2653-account-create-update-gzdx8" event={"ID":"f9d5e3e1-bee1-4425-a1ed-a6234cf3db49","Type":"ContainerDied","Data":"a945fd5b1af6b7e75a12c1675e7d5addf6674efefe657ffe2b8cf83a3d061c7c"} Dec 11 14:06:42 crc kubenswrapper[5050]: I1211 14:06:42.451383 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a945fd5b1af6b7e75a12c1675e7d5addf6674efefe657ffe2b8cf83a3d061c7c" Dec 11 14:06:42 crc kubenswrapper[5050]: I1211 14:06:42.451346 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-2653-account-create-update-gzdx8" Dec 11 14:06:42 crc kubenswrapper[5050]: I1211 14:06:42.648673 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/a5dabf50-534b-45cb-87db-45373930fe82-etc-swift\") pod \"swift-storage-0\" (UID: \"a5dabf50-534b-45cb-87db-45373930fe82\") " pod="openstack/swift-storage-0" Dec 11 14:06:42 crc kubenswrapper[5050]: E1211 14:06:42.648936 5050 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Dec 11 14:06:42 crc kubenswrapper[5050]: E1211 14:06:42.648981 5050 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Dec 11 14:06:42 crc kubenswrapper[5050]: E1211 14:06:42.649051 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a5dabf50-534b-45cb-87db-45373930fe82-etc-swift podName:a5dabf50-534b-45cb-87db-45373930fe82 nodeName:}" failed. No retries permitted until 2025-12-11 14:06:58.649033675 +0000 UTC m=+1109.492756261 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/a5dabf50-534b-45cb-87db-45373930fe82-etc-swift") pod "swift-storage-0" (UID: "a5dabf50-534b-45cb-87db-45373930fe82") : configmap "swift-ring-files" not found Dec 11 14:06:43 crc kubenswrapper[5050]: I1211 14:06:43.932922 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-47tvr" podUID="01fa4d89-aae5-451a-8798-2700053fe3d4" containerName="ovn-controller" probeResult="failure" output=< Dec 11 14:06:43 crc kubenswrapper[5050]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Dec 11 14:06:43 crc kubenswrapper[5050]: > Dec 11 14:06:43 crc kubenswrapper[5050]: I1211 14:06:43.983247 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-pjzpq" Dec 11 14:06:43 crc kubenswrapper[5050]: I1211 14:06:43.983343 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-pjzpq" Dec 11 14:06:44 crc kubenswrapper[5050]: I1211 14:06:44.195074 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-47tvr-config-dqtg8"] Dec 11 14:06:44 crc kubenswrapper[5050]: E1211 14:06:44.195726 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="392d68e3-ec0f-4e16-b58f-d1bbdbce674f" containerName="mariadb-database-create" Dec 11 14:06:44 crc kubenswrapper[5050]: I1211 14:06:44.195751 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="392d68e3-ec0f-4e16-b58f-d1bbdbce674f" containerName="mariadb-database-create" Dec 11 14:06:44 crc kubenswrapper[5050]: E1211 14:06:44.195779 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9d5e3e1-bee1-4425-a1ed-a6234cf3db49" containerName="mariadb-account-create-update" Dec 11 14:06:44 crc kubenswrapper[5050]: I1211 14:06:44.195787 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9d5e3e1-bee1-4425-a1ed-a6234cf3db49" containerName="mariadb-account-create-update" Dec 11 14:06:44 crc kubenswrapper[5050]: I1211 14:06:44.218585 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="392d68e3-ec0f-4e16-b58f-d1bbdbce674f" containerName="mariadb-database-create" Dec 11 14:06:44 crc kubenswrapper[5050]: I1211 14:06:44.218651 5050 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="f9d5e3e1-bee1-4425-a1ed-a6234cf3db49" containerName="mariadb-account-create-update" Dec 11 14:06:44 crc kubenswrapper[5050]: I1211 14:06:44.220123 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-47tvr-config-dqtg8" Dec 11 14:06:44 crc kubenswrapper[5050]: I1211 14:06:44.224305 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Dec 11 14:06:44 crc kubenswrapper[5050]: I1211 14:06:44.234544 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-47tvr-config-dqtg8"] Dec 11 14:06:44 crc kubenswrapper[5050]: I1211 14:06:44.280374 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/718f7a91-130a-4698-8eef-7db2f780bb12-var-log-ovn\") pod \"ovn-controller-47tvr-config-dqtg8\" (UID: \"718f7a91-130a-4698-8eef-7db2f780bb12\") " pod="openstack/ovn-controller-47tvr-config-dqtg8" Dec 11 14:06:44 crc kubenswrapper[5050]: I1211 14:06:44.280515 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/718f7a91-130a-4698-8eef-7db2f780bb12-var-run\") pod \"ovn-controller-47tvr-config-dqtg8\" (UID: \"718f7a91-130a-4698-8eef-7db2f780bb12\") " pod="openstack/ovn-controller-47tvr-config-dqtg8" Dec 11 14:06:44 crc kubenswrapper[5050]: I1211 14:06:44.280651 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/718f7a91-130a-4698-8eef-7db2f780bb12-scripts\") pod \"ovn-controller-47tvr-config-dqtg8\" (UID: \"718f7a91-130a-4698-8eef-7db2f780bb12\") " pod="openstack/ovn-controller-47tvr-config-dqtg8" Dec 11 14:06:44 crc kubenswrapper[5050]: I1211 14:06:44.280732 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fmrtf\" (UniqueName: \"kubernetes.io/projected/718f7a91-130a-4698-8eef-7db2f780bb12-kube-api-access-fmrtf\") pod \"ovn-controller-47tvr-config-dqtg8\" (UID: \"718f7a91-130a-4698-8eef-7db2f780bb12\") " pod="openstack/ovn-controller-47tvr-config-dqtg8" Dec 11 14:06:44 crc kubenswrapper[5050]: I1211 14:06:44.280800 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/718f7a91-130a-4698-8eef-7db2f780bb12-additional-scripts\") pod \"ovn-controller-47tvr-config-dqtg8\" (UID: \"718f7a91-130a-4698-8eef-7db2f780bb12\") " pod="openstack/ovn-controller-47tvr-config-dqtg8" Dec 11 14:06:44 crc kubenswrapper[5050]: I1211 14:06:44.280904 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/718f7a91-130a-4698-8eef-7db2f780bb12-var-run-ovn\") pod \"ovn-controller-47tvr-config-dqtg8\" (UID: \"718f7a91-130a-4698-8eef-7db2f780bb12\") " pod="openstack/ovn-controller-47tvr-config-dqtg8" Dec 11 14:06:44 crc kubenswrapper[5050]: I1211 14:06:44.299129 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-j5zml"] Dec 11 14:06:44 crc kubenswrapper[5050]: I1211 14:06:44.301149 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-j5zml" Dec 11 14:06:44 crc kubenswrapper[5050]: I1211 14:06:44.303836 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-j5zml"] Dec 11 14:06:44 crc kubenswrapper[5050]: I1211 14:06:44.305939 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-2jgks" Dec 11 14:06:44 crc kubenswrapper[5050]: I1211 14:06:44.306340 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Dec 11 14:06:44 crc kubenswrapper[5050]: I1211 14:06:44.383036 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5t7sm\" (UniqueName: \"kubernetes.io/projected/669fc9ec-b625-44f9-bd15-bc8a79158127-kube-api-access-5t7sm\") pod \"glance-db-sync-j5zml\" (UID: \"669fc9ec-b625-44f9-bd15-bc8a79158127\") " pod="openstack/glance-db-sync-j5zml" Dec 11 14:06:44 crc kubenswrapper[5050]: I1211 14:06:44.383150 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/718f7a91-130a-4698-8eef-7db2f780bb12-var-run-ovn\") pod \"ovn-controller-47tvr-config-dqtg8\" (UID: \"718f7a91-130a-4698-8eef-7db2f780bb12\") " pod="openstack/ovn-controller-47tvr-config-dqtg8" Dec 11 14:06:44 crc kubenswrapper[5050]: I1211 14:06:44.383188 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/669fc9ec-b625-44f9-bd15-bc8a79158127-db-sync-config-data\") pod \"glance-db-sync-j5zml\" (UID: \"669fc9ec-b625-44f9-bd15-bc8a79158127\") " pod="openstack/glance-db-sync-j5zml" Dec 11 14:06:44 crc kubenswrapper[5050]: I1211 14:06:44.383235 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/718f7a91-130a-4698-8eef-7db2f780bb12-var-log-ovn\") pod \"ovn-controller-47tvr-config-dqtg8\" (UID: \"718f7a91-130a-4698-8eef-7db2f780bb12\") " pod="openstack/ovn-controller-47tvr-config-dqtg8" Dec 11 14:06:44 crc kubenswrapper[5050]: I1211 14:06:44.383267 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/718f7a91-130a-4698-8eef-7db2f780bb12-var-run\") pod \"ovn-controller-47tvr-config-dqtg8\" (UID: \"718f7a91-130a-4698-8eef-7db2f780bb12\") " pod="openstack/ovn-controller-47tvr-config-dqtg8" Dec 11 14:06:44 crc kubenswrapper[5050]: I1211 14:06:44.383295 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/718f7a91-130a-4698-8eef-7db2f780bb12-scripts\") pod \"ovn-controller-47tvr-config-dqtg8\" (UID: \"718f7a91-130a-4698-8eef-7db2f780bb12\") " pod="openstack/ovn-controller-47tvr-config-dqtg8" Dec 11 14:06:44 crc kubenswrapper[5050]: I1211 14:06:44.383325 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/669fc9ec-b625-44f9-bd15-bc8a79158127-combined-ca-bundle\") pod \"glance-db-sync-j5zml\" (UID: \"669fc9ec-b625-44f9-bd15-bc8a79158127\") " pod="openstack/glance-db-sync-j5zml" Dec 11 14:06:44 crc kubenswrapper[5050]: I1211 14:06:44.383370 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fmrtf\" (UniqueName: 
\"kubernetes.io/projected/718f7a91-130a-4698-8eef-7db2f780bb12-kube-api-access-fmrtf\") pod \"ovn-controller-47tvr-config-dqtg8\" (UID: \"718f7a91-130a-4698-8eef-7db2f780bb12\") " pod="openstack/ovn-controller-47tvr-config-dqtg8" Dec 11 14:06:44 crc kubenswrapper[5050]: I1211 14:06:44.383432 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/718f7a91-130a-4698-8eef-7db2f780bb12-additional-scripts\") pod \"ovn-controller-47tvr-config-dqtg8\" (UID: \"718f7a91-130a-4698-8eef-7db2f780bb12\") " pod="openstack/ovn-controller-47tvr-config-dqtg8" Dec 11 14:06:44 crc kubenswrapper[5050]: I1211 14:06:44.383456 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/669fc9ec-b625-44f9-bd15-bc8a79158127-config-data\") pod \"glance-db-sync-j5zml\" (UID: \"669fc9ec-b625-44f9-bd15-bc8a79158127\") " pod="openstack/glance-db-sync-j5zml" Dec 11 14:06:44 crc kubenswrapper[5050]: I1211 14:06:44.383464 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/718f7a91-130a-4698-8eef-7db2f780bb12-var-run-ovn\") pod \"ovn-controller-47tvr-config-dqtg8\" (UID: \"718f7a91-130a-4698-8eef-7db2f780bb12\") " pod="openstack/ovn-controller-47tvr-config-dqtg8" Dec 11 14:06:44 crc kubenswrapper[5050]: I1211 14:06:44.383516 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/718f7a91-130a-4698-8eef-7db2f780bb12-var-run\") pod \"ovn-controller-47tvr-config-dqtg8\" (UID: \"718f7a91-130a-4698-8eef-7db2f780bb12\") " pod="openstack/ovn-controller-47tvr-config-dqtg8" Dec 11 14:06:44 crc kubenswrapper[5050]: I1211 14:06:44.383743 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/718f7a91-130a-4698-8eef-7db2f780bb12-var-log-ovn\") pod \"ovn-controller-47tvr-config-dqtg8\" (UID: \"718f7a91-130a-4698-8eef-7db2f780bb12\") " pod="openstack/ovn-controller-47tvr-config-dqtg8" Dec 11 14:06:44 crc kubenswrapper[5050]: I1211 14:06:44.384507 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/718f7a91-130a-4698-8eef-7db2f780bb12-additional-scripts\") pod \"ovn-controller-47tvr-config-dqtg8\" (UID: \"718f7a91-130a-4698-8eef-7db2f780bb12\") " pod="openstack/ovn-controller-47tvr-config-dqtg8" Dec 11 14:06:44 crc kubenswrapper[5050]: I1211 14:06:44.386342 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/718f7a91-130a-4698-8eef-7db2f780bb12-scripts\") pod \"ovn-controller-47tvr-config-dqtg8\" (UID: \"718f7a91-130a-4698-8eef-7db2f780bb12\") " pod="openstack/ovn-controller-47tvr-config-dqtg8" Dec 11 14:06:44 crc kubenswrapper[5050]: I1211 14:06:44.409338 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fmrtf\" (UniqueName: \"kubernetes.io/projected/718f7a91-130a-4698-8eef-7db2f780bb12-kube-api-access-fmrtf\") pod \"ovn-controller-47tvr-config-dqtg8\" (UID: \"718f7a91-130a-4698-8eef-7db2f780bb12\") " pod="openstack/ovn-controller-47tvr-config-dqtg8" Dec 11 14:06:44 crc kubenswrapper[5050]: I1211 14:06:44.485437 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/669fc9ec-b625-44f9-bd15-bc8a79158127-config-data\") pod \"glance-db-sync-j5zml\" (UID: \"669fc9ec-b625-44f9-bd15-bc8a79158127\") " pod="openstack/glance-db-sync-j5zml" Dec 11 14:06:44 crc kubenswrapper[5050]: I1211 14:06:44.485543 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5t7sm\" (UniqueName: \"kubernetes.io/projected/669fc9ec-b625-44f9-bd15-bc8a79158127-kube-api-access-5t7sm\") pod \"glance-db-sync-j5zml\" (UID: \"669fc9ec-b625-44f9-bd15-bc8a79158127\") " pod="openstack/glance-db-sync-j5zml" Dec 11 14:06:44 crc kubenswrapper[5050]: I1211 14:06:44.485611 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/669fc9ec-b625-44f9-bd15-bc8a79158127-db-sync-config-data\") pod \"glance-db-sync-j5zml\" (UID: \"669fc9ec-b625-44f9-bd15-bc8a79158127\") " pod="openstack/glance-db-sync-j5zml" Dec 11 14:06:44 crc kubenswrapper[5050]: I1211 14:06:44.485673 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/669fc9ec-b625-44f9-bd15-bc8a79158127-combined-ca-bundle\") pod \"glance-db-sync-j5zml\" (UID: \"669fc9ec-b625-44f9-bd15-bc8a79158127\") " pod="openstack/glance-db-sync-j5zml" Dec 11 14:06:44 crc kubenswrapper[5050]: I1211 14:06:44.489682 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/669fc9ec-b625-44f9-bd15-bc8a79158127-config-data\") pod \"glance-db-sync-j5zml\" (UID: \"669fc9ec-b625-44f9-bd15-bc8a79158127\") " pod="openstack/glance-db-sync-j5zml" Dec 11 14:06:44 crc kubenswrapper[5050]: I1211 14:06:44.489974 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/669fc9ec-b625-44f9-bd15-bc8a79158127-db-sync-config-data\") pod \"glance-db-sync-j5zml\" (UID: \"669fc9ec-b625-44f9-bd15-bc8a79158127\") " pod="openstack/glance-db-sync-j5zml" Dec 11 14:06:44 crc kubenswrapper[5050]: I1211 14:06:44.499059 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/669fc9ec-b625-44f9-bd15-bc8a79158127-combined-ca-bundle\") pod \"glance-db-sync-j5zml\" (UID: \"669fc9ec-b625-44f9-bd15-bc8a79158127\") " pod="openstack/glance-db-sync-j5zml" Dec 11 14:06:44 crc kubenswrapper[5050]: I1211 14:06:44.504584 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5t7sm\" (UniqueName: \"kubernetes.io/projected/669fc9ec-b625-44f9-bd15-bc8a79158127-kube-api-access-5t7sm\") pod \"glance-db-sync-j5zml\" (UID: \"669fc9ec-b625-44f9-bd15-bc8a79158127\") " pod="openstack/glance-db-sync-j5zml" Dec 11 14:06:44 crc kubenswrapper[5050]: I1211 14:06:44.566698 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-47tvr-config-dqtg8" Dec 11 14:06:44 crc kubenswrapper[5050]: I1211 14:06:44.619800 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-j5zml" Dec 11 14:06:45 crc kubenswrapper[5050]: I1211 14:06:45.208842 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-47tvr-config-dqtg8"] Dec 11 14:06:45 crc kubenswrapper[5050]: W1211 14:06:45.214542 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod718f7a91_130a_4698_8eef_7db2f780bb12.slice/crio-334a6e547544ba4b988e681a5217afed03fb50a1cb33b4441482455719bbed9f WatchSource:0}: Error finding container 334a6e547544ba4b988e681a5217afed03fb50a1cb33b4441482455719bbed9f: Status 404 returned error can't find the container with id 334a6e547544ba4b988e681a5217afed03fb50a1cb33b4441482455719bbed9f Dec 11 14:06:45 crc kubenswrapper[5050]: I1211 14:06:45.394406 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-j5zml"] Dec 11 14:06:45 crc kubenswrapper[5050]: W1211 14:06:45.406862 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod669fc9ec_b625_44f9_bd15_bc8a79158127.slice/crio-2ada08a11e643f8412fb7755887a76bd31d98890ee81329aceccd8eeb66d2653 WatchSource:0}: Error finding container 2ada08a11e643f8412fb7755887a76bd31d98890ee81329aceccd8eeb66d2653: Status 404 returned error can't find the container with id 2ada08a11e643f8412fb7755887a76bd31d98890ee81329aceccd8eeb66d2653 Dec 11 14:06:45 crc kubenswrapper[5050]: I1211 14:06:45.480754 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-j5zml" event={"ID":"669fc9ec-b625-44f9-bd15-bc8a79158127","Type":"ContainerStarted","Data":"2ada08a11e643f8412fb7755887a76bd31d98890ee81329aceccd8eeb66d2653"} Dec 11 14:06:45 crc kubenswrapper[5050]: I1211 14:06:45.483532 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-47tvr-config-dqtg8" event={"ID":"718f7a91-130a-4698-8eef-7db2f780bb12","Type":"ContainerStarted","Data":"334a6e547544ba4b988e681a5217afed03fb50a1cb33b4441482455719bbed9f"} Dec 11 14:06:46 crc kubenswrapper[5050]: I1211 14:06:46.497988 5050 generic.go:334] "Generic (PLEG): container finished" podID="718f7a91-130a-4698-8eef-7db2f780bb12" containerID="1791df88a6b816e0b72db7b665200275015d5dbb7dca85bbeb168f77b2438276" exitCode=0 Dec 11 14:06:46 crc kubenswrapper[5050]: I1211 14:06:46.498180 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-47tvr-config-dqtg8" event={"ID":"718f7a91-130a-4698-8eef-7db2f780bb12","Type":"ContainerDied","Data":"1791df88a6b816e0b72db7b665200275015d5dbb7dca85bbeb168f77b2438276"} Dec 11 14:06:47 crc kubenswrapper[5050]: I1211 14:06:47.511380 5050 generic.go:334] "Generic (PLEG): container finished" podID="401d393f-46fc-4150-b785-313d42022d95" containerID="9363f944bb00bf65a18a77d105a5c3acb2935d1c6a51699593ad0beb061d83c4" exitCode=0 Dec 11 14:06:47 crc kubenswrapper[5050]: I1211 14:06:47.511990 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-dl7tx" event={"ID":"401d393f-46fc-4150-b785-313d42022d95","Type":"ContainerDied","Data":"9363f944bb00bf65a18a77d105a5c3acb2935d1c6a51699593ad0beb061d83c4"} Dec 11 14:06:47 crc kubenswrapper[5050]: I1211 14:06:47.862160 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-47tvr-config-dqtg8" Dec 11 14:06:47 crc kubenswrapper[5050]: I1211 14:06:47.959367 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/718f7a91-130a-4698-8eef-7db2f780bb12-var-run-ovn\") pod \"718f7a91-130a-4698-8eef-7db2f780bb12\" (UID: \"718f7a91-130a-4698-8eef-7db2f780bb12\") " Dec 11 14:06:47 crc kubenswrapper[5050]: I1211 14:06:47.959437 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/718f7a91-130a-4698-8eef-7db2f780bb12-additional-scripts\") pod \"718f7a91-130a-4698-8eef-7db2f780bb12\" (UID: \"718f7a91-130a-4698-8eef-7db2f780bb12\") " Dec 11 14:06:47 crc kubenswrapper[5050]: I1211 14:06:47.959504 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/718f7a91-130a-4698-8eef-7db2f780bb12-var-run\") pod \"718f7a91-130a-4698-8eef-7db2f780bb12\" (UID: \"718f7a91-130a-4698-8eef-7db2f780bb12\") " Dec 11 14:06:47 crc kubenswrapper[5050]: I1211 14:06:47.959504 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/718f7a91-130a-4698-8eef-7db2f780bb12-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "718f7a91-130a-4698-8eef-7db2f780bb12" (UID: "718f7a91-130a-4698-8eef-7db2f780bb12"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 14:06:47 crc kubenswrapper[5050]: I1211 14:06:47.959533 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fmrtf\" (UniqueName: \"kubernetes.io/projected/718f7a91-130a-4698-8eef-7db2f780bb12-kube-api-access-fmrtf\") pod \"718f7a91-130a-4698-8eef-7db2f780bb12\" (UID: \"718f7a91-130a-4698-8eef-7db2f780bb12\") " Dec 11 14:06:47 crc kubenswrapper[5050]: I1211 14:06:47.959656 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/718f7a91-130a-4698-8eef-7db2f780bb12-var-run" (OuterVolumeSpecName: "var-run") pod "718f7a91-130a-4698-8eef-7db2f780bb12" (UID: "718f7a91-130a-4698-8eef-7db2f780bb12"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 14:06:47 crc kubenswrapper[5050]: I1211 14:06:47.959687 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/718f7a91-130a-4698-8eef-7db2f780bb12-var-log-ovn\") pod \"718f7a91-130a-4698-8eef-7db2f780bb12\" (UID: \"718f7a91-130a-4698-8eef-7db2f780bb12\") " Dec 11 14:06:47 crc kubenswrapper[5050]: I1211 14:06:47.959800 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/718f7a91-130a-4698-8eef-7db2f780bb12-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "718f7a91-130a-4698-8eef-7db2f780bb12" (UID: "718f7a91-130a-4698-8eef-7db2f780bb12"). InnerVolumeSpecName "var-log-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 14:06:47 crc kubenswrapper[5050]: I1211 14:06:47.959897 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/718f7a91-130a-4698-8eef-7db2f780bb12-scripts\") pod \"718f7a91-130a-4698-8eef-7db2f780bb12\" (UID: \"718f7a91-130a-4698-8eef-7db2f780bb12\") " Dec 11 14:06:47 crc kubenswrapper[5050]: I1211 14:06:47.960607 5050 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/718f7a91-130a-4698-8eef-7db2f780bb12-var-run-ovn\") on node \"crc\" DevicePath \"\"" Dec 11 14:06:47 crc kubenswrapper[5050]: I1211 14:06:47.960689 5050 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/718f7a91-130a-4698-8eef-7db2f780bb12-var-run\") on node \"crc\" DevicePath \"\"" Dec 11 14:06:47 crc kubenswrapper[5050]: I1211 14:06:47.960702 5050 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/718f7a91-130a-4698-8eef-7db2f780bb12-var-log-ovn\") on node \"crc\" DevicePath \"\"" Dec 11 14:06:47 crc kubenswrapper[5050]: I1211 14:06:47.960694 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/718f7a91-130a-4698-8eef-7db2f780bb12-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "718f7a91-130a-4698-8eef-7db2f780bb12" (UID: "718f7a91-130a-4698-8eef-7db2f780bb12"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 14:06:47 crc kubenswrapper[5050]: I1211 14:06:47.961088 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/718f7a91-130a-4698-8eef-7db2f780bb12-scripts" (OuterVolumeSpecName: "scripts") pod "718f7a91-130a-4698-8eef-7db2f780bb12" (UID: "718f7a91-130a-4698-8eef-7db2f780bb12"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 14:06:47 crc kubenswrapper[5050]: I1211 14:06:47.965737 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/718f7a91-130a-4698-8eef-7db2f780bb12-kube-api-access-fmrtf" (OuterVolumeSpecName: "kube-api-access-fmrtf") pod "718f7a91-130a-4698-8eef-7db2f780bb12" (UID: "718f7a91-130a-4698-8eef-7db2f780bb12"). InnerVolumeSpecName "kube-api-access-fmrtf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 14:06:48 crc kubenswrapper[5050]: I1211 14:06:48.062847 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fmrtf\" (UniqueName: \"kubernetes.io/projected/718f7a91-130a-4698-8eef-7db2f780bb12-kube-api-access-fmrtf\") on node \"crc\" DevicePath \"\"" Dec 11 14:06:48 crc kubenswrapper[5050]: I1211 14:06:48.062929 5050 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/718f7a91-130a-4698-8eef-7db2f780bb12-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 14:06:48 crc kubenswrapper[5050]: I1211 14:06:48.062942 5050 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/718f7a91-130a-4698-8eef-7db2f780bb12-additional-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 14:06:48 crc kubenswrapper[5050]: I1211 14:06:48.523211 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-47tvr-config-dqtg8" event={"ID":"718f7a91-130a-4698-8eef-7db2f780bb12","Type":"ContainerDied","Data":"334a6e547544ba4b988e681a5217afed03fb50a1cb33b4441482455719bbed9f"} Dec 11 14:06:48 crc kubenswrapper[5050]: I1211 14:06:48.523301 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="334a6e547544ba4b988e681a5217afed03fb50a1cb33b4441482455719bbed9f" Dec 11 14:06:48 crc kubenswrapper[5050]: I1211 14:06:48.523249 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-47tvr-config-dqtg8" Dec 11 14:06:48 crc kubenswrapper[5050]: I1211 14:06:48.867601 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-dl7tx" Dec 11 14:06:48 crc kubenswrapper[5050]: I1211 14:06:48.954334 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-47tvr" Dec 11 14:06:48 crc kubenswrapper[5050]: I1211 14:06:48.978867 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/401d393f-46fc-4150-b785-313d42022d95-etc-swift\") pod \"401d393f-46fc-4150-b785-313d42022d95\" (UID: \"401d393f-46fc-4150-b785-313d42022d95\") " Dec 11 14:06:48 crc kubenswrapper[5050]: I1211 14:06:48.978953 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rtpjw\" (UniqueName: \"kubernetes.io/projected/401d393f-46fc-4150-b785-313d42022d95-kube-api-access-rtpjw\") pod \"401d393f-46fc-4150-b785-313d42022d95\" (UID: \"401d393f-46fc-4150-b785-313d42022d95\") " Dec 11 14:06:48 crc kubenswrapper[5050]: I1211 14:06:48.979408 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/401d393f-46fc-4150-b785-313d42022d95-swiftconf\") pod \"401d393f-46fc-4150-b785-313d42022d95\" (UID: \"401d393f-46fc-4150-b785-313d42022d95\") " Dec 11 14:06:48 crc kubenswrapper[5050]: I1211 14:06:48.979472 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/401d393f-46fc-4150-b785-313d42022d95-combined-ca-bundle\") pod \"401d393f-46fc-4150-b785-313d42022d95\" (UID: \"401d393f-46fc-4150-b785-313d42022d95\") " Dec 11 14:06:48 crc kubenswrapper[5050]: I1211 14:06:48.979545 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: 
\"kubernetes.io/configmap/401d393f-46fc-4150-b785-313d42022d95-ring-data-devices\") pod \"401d393f-46fc-4150-b785-313d42022d95\" (UID: \"401d393f-46fc-4150-b785-313d42022d95\") " Dec 11 14:06:48 crc kubenswrapper[5050]: I1211 14:06:48.979563 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/401d393f-46fc-4150-b785-313d42022d95-scripts\") pod \"401d393f-46fc-4150-b785-313d42022d95\" (UID: \"401d393f-46fc-4150-b785-313d42022d95\") " Dec 11 14:06:48 crc kubenswrapper[5050]: I1211 14:06:48.979598 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/401d393f-46fc-4150-b785-313d42022d95-dispersionconf\") pod \"401d393f-46fc-4150-b785-313d42022d95\" (UID: \"401d393f-46fc-4150-b785-313d42022d95\") " Dec 11 14:06:48 crc kubenswrapper[5050]: I1211 14:06:48.979972 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/401d393f-46fc-4150-b785-313d42022d95-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "401d393f-46fc-4150-b785-313d42022d95" (UID: "401d393f-46fc-4150-b785-313d42022d95"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 14:06:48 crc kubenswrapper[5050]: I1211 14:06:48.980866 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/401d393f-46fc-4150-b785-313d42022d95-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "401d393f-46fc-4150-b785-313d42022d95" (UID: "401d393f-46fc-4150-b785-313d42022d95"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 14:06:48 crc kubenswrapper[5050]: I1211 14:06:48.993173 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-47tvr-config-dqtg8"] Dec 11 14:06:48 crc kubenswrapper[5050]: I1211 14:06:48.995293 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/401d393f-46fc-4150-b785-313d42022d95-kube-api-access-rtpjw" (OuterVolumeSpecName: "kube-api-access-rtpjw") pod "401d393f-46fc-4150-b785-313d42022d95" (UID: "401d393f-46fc-4150-b785-313d42022d95"). InnerVolumeSpecName "kube-api-access-rtpjw". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 14:06:49 crc kubenswrapper[5050]: I1211 14:06:49.013973 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-47tvr-config-dqtg8"] Dec 11 14:06:49 crc kubenswrapper[5050]: I1211 14:06:49.018146 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/401d393f-46fc-4150-b785-313d42022d95-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "401d393f-46fc-4150-b785-313d42022d95" (UID: "401d393f-46fc-4150-b785-313d42022d95"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:06:49 crc kubenswrapper[5050]: I1211 14:06:49.027566 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/401d393f-46fc-4150-b785-313d42022d95-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "401d393f-46fc-4150-b785-313d42022d95" (UID: "401d393f-46fc-4150-b785-313d42022d95"). InnerVolumeSpecName "swiftconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:06:49 crc kubenswrapper[5050]: I1211 14:06:49.030722 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/401d393f-46fc-4150-b785-313d42022d95-scripts" (OuterVolumeSpecName: "scripts") pod "401d393f-46fc-4150-b785-313d42022d95" (UID: "401d393f-46fc-4150-b785-313d42022d95"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 14:06:49 crc kubenswrapper[5050]: I1211 14:06:49.044530 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/401d393f-46fc-4150-b785-313d42022d95-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "401d393f-46fc-4150-b785-313d42022d95" (UID: "401d393f-46fc-4150-b785-313d42022d95"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:06:49 crc kubenswrapper[5050]: I1211 14:06:49.081869 5050 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/401d393f-46fc-4150-b785-313d42022d95-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 14:06:49 crc kubenswrapper[5050]: I1211 14:06:49.081911 5050 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/401d393f-46fc-4150-b785-313d42022d95-dispersionconf\") on node \"crc\" DevicePath \"\"" Dec 11 14:06:49 crc kubenswrapper[5050]: I1211 14:06:49.081927 5050 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/401d393f-46fc-4150-b785-313d42022d95-etc-swift\") on node \"crc\" DevicePath \"\"" Dec 11 14:06:49 crc kubenswrapper[5050]: I1211 14:06:49.081941 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rtpjw\" (UniqueName: \"kubernetes.io/projected/401d393f-46fc-4150-b785-313d42022d95-kube-api-access-rtpjw\") on node \"crc\" DevicePath \"\"" Dec 11 14:06:49 crc kubenswrapper[5050]: I1211 14:06:49.081952 5050 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/401d393f-46fc-4150-b785-313d42022d95-swiftconf\") on node \"crc\" DevicePath \"\"" Dec 11 14:06:49 crc kubenswrapper[5050]: I1211 14:06:49.081964 5050 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/401d393f-46fc-4150-b785-313d42022d95-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 14:06:49 crc kubenswrapper[5050]: I1211 14:06:49.081974 5050 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/401d393f-46fc-4150-b785-313d42022d95-ring-data-devices\") on node \"crc\" DevicePath \"\"" Dec 11 14:06:49 crc kubenswrapper[5050]: I1211 14:06:49.539198 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-dl7tx" event={"ID":"401d393f-46fc-4150-b785-313d42022d95","Type":"ContainerDied","Data":"5f5081475aa41be0949f86f409cae25a2a1444f4fde56c7ccd5a08d694245172"} Dec 11 14:06:49 crc kubenswrapper[5050]: I1211 14:06:49.539240 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5f5081475aa41be0949f86f409cae25a2a1444f4fde56c7ccd5a08d694245172" Dec 11 14:06:49 crc kubenswrapper[5050]: I1211 14:06:49.539268 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-dl7tx" Dec 11 14:06:49 crc kubenswrapper[5050]: I1211 14:06:49.568678 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="718f7a91-130a-4698-8eef-7db2f780bb12" path="/var/lib/kubelet/pods/718f7a91-130a-4698-8eef-7db2f780bb12/volumes" Dec 11 14:06:51 crc kubenswrapper[5050]: I1211 14:06:51.568671 5050 generic.go:334] "Generic (PLEG): container finished" podID="0891f075-8101-475b-b844-e7cb42a4990b" containerID="b030d9ea1d520c633a941cacbfc01b8167a3e4ea9d95d099c782876dc0ce6862" exitCode=0 Dec 11 14:06:51 crc kubenswrapper[5050]: I1211 14:06:51.568768 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"0891f075-8101-475b-b844-e7cb42a4990b","Type":"ContainerDied","Data":"b030d9ea1d520c633a941cacbfc01b8167a3e4ea9d95d099c782876dc0ce6862"} Dec 11 14:06:51 crc kubenswrapper[5050]: I1211 14:06:51.571347 5050 generic.go:334] "Generic (PLEG): container finished" podID="458f05be-2fd6-44d9-8034-f077356964ce" containerID="510949b6fa4514794979cb46d1baa4411178e70e74985dbfb206b0b3da3f4cc4" exitCode=0 Dec 11 14:06:51 crc kubenswrapper[5050]: I1211 14:06:51.571390 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"458f05be-2fd6-44d9-8034-f077356964ce","Type":"ContainerDied","Data":"510949b6fa4514794979cb46d1baa4411178e70e74985dbfb206b0b3da3f4cc4"} Dec 11 14:06:58 crc kubenswrapper[5050]: I1211 14:06:58.692243 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/a5dabf50-534b-45cb-87db-45373930fe82-etc-swift\") pod \"swift-storage-0\" (UID: \"a5dabf50-534b-45cb-87db-45373930fe82\") " pod="openstack/swift-storage-0" Dec 11 14:06:58 crc kubenswrapper[5050]: I1211 14:06:58.702732 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/a5dabf50-534b-45cb-87db-45373930fe82-etc-swift\") pod \"swift-storage-0\" (UID: \"a5dabf50-534b-45cb-87db-45373930fe82\") " pod="openstack/swift-storage-0" Dec 11 14:06:58 crc kubenswrapper[5050]: I1211 14:06:58.981139 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Dec 11 14:07:00 crc kubenswrapper[5050]: E1211 14:07:00.582298 5050 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-glance-api@sha256:e4aa4ebbb1e581a12040e9ad2ae2709ac31b5d965bb64fc4252d1028b05c565f" Dec 11 14:07:00 crc kubenswrapper[5050]: E1211 14:07:00.583848 5050 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:glance-db-sync,Image:quay.io/podified-antelope-centos9/openstack-glance-api@sha256:e4aa4ebbb1e581a12040e9ad2ae2709ac31b5d965bb64fc4252d1028b05c565f,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/glance/glance.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5t7sm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42415,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42415,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-db-sync-j5zml_openstack(669fc9ec-b625-44f9-bd15-bc8a79158127): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 11 14:07:00 crc kubenswrapper[5050]: E1211 14:07:00.585342 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/glance-db-sync-j5zml" podUID="669fc9ec-b625-44f9-bd15-bc8a79158127" Dec 11 14:07:01 crc kubenswrapper[5050]: I1211 14:07:01.014783 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"0891f075-8101-475b-b844-e7cb42a4990b","Type":"ContainerStarted","Data":"db23d3f3f27190827f163f21b2da4cd0ca1fc9aa0bfb390a14b8c83a5ed2ee47"} Dec 11 14:07:01 crc kubenswrapper[5050]: I1211 14:07:01.015387 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/rabbitmq-server-0" Dec 11 14:07:01 crc kubenswrapper[5050]: I1211 14:07:01.017941 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"458f05be-2fd6-44d9-8034-f077356964ce","Type":"ContainerStarted","Data":"7fc0726972676985eb911b818bc159c8c1b12a1ca0e646ddda6558ea21079201"} Dec 11 14:07:01 crc kubenswrapper[5050]: I1211 14:07:01.018821 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Dec 11 14:07:01 crc kubenswrapper[5050]: E1211 14:07:01.020048 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-glance-api@sha256:e4aa4ebbb1e581a12040e9ad2ae2709ac31b5d965bb64fc4252d1028b05c565f\\\"\"" pod="openstack/glance-db-sync-j5zml" podUID="669fc9ec-b625-44f9-bd15-bc8a79158127" Dec 11 14:07:01 crc kubenswrapper[5050]: I1211 14:07:01.042118 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=47.804261534 podStartE2EDuration="1m32.042101066s" podCreationTimestamp="2025-12-11 14:05:29 +0000 UTC" firstStartedPulling="2025-12-11 14:05:31.924881457 +0000 UTC m=+1022.768604043" lastFinishedPulling="2025-12-11 14:06:16.162720989 +0000 UTC m=+1067.006443575" observedRunningTime="2025-12-11 14:07:01.040478092 +0000 UTC m=+1111.884200678" watchObservedRunningTime="2025-12-11 14:07:01.042101066 +0000 UTC m=+1111.885823652" Dec 11 14:07:01 crc kubenswrapper[5050]: I1211 14:07:01.082476 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=47.108108897 podStartE2EDuration="1m32.082452344s" podCreationTimestamp="2025-12-11 14:05:29 +0000 UTC" firstStartedPulling="2025-12-11 14:05:31.459980805 +0000 UTC m=+1022.303703401" lastFinishedPulling="2025-12-11 14:06:16.434324262 +0000 UTC m=+1067.278046848" observedRunningTime="2025-12-11 14:07:01.07672017 +0000 UTC m=+1111.920442766" watchObservedRunningTime="2025-12-11 14:07:01.082452344 +0000 UTC m=+1111.926174930" Dec 11 14:07:01 crc kubenswrapper[5050]: I1211 14:07:01.097448 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Dec 11 14:07:02 crc kubenswrapper[5050]: I1211 14:07:02.029162 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a5dabf50-534b-45cb-87db-45373930fe82","Type":"ContainerStarted","Data":"e4960e7eda3e1efa4061f07af255476e8516177687e0f468f6a9a0c6571c04a9"} Dec 11 14:07:03 crc kubenswrapper[5050]: I1211 14:07:03.052842 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a5dabf50-534b-45cb-87db-45373930fe82","Type":"ContainerStarted","Data":"c1a089eb1d8d523f1a786eee0915def7fa7aab5c3e4514f0c035a46c61eef1cb"} Dec 11 14:07:03 crc kubenswrapper[5050]: I1211 14:07:03.053412 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a5dabf50-534b-45cb-87db-45373930fe82","Type":"ContainerStarted","Data":"feac1300b5d8a5ea16c8321c45cc457e5dbf72ac6aab1103080d7accf21709e1"} Dec 11 14:07:03 crc kubenswrapper[5050]: I1211 14:07:03.053432 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"a5dabf50-534b-45cb-87db-45373930fe82","Type":"ContainerStarted","Data":"69f5ff7e4ffed5e07ece2e747c39e17e11ce1252b75c01b5c3313338481c02f5"} Dec 11 14:07:05 crc kubenswrapper[5050]: I1211 14:07:05.074436 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a5dabf50-534b-45cb-87db-45373930fe82","Type":"ContainerStarted","Data":"cc96ca859857b932852bab79b175e12e28dd66ea2b3f97528e65f1c394df699c"} Dec 11 14:07:07 crc kubenswrapper[5050]: I1211 14:07:07.097870 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a5dabf50-534b-45cb-87db-45373930fe82","Type":"ContainerStarted","Data":"b6c3d2263c2a8d964cc7422913cdc01c0a98e50a91cd20af0a8e5219f5c49d84"} Dec 11 14:07:07 crc kubenswrapper[5050]: I1211 14:07:07.098998 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a5dabf50-534b-45cb-87db-45373930fe82","Type":"ContainerStarted","Data":"3c2652501efb162ddb07fcdf676ff7b425046c43c56a32e87cf2a1b7f86d8517"} Dec 11 14:07:07 crc kubenswrapper[5050]: I1211 14:07:07.099175 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a5dabf50-534b-45cb-87db-45373930fe82","Type":"ContainerStarted","Data":"0b5dadf04d75b453da45faaba622c4ca8fa3ea72335c7e1cbe662af9423d5319"} Dec 11 14:07:07 crc kubenswrapper[5050]: I1211 14:07:07.099190 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a5dabf50-534b-45cb-87db-45373930fe82","Type":"ContainerStarted","Data":"a92c2e4e55be6c0dccf533363df9021ca510e9f14d1f5a908a2795582d914ca4"} Dec 11 14:07:09 crc kubenswrapper[5050]: I1211 14:07:09.150143 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a5dabf50-534b-45cb-87db-45373930fe82","Type":"ContainerStarted","Data":"038b0092de538faefca3e8ca1075a18dd7d58853d0c6eb5fdadf157d7e0f2147"} Dec 11 14:07:09 crc kubenswrapper[5050]: I1211 14:07:09.150628 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a5dabf50-534b-45cb-87db-45373930fe82","Type":"ContainerStarted","Data":"cf225480f25db60b0e9d83e3b98e796c13673684172c9b6129d91e173f39beb6"} Dec 11 14:07:09 crc kubenswrapper[5050]: I1211 14:07:09.150640 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a5dabf50-534b-45cb-87db-45373930fe82","Type":"ContainerStarted","Data":"cca45253ddc48fc0f165034563c70e630dd7fac3f3c0cf0ba23d657266869519"} Dec 11 14:07:09 crc kubenswrapper[5050]: I1211 14:07:09.150650 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a5dabf50-534b-45cb-87db-45373930fe82","Type":"ContainerStarted","Data":"0cac73e478a996fa3e9d0714853b7480372b37e951d6e3e0667c3722790407c8"} Dec 11 14:07:10 crc kubenswrapper[5050]: I1211 14:07:10.169643 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a5dabf50-534b-45cb-87db-45373930fe82","Type":"ContainerStarted","Data":"44acc22d4dbaf9801a70faf934b08100e13594f1cab4f854bc7c2b3dd8963fb5"} Dec 11 14:07:10 crc kubenswrapper[5050]: I1211 14:07:10.170203 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a5dabf50-534b-45cb-87db-45373930fe82","Type":"ContainerStarted","Data":"bfa20bc6bb25080f92169274679704ad90a7e9f219408ae8226d21d94b1cbce8"} Dec 11 14:07:10 crc kubenswrapper[5050]: I1211 14:07:10.170217 5050 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a5dabf50-534b-45cb-87db-45373930fe82","Type":"ContainerStarted","Data":"c59f8bb548eec4e62535766386e180811808e5f7cf7913a3c02582a806b4073f"} Dec 11 14:07:10 crc kubenswrapper[5050]: I1211 14:07:10.225694 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=38.195631419 podStartE2EDuration="45.225628527s" podCreationTimestamp="2025-12-11 14:06:25 +0000 UTC" firstStartedPulling="2025-12-11 14:07:01.10380578 +0000 UTC m=+1111.947528366" lastFinishedPulling="2025-12-11 14:07:08.133802848 +0000 UTC m=+1118.977525474" observedRunningTime="2025-12-11 14:07:10.218962727 +0000 UTC m=+1121.062685303" watchObservedRunningTime="2025-12-11 14:07:10.225628527 +0000 UTC m=+1121.069351113" Dec 11 14:07:10 crc kubenswrapper[5050]: I1211 14:07:10.605952 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8db84466c-xqjfw"] Dec 11 14:07:10 crc kubenswrapper[5050]: E1211 14:07:10.606464 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="718f7a91-130a-4698-8eef-7db2f780bb12" containerName="ovn-config" Dec 11 14:07:10 crc kubenswrapper[5050]: I1211 14:07:10.606509 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="718f7a91-130a-4698-8eef-7db2f780bb12" containerName="ovn-config" Dec 11 14:07:10 crc kubenswrapper[5050]: E1211 14:07:10.606540 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="401d393f-46fc-4150-b785-313d42022d95" containerName="swift-ring-rebalance" Dec 11 14:07:10 crc kubenswrapper[5050]: I1211 14:07:10.606550 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="401d393f-46fc-4150-b785-313d42022d95" containerName="swift-ring-rebalance" Dec 11 14:07:10 crc kubenswrapper[5050]: I1211 14:07:10.606783 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="401d393f-46fc-4150-b785-313d42022d95" containerName="swift-ring-rebalance" Dec 11 14:07:10 crc kubenswrapper[5050]: I1211 14:07:10.606821 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="718f7a91-130a-4698-8eef-7db2f780bb12" containerName="ovn-config" Dec 11 14:07:10 crc kubenswrapper[5050]: I1211 14:07:10.607932 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8db84466c-xqjfw" Dec 11 14:07:10 crc kubenswrapper[5050]: I1211 14:07:10.610807 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Dec 11 14:07:10 crc kubenswrapper[5050]: I1211 14:07:10.629732 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8db84466c-xqjfw"] Dec 11 14:07:10 crc kubenswrapper[5050]: I1211 14:07:10.746552 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/38fac7eb-e076-4c58-9d8e-961461e27f92-dns-svc\") pod \"dnsmasq-dns-8db84466c-xqjfw\" (UID: \"38fac7eb-e076-4c58-9d8e-961461e27f92\") " pod="openstack/dnsmasq-dns-8db84466c-xqjfw" Dec 11 14:07:10 crc kubenswrapper[5050]: I1211 14:07:10.746648 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/38fac7eb-e076-4c58-9d8e-961461e27f92-ovsdbserver-sb\") pod \"dnsmasq-dns-8db84466c-xqjfw\" (UID: \"38fac7eb-e076-4c58-9d8e-961461e27f92\") " pod="openstack/dnsmasq-dns-8db84466c-xqjfw" Dec 11 14:07:10 crc kubenswrapper[5050]: I1211 14:07:10.746844 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/38fac7eb-e076-4c58-9d8e-961461e27f92-dns-swift-storage-0\") pod \"dnsmasq-dns-8db84466c-xqjfw\" (UID: \"38fac7eb-e076-4c58-9d8e-961461e27f92\") " pod="openstack/dnsmasq-dns-8db84466c-xqjfw" Dec 11 14:07:10 crc kubenswrapper[5050]: I1211 14:07:10.746915 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zgbld\" (UniqueName: \"kubernetes.io/projected/38fac7eb-e076-4c58-9d8e-961461e27f92-kube-api-access-zgbld\") pod \"dnsmasq-dns-8db84466c-xqjfw\" (UID: \"38fac7eb-e076-4c58-9d8e-961461e27f92\") " pod="openstack/dnsmasq-dns-8db84466c-xqjfw" Dec 11 14:07:10 crc kubenswrapper[5050]: I1211 14:07:10.747235 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38fac7eb-e076-4c58-9d8e-961461e27f92-config\") pod \"dnsmasq-dns-8db84466c-xqjfw\" (UID: \"38fac7eb-e076-4c58-9d8e-961461e27f92\") " pod="openstack/dnsmasq-dns-8db84466c-xqjfw" Dec 11 14:07:10 crc kubenswrapper[5050]: I1211 14:07:10.747389 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/38fac7eb-e076-4c58-9d8e-961461e27f92-ovsdbserver-nb\") pod \"dnsmasq-dns-8db84466c-xqjfw\" (UID: \"38fac7eb-e076-4c58-9d8e-961461e27f92\") " pod="openstack/dnsmasq-dns-8db84466c-xqjfw" Dec 11 14:07:10 crc kubenswrapper[5050]: I1211 14:07:10.847374 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Dec 11 14:07:10 crc kubenswrapper[5050]: I1211 14:07:10.848869 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/38fac7eb-e076-4c58-9d8e-961461e27f92-dns-swift-storage-0\") pod \"dnsmasq-dns-8db84466c-xqjfw\" (UID: \"38fac7eb-e076-4c58-9d8e-961461e27f92\") " pod="openstack/dnsmasq-dns-8db84466c-xqjfw" Dec 11 14:07:10 crc kubenswrapper[5050]: I1211 14:07:10.848931 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-zgbld\" (UniqueName: \"kubernetes.io/projected/38fac7eb-e076-4c58-9d8e-961461e27f92-kube-api-access-zgbld\") pod \"dnsmasq-dns-8db84466c-xqjfw\" (UID: \"38fac7eb-e076-4c58-9d8e-961461e27f92\") " pod="openstack/dnsmasq-dns-8db84466c-xqjfw" Dec 11 14:07:10 crc kubenswrapper[5050]: I1211 14:07:10.848983 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38fac7eb-e076-4c58-9d8e-961461e27f92-config\") pod \"dnsmasq-dns-8db84466c-xqjfw\" (UID: \"38fac7eb-e076-4c58-9d8e-961461e27f92\") " pod="openstack/dnsmasq-dns-8db84466c-xqjfw" Dec 11 14:07:10 crc kubenswrapper[5050]: I1211 14:07:10.849025 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/38fac7eb-e076-4c58-9d8e-961461e27f92-ovsdbserver-nb\") pod \"dnsmasq-dns-8db84466c-xqjfw\" (UID: \"38fac7eb-e076-4c58-9d8e-961461e27f92\") " pod="openstack/dnsmasq-dns-8db84466c-xqjfw" Dec 11 14:07:10 crc kubenswrapper[5050]: I1211 14:07:10.849133 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/38fac7eb-e076-4c58-9d8e-961461e27f92-dns-svc\") pod \"dnsmasq-dns-8db84466c-xqjfw\" (UID: \"38fac7eb-e076-4c58-9d8e-961461e27f92\") " pod="openstack/dnsmasq-dns-8db84466c-xqjfw" Dec 11 14:07:10 crc kubenswrapper[5050]: I1211 14:07:10.849170 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/38fac7eb-e076-4c58-9d8e-961461e27f92-ovsdbserver-sb\") pod \"dnsmasq-dns-8db84466c-xqjfw\" (UID: \"38fac7eb-e076-4c58-9d8e-961461e27f92\") " pod="openstack/dnsmasq-dns-8db84466c-xqjfw" Dec 11 14:07:10 crc kubenswrapper[5050]: I1211 14:07:10.850351 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/38fac7eb-e076-4c58-9d8e-961461e27f92-dns-swift-storage-0\") pod \"dnsmasq-dns-8db84466c-xqjfw\" (UID: \"38fac7eb-e076-4c58-9d8e-961461e27f92\") " pod="openstack/dnsmasq-dns-8db84466c-xqjfw" Dec 11 14:07:10 crc kubenswrapper[5050]: I1211 14:07:10.850393 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/38fac7eb-e076-4c58-9d8e-961461e27f92-ovsdbserver-sb\") pod \"dnsmasq-dns-8db84466c-xqjfw\" (UID: \"38fac7eb-e076-4c58-9d8e-961461e27f92\") " pod="openstack/dnsmasq-dns-8db84466c-xqjfw" Dec 11 14:07:10 crc kubenswrapper[5050]: I1211 14:07:10.850399 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38fac7eb-e076-4c58-9d8e-961461e27f92-config\") pod \"dnsmasq-dns-8db84466c-xqjfw\" (UID: \"38fac7eb-e076-4c58-9d8e-961461e27f92\") " pod="openstack/dnsmasq-dns-8db84466c-xqjfw" Dec 11 14:07:10 crc kubenswrapper[5050]: I1211 14:07:10.850496 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/38fac7eb-e076-4c58-9d8e-961461e27f92-ovsdbserver-nb\") pod \"dnsmasq-dns-8db84466c-xqjfw\" (UID: \"38fac7eb-e076-4c58-9d8e-961461e27f92\") " pod="openstack/dnsmasq-dns-8db84466c-xqjfw" Dec 11 14:07:10 crc kubenswrapper[5050]: I1211 14:07:10.850496 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/38fac7eb-e076-4c58-9d8e-961461e27f92-dns-svc\") pod 
\"dnsmasq-dns-8db84466c-xqjfw\" (UID: \"38fac7eb-e076-4c58-9d8e-961461e27f92\") " pod="openstack/dnsmasq-dns-8db84466c-xqjfw" Dec 11 14:07:10 crc kubenswrapper[5050]: I1211 14:07:10.874777 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zgbld\" (UniqueName: \"kubernetes.io/projected/38fac7eb-e076-4c58-9d8e-961461e27f92-kube-api-access-zgbld\") pod \"dnsmasq-dns-8db84466c-xqjfw\" (UID: \"38fac7eb-e076-4c58-9d8e-961461e27f92\") " pod="openstack/dnsmasq-dns-8db84466c-xqjfw" Dec 11 14:07:10 crc kubenswrapper[5050]: I1211 14:07:10.952721 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8db84466c-xqjfw" Dec 11 14:07:11 crc kubenswrapper[5050]: I1211 14:07:11.171408 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Dec 11 14:07:11 crc kubenswrapper[5050]: I1211 14:07:11.264200 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8db84466c-xqjfw"] Dec 11 14:07:12 crc kubenswrapper[5050]: I1211 14:07:12.206852 5050 generic.go:334] "Generic (PLEG): container finished" podID="38fac7eb-e076-4c58-9d8e-961461e27f92" containerID="e2177809717135e23043125b0f0aada3f38012dcec64a3abac11fe875f2a1baa" exitCode=0 Dec 11 14:07:12 crc kubenswrapper[5050]: I1211 14:07:12.206947 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8db84466c-xqjfw" event={"ID":"38fac7eb-e076-4c58-9d8e-961461e27f92","Type":"ContainerDied","Data":"e2177809717135e23043125b0f0aada3f38012dcec64a3abac11fe875f2a1baa"} Dec 11 14:07:12 crc kubenswrapper[5050]: I1211 14:07:12.207424 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8db84466c-xqjfw" event={"ID":"38fac7eb-e076-4c58-9d8e-961461e27f92","Type":"ContainerStarted","Data":"bb14d4056d44bd4a7eb35a2951d19dbbf0f868432b8894238d02ca665d89befa"} Dec 11 14:07:13 crc kubenswrapper[5050]: I1211 14:07:13.110099 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-hblvw"] Dec 11 14:07:13 crc kubenswrapper[5050]: I1211 14:07:13.112277 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-hblvw" Dec 11 14:07:13 crc kubenswrapper[5050]: I1211 14:07:13.130460 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-hblvw"] Dec 11 14:07:13 crc kubenswrapper[5050]: I1211 14:07:13.196078 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-862c9"] Dec 11 14:07:13 crc kubenswrapper[5050]: I1211 14:07:13.202899 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-862c9" Dec 11 14:07:13 crc kubenswrapper[5050]: I1211 14:07:13.203777 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vgdkn\" (UniqueName: \"kubernetes.io/projected/a71df796-b040-4319-bc57-96a894dada33-kube-api-access-vgdkn\") pod \"barbican-db-create-hblvw\" (UID: \"a71df796-b040-4319-bc57-96a894dada33\") " pod="openstack/barbican-db-create-hblvw" Dec 11 14:07:13 crc kubenswrapper[5050]: I1211 14:07:13.203988 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a71df796-b040-4319-bc57-96a894dada33-operator-scripts\") pod \"barbican-db-create-hblvw\" (UID: \"a71df796-b040-4319-bc57-96a894dada33\") " pod="openstack/barbican-db-create-hblvw" Dec 11 14:07:13 crc kubenswrapper[5050]: I1211 14:07:13.227628 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8db84466c-xqjfw" event={"ID":"38fac7eb-e076-4c58-9d8e-961461e27f92","Type":"ContainerStarted","Data":"ee297d5093f29c9f272b39eb275be212b9c71255851c9af6568a077917a24a37"} Dec 11 14:07:13 crc kubenswrapper[5050]: I1211 14:07:13.227921 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8db84466c-xqjfw" Dec 11 14:07:13 crc kubenswrapper[5050]: I1211 14:07:13.228158 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-27b7-account-create-update-7xmsw"] Dec 11 14:07:13 crc kubenswrapper[5050]: I1211 14:07:13.232932 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-27b7-account-create-update-7xmsw" Dec 11 14:07:13 crc kubenswrapper[5050]: I1211 14:07:13.235808 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Dec 11 14:07:13 crc kubenswrapper[5050]: I1211 14:07:13.246775 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-862c9"] Dec 11 14:07:13 crc kubenswrapper[5050]: I1211 14:07:13.260886 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-27b7-account-create-update-7xmsw"] Dec 11 14:07:13 crc kubenswrapper[5050]: I1211 14:07:13.305653 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8db84466c-xqjfw" podStartSLOduration=3.305619548 podStartE2EDuration="3.305619548s" podCreationTimestamp="2025-12-11 14:07:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 14:07:13.262284459 +0000 UTC m=+1124.106007045" watchObservedRunningTime="2025-12-11 14:07:13.305619548 +0000 UTC m=+1124.149342134" Dec 11 14:07:13 crc kubenswrapper[5050]: I1211 14:07:13.307741 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e7f3c014-a9d3-4424-be41-e87a3736a58d-operator-scripts\") pod \"cinder-27b7-account-create-update-7xmsw\" (UID: \"e7f3c014-a9d3-4424-be41-e87a3736a58d\") " pod="openstack/cinder-27b7-account-create-update-7xmsw" Dec 11 14:07:13 crc kubenswrapper[5050]: I1211 14:07:13.310101 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a71df796-b040-4319-bc57-96a894dada33-operator-scripts\") pod \"barbican-db-create-hblvw\" (UID: 
\"a71df796-b040-4319-bc57-96a894dada33\") " pod="openstack/barbican-db-create-hblvw" Dec 11 14:07:13 crc kubenswrapper[5050]: I1211 14:07:13.310237 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vhvg\" (UniqueName: \"kubernetes.io/projected/cca0032a-ceed-4a6a-9d4e-9a782c3bfe55-kube-api-access-7vhvg\") pod \"cinder-db-create-862c9\" (UID: \"cca0032a-ceed-4a6a-9d4e-9a782c3bfe55\") " pod="openstack/cinder-db-create-862c9" Dec 11 14:07:13 crc kubenswrapper[5050]: I1211 14:07:13.310291 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cca0032a-ceed-4a6a-9d4e-9a782c3bfe55-operator-scripts\") pod \"cinder-db-create-862c9\" (UID: \"cca0032a-ceed-4a6a-9d4e-9a782c3bfe55\") " pod="openstack/cinder-db-create-862c9" Dec 11 14:07:13 crc kubenswrapper[5050]: I1211 14:07:13.310330 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8zkk\" (UniqueName: \"kubernetes.io/projected/e7f3c014-a9d3-4424-be41-e87a3736a58d-kube-api-access-p8zkk\") pod \"cinder-27b7-account-create-update-7xmsw\" (UID: \"e7f3c014-a9d3-4424-be41-e87a3736a58d\") " pod="openstack/cinder-27b7-account-create-update-7xmsw" Dec 11 14:07:13 crc kubenswrapper[5050]: I1211 14:07:13.312259 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a71df796-b040-4319-bc57-96a894dada33-operator-scripts\") pod \"barbican-db-create-hblvw\" (UID: \"a71df796-b040-4319-bc57-96a894dada33\") " pod="openstack/barbican-db-create-hblvw" Dec 11 14:07:13 crc kubenswrapper[5050]: I1211 14:07:13.312350 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vgdkn\" (UniqueName: \"kubernetes.io/projected/a71df796-b040-4319-bc57-96a894dada33-kube-api-access-vgdkn\") pod \"barbican-db-create-hblvw\" (UID: \"a71df796-b040-4319-bc57-96a894dada33\") " pod="openstack/barbican-db-create-hblvw" Dec 11 14:07:13 crc kubenswrapper[5050]: I1211 14:07:13.380457 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vgdkn\" (UniqueName: \"kubernetes.io/projected/a71df796-b040-4319-bc57-96a894dada33-kube-api-access-vgdkn\") pod \"barbican-db-create-hblvw\" (UID: \"a71df796-b040-4319-bc57-96a894dada33\") " pod="openstack/barbican-db-create-hblvw" Dec 11 14:07:13 crc kubenswrapper[5050]: I1211 14:07:13.415589 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e7f3c014-a9d3-4424-be41-e87a3736a58d-operator-scripts\") pod \"cinder-27b7-account-create-update-7xmsw\" (UID: \"e7f3c014-a9d3-4424-be41-e87a3736a58d\") " pod="openstack/cinder-27b7-account-create-update-7xmsw" Dec 11 14:07:13 crc kubenswrapper[5050]: I1211 14:07:13.415714 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7vhvg\" (UniqueName: \"kubernetes.io/projected/cca0032a-ceed-4a6a-9d4e-9a782c3bfe55-kube-api-access-7vhvg\") pod \"cinder-db-create-862c9\" (UID: \"cca0032a-ceed-4a6a-9d4e-9a782c3bfe55\") " pod="openstack/cinder-db-create-862c9" Dec 11 14:07:13 crc kubenswrapper[5050]: I1211 14:07:13.415745 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/cca0032a-ceed-4a6a-9d4e-9a782c3bfe55-operator-scripts\") pod \"cinder-db-create-862c9\" (UID: \"cca0032a-ceed-4a6a-9d4e-9a782c3bfe55\") " pod="openstack/cinder-db-create-862c9" Dec 11 14:07:13 crc kubenswrapper[5050]: I1211 14:07:13.415767 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p8zkk\" (UniqueName: \"kubernetes.io/projected/e7f3c014-a9d3-4424-be41-e87a3736a58d-kube-api-access-p8zkk\") pod \"cinder-27b7-account-create-update-7xmsw\" (UID: \"e7f3c014-a9d3-4424-be41-e87a3736a58d\") " pod="openstack/cinder-27b7-account-create-update-7xmsw" Dec 11 14:07:13 crc kubenswrapper[5050]: I1211 14:07:13.417138 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e7f3c014-a9d3-4424-be41-e87a3736a58d-operator-scripts\") pod \"cinder-27b7-account-create-update-7xmsw\" (UID: \"e7f3c014-a9d3-4424-be41-e87a3736a58d\") " pod="openstack/cinder-27b7-account-create-update-7xmsw" Dec 11 14:07:13 crc kubenswrapper[5050]: I1211 14:07:13.417672 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cca0032a-ceed-4a6a-9d4e-9a782c3bfe55-operator-scripts\") pod \"cinder-db-create-862c9\" (UID: \"cca0032a-ceed-4a6a-9d4e-9a782c3bfe55\") " pod="openstack/cinder-db-create-862c9" Dec 11 14:07:13 crc kubenswrapper[5050]: I1211 14:07:13.426381 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-34ac-account-create-update-q8nhn"] Dec 11 14:07:13 crc kubenswrapper[5050]: I1211 14:07:13.427813 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-34ac-account-create-update-q8nhn" Dec 11 14:07:13 crc kubenswrapper[5050]: I1211 14:07:13.431157 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Dec 11 14:07:13 crc kubenswrapper[5050]: I1211 14:07:13.440100 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-34ac-account-create-update-q8nhn"] Dec 11 14:07:13 crc kubenswrapper[5050]: I1211 14:07:13.446498 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-hblvw" Dec 11 14:07:13 crc kubenswrapper[5050]: I1211 14:07:13.458577 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-44xtk"] Dec 11 14:07:13 crc kubenswrapper[5050]: I1211 14:07:13.460303 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-44xtk" Dec 11 14:07:13 crc kubenswrapper[5050]: I1211 14:07:13.469475 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p8zkk\" (UniqueName: \"kubernetes.io/projected/e7f3c014-a9d3-4424-be41-e87a3736a58d-kube-api-access-p8zkk\") pod \"cinder-27b7-account-create-update-7xmsw\" (UID: \"e7f3c014-a9d3-4424-be41-e87a3736a58d\") " pod="openstack/cinder-27b7-account-create-update-7xmsw" Dec 11 14:07:13 crc kubenswrapper[5050]: I1211 14:07:13.469956 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-44xtk"] Dec 11 14:07:13 crc kubenswrapper[5050]: I1211 14:07:13.470039 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7vhvg\" (UniqueName: \"kubernetes.io/projected/cca0032a-ceed-4a6a-9d4e-9a782c3bfe55-kube-api-access-7vhvg\") pod \"cinder-db-create-862c9\" (UID: \"cca0032a-ceed-4a6a-9d4e-9a782c3bfe55\") " pod="openstack/cinder-db-create-862c9" Dec 11 14:07:13 crc kubenswrapper[5050]: I1211 14:07:13.517232 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9t6q\" (UniqueName: \"kubernetes.io/projected/118a06f4-3d12-4a10-8de7-bfcb56b3f237-kube-api-access-d9t6q\") pod \"barbican-34ac-account-create-update-q8nhn\" (UID: \"118a06f4-3d12-4a10-8de7-bfcb56b3f237\") " pod="openstack/barbican-34ac-account-create-update-q8nhn" Dec 11 14:07:13 crc kubenswrapper[5050]: I1211 14:07:13.517329 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/118a06f4-3d12-4a10-8de7-bfcb56b3f237-operator-scripts\") pod \"barbican-34ac-account-create-update-q8nhn\" (UID: \"118a06f4-3d12-4a10-8de7-bfcb56b3f237\") " pod="openstack/barbican-34ac-account-create-update-q8nhn" Dec 11 14:07:13 crc kubenswrapper[5050]: I1211 14:07:13.520615 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-kfqnl"] Dec 11 14:07:13 crc kubenswrapper[5050]: I1211 14:07:13.522226 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-kfqnl" Dec 11 14:07:13 crc kubenswrapper[5050]: I1211 14:07:13.527284 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Dec 11 14:07:13 crc kubenswrapper[5050]: I1211 14:07:13.527510 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-tckgc" Dec 11 14:07:13 crc kubenswrapper[5050]: I1211 14:07:13.527827 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Dec 11 14:07:13 crc kubenswrapper[5050]: I1211 14:07:13.527975 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Dec 11 14:07:13 crc kubenswrapper[5050]: I1211 14:07:13.531690 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-kfqnl"] Dec 11 14:07:13 crc kubenswrapper[5050]: I1211 14:07:13.532116 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-862c9" Dec 11 14:07:13 crc kubenswrapper[5050]: I1211 14:07:13.557547 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-27b7-account-create-update-7xmsw" Dec 11 14:07:13 crc kubenswrapper[5050]: I1211 14:07:13.619159 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9f0ade5-6144-4596-a78b-afeca167af55-config-data\") pod \"keystone-db-sync-kfqnl\" (UID: \"e9f0ade5-6144-4596-a78b-afeca167af55\") " pod="openstack/keystone-db-sync-kfqnl" Dec 11 14:07:13 crc kubenswrapper[5050]: I1211 14:07:13.619472 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d9t6q\" (UniqueName: \"kubernetes.io/projected/118a06f4-3d12-4a10-8de7-bfcb56b3f237-kube-api-access-d9t6q\") pod \"barbican-34ac-account-create-update-q8nhn\" (UID: \"118a06f4-3d12-4a10-8de7-bfcb56b3f237\") " pod="openstack/barbican-34ac-account-create-update-q8nhn" Dec 11 14:07:13 crc kubenswrapper[5050]: I1211 14:07:13.619747 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/118a06f4-3d12-4a10-8de7-bfcb56b3f237-operator-scripts\") pod \"barbican-34ac-account-create-update-q8nhn\" (UID: \"118a06f4-3d12-4a10-8de7-bfcb56b3f237\") " pod="openstack/barbican-34ac-account-create-update-q8nhn" Dec 11 14:07:13 crc kubenswrapper[5050]: I1211 14:07:13.619857 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4qb7\" (UniqueName: \"kubernetes.io/projected/e9f0ade5-6144-4596-a78b-afeca167af55-kube-api-access-w4qb7\") pod \"keystone-db-sync-kfqnl\" (UID: \"e9f0ade5-6144-4596-a78b-afeca167af55\") " pod="openstack/keystone-db-sync-kfqnl" Dec 11 14:07:13 crc kubenswrapper[5050]: I1211 14:07:13.619920 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f6bb80a3-78fe-4854-91bf-69a0f93a2f48-operator-scripts\") pod \"neutron-db-create-44xtk\" (UID: \"f6bb80a3-78fe-4854-91bf-69a0f93a2f48\") " pod="openstack/neutron-db-create-44xtk" Dec 11 14:07:13 crc kubenswrapper[5050]: I1211 14:07:13.620260 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5rc6\" (UniqueName: \"kubernetes.io/projected/f6bb80a3-78fe-4854-91bf-69a0f93a2f48-kube-api-access-q5rc6\") pod \"neutron-db-create-44xtk\" (UID: \"f6bb80a3-78fe-4854-91bf-69a0f93a2f48\") " pod="openstack/neutron-db-create-44xtk" Dec 11 14:07:13 crc kubenswrapper[5050]: I1211 14:07:13.620346 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9f0ade5-6144-4596-a78b-afeca167af55-combined-ca-bundle\") pod \"keystone-db-sync-kfqnl\" (UID: \"e9f0ade5-6144-4596-a78b-afeca167af55\") " pod="openstack/keystone-db-sync-kfqnl" Dec 11 14:07:13 crc kubenswrapper[5050]: I1211 14:07:13.620906 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/118a06f4-3d12-4a10-8de7-bfcb56b3f237-operator-scripts\") pod \"barbican-34ac-account-create-update-q8nhn\" (UID: \"118a06f4-3d12-4a10-8de7-bfcb56b3f237\") " pod="openstack/barbican-34ac-account-create-update-q8nhn" Dec 11 14:07:13 crc kubenswrapper[5050]: I1211 14:07:13.645292 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d9t6q\" (UniqueName: 
\"kubernetes.io/projected/118a06f4-3d12-4a10-8de7-bfcb56b3f237-kube-api-access-d9t6q\") pod \"barbican-34ac-account-create-update-q8nhn\" (UID: \"118a06f4-3d12-4a10-8de7-bfcb56b3f237\") " pod="openstack/barbican-34ac-account-create-update-q8nhn" Dec 11 14:07:13 crc kubenswrapper[5050]: I1211 14:07:13.721812 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-7594-account-create-update-9fmt4"] Dec 11 14:07:13 crc kubenswrapper[5050]: I1211 14:07:13.723370 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q5rc6\" (UniqueName: \"kubernetes.io/projected/f6bb80a3-78fe-4854-91bf-69a0f93a2f48-kube-api-access-q5rc6\") pod \"neutron-db-create-44xtk\" (UID: \"f6bb80a3-78fe-4854-91bf-69a0f93a2f48\") " pod="openstack/neutron-db-create-44xtk" Dec 11 14:07:13 crc kubenswrapper[5050]: I1211 14:07:13.723450 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9f0ade5-6144-4596-a78b-afeca167af55-combined-ca-bundle\") pod \"keystone-db-sync-kfqnl\" (UID: \"e9f0ade5-6144-4596-a78b-afeca167af55\") " pod="openstack/keystone-db-sync-kfqnl" Dec 11 14:07:13 crc kubenswrapper[5050]: I1211 14:07:13.723507 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9f0ade5-6144-4596-a78b-afeca167af55-config-data\") pod \"keystone-db-sync-kfqnl\" (UID: \"e9f0ade5-6144-4596-a78b-afeca167af55\") " pod="openstack/keystone-db-sync-kfqnl" Dec 11 14:07:13 crc kubenswrapper[5050]: I1211 14:07:13.723637 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w4qb7\" (UniqueName: \"kubernetes.io/projected/e9f0ade5-6144-4596-a78b-afeca167af55-kube-api-access-w4qb7\") pod \"keystone-db-sync-kfqnl\" (UID: \"e9f0ade5-6144-4596-a78b-afeca167af55\") " pod="openstack/keystone-db-sync-kfqnl" Dec 11 14:07:13 crc kubenswrapper[5050]: I1211 14:07:13.723667 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f6bb80a3-78fe-4854-91bf-69a0f93a2f48-operator-scripts\") pod \"neutron-db-create-44xtk\" (UID: \"f6bb80a3-78fe-4854-91bf-69a0f93a2f48\") " pod="openstack/neutron-db-create-44xtk" Dec 11 14:07:13 crc kubenswrapper[5050]: I1211 14:07:13.724119 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-7594-account-create-update-9fmt4" Dec 11 14:07:13 crc kubenswrapper[5050]: I1211 14:07:13.724595 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f6bb80a3-78fe-4854-91bf-69a0f93a2f48-operator-scripts\") pod \"neutron-db-create-44xtk\" (UID: \"f6bb80a3-78fe-4854-91bf-69a0f93a2f48\") " pod="openstack/neutron-db-create-44xtk" Dec 11 14:07:13 crc kubenswrapper[5050]: I1211 14:07:13.728839 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Dec 11 14:07:13 crc kubenswrapper[5050]: I1211 14:07:13.729984 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9f0ade5-6144-4596-a78b-afeca167af55-config-data\") pod \"keystone-db-sync-kfqnl\" (UID: \"e9f0ade5-6144-4596-a78b-afeca167af55\") " pod="openstack/keystone-db-sync-kfqnl" Dec 11 14:07:13 crc kubenswrapper[5050]: I1211 14:07:13.748858 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9f0ade5-6144-4596-a78b-afeca167af55-combined-ca-bundle\") pod \"keystone-db-sync-kfqnl\" (UID: \"e9f0ade5-6144-4596-a78b-afeca167af55\") " pod="openstack/keystone-db-sync-kfqnl" Dec 11 14:07:13 crc kubenswrapper[5050]: I1211 14:07:13.753353 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7594-account-create-update-9fmt4"] Dec 11 14:07:13 crc kubenswrapper[5050]: I1211 14:07:13.766757 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q5rc6\" (UniqueName: \"kubernetes.io/projected/f6bb80a3-78fe-4854-91bf-69a0f93a2f48-kube-api-access-q5rc6\") pod \"neutron-db-create-44xtk\" (UID: \"f6bb80a3-78fe-4854-91bf-69a0f93a2f48\") " pod="openstack/neutron-db-create-44xtk" Dec 11 14:07:13 crc kubenswrapper[5050]: I1211 14:07:13.771757 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w4qb7\" (UniqueName: \"kubernetes.io/projected/e9f0ade5-6144-4596-a78b-afeca167af55-kube-api-access-w4qb7\") pod \"keystone-db-sync-kfqnl\" (UID: \"e9f0ade5-6144-4596-a78b-afeca167af55\") " pod="openstack/keystone-db-sync-kfqnl" Dec 11 14:07:14 crc kubenswrapper[5050]: I1211 14:07:13.826557 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8169d760-5539-44ed-9586-6dd71f7fcda5-operator-scripts\") pod \"neutron-7594-account-create-update-9fmt4\" (UID: \"8169d760-5539-44ed-9586-6dd71f7fcda5\") " pod="openstack/neutron-7594-account-create-update-9fmt4" Dec 11 14:07:14 crc kubenswrapper[5050]: I1211 14:07:13.827023 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nh9mr\" (UniqueName: \"kubernetes.io/projected/8169d760-5539-44ed-9586-6dd71f7fcda5-kube-api-access-nh9mr\") pod \"neutron-7594-account-create-update-9fmt4\" (UID: \"8169d760-5539-44ed-9586-6dd71f7fcda5\") " pod="openstack/neutron-7594-account-create-update-9fmt4" Dec 11 14:07:14 crc kubenswrapper[5050]: I1211 14:07:13.882251 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-34ac-account-create-update-q8nhn" Dec 11 14:07:14 crc kubenswrapper[5050]: I1211 14:07:13.929159 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nh9mr\" (UniqueName: \"kubernetes.io/projected/8169d760-5539-44ed-9586-6dd71f7fcda5-kube-api-access-nh9mr\") pod \"neutron-7594-account-create-update-9fmt4\" (UID: \"8169d760-5539-44ed-9586-6dd71f7fcda5\") " pod="openstack/neutron-7594-account-create-update-9fmt4" Dec 11 14:07:14 crc kubenswrapper[5050]: I1211 14:07:13.929691 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8169d760-5539-44ed-9586-6dd71f7fcda5-operator-scripts\") pod \"neutron-7594-account-create-update-9fmt4\" (UID: \"8169d760-5539-44ed-9586-6dd71f7fcda5\") " pod="openstack/neutron-7594-account-create-update-9fmt4" Dec 11 14:07:14 crc kubenswrapper[5050]: I1211 14:07:13.930573 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8169d760-5539-44ed-9586-6dd71f7fcda5-operator-scripts\") pod \"neutron-7594-account-create-update-9fmt4\" (UID: \"8169d760-5539-44ed-9586-6dd71f7fcda5\") " pod="openstack/neutron-7594-account-create-update-9fmt4" Dec 11 14:07:14 crc kubenswrapper[5050]: I1211 14:07:13.937225 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-hblvw"] Dec 11 14:07:14 crc kubenswrapper[5050]: I1211 14:07:13.940410 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-44xtk" Dec 11 14:07:14 crc kubenswrapper[5050]: W1211 14:07:13.940931 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda71df796_b040_4319_bc57_96a894dada33.slice/crio-c498dbb407986a11352e8397b6495d643127c9617cc81d2035d80783a7a9e2b1 WatchSource:0}: Error finding container c498dbb407986a11352e8397b6495d643127c9617cc81d2035d80783a7a9e2b1: Status 404 returned error can't find the container with id c498dbb407986a11352e8397b6495d643127c9617cc81d2035d80783a7a9e2b1 Dec 11 14:07:14 crc kubenswrapper[5050]: I1211 14:07:13.949469 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-kfqnl" Dec 11 14:07:14 crc kubenswrapper[5050]: I1211 14:07:13.951440 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nh9mr\" (UniqueName: \"kubernetes.io/projected/8169d760-5539-44ed-9586-6dd71f7fcda5-kube-api-access-nh9mr\") pod \"neutron-7594-account-create-update-9fmt4\" (UID: \"8169d760-5539-44ed-9586-6dd71f7fcda5\") " pod="openstack/neutron-7594-account-create-update-9fmt4" Dec 11 14:07:14 crc kubenswrapper[5050]: I1211 14:07:14.085529 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-7594-account-create-update-9fmt4" Dec 11 14:07:14 crc kubenswrapper[5050]: I1211 14:07:14.238380 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-hblvw" event={"ID":"a71df796-b040-4319-bc57-96a894dada33","Type":"ContainerStarted","Data":"c498dbb407986a11352e8397b6495d643127c9617cc81d2035d80783a7a9e2b1"} Dec 11 14:07:15 crc kubenswrapper[5050]: I1211 14:07:15.012288 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-862c9"] Dec 11 14:07:15 crc kubenswrapper[5050]: I1211 14:07:15.033705 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-34ac-account-create-update-q8nhn"] Dec 11 14:07:15 crc kubenswrapper[5050]: I1211 14:07:15.048355 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-27b7-account-create-update-7xmsw"] Dec 11 14:07:15 crc kubenswrapper[5050]: I1211 14:07:15.055752 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-44xtk"] Dec 11 14:07:15 crc kubenswrapper[5050]: W1211 14:07:15.064418 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode7f3c014_a9d3_4424_be41_e87a3736a58d.slice/crio-a0a18efabf1d63bde37ba29d7df85e73b01494fd4c00e992fb600279dbbb4040 WatchSource:0}: Error finding container a0a18efabf1d63bde37ba29d7df85e73b01494fd4c00e992fb600279dbbb4040: Status 404 returned error can't find the container with id a0a18efabf1d63bde37ba29d7df85e73b01494fd4c00e992fb600279dbbb4040 Dec 11 14:07:15 crc kubenswrapper[5050]: I1211 14:07:15.067966 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7594-account-create-update-9fmt4"] Dec 11 14:07:15 crc kubenswrapper[5050]: I1211 14:07:15.076425 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-kfqnl"] Dec 11 14:07:15 crc kubenswrapper[5050]: I1211 14:07:15.295244 5050 generic.go:334] "Generic (PLEG): container finished" podID="a71df796-b040-4319-bc57-96a894dada33" containerID="2f62aeb6162bd178083d9d882bc213a977e97d767cf138a5471e9b6d54190929" exitCode=0 Dec 11 14:07:15 crc kubenswrapper[5050]: I1211 14:07:15.295344 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-hblvw" event={"ID":"a71df796-b040-4319-bc57-96a894dada33","Type":"ContainerDied","Data":"2f62aeb6162bd178083d9d882bc213a977e97d767cf138a5471e9b6d54190929"} Dec 11 14:07:15 crc kubenswrapper[5050]: I1211 14:07:15.303003 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-27b7-account-create-update-7xmsw" event={"ID":"e7f3c014-a9d3-4424-be41-e87a3736a58d","Type":"ContainerStarted","Data":"a0a18efabf1d63bde37ba29d7df85e73b01494fd4c00e992fb600279dbbb4040"} Dec 11 14:07:15 crc kubenswrapper[5050]: I1211 14:07:15.306086 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-44xtk" event={"ID":"f6bb80a3-78fe-4854-91bf-69a0f93a2f48","Type":"ContainerStarted","Data":"a5a35cc0475f773c22f0e6eb1053c6702e366dcc9bcb439a4403e45593bc7cd2"} Dec 11 14:07:15 crc kubenswrapper[5050]: I1211 14:07:15.311380 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-j5zml" event={"ID":"669fc9ec-b625-44f9-bd15-bc8a79158127","Type":"ContainerStarted","Data":"08bfa765d647f306601d0abaff12d769ea8332592ea0f0283de458df6c5e5537"} Dec 11 14:07:15 crc kubenswrapper[5050]: I1211 14:07:15.326393 5050 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/barbican-34ac-account-create-update-q8nhn" event={"ID":"118a06f4-3d12-4a10-8de7-bfcb56b3f237","Type":"ContainerStarted","Data":"54e5e003a13ffa09504e7db48660a4aba77c1f34a5760db5b470faf7ae8446f3"} Dec 11 14:07:15 crc kubenswrapper[5050]: I1211 14:07:15.329604 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-862c9" event={"ID":"cca0032a-ceed-4a6a-9d4e-9a782c3bfe55","Type":"ContainerStarted","Data":"06a2ba886c161be3bc559f039bdb03390467da14228c67ba586a86cfecac43cb"} Dec 11 14:07:15 crc kubenswrapper[5050]: I1211 14:07:15.332314 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-kfqnl" event={"ID":"e9f0ade5-6144-4596-a78b-afeca167af55","Type":"ContainerStarted","Data":"ad7cb902bef92cd82641483a5191b767112f9a08679a690ed153700e5a185b60"} Dec 11 14:07:15 crc kubenswrapper[5050]: I1211 14:07:15.343135 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7594-account-create-update-9fmt4" event={"ID":"8169d760-5539-44ed-9586-6dd71f7fcda5","Type":"ContainerStarted","Data":"5a08db2a5329f55f3a71fa96d3f6bd21e9f6279b14537ca6f733293cd168988a"} Dec 11 14:07:15 crc kubenswrapper[5050]: I1211 14:07:15.367750 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-j5zml" podStartSLOduration=2.42424426 podStartE2EDuration="31.367720814s" podCreationTimestamp="2025-12-11 14:06:44 +0000 UTC" firstStartedPulling="2025-12-11 14:06:45.409372548 +0000 UTC m=+1096.253095134" lastFinishedPulling="2025-12-11 14:07:14.352849102 +0000 UTC m=+1125.196571688" observedRunningTime="2025-12-11 14:07:15.338156687 +0000 UTC m=+1126.181879303" watchObservedRunningTime="2025-12-11 14:07:15.367720814 +0000 UTC m=+1126.211443400" Dec 11 14:07:16 crc kubenswrapper[5050]: I1211 14:07:16.359368 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-44xtk" event={"ID":"f6bb80a3-78fe-4854-91bf-69a0f93a2f48","Type":"ContainerStarted","Data":"7b47caabbd51a8e4fa31011df4b0b71c1cfb2074ee3115985077cd353b1679e4"} Dec 11 14:07:16 crc kubenswrapper[5050]: I1211 14:07:16.364135 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-34ac-account-create-update-q8nhn" event={"ID":"118a06f4-3d12-4a10-8de7-bfcb56b3f237","Type":"ContainerStarted","Data":"70921762a5cd41a13f21b3df228b676e8d09da1a291c372ac76bbe1b1e001aa6"} Dec 11 14:07:16 crc kubenswrapper[5050]: I1211 14:07:16.366078 5050 generic.go:334] "Generic (PLEG): container finished" podID="cca0032a-ceed-4a6a-9d4e-9a782c3bfe55" containerID="9438fa89f17dcb4f6482af1d497bd7752b9ebcbd02295a6a5a1d83d614b1180a" exitCode=0 Dec 11 14:07:16 crc kubenswrapper[5050]: I1211 14:07:16.366304 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-862c9" event={"ID":"cca0032a-ceed-4a6a-9d4e-9a782c3bfe55","Type":"ContainerDied","Data":"9438fa89f17dcb4f6482af1d497bd7752b9ebcbd02295a6a5a1d83d614b1180a"} Dec 11 14:07:16 crc kubenswrapper[5050]: I1211 14:07:16.369046 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7594-account-create-update-9fmt4" event={"ID":"8169d760-5539-44ed-9586-6dd71f7fcda5","Type":"ContainerStarted","Data":"88c804a2ae7858f1fafb5a1d5c8ca6fe31381dd1f0e6ee9034716872440fe5b4"} Dec 11 14:07:16 crc kubenswrapper[5050]: I1211 14:07:16.371343 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-27b7-account-create-update-7xmsw" 
event={"ID":"e7f3c014-a9d3-4424-be41-e87a3736a58d","Type":"ContainerStarted","Data":"68ec03639f4c9549411c965fea1c418136ebf64d20c05ca0423c32eb7e1ab199"} Dec 11 14:07:16 crc kubenswrapper[5050]: I1211 14:07:16.384447 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-create-44xtk" podStartSLOduration=3.384430166 podStartE2EDuration="3.384430166s" podCreationTimestamp="2025-12-11 14:07:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 14:07:16.380586683 +0000 UTC m=+1127.224309269" watchObservedRunningTime="2025-12-11 14:07:16.384430166 +0000 UTC m=+1127.228152752" Dec 11 14:07:16 crc kubenswrapper[5050]: I1211 14:07:16.398208 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-34ac-account-create-update-q8nhn" podStartSLOduration=3.398185627 podStartE2EDuration="3.398185627s" podCreationTimestamp="2025-12-11 14:07:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 14:07:16.396775419 +0000 UTC m=+1127.240498025" watchObservedRunningTime="2025-12-11 14:07:16.398185627 +0000 UTC m=+1127.241908213" Dec 11 14:07:16 crc kubenswrapper[5050]: I1211 14:07:16.417913 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-27b7-account-create-update-7xmsw" podStartSLOduration=3.417879498 podStartE2EDuration="3.417879498s" podCreationTimestamp="2025-12-11 14:07:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 14:07:16.414539898 +0000 UTC m=+1127.258262484" watchObservedRunningTime="2025-12-11 14:07:16.417879498 +0000 UTC m=+1127.261602084" Dec 11 14:07:16 crc kubenswrapper[5050]: I1211 14:07:16.751246 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-hblvw" Dec 11 14:07:16 crc kubenswrapper[5050]: I1211 14:07:16.776239 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-7594-account-create-update-9fmt4" podStartSLOduration=3.77621881 podStartE2EDuration="3.77621881s" podCreationTimestamp="2025-12-11 14:07:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 14:07:16.454812704 +0000 UTC m=+1127.298535290" watchObservedRunningTime="2025-12-11 14:07:16.77621881 +0000 UTC m=+1127.619941406" Dec 11 14:07:16 crc kubenswrapper[5050]: I1211 14:07:16.866740 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a71df796-b040-4319-bc57-96a894dada33-operator-scripts\") pod \"a71df796-b040-4319-bc57-96a894dada33\" (UID: \"a71df796-b040-4319-bc57-96a894dada33\") " Dec 11 14:07:16 crc kubenswrapper[5050]: I1211 14:07:16.866951 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vgdkn\" (UniqueName: \"kubernetes.io/projected/a71df796-b040-4319-bc57-96a894dada33-kube-api-access-vgdkn\") pod \"a71df796-b040-4319-bc57-96a894dada33\" (UID: \"a71df796-b040-4319-bc57-96a894dada33\") " Dec 11 14:07:16 crc kubenswrapper[5050]: I1211 14:07:16.868002 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a71df796-b040-4319-bc57-96a894dada33-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a71df796-b040-4319-bc57-96a894dada33" (UID: "a71df796-b040-4319-bc57-96a894dada33"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 14:07:16 crc kubenswrapper[5050]: I1211 14:07:16.876583 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a71df796-b040-4319-bc57-96a894dada33-kube-api-access-vgdkn" (OuterVolumeSpecName: "kube-api-access-vgdkn") pod "a71df796-b040-4319-bc57-96a894dada33" (UID: "a71df796-b040-4319-bc57-96a894dada33"). InnerVolumeSpecName "kube-api-access-vgdkn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 14:07:16 crc kubenswrapper[5050]: I1211 14:07:16.969248 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vgdkn\" (UniqueName: \"kubernetes.io/projected/a71df796-b040-4319-bc57-96a894dada33-kube-api-access-vgdkn\") on node \"crc\" DevicePath \"\"" Dec 11 14:07:16 crc kubenswrapper[5050]: I1211 14:07:16.969304 5050 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a71df796-b040-4319-bc57-96a894dada33-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 14:07:17 crc kubenswrapper[5050]: I1211 14:07:17.384683 5050 generic.go:334] "Generic (PLEG): container finished" podID="e7f3c014-a9d3-4424-be41-e87a3736a58d" containerID="68ec03639f4c9549411c965fea1c418136ebf64d20c05ca0423c32eb7e1ab199" exitCode=0 Dec 11 14:07:17 crc kubenswrapper[5050]: I1211 14:07:17.384808 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-27b7-account-create-update-7xmsw" event={"ID":"e7f3c014-a9d3-4424-be41-e87a3736a58d","Type":"ContainerDied","Data":"68ec03639f4c9549411c965fea1c418136ebf64d20c05ca0423c32eb7e1ab199"} Dec 11 14:07:17 crc kubenswrapper[5050]: I1211 14:07:17.386593 5050 generic.go:334] "Generic (PLEG): container finished" podID="f6bb80a3-78fe-4854-91bf-69a0f93a2f48" containerID="7b47caabbd51a8e4fa31011df4b0b71c1cfb2074ee3115985077cd353b1679e4" exitCode=0 Dec 11 14:07:17 crc kubenswrapper[5050]: I1211 14:07:17.386682 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-44xtk" event={"ID":"f6bb80a3-78fe-4854-91bf-69a0f93a2f48","Type":"ContainerDied","Data":"7b47caabbd51a8e4fa31011df4b0b71c1cfb2074ee3115985077cd353b1679e4"} Dec 11 14:07:17 crc kubenswrapper[5050]: I1211 14:07:17.393414 5050 generic.go:334] "Generic (PLEG): container finished" podID="118a06f4-3d12-4a10-8de7-bfcb56b3f237" containerID="70921762a5cd41a13f21b3df228b676e8d09da1a291c372ac76bbe1b1e001aa6" exitCode=0 Dec 11 14:07:17 crc kubenswrapper[5050]: I1211 14:07:17.393528 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-34ac-account-create-update-q8nhn" event={"ID":"118a06f4-3d12-4a10-8de7-bfcb56b3f237","Type":"ContainerDied","Data":"70921762a5cd41a13f21b3df228b676e8d09da1a291c372ac76bbe1b1e001aa6"} Dec 11 14:07:17 crc kubenswrapper[5050]: I1211 14:07:17.397716 5050 generic.go:334] "Generic (PLEG): container finished" podID="8169d760-5539-44ed-9586-6dd71f7fcda5" containerID="88c804a2ae7858f1fafb5a1d5c8ca6fe31381dd1f0e6ee9034716872440fe5b4" exitCode=0 Dec 11 14:07:17 crc kubenswrapper[5050]: I1211 14:07:17.397844 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7594-account-create-update-9fmt4" event={"ID":"8169d760-5539-44ed-9586-6dd71f7fcda5","Type":"ContainerDied","Data":"88c804a2ae7858f1fafb5a1d5c8ca6fe31381dd1f0e6ee9034716872440fe5b4"} Dec 11 14:07:17 crc kubenswrapper[5050]: I1211 14:07:17.410773 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-hblvw" Dec 11 14:07:17 crc kubenswrapper[5050]: I1211 14:07:17.410767 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-hblvw" event={"ID":"a71df796-b040-4319-bc57-96a894dada33","Type":"ContainerDied","Data":"c498dbb407986a11352e8397b6495d643127c9617cc81d2035d80783a7a9e2b1"} Dec 11 14:07:17 crc kubenswrapper[5050]: I1211 14:07:17.410853 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c498dbb407986a11352e8397b6495d643127c9617cc81d2035d80783a7a9e2b1" Dec 11 14:07:17 crc kubenswrapper[5050]: I1211 14:07:17.741767 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-862c9" Dec 11 14:07:17 crc kubenswrapper[5050]: I1211 14:07:17.906746 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7vhvg\" (UniqueName: \"kubernetes.io/projected/cca0032a-ceed-4a6a-9d4e-9a782c3bfe55-kube-api-access-7vhvg\") pod \"cca0032a-ceed-4a6a-9d4e-9a782c3bfe55\" (UID: \"cca0032a-ceed-4a6a-9d4e-9a782c3bfe55\") " Dec 11 14:07:17 crc kubenswrapper[5050]: I1211 14:07:17.907128 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cca0032a-ceed-4a6a-9d4e-9a782c3bfe55-operator-scripts\") pod \"cca0032a-ceed-4a6a-9d4e-9a782c3bfe55\" (UID: \"cca0032a-ceed-4a6a-9d4e-9a782c3bfe55\") " Dec 11 14:07:17 crc kubenswrapper[5050]: I1211 14:07:17.908140 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cca0032a-ceed-4a6a-9d4e-9a782c3bfe55-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "cca0032a-ceed-4a6a-9d4e-9a782c3bfe55" (UID: "cca0032a-ceed-4a6a-9d4e-9a782c3bfe55"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 14:07:17 crc kubenswrapper[5050]: I1211 14:07:17.919219 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cca0032a-ceed-4a6a-9d4e-9a782c3bfe55-kube-api-access-7vhvg" (OuterVolumeSpecName: "kube-api-access-7vhvg") pod "cca0032a-ceed-4a6a-9d4e-9a782c3bfe55" (UID: "cca0032a-ceed-4a6a-9d4e-9a782c3bfe55"). InnerVolumeSpecName "kube-api-access-7vhvg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 14:07:18 crc kubenswrapper[5050]: I1211 14:07:18.008604 5050 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cca0032a-ceed-4a6a-9d4e-9a782c3bfe55-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 14:07:18 crc kubenswrapper[5050]: I1211 14:07:18.008652 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7vhvg\" (UniqueName: \"kubernetes.io/projected/cca0032a-ceed-4a6a-9d4e-9a782c3bfe55-kube-api-access-7vhvg\") on node \"crc\" DevicePath \"\"" Dec 11 14:07:18 crc kubenswrapper[5050]: I1211 14:07:18.426729 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-862c9" event={"ID":"cca0032a-ceed-4a6a-9d4e-9a782c3bfe55","Type":"ContainerDied","Data":"06a2ba886c161be3bc559f039bdb03390467da14228c67ba586a86cfecac43cb"} Dec 11 14:07:18 crc kubenswrapper[5050]: I1211 14:07:18.426839 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="06a2ba886c161be3bc559f039bdb03390467da14228c67ba586a86cfecac43cb" Dec 11 14:07:18 crc kubenswrapper[5050]: I1211 14:07:18.426864 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-862c9" Dec 11 14:07:20 crc kubenswrapper[5050]: I1211 14:07:20.451883 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7594-account-create-update-9fmt4" event={"ID":"8169d760-5539-44ed-9586-6dd71f7fcda5","Type":"ContainerDied","Data":"5a08db2a5329f55f3a71fa96d3f6bd21e9f6279b14537ca6f733293cd168988a"} Dec 11 14:07:20 crc kubenswrapper[5050]: I1211 14:07:20.452435 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5a08db2a5329f55f3a71fa96d3f6bd21e9f6279b14537ca6f733293cd168988a" Dec 11 14:07:20 crc kubenswrapper[5050]: I1211 14:07:20.454726 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-27b7-account-create-update-7xmsw" event={"ID":"e7f3c014-a9d3-4424-be41-e87a3736a58d","Type":"ContainerDied","Data":"a0a18efabf1d63bde37ba29d7df85e73b01494fd4c00e992fb600279dbbb4040"} Dec 11 14:07:20 crc kubenswrapper[5050]: I1211 14:07:20.454819 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a0a18efabf1d63bde37ba29d7df85e73b01494fd4c00e992fb600279dbbb4040" Dec 11 14:07:20 crc kubenswrapper[5050]: I1211 14:07:20.456970 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-44xtk" event={"ID":"f6bb80a3-78fe-4854-91bf-69a0f93a2f48","Type":"ContainerDied","Data":"a5a35cc0475f773c22f0e6eb1053c6702e366dcc9bcb439a4403e45593bc7cd2"} Dec 11 14:07:20 crc kubenswrapper[5050]: I1211 14:07:20.457096 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a5a35cc0475f773c22f0e6eb1053c6702e366dcc9bcb439a4403e45593bc7cd2" Dec 11 14:07:20 crc kubenswrapper[5050]: I1211 14:07:20.459684 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-34ac-account-create-update-q8nhn" event={"ID":"118a06f4-3d12-4a10-8de7-bfcb56b3f237","Type":"ContainerDied","Data":"54e5e003a13ffa09504e7db48660a4aba77c1f34a5760db5b470faf7ae8446f3"} Dec 11 14:07:20 crc kubenswrapper[5050]: I1211 14:07:20.459717 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="54e5e003a13ffa09504e7db48660a4aba77c1f34a5760db5b470faf7ae8446f3" Dec 11 14:07:20 crc kubenswrapper[5050]: I1211 14:07:20.642171 5050 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-44xtk" Dec 11 14:07:20 crc kubenswrapper[5050]: I1211 14:07:20.651676 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-27b7-account-create-update-7xmsw" Dec 11 14:07:20 crc kubenswrapper[5050]: I1211 14:07:20.691443 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7594-account-create-update-9fmt4" Dec 11 14:07:20 crc kubenswrapper[5050]: I1211 14:07:20.696502 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-34ac-account-create-update-q8nhn" Dec 11 14:07:20 crc kubenswrapper[5050]: I1211 14:07:20.763176 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q5rc6\" (UniqueName: \"kubernetes.io/projected/f6bb80a3-78fe-4854-91bf-69a0f93a2f48-kube-api-access-q5rc6\") pod \"f6bb80a3-78fe-4854-91bf-69a0f93a2f48\" (UID: \"f6bb80a3-78fe-4854-91bf-69a0f93a2f48\") " Dec 11 14:07:20 crc kubenswrapper[5050]: I1211 14:07:20.763417 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f6bb80a3-78fe-4854-91bf-69a0f93a2f48-operator-scripts\") pod \"f6bb80a3-78fe-4854-91bf-69a0f93a2f48\" (UID: \"f6bb80a3-78fe-4854-91bf-69a0f93a2f48\") " Dec 11 14:07:20 crc kubenswrapper[5050]: I1211 14:07:20.763460 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p8zkk\" (UniqueName: \"kubernetes.io/projected/e7f3c014-a9d3-4424-be41-e87a3736a58d-kube-api-access-p8zkk\") pod \"e7f3c014-a9d3-4424-be41-e87a3736a58d\" (UID: \"e7f3c014-a9d3-4424-be41-e87a3736a58d\") " Dec 11 14:07:20 crc kubenswrapper[5050]: I1211 14:07:20.763539 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e7f3c014-a9d3-4424-be41-e87a3736a58d-operator-scripts\") pod \"e7f3c014-a9d3-4424-be41-e87a3736a58d\" (UID: \"e7f3c014-a9d3-4424-be41-e87a3736a58d\") " Dec 11 14:07:20 crc kubenswrapper[5050]: I1211 14:07:20.764859 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7f3c014-a9d3-4424-be41-e87a3736a58d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e7f3c014-a9d3-4424-be41-e87a3736a58d" (UID: "e7f3c014-a9d3-4424-be41-e87a3736a58d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 14:07:20 crc kubenswrapper[5050]: I1211 14:07:20.765070 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f6bb80a3-78fe-4854-91bf-69a0f93a2f48-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f6bb80a3-78fe-4854-91bf-69a0f93a2f48" (UID: "f6bb80a3-78fe-4854-91bf-69a0f93a2f48"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 14:07:20 crc kubenswrapper[5050]: I1211 14:07:20.767047 5050 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e7f3c014-a9d3-4424-be41-e87a3736a58d-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 14:07:20 crc kubenswrapper[5050]: I1211 14:07:20.767091 5050 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f6bb80a3-78fe-4854-91bf-69a0f93a2f48-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 14:07:20 crc kubenswrapper[5050]: I1211 14:07:20.769561 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6bb80a3-78fe-4854-91bf-69a0f93a2f48-kube-api-access-q5rc6" (OuterVolumeSpecName: "kube-api-access-q5rc6") pod "f6bb80a3-78fe-4854-91bf-69a0f93a2f48" (UID: "f6bb80a3-78fe-4854-91bf-69a0f93a2f48"). InnerVolumeSpecName "kube-api-access-q5rc6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 14:07:20 crc kubenswrapper[5050]: I1211 14:07:20.771375 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7f3c014-a9d3-4424-be41-e87a3736a58d-kube-api-access-p8zkk" (OuterVolumeSpecName: "kube-api-access-p8zkk") pod "e7f3c014-a9d3-4424-be41-e87a3736a58d" (UID: "e7f3c014-a9d3-4424-be41-e87a3736a58d"). InnerVolumeSpecName "kube-api-access-p8zkk". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 14:07:20 crc kubenswrapper[5050]: I1211 14:07:20.868340 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/118a06f4-3d12-4a10-8de7-bfcb56b3f237-operator-scripts\") pod \"118a06f4-3d12-4a10-8de7-bfcb56b3f237\" (UID: \"118a06f4-3d12-4a10-8de7-bfcb56b3f237\") " Dec 11 14:07:20 crc kubenswrapper[5050]: I1211 14:07:20.868483 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nh9mr\" (UniqueName: \"kubernetes.io/projected/8169d760-5539-44ed-9586-6dd71f7fcda5-kube-api-access-nh9mr\") pod \"8169d760-5539-44ed-9586-6dd71f7fcda5\" (UID: \"8169d760-5539-44ed-9586-6dd71f7fcda5\") " Dec 11 14:07:20 crc kubenswrapper[5050]: I1211 14:07:20.868527 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d9t6q\" (UniqueName: \"kubernetes.io/projected/118a06f4-3d12-4a10-8de7-bfcb56b3f237-kube-api-access-d9t6q\") pod \"118a06f4-3d12-4a10-8de7-bfcb56b3f237\" (UID: \"118a06f4-3d12-4a10-8de7-bfcb56b3f237\") " Dec 11 14:07:20 crc kubenswrapper[5050]: I1211 14:07:20.868626 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8169d760-5539-44ed-9586-6dd71f7fcda5-operator-scripts\") pod \"8169d760-5539-44ed-9586-6dd71f7fcda5\" (UID: \"8169d760-5539-44ed-9586-6dd71f7fcda5\") " Dec 11 14:07:20 crc kubenswrapper[5050]: I1211 14:07:20.868874 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q5rc6\" (UniqueName: \"kubernetes.io/projected/f6bb80a3-78fe-4854-91bf-69a0f93a2f48-kube-api-access-q5rc6\") on node \"crc\" DevicePath \"\"" Dec 11 14:07:20 crc kubenswrapper[5050]: I1211 14:07:20.868914 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p8zkk\" (UniqueName: \"kubernetes.io/projected/e7f3c014-a9d3-4424-be41-e87a3736a58d-kube-api-access-p8zkk\") on node \"crc\" DevicePath \"\"" Dec 11 
14:07:20 crc kubenswrapper[5050]: I1211 14:07:20.869037 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/118a06f4-3d12-4a10-8de7-bfcb56b3f237-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "118a06f4-3d12-4a10-8de7-bfcb56b3f237" (UID: "118a06f4-3d12-4a10-8de7-bfcb56b3f237"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 14:07:20 crc kubenswrapper[5050]: I1211 14:07:20.869309 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8169d760-5539-44ed-9586-6dd71f7fcda5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8169d760-5539-44ed-9586-6dd71f7fcda5" (UID: "8169d760-5539-44ed-9586-6dd71f7fcda5"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 14:07:20 crc kubenswrapper[5050]: I1211 14:07:20.875675 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/118a06f4-3d12-4a10-8de7-bfcb56b3f237-kube-api-access-d9t6q" (OuterVolumeSpecName: "kube-api-access-d9t6q") pod "118a06f4-3d12-4a10-8de7-bfcb56b3f237" (UID: "118a06f4-3d12-4a10-8de7-bfcb56b3f237"). InnerVolumeSpecName "kube-api-access-d9t6q". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 14:07:20 crc kubenswrapper[5050]: I1211 14:07:20.876622 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8169d760-5539-44ed-9586-6dd71f7fcda5-kube-api-access-nh9mr" (OuterVolumeSpecName: "kube-api-access-nh9mr") pod "8169d760-5539-44ed-9586-6dd71f7fcda5" (UID: "8169d760-5539-44ed-9586-6dd71f7fcda5"). InnerVolumeSpecName "kube-api-access-nh9mr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 14:07:20 crc kubenswrapper[5050]: I1211 14:07:20.955252 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-8db84466c-xqjfw" Dec 11 14:07:20 crc kubenswrapper[5050]: I1211 14:07:20.970486 5050 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/118a06f4-3d12-4a10-8de7-bfcb56b3f237-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 14:07:20 crc kubenswrapper[5050]: I1211 14:07:20.970520 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nh9mr\" (UniqueName: \"kubernetes.io/projected/8169d760-5539-44ed-9586-6dd71f7fcda5-kube-api-access-nh9mr\") on node \"crc\" DevicePath \"\"" Dec 11 14:07:20 crc kubenswrapper[5050]: I1211 14:07:20.970533 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d9t6q\" (UniqueName: \"kubernetes.io/projected/118a06f4-3d12-4a10-8de7-bfcb56b3f237-kube-api-access-d9t6q\") on node \"crc\" DevicePath \"\"" Dec 11 14:07:20 crc kubenswrapper[5050]: I1211 14:07:20.970543 5050 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8169d760-5539-44ed-9586-6dd71f7fcda5-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 14:07:21 crc kubenswrapper[5050]: I1211 14:07:21.087471 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-67fdf7998c-9s7vd"] Dec 11 14:07:21 crc kubenswrapper[5050]: I1211 14:07:21.088088 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-67fdf7998c-9s7vd" podUID="25f92ac7-0732-460f-bf9a-1e9947e71977" containerName="dnsmasq-dns" 
containerID="cri-o://03a312c5ccc632709f5ff1d78cc0cc48e700ed20a1bdc843cbafee17867af835" gracePeriod=10 Dec 11 14:07:21 crc kubenswrapper[5050]: I1211 14:07:21.470533 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-34ac-account-create-update-q8nhn" Dec 11 14:07:21 crc kubenswrapper[5050]: I1211 14:07:21.472146 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-27b7-account-create-update-7xmsw" Dec 11 14:07:21 crc kubenswrapper[5050]: I1211 14:07:21.472190 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-kfqnl" event={"ID":"e9f0ade5-6144-4596-a78b-afeca167af55","Type":"ContainerStarted","Data":"81545cbc54d359524dfbf5ab0186a09ed8e7e6cc553c36752bf700cb488097c1"} Dec 11 14:07:21 crc kubenswrapper[5050]: I1211 14:07:21.472394 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-44xtk" Dec 11 14:07:21 crc kubenswrapper[5050]: I1211 14:07:21.472389 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7594-account-create-update-9fmt4" Dec 11 14:07:21 crc kubenswrapper[5050]: I1211 14:07:21.494877 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-kfqnl" podStartSLOduration=3.135391361 podStartE2EDuration="8.494846649s" podCreationTimestamp="2025-12-11 14:07:13 +0000 UTC" firstStartedPulling="2025-12-11 14:07:15.134138277 +0000 UTC m=+1125.977860863" lastFinishedPulling="2025-12-11 14:07:20.493593565 +0000 UTC m=+1131.337316151" observedRunningTime="2025-12-11 14:07:21.491225011 +0000 UTC m=+1132.334947597" watchObservedRunningTime="2025-12-11 14:07:21.494846649 +0000 UTC m=+1132.338569235" Dec 11 14:07:22 crc kubenswrapper[5050]: I1211 14:07:22.159030 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-67fdf7998c-9s7vd" Dec 11 14:07:22 crc kubenswrapper[5050]: I1211 14:07:22.198455 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wvgmm\" (UniqueName: \"kubernetes.io/projected/25f92ac7-0732-460f-bf9a-1e9947e71977-kube-api-access-wvgmm\") pod \"25f92ac7-0732-460f-bf9a-1e9947e71977\" (UID: \"25f92ac7-0732-460f-bf9a-1e9947e71977\") " Dec 11 14:07:22 crc kubenswrapper[5050]: I1211 14:07:22.199800 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/25f92ac7-0732-460f-bf9a-1e9947e71977-config\") pod \"25f92ac7-0732-460f-bf9a-1e9947e71977\" (UID: \"25f92ac7-0732-460f-bf9a-1e9947e71977\") " Dec 11 14:07:22 crc kubenswrapper[5050]: I1211 14:07:22.199882 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/25f92ac7-0732-460f-bf9a-1e9947e71977-ovsdbserver-nb\") pod \"25f92ac7-0732-460f-bf9a-1e9947e71977\" (UID: \"25f92ac7-0732-460f-bf9a-1e9947e71977\") " Dec 11 14:07:22 crc kubenswrapper[5050]: I1211 14:07:22.199963 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/25f92ac7-0732-460f-bf9a-1e9947e71977-dns-svc\") pod \"25f92ac7-0732-460f-bf9a-1e9947e71977\" (UID: \"25f92ac7-0732-460f-bf9a-1e9947e71977\") " Dec 11 14:07:22 crc kubenswrapper[5050]: I1211 14:07:22.200281 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/25f92ac7-0732-460f-bf9a-1e9947e71977-ovsdbserver-sb\") pod \"25f92ac7-0732-460f-bf9a-1e9947e71977\" (UID: \"25f92ac7-0732-460f-bf9a-1e9947e71977\") " Dec 11 14:07:22 crc kubenswrapper[5050]: I1211 14:07:22.222237 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25f92ac7-0732-460f-bf9a-1e9947e71977-kube-api-access-wvgmm" (OuterVolumeSpecName: "kube-api-access-wvgmm") pod "25f92ac7-0732-460f-bf9a-1e9947e71977" (UID: "25f92ac7-0732-460f-bf9a-1e9947e71977"). InnerVolumeSpecName "kube-api-access-wvgmm". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 14:07:22 crc kubenswrapper[5050]: I1211 14:07:22.260454 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25f92ac7-0732-460f-bf9a-1e9947e71977-config" (OuterVolumeSpecName: "config") pod "25f92ac7-0732-460f-bf9a-1e9947e71977" (UID: "25f92ac7-0732-460f-bf9a-1e9947e71977"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 14:07:22 crc kubenswrapper[5050]: I1211 14:07:22.267371 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25f92ac7-0732-460f-bf9a-1e9947e71977-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "25f92ac7-0732-460f-bf9a-1e9947e71977" (UID: "25f92ac7-0732-460f-bf9a-1e9947e71977"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 14:07:22 crc kubenswrapper[5050]: I1211 14:07:22.275704 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25f92ac7-0732-460f-bf9a-1e9947e71977-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "25f92ac7-0732-460f-bf9a-1e9947e71977" (UID: "25f92ac7-0732-460f-bf9a-1e9947e71977"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 14:07:22 crc kubenswrapper[5050]: I1211 14:07:22.284827 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25f92ac7-0732-460f-bf9a-1e9947e71977-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "25f92ac7-0732-460f-bf9a-1e9947e71977" (UID: "25f92ac7-0732-460f-bf9a-1e9947e71977"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 14:07:22 crc kubenswrapper[5050]: I1211 14:07:22.303368 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wvgmm\" (UniqueName: \"kubernetes.io/projected/25f92ac7-0732-460f-bf9a-1e9947e71977-kube-api-access-wvgmm\") on node \"crc\" DevicePath \"\"" Dec 11 14:07:22 crc kubenswrapper[5050]: I1211 14:07:22.303415 5050 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/25f92ac7-0732-460f-bf9a-1e9947e71977-config\") on node \"crc\" DevicePath \"\"" Dec 11 14:07:22 crc kubenswrapper[5050]: I1211 14:07:22.303427 5050 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/25f92ac7-0732-460f-bf9a-1e9947e71977-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 11 14:07:22 crc kubenswrapper[5050]: I1211 14:07:22.303437 5050 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/25f92ac7-0732-460f-bf9a-1e9947e71977-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 11 14:07:22 crc kubenswrapper[5050]: I1211 14:07:22.303447 5050 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/25f92ac7-0732-460f-bf9a-1e9947e71977-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 11 14:07:22 crc kubenswrapper[5050]: I1211 14:07:22.485602 5050 generic.go:334] "Generic (PLEG): container finished" podID="25f92ac7-0732-460f-bf9a-1e9947e71977" containerID="03a312c5ccc632709f5ff1d78cc0cc48e700ed20a1bdc843cbafee17867af835" exitCode=0 Dec 11 14:07:22 crc kubenswrapper[5050]: I1211 14:07:22.485698 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67fdf7998c-9s7vd" event={"ID":"25f92ac7-0732-460f-bf9a-1e9947e71977","Type":"ContainerDied","Data":"03a312c5ccc632709f5ff1d78cc0cc48e700ed20a1bdc843cbafee17867af835"} Dec 11 14:07:22 crc kubenswrapper[5050]: I1211 14:07:22.485757 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-67fdf7998c-9s7vd" Dec 11 14:07:22 crc kubenswrapper[5050]: I1211 14:07:22.485792 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67fdf7998c-9s7vd" event={"ID":"25f92ac7-0732-460f-bf9a-1e9947e71977","Type":"ContainerDied","Data":"b74f69406476557351174e3857a610a7abc48ac645f77b21d85afdc9c412fb43"} Dec 11 14:07:22 crc kubenswrapper[5050]: I1211 14:07:22.485820 5050 scope.go:117] "RemoveContainer" containerID="03a312c5ccc632709f5ff1d78cc0cc48e700ed20a1bdc843cbafee17867af835" Dec 11 14:07:22 crc kubenswrapper[5050]: I1211 14:07:22.517289 5050 scope.go:117] "RemoveContainer" containerID="120c3e0ab30f8cb2dc9dda0bd8ed8b1425fd55a5f7189314bd3f7b5ff0ffb1e8" Dec 11 14:07:22 crc kubenswrapper[5050]: I1211 14:07:22.538645 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-67fdf7998c-9s7vd"] Dec 11 14:07:22 crc kubenswrapper[5050]: I1211 14:07:22.547706 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-67fdf7998c-9s7vd"] Dec 11 14:07:22 crc kubenswrapper[5050]: I1211 14:07:22.567218 5050 scope.go:117] "RemoveContainer" containerID="03a312c5ccc632709f5ff1d78cc0cc48e700ed20a1bdc843cbafee17867af835" Dec 11 14:07:22 crc kubenswrapper[5050]: E1211 14:07:22.568211 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"03a312c5ccc632709f5ff1d78cc0cc48e700ed20a1bdc843cbafee17867af835\": container with ID starting with 03a312c5ccc632709f5ff1d78cc0cc48e700ed20a1bdc843cbafee17867af835 not found: ID does not exist" containerID="03a312c5ccc632709f5ff1d78cc0cc48e700ed20a1bdc843cbafee17867af835" Dec 11 14:07:22 crc kubenswrapper[5050]: I1211 14:07:22.568402 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"03a312c5ccc632709f5ff1d78cc0cc48e700ed20a1bdc843cbafee17867af835"} err="failed to get container status \"03a312c5ccc632709f5ff1d78cc0cc48e700ed20a1bdc843cbafee17867af835\": rpc error: code = NotFound desc = could not find container \"03a312c5ccc632709f5ff1d78cc0cc48e700ed20a1bdc843cbafee17867af835\": container with ID starting with 03a312c5ccc632709f5ff1d78cc0cc48e700ed20a1bdc843cbafee17867af835 not found: ID does not exist" Dec 11 14:07:22 crc kubenswrapper[5050]: I1211 14:07:22.568461 5050 scope.go:117] "RemoveContainer" containerID="120c3e0ab30f8cb2dc9dda0bd8ed8b1425fd55a5f7189314bd3f7b5ff0ffb1e8" Dec 11 14:07:22 crc kubenswrapper[5050]: E1211 14:07:22.569541 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"120c3e0ab30f8cb2dc9dda0bd8ed8b1425fd55a5f7189314bd3f7b5ff0ffb1e8\": container with ID starting with 120c3e0ab30f8cb2dc9dda0bd8ed8b1425fd55a5f7189314bd3f7b5ff0ffb1e8 not found: ID does not exist" containerID="120c3e0ab30f8cb2dc9dda0bd8ed8b1425fd55a5f7189314bd3f7b5ff0ffb1e8" Dec 11 14:07:22 crc kubenswrapper[5050]: I1211 14:07:22.569603 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"120c3e0ab30f8cb2dc9dda0bd8ed8b1425fd55a5f7189314bd3f7b5ff0ffb1e8"} err="failed to get container status \"120c3e0ab30f8cb2dc9dda0bd8ed8b1425fd55a5f7189314bd3f7b5ff0ffb1e8\": rpc error: code = NotFound desc = could not find container \"120c3e0ab30f8cb2dc9dda0bd8ed8b1425fd55a5f7189314bd3f7b5ff0ffb1e8\": container with ID starting with 120c3e0ab30f8cb2dc9dda0bd8ed8b1425fd55a5f7189314bd3f7b5ff0ffb1e8 not found: ID does not exist" Dec 11 
14:07:23 crc kubenswrapper[5050]: I1211 14:07:23.557232 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25f92ac7-0732-460f-bf9a-1e9947e71977" path="/var/lib/kubelet/pods/25f92ac7-0732-460f-bf9a-1e9947e71977/volumes" Dec 11 14:07:31 crc kubenswrapper[5050]: I1211 14:07:31.617375 5050 generic.go:334] "Generic (PLEG): container finished" podID="669fc9ec-b625-44f9-bd15-bc8a79158127" containerID="08bfa765d647f306601d0abaff12d769ea8332592ea0f0283de458df6c5e5537" exitCode=0 Dec 11 14:07:31 crc kubenswrapper[5050]: I1211 14:07:31.617470 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-j5zml" event={"ID":"669fc9ec-b625-44f9-bd15-bc8a79158127","Type":"ContainerDied","Data":"08bfa765d647f306601d0abaff12d769ea8332592ea0f0283de458df6c5e5537"} Dec 11 14:07:31 crc kubenswrapper[5050]: I1211 14:07:31.624637 5050 generic.go:334] "Generic (PLEG): container finished" podID="e9f0ade5-6144-4596-a78b-afeca167af55" containerID="81545cbc54d359524dfbf5ab0186a09ed8e7e6cc553c36752bf700cb488097c1" exitCode=0 Dec 11 14:07:31 crc kubenswrapper[5050]: I1211 14:07:31.624730 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-kfqnl" event={"ID":"e9f0ade5-6144-4596-a78b-afeca167af55","Type":"ContainerDied","Data":"81545cbc54d359524dfbf5ab0186a09ed8e7e6cc553c36752bf700cb488097c1"} Dec 11 14:07:32 crc kubenswrapper[5050]: I1211 14:07:32.997507 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-kfqnl" Dec 11 14:07:33 crc kubenswrapper[5050]: I1211 14:07:33.023639 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4qb7\" (UniqueName: \"kubernetes.io/projected/e9f0ade5-6144-4596-a78b-afeca167af55-kube-api-access-w4qb7\") pod \"e9f0ade5-6144-4596-a78b-afeca167af55\" (UID: \"e9f0ade5-6144-4596-a78b-afeca167af55\") " Dec 11 14:07:33 crc kubenswrapper[5050]: I1211 14:07:33.023714 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9f0ade5-6144-4596-a78b-afeca167af55-combined-ca-bundle\") pod \"e9f0ade5-6144-4596-a78b-afeca167af55\" (UID: \"e9f0ade5-6144-4596-a78b-afeca167af55\") " Dec 11 14:07:33 crc kubenswrapper[5050]: I1211 14:07:33.023760 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9f0ade5-6144-4596-a78b-afeca167af55-config-data\") pod \"e9f0ade5-6144-4596-a78b-afeca167af55\" (UID: \"e9f0ade5-6144-4596-a78b-afeca167af55\") " Dec 11 14:07:33 crc kubenswrapper[5050]: I1211 14:07:33.047322 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9f0ade5-6144-4596-a78b-afeca167af55-kube-api-access-w4qb7" (OuterVolumeSpecName: "kube-api-access-w4qb7") pod "e9f0ade5-6144-4596-a78b-afeca167af55" (UID: "e9f0ade5-6144-4596-a78b-afeca167af55"). InnerVolumeSpecName "kube-api-access-w4qb7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 14:07:33 crc kubenswrapper[5050]: I1211 14:07:33.056263 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9f0ade5-6144-4596-a78b-afeca167af55-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e9f0ade5-6144-4596-a78b-afeca167af55" (UID: "e9f0ade5-6144-4596-a78b-afeca167af55"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:07:33 crc kubenswrapper[5050]: I1211 14:07:33.083140 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9f0ade5-6144-4596-a78b-afeca167af55-config-data" (OuterVolumeSpecName: "config-data") pod "e9f0ade5-6144-4596-a78b-afeca167af55" (UID: "e9f0ade5-6144-4596-a78b-afeca167af55"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:07:33 crc kubenswrapper[5050]: I1211 14:07:33.125998 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4qb7\" (UniqueName: \"kubernetes.io/projected/e9f0ade5-6144-4596-a78b-afeca167af55-kube-api-access-w4qb7\") on node \"crc\" DevicePath \"\"" Dec 11 14:07:33 crc kubenswrapper[5050]: I1211 14:07:33.126050 5050 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9f0ade5-6144-4596-a78b-afeca167af55-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 14:07:33 crc kubenswrapper[5050]: I1211 14:07:33.126063 5050 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9f0ade5-6144-4596-a78b-afeca167af55-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 14:07:33 crc kubenswrapper[5050]: I1211 14:07:33.164388 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-j5zml" Dec 11 14:07:33 crc kubenswrapper[5050]: I1211 14:07:33.328742 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/669fc9ec-b625-44f9-bd15-bc8a79158127-config-data\") pod \"669fc9ec-b625-44f9-bd15-bc8a79158127\" (UID: \"669fc9ec-b625-44f9-bd15-bc8a79158127\") " Dec 11 14:07:33 crc kubenswrapper[5050]: I1211 14:07:33.329449 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/669fc9ec-b625-44f9-bd15-bc8a79158127-db-sync-config-data\") pod \"669fc9ec-b625-44f9-bd15-bc8a79158127\" (UID: \"669fc9ec-b625-44f9-bd15-bc8a79158127\") " Dec 11 14:07:33 crc kubenswrapper[5050]: I1211 14:07:33.329480 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5t7sm\" (UniqueName: \"kubernetes.io/projected/669fc9ec-b625-44f9-bd15-bc8a79158127-kube-api-access-5t7sm\") pod \"669fc9ec-b625-44f9-bd15-bc8a79158127\" (UID: \"669fc9ec-b625-44f9-bd15-bc8a79158127\") " Dec 11 14:07:33 crc kubenswrapper[5050]: I1211 14:07:33.329567 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/669fc9ec-b625-44f9-bd15-bc8a79158127-combined-ca-bundle\") pod \"669fc9ec-b625-44f9-bd15-bc8a79158127\" (UID: \"669fc9ec-b625-44f9-bd15-bc8a79158127\") " Dec 11 14:07:33 crc kubenswrapper[5050]: I1211 14:07:33.334443 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/669fc9ec-b625-44f9-bd15-bc8a79158127-kube-api-access-5t7sm" (OuterVolumeSpecName: "kube-api-access-5t7sm") pod "669fc9ec-b625-44f9-bd15-bc8a79158127" (UID: "669fc9ec-b625-44f9-bd15-bc8a79158127"). InnerVolumeSpecName "kube-api-access-5t7sm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 14:07:33 crc kubenswrapper[5050]: I1211 14:07:33.335086 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/669fc9ec-b625-44f9-bd15-bc8a79158127-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "669fc9ec-b625-44f9-bd15-bc8a79158127" (UID: "669fc9ec-b625-44f9-bd15-bc8a79158127"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:07:33 crc kubenswrapper[5050]: I1211 14:07:33.356100 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/669fc9ec-b625-44f9-bd15-bc8a79158127-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "669fc9ec-b625-44f9-bd15-bc8a79158127" (UID: "669fc9ec-b625-44f9-bd15-bc8a79158127"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:07:33 crc kubenswrapper[5050]: I1211 14:07:33.394368 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/669fc9ec-b625-44f9-bd15-bc8a79158127-config-data" (OuterVolumeSpecName: "config-data") pod "669fc9ec-b625-44f9-bd15-bc8a79158127" (UID: "669fc9ec-b625-44f9-bd15-bc8a79158127"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:07:33 crc kubenswrapper[5050]: I1211 14:07:33.433399 5050 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/669fc9ec-b625-44f9-bd15-bc8a79158127-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 14:07:33 crc kubenswrapper[5050]: I1211 14:07:33.433480 5050 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/669fc9ec-b625-44f9-bd15-bc8a79158127-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 14:07:33 crc kubenswrapper[5050]: I1211 14:07:33.433514 5050 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/669fc9ec-b625-44f9-bd15-bc8a79158127-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 14:07:33 crc kubenswrapper[5050]: I1211 14:07:33.433529 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5t7sm\" (UniqueName: \"kubernetes.io/projected/669fc9ec-b625-44f9-bd15-bc8a79158127-kube-api-access-5t7sm\") on node \"crc\" DevicePath \"\"" Dec 11 14:07:33 crc kubenswrapper[5050]: I1211 14:07:33.646679 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-j5zml" event={"ID":"669fc9ec-b625-44f9-bd15-bc8a79158127","Type":"ContainerDied","Data":"2ada08a11e643f8412fb7755887a76bd31d98890ee81329aceccd8eeb66d2653"} Dec 11 14:07:33 crc kubenswrapper[5050]: I1211 14:07:33.646797 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2ada08a11e643f8412fb7755887a76bd31d98890ee81329aceccd8eeb66d2653" Dec 11 14:07:33 crc kubenswrapper[5050]: I1211 14:07:33.646720 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-j5zml" Dec 11 14:07:33 crc kubenswrapper[5050]: I1211 14:07:33.648285 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-kfqnl" event={"ID":"e9f0ade5-6144-4596-a78b-afeca167af55","Type":"ContainerDied","Data":"ad7cb902bef92cd82641483a5191b767112f9a08679a690ed153700e5a185b60"} Dec 11 14:07:33 crc kubenswrapper[5050]: I1211 14:07:33.648318 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ad7cb902bef92cd82641483a5191b767112f9a08679a690ed153700e5a185b60" Dec 11 14:07:33 crc kubenswrapper[5050]: I1211 14:07:33.648391 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-kfqnl" Dec 11 14:07:33 crc kubenswrapper[5050]: I1211 14:07:33.973692 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-767d96458c-2vdb8"] Dec 11 14:07:33 crc kubenswrapper[5050]: E1211 14:07:33.974191 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25f92ac7-0732-460f-bf9a-1e9947e71977" containerName="init" Dec 11 14:07:33 crc kubenswrapper[5050]: I1211 14:07:33.974205 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="25f92ac7-0732-460f-bf9a-1e9947e71977" containerName="init" Dec 11 14:07:33 crc kubenswrapper[5050]: E1211 14:07:33.974214 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a71df796-b040-4319-bc57-96a894dada33" containerName="mariadb-database-create" Dec 11 14:07:33 crc kubenswrapper[5050]: I1211 14:07:33.974262 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="a71df796-b040-4319-bc57-96a894dada33" containerName="mariadb-database-create" Dec 11 14:07:33 crc kubenswrapper[5050]: E1211 14:07:33.974274 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7f3c014-a9d3-4424-be41-e87a3736a58d" containerName="mariadb-account-create-update" Dec 11 14:07:33 crc kubenswrapper[5050]: I1211 14:07:33.974279 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7f3c014-a9d3-4424-be41-e87a3736a58d" containerName="mariadb-account-create-update" Dec 11 14:07:33 crc kubenswrapper[5050]: E1211 14:07:33.974302 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="669fc9ec-b625-44f9-bd15-bc8a79158127" containerName="glance-db-sync" Dec 11 14:07:33 crc kubenswrapper[5050]: I1211 14:07:33.974307 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="669fc9ec-b625-44f9-bd15-bc8a79158127" containerName="glance-db-sync" Dec 11 14:07:33 crc kubenswrapper[5050]: E1211 14:07:33.974330 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6bb80a3-78fe-4854-91bf-69a0f93a2f48" containerName="mariadb-database-create" Dec 11 14:07:33 crc kubenswrapper[5050]: I1211 14:07:33.974335 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6bb80a3-78fe-4854-91bf-69a0f93a2f48" containerName="mariadb-database-create" Dec 11 14:07:33 crc kubenswrapper[5050]: E1211 14:07:33.974346 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25f92ac7-0732-460f-bf9a-1e9947e71977" containerName="dnsmasq-dns" Dec 11 14:07:33 crc kubenswrapper[5050]: I1211 14:07:33.974352 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="25f92ac7-0732-460f-bf9a-1e9947e71977" containerName="dnsmasq-dns" Dec 11 14:07:33 crc kubenswrapper[5050]: E1211 14:07:33.974362 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8169d760-5539-44ed-9586-6dd71f7fcda5" 
containerName="mariadb-account-create-update" Dec 11 14:07:33 crc kubenswrapper[5050]: I1211 14:07:33.974368 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="8169d760-5539-44ed-9586-6dd71f7fcda5" containerName="mariadb-account-create-update" Dec 11 14:07:33 crc kubenswrapper[5050]: E1211 14:07:33.974378 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cca0032a-ceed-4a6a-9d4e-9a782c3bfe55" containerName="mariadb-database-create" Dec 11 14:07:33 crc kubenswrapper[5050]: I1211 14:07:33.974385 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="cca0032a-ceed-4a6a-9d4e-9a782c3bfe55" containerName="mariadb-database-create" Dec 11 14:07:33 crc kubenswrapper[5050]: E1211 14:07:33.974400 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="118a06f4-3d12-4a10-8de7-bfcb56b3f237" containerName="mariadb-account-create-update" Dec 11 14:07:33 crc kubenswrapper[5050]: I1211 14:07:33.974406 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="118a06f4-3d12-4a10-8de7-bfcb56b3f237" containerName="mariadb-account-create-update" Dec 11 14:07:33 crc kubenswrapper[5050]: E1211 14:07:33.974415 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9f0ade5-6144-4596-a78b-afeca167af55" containerName="keystone-db-sync" Dec 11 14:07:33 crc kubenswrapper[5050]: I1211 14:07:33.974422 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9f0ade5-6144-4596-a78b-afeca167af55" containerName="keystone-db-sync" Dec 11 14:07:33 crc kubenswrapper[5050]: I1211 14:07:33.974589 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6bb80a3-78fe-4854-91bf-69a0f93a2f48" containerName="mariadb-database-create" Dec 11 14:07:33 crc kubenswrapper[5050]: I1211 14:07:33.974605 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="25f92ac7-0732-460f-bf9a-1e9947e71977" containerName="dnsmasq-dns" Dec 11 14:07:33 crc kubenswrapper[5050]: I1211 14:07:33.974616 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="a71df796-b040-4319-bc57-96a894dada33" containerName="mariadb-database-create" Dec 11 14:07:33 crc kubenswrapper[5050]: I1211 14:07:33.974625 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="669fc9ec-b625-44f9-bd15-bc8a79158127" containerName="glance-db-sync" Dec 11 14:07:33 crc kubenswrapper[5050]: I1211 14:07:33.974632 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="cca0032a-ceed-4a6a-9d4e-9a782c3bfe55" containerName="mariadb-database-create" Dec 11 14:07:33 crc kubenswrapper[5050]: I1211 14:07:33.974645 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="118a06f4-3d12-4a10-8de7-bfcb56b3f237" containerName="mariadb-account-create-update" Dec 11 14:07:33 crc kubenswrapper[5050]: I1211 14:07:33.974657 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9f0ade5-6144-4596-a78b-afeca167af55" containerName="keystone-db-sync" Dec 11 14:07:33 crc kubenswrapper[5050]: I1211 14:07:33.974667 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="8169d760-5539-44ed-9586-6dd71f7fcda5" containerName="mariadb-account-create-update" Dec 11 14:07:33 crc kubenswrapper[5050]: I1211 14:07:33.974679 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7f3c014-a9d3-4424-be41-e87a3736a58d" containerName="mariadb-account-create-update" Dec 11 14:07:33 crc kubenswrapper[5050]: I1211 14:07:33.975690 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-767d96458c-2vdb8" Dec 11 14:07:34 crc kubenswrapper[5050]: I1211 14:07:34.034913 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-767d96458c-2vdb8"] Dec 11 14:07:34 crc kubenswrapper[5050]: I1211 14:07:34.048477 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b06a8cfc-cccf-48d2-b354-91fc8b4cd4bd-ovsdbserver-sb\") pod \"dnsmasq-dns-767d96458c-2vdb8\" (UID: \"b06a8cfc-cccf-48d2-b354-91fc8b4cd4bd\") " pod="openstack/dnsmasq-dns-767d96458c-2vdb8" Dec 11 14:07:34 crc kubenswrapper[5050]: I1211 14:07:34.048547 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6tn4\" (UniqueName: \"kubernetes.io/projected/b06a8cfc-cccf-48d2-b354-91fc8b4cd4bd-kube-api-access-h6tn4\") pod \"dnsmasq-dns-767d96458c-2vdb8\" (UID: \"b06a8cfc-cccf-48d2-b354-91fc8b4cd4bd\") " pod="openstack/dnsmasq-dns-767d96458c-2vdb8" Dec 11 14:07:34 crc kubenswrapper[5050]: I1211 14:07:34.048579 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b06a8cfc-cccf-48d2-b354-91fc8b4cd4bd-ovsdbserver-nb\") pod \"dnsmasq-dns-767d96458c-2vdb8\" (UID: \"b06a8cfc-cccf-48d2-b354-91fc8b4cd4bd\") " pod="openstack/dnsmasq-dns-767d96458c-2vdb8" Dec 11 14:07:34 crc kubenswrapper[5050]: I1211 14:07:34.048605 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b06a8cfc-cccf-48d2-b354-91fc8b4cd4bd-config\") pod \"dnsmasq-dns-767d96458c-2vdb8\" (UID: \"b06a8cfc-cccf-48d2-b354-91fc8b4cd4bd\") " pod="openstack/dnsmasq-dns-767d96458c-2vdb8" Dec 11 14:07:34 crc kubenswrapper[5050]: I1211 14:07:34.048636 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b06a8cfc-cccf-48d2-b354-91fc8b4cd4bd-dns-svc\") pod \"dnsmasq-dns-767d96458c-2vdb8\" (UID: \"b06a8cfc-cccf-48d2-b354-91fc8b4cd4bd\") " pod="openstack/dnsmasq-dns-767d96458c-2vdb8" Dec 11 14:07:34 crc kubenswrapper[5050]: I1211 14:07:34.048657 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b06a8cfc-cccf-48d2-b354-91fc8b4cd4bd-dns-swift-storage-0\") pod \"dnsmasq-dns-767d96458c-2vdb8\" (UID: \"b06a8cfc-cccf-48d2-b354-91fc8b4cd4bd\") " pod="openstack/dnsmasq-dns-767d96458c-2vdb8" Dec 11 14:07:34 crc kubenswrapper[5050]: I1211 14:07:34.074596 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-cvstt"] Dec 11 14:07:34 crc kubenswrapper[5050]: I1211 14:07:34.078582 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-cvstt" Dec 11 14:07:34 crc kubenswrapper[5050]: I1211 14:07:34.085285 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Dec 11 14:07:34 crc kubenswrapper[5050]: I1211 14:07:34.085312 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-tckgc" Dec 11 14:07:34 crc kubenswrapper[5050]: I1211 14:07:34.085583 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Dec 11 14:07:34 crc kubenswrapper[5050]: I1211 14:07:34.085677 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Dec 11 14:07:34 crc kubenswrapper[5050]: I1211 14:07:34.085780 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Dec 11 14:07:34 crc kubenswrapper[5050]: I1211 14:07:34.103710 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-cvstt"] Dec 11 14:07:34 crc kubenswrapper[5050]: I1211 14:07:34.161321 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b06a8cfc-cccf-48d2-b354-91fc8b4cd4bd-ovsdbserver-nb\") pod \"dnsmasq-dns-767d96458c-2vdb8\" (UID: \"b06a8cfc-cccf-48d2-b354-91fc8b4cd4bd\") " pod="openstack/dnsmasq-dns-767d96458c-2vdb8" Dec 11 14:07:34 crc kubenswrapper[5050]: I1211 14:07:34.161403 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b06a8cfc-cccf-48d2-b354-91fc8b4cd4bd-config\") pod \"dnsmasq-dns-767d96458c-2vdb8\" (UID: \"b06a8cfc-cccf-48d2-b354-91fc8b4cd4bd\") " pod="openstack/dnsmasq-dns-767d96458c-2vdb8" Dec 11 14:07:34 crc kubenswrapper[5050]: I1211 14:07:34.161444 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/295ba8ac-0609-4021-920b-3ab943ea21ad-fernet-keys\") pod \"keystone-bootstrap-cvstt\" (UID: \"295ba8ac-0609-4021-920b-3ab943ea21ad\") " pod="openstack/keystone-bootstrap-cvstt" Dec 11 14:07:34 crc kubenswrapper[5050]: I1211 14:07:34.161477 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b06a8cfc-cccf-48d2-b354-91fc8b4cd4bd-dns-svc\") pod \"dnsmasq-dns-767d96458c-2vdb8\" (UID: \"b06a8cfc-cccf-48d2-b354-91fc8b4cd4bd\") " pod="openstack/dnsmasq-dns-767d96458c-2vdb8" Dec 11 14:07:34 crc kubenswrapper[5050]: I1211 14:07:34.161504 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b06a8cfc-cccf-48d2-b354-91fc8b4cd4bd-dns-swift-storage-0\") pod \"dnsmasq-dns-767d96458c-2vdb8\" (UID: \"b06a8cfc-cccf-48d2-b354-91fc8b4cd4bd\") " pod="openstack/dnsmasq-dns-767d96458c-2vdb8" Dec 11 14:07:34 crc kubenswrapper[5050]: I1211 14:07:34.161561 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/295ba8ac-0609-4021-920b-3ab943ea21ad-scripts\") pod \"keystone-bootstrap-cvstt\" (UID: \"295ba8ac-0609-4021-920b-3ab943ea21ad\") " pod="openstack/keystone-bootstrap-cvstt" Dec 11 14:07:34 crc kubenswrapper[5050]: I1211 14:07:34.161591 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/295ba8ac-0609-4021-920b-3ab943ea21ad-combined-ca-bundle\") pod \"keystone-bootstrap-cvstt\" (UID: \"295ba8ac-0609-4021-920b-3ab943ea21ad\") " pod="openstack/keystone-bootstrap-cvstt" Dec 11 14:07:34 crc kubenswrapper[5050]: I1211 14:07:34.161641 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhpp9\" (UniqueName: \"kubernetes.io/projected/295ba8ac-0609-4021-920b-3ab943ea21ad-kube-api-access-nhpp9\") pod \"keystone-bootstrap-cvstt\" (UID: \"295ba8ac-0609-4021-920b-3ab943ea21ad\") " pod="openstack/keystone-bootstrap-cvstt" Dec 11 14:07:34 crc kubenswrapper[5050]: I1211 14:07:34.161694 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b06a8cfc-cccf-48d2-b354-91fc8b4cd4bd-ovsdbserver-sb\") pod \"dnsmasq-dns-767d96458c-2vdb8\" (UID: \"b06a8cfc-cccf-48d2-b354-91fc8b4cd4bd\") " pod="openstack/dnsmasq-dns-767d96458c-2vdb8" Dec 11 14:07:34 crc kubenswrapper[5050]: I1211 14:07:34.161719 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/295ba8ac-0609-4021-920b-3ab943ea21ad-config-data\") pod \"keystone-bootstrap-cvstt\" (UID: \"295ba8ac-0609-4021-920b-3ab943ea21ad\") " pod="openstack/keystone-bootstrap-cvstt" Dec 11 14:07:34 crc kubenswrapper[5050]: I1211 14:07:34.161752 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/295ba8ac-0609-4021-920b-3ab943ea21ad-credential-keys\") pod \"keystone-bootstrap-cvstt\" (UID: \"295ba8ac-0609-4021-920b-3ab943ea21ad\") " pod="openstack/keystone-bootstrap-cvstt" Dec 11 14:07:34 crc kubenswrapper[5050]: I1211 14:07:34.161803 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h6tn4\" (UniqueName: \"kubernetes.io/projected/b06a8cfc-cccf-48d2-b354-91fc8b4cd4bd-kube-api-access-h6tn4\") pod \"dnsmasq-dns-767d96458c-2vdb8\" (UID: \"b06a8cfc-cccf-48d2-b354-91fc8b4cd4bd\") " pod="openstack/dnsmasq-dns-767d96458c-2vdb8" Dec 11 14:07:34 crc kubenswrapper[5050]: I1211 14:07:34.164320 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b06a8cfc-cccf-48d2-b354-91fc8b4cd4bd-config\") pod \"dnsmasq-dns-767d96458c-2vdb8\" (UID: \"b06a8cfc-cccf-48d2-b354-91fc8b4cd4bd\") " pod="openstack/dnsmasq-dns-767d96458c-2vdb8" Dec 11 14:07:34 crc kubenswrapper[5050]: I1211 14:07:34.165128 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b06a8cfc-cccf-48d2-b354-91fc8b4cd4bd-dns-svc\") pod \"dnsmasq-dns-767d96458c-2vdb8\" (UID: \"b06a8cfc-cccf-48d2-b354-91fc8b4cd4bd\") " pod="openstack/dnsmasq-dns-767d96458c-2vdb8" Dec 11 14:07:34 crc kubenswrapper[5050]: I1211 14:07:34.165827 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b06a8cfc-cccf-48d2-b354-91fc8b4cd4bd-dns-swift-storage-0\") pod \"dnsmasq-dns-767d96458c-2vdb8\" (UID: \"b06a8cfc-cccf-48d2-b354-91fc8b4cd4bd\") " pod="openstack/dnsmasq-dns-767d96458c-2vdb8" Dec 11 14:07:34 crc kubenswrapper[5050]: I1211 14:07:34.168071 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/b06a8cfc-cccf-48d2-b354-91fc8b4cd4bd-ovsdbserver-sb\") pod \"dnsmasq-dns-767d96458c-2vdb8\" (UID: \"b06a8cfc-cccf-48d2-b354-91fc8b4cd4bd\") " pod="openstack/dnsmasq-dns-767d96458c-2vdb8" Dec 11 14:07:34 crc kubenswrapper[5050]: I1211 14:07:34.169484 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b06a8cfc-cccf-48d2-b354-91fc8b4cd4bd-ovsdbserver-nb\") pod \"dnsmasq-dns-767d96458c-2vdb8\" (UID: \"b06a8cfc-cccf-48d2-b354-91fc8b4cd4bd\") " pod="openstack/dnsmasq-dns-767d96458c-2vdb8" Dec 11 14:07:34 crc kubenswrapper[5050]: I1211 14:07:34.212804 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-7kvxk"] Dec 11 14:07:34 crc kubenswrapper[5050]: I1211 14:07:34.223768 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-7kvxk" Dec 11 14:07:34 crc kubenswrapper[5050]: I1211 14:07:34.232862 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h6tn4\" (UniqueName: \"kubernetes.io/projected/b06a8cfc-cccf-48d2-b354-91fc8b4cd4bd-kube-api-access-h6tn4\") pod \"dnsmasq-dns-767d96458c-2vdb8\" (UID: \"b06a8cfc-cccf-48d2-b354-91fc8b4cd4bd\") " pod="openstack/dnsmasq-dns-767d96458c-2vdb8" Dec 11 14:07:34 crc kubenswrapper[5050]: I1211 14:07:34.235205 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-7kvxk"] Dec 11 14:07:34 crc kubenswrapper[5050]: I1211 14:07:34.258530 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-9ptxk" Dec 11 14:07:34 crc kubenswrapper[5050]: I1211 14:07:34.261409 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Dec 11 14:07:34 crc kubenswrapper[5050]: I1211 14:07:34.263066 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/295ba8ac-0609-4021-920b-3ab943ea21ad-fernet-keys\") pod \"keystone-bootstrap-cvstt\" (UID: \"295ba8ac-0609-4021-920b-3ab943ea21ad\") " pod="openstack/keystone-bootstrap-cvstt" Dec 11 14:07:34 crc kubenswrapper[5050]: I1211 14:07:34.263118 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2d9200d5-8f1c-46be-a802-995c7f58b754-etc-machine-id\") pod \"cinder-db-sync-7kvxk\" (UID: \"2d9200d5-8f1c-46be-a802-995c7f58b754\") " pod="openstack/cinder-db-sync-7kvxk" Dec 11 14:07:34 crc kubenswrapper[5050]: I1211 14:07:34.263167 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/295ba8ac-0609-4021-920b-3ab943ea21ad-scripts\") pod \"keystone-bootstrap-cvstt\" (UID: \"295ba8ac-0609-4021-920b-3ab943ea21ad\") " pod="openstack/keystone-bootstrap-cvstt" Dec 11 14:07:34 crc kubenswrapper[5050]: I1211 14:07:34.263190 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/295ba8ac-0609-4021-920b-3ab943ea21ad-combined-ca-bundle\") pod \"keystone-bootstrap-cvstt\" (UID: \"295ba8ac-0609-4021-920b-3ab943ea21ad\") " pod="openstack/keystone-bootstrap-cvstt" Dec 11 14:07:34 crc kubenswrapper[5050]: I1211 14:07:34.263224 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nhpp9\" (UniqueName: 
\"kubernetes.io/projected/295ba8ac-0609-4021-920b-3ab943ea21ad-kube-api-access-nhpp9\") pod \"keystone-bootstrap-cvstt\" (UID: \"295ba8ac-0609-4021-920b-3ab943ea21ad\") " pod="openstack/keystone-bootstrap-cvstt" Dec 11 14:07:34 crc kubenswrapper[5050]: I1211 14:07:34.263253 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2d9200d5-8f1c-46be-a802-995c7f58b754-db-sync-config-data\") pod \"cinder-db-sync-7kvxk\" (UID: \"2d9200d5-8f1c-46be-a802-995c7f58b754\") " pod="openstack/cinder-db-sync-7kvxk" Dec 11 14:07:34 crc kubenswrapper[5050]: I1211 14:07:34.263287 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/295ba8ac-0609-4021-920b-3ab943ea21ad-config-data\") pod \"keystone-bootstrap-cvstt\" (UID: \"295ba8ac-0609-4021-920b-3ab943ea21ad\") " pod="openstack/keystone-bootstrap-cvstt" Dec 11 14:07:34 crc kubenswrapper[5050]: I1211 14:07:34.263314 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/295ba8ac-0609-4021-920b-3ab943ea21ad-credential-keys\") pod \"keystone-bootstrap-cvstt\" (UID: \"295ba8ac-0609-4021-920b-3ab943ea21ad\") " pod="openstack/keystone-bootstrap-cvstt" Dec 11 14:07:34 crc kubenswrapper[5050]: I1211 14:07:34.263338 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d9200d5-8f1c-46be-a802-995c7f58b754-config-data\") pod \"cinder-db-sync-7kvxk\" (UID: \"2d9200d5-8f1c-46be-a802-995c7f58b754\") " pod="openstack/cinder-db-sync-7kvxk" Dec 11 14:07:34 crc kubenswrapper[5050]: I1211 14:07:34.263374 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2d9200d5-8f1c-46be-a802-995c7f58b754-scripts\") pod \"cinder-db-sync-7kvxk\" (UID: \"2d9200d5-8f1c-46be-a802-995c7f58b754\") " pod="openstack/cinder-db-sync-7kvxk" Dec 11 14:07:34 crc kubenswrapper[5050]: I1211 14:07:34.263393 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-26th6\" (UniqueName: \"kubernetes.io/projected/2d9200d5-8f1c-46be-a802-995c7f58b754-kube-api-access-26th6\") pod \"cinder-db-sync-7kvxk\" (UID: \"2d9200d5-8f1c-46be-a802-995c7f58b754\") " pod="openstack/cinder-db-sync-7kvxk" Dec 11 14:07:34 crc kubenswrapper[5050]: I1211 14:07:34.263412 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d9200d5-8f1c-46be-a802-995c7f58b754-combined-ca-bundle\") pod \"cinder-db-sync-7kvxk\" (UID: \"2d9200d5-8f1c-46be-a802-995c7f58b754\") " pod="openstack/cinder-db-sync-7kvxk" Dec 11 14:07:34 crc kubenswrapper[5050]: I1211 14:07:34.268133 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/295ba8ac-0609-4021-920b-3ab943ea21ad-fernet-keys\") pod \"keystone-bootstrap-cvstt\" (UID: \"295ba8ac-0609-4021-920b-3ab943ea21ad\") " pod="openstack/keystone-bootstrap-cvstt" Dec 11 14:07:34 crc kubenswrapper[5050]: I1211 14:07:34.283175 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/295ba8ac-0609-4021-920b-3ab943ea21ad-config-data\") pod 
\"keystone-bootstrap-cvstt\" (UID: \"295ba8ac-0609-4021-920b-3ab943ea21ad\") " pod="openstack/keystone-bootstrap-cvstt" Dec 11 14:07:34 crc kubenswrapper[5050]: I1211 14:07:34.283588 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Dec 11 14:07:34 crc kubenswrapper[5050]: I1211 14:07:34.287813 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/295ba8ac-0609-4021-920b-3ab943ea21ad-scripts\") pod \"keystone-bootstrap-cvstt\" (UID: \"295ba8ac-0609-4021-920b-3ab943ea21ad\") " pod="openstack/keystone-bootstrap-cvstt" Dec 11 14:07:34 crc kubenswrapper[5050]: I1211 14:07:34.295753 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/295ba8ac-0609-4021-920b-3ab943ea21ad-credential-keys\") pod \"keystone-bootstrap-cvstt\" (UID: \"295ba8ac-0609-4021-920b-3ab943ea21ad\") " pod="openstack/keystone-bootstrap-cvstt" Dec 11 14:07:34 crc kubenswrapper[5050]: I1211 14:07:34.304808 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/295ba8ac-0609-4021-920b-3ab943ea21ad-combined-ca-bundle\") pod \"keystone-bootstrap-cvstt\" (UID: \"295ba8ac-0609-4021-920b-3ab943ea21ad\") " pod="openstack/keystone-bootstrap-cvstt" Dec 11 14:07:34 crc kubenswrapper[5050]: I1211 14:07:34.317464 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-767d96458c-2vdb8" Dec 11 14:07:34 crc kubenswrapper[5050]: I1211 14:07:34.355150 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nhpp9\" (UniqueName: \"kubernetes.io/projected/295ba8ac-0609-4021-920b-3ab943ea21ad-kube-api-access-nhpp9\") pod \"keystone-bootstrap-cvstt\" (UID: \"295ba8ac-0609-4021-920b-3ab943ea21ad\") " pod="openstack/keystone-bootstrap-cvstt" Dec 11 14:07:34 crc kubenswrapper[5050]: I1211 14:07:34.371535 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2d9200d5-8f1c-46be-a802-995c7f58b754-db-sync-config-data\") pod \"cinder-db-sync-7kvxk\" (UID: \"2d9200d5-8f1c-46be-a802-995c7f58b754\") " pod="openstack/cinder-db-sync-7kvxk" Dec 11 14:07:34 crc kubenswrapper[5050]: I1211 14:07:34.371634 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d9200d5-8f1c-46be-a802-995c7f58b754-config-data\") pod \"cinder-db-sync-7kvxk\" (UID: \"2d9200d5-8f1c-46be-a802-995c7f58b754\") " pod="openstack/cinder-db-sync-7kvxk" Dec 11 14:07:34 crc kubenswrapper[5050]: I1211 14:07:34.371695 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2d9200d5-8f1c-46be-a802-995c7f58b754-scripts\") pod \"cinder-db-sync-7kvxk\" (UID: \"2d9200d5-8f1c-46be-a802-995c7f58b754\") " pod="openstack/cinder-db-sync-7kvxk" Dec 11 14:07:34 crc kubenswrapper[5050]: I1211 14:07:34.371726 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-26th6\" (UniqueName: \"kubernetes.io/projected/2d9200d5-8f1c-46be-a802-995c7f58b754-kube-api-access-26th6\") pod \"cinder-db-sync-7kvxk\" (UID: \"2d9200d5-8f1c-46be-a802-995c7f58b754\") " pod="openstack/cinder-db-sync-7kvxk" Dec 11 14:07:34 crc kubenswrapper[5050]: I1211 14:07:34.371769 5050 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d9200d5-8f1c-46be-a802-995c7f58b754-combined-ca-bundle\") pod \"cinder-db-sync-7kvxk\" (UID: \"2d9200d5-8f1c-46be-a802-995c7f58b754\") " pod="openstack/cinder-db-sync-7kvxk" Dec 11 14:07:34 crc kubenswrapper[5050]: I1211 14:07:34.371828 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2d9200d5-8f1c-46be-a802-995c7f58b754-etc-machine-id\") pod \"cinder-db-sync-7kvxk\" (UID: \"2d9200d5-8f1c-46be-a802-995c7f58b754\") " pod="openstack/cinder-db-sync-7kvxk" Dec 11 14:07:34 crc kubenswrapper[5050]: I1211 14:07:34.371960 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2d9200d5-8f1c-46be-a802-995c7f58b754-etc-machine-id\") pod \"cinder-db-sync-7kvxk\" (UID: \"2d9200d5-8f1c-46be-a802-995c7f58b754\") " pod="openstack/cinder-db-sync-7kvxk" Dec 11 14:07:34 crc kubenswrapper[5050]: I1211 14:07:34.381462 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2d9200d5-8f1c-46be-a802-995c7f58b754-db-sync-config-data\") pod \"cinder-db-sync-7kvxk\" (UID: \"2d9200d5-8f1c-46be-a802-995c7f58b754\") " pod="openstack/cinder-db-sync-7kvxk" Dec 11 14:07:34 crc kubenswrapper[5050]: I1211 14:07:34.392445 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d9200d5-8f1c-46be-a802-995c7f58b754-config-data\") pod \"cinder-db-sync-7kvxk\" (UID: \"2d9200d5-8f1c-46be-a802-995c7f58b754\") " pod="openstack/cinder-db-sync-7kvxk" Dec 11 14:07:34 crc kubenswrapper[5050]: I1211 14:07:34.398784 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2d9200d5-8f1c-46be-a802-995c7f58b754-scripts\") pod \"cinder-db-sync-7kvxk\" (UID: \"2d9200d5-8f1c-46be-a802-995c7f58b754\") " pod="openstack/cinder-db-sync-7kvxk" Dec 11 14:07:34 crc kubenswrapper[5050]: I1211 14:07:34.403070 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d9200d5-8f1c-46be-a802-995c7f58b754-combined-ca-bundle\") pod \"cinder-db-sync-7kvxk\" (UID: \"2d9200d5-8f1c-46be-a802-995c7f58b754\") " pod="openstack/cinder-db-sync-7kvxk" Dec 11 14:07:34 crc kubenswrapper[5050]: I1211 14:07:34.414196 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-767d96458c-2vdb8"] Dec 11 14:07:34 crc kubenswrapper[5050]: I1211 14:07:34.417858 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-26th6\" (UniqueName: \"kubernetes.io/projected/2d9200d5-8f1c-46be-a802-995c7f58b754-kube-api-access-26th6\") pod \"cinder-db-sync-7kvxk\" (UID: \"2d9200d5-8f1c-46be-a802-995c7f58b754\") " pod="openstack/cinder-db-sync-7kvxk" Dec 11 14:07:34 crc kubenswrapper[5050]: I1211 14:07:34.424501 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-cvstt" Dec 11 14:07:34 crc kubenswrapper[5050]: I1211 14:07:34.437168 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-7kvxk" Dec 11 14:07:34 crc kubenswrapper[5050]: I1211 14:07:34.446075 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-rdbqg"] Dec 11 14:07:34 crc kubenswrapper[5050]: I1211 14:07:34.447500 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-rdbqg" Dec 11 14:07:34 crc kubenswrapper[5050]: I1211 14:07:34.460523 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-rdbqg"] Dec 11 14:07:34 crc kubenswrapper[5050]: I1211 14:07:34.472830 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Dec 11 14:07:34 crc kubenswrapper[5050]: I1211 14:07:34.473003 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Dec 11 14:07:34 crc kubenswrapper[5050]: I1211 14:07:34.477131 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aff61f93-f202-4057-a14e-7b395a73e323-combined-ca-bundle\") pod \"neutron-db-sync-rdbqg\" (UID: \"aff61f93-f202-4057-a14e-7b395a73e323\") " pod="openstack/neutron-db-sync-rdbqg" Dec 11 14:07:34 crc kubenswrapper[5050]: I1211 14:07:34.477222 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/aff61f93-f202-4057-a14e-7b395a73e323-config\") pod \"neutron-db-sync-rdbqg\" (UID: \"aff61f93-f202-4057-a14e-7b395a73e323\") " pod="openstack/neutron-db-sync-rdbqg" Dec 11 14:07:34 crc kubenswrapper[5050]: I1211 14:07:34.477262 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cx5vq\" (UniqueName: \"kubernetes.io/projected/aff61f93-f202-4057-a14e-7b395a73e323-kube-api-access-cx5vq\") pod \"neutron-db-sync-rdbqg\" (UID: \"aff61f93-f202-4057-a14e-7b395a73e323\") " pod="openstack/neutron-db-sync-rdbqg" Dec 11 14:07:34 crc kubenswrapper[5050]: I1211 14:07:34.484240 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-kk6kg" Dec 11 14:07:34 crc kubenswrapper[5050]: I1211 14:07:34.500877 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-zwtmr"] Dec 11 14:07:34 crc kubenswrapper[5050]: I1211 14:07:34.579232 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-zwtmr" Dec 11 14:07:34 crc kubenswrapper[5050]: I1211 14:07:34.581602 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aff61f93-f202-4057-a14e-7b395a73e323-combined-ca-bundle\") pod \"neutron-db-sync-rdbqg\" (UID: \"aff61f93-f202-4057-a14e-7b395a73e323\") " pod="openstack/neutron-db-sync-rdbqg" Dec 11 14:07:34 crc kubenswrapper[5050]: I1211 14:07:34.591435 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/aff61f93-f202-4057-a14e-7b395a73e323-config\") pod \"neutron-db-sync-rdbqg\" (UID: \"aff61f93-f202-4057-a14e-7b395a73e323\") " pod="openstack/neutron-db-sync-rdbqg" Dec 11 14:07:34 crc kubenswrapper[5050]: I1211 14:07:34.592610 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cx5vq\" (UniqueName: \"kubernetes.io/projected/aff61f93-f202-4057-a14e-7b395a73e323-kube-api-access-cx5vq\") pod \"neutron-db-sync-rdbqg\" (UID: \"aff61f93-f202-4057-a14e-7b395a73e323\") " pod="openstack/neutron-db-sync-rdbqg" Dec 11 14:07:34 crc kubenswrapper[5050]: I1211 14:07:34.585667 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-mkvlc" Dec 11 14:07:34 crc kubenswrapper[5050]: I1211 14:07:34.585798 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Dec 11 14:07:34 crc kubenswrapper[5050]: I1211 14:07:34.588527 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Dec 11 14:07:34 crc kubenswrapper[5050]: I1211 14:07:34.605399 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5fdbfbc95f-5ml4z"] Dec 11 14:07:34 crc kubenswrapper[5050]: I1211 14:07:34.621953 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5fdbfbc95f-5ml4z" Dec 11 14:07:34 crc kubenswrapper[5050]: I1211 14:07:34.651219 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aff61f93-f202-4057-a14e-7b395a73e323-combined-ca-bundle\") pod \"neutron-db-sync-rdbqg\" (UID: \"aff61f93-f202-4057-a14e-7b395a73e323\") " pod="openstack/neutron-db-sync-rdbqg" Dec 11 14:07:34 crc kubenswrapper[5050]: I1211 14:07:34.677656 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/aff61f93-f202-4057-a14e-7b395a73e323-config\") pod \"neutron-db-sync-rdbqg\" (UID: \"aff61f93-f202-4057-a14e-7b395a73e323\") " pod="openstack/neutron-db-sync-rdbqg" Dec 11 14:07:34 crc kubenswrapper[5050]: I1211 14:07:34.677822 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cx5vq\" (UniqueName: \"kubernetes.io/projected/aff61f93-f202-4057-a14e-7b395a73e323-kube-api-access-cx5vq\") pod \"neutron-db-sync-rdbqg\" (UID: \"aff61f93-f202-4057-a14e-7b395a73e323\") " pod="openstack/neutron-db-sync-rdbqg" Dec 11 14:07:34 crc kubenswrapper[5050]: I1211 14:07:34.681253 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-zwtmr"] Dec 11 14:07:34 crc kubenswrapper[5050]: I1211 14:07:34.718410 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvmbz\" (UniqueName: \"kubernetes.io/projected/f1d16bf5-88f5-4ec7-943a-fc1ec7c15425-kube-api-access-nvmbz\") pod \"placement-db-sync-zwtmr\" (UID: \"f1d16bf5-88f5-4ec7-943a-fc1ec7c15425\") " pod="openstack/placement-db-sync-zwtmr" Dec 11 14:07:34 crc kubenswrapper[5050]: I1211 14:07:34.718543 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f1d16bf5-88f5-4ec7-943a-fc1ec7c15425-logs\") pod \"placement-db-sync-zwtmr\" (UID: \"f1d16bf5-88f5-4ec7-943a-fc1ec7c15425\") " pod="openstack/placement-db-sync-zwtmr" Dec 11 14:07:34 crc kubenswrapper[5050]: I1211 14:07:34.718640 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cf5c6192-388a-487b-bf09-893e13347a2f-dns-swift-storage-0\") pod \"dnsmasq-dns-5fdbfbc95f-5ml4z\" (UID: \"cf5c6192-388a-487b-bf09-893e13347a2f\") " pod="openstack/dnsmasq-dns-5fdbfbc95f-5ml4z" Dec 11 14:07:34 crc kubenswrapper[5050]: I1211 14:07:34.718746 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f1d16bf5-88f5-4ec7-943a-fc1ec7c15425-scripts\") pod \"placement-db-sync-zwtmr\" (UID: \"f1d16bf5-88f5-4ec7-943a-fc1ec7c15425\") " pod="openstack/placement-db-sync-zwtmr" Dec 11 14:07:34 crc kubenswrapper[5050]: I1211 14:07:34.718812 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cf5c6192-388a-487b-bf09-893e13347a2f-ovsdbserver-sb\") pod \"dnsmasq-dns-5fdbfbc95f-5ml4z\" (UID: \"cf5c6192-388a-487b-bf09-893e13347a2f\") " pod="openstack/dnsmasq-dns-5fdbfbc95f-5ml4z" Dec 11 14:07:34 crc kubenswrapper[5050]: I1211 14:07:34.740753 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/f1d16bf5-88f5-4ec7-943a-fc1ec7c15425-combined-ca-bundle\") pod \"placement-db-sync-zwtmr\" (UID: \"f1d16bf5-88f5-4ec7-943a-fc1ec7c15425\") " pod="openstack/placement-db-sync-zwtmr" Dec 11 14:07:34 crc kubenswrapper[5050]: I1211 14:07:34.740813 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrpwm\" (UniqueName: \"kubernetes.io/projected/cf5c6192-388a-487b-bf09-893e13347a2f-kube-api-access-qrpwm\") pod \"dnsmasq-dns-5fdbfbc95f-5ml4z\" (UID: \"cf5c6192-388a-487b-bf09-893e13347a2f\") " pod="openstack/dnsmasq-dns-5fdbfbc95f-5ml4z" Dec 11 14:07:34 crc kubenswrapper[5050]: I1211 14:07:34.740984 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf5c6192-388a-487b-bf09-893e13347a2f-config\") pod \"dnsmasq-dns-5fdbfbc95f-5ml4z\" (UID: \"cf5c6192-388a-487b-bf09-893e13347a2f\") " pod="openstack/dnsmasq-dns-5fdbfbc95f-5ml4z" Dec 11 14:07:34 crc kubenswrapper[5050]: I1211 14:07:34.741003 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cf5c6192-388a-487b-bf09-893e13347a2f-ovsdbserver-nb\") pod \"dnsmasq-dns-5fdbfbc95f-5ml4z\" (UID: \"cf5c6192-388a-487b-bf09-893e13347a2f\") " pod="openstack/dnsmasq-dns-5fdbfbc95f-5ml4z" Dec 11 14:07:34 crc kubenswrapper[5050]: I1211 14:07:34.741116 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1d16bf5-88f5-4ec7-943a-fc1ec7c15425-config-data\") pod \"placement-db-sync-zwtmr\" (UID: \"f1d16bf5-88f5-4ec7-943a-fc1ec7c15425\") " pod="openstack/placement-db-sync-zwtmr" Dec 11 14:07:34 crc kubenswrapper[5050]: I1211 14:07:34.741209 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cf5c6192-388a-487b-bf09-893e13347a2f-dns-svc\") pod \"dnsmasq-dns-5fdbfbc95f-5ml4z\" (UID: \"cf5c6192-388a-487b-bf09-893e13347a2f\") " pod="openstack/dnsmasq-dns-5fdbfbc95f-5ml4z" Dec 11 14:07:34 crc kubenswrapper[5050]: I1211 14:07:34.754404 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5fdbfbc95f-5ml4z"] Dec 11 14:07:34 crc kubenswrapper[5050]: I1211 14:07:34.811873 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-6vzzr"] Dec 11 14:07:34 crc kubenswrapper[5050]: I1211 14:07:34.813445 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-6vzzr" Dec 11 14:07:34 crc kubenswrapper[5050]: I1211 14:07:34.821027 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-zdpcp" Dec 11 14:07:34 crc kubenswrapper[5050]: I1211 14:07:34.821355 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Dec 11 14:07:34 crc kubenswrapper[5050]: I1211 14:07:34.826098 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5fdbfbc95f-5ml4z"] Dec 11 14:07:34 crc kubenswrapper[5050]: E1211 14:07:34.827129 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[config dns-svc dns-swift-storage-0 kube-api-access-qrpwm ovsdbserver-nb ovsdbserver-sb], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/dnsmasq-dns-5fdbfbc95f-5ml4z" podUID="cf5c6192-388a-487b-bf09-893e13347a2f" Dec 11 14:07:34 crc kubenswrapper[5050]: I1211 14:07:34.827314 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-rdbqg" Dec 11 14:07:34 crc kubenswrapper[5050]: I1211 14:07:34.845878 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1d16bf5-88f5-4ec7-943a-fc1ec7c15425-combined-ca-bundle\") pod \"placement-db-sync-zwtmr\" (UID: \"f1d16bf5-88f5-4ec7-943a-fc1ec7c15425\") " pod="openstack/placement-db-sync-zwtmr" Dec 11 14:07:34 crc kubenswrapper[5050]: I1211 14:07:34.845943 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qrpwm\" (UniqueName: \"kubernetes.io/projected/cf5c6192-388a-487b-bf09-893e13347a2f-kube-api-access-qrpwm\") pod \"dnsmasq-dns-5fdbfbc95f-5ml4z\" (UID: \"cf5c6192-388a-487b-bf09-893e13347a2f\") " pod="openstack/dnsmasq-dns-5fdbfbc95f-5ml4z" Dec 11 14:07:34 crc kubenswrapper[5050]: I1211 14:07:34.847505 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf5c6192-388a-487b-bf09-893e13347a2f-config\") pod \"dnsmasq-dns-5fdbfbc95f-5ml4z\" (UID: \"cf5c6192-388a-487b-bf09-893e13347a2f\") " pod="openstack/dnsmasq-dns-5fdbfbc95f-5ml4z" Dec 11 14:07:34 crc kubenswrapper[5050]: I1211 14:07:34.847565 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cf5c6192-388a-487b-bf09-893e13347a2f-ovsdbserver-nb\") pod \"dnsmasq-dns-5fdbfbc95f-5ml4z\" (UID: \"cf5c6192-388a-487b-bf09-893e13347a2f\") " pod="openstack/dnsmasq-dns-5fdbfbc95f-5ml4z" Dec 11 14:07:34 crc kubenswrapper[5050]: I1211 14:07:34.847647 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1d16bf5-88f5-4ec7-943a-fc1ec7c15425-config-data\") pod \"placement-db-sync-zwtmr\" (UID: \"f1d16bf5-88f5-4ec7-943a-fc1ec7c15425\") " pod="openstack/placement-db-sync-zwtmr" Dec 11 14:07:34 crc kubenswrapper[5050]: I1211 14:07:34.847721 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cf5c6192-388a-487b-bf09-893e13347a2f-dns-svc\") pod \"dnsmasq-dns-5fdbfbc95f-5ml4z\" (UID: \"cf5c6192-388a-487b-bf09-893e13347a2f\") " pod="openstack/dnsmasq-dns-5fdbfbc95f-5ml4z" Dec 11 14:07:34 crc kubenswrapper[5050]: I1211 14:07:34.847768 5050 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/107786ed-ea8f-4c2f-ac86-54b1bb504a69-combined-ca-bundle\") pod \"barbican-db-sync-6vzzr\" (UID: \"107786ed-ea8f-4c2f-ac86-54b1bb504a69\") " pod="openstack/barbican-db-sync-6vzzr" Dec 11 14:07:34 crc kubenswrapper[5050]: I1211 14:07:34.847830 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sx8rt\" (UniqueName: \"kubernetes.io/projected/107786ed-ea8f-4c2f-ac86-54b1bb504a69-kube-api-access-sx8rt\") pod \"barbican-db-sync-6vzzr\" (UID: \"107786ed-ea8f-4c2f-ac86-54b1bb504a69\") " pod="openstack/barbican-db-sync-6vzzr" Dec 11 14:07:34 crc kubenswrapper[5050]: I1211 14:07:34.847904 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nvmbz\" (UniqueName: \"kubernetes.io/projected/f1d16bf5-88f5-4ec7-943a-fc1ec7c15425-kube-api-access-nvmbz\") pod \"placement-db-sync-zwtmr\" (UID: \"f1d16bf5-88f5-4ec7-943a-fc1ec7c15425\") " pod="openstack/placement-db-sync-zwtmr" Dec 11 14:07:34 crc kubenswrapper[5050]: I1211 14:07:34.847929 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f1d16bf5-88f5-4ec7-943a-fc1ec7c15425-logs\") pod \"placement-db-sync-zwtmr\" (UID: \"f1d16bf5-88f5-4ec7-943a-fc1ec7c15425\") " pod="openstack/placement-db-sync-zwtmr" Dec 11 14:07:34 crc kubenswrapper[5050]: I1211 14:07:34.847966 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cf5c6192-388a-487b-bf09-893e13347a2f-dns-swift-storage-0\") pod \"dnsmasq-dns-5fdbfbc95f-5ml4z\" (UID: \"cf5c6192-388a-487b-bf09-893e13347a2f\") " pod="openstack/dnsmasq-dns-5fdbfbc95f-5ml4z" Dec 11 14:07:34 crc kubenswrapper[5050]: I1211 14:07:34.848064 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/107786ed-ea8f-4c2f-ac86-54b1bb504a69-db-sync-config-data\") pod \"barbican-db-sync-6vzzr\" (UID: \"107786ed-ea8f-4c2f-ac86-54b1bb504a69\") " pod="openstack/barbican-db-sync-6vzzr" Dec 11 14:07:34 crc kubenswrapper[5050]: I1211 14:07:34.850349 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf5c6192-388a-487b-bf09-893e13347a2f-config\") pod \"dnsmasq-dns-5fdbfbc95f-5ml4z\" (UID: \"cf5c6192-388a-487b-bf09-893e13347a2f\") " pod="openstack/dnsmasq-dns-5fdbfbc95f-5ml4z" Dec 11 14:07:34 crc kubenswrapper[5050]: I1211 14:07:34.852679 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f1d16bf5-88f5-4ec7-943a-fc1ec7c15425-logs\") pod \"placement-db-sync-zwtmr\" (UID: \"f1d16bf5-88f5-4ec7-943a-fc1ec7c15425\") " pod="openstack/placement-db-sync-zwtmr" Dec 11 14:07:34 crc kubenswrapper[5050]: I1211 14:07:34.855638 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cf5c6192-388a-487b-bf09-893e13347a2f-dns-svc\") pod \"dnsmasq-dns-5fdbfbc95f-5ml4z\" (UID: \"cf5c6192-388a-487b-bf09-893e13347a2f\") " pod="openstack/dnsmasq-dns-5fdbfbc95f-5ml4z" Dec 11 14:07:34 crc kubenswrapper[5050]: I1211 14:07:34.855854 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-6vzzr"] Dec 11 14:07:34 crc kubenswrapper[5050]: 
I1211 14:07:34.848095 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f1d16bf5-88f5-4ec7-943a-fc1ec7c15425-scripts\") pod \"placement-db-sync-zwtmr\" (UID: \"f1d16bf5-88f5-4ec7-943a-fc1ec7c15425\") " pod="openstack/placement-db-sync-zwtmr" Dec 11 14:07:34 crc kubenswrapper[5050]: I1211 14:07:34.855990 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cf5c6192-388a-487b-bf09-893e13347a2f-ovsdbserver-sb\") pod \"dnsmasq-dns-5fdbfbc95f-5ml4z\" (UID: \"cf5c6192-388a-487b-bf09-893e13347a2f\") " pod="openstack/dnsmasq-dns-5fdbfbc95f-5ml4z" Dec 11 14:07:34 crc kubenswrapper[5050]: I1211 14:07:34.856416 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cf5c6192-388a-487b-bf09-893e13347a2f-dns-swift-storage-0\") pod \"dnsmasq-dns-5fdbfbc95f-5ml4z\" (UID: \"cf5c6192-388a-487b-bf09-893e13347a2f\") " pod="openstack/dnsmasq-dns-5fdbfbc95f-5ml4z" Dec 11 14:07:34 crc kubenswrapper[5050]: I1211 14:07:34.856883 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cf5c6192-388a-487b-bf09-893e13347a2f-ovsdbserver-sb\") pod \"dnsmasq-dns-5fdbfbc95f-5ml4z\" (UID: \"cf5c6192-388a-487b-bf09-893e13347a2f\") " pod="openstack/dnsmasq-dns-5fdbfbc95f-5ml4z" Dec 11 14:07:34 crc kubenswrapper[5050]: I1211 14:07:34.859215 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cf5c6192-388a-487b-bf09-893e13347a2f-ovsdbserver-nb\") pod \"dnsmasq-dns-5fdbfbc95f-5ml4z\" (UID: \"cf5c6192-388a-487b-bf09-893e13347a2f\") " pod="openstack/dnsmasq-dns-5fdbfbc95f-5ml4z" Dec 11 14:07:34 crc kubenswrapper[5050]: I1211 14:07:34.862363 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1d16bf5-88f5-4ec7-943a-fc1ec7c15425-config-data\") pod \"placement-db-sync-zwtmr\" (UID: \"f1d16bf5-88f5-4ec7-943a-fc1ec7c15425\") " pod="openstack/placement-db-sync-zwtmr" Dec 11 14:07:34 crc kubenswrapper[5050]: I1211 14:07:34.862873 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1d16bf5-88f5-4ec7-943a-fc1ec7c15425-combined-ca-bundle\") pod \"placement-db-sync-zwtmr\" (UID: \"f1d16bf5-88f5-4ec7-943a-fc1ec7c15425\") " pod="openstack/placement-db-sync-zwtmr" Dec 11 14:07:34 crc kubenswrapper[5050]: I1211 14:07:34.871193 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f1d16bf5-88f5-4ec7-943a-fc1ec7c15425-scripts\") pod \"placement-db-sync-zwtmr\" (UID: \"f1d16bf5-88f5-4ec7-943a-fc1ec7c15425\") " pod="openstack/placement-db-sync-zwtmr" Dec 11 14:07:34 crc kubenswrapper[5050]: I1211 14:07:34.881430 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qrpwm\" (UniqueName: \"kubernetes.io/projected/cf5c6192-388a-487b-bf09-893e13347a2f-kube-api-access-qrpwm\") pod \"dnsmasq-dns-5fdbfbc95f-5ml4z\" (UID: \"cf5c6192-388a-487b-bf09-893e13347a2f\") " pod="openstack/dnsmasq-dns-5fdbfbc95f-5ml4z" Dec 11 14:07:34 crc kubenswrapper[5050]: I1211 14:07:34.881970 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nvmbz\" (UniqueName: 
\"kubernetes.io/projected/f1d16bf5-88f5-4ec7-943a-fc1ec7c15425-kube-api-access-nvmbz\") pod \"placement-db-sync-zwtmr\" (UID: \"f1d16bf5-88f5-4ec7-943a-fc1ec7c15425\") " pod="openstack/placement-db-sync-zwtmr" Dec 11 14:07:34 crc kubenswrapper[5050]: I1211 14:07:34.887755 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6f6f8cb849-rvxcv"] Dec 11 14:07:34 crc kubenswrapper[5050]: I1211 14:07:34.892473 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6f6f8cb849-rvxcv" Dec 11 14:07:34 crc kubenswrapper[5050]: I1211 14:07:34.895033 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 11 14:07:34 crc kubenswrapper[5050]: I1211 14:07:34.900714 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 11 14:07:34 crc kubenswrapper[5050]: I1211 14:07:34.908905 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 11 14:07:34 crc kubenswrapper[5050]: I1211 14:07:34.913733 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 11 14:07:34 crc kubenswrapper[5050]: I1211 14:07:34.914001 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 11 14:07:34 crc kubenswrapper[5050]: I1211 14:07:34.919228 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6f6f8cb849-rvxcv"] Dec 11 14:07:34 crc kubenswrapper[5050]: I1211 14:07:34.958350 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/49e25755-0205-47fb-a88c-2f7a3291a687-log-httpd\") pod \"ceilometer-0\" (UID: \"49e25755-0205-47fb-a88c-2f7a3291a687\") " pod="openstack/ceilometer-0" Dec 11 14:07:34 crc kubenswrapper[5050]: I1211 14:07:34.958409 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/107786ed-ea8f-4c2f-ac86-54b1bb504a69-db-sync-config-data\") pod \"barbican-db-sync-6vzzr\" (UID: \"107786ed-ea8f-4c2f-ac86-54b1bb504a69\") " pod="openstack/barbican-db-sync-6vzzr" Dec 11 14:07:34 crc kubenswrapper[5050]: I1211 14:07:34.958464 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/220e4ed5-e988-428e-b186-7a4231311831-config\") pod \"dnsmasq-dns-6f6f8cb849-rvxcv\" (UID: \"220e4ed5-e988-428e-b186-7a4231311831\") " pod="openstack/dnsmasq-dns-6f6f8cb849-rvxcv" Dec 11 14:07:34 crc kubenswrapper[5050]: I1211 14:07:34.958488 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/220e4ed5-e988-428e-b186-7a4231311831-ovsdbserver-sb\") pod \"dnsmasq-dns-6f6f8cb849-rvxcv\" (UID: \"220e4ed5-e988-428e-b186-7a4231311831\") " pod="openstack/dnsmasq-dns-6f6f8cb849-rvxcv" Dec 11 14:07:34 crc kubenswrapper[5050]: I1211 14:07:34.958514 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/220e4ed5-e988-428e-b186-7a4231311831-dns-swift-storage-0\") pod \"dnsmasq-dns-6f6f8cb849-rvxcv\" (UID: \"220e4ed5-e988-428e-b186-7a4231311831\") " pod="openstack/dnsmasq-dns-6f6f8cb849-rvxcv" Dec 11 14:07:34 crc kubenswrapper[5050]: I1211 14:07:34.958554 5050 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/49e25755-0205-47fb-a88c-2f7a3291a687-run-httpd\") pod \"ceilometer-0\" (UID: \"49e25755-0205-47fb-a88c-2f7a3291a687\") " pod="openstack/ceilometer-0" Dec 11 14:07:34 crc kubenswrapper[5050]: I1211 14:07:34.958599 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92d7x\" (UniqueName: \"kubernetes.io/projected/220e4ed5-e988-428e-b186-7a4231311831-kube-api-access-92d7x\") pod \"dnsmasq-dns-6f6f8cb849-rvxcv\" (UID: \"220e4ed5-e988-428e-b186-7a4231311831\") " pod="openstack/dnsmasq-dns-6f6f8cb849-rvxcv" Dec 11 14:07:34 crc kubenswrapper[5050]: I1211 14:07:34.958623 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/107786ed-ea8f-4c2f-ac86-54b1bb504a69-combined-ca-bundle\") pod \"barbican-db-sync-6vzzr\" (UID: \"107786ed-ea8f-4c2f-ac86-54b1bb504a69\") " pod="openstack/barbican-db-sync-6vzzr" Dec 11 14:07:34 crc kubenswrapper[5050]: I1211 14:07:34.958667 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/220e4ed5-e988-428e-b186-7a4231311831-ovsdbserver-nb\") pod \"dnsmasq-dns-6f6f8cb849-rvxcv\" (UID: \"220e4ed5-e988-428e-b186-7a4231311831\") " pod="openstack/dnsmasq-dns-6f6f8cb849-rvxcv" Dec 11 14:07:34 crc kubenswrapper[5050]: I1211 14:07:34.958691 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sx8rt\" (UniqueName: \"kubernetes.io/projected/107786ed-ea8f-4c2f-ac86-54b1bb504a69-kube-api-access-sx8rt\") pod \"barbican-db-sync-6vzzr\" (UID: \"107786ed-ea8f-4c2f-ac86-54b1bb504a69\") " pod="openstack/barbican-db-sync-6vzzr" Dec 11 14:07:34 crc kubenswrapper[5050]: I1211 14:07:34.958723 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tbpf7\" (UniqueName: \"kubernetes.io/projected/49e25755-0205-47fb-a88c-2f7a3291a687-kube-api-access-tbpf7\") pod \"ceilometer-0\" (UID: \"49e25755-0205-47fb-a88c-2f7a3291a687\") " pod="openstack/ceilometer-0" Dec 11 14:07:34 crc kubenswrapper[5050]: I1211 14:07:34.958749 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49e25755-0205-47fb-a88c-2f7a3291a687-config-data\") pod \"ceilometer-0\" (UID: \"49e25755-0205-47fb-a88c-2f7a3291a687\") " pod="openstack/ceilometer-0" Dec 11 14:07:34 crc kubenswrapper[5050]: I1211 14:07:34.958774 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/49e25755-0205-47fb-a88c-2f7a3291a687-scripts\") pod \"ceilometer-0\" (UID: \"49e25755-0205-47fb-a88c-2f7a3291a687\") " pod="openstack/ceilometer-0" Dec 11 14:07:34 crc kubenswrapper[5050]: I1211 14:07:34.958796 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/220e4ed5-e988-428e-b186-7a4231311831-dns-svc\") pod \"dnsmasq-dns-6f6f8cb849-rvxcv\" (UID: \"220e4ed5-e988-428e-b186-7a4231311831\") " pod="openstack/dnsmasq-dns-6f6f8cb849-rvxcv" Dec 11 14:07:34 crc kubenswrapper[5050]: I1211 14:07:34.958829 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/49e25755-0205-47fb-a88c-2f7a3291a687-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"49e25755-0205-47fb-a88c-2f7a3291a687\") " pod="openstack/ceilometer-0" Dec 11 14:07:34 crc kubenswrapper[5050]: I1211 14:07:34.958850 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49e25755-0205-47fb-a88c-2f7a3291a687-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"49e25755-0205-47fb-a88c-2f7a3291a687\") " pod="openstack/ceilometer-0" Dec 11 14:07:34 crc kubenswrapper[5050]: I1211 14:07:34.967725 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/107786ed-ea8f-4c2f-ac86-54b1bb504a69-db-sync-config-data\") pod \"barbican-db-sync-6vzzr\" (UID: \"107786ed-ea8f-4c2f-ac86-54b1bb504a69\") " pod="openstack/barbican-db-sync-6vzzr" Dec 11 14:07:34 crc kubenswrapper[5050]: I1211 14:07:34.968094 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/107786ed-ea8f-4c2f-ac86-54b1bb504a69-combined-ca-bundle\") pod \"barbican-db-sync-6vzzr\" (UID: \"107786ed-ea8f-4c2f-ac86-54b1bb504a69\") " pod="openstack/barbican-db-sync-6vzzr" Dec 11 14:07:35 crc kubenswrapper[5050]: I1211 14:07:35.001166 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sx8rt\" (UniqueName: \"kubernetes.io/projected/107786ed-ea8f-4c2f-ac86-54b1bb504a69-kube-api-access-sx8rt\") pod \"barbican-db-sync-6vzzr\" (UID: \"107786ed-ea8f-4c2f-ac86-54b1bb504a69\") " pod="openstack/barbican-db-sync-6vzzr" Dec 11 14:07:35 crc kubenswrapper[5050]: I1211 14:07:35.021415 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-zwtmr" Dec 11 14:07:35 crc kubenswrapper[5050]: I1211 14:07:35.060910 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/220e4ed5-e988-428e-b186-7a4231311831-config\") pod \"dnsmasq-dns-6f6f8cb849-rvxcv\" (UID: \"220e4ed5-e988-428e-b186-7a4231311831\") " pod="openstack/dnsmasq-dns-6f6f8cb849-rvxcv" Dec 11 14:07:35 crc kubenswrapper[5050]: I1211 14:07:35.060979 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/220e4ed5-e988-428e-b186-7a4231311831-ovsdbserver-sb\") pod \"dnsmasq-dns-6f6f8cb849-rvxcv\" (UID: \"220e4ed5-e988-428e-b186-7a4231311831\") " pod="openstack/dnsmasq-dns-6f6f8cb849-rvxcv" Dec 11 14:07:35 crc kubenswrapper[5050]: I1211 14:07:35.061093 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/220e4ed5-e988-428e-b186-7a4231311831-dns-swift-storage-0\") pod \"dnsmasq-dns-6f6f8cb849-rvxcv\" (UID: \"220e4ed5-e988-428e-b186-7a4231311831\") " pod="openstack/dnsmasq-dns-6f6f8cb849-rvxcv" Dec 11 14:07:35 crc kubenswrapper[5050]: I1211 14:07:35.061158 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/49e25755-0205-47fb-a88c-2f7a3291a687-run-httpd\") pod \"ceilometer-0\" (UID: \"49e25755-0205-47fb-a88c-2f7a3291a687\") " pod="openstack/ceilometer-0" Dec 11 14:07:35 crc kubenswrapper[5050]: I1211 14:07:35.061212 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-92d7x\" (UniqueName: \"kubernetes.io/projected/220e4ed5-e988-428e-b186-7a4231311831-kube-api-access-92d7x\") pod \"dnsmasq-dns-6f6f8cb849-rvxcv\" (UID: \"220e4ed5-e988-428e-b186-7a4231311831\") " pod="openstack/dnsmasq-dns-6f6f8cb849-rvxcv" Dec 11 14:07:35 crc kubenswrapper[5050]: I1211 14:07:35.061598 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/220e4ed5-e988-428e-b186-7a4231311831-ovsdbserver-nb\") pod \"dnsmasq-dns-6f6f8cb849-rvxcv\" (UID: \"220e4ed5-e988-428e-b186-7a4231311831\") " pod="openstack/dnsmasq-dns-6f6f8cb849-rvxcv" Dec 11 14:07:35 crc kubenswrapper[5050]: I1211 14:07:35.061646 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tbpf7\" (UniqueName: \"kubernetes.io/projected/49e25755-0205-47fb-a88c-2f7a3291a687-kube-api-access-tbpf7\") pod \"ceilometer-0\" (UID: \"49e25755-0205-47fb-a88c-2f7a3291a687\") " pod="openstack/ceilometer-0" Dec 11 14:07:35 crc kubenswrapper[5050]: I1211 14:07:35.063132 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/220e4ed5-e988-428e-b186-7a4231311831-ovsdbserver-sb\") pod \"dnsmasq-dns-6f6f8cb849-rvxcv\" (UID: \"220e4ed5-e988-428e-b186-7a4231311831\") " pod="openstack/dnsmasq-dns-6f6f8cb849-rvxcv" Dec 11 14:07:35 crc kubenswrapper[5050]: I1211 14:07:35.063286 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/220e4ed5-e988-428e-b186-7a4231311831-dns-swift-storage-0\") pod \"dnsmasq-dns-6f6f8cb849-rvxcv\" (UID: \"220e4ed5-e988-428e-b186-7a4231311831\") " pod="openstack/dnsmasq-dns-6f6f8cb849-rvxcv" Dec 11 14:07:35 crc kubenswrapper[5050]: I1211 
14:07:35.064338 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/220e4ed5-e988-428e-b186-7a4231311831-config\") pod \"dnsmasq-dns-6f6f8cb849-rvxcv\" (UID: \"220e4ed5-e988-428e-b186-7a4231311831\") " pod="openstack/dnsmasq-dns-6f6f8cb849-rvxcv" Dec 11 14:07:35 crc kubenswrapper[5050]: I1211 14:07:35.064466 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/220e4ed5-e988-428e-b186-7a4231311831-ovsdbserver-nb\") pod \"dnsmasq-dns-6f6f8cb849-rvxcv\" (UID: \"220e4ed5-e988-428e-b186-7a4231311831\") " pod="openstack/dnsmasq-dns-6f6f8cb849-rvxcv" Dec 11 14:07:35 crc kubenswrapper[5050]: I1211 14:07:35.064735 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/49e25755-0205-47fb-a88c-2f7a3291a687-run-httpd\") pod \"ceilometer-0\" (UID: \"49e25755-0205-47fb-a88c-2f7a3291a687\") " pod="openstack/ceilometer-0" Dec 11 14:07:35 crc kubenswrapper[5050]: I1211 14:07:35.064972 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49e25755-0205-47fb-a88c-2f7a3291a687-config-data\") pod \"ceilometer-0\" (UID: \"49e25755-0205-47fb-a88c-2f7a3291a687\") " pod="openstack/ceilometer-0" Dec 11 14:07:35 crc kubenswrapper[5050]: I1211 14:07:35.065050 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/49e25755-0205-47fb-a88c-2f7a3291a687-scripts\") pod \"ceilometer-0\" (UID: \"49e25755-0205-47fb-a88c-2f7a3291a687\") " pod="openstack/ceilometer-0" Dec 11 14:07:35 crc kubenswrapper[5050]: I1211 14:07:35.065541 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/220e4ed5-e988-428e-b186-7a4231311831-dns-svc\") pod \"dnsmasq-dns-6f6f8cb849-rvxcv\" (UID: \"220e4ed5-e988-428e-b186-7a4231311831\") " pod="openstack/dnsmasq-dns-6f6f8cb849-rvxcv" Dec 11 14:07:35 crc kubenswrapper[5050]: I1211 14:07:35.065591 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/49e25755-0205-47fb-a88c-2f7a3291a687-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"49e25755-0205-47fb-a88c-2f7a3291a687\") " pod="openstack/ceilometer-0" Dec 11 14:07:35 crc kubenswrapper[5050]: I1211 14:07:35.065619 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49e25755-0205-47fb-a88c-2f7a3291a687-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"49e25755-0205-47fb-a88c-2f7a3291a687\") " pod="openstack/ceilometer-0" Dec 11 14:07:35 crc kubenswrapper[5050]: I1211 14:07:35.065692 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/49e25755-0205-47fb-a88c-2f7a3291a687-log-httpd\") pod \"ceilometer-0\" (UID: \"49e25755-0205-47fb-a88c-2f7a3291a687\") " pod="openstack/ceilometer-0" Dec 11 14:07:35 crc kubenswrapper[5050]: I1211 14:07:35.067554 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/220e4ed5-e988-428e-b186-7a4231311831-dns-svc\") pod \"dnsmasq-dns-6f6f8cb849-rvxcv\" (UID: \"220e4ed5-e988-428e-b186-7a4231311831\") " pod="openstack/dnsmasq-dns-6f6f8cb849-rvxcv" Dec 11 14:07:35 
crc kubenswrapper[5050]: I1211 14:07:35.067882 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/49e25755-0205-47fb-a88c-2f7a3291a687-log-httpd\") pod \"ceilometer-0\" (UID: \"49e25755-0205-47fb-a88c-2f7a3291a687\") " pod="openstack/ceilometer-0" Dec 11 14:07:35 crc kubenswrapper[5050]: I1211 14:07:35.098966 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tbpf7\" (UniqueName: \"kubernetes.io/projected/49e25755-0205-47fb-a88c-2f7a3291a687-kube-api-access-tbpf7\") pod \"ceilometer-0\" (UID: \"49e25755-0205-47fb-a88c-2f7a3291a687\") " pod="openstack/ceilometer-0" Dec 11 14:07:35 crc kubenswrapper[5050]: I1211 14:07:35.099083 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/49e25755-0205-47fb-a88c-2f7a3291a687-scripts\") pod \"ceilometer-0\" (UID: \"49e25755-0205-47fb-a88c-2f7a3291a687\") " pod="openstack/ceilometer-0" Dec 11 14:07:35 crc kubenswrapper[5050]: I1211 14:07:35.100792 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49e25755-0205-47fb-a88c-2f7a3291a687-config-data\") pod \"ceilometer-0\" (UID: \"49e25755-0205-47fb-a88c-2f7a3291a687\") " pod="openstack/ceilometer-0" Dec 11 14:07:35 crc kubenswrapper[5050]: I1211 14:07:35.100884 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49e25755-0205-47fb-a88c-2f7a3291a687-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"49e25755-0205-47fb-a88c-2f7a3291a687\") " pod="openstack/ceilometer-0" Dec 11 14:07:35 crc kubenswrapper[5050]: I1211 14:07:35.101466 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/49e25755-0205-47fb-a88c-2f7a3291a687-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"49e25755-0205-47fb-a88c-2f7a3291a687\") " pod="openstack/ceilometer-0" Dec 11 14:07:35 crc kubenswrapper[5050]: I1211 14:07:35.102093 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-92d7x\" (UniqueName: \"kubernetes.io/projected/220e4ed5-e988-428e-b186-7a4231311831-kube-api-access-92d7x\") pod \"dnsmasq-dns-6f6f8cb849-rvxcv\" (UID: \"220e4ed5-e988-428e-b186-7a4231311831\") " pod="openstack/dnsmasq-dns-6f6f8cb849-rvxcv" Dec 11 14:07:35 crc kubenswrapper[5050]: I1211 14:07:35.156864 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6f6f8cb849-rvxcv" Dec 11 14:07:35 crc kubenswrapper[5050]: I1211 14:07:35.158308 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-6vzzr" Dec 11 14:07:35 crc kubenswrapper[5050]: I1211 14:07:35.174894 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 11 14:07:35 crc kubenswrapper[5050]: I1211 14:07:35.244401 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-767d96458c-2vdb8"] Dec 11 14:07:35 crc kubenswrapper[5050]: I1211 14:07:35.372578 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-cvstt"] Dec 11 14:07:35 crc kubenswrapper[5050]: I1211 14:07:35.392622 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-7kvxk"] Dec 11 14:07:35 crc kubenswrapper[5050]: I1211 14:07:35.530103 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Dec 11 14:07:35 crc kubenswrapper[5050]: I1211 14:07:35.532826 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 11 14:07:35 crc kubenswrapper[5050]: I1211 14:07:35.539216 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Dec 11 14:07:35 crc kubenswrapper[5050]: I1211 14:07:35.539470 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-2jgks" Dec 11 14:07:35 crc kubenswrapper[5050]: I1211 14:07:35.540411 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Dec 11 14:07:35 crc kubenswrapper[5050]: I1211 14:07:35.577876 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 11 14:07:35 crc kubenswrapper[5050]: I1211 14:07:35.585022 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"29144864-8034-4f1c-a270-3fa278d3a4c5\") " pod="openstack/glance-default-external-api-0" Dec 11 14:07:35 crc kubenswrapper[5050]: I1211 14:07:35.585091 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29144864-8034-4f1c-a270-3fa278d3a4c5-config-data\") pod \"glance-default-external-api-0\" (UID: \"29144864-8034-4f1c-a270-3fa278d3a4c5\") " pod="openstack/glance-default-external-api-0" Dec 11 14:07:35 crc kubenswrapper[5050]: I1211 14:07:35.585167 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/29144864-8034-4f1c-a270-3fa278d3a4c5-scripts\") pod \"glance-default-external-api-0\" (UID: \"29144864-8034-4f1c-a270-3fa278d3a4c5\") " pod="openstack/glance-default-external-api-0" Dec 11 14:07:35 crc kubenswrapper[5050]: I1211 14:07:35.585246 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/29144864-8034-4f1c-a270-3fa278d3a4c5-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"29144864-8034-4f1c-a270-3fa278d3a4c5\") " pod="openstack/glance-default-external-api-0" Dec 11 14:07:35 crc kubenswrapper[5050]: I1211 14:07:35.585294 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c86w5\" (UniqueName: \"kubernetes.io/projected/29144864-8034-4f1c-a270-3fa278d3a4c5-kube-api-access-c86w5\") pod \"glance-default-external-api-0\" (UID: \"29144864-8034-4f1c-a270-3fa278d3a4c5\") " 
pod="openstack/glance-default-external-api-0" Dec 11 14:07:35 crc kubenswrapper[5050]: I1211 14:07:35.585385 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29144864-8034-4f1c-a270-3fa278d3a4c5-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"29144864-8034-4f1c-a270-3fa278d3a4c5\") " pod="openstack/glance-default-external-api-0" Dec 11 14:07:35 crc kubenswrapper[5050]: I1211 14:07:35.585410 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/29144864-8034-4f1c-a270-3fa278d3a4c5-logs\") pod \"glance-default-external-api-0\" (UID: \"29144864-8034-4f1c-a270-3fa278d3a4c5\") " pod="openstack/glance-default-external-api-0" Dec 11 14:07:35 crc kubenswrapper[5050]: I1211 14:07:35.601705 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-rdbqg"] Dec 11 14:07:35 crc kubenswrapper[5050]: W1211 14:07:35.640917 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaff61f93_f202_4057_a14e_7b395a73e323.slice/crio-de395991995705ad773b339b04a206f6026ad47c0b1a2ff0b31463aa71cf0cf3 WatchSource:0}: Error finding container de395991995705ad773b339b04a206f6026ad47c0b1a2ff0b31463aa71cf0cf3: Status 404 returned error can't find the container with id de395991995705ad773b339b04a206f6026ad47c0b1a2ff0b31463aa71cf0cf3 Dec 11 14:07:35 crc kubenswrapper[5050]: I1211 14:07:35.687430 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"29144864-8034-4f1c-a270-3fa278d3a4c5\") " pod="openstack/glance-default-external-api-0" Dec 11 14:07:35 crc kubenswrapper[5050]: I1211 14:07:35.687944 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29144864-8034-4f1c-a270-3fa278d3a4c5-config-data\") pod \"glance-default-external-api-0\" (UID: \"29144864-8034-4f1c-a270-3fa278d3a4c5\") " pod="openstack/glance-default-external-api-0" Dec 11 14:07:35 crc kubenswrapper[5050]: I1211 14:07:35.688161 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/29144864-8034-4f1c-a270-3fa278d3a4c5-scripts\") pod \"glance-default-external-api-0\" (UID: \"29144864-8034-4f1c-a270-3fa278d3a4c5\") " pod="openstack/glance-default-external-api-0" Dec 11 14:07:35 crc kubenswrapper[5050]: I1211 14:07:35.688194 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/29144864-8034-4f1c-a270-3fa278d3a4c5-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"29144864-8034-4f1c-a270-3fa278d3a4c5\") " pod="openstack/glance-default-external-api-0" Dec 11 14:07:35 crc kubenswrapper[5050]: I1211 14:07:35.688244 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c86w5\" (UniqueName: \"kubernetes.io/projected/29144864-8034-4f1c-a270-3fa278d3a4c5-kube-api-access-c86w5\") pod \"glance-default-external-api-0\" (UID: \"29144864-8034-4f1c-a270-3fa278d3a4c5\") " pod="openstack/glance-default-external-api-0" Dec 11 14:07:35 crc kubenswrapper[5050]: I1211 14:07:35.688366 5050 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29144864-8034-4f1c-a270-3fa278d3a4c5-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"29144864-8034-4f1c-a270-3fa278d3a4c5\") " pod="openstack/glance-default-external-api-0" Dec 11 14:07:35 crc kubenswrapper[5050]: I1211 14:07:35.688411 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/29144864-8034-4f1c-a270-3fa278d3a4c5-logs\") pod \"glance-default-external-api-0\" (UID: \"29144864-8034-4f1c-a270-3fa278d3a4c5\") " pod="openstack/glance-default-external-api-0" Dec 11 14:07:35 crc kubenswrapper[5050]: I1211 14:07:35.689085 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/29144864-8034-4f1c-a270-3fa278d3a4c5-logs\") pod \"glance-default-external-api-0\" (UID: \"29144864-8034-4f1c-a270-3fa278d3a4c5\") " pod="openstack/glance-default-external-api-0" Dec 11 14:07:35 crc kubenswrapper[5050]: I1211 14:07:35.689470 5050 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"29144864-8034-4f1c-a270-3fa278d3a4c5\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/glance-default-external-api-0" Dec 11 14:07:35 crc kubenswrapper[5050]: I1211 14:07:35.690867 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/29144864-8034-4f1c-a270-3fa278d3a4c5-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"29144864-8034-4f1c-a270-3fa278d3a4c5\") " pod="openstack/glance-default-external-api-0" Dec 11 14:07:35 crc kubenswrapper[5050]: I1211 14:07:35.712282 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29144864-8034-4f1c-a270-3fa278d3a4c5-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"29144864-8034-4f1c-a270-3fa278d3a4c5\") " pod="openstack/glance-default-external-api-0" Dec 11 14:07:35 crc kubenswrapper[5050]: I1211 14:07:35.715163 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c86w5\" (UniqueName: \"kubernetes.io/projected/29144864-8034-4f1c-a270-3fa278d3a4c5-kube-api-access-c86w5\") pod \"glance-default-external-api-0\" (UID: \"29144864-8034-4f1c-a270-3fa278d3a4c5\") " pod="openstack/glance-default-external-api-0" Dec 11 14:07:35 crc kubenswrapper[5050]: I1211 14:07:35.715679 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29144864-8034-4f1c-a270-3fa278d3a4c5-config-data\") pod \"glance-default-external-api-0\" (UID: \"29144864-8034-4f1c-a270-3fa278d3a4c5\") " pod="openstack/glance-default-external-api-0" Dec 11 14:07:35 crc kubenswrapper[5050]: I1211 14:07:35.721847 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/29144864-8034-4f1c-a270-3fa278d3a4c5-scripts\") pod \"glance-default-external-api-0\" (UID: \"29144864-8034-4f1c-a270-3fa278d3a4c5\") " pod="openstack/glance-default-external-api-0" Dec 11 14:07:35 crc kubenswrapper[5050]: I1211 14:07:35.736197 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-7kvxk" 
event={"ID":"2d9200d5-8f1c-46be-a802-995c7f58b754","Type":"ContainerStarted","Data":"1d9c5ca984053216d7d1231a28e67f27bc2f8619d7a076726c333375cf4c47ed"} Dec 11 14:07:35 crc kubenswrapper[5050]: I1211 14:07:35.753771 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-rdbqg" event={"ID":"aff61f93-f202-4057-a14e-7b395a73e323","Type":"ContainerStarted","Data":"de395991995705ad773b339b04a206f6026ad47c0b1a2ff0b31463aa71cf0cf3"} Dec 11 14:07:35 crc kubenswrapper[5050]: I1211 14:07:35.781378 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-cvstt" event={"ID":"295ba8ac-0609-4021-920b-3ab943ea21ad","Type":"ContainerStarted","Data":"1d4dd509ef4b84160fad41127c9030a5ce0133e7952cc6876f1b63ad6c2dd0a1"} Dec 11 14:07:35 crc kubenswrapper[5050]: I1211 14:07:35.801479 5050 generic.go:334] "Generic (PLEG): container finished" podID="b06a8cfc-cccf-48d2-b354-91fc8b4cd4bd" containerID="883a55eabbd16e8ce34faa0b68e2973a310cf413e0685b2c298d47d6c2825518" exitCode=0 Dec 11 14:07:35 crc kubenswrapper[5050]: I1211 14:07:35.801660 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5fdbfbc95f-5ml4z" Dec 11 14:07:35 crc kubenswrapper[5050]: I1211 14:07:35.801749 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-767d96458c-2vdb8" event={"ID":"b06a8cfc-cccf-48d2-b354-91fc8b4cd4bd","Type":"ContainerDied","Data":"883a55eabbd16e8ce34faa0b68e2973a310cf413e0685b2c298d47d6c2825518"} Dec 11 14:07:35 crc kubenswrapper[5050]: I1211 14:07:35.801830 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-767d96458c-2vdb8" event={"ID":"b06a8cfc-cccf-48d2-b354-91fc8b4cd4bd","Type":"ContainerStarted","Data":"f498f5174bd7e13f53990c95b1ffd3f90f83497a2a7a7b906cb1901b110b2697"} Dec 11 14:07:35 crc kubenswrapper[5050]: I1211 14:07:35.807297 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"29144864-8034-4f1c-a270-3fa278d3a4c5\") " pod="openstack/glance-default-external-api-0" Dec 11 14:07:35 crc kubenswrapper[5050]: I1211 14:07:35.854915 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5fdbfbc95f-5ml4z" Dec 11 14:07:35 crc kubenswrapper[5050]: I1211 14:07:35.861209 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 11 14:07:35 crc kubenswrapper[5050]: I1211 14:07:35.861791 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 11 14:07:35 crc kubenswrapper[5050]: I1211 14:07:35.871500 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 11 14:07:35 crc kubenswrapper[5050]: I1211 14:07:35.878448 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Dec 11 14:07:35 crc kubenswrapper[5050]: I1211 14:07:35.944187 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-zwtmr"] Dec 11 14:07:35 crc kubenswrapper[5050]: I1211 14:07:35.998204 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qrpwm\" (UniqueName: \"kubernetes.io/projected/cf5c6192-388a-487b-bf09-893e13347a2f-kube-api-access-qrpwm\") pod \"cf5c6192-388a-487b-bf09-893e13347a2f\" (UID: \"cf5c6192-388a-487b-bf09-893e13347a2f\") " Dec 11 14:07:35 crc kubenswrapper[5050]: I1211 14:07:35.998569 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cf5c6192-388a-487b-bf09-893e13347a2f-ovsdbserver-sb\") pod \"cf5c6192-388a-487b-bf09-893e13347a2f\" (UID: \"cf5c6192-388a-487b-bf09-893e13347a2f\") " Dec 11 14:07:35 crc kubenswrapper[5050]: I1211 14:07:35.998641 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cf5c6192-388a-487b-bf09-893e13347a2f-dns-svc\") pod \"cf5c6192-388a-487b-bf09-893e13347a2f\" (UID: \"cf5c6192-388a-487b-bf09-893e13347a2f\") " Dec 11 14:07:35 crc kubenswrapper[5050]: I1211 14:07:35.998794 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cf5c6192-388a-487b-bf09-893e13347a2f-dns-swift-storage-0\") pod \"cf5c6192-388a-487b-bf09-893e13347a2f\" (UID: \"cf5c6192-388a-487b-bf09-893e13347a2f\") " Dec 11 14:07:35 crc kubenswrapper[5050]: I1211 14:07:35.998887 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf5c6192-388a-487b-bf09-893e13347a2f-config\") pod \"cf5c6192-388a-487b-bf09-893e13347a2f\" (UID: \"cf5c6192-388a-487b-bf09-893e13347a2f\") " Dec 11 14:07:36 crc kubenswrapper[5050]: I1211 14:07:36.004073 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cf5c6192-388a-487b-bf09-893e13347a2f-ovsdbserver-nb\") pod \"cf5c6192-388a-487b-bf09-893e13347a2f\" (UID: \"cf5c6192-388a-487b-bf09-893e13347a2f\") " Dec 11 14:07:36 crc kubenswrapper[5050]: I1211 14:07:36.000688 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf5c6192-388a-487b-bf09-893e13347a2f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "cf5c6192-388a-487b-bf09-893e13347a2f" (UID: "cf5c6192-388a-487b-bf09-893e13347a2f"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 14:07:36 crc kubenswrapper[5050]: I1211 14:07:36.000937 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf5c6192-388a-487b-bf09-893e13347a2f-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "cf5c6192-388a-487b-bf09-893e13347a2f" (UID: "cf5c6192-388a-487b-bf09-893e13347a2f"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 14:07:36 crc kubenswrapper[5050]: I1211 14:07:36.001293 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf5c6192-388a-487b-bf09-893e13347a2f-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "cf5c6192-388a-487b-bf09-893e13347a2f" (UID: "cf5c6192-388a-487b-bf09-893e13347a2f"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 14:07:36 crc kubenswrapper[5050]: I1211 14:07:36.001509 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf5c6192-388a-487b-bf09-893e13347a2f-config" (OuterVolumeSpecName: "config") pod "cf5c6192-388a-487b-bf09-893e13347a2f" (UID: "cf5c6192-388a-487b-bf09-893e13347a2f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 14:07:36 crc kubenswrapper[5050]: I1211 14:07:36.004877 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cf5c6192-388a-487b-bf09-893e13347a2f-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "cf5c6192-388a-487b-bf09-893e13347a2f" (UID: "cf5c6192-388a-487b-bf09-893e13347a2f"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 14:07:36 crc kubenswrapper[5050]: I1211 14:07:36.005244 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28c13081-fdad-4d05-86c1-50bfd9745239-config-data\") pod \"glance-default-internal-api-0\" (UID: \"28c13081-fdad-4d05-86c1-50bfd9745239\") " pod="openstack/glance-default-internal-api-0" Dec 11 14:07:36 crc kubenswrapper[5050]: I1211 14:07:36.005401 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/28c13081-fdad-4d05-86c1-50bfd9745239-logs\") pod \"glance-default-internal-api-0\" (UID: \"28c13081-fdad-4d05-86c1-50bfd9745239\") " pod="openstack/glance-default-internal-api-0" Dec 11 14:07:36 crc kubenswrapper[5050]: I1211 14:07:36.005491 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/28c13081-fdad-4d05-86c1-50bfd9745239-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"28c13081-fdad-4d05-86c1-50bfd9745239\") " pod="openstack/glance-default-internal-api-0" Dec 11 14:07:36 crc kubenswrapper[5050]: I1211 14:07:36.005621 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"28c13081-fdad-4d05-86c1-50bfd9745239\") " pod="openstack/glance-default-internal-api-0" Dec 11 14:07:36 crc kubenswrapper[5050]: I1211 14:07:36.005789 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s87xx\" (UniqueName: \"kubernetes.io/projected/28c13081-fdad-4d05-86c1-50bfd9745239-kube-api-access-s87xx\") pod \"glance-default-internal-api-0\" (UID: \"28c13081-fdad-4d05-86c1-50bfd9745239\") " pod="openstack/glance-default-internal-api-0" Dec 11 14:07:36 crc kubenswrapper[5050]: I1211 14:07:36.005916 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28c13081-fdad-4d05-86c1-50bfd9745239-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"28c13081-fdad-4d05-86c1-50bfd9745239\") " pod="openstack/glance-default-internal-api-0" Dec 11 14:07:36 crc kubenswrapper[5050]: I1211 14:07:36.005980 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/28c13081-fdad-4d05-86c1-50bfd9745239-scripts\") pod \"glance-default-internal-api-0\" (UID: \"28c13081-fdad-4d05-86c1-50bfd9745239\") " pod="openstack/glance-default-internal-api-0" Dec 11 14:07:36 crc kubenswrapper[5050]: I1211 14:07:36.006119 5050 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cf5c6192-388a-487b-bf09-893e13347a2f-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 11 14:07:36 crc kubenswrapper[5050]: I1211 14:07:36.006185 5050 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cf5c6192-388a-487b-bf09-893e13347a2f-config\") on node \"crc\" DevicePath \"\"" Dec 11 14:07:36 crc kubenswrapper[5050]: I1211 14:07:36.006238 5050 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cf5c6192-388a-487b-bf09-893e13347a2f-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 11 14:07:36 crc kubenswrapper[5050]: I1211 14:07:36.006288 5050 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cf5c6192-388a-487b-bf09-893e13347a2f-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 11 14:07:36 crc kubenswrapper[5050]: I1211 14:07:36.006337 5050 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cf5c6192-388a-487b-bf09-893e13347a2f-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 11 14:07:36 crc kubenswrapper[5050]: I1211 14:07:36.034406 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf5c6192-388a-487b-bf09-893e13347a2f-kube-api-access-qrpwm" (OuterVolumeSpecName: "kube-api-access-qrpwm") pod "cf5c6192-388a-487b-bf09-893e13347a2f" (UID: "cf5c6192-388a-487b-bf09-893e13347a2f"). InnerVolumeSpecName "kube-api-access-qrpwm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 14:07:36 crc kubenswrapper[5050]: I1211 14:07:36.050577 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 11 14:07:36 crc kubenswrapper[5050]: I1211 14:07:36.109513 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s87xx\" (UniqueName: \"kubernetes.io/projected/28c13081-fdad-4d05-86c1-50bfd9745239-kube-api-access-s87xx\") pod \"glance-default-internal-api-0\" (UID: \"28c13081-fdad-4d05-86c1-50bfd9745239\") " pod="openstack/glance-default-internal-api-0" Dec 11 14:07:36 crc kubenswrapper[5050]: I1211 14:07:36.109592 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28c13081-fdad-4d05-86c1-50bfd9745239-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"28c13081-fdad-4d05-86c1-50bfd9745239\") " pod="openstack/glance-default-internal-api-0" Dec 11 14:07:36 crc kubenswrapper[5050]: I1211 14:07:36.109613 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/28c13081-fdad-4d05-86c1-50bfd9745239-scripts\") pod \"glance-default-internal-api-0\" (UID: \"28c13081-fdad-4d05-86c1-50bfd9745239\") " pod="openstack/glance-default-internal-api-0" Dec 11 14:07:36 crc kubenswrapper[5050]: I1211 14:07:36.109673 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28c13081-fdad-4d05-86c1-50bfd9745239-config-data\") pod \"glance-default-internal-api-0\" (UID: \"28c13081-fdad-4d05-86c1-50bfd9745239\") " pod="openstack/glance-default-internal-api-0" Dec 11 14:07:36 crc kubenswrapper[5050]: I1211 14:07:36.109714 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/28c13081-fdad-4d05-86c1-50bfd9745239-logs\") pod \"glance-default-internal-api-0\" (UID: \"28c13081-fdad-4d05-86c1-50bfd9745239\") " pod="openstack/glance-default-internal-api-0" Dec 11 14:07:36 crc kubenswrapper[5050]: I1211 14:07:36.109737 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/28c13081-fdad-4d05-86c1-50bfd9745239-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"28c13081-fdad-4d05-86c1-50bfd9745239\") " pod="openstack/glance-default-internal-api-0" Dec 11 14:07:36 crc kubenswrapper[5050]: I1211 14:07:36.109771 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"28c13081-fdad-4d05-86c1-50bfd9745239\") " pod="openstack/glance-default-internal-api-0" Dec 11 14:07:36 crc kubenswrapper[5050]: I1211 14:07:36.109843 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qrpwm\" (UniqueName: \"kubernetes.io/projected/cf5c6192-388a-487b-bf09-893e13347a2f-kube-api-access-qrpwm\") on node \"crc\" DevicePath \"\"" Dec 11 14:07:36 crc kubenswrapper[5050]: I1211 14:07:36.110284 5050 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"28c13081-fdad-4d05-86c1-50bfd9745239\") device mount path \"/mnt/openstack/pv12\"" 
pod="openstack/glance-default-internal-api-0" Dec 11 14:07:36 crc kubenswrapper[5050]: I1211 14:07:36.116914 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/28c13081-fdad-4d05-86c1-50bfd9745239-logs\") pod \"glance-default-internal-api-0\" (UID: \"28c13081-fdad-4d05-86c1-50bfd9745239\") " pod="openstack/glance-default-internal-api-0" Dec 11 14:07:36 crc kubenswrapper[5050]: I1211 14:07:36.117520 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/28c13081-fdad-4d05-86c1-50bfd9745239-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"28c13081-fdad-4d05-86c1-50bfd9745239\") " pod="openstack/glance-default-internal-api-0" Dec 11 14:07:36 crc kubenswrapper[5050]: I1211 14:07:36.119485 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/28c13081-fdad-4d05-86c1-50bfd9745239-scripts\") pod \"glance-default-internal-api-0\" (UID: \"28c13081-fdad-4d05-86c1-50bfd9745239\") " pod="openstack/glance-default-internal-api-0" Dec 11 14:07:36 crc kubenswrapper[5050]: I1211 14:07:36.136095 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-6vzzr"] Dec 11 14:07:36 crc kubenswrapper[5050]: I1211 14:07:36.165344 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28c13081-fdad-4d05-86c1-50bfd9745239-config-data\") pod \"glance-default-internal-api-0\" (UID: \"28c13081-fdad-4d05-86c1-50bfd9745239\") " pod="openstack/glance-default-internal-api-0" Dec 11 14:07:36 crc kubenswrapper[5050]: I1211 14:07:36.182728 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28c13081-fdad-4d05-86c1-50bfd9745239-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"28c13081-fdad-4d05-86c1-50bfd9745239\") " pod="openstack/glance-default-internal-api-0" Dec 11 14:07:36 crc kubenswrapper[5050]: I1211 14:07:36.206188 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s87xx\" (UniqueName: \"kubernetes.io/projected/28c13081-fdad-4d05-86c1-50bfd9745239-kube-api-access-s87xx\") pod \"glance-default-internal-api-0\" (UID: \"28c13081-fdad-4d05-86c1-50bfd9745239\") " pod="openstack/glance-default-internal-api-0" Dec 11 14:07:36 crc kubenswrapper[5050]: I1211 14:07:36.240492 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"28c13081-fdad-4d05-86c1-50bfd9745239\") " pod="openstack/glance-default-internal-api-0" Dec 11 14:07:36 crc kubenswrapper[5050]: I1211 14:07:36.331260 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6f6f8cb849-rvxcv"] Dec 11 14:07:36 crc kubenswrapper[5050]: I1211 14:07:36.344392 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 11 14:07:36 crc kubenswrapper[5050]: I1211 14:07:36.506813 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 11 14:07:36 crc kubenswrapper[5050]: I1211 14:07:36.596287 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-767d96458c-2vdb8" Dec 11 14:07:36 crc kubenswrapper[5050]: I1211 14:07:36.673855 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 11 14:07:36 crc kubenswrapper[5050]: I1211 14:07:36.743893 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b06a8cfc-cccf-48d2-b354-91fc8b4cd4bd-ovsdbserver-sb\") pod \"b06a8cfc-cccf-48d2-b354-91fc8b4cd4bd\" (UID: \"b06a8cfc-cccf-48d2-b354-91fc8b4cd4bd\") " Dec 11 14:07:36 crc kubenswrapper[5050]: I1211 14:07:36.744037 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b06a8cfc-cccf-48d2-b354-91fc8b4cd4bd-ovsdbserver-nb\") pod \"b06a8cfc-cccf-48d2-b354-91fc8b4cd4bd\" (UID: \"b06a8cfc-cccf-48d2-b354-91fc8b4cd4bd\") " Dec 11 14:07:36 crc kubenswrapper[5050]: I1211 14:07:36.744139 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b06a8cfc-cccf-48d2-b354-91fc8b4cd4bd-dns-svc\") pod \"b06a8cfc-cccf-48d2-b354-91fc8b4cd4bd\" (UID: \"b06a8cfc-cccf-48d2-b354-91fc8b4cd4bd\") " Dec 11 14:07:36 crc kubenswrapper[5050]: I1211 14:07:36.744226 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b06a8cfc-cccf-48d2-b354-91fc8b4cd4bd-config\") pod \"b06a8cfc-cccf-48d2-b354-91fc8b4cd4bd\" (UID: \"b06a8cfc-cccf-48d2-b354-91fc8b4cd4bd\") " Dec 11 14:07:36 crc kubenswrapper[5050]: I1211 14:07:36.744324 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h6tn4\" (UniqueName: \"kubernetes.io/projected/b06a8cfc-cccf-48d2-b354-91fc8b4cd4bd-kube-api-access-h6tn4\") pod \"b06a8cfc-cccf-48d2-b354-91fc8b4cd4bd\" (UID: \"b06a8cfc-cccf-48d2-b354-91fc8b4cd4bd\") " Dec 11 14:07:36 crc kubenswrapper[5050]: I1211 14:07:36.744382 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b06a8cfc-cccf-48d2-b354-91fc8b4cd4bd-dns-swift-storage-0\") pod \"b06a8cfc-cccf-48d2-b354-91fc8b4cd4bd\" (UID: \"b06a8cfc-cccf-48d2-b354-91fc8b4cd4bd\") " Dec 11 14:07:36 crc kubenswrapper[5050]: I1211 14:07:36.796192 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 11 14:07:36 crc kubenswrapper[5050]: I1211 14:07:36.802298 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b06a8cfc-cccf-48d2-b354-91fc8b4cd4bd-kube-api-access-h6tn4" (OuterVolumeSpecName: "kube-api-access-h6tn4") pod "b06a8cfc-cccf-48d2-b354-91fc8b4cd4bd" (UID: "b06a8cfc-cccf-48d2-b354-91fc8b4cd4bd"). InnerVolumeSpecName "kube-api-access-h6tn4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 14:07:36 crc kubenswrapper[5050]: I1211 14:07:36.829124 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b06a8cfc-cccf-48d2-b354-91fc8b4cd4bd-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "b06a8cfc-cccf-48d2-b354-91fc8b4cd4bd" (UID: "b06a8cfc-cccf-48d2-b354-91fc8b4cd4bd"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 14:07:36 crc kubenswrapper[5050]: I1211 14:07:36.849416 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h6tn4\" (UniqueName: \"kubernetes.io/projected/b06a8cfc-cccf-48d2-b354-91fc8b4cd4bd-kube-api-access-h6tn4\") on node \"crc\" DevicePath \"\"" Dec 11 14:07:36 crc kubenswrapper[5050]: I1211 14:07:36.849446 5050 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b06a8cfc-cccf-48d2-b354-91fc8b4cd4bd-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 11 14:07:36 crc kubenswrapper[5050]: I1211 14:07:36.870248 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b06a8cfc-cccf-48d2-b354-91fc8b4cd4bd-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "b06a8cfc-cccf-48d2-b354-91fc8b4cd4bd" (UID: "b06a8cfc-cccf-48d2-b354-91fc8b4cd4bd"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 14:07:36 crc kubenswrapper[5050]: I1211 14:07:36.879928 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b06a8cfc-cccf-48d2-b354-91fc8b4cd4bd-config" (OuterVolumeSpecName: "config") pod "b06a8cfc-cccf-48d2-b354-91fc8b4cd4bd" (UID: "b06a8cfc-cccf-48d2-b354-91fc8b4cd4bd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 14:07:36 crc kubenswrapper[5050]: I1211 14:07:36.909189 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 11 14:07:36 crc kubenswrapper[5050]: I1211 14:07:36.922295 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 11 14:07:36 crc kubenswrapper[5050]: I1211 14:07:36.925664 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b06a8cfc-cccf-48d2-b354-91fc8b4cd4bd-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "b06a8cfc-cccf-48d2-b354-91fc8b4cd4bd" (UID: "b06a8cfc-cccf-48d2-b354-91fc8b4cd4bd"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 14:07:36 crc kubenswrapper[5050]: I1211 14:07:36.928277 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b06a8cfc-cccf-48d2-b354-91fc8b4cd4bd-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b06a8cfc-cccf-48d2-b354-91fc8b4cd4bd" (UID: "b06a8cfc-cccf-48d2-b354-91fc8b4cd4bd"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 14:07:36 crc kubenswrapper[5050]: I1211 14:07:36.956604 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-rdbqg" event={"ID":"aff61f93-f202-4057-a14e-7b395a73e323","Type":"ContainerStarted","Data":"a1fbc95eb3b3987970a436f95d6c365fceacbe66e3d72a6ce3cf2ff678c4bb9f"} Dec 11 14:07:36 crc kubenswrapper[5050]: I1211 14:07:36.967206 5050 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b06a8cfc-cccf-48d2-b354-91fc8b4cd4bd-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 11 14:07:36 crc kubenswrapper[5050]: I1211 14:07:36.967269 5050 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b06a8cfc-cccf-48d2-b354-91fc8b4cd4bd-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 11 14:07:36 crc kubenswrapper[5050]: I1211 14:07:36.967326 5050 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b06a8cfc-cccf-48d2-b354-91fc8b4cd4bd-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 11 14:07:36 crc kubenswrapper[5050]: I1211 14:07:36.967695 5050 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b06a8cfc-cccf-48d2-b354-91fc8b4cd4bd-config\") on node \"crc\" DevicePath \"\"" Dec 11 14:07:36 crc kubenswrapper[5050]: I1211 14:07:36.985474 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-6vzzr" event={"ID":"107786ed-ea8f-4c2f-ac86-54b1bb504a69","Type":"ContainerStarted","Data":"5a508779bbb983a2c769b51c0e588de7c8e3373254ef2dd4b2dbac4a0a8f525b"} Dec 11 14:07:36 crc kubenswrapper[5050]: I1211 14:07:36.998208 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"29144864-8034-4f1c-a270-3fa278d3a4c5","Type":"ContainerStarted","Data":"aa07007837b2febca47a8b66d647f06308b3770f7d4e2ae6dd3e89cb32f533b8"} Dec 11 14:07:36 crc kubenswrapper[5050]: I1211 14:07:36.999470 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"49e25755-0205-47fb-a88c-2f7a3291a687","Type":"ContainerStarted","Data":"3b15d68432c00d3d1c2be73a44464075776850bd495545f7ff3ff7265a20be3f"} Dec 11 14:07:37 crc kubenswrapper[5050]: I1211 14:07:37.013680 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-rdbqg" podStartSLOduration=3.013646936 podStartE2EDuration="3.013646936s" podCreationTimestamp="2025-12-11 14:07:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 14:07:37.00450884 +0000 UTC m=+1147.848231426" watchObservedRunningTime="2025-12-11 14:07:37.013646936 +0000 UTC m=+1147.857369522" Dec 11 14:07:37 crc kubenswrapper[5050]: I1211 14:07:37.015661 5050 generic.go:334] "Generic (PLEG): container finished" podID="220e4ed5-e988-428e-b186-7a4231311831" containerID="5a4b5f983fdbecf512731820ec94e59c7f16b943e0633a4e8536e9e64f2b792f" exitCode=0 Dec 11 14:07:37 crc kubenswrapper[5050]: I1211 14:07:37.015773 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f6f8cb849-rvxcv" event={"ID":"220e4ed5-e988-428e-b186-7a4231311831","Type":"ContainerDied","Data":"5a4b5f983fdbecf512731820ec94e59c7f16b943e0633a4e8536e9e64f2b792f"} Dec 11 14:07:37 crc kubenswrapper[5050]: I1211 14:07:37.015814 5050 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/dnsmasq-dns-6f6f8cb849-rvxcv" event={"ID":"220e4ed5-e988-428e-b186-7a4231311831","Type":"ContainerStarted","Data":"fe7de1dece40429ea37cfacf13e4dbf438717340ffc11ecabbae16e6154dba0d"} Dec 11 14:07:37 crc kubenswrapper[5050]: I1211 14:07:37.025775 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-cvstt" event={"ID":"295ba8ac-0609-4021-920b-3ab943ea21ad","Type":"ContainerStarted","Data":"9c0ae62c16c8df7398252c5d9b6936e3fe54053471442f1e95b49a66210f7004"} Dec 11 14:07:37 crc kubenswrapper[5050]: I1211 14:07:37.030850 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-zwtmr" event={"ID":"f1d16bf5-88f5-4ec7-943a-fc1ec7c15425","Type":"ContainerStarted","Data":"9e14aa931615f338ad584d55b33ab1930879c99690332bf2e69e014270736d2e"} Dec 11 14:07:37 crc kubenswrapper[5050]: I1211 14:07:37.070565 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5fdbfbc95f-5ml4z" Dec 11 14:07:37 crc kubenswrapper[5050]: I1211 14:07:37.070659 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-767d96458c-2vdb8" Dec 11 14:07:37 crc kubenswrapper[5050]: I1211 14:07:37.070742 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-767d96458c-2vdb8" event={"ID":"b06a8cfc-cccf-48d2-b354-91fc8b4cd4bd","Type":"ContainerDied","Data":"f498f5174bd7e13f53990c95b1ffd3f90f83497a2a7a7b906cb1901b110b2697"} Dec 11 14:07:37 crc kubenswrapper[5050]: I1211 14:07:37.119040 5050 scope.go:117] "RemoveContainer" containerID="883a55eabbd16e8ce34faa0b68e2973a310cf413e0685b2c298d47d6c2825518" Dec 11 14:07:37 crc kubenswrapper[5050]: I1211 14:07:37.178077 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-cvstt" podStartSLOduration=4.178039898 podStartE2EDuration="4.178039898s" podCreationTimestamp="2025-12-11 14:07:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 14:07:37.134890635 +0000 UTC m=+1147.978613221" watchObservedRunningTime="2025-12-11 14:07:37.178039898 +0000 UTC m=+1148.021762484" Dec 11 14:07:37 crc kubenswrapper[5050]: I1211 14:07:37.294507 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5fdbfbc95f-5ml4z"] Dec 11 14:07:37 crc kubenswrapper[5050]: I1211 14:07:37.331028 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5fdbfbc95f-5ml4z"] Dec 11 14:07:37 crc kubenswrapper[5050]: I1211 14:07:37.363059 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 11 14:07:37 crc kubenswrapper[5050]: I1211 14:07:37.398421 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-767d96458c-2vdb8"] Dec 11 14:07:37 crc kubenswrapper[5050]: I1211 14:07:37.417469 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-767d96458c-2vdb8"] Dec 11 14:07:37 crc kubenswrapper[5050]: I1211 14:07:37.565289 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b06a8cfc-cccf-48d2-b354-91fc8b4cd4bd" path="/var/lib/kubelet/pods/b06a8cfc-cccf-48d2-b354-91fc8b4cd4bd/volumes" Dec 11 14:07:37 crc kubenswrapper[5050]: I1211 14:07:37.565905 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf5c6192-388a-487b-bf09-893e13347a2f" 
path="/var/lib/kubelet/pods/cf5c6192-388a-487b-bf09-893e13347a2f/volumes" Dec 11 14:07:38 crc kubenswrapper[5050]: I1211 14:07:38.132877 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"28c13081-fdad-4d05-86c1-50bfd9745239","Type":"ContainerStarted","Data":"918b10805c2c41ccf18baf8c93bea1b8dfafb6c5bf9a755d2ab4e89f7d3e8e71"} Dec 11 14:07:38 crc kubenswrapper[5050]: I1211 14:07:38.169985 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f6f8cb849-rvxcv" event={"ID":"220e4ed5-e988-428e-b186-7a4231311831","Type":"ContainerStarted","Data":"6f9318782ca9007f6ad48d70d0b16e91ab76be1db28d346a1199392c0864541f"} Dec 11 14:07:38 crc kubenswrapper[5050]: I1211 14:07:38.170354 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6f6f8cb849-rvxcv" Dec 11 14:07:38 crc kubenswrapper[5050]: I1211 14:07:38.199183 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6f6f8cb849-rvxcv" podStartSLOduration=4.199153449 podStartE2EDuration="4.199153449s" podCreationTimestamp="2025-12-11 14:07:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 14:07:38.194898254 +0000 UTC m=+1149.038620840" watchObservedRunningTime="2025-12-11 14:07:38.199153449 +0000 UTC m=+1149.042876035" Dec 11 14:07:39 crc kubenswrapper[5050]: I1211 14:07:39.191936 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"28c13081-fdad-4d05-86c1-50bfd9745239","Type":"ContainerStarted","Data":"d17ce11b3a577fe1f99ed43c3b6239570256f4f6ffc04bae15d0cb648cfe9762"} Dec 11 14:07:39 crc kubenswrapper[5050]: I1211 14:07:39.196712 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="29144864-8034-4f1c-a270-3fa278d3a4c5" containerName="glance-log" containerID="cri-o://c57629ecbce0006535ae0214131185c2b057c9fee32a3f999689b8e1d1e66e33" gracePeriod=30 Dec 11 14:07:39 crc kubenswrapper[5050]: I1211 14:07:39.197107 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"29144864-8034-4f1c-a270-3fa278d3a4c5","Type":"ContainerStarted","Data":"c57629ecbce0006535ae0214131185c2b057c9fee32a3f999689b8e1d1e66e33"} Dec 11 14:07:39 crc kubenswrapper[5050]: I1211 14:07:39.197143 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"29144864-8034-4f1c-a270-3fa278d3a4c5","Type":"ContainerStarted","Data":"5122c99446f10696d16778c42f47ac3e6025c19538fcce0301edfc8643b0c5e9"} Dec 11 14:07:39 crc kubenswrapper[5050]: I1211 14:07:39.197482 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="29144864-8034-4f1c-a270-3fa278d3a4c5" containerName="glance-httpd" containerID="cri-o://5122c99446f10696d16778c42f47ac3e6025c19538fcce0301edfc8643b0c5e9" gracePeriod=30 Dec 11 14:07:39 crc kubenswrapper[5050]: I1211 14:07:39.228806 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=5.228756709 podStartE2EDuration="5.228756709s" podCreationTimestamp="2025-12-11 14:07:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-12-11 14:07:39.217349251 +0000 UTC m=+1150.061071827" watchObservedRunningTime="2025-12-11 14:07:39.228756709 +0000 UTC m=+1150.072479315" Dec 11 14:07:40 crc kubenswrapper[5050]: I1211 14:07:40.228140 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"28c13081-fdad-4d05-86c1-50bfd9745239","Type":"ContainerStarted","Data":"0fb1b5259f3b526f668e3021f74203bb87f10713c1686a64348b93ee9334f000"} Dec 11 14:07:40 crc kubenswrapper[5050]: I1211 14:07:40.228292 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="28c13081-fdad-4d05-86c1-50bfd9745239" containerName="glance-log" containerID="cri-o://d17ce11b3a577fe1f99ed43c3b6239570256f4f6ffc04bae15d0cb648cfe9762" gracePeriod=30 Dec 11 14:07:40 crc kubenswrapper[5050]: I1211 14:07:40.228791 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="28c13081-fdad-4d05-86c1-50bfd9745239" containerName="glance-httpd" containerID="cri-o://0fb1b5259f3b526f668e3021f74203bb87f10713c1686a64348b93ee9334f000" gracePeriod=30 Dec 11 14:07:40 crc kubenswrapper[5050]: I1211 14:07:40.232949 5050 generic.go:334] "Generic (PLEG): container finished" podID="29144864-8034-4f1c-a270-3fa278d3a4c5" containerID="5122c99446f10696d16778c42f47ac3e6025c19538fcce0301edfc8643b0c5e9" exitCode=143 Dec 11 14:07:40 crc kubenswrapper[5050]: I1211 14:07:40.232988 5050 generic.go:334] "Generic (PLEG): container finished" podID="29144864-8034-4f1c-a270-3fa278d3a4c5" containerID="c57629ecbce0006535ae0214131185c2b057c9fee32a3f999689b8e1d1e66e33" exitCode=143 Dec 11 14:07:40 crc kubenswrapper[5050]: I1211 14:07:40.233471 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"29144864-8034-4f1c-a270-3fa278d3a4c5","Type":"ContainerDied","Data":"5122c99446f10696d16778c42f47ac3e6025c19538fcce0301edfc8643b0c5e9"} Dec 11 14:07:40 crc kubenswrapper[5050]: I1211 14:07:40.233532 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"29144864-8034-4f1c-a270-3fa278d3a4c5","Type":"ContainerDied","Data":"c57629ecbce0006535ae0214131185c2b057c9fee32a3f999689b8e1d1e66e33"} Dec 11 14:07:40 crc kubenswrapper[5050]: I1211 14:07:40.258597 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=6.258557953 podStartE2EDuration="6.258557953s" podCreationTimestamp="2025-12-11 14:07:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 14:07:40.25474244 +0000 UTC m=+1151.098465036" watchObservedRunningTime="2025-12-11 14:07:40.258557953 +0000 UTC m=+1151.102280559" Dec 11 14:07:41 crc kubenswrapper[5050]: I1211 14:07:41.251556 5050 generic.go:334] "Generic (PLEG): container finished" podID="28c13081-fdad-4d05-86c1-50bfd9745239" containerID="0fb1b5259f3b526f668e3021f74203bb87f10713c1686a64348b93ee9334f000" exitCode=0 Dec 11 14:07:41 crc kubenswrapper[5050]: I1211 14:07:41.252305 5050 generic.go:334] "Generic (PLEG): container finished" podID="28c13081-fdad-4d05-86c1-50bfd9745239" containerID="d17ce11b3a577fe1f99ed43c3b6239570256f4f6ffc04bae15d0cb648cfe9762" exitCode=143 Dec 11 14:07:41 crc kubenswrapper[5050]: I1211 14:07:41.251830 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/glance-default-internal-api-0" event={"ID":"28c13081-fdad-4d05-86c1-50bfd9745239","Type":"ContainerDied","Data":"0fb1b5259f3b526f668e3021f74203bb87f10713c1686a64348b93ee9334f000"} Dec 11 14:07:41 crc kubenswrapper[5050]: I1211 14:07:41.252438 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"28c13081-fdad-4d05-86c1-50bfd9745239","Type":"ContainerDied","Data":"d17ce11b3a577fe1f99ed43c3b6239570256f4f6ffc04bae15d0cb648cfe9762"} Dec 11 14:07:42 crc kubenswrapper[5050]: I1211 14:07:42.277853 5050 generic.go:334] "Generic (PLEG): container finished" podID="295ba8ac-0609-4021-920b-3ab943ea21ad" containerID="9c0ae62c16c8df7398252c5d9b6936e3fe54053471442f1e95b49a66210f7004" exitCode=0 Dec 11 14:07:42 crc kubenswrapper[5050]: I1211 14:07:42.278047 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-cvstt" event={"ID":"295ba8ac-0609-4021-920b-3ab943ea21ad","Type":"ContainerDied","Data":"9c0ae62c16c8df7398252c5d9b6936e3fe54053471442f1e95b49a66210f7004"} Dec 11 14:07:45 crc kubenswrapper[5050]: I1211 14:07:45.159432 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6f6f8cb849-rvxcv" Dec 11 14:07:45 crc kubenswrapper[5050]: I1211 14:07:45.249219 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8db84466c-xqjfw"] Dec 11 14:07:45 crc kubenswrapper[5050]: I1211 14:07:45.249909 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8db84466c-xqjfw" podUID="38fac7eb-e076-4c58-9d8e-961461e27f92" containerName="dnsmasq-dns" containerID="cri-o://ee297d5093f29c9f272b39eb275be212b9c71255851c9af6568a077917a24a37" gracePeriod=10 Dec 11 14:07:45 crc kubenswrapper[5050]: I1211 14:07:45.953636 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-8db84466c-xqjfw" podUID="38fac7eb-e076-4c58-9d8e-961461e27f92" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.127:5353: connect: connection refused" Dec 11 14:07:45 crc kubenswrapper[5050]: I1211 14:07:45.960833 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 11 14:07:46 crc kubenswrapper[5050]: I1211 14:07:46.130033 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29144864-8034-4f1c-a270-3fa278d3a4c5-config-data\") pod \"29144864-8034-4f1c-a270-3fa278d3a4c5\" (UID: \"29144864-8034-4f1c-a270-3fa278d3a4c5\") " Dec 11 14:07:46 crc kubenswrapper[5050]: I1211 14:07:46.130173 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c86w5\" (UniqueName: \"kubernetes.io/projected/29144864-8034-4f1c-a270-3fa278d3a4c5-kube-api-access-c86w5\") pod \"29144864-8034-4f1c-a270-3fa278d3a4c5\" (UID: \"29144864-8034-4f1c-a270-3fa278d3a4c5\") " Dec 11 14:07:46 crc kubenswrapper[5050]: I1211 14:07:46.130261 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"29144864-8034-4f1c-a270-3fa278d3a4c5\" (UID: \"29144864-8034-4f1c-a270-3fa278d3a4c5\") " Dec 11 14:07:46 crc kubenswrapper[5050]: I1211 14:07:46.130414 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/29144864-8034-4f1c-a270-3fa278d3a4c5-logs\") pod \"29144864-8034-4f1c-a270-3fa278d3a4c5\" (UID: \"29144864-8034-4f1c-a270-3fa278d3a4c5\") " Dec 11 14:07:46 crc kubenswrapper[5050]: I1211 14:07:46.130524 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/29144864-8034-4f1c-a270-3fa278d3a4c5-scripts\") pod \"29144864-8034-4f1c-a270-3fa278d3a4c5\" (UID: \"29144864-8034-4f1c-a270-3fa278d3a4c5\") " Dec 11 14:07:46 crc kubenswrapper[5050]: I1211 14:07:46.130583 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/29144864-8034-4f1c-a270-3fa278d3a4c5-httpd-run\") pod \"29144864-8034-4f1c-a270-3fa278d3a4c5\" (UID: \"29144864-8034-4f1c-a270-3fa278d3a4c5\") " Dec 11 14:07:46 crc kubenswrapper[5050]: I1211 14:07:46.130764 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29144864-8034-4f1c-a270-3fa278d3a4c5-combined-ca-bundle\") pod \"29144864-8034-4f1c-a270-3fa278d3a4c5\" (UID: \"29144864-8034-4f1c-a270-3fa278d3a4c5\") " Dec 11 14:07:46 crc kubenswrapper[5050]: I1211 14:07:46.131675 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/29144864-8034-4f1c-a270-3fa278d3a4c5-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "29144864-8034-4f1c-a270-3fa278d3a4c5" (UID: "29144864-8034-4f1c-a270-3fa278d3a4c5"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 14:07:46 crc kubenswrapper[5050]: I1211 14:07:46.131781 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/29144864-8034-4f1c-a270-3fa278d3a4c5-logs" (OuterVolumeSpecName: "logs") pod "29144864-8034-4f1c-a270-3fa278d3a4c5" (UID: "29144864-8034-4f1c-a270-3fa278d3a4c5"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 14:07:46 crc kubenswrapper[5050]: I1211 14:07:46.134191 5050 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/29144864-8034-4f1c-a270-3fa278d3a4c5-logs\") on node \"crc\" DevicePath \"\"" Dec 11 14:07:46 crc kubenswrapper[5050]: I1211 14:07:46.134707 5050 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/29144864-8034-4f1c-a270-3fa278d3a4c5-httpd-run\") on node \"crc\" DevicePath \"\"" Dec 11 14:07:46 crc kubenswrapper[5050]: I1211 14:07:46.140609 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29144864-8034-4f1c-a270-3fa278d3a4c5-kube-api-access-c86w5" (OuterVolumeSpecName: "kube-api-access-c86w5") pod "29144864-8034-4f1c-a270-3fa278d3a4c5" (UID: "29144864-8034-4f1c-a270-3fa278d3a4c5"). InnerVolumeSpecName "kube-api-access-c86w5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 14:07:46 crc kubenswrapper[5050]: I1211 14:07:46.141445 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "glance") pod "29144864-8034-4f1c-a270-3fa278d3a4c5" (UID: "29144864-8034-4f1c-a270-3fa278d3a4c5"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 11 14:07:46 crc kubenswrapper[5050]: I1211 14:07:46.145491 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29144864-8034-4f1c-a270-3fa278d3a4c5-scripts" (OuterVolumeSpecName: "scripts") pod "29144864-8034-4f1c-a270-3fa278d3a4c5" (UID: "29144864-8034-4f1c-a270-3fa278d3a4c5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:07:46 crc kubenswrapper[5050]: I1211 14:07:46.160665 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29144864-8034-4f1c-a270-3fa278d3a4c5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "29144864-8034-4f1c-a270-3fa278d3a4c5" (UID: "29144864-8034-4f1c-a270-3fa278d3a4c5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:07:46 crc kubenswrapper[5050]: I1211 14:07:46.209067 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29144864-8034-4f1c-a270-3fa278d3a4c5-config-data" (OuterVolumeSpecName: "config-data") pod "29144864-8034-4f1c-a270-3fa278d3a4c5" (UID: "29144864-8034-4f1c-a270-3fa278d3a4c5"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:07:46 crc kubenswrapper[5050]: I1211 14:07:46.236492 5050 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/29144864-8034-4f1c-a270-3fa278d3a4c5-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 14:07:46 crc kubenswrapper[5050]: I1211 14:07:46.236539 5050 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29144864-8034-4f1c-a270-3fa278d3a4c5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 14:07:46 crc kubenswrapper[5050]: I1211 14:07:46.236553 5050 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29144864-8034-4f1c-a270-3fa278d3a4c5-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 14:07:46 crc kubenswrapper[5050]: I1211 14:07:46.236567 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c86w5\" (UniqueName: \"kubernetes.io/projected/29144864-8034-4f1c-a270-3fa278d3a4c5-kube-api-access-c86w5\") on node \"crc\" DevicePath \"\"" Dec 11 14:07:46 crc kubenswrapper[5050]: I1211 14:07:46.236619 5050 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Dec 11 14:07:46 crc kubenswrapper[5050]: I1211 14:07:46.261268 5050 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Dec 11 14:07:46 crc kubenswrapper[5050]: I1211 14:07:46.338168 5050 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Dec 11 14:07:46 crc kubenswrapper[5050]: I1211 14:07:46.347157 5050 generic.go:334] "Generic (PLEG): container finished" podID="38fac7eb-e076-4c58-9d8e-961461e27f92" containerID="ee297d5093f29c9f272b39eb275be212b9c71255851c9af6568a077917a24a37" exitCode=0 Dec 11 14:07:46 crc kubenswrapper[5050]: I1211 14:07:46.347254 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8db84466c-xqjfw" event={"ID":"38fac7eb-e076-4c58-9d8e-961461e27f92","Type":"ContainerDied","Data":"ee297d5093f29c9f272b39eb275be212b9c71255851c9af6568a077917a24a37"} Dec 11 14:07:46 crc kubenswrapper[5050]: I1211 14:07:46.350630 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"29144864-8034-4f1c-a270-3fa278d3a4c5","Type":"ContainerDied","Data":"aa07007837b2febca47a8b66d647f06308b3770f7d4e2ae6dd3e89cb32f533b8"} Dec 11 14:07:46 crc kubenswrapper[5050]: I1211 14:07:46.350694 5050 scope.go:117] "RemoveContainer" containerID="5122c99446f10696d16778c42f47ac3e6025c19538fcce0301edfc8643b0c5e9" Dec 11 14:07:46 crc kubenswrapper[5050]: I1211 14:07:46.350877 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 11 14:07:46 crc kubenswrapper[5050]: I1211 14:07:46.394699 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 11 14:07:46 crc kubenswrapper[5050]: I1211 14:07:46.413759 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 11 14:07:46 crc kubenswrapper[5050]: I1211 14:07:46.430821 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Dec 11 14:07:46 crc kubenswrapper[5050]: E1211 14:07:46.431414 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b06a8cfc-cccf-48d2-b354-91fc8b4cd4bd" containerName="init" Dec 11 14:07:46 crc kubenswrapper[5050]: I1211 14:07:46.431444 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="b06a8cfc-cccf-48d2-b354-91fc8b4cd4bd" containerName="init" Dec 11 14:07:46 crc kubenswrapper[5050]: E1211 14:07:46.431473 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29144864-8034-4f1c-a270-3fa278d3a4c5" containerName="glance-log" Dec 11 14:07:46 crc kubenswrapper[5050]: I1211 14:07:46.431486 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="29144864-8034-4f1c-a270-3fa278d3a4c5" containerName="glance-log" Dec 11 14:07:46 crc kubenswrapper[5050]: E1211 14:07:46.431525 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29144864-8034-4f1c-a270-3fa278d3a4c5" containerName="glance-httpd" Dec 11 14:07:46 crc kubenswrapper[5050]: I1211 14:07:46.431534 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="29144864-8034-4f1c-a270-3fa278d3a4c5" containerName="glance-httpd" Dec 11 14:07:46 crc kubenswrapper[5050]: I1211 14:07:46.431755 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="b06a8cfc-cccf-48d2-b354-91fc8b4cd4bd" containerName="init" Dec 11 14:07:46 crc kubenswrapper[5050]: I1211 14:07:46.431779 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="29144864-8034-4f1c-a270-3fa278d3a4c5" containerName="glance-httpd" Dec 11 14:07:46 crc kubenswrapper[5050]: I1211 14:07:46.431800 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="29144864-8034-4f1c-a270-3fa278d3a4c5" containerName="glance-log" Dec 11 14:07:46 crc kubenswrapper[5050]: I1211 14:07:46.435993 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 11 14:07:46 crc kubenswrapper[5050]: I1211 14:07:46.439177 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Dec 11 14:07:46 crc kubenswrapper[5050]: I1211 14:07:46.439641 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Dec 11 14:07:46 crc kubenswrapper[5050]: I1211 14:07:46.458950 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 11 14:07:46 crc kubenswrapper[5050]: I1211 14:07:46.545935 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e61388f2-3282-4387-a850-9b4bffbf0e2b-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"e61388f2-3282-4387-a850-9b4bffbf0e2b\") " pod="openstack/glance-default-external-api-0" Dec 11 14:07:46 crc kubenswrapper[5050]: I1211 14:07:46.546063 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vpw78\" (UniqueName: \"kubernetes.io/projected/e61388f2-3282-4387-a850-9b4bffbf0e2b-kube-api-access-vpw78\") pod \"glance-default-external-api-0\" (UID: \"e61388f2-3282-4387-a850-9b4bffbf0e2b\") " pod="openstack/glance-default-external-api-0" Dec 11 14:07:46 crc kubenswrapper[5050]: I1211 14:07:46.546129 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e61388f2-3282-4387-a850-9b4bffbf0e2b-logs\") pod \"glance-default-external-api-0\" (UID: \"e61388f2-3282-4387-a850-9b4bffbf0e2b\") " pod="openstack/glance-default-external-api-0" Dec 11 14:07:46 crc kubenswrapper[5050]: I1211 14:07:46.546182 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e61388f2-3282-4387-a850-9b4bffbf0e2b-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"e61388f2-3282-4387-a850-9b4bffbf0e2b\") " pod="openstack/glance-default-external-api-0" Dec 11 14:07:46 crc kubenswrapper[5050]: I1211 14:07:46.546228 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e61388f2-3282-4387-a850-9b4bffbf0e2b-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"e61388f2-3282-4387-a850-9b4bffbf0e2b\") " pod="openstack/glance-default-external-api-0" Dec 11 14:07:46 crc kubenswrapper[5050]: I1211 14:07:46.546259 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e61388f2-3282-4387-a850-9b4bffbf0e2b-config-data\") pod \"glance-default-external-api-0\" (UID: \"e61388f2-3282-4387-a850-9b4bffbf0e2b\") " pod="openstack/glance-default-external-api-0" Dec 11 14:07:46 crc kubenswrapper[5050]: I1211 14:07:46.546289 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"e61388f2-3282-4387-a850-9b4bffbf0e2b\") " pod="openstack/glance-default-external-api-0" Dec 11 14:07:46 crc kubenswrapper[5050]: I1211 14:07:46.546346 5050 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e61388f2-3282-4387-a850-9b4bffbf0e2b-scripts\") pod \"glance-default-external-api-0\" (UID: \"e61388f2-3282-4387-a850-9b4bffbf0e2b\") " pod="openstack/glance-default-external-api-0" Dec 11 14:07:46 crc kubenswrapper[5050]: I1211 14:07:46.648578 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e61388f2-3282-4387-a850-9b4bffbf0e2b-logs\") pod \"glance-default-external-api-0\" (UID: \"e61388f2-3282-4387-a850-9b4bffbf0e2b\") " pod="openstack/glance-default-external-api-0" Dec 11 14:07:46 crc kubenswrapper[5050]: I1211 14:07:46.648706 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e61388f2-3282-4387-a850-9b4bffbf0e2b-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"e61388f2-3282-4387-a850-9b4bffbf0e2b\") " pod="openstack/glance-default-external-api-0" Dec 11 14:07:46 crc kubenswrapper[5050]: I1211 14:07:46.648775 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e61388f2-3282-4387-a850-9b4bffbf0e2b-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"e61388f2-3282-4387-a850-9b4bffbf0e2b\") " pod="openstack/glance-default-external-api-0" Dec 11 14:07:46 crc kubenswrapper[5050]: I1211 14:07:46.648806 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e61388f2-3282-4387-a850-9b4bffbf0e2b-config-data\") pod \"glance-default-external-api-0\" (UID: \"e61388f2-3282-4387-a850-9b4bffbf0e2b\") " pod="openstack/glance-default-external-api-0" Dec 11 14:07:46 crc kubenswrapper[5050]: I1211 14:07:46.648828 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"e61388f2-3282-4387-a850-9b4bffbf0e2b\") " pod="openstack/glance-default-external-api-0" Dec 11 14:07:46 crc kubenswrapper[5050]: I1211 14:07:46.648871 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e61388f2-3282-4387-a850-9b4bffbf0e2b-scripts\") pod \"glance-default-external-api-0\" (UID: \"e61388f2-3282-4387-a850-9b4bffbf0e2b\") " pod="openstack/glance-default-external-api-0" Dec 11 14:07:46 crc kubenswrapper[5050]: I1211 14:07:46.648903 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e61388f2-3282-4387-a850-9b4bffbf0e2b-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"e61388f2-3282-4387-a850-9b4bffbf0e2b\") " pod="openstack/glance-default-external-api-0" Dec 11 14:07:46 crc kubenswrapper[5050]: I1211 14:07:46.648959 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vpw78\" (UniqueName: \"kubernetes.io/projected/e61388f2-3282-4387-a850-9b4bffbf0e2b-kube-api-access-vpw78\") pod \"glance-default-external-api-0\" (UID: \"e61388f2-3282-4387-a850-9b4bffbf0e2b\") " pod="openstack/glance-default-external-api-0" Dec 11 14:07:46 crc kubenswrapper[5050]: I1211 14:07:46.649421 5050 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"e61388f2-3282-4387-a850-9b4bffbf0e2b\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/glance-default-external-api-0" Dec 11 14:07:46 crc kubenswrapper[5050]: I1211 14:07:46.695737 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e61388f2-3282-4387-a850-9b4bffbf0e2b-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"e61388f2-3282-4387-a850-9b4bffbf0e2b\") " pod="openstack/glance-default-external-api-0" Dec 11 14:07:46 crc kubenswrapper[5050]: I1211 14:07:46.696518 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e61388f2-3282-4387-a850-9b4bffbf0e2b-logs\") pod \"glance-default-external-api-0\" (UID: \"e61388f2-3282-4387-a850-9b4bffbf0e2b\") " pod="openstack/glance-default-external-api-0" Dec 11 14:07:46 crc kubenswrapper[5050]: I1211 14:07:46.773203 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e61388f2-3282-4387-a850-9b4bffbf0e2b-config-data\") pod \"glance-default-external-api-0\" (UID: \"e61388f2-3282-4387-a850-9b4bffbf0e2b\") " pod="openstack/glance-default-external-api-0" Dec 11 14:07:46 crc kubenswrapper[5050]: I1211 14:07:46.773995 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e61388f2-3282-4387-a850-9b4bffbf0e2b-scripts\") pod \"glance-default-external-api-0\" (UID: \"e61388f2-3282-4387-a850-9b4bffbf0e2b\") " pod="openstack/glance-default-external-api-0" Dec 11 14:07:46 crc kubenswrapper[5050]: I1211 14:07:46.776983 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vpw78\" (UniqueName: \"kubernetes.io/projected/e61388f2-3282-4387-a850-9b4bffbf0e2b-kube-api-access-vpw78\") pod \"glance-default-external-api-0\" (UID: \"e61388f2-3282-4387-a850-9b4bffbf0e2b\") " pod="openstack/glance-default-external-api-0" Dec 11 14:07:46 crc kubenswrapper[5050]: I1211 14:07:46.777187 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e61388f2-3282-4387-a850-9b4bffbf0e2b-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"e61388f2-3282-4387-a850-9b4bffbf0e2b\") " pod="openstack/glance-default-external-api-0" Dec 11 14:07:46 crc kubenswrapper[5050]: I1211 14:07:46.777681 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e61388f2-3282-4387-a850-9b4bffbf0e2b-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"e61388f2-3282-4387-a850-9b4bffbf0e2b\") " pod="openstack/glance-default-external-api-0" Dec 11 14:07:46 crc kubenswrapper[5050]: I1211 14:07:46.778331 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"e61388f2-3282-4387-a850-9b4bffbf0e2b\") " pod="openstack/glance-default-external-api-0" Dec 11 14:07:47 crc kubenswrapper[5050]: I1211 14:07:47.056413 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 11 14:07:47 crc kubenswrapper[5050]: I1211 14:07:47.556985 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="29144864-8034-4f1c-a270-3fa278d3a4c5" path="/var/lib/kubelet/pods/29144864-8034-4f1c-a270-3fa278d3a4c5/volumes" Dec 11 14:07:50 crc kubenswrapper[5050]: E1211 14:07:50.676071 5050 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api@sha256:fe32d3ea620f0c7ecfdde9bbf28417fde03bc18c6f60b1408fa8da24d8188f16" Dec 11 14:07:50 crc kubenswrapper[5050]: E1211 14:07:50.677181 5050 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api@sha256:fe32d3ea620f0c7ecfdde9bbf28417fde03bc18c6f60b1408fa8da24d8188f16,Command:[/bin/bash],Args:[-c barbican-manage db upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sx8rt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-6vzzr_openstack(107786ed-ea8f-4c2f-ac86-54b1bb504a69): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 11 14:07:50 crc kubenswrapper[5050]: E1211 14:07:50.678379 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-6vzzr" podUID="107786ed-ea8f-4c2f-ac86-54b1bb504a69" Dec 11 14:07:50 crc kubenswrapper[5050]: I1211 14:07:50.796315 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 11 14:07:50 crc kubenswrapper[5050]: I1211 14:07:50.946097 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28c13081-fdad-4d05-86c1-50bfd9745239-config-data\") pod \"28c13081-fdad-4d05-86c1-50bfd9745239\" (UID: \"28c13081-fdad-4d05-86c1-50bfd9745239\") " Dec 11 14:07:50 crc kubenswrapper[5050]: I1211 14:07:50.946165 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s87xx\" (UniqueName: \"kubernetes.io/projected/28c13081-fdad-4d05-86c1-50bfd9745239-kube-api-access-s87xx\") pod \"28c13081-fdad-4d05-86c1-50bfd9745239\" (UID: \"28c13081-fdad-4d05-86c1-50bfd9745239\") " Dec 11 14:07:50 crc kubenswrapper[5050]: I1211 14:07:50.946296 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/28c13081-fdad-4d05-86c1-50bfd9745239-logs\") pod \"28c13081-fdad-4d05-86c1-50bfd9745239\" (UID: \"28c13081-fdad-4d05-86c1-50bfd9745239\") " Dec 11 14:07:50 crc kubenswrapper[5050]: I1211 14:07:50.946446 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"28c13081-fdad-4d05-86c1-50bfd9745239\" (UID: \"28c13081-fdad-4d05-86c1-50bfd9745239\") " Dec 11 14:07:50 crc kubenswrapper[5050]: I1211 14:07:50.946778 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/28c13081-fdad-4d05-86c1-50bfd9745239-logs" (OuterVolumeSpecName: "logs") pod "28c13081-fdad-4d05-86c1-50bfd9745239" (UID: "28c13081-fdad-4d05-86c1-50bfd9745239"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 14:07:50 crc kubenswrapper[5050]: I1211 14:07:50.947090 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/28c13081-fdad-4d05-86c1-50bfd9745239-scripts\") pod \"28c13081-fdad-4d05-86c1-50bfd9745239\" (UID: \"28c13081-fdad-4d05-86c1-50bfd9745239\") " Dec 11 14:07:50 crc kubenswrapper[5050]: I1211 14:07:50.947152 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/28c13081-fdad-4d05-86c1-50bfd9745239-httpd-run\") pod \"28c13081-fdad-4d05-86c1-50bfd9745239\" (UID: \"28c13081-fdad-4d05-86c1-50bfd9745239\") " Dec 11 14:07:50 crc kubenswrapper[5050]: I1211 14:07:50.947195 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28c13081-fdad-4d05-86c1-50bfd9745239-combined-ca-bundle\") pod \"28c13081-fdad-4d05-86c1-50bfd9745239\" (UID: \"28c13081-fdad-4d05-86c1-50bfd9745239\") " Dec 11 14:07:50 crc kubenswrapper[5050]: I1211 14:07:50.947855 5050 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/28c13081-fdad-4d05-86c1-50bfd9745239-logs\") on node \"crc\" DevicePath \"\"" Dec 11 14:07:50 crc kubenswrapper[5050]: I1211 14:07:50.948064 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/28c13081-fdad-4d05-86c1-50bfd9745239-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "28c13081-fdad-4d05-86c1-50bfd9745239" (UID: "28c13081-fdad-4d05-86c1-50bfd9745239"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 14:07:50 crc kubenswrapper[5050]: I1211 14:07:50.954069 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-8db84466c-xqjfw" podUID="38fac7eb-e076-4c58-9d8e-961461e27f92" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.127:5353: connect: connection refused" Dec 11 14:07:50 crc kubenswrapper[5050]: I1211 14:07:50.954111 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28c13081-fdad-4d05-86c1-50bfd9745239-kube-api-access-s87xx" (OuterVolumeSpecName: "kube-api-access-s87xx") pod "28c13081-fdad-4d05-86c1-50bfd9745239" (UID: "28c13081-fdad-4d05-86c1-50bfd9745239"). InnerVolumeSpecName "kube-api-access-s87xx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 14:07:50 crc kubenswrapper[5050]: I1211 14:07:50.954243 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage12-crc" (OuterVolumeSpecName: "glance") pod "28c13081-fdad-4d05-86c1-50bfd9745239" (UID: "28c13081-fdad-4d05-86c1-50bfd9745239"). InnerVolumeSpecName "local-storage12-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 11 14:07:50 crc kubenswrapper[5050]: I1211 14:07:50.961219 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28c13081-fdad-4d05-86c1-50bfd9745239-scripts" (OuterVolumeSpecName: "scripts") pod "28c13081-fdad-4d05-86c1-50bfd9745239" (UID: "28c13081-fdad-4d05-86c1-50bfd9745239"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:07:50 crc kubenswrapper[5050]: I1211 14:07:50.979566 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28c13081-fdad-4d05-86c1-50bfd9745239-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "28c13081-fdad-4d05-86c1-50bfd9745239" (UID: "28c13081-fdad-4d05-86c1-50bfd9745239"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:07:51 crc kubenswrapper[5050]: I1211 14:07:50.999946 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28c13081-fdad-4d05-86c1-50bfd9745239-config-data" (OuterVolumeSpecName: "config-data") pod "28c13081-fdad-4d05-86c1-50bfd9745239" (UID: "28c13081-fdad-4d05-86c1-50bfd9745239"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:07:51 crc kubenswrapper[5050]: I1211 14:07:51.050156 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s87xx\" (UniqueName: \"kubernetes.io/projected/28c13081-fdad-4d05-86c1-50bfd9745239-kube-api-access-s87xx\") on node \"crc\" DevicePath \"\"" Dec 11 14:07:51 crc kubenswrapper[5050]: I1211 14:07:51.050254 5050 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" " Dec 11 14:07:51 crc kubenswrapper[5050]: I1211 14:07:51.050272 5050 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/28c13081-fdad-4d05-86c1-50bfd9745239-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 14:07:51 crc kubenswrapper[5050]: I1211 14:07:51.050286 5050 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/28c13081-fdad-4d05-86c1-50bfd9745239-httpd-run\") on node \"crc\" DevicePath \"\"" Dec 11 14:07:51 crc kubenswrapper[5050]: I1211 14:07:51.050300 5050 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28c13081-fdad-4d05-86c1-50bfd9745239-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 14:07:51 crc kubenswrapper[5050]: I1211 14:07:51.050312 5050 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/28c13081-fdad-4d05-86c1-50bfd9745239-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 14:07:51 crc kubenswrapper[5050]: I1211 14:07:51.071928 5050 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage12-crc" (UniqueName: "kubernetes.io/local-volume/local-storage12-crc") on node "crc" Dec 11 14:07:51 crc kubenswrapper[5050]: I1211 14:07:51.152579 5050 reconciler_common.go:293] "Volume detached for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" DevicePath \"\"" Dec 11 14:07:51 crc kubenswrapper[5050]: I1211 14:07:51.417660 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"28c13081-fdad-4d05-86c1-50bfd9745239","Type":"ContainerDied","Data":"918b10805c2c41ccf18baf8c93bea1b8dfafb6c5bf9a755d2ab4e89f7d3e8e71"} Dec 11 14:07:51 crc kubenswrapper[5050]: I1211 14:07:51.417692 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 11 14:07:51 crc kubenswrapper[5050]: E1211 14:07:51.419560 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api@sha256:fe32d3ea620f0c7ecfdde9bbf28417fde03bc18c6f60b1408fa8da24d8188f16\\\"\"" pod="openstack/barbican-db-sync-6vzzr" podUID="107786ed-ea8f-4c2f-ac86-54b1bb504a69" Dec 11 14:07:51 crc kubenswrapper[5050]: I1211 14:07:51.479757 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 11 14:07:51 crc kubenswrapper[5050]: I1211 14:07:51.492607 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 11 14:07:51 crc kubenswrapper[5050]: I1211 14:07:51.508657 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 11 14:07:51 crc kubenswrapper[5050]: E1211 14:07:51.509197 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28c13081-fdad-4d05-86c1-50bfd9745239" containerName="glance-log" Dec 11 14:07:51 crc kubenswrapper[5050]: I1211 14:07:51.509214 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="28c13081-fdad-4d05-86c1-50bfd9745239" containerName="glance-log" Dec 11 14:07:51 crc kubenswrapper[5050]: E1211 14:07:51.509269 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28c13081-fdad-4d05-86c1-50bfd9745239" containerName="glance-httpd" Dec 11 14:07:51 crc kubenswrapper[5050]: I1211 14:07:51.509275 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="28c13081-fdad-4d05-86c1-50bfd9745239" containerName="glance-httpd" Dec 11 14:07:51 crc kubenswrapper[5050]: I1211 14:07:51.509537 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="28c13081-fdad-4d05-86c1-50bfd9745239" containerName="glance-httpd" Dec 11 14:07:51 crc kubenswrapper[5050]: I1211 14:07:51.509552 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="28c13081-fdad-4d05-86c1-50bfd9745239" containerName="glance-log" Dec 11 14:07:51 crc kubenswrapper[5050]: I1211 14:07:51.511416 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 11 14:07:51 crc kubenswrapper[5050]: I1211 14:07:51.514951 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Dec 11 14:07:51 crc kubenswrapper[5050]: I1211 14:07:51.515676 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Dec 11 14:07:51 crc kubenswrapper[5050]: I1211 14:07:51.527313 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 11 14:07:51 crc kubenswrapper[5050]: I1211 14:07:51.560965 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="28c13081-fdad-4d05-86c1-50bfd9745239" path="/var/lib/kubelet/pods/28c13081-fdad-4d05-86c1-50bfd9745239/volumes" Dec 11 14:07:51 crc kubenswrapper[5050]: I1211 14:07:51.663644 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l84hn\" (UniqueName: \"kubernetes.io/projected/1553db29-21b6-4403-ab72-67c4d725a99d-kube-api-access-l84hn\") pod \"glance-default-internal-api-0\" (UID: \"1553db29-21b6-4403-ab72-67c4d725a99d\") " pod="openstack/glance-default-internal-api-0" Dec 11 14:07:51 crc kubenswrapper[5050]: I1211 14:07:51.663707 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1553db29-21b6-4403-ab72-67c4d725a99d-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"1553db29-21b6-4403-ab72-67c4d725a99d\") " pod="openstack/glance-default-internal-api-0" Dec 11 14:07:51 crc kubenswrapper[5050]: I1211 14:07:51.663754 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1553db29-21b6-4403-ab72-67c4d725a99d-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"1553db29-21b6-4403-ab72-67c4d725a99d\") " pod="openstack/glance-default-internal-api-0" Dec 11 14:07:51 crc kubenswrapper[5050]: I1211 14:07:51.663792 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1553db29-21b6-4403-ab72-67c4d725a99d-config-data\") pod \"glance-default-internal-api-0\" (UID: \"1553db29-21b6-4403-ab72-67c4d725a99d\") " pod="openstack/glance-default-internal-api-0" Dec 11 14:07:51 crc kubenswrapper[5050]: I1211 14:07:51.663833 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1553db29-21b6-4403-ab72-67c4d725a99d-logs\") pod \"glance-default-internal-api-0\" (UID: \"1553db29-21b6-4403-ab72-67c4d725a99d\") " pod="openstack/glance-default-internal-api-0" Dec 11 14:07:51 crc kubenswrapper[5050]: I1211 14:07:51.663852 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1553db29-21b6-4403-ab72-67c4d725a99d-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"1553db29-21b6-4403-ab72-67c4d725a99d\") " pod="openstack/glance-default-internal-api-0" Dec 11 14:07:51 crc kubenswrapper[5050]: I1211 14:07:51.663897 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1553db29-21b6-4403-ab72-67c4d725a99d-scripts\") pod 
\"glance-default-internal-api-0\" (UID: \"1553db29-21b6-4403-ab72-67c4d725a99d\") " pod="openstack/glance-default-internal-api-0" Dec 11 14:07:51 crc kubenswrapper[5050]: I1211 14:07:51.663920 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"1553db29-21b6-4403-ab72-67c4d725a99d\") " pod="openstack/glance-default-internal-api-0" Dec 11 14:07:51 crc kubenswrapper[5050]: I1211 14:07:51.766192 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1553db29-21b6-4403-ab72-67c4d725a99d-scripts\") pod \"glance-default-internal-api-0\" (UID: \"1553db29-21b6-4403-ab72-67c4d725a99d\") " pod="openstack/glance-default-internal-api-0" Dec 11 14:07:51 crc kubenswrapper[5050]: I1211 14:07:51.766254 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"1553db29-21b6-4403-ab72-67c4d725a99d\") " pod="openstack/glance-default-internal-api-0" Dec 11 14:07:51 crc kubenswrapper[5050]: I1211 14:07:51.766307 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l84hn\" (UniqueName: \"kubernetes.io/projected/1553db29-21b6-4403-ab72-67c4d725a99d-kube-api-access-l84hn\") pod \"glance-default-internal-api-0\" (UID: \"1553db29-21b6-4403-ab72-67c4d725a99d\") " pod="openstack/glance-default-internal-api-0" Dec 11 14:07:51 crc kubenswrapper[5050]: I1211 14:07:51.766333 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1553db29-21b6-4403-ab72-67c4d725a99d-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"1553db29-21b6-4403-ab72-67c4d725a99d\") " pod="openstack/glance-default-internal-api-0" Dec 11 14:07:51 crc kubenswrapper[5050]: I1211 14:07:51.766371 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1553db29-21b6-4403-ab72-67c4d725a99d-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"1553db29-21b6-4403-ab72-67c4d725a99d\") " pod="openstack/glance-default-internal-api-0" Dec 11 14:07:51 crc kubenswrapper[5050]: I1211 14:07:51.766405 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1553db29-21b6-4403-ab72-67c4d725a99d-config-data\") pod \"glance-default-internal-api-0\" (UID: \"1553db29-21b6-4403-ab72-67c4d725a99d\") " pod="openstack/glance-default-internal-api-0" Dec 11 14:07:51 crc kubenswrapper[5050]: I1211 14:07:51.766444 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1553db29-21b6-4403-ab72-67c4d725a99d-logs\") pod \"glance-default-internal-api-0\" (UID: \"1553db29-21b6-4403-ab72-67c4d725a99d\") " pod="openstack/glance-default-internal-api-0" Dec 11 14:07:51 crc kubenswrapper[5050]: I1211 14:07:51.766464 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1553db29-21b6-4403-ab72-67c4d725a99d-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"1553db29-21b6-4403-ab72-67c4d725a99d\") " 
pod="openstack/glance-default-internal-api-0" Dec 11 14:07:51 crc kubenswrapper[5050]: I1211 14:07:51.767797 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1553db29-21b6-4403-ab72-67c4d725a99d-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"1553db29-21b6-4403-ab72-67c4d725a99d\") " pod="openstack/glance-default-internal-api-0" Dec 11 14:07:51 crc kubenswrapper[5050]: I1211 14:07:51.768103 5050 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"1553db29-21b6-4403-ab72-67c4d725a99d\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/glance-default-internal-api-0" Dec 11 14:07:51 crc kubenswrapper[5050]: I1211 14:07:51.772736 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1553db29-21b6-4403-ab72-67c4d725a99d-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"1553db29-21b6-4403-ab72-67c4d725a99d\") " pod="openstack/glance-default-internal-api-0" Dec 11 14:07:51 crc kubenswrapper[5050]: I1211 14:07:51.773058 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1553db29-21b6-4403-ab72-67c4d725a99d-logs\") pod \"glance-default-internal-api-0\" (UID: \"1553db29-21b6-4403-ab72-67c4d725a99d\") " pod="openstack/glance-default-internal-api-0" Dec 11 14:07:51 crc kubenswrapper[5050]: I1211 14:07:51.775386 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1553db29-21b6-4403-ab72-67c4d725a99d-scripts\") pod \"glance-default-internal-api-0\" (UID: \"1553db29-21b6-4403-ab72-67c4d725a99d\") " pod="openstack/glance-default-internal-api-0" Dec 11 14:07:51 crc kubenswrapper[5050]: I1211 14:07:51.776080 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1553db29-21b6-4403-ab72-67c4d725a99d-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"1553db29-21b6-4403-ab72-67c4d725a99d\") " pod="openstack/glance-default-internal-api-0" Dec 11 14:07:51 crc kubenswrapper[5050]: I1211 14:07:51.777622 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1553db29-21b6-4403-ab72-67c4d725a99d-config-data\") pod \"glance-default-internal-api-0\" (UID: \"1553db29-21b6-4403-ab72-67c4d725a99d\") " pod="openstack/glance-default-internal-api-0" Dec 11 14:07:51 crc kubenswrapper[5050]: I1211 14:07:51.790483 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l84hn\" (UniqueName: \"kubernetes.io/projected/1553db29-21b6-4403-ab72-67c4d725a99d-kube-api-access-l84hn\") pod \"glance-default-internal-api-0\" (UID: \"1553db29-21b6-4403-ab72-67c4d725a99d\") " pod="openstack/glance-default-internal-api-0" Dec 11 14:07:51 crc kubenswrapper[5050]: I1211 14:07:51.807958 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"1553db29-21b6-4403-ab72-67c4d725a99d\") " pod="openstack/glance-default-internal-api-0" Dec 11 14:07:51 crc kubenswrapper[5050]: I1211 14:07:51.831687 5050 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 11 14:07:55 crc kubenswrapper[5050]: I1211 14:07:55.953561 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-8db84466c-xqjfw" podUID="38fac7eb-e076-4c58-9d8e-961461e27f92" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.127:5353: connect: connection refused" Dec 11 14:07:55 crc kubenswrapper[5050]: I1211 14:07:55.954341 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8db84466c-xqjfw" Dec 11 14:07:59 crc kubenswrapper[5050]: I1211 14:07:59.916769 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-cvstt" Dec 11 14:08:00 crc kubenswrapper[5050]: I1211 14:08:00.070777 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/295ba8ac-0609-4021-920b-3ab943ea21ad-config-data\") pod \"295ba8ac-0609-4021-920b-3ab943ea21ad\" (UID: \"295ba8ac-0609-4021-920b-3ab943ea21ad\") " Dec 11 14:08:00 crc kubenswrapper[5050]: I1211 14:08:00.071077 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nhpp9\" (UniqueName: \"kubernetes.io/projected/295ba8ac-0609-4021-920b-3ab943ea21ad-kube-api-access-nhpp9\") pod \"295ba8ac-0609-4021-920b-3ab943ea21ad\" (UID: \"295ba8ac-0609-4021-920b-3ab943ea21ad\") " Dec 11 14:08:00 crc kubenswrapper[5050]: I1211 14:08:00.071276 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/295ba8ac-0609-4021-920b-3ab943ea21ad-fernet-keys\") pod \"295ba8ac-0609-4021-920b-3ab943ea21ad\" (UID: \"295ba8ac-0609-4021-920b-3ab943ea21ad\") " Dec 11 14:08:00 crc kubenswrapper[5050]: I1211 14:08:00.071344 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/295ba8ac-0609-4021-920b-3ab943ea21ad-combined-ca-bundle\") pod \"295ba8ac-0609-4021-920b-3ab943ea21ad\" (UID: \"295ba8ac-0609-4021-920b-3ab943ea21ad\") " Dec 11 14:08:00 crc kubenswrapper[5050]: I1211 14:08:00.071390 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/295ba8ac-0609-4021-920b-3ab943ea21ad-credential-keys\") pod \"295ba8ac-0609-4021-920b-3ab943ea21ad\" (UID: \"295ba8ac-0609-4021-920b-3ab943ea21ad\") " Dec 11 14:08:00 crc kubenswrapper[5050]: I1211 14:08:00.071443 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/295ba8ac-0609-4021-920b-3ab943ea21ad-scripts\") pod \"295ba8ac-0609-4021-920b-3ab943ea21ad\" (UID: \"295ba8ac-0609-4021-920b-3ab943ea21ad\") " Dec 11 14:08:00 crc kubenswrapper[5050]: I1211 14:08:00.078419 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/295ba8ac-0609-4021-920b-3ab943ea21ad-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "295ba8ac-0609-4021-920b-3ab943ea21ad" (UID: "295ba8ac-0609-4021-920b-3ab943ea21ad"). InnerVolumeSpecName "credential-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:08:00 crc kubenswrapper[5050]: I1211 14:08:00.083195 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/295ba8ac-0609-4021-920b-3ab943ea21ad-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "295ba8ac-0609-4021-920b-3ab943ea21ad" (UID: "295ba8ac-0609-4021-920b-3ab943ea21ad"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:08:00 crc kubenswrapper[5050]: I1211 14:08:00.085392 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/295ba8ac-0609-4021-920b-3ab943ea21ad-kube-api-access-nhpp9" (OuterVolumeSpecName: "kube-api-access-nhpp9") pod "295ba8ac-0609-4021-920b-3ab943ea21ad" (UID: "295ba8ac-0609-4021-920b-3ab943ea21ad"). InnerVolumeSpecName "kube-api-access-nhpp9". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 14:08:00 crc kubenswrapper[5050]: I1211 14:08:00.098835 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/295ba8ac-0609-4021-920b-3ab943ea21ad-scripts" (OuterVolumeSpecName: "scripts") pod "295ba8ac-0609-4021-920b-3ab943ea21ad" (UID: "295ba8ac-0609-4021-920b-3ab943ea21ad"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:08:00 crc kubenswrapper[5050]: I1211 14:08:00.104221 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/295ba8ac-0609-4021-920b-3ab943ea21ad-config-data" (OuterVolumeSpecName: "config-data") pod "295ba8ac-0609-4021-920b-3ab943ea21ad" (UID: "295ba8ac-0609-4021-920b-3ab943ea21ad"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:08:00 crc kubenswrapper[5050]: I1211 14:08:00.106761 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/295ba8ac-0609-4021-920b-3ab943ea21ad-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "295ba8ac-0609-4021-920b-3ab943ea21ad" (UID: "295ba8ac-0609-4021-920b-3ab943ea21ad"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:08:00 crc kubenswrapper[5050]: I1211 14:08:00.174233 5050 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/295ba8ac-0609-4021-920b-3ab943ea21ad-fernet-keys\") on node \"crc\" DevicePath \"\"" Dec 11 14:08:00 crc kubenswrapper[5050]: I1211 14:08:00.174279 5050 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/295ba8ac-0609-4021-920b-3ab943ea21ad-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 14:08:00 crc kubenswrapper[5050]: I1211 14:08:00.174292 5050 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/295ba8ac-0609-4021-920b-3ab943ea21ad-credential-keys\") on node \"crc\" DevicePath \"\"" Dec 11 14:08:00 crc kubenswrapper[5050]: I1211 14:08:00.174305 5050 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/295ba8ac-0609-4021-920b-3ab943ea21ad-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 14:08:00 crc kubenswrapper[5050]: I1211 14:08:00.174316 5050 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/295ba8ac-0609-4021-920b-3ab943ea21ad-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 14:08:00 crc kubenswrapper[5050]: I1211 14:08:00.174326 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nhpp9\" (UniqueName: \"kubernetes.io/projected/295ba8ac-0609-4021-920b-3ab943ea21ad-kube-api-access-nhpp9\") on node \"crc\" DevicePath \"\"" Dec 11 14:08:00 crc kubenswrapper[5050]: I1211 14:08:00.515707 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-cvstt" event={"ID":"295ba8ac-0609-4021-920b-3ab943ea21ad","Type":"ContainerDied","Data":"1d4dd509ef4b84160fad41127c9030a5ce0133e7952cc6876f1b63ad6c2dd0a1"} Dec 11 14:08:00 crc kubenswrapper[5050]: I1211 14:08:00.515758 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-cvstt" Dec 11 14:08:00 crc kubenswrapper[5050]: I1211 14:08:00.515758 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1d4dd509ef4b84160fad41127c9030a5ce0133e7952cc6876f1b63ad6c2dd0a1" Dec 11 14:08:01 crc kubenswrapper[5050]: I1211 14:08:01.027432 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-cvstt"] Dec 11 14:08:01 crc kubenswrapper[5050]: I1211 14:08:01.039121 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-cvstt"] Dec 11 14:08:01 crc kubenswrapper[5050]: I1211 14:08:01.116799 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-tnv94"] Dec 11 14:08:01 crc kubenswrapper[5050]: E1211 14:08:01.117488 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="295ba8ac-0609-4021-920b-3ab943ea21ad" containerName="keystone-bootstrap" Dec 11 14:08:01 crc kubenswrapper[5050]: I1211 14:08:01.117514 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="295ba8ac-0609-4021-920b-3ab943ea21ad" containerName="keystone-bootstrap" Dec 11 14:08:01 crc kubenswrapper[5050]: I1211 14:08:01.117692 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="295ba8ac-0609-4021-920b-3ab943ea21ad" containerName="keystone-bootstrap" Dec 11 14:08:01 crc kubenswrapper[5050]: I1211 14:08:01.118510 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-tnv94" Dec 11 14:08:01 crc kubenswrapper[5050]: I1211 14:08:01.121295 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Dec 11 14:08:01 crc kubenswrapper[5050]: I1211 14:08:01.121656 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Dec 11 14:08:01 crc kubenswrapper[5050]: I1211 14:08:01.121811 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Dec 11 14:08:01 crc kubenswrapper[5050]: I1211 14:08:01.122136 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-tckgc" Dec 11 14:08:01 crc kubenswrapper[5050]: I1211 14:08:01.122283 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Dec 11 14:08:01 crc kubenswrapper[5050]: I1211 14:08:01.138635 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-tnv94"] Dec 11 14:08:01 crc kubenswrapper[5050]: I1211 14:08:01.197082 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c69742c2-ef6f-478b-bf96-754808e9a127-fernet-keys\") pod \"keystone-bootstrap-tnv94\" (UID: \"c69742c2-ef6f-478b-bf96-754808e9a127\") " pod="openstack/keystone-bootstrap-tnv94" Dec 11 14:08:01 crc kubenswrapper[5050]: I1211 14:08:01.197155 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c69742c2-ef6f-478b-bf96-754808e9a127-combined-ca-bundle\") pod \"keystone-bootstrap-tnv94\" (UID: \"c69742c2-ef6f-478b-bf96-754808e9a127\") " pod="openstack/keystone-bootstrap-tnv94" Dec 11 14:08:01 crc kubenswrapper[5050]: I1211 14:08:01.197283 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/c69742c2-ef6f-478b-bf96-754808e9a127-scripts\") pod \"keystone-bootstrap-tnv94\" (UID: \"c69742c2-ef6f-478b-bf96-754808e9a127\") " pod="openstack/keystone-bootstrap-tnv94" Dec 11 14:08:01 crc kubenswrapper[5050]: I1211 14:08:01.197311 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxvjf\" (UniqueName: \"kubernetes.io/projected/c69742c2-ef6f-478b-bf96-754808e9a127-kube-api-access-zxvjf\") pod \"keystone-bootstrap-tnv94\" (UID: \"c69742c2-ef6f-478b-bf96-754808e9a127\") " pod="openstack/keystone-bootstrap-tnv94" Dec 11 14:08:01 crc kubenswrapper[5050]: I1211 14:08:01.197342 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c69742c2-ef6f-478b-bf96-754808e9a127-config-data\") pod \"keystone-bootstrap-tnv94\" (UID: \"c69742c2-ef6f-478b-bf96-754808e9a127\") " pod="openstack/keystone-bootstrap-tnv94" Dec 11 14:08:01 crc kubenswrapper[5050]: I1211 14:08:01.197384 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c69742c2-ef6f-478b-bf96-754808e9a127-credential-keys\") pod \"keystone-bootstrap-tnv94\" (UID: \"c69742c2-ef6f-478b-bf96-754808e9a127\") " pod="openstack/keystone-bootstrap-tnv94" Dec 11 14:08:01 crc kubenswrapper[5050]: I1211 14:08:01.299380 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c69742c2-ef6f-478b-bf96-754808e9a127-fernet-keys\") pod \"keystone-bootstrap-tnv94\" (UID: \"c69742c2-ef6f-478b-bf96-754808e9a127\") " pod="openstack/keystone-bootstrap-tnv94" Dec 11 14:08:01 crc kubenswrapper[5050]: I1211 14:08:01.299486 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c69742c2-ef6f-478b-bf96-754808e9a127-combined-ca-bundle\") pod \"keystone-bootstrap-tnv94\" (UID: \"c69742c2-ef6f-478b-bf96-754808e9a127\") " pod="openstack/keystone-bootstrap-tnv94" Dec 11 14:08:01 crc kubenswrapper[5050]: I1211 14:08:01.299544 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c69742c2-ef6f-478b-bf96-754808e9a127-scripts\") pod \"keystone-bootstrap-tnv94\" (UID: \"c69742c2-ef6f-478b-bf96-754808e9a127\") " pod="openstack/keystone-bootstrap-tnv94" Dec 11 14:08:01 crc kubenswrapper[5050]: I1211 14:08:01.299576 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zxvjf\" (UniqueName: \"kubernetes.io/projected/c69742c2-ef6f-478b-bf96-754808e9a127-kube-api-access-zxvjf\") pod \"keystone-bootstrap-tnv94\" (UID: \"c69742c2-ef6f-478b-bf96-754808e9a127\") " pod="openstack/keystone-bootstrap-tnv94" Dec 11 14:08:01 crc kubenswrapper[5050]: I1211 14:08:01.299607 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c69742c2-ef6f-478b-bf96-754808e9a127-config-data\") pod \"keystone-bootstrap-tnv94\" (UID: \"c69742c2-ef6f-478b-bf96-754808e9a127\") " pod="openstack/keystone-bootstrap-tnv94" Dec 11 14:08:01 crc kubenswrapper[5050]: I1211 14:08:01.299665 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c69742c2-ef6f-478b-bf96-754808e9a127-credential-keys\") pod 
\"keystone-bootstrap-tnv94\" (UID: \"c69742c2-ef6f-478b-bf96-754808e9a127\") " pod="openstack/keystone-bootstrap-tnv94" Dec 11 14:08:01 crc kubenswrapper[5050]: I1211 14:08:01.305178 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c69742c2-ef6f-478b-bf96-754808e9a127-scripts\") pod \"keystone-bootstrap-tnv94\" (UID: \"c69742c2-ef6f-478b-bf96-754808e9a127\") " pod="openstack/keystone-bootstrap-tnv94" Dec 11 14:08:01 crc kubenswrapper[5050]: I1211 14:08:01.305519 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c69742c2-ef6f-478b-bf96-754808e9a127-fernet-keys\") pod \"keystone-bootstrap-tnv94\" (UID: \"c69742c2-ef6f-478b-bf96-754808e9a127\") " pod="openstack/keystone-bootstrap-tnv94" Dec 11 14:08:01 crc kubenswrapper[5050]: I1211 14:08:01.305786 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c69742c2-ef6f-478b-bf96-754808e9a127-credential-keys\") pod \"keystone-bootstrap-tnv94\" (UID: \"c69742c2-ef6f-478b-bf96-754808e9a127\") " pod="openstack/keystone-bootstrap-tnv94" Dec 11 14:08:01 crc kubenswrapper[5050]: I1211 14:08:01.307621 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c69742c2-ef6f-478b-bf96-754808e9a127-config-data\") pod \"keystone-bootstrap-tnv94\" (UID: \"c69742c2-ef6f-478b-bf96-754808e9a127\") " pod="openstack/keystone-bootstrap-tnv94" Dec 11 14:08:01 crc kubenswrapper[5050]: I1211 14:08:01.310815 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c69742c2-ef6f-478b-bf96-754808e9a127-combined-ca-bundle\") pod \"keystone-bootstrap-tnv94\" (UID: \"c69742c2-ef6f-478b-bf96-754808e9a127\") " pod="openstack/keystone-bootstrap-tnv94" Dec 11 14:08:01 crc kubenswrapper[5050]: I1211 14:08:01.325207 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zxvjf\" (UniqueName: \"kubernetes.io/projected/c69742c2-ef6f-478b-bf96-754808e9a127-kube-api-access-zxvjf\") pod \"keystone-bootstrap-tnv94\" (UID: \"c69742c2-ef6f-478b-bf96-754808e9a127\") " pod="openstack/keystone-bootstrap-tnv94" Dec 11 14:08:01 crc kubenswrapper[5050]: I1211 14:08:01.456106 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-tnv94" Dec 11 14:08:01 crc kubenswrapper[5050]: I1211 14:08:01.562842 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="295ba8ac-0609-4021-920b-3ab943ea21ad" path="/var/lib/kubelet/pods/295ba8ac-0609-4021-920b-3ab943ea21ad/volumes" Dec 11 14:08:05 crc kubenswrapper[5050]: I1211 14:08:05.344495 5050 scope.go:117] "RemoveContainer" containerID="c57629ecbce0006535ae0214131185c2b057c9fee32a3f999689b8e1d1e66e33" Dec 11 14:08:05 crc kubenswrapper[5050]: E1211 14:08:05.378405 5050 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:b59b7445e581cc720038107e421371c86c5765b2967e77d884ef29b1d9fd0f49" Dec 11 14:08:05 crc kubenswrapper[5050]: E1211 14:08:05.379265 5050 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:b59b7445e581cc720038107e421371c86c5765b2967e77d884ef29b1d9fd0f49,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-26th6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-7kvxk_openstack(2d9200d5-8f1c-46be-a802-995c7f58b754): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" 
logger="UnhandledError" Dec 11 14:08:05 crc kubenswrapper[5050]: E1211 14:08:05.380585 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-7kvxk" podUID="2d9200d5-8f1c-46be-a802-995c7f58b754" Dec 11 14:08:05 crc kubenswrapper[5050]: I1211 14:08:05.412709 5050 scope.go:117] "RemoveContainer" containerID="0fb1b5259f3b526f668e3021f74203bb87f10713c1686a64348b93ee9334f000" Dec 11 14:08:05 crc kubenswrapper[5050]: I1211 14:08:05.530597 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8db84466c-xqjfw" Dec 11 14:08:05 crc kubenswrapper[5050]: I1211 14:08:05.578071 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8db84466c-xqjfw" event={"ID":"38fac7eb-e076-4c58-9d8e-961461e27f92","Type":"ContainerDied","Data":"bb14d4056d44bd4a7eb35a2951d19dbbf0f868432b8894238d02ca665d89befa"} Dec 11 14:08:05 crc kubenswrapper[5050]: I1211 14:08:05.578159 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8db84466c-xqjfw" Dec 11 14:08:05 crc kubenswrapper[5050]: E1211 14:08:05.601911 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:b59b7445e581cc720038107e421371c86c5765b2967e77d884ef29b1d9fd0f49\\\"\"" pod="openstack/cinder-db-sync-7kvxk" podUID="2d9200d5-8f1c-46be-a802-995c7f58b754" Dec 11 14:08:05 crc kubenswrapper[5050]: I1211 14:08:05.602332 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/38fac7eb-e076-4c58-9d8e-961461e27f92-ovsdbserver-nb\") pod \"38fac7eb-e076-4c58-9d8e-961461e27f92\" (UID: \"38fac7eb-e076-4c58-9d8e-961461e27f92\") " Dec 11 14:08:05 crc kubenswrapper[5050]: I1211 14:08:05.604198 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/38fac7eb-e076-4c58-9d8e-961461e27f92-ovsdbserver-sb\") pod \"38fac7eb-e076-4c58-9d8e-961461e27f92\" (UID: \"38fac7eb-e076-4c58-9d8e-961461e27f92\") " Dec 11 14:08:05 crc kubenswrapper[5050]: I1211 14:08:05.604398 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38fac7eb-e076-4c58-9d8e-961461e27f92-config\") pod \"38fac7eb-e076-4c58-9d8e-961461e27f92\" (UID: \"38fac7eb-e076-4c58-9d8e-961461e27f92\") " Dec 11 14:08:05 crc kubenswrapper[5050]: I1211 14:08:05.604451 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/38fac7eb-e076-4c58-9d8e-961461e27f92-dns-svc\") pod \"38fac7eb-e076-4c58-9d8e-961461e27f92\" (UID: \"38fac7eb-e076-4c58-9d8e-961461e27f92\") " Dec 11 14:08:05 crc kubenswrapper[5050]: I1211 14:08:05.604532 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgbld\" (UniqueName: \"kubernetes.io/projected/38fac7eb-e076-4c58-9d8e-961461e27f92-kube-api-access-zgbld\") pod \"38fac7eb-e076-4c58-9d8e-961461e27f92\" (UID: \"38fac7eb-e076-4c58-9d8e-961461e27f92\") " Dec 11 14:08:05 crc kubenswrapper[5050]: I1211 14:08:05.604602 5050 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/38fac7eb-e076-4c58-9d8e-961461e27f92-dns-swift-storage-0\") pod \"38fac7eb-e076-4c58-9d8e-961461e27f92\" (UID: \"38fac7eb-e076-4c58-9d8e-961461e27f92\") " Dec 11 14:08:05 crc kubenswrapper[5050]: I1211 14:08:05.627150 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38fac7eb-e076-4c58-9d8e-961461e27f92-kube-api-access-zgbld" (OuterVolumeSpecName: "kube-api-access-zgbld") pod "38fac7eb-e076-4c58-9d8e-961461e27f92" (UID: "38fac7eb-e076-4c58-9d8e-961461e27f92"). InnerVolumeSpecName "kube-api-access-zgbld". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 14:08:05 crc kubenswrapper[5050]: I1211 14:08:05.642285 5050 scope.go:117] "RemoveContainer" containerID="d17ce11b3a577fe1f99ed43c3b6239570256f4f6ffc04bae15d0cb648cfe9762" Dec 11 14:08:05 crc kubenswrapper[5050]: I1211 14:08:05.672859 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/38fac7eb-e076-4c58-9d8e-961461e27f92-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "38fac7eb-e076-4c58-9d8e-961461e27f92" (UID: "38fac7eb-e076-4c58-9d8e-961461e27f92"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 14:08:05 crc kubenswrapper[5050]: I1211 14:08:05.681099 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/38fac7eb-e076-4c58-9d8e-961461e27f92-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "38fac7eb-e076-4c58-9d8e-961461e27f92" (UID: "38fac7eb-e076-4c58-9d8e-961461e27f92"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 14:08:05 crc kubenswrapper[5050]: I1211 14:08:05.699462 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/38fac7eb-e076-4c58-9d8e-961461e27f92-config" (OuterVolumeSpecName: "config") pod "38fac7eb-e076-4c58-9d8e-961461e27f92" (UID: "38fac7eb-e076-4c58-9d8e-961461e27f92"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 14:08:05 crc kubenswrapper[5050]: I1211 14:08:05.700781 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/38fac7eb-e076-4c58-9d8e-961461e27f92-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "38fac7eb-e076-4c58-9d8e-961461e27f92" (UID: "38fac7eb-e076-4c58-9d8e-961461e27f92"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 14:08:05 crc kubenswrapper[5050]: I1211 14:08:05.711119 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/38fac7eb-e076-4c58-9d8e-961461e27f92-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "38fac7eb-e076-4c58-9d8e-961461e27f92" (UID: "38fac7eb-e076-4c58-9d8e-961461e27f92"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 14:08:05 crc kubenswrapper[5050]: I1211 14:08:05.720548 5050 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/38fac7eb-e076-4c58-9d8e-961461e27f92-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 11 14:08:05 crc kubenswrapper[5050]: I1211 14:08:05.721120 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgbld\" (UniqueName: \"kubernetes.io/projected/38fac7eb-e076-4c58-9d8e-961461e27f92-kube-api-access-zgbld\") on node \"crc\" DevicePath \"\"" Dec 11 14:08:05 crc kubenswrapper[5050]: I1211 14:08:05.721150 5050 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/38fac7eb-e076-4c58-9d8e-961461e27f92-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 11 14:08:05 crc kubenswrapper[5050]: I1211 14:08:05.721165 5050 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/38fac7eb-e076-4c58-9d8e-961461e27f92-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 11 14:08:05 crc kubenswrapper[5050]: I1211 14:08:05.721184 5050 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/38fac7eb-e076-4c58-9d8e-961461e27f92-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 11 14:08:05 crc kubenswrapper[5050]: I1211 14:08:05.721200 5050 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38fac7eb-e076-4c58-9d8e-961461e27f92-config\") on node \"crc\" DevicePath \"\"" Dec 11 14:08:05 crc kubenswrapper[5050]: I1211 14:08:05.797713 5050 scope.go:117] "RemoveContainer" containerID="ee297d5093f29c9f272b39eb275be212b9c71255851c9af6568a077917a24a37" Dec 11 14:08:05 crc kubenswrapper[5050]: I1211 14:08:05.841158 5050 scope.go:117] "RemoveContainer" containerID="e2177809717135e23043125b0f0aada3f38012dcec64a3abac11fe875f2a1baa" Dec 11 14:08:05 crc kubenswrapper[5050]: I1211 14:08:05.926106 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8db84466c-xqjfw"] Dec 11 14:08:05 crc kubenswrapper[5050]: I1211 14:08:05.946366 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8db84466c-xqjfw"] Dec 11 14:08:05 crc kubenswrapper[5050]: I1211 14:08:05.956383 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-8db84466c-xqjfw" podUID="38fac7eb-e076-4c58-9d8e-961461e27f92" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.127:5353: i/o timeout" Dec 11 14:08:06 crc kubenswrapper[5050]: I1211 14:08:06.021691 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-tnv94"] Dec 11 14:08:06 crc kubenswrapper[5050]: I1211 14:08:06.043828 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 11 14:08:06 crc kubenswrapper[5050]: I1211 14:08:06.145793 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 11 14:08:06 crc kubenswrapper[5050]: W1211 14:08:06.166774 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1553db29_21b6_4403_ab72_67c4d725a99d.slice/crio-f567e6adab7ba3318d0dc9a26001aa270bbf3fafa1056dd7029fc390fb591a63 WatchSource:0}: Error finding container 
f567e6adab7ba3318d0dc9a26001aa270bbf3fafa1056dd7029fc390fb591a63: Status 404 returned error can't find the container with id f567e6adab7ba3318d0dc9a26001aa270bbf3fafa1056dd7029fc390fb591a63 Dec 11 14:08:06 crc kubenswrapper[5050]: I1211 14:08:06.594275 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-tnv94" event={"ID":"c69742c2-ef6f-478b-bf96-754808e9a127","Type":"ContainerStarted","Data":"673029f70ba162cc7b362003c7987e48d633f387b63e6ba5c4b0b70b4937b5a3"} Dec 11 14:08:06 crc kubenswrapper[5050]: I1211 14:08:06.594791 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-tnv94" event={"ID":"c69742c2-ef6f-478b-bf96-754808e9a127","Type":"ContainerStarted","Data":"66b3c06e043bca64c94910fa3cd347b9e51af44e24c77ce8e71cb8c6e26dc93d"} Dec 11 14:08:06 crc kubenswrapper[5050]: I1211 14:08:06.599851 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-6vzzr" event={"ID":"107786ed-ea8f-4c2f-ac86-54b1bb504a69","Type":"ContainerStarted","Data":"a4e0e87c678bd4b93a483f4f15d4562ac37b3aa202a6b10e620dc13a9773d991"} Dec 11 14:08:06 crc kubenswrapper[5050]: I1211 14:08:06.601800 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"49e25755-0205-47fb-a88c-2f7a3291a687","Type":"ContainerStarted","Data":"bc5bd4da507e5e98c22354d37317440ad1b08d0fdef5e93aee4a399f722d5c89"} Dec 11 14:08:06 crc kubenswrapper[5050]: I1211 14:08:06.603920 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-zwtmr" event={"ID":"f1d16bf5-88f5-4ec7-943a-fc1ec7c15425","Type":"ContainerStarted","Data":"54c25f0b8e6964a434a0a013a4076df6309838c2adfc994b7ac912ba272f2845"} Dec 11 14:08:06 crc kubenswrapper[5050]: I1211 14:08:06.605068 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1553db29-21b6-4403-ab72-67c4d725a99d","Type":"ContainerStarted","Data":"f567e6adab7ba3318d0dc9a26001aa270bbf3fafa1056dd7029fc390fb591a63"} Dec 11 14:08:06 crc kubenswrapper[5050]: I1211 14:08:06.606329 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e61388f2-3282-4387-a850-9b4bffbf0e2b","Type":"ContainerStarted","Data":"4475fda2ac479fc444a9017e44f882b794a79273947b47195c2a2f1b5b58374c"} Dec 11 14:08:06 crc kubenswrapper[5050]: I1211 14:08:06.621332 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-tnv94" podStartSLOduration=5.621314076 podStartE2EDuration="5.621314076s" podCreationTimestamp="2025-12-11 14:08:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 14:08:06.617629317 +0000 UTC m=+1177.461351913" watchObservedRunningTime="2025-12-11 14:08:06.621314076 +0000 UTC m=+1177.465036662" Dec 11 14:08:06 crc kubenswrapper[5050]: I1211 14:08:06.648897 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-zwtmr" podStartSLOduration=3.104839216 podStartE2EDuration="32.648871809s" podCreationTimestamp="2025-12-11 14:07:34 +0000 UTC" firstStartedPulling="2025-12-11 14:07:35.827902616 +0000 UTC m=+1146.671625212" lastFinishedPulling="2025-12-11 14:08:05.371935219 +0000 UTC m=+1176.215657805" observedRunningTime="2025-12-11 14:08:06.632029995 +0000 UTC m=+1177.475752591" watchObservedRunningTime="2025-12-11 14:08:06.648871809 +0000 UTC m=+1177.492594395" Dec 
11 14:08:06 crc kubenswrapper[5050]: I1211 14:08:06.653972 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-6vzzr" podStartSLOduration=2.8921568989999997 podStartE2EDuration="32.653949756s" podCreationTimestamp="2025-12-11 14:07:34 +0000 UTC" firstStartedPulling="2025-12-11 14:07:35.915822757 +0000 UTC m=+1146.759545343" lastFinishedPulling="2025-12-11 14:08:05.677615614 +0000 UTC m=+1176.521338200" observedRunningTime="2025-12-11 14:08:06.652494236 +0000 UTC m=+1177.496216822" watchObservedRunningTime="2025-12-11 14:08:06.653949756 +0000 UTC m=+1177.497672342" Dec 11 14:08:07 crc kubenswrapper[5050]: I1211 14:08:07.560173 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="38fac7eb-e076-4c58-9d8e-961461e27f92" path="/var/lib/kubelet/pods/38fac7eb-e076-4c58-9d8e-961461e27f92/volumes" Dec 11 14:08:07 crc kubenswrapper[5050]: I1211 14:08:07.643027 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e61388f2-3282-4387-a850-9b4bffbf0e2b","Type":"ContainerStarted","Data":"0e7deec88bef5db6b7479f0a1d1b0310b574699f8cd3bdca098e09352d918df8"} Dec 11 14:08:07 crc kubenswrapper[5050]: I1211 14:08:07.643116 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e61388f2-3282-4387-a850-9b4bffbf0e2b","Type":"ContainerStarted","Data":"03c89d6c92be0c0483362292c60802d54cf7cf479b193165dada5800417ea68f"} Dec 11 14:08:07 crc kubenswrapper[5050]: I1211 14:08:07.658359 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1553db29-21b6-4403-ab72-67c4d725a99d","Type":"ContainerStarted","Data":"36bbba731d297f10aa7e33a81c20476d6cf18cf25132fed5a0399b134ec2f19c"} Dec 11 14:08:07 crc kubenswrapper[5050]: I1211 14:08:07.658456 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1553db29-21b6-4403-ab72-67c4d725a99d","Type":"ContainerStarted","Data":"fca9a9c9137887d1725a8887e573a39781d41f645e74763e0e567170226b2342"} Dec 11 14:08:07 crc kubenswrapper[5050]: I1211 14:08:07.694323 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=21.694295692 podStartE2EDuration="21.694295692s" podCreationTimestamp="2025-12-11 14:07:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 14:08:07.684510328 +0000 UTC m=+1178.528232914" watchObservedRunningTime="2025-12-11 14:08:07.694295692 +0000 UTC m=+1178.538018278" Dec 11 14:08:07 crc kubenswrapper[5050]: I1211 14:08:07.718366 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=16.71833374 podStartE2EDuration="16.71833374s" podCreationTimestamp="2025-12-11 14:07:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 14:08:07.711518316 +0000 UTC m=+1178.555240912" watchObservedRunningTime="2025-12-11 14:08:07.71833374 +0000 UTC m=+1178.562056326" Dec 11 14:08:08 crc kubenswrapper[5050]: I1211 14:08:08.690986 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"49e25755-0205-47fb-a88c-2f7a3291a687","Type":"ContainerStarted","Data":"7f6f169f1e21cd536cd2066b6883085b8233e7f19f2c348689687c417f9d7905"} Dec 11 14:08:09 crc kubenswrapper[5050]: I1211 14:08:09.702240 5050 generic.go:334] "Generic (PLEG): container finished" podID="f1d16bf5-88f5-4ec7-943a-fc1ec7c15425" containerID="54c25f0b8e6964a434a0a013a4076df6309838c2adfc994b7ac912ba272f2845" exitCode=0 Dec 11 14:08:09 crc kubenswrapper[5050]: I1211 14:08:09.702316 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-zwtmr" event={"ID":"f1d16bf5-88f5-4ec7-943a-fc1ec7c15425","Type":"ContainerDied","Data":"54c25f0b8e6964a434a0a013a4076df6309838c2adfc994b7ac912ba272f2845"} Dec 11 14:08:09 crc kubenswrapper[5050]: I1211 14:08:09.704810 5050 generic.go:334] "Generic (PLEG): container finished" podID="c69742c2-ef6f-478b-bf96-754808e9a127" containerID="673029f70ba162cc7b362003c7987e48d633f387b63e6ba5c4b0b70b4937b5a3" exitCode=0 Dec 11 14:08:09 crc kubenswrapper[5050]: I1211 14:08:09.704848 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-tnv94" event={"ID":"c69742c2-ef6f-478b-bf96-754808e9a127","Type":"ContainerDied","Data":"673029f70ba162cc7b362003c7987e48d633f387b63e6ba5c4b0b70b4937b5a3"} Dec 11 14:08:10 crc kubenswrapper[5050]: I1211 14:08:10.722975 5050 generic.go:334] "Generic (PLEG): container finished" podID="107786ed-ea8f-4c2f-ac86-54b1bb504a69" containerID="a4e0e87c678bd4b93a483f4f15d4562ac37b3aa202a6b10e620dc13a9773d991" exitCode=0 Dec 11 14:08:10 crc kubenswrapper[5050]: I1211 14:08:10.723056 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-6vzzr" event={"ID":"107786ed-ea8f-4c2f-ac86-54b1bb504a69","Type":"ContainerDied","Data":"a4e0e87c678bd4b93a483f4f15d4562ac37b3aa202a6b10e620dc13a9773d991"} Dec 11 14:08:11 crc kubenswrapper[5050]: I1211 14:08:11.832538 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Dec 11 14:08:11 crc kubenswrapper[5050]: I1211 14:08:11.833004 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Dec 11 14:08:11 crc kubenswrapper[5050]: I1211 14:08:11.887356 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Dec 11 14:08:11 crc kubenswrapper[5050]: I1211 14:08:11.900881 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Dec 11 14:08:12 crc kubenswrapper[5050]: I1211 14:08:12.687934 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-zwtmr" Dec 11 14:08:12 crc kubenswrapper[5050]: I1211 14:08:12.766828 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-zwtmr" Dec 11 14:08:12 crc kubenswrapper[5050]: I1211 14:08:12.768098 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-zwtmr" event={"ID":"f1d16bf5-88f5-4ec7-943a-fc1ec7c15425","Type":"ContainerDied","Data":"9e14aa931615f338ad584d55b33ab1930879c99690332bf2e69e014270736d2e"} Dec 11 14:08:12 crc kubenswrapper[5050]: I1211 14:08:12.768148 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9e14aa931615f338ad584d55b33ab1930879c99690332bf2e69e014270736d2e" Dec 11 14:08:12 crc kubenswrapper[5050]: I1211 14:08:12.768177 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Dec 11 14:08:12 crc kubenswrapper[5050]: I1211 14:08:12.768191 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Dec 11 14:08:12 crc kubenswrapper[5050]: I1211 14:08:12.783079 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nvmbz\" (UniqueName: \"kubernetes.io/projected/f1d16bf5-88f5-4ec7-943a-fc1ec7c15425-kube-api-access-nvmbz\") pod \"f1d16bf5-88f5-4ec7-943a-fc1ec7c15425\" (UID: \"f1d16bf5-88f5-4ec7-943a-fc1ec7c15425\") " Dec 11 14:08:12 crc kubenswrapper[5050]: I1211 14:08:12.783151 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1d16bf5-88f5-4ec7-943a-fc1ec7c15425-config-data\") pod \"f1d16bf5-88f5-4ec7-943a-fc1ec7c15425\" (UID: \"f1d16bf5-88f5-4ec7-943a-fc1ec7c15425\") " Dec 11 14:08:12 crc kubenswrapper[5050]: I1211 14:08:12.783763 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1d16bf5-88f5-4ec7-943a-fc1ec7c15425-combined-ca-bundle\") pod \"f1d16bf5-88f5-4ec7-943a-fc1ec7c15425\" (UID: \"f1d16bf5-88f5-4ec7-943a-fc1ec7c15425\") " Dec 11 14:08:12 crc kubenswrapper[5050]: I1211 14:08:12.783845 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f1d16bf5-88f5-4ec7-943a-fc1ec7c15425-logs\") pod \"f1d16bf5-88f5-4ec7-943a-fc1ec7c15425\" (UID: \"f1d16bf5-88f5-4ec7-943a-fc1ec7c15425\") " Dec 11 14:08:12 crc kubenswrapper[5050]: I1211 14:08:12.783916 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f1d16bf5-88f5-4ec7-943a-fc1ec7c15425-scripts\") pod \"f1d16bf5-88f5-4ec7-943a-fc1ec7c15425\" (UID: \"f1d16bf5-88f5-4ec7-943a-fc1ec7c15425\") " Dec 11 14:08:12 crc kubenswrapper[5050]: I1211 14:08:12.785894 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f1d16bf5-88f5-4ec7-943a-fc1ec7c15425-logs" (OuterVolumeSpecName: "logs") pod "f1d16bf5-88f5-4ec7-943a-fc1ec7c15425" (UID: "f1d16bf5-88f5-4ec7-943a-fc1ec7c15425"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 14:08:12 crc kubenswrapper[5050]: I1211 14:08:12.810310 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1d16bf5-88f5-4ec7-943a-fc1ec7c15425-scripts" (OuterVolumeSpecName: "scripts") pod "f1d16bf5-88f5-4ec7-943a-fc1ec7c15425" (UID: "f1d16bf5-88f5-4ec7-943a-fc1ec7c15425"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:08:12 crc kubenswrapper[5050]: I1211 14:08:12.810576 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1d16bf5-88f5-4ec7-943a-fc1ec7c15425-kube-api-access-nvmbz" (OuterVolumeSpecName: "kube-api-access-nvmbz") pod "f1d16bf5-88f5-4ec7-943a-fc1ec7c15425" (UID: "f1d16bf5-88f5-4ec7-943a-fc1ec7c15425"). InnerVolumeSpecName "kube-api-access-nvmbz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 14:08:12 crc kubenswrapper[5050]: I1211 14:08:12.829558 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1d16bf5-88f5-4ec7-943a-fc1ec7c15425-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f1d16bf5-88f5-4ec7-943a-fc1ec7c15425" (UID: "f1d16bf5-88f5-4ec7-943a-fc1ec7c15425"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:08:12 crc kubenswrapper[5050]: I1211 14:08:12.855184 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1d16bf5-88f5-4ec7-943a-fc1ec7c15425-config-data" (OuterVolumeSpecName: "config-data") pod "f1d16bf5-88f5-4ec7-943a-fc1ec7c15425" (UID: "f1d16bf5-88f5-4ec7-943a-fc1ec7c15425"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:08:12 crc kubenswrapper[5050]: I1211 14:08:12.887325 5050 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1d16bf5-88f5-4ec7-943a-fc1ec7c15425-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 14:08:12 crc kubenswrapper[5050]: I1211 14:08:12.887496 5050 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f1d16bf5-88f5-4ec7-943a-fc1ec7c15425-logs\") on node \"crc\" DevicePath \"\"" Dec 11 14:08:12 crc kubenswrapper[5050]: I1211 14:08:12.887515 5050 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f1d16bf5-88f5-4ec7-943a-fc1ec7c15425-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 14:08:12 crc kubenswrapper[5050]: I1211 14:08:12.887525 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nvmbz\" (UniqueName: \"kubernetes.io/projected/f1d16bf5-88f5-4ec7-943a-fc1ec7c15425-kube-api-access-nvmbz\") on node \"crc\" DevicePath \"\"" Dec 11 14:08:12 crc kubenswrapper[5050]: I1211 14:08:12.887537 5050 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1d16bf5-88f5-4ec7-943a-fc1ec7c15425-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 14:08:13 crc kubenswrapper[5050]: I1211 14:08:13.868453 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-78ccc9f8bd-jdg2t"] Dec 11 14:08:13 crc kubenswrapper[5050]: E1211 14:08:13.869413 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38fac7eb-e076-4c58-9d8e-961461e27f92" containerName="dnsmasq-dns" Dec 11 14:08:13 crc kubenswrapper[5050]: I1211 14:08:13.869431 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="38fac7eb-e076-4c58-9d8e-961461e27f92" containerName="dnsmasq-dns" Dec 11 14:08:13 crc kubenswrapper[5050]: E1211 14:08:13.869457 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1d16bf5-88f5-4ec7-943a-fc1ec7c15425" containerName="placement-db-sync" Dec 11 14:08:13 crc kubenswrapper[5050]: I1211 14:08:13.869463 5050 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="f1d16bf5-88f5-4ec7-943a-fc1ec7c15425" containerName="placement-db-sync" Dec 11 14:08:13 crc kubenswrapper[5050]: E1211 14:08:13.869491 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38fac7eb-e076-4c58-9d8e-961461e27f92" containerName="init" Dec 11 14:08:13 crc kubenswrapper[5050]: I1211 14:08:13.869498 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="38fac7eb-e076-4c58-9d8e-961461e27f92" containerName="init" Dec 11 14:08:13 crc kubenswrapper[5050]: I1211 14:08:13.869688 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1d16bf5-88f5-4ec7-943a-fc1ec7c15425" containerName="placement-db-sync" Dec 11 14:08:13 crc kubenswrapper[5050]: I1211 14:08:13.869705 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="38fac7eb-e076-4c58-9d8e-961461e27f92" containerName="dnsmasq-dns" Dec 11 14:08:13 crc kubenswrapper[5050]: I1211 14:08:13.870800 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-78ccc9f8bd-jdg2t" Dec 11 14:08:13 crc kubenswrapper[5050]: I1211 14:08:13.876659 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Dec 11 14:08:13 crc kubenswrapper[5050]: I1211 14:08:13.876690 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Dec 11 14:08:13 crc kubenswrapper[5050]: I1211 14:08:13.876979 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-mkvlc" Dec 11 14:08:13 crc kubenswrapper[5050]: I1211 14:08:13.877140 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Dec 11 14:08:13 crc kubenswrapper[5050]: I1211 14:08:13.877331 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Dec 11 14:08:13 crc kubenswrapper[5050]: I1211 14:08:13.901729 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-78ccc9f8bd-jdg2t"] Dec 11 14:08:14 crc kubenswrapper[5050]: I1211 14:08:14.014821 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4de557a0-8b74-4d40-8c91-351ba127eb13-scripts\") pod \"placement-78ccc9f8bd-jdg2t\" (UID: \"4de557a0-8b74-4d40-8c91-351ba127eb13\") " pod="openstack/placement-78ccc9f8bd-jdg2t" Dec 11 14:08:14 crc kubenswrapper[5050]: I1211 14:08:14.014873 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4de557a0-8b74-4d40-8c91-351ba127eb13-config-data\") pod \"placement-78ccc9f8bd-jdg2t\" (UID: \"4de557a0-8b74-4d40-8c91-351ba127eb13\") " pod="openstack/placement-78ccc9f8bd-jdg2t" Dec 11 14:08:14 crc kubenswrapper[5050]: I1211 14:08:14.014918 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4de557a0-8b74-4d40-8c91-351ba127eb13-combined-ca-bundle\") pod \"placement-78ccc9f8bd-jdg2t\" (UID: \"4de557a0-8b74-4d40-8c91-351ba127eb13\") " pod="openstack/placement-78ccc9f8bd-jdg2t" Dec 11 14:08:14 crc kubenswrapper[5050]: I1211 14:08:14.014965 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4de557a0-8b74-4d40-8c91-351ba127eb13-public-tls-certs\") pod 
\"placement-78ccc9f8bd-jdg2t\" (UID: \"4de557a0-8b74-4d40-8c91-351ba127eb13\") " pod="openstack/placement-78ccc9f8bd-jdg2t" Dec 11 14:08:14 crc kubenswrapper[5050]: I1211 14:08:14.014985 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4de557a0-8b74-4d40-8c91-351ba127eb13-internal-tls-certs\") pod \"placement-78ccc9f8bd-jdg2t\" (UID: \"4de557a0-8b74-4d40-8c91-351ba127eb13\") " pod="openstack/placement-78ccc9f8bd-jdg2t" Dec 11 14:08:14 crc kubenswrapper[5050]: I1211 14:08:14.015024 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vzrcd\" (UniqueName: \"kubernetes.io/projected/4de557a0-8b74-4d40-8c91-351ba127eb13-kube-api-access-vzrcd\") pod \"placement-78ccc9f8bd-jdg2t\" (UID: \"4de557a0-8b74-4d40-8c91-351ba127eb13\") " pod="openstack/placement-78ccc9f8bd-jdg2t" Dec 11 14:08:14 crc kubenswrapper[5050]: I1211 14:08:14.015054 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4de557a0-8b74-4d40-8c91-351ba127eb13-logs\") pod \"placement-78ccc9f8bd-jdg2t\" (UID: \"4de557a0-8b74-4d40-8c91-351ba127eb13\") " pod="openstack/placement-78ccc9f8bd-jdg2t" Dec 11 14:08:14 crc kubenswrapper[5050]: I1211 14:08:14.116638 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4de557a0-8b74-4d40-8c91-351ba127eb13-scripts\") pod \"placement-78ccc9f8bd-jdg2t\" (UID: \"4de557a0-8b74-4d40-8c91-351ba127eb13\") " pod="openstack/placement-78ccc9f8bd-jdg2t" Dec 11 14:08:14 crc kubenswrapper[5050]: I1211 14:08:14.116710 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4de557a0-8b74-4d40-8c91-351ba127eb13-config-data\") pod \"placement-78ccc9f8bd-jdg2t\" (UID: \"4de557a0-8b74-4d40-8c91-351ba127eb13\") " pod="openstack/placement-78ccc9f8bd-jdg2t" Dec 11 14:08:14 crc kubenswrapper[5050]: I1211 14:08:14.116774 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4de557a0-8b74-4d40-8c91-351ba127eb13-combined-ca-bundle\") pod \"placement-78ccc9f8bd-jdg2t\" (UID: \"4de557a0-8b74-4d40-8c91-351ba127eb13\") " pod="openstack/placement-78ccc9f8bd-jdg2t" Dec 11 14:08:14 crc kubenswrapper[5050]: I1211 14:08:14.116823 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4de557a0-8b74-4d40-8c91-351ba127eb13-public-tls-certs\") pod \"placement-78ccc9f8bd-jdg2t\" (UID: \"4de557a0-8b74-4d40-8c91-351ba127eb13\") " pod="openstack/placement-78ccc9f8bd-jdg2t" Dec 11 14:08:14 crc kubenswrapper[5050]: I1211 14:08:14.116842 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4de557a0-8b74-4d40-8c91-351ba127eb13-internal-tls-certs\") pod \"placement-78ccc9f8bd-jdg2t\" (UID: \"4de557a0-8b74-4d40-8c91-351ba127eb13\") " pod="openstack/placement-78ccc9f8bd-jdg2t" Dec 11 14:08:14 crc kubenswrapper[5050]: I1211 14:08:14.116863 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vzrcd\" (UniqueName: \"kubernetes.io/projected/4de557a0-8b74-4d40-8c91-351ba127eb13-kube-api-access-vzrcd\") pod \"placement-78ccc9f8bd-jdg2t\" 
(UID: \"4de557a0-8b74-4d40-8c91-351ba127eb13\") " pod="openstack/placement-78ccc9f8bd-jdg2t" Dec 11 14:08:14 crc kubenswrapper[5050]: I1211 14:08:14.116921 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4de557a0-8b74-4d40-8c91-351ba127eb13-logs\") pod \"placement-78ccc9f8bd-jdg2t\" (UID: \"4de557a0-8b74-4d40-8c91-351ba127eb13\") " pod="openstack/placement-78ccc9f8bd-jdg2t" Dec 11 14:08:14 crc kubenswrapper[5050]: I1211 14:08:14.117495 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4de557a0-8b74-4d40-8c91-351ba127eb13-logs\") pod \"placement-78ccc9f8bd-jdg2t\" (UID: \"4de557a0-8b74-4d40-8c91-351ba127eb13\") " pod="openstack/placement-78ccc9f8bd-jdg2t" Dec 11 14:08:14 crc kubenswrapper[5050]: I1211 14:08:14.123469 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4de557a0-8b74-4d40-8c91-351ba127eb13-combined-ca-bundle\") pod \"placement-78ccc9f8bd-jdg2t\" (UID: \"4de557a0-8b74-4d40-8c91-351ba127eb13\") " pod="openstack/placement-78ccc9f8bd-jdg2t" Dec 11 14:08:14 crc kubenswrapper[5050]: I1211 14:08:14.138269 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4de557a0-8b74-4d40-8c91-351ba127eb13-public-tls-certs\") pod \"placement-78ccc9f8bd-jdg2t\" (UID: \"4de557a0-8b74-4d40-8c91-351ba127eb13\") " pod="openstack/placement-78ccc9f8bd-jdg2t" Dec 11 14:08:14 crc kubenswrapper[5050]: I1211 14:08:14.138747 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4de557a0-8b74-4d40-8c91-351ba127eb13-internal-tls-certs\") pod \"placement-78ccc9f8bd-jdg2t\" (UID: \"4de557a0-8b74-4d40-8c91-351ba127eb13\") " pod="openstack/placement-78ccc9f8bd-jdg2t" Dec 11 14:08:14 crc kubenswrapper[5050]: I1211 14:08:14.140763 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4de557a0-8b74-4d40-8c91-351ba127eb13-scripts\") pod \"placement-78ccc9f8bd-jdg2t\" (UID: \"4de557a0-8b74-4d40-8c91-351ba127eb13\") " pod="openstack/placement-78ccc9f8bd-jdg2t" Dec 11 14:08:14 crc kubenswrapper[5050]: I1211 14:08:14.140772 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4de557a0-8b74-4d40-8c91-351ba127eb13-config-data\") pod \"placement-78ccc9f8bd-jdg2t\" (UID: \"4de557a0-8b74-4d40-8c91-351ba127eb13\") " pod="openstack/placement-78ccc9f8bd-jdg2t" Dec 11 14:08:14 crc kubenswrapper[5050]: I1211 14:08:14.144422 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vzrcd\" (UniqueName: \"kubernetes.io/projected/4de557a0-8b74-4d40-8c91-351ba127eb13-kube-api-access-vzrcd\") pod \"placement-78ccc9f8bd-jdg2t\" (UID: \"4de557a0-8b74-4d40-8c91-351ba127eb13\") " pod="openstack/placement-78ccc9f8bd-jdg2t" Dec 11 14:08:14 crc kubenswrapper[5050]: I1211 14:08:14.187823 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-78ccc9f8bd-jdg2t" Dec 11 14:08:14 crc kubenswrapper[5050]: I1211 14:08:14.792671 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-rdbqg" event={"ID":"aff61f93-f202-4057-a14e-7b395a73e323","Type":"ContainerDied","Data":"a1fbc95eb3b3987970a436f95d6c365fceacbe66e3d72a6ce3cf2ff678c4bb9f"} Dec 11 14:08:14 crc kubenswrapper[5050]: I1211 14:08:14.796739 5050 generic.go:334] "Generic (PLEG): container finished" podID="aff61f93-f202-4057-a14e-7b395a73e323" containerID="a1fbc95eb3b3987970a436f95d6c365fceacbe66e3d72a6ce3cf2ff678c4bb9f" exitCode=0 Dec 11 14:08:14 crc kubenswrapper[5050]: I1211 14:08:14.796993 5050 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 11 14:08:14 crc kubenswrapper[5050]: I1211 14:08:14.797045 5050 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 11 14:08:15 crc kubenswrapper[5050]: I1211 14:08:15.364902 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Dec 11 14:08:15 crc kubenswrapper[5050]: I1211 14:08:15.366634 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Dec 11 14:08:15 crc kubenswrapper[5050]: I1211 14:08:15.803163 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-tnv94" Dec 11 14:08:15 crc kubenswrapper[5050]: I1211 14:08:15.823189 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-6vzzr" Dec 11 14:08:15 crc kubenswrapper[5050]: I1211 14:08:15.842679 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-tnv94" Dec 11 14:08:15 crc kubenswrapper[5050]: I1211 14:08:15.843246 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-tnv94" event={"ID":"c69742c2-ef6f-478b-bf96-754808e9a127","Type":"ContainerDied","Data":"66b3c06e043bca64c94910fa3cd347b9e51af44e24c77ce8e71cb8c6e26dc93d"} Dec 11 14:08:15 crc kubenswrapper[5050]: I1211 14:08:15.843280 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="66b3c06e043bca64c94910fa3cd347b9e51af44e24c77ce8e71cb8c6e26dc93d" Dec 11 14:08:15 crc kubenswrapper[5050]: I1211 14:08:15.858796 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c69742c2-ef6f-478b-bf96-754808e9a127-scripts\") pod \"c69742c2-ef6f-478b-bf96-754808e9a127\" (UID: \"c69742c2-ef6f-478b-bf96-754808e9a127\") " Dec 11 14:08:15 crc kubenswrapper[5050]: I1211 14:08:15.858890 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c69742c2-ef6f-478b-bf96-754808e9a127-config-data\") pod \"c69742c2-ef6f-478b-bf96-754808e9a127\" (UID: \"c69742c2-ef6f-478b-bf96-754808e9a127\") " Dec 11 14:08:15 crc kubenswrapper[5050]: I1211 14:08:15.858945 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c69742c2-ef6f-478b-bf96-754808e9a127-fernet-keys\") pod \"c69742c2-ef6f-478b-bf96-754808e9a127\" (UID: \"c69742c2-ef6f-478b-bf96-754808e9a127\") " Dec 11 14:08:15 crc kubenswrapper[5050]: I1211 14:08:15.859142 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-zxvjf\" (UniqueName: \"kubernetes.io/projected/c69742c2-ef6f-478b-bf96-754808e9a127-kube-api-access-zxvjf\") pod \"c69742c2-ef6f-478b-bf96-754808e9a127\" (UID: \"c69742c2-ef6f-478b-bf96-754808e9a127\") " Dec 11 14:08:15 crc kubenswrapper[5050]: I1211 14:08:15.859169 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c69742c2-ef6f-478b-bf96-754808e9a127-credential-keys\") pod \"c69742c2-ef6f-478b-bf96-754808e9a127\" (UID: \"c69742c2-ef6f-478b-bf96-754808e9a127\") " Dec 11 14:08:15 crc kubenswrapper[5050]: I1211 14:08:15.859307 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c69742c2-ef6f-478b-bf96-754808e9a127-combined-ca-bundle\") pod \"c69742c2-ef6f-478b-bf96-754808e9a127\" (UID: \"c69742c2-ef6f-478b-bf96-754808e9a127\") " Dec 11 14:08:15 crc kubenswrapper[5050]: I1211 14:08:15.867062 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-6vzzr" Dec 11 14:08:15 crc kubenswrapper[5050]: I1211 14:08:15.868133 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-6vzzr" event={"ID":"107786ed-ea8f-4c2f-ac86-54b1bb504a69","Type":"ContainerDied","Data":"5a508779bbb983a2c769b51c0e588de7c8e3373254ef2dd4b2dbac4a0a8f525b"} Dec 11 14:08:15 crc kubenswrapper[5050]: I1211 14:08:15.868187 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5a508779bbb983a2c769b51c0e588de7c8e3373254ef2dd4b2dbac4a0a8f525b" Dec 11 14:08:15 crc kubenswrapper[5050]: I1211 14:08:15.878305 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c69742c2-ef6f-478b-bf96-754808e9a127-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "c69742c2-ef6f-478b-bf96-754808e9a127" (UID: "c69742c2-ef6f-478b-bf96-754808e9a127"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:08:15 crc kubenswrapper[5050]: I1211 14:08:15.878539 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c69742c2-ef6f-478b-bf96-754808e9a127-kube-api-access-zxvjf" (OuterVolumeSpecName: "kube-api-access-zxvjf") pod "c69742c2-ef6f-478b-bf96-754808e9a127" (UID: "c69742c2-ef6f-478b-bf96-754808e9a127"). InnerVolumeSpecName "kube-api-access-zxvjf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 14:08:15 crc kubenswrapper[5050]: I1211 14:08:15.879092 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c69742c2-ef6f-478b-bf96-754808e9a127-scripts" (OuterVolumeSpecName: "scripts") pod "c69742c2-ef6f-478b-bf96-754808e9a127" (UID: "c69742c2-ef6f-478b-bf96-754808e9a127"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:08:15 crc kubenswrapper[5050]: I1211 14:08:15.902043 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c69742c2-ef6f-478b-bf96-754808e9a127-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "c69742c2-ef6f-478b-bf96-754808e9a127" (UID: "c69742c2-ef6f-478b-bf96-754808e9a127"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:08:15 crc kubenswrapper[5050]: I1211 14:08:15.945590 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-78ccc9f8bd-jdg2t"] Dec 11 14:08:15 crc kubenswrapper[5050]: I1211 14:08:15.962443 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/107786ed-ea8f-4c2f-ac86-54b1bb504a69-combined-ca-bundle\") pod \"107786ed-ea8f-4c2f-ac86-54b1bb504a69\" (UID: \"107786ed-ea8f-4c2f-ac86-54b1bb504a69\") " Dec 11 14:08:15 crc kubenswrapper[5050]: I1211 14:08:15.962568 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/107786ed-ea8f-4c2f-ac86-54b1bb504a69-db-sync-config-data\") pod \"107786ed-ea8f-4c2f-ac86-54b1bb504a69\" (UID: \"107786ed-ea8f-4c2f-ac86-54b1bb504a69\") " Dec 11 14:08:15 crc kubenswrapper[5050]: I1211 14:08:15.962650 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sx8rt\" (UniqueName: \"kubernetes.io/projected/107786ed-ea8f-4c2f-ac86-54b1bb504a69-kube-api-access-sx8rt\") pod \"107786ed-ea8f-4c2f-ac86-54b1bb504a69\" (UID: \"107786ed-ea8f-4c2f-ac86-54b1bb504a69\") " Dec 11 14:08:15 crc kubenswrapper[5050]: I1211 14:08:15.964635 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zxvjf\" (UniqueName: \"kubernetes.io/projected/c69742c2-ef6f-478b-bf96-754808e9a127-kube-api-access-zxvjf\") on node \"crc\" DevicePath \"\"" Dec 11 14:08:15 crc kubenswrapper[5050]: I1211 14:08:15.964658 5050 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c69742c2-ef6f-478b-bf96-754808e9a127-credential-keys\") on node \"crc\" DevicePath \"\"" Dec 11 14:08:15 crc kubenswrapper[5050]: I1211 14:08:15.964667 5050 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c69742c2-ef6f-478b-bf96-754808e9a127-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 14:08:15 crc kubenswrapper[5050]: I1211 14:08:15.964678 5050 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c69742c2-ef6f-478b-bf96-754808e9a127-fernet-keys\") on node \"crc\" DevicePath \"\"" Dec 11 14:08:15 crc kubenswrapper[5050]: W1211 14:08:15.972466 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4de557a0_8b74_4d40_8c91_351ba127eb13.slice/crio-990e9b23f5d093ce7839248fb892b423f7090078e0fff3c4669e7c3068b46283 WatchSource:0}: Error finding container 990e9b23f5d093ce7839248fb892b423f7090078e0fff3c4669e7c3068b46283: Status 404 returned error can't find the container with id 990e9b23f5d093ce7839248fb892b423f7090078e0fff3c4669e7c3068b46283 Dec 11 14:08:16 crc kubenswrapper[5050]: I1211 14:08:15.991652 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/107786ed-ea8f-4c2f-ac86-54b1bb504a69-kube-api-access-sx8rt" (OuterVolumeSpecName: "kube-api-access-sx8rt") pod "107786ed-ea8f-4c2f-ac86-54b1bb504a69" (UID: "107786ed-ea8f-4c2f-ac86-54b1bb504a69"). InnerVolumeSpecName "kube-api-access-sx8rt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 14:08:16 crc kubenswrapper[5050]: I1211 14:08:15.991794 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/107786ed-ea8f-4c2f-ac86-54b1bb504a69-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "107786ed-ea8f-4c2f-ac86-54b1bb504a69" (UID: "107786ed-ea8f-4c2f-ac86-54b1bb504a69"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:08:16 crc kubenswrapper[5050]: I1211 14:08:15.991938 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c69742c2-ef6f-478b-bf96-754808e9a127-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c69742c2-ef6f-478b-bf96-754808e9a127" (UID: "c69742c2-ef6f-478b-bf96-754808e9a127"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:08:16 crc kubenswrapper[5050]: I1211 14:08:16.069705 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sx8rt\" (UniqueName: \"kubernetes.io/projected/107786ed-ea8f-4c2f-ac86-54b1bb504a69-kube-api-access-sx8rt\") on node \"crc\" DevicePath \"\"" Dec 11 14:08:16 crc kubenswrapper[5050]: I1211 14:08:16.069744 5050 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c69742c2-ef6f-478b-bf96-754808e9a127-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 14:08:16 crc kubenswrapper[5050]: I1211 14:08:16.069754 5050 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/107786ed-ea8f-4c2f-ac86-54b1bb504a69-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 14:08:16 crc kubenswrapper[5050]: I1211 14:08:16.086303 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c69742c2-ef6f-478b-bf96-754808e9a127-config-data" (OuterVolumeSpecName: "config-data") pod "c69742c2-ef6f-478b-bf96-754808e9a127" (UID: "c69742c2-ef6f-478b-bf96-754808e9a127"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:08:16 crc kubenswrapper[5050]: I1211 14:08:16.098284 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/107786ed-ea8f-4c2f-ac86-54b1bb504a69-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "107786ed-ea8f-4c2f-ac86-54b1bb504a69" (UID: "107786ed-ea8f-4c2f-ac86-54b1bb504a69"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:08:16 crc kubenswrapper[5050]: I1211 14:08:16.129421 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-rdbqg" Dec 11 14:08:16 crc kubenswrapper[5050]: I1211 14:08:16.172690 5050 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c69742c2-ef6f-478b-bf96-754808e9a127-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 14:08:16 crc kubenswrapper[5050]: I1211 14:08:16.172729 5050 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/107786ed-ea8f-4c2f-ac86-54b1bb504a69-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 14:08:16 crc kubenswrapper[5050]: I1211 14:08:16.277902 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cx5vq\" (UniqueName: \"kubernetes.io/projected/aff61f93-f202-4057-a14e-7b395a73e323-kube-api-access-cx5vq\") pod \"aff61f93-f202-4057-a14e-7b395a73e323\" (UID: \"aff61f93-f202-4057-a14e-7b395a73e323\") " Dec 11 14:08:16 crc kubenswrapper[5050]: I1211 14:08:16.278127 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/aff61f93-f202-4057-a14e-7b395a73e323-config\") pod \"aff61f93-f202-4057-a14e-7b395a73e323\" (UID: \"aff61f93-f202-4057-a14e-7b395a73e323\") " Dec 11 14:08:16 crc kubenswrapper[5050]: I1211 14:08:16.278195 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aff61f93-f202-4057-a14e-7b395a73e323-combined-ca-bundle\") pod \"aff61f93-f202-4057-a14e-7b395a73e323\" (UID: \"aff61f93-f202-4057-a14e-7b395a73e323\") " Dec 11 14:08:16 crc kubenswrapper[5050]: I1211 14:08:16.287190 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aff61f93-f202-4057-a14e-7b395a73e323-kube-api-access-cx5vq" (OuterVolumeSpecName: "kube-api-access-cx5vq") pod "aff61f93-f202-4057-a14e-7b395a73e323" (UID: "aff61f93-f202-4057-a14e-7b395a73e323"). InnerVolumeSpecName "kube-api-access-cx5vq". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 14:08:16 crc kubenswrapper[5050]: I1211 14:08:16.305763 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aff61f93-f202-4057-a14e-7b395a73e323-config" (OuterVolumeSpecName: "config") pod "aff61f93-f202-4057-a14e-7b395a73e323" (UID: "aff61f93-f202-4057-a14e-7b395a73e323"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:08:16 crc kubenswrapper[5050]: I1211 14:08:16.310419 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aff61f93-f202-4057-a14e-7b395a73e323-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "aff61f93-f202-4057-a14e-7b395a73e323" (UID: "aff61f93-f202-4057-a14e-7b395a73e323"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:08:16 crc kubenswrapper[5050]: I1211 14:08:16.380427 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cx5vq\" (UniqueName: \"kubernetes.io/projected/aff61f93-f202-4057-a14e-7b395a73e323-kube-api-access-cx5vq\") on node \"crc\" DevicePath \"\"" Dec 11 14:08:16 crc kubenswrapper[5050]: I1211 14:08:16.380463 5050 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/aff61f93-f202-4057-a14e-7b395a73e323-config\") on node \"crc\" DevicePath \"\"" Dec 11 14:08:16 crc kubenswrapper[5050]: I1211 14:08:16.380475 5050 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aff61f93-f202-4057-a14e-7b395a73e323-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 14:08:16 crc kubenswrapper[5050]: I1211 14:08:16.881063 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"49e25755-0205-47fb-a88c-2f7a3291a687","Type":"ContainerStarted","Data":"363bef5fc02a72922b8027ac00256b6492310726e090e9dab94b12db5a9c9a9e"} Dec 11 14:08:16 crc kubenswrapper[5050]: I1211 14:08:16.885265 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-rdbqg" event={"ID":"aff61f93-f202-4057-a14e-7b395a73e323","Type":"ContainerDied","Data":"de395991995705ad773b339b04a206f6026ad47c0b1a2ff0b31463aa71cf0cf3"} Dec 11 14:08:16 crc kubenswrapper[5050]: I1211 14:08:16.885321 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="de395991995705ad773b339b04a206f6026ad47c0b1a2ff0b31463aa71cf0cf3" Dec 11 14:08:16 crc kubenswrapper[5050]: I1211 14:08:16.885398 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-rdbqg" Dec 11 14:08:16 crc kubenswrapper[5050]: I1211 14:08:16.896820 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-78ccc9f8bd-jdg2t" event={"ID":"4de557a0-8b74-4d40-8c91-351ba127eb13","Type":"ContainerStarted","Data":"8af0220738d7b4267aab1e60eaa3da9d17f3f47fefe09dc1901f5e2bee442704"} Dec 11 14:08:16 crc kubenswrapper[5050]: I1211 14:08:16.896891 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-78ccc9f8bd-jdg2t" event={"ID":"4de557a0-8b74-4d40-8c91-351ba127eb13","Type":"ContainerStarted","Data":"1977929bc424b057bf59a3155bf7f4cfdfe00b2e3f9856bd807dc72825864a27"} Dec 11 14:08:16 crc kubenswrapper[5050]: I1211 14:08:16.896903 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-78ccc9f8bd-jdg2t" event={"ID":"4de557a0-8b74-4d40-8c91-351ba127eb13","Type":"ContainerStarted","Data":"990e9b23f5d093ce7839248fb892b423f7090078e0fff3c4669e7c3068b46283"} Dec 11 14:08:16 crc kubenswrapper[5050]: I1211 14:08:16.946259 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-78ccc9f8bd-jdg2t" podStartSLOduration=3.946225602 podStartE2EDuration="3.946225602s" podCreationTimestamp="2025-12-11 14:08:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 14:08:16.933434408 +0000 UTC m=+1187.777156994" watchObservedRunningTime="2025-12-11 14:08:16.946225602 +0000 UTC m=+1187.789948188" Dec 11 14:08:17 crc kubenswrapper[5050]: I1211 14:08:17.008433 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-7f54bc974d-nvhbp"] Dec 11 14:08:17 crc kubenswrapper[5050]: E1211 14:08:17.008965 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aff61f93-f202-4057-a14e-7b395a73e323" containerName="neutron-db-sync" Dec 11 14:08:17 crc kubenswrapper[5050]: I1211 14:08:17.008991 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="aff61f93-f202-4057-a14e-7b395a73e323" containerName="neutron-db-sync" Dec 11 14:08:17 crc kubenswrapper[5050]: E1211 14:08:17.009031 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="107786ed-ea8f-4c2f-ac86-54b1bb504a69" containerName="barbican-db-sync" Dec 11 14:08:17 crc kubenswrapper[5050]: I1211 14:08:17.009039 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="107786ed-ea8f-4c2f-ac86-54b1bb504a69" containerName="barbican-db-sync" Dec 11 14:08:17 crc kubenswrapper[5050]: E1211 14:08:17.009054 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c69742c2-ef6f-478b-bf96-754808e9a127" containerName="keystone-bootstrap" Dec 11 14:08:17 crc kubenswrapper[5050]: I1211 14:08:17.009063 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="c69742c2-ef6f-478b-bf96-754808e9a127" containerName="keystone-bootstrap" Dec 11 14:08:17 crc kubenswrapper[5050]: I1211 14:08:17.009315 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="aff61f93-f202-4057-a14e-7b395a73e323" containerName="neutron-db-sync" Dec 11 14:08:17 crc kubenswrapper[5050]: I1211 14:08:17.009338 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="107786ed-ea8f-4c2f-ac86-54b1bb504a69" containerName="barbican-db-sync" Dec 11 14:08:17 crc kubenswrapper[5050]: I1211 14:08:17.009349 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="c69742c2-ef6f-478b-bf96-754808e9a127" containerName="keystone-bootstrap" Dec 11 
14:08:17 crc kubenswrapper[5050]: I1211 14:08:17.010316 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-7f54bc974d-nvhbp" Dec 11 14:08:17 crc kubenswrapper[5050]: I1211 14:08:17.016503 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Dec 11 14:08:17 crc kubenswrapper[5050]: I1211 14:08:17.046659 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Dec 11 14:08:17 crc kubenswrapper[5050]: I1211 14:08:17.047081 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Dec 11 14:08:17 crc kubenswrapper[5050]: I1211 14:08:17.057344 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Dec 11 14:08:17 crc kubenswrapper[5050]: I1211 14:08:17.058468 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Dec 11 14:08:17 crc kubenswrapper[5050]: I1211 14:08:17.058522 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Dec 11 14:08:17 crc kubenswrapper[5050]: I1211 14:08:17.058611 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Dec 11 14:08:17 crc kubenswrapper[5050]: I1211 14:08:17.062843 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Dec 11 14:08:17 crc kubenswrapper[5050]: I1211 14:08:17.064851 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Dec 11 14:08:17 crc kubenswrapper[5050]: I1211 14:08:17.066844 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-tckgc" Dec 11 14:08:17 crc kubenswrapper[5050]: I1211 14:08:17.083855 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7f54bc974d-nvhbp"] Dec 11 14:08:17 crc kubenswrapper[5050]: I1211 14:08:17.182114 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Dec 11 14:08:17 crc kubenswrapper[5050]: I1211 14:08:17.198348 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-685444497c-x65wj"] Dec 11 14:08:17 crc kubenswrapper[5050]: I1211 14:08:17.200114 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Dec 11 14:08:17 crc kubenswrapper[5050]: I1211 14:08:17.200228 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-685444497c-x65wj" Dec 11 14:08:17 crc kubenswrapper[5050]: I1211 14:08:17.204291 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36acbdf3-346e-4207-8391-b2a03ef839e5-config-data\") pod \"keystone-7f54bc974d-nvhbp\" (UID: \"36acbdf3-346e-4207-8391-b2a03ef839e5\") " pod="openstack/keystone-7f54bc974d-nvhbp" Dec 11 14:08:17 crc kubenswrapper[5050]: I1211 14:08:17.204355 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/36acbdf3-346e-4207-8391-b2a03ef839e5-internal-tls-certs\") pod \"keystone-7f54bc974d-nvhbp\" (UID: \"36acbdf3-346e-4207-8391-b2a03ef839e5\") " pod="openstack/keystone-7f54bc974d-nvhbp" Dec 11 14:08:17 crc kubenswrapper[5050]: I1211 14:08:17.204412 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36acbdf3-346e-4207-8391-b2a03ef839e5-combined-ca-bundle\") pod \"keystone-7f54bc974d-nvhbp\" (UID: \"36acbdf3-346e-4207-8391-b2a03ef839e5\") " pod="openstack/keystone-7f54bc974d-nvhbp" Dec 11 14:08:17 crc kubenswrapper[5050]: I1211 14:08:17.204453 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/36acbdf3-346e-4207-8391-b2a03ef839e5-credential-keys\") pod \"keystone-7f54bc974d-nvhbp\" (UID: \"36acbdf3-346e-4207-8391-b2a03ef839e5\") " pod="openstack/keystone-7f54bc974d-nvhbp" Dec 11 14:08:17 crc kubenswrapper[5050]: I1211 14:08:17.204483 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6c868\" (UniqueName: \"kubernetes.io/projected/36acbdf3-346e-4207-8391-b2a03ef839e5-kube-api-access-6c868\") pod \"keystone-7f54bc974d-nvhbp\" (UID: \"36acbdf3-346e-4207-8391-b2a03ef839e5\") " pod="openstack/keystone-7f54bc974d-nvhbp" Dec 11 14:08:17 crc kubenswrapper[5050]: I1211 14:08:17.204512 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/36acbdf3-346e-4207-8391-b2a03ef839e5-fernet-keys\") pod \"keystone-7f54bc974d-nvhbp\" (UID: \"36acbdf3-346e-4207-8391-b2a03ef839e5\") " pod="openstack/keystone-7f54bc974d-nvhbp" Dec 11 14:08:17 crc kubenswrapper[5050]: I1211 14:08:17.204586 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/36acbdf3-346e-4207-8391-b2a03ef839e5-scripts\") pod \"keystone-7f54bc974d-nvhbp\" (UID: \"36acbdf3-346e-4207-8391-b2a03ef839e5\") " pod="openstack/keystone-7f54bc974d-nvhbp" Dec 11 14:08:17 crc kubenswrapper[5050]: I1211 14:08:17.204653 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/36acbdf3-346e-4207-8391-b2a03ef839e5-public-tls-certs\") pod \"keystone-7f54bc974d-nvhbp\" (UID: \"36acbdf3-346e-4207-8391-b2a03ef839e5\") " pod="openstack/keystone-7f54bc974d-nvhbp" Dec 11 14:08:17 crc kubenswrapper[5050]: I1211 14:08:17.234055 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-685444497c-x65wj"] Dec 11 14:08:17 crc kubenswrapper[5050]: I1211 14:08:17.295834 5050 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/neutron-7d59d8d5d8-lw5w5"] Dec 11 14:08:17 crc kubenswrapper[5050]: I1211 14:08:17.297717 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7d59d8d5d8-lw5w5" Dec 11 14:08:17 crc kubenswrapper[5050]: I1211 14:08:17.307413 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/36acbdf3-346e-4207-8391-b2a03ef839e5-scripts\") pod \"keystone-7f54bc974d-nvhbp\" (UID: \"36acbdf3-346e-4207-8391-b2a03ef839e5\") " pod="openstack/keystone-7f54bc974d-nvhbp" Dec 11 14:08:17 crc kubenswrapper[5050]: I1211 14:08:17.307495 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/28c7c2de-e924-4a41-af51-2f0f9e687952-dns-swift-storage-0\") pod \"dnsmasq-dns-685444497c-x65wj\" (UID: \"28c7c2de-e924-4a41-af51-2f0f9e687952\") " pod="openstack/dnsmasq-dns-685444497c-x65wj" Dec 11 14:08:17 crc kubenswrapper[5050]: I1211 14:08:17.307533 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/28c7c2de-e924-4a41-af51-2f0f9e687952-ovsdbserver-sb\") pod \"dnsmasq-dns-685444497c-x65wj\" (UID: \"28c7c2de-e924-4a41-af51-2f0f9e687952\") " pod="openstack/dnsmasq-dns-685444497c-x65wj" Dec 11 14:08:17 crc kubenswrapper[5050]: I1211 14:08:17.307573 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47jwk\" (UniqueName: \"kubernetes.io/projected/28c7c2de-e924-4a41-af51-2f0f9e687952-kube-api-access-47jwk\") pod \"dnsmasq-dns-685444497c-x65wj\" (UID: \"28c7c2de-e924-4a41-af51-2f0f9e687952\") " pod="openstack/dnsmasq-dns-685444497c-x65wj" Dec 11 14:08:17 crc kubenswrapper[5050]: I1211 14:08:17.307623 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/28c7c2de-e924-4a41-af51-2f0f9e687952-dns-svc\") pod \"dnsmasq-dns-685444497c-x65wj\" (UID: \"28c7c2de-e924-4a41-af51-2f0f9e687952\") " pod="openstack/dnsmasq-dns-685444497c-x65wj" Dec 11 14:08:17 crc kubenswrapper[5050]: I1211 14:08:17.307652 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/36acbdf3-346e-4207-8391-b2a03ef839e5-public-tls-certs\") pod \"keystone-7f54bc974d-nvhbp\" (UID: \"36acbdf3-346e-4207-8391-b2a03ef839e5\") " pod="openstack/keystone-7f54bc974d-nvhbp" Dec 11 14:08:17 crc kubenswrapper[5050]: I1211 14:08:17.307693 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28c7c2de-e924-4a41-af51-2f0f9e687952-config\") pod \"dnsmasq-dns-685444497c-x65wj\" (UID: \"28c7c2de-e924-4a41-af51-2f0f9e687952\") " pod="openstack/dnsmasq-dns-685444497c-x65wj" Dec 11 14:08:17 crc kubenswrapper[5050]: I1211 14:08:17.307738 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36acbdf3-346e-4207-8391-b2a03ef839e5-config-data\") pod \"keystone-7f54bc974d-nvhbp\" (UID: \"36acbdf3-346e-4207-8391-b2a03ef839e5\") " pod="openstack/keystone-7f54bc974d-nvhbp" Dec 11 14:08:17 crc kubenswrapper[5050]: I1211 14:08:17.307772 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/28c7c2de-e924-4a41-af51-2f0f9e687952-ovsdbserver-nb\") pod \"dnsmasq-dns-685444497c-x65wj\" (UID: \"28c7c2de-e924-4a41-af51-2f0f9e687952\") " pod="openstack/dnsmasq-dns-685444497c-x65wj" Dec 11 14:08:17 crc kubenswrapper[5050]: I1211 14:08:17.307803 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/36acbdf3-346e-4207-8391-b2a03ef839e5-internal-tls-certs\") pod \"keystone-7f54bc974d-nvhbp\" (UID: \"36acbdf3-346e-4207-8391-b2a03ef839e5\") " pod="openstack/keystone-7f54bc974d-nvhbp" Dec 11 14:08:17 crc kubenswrapper[5050]: I1211 14:08:17.307850 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36acbdf3-346e-4207-8391-b2a03ef839e5-combined-ca-bundle\") pod \"keystone-7f54bc974d-nvhbp\" (UID: \"36acbdf3-346e-4207-8391-b2a03ef839e5\") " pod="openstack/keystone-7f54bc974d-nvhbp" Dec 11 14:08:17 crc kubenswrapper[5050]: I1211 14:08:17.307890 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/36acbdf3-346e-4207-8391-b2a03ef839e5-credential-keys\") pod \"keystone-7f54bc974d-nvhbp\" (UID: \"36acbdf3-346e-4207-8391-b2a03ef839e5\") " pod="openstack/keystone-7f54bc974d-nvhbp" Dec 11 14:08:17 crc kubenswrapper[5050]: I1211 14:08:17.307914 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6c868\" (UniqueName: \"kubernetes.io/projected/36acbdf3-346e-4207-8391-b2a03ef839e5-kube-api-access-6c868\") pod \"keystone-7f54bc974d-nvhbp\" (UID: \"36acbdf3-346e-4207-8391-b2a03ef839e5\") " pod="openstack/keystone-7f54bc974d-nvhbp" Dec 11 14:08:17 crc kubenswrapper[5050]: I1211 14:08:17.307947 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/36acbdf3-346e-4207-8391-b2a03ef839e5-fernet-keys\") pod \"keystone-7f54bc974d-nvhbp\" (UID: \"36acbdf3-346e-4207-8391-b2a03ef839e5\") " pod="openstack/keystone-7f54bc974d-nvhbp" Dec 11 14:08:17 crc kubenswrapper[5050]: I1211 14:08:17.318561 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Dec 11 14:08:17 crc kubenswrapper[5050]: I1211 14:08:17.319076 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Dec 11 14:08:17 crc kubenswrapper[5050]: I1211 14:08:17.319249 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-kk6kg" Dec 11 14:08:17 crc kubenswrapper[5050]: I1211 14:08:17.319261 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Dec 11 14:08:17 crc kubenswrapper[5050]: I1211 14:08:17.339115 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/36acbdf3-346e-4207-8391-b2a03ef839e5-public-tls-certs\") pod \"keystone-7f54bc974d-nvhbp\" (UID: \"36acbdf3-346e-4207-8391-b2a03ef839e5\") " pod="openstack/keystone-7f54bc974d-nvhbp" Dec 11 14:08:17 crc kubenswrapper[5050]: I1211 14:08:17.362513 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/36acbdf3-346e-4207-8391-b2a03ef839e5-scripts\") pod \"keystone-7f54bc974d-nvhbp\" (UID: \"36acbdf3-346e-4207-8391-b2a03ef839e5\") " 
pod="openstack/keystone-7f54bc974d-nvhbp" Dec 11 14:08:17 crc kubenswrapper[5050]: I1211 14:08:17.364427 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/36acbdf3-346e-4207-8391-b2a03ef839e5-fernet-keys\") pod \"keystone-7f54bc974d-nvhbp\" (UID: \"36acbdf3-346e-4207-8391-b2a03ef839e5\") " pod="openstack/keystone-7f54bc974d-nvhbp" Dec 11 14:08:17 crc kubenswrapper[5050]: I1211 14:08:17.364710 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36acbdf3-346e-4207-8391-b2a03ef839e5-config-data\") pod \"keystone-7f54bc974d-nvhbp\" (UID: \"36acbdf3-346e-4207-8391-b2a03ef839e5\") " pod="openstack/keystone-7f54bc974d-nvhbp" Dec 11 14:08:17 crc kubenswrapper[5050]: I1211 14:08:17.378653 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7d59d8d5d8-lw5w5"] Dec 11 14:08:17 crc kubenswrapper[5050]: I1211 14:08:17.378863 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/36acbdf3-346e-4207-8391-b2a03ef839e5-internal-tls-certs\") pod \"keystone-7f54bc974d-nvhbp\" (UID: \"36acbdf3-346e-4207-8391-b2a03ef839e5\") " pod="openstack/keystone-7f54bc974d-nvhbp" Dec 11 14:08:17 crc kubenswrapper[5050]: I1211 14:08:17.395954 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36acbdf3-346e-4207-8391-b2a03ef839e5-combined-ca-bundle\") pod \"keystone-7f54bc974d-nvhbp\" (UID: \"36acbdf3-346e-4207-8391-b2a03ef839e5\") " pod="openstack/keystone-7f54bc974d-nvhbp" Dec 11 14:08:17 crc kubenswrapper[5050]: I1211 14:08:17.404915 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6c868\" (UniqueName: \"kubernetes.io/projected/36acbdf3-346e-4207-8391-b2a03ef839e5-kube-api-access-6c868\") pod \"keystone-7f54bc974d-nvhbp\" (UID: \"36acbdf3-346e-4207-8391-b2a03ef839e5\") " pod="openstack/keystone-7f54bc974d-nvhbp" Dec 11 14:08:17 crc kubenswrapper[5050]: I1211 14:08:17.423868 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/36acbdf3-346e-4207-8391-b2a03ef839e5-credential-keys\") pod \"keystone-7f54bc974d-nvhbp\" (UID: \"36acbdf3-346e-4207-8391-b2a03ef839e5\") " pod="openstack/keystone-7f54bc974d-nvhbp" Dec 11 14:08:17 crc kubenswrapper[5050]: I1211 14:08:17.431821 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/28c7c2de-e924-4a41-af51-2f0f9e687952-ovsdbserver-nb\") pod \"dnsmasq-dns-685444497c-x65wj\" (UID: \"28c7c2de-e924-4a41-af51-2f0f9e687952\") " pod="openstack/dnsmasq-dns-685444497c-x65wj" Dec 11 14:08:17 crc kubenswrapper[5050]: I1211 14:08:17.431947 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/bc5cca8e-233a-4621-87bc-d8c64a6d1d88-httpd-config\") pod \"neutron-7d59d8d5d8-lw5w5\" (UID: \"bc5cca8e-233a-4621-87bc-d8c64a6d1d88\") " pod="openstack/neutron-7d59d8d5d8-lw5w5" Dec 11 14:08:17 crc kubenswrapper[5050]: I1211 14:08:17.432033 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mw9t7\" (UniqueName: \"kubernetes.io/projected/bc5cca8e-233a-4621-87bc-d8c64a6d1d88-kube-api-access-mw9t7\") pod 
\"neutron-7d59d8d5d8-lw5w5\" (UID: \"bc5cca8e-233a-4621-87bc-d8c64a6d1d88\") " pod="openstack/neutron-7d59d8d5d8-lw5w5" Dec 11 14:08:17 crc kubenswrapper[5050]: I1211 14:08:17.432079 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc5cca8e-233a-4621-87bc-d8c64a6d1d88-combined-ca-bundle\") pod \"neutron-7d59d8d5d8-lw5w5\" (UID: \"bc5cca8e-233a-4621-87bc-d8c64a6d1d88\") " pod="openstack/neutron-7d59d8d5d8-lw5w5" Dec 11 14:08:17 crc kubenswrapper[5050]: I1211 14:08:17.432187 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/bc5cca8e-233a-4621-87bc-d8c64a6d1d88-ovndb-tls-certs\") pod \"neutron-7d59d8d5d8-lw5w5\" (UID: \"bc5cca8e-233a-4621-87bc-d8c64a6d1d88\") " pod="openstack/neutron-7d59d8d5d8-lw5w5" Dec 11 14:08:17 crc kubenswrapper[5050]: I1211 14:08:17.432287 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/28c7c2de-e924-4a41-af51-2f0f9e687952-dns-swift-storage-0\") pod \"dnsmasq-dns-685444497c-x65wj\" (UID: \"28c7c2de-e924-4a41-af51-2f0f9e687952\") " pod="openstack/dnsmasq-dns-685444497c-x65wj" Dec 11 14:08:17 crc kubenswrapper[5050]: I1211 14:08:17.432323 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/28c7c2de-e924-4a41-af51-2f0f9e687952-ovsdbserver-sb\") pod \"dnsmasq-dns-685444497c-x65wj\" (UID: \"28c7c2de-e924-4a41-af51-2f0f9e687952\") " pod="openstack/dnsmasq-dns-685444497c-x65wj" Dec 11 14:08:17 crc kubenswrapper[5050]: I1211 14:08:17.432372 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-47jwk\" (UniqueName: \"kubernetes.io/projected/28c7c2de-e924-4a41-af51-2f0f9e687952-kube-api-access-47jwk\") pod \"dnsmasq-dns-685444497c-x65wj\" (UID: \"28c7c2de-e924-4a41-af51-2f0f9e687952\") " pod="openstack/dnsmasq-dns-685444497c-x65wj" Dec 11 14:08:17 crc kubenswrapper[5050]: I1211 14:08:17.432407 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/bc5cca8e-233a-4621-87bc-d8c64a6d1d88-config\") pod \"neutron-7d59d8d5d8-lw5w5\" (UID: \"bc5cca8e-233a-4621-87bc-d8c64a6d1d88\") " pod="openstack/neutron-7d59d8d5d8-lw5w5" Dec 11 14:08:17 crc kubenswrapper[5050]: I1211 14:08:17.432455 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/28c7c2de-e924-4a41-af51-2f0f9e687952-dns-svc\") pod \"dnsmasq-dns-685444497c-x65wj\" (UID: \"28c7c2de-e924-4a41-af51-2f0f9e687952\") " pod="openstack/dnsmasq-dns-685444497c-x65wj" Dec 11 14:08:17 crc kubenswrapper[5050]: I1211 14:08:17.432502 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28c7c2de-e924-4a41-af51-2f0f9e687952-config\") pod \"dnsmasq-dns-685444497c-x65wj\" (UID: \"28c7c2de-e924-4a41-af51-2f0f9e687952\") " pod="openstack/dnsmasq-dns-685444497c-x65wj" Dec 11 14:08:17 crc kubenswrapper[5050]: I1211 14:08:17.433610 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28c7c2de-e924-4a41-af51-2f0f9e687952-config\") pod \"dnsmasq-dns-685444497c-x65wj\" (UID: 
\"28c7c2de-e924-4a41-af51-2f0f9e687952\") " pod="openstack/dnsmasq-dns-685444497c-x65wj" Dec 11 14:08:17 crc kubenswrapper[5050]: I1211 14:08:17.436202 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/28c7c2de-e924-4a41-af51-2f0f9e687952-ovsdbserver-nb\") pod \"dnsmasq-dns-685444497c-x65wj\" (UID: \"28c7c2de-e924-4a41-af51-2f0f9e687952\") " pod="openstack/dnsmasq-dns-685444497c-x65wj" Dec 11 14:08:17 crc kubenswrapper[5050]: I1211 14:08:17.439168 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/28c7c2de-e924-4a41-af51-2f0f9e687952-dns-swift-storage-0\") pod \"dnsmasq-dns-685444497c-x65wj\" (UID: \"28c7c2de-e924-4a41-af51-2f0f9e687952\") " pod="openstack/dnsmasq-dns-685444497c-x65wj" Dec 11 14:08:17 crc kubenswrapper[5050]: I1211 14:08:17.439837 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/28c7c2de-e924-4a41-af51-2f0f9e687952-ovsdbserver-sb\") pod \"dnsmasq-dns-685444497c-x65wj\" (UID: \"28c7c2de-e924-4a41-af51-2f0f9e687952\") " pod="openstack/dnsmasq-dns-685444497c-x65wj" Dec 11 14:08:17 crc kubenswrapper[5050]: I1211 14:08:17.442365 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/28c7c2de-e924-4a41-af51-2f0f9e687952-dns-svc\") pod \"dnsmasq-dns-685444497c-x65wj\" (UID: \"28c7c2de-e924-4a41-af51-2f0f9e687952\") " pod="openstack/dnsmasq-dns-685444497c-x65wj" Dec 11 14:08:17 crc kubenswrapper[5050]: I1211 14:08:17.463098 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-54bb9c4d69-975sg"] Dec 11 14:08:17 crc kubenswrapper[5050]: I1211 14:08:17.478457 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-54bb9c4d69-975sg" Dec 11 14:08:17 crc kubenswrapper[5050]: I1211 14:08:17.498335 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Dec 11 14:08:17 crc kubenswrapper[5050]: I1211 14:08:17.499224 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-zdpcp" Dec 11 14:08:17 crc kubenswrapper[5050]: I1211 14:08:17.499429 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Dec 11 14:08:17 crc kubenswrapper[5050]: I1211 14:08:17.508936 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-47jwk\" (UniqueName: \"kubernetes.io/projected/28c7c2de-e924-4a41-af51-2f0f9e687952-kube-api-access-47jwk\") pod \"dnsmasq-dns-685444497c-x65wj\" (UID: \"28c7c2de-e924-4a41-af51-2f0f9e687952\") " pod="openstack/dnsmasq-dns-685444497c-x65wj" Dec 11 14:08:17 crc kubenswrapper[5050]: I1211 14:08:17.516118 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-655647566b-n2tcs"] Dec 11 14:08:17 crc kubenswrapper[5050]: I1211 14:08:17.518276 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-655647566b-n2tcs" Dec 11 14:08:17 crc kubenswrapper[5050]: I1211 14:08:17.526149 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Dec 11 14:08:17 crc kubenswrapper[5050]: I1211 14:08:17.534785 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/bc5cca8e-233a-4621-87bc-d8c64a6d1d88-config\") pod \"neutron-7d59d8d5d8-lw5w5\" (UID: \"bc5cca8e-233a-4621-87bc-d8c64a6d1d88\") " pod="openstack/neutron-7d59d8d5d8-lw5w5" Dec 11 14:08:17 crc kubenswrapper[5050]: I1211 14:08:17.534876 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/934cae9e-c75b-434d-b1e1-d566d6fb8b7d-config-data\") pod \"barbican-worker-54bb9c4d69-975sg\" (UID: \"934cae9e-c75b-434d-b1e1-d566d6fb8b7d\") " pod="openstack/barbican-worker-54bb9c4d69-975sg" Dec 11 14:08:17 crc kubenswrapper[5050]: I1211 14:08:17.534930 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/934cae9e-c75b-434d-b1e1-d566d6fb8b7d-config-data-custom\") pod \"barbican-worker-54bb9c4d69-975sg\" (UID: \"934cae9e-c75b-434d-b1e1-d566d6fb8b7d\") " pod="openstack/barbican-worker-54bb9c4d69-975sg" Dec 11 14:08:17 crc kubenswrapper[5050]: I1211 14:08:17.534961 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/bc5cca8e-233a-4621-87bc-d8c64a6d1d88-httpd-config\") pod \"neutron-7d59d8d5d8-lw5w5\" (UID: \"bc5cca8e-233a-4621-87bc-d8c64a6d1d88\") " pod="openstack/neutron-7d59d8d5d8-lw5w5" Dec 11 14:08:17 crc kubenswrapper[5050]: I1211 14:08:17.534995 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/934cae9e-c75b-434d-b1e1-d566d6fb8b7d-combined-ca-bundle\") pod \"barbican-worker-54bb9c4d69-975sg\" (UID: \"934cae9e-c75b-434d-b1e1-d566d6fb8b7d\") " pod="openstack/barbican-worker-54bb9c4d69-975sg" Dec 11 14:08:17 crc kubenswrapper[5050]: I1211 14:08:17.535045 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mw9t7\" (UniqueName: \"kubernetes.io/projected/bc5cca8e-233a-4621-87bc-d8c64a6d1d88-kube-api-access-mw9t7\") pod \"neutron-7d59d8d5d8-lw5w5\" (UID: \"bc5cca8e-233a-4621-87bc-d8c64a6d1d88\") " pod="openstack/neutron-7d59d8d5d8-lw5w5" Dec 11 14:08:17 crc kubenswrapper[5050]: I1211 14:08:17.535075 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc5cca8e-233a-4621-87bc-d8c64a6d1d88-combined-ca-bundle\") pod \"neutron-7d59d8d5d8-lw5w5\" (UID: \"bc5cca8e-233a-4621-87bc-d8c64a6d1d88\") " pod="openstack/neutron-7d59d8d5d8-lw5w5" Dec 11 14:08:17 crc kubenswrapper[5050]: I1211 14:08:17.535134 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/bc5cca8e-233a-4621-87bc-d8c64a6d1d88-ovndb-tls-certs\") pod \"neutron-7d59d8d5d8-lw5w5\" (UID: \"bc5cca8e-233a-4621-87bc-d8c64a6d1d88\") " pod="openstack/neutron-7d59d8d5d8-lw5w5" Dec 11 14:08:17 crc kubenswrapper[5050]: I1211 14:08:17.535162 5050 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dlt4x\" (UniqueName: \"kubernetes.io/projected/934cae9e-c75b-434d-b1e1-d566d6fb8b7d-kube-api-access-dlt4x\") pod \"barbican-worker-54bb9c4d69-975sg\" (UID: \"934cae9e-c75b-434d-b1e1-d566d6fb8b7d\") " pod="openstack/barbican-worker-54bb9c4d69-975sg" Dec 11 14:08:17 crc kubenswrapper[5050]: I1211 14:08:17.535212 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/934cae9e-c75b-434d-b1e1-d566d6fb8b7d-logs\") pod \"barbican-worker-54bb9c4d69-975sg\" (UID: \"934cae9e-c75b-434d-b1e1-d566d6fb8b7d\") " pod="openstack/barbican-worker-54bb9c4d69-975sg" Dec 11 14:08:17 crc kubenswrapper[5050]: I1211 14:08:17.538961 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/bc5cca8e-233a-4621-87bc-d8c64a6d1d88-httpd-config\") pod \"neutron-7d59d8d5d8-lw5w5\" (UID: \"bc5cca8e-233a-4621-87bc-d8c64a6d1d88\") " pod="openstack/neutron-7d59d8d5d8-lw5w5" Dec 11 14:08:17 crc kubenswrapper[5050]: I1211 14:08:17.564710 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-685444497c-x65wj" Dec 11 14:08:17 crc kubenswrapper[5050]: I1211 14:08:17.581902 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/bc5cca8e-233a-4621-87bc-d8c64a6d1d88-ovndb-tls-certs\") pod \"neutron-7d59d8d5d8-lw5w5\" (UID: \"bc5cca8e-233a-4621-87bc-d8c64a6d1d88\") " pod="openstack/neutron-7d59d8d5d8-lw5w5" Dec 11 14:08:17 crc kubenswrapper[5050]: I1211 14:08:17.621544 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc5cca8e-233a-4621-87bc-d8c64a6d1d88-combined-ca-bundle\") pod \"neutron-7d59d8d5d8-lw5w5\" (UID: \"bc5cca8e-233a-4621-87bc-d8c64a6d1d88\") " pod="openstack/neutron-7d59d8d5d8-lw5w5" Dec 11 14:08:17 crc kubenswrapper[5050]: I1211 14:08:17.624055 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/bc5cca8e-233a-4621-87bc-d8c64a6d1d88-config\") pod \"neutron-7d59d8d5d8-lw5w5\" (UID: \"bc5cca8e-233a-4621-87bc-d8c64a6d1d88\") " pod="openstack/neutron-7d59d8d5d8-lw5w5" Dec 11 14:08:17 crc kubenswrapper[5050]: I1211 14:08:17.644834 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mw9t7\" (UniqueName: \"kubernetes.io/projected/bc5cca8e-233a-4621-87bc-d8c64a6d1d88-kube-api-access-mw9t7\") pod \"neutron-7d59d8d5d8-lw5w5\" (UID: \"bc5cca8e-233a-4621-87bc-d8c64a6d1d88\") " pod="openstack/neutron-7d59d8d5d8-lw5w5" Dec 11 14:08:17 crc kubenswrapper[5050]: I1211 14:08:17.658085 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-54bb9c4d69-975sg"] Dec 11 14:08:17 crc kubenswrapper[5050]: I1211 14:08:17.665501 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/569cb143-086a-42f1-9e8c-6f6f614c9ee2-combined-ca-bundle\") pod \"barbican-keystone-listener-655647566b-n2tcs\" (UID: \"569cb143-086a-42f1-9e8c-6f6f614c9ee2\") " pod="openstack/barbican-keystone-listener-655647566b-n2tcs" Dec 11 14:08:17 crc kubenswrapper[5050]: I1211 14:08:17.665609 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-72shl\" (UniqueName: \"kubernetes.io/projected/569cb143-086a-42f1-9e8c-6f6f614c9ee2-kube-api-access-72shl\") pod \"barbican-keystone-listener-655647566b-n2tcs\" (UID: \"569cb143-086a-42f1-9e8c-6f6f614c9ee2\") " pod="openstack/barbican-keystone-listener-655647566b-n2tcs" Dec 11 14:08:17 crc kubenswrapper[5050]: I1211 14:08:17.665732 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dlt4x\" (UniqueName: \"kubernetes.io/projected/934cae9e-c75b-434d-b1e1-d566d6fb8b7d-kube-api-access-dlt4x\") pod \"barbican-worker-54bb9c4d69-975sg\" (UID: \"934cae9e-c75b-434d-b1e1-d566d6fb8b7d\") " pod="openstack/barbican-worker-54bb9c4d69-975sg" Dec 11 14:08:17 crc kubenswrapper[5050]: I1211 14:08:17.665761 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/569cb143-086a-42f1-9e8c-6f6f614c9ee2-config-data\") pod \"barbican-keystone-listener-655647566b-n2tcs\" (UID: \"569cb143-086a-42f1-9e8c-6f6f614c9ee2\") " pod="openstack/barbican-keystone-listener-655647566b-n2tcs" Dec 11 14:08:17 crc kubenswrapper[5050]: I1211 14:08:17.665804 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/569cb143-086a-42f1-9e8c-6f6f614c9ee2-logs\") pod \"barbican-keystone-listener-655647566b-n2tcs\" (UID: \"569cb143-086a-42f1-9e8c-6f6f614c9ee2\") " pod="openstack/barbican-keystone-listener-655647566b-n2tcs" Dec 11 14:08:17 crc kubenswrapper[5050]: I1211 14:08:17.665889 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/934cae9e-c75b-434d-b1e1-d566d6fb8b7d-logs\") pod \"barbican-worker-54bb9c4d69-975sg\" (UID: \"934cae9e-c75b-434d-b1e1-d566d6fb8b7d\") " pod="openstack/barbican-worker-54bb9c4d69-975sg" Dec 11 14:08:17 crc kubenswrapper[5050]: I1211 14:08:17.665906 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/569cb143-086a-42f1-9e8c-6f6f614c9ee2-config-data-custom\") pod \"barbican-keystone-listener-655647566b-n2tcs\" (UID: \"569cb143-086a-42f1-9e8c-6f6f614c9ee2\") " pod="openstack/barbican-keystone-listener-655647566b-n2tcs" Dec 11 14:08:17 crc kubenswrapper[5050]: I1211 14:08:17.666079 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/934cae9e-c75b-434d-b1e1-d566d6fb8b7d-config-data\") pod \"barbican-worker-54bb9c4d69-975sg\" (UID: \"934cae9e-c75b-434d-b1e1-d566d6fb8b7d\") " pod="openstack/barbican-worker-54bb9c4d69-975sg" Dec 11 14:08:17 crc kubenswrapper[5050]: I1211 14:08:17.666184 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/934cae9e-c75b-434d-b1e1-d566d6fb8b7d-config-data-custom\") pod \"barbican-worker-54bb9c4d69-975sg\" (UID: \"934cae9e-c75b-434d-b1e1-d566d6fb8b7d\") " pod="openstack/barbican-worker-54bb9c4d69-975sg" Dec 11 14:08:17 crc kubenswrapper[5050]: I1211 14:08:17.666245 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/934cae9e-c75b-434d-b1e1-d566d6fb8b7d-combined-ca-bundle\") pod \"barbican-worker-54bb9c4d69-975sg\" (UID: \"934cae9e-c75b-434d-b1e1-d566d6fb8b7d\") " 
pod="openstack/barbican-worker-54bb9c4d69-975sg" Dec 11 14:08:17 crc kubenswrapper[5050]: I1211 14:08:17.678430 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-7f54bc974d-nvhbp" Dec 11 14:08:17 crc kubenswrapper[5050]: I1211 14:08:17.680295 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/934cae9e-c75b-434d-b1e1-d566d6fb8b7d-logs\") pod \"barbican-worker-54bb9c4d69-975sg\" (UID: \"934cae9e-c75b-434d-b1e1-d566d6fb8b7d\") " pod="openstack/barbican-worker-54bb9c4d69-975sg" Dec 11 14:08:17 crc kubenswrapper[5050]: I1211 14:08:17.712246 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/934cae9e-c75b-434d-b1e1-d566d6fb8b7d-combined-ca-bundle\") pod \"barbican-worker-54bb9c4d69-975sg\" (UID: \"934cae9e-c75b-434d-b1e1-d566d6fb8b7d\") " pod="openstack/barbican-worker-54bb9c4d69-975sg" Dec 11 14:08:17 crc kubenswrapper[5050]: I1211 14:08:17.721890 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/934cae9e-c75b-434d-b1e1-d566d6fb8b7d-config-data-custom\") pod \"barbican-worker-54bb9c4d69-975sg\" (UID: \"934cae9e-c75b-434d-b1e1-d566d6fb8b7d\") " pod="openstack/barbican-worker-54bb9c4d69-975sg" Dec 11 14:08:17 crc kubenswrapper[5050]: I1211 14:08:17.729443 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/934cae9e-c75b-434d-b1e1-d566d6fb8b7d-config-data\") pod \"barbican-worker-54bb9c4d69-975sg\" (UID: \"934cae9e-c75b-434d-b1e1-d566d6fb8b7d\") " pod="openstack/barbican-worker-54bb9c4d69-975sg" Dec 11 14:08:17 crc kubenswrapper[5050]: I1211 14:08:17.814128 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-7d59d8d5d8-lw5w5" Dec 11 14:08:17 crc kubenswrapper[5050]: I1211 14:08:17.816154 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/569cb143-086a-42f1-9e8c-6f6f614c9ee2-logs\") pod \"barbican-keystone-listener-655647566b-n2tcs\" (UID: \"569cb143-086a-42f1-9e8c-6f6f614c9ee2\") " pod="openstack/barbican-keystone-listener-655647566b-n2tcs" Dec 11 14:08:17 crc kubenswrapper[5050]: I1211 14:08:17.816232 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/569cb143-086a-42f1-9e8c-6f6f614c9ee2-config-data-custom\") pod \"barbican-keystone-listener-655647566b-n2tcs\" (UID: \"569cb143-086a-42f1-9e8c-6f6f614c9ee2\") " pod="openstack/barbican-keystone-listener-655647566b-n2tcs" Dec 11 14:08:17 crc kubenswrapper[5050]: I1211 14:08:17.816360 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/569cb143-086a-42f1-9e8c-6f6f614c9ee2-combined-ca-bundle\") pod \"barbican-keystone-listener-655647566b-n2tcs\" (UID: \"569cb143-086a-42f1-9e8c-6f6f614c9ee2\") " pod="openstack/barbican-keystone-listener-655647566b-n2tcs" Dec 11 14:08:17 crc kubenswrapper[5050]: I1211 14:08:17.816402 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-72shl\" (UniqueName: \"kubernetes.io/projected/569cb143-086a-42f1-9e8c-6f6f614c9ee2-kube-api-access-72shl\") pod \"barbican-keystone-listener-655647566b-n2tcs\" (UID: \"569cb143-086a-42f1-9e8c-6f6f614c9ee2\") " pod="openstack/barbican-keystone-listener-655647566b-n2tcs" Dec 11 14:08:17 crc kubenswrapper[5050]: I1211 14:08:17.816468 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/569cb143-086a-42f1-9e8c-6f6f614c9ee2-config-data\") pod \"barbican-keystone-listener-655647566b-n2tcs\" (UID: \"569cb143-086a-42f1-9e8c-6f6f614c9ee2\") " pod="openstack/barbican-keystone-listener-655647566b-n2tcs" Dec 11 14:08:17 crc kubenswrapper[5050]: I1211 14:08:17.819297 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/569cb143-086a-42f1-9e8c-6f6f614c9ee2-logs\") pod \"barbican-keystone-listener-655647566b-n2tcs\" (UID: \"569cb143-086a-42f1-9e8c-6f6f614c9ee2\") " pod="openstack/barbican-keystone-listener-655647566b-n2tcs" Dec 11 14:08:17 crc kubenswrapper[5050]: I1211 14:08:17.822618 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dlt4x\" (UniqueName: \"kubernetes.io/projected/934cae9e-c75b-434d-b1e1-d566d6fb8b7d-kube-api-access-dlt4x\") pod \"barbican-worker-54bb9c4d69-975sg\" (UID: \"934cae9e-c75b-434d-b1e1-d566d6fb8b7d\") " pod="openstack/barbican-worker-54bb9c4d69-975sg" Dec 11 14:08:17 crc kubenswrapper[5050]: I1211 14:08:17.853171 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/569cb143-086a-42f1-9e8c-6f6f614c9ee2-config-data-custom\") pod \"barbican-keystone-listener-655647566b-n2tcs\" (UID: \"569cb143-086a-42f1-9e8c-6f6f614c9ee2\") " pod="openstack/barbican-keystone-listener-655647566b-n2tcs" Dec 11 14:08:17 crc kubenswrapper[5050]: I1211 14:08:17.879620 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/569cb143-086a-42f1-9e8c-6f6f614c9ee2-combined-ca-bundle\") pod \"barbican-keystone-listener-655647566b-n2tcs\" (UID: \"569cb143-086a-42f1-9e8c-6f6f614c9ee2\") " pod="openstack/barbican-keystone-listener-655647566b-n2tcs" Dec 11 14:08:17 crc kubenswrapper[5050]: I1211 14:08:17.892272 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-655647566b-n2tcs"] Dec 11 14:08:17 crc kubenswrapper[5050]: I1211 14:08:17.914368 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-72shl\" (UniqueName: \"kubernetes.io/projected/569cb143-086a-42f1-9e8c-6f6f614c9ee2-kube-api-access-72shl\") pod \"barbican-keystone-listener-655647566b-n2tcs\" (UID: \"569cb143-086a-42f1-9e8c-6f6f614c9ee2\") " pod="openstack/barbican-keystone-listener-655647566b-n2tcs" Dec 11 14:08:17 crc kubenswrapper[5050]: I1211 14:08:17.922292 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/569cb143-086a-42f1-9e8c-6f6f614c9ee2-config-data\") pod \"barbican-keystone-listener-655647566b-n2tcs\" (UID: \"569cb143-086a-42f1-9e8c-6f6f614c9ee2\") " pod="openstack/barbican-keystone-listener-655647566b-n2tcs" Dec 11 14:08:17 crc kubenswrapper[5050]: I1211 14:08:17.973826 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-54bb9c4d69-975sg" Dec 11 14:08:18 crc kubenswrapper[5050]: I1211 14:08:18.000673 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-655647566b-n2tcs" Dec 11 14:08:18 crc kubenswrapper[5050]: I1211 14:08:18.016217 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-78ccc9f8bd-jdg2t" Dec 11 14:08:18 crc kubenswrapper[5050]: I1211 14:08:18.016291 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-78ccc9f8bd-jdg2t" Dec 11 14:08:18 crc kubenswrapper[5050]: I1211 14:08:18.075155 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-685444497c-x65wj"] Dec 11 14:08:18 crc kubenswrapper[5050]: I1211 14:08:18.088291 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-66cdd4b5b5-mnccb"] Dec 11 14:08:18 crc kubenswrapper[5050]: I1211 14:08:18.137389 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-66cdd4b5b5-mnccb" Dec 11 14:08:18 crc kubenswrapper[5050]: I1211 14:08:18.140957 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-66cdd4b5b5-mnccb"] Dec 11 14:08:18 crc kubenswrapper[5050]: I1211 14:08:18.194105 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-6fb64b5f76-9r6t7"] Dec 11 14:08:18 crc kubenswrapper[5050]: I1211 14:08:18.234944 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6fb64b5f76-9r6t7"] Dec 11 14:08:18 crc kubenswrapper[5050]: I1211 14:08:18.236535 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-6fb64b5f76-9r6t7" Dec 11 14:08:18 crc kubenswrapper[5050]: I1211 14:08:18.248426 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Dec 11 14:08:18 crc kubenswrapper[5050]: I1211 14:08:18.275800 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5bde6837-eef2-482a-81db-0fbba416e17d-ovsdbserver-nb\") pod \"dnsmasq-dns-66cdd4b5b5-mnccb\" (UID: \"5bde6837-eef2-482a-81db-0fbba416e17d\") " pod="openstack/dnsmasq-dns-66cdd4b5b5-mnccb" Dec 11 14:08:18 crc kubenswrapper[5050]: I1211 14:08:18.275845 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/592e71d8-01fd-4db1-8292-938ded924711-config-data-custom\") pod \"barbican-api-6fb64b5f76-9r6t7\" (UID: \"592e71d8-01fd-4db1-8292-938ded924711\") " pod="openstack/barbican-api-6fb64b5f76-9r6t7" Dec 11 14:08:18 crc kubenswrapper[5050]: I1211 14:08:18.275887 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9g2mx\" (UniqueName: \"kubernetes.io/projected/592e71d8-01fd-4db1-8292-938ded924711-kube-api-access-9g2mx\") pod \"barbican-api-6fb64b5f76-9r6t7\" (UID: \"592e71d8-01fd-4db1-8292-938ded924711\") " pod="openstack/barbican-api-6fb64b5f76-9r6t7" Dec 11 14:08:18 crc kubenswrapper[5050]: I1211 14:08:18.275950 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5bde6837-eef2-482a-81db-0fbba416e17d-dns-svc\") pod \"dnsmasq-dns-66cdd4b5b5-mnccb\" (UID: \"5bde6837-eef2-482a-81db-0fbba416e17d\") " pod="openstack/dnsmasq-dns-66cdd4b5b5-mnccb" Dec 11 14:08:18 crc kubenswrapper[5050]: I1211 14:08:18.275979 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/592e71d8-01fd-4db1-8292-938ded924711-logs\") pod \"barbican-api-6fb64b5f76-9r6t7\" (UID: \"592e71d8-01fd-4db1-8292-938ded924711\") " pod="openstack/barbican-api-6fb64b5f76-9r6t7" Dec 11 14:08:18 crc kubenswrapper[5050]: I1211 14:08:18.275996 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5bde6837-eef2-482a-81db-0fbba416e17d-ovsdbserver-sb\") pod \"dnsmasq-dns-66cdd4b5b5-mnccb\" (UID: \"5bde6837-eef2-482a-81db-0fbba416e17d\") " pod="openstack/dnsmasq-dns-66cdd4b5b5-mnccb" Dec 11 14:08:18 crc kubenswrapper[5050]: I1211 14:08:18.276039 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5bde6837-eef2-482a-81db-0fbba416e17d-config\") pod \"dnsmasq-dns-66cdd4b5b5-mnccb\" (UID: \"5bde6837-eef2-482a-81db-0fbba416e17d\") " pod="openstack/dnsmasq-dns-66cdd4b5b5-mnccb" Dec 11 14:08:18 crc kubenswrapper[5050]: I1211 14:08:18.276062 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/592e71d8-01fd-4db1-8292-938ded924711-combined-ca-bundle\") pod \"barbican-api-6fb64b5f76-9r6t7\" (UID: \"592e71d8-01fd-4db1-8292-938ded924711\") " pod="openstack/barbican-api-6fb64b5f76-9r6t7" Dec 11 14:08:18 crc kubenswrapper[5050]: I1211 14:08:18.276081 5050 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5bde6837-eef2-482a-81db-0fbba416e17d-dns-swift-storage-0\") pod \"dnsmasq-dns-66cdd4b5b5-mnccb\" (UID: \"5bde6837-eef2-482a-81db-0fbba416e17d\") " pod="openstack/dnsmasq-dns-66cdd4b5b5-mnccb" Dec 11 14:08:18 crc kubenswrapper[5050]: I1211 14:08:18.276099 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/592e71d8-01fd-4db1-8292-938ded924711-config-data\") pod \"barbican-api-6fb64b5f76-9r6t7\" (UID: \"592e71d8-01fd-4db1-8292-938ded924711\") " pod="openstack/barbican-api-6fb64b5f76-9r6t7" Dec 11 14:08:18 crc kubenswrapper[5050]: I1211 14:08:18.276124 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6qw5t\" (UniqueName: \"kubernetes.io/projected/5bde6837-eef2-482a-81db-0fbba416e17d-kube-api-access-6qw5t\") pod \"dnsmasq-dns-66cdd4b5b5-mnccb\" (UID: \"5bde6837-eef2-482a-81db-0fbba416e17d\") " pod="openstack/dnsmasq-dns-66cdd4b5b5-mnccb" Dec 11 14:08:18 crc kubenswrapper[5050]: I1211 14:08:18.382869 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5bde6837-eef2-482a-81db-0fbba416e17d-dns-svc\") pod \"dnsmasq-dns-66cdd4b5b5-mnccb\" (UID: \"5bde6837-eef2-482a-81db-0fbba416e17d\") " pod="openstack/dnsmasq-dns-66cdd4b5b5-mnccb" Dec 11 14:08:18 crc kubenswrapper[5050]: I1211 14:08:18.387593 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/592e71d8-01fd-4db1-8292-938ded924711-logs\") pod \"barbican-api-6fb64b5f76-9r6t7\" (UID: \"592e71d8-01fd-4db1-8292-938ded924711\") " pod="openstack/barbican-api-6fb64b5f76-9r6t7" Dec 11 14:08:18 crc kubenswrapper[5050]: I1211 14:08:18.388514 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5bde6837-eef2-482a-81db-0fbba416e17d-ovsdbserver-sb\") pod \"dnsmasq-dns-66cdd4b5b5-mnccb\" (UID: \"5bde6837-eef2-482a-81db-0fbba416e17d\") " pod="openstack/dnsmasq-dns-66cdd4b5b5-mnccb" Dec 11 14:08:18 crc kubenswrapper[5050]: I1211 14:08:18.388833 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5bde6837-eef2-482a-81db-0fbba416e17d-config\") pod \"dnsmasq-dns-66cdd4b5b5-mnccb\" (UID: \"5bde6837-eef2-482a-81db-0fbba416e17d\") " pod="openstack/dnsmasq-dns-66cdd4b5b5-mnccb" Dec 11 14:08:18 crc kubenswrapper[5050]: I1211 14:08:18.388959 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/592e71d8-01fd-4db1-8292-938ded924711-combined-ca-bundle\") pod \"barbican-api-6fb64b5f76-9r6t7\" (UID: \"592e71d8-01fd-4db1-8292-938ded924711\") " pod="openstack/barbican-api-6fb64b5f76-9r6t7" Dec 11 14:08:18 crc kubenswrapper[5050]: I1211 14:08:18.389129 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5bde6837-eef2-482a-81db-0fbba416e17d-dns-swift-storage-0\") pod \"dnsmasq-dns-66cdd4b5b5-mnccb\" (UID: \"5bde6837-eef2-482a-81db-0fbba416e17d\") " pod="openstack/dnsmasq-dns-66cdd4b5b5-mnccb" Dec 11 14:08:18 crc kubenswrapper[5050]: I1211 14:08:18.389230 5050 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/592e71d8-01fd-4db1-8292-938ded924711-config-data\") pod \"barbican-api-6fb64b5f76-9r6t7\" (UID: \"592e71d8-01fd-4db1-8292-938ded924711\") " pod="openstack/barbican-api-6fb64b5f76-9r6t7" Dec 11 14:08:18 crc kubenswrapper[5050]: I1211 14:08:18.389360 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6qw5t\" (UniqueName: \"kubernetes.io/projected/5bde6837-eef2-482a-81db-0fbba416e17d-kube-api-access-6qw5t\") pod \"dnsmasq-dns-66cdd4b5b5-mnccb\" (UID: \"5bde6837-eef2-482a-81db-0fbba416e17d\") " pod="openstack/dnsmasq-dns-66cdd4b5b5-mnccb" Dec 11 14:08:18 crc kubenswrapper[5050]: I1211 14:08:18.389608 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5bde6837-eef2-482a-81db-0fbba416e17d-ovsdbserver-sb\") pod \"dnsmasq-dns-66cdd4b5b5-mnccb\" (UID: \"5bde6837-eef2-482a-81db-0fbba416e17d\") " pod="openstack/dnsmasq-dns-66cdd4b5b5-mnccb" Dec 11 14:08:18 crc kubenswrapper[5050]: I1211 14:08:18.388870 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5bde6837-eef2-482a-81db-0fbba416e17d-dns-svc\") pod \"dnsmasq-dns-66cdd4b5b5-mnccb\" (UID: \"5bde6837-eef2-482a-81db-0fbba416e17d\") " pod="openstack/dnsmasq-dns-66cdd4b5b5-mnccb" Dec 11 14:08:18 crc kubenswrapper[5050]: I1211 14:08:18.388287 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/592e71d8-01fd-4db1-8292-938ded924711-logs\") pod \"barbican-api-6fb64b5f76-9r6t7\" (UID: \"592e71d8-01fd-4db1-8292-938ded924711\") " pod="openstack/barbican-api-6fb64b5f76-9r6t7" Dec 11 14:08:18 crc kubenswrapper[5050]: I1211 14:08:18.390440 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5bde6837-eef2-482a-81db-0fbba416e17d-config\") pod \"dnsmasq-dns-66cdd4b5b5-mnccb\" (UID: \"5bde6837-eef2-482a-81db-0fbba416e17d\") " pod="openstack/dnsmasq-dns-66cdd4b5b5-mnccb" Dec 11 14:08:18 crc kubenswrapper[5050]: I1211 14:08:18.391227 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5bde6837-eef2-482a-81db-0fbba416e17d-dns-swift-storage-0\") pod \"dnsmasq-dns-66cdd4b5b5-mnccb\" (UID: \"5bde6837-eef2-482a-81db-0fbba416e17d\") " pod="openstack/dnsmasq-dns-66cdd4b5b5-mnccb" Dec 11 14:08:18 crc kubenswrapper[5050]: I1211 14:08:18.391420 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5bde6837-eef2-482a-81db-0fbba416e17d-ovsdbserver-nb\") pod \"dnsmasq-dns-66cdd4b5b5-mnccb\" (UID: \"5bde6837-eef2-482a-81db-0fbba416e17d\") " pod="openstack/dnsmasq-dns-66cdd4b5b5-mnccb" Dec 11 14:08:18 crc kubenswrapper[5050]: I1211 14:08:18.392279 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/592e71d8-01fd-4db1-8292-938ded924711-config-data-custom\") pod \"barbican-api-6fb64b5f76-9r6t7\" (UID: \"592e71d8-01fd-4db1-8292-938ded924711\") " pod="openstack/barbican-api-6fb64b5f76-9r6t7" Dec 11 14:08:18 crc kubenswrapper[5050]: I1211 14:08:18.392454 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9g2mx\" (UniqueName: 
\"kubernetes.io/projected/592e71d8-01fd-4db1-8292-938ded924711-kube-api-access-9g2mx\") pod \"barbican-api-6fb64b5f76-9r6t7\" (UID: \"592e71d8-01fd-4db1-8292-938ded924711\") " pod="openstack/barbican-api-6fb64b5f76-9r6t7" Dec 11 14:08:18 crc kubenswrapper[5050]: I1211 14:08:18.392119 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5bde6837-eef2-482a-81db-0fbba416e17d-ovsdbserver-nb\") pod \"dnsmasq-dns-66cdd4b5b5-mnccb\" (UID: \"5bde6837-eef2-482a-81db-0fbba416e17d\") " pod="openstack/dnsmasq-dns-66cdd4b5b5-mnccb" Dec 11 14:08:18 crc kubenswrapper[5050]: I1211 14:08:18.403253 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/592e71d8-01fd-4db1-8292-938ded924711-combined-ca-bundle\") pod \"barbican-api-6fb64b5f76-9r6t7\" (UID: \"592e71d8-01fd-4db1-8292-938ded924711\") " pod="openstack/barbican-api-6fb64b5f76-9r6t7" Dec 11 14:08:18 crc kubenswrapper[5050]: I1211 14:08:18.403965 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/592e71d8-01fd-4db1-8292-938ded924711-config-data\") pod \"barbican-api-6fb64b5f76-9r6t7\" (UID: \"592e71d8-01fd-4db1-8292-938ded924711\") " pod="openstack/barbican-api-6fb64b5f76-9r6t7" Dec 11 14:08:18 crc kubenswrapper[5050]: I1211 14:08:18.418431 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/592e71d8-01fd-4db1-8292-938ded924711-config-data-custom\") pod \"barbican-api-6fb64b5f76-9r6t7\" (UID: \"592e71d8-01fd-4db1-8292-938ded924711\") " pod="openstack/barbican-api-6fb64b5f76-9r6t7" Dec 11 14:08:18 crc kubenswrapper[5050]: I1211 14:08:18.418620 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9g2mx\" (UniqueName: \"kubernetes.io/projected/592e71d8-01fd-4db1-8292-938ded924711-kube-api-access-9g2mx\") pod \"barbican-api-6fb64b5f76-9r6t7\" (UID: \"592e71d8-01fd-4db1-8292-938ded924711\") " pod="openstack/barbican-api-6fb64b5f76-9r6t7" Dec 11 14:08:18 crc kubenswrapper[5050]: I1211 14:08:18.432606 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6qw5t\" (UniqueName: \"kubernetes.io/projected/5bde6837-eef2-482a-81db-0fbba416e17d-kube-api-access-6qw5t\") pod \"dnsmasq-dns-66cdd4b5b5-mnccb\" (UID: \"5bde6837-eef2-482a-81db-0fbba416e17d\") " pod="openstack/dnsmasq-dns-66cdd4b5b5-mnccb" Dec 11 14:08:18 crc kubenswrapper[5050]: I1211 14:08:18.508745 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-66cdd4b5b5-mnccb" Dec 11 14:08:18 crc kubenswrapper[5050]: I1211 14:08:18.588436 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-6fb64b5f76-9r6t7" Dec 11 14:08:18 crc kubenswrapper[5050]: I1211 14:08:18.731170 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7f54bc974d-nvhbp"] Dec 11 14:08:18 crc kubenswrapper[5050]: I1211 14:08:18.763736 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-54bb9c4d69-975sg"] Dec 11 14:08:18 crc kubenswrapper[5050]: I1211 14:08:18.776580 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-685444497c-x65wj"] Dec 11 14:08:18 crc kubenswrapper[5050]: I1211 14:08:18.812319 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7d59d8d5d8-lw5w5"] Dec 11 14:08:18 crc kubenswrapper[5050]: I1211 14:08:18.980887 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-66cdd4b5b5-mnccb"] Dec 11 14:08:19 crc kubenswrapper[5050]: I1211 14:08:19.014402 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-655647566b-n2tcs"] Dec 11 14:08:19 crc kubenswrapper[5050]: I1211 14:08:19.043062 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7d59d8d5d8-lw5w5" event={"ID":"bc5cca8e-233a-4621-87bc-d8c64a6d1d88","Type":"ContainerStarted","Data":"653893a45f7b93fb28bb9d52ba80a1c9feb622877ce8299a1d83f878e192a90a"} Dec 11 14:08:19 crc kubenswrapper[5050]: I1211 14:08:19.047432 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-54bb9c4d69-975sg" event={"ID":"934cae9e-c75b-434d-b1e1-d566d6fb8b7d","Type":"ContainerStarted","Data":"abdd71658e59797dc9fcd008fc688f874bf25398b05b5bbc4d1e561c2d75f9c0"} Dec 11 14:08:19 crc kubenswrapper[5050]: I1211 14:08:19.049881 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7f54bc974d-nvhbp" event={"ID":"36acbdf3-346e-4207-8391-b2a03ef839e5","Type":"ContainerStarted","Data":"f829f2cd18d36054e8757d223545d93ec81ebd794ad7abd983c707e9f0df6efd"} Dec 11 14:08:19 crc kubenswrapper[5050]: I1211 14:08:19.054442 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-685444497c-x65wj" event={"ID":"28c7c2de-e924-4a41-af51-2f0f9e687952","Type":"ContainerStarted","Data":"3ab5135467ea272faf8678656c3e7b4103b34592eaaaf5eb2db5ac23bac3ed79"} Dec 11 14:08:19 crc kubenswrapper[5050]: W1211 14:08:19.054946 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod569cb143_086a_42f1_9e8c_6f6f614c9ee2.slice/crio-6fac34aac4fda442dfc8957aad3488b09240cc41d9a9c76bd6bf15fff9fd9fc8 WatchSource:0}: Error finding container 6fac34aac4fda442dfc8957aad3488b09240cc41d9a9c76bd6bf15fff9fd9fc8: Status 404 returned error can't find the container with id 6fac34aac4fda442dfc8957aad3488b09240cc41d9a9c76bd6bf15fff9fd9fc8 Dec 11 14:08:19 crc kubenswrapper[5050]: I1211 14:08:19.665417 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6fb64b5f76-9r6t7"] Dec 11 14:08:20 crc kubenswrapper[5050]: I1211 14:08:20.094654 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-655647566b-n2tcs" event={"ID":"569cb143-086a-42f1-9e8c-6f6f614c9ee2","Type":"ContainerStarted","Data":"6fac34aac4fda442dfc8957aad3488b09240cc41d9a9c76bd6bf15fff9fd9fc8"} Dec 11 14:08:20 crc kubenswrapper[5050]: I1211 14:08:20.109349 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7f54bc974d-nvhbp" 
event={"ID":"36acbdf3-346e-4207-8391-b2a03ef839e5","Type":"ContainerStarted","Data":"5f2c03aa348522be8e65276f7ae37004bcd483651a98c207f1fa66f6b76162d4"} Dec 11 14:08:20 crc kubenswrapper[5050]: I1211 14:08:20.109482 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-7f54bc974d-nvhbp" Dec 11 14:08:20 crc kubenswrapper[5050]: I1211 14:08:20.113535 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6fb64b5f76-9r6t7" event={"ID":"592e71d8-01fd-4db1-8292-938ded924711","Type":"ContainerStarted","Data":"043404c37872b89c66d62829220f575aaf0711f9b4ff50b4dbd7f66e92ee24c5"} Dec 11 14:08:20 crc kubenswrapper[5050]: I1211 14:08:20.116278 5050 generic.go:334] "Generic (PLEG): container finished" podID="28c7c2de-e924-4a41-af51-2f0f9e687952" containerID="b019bf5a43d7eb68be27b728e3ec9060b699212fb889de9ce9464148f235d609" exitCode=0 Dec 11 14:08:20 crc kubenswrapper[5050]: I1211 14:08:20.116400 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-685444497c-x65wj" event={"ID":"28c7c2de-e924-4a41-af51-2f0f9e687952","Type":"ContainerDied","Data":"b019bf5a43d7eb68be27b728e3ec9060b699212fb889de9ce9464148f235d609"} Dec 11 14:08:20 crc kubenswrapper[5050]: I1211 14:08:20.119780 5050 generic.go:334] "Generic (PLEG): container finished" podID="5bde6837-eef2-482a-81db-0fbba416e17d" containerID="0ba47719f866756238ec2cc0155f78576c2d8c7daa512b705c79dd5815cfa07e" exitCode=0 Dec 11 14:08:20 crc kubenswrapper[5050]: I1211 14:08:20.120075 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-66cdd4b5b5-mnccb" event={"ID":"5bde6837-eef2-482a-81db-0fbba416e17d","Type":"ContainerDied","Data":"0ba47719f866756238ec2cc0155f78576c2d8c7daa512b705c79dd5815cfa07e"} Dec 11 14:08:20 crc kubenswrapper[5050]: I1211 14:08:20.120118 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-66cdd4b5b5-mnccb" event={"ID":"5bde6837-eef2-482a-81db-0fbba416e17d","Type":"ContainerStarted","Data":"09acf4a2d760b9dccb307b179b1c4e35f55bcbff22ac11c15c79f643d9c438e8"} Dec 11 14:08:20 crc kubenswrapper[5050]: I1211 14:08:20.124826 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7d59d8d5d8-lw5w5" event={"ID":"bc5cca8e-233a-4621-87bc-d8c64a6d1d88","Type":"ContainerStarted","Data":"80a0101f214c710057cbef55369ef6e367dd76aa31c8d93a145799639c0c1f38"} Dec 11 14:08:20 crc kubenswrapper[5050]: I1211 14:08:20.169128 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-7f54bc974d-nvhbp" podStartSLOduration=4.169085554 podStartE2EDuration="4.169085554s" podCreationTimestamp="2025-12-11 14:08:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 14:08:20.137819301 +0000 UTC m=+1190.981541887" watchObservedRunningTime="2025-12-11 14:08:20.169085554 +0000 UTC m=+1191.012808160" Dec 11 14:08:20 crc kubenswrapper[5050]: I1211 14:08:20.832260 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-685444497c-x65wj" Dec 11 14:08:20 crc kubenswrapper[5050]: I1211 14:08:20.912935 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/28c7c2de-e924-4a41-af51-2f0f9e687952-dns-svc\") pod \"28c7c2de-e924-4a41-af51-2f0f9e687952\" (UID: \"28c7c2de-e924-4a41-af51-2f0f9e687952\") " Dec 11 14:08:20 crc kubenswrapper[5050]: I1211 14:08:20.913050 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/28c7c2de-e924-4a41-af51-2f0f9e687952-ovsdbserver-nb\") pod \"28c7c2de-e924-4a41-af51-2f0f9e687952\" (UID: \"28c7c2de-e924-4a41-af51-2f0f9e687952\") " Dec 11 14:08:20 crc kubenswrapper[5050]: I1211 14:08:20.913143 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/28c7c2de-e924-4a41-af51-2f0f9e687952-ovsdbserver-sb\") pod \"28c7c2de-e924-4a41-af51-2f0f9e687952\" (UID: \"28c7c2de-e924-4a41-af51-2f0f9e687952\") " Dec 11 14:08:20 crc kubenswrapper[5050]: I1211 14:08:20.913213 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/28c7c2de-e924-4a41-af51-2f0f9e687952-dns-swift-storage-0\") pod \"28c7c2de-e924-4a41-af51-2f0f9e687952\" (UID: \"28c7c2de-e924-4a41-af51-2f0f9e687952\") " Dec 11 14:08:20 crc kubenswrapper[5050]: I1211 14:08:20.913265 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28c7c2de-e924-4a41-af51-2f0f9e687952-config\") pod \"28c7c2de-e924-4a41-af51-2f0f9e687952\" (UID: \"28c7c2de-e924-4a41-af51-2f0f9e687952\") " Dec 11 14:08:20 crc kubenswrapper[5050]: I1211 14:08:20.913379 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-47jwk\" (UniqueName: \"kubernetes.io/projected/28c7c2de-e924-4a41-af51-2f0f9e687952-kube-api-access-47jwk\") pod \"28c7c2de-e924-4a41-af51-2f0f9e687952\" (UID: \"28c7c2de-e924-4a41-af51-2f0f9e687952\") " Dec 11 14:08:20 crc kubenswrapper[5050]: I1211 14:08:20.968491 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28c7c2de-e924-4a41-af51-2f0f9e687952-kube-api-access-47jwk" (OuterVolumeSpecName: "kube-api-access-47jwk") pod "28c7c2de-e924-4a41-af51-2f0f9e687952" (UID: "28c7c2de-e924-4a41-af51-2f0f9e687952"). InnerVolumeSpecName "kube-api-access-47jwk". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 14:08:21 crc kubenswrapper[5050]: I1211 14:08:21.017595 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-47jwk\" (UniqueName: \"kubernetes.io/projected/28c7c2de-e924-4a41-af51-2f0f9e687952-kube-api-access-47jwk\") on node \"crc\" DevicePath \"\"" Dec 11 14:08:21 crc kubenswrapper[5050]: I1211 14:08:21.089759 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/28c7c2de-e924-4a41-af51-2f0f9e687952-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "28c7c2de-e924-4a41-af51-2f0f9e687952" (UID: "28c7c2de-e924-4a41-af51-2f0f9e687952"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 14:08:21 crc kubenswrapper[5050]: I1211 14:08:21.120562 5050 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/28c7c2de-e924-4a41-af51-2f0f9e687952-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 11 14:08:21 crc kubenswrapper[5050]: I1211 14:08:21.158544 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/28c7c2de-e924-4a41-af51-2f0f9e687952-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "28c7c2de-e924-4a41-af51-2f0f9e687952" (UID: "28c7c2de-e924-4a41-af51-2f0f9e687952"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 14:08:21 crc kubenswrapper[5050]: I1211 14:08:21.163741 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-685444497c-x65wj" event={"ID":"28c7c2de-e924-4a41-af51-2f0f9e687952","Type":"ContainerDied","Data":"3ab5135467ea272faf8678656c3e7b4103b34592eaaaf5eb2db5ac23bac3ed79"} Dec 11 14:08:21 crc kubenswrapper[5050]: I1211 14:08:21.163865 5050 scope.go:117] "RemoveContainer" containerID="b019bf5a43d7eb68be27b728e3ec9060b699212fb889de9ce9464148f235d609" Dec 11 14:08:21 crc kubenswrapper[5050]: I1211 14:08:21.164031 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-685444497c-x65wj" Dec 11 14:08:21 crc kubenswrapper[5050]: I1211 14:08:21.171604 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/28c7c2de-e924-4a41-af51-2f0f9e687952-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "28c7c2de-e924-4a41-af51-2f0f9e687952" (UID: "28c7c2de-e924-4a41-af51-2f0f9e687952"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 14:08:21 crc kubenswrapper[5050]: I1211 14:08:21.174428 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/28c7c2de-e924-4a41-af51-2f0f9e687952-config" (OuterVolumeSpecName: "config") pod "28c7c2de-e924-4a41-af51-2f0f9e687952" (UID: "28c7c2de-e924-4a41-af51-2f0f9e687952"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 14:08:21 crc kubenswrapper[5050]: I1211 14:08:21.190190 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-7kvxk" event={"ID":"2d9200d5-8f1c-46be-a802-995c7f58b754","Type":"ContainerStarted","Data":"add249db91788c64fc0bc9abe12d8ebe0bbd0ac4df87c1a680f9ba5d9cae0685"} Dec 11 14:08:21 crc kubenswrapper[5050]: I1211 14:08:21.201283 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/28c7c2de-e924-4a41-af51-2f0f9e687952-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "28c7c2de-e924-4a41-af51-2f0f9e687952" (UID: "28c7c2de-e924-4a41-af51-2f0f9e687952"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 14:08:21 crc kubenswrapper[5050]: I1211 14:08:21.202542 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7d59d8d5d8-lw5w5" event={"ID":"bc5cca8e-233a-4621-87bc-d8c64a6d1d88","Type":"ContainerStarted","Data":"b1a274e5b4234729c2aa4fa87751468ac7110151d81f59e14f5c7057a94c21fc"} Dec 11 14:08:21 crc kubenswrapper[5050]: I1211 14:08:21.202960 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-7d59d8d5d8-lw5w5" Dec 11 14:08:21 crc kubenswrapper[5050]: I1211 14:08:21.221369 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6fb64b5f76-9r6t7" event={"ID":"592e71d8-01fd-4db1-8292-938ded924711","Type":"ContainerStarted","Data":"1e1d2fc36a977a9413352e3979b3d27f2bdc7a12c3c470c41cb2d0ca85721112"} Dec 11 14:08:21 crc kubenswrapper[5050]: I1211 14:08:21.221435 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6fb64b5f76-9r6t7" Dec 11 14:08:21 crc kubenswrapper[5050]: I1211 14:08:21.221447 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6fb64b5f76-9r6t7" Dec 11 14:08:21 crc kubenswrapper[5050]: I1211 14:08:21.222190 5050 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/28c7c2de-e924-4a41-af51-2f0f9e687952-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 11 14:08:21 crc kubenswrapper[5050]: I1211 14:08:21.222286 5050 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/28c7c2de-e924-4a41-af51-2f0f9e687952-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 11 14:08:21 crc kubenswrapper[5050]: I1211 14:08:21.222349 5050 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/28c7c2de-e924-4a41-af51-2f0f9e687952-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 11 14:08:21 crc kubenswrapper[5050]: I1211 14:08:21.222412 5050 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28c7c2de-e924-4a41-af51-2f0f9e687952-config\") on node \"crc\" DevicePath \"\"" Dec 11 14:08:21 crc kubenswrapper[5050]: I1211 14:08:21.233834 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-7kvxk" podStartSLOduration=4.508009349 podStartE2EDuration="47.233803667s" podCreationTimestamp="2025-12-11 14:07:34 +0000 UTC" firstStartedPulling="2025-12-11 14:07:35.425805556 +0000 UTC m=+1146.269528142" lastFinishedPulling="2025-12-11 14:08:18.151599874 +0000 UTC m=+1188.995322460" observedRunningTime="2025-12-11 14:08:21.217886718 +0000 UTC m=+1192.061609304" watchObservedRunningTime="2025-12-11 14:08:21.233803667 +0000 UTC m=+1192.077526253" Dec 11 14:08:21 crc kubenswrapper[5050]: I1211 14:08:21.260110 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-7d59d8d5d8-lw5w5" podStartSLOduration=4.2600781340000005 podStartE2EDuration="4.260078134s" podCreationTimestamp="2025-12-11 14:08:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 14:08:21.243102117 +0000 UTC m=+1192.086824713" watchObservedRunningTime="2025-12-11 14:08:21.260078134 +0000 UTC m=+1192.103800730" Dec 11 14:08:21 crc kubenswrapper[5050]: I1211 14:08:21.306298 5050 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-6fb64b5f76-9r6t7" podStartSLOduration=4.306247678 podStartE2EDuration="4.306247678s" podCreationTimestamp="2025-12-11 14:08:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 14:08:21.269143559 +0000 UTC m=+1192.112866145" watchObservedRunningTime="2025-12-11 14:08:21.306247678 +0000 UTC m=+1192.149970264" Dec 11 14:08:21 crc kubenswrapper[5050]: I1211 14:08:21.381079 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Dec 11 14:08:21 crc kubenswrapper[5050]: I1211 14:08:21.381197 5050 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 11 14:08:21 crc kubenswrapper[5050]: I1211 14:08:21.384817 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Dec 11 14:08:21 crc kubenswrapper[5050]: I1211 14:08:21.657920 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-685444497c-x65wj"] Dec 11 14:08:21 crc kubenswrapper[5050]: I1211 14:08:21.678815 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-685444497c-x65wj"] Dec 11 14:08:21 crc kubenswrapper[5050]: I1211 14:08:21.781351 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-7766777c65-2rcww"] Dec 11 14:08:21 crc kubenswrapper[5050]: E1211 14:08:21.782034 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28c7c2de-e924-4a41-af51-2f0f9e687952" containerName="init" Dec 11 14:08:21 crc kubenswrapper[5050]: I1211 14:08:21.782082 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="28c7c2de-e924-4a41-af51-2f0f9e687952" containerName="init" Dec 11 14:08:21 crc kubenswrapper[5050]: I1211 14:08:21.782295 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="28c7c2de-e924-4a41-af51-2f0f9e687952" containerName="init" Dec 11 14:08:21 crc kubenswrapper[5050]: I1211 14:08:21.783361 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-7766777c65-2rcww" Dec 11 14:08:21 crc kubenswrapper[5050]: I1211 14:08:21.791051 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Dec 11 14:08:21 crc kubenswrapper[5050]: I1211 14:08:21.791105 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Dec 11 14:08:21 crc kubenswrapper[5050]: I1211 14:08:21.814333 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7766777c65-2rcww"] Dec 11 14:08:21 crc kubenswrapper[5050]: I1211 14:08:21.866061 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/ee634ad2-5f9a-4183-bddc-d076b6456276-httpd-config\") pod \"neutron-7766777c65-2rcww\" (UID: \"ee634ad2-5f9a-4183-bddc-d076b6456276\") " pod="openstack/neutron-7766777c65-2rcww" Dec 11 14:08:21 crc kubenswrapper[5050]: I1211 14:08:21.866121 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee634ad2-5f9a-4183-bddc-d076b6456276-internal-tls-certs\") pod \"neutron-7766777c65-2rcww\" (UID: \"ee634ad2-5f9a-4183-bddc-d076b6456276\") " pod="openstack/neutron-7766777c65-2rcww" Dec 11 14:08:21 crc kubenswrapper[5050]: I1211 14:08:21.866160 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee634ad2-5f9a-4183-bddc-d076b6456276-ovndb-tls-certs\") pod \"neutron-7766777c65-2rcww\" (UID: \"ee634ad2-5f9a-4183-bddc-d076b6456276\") " pod="openstack/neutron-7766777c65-2rcww" Dec 11 14:08:21 crc kubenswrapper[5050]: I1211 14:08:21.866185 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee634ad2-5f9a-4183-bddc-d076b6456276-combined-ca-bundle\") pod \"neutron-7766777c65-2rcww\" (UID: \"ee634ad2-5f9a-4183-bddc-d076b6456276\") " pod="openstack/neutron-7766777c65-2rcww" Dec 11 14:08:21 crc kubenswrapper[5050]: I1211 14:08:21.866210 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ee634ad2-5f9a-4183-bddc-d076b6456276-config\") pod \"neutron-7766777c65-2rcww\" (UID: \"ee634ad2-5f9a-4183-bddc-d076b6456276\") " pod="openstack/neutron-7766777c65-2rcww" Dec 11 14:08:21 crc kubenswrapper[5050]: I1211 14:08:21.866232 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee634ad2-5f9a-4183-bddc-d076b6456276-public-tls-certs\") pod \"neutron-7766777c65-2rcww\" (UID: \"ee634ad2-5f9a-4183-bddc-d076b6456276\") " pod="openstack/neutron-7766777c65-2rcww" Dec 11 14:08:21 crc kubenswrapper[5050]: I1211 14:08:21.866253 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mnh6w\" (UniqueName: \"kubernetes.io/projected/ee634ad2-5f9a-4183-bddc-d076b6456276-kube-api-access-mnh6w\") pod \"neutron-7766777c65-2rcww\" (UID: \"ee634ad2-5f9a-4183-bddc-d076b6456276\") " pod="openstack/neutron-7766777c65-2rcww" Dec 11 14:08:21 crc kubenswrapper[5050]: I1211 14:08:21.968867 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/ee634ad2-5f9a-4183-bddc-d076b6456276-combined-ca-bundle\") pod \"neutron-7766777c65-2rcww\" (UID: \"ee634ad2-5f9a-4183-bddc-d076b6456276\") " pod="openstack/neutron-7766777c65-2rcww" Dec 11 14:08:21 crc kubenswrapper[5050]: I1211 14:08:21.968938 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ee634ad2-5f9a-4183-bddc-d076b6456276-config\") pod \"neutron-7766777c65-2rcww\" (UID: \"ee634ad2-5f9a-4183-bddc-d076b6456276\") " pod="openstack/neutron-7766777c65-2rcww" Dec 11 14:08:21 crc kubenswrapper[5050]: I1211 14:08:21.968965 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee634ad2-5f9a-4183-bddc-d076b6456276-public-tls-certs\") pod \"neutron-7766777c65-2rcww\" (UID: \"ee634ad2-5f9a-4183-bddc-d076b6456276\") " pod="openstack/neutron-7766777c65-2rcww" Dec 11 14:08:21 crc kubenswrapper[5050]: I1211 14:08:21.968989 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mnh6w\" (UniqueName: \"kubernetes.io/projected/ee634ad2-5f9a-4183-bddc-d076b6456276-kube-api-access-mnh6w\") pod \"neutron-7766777c65-2rcww\" (UID: \"ee634ad2-5f9a-4183-bddc-d076b6456276\") " pod="openstack/neutron-7766777c65-2rcww" Dec 11 14:08:21 crc kubenswrapper[5050]: I1211 14:08:21.969169 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/ee634ad2-5f9a-4183-bddc-d076b6456276-httpd-config\") pod \"neutron-7766777c65-2rcww\" (UID: \"ee634ad2-5f9a-4183-bddc-d076b6456276\") " pod="openstack/neutron-7766777c65-2rcww" Dec 11 14:08:21 crc kubenswrapper[5050]: I1211 14:08:21.969193 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee634ad2-5f9a-4183-bddc-d076b6456276-internal-tls-certs\") pod \"neutron-7766777c65-2rcww\" (UID: \"ee634ad2-5f9a-4183-bddc-d076b6456276\") " pod="openstack/neutron-7766777c65-2rcww" Dec 11 14:08:21 crc kubenswrapper[5050]: I1211 14:08:21.969222 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee634ad2-5f9a-4183-bddc-d076b6456276-ovndb-tls-certs\") pod \"neutron-7766777c65-2rcww\" (UID: \"ee634ad2-5f9a-4183-bddc-d076b6456276\") " pod="openstack/neutron-7766777c65-2rcww" Dec 11 14:08:21 crc kubenswrapper[5050]: I1211 14:08:21.978135 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee634ad2-5f9a-4183-bddc-d076b6456276-ovndb-tls-certs\") pod \"neutron-7766777c65-2rcww\" (UID: \"ee634ad2-5f9a-4183-bddc-d076b6456276\") " pod="openstack/neutron-7766777c65-2rcww" Dec 11 14:08:21 crc kubenswrapper[5050]: I1211 14:08:21.979483 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee634ad2-5f9a-4183-bddc-d076b6456276-combined-ca-bundle\") pod \"neutron-7766777c65-2rcww\" (UID: \"ee634ad2-5f9a-4183-bddc-d076b6456276\") " pod="openstack/neutron-7766777c65-2rcww" Dec 11 14:08:21 crc kubenswrapper[5050]: I1211 14:08:21.980479 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/ee634ad2-5f9a-4183-bddc-d076b6456276-httpd-config\") pod \"neutron-7766777c65-2rcww\" (UID: 
\"ee634ad2-5f9a-4183-bddc-d076b6456276\") " pod="openstack/neutron-7766777c65-2rcww" Dec 11 14:08:21 crc kubenswrapper[5050]: I1211 14:08:21.990791 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/ee634ad2-5f9a-4183-bddc-d076b6456276-config\") pod \"neutron-7766777c65-2rcww\" (UID: \"ee634ad2-5f9a-4183-bddc-d076b6456276\") " pod="openstack/neutron-7766777c65-2rcww" Dec 11 14:08:21 crc kubenswrapper[5050]: I1211 14:08:21.992464 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee634ad2-5f9a-4183-bddc-d076b6456276-internal-tls-certs\") pod \"neutron-7766777c65-2rcww\" (UID: \"ee634ad2-5f9a-4183-bddc-d076b6456276\") " pod="openstack/neutron-7766777c65-2rcww" Dec 11 14:08:21 crc kubenswrapper[5050]: I1211 14:08:21.994149 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee634ad2-5f9a-4183-bddc-d076b6456276-public-tls-certs\") pod \"neutron-7766777c65-2rcww\" (UID: \"ee634ad2-5f9a-4183-bddc-d076b6456276\") " pod="openstack/neutron-7766777c65-2rcww" Dec 11 14:08:22 crc kubenswrapper[5050]: I1211 14:08:22.013241 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mnh6w\" (UniqueName: \"kubernetes.io/projected/ee634ad2-5f9a-4183-bddc-d076b6456276-kube-api-access-mnh6w\") pod \"neutron-7766777c65-2rcww\" (UID: \"ee634ad2-5f9a-4183-bddc-d076b6456276\") " pod="openstack/neutron-7766777c65-2rcww" Dec 11 14:08:22 crc kubenswrapper[5050]: I1211 14:08:22.115365 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7766777c65-2rcww" Dec 11 14:08:22 crc kubenswrapper[5050]: I1211 14:08:22.243721 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6fb64b5f76-9r6t7" event={"ID":"592e71d8-01fd-4db1-8292-938ded924711","Type":"ContainerStarted","Data":"5b8546c77d46556062f3ce9f749c83366abb63e015f143c4be0d0ad4bf950b7d"} Dec 11 14:08:22 crc kubenswrapper[5050]: I1211 14:08:22.249101 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-66cdd4b5b5-mnccb" event={"ID":"5bde6837-eef2-482a-81db-0fbba416e17d","Type":"ContainerStarted","Data":"6bcb0ac63ada624c47aed3c4fbc915ad6e58ba49482564d13ca0a463139d6517"} Dec 11 14:08:22 crc kubenswrapper[5050]: I1211 14:08:22.278633 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-66cdd4b5b5-mnccb" podStartSLOduration=5.278608512 podStartE2EDuration="5.278608512s" podCreationTimestamp="2025-12-11 14:08:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 14:08:22.273721691 +0000 UTC m=+1193.117444307" watchObservedRunningTime="2025-12-11 14:08:22.278608512 +0000 UTC m=+1193.122331098" Dec 11 14:08:23 crc kubenswrapper[5050]: I1211 14:08:23.265398 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-66cdd4b5b5-mnccb" Dec 11 14:08:23 crc kubenswrapper[5050]: I1211 14:08:23.566140 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="28c7c2de-e924-4a41-af51-2f0f9e687952" path="/var/lib/kubelet/pods/28c7c2de-e924-4a41-af51-2f0f9e687952/volumes" Dec 11 14:08:24 crc kubenswrapper[5050]: I1211 14:08:24.022706 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7766777c65-2rcww"] Dec 11 
14:08:24 crc kubenswrapper[5050]: W1211 14:08:24.060223 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podee634ad2_5f9a_4183_bddc_d076b6456276.slice/crio-cdad9081ffb3f2872de560a2ff42fb6d940f170002761e5918354dad1b365fd6 WatchSource:0}: Error finding container cdad9081ffb3f2872de560a2ff42fb6d940f170002761e5918354dad1b365fd6: Status 404 returned error can't find the container with id cdad9081ffb3f2872de560a2ff42fb6d940f170002761e5918354dad1b365fd6 Dec 11 14:08:24 crc kubenswrapper[5050]: I1211 14:08:24.281751 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-655647566b-n2tcs" event={"ID":"569cb143-086a-42f1-9e8c-6f6f614c9ee2","Type":"ContainerStarted","Data":"d11f1570a4983e90360fd498bdee9b19f208c7f5acd61496d60bf9cadd7bc16f"} Dec 11 14:08:24 crc kubenswrapper[5050]: I1211 14:08:24.285382 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7766777c65-2rcww" event={"ID":"ee634ad2-5f9a-4183-bddc-d076b6456276","Type":"ContainerStarted","Data":"cdad9081ffb3f2872de560a2ff42fb6d940f170002761e5918354dad1b365fd6"} Dec 11 14:08:24 crc kubenswrapper[5050]: I1211 14:08:24.289258 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-54bb9c4d69-975sg" event={"ID":"934cae9e-c75b-434d-b1e1-d566d6fb8b7d","Type":"ContainerStarted","Data":"955a0ee0c9eed128222ddf5d6dedbc74a4c5d1d3bcc7732f13e94db5162a8ca2"} Dec 11 14:08:25 crc kubenswrapper[5050]: I1211 14:08:25.240748 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-57f899fb58-v2lwj"] Dec 11 14:08:25 crc kubenswrapper[5050]: I1211 14:08:25.243508 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-57f899fb58-v2lwj" Dec 11 14:08:25 crc kubenswrapper[5050]: I1211 14:08:25.250979 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Dec 11 14:08:25 crc kubenswrapper[5050]: I1211 14:08:25.251265 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Dec 11 14:08:25 crc kubenswrapper[5050]: I1211 14:08:25.272535 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ad4414f-ca3e-4ff4-9e2a-3ab029df2ebf-public-tls-certs\") pod \"barbican-api-57f899fb58-v2lwj\" (UID: \"4ad4414f-ca3e-4ff4-9e2a-3ab029df2ebf\") " pod="openstack/barbican-api-57f899fb58-v2lwj" Dec 11 14:08:25 crc kubenswrapper[5050]: I1211 14:08:25.272654 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ad4414f-ca3e-4ff4-9e2a-3ab029df2ebf-combined-ca-bundle\") pod \"barbican-api-57f899fb58-v2lwj\" (UID: \"4ad4414f-ca3e-4ff4-9e2a-3ab029df2ebf\") " pod="openstack/barbican-api-57f899fb58-v2lwj" Dec 11 14:08:25 crc kubenswrapper[5050]: I1211 14:08:25.272691 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ad4414f-ca3e-4ff4-9e2a-3ab029df2ebf-internal-tls-certs\") pod \"barbican-api-57f899fb58-v2lwj\" (UID: \"4ad4414f-ca3e-4ff4-9e2a-3ab029df2ebf\") " pod="openstack/barbican-api-57f899fb58-v2lwj" Dec 11 14:08:25 crc kubenswrapper[5050]: I1211 14:08:25.272720 5050 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ad4414f-ca3e-4ff4-9e2a-3ab029df2ebf-config-data\") pod \"barbican-api-57f899fb58-v2lwj\" (UID: \"4ad4414f-ca3e-4ff4-9e2a-3ab029df2ebf\") " pod="openstack/barbican-api-57f899fb58-v2lwj" Dec 11 14:08:25 crc kubenswrapper[5050]: I1211 14:08:25.272752 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4ad4414f-ca3e-4ff4-9e2a-3ab029df2ebf-logs\") pod \"barbican-api-57f899fb58-v2lwj\" (UID: \"4ad4414f-ca3e-4ff4-9e2a-3ab029df2ebf\") " pod="openstack/barbican-api-57f899fb58-v2lwj" Dec 11 14:08:25 crc kubenswrapper[5050]: I1211 14:08:25.272811 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kg874\" (UniqueName: \"kubernetes.io/projected/4ad4414f-ca3e-4ff4-9e2a-3ab029df2ebf-kube-api-access-kg874\") pod \"barbican-api-57f899fb58-v2lwj\" (UID: \"4ad4414f-ca3e-4ff4-9e2a-3ab029df2ebf\") " pod="openstack/barbican-api-57f899fb58-v2lwj" Dec 11 14:08:25 crc kubenswrapper[5050]: I1211 14:08:25.272853 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4ad4414f-ca3e-4ff4-9e2a-3ab029df2ebf-config-data-custom\") pod \"barbican-api-57f899fb58-v2lwj\" (UID: \"4ad4414f-ca3e-4ff4-9e2a-3ab029df2ebf\") " pod="openstack/barbican-api-57f899fb58-v2lwj" Dec 11 14:08:25 crc kubenswrapper[5050]: I1211 14:08:25.273994 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-57f899fb58-v2lwj"] Dec 11 14:08:25 crc kubenswrapper[5050]: I1211 14:08:25.315980 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7766777c65-2rcww" event={"ID":"ee634ad2-5f9a-4183-bddc-d076b6456276","Type":"ContainerStarted","Data":"61e67897af4ee162906e77cb4c294b5d3db5572b3070be4672c47d685548ceec"} Dec 11 14:08:25 crc kubenswrapper[5050]: I1211 14:08:25.317056 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-7766777c65-2rcww" Dec 11 14:08:25 crc kubenswrapper[5050]: I1211 14:08:25.317071 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7766777c65-2rcww" event={"ID":"ee634ad2-5f9a-4183-bddc-d076b6456276","Type":"ContainerStarted","Data":"b54b96dfb39cdff4b8a6a73d84c57d9222db83cd6d5c351fcff1ac130593b742"} Dec 11 14:08:25 crc kubenswrapper[5050]: I1211 14:08:25.322503 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-54bb9c4d69-975sg" event={"ID":"934cae9e-c75b-434d-b1e1-d566d6fb8b7d","Type":"ContainerStarted","Data":"d2f88cb82773ad5f567925e106c60ec7bef84c6e078be7c5e2a9bd340e19b35c"} Dec 11 14:08:25 crc kubenswrapper[5050]: I1211 14:08:25.326683 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-655647566b-n2tcs" event={"ID":"569cb143-086a-42f1-9e8c-6f6f614c9ee2","Type":"ContainerStarted","Data":"e402b883c564b7c4156be1691f2f8af60f04df5e1dc8aa45ac6e3435d54ea395"} Dec 11 14:08:25 crc kubenswrapper[5050]: I1211 14:08:25.343501 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-7766777c65-2rcww" podStartSLOduration=4.343480329 podStartE2EDuration="4.343480329s" podCreationTimestamp="2025-12-11 14:08:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-12-11 14:08:25.33758402 +0000 UTC m=+1196.181306616" watchObservedRunningTime="2025-12-11 14:08:25.343480329 +0000 UTC m=+1196.187202905" Dec 11 14:08:25 crc kubenswrapper[5050]: I1211 14:08:25.368838 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-54bb9c4d69-975sg" podStartSLOduration=3.7700306230000002 podStartE2EDuration="8.368816111s" podCreationTimestamp="2025-12-11 14:08:17 +0000 UTC" firstStartedPulling="2025-12-11 14:08:19.04456516 +0000 UTC m=+1189.888287746" lastFinishedPulling="2025-12-11 14:08:23.643350648 +0000 UTC m=+1194.487073234" observedRunningTime="2025-12-11 14:08:25.367979659 +0000 UTC m=+1196.211702245" watchObservedRunningTime="2025-12-11 14:08:25.368816111 +0000 UTC m=+1196.212538687" Dec 11 14:08:25 crc kubenswrapper[5050]: I1211 14:08:25.375622 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4ad4414f-ca3e-4ff4-9e2a-3ab029df2ebf-config-data-custom\") pod \"barbican-api-57f899fb58-v2lwj\" (UID: \"4ad4414f-ca3e-4ff4-9e2a-3ab029df2ebf\") " pod="openstack/barbican-api-57f899fb58-v2lwj" Dec 11 14:08:25 crc kubenswrapper[5050]: I1211 14:08:25.375774 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ad4414f-ca3e-4ff4-9e2a-3ab029df2ebf-public-tls-certs\") pod \"barbican-api-57f899fb58-v2lwj\" (UID: \"4ad4414f-ca3e-4ff4-9e2a-3ab029df2ebf\") " pod="openstack/barbican-api-57f899fb58-v2lwj" Dec 11 14:08:25 crc kubenswrapper[5050]: I1211 14:08:25.375877 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ad4414f-ca3e-4ff4-9e2a-3ab029df2ebf-combined-ca-bundle\") pod \"barbican-api-57f899fb58-v2lwj\" (UID: \"4ad4414f-ca3e-4ff4-9e2a-3ab029df2ebf\") " pod="openstack/barbican-api-57f899fb58-v2lwj" Dec 11 14:08:25 crc kubenswrapper[5050]: I1211 14:08:25.375925 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ad4414f-ca3e-4ff4-9e2a-3ab029df2ebf-internal-tls-certs\") pod \"barbican-api-57f899fb58-v2lwj\" (UID: \"4ad4414f-ca3e-4ff4-9e2a-3ab029df2ebf\") " pod="openstack/barbican-api-57f899fb58-v2lwj" Dec 11 14:08:25 crc kubenswrapper[5050]: I1211 14:08:25.375964 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ad4414f-ca3e-4ff4-9e2a-3ab029df2ebf-config-data\") pod \"barbican-api-57f899fb58-v2lwj\" (UID: \"4ad4414f-ca3e-4ff4-9e2a-3ab029df2ebf\") " pod="openstack/barbican-api-57f899fb58-v2lwj" Dec 11 14:08:25 crc kubenswrapper[5050]: I1211 14:08:25.375991 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4ad4414f-ca3e-4ff4-9e2a-3ab029df2ebf-logs\") pod \"barbican-api-57f899fb58-v2lwj\" (UID: \"4ad4414f-ca3e-4ff4-9e2a-3ab029df2ebf\") " pod="openstack/barbican-api-57f899fb58-v2lwj" Dec 11 14:08:25 crc kubenswrapper[5050]: I1211 14:08:25.376626 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kg874\" (UniqueName: \"kubernetes.io/projected/4ad4414f-ca3e-4ff4-9e2a-3ab029df2ebf-kube-api-access-kg874\") pod \"barbican-api-57f899fb58-v2lwj\" (UID: \"4ad4414f-ca3e-4ff4-9e2a-3ab029df2ebf\") " pod="openstack/barbican-api-57f899fb58-v2lwj" Dec 11 14:08:25 
crc kubenswrapper[5050]: I1211 14:08:25.385431 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4ad4414f-ca3e-4ff4-9e2a-3ab029df2ebf-logs\") pod \"barbican-api-57f899fb58-v2lwj\" (UID: \"4ad4414f-ca3e-4ff4-9e2a-3ab029df2ebf\") " pod="openstack/barbican-api-57f899fb58-v2lwj" Dec 11 14:08:25 crc kubenswrapper[5050]: I1211 14:08:25.407483 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ad4414f-ca3e-4ff4-9e2a-3ab029df2ebf-public-tls-certs\") pod \"barbican-api-57f899fb58-v2lwj\" (UID: \"4ad4414f-ca3e-4ff4-9e2a-3ab029df2ebf\") " pod="openstack/barbican-api-57f899fb58-v2lwj" Dec 11 14:08:25 crc kubenswrapper[5050]: I1211 14:08:25.412081 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-655647566b-n2tcs" podStartSLOduration=3.842948238 podStartE2EDuration="8.412063627s" podCreationTimestamp="2025-12-11 14:08:17 +0000 UTC" firstStartedPulling="2025-12-11 14:08:19.071366272 +0000 UTC m=+1189.915088858" lastFinishedPulling="2025-12-11 14:08:23.640481671 +0000 UTC m=+1194.484204247" observedRunningTime="2025-12-11 14:08:25.40810983 +0000 UTC m=+1196.251832426" watchObservedRunningTime="2025-12-11 14:08:25.412063627 +0000 UTC m=+1196.255786213" Dec 11 14:08:25 crc kubenswrapper[5050]: I1211 14:08:25.418428 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4ad4414f-ca3e-4ff4-9e2a-3ab029df2ebf-config-data-custom\") pod \"barbican-api-57f899fb58-v2lwj\" (UID: \"4ad4414f-ca3e-4ff4-9e2a-3ab029df2ebf\") " pod="openstack/barbican-api-57f899fb58-v2lwj" Dec 11 14:08:25 crc kubenswrapper[5050]: I1211 14:08:25.418651 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ad4414f-ca3e-4ff4-9e2a-3ab029df2ebf-config-data\") pod \"barbican-api-57f899fb58-v2lwj\" (UID: \"4ad4414f-ca3e-4ff4-9e2a-3ab029df2ebf\") " pod="openstack/barbican-api-57f899fb58-v2lwj" Dec 11 14:08:25 crc kubenswrapper[5050]: I1211 14:08:25.420736 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ad4414f-ca3e-4ff4-9e2a-3ab029df2ebf-combined-ca-bundle\") pod \"barbican-api-57f899fb58-v2lwj\" (UID: \"4ad4414f-ca3e-4ff4-9e2a-3ab029df2ebf\") " pod="openstack/barbican-api-57f899fb58-v2lwj" Dec 11 14:08:25 crc kubenswrapper[5050]: I1211 14:08:25.423521 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ad4414f-ca3e-4ff4-9e2a-3ab029df2ebf-internal-tls-certs\") pod \"barbican-api-57f899fb58-v2lwj\" (UID: \"4ad4414f-ca3e-4ff4-9e2a-3ab029df2ebf\") " pod="openstack/barbican-api-57f899fb58-v2lwj" Dec 11 14:08:25 crc kubenswrapper[5050]: I1211 14:08:25.430157 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kg874\" (UniqueName: \"kubernetes.io/projected/4ad4414f-ca3e-4ff4-9e2a-3ab029df2ebf-kube-api-access-kg874\") pod \"barbican-api-57f899fb58-v2lwj\" (UID: \"4ad4414f-ca3e-4ff4-9e2a-3ab029df2ebf\") " pod="openstack/barbican-api-57f899fb58-v2lwj" Dec 11 14:08:25 crc kubenswrapper[5050]: I1211 14:08:25.565159 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-57f899fb58-v2lwj" Dec 11 14:08:28 crc kubenswrapper[5050]: I1211 14:08:28.384589 5050 generic.go:334] "Generic (PLEG): container finished" podID="2d9200d5-8f1c-46be-a802-995c7f58b754" containerID="add249db91788c64fc0bc9abe12d8ebe0bbd0ac4df87c1a680f9ba5d9cae0685" exitCode=0 Dec 11 14:08:28 crc kubenswrapper[5050]: I1211 14:08:28.385070 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-7kvxk" event={"ID":"2d9200d5-8f1c-46be-a802-995c7f58b754","Type":"ContainerDied","Data":"add249db91788c64fc0bc9abe12d8ebe0bbd0ac4df87c1a680f9ba5d9cae0685"} Dec 11 14:08:28 crc kubenswrapper[5050]: I1211 14:08:28.512249 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-66cdd4b5b5-mnccb" Dec 11 14:08:28 crc kubenswrapper[5050]: I1211 14:08:28.576725 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6f6f8cb849-rvxcv"] Dec 11 14:08:28 crc kubenswrapper[5050]: I1211 14:08:28.577107 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6f6f8cb849-rvxcv" podUID="220e4ed5-e988-428e-b186-7a4231311831" containerName="dnsmasq-dns" containerID="cri-o://6f9318782ca9007f6ad48d70d0b16e91ab76be1db28d346a1199392c0864541f" gracePeriod=10 Dec 11 14:08:28 crc kubenswrapper[5050]: E1211 14:08:28.879501 5050 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod220e4ed5_e988_428e_b186_7a4231311831.slice/crio-6f9318782ca9007f6ad48d70d0b16e91ab76be1db28d346a1199392c0864541f.scope\": RecentStats: unable to find data in memory cache]" Dec 11 14:08:29 crc kubenswrapper[5050]: I1211 14:08:29.402275 5050 generic.go:334] "Generic (PLEG): container finished" podID="220e4ed5-e988-428e-b186-7a4231311831" containerID="6f9318782ca9007f6ad48d70d0b16e91ab76be1db28d346a1199392c0864541f" exitCode=0 Dec 11 14:08:29 crc kubenswrapper[5050]: I1211 14:08:29.402321 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f6f8cb849-rvxcv" event={"ID":"220e4ed5-e988-428e-b186-7a4231311831","Type":"ContainerDied","Data":"6f9318782ca9007f6ad48d70d0b16e91ab76be1db28d346a1199392c0864541f"} Dec 11 14:08:30 crc kubenswrapper[5050]: I1211 14:08:30.160313 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6f6f8cb849-rvxcv" podUID="220e4ed5-e988-428e-b186-7a4231311831" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.142:5353: connect: connection refused" Dec 11 14:08:30 crc kubenswrapper[5050]: I1211 14:08:30.162843 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-7kvxk" Dec 11 14:08:30 crc kubenswrapper[5050]: I1211 14:08:30.203860 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2d9200d5-8f1c-46be-a802-995c7f58b754-db-sync-config-data\") pod \"2d9200d5-8f1c-46be-a802-995c7f58b754\" (UID: \"2d9200d5-8f1c-46be-a802-995c7f58b754\") " Dec 11 14:08:30 crc kubenswrapper[5050]: I1211 14:08:30.204064 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-26th6\" (UniqueName: \"kubernetes.io/projected/2d9200d5-8f1c-46be-a802-995c7f58b754-kube-api-access-26th6\") pod \"2d9200d5-8f1c-46be-a802-995c7f58b754\" (UID: \"2d9200d5-8f1c-46be-a802-995c7f58b754\") " Dec 11 14:08:30 crc kubenswrapper[5050]: I1211 14:08:30.204116 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2d9200d5-8f1c-46be-a802-995c7f58b754-etc-machine-id\") pod \"2d9200d5-8f1c-46be-a802-995c7f58b754\" (UID: \"2d9200d5-8f1c-46be-a802-995c7f58b754\") " Dec 11 14:08:30 crc kubenswrapper[5050]: I1211 14:08:30.204168 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d9200d5-8f1c-46be-a802-995c7f58b754-combined-ca-bundle\") pod \"2d9200d5-8f1c-46be-a802-995c7f58b754\" (UID: \"2d9200d5-8f1c-46be-a802-995c7f58b754\") " Dec 11 14:08:30 crc kubenswrapper[5050]: I1211 14:08:30.204219 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d9200d5-8f1c-46be-a802-995c7f58b754-config-data\") pod \"2d9200d5-8f1c-46be-a802-995c7f58b754\" (UID: \"2d9200d5-8f1c-46be-a802-995c7f58b754\") " Dec 11 14:08:30 crc kubenswrapper[5050]: I1211 14:08:30.204250 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2d9200d5-8f1c-46be-a802-995c7f58b754-scripts\") pod \"2d9200d5-8f1c-46be-a802-995c7f58b754\" (UID: \"2d9200d5-8f1c-46be-a802-995c7f58b754\") " Dec 11 14:08:30 crc kubenswrapper[5050]: I1211 14:08:30.204360 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2d9200d5-8f1c-46be-a802-995c7f58b754-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "2d9200d5-8f1c-46be-a802-995c7f58b754" (UID: "2d9200d5-8f1c-46be-a802-995c7f58b754"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 14:08:30 crc kubenswrapper[5050]: I1211 14:08:30.204842 5050 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2d9200d5-8f1c-46be-a802-995c7f58b754-etc-machine-id\") on node \"crc\" DevicePath \"\"" Dec 11 14:08:30 crc kubenswrapper[5050]: I1211 14:08:30.226562 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d9200d5-8f1c-46be-a802-995c7f58b754-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "2d9200d5-8f1c-46be-a802-995c7f58b754" (UID: "2d9200d5-8f1c-46be-a802-995c7f58b754"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:08:30 crc kubenswrapper[5050]: I1211 14:08:30.228460 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d9200d5-8f1c-46be-a802-995c7f58b754-kube-api-access-26th6" (OuterVolumeSpecName: "kube-api-access-26th6") pod "2d9200d5-8f1c-46be-a802-995c7f58b754" (UID: "2d9200d5-8f1c-46be-a802-995c7f58b754"). InnerVolumeSpecName "kube-api-access-26th6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 14:08:30 crc kubenswrapper[5050]: I1211 14:08:30.263473 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d9200d5-8f1c-46be-a802-995c7f58b754-scripts" (OuterVolumeSpecName: "scripts") pod "2d9200d5-8f1c-46be-a802-995c7f58b754" (UID: "2d9200d5-8f1c-46be-a802-995c7f58b754"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:08:30 crc kubenswrapper[5050]: I1211 14:08:30.313077 5050 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2d9200d5-8f1c-46be-a802-995c7f58b754-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 14:08:30 crc kubenswrapper[5050]: I1211 14:08:30.313136 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-26th6\" (UniqueName: \"kubernetes.io/projected/2d9200d5-8f1c-46be-a802-995c7f58b754-kube-api-access-26th6\") on node \"crc\" DevicePath \"\"" Dec 11 14:08:30 crc kubenswrapper[5050]: I1211 14:08:30.313150 5050 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2d9200d5-8f1c-46be-a802-995c7f58b754-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 14:08:30 crc kubenswrapper[5050]: I1211 14:08:30.326530 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d9200d5-8f1c-46be-a802-995c7f58b754-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2d9200d5-8f1c-46be-a802-995c7f58b754" (UID: "2d9200d5-8f1c-46be-a802-995c7f58b754"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:08:30 crc kubenswrapper[5050]: I1211 14:08:30.392274 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d9200d5-8f1c-46be-a802-995c7f58b754-config-data" (OuterVolumeSpecName: "config-data") pod "2d9200d5-8f1c-46be-a802-995c7f58b754" (UID: "2d9200d5-8f1c-46be-a802-995c7f58b754"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:08:30 crc kubenswrapper[5050]: I1211 14:08:30.416744 5050 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d9200d5-8f1c-46be-a802-995c7f58b754-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 14:08:30 crc kubenswrapper[5050]: I1211 14:08:30.416802 5050 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d9200d5-8f1c-46be-a802-995c7f58b754-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 14:08:30 crc kubenswrapper[5050]: I1211 14:08:30.439837 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-7kvxk" event={"ID":"2d9200d5-8f1c-46be-a802-995c7f58b754","Type":"ContainerDied","Data":"1d9c5ca984053216d7d1231a28e67f27bc2f8619d7a076726c333375cf4c47ed"} Dec 11 14:08:30 crc kubenswrapper[5050]: I1211 14:08:30.439875 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1d9c5ca984053216d7d1231a28e67f27bc2f8619d7a076726c333375cf4c47ed" Dec 11 14:08:30 crc kubenswrapper[5050]: I1211 14:08:30.440028 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-7kvxk" Dec 11 14:08:30 crc kubenswrapper[5050]: I1211 14:08:30.554536 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6fb64b5f76-9r6t7" Dec 11 14:08:30 crc kubenswrapper[5050]: I1211 14:08:30.743670 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Dec 11 14:08:30 crc kubenswrapper[5050]: E1211 14:08:30.744538 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d9200d5-8f1c-46be-a802-995c7f58b754" containerName="cinder-db-sync" Dec 11 14:08:30 crc kubenswrapper[5050]: I1211 14:08:30.744556 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d9200d5-8f1c-46be-a802-995c7f58b754" containerName="cinder-db-sync" Dec 11 14:08:30 crc kubenswrapper[5050]: I1211 14:08:30.744824 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d9200d5-8f1c-46be-a802-995c7f58b754" containerName="cinder-db-sync" Dec 11 14:08:30 crc kubenswrapper[5050]: I1211 14:08:30.746377 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 11 14:08:30 crc kubenswrapper[5050]: I1211 14:08:30.749293 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Dec 11 14:08:30 crc kubenswrapper[5050]: I1211 14:08:30.751638 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-9ptxk" Dec 11 14:08:30 crc kubenswrapper[5050]: I1211 14:08:30.751950 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Dec 11 14:08:30 crc kubenswrapper[5050]: I1211 14:08:30.754243 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Dec 11 14:08:30 crc kubenswrapper[5050]: I1211 14:08:30.801610 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 11 14:08:30 crc kubenswrapper[5050]: I1211 14:08:30.829323 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55xnh\" (UniqueName: \"kubernetes.io/projected/98804b0e-1bb7-4817-9c3b-25f3101a9aac-kube-api-access-55xnh\") pod \"cinder-scheduler-0\" (UID: \"98804b0e-1bb7-4817-9c3b-25f3101a9aac\") " pod="openstack/cinder-scheduler-0" Dec 11 14:08:30 crc kubenswrapper[5050]: I1211 14:08:30.829407 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/98804b0e-1bb7-4817-9c3b-25f3101a9aac-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"98804b0e-1bb7-4817-9c3b-25f3101a9aac\") " pod="openstack/cinder-scheduler-0" Dec 11 14:08:30 crc kubenswrapper[5050]: I1211 14:08:30.829447 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/98804b0e-1bb7-4817-9c3b-25f3101a9aac-scripts\") pod \"cinder-scheduler-0\" (UID: \"98804b0e-1bb7-4817-9c3b-25f3101a9aac\") " pod="openstack/cinder-scheduler-0" Dec 11 14:08:30 crc kubenswrapper[5050]: I1211 14:08:30.829486 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98804b0e-1bb7-4817-9c3b-25f3101a9aac-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"98804b0e-1bb7-4817-9c3b-25f3101a9aac\") " pod="openstack/cinder-scheduler-0" Dec 11 14:08:30 crc kubenswrapper[5050]: I1211 14:08:30.829512 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98804b0e-1bb7-4817-9c3b-25f3101a9aac-config-data\") pod \"cinder-scheduler-0\" (UID: \"98804b0e-1bb7-4817-9c3b-25f3101a9aac\") " pod="openstack/cinder-scheduler-0" Dec 11 14:08:30 crc kubenswrapper[5050]: I1211 14:08:30.829543 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/98804b0e-1bb7-4817-9c3b-25f3101a9aac-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"98804b0e-1bb7-4817-9c3b-25f3101a9aac\") " pod="openstack/cinder-scheduler-0" Dec 11 14:08:30 crc kubenswrapper[5050]: I1211 14:08:30.848924 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-75dbb546bf-676wg"] Dec 11 14:08:30 crc kubenswrapper[5050]: I1211 14:08:30.850543 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-75dbb546bf-676wg" Dec 11 14:08:30 crc kubenswrapper[5050]: I1211 14:08:30.899357 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-75dbb546bf-676wg"] Dec 11 14:08:30 crc kubenswrapper[5050]: I1211 14:08:30.934646 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98804b0e-1bb7-4817-9c3b-25f3101a9aac-config-data\") pod \"cinder-scheduler-0\" (UID: \"98804b0e-1bb7-4817-9c3b-25f3101a9aac\") " pod="openstack/cinder-scheduler-0" Dec 11 14:08:30 crc kubenswrapper[5050]: I1211 14:08:30.934700 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a0a350f-3ea9-4892-9964-47c591420d28-config\") pod \"dnsmasq-dns-75dbb546bf-676wg\" (UID: \"5a0a350f-3ea9-4892-9964-47c591420d28\") " pod="openstack/dnsmasq-dns-75dbb546bf-676wg" Dec 11 14:08:30 crc kubenswrapper[5050]: I1211 14:08:30.934728 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5a0a350f-3ea9-4892-9964-47c591420d28-dns-svc\") pod \"dnsmasq-dns-75dbb546bf-676wg\" (UID: \"5a0a350f-3ea9-4892-9964-47c591420d28\") " pod="openstack/dnsmasq-dns-75dbb546bf-676wg" Dec 11 14:08:30 crc kubenswrapper[5050]: I1211 14:08:30.934767 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/98804b0e-1bb7-4817-9c3b-25f3101a9aac-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"98804b0e-1bb7-4817-9c3b-25f3101a9aac\") " pod="openstack/cinder-scheduler-0" Dec 11 14:08:30 crc kubenswrapper[5050]: I1211 14:08:30.934814 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5a0a350f-3ea9-4892-9964-47c591420d28-ovsdbserver-nb\") pod \"dnsmasq-dns-75dbb546bf-676wg\" (UID: \"5a0a350f-3ea9-4892-9964-47c591420d28\") " pod="openstack/dnsmasq-dns-75dbb546bf-676wg" Dec 11 14:08:30 crc kubenswrapper[5050]: I1211 14:08:30.934892 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-59br4\" (UniqueName: \"kubernetes.io/projected/5a0a350f-3ea9-4892-9964-47c591420d28-kube-api-access-59br4\") pod \"dnsmasq-dns-75dbb546bf-676wg\" (UID: \"5a0a350f-3ea9-4892-9964-47c591420d28\") " pod="openstack/dnsmasq-dns-75dbb546bf-676wg" Dec 11 14:08:30 crc kubenswrapper[5050]: I1211 14:08:30.934946 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-55xnh\" (UniqueName: \"kubernetes.io/projected/98804b0e-1bb7-4817-9c3b-25f3101a9aac-kube-api-access-55xnh\") pod \"cinder-scheduler-0\" (UID: \"98804b0e-1bb7-4817-9c3b-25f3101a9aac\") " pod="openstack/cinder-scheduler-0" Dec 11 14:08:30 crc kubenswrapper[5050]: I1211 14:08:30.934997 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5a0a350f-3ea9-4892-9964-47c591420d28-dns-swift-storage-0\") pod \"dnsmasq-dns-75dbb546bf-676wg\" (UID: \"5a0a350f-3ea9-4892-9964-47c591420d28\") " pod="openstack/dnsmasq-dns-75dbb546bf-676wg" Dec 11 14:08:30 crc kubenswrapper[5050]: I1211 14:08:30.935057 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" 
(UniqueName: \"kubernetes.io/secret/98804b0e-1bb7-4817-9c3b-25f3101a9aac-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"98804b0e-1bb7-4817-9c3b-25f3101a9aac\") " pod="openstack/cinder-scheduler-0" Dec 11 14:08:30 crc kubenswrapper[5050]: I1211 14:08:30.935087 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5a0a350f-3ea9-4892-9964-47c591420d28-ovsdbserver-sb\") pod \"dnsmasq-dns-75dbb546bf-676wg\" (UID: \"5a0a350f-3ea9-4892-9964-47c591420d28\") " pod="openstack/dnsmasq-dns-75dbb546bf-676wg" Dec 11 14:08:30 crc kubenswrapper[5050]: I1211 14:08:30.935123 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/98804b0e-1bb7-4817-9c3b-25f3101a9aac-scripts\") pod \"cinder-scheduler-0\" (UID: \"98804b0e-1bb7-4817-9c3b-25f3101a9aac\") " pod="openstack/cinder-scheduler-0" Dec 11 14:08:30 crc kubenswrapper[5050]: I1211 14:08:30.935180 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98804b0e-1bb7-4817-9c3b-25f3101a9aac-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"98804b0e-1bb7-4817-9c3b-25f3101a9aac\") " pod="openstack/cinder-scheduler-0" Dec 11 14:08:30 crc kubenswrapper[5050]: I1211 14:08:30.936324 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/98804b0e-1bb7-4817-9c3b-25f3101a9aac-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"98804b0e-1bb7-4817-9c3b-25f3101a9aac\") " pod="openstack/cinder-scheduler-0" Dec 11 14:08:30 crc kubenswrapper[5050]: I1211 14:08:30.943417 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98804b0e-1bb7-4817-9c3b-25f3101a9aac-config-data\") pod \"cinder-scheduler-0\" (UID: \"98804b0e-1bb7-4817-9c3b-25f3101a9aac\") " pod="openstack/cinder-scheduler-0" Dec 11 14:08:30 crc kubenswrapper[5050]: I1211 14:08:30.944530 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/98804b0e-1bb7-4817-9c3b-25f3101a9aac-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"98804b0e-1bb7-4817-9c3b-25f3101a9aac\") " pod="openstack/cinder-scheduler-0" Dec 11 14:08:30 crc kubenswrapper[5050]: I1211 14:08:30.945769 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/98804b0e-1bb7-4817-9c3b-25f3101a9aac-scripts\") pod \"cinder-scheduler-0\" (UID: \"98804b0e-1bb7-4817-9c3b-25f3101a9aac\") " pod="openstack/cinder-scheduler-0" Dec 11 14:08:30 crc kubenswrapper[5050]: I1211 14:08:30.955634 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98804b0e-1bb7-4817-9c3b-25f3101a9aac-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"98804b0e-1bb7-4817-9c3b-25f3101a9aac\") " pod="openstack/cinder-scheduler-0" Dec 11 14:08:30 crc kubenswrapper[5050]: I1211 14:08:30.959574 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-55xnh\" (UniqueName: \"kubernetes.io/projected/98804b0e-1bb7-4817-9c3b-25f3101a9aac-kube-api-access-55xnh\") pod \"cinder-scheduler-0\" (UID: \"98804b0e-1bb7-4817-9c3b-25f3101a9aac\") " pod="openstack/cinder-scheduler-0" Dec 11 14:08:31 crc kubenswrapper[5050]: I1211 
14:08:31.036823 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a0a350f-3ea9-4892-9964-47c591420d28-config\") pod \"dnsmasq-dns-75dbb546bf-676wg\" (UID: \"5a0a350f-3ea9-4892-9964-47c591420d28\") " pod="openstack/dnsmasq-dns-75dbb546bf-676wg" Dec 11 14:08:31 crc kubenswrapper[5050]: I1211 14:08:31.036888 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5a0a350f-3ea9-4892-9964-47c591420d28-dns-svc\") pod \"dnsmasq-dns-75dbb546bf-676wg\" (UID: \"5a0a350f-3ea9-4892-9964-47c591420d28\") " pod="openstack/dnsmasq-dns-75dbb546bf-676wg" Dec 11 14:08:31 crc kubenswrapper[5050]: I1211 14:08:31.036949 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5a0a350f-3ea9-4892-9964-47c591420d28-ovsdbserver-nb\") pod \"dnsmasq-dns-75dbb546bf-676wg\" (UID: \"5a0a350f-3ea9-4892-9964-47c591420d28\") " pod="openstack/dnsmasq-dns-75dbb546bf-676wg" Dec 11 14:08:31 crc kubenswrapper[5050]: I1211 14:08:31.036988 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-59br4\" (UniqueName: \"kubernetes.io/projected/5a0a350f-3ea9-4892-9964-47c591420d28-kube-api-access-59br4\") pod \"dnsmasq-dns-75dbb546bf-676wg\" (UID: \"5a0a350f-3ea9-4892-9964-47c591420d28\") " pod="openstack/dnsmasq-dns-75dbb546bf-676wg" Dec 11 14:08:31 crc kubenswrapper[5050]: I1211 14:08:31.037199 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5a0a350f-3ea9-4892-9964-47c591420d28-dns-swift-storage-0\") pod \"dnsmasq-dns-75dbb546bf-676wg\" (UID: \"5a0a350f-3ea9-4892-9964-47c591420d28\") " pod="openstack/dnsmasq-dns-75dbb546bf-676wg" Dec 11 14:08:31 crc kubenswrapper[5050]: I1211 14:08:31.037273 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5a0a350f-3ea9-4892-9964-47c591420d28-ovsdbserver-sb\") pod \"dnsmasq-dns-75dbb546bf-676wg\" (UID: \"5a0a350f-3ea9-4892-9964-47c591420d28\") " pod="openstack/dnsmasq-dns-75dbb546bf-676wg" Dec 11 14:08:31 crc kubenswrapper[5050]: I1211 14:08:31.038462 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5a0a350f-3ea9-4892-9964-47c591420d28-ovsdbserver-nb\") pod \"dnsmasq-dns-75dbb546bf-676wg\" (UID: \"5a0a350f-3ea9-4892-9964-47c591420d28\") " pod="openstack/dnsmasq-dns-75dbb546bf-676wg" Dec 11 14:08:31 crc kubenswrapper[5050]: I1211 14:08:31.038487 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5a0a350f-3ea9-4892-9964-47c591420d28-ovsdbserver-sb\") pod \"dnsmasq-dns-75dbb546bf-676wg\" (UID: \"5a0a350f-3ea9-4892-9964-47c591420d28\") " pod="openstack/dnsmasq-dns-75dbb546bf-676wg" Dec 11 14:08:31 crc kubenswrapper[5050]: I1211 14:08:31.039059 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5a0a350f-3ea9-4892-9964-47c591420d28-dns-svc\") pod \"dnsmasq-dns-75dbb546bf-676wg\" (UID: \"5a0a350f-3ea9-4892-9964-47c591420d28\") " pod="openstack/dnsmasq-dns-75dbb546bf-676wg" Dec 11 14:08:31 crc kubenswrapper[5050]: I1211 14:08:31.039598 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/5a0a350f-3ea9-4892-9964-47c591420d28-config\") pod \"dnsmasq-dns-75dbb546bf-676wg\" (UID: \"5a0a350f-3ea9-4892-9964-47c591420d28\") " pod="openstack/dnsmasq-dns-75dbb546bf-676wg" Dec 11 14:08:31 crc kubenswrapper[5050]: I1211 14:08:31.039763 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5a0a350f-3ea9-4892-9964-47c591420d28-dns-swift-storage-0\") pod \"dnsmasq-dns-75dbb546bf-676wg\" (UID: \"5a0a350f-3ea9-4892-9964-47c591420d28\") " pod="openstack/dnsmasq-dns-75dbb546bf-676wg" Dec 11 14:08:31 crc kubenswrapper[5050]: I1211 14:08:31.042403 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Dec 11 14:08:31 crc kubenswrapper[5050]: I1211 14:08:31.045272 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Dec 11 14:08:31 crc kubenswrapper[5050]: I1211 14:08:31.052422 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Dec 11 14:08:31 crc kubenswrapper[5050]: I1211 14:08:31.072606 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 11 14:08:31 crc kubenswrapper[5050]: I1211 14:08:31.073638 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Dec 11 14:08:31 crc kubenswrapper[5050]: I1211 14:08:31.086063 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-59br4\" (UniqueName: \"kubernetes.io/projected/5a0a350f-3ea9-4892-9964-47c591420d28-kube-api-access-59br4\") pod \"dnsmasq-dns-75dbb546bf-676wg\" (UID: \"5a0a350f-3ea9-4892-9964-47c591420d28\") " pod="openstack/dnsmasq-dns-75dbb546bf-676wg" Dec 11 14:08:31 crc kubenswrapper[5050]: I1211 14:08:31.139447 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3c3d321f-16aa-4789-87f5-3d5d54f2be30-scripts\") pod \"cinder-api-0\" (UID: \"3c3d321f-16aa-4789-87f5-3d5d54f2be30\") " pod="openstack/cinder-api-0" Dec 11 14:08:31 crc kubenswrapper[5050]: I1211 14:08:31.139519 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3c3d321f-16aa-4789-87f5-3d5d54f2be30-logs\") pod \"cinder-api-0\" (UID: \"3c3d321f-16aa-4789-87f5-3d5d54f2be30\") " pod="openstack/cinder-api-0" Dec 11 14:08:31 crc kubenswrapper[5050]: I1211 14:08:31.139543 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c3d321f-16aa-4789-87f5-3d5d54f2be30-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"3c3d321f-16aa-4789-87f5-3d5d54f2be30\") " pod="openstack/cinder-api-0" Dec 11 14:08:31 crc kubenswrapper[5050]: I1211 14:08:31.139739 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nlbms\" (UniqueName: \"kubernetes.io/projected/3c3d321f-16aa-4789-87f5-3d5d54f2be30-kube-api-access-nlbms\") pod \"cinder-api-0\" (UID: \"3c3d321f-16aa-4789-87f5-3d5d54f2be30\") " pod="openstack/cinder-api-0" Dec 11 14:08:31 crc kubenswrapper[5050]: I1211 14:08:31.139824 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/3c3d321f-16aa-4789-87f5-3d5d54f2be30-etc-machine-id\") pod \"cinder-api-0\" (UID: \"3c3d321f-16aa-4789-87f5-3d5d54f2be30\") " pod="openstack/cinder-api-0" Dec 11 14:08:31 crc kubenswrapper[5050]: I1211 14:08:31.139962 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c3d321f-16aa-4789-87f5-3d5d54f2be30-config-data\") pod \"cinder-api-0\" (UID: \"3c3d321f-16aa-4789-87f5-3d5d54f2be30\") " pod="openstack/cinder-api-0" Dec 11 14:08:31 crc kubenswrapper[5050]: I1211 14:08:31.140341 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3c3d321f-16aa-4789-87f5-3d5d54f2be30-config-data-custom\") pod \"cinder-api-0\" (UID: \"3c3d321f-16aa-4789-87f5-3d5d54f2be30\") " pod="openstack/cinder-api-0" Dec 11 14:08:31 crc kubenswrapper[5050]: I1211 14:08:31.192426 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75dbb546bf-676wg" Dec 11 14:08:31 crc kubenswrapper[5050]: I1211 14:08:31.242750 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3c3d321f-16aa-4789-87f5-3d5d54f2be30-logs\") pod \"cinder-api-0\" (UID: \"3c3d321f-16aa-4789-87f5-3d5d54f2be30\") " pod="openstack/cinder-api-0" Dec 11 14:08:31 crc kubenswrapper[5050]: I1211 14:08:31.242805 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c3d321f-16aa-4789-87f5-3d5d54f2be30-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"3c3d321f-16aa-4789-87f5-3d5d54f2be30\") " pod="openstack/cinder-api-0" Dec 11 14:08:31 crc kubenswrapper[5050]: I1211 14:08:31.242831 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nlbms\" (UniqueName: \"kubernetes.io/projected/3c3d321f-16aa-4789-87f5-3d5d54f2be30-kube-api-access-nlbms\") pod \"cinder-api-0\" (UID: \"3c3d321f-16aa-4789-87f5-3d5d54f2be30\") " pod="openstack/cinder-api-0" Dec 11 14:08:31 crc kubenswrapper[5050]: I1211 14:08:31.242858 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3c3d321f-16aa-4789-87f5-3d5d54f2be30-etc-machine-id\") pod \"cinder-api-0\" (UID: \"3c3d321f-16aa-4789-87f5-3d5d54f2be30\") " pod="openstack/cinder-api-0" Dec 11 14:08:31 crc kubenswrapper[5050]: I1211 14:08:31.242886 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c3d321f-16aa-4789-87f5-3d5d54f2be30-config-data\") pod \"cinder-api-0\" (UID: \"3c3d321f-16aa-4789-87f5-3d5d54f2be30\") " pod="openstack/cinder-api-0" Dec 11 14:08:31 crc kubenswrapper[5050]: I1211 14:08:31.242934 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3c3d321f-16aa-4789-87f5-3d5d54f2be30-config-data-custom\") pod \"cinder-api-0\" (UID: \"3c3d321f-16aa-4789-87f5-3d5d54f2be30\") " pod="openstack/cinder-api-0" Dec 11 14:08:31 crc kubenswrapper[5050]: I1211 14:08:31.243023 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3c3d321f-16aa-4789-87f5-3d5d54f2be30-scripts\") pod \"cinder-api-0\" (UID: 
\"3c3d321f-16aa-4789-87f5-3d5d54f2be30\") " pod="openstack/cinder-api-0" Dec 11 14:08:31 crc kubenswrapper[5050]: I1211 14:08:31.243363 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3c3d321f-16aa-4789-87f5-3d5d54f2be30-etc-machine-id\") pod \"cinder-api-0\" (UID: \"3c3d321f-16aa-4789-87f5-3d5d54f2be30\") " pod="openstack/cinder-api-0" Dec 11 14:08:31 crc kubenswrapper[5050]: I1211 14:08:31.243362 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3c3d321f-16aa-4789-87f5-3d5d54f2be30-logs\") pod \"cinder-api-0\" (UID: \"3c3d321f-16aa-4789-87f5-3d5d54f2be30\") " pod="openstack/cinder-api-0" Dec 11 14:08:31 crc kubenswrapper[5050]: I1211 14:08:31.251309 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3c3d321f-16aa-4789-87f5-3d5d54f2be30-scripts\") pod \"cinder-api-0\" (UID: \"3c3d321f-16aa-4789-87f5-3d5d54f2be30\") " pod="openstack/cinder-api-0" Dec 11 14:08:31 crc kubenswrapper[5050]: I1211 14:08:31.251868 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c3d321f-16aa-4789-87f5-3d5d54f2be30-config-data\") pod \"cinder-api-0\" (UID: \"3c3d321f-16aa-4789-87f5-3d5d54f2be30\") " pod="openstack/cinder-api-0" Dec 11 14:08:31 crc kubenswrapper[5050]: I1211 14:08:31.253824 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3c3d321f-16aa-4789-87f5-3d5d54f2be30-config-data-custom\") pod \"cinder-api-0\" (UID: \"3c3d321f-16aa-4789-87f5-3d5d54f2be30\") " pod="openstack/cinder-api-0" Dec 11 14:08:31 crc kubenswrapper[5050]: I1211 14:08:31.259810 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c3d321f-16aa-4789-87f5-3d5d54f2be30-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"3c3d321f-16aa-4789-87f5-3d5d54f2be30\") " pod="openstack/cinder-api-0" Dec 11 14:08:31 crc kubenswrapper[5050]: I1211 14:08:31.266184 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nlbms\" (UniqueName: \"kubernetes.io/projected/3c3d321f-16aa-4789-87f5-3d5d54f2be30-kube-api-access-nlbms\") pod \"cinder-api-0\" (UID: \"3c3d321f-16aa-4789-87f5-3d5d54f2be30\") " pod="openstack/cinder-api-0" Dec 11 14:08:31 crc kubenswrapper[5050]: I1211 14:08:31.433098 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Dec 11 14:08:32 crc kubenswrapper[5050]: I1211 14:08:32.324168 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6fb64b5f76-9r6t7" Dec 11 14:08:33 crc kubenswrapper[5050]: I1211 14:08:33.187571 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Dec 11 14:08:34 crc kubenswrapper[5050]: E1211 14:08:34.827348 5050 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/ubi9/httpd-24@sha256:6b929971283d69f485a7d3e449fb5a3dd65d5a4de585c73419e776821d00062c" Dec 11 14:08:34 crc kubenswrapper[5050]: E1211 14:08:34.828444 5050 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:proxy-httpd,Image:registry.redhat.io/ubi9/httpd-24@sha256:6b929971283d69f485a7d3e449fb5a3dd65d5a4de585c73419e776821d00062c,Command:[/usr/sbin/httpd],Args:[-DFOREGROUND],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:proxy-httpd,HostPort:0,ContainerPort:3000,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf/httpd.conf,SubPath:httpd.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf.d/ssl.conf,SubPath:ssl.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:run-httpd,ReadOnly:false,MountPath:/run/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:log-httpd,ReadOnly:false,MountPath:/var/log/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tbpf7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(49e25755-0205-47fb-a88c-2f7a3291a687): ErrImagePull: rpc 
error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 11 14:08:34 crc kubenswrapper[5050]: E1211 14:08:34.830310 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openstack/ceilometer-0" podUID="49e25755-0205-47fb-a88c-2f7a3291a687" Dec 11 14:08:35 crc kubenswrapper[5050]: I1211 14:08:35.051377 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6f6f8cb849-rvxcv" Dec 11 14:08:35 crc kubenswrapper[5050]: I1211 14:08:35.165359 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-92d7x\" (UniqueName: \"kubernetes.io/projected/220e4ed5-e988-428e-b186-7a4231311831-kube-api-access-92d7x\") pod \"220e4ed5-e988-428e-b186-7a4231311831\" (UID: \"220e4ed5-e988-428e-b186-7a4231311831\") " Dec 11 14:08:35 crc kubenswrapper[5050]: I1211 14:08:35.165469 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/220e4ed5-e988-428e-b186-7a4231311831-dns-svc\") pod \"220e4ed5-e988-428e-b186-7a4231311831\" (UID: \"220e4ed5-e988-428e-b186-7a4231311831\") " Dec 11 14:08:35 crc kubenswrapper[5050]: I1211 14:08:35.165578 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/220e4ed5-e988-428e-b186-7a4231311831-config\") pod \"220e4ed5-e988-428e-b186-7a4231311831\" (UID: \"220e4ed5-e988-428e-b186-7a4231311831\") " Dec 11 14:08:35 crc kubenswrapper[5050]: I1211 14:08:35.165658 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/220e4ed5-e988-428e-b186-7a4231311831-ovsdbserver-sb\") pod \"220e4ed5-e988-428e-b186-7a4231311831\" (UID: \"220e4ed5-e988-428e-b186-7a4231311831\") " Dec 11 14:08:35 crc kubenswrapper[5050]: I1211 14:08:35.165685 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/220e4ed5-e988-428e-b186-7a4231311831-dns-swift-storage-0\") pod \"220e4ed5-e988-428e-b186-7a4231311831\" (UID: \"220e4ed5-e988-428e-b186-7a4231311831\") " Dec 11 14:08:35 crc kubenswrapper[5050]: I1211 14:08:35.167393 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/220e4ed5-e988-428e-b186-7a4231311831-ovsdbserver-nb\") pod \"220e4ed5-e988-428e-b186-7a4231311831\" (UID: \"220e4ed5-e988-428e-b186-7a4231311831\") " Dec 11 14:08:35 crc kubenswrapper[5050]: I1211 14:08:35.189601 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/220e4ed5-e988-428e-b186-7a4231311831-kube-api-access-92d7x" (OuterVolumeSpecName: "kube-api-access-92d7x") pod "220e4ed5-e988-428e-b186-7a4231311831" (UID: "220e4ed5-e988-428e-b186-7a4231311831"). InnerVolumeSpecName "kube-api-access-92d7x". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 14:08:35 crc kubenswrapper[5050]: I1211 14:08:35.256734 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/220e4ed5-e988-428e-b186-7a4231311831-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "220e4ed5-e988-428e-b186-7a4231311831" (UID: "220e4ed5-e988-428e-b186-7a4231311831"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 14:08:35 crc kubenswrapper[5050]: I1211 14:08:35.268589 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/220e4ed5-e988-428e-b186-7a4231311831-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "220e4ed5-e988-428e-b186-7a4231311831" (UID: "220e4ed5-e988-428e-b186-7a4231311831"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 14:08:35 crc kubenswrapper[5050]: I1211 14:08:35.275469 5050 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/220e4ed5-e988-428e-b186-7a4231311831-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 11 14:08:35 crc kubenswrapper[5050]: I1211 14:08:35.275535 5050 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/220e4ed5-e988-428e-b186-7a4231311831-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 11 14:08:35 crc kubenswrapper[5050]: I1211 14:08:35.275557 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-92d7x\" (UniqueName: \"kubernetes.io/projected/220e4ed5-e988-428e-b186-7a4231311831-kube-api-access-92d7x\") on node \"crc\" DevicePath \"\"" Dec 11 14:08:35 crc kubenswrapper[5050]: I1211 14:08:35.290535 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/220e4ed5-e988-428e-b186-7a4231311831-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "220e4ed5-e988-428e-b186-7a4231311831" (UID: "220e4ed5-e988-428e-b186-7a4231311831"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 14:08:35 crc kubenswrapper[5050]: I1211 14:08:35.301650 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/220e4ed5-e988-428e-b186-7a4231311831-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "220e4ed5-e988-428e-b186-7a4231311831" (UID: "220e4ed5-e988-428e-b186-7a4231311831"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 14:08:35 crc kubenswrapper[5050]: I1211 14:08:35.319273 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/220e4ed5-e988-428e-b186-7a4231311831-config" (OuterVolumeSpecName: "config") pod "220e4ed5-e988-428e-b186-7a4231311831" (UID: "220e4ed5-e988-428e-b186-7a4231311831"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 14:08:35 crc kubenswrapper[5050]: I1211 14:08:35.377229 5050 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/220e4ed5-e988-428e-b186-7a4231311831-config\") on node \"crc\" DevicePath \"\"" Dec 11 14:08:35 crc kubenswrapper[5050]: I1211 14:08:35.377273 5050 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/220e4ed5-e988-428e-b186-7a4231311831-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 11 14:08:35 crc kubenswrapper[5050]: I1211 14:08:35.377285 5050 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/220e4ed5-e988-428e-b186-7a4231311831-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 11 14:08:35 crc kubenswrapper[5050]: I1211 14:08:35.539537 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="49e25755-0205-47fb-a88c-2f7a3291a687" containerName="ceilometer-central-agent" containerID="cri-o://bc5bd4da507e5e98c22354d37317440ad1b08d0fdef5e93aee4a399f722d5c89" gracePeriod=30 Dec 11 14:08:35 crc kubenswrapper[5050]: I1211 14:08:35.539777 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6f6f8cb849-rvxcv" Dec 11 14:08:35 crc kubenswrapper[5050]: I1211 14:08:35.540755 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f6f8cb849-rvxcv" event={"ID":"220e4ed5-e988-428e-b186-7a4231311831","Type":"ContainerDied","Data":"fe7de1dece40429ea37cfacf13e4dbf438717340ffc11ecabbae16e6154dba0d"} Dec 11 14:08:35 crc kubenswrapper[5050]: I1211 14:08:35.540861 5050 scope.go:117] "RemoveContainer" containerID="6f9318782ca9007f6ad48d70d0b16e91ab76be1db28d346a1199392c0864541f" Dec 11 14:08:35 crc kubenswrapper[5050]: I1211 14:08:35.540825 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="49e25755-0205-47fb-a88c-2f7a3291a687" containerName="sg-core" containerID="cri-o://363bef5fc02a72922b8027ac00256b6492310726e090e9dab94b12db5a9c9a9e" gracePeriod=30 Dec 11 14:08:35 crc kubenswrapper[5050]: I1211 14:08:35.540885 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="49e25755-0205-47fb-a88c-2f7a3291a687" containerName="ceilometer-notification-agent" containerID="cri-o://7f6f169f1e21cd536cd2066b6883085b8233e7f19f2c348689687c417f9d7905" gracePeriod=30 Dec 11 14:08:35 crc kubenswrapper[5050]: I1211 14:08:35.603223 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-57f899fb58-v2lwj"] Dec 11 14:08:35 crc kubenswrapper[5050]: I1211 14:08:35.650540 5050 scope.go:117] "RemoveContainer" containerID="5a4b5f983fdbecf512731820ec94e59c7f16b943e0633a4e8536e9e64f2b792f" Dec 11 14:08:35 crc kubenswrapper[5050]: W1211 14:08:35.667635 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4ad4414f_ca3e_4ff4_9e2a_3ab029df2ebf.slice/crio-963f1743399fd3044bb51fa705a06e61782331edb27271c64f977e938ec02d43 WatchSource:0}: Error finding container 963f1743399fd3044bb51fa705a06e61782331edb27271c64f977e938ec02d43: Status 404 returned error can't find the container with id 963f1743399fd3044bb51fa705a06e61782331edb27271c64f977e938ec02d43 Dec 11 14:08:35 crc kubenswrapper[5050]: W1211 14:08:35.688843 5050 
manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod98804b0e_1bb7_4817_9c3b_25f3101a9aac.slice/crio-398550bc22ca63543b57db1e3b0a967ee658b968d80c1f9ff4566a9617e3282b WatchSource:0}: Error finding container 398550bc22ca63543b57db1e3b0a967ee658b968d80c1f9ff4566a9617e3282b: Status 404 returned error can't find the container with id 398550bc22ca63543b57db1e3b0a967ee658b968d80c1f9ff4566a9617e3282b Dec 11 14:08:35 crc kubenswrapper[5050]: I1211 14:08:35.696362 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6f6f8cb849-rvxcv"] Dec 11 14:08:35 crc kubenswrapper[5050]: I1211 14:08:35.712136 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6f6f8cb849-rvxcv"] Dec 11 14:08:35 crc kubenswrapper[5050]: W1211 14:08:35.712685 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3c3d321f_16aa_4789_87f5_3d5d54f2be30.slice/crio-677c5ff5d96a61d3d06097df117b9880dc108999baa844ac69e0252679ad4ee8 WatchSource:0}: Error finding container 677c5ff5d96a61d3d06097df117b9880dc108999baa844ac69e0252679ad4ee8: Status 404 returned error can't find the container with id 677c5ff5d96a61d3d06097df117b9880dc108999baa844ac69e0252679ad4ee8 Dec 11 14:08:35 crc kubenswrapper[5050]: I1211 14:08:35.728089 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 11 14:08:35 crc kubenswrapper[5050]: I1211 14:08:35.751698 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Dec 11 14:08:35 crc kubenswrapper[5050]: I1211 14:08:35.766505 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-75dbb546bf-676wg"] Dec 11 14:08:36 crc kubenswrapper[5050]: I1211 14:08:36.554432 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"3c3d321f-16aa-4789-87f5-3d5d54f2be30","Type":"ContainerStarted","Data":"677c5ff5d96a61d3d06097df117b9880dc108999baa844ac69e0252679ad4ee8"} Dec 11 14:08:36 crc kubenswrapper[5050]: I1211 14:08:36.556383 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75dbb546bf-676wg" event={"ID":"5a0a350f-3ea9-4892-9964-47c591420d28","Type":"ContainerStarted","Data":"cc52d54320148961756c4291b12beab28efb71c3a17d1fcdef4252354a1ee3a9"} Dec 11 14:08:36 crc kubenswrapper[5050]: I1211 14:08:36.558822 5050 generic.go:334] "Generic (PLEG): container finished" podID="49e25755-0205-47fb-a88c-2f7a3291a687" containerID="363bef5fc02a72922b8027ac00256b6492310726e090e9dab94b12db5a9c9a9e" exitCode=2 Dec 11 14:08:36 crc kubenswrapper[5050]: I1211 14:08:36.558880 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"49e25755-0205-47fb-a88c-2f7a3291a687","Type":"ContainerDied","Data":"363bef5fc02a72922b8027ac00256b6492310726e090e9dab94b12db5a9c9a9e"} Dec 11 14:08:36 crc kubenswrapper[5050]: I1211 14:08:36.561304 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-57f899fb58-v2lwj" event={"ID":"4ad4414f-ca3e-4ff4-9e2a-3ab029df2ebf","Type":"ContainerStarted","Data":"963f1743399fd3044bb51fa705a06e61782331edb27271c64f977e938ec02d43"} Dec 11 14:08:36 crc kubenswrapper[5050]: I1211 14:08:36.564383 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" 
event={"ID":"98804b0e-1bb7-4817-9c3b-25f3101a9aac","Type":"ContainerStarted","Data":"398550bc22ca63543b57db1e3b0a967ee658b968d80c1f9ff4566a9617e3282b"} Dec 11 14:08:37 crc kubenswrapper[5050]: I1211 14:08:37.562470 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="220e4ed5-e988-428e-b186-7a4231311831" path="/var/lib/kubelet/pods/220e4ed5-e988-428e-b186-7a4231311831/volumes" Dec 11 14:08:38 crc kubenswrapper[5050]: I1211 14:08:38.593057 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"3c3d321f-16aa-4789-87f5-3d5d54f2be30","Type":"ContainerStarted","Data":"7dd54d0b1083881060ea7b32dfadc3a16d5333ec68ac6f4cea282f6da888ac9d"} Dec 11 14:08:38 crc kubenswrapper[5050]: I1211 14:08:38.603197 5050 generic.go:334] "Generic (PLEG): container finished" podID="5a0a350f-3ea9-4892-9964-47c591420d28" containerID="9e88e10d32974fd1d135728b2b562bba9aa1d54734cb028869c1d07e6f96a31d" exitCode=0 Dec 11 14:08:38 crc kubenswrapper[5050]: I1211 14:08:38.604044 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75dbb546bf-676wg" event={"ID":"5a0a350f-3ea9-4892-9964-47c591420d28","Type":"ContainerDied","Data":"9e88e10d32974fd1d135728b2b562bba9aa1d54734cb028869c1d07e6f96a31d"} Dec 11 14:08:38 crc kubenswrapper[5050]: I1211 14:08:38.624909 5050 generic.go:334] "Generic (PLEG): container finished" podID="49e25755-0205-47fb-a88c-2f7a3291a687" containerID="7f6f169f1e21cd536cd2066b6883085b8233e7f19f2c348689687c417f9d7905" exitCode=0 Dec 11 14:08:38 crc kubenswrapper[5050]: I1211 14:08:38.624957 5050 generic.go:334] "Generic (PLEG): container finished" podID="49e25755-0205-47fb-a88c-2f7a3291a687" containerID="bc5bd4da507e5e98c22354d37317440ad1b08d0fdef5e93aee4a399f722d5c89" exitCode=0 Dec 11 14:08:38 crc kubenswrapper[5050]: I1211 14:08:38.625080 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"49e25755-0205-47fb-a88c-2f7a3291a687","Type":"ContainerDied","Data":"7f6f169f1e21cd536cd2066b6883085b8233e7f19f2c348689687c417f9d7905"} Dec 11 14:08:38 crc kubenswrapper[5050]: I1211 14:08:38.625111 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"49e25755-0205-47fb-a88c-2f7a3291a687","Type":"ContainerDied","Data":"bc5bd4da507e5e98c22354d37317440ad1b08d0fdef5e93aee4a399f722d5c89"} Dec 11 14:08:38 crc kubenswrapper[5050]: I1211 14:08:38.625121 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"49e25755-0205-47fb-a88c-2f7a3291a687","Type":"ContainerDied","Data":"3b15d68432c00d3d1c2be73a44464075776850bd495545f7ff3ff7265a20be3f"} Dec 11 14:08:38 crc kubenswrapper[5050]: I1211 14:08:38.625131 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3b15d68432c00d3d1c2be73a44464075776850bd495545f7ff3ff7265a20be3f" Dec 11 14:08:38 crc kubenswrapper[5050]: I1211 14:08:38.633795 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 11 14:08:38 crc kubenswrapper[5050]: I1211 14:08:38.634591 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-57f899fb58-v2lwj" event={"ID":"4ad4414f-ca3e-4ff4-9e2a-3ab029df2ebf","Type":"ContainerStarted","Data":"9048b99f225c02588f0acf6ab078d23ba9d748c49478356c85f61a74df87c960"} Dec 11 14:08:38 crc kubenswrapper[5050]: I1211 14:08:38.634674 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-57f899fb58-v2lwj" event={"ID":"4ad4414f-ca3e-4ff4-9e2a-3ab029df2ebf","Type":"ContainerStarted","Data":"887182e7bdf510cf5f8d29d8def14429f4899834fa471d481e28b9675086a309"} Dec 11 14:08:38 crc kubenswrapper[5050]: I1211 14:08:38.634868 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-57f899fb58-v2lwj" Dec 11 14:08:38 crc kubenswrapper[5050]: I1211 14:08:38.634952 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-57f899fb58-v2lwj" Dec 11 14:08:38 crc kubenswrapper[5050]: I1211 14:08:38.712294 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-57f899fb58-v2lwj" podStartSLOduration=13.712271345 podStartE2EDuration="13.712271345s" podCreationTimestamp="2025-12-11 14:08:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 14:08:38.68535617 +0000 UTC m=+1209.529078756" watchObservedRunningTime="2025-12-11 14:08:38.712271345 +0000 UTC m=+1209.555993931" Dec 11 14:08:38 crc kubenswrapper[5050]: I1211 14:08:38.764800 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/49e25755-0205-47fb-a88c-2f7a3291a687-scripts\") pod \"49e25755-0205-47fb-a88c-2f7a3291a687\" (UID: \"49e25755-0205-47fb-a88c-2f7a3291a687\") " Dec 11 14:08:38 crc kubenswrapper[5050]: I1211 14:08:38.765032 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49e25755-0205-47fb-a88c-2f7a3291a687-config-data\") pod \"49e25755-0205-47fb-a88c-2f7a3291a687\" (UID: \"49e25755-0205-47fb-a88c-2f7a3291a687\") " Dec 11 14:08:38 crc kubenswrapper[5050]: I1211 14:08:38.765105 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49e25755-0205-47fb-a88c-2f7a3291a687-combined-ca-bundle\") pod \"49e25755-0205-47fb-a88c-2f7a3291a687\" (UID: \"49e25755-0205-47fb-a88c-2f7a3291a687\") " Dec 11 14:08:38 crc kubenswrapper[5050]: I1211 14:08:38.765136 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/49e25755-0205-47fb-a88c-2f7a3291a687-log-httpd\") pod \"49e25755-0205-47fb-a88c-2f7a3291a687\" (UID: \"49e25755-0205-47fb-a88c-2f7a3291a687\") " Dec 11 14:08:38 crc kubenswrapper[5050]: I1211 14:08:38.765191 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tbpf7\" (UniqueName: \"kubernetes.io/projected/49e25755-0205-47fb-a88c-2f7a3291a687-kube-api-access-tbpf7\") pod \"49e25755-0205-47fb-a88c-2f7a3291a687\" (UID: \"49e25755-0205-47fb-a88c-2f7a3291a687\") " Dec 11 14:08:38 crc kubenswrapper[5050]: I1211 14:08:38.765258 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/49e25755-0205-47fb-a88c-2f7a3291a687-sg-core-conf-yaml\") pod \"49e25755-0205-47fb-a88c-2f7a3291a687\" (UID: \"49e25755-0205-47fb-a88c-2f7a3291a687\") " Dec 11 14:08:38 crc kubenswrapper[5050]: I1211 14:08:38.765284 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/49e25755-0205-47fb-a88c-2f7a3291a687-run-httpd\") pod \"49e25755-0205-47fb-a88c-2f7a3291a687\" (UID: \"49e25755-0205-47fb-a88c-2f7a3291a687\") " Dec 11 14:08:38 crc kubenswrapper[5050]: I1211 14:08:38.767751 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/49e25755-0205-47fb-a88c-2f7a3291a687-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "49e25755-0205-47fb-a88c-2f7a3291a687" (UID: "49e25755-0205-47fb-a88c-2f7a3291a687"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 14:08:38 crc kubenswrapper[5050]: I1211 14:08:38.770215 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/49e25755-0205-47fb-a88c-2f7a3291a687-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "49e25755-0205-47fb-a88c-2f7a3291a687" (UID: "49e25755-0205-47fb-a88c-2f7a3291a687"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 14:08:38 crc kubenswrapper[5050]: I1211 14:08:38.774770 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49e25755-0205-47fb-a88c-2f7a3291a687-kube-api-access-tbpf7" (OuterVolumeSpecName: "kube-api-access-tbpf7") pod "49e25755-0205-47fb-a88c-2f7a3291a687" (UID: "49e25755-0205-47fb-a88c-2f7a3291a687"). InnerVolumeSpecName "kube-api-access-tbpf7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 14:08:38 crc kubenswrapper[5050]: I1211 14:08:38.788946 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49e25755-0205-47fb-a88c-2f7a3291a687-scripts" (OuterVolumeSpecName: "scripts") pod "49e25755-0205-47fb-a88c-2f7a3291a687" (UID: "49e25755-0205-47fb-a88c-2f7a3291a687"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:08:38 crc kubenswrapper[5050]: I1211 14:08:38.867945 5050 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/49e25755-0205-47fb-a88c-2f7a3291a687-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 11 14:08:38 crc kubenswrapper[5050]: I1211 14:08:38.868001 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tbpf7\" (UniqueName: \"kubernetes.io/projected/49e25755-0205-47fb-a88c-2f7a3291a687-kube-api-access-tbpf7\") on node \"crc\" DevicePath \"\"" Dec 11 14:08:38 crc kubenswrapper[5050]: I1211 14:08:38.868032 5050 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/49e25755-0205-47fb-a88c-2f7a3291a687-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 11 14:08:38 crc kubenswrapper[5050]: I1211 14:08:38.868042 5050 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/49e25755-0205-47fb-a88c-2f7a3291a687-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 14:08:38 crc kubenswrapper[5050]: I1211 14:08:38.871734 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49e25755-0205-47fb-a88c-2f7a3291a687-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "49e25755-0205-47fb-a88c-2f7a3291a687" (UID: "49e25755-0205-47fb-a88c-2f7a3291a687"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:08:38 crc kubenswrapper[5050]: I1211 14:08:38.926475 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49e25755-0205-47fb-a88c-2f7a3291a687-config-data" (OuterVolumeSpecName: "config-data") pod "49e25755-0205-47fb-a88c-2f7a3291a687" (UID: "49e25755-0205-47fb-a88c-2f7a3291a687"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:08:38 crc kubenswrapper[5050]: I1211 14:08:38.929567 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49e25755-0205-47fb-a88c-2f7a3291a687-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "49e25755-0205-47fb-a88c-2f7a3291a687" (UID: "49e25755-0205-47fb-a88c-2f7a3291a687"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:08:38 crc kubenswrapper[5050]: I1211 14:08:38.969964 5050 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/49e25755-0205-47fb-a88c-2f7a3291a687-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 11 14:08:38 crc kubenswrapper[5050]: I1211 14:08:38.970270 5050 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49e25755-0205-47fb-a88c-2f7a3291a687-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 14:08:38 crc kubenswrapper[5050]: I1211 14:08:38.970331 5050 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49e25755-0205-47fb-a88c-2f7a3291a687-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 14:08:39 crc kubenswrapper[5050]: I1211 14:08:39.649996 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"98804b0e-1bb7-4817-9c3b-25f3101a9aac","Type":"ContainerStarted","Data":"3327c26fef8c852a99510e9773e793f91fc0d35c3abb557f940f65d1a1341d3a"} Dec 11 14:08:39 crc kubenswrapper[5050]: I1211 14:08:39.652677 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"3c3d321f-16aa-4789-87f5-3d5d54f2be30","Type":"ContainerStarted","Data":"bacc497a7091ad5c398a0eaf800ba5f2b65b322bae0d2d68a8be1b183b7f3d6f"} Dec 11 14:08:39 crc kubenswrapper[5050]: I1211 14:08:39.653608 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="3c3d321f-16aa-4789-87f5-3d5d54f2be30" containerName="cinder-api-log" containerID="cri-o://7dd54d0b1083881060ea7b32dfadc3a16d5333ec68ac6f4cea282f6da888ac9d" gracePeriod=30 Dec 11 14:08:39 crc kubenswrapper[5050]: I1211 14:08:39.654227 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="3c3d321f-16aa-4789-87f5-3d5d54f2be30" containerName="cinder-api" containerID="cri-o://bacc497a7091ad5c398a0eaf800ba5f2b65b322bae0d2d68a8be1b183b7f3d6f" gracePeriod=30 Dec 11 14:08:39 crc kubenswrapper[5050]: I1211 14:08:39.655597 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Dec 11 14:08:39 crc kubenswrapper[5050]: I1211 14:08:39.657036 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 11 14:08:39 crc kubenswrapper[5050]: I1211 14:08:39.657756 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75dbb546bf-676wg" event={"ID":"5a0a350f-3ea9-4892-9964-47c591420d28","Type":"ContainerStarted","Data":"05eef7d50464f5c72bd09f6313e1c614797a2a9afc826e468d2ca258effcfbed"} Dec 11 14:08:39 crc kubenswrapper[5050]: I1211 14:08:39.658176 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-75dbb546bf-676wg" Dec 11 14:08:39 crc kubenswrapper[5050]: I1211 14:08:39.683340 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=8.683307794 podStartE2EDuration="8.683307794s" podCreationTimestamp="2025-12-11 14:08:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 14:08:39.682922493 +0000 UTC m=+1210.526645079" watchObservedRunningTime="2025-12-11 14:08:39.683307794 +0000 UTC m=+1210.527030380" Dec 11 14:08:39 crc kubenswrapper[5050]: I1211 14:08:39.736003 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 11 14:08:39 crc kubenswrapper[5050]: I1211 14:08:39.752130 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 11 14:08:39 crc kubenswrapper[5050]: I1211 14:08:39.776074 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 11 14:08:39 crc kubenswrapper[5050]: E1211 14:08:39.776778 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49e25755-0205-47fb-a88c-2f7a3291a687" containerName="ceilometer-central-agent" Dec 11 14:08:39 crc kubenswrapper[5050]: I1211 14:08:39.776800 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="49e25755-0205-47fb-a88c-2f7a3291a687" containerName="ceilometer-central-agent" Dec 11 14:08:39 crc kubenswrapper[5050]: E1211 14:08:39.776818 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="220e4ed5-e988-428e-b186-7a4231311831" containerName="dnsmasq-dns" Dec 11 14:08:39 crc kubenswrapper[5050]: I1211 14:08:39.776826 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="220e4ed5-e988-428e-b186-7a4231311831" containerName="dnsmasq-dns" Dec 11 14:08:39 crc kubenswrapper[5050]: E1211 14:08:39.776842 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49e25755-0205-47fb-a88c-2f7a3291a687" containerName="sg-core" Dec 11 14:08:39 crc kubenswrapper[5050]: I1211 14:08:39.776851 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="49e25755-0205-47fb-a88c-2f7a3291a687" containerName="sg-core" Dec 11 14:08:39 crc kubenswrapper[5050]: E1211 14:08:39.776884 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49e25755-0205-47fb-a88c-2f7a3291a687" containerName="ceilometer-notification-agent" Dec 11 14:08:39 crc kubenswrapper[5050]: I1211 14:08:39.776891 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="49e25755-0205-47fb-a88c-2f7a3291a687" containerName="ceilometer-notification-agent" Dec 11 14:08:39 crc kubenswrapper[5050]: E1211 14:08:39.776912 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="220e4ed5-e988-428e-b186-7a4231311831" containerName="init" Dec 11 14:08:39 crc kubenswrapper[5050]: I1211 14:08:39.776918 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="220e4ed5-e988-428e-b186-7a4231311831" containerName="init" Dec 11 14:08:39 crc 
kubenswrapper[5050]: I1211 14:08:39.777188 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="220e4ed5-e988-428e-b186-7a4231311831" containerName="dnsmasq-dns" Dec 11 14:08:39 crc kubenswrapper[5050]: I1211 14:08:39.777211 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="49e25755-0205-47fb-a88c-2f7a3291a687" containerName="ceilometer-notification-agent" Dec 11 14:08:39 crc kubenswrapper[5050]: I1211 14:08:39.777236 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="49e25755-0205-47fb-a88c-2f7a3291a687" containerName="sg-core" Dec 11 14:08:39 crc kubenswrapper[5050]: I1211 14:08:39.777254 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="49e25755-0205-47fb-a88c-2f7a3291a687" containerName="ceilometer-central-agent" Dec 11 14:08:39 crc kubenswrapper[5050]: I1211 14:08:39.779676 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-75dbb546bf-676wg" podStartSLOduration=9.779644409 podStartE2EDuration="9.779644409s" podCreationTimestamp="2025-12-11 14:08:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 14:08:39.762724213 +0000 UTC m=+1210.606446799" watchObservedRunningTime="2025-12-11 14:08:39.779644409 +0000 UTC m=+1210.623366985" Dec 11 14:08:39 crc kubenswrapper[5050]: I1211 14:08:39.779979 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 11 14:08:39 crc kubenswrapper[5050]: I1211 14:08:39.796274 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 11 14:08:39 crc kubenswrapper[5050]: I1211 14:08:39.798681 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 11 14:08:39 crc kubenswrapper[5050]: I1211 14:08:39.805721 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 11 14:08:39 crc kubenswrapper[5050]: I1211 14:08:39.894374 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5399477c-8dc9-4a49-a264-4f41042a3db7-run-httpd\") pod \"ceilometer-0\" (UID: \"5399477c-8dc9-4a49-a264-4f41042a3db7\") " pod="openstack/ceilometer-0" Dec 11 14:08:39 crc kubenswrapper[5050]: I1211 14:08:39.894459 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5399477c-8dc9-4a49-a264-4f41042a3db7-scripts\") pod \"ceilometer-0\" (UID: \"5399477c-8dc9-4a49-a264-4f41042a3db7\") " pod="openstack/ceilometer-0" Dec 11 14:08:39 crc kubenswrapper[5050]: I1211 14:08:39.894646 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5399477c-8dc9-4a49-a264-4f41042a3db7-log-httpd\") pod \"ceilometer-0\" (UID: \"5399477c-8dc9-4a49-a264-4f41042a3db7\") " pod="openstack/ceilometer-0" Dec 11 14:08:39 crc kubenswrapper[5050]: I1211 14:08:39.894754 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkf8c\" (UniqueName: \"kubernetes.io/projected/5399477c-8dc9-4a49-a264-4f41042a3db7-kube-api-access-gkf8c\") pod \"ceilometer-0\" (UID: \"5399477c-8dc9-4a49-a264-4f41042a3db7\") " pod="openstack/ceilometer-0" Dec 11 14:08:39 crc kubenswrapper[5050]: I1211 
14:08:39.894811 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5399477c-8dc9-4a49-a264-4f41042a3db7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5399477c-8dc9-4a49-a264-4f41042a3db7\") " pod="openstack/ceilometer-0" Dec 11 14:08:39 crc kubenswrapper[5050]: I1211 14:08:39.895133 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5399477c-8dc9-4a49-a264-4f41042a3db7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5399477c-8dc9-4a49-a264-4f41042a3db7\") " pod="openstack/ceilometer-0" Dec 11 14:08:39 crc kubenswrapper[5050]: I1211 14:08:39.895601 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5399477c-8dc9-4a49-a264-4f41042a3db7-config-data\") pod \"ceilometer-0\" (UID: \"5399477c-8dc9-4a49-a264-4f41042a3db7\") " pod="openstack/ceilometer-0" Dec 11 14:08:40 crc kubenswrapper[5050]: I1211 14:08:40.007940 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5399477c-8dc9-4a49-a264-4f41042a3db7-run-httpd\") pod \"ceilometer-0\" (UID: \"5399477c-8dc9-4a49-a264-4f41042a3db7\") " pod="openstack/ceilometer-0" Dec 11 14:08:40 crc kubenswrapper[5050]: I1211 14:08:40.008620 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5399477c-8dc9-4a49-a264-4f41042a3db7-scripts\") pod \"ceilometer-0\" (UID: \"5399477c-8dc9-4a49-a264-4f41042a3db7\") " pod="openstack/ceilometer-0" Dec 11 14:08:40 crc kubenswrapper[5050]: I1211 14:08:40.008690 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5399477c-8dc9-4a49-a264-4f41042a3db7-log-httpd\") pod \"ceilometer-0\" (UID: \"5399477c-8dc9-4a49-a264-4f41042a3db7\") " pod="openstack/ceilometer-0" Dec 11 14:08:40 crc kubenswrapper[5050]: I1211 14:08:40.008758 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gkf8c\" (UniqueName: \"kubernetes.io/projected/5399477c-8dc9-4a49-a264-4f41042a3db7-kube-api-access-gkf8c\") pod \"ceilometer-0\" (UID: \"5399477c-8dc9-4a49-a264-4f41042a3db7\") " pod="openstack/ceilometer-0" Dec 11 14:08:40 crc kubenswrapper[5050]: I1211 14:08:40.008815 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5399477c-8dc9-4a49-a264-4f41042a3db7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5399477c-8dc9-4a49-a264-4f41042a3db7\") " pod="openstack/ceilometer-0" Dec 11 14:08:40 crc kubenswrapper[5050]: I1211 14:08:40.008869 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5399477c-8dc9-4a49-a264-4f41042a3db7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5399477c-8dc9-4a49-a264-4f41042a3db7\") " pod="openstack/ceilometer-0" Dec 11 14:08:40 crc kubenswrapper[5050]: I1211 14:08:40.009131 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5399477c-8dc9-4a49-a264-4f41042a3db7-config-data\") pod \"ceilometer-0\" (UID: \"5399477c-8dc9-4a49-a264-4f41042a3db7\") " pod="openstack/ceilometer-0" 
Dec 11 14:08:40 crc kubenswrapper[5050]: I1211 14:08:40.011030 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5399477c-8dc9-4a49-a264-4f41042a3db7-log-httpd\") pod \"ceilometer-0\" (UID: \"5399477c-8dc9-4a49-a264-4f41042a3db7\") " pod="openstack/ceilometer-0" Dec 11 14:08:40 crc kubenswrapper[5050]: I1211 14:08:40.011601 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5399477c-8dc9-4a49-a264-4f41042a3db7-run-httpd\") pod \"ceilometer-0\" (UID: \"5399477c-8dc9-4a49-a264-4f41042a3db7\") " pod="openstack/ceilometer-0" Dec 11 14:08:40 crc kubenswrapper[5050]: I1211 14:08:40.018782 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5399477c-8dc9-4a49-a264-4f41042a3db7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5399477c-8dc9-4a49-a264-4f41042a3db7\") " pod="openstack/ceilometer-0" Dec 11 14:08:40 crc kubenswrapper[5050]: I1211 14:08:40.022045 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5399477c-8dc9-4a49-a264-4f41042a3db7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5399477c-8dc9-4a49-a264-4f41042a3db7\") " pod="openstack/ceilometer-0" Dec 11 14:08:40 crc kubenswrapper[5050]: I1211 14:08:40.028648 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5399477c-8dc9-4a49-a264-4f41042a3db7-scripts\") pod \"ceilometer-0\" (UID: \"5399477c-8dc9-4a49-a264-4f41042a3db7\") " pod="openstack/ceilometer-0" Dec 11 14:08:40 crc kubenswrapper[5050]: I1211 14:08:40.037067 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5399477c-8dc9-4a49-a264-4f41042a3db7-config-data\") pod \"ceilometer-0\" (UID: \"5399477c-8dc9-4a49-a264-4f41042a3db7\") " pod="openstack/ceilometer-0" Dec 11 14:08:40 crc kubenswrapper[5050]: I1211 14:08:40.066344 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gkf8c\" (UniqueName: \"kubernetes.io/projected/5399477c-8dc9-4a49-a264-4f41042a3db7-kube-api-access-gkf8c\") pod \"ceilometer-0\" (UID: \"5399477c-8dc9-4a49-a264-4f41042a3db7\") " pod="openstack/ceilometer-0" Dec 11 14:08:40 crc kubenswrapper[5050]: I1211 14:08:40.159658 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 11 14:08:40 crc kubenswrapper[5050]: I1211 14:08:40.673655 5050 generic.go:334] "Generic (PLEG): container finished" podID="3c3d321f-16aa-4789-87f5-3d5d54f2be30" containerID="bacc497a7091ad5c398a0eaf800ba5f2b65b322bae0d2d68a8be1b183b7f3d6f" exitCode=0 Dec 11 14:08:40 crc kubenswrapper[5050]: I1211 14:08:40.674310 5050 generic.go:334] "Generic (PLEG): container finished" podID="3c3d321f-16aa-4789-87f5-3d5d54f2be30" containerID="7dd54d0b1083881060ea7b32dfadc3a16d5333ec68ac6f4cea282f6da888ac9d" exitCode=143 Dec 11 14:08:40 crc kubenswrapper[5050]: I1211 14:08:40.673740 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"3c3d321f-16aa-4789-87f5-3d5d54f2be30","Type":"ContainerDied","Data":"bacc497a7091ad5c398a0eaf800ba5f2b65b322bae0d2d68a8be1b183b7f3d6f"} Dec 11 14:08:40 crc kubenswrapper[5050]: I1211 14:08:40.674407 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"3c3d321f-16aa-4789-87f5-3d5d54f2be30","Type":"ContainerDied","Data":"7dd54d0b1083881060ea7b32dfadc3a16d5333ec68ac6f4cea282f6da888ac9d"} Dec 11 14:08:40 crc kubenswrapper[5050]: I1211 14:08:40.674431 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"3c3d321f-16aa-4789-87f5-3d5d54f2be30","Type":"ContainerDied","Data":"677c5ff5d96a61d3d06097df117b9880dc108999baa844ac69e0252679ad4ee8"} Dec 11 14:08:40 crc kubenswrapper[5050]: I1211 14:08:40.674893 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="677c5ff5d96a61d3d06097df117b9880dc108999baa844ac69e0252679ad4ee8" Dec 11 14:08:40 crc kubenswrapper[5050]: I1211 14:08:40.676830 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"98804b0e-1bb7-4817-9c3b-25f3101a9aac","Type":"ContainerStarted","Data":"d2253f4f811e6b12c7609f14e7c48a111ca11f19f2bc2426f87d077eccb3562f"} Dec 11 14:08:40 crc kubenswrapper[5050]: I1211 14:08:40.711323 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Dec 11 14:08:40 crc kubenswrapper[5050]: I1211 14:08:40.712896 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=8.47974555 podStartE2EDuration="10.712885599s" podCreationTimestamp="2025-12-11 14:08:30 +0000 UTC" firstStartedPulling="2025-12-11 14:08:35.705642878 +0000 UTC m=+1206.549365464" lastFinishedPulling="2025-12-11 14:08:37.938782927 +0000 UTC m=+1208.782505513" observedRunningTime="2025-12-11 14:08:40.70363694 +0000 UTC m=+1211.547359536" watchObservedRunningTime="2025-12-11 14:08:40.712885599 +0000 UTC m=+1211.556608175" Dec 11 14:08:40 crc kubenswrapper[5050]: I1211 14:08:40.796787 5050 patch_prober.go:28] interesting pod/machine-config-daemon-wcb2s container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 14:08:40 crc kubenswrapper[5050]: I1211 14:08:40.796843 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 14:08:40 crc kubenswrapper[5050]: I1211 14:08:40.827115 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 11 14:08:40 crc kubenswrapper[5050]: I1211 14:08:40.835527 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c3d321f-16aa-4789-87f5-3d5d54f2be30-combined-ca-bundle\") pod \"3c3d321f-16aa-4789-87f5-3d5d54f2be30\" (UID: \"3c3d321f-16aa-4789-87f5-3d5d54f2be30\") " Dec 11 14:08:40 crc kubenswrapper[5050]: I1211 14:08:40.835996 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3c3d321f-16aa-4789-87f5-3d5d54f2be30-scripts\") pod \"3c3d321f-16aa-4789-87f5-3d5d54f2be30\" (UID: \"3c3d321f-16aa-4789-87f5-3d5d54f2be30\") " Dec 11 14:08:40 crc kubenswrapper[5050]: I1211 14:08:40.836080 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3c3d321f-16aa-4789-87f5-3d5d54f2be30-logs\") pod \"3c3d321f-16aa-4789-87f5-3d5d54f2be30\" (UID: \"3c3d321f-16aa-4789-87f5-3d5d54f2be30\") " Dec 11 14:08:40 crc kubenswrapper[5050]: I1211 14:08:40.836265 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3c3d321f-16aa-4789-87f5-3d5d54f2be30-etc-machine-id\") pod \"3c3d321f-16aa-4789-87f5-3d5d54f2be30\" (UID: \"3c3d321f-16aa-4789-87f5-3d5d54f2be30\") " Dec 11 14:08:40 crc kubenswrapper[5050]: I1211 14:08:40.836319 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3c3d321f-16aa-4789-87f5-3d5d54f2be30-config-data-custom\") pod \"3c3d321f-16aa-4789-87f5-3d5d54f2be30\" (UID: \"3c3d321f-16aa-4789-87f5-3d5d54f2be30\") " Dec 11 14:08:40 crc kubenswrapper[5050]: I1211 14:08:40.836362 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nlbms\" (UniqueName: 
\"kubernetes.io/projected/3c3d321f-16aa-4789-87f5-3d5d54f2be30-kube-api-access-nlbms\") pod \"3c3d321f-16aa-4789-87f5-3d5d54f2be30\" (UID: \"3c3d321f-16aa-4789-87f5-3d5d54f2be30\") " Dec 11 14:08:40 crc kubenswrapper[5050]: I1211 14:08:40.836418 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c3d321f-16aa-4789-87f5-3d5d54f2be30-config-data\") pod \"3c3d321f-16aa-4789-87f5-3d5d54f2be30\" (UID: \"3c3d321f-16aa-4789-87f5-3d5d54f2be30\") " Dec 11 14:08:40 crc kubenswrapper[5050]: I1211 14:08:40.838963 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3c3d321f-16aa-4789-87f5-3d5d54f2be30-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "3c3d321f-16aa-4789-87f5-3d5d54f2be30" (UID: "3c3d321f-16aa-4789-87f5-3d5d54f2be30"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 14:08:40 crc kubenswrapper[5050]: I1211 14:08:40.839673 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3c3d321f-16aa-4789-87f5-3d5d54f2be30-logs" (OuterVolumeSpecName: "logs") pod "3c3d321f-16aa-4789-87f5-3d5d54f2be30" (UID: "3c3d321f-16aa-4789-87f5-3d5d54f2be30"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 14:08:40 crc kubenswrapper[5050]: I1211 14:08:40.843382 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c3d321f-16aa-4789-87f5-3d5d54f2be30-scripts" (OuterVolumeSpecName: "scripts") pod "3c3d321f-16aa-4789-87f5-3d5d54f2be30" (UID: "3c3d321f-16aa-4789-87f5-3d5d54f2be30"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:08:40 crc kubenswrapper[5050]: I1211 14:08:40.845342 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c3d321f-16aa-4789-87f5-3d5d54f2be30-kube-api-access-nlbms" (OuterVolumeSpecName: "kube-api-access-nlbms") pod "3c3d321f-16aa-4789-87f5-3d5d54f2be30" (UID: "3c3d321f-16aa-4789-87f5-3d5d54f2be30"). InnerVolumeSpecName "kube-api-access-nlbms". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 14:08:40 crc kubenswrapper[5050]: I1211 14:08:40.847147 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c3d321f-16aa-4789-87f5-3d5d54f2be30-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "3c3d321f-16aa-4789-87f5-3d5d54f2be30" (UID: "3c3d321f-16aa-4789-87f5-3d5d54f2be30"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:08:40 crc kubenswrapper[5050]: I1211 14:08:40.874116 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c3d321f-16aa-4789-87f5-3d5d54f2be30-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3c3d321f-16aa-4789-87f5-3d5d54f2be30" (UID: "3c3d321f-16aa-4789-87f5-3d5d54f2be30"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:08:40 crc kubenswrapper[5050]: I1211 14:08:40.896608 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c3d321f-16aa-4789-87f5-3d5d54f2be30-config-data" (OuterVolumeSpecName: "config-data") pod "3c3d321f-16aa-4789-87f5-3d5d54f2be30" (UID: "3c3d321f-16aa-4789-87f5-3d5d54f2be30"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:08:40 crc kubenswrapper[5050]: I1211 14:08:40.941331 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nlbms\" (UniqueName: \"kubernetes.io/projected/3c3d321f-16aa-4789-87f5-3d5d54f2be30-kube-api-access-nlbms\") on node \"crc\" DevicePath \"\"" Dec 11 14:08:40 crc kubenswrapper[5050]: I1211 14:08:40.941367 5050 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c3d321f-16aa-4789-87f5-3d5d54f2be30-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 14:08:40 crc kubenswrapper[5050]: I1211 14:08:40.941379 5050 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c3d321f-16aa-4789-87f5-3d5d54f2be30-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 14:08:40 crc kubenswrapper[5050]: I1211 14:08:40.941389 5050 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3c3d321f-16aa-4789-87f5-3d5d54f2be30-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 14:08:40 crc kubenswrapper[5050]: I1211 14:08:40.941399 5050 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3c3d321f-16aa-4789-87f5-3d5d54f2be30-logs\") on node \"crc\" DevicePath \"\"" Dec 11 14:08:40 crc kubenswrapper[5050]: I1211 14:08:40.941408 5050 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3c3d321f-16aa-4789-87f5-3d5d54f2be30-etc-machine-id\") on node \"crc\" DevicePath \"\"" Dec 11 14:08:40 crc kubenswrapper[5050]: I1211 14:08:40.941417 5050 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3c3d321f-16aa-4789-87f5-3d5d54f2be30-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 11 14:08:41 crc kubenswrapper[5050]: I1211 14:08:41.074377 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Dec 11 14:08:41 crc kubenswrapper[5050]: I1211 14:08:41.569949 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49e25755-0205-47fb-a88c-2f7a3291a687" path="/var/lib/kubelet/pods/49e25755-0205-47fb-a88c-2f7a3291a687/volumes" Dec 11 14:08:41 crc kubenswrapper[5050]: I1211 14:08:41.693886 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5399477c-8dc9-4a49-a264-4f41042a3db7","Type":"ContainerStarted","Data":"63fb0ac378e6b7cde7188c22823010f69834feeaeead1089df533065e5482e10"} Dec 11 14:08:41 crc kubenswrapper[5050]: I1211 14:08:41.693943 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5399477c-8dc9-4a49-a264-4f41042a3db7","Type":"ContainerStarted","Data":"3120c8a74d8103d4cc3220c7e7375a48dac3c60786891af084f4d9147c222eef"} Dec 11 14:08:41 crc kubenswrapper[5050]: I1211 14:08:41.694043 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Dec 11 14:08:41 crc kubenswrapper[5050]: I1211 14:08:41.731614 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Dec 11 14:08:41 crc kubenswrapper[5050]: I1211 14:08:41.747441 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Dec 11 14:08:41 crc kubenswrapper[5050]: I1211 14:08:41.761190 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Dec 11 14:08:41 crc kubenswrapper[5050]: E1211 14:08:41.761854 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c3d321f-16aa-4789-87f5-3d5d54f2be30" containerName="cinder-api" Dec 11 14:08:41 crc kubenswrapper[5050]: I1211 14:08:41.761884 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c3d321f-16aa-4789-87f5-3d5d54f2be30" containerName="cinder-api" Dec 11 14:08:41 crc kubenswrapper[5050]: E1211 14:08:41.761938 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c3d321f-16aa-4789-87f5-3d5d54f2be30" containerName="cinder-api-log" Dec 11 14:08:41 crc kubenswrapper[5050]: I1211 14:08:41.761947 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c3d321f-16aa-4789-87f5-3d5d54f2be30" containerName="cinder-api-log" Dec 11 14:08:41 crc kubenswrapper[5050]: I1211 14:08:41.762216 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c3d321f-16aa-4789-87f5-3d5d54f2be30" containerName="cinder-api-log" Dec 11 14:08:41 crc kubenswrapper[5050]: I1211 14:08:41.762236 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c3d321f-16aa-4789-87f5-3d5d54f2be30" containerName="cinder-api" Dec 11 14:08:41 crc kubenswrapper[5050]: I1211 14:08:41.763731 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Dec 11 14:08:41 crc kubenswrapper[5050]: I1211 14:08:41.769488 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Dec 11 14:08:41 crc kubenswrapper[5050]: I1211 14:08:41.769877 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Dec 11 14:08:41 crc kubenswrapper[5050]: I1211 14:08:41.769990 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Dec 11 14:08:41 crc kubenswrapper[5050]: I1211 14:08:41.770985 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Dec 11 14:08:41 crc kubenswrapper[5050]: I1211 14:08:41.865485 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/003b423c-92a0-47f6-8358-003f3ad24ded-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"003b423c-92a0-47f6-8358-003f3ad24ded\") " pod="openstack/cinder-api-0" Dec 11 14:08:41 crc kubenswrapper[5050]: I1211 14:08:41.865556 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/003b423c-92a0-47f6-8358-003f3ad24ded-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"003b423c-92a0-47f6-8358-003f3ad24ded\") " pod="openstack/cinder-api-0" Dec 11 14:08:41 crc kubenswrapper[5050]: I1211 14:08:41.865612 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/003b423c-92a0-47f6-8358-003f3ad24ded-scripts\") pod \"cinder-api-0\" (UID: \"003b423c-92a0-47f6-8358-003f3ad24ded\") " pod="openstack/cinder-api-0" Dec 11 14:08:41 crc kubenswrapper[5050]: I1211 14:08:41.866148 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n52gw\" (UniqueName: \"kubernetes.io/projected/003b423c-92a0-47f6-8358-003f3ad24ded-kube-api-access-n52gw\") pod \"cinder-api-0\" (UID: \"003b423c-92a0-47f6-8358-003f3ad24ded\") " pod="openstack/cinder-api-0" Dec 11 14:08:41 crc kubenswrapper[5050]: I1211 14:08:41.866257 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/003b423c-92a0-47f6-8358-003f3ad24ded-config-data-custom\") pod \"cinder-api-0\" (UID: \"003b423c-92a0-47f6-8358-003f3ad24ded\") " pod="openstack/cinder-api-0" Dec 11 14:08:41 crc kubenswrapper[5050]: I1211 14:08:41.866417 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/003b423c-92a0-47f6-8358-003f3ad24ded-etc-machine-id\") pod \"cinder-api-0\" (UID: \"003b423c-92a0-47f6-8358-003f3ad24ded\") " pod="openstack/cinder-api-0" Dec 11 14:08:41 crc kubenswrapper[5050]: I1211 14:08:41.866496 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/003b423c-92a0-47f6-8358-003f3ad24ded-config-data\") pod \"cinder-api-0\" (UID: \"003b423c-92a0-47f6-8358-003f3ad24ded\") " pod="openstack/cinder-api-0" Dec 11 14:08:41 crc kubenswrapper[5050]: I1211 14:08:41.866817 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/003b423c-92a0-47f6-8358-003f3ad24ded-logs\") pod \"cinder-api-0\" (UID: \"003b423c-92a0-47f6-8358-003f3ad24ded\") " pod="openstack/cinder-api-0" Dec 11 14:08:41 crc kubenswrapper[5050]: I1211 14:08:41.867003 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/003b423c-92a0-47f6-8358-003f3ad24ded-public-tls-certs\") pod \"cinder-api-0\" (UID: \"003b423c-92a0-47f6-8358-003f3ad24ded\") " pod="openstack/cinder-api-0" Dec 11 14:08:41 crc kubenswrapper[5050]: I1211 14:08:41.969211 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/003b423c-92a0-47f6-8358-003f3ad24ded-logs\") pod \"cinder-api-0\" (UID: \"003b423c-92a0-47f6-8358-003f3ad24ded\") " pod="openstack/cinder-api-0" Dec 11 14:08:41 crc kubenswrapper[5050]: I1211 14:08:41.969609 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/003b423c-92a0-47f6-8358-003f3ad24ded-public-tls-certs\") pod \"cinder-api-0\" (UID: \"003b423c-92a0-47f6-8358-003f3ad24ded\") " pod="openstack/cinder-api-0" Dec 11 14:08:41 crc kubenswrapper[5050]: I1211 14:08:41.969649 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/003b423c-92a0-47f6-8358-003f3ad24ded-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"003b423c-92a0-47f6-8358-003f3ad24ded\") " pod="openstack/cinder-api-0" Dec 11 14:08:41 crc kubenswrapper[5050]: I1211 14:08:41.969673 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/003b423c-92a0-47f6-8358-003f3ad24ded-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"003b423c-92a0-47f6-8358-003f3ad24ded\") " pod="openstack/cinder-api-0" Dec 11 14:08:41 crc kubenswrapper[5050]: I1211 14:08:41.969701 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/003b423c-92a0-47f6-8358-003f3ad24ded-scripts\") pod \"cinder-api-0\" (UID: \"003b423c-92a0-47f6-8358-003f3ad24ded\") " pod="openstack/cinder-api-0" Dec 11 14:08:41 crc kubenswrapper[5050]: I1211 14:08:41.969718 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/003b423c-92a0-47f6-8358-003f3ad24ded-logs\") pod \"cinder-api-0\" (UID: \"003b423c-92a0-47f6-8358-003f3ad24ded\") " pod="openstack/cinder-api-0" Dec 11 14:08:41 crc kubenswrapper[5050]: I1211 14:08:41.969999 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n52gw\" (UniqueName: \"kubernetes.io/projected/003b423c-92a0-47f6-8358-003f3ad24ded-kube-api-access-n52gw\") pod \"cinder-api-0\" (UID: \"003b423c-92a0-47f6-8358-003f3ad24ded\") " pod="openstack/cinder-api-0" Dec 11 14:08:41 crc kubenswrapper[5050]: I1211 14:08:41.970141 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/003b423c-92a0-47f6-8358-003f3ad24ded-config-data-custom\") pod \"cinder-api-0\" (UID: \"003b423c-92a0-47f6-8358-003f3ad24ded\") " pod="openstack/cinder-api-0" Dec 11 14:08:41 crc kubenswrapper[5050]: I1211 14:08:41.970253 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/003b423c-92a0-47f6-8358-003f3ad24ded-etc-machine-id\") pod \"cinder-api-0\" (UID: \"003b423c-92a0-47f6-8358-003f3ad24ded\") " pod="openstack/cinder-api-0" Dec 11 14:08:41 crc kubenswrapper[5050]: I1211 14:08:41.970300 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/003b423c-92a0-47f6-8358-003f3ad24ded-config-data\") pod \"cinder-api-0\" (UID: \"003b423c-92a0-47f6-8358-003f3ad24ded\") " pod="openstack/cinder-api-0" Dec 11 14:08:41 crc kubenswrapper[5050]: I1211 14:08:41.971063 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/003b423c-92a0-47f6-8358-003f3ad24ded-etc-machine-id\") pod \"cinder-api-0\" (UID: \"003b423c-92a0-47f6-8358-003f3ad24ded\") " pod="openstack/cinder-api-0" Dec 11 14:08:41 crc kubenswrapper[5050]: I1211 14:08:41.975540 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/003b423c-92a0-47f6-8358-003f3ad24ded-config-data\") pod \"cinder-api-0\" (UID: \"003b423c-92a0-47f6-8358-003f3ad24ded\") " pod="openstack/cinder-api-0" Dec 11 14:08:41 crc kubenswrapper[5050]: I1211 14:08:41.975697 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/003b423c-92a0-47f6-8358-003f3ad24ded-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"003b423c-92a0-47f6-8358-003f3ad24ded\") " pod="openstack/cinder-api-0" Dec 11 14:08:41 crc kubenswrapper[5050]: I1211 14:08:41.975770 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/003b423c-92a0-47f6-8358-003f3ad24ded-scripts\") pod \"cinder-api-0\" (UID: \"003b423c-92a0-47f6-8358-003f3ad24ded\") " pod="openstack/cinder-api-0" Dec 11 14:08:41 crc kubenswrapper[5050]: I1211 14:08:41.975785 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/003b423c-92a0-47f6-8358-003f3ad24ded-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"003b423c-92a0-47f6-8358-003f3ad24ded\") " pod="openstack/cinder-api-0" Dec 11 14:08:41 crc kubenswrapper[5050]: I1211 14:08:41.977647 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/003b423c-92a0-47f6-8358-003f3ad24ded-public-tls-certs\") pod \"cinder-api-0\" (UID: \"003b423c-92a0-47f6-8358-003f3ad24ded\") " pod="openstack/cinder-api-0" Dec 11 14:08:41 crc kubenswrapper[5050]: I1211 14:08:41.984705 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/003b423c-92a0-47f6-8358-003f3ad24ded-config-data-custom\") pod \"cinder-api-0\" (UID: \"003b423c-92a0-47f6-8358-003f3ad24ded\") " pod="openstack/cinder-api-0" Dec 11 14:08:42 crc kubenswrapper[5050]: I1211 14:08:42.020710 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n52gw\" (UniqueName: \"kubernetes.io/projected/003b423c-92a0-47f6-8358-003f3ad24ded-kube-api-access-n52gw\") pod \"cinder-api-0\" (UID: \"003b423c-92a0-47f6-8358-003f3ad24ded\") " pod="openstack/cinder-api-0" Dec 11 14:08:42 crc kubenswrapper[5050]: I1211 14:08:42.083482 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Dec 11 14:08:42 crc kubenswrapper[5050]: I1211 14:08:42.598888 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Dec 11 14:08:42 crc kubenswrapper[5050]: I1211 14:08:42.707977 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"003b423c-92a0-47f6-8358-003f3ad24ded","Type":"ContainerStarted","Data":"cd6e59093115a20d14dadf8024c232fb89638e93979b549a2eb575875c007b09"} Dec 11 14:08:42 crc kubenswrapper[5050]: I1211 14:08:42.712782 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5399477c-8dc9-4a49-a264-4f41042a3db7","Type":"ContainerStarted","Data":"8a4145a00301fbe55da05988b615023c9bc4ec6cd930d63d261a2a1eb3aba7fb"} Dec 11 14:08:43 crc kubenswrapper[5050]: I1211 14:08:43.562176 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c3d321f-16aa-4789-87f5-3d5d54f2be30" path="/var/lib/kubelet/pods/3c3d321f-16aa-4789-87f5-3d5d54f2be30/volumes" Dec 11 14:08:43 crc kubenswrapper[5050]: I1211 14:08:43.739363 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"003b423c-92a0-47f6-8358-003f3ad24ded","Type":"ContainerStarted","Data":"80030e514e19d023c1bec72880044d75c75621951af814ba5560c38086dcbc3d"} Dec 11 14:08:44 crc kubenswrapper[5050]: I1211 14:08:44.751912 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"003b423c-92a0-47f6-8358-003f3ad24ded","Type":"ContainerStarted","Data":"204ac42ef63a05788b1880c5f6c33e7a413d56ea5c69370c5a87fa4d156de0ba"} Dec 11 14:08:44 crc kubenswrapper[5050]: I1211 14:08:44.752140 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Dec 11 14:08:44 crc kubenswrapper[5050]: I1211 14:08:44.755862 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5399477c-8dc9-4a49-a264-4f41042a3db7","Type":"ContainerStarted","Data":"d918e8d5709d4b3b1c3f78dbd33732a39b838846fe611ef223340c8358bf4c25"} Dec 11 14:08:44 crc kubenswrapper[5050]: I1211 14:08:44.783260 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.783227492 podStartE2EDuration="3.783227492s" podCreationTimestamp="2025-12-11 14:08:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 14:08:44.772724249 +0000 UTC m=+1215.616446835" watchObservedRunningTime="2025-12-11 14:08:44.783227492 +0000 UTC m=+1215.626950068" Dec 11 14:08:46 crc kubenswrapper[5050]: I1211 14:08:46.120232 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-78ccc9f8bd-jdg2t" Dec 11 14:08:46 crc kubenswrapper[5050]: I1211 14:08:46.121405 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-78ccc9f8bd-jdg2t" Dec 11 14:08:46 crc kubenswrapper[5050]: I1211 14:08:46.196273 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-75dbb546bf-676wg" Dec 11 14:08:46 crc kubenswrapper[5050]: I1211 14:08:46.314434 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-66cdd4b5b5-mnccb"] Dec 11 14:08:46 crc kubenswrapper[5050]: I1211 14:08:46.314696 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-66cdd4b5b5-mnccb" 
podUID="5bde6837-eef2-482a-81db-0fbba416e17d" containerName="dnsmasq-dns" containerID="cri-o://6bcb0ac63ada624c47aed3c4fbc915ad6e58ba49482564d13ca0a463139d6517" gracePeriod=10 Dec 11 14:08:46 crc kubenswrapper[5050]: I1211 14:08:46.588661 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Dec 11 14:08:46 crc kubenswrapper[5050]: I1211 14:08:46.672216 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 11 14:08:46 crc kubenswrapper[5050]: I1211 14:08:46.796446 5050 generic.go:334] "Generic (PLEG): container finished" podID="5bde6837-eef2-482a-81db-0fbba416e17d" containerID="6bcb0ac63ada624c47aed3c4fbc915ad6e58ba49482564d13ca0a463139d6517" exitCode=0 Dec 11 14:08:46 crc kubenswrapper[5050]: I1211 14:08:46.796527 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-66cdd4b5b5-mnccb" event={"ID":"5bde6837-eef2-482a-81db-0fbba416e17d","Type":"ContainerDied","Data":"6bcb0ac63ada624c47aed3c4fbc915ad6e58ba49482564d13ca0a463139d6517"} Dec 11 14:08:46 crc kubenswrapper[5050]: I1211 14:08:46.799726 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="98804b0e-1bb7-4817-9c3b-25f3101a9aac" containerName="cinder-scheduler" containerID="cri-o://3327c26fef8c852a99510e9773e793f91fc0d35c3abb557f940f65d1a1341d3a" gracePeriod=30 Dec 11 14:08:46 crc kubenswrapper[5050]: I1211 14:08:46.801333 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5399477c-8dc9-4a49-a264-4f41042a3db7","Type":"ContainerStarted","Data":"01421cfb1ea0141432e85b7482b48fb2a0ce5786d97235592c4db0982a1d9c7c"} Dec 11 14:08:46 crc kubenswrapper[5050]: I1211 14:08:46.801443 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 11 14:08:46 crc kubenswrapper[5050]: I1211 14:08:46.802500 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="98804b0e-1bb7-4817-9c3b-25f3101a9aac" containerName="probe" containerID="cri-o://d2253f4f811e6b12c7609f14e7c48a111ca11f19f2bc2426f87d077eccb3562f" gracePeriod=30 Dec 11 14:08:46 crc kubenswrapper[5050]: I1211 14:08:46.861711 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.737133842 podStartE2EDuration="7.861694635s" podCreationTimestamp="2025-12-11 14:08:39 +0000 UTC" firstStartedPulling="2025-12-11 14:08:40.835635096 +0000 UTC m=+1211.679357682" lastFinishedPulling="2025-12-11 14:08:45.960195879 +0000 UTC m=+1216.803918475" observedRunningTime="2025-12-11 14:08:46.857002748 +0000 UTC m=+1217.700725354" watchObservedRunningTime="2025-12-11 14:08:46.861694635 +0000 UTC m=+1217.705417221" Dec 11 14:08:47 crc kubenswrapper[5050]: I1211 14:08:47.093400 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-66cdd4b5b5-mnccb" Dec 11 14:08:47 crc kubenswrapper[5050]: I1211 14:08:47.195455 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6qw5t\" (UniqueName: \"kubernetes.io/projected/5bde6837-eef2-482a-81db-0fbba416e17d-kube-api-access-6qw5t\") pod \"5bde6837-eef2-482a-81db-0fbba416e17d\" (UID: \"5bde6837-eef2-482a-81db-0fbba416e17d\") " Dec 11 14:08:47 crc kubenswrapper[5050]: I1211 14:08:47.195678 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5bde6837-eef2-482a-81db-0fbba416e17d-dns-swift-storage-0\") pod \"5bde6837-eef2-482a-81db-0fbba416e17d\" (UID: \"5bde6837-eef2-482a-81db-0fbba416e17d\") " Dec 11 14:08:47 crc kubenswrapper[5050]: I1211 14:08:47.195748 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5bde6837-eef2-482a-81db-0fbba416e17d-ovsdbserver-sb\") pod \"5bde6837-eef2-482a-81db-0fbba416e17d\" (UID: \"5bde6837-eef2-482a-81db-0fbba416e17d\") " Dec 11 14:08:47 crc kubenswrapper[5050]: I1211 14:08:47.195776 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5bde6837-eef2-482a-81db-0fbba416e17d-dns-svc\") pod \"5bde6837-eef2-482a-81db-0fbba416e17d\" (UID: \"5bde6837-eef2-482a-81db-0fbba416e17d\") " Dec 11 14:08:47 crc kubenswrapper[5050]: I1211 14:08:47.195848 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5bde6837-eef2-482a-81db-0fbba416e17d-ovsdbserver-nb\") pod \"5bde6837-eef2-482a-81db-0fbba416e17d\" (UID: \"5bde6837-eef2-482a-81db-0fbba416e17d\") " Dec 11 14:08:47 crc kubenswrapper[5050]: I1211 14:08:47.195893 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5bde6837-eef2-482a-81db-0fbba416e17d-config\") pod \"5bde6837-eef2-482a-81db-0fbba416e17d\" (UID: \"5bde6837-eef2-482a-81db-0fbba416e17d\") " Dec 11 14:08:47 crc kubenswrapper[5050]: I1211 14:08:47.221660 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5bde6837-eef2-482a-81db-0fbba416e17d-kube-api-access-6qw5t" (OuterVolumeSpecName: "kube-api-access-6qw5t") pod "5bde6837-eef2-482a-81db-0fbba416e17d" (UID: "5bde6837-eef2-482a-81db-0fbba416e17d"). InnerVolumeSpecName "kube-api-access-6qw5t". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 14:08:47 crc kubenswrapper[5050]: I1211 14:08:47.292854 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5bde6837-eef2-482a-81db-0fbba416e17d-config" (OuterVolumeSpecName: "config") pod "5bde6837-eef2-482a-81db-0fbba416e17d" (UID: "5bde6837-eef2-482a-81db-0fbba416e17d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 14:08:47 crc kubenswrapper[5050]: I1211 14:08:47.301774 5050 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5bde6837-eef2-482a-81db-0fbba416e17d-config\") on node \"crc\" DevicePath \"\"" Dec 11 14:08:47 crc kubenswrapper[5050]: I1211 14:08:47.301812 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6qw5t\" (UniqueName: \"kubernetes.io/projected/5bde6837-eef2-482a-81db-0fbba416e17d-kube-api-access-6qw5t\") on node \"crc\" DevicePath \"\"" Dec 11 14:08:47 crc kubenswrapper[5050]: I1211 14:08:47.303667 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5bde6837-eef2-482a-81db-0fbba416e17d-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "5bde6837-eef2-482a-81db-0fbba416e17d" (UID: "5bde6837-eef2-482a-81db-0fbba416e17d"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 14:08:47 crc kubenswrapper[5050]: I1211 14:08:47.335733 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5bde6837-eef2-482a-81db-0fbba416e17d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5bde6837-eef2-482a-81db-0fbba416e17d" (UID: "5bde6837-eef2-482a-81db-0fbba416e17d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 14:08:47 crc kubenswrapper[5050]: I1211 14:08:47.347959 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5bde6837-eef2-482a-81db-0fbba416e17d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "5bde6837-eef2-482a-81db-0fbba416e17d" (UID: "5bde6837-eef2-482a-81db-0fbba416e17d"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 14:08:47 crc kubenswrapper[5050]: I1211 14:08:47.390183 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5bde6837-eef2-482a-81db-0fbba416e17d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "5bde6837-eef2-482a-81db-0fbba416e17d" (UID: "5bde6837-eef2-482a-81db-0fbba416e17d"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 14:08:47 crc kubenswrapper[5050]: I1211 14:08:47.405831 5050 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5bde6837-eef2-482a-81db-0fbba416e17d-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 11 14:08:47 crc kubenswrapper[5050]: I1211 14:08:47.405886 5050 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5bde6837-eef2-482a-81db-0fbba416e17d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 11 14:08:47 crc kubenswrapper[5050]: I1211 14:08:47.405903 5050 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5bde6837-eef2-482a-81db-0fbba416e17d-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 11 14:08:47 crc kubenswrapper[5050]: I1211 14:08:47.405916 5050 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5bde6837-eef2-482a-81db-0fbba416e17d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 11 14:08:47 crc kubenswrapper[5050]: I1211 14:08:47.840074 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-7d59d8d5d8-lw5w5" Dec 11 14:08:47 crc kubenswrapper[5050]: I1211 14:08:47.848590 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-66cdd4b5b5-mnccb" Dec 11 14:08:47 crc kubenswrapper[5050]: I1211 14:08:47.849454 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-66cdd4b5b5-mnccb" event={"ID":"5bde6837-eef2-482a-81db-0fbba416e17d","Type":"ContainerDied","Data":"09acf4a2d760b9dccb307b179b1c4e35f55bcbff22ac11c15c79f643d9c438e8"} Dec 11 14:08:47 crc kubenswrapper[5050]: I1211 14:08:47.849506 5050 scope.go:117] "RemoveContainer" containerID="6bcb0ac63ada624c47aed3c4fbc915ad6e58ba49482564d13ca0a463139d6517" Dec 11 14:08:47 crc kubenswrapper[5050]: I1211 14:08:47.905215 5050 scope.go:117] "RemoveContainer" containerID="0ba47719f866756238ec2cc0155f78576c2d8c7daa512b705c79dd5815cfa07e" Dec 11 14:08:47 crc kubenswrapper[5050]: I1211 14:08:47.918812 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-66cdd4b5b5-mnccb"] Dec 11 14:08:47 crc kubenswrapper[5050]: I1211 14:08:47.953154 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-66cdd4b5b5-mnccb"] Dec 11 14:08:48 crc kubenswrapper[5050]: I1211 14:08:48.874194 5050 generic.go:334] "Generic (PLEG): container finished" podID="98804b0e-1bb7-4817-9c3b-25f3101a9aac" containerID="d2253f4f811e6b12c7609f14e7c48a111ca11f19f2bc2426f87d077eccb3562f" exitCode=0 Dec 11 14:08:48 crc kubenswrapper[5050]: I1211 14:08:48.874375 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"98804b0e-1bb7-4817-9c3b-25f3101a9aac","Type":"ContainerDied","Data":"d2253f4f811e6b12c7609f14e7c48a111ca11f19f2bc2426f87d077eccb3562f"} Dec 11 14:08:48 crc kubenswrapper[5050]: I1211 14:08:48.940851 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-57f899fb58-v2lwj" Dec 11 14:08:49 crc kubenswrapper[5050]: I1211 14:08:49.136797 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-57f899fb58-v2lwj" Dec 11 14:08:49 crc kubenswrapper[5050]: I1211 14:08:49.217365 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/barbican-api-6fb64b5f76-9r6t7"] Dec 11 14:08:49 crc kubenswrapper[5050]: I1211 14:08:49.243495 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-6fb64b5f76-9r6t7" podUID="592e71d8-01fd-4db1-8292-938ded924711" containerName="barbican-api-log" containerID="cri-o://1e1d2fc36a977a9413352e3979b3d27f2bdc7a12c3c470c41cb2d0ca85721112" gracePeriod=30 Dec 11 14:08:49 crc kubenswrapper[5050]: I1211 14:08:49.244064 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-6fb64b5f76-9r6t7" podUID="592e71d8-01fd-4db1-8292-938ded924711" containerName="barbican-api" containerID="cri-o://5b8546c77d46556062f3ce9f749c83366abb63e015f143c4be0d0ad4bf950b7d" gracePeriod=30 Dec 11 14:08:49 crc kubenswrapper[5050]: I1211 14:08:49.594894 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5bde6837-eef2-482a-81db-0fbba416e17d" path="/var/lib/kubelet/pods/5bde6837-eef2-482a-81db-0fbba416e17d/volumes" Dec 11 14:08:49 crc kubenswrapper[5050]: E1211 14:08:49.612778 5050 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod592e71d8_01fd_4db1_8292_938ded924711.slice/crio-conmon-1e1d2fc36a977a9413352e3979b3d27f2bdc7a12c3c470c41cb2d0ca85721112.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod592e71d8_01fd_4db1_8292_938ded924711.slice/crio-1e1d2fc36a977a9413352e3979b3d27f2bdc7a12c3c470c41cb2d0ca85721112.scope\": RecentStats: unable to find data in memory cache]" Dec 11 14:08:49 crc kubenswrapper[5050]: I1211 14:08:49.893936 5050 generic.go:334] "Generic (PLEG): container finished" podID="592e71d8-01fd-4db1-8292-938ded924711" containerID="1e1d2fc36a977a9413352e3979b3d27f2bdc7a12c3c470c41cb2d0ca85721112" exitCode=143 Dec 11 14:08:49 crc kubenswrapper[5050]: I1211 14:08:49.895124 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6fb64b5f76-9r6t7" event={"ID":"592e71d8-01fd-4db1-8292-938ded924711","Type":"ContainerDied","Data":"1e1d2fc36a977a9413352e3979b3d27f2bdc7a12c3c470c41cb2d0ca85721112"} Dec 11 14:08:51 crc kubenswrapper[5050]: I1211 14:08:51.217352 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-7f54bc974d-nvhbp" Dec 11 14:08:51 crc kubenswrapper[5050]: I1211 14:08:51.616639 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 11 14:08:51 crc kubenswrapper[5050]: I1211 14:08:51.772607 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-55xnh\" (UniqueName: \"kubernetes.io/projected/98804b0e-1bb7-4817-9c3b-25f3101a9aac-kube-api-access-55xnh\") pod \"98804b0e-1bb7-4817-9c3b-25f3101a9aac\" (UID: \"98804b0e-1bb7-4817-9c3b-25f3101a9aac\") " Dec 11 14:08:51 crc kubenswrapper[5050]: I1211 14:08:51.772809 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98804b0e-1bb7-4817-9c3b-25f3101a9aac-config-data\") pod \"98804b0e-1bb7-4817-9c3b-25f3101a9aac\" (UID: \"98804b0e-1bb7-4817-9c3b-25f3101a9aac\") " Dec 11 14:08:51 crc kubenswrapper[5050]: I1211 14:08:51.772861 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/98804b0e-1bb7-4817-9c3b-25f3101a9aac-config-data-custom\") pod \"98804b0e-1bb7-4817-9c3b-25f3101a9aac\" (UID: \"98804b0e-1bb7-4817-9c3b-25f3101a9aac\") " Dec 11 14:08:51 crc kubenswrapper[5050]: I1211 14:08:51.772903 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/98804b0e-1bb7-4817-9c3b-25f3101a9aac-scripts\") pod \"98804b0e-1bb7-4817-9c3b-25f3101a9aac\" (UID: \"98804b0e-1bb7-4817-9c3b-25f3101a9aac\") " Dec 11 14:08:51 crc kubenswrapper[5050]: I1211 14:08:51.773097 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/98804b0e-1bb7-4817-9c3b-25f3101a9aac-etc-machine-id\") pod \"98804b0e-1bb7-4817-9c3b-25f3101a9aac\" (UID: \"98804b0e-1bb7-4817-9c3b-25f3101a9aac\") " Dec 11 14:08:51 crc kubenswrapper[5050]: I1211 14:08:51.773151 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98804b0e-1bb7-4817-9c3b-25f3101a9aac-combined-ca-bundle\") pod \"98804b0e-1bb7-4817-9c3b-25f3101a9aac\" (UID: \"98804b0e-1bb7-4817-9c3b-25f3101a9aac\") " Dec 11 14:08:51 crc kubenswrapper[5050]: I1211 14:08:51.773335 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/98804b0e-1bb7-4817-9c3b-25f3101a9aac-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "98804b0e-1bb7-4817-9c3b-25f3101a9aac" (UID: "98804b0e-1bb7-4817-9c3b-25f3101a9aac"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 14:08:51 crc kubenswrapper[5050]: I1211 14:08:51.786317 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98804b0e-1bb7-4817-9c3b-25f3101a9aac-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "98804b0e-1bb7-4817-9c3b-25f3101a9aac" (UID: "98804b0e-1bb7-4817-9c3b-25f3101a9aac"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:08:51 crc kubenswrapper[5050]: I1211 14:08:51.788206 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98804b0e-1bb7-4817-9c3b-25f3101a9aac-scripts" (OuterVolumeSpecName: "scripts") pod "98804b0e-1bb7-4817-9c3b-25f3101a9aac" (UID: "98804b0e-1bb7-4817-9c3b-25f3101a9aac"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:08:51 crc kubenswrapper[5050]: I1211 14:08:51.788346 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98804b0e-1bb7-4817-9c3b-25f3101a9aac-kube-api-access-55xnh" (OuterVolumeSpecName: "kube-api-access-55xnh") pod "98804b0e-1bb7-4817-9c3b-25f3101a9aac" (UID: "98804b0e-1bb7-4817-9c3b-25f3101a9aac"). InnerVolumeSpecName "kube-api-access-55xnh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 14:08:51 crc kubenswrapper[5050]: I1211 14:08:51.878348 5050 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/98804b0e-1bb7-4817-9c3b-25f3101a9aac-etc-machine-id\") on node \"crc\" DevicePath \"\"" Dec 11 14:08:51 crc kubenswrapper[5050]: I1211 14:08:51.878411 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-55xnh\" (UniqueName: \"kubernetes.io/projected/98804b0e-1bb7-4817-9c3b-25f3101a9aac-kube-api-access-55xnh\") on node \"crc\" DevicePath \"\"" Dec 11 14:08:51 crc kubenswrapper[5050]: I1211 14:08:51.878431 5050 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/98804b0e-1bb7-4817-9c3b-25f3101a9aac-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 11 14:08:51 crc kubenswrapper[5050]: I1211 14:08:51.878445 5050 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/98804b0e-1bb7-4817-9c3b-25f3101a9aac-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 14:08:51 crc kubenswrapper[5050]: I1211 14:08:51.882153 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98804b0e-1bb7-4817-9c3b-25f3101a9aac-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "98804b0e-1bb7-4817-9c3b-25f3101a9aac" (UID: "98804b0e-1bb7-4817-9c3b-25f3101a9aac"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:08:51 crc kubenswrapper[5050]: I1211 14:08:51.920442 5050 generic.go:334] "Generic (PLEG): container finished" podID="98804b0e-1bb7-4817-9c3b-25f3101a9aac" containerID="3327c26fef8c852a99510e9773e793f91fc0d35c3abb557f940f65d1a1341d3a" exitCode=0 Dec 11 14:08:51 crc kubenswrapper[5050]: I1211 14:08:51.920797 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"98804b0e-1bb7-4817-9c3b-25f3101a9aac","Type":"ContainerDied","Data":"3327c26fef8c852a99510e9773e793f91fc0d35c3abb557f940f65d1a1341d3a"} Dec 11 14:08:51 crc kubenswrapper[5050]: I1211 14:08:51.920894 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"98804b0e-1bb7-4817-9c3b-25f3101a9aac","Type":"ContainerDied","Data":"398550bc22ca63543b57db1e3b0a967ee658b968d80c1f9ff4566a9617e3282b"} Dec 11 14:08:51 crc kubenswrapper[5050]: I1211 14:08:51.920981 5050 scope.go:117] "RemoveContainer" containerID="d2253f4f811e6b12c7609f14e7c48a111ca11f19f2bc2426f87d077eccb3562f" Dec 11 14:08:51 crc kubenswrapper[5050]: I1211 14:08:51.921202 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 11 14:08:51 crc kubenswrapper[5050]: I1211 14:08:51.931935 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98804b0e-1bb7-4817-9c3b-25f3101a9aac-config-data" (OuterVolumeSpecName: "config-data") pod "98804b0e-1bb7-4817-9c3b-25f3101a9aac" (UID: "98804b0e-1bb7-4817-9c3b-25f3101a9aac"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:08:51 crc kubenswrapper[5050]: I1211 14:08:51.955778 5050 scope.go:117] "RemoveContainer" containerID="3327c26fef8c852a99510e9773e793f91fc0d35c3abb557f940f65d1a1341d3a" Dec 11 14:08:51 crc kubenswrapper[5050]: I1211 14:08:51.981184 5050 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98804b0e-1bb7-4817-9c3b-25f3101a9aac-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 14:08:51 crc kubenswrapper[5050]: I1211 14:08:51.981224 5050 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98804b0e-1bb7-4817-9c3b-25f3101a9aac-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 14:08:51 crc kubenswrapper[5050]: I1211 14:08:51.983034 5050 scope.go:117] "RemoveContainer" containerID="d2253f4f811e6b12c7609f14e7c48a111ca11f19f2bc2426f87d077eccb3562f" Dec 11 14:08:51 crc kubenswrapper[5050]: E1211 14:08:51.983710 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d2253f4f811e6b12c7609f14e7c48a111ca11f19f2bc2426f87d077eccb3562f\": container with ID starting with d2253f4f811e6b12c7609f14e7c48a111ca11f19f2bc2426f87d077eccb3562f not found: ID does not exist" containerID="d2253f4f811e6b12c7609f14e7c48a111ca11f19f2bc2426f87d077eccb3562f" Dec 11 14:08:51 crc kubenswrapper[5050]: I1211 14:08:51.983772 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d2253f4f811e6b12c7609f14e7c48a111ca11f19f2bc2426f87d077eccb3562f"} err="failed to get container status \"d2253f4f811e6b12c7609f14e7c48a111ca11f19f2bc2426f87d077eccb3562f\": rpc error: code = NotFound desc = could not find container \"d2253f4f811e6b12c7609f14e7c48a111ca11f19f2bc2426f87d077eccb3562f\": container with ID starting with d2253f4f811e6b12c7609f14e7c48a111ca11f19f2bc2426f87d077eccb3562f not found: ID does not exist" Dec 11 14:08:51 crc kubenswrapper[5050]: I1211 14:08:51.983865 5050 scope.go:117] "RemoveContainer" containerID="3327c26fef8c852a99510e9773e793f91fc0d35c3abb557f940f65d1a1341d3a" Dec 11 14:08:51 crc kubenswrapper[5050]: E1211 14:08:51.984285 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3327c26fef8c852a99510e9773e793f91fc0d35c3abb557f940f65d1a1341d3a\": container with ID starting with 3327c26fef8c852a99510e9773e793f91fc0d35c3abb557f940f65d1a1341d3a not found: ID does not exist" containerID="3327c26fef8c852a99510e9773e793f91fc0d35c3abb557f940f65d1a1341d3a" Dec 11 14:08:51 crc kubenswrapper[5050]: I1211 14:08:51.984322 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3327c26fef8c852a99510e9773e793f91fc0d35c3abb557f940f65d1a1341d3a"} err="failed to get container status \"3327c26fef8c852a99510e9773e793f91fc0d35c3abb557f940f65d1a1341d3a\": rpc error: code = NotFound desc = could not find container \"3327c26fef8c852a99510e9773e793f91fc0d35c3abb557f940f65d1a1341d3a\": container 
with ID starting with 3327c26fef8c852a99510e9773e793f91fc0d35c3abb557f940f65d1a1341d3a not found: ID does not exist" Dec 11 14:08:52 crc kubenswrapper[5050]: I1211 14:08:52.140682 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-7766777c65-2rcww" Dec 11 14:08:52 crc kubenswrapper[5050]: I1211 14:08:52.226089 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-7d59d8d5d8-lw5w5"] Dec 11 14:08:52 crc kubenswrapper[5050]: I1211 14:08:52.226836 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-7d59d8d5d8-lw5w5" podUID="bc5cca8e-233a-4621-87bc-d8c64a6d1d88" containerName="neutron-api" containerID="cri-o://80a0101f214c710057cbef55369ef6e367dd76aa31c8d93a145799639c0c1f38" gracePeriod=30 Dec 11 14:08:52 crc kubenswrapper[5050]: I1211 14:08:52.227825 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-7d59d8d5d8-lw5w5" podUID="bc5cca8e-233a-4621-87bc-d8c64a6d1d88" containerName="neutron-httpd" containerID="cri-o://b1a274e5b4234729c2aa4fa87751468ac7110151d81f59e14f5c7057a94c21fc" gracePeriod=30 Dec 11 14:08:52 crc kubenswrapper[5050]: I1211 14:08:52.284401 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 11 14:08:52 crc kubenswrapper[5050]: I1211 14:08:52.300597 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 11 14:08:52 crc kubenswrapper[5050]: I1211 14:08:52.327556 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Dec 11 14:08:52 crc kubenswrapper[5050]: E1211 14:08:52.330360 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5bde6837-eef2-482a-81db-0fbba416e17d" containerName="init" Dec 11 14:08:52 crc kubenswrapper[5050]: I1211 14:08:52.330404 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="5bde6837-eef2-482a-81db-0fbba416e17d" containerName="init" Dec 11 14:08:52 crc kubenswrapper[5050]: E1211 14:08:52.330435 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5bde6837-eef2-482a-81db-0fbba416e17d" containerName="dnsmasq-dns" Dec 11 14:08:52 crc kubenswrapper[5050]: I1211 14:08:52.330443 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="5bde6837-eef2-482a-81db-0fbba416e17d" containerName="dnsmasq-dns" Dec 11 14:08:52 crc kubenswrapper[5050]: E1211 14:08:52.330469 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98804b0e-1bb7-4817-9c3b-25f3101a9aac" containerName="cinder-scheduler" Dec 11 14:08:52 crc kubenswrapper[5050]: I1211 14:08:52.330477 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="98804b0e-1bb7-4817-9c3b-25f3101a9aac" containerName="cinder-scheduler" Dec 11 14:08:52 crc kubenswrapper[5050]: E1211 14:08:52.330489 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98804b0e-1bb7-4817-9c3b-25f3101a9aac" containerName="probe" Dec 11 14:08:52 crc kubenswrapper[5050]: I1211 14:08:52.330496 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="98804b0e-1bb7-4817-9c3b-25f3101a9aac" containerName="probe" Dec 11 14:08:52 crc kubenswrapper[5050]: I1211 14:08:52.330831 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="98804b0e-1bb7-4817-9c3b-25f3101a9aac" containerName="cinder-scheduler" Dec 11 14:08:52 crc kubenswrapper[5050]: I1211 14:08:52.330853 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="98804b0e-1bb7-4817-9c3b-25f3101a9aac" 
containerName="probe" Dec 11 14:08:52 crc kubenswrapper[5050]: I1211 14:08:52.330870 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="5bde6837-eef2-482a-81db-0fbba416e17d" containerName="dnsmasq-dns" Dec 11 14:08:52 crc kubenswrapper[5050]: I1211 14:08:52.332278 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 11 14:08:52 crc kubenswrapper[5050]: I1211 14:08:52.338299 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Dec 11 14:08:52 crc kubenswrapper[5050]: I1211 14:08:52.365105 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 11 14:08:52 crc kubenswrapper[5050]: I1211 14:08:52.491345 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29a26d59-027f-428e-928e-12222b61a350-config-data\") pod \"cinder-scheduler-0\" (UID: \"29a26d59-027f-428e-928e-12222b61a350\") " pod="openstack/cinder-scheduler-0" Dec 11 14:08:52 crc kubenswrapper[5050]: I1211 14:08:52.491423 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xv72n\" (UniqueName: \"kubernetes.io/projected/29a26d59-027f-428e-928e-12222b61a350-kube-api-access-xv72n\") pod \"cinder-scheduler-0\" (UID: \"29a26d59-027f-428e-928e-12222b61a350\") " pod="openstack/cinder-scheduler-0" Dec 11 14:08:52 crc kubenswrapper[5050]: I1211 14:08:52.491573 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/29a26d59-027f-428e-928e-12222b61a350-scripts\") pod \"cinder-scheduler-0\" (UID: \"29a26d59-027f-428e-928e-12222b61a350\") " pod="openstack/cinder-scheduler-0" Dec 11 14:08:52 crc kubenswrapper[5050]: I1211 14:08:52.491612 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/29a26d59-027f-428e-928e-12222b61a350-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"29a26d59-027f-428e-928e-12222b61a350\") " pod="openstack/cinder-scheduler-0" Dec 11 14:08:52 crc kubenswrapper[5050]: I1211 14:08:52.491651 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29a26d59-027f-428e-928e-12222b61a350-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"29a26d59-027f-428e-928e-12222b61a350\") " pod="openstack/cinder-scheduler-0" Dec 11 14:08:52 crc kubenswrapper[5050]: I1211 14:08:52.491725 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/29a26d59-027f-428e-928e-12222b61a350-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"29a26d59-027f-428e-928e-12222b61a350\") " pod="openstack/cinder-scheduler-0" Dec 11 14:08:52 crc kubenswrapper[5050]: I1211 14:08:52.593316 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29a26d59-027f-428e-928e-12222b61a350-config-data\") pod \"cinder-scheduler-0\" (UID: \"29a26d59-027f-428e-928e-12222b61a350\") " pod="openstack/cinder-scheduler-0" Dec 11 14:08:52 crc kubenswrapper[5050]: I1211 14:08:52.593378 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-xv72n\" (UniqueName: \"kubernetes.io/projected/29a26d59-027f-428e-928e-12222b61a350-kube-api-access-xv72n\") pod \"cinder-scheduler-0\" (UID: \"29a26d59-027f-428e-928e-12222b61a350\") " pod="openstack/cinder-scheduler-0" Dec 11 14:08:52 crc kubenswrapper[5050]: I1211 14:08:52.593458 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/29a26d59-027f-428e-928e-12222b61a350-scripts\") pod \"cinder-scheduler-0\" (UID: \"29a26d59-027f-428e-928e-12222b61a350\") " pod="openstack/cinder-scheduler-0" Dec 11 14:08:52 crc kubenswrapper[5050]: I1211 14:08:52.593489 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/29a26d59-027f-428e-928e-12222b61a350-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"29a26d59-027f-428e-928e-12222b61a350\") " pod="openstack/cinder-scheduler-0" Dec 11 14:08:52 crc kubenswrapper[5050]: I1211 14:08:52.593514 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29a26d59-027f-428e-928e-12222b61a350-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"29a26d59-027f-428e-928e-12222b61a350\") " pod="openstack/cinder-scheduler-0" Dec 11 14:08:52 crc kubenswrapper[5050]: I1211 14:08:52.593579 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/29a26d59-027f-428e-928e-12222b61a350-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"29a26d59-027f-428e-928e-12222b61a350\") " pod="openstack/cinder-scheduler-0" Dec 11 14:08:52 crc kubenswrapper[5050]: I1211 14:08:52.594044 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/29a26d59-027f-428e-928e-12222b61a350-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"29a26d59-027f-428e-928e-12222b61a350\") " pod="openstack/cinder-scheduler-0" Dec 11 14:08:52 crc kubenswrapper[5050]: I1211 14:08:52.600047 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/29a26d59-027f-428e-928e-12222b61a350-scripts\") pod \"cinder-scheduler-0\" (UID: \"29a26d59-027f-428e-928e-12222b61a350\") " pod="openstack/cinder-scheduler-0" Dec 11 14:08:52 crc kubenswrapper[5050]: I1211 14:08:52.600338 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/29a26d59-027f-428e-928e-12222b61a350-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"29a26d59-027f-428e-928e-12222b61a350\") " pod="openstack/cinder-scheduler-0" Dec 11 14:08:52 crc kubenswrapper[5050]: I1211 14:08:52.603644 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29a26d59-027f-428e-928e-12222b61a350-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"29a26d59-027f-428e-928e-12222b61a350\") " pod="openstack/cinder-scheduler-0" Dec 11 14:08:52 crc kubenswrapper[5050]: I1211 14:08:52.605259 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29a26d59-027f-428e-928e-12222b61a350-config-data\") pod \"cinder-scheduler-0\" (UID: \"29a26d59-027f-428e-928e-12222b61a350\") " pod="openstack/cinder-scheduler-0" Dec 11 14:08:52 crc kubenswrapper[5050]: I1211 
14:08:52.615569 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xv72n\" (UniqueName: \"kubernetes.io/projected/29a26d59-027f-428e-928e-12222b61a350-kube-api-access-xv72n\") pod \"cinder-scheduler-0\" (UID: \"29a26d59-027f-428e-928e-12222b61a350\") " pod="openstack/cinder-scheduler-0" Dec 11 14:08:52 crc kubenswrapper[5050]: I1211 14:08:52.672383 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 11 14:08:52 crc kubenswrapper[5050]: I1211 14:08:52.944881 5050 generic.go:334] "Generic (PLEG): container finished" podID="592e71d8-01fd-4db1-8292-938ded924711" containerID="5b8546c77d46556062f3ce9f749c83366abb63e015f143c4be0d0ad4bf950b7d" exitCode=0 Dec 11 14:08:52 crc kubenswrapper[5050]: I1211 14:08:52.945324 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6fb64b5f76-9r6t7" event={"ID":"592e71d8-01fd-4db1-8292-938ded924711","Type":"ContainerDied","Data":"5b8546c77d46556062f3ce9f749c83366abb63e015f143c4be0d0ad4bf950b7d"} Dec 11 14:08:52 crc kubenswrapper[5050]: I1211 14:08:52.972643 5050 generic.go:334] "Generic (PLEG): container finished" podID="bc5cca8e-233a-4621-87bc-d8c64a6d1d88" containerID="b1a274e5b4234729c2aa4fa87751468ac7110151d81f59e14f5c7057a94c21fc" exitCode=0 Dec 11 14:08:52 crc kubenswrapper[5050]: I1211 14:08:52.972701 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7d59d8d5d8-lw5w5" event={"ID":"bc5cca8e-233a-4621-87bc-d8c64a6d1d88","Type":"ContainerDied","Data":"b1a274e5b4234729c2aa4fa87751468ac7110151d81f59e14f5c7057a94c21fc"} Dec 11 14:08:53 crc kubenswrapper[5050]: I1211 14:08:53.222735 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 11 14:08:53 crc kubenswrapper[5050]: I1211 14:08:53.367648 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-6fb64b5f76-9r6t7" Dec 11 14:08:53 crc kubenswrapper[5050]: I1211 14:08:53.438737 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/592e71d8-01fd-4db1-8292-938ded924711-config-data\") pod \"592e71d8-01fd-4db1-8292-938ded924711\" (UID: \"592e71d8-01fd-4db1-8292-938ded924711\") " Dec 11 14:08:53 crc kubenswrapper[5050]: I1211 14:08:53.438827 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/592e71d8-01fd-4db1-8292-938ded924711-logs\") pod \"592e71d8-01fd-4db1-8292-938ded924711\" (UID: \"592e71d8-01fd-4db1-8292-938ded924711\") " Dec 11 14:08:53 crc kubenswrapper[5050]: I1211 14:08:53.438930 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/592e71d8-01fd-4db1-8292-938ded924711-config-data-custom\") pod \"592e71d8-01fd-4db1-8292-938ded924711\" (UID: \"592e71d8-01fd-4db1-8292-938ded924711\") " Dec 11 14:08:53 crc kubenswrapper[5050]: I1211 14:08:53.439025 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/592e71d8-01fd-4db1-8292-938ded924711-combined-ca-bundle\") pod \"592e71d8-01fd-4db1-8292-938ded924711\" (UID: \"592e71d8-01fd-4db1-8292-938ded924711\") " Dec 11 14:08:53 crc kubenswrapper[5050]: I1211 14:08:53.439056 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9g2mx\" (UniqueName: \"kubernetes.io/projected/592e71d8-01fd-4db1-8292-938ded924711-kube-api-access-9g2mx\") pod \"592e71d8-01fd-4db1-8292-938ded924711\" (UID: \"592e71d8-01fd-4db1-8292-938ded924711\") " Dec 11 14:08:53 crc kubenswrapper[5050]: I1211 14:08:53.442681 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/592e71d8-01fd-4db1-8292-938ded924711-logs" (OuterVolumeSpecName: "logs") pod "592e71d8-01fd-4db1-8292-938ded924711" (UID: "592e71d8-01fd-4db1-8292-938ded924711"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 14:08:53 crc kubenswrapper[5050]: I1211 14:08:53.456560 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/592e71d8-01fd-4db1-8292-938ded924711-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "592e71d8-01fd-4db1-8292-938ded924711" (UID: "592e71d8-01fd-4db1-8292-938ded924711"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:08:53 crc kubenswrapper[5050]: I1211 14:08:53.475345 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/592e71d8-01fd-4db1-8292-938ded924711-kube-api-access-9g2mx" (OuterVolumeSpecName: "kube-api-access-9g2mx") pod "592e71d8-01fd-4db1-8292-938ded924711" (UID: "592e71d8-01fd-4db1-8292-938ded924711"). InnerVolumeSpecName "kube-api-access-9g2mx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 14:08:53 crc kubenswrapper[5050]: I1211 14:08:53.494658 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/592e71d8-01fd-4db1-8292-938ded924711-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "592e71d8-01fd-4db1-8292-938ded924711" (UID: "592e71d8-01fd-4db1-8292-938ded924711"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:08:53 crc kubenswrapper[5050]: I1211 14:08:53.531527 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/592e71d8-01fd-4db1-8292-938ded924711-config-data" (OuterVolumeSpecName: "config-data") pod "592e71d8-01fd-4db1-8292-938ded924711" (UID: "592e71d8-01fd-4db1-8292-938ded924711"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:08:53 crc kubenswrapper[5050]: I1211 14:08:53.545438 5050 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/592e71d8-01fd-4db1-8292-938ded924711-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 14:08:53 crc kubenswrapper[5050]: I1211 14:08:53.545490 5050 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/592e71d8-01fd-4db1-8292-938ded924711-logs\") on node \"crc\" DevicePath \"\"" Dec 11 14:08:53 crc kubenswrapper[5050]: I1211 14:08:53.545503 5050 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/592e71d8-01fd-4db1-8292-938ded924711-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 11 14:08:53 crc kubenswrapper[5050]: I1211 14:08:53.545521 5050 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/592e71d8-01fd-4db1-8292-938ded924711-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 14:08:53 crc kubenswrapper[5050]: I1211 14:08:53.545531 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9g2mx\" (UniqueName: \"kubernetes.io/projected/592e71d8-01fd-4db1-8292-938ded924711-kube-api-access-9g2mx\") on node \"crc\" DevicePath \"\"" Dec 11 14:08:53 crc kubenswrapper[5050]: I1211 14:08:53.574338 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="98804b0e-1bb7-4817-9c3b-25f3101a9aac" path="/var/lib/kubelet/pods/98804b0e-1bb7-4817-9c3b-25f3101a9aac/volumes" Dec 11 14:08:54 crc kubenswrapper[5050]: I1211 14:08:54.056167 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"29a26d59-027f-428e-928e-12222b61a350","Type":"ContainerStarted","Data":"a321c9b52046a17c9cfc26a8db814515a1542befeca7b048ebd9bb1061d031ec"} Dec 11 14:08:54 crc kubenswrapper[5050]: I1211 14:08:54.081089 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6fb64b5f76-9r6t7" event={"ID":"592e71d8-01fd-4db1-8292-938ded924711","Type":"ContainerDied","Data":"043404c37872b89c66d62829220f575aaf0711f9b4ff50b4dbd7f66e92ee24c5"} Dec 11 14:08:54 crc kubenswrapper[5050]: I1211 14:08:54.081187 5050 scope.go:117] "RemoveContainer" containerID="5b8546c77d46556062f3ce9f749c83366abb63e015f143c4be0d0ad4bf950b7d" Dec 11 14:08:54 crc kubenswrapper[5050]: I1211 14:08:54.081212 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-6fb64b5f76-9r6t7" Dec 11 14:08:54 crc kubenswrapper[5050]: I1211 14:08:54.130088 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-6fb64b5f76-9r6t7"] Dec 11 14:08:54 crc kubenswrapper[5050]: I1211 14:08:54.142050 5050 scope.go:117] "RemoveContainer" containerID="1e1d2fc36a977a9413352e3979b3d27f2bdc7a12c3c470c41cb2d0ca85721112" Dec 11 14:08:54 crc kubenswrapper[5050]: I1211 14:08:54.151868 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-6fb64b5f76-9r6t7"] Dec 11 14:08:54 crc kubenswrapper[5050]: I1211 14:08:54.741985 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Dec 11 14:08:54 crc kubenswrapper[5050]: E1211 14:08:54.742908 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="592e71d8-01fd-4db1-8292-938ded924711" containerName="barbican-api-log" Dec 11 14:08:54 crc kubenswrapper[5050]: I1211 14:08:54.742925 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="592e71d8-01fd-4db1-8292-938ded924711" containerName="barbican-api-log" Dec 11 14:08:54 crc kubenswrapper[5050]: E1211 14:08:54.742966 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="592e71d8-01fd-4db1-8292-938ded924711" containerName="barbican-api" Dec 11 14:08:54 crc kubenswrapper[5050]: I1211 14:08:54.742975 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="592e71d8-01fd-4db1-8292-938ded924711" containerName="barbican-api" Dec 11 14:08:54 crc kubenswrapper[5050]: I1211 14:08:54.743168 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="592e71d8-01fd-4db1-8292-938ded924711" containerName="barbican-api" Dec 11 14:08:54 crc kubenswrapper[5050]: I1211 14:08:54.743195 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="592e71d8-01fd-4db1-8292-938ded924711" containerName="barbican-api-log" Dec 11 14:08:54 crc kubenswrapper[5050]: I1211 14:08:54.743938 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Dec 11 14:08:54 crc kubenswrapper[5050]: I1211 14:08:54.757193 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Dec 11 14:08:54 crc kubenswrapper[5050]: I1211 14:08:54.757308 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-cwcln" Dec 11 14:08:54 crc kubenswrapper[5050]: I1211 14:08:54.757526 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Dec 11 14:08:54 crc kubenswrapper[5050]: I1211 14:08:54.787207 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Dec 11 14:08:54 crc kubenswrapper[5050]: I1211 14:08:54.881631 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-clgnq\" (UniqueName: \"kubernetes.io/projected/2396be70-52b5-4a91-b8f8-463803fcc4d0-kube-api-access-clgnq\") pod \"openstackclient\" (UID: \"2396be70-52b5-4a91-b8f8-463803fcc4d0\") " pod="openstack/openstackclient" Dec 11 14:08:54 crc kubenswrapper[5050]: I1211 14:08:54.881696 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2396be70-52b5-4a91-b8f8-463803fcc4d0-combined-ca-bundle\") pod \"openstackclient\" (UID: \"2396be70-52b5-4a91-b8f8-463803fcc4d0\") " pod="openstack/openstackclient" Dec 11 14:08:54 crc kubenswrapper[5050]: I1211 14:08:54.881878 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/2396be70-52b5-4a91-b8f8-463803fcc4d0-openstack-config-secret\") pod \"openstackclient\" (UID: \"2396be70-52b5-4a91-b8f8-463803fcc4d0\") " pod="openstack/openstackclient" Dec 11 14:08:54 crc kubenswrapper[5050]: I1211 14:08:54.881937 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/2396be70-52b5-4a91-b8f8-463803fcc4d0-openstack-config\") pod \"openstackclient\" (UID: \"2396be70-52b5-4a91-b8f8-463803fcc4d0\") " pod="openstack/openstackclient" Dec 11 14:08:54 crc kubenswrapper[5050]: I1211 14:08:54.987329 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/2396be70-52b5-4a91-b8f8-463803fcc4d0-openstack-config\") pod \"openstackclient\" (UID: \"2396be70-52b5-4a91-b8f8-463803fcc4d0\") " pod="openstack/openstackclient" Dec 11 14:08:54 crc kubenswrapper[5050]: I1211 14:08:54.987443 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-clgnq\" (UniqueName: \"kubernetes.io/projected/2396be70-52b5-4a91-b8f8-463803fcc4d0-kube-api-access-clgnq\") pod \"openstackclient\" (UID: \"2396be70-52b5-4a91-b8f8-463803fcc4d0\") " pod="openstack/openstackclient" Dec 11 14:08:54 crc kubenswrapper[5050]: I1211 14:08:54.988632 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/2396be70-52b5-4a91-b8f8-463803fcc4d0-openstack-config\") pod \"openstackclient\" (UID: \"2396be70-52b5-4a91-b8f8-463803fcc4d0\") " pod="openstack/openstackclient" Dec 11 14:08:54 crc kubenswrapper[5050]: I1211 14:08:54.995122 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2396be70-52b5-4a91-b8f8-463803fcc4d0-combined-ca-bundle\") pod \"openstackclient\" (UID: \"2396be70-52b5-4a91-b8f8-463803fcc4d0\") " pod="openstack/openstackclient" Dec 11 14:08:54 crc kubenswrapper[5050]: I1211 14:08:54.995487 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/2396be70-52b5-4a91-b8f8-463803fcc4d0-openstack-config-secret\") pod \"openstackclient\" (UID: \"2396be70-52b5-4a91-b8f8-463803fcc4d0\") " pod="openstack/openstackclient" Dec 11 14:08:55 crc kubenswrapper[5050]: I1211 14:08:55.000784 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/2396be70-52b5-4a91-b8f8-463803fcc4d0-openstack-config-secret\") pod \"openstackclient\" (UID: \"2396be70-52b5-4a91-b8f8-463803fcc4d0\") " pod="openstack/openstackclient" Dec 11 14:08:55 crc kubenswrapper[5050]: I1211 14:08:55.005041 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2396be70-52b5-4a91-b8f8-463803fcc4d0-combined-ca-bundle\") pod \"openstackclient\" (UID: \"2396be70-52b5-4a91-b8f8-463803fcc4d0\") " pod="openstack/openstackclient" Dec 11 14:08:55 crc kubenswrapper[5050]: I1211 14:08:55.021512 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-clgnq\" (UniqueName: \"kubernetes.io/projected/2396be70-52b5-4a91-b8f8-463803fcc4d0-kube-api-access-clgnq\") pod \"openstackclient\" (UID: \"2396be70-52b5-4a91-b8f8-463803fcc4d0\") " pod="openstack/openstackclient" Dec 11 14:08:55 crc kubenswrapper[5050]: I1211 14:08:55.091922 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Dec 11 14:08:55 crc kubenswrapper[5050]: I1211 14:08:55.094661 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"29a26d59-027f-428e-928e-12222b61a350","Type":"ContainerStarted","Data":"68a9a87e998a4bb0563913fd86e150d1605935b84a4da45aa67210b036a699f2"} Dec 11 14:08:55 crc kubenswrapper[5050]: I1211 14:08:55.453811 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Dec 11 14:08:55 crc kubenswrapper[5050]: I1211 14:08:55.558576 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="592e71d8-01fd-4db1-8292-938ded924711" path="/var/lib/kubelet/pods/592e71d8-01fd-4db1-8292-938ded924711/volumes" Dec 11 14:08:55 crc kubenswrapper[5050]: I1211 14:08:55.603323 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Dec 11 14:08:56 crc kubenswrapper[5050]: I1211 14:08:56.106888 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"2396be70-52b5-4a91-b8f8-463803fcc4d0","Type":"ContainerStarted","Data":"87b250b8dc8f882583eea0133828a2a1e73cc1f76216a766f7db7bbe2d4e71f1"} Dec 11 14:08:56 crc kubenswrapper[5050]: I1211 14:08:56.109549 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"29a26d59-027f-428e-928e-12222b61a350","Type":"ContainerStarted","Data":"3e6132bd898662eb15caae20bc63d62858df7ed7da6bd64261b666f48768ec52"} Dec 11 14:08:57 crc kubenswrapper[5050]: I1211 14:08:57.673991 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Dec 11 14:08:59 crc kubenswrapper[5050]: I1211 14:08:59.244831 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=7.244804126 podStartE2EDuration="7.244804126s" podCreationTimestamp="2025-12-11 14:08:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 14:08:56.145453892 +0000 UTC m=+1226.989176488" watchObservedRunningTime="2025-12-11 14:08:59.244804126 +0000 UTC m=+1230.088526712" Dec 11 14:08:59 crc kubenswrapper[5050]: I1211 14:08:59.249573 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-fcd4b466f-vsss4"] Dec 11 14:08:59 crc kubenswrapper[5050]: I1211 14:08:59.251797 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-fcd4b466f-vsss4" Dec 11 14:08:59 crc kubenswrapper[5050]: I1211 14:08:59.255908 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Dec 11 14:08:59 crc kubenswrapper[5050]: I1211 14:08:59.256358 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Dec 11 14:08:59 crc kubenswrapper[5050]: I1211 14:08:59.256557 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Dec 11 14:08:59 crc kubenswrapper[5050]: I1211 14:08:59.299114 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-fcd4b466f-vsss4"] Dec 11 14:08:59 crc kubenswrapper[5050]: I1211 14:08:59.391374 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6h8b\" (UniqueName: \"kubernetes.io/projected/cffff412-bf3c-4739-8bb8-3d099c8c83fe-kube-api-access-b6h8b\") pod \"swift-proxy-fcd4b466f-vsss4\" (UID: \"cffff412-bf3c-4739-8bb8-3d099c8c83fe\") " pod="openstack/swift-proxy-fcd4b466f-vsss4" Dec 11 14:08:59 crc kubenswrapper[5050]: I1211 14:08:59.391456 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cffff412-bf3c-4739-8bb8-3d099c8c83fe-public-tls-certs\") pod \"swift-proxy-fcd4b466f-vsss4\" (UID: \"cffff412-bf3c-4739-8bb8-3d099c8c83fe\") " pod="openstack/swift-proxy-fcd4b466f-vsss4" Dec 11 14:08:59 crc kubenswrapper[5050]: I1211 14:08:59.391511 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cffff412-bf3c-4739-8bb8-3d099c8c83fe-combined-ca-bundle\") pod \"swift-proxy-fcd4b466f-vsss4\" (UID: \"cffff412-bf3c-4739-8bb8-3d099c8c83fe\") " pod="openstack/swift-proxy-fcd4b466f-vsss4" Dec 11 14:08:59 crc kubenswrapper[5050]: I1211 14:08:59.391531 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cffff412-bf3c-4739-8bb8-3d099c8c83fe-log-httpd\") pod \"swift-proxy-fcd4b466f-vsss4\" (UID: \"cffff412-bf3c-4739-8bb8-3d099c8c83fe\") " pod="openstack/swift-proxy-fcd4b466f-vsss4" Dec 11 14:08:59 crc kubenswrapper[5050]: I1211 14:08:59.391580 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cffff412-bf3c-4739-8bb8-3d099c8c83fe-run-httpd\") pod \"swift-proxy-fcd4b466f-vsss4\" (UID: \"cffff412-bf3c-4739-8bb8-3d099c8c83fe\") " pod="openstack/swift-proxy-fcd4b466f-vsss4" Dec 11 14:08:59 crc kubenswrapper[5050]: I1211 14:08:59.391620 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cffff412-bf3c-4739-8bb8-3d099c8c83fe-internal-tls-certs\") pod \"swift-proxy-fcd4b466f-vsss4\" (UID: \"cffff412-bf3c-4739-8bb8-3d099c8c83fe\") " pod="openstack/swift-proxy-fcd4b466f-vsss4" Dec 11 14:08:59 crc kubenswrapper[5050]: I1211 14:08:59.391639 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/cffff412-bf3c-4739-8bb8-3d099c8c83fe-etc-swift\") pod \"swift-proxy-fcd4b466f-vsss4\" (UID: \"cffff412-bf3c-4739-8bb8-3d099c8c83fe\") " 
pod="openstack/swift-proxy-fcd4b466f-vsss4" Dec 11 14:08:59 crc kubenswrapper[5050]: I1211 14:08:59.391670 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cffff412-bf3c-4739-8bb8-3d099c8c83fe-config-data\") pod \"swift-proxy-fcd4b466f-vsss4\" (UID: \"cffff412-bf3c-4739-8bb8-3d099c8c83fe\") " pod="openstack/swift-proxy-fcd4b466f-vsss4" Dec 11 14:08:59 crc kubenswrapper[5050]: I1211 14:08:59.493350 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cffff412-bf3c-4739-8bb8-3d099c8c83fe-run-httpd\") pod \"swift-proxy-fcd4b466f-vsss4\" (UID: \"cffff412-bf3c-4739-8bb8-3d099c8c83fe\") " pod="openstack/swift-proxy-fcd4b466f-vsss4" Dec 11 14:08:59 crc kubenswrapper[5050]: I1211 14:08:59.493468 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cffff412-bf3c-4739-8bb8-3d099c8c83fe-internal-tls-certs\") pod \"swift-proxy-fcd4b466f-vsss4\" (UID: \"cffff412-bf3c-4739-8bb8-3d099c8c83fe\") " pod="openstack/swift-proxy-fcd4b466f-vsss4" Dec 11 14:08:59 crc kubenswrapper[5050]: I1211 14:08:59.493492 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/cffff412-bf3c-4739-8bb8-3d099c8c83fe-etc-swift\") pod \"swift-proxy-fcd4b466f-vsss4\" (UID: \"cffff412-bf3c-4739-8bb8-3d099c8c83fe\") " pod="openstack/swift-proxy-fcd4b466f-vsss4" Dec 11 14:08:59 crc kubenswrapper[5050]: I1211 14:08:59.493522 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cffff412-bf3c-4739-8bb8-3d099c8c83fe-config-data\") pod \"swift-proxy-fcd4b466f-vsss4\" (UID: \"cffff412-bf3c-4739-8bb8-3d099c8c83fe\") " pod="openstack/swift-proxy-fcd4b466f-vsss4" Dec 11 14:08:59 crc kubenswrapper[5050]: I1211 14:08:59.493567 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6h8b\" (UniqueName: \"kubernetes.io/projected/cffff412-bf3c-4739-8bb8-3d099c8c83fe-kube-api-access-b6h8b\") pod \"swift-proxy-fcd4b466f-vsss4\" (UID: \"cffff412-bf3c-4739-8bb8-3d099c8c83fe\") " pod="openstack/swift-proxy-fcd4b466f-vsss4" Dec 11 14:08:59 crc kubenswrapper[5050]: I1211 14:08:59.493607 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cffff412-bf3c-4739-8bb8-3d099c8c83fe-public-tls-certs\") pod \"swift-proxy-fcd4b466f-vsss4\" (UID: \"cffff412-bf3c-4739-8bb8-3d099c8c83fe\") " pod="openstack/swift-proxy-fcd4b466f-vsss4" Dec 11 14:08:59 crc kubenswrapper[5050]: I1211 14:08:59.493654 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cffff412-bf3c-4739-8bb8-3d099c8c83fe-combined-ca-bundle\") pod \"swift-proxy-fcd4b466f-vsss4\" (UID: \"cffff412-bf3c-4739-8bb8-3d099c8c83fe\") " pod="openstack/swift-proxy-fcd4b466f-vsss4" Dec 11 14:08:59 crc kubenswrapper[5050]: I1211 14:08:59.493670 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cffff412-bf3c-4739-8bb8-3d099c8c83fe-log-httpd\") pod \"swift-proxy-fcd4b466f-vsss4\" (UID: \"cffff412-bf3c-4739-8bb8-3d099c8c83fe\") " pod="openstack/swift-proxy-fcd4b466f-vsss4" Dec 11 14:08:59 crc 
kubenswrapper[5050]: I1211 14:08:59.494439 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cffff412-bf3c-4739-8bb8-3d099c8c83fe-log-httpd\") pod \"swift-proxy-fcd4b466f-vsss4\" (UID: \"cffff412-bf3c-4739-8bb8-3d099c8c83fe\") " pod="openstack/swift-proxy-fcd4b466f-vsss4" Dec 11 14:08:59 crc kubenswrapper[5050]: I1211 14:08:59.494920 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cffff412-bf3c-4739-8bb8-3d099c8c83fe-run-httpd\") pod \"swift-proxy-fcd4b466f-vsss4\" (UID: \"cffff412-bf3c-4739-8bb8-3d099c8c83fe\") " pod="openstack/swift-proxy-fcd4b466f-vsss4" Dec 11 14:08:59 crc kubenswrapper[5050]: I1211 14:08:59.504027 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cffff412-bf3c-4739-8bb8-3d099c8c83fe-combined-ca-bundle\") pod \"swift-proxy-fcd4b466f-vsss4\" (UID: \"cffff412-bf3c-4739-8bb8-3d099c8c83fe\") " pod="openstack/swift-proxy-fcd4b466f-vsss4" Dec 11 14:08:59 crc kubenswrapper[5050]: I1211 14:08:59.504611 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cffff412-bf3c-4739-8bb8-3d099c8c83fe-internal-tls-certs\") pod \"swift-proxy-fcd4b466f-vsss4\" (UID: \"cffff412-bf3c-4739-8bb8-3d099c8c83fe\") " pod="openstack/swift-proxy-fcd4b466f-vsss4" Dec 11 14:08:59 crc kubenswrapper[5050]: I1211 14:08:59.515724 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cffff412-bf3c-4739-8bb8-3d099c8c83fe-config-data\") pod \"swift-proxy-fcd4b466f-vsss4\" (UID: \"cffff412-bf3c-4739-8bb8-3d099c8c83fe\") " pod="openstack/swift-proxy-fcd4b466f-vsss4" Dec 11 14:08:59 crc kubenswrapper[5050]: I1211 14:08:59.516048 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b6h8b\" (UniqueName: \"kubernetes.io/projected/cffff412-bf3c-4739-8bb8-3d099c8c83fe-kube-api-access-b6h8b\") pod \"swift-proxy-fcd4b466f-vsss4\" (UID: \"cffff412-bf3c-4739-8bb8-3d099c8c83fe\") " pod="openstack/swift-proxy-fcd4b466f-vsss4" Dec 11 14:08:59 crc kubenswrapper[5050]: I1211 14:08:59.518971 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/cffff412-bf3c-4739-8bb8-3d099c8c83fe-etc-swift\") pod \"swift-proxy-fcd4b466f-vsss4\" (UID: \"cffff412-bf3c-4739-8bb8-3d099c8c83fe\") " pod="openstack/swift-proxy-fcd4b466f-vsss4" Dec 11 14:08:59 crc kubenswrapper[5050]: I1211 14:08:59.536230 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cffff412-bf3c-4739-8bb8-3d099c8c83fe-public-tls-certs\") pod \"swift-proxy-fcd4b466f-vsss4\" (UID: \"cffff412-bf3c-4739-8bb8-3d099c8c83fe\") " pod="openstack/swift-proxy-fcd4b466f-vsss4" Dec 11 14:08:59 crc kubenswrapper[5050]: I1211 14:08:59.605461 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-fcd4b466f-vsss4" Dec 11 14:09:00 crc kubenswrapper[5050]: I1211 14:09:00.093152 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 11 14:09:00 crc kubenswrapper[5050]: I1211 14:09:00.094132 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5399477c-8dc9-4a49-a264-4f41042a3db7" containerName="ceilometer-central-agent" containerID="cri-o://63fb0ac378e6b7cde7188c22823010f69834feeaeead1089df533065e5482e10" gracePeriod=30 Dec 11 14:09:00 crc kubenswrapper[5050]: I1211 14:09:00.094355 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5399477c-8dc9-4a49-a264-4f41042a3db7" containerName="proxy-httpd" containerID="cri-o://01421cfb1ea0141432e85b7482b48fb2a0ce5786d97235592c4db0982a1d9c7c" gracePeriod=30 Dec 11 14:09:00 crc kubenswrapper[5050]: I1211 14:09:00.094417 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5399477c-8dc9-4a49-a264-4f41042a3db7" containerName="sg-core" containerID="cri-o://d918e8d5709d4b3b1c3f78dbd33732a39b838846fe611ef223340c8358bf4c25" gracePeriod=30 Dec 11 14:09:00 crc kubenswrapper[5050]: I1211 14:09:00.094488 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5399477c-8dc9-4a49-a264-4f41042a3db7" containerName="ceilometer-notification-agent" containerID="cri-o://8a4145a00301fbe55da05988b615023c9bc4ec6cd930d63d261a2a1eb3aba7fb" gracePeriod=30 Dec 11 14:09:00 crc kubenswrapper[5050]: I1211 14:09:00.120932 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="5399477c-8dc9-4a49-a264-4f41042a3db7" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 502" Dec 11 14:09:00 crc kubenswrapper[5050]: I1211 14:09:00.131161 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-fcd4b466f-vsss4"] Dec 11 14:09:00 crc kubenswrapper[5050]: W1211 14:09:00.134510 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcffff412_bf3c_4739_8bb8_3d099c8c83fe.slice/crio-9e5aa859833ca502f026ee56fd4b26853d51e2676296b7df9a3fa117ef696f32 WatchSource:0}: Error finding container 9e5aa859833ca502f026ee56fd4b26853d51e2676296b7df9a3fa117ef696f32: Status 404 returned error can't find the container with id 9e5aa859833ca502f026ee56fd4b26853d51e2676296b7df9a3fa117ef696f32 Dec 11 14:09:00 crc kubenswrapper[5050]: I1211 14:09:00.165479 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-fcd4b466f-vsss4" event={"ID":"cffff412-bf3c-4739-8bb8-3d099c8c83fe","Type":"ContainerStarted","Data":"9e5aa859833ca502f026ee56fd4b26853d51e2676296b7df9a3fa117ef696f32"} Dec 11 14:09:01 crc kubenswrapper[5050]: I1211 14:09:01.179308 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-fcd4b466f-vsss4" event={"ID":"cffff412-bf3c-4739-8bb8-3d099c8c83fe","Type":"ContainerStarted","Data":"0834c0f817c98ccee2d1fc466612fef74fe94fe1c05828492e898fa4014682d9"} Dec 11 14:09:01 crc kubenswrapper[5050]: I1211 14:09:01.179647 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-fcd4b466f-vsss4" event={"ID":"cffff412-bf3c-4739-8bb8-3d099c8c83fe","Type":"ContainerStarted","Data":"bab4849e060d80aa6504ac4459a8bc36b1517d2feed47ced5829f252e6c45900"} 
Dec 11 14:09:01 crc kubenswrapper[5050]: I1211 14:09:01.181065 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-fcd4b466f-vsss4" Dec 11 14:09:01 crc kubenswrapper[5050]: I1211 14:09:01.181094 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-fcd4b466f-vsss4" Dec 11 14:09:01 crc kubenswrapper[5050]: I1211 14:09:01.184171 5050 generic.go:334] "Generic (PLEG): container finished" podID="5399477c-8dc9-4a49-a264-4f41042a3db7" containerID="01421cfb1ea0141432e85b7482b48fb2a0ce5786d97235592c4db0982a1d9c7c" exitCode=0 Dec 11 14:09:01 crc kubenswrapper[5050]: I1211 14:09:01.184194 5050 generic.go:334] "Generic (PLEG): container finished" podID="5399477c-8dc9-4a49-a264-4f41042a3db7" containerID="d918e8d5709d4b3b1c3f78dbd33732a39b838846fe611ef223340c8358bf4c25" exitCode=2 Dec 11 14:09:01 crc kubenswrapper[5050]: I1211 14:09:01.184202 5050 generic.go:334] "Generic (PLEG): container finished" podID="5399477c-8dc9-4a49-a264-4f41042a3db7" containerID="63fb0ac378e6b7cde7188c22823010f69834feeaeead1089df533065e5482e10" exitCode=0 Dec 11 14:09:01 crc kubenswrapper[5050]: I1211 14:09:01.184249 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5399477c-8dc9-4a49-a264-4f41042a3db7","Type":"ContainerDied","Data":"01421cfb1ea0141432e85b7482b48fb2a0ce5786d97235592c4db0982a1d9c7c"} Dec 11 14:09:01 crc kubenswrapper[5050]: I1211 14:09:01.184269 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5399477c-8dc9-4a49-a264-4f41042a3db7","Type":"ContainerDied","Data":"d918e8d5709d4b3b1c3f78dbd33732a39b838846fe611ef223340c8358bf4c25"} Dec 11 14:09:01 crc kubenswrapper[5050]: I1211 14:09:01.184279 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5399477c-8dc9-4a49-a264-4f41042a3db7","Type":"ContainerDied","Data":"63fb0ac378e6b7cde7188c22823010f69834feeaeead1089df533065e5482e10"} Dec 11 14:09:01 crc kubenswrapper[5050]: I1211 14:09:01.212186 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-fcd4b466f-vsss4" podStartSLOduration=2.212134455 podStartE2EDuration="2.212134455s" podCreationTimestamp="2025-12-11 14:08:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 14:09:01.208533398 +0000 UTC m=+1232.052255984" watchObservedRunningTime="2025-12-11 14:09:01.212134455 +0000 UTC m=+1232.055857041" Dec 11 14:09:02 crc kubenswrapper[5050]: I1211 14:09:02.963478 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Dec 11 14:09:03 crc kubenswrapper[5050]: I1211 14:09:03.205426 5050 generic.go:334] "Generic (PLEG): container finished" podID="5399477c-8dc9-4a49-a264-4f41042a3db7" containerID="8a4145a00301fbe55da05988b615023c9bc4ec6cd930d63d261a2a1eb3aba7fb" exitCode=0 Dec 11 14:09:03 crc kubenswrapper[5050]: I1211 14:09:03.205493 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5399477c-8dc9-4a49-a264-4f41042a3db7","Type":"ContainerDied","Data":"8a4145a00301fbe55da05988b615023c9bc4ec6cd930d63d261a2a1eb3aba7fb"} Dec 11 14:09:04 crc kubenswrapper[5050]: I1211 14:09:04.243528 5050 generic.go:334] "Generic (PLEG): container finished" podID="bc5cca8e-233a-4621-87bc-d8c64a6d1d88" containerID="80a0101f214c710057cbef55369ef6e367dd76aa31c8d93a145799639c0c1f38" 
exitCode=0 Dec 11 14:09:04 crc kubenswrapper[5050]: I1211 14:09:04.243603 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7d59d8d5d8-lw5w5" event={"ID":"bc5cca8e-233a-4621-87bc-d8c64a6d1d88","Type":"ContainerDied","Data":"80a0101f214c710057cbef55369ef6e367dd76aa31c8d93a145799639c0c1f38"} Dec 11 14:09:09 crc kubenswrapper[5050]: I1211 14:09:09.360466 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 11 14:09:09 crc kubenswrapper[5050]: I1211 14:09:09.478075 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7d59d8d5d8-lw5w5" Dec 11 14:09:09 crc kubenswrapper[5050]: I1211 14:09:09.553380 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5399477c-8dc9-4a49-a264-4f41042a3db7-sg-core-conf-yaml\") pod \"5399477c-8dc9-4a49-a264-4f41042a3db7\" (UID: \"5399477c-8dc9-4a49-a264-4f41042a3db7\") " Dec 11 14:09:09 crc kubenswrapper[5050]: I1211 14:09:09.554892 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5399477c-8dc9-4a49-a264-4f41042a3db7-run-httpd\") pod \"5399477c-8dc9-4a49-a264-4f41042a3db7\" (UID: \"5399477c-8dc9-4a49-a264-4f41042a3db7\") " Dec 11 14:09:09 crc kubenswrapper[5050]: I1211 14:09:09.555295 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gkf8c\" (UniqueName: \"kubernetes.io/projected/5399477c-8dc9-4a49-a264-4f41042a3db7-kube-api-access-gkf8c\") pod \"5399477c-8dc9-4a49-a264-4f41042a3db7\" (UID: \"5399477c-8dc9-4a49-a264-4f41042a3db7\") " Dec 11 14:09:09 crc kubenswrapper[5050]: I1211 14:09:09.555418 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5399477c-8dc9-4a49-a264-4f41042a3db7-scripts\") pod \"5399477c-8dc9-4a49-a264-4f41042a3db7\" (UID: \"5399477c-8dc9-4a49-a264-4f41042a3db7\") " Dec 11 14:09:09 crc kubenswrapper[5050]: I1211 14:09:09.555600 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5399477c-8dc9-4a49-a264-4f41042a3db7-combined-ca-bundle\") pod \"5399477c-8dc9-4a49-a264-4f41042a3db7\" (UID: \"5399477c-8dc9-4a49-a264-4f41042a3db7\") " Dec 11 14:09:09 crc kubenswrapper[5050]: I1211 14:09:09.555766 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5399477c-8dc9-4a49-a264-4f41042a3db7-log-httpd\") pod \"5399477c-8dc9-4a49-a264-4f41042a3db7\" (UID: \"5399477c-8dc9-4a49-a264-4f41042a3db7\") " Dec 11 14:09:09 crc kubenswrapper[5050]: I1211 14:09:09.555889 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5399477c-8dc9-4a49-a264-4f41042a3db7-config-data\") pod \"5399477c-8dc9-4a49-a264-4f41042a3db7\" (UID: \"5399477c-8dc9-4a49-a264-4f41042a3db7\") " Dec 11 14:09:09 crc kubenswrapper[5050]: I1211 14:09:09.556421 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5399477c-8dc9-4a49-a264-4f41042a3db7-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "5399477c-8dc9-4a49-a264-4f41042a3db7" (UID: "5399477c-8dc9-4a49-a264-4f41042a3db7"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 14:09:09 crc kubenswrapper[5050]: I1211 14:09:09.560320 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5399477c-8dc9-4a49-a264-4f41042a3db7-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "5399477c-8dc9-4a49-a264-4f41042a3db7" (UID: "5399477c-8dc9-4a49-a264-4f41042a3db7"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 14:09:09 crc kubenswrapper[5050]: I1211 14:09:09.560934 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5399477c-8dc9-4a49-a264-4f41042a3db7-scripts" (OuterVolumeSpecName: "scripts") pod "5399477c-8dc9-4a49-a264-4f41042a3db7" (UID: "5399477c-8dc9-4a49-a264-4f41042a3db7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:09:09 crc kubenswrapper[5050]: I1211 14:09:09.562984 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5399477c-8dc9-4a49-a264-4f41042a3db7-kube-api-access-gkf8c" (OuterVolumeSpecName: "kube-api-access-gkf8c") pod "5399477c-8dc9-4a49-a264-4f41042a3db7" (UID: "5399477c-8dc9-4a49-a264-4f41042a3db7"). InnerVolumeSpecName "kube-api-access-gkf8c". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 14:09:09 crc kubenswrapper[5050]: I1211 14:09:09.564993 5050 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5399477c-8dc9-4a49-a264-4f41042a3db7-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 11 14:09:09 crc kubenswrapper[5050]: I1211 14:09:09.565194 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gkf8c\" (UniqueName: \"kubernetes.io/projected/5399477c-8dc9-4a49-a264-4f41042a3db7-kube-api-access-gkf8c\") on node \"crc\" DevicePath \"\"" Dec 11 14:09:09 crc kubenswrapper[5050]: I1211 14:09:09.565340 5050 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5399477c-8dc9-4a49-a264-4f41042a3db7-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 14:09:09 crc kubenswrapper[5050]: I1211 14:09:09.565482 5050 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5399477c-8dc9-4a49-a264-4f41042a3db7-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 11 14:09:09 crc kubenswrapper[5050]: I1211 14:09:09.593299 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5399477c-8dc9-4a49-a264-4f41042a3db7-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "5399477c-8dc9-4a49-a264-4f41042a3db7" (UID: "5399477c-8dc9-4a49-a264-4f41042a3db7"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:09:09 crc kubenswrapper[5050]: I1211 14:09:09.620932 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-fcd4b466f-vsss4" Dec 11 14:09:09 crc kubenswrapper[5050]: I1211 14:09:09.623518 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-fcd4b466f-vsss4" Dec 11 14:09:09 crc kubenswrapper[5050]: I1211 14:09:09.661652 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5399477c-8dc9-4a49-a264-4f41042a3db7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5399477c-8dc9-4a49-a264-4f41042a3db7" (UID: "5399477c-8dc9-4a49-a264-4f41042a3db7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:09:09 crc kubenswrapper[5050]: I1211 14:09:09.667525 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc5cca8e-233a-4621-87bc-d8c64a6d1d88-combined-ca-bundle\") pod \"bc5cca8e-233a-4621-87bc-d8c64a6d1d88\" (UID: \"bc5cca8e-233a-4621-87bc-d8c64a6d1d88\") " Dec 11 14:09:09 crc kubenswrapper[5050]: I1211 14:09:09.667825 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/bc5cca8e-233a-4621-87bc-d8c64a6d1d88-config\") pod \"bc5cca8e-233a-4621-87bc-d8c64a6d1d88\" (UID: \"bc5cca8e-233a-4621-87bc-d8c64a6d1d88\") " Dec 11 14:09:09 crc kubenswrapper[5050]: I1211 14:09:09.667855 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/bc5cca8e-233a-4621-87bc-d8c64a6d1d88-ovndb-tls-certs\") pod \"bc5cca8e-233a-4621-87bc-d8c64a6d1d88\" (UID: \"bc5cca8e-233a-4621-87bc-d8c64a6d1d88\") " Dec 11 14:09:09 crc kubenswrapper[5050]: I1211 14:09:09.667881 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/bc5cca8e-233a-4621-87bc-d8c64a6d1d88-httpd-config\") pod \"bc5cca8e-233a-4621-87bc-d8c64a6d1d88\" (UID: \"bc5cca8e-233a-4621-87bc-d8c64a6d1d88\") " Dec 11 14:09:09 crc kubenswrapper[5050]: I1211 14:09:09.667916 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mw9t7\" (UniqueName: \"kubernetes.io/projected/bc5cca8e-233a-4621-87bc-d8c64a6d1d88-kube-api-access-mw9t7\") pod \"bc5cca8e-233a-4621-87bc-d8c64a6d1d88\" (UID: \"bc5cca8e-233a-4621-87bc-d8c64a6d1d88\") " Dec 11 14:09:09 crc kubenswrapper[5050]: I1211 14:09:09.668440 5050 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5399477c-8dc9-4a49-a264-4f41042a3db7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 14:09:09 crc kubenswrapper[5050]: I1211 14:09:09.668453 5050 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5399477c-8dc9-4a49-a264-4f41042a3db7-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 11 14:09:09 crc kubenswrapper[5050]: I1211 14:09:09.671988 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5cca8e-233a-4621-87bc-d8c64a6d1d88-kube-api-access-mw9t7" (OuterVolumeSpecName: "kube-api-access-mw9t7") pod "bc5cca8e-233a-4621-87bc-d8c64a6d1d88" (UID: "bc5cca8e-233a-4621-87bc-d8c64a6d1d88"). 
InnerVolumeSpecName "kube-api-access-mw9t7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 14:09:09 crc kubenswrapper[5050]: I1211 14:09:09.696831 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5cca8e-233a-4621-87bc-d8c64a6d1d88-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "bc5cca8e-233a-4621-87bc-d8c64a6d1d88" (UID: "bc5cca8e-233a-4621-87bc-d8c64a6d1d88"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:09:09 crc kubenswrapper[5050]: I1211 14:09:09.750903 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5399477c-8dc9-4a49-a264-4f41042a3db7-config-data" (OuterVolumeSpecName: "config-data") pod "5399477c-8dc9-4a49-a264-4f41042a3db7" (UID: "5399477c-8dc9-4a49-a264-4f41042a3db7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:09:09 crc kubenswrapper[5050]: I1211 14:09:09.771235 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5cca8e-233a-4621-87bc-d8c64a6d1d88-config" (OuterVolumeSpecName: "config") pod "bc5cca8e-233a-4621-87bc-d8c64a6d1d88" (UID: "bc5cca8e-233a-4621-87bc-d8c64a6d1d88"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:09:09 crc kubenswrapper[5050]: I1211 14:09:09.773414 5050 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5399477c-8dc9-4a49-a264-4f41042a3db7-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 14:09:09 crc kubenswrapper[5050]: I1211 14:09:09.773453 5050 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/bc5cca8e-233a-4621-87bc-d8c64a6d1d88-config\") on node \"crc\" DevicePath \"\"" Dec 11 14:09:09 crc kubenswrapper[5050]: I1211 14:09:09.773462 5050 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/bc5cca8e-233a-4621-87bc-d8c64a6d1d88-httpd-config\") on node \"crc\" DevicePath \"\"" Dec 11 14:09:09 crc kubenswrapper[5050]: I1211 14:09:09.773473 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mw9t7\" (UniqueName: \"kubernetes.io/projected/bc5cca8e-233a-4621-87bc-d8c64a6d1d88-kube-api-access-mw9t7\") on node \"crc\" DevicePath \"\"" Dec 11 14:09:09 crc kubenswrapper[5050]: I1211 14:09:09.791108 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5cca8e-233a-4621-87bc-d8c64a6d1d88-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bc5cca8e-233a-4621-87bc-d8c64a6d1d88" (UID: "bc5cca8e-233a-4621-87bc-d8c64a6d1d88"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:09:09 crc kubenswrapper[5050]: I1211 14:09:09.802570 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5cca8e-233a-4621-87bc-d8c64a6d1d88-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "bc5cca8e-233a-4621-87bc-d8c64a6d1d88" (UID: "bc5cca8e-233a-4621-87bc-d8c64a6d1d88"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:09:09 crc kubenswrapper[5050]: I1211 14:09:09.875904 5050 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc5cca8e-233a-4621-87bc-d8c64a6d1d88-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 14:09:09 crc kubenswrapper[5050]: I1211 14:09:09.875943 5050 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/bc5cca8e-233a-4621-87bc-d8c64a6d1d88-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 11 14:09:10 crc kubenswrapper[5050]: I1211 14:09:10.334807 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7d59d8d5d8-lw5w5" event={"ID":"bc5cca8e-233a-4621-87bc-d8c64a6d1d88","Type":"ContainerDied","Data":"653893a45f7b93fb28bb9d52ba80a1c9feb622877ce8299a1d83f878e192a90a"} Dec 11 14:09:10 crc kubenswrapper[5050]: I1211 14:09:10.335236 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7d59d8d5d8-lw5w5" Dec 11 14:09:10 crc kubenswrapper[5050]: I1211 14:09:10.335595 5050 scope.go:117] "RemoveContainer" containerID="b1a274e5b4234729c2aa4fa87751468ac7110151d81f59e14f5c7057a94c21fc" Dec 11 14:09:10 crc kubenswrapper[5050]: I1211 14:09:10.342679 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5399477c-8dc9-4a49-a264-4f41042a3db7","Type":"ContainerDied","Data":"3120c8a74d8103d4cc3220c7e7375a48dac3c60786891af084f4d9147c222eef"} Dec 11 14:09:10 crc kubenswrapper[5050]: I1211 14:09:10.342831 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 11 14:09:10 crc kubenswrapper[5050]: I1211 14:09:10.348752 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"2396be70-52b5-4a91-b8f8-463803fcc4d0","Type":"ContainerStarted","Data":"d2e5c82ae90e1137ec73bef8dd6ce2e374ca6ff4f54e4da5f33502be7443eb03"} Dec 11 14:09:10 crc kubenswrapper[5050]: I1211 14:09:10.380667 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.716926297 podStartE2EDuration="16.380641378s" podCreationTimestamp="2025-12-11 14:08:54 +0000 UTC" firstStartedPulling="2025-12-11 14:08:55.458197178 +0000 UTC m=+1226.301919764" lastFinishedPulling="2025-12-11 14:09:09.121912259 +0000 UTC m=+1239.965634845" observedRunningTime="2025-12-11 14:09:10.378419098 +0000 UTC m=+1241.222141704" watchObservedRunningTime="2025-12-11 14:09:10.380641378 +0000 UTC m=+1241.224363954" Dec 11 14:09:10 crc kubenswrapper[5050]: I1211 14:09:10.386839 5050 scope.go:117] "RemoveContainer" containerID="80a0101f214c710057cbef55369ef6e367dd76aa31c8d93a145799639c0c1f38" Dec 11 14:09:10 crc kubenswrapper[5050]: I1211 14:09:10.413169 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-7d59d8d5d8-lw5w5"] Dec 11 14:09:10 crc kubenswrapper[5050]: I1211 14:09:10.428400 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-7d59d8d5d8-lw5w5"] Dec 11 14:09:10 crc kubenswrapper[5050]: I1211 14:09:10.438709 5050 scope.go:117] "RemoveContainer" containerID="01421cfb1ea0141432e85b7482b48fb2a0ce5786d97235592c4db0982a1d9c7c" Dec 11 14:09:10 crc kubenswrapper[5050]: I1211 14:09:10.449867 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 11 14:09:10 crc kubenswrapper[5050]: I1211 14:09:10.465333 5050 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 11 14:09:10 crc kubenswrapper[5050]: I1211 14:09:10.477222 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 11 14:09:10 crc kubenswrapper[5050]: E1211 14:09:10.477790 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5399477c-8dc9-4a49-a264-4f41042a3db7" containerName="proxy-httpd" Dec 11 14:09:10 crc kubenswrapper[5050]: I1211 14:09:10.477818 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="5399477c-8dc9-4a49-a264-4f41042a3db7" containerName="proxy-httpd" Dec 11 14:09:10 crc kubenswrapper[5050]: E1211 14:09:10.477839 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc5cca8e-233a-4621-87bc-d8c64a6d1d88" containerName="neutron-httpd" Dec 11 14:09:10 crc kubenswrapper[5050]: I1211 14:09:10.477849 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc5cca8e-233a-4621-87bc-d8c64a6d1d88" containerName="neutron-httpd" Dec 11 14:09:10 crc kubenswrapper[5050]: E1211 14:09:10.477874 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5399477c-8dc9-4a49-a264-4f41042a3db7" containerName="ceilometer-central-agent" Dec 11 14:09:10 crc kubenswrapper[5050]: I1211 14:09:10.477887 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="5399477c-8dc9-4a49-a264-4f41042a3db7" containerName="ceilometer-central-agent" Dec 11 14:09:10 crc kubenswrapper[5050]: E1211 14:09:10.477902 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5399477c-8dc9-4a49-a264-4f41042a3db7" containerName="ceilometer-notification-agent" Dec 11 14:09:10 crc kubenswrapper[5050]: I1211 14:09:10.477910 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="5399477c-8dc9-4a49-a264-4f41042a3db7" containerName="ceilometer-notification-agent" Dec 11 14:09:10 crc kubenswrapper[5050]: E1211 14:09:10.477925 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5399477c-8dc9-4a49-a264-4f41042a3db7" containerName="sg-core" Dec 11 14:09:10 crc kubenswrapper[5050]: I1211 14:09:10.477934 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="5399477c-8dc9-4a49-a264-4f41042a3db7" containerName="sg-core" Dec 11 14:09:10 crc kubenswrapper[5050]: E1211 14:09:10.477949 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc5cca8e-233a-4621-87bc-d8c64a6d1d88" containerName="neutron-api" Dec 11 14:09:10 crc kubenswrapper[5050]: I1211 14:09:10.477956 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc5cca8e-233a-4621-87bc-d8c64a6d1d88" containerName="neutron-api" Dec 11 14:09:10 crc kubenswrapper[5050]: I1211 14:09:10.478196 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="5399477c-8dc9-4a49-a264-4f41042a3db7" containerName="proxy-httpd" Dec 11 14:09:10 crc kubenswrapper[5050]: I1211 14:09:10.478214 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="5399477c-8dc9-4a49-a264-4f41042a3db7" containerName="ceilometer-notification-agent" Dec 11 14:09:10 crc kubenswrapper[5050]: I1211 14:09:10.478230 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="5399477c-8dc9-4a49-a264-4f41042a3db7" containerName="sg-core" Dec 11 14:09:10 crc kubenswrapper[5050]: I1211 14:09:10.478250 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc5cca8e-233a-4621-87bc-d8c64a6d1d88" containerName="neutron-api" Dec 11 14:09:10 crc kubenswrapper[5050]: I1211 14:09:10.478259 5050 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="bc5cca8e-233a-4621-87bc-d8c64a6d1d88" containerName="neutron-httpd" Dec 11 14:09:10 crc kubenswrapper[5050]: I1211 14:09:10.478270 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="5399477c-8dc9-4a49-a264-4f41042a3db7" containerName="ceilometer-central-agent" Dec 11 14:09:10 crc kubenswrapper[5050]: I1211 14:09:10.478292 5050 scope.go:117] "RemoveContainer" containerID="d918e8d5709d4b3b1c3f78dbd33732a39b838846fe611ef223340c8358bf4c25" Dec 11 14:09:10 crc kubenswrapper[5050]: I1211 14:09:10.480717 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 11 14:09:10 crc kubenswrapper[5050]: I1211 14:09:10.485275 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 11 14:09:10 crc kubenswrapper[5050]: I1211 14:09:10.485286 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 11 14:09:10 crc kubenswrapper[5050]: I1211 14:09:10.496446 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 11 14:09:10 crc kubenswrapper[5050]: I1211 14:09:10.530495 5050 scope.go:117] "RemoveContainer" containerID="8a4145a00301fbe55da05988b615023c9bc4ec6cd930d63d261a2a1eb3aba7fb" Dec 11 14:09:10 crc kubenswrapper[5050]: I1211 14:09:10.555372 5050 scope.go:117] "RemoveContainer" containerID="63fb0ac378e6b7cde7188c22823010f69834feeaeead1089df533065e5482e10" Dec 11 14:09:10 crc kubenswrapper[5050]: I1211 14:09:10.590384 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0a565dde-2487-4769-8ded-f12f854974a3-scripts\") pod \"ceilometer-0\" (UID: \"0a565dde-2487-4769-8ded-f12f854974a3\") " pod="openstack/ceilometer-0" Dec 11 14:09:10 crc kubenswrapper[5050]: I1211 14:09:10.590550 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a565dde-2487-4769-8ded-f12f854974a3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0a565dde-2487-4769-8ded-f12f854974a3\") " pod="openstack/ceilometer-0" Dec 11 14:09:10 crc kubenswrapper[5050]: I1211 14:09:10.590635 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a565dde-2487-4769-8ded-f12f854974a3-config-data\") pod \"ceilometer-0\" (UID: \"0a565dde-2487-4769-8ded-f12f854974a3\") " pod="openstack/ceilometer-0" Dec 11 14:09:10 crc kubenswrapper[5050]: I1211 14:09:10.590707 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0a565dde-2487-4769-8ded-f12f854974a3-run-httpd\") pod \"ceilometer-0\" (UID: \"0a565dde-2487-4769-8ded-f12f854974a3\") " pod="openstack/ceilometer-0" Dec 11 14:09:10 crc kubenswrapper[5050]: I1211 14:09:10.590923 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0a565dde-2487-4769-8ded-f12f854974a3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0a565dde-2487-4769-8ded-f12f854974a3\") " pod="openstack/ceilometer-0" Dec 11 14:09:10 crc kubenswrapper[5050]: I1211 14:09:10.591097 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sj7bp\" (UniqueName: 
\"kubernetes.io/projected/0a565dde-2487-4769-8ded-f12f854974a3-kube-api-access-sj7bp\") pod \"ceilometer-0\" (UID: \"0a565dde-2487-4769-8ded-f12f854974a3\") " pod="openstack/ceilometer-0" Dec 11 14:09:10 crc kubenswrapper[5050]: I1211 14:09:10.591345 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0a565dde-2487-4769-8ded-f12f854974a3-log-httpd\") pod \"ceilometer-0\" (UID: \"0a565dde-2487-4769-8ded-f12f854974a3\") " pod="openstack/ceilometer-0" Dec 11 14:09:10 crc kubenswrapper[5050]: I1211 14:09:10.693142 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a565dde-2487-4769-8ded-f12f854974a3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0a565dde-2487-4769-8ded-f12f854974a3\") " pod="openstack/ceilometer-0" Dec 11 14:09:10 crc kubenswrapper[5050]: I1211 14:09:10.693584 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a565dde-2487-4769-8ded-f12f854974a3-config-data\") pod \"ceilometer-0\" (UID: \"0a565dde-2487-4769-8ded-f12f854974a3\") " pod="openstack/ceilometer-0" Dec 11 14:09:10 crc kubenswrapper[5050]: I1211 14:09:10.694660 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0a565dde-2487-4769-8ded-f12f854974a3-run-httpd\") pod \"ceilometer-0\" (UID: \"0a565dde-2487-4769-8ded-f12f854974a3\") " pod="openstack/ceilometer-0" Dec 11 14:09:10 crc kubenswrapper[5050]: I1211 14:09:10.694952 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0a565dde-2487-4769-8ded-f12f854974a3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0a565dde-2487-4769-8ded-f12f854974a3\") " pod="openstack/ceilometer-0" Dec 11 14:09:10 crc kubenswrapper[5050]: I1211 14:09:10.695360 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0a565dde-2487-4769-8ded-f12f854974a3-run-httpd\") pod \"ceilometer-0\" (UID: \"0a565dde-2487-4769-8ded-f12f854974a3\") " pod="openstack/ceilometer-0" Dec 11 14:09:10 crc kubenswrapper[5050]: I1211 14:09:10.695384 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sj7bp\" (UniqueName: \"kubernetes.io/projected/0a565dde-2487-4769-8ded-f12f854974a3-kube-api-access-sj7bp\") pod \"ceilometer-0\" (UID: \"0a565dde-2487-4769-8ded-f12f854974a3\") " pod="openstack/ceilometer-0" Dec 11 14:09:10 crc kubenswrapper[5050]: I1211 14:09:10.695484 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0a565dde-2487-4769-8ded-f12f854974a3-log-httpd\") pod \"ceilometer-0\" (UID: \"0a565dde-2487-4769-8ded-f12f854974a3\") " pod="openstack/ceilometer-0" Dec 11 14:09:10 crc kubenswrapper[5050]: I1211 14:09:10.695601 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0a565dde-2487-4769-8ded-f12f854974a3-scripts\") pod \"ceilometer-0\" (UID: \"0a565dde-2487-4769-8ded-f12f854974a3\") " pod="openstack/ceilometer-0" Dec 11 14:09:10 crc kubenswrapper[5050]: I1211 14:09:10.695988 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/0a565dde-2487-4769-8ded-f12f854974a3-log-httpd\") pod \"ceilometer-0\" (UID: \"0a565dde-2487-4769-8ded-f12f854974a3\") " pod="openstack/ceilometer-0" Dec 11 14:09:10 crc kubenswrapper[5050]: I1211 14:09:10.702384 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a565dde-2487-4769-8ded-f12f854974a3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0a565dde-2487-4769-8ded-f12f854974a3\") " pod="openstack/ceilometer-0" Dec 11 14:09:10 crc kubenswrapper[5050]: I1211 14:09:10.703829 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0a565dde-2487-4769-8ded-f12f854974a3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0a565dde-2487-4769-8ded-f12f854974a3\") " pod="openstack/ceilometer-0" Dec 11 14:09:10 crc kubenswrapper[5050]: I1211 14:09:10.704969 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a565dde-2487-4769-8ded-f12f854974a3-config-data\") pod \"ceilometer-0\" (UID: \"0a565dde-2487-4769-8ded-f12f854974a3\") " pod="openstack/ceilometer-0" Dec 11 14:09:10 crc kubenswrapper[5050]: I1211 14:09:10.716565 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sj7bp\" (UniqueName: \"kubernetes.io/projected/0a565dde-2487-4769-8ded-f12f854974a3-kube-api-access-sj7bp\") pod \"ceilometer-0\" (UID: \"0a565dde-2487-4769-8ded-f12f854974a3\") " pod="openstack/ceilometer-0" Dec 11 14:09:10 crc kubenswrapper[5050]: I1211 14:09:10.718380 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0a565dde-2487-4769-8ded-f12f854974a3-scripts\") pod \"ceilometer-0\" (UID: \"0a565dde-2487-4769-8ded-f12f854974a3\") " pod="openstack/ceilometer-0" Dec 11 14:09:10 crc kubenswrapper[5050]: I1211 14:09:10.796915 5050 patch_prober.go:28] interesting pod/machine-config-daemon-wcb2s container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 14:09:10 crc kubenswrapper[5050]: I1211 14:09:10.797197 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 14:09:10 crc kubenswrapper[5050]: I1211 14:09:10.802226 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 11 14:09:11 crc kubenswrapper[5050]: I1211 14:09:11.330531 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 11 14:09:11 crc kubenswrapper[5050]: I1211 14:09:11.360102 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0a565dde-2487-4769-8ded-f12f854974a3","Type":"ContainerStarted","Data":"134f394ef4576e5c9889bd45c118d830e7f81652537068ef79ef7eee38d4c373"} Dec 11 14:09:11 crc kubenswrapper[5050]: I1211 14:09:11.556187 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5399477c-8dc9-4a49-a264-4f41042a3db7" path="/var/lib/kubelet/pods/5399477c-8dc9-4a49-a264-4f41042a3db7/volumes" Dec 11 14:09:11 crc kubenswrapper[5050]: I1211 14:09:11.556938 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5cca8e-233a-4621-87bc-d8c64a6d1d88" path="/var/lib/kubelet/pods/bc5cca8e-233a-4621-87bc-d8c64a6d1d88/volumes" Dec 11 14:09:12 crc kubenswrapper[5050]: I1211 14:09:12.390643 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0a565dde-2487-4769-8ded-f12f854974a3","Type":"ContainerStarted","Data":"446a31343fe3580f1fcbb0e6eadb32cca066114d34271bddbb6b3e3fe090b7c6"} Dec 11 14:09:13 crc kubenswrapper[5050]: I1211 14:09:13.406168 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0a565dde-2487-4769-8ded-f12f854974a3","Type":"ContainerStarted","Data":"b1e5c13149d09a48020f93335a1f0829a6e8a80b81e0096094b2ea84ebde3f1c"} Dec 11 14:09:14 crc kubenswrapper[5050]: I1211 14:09:14.393913 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-2cbbr"] Dec 11 14:09:14 crc kubenswrapper[5050]: I1211 14:09:14.395854 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-2cbbr" Dec 11 14:09:14 crc kubenswrapper[5050]: I1211 14:09:14.416255 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-2cbbr"] Dec 11 14:09:14 crc kubenswrapper[5050]: I1211 14:09:14.492946 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-brs2f"] Dec 11 14:09:14 crc kubenswrapper[5050]: I1211 14:09:14.494635 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-brs2f" Dec 11 14:09:14 crc kubenswrapper[5050]: I1211 14:09:14.494878 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4affe74d-e417-48c1-9c71-7cca7d0729db-operator-scripts\") pod \"nova-api-db-create-2cbbr\" (UID: \"4affe74d-e417-48c1-9c71-7cca7d0729db\") " pod="openstack/nova-api-db-create-2cbbr" Dec 11 14:09:14 crc kubenswrapper[5050]: I1211 14:09:14.494988 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bfdk\" (UniqueName: \"kubernetes.io/projected/4affe74d-e417-48c1-9c71-7cca7d0729db-kube-api-access-6bfdk\") pod \"nova-api-db-create-2cbbr\" (UID: \"4affe74d-e417-48c1-9c71-7cca7d0729db\") " pod="openstack/nova-api-db-create-2cbbr" Dec 11 14:09:14 crc kubenswrapper[5050]: I1211 14:09:14.505058 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-brs2f"] Dec 11 14:09:14 crc kubenswrapper[5050]: I1211 14:09:14.595057 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-4326-account-create-update-mx2wn"] Dec 11 14:09:14 crc kubenswrapper[5050]: I1211 14:09:14.596401 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4affe74d-e417-48c1-9c71-7cca7d0729db-operator-scripts\") pod \"nova-api-db-create-2cbbr\" (UID: \"4affe74d-e417-48c1-9c71-7cca7d0729db\") " pod="openstack/nova-api-db-create-2cbbr" Dec 11 14:09:14 crc kubenswrapper[5050]: I1211 14:09:14.596453 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4130a67a-7d8d-4eff-b6ea-be9f43992443-operator-scripts\") pod \"nova-cell0-db-create-brs2f\" (UID: \"4130a67a-7d8d-4eff-b6ea-be9f43992443\") " pod="openstack/nova-cell0-db-create-brs2f" Dec 11 14:09:14 crc kubenswrapper[5050]: I1211 14:09:14.596494 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fsz8q\" (UniqueName: \"kubernetes.io/projected/4130a67a-7d8d-4eff-b6ea-be9f43992443-kube-api-access-fsz8q\") pod \"nova-cell0-db-create-brs2f\" (UID: \"4130a67a-7d8d-4eff-b6ea-be9f43992443\") " pod="openstack/nova-cell0-db-create-brs2f" Dec 11 14:09:14 crc kubenswrapper[5050]: I1211 14:09:14.596518 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6bfdk\" (UniqueName: \"kubernetes.io/projected/4affe74d-e417-48c1-9c71-7cca7d0729db-kube-api-access-6bfdk\") pod \"nova-api-db-create-2cbbr\" (UID: \"4affe74d-e417-48c1-9c71-7cca7d0729db\") " pod="openstack/nova-api-db-create-2cbbr" Dec 11 14:09:14 crc kubenswrapper[5050]: I1211 14:09:14.596412 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-4326-account-create-update-mx2wn" Dec 11 14:09:14 crc kubenswrapper[5050]: I1211 14:09:14.597189 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4affe74d-e417-48c1-9c71-7cca7d0729db-operator-scripts\") pod \"nova-api-db-create-2cbbr\" (UID: \"4affe74d-e417-48c1-9c71-7cca7d0729db\") " pod="openstack/nova-api-db-create-2cbbr" Dec 11 14:09:14 crc kubenswrapper[5050]: I1211 14:09:14.601423 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Dec 11 14:09:14 crc kubenswrapper[5050]: I1211 14:09:14.612754 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-4326-account-create-update-mx2wn"] Dec 11 14:09:14 crc kubenswrapper[5050]: I1211 14:09:14.621572 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6bfdk\" (UniqueName: \"kubernetes.io/projected/4affe74d-e417-48c1-9c71-7cca7d0729db-kube-api-access-6bfdk\") pod \"nova-api-db-create-2cbbr\" (UID: \"4affe74d-e417-48c1-9c71-7cca7d0729db\") " pod="openstack/nova-api-db-create-2cbbr" Dec 11 14:09:14 crc kubenswrapper[5050]: I1211 14:09:14.693393 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-97qxs"] Dec 11 14:09:14 crc kubenswrapper[5050]: I1211 14:09:14.694824 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-97qxs" Dec 11 14:09:14 crc kubenswrapper[5050]: I1211 14:09:14.698954 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ee81095a-fe79-47c7-aa3e-e1768a655b86-operator-scripts\") pod \"nova-api-4326-account-create-update-mx2wn\" (UID: \"ee81095a-fe79-47c7-aa3e-e1768a655b86\") " pod="openstack/nova-api-4326-account-create-update-mx2wn" Dec 11 14:09:14 crc kubenswrapper[5050]: I1211 14:09:14.699047 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njn77\" (UniqueName: \"kubernetes.io/projected/ee81095a-fe79-47c7-aa3e-e1768a655b86-kube-api-access-njn77\") pod \"nova-api-4326-account-create-update-mx2wn\" (UID: \"ee81095a-fe79-47c7-aa3e-e1768a655b86\") " pod="openstack/nova-api-4326-account-create-update-mx2wn" Dec 11 14:09:14 crc kubenswrapper[5050]: I1211 14:09:14.699207 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4130a67a-7d8d-4eff-b6ea-be9f43992443-operator-scripts\") pod \"nova-cell0-db-create-brs2f\" (UID: \"4130a67a-7d8d-4eff-b6ea-be9f43992443\") " pod="openstack/nova-cell0-db-create-brs2f" Dec 11 14:09:14 crc kubenswrapper[5050]: I1211 14:09:14.699281 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fsz8q\" (UniqueName: \"kubernetes.io/projected/4130a67a-7d8d-4eff-b6ea-be9f43992443-kube-api-access-fsz8q\") pod \"nova-cell0-db-create-brs2f\" (UID: \"4130a67a-7d8d-4eff-b6ea-be9f43992443\") " pod="openstack/nova-cell0-db-create-brs2f" Dec 11 14:09:14 crc kubenswrapper[5050]: I1211 14:09:14.700172 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4130a67a-7d8d-4eff-b6ea-be9f43992443-operator-scripts\") pod \"nova-cell0-db-create-brs2f\" (UID: \"4130a67a-7d8d-4eff-b6ea-be9f43992443\") " 
pod="openstack/nova-cell0-db-create-brs2f" Dec 11 14:09:14 crc kubenswrapper[5050]: I1211 14:09:14.707472 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-97qxs"] Dec 11 14:09:14 crc kubenswrapper[5050]: I1211 14:09:14.720524 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fsz8q\" (UniqueName: \"kubernetes.io/projected/4130a67a-7d8d-4eff-b6ea-be9f43992443-kube-api-access-fsz8q\") pod \"nova-cell0-db-create-brs2f\" (UID: \"4130a67a-7d8d-4eff-b6ea-be9f43992443\") " pod="openstack/nova-cell0-db-create-brs2f" Dec 11 14:09:14 crc kubenswrapper[5050]: I1211 14:09:14.726828 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-2cbbr" Dec 11 14:09:14 crc kubenswrapper[5050]: I1211 14:09:14.805755 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4jzv7\" (UniqueName: \"kubernetes.io/projected/c3f359da-3978-4220-91da-28b53f4cf109-kube-api-access-4jzv7\") pod \"nova-cell1-db-create-97qxs\" (UID: \"c3f359da-3978-4220-91da-28b53f4cf109\") " pod="openstack/nova-cell1-db-create-97qxs" Dec 11 14:09:14 crc kubenswrapper[5050]: I1211 14:09:14.805832 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ee81095a-fe79-47c7-aa3e-e1768a655b86-operator-scripts\") pod \"nova-api-4326-account-create-update-mx2wn\" (UID: \"ee81095a-fe79-47c7-aa3e-e1768a655b86\") " pod="openstack/nova-api-4326-account-create-update-mx2wn" Dec 11 14:09:14 crc kubenswrapper[5050]: I1211 14:09:14.805854 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c3f359da-3978-4220-91da-28b53f4cf109-operator-scripts\") pod \"nova-cell1-db-create-97qxs\" (UID: \"c3f359da-3978-4220-91da-28b53f4cf109\") " pod="openstack/nova-cell1-db-create-97qxs" Dec 11 14:09:14 crc kubenswrapper[5050]: I1211 14:09:14.805890 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-njn77\" (UniqueName: \"kubernetes.io/projected/ee81095a-fe79-47c7-aa3e-e1768a655b86-kube-api-access-njn77\") pod \"nova-api-4326-account-create-update-mx2wn\" (UID: \"ee81095a-fe79-47c7-aa3e-e1768a655b86\") " pod="openstack/nova-api-4326-account-create-update-mx2wn" Dec 11 14:09:14 crc kubenswrapper[5050]: I1211 14:09:14.807056 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ee81095a-fe79-47c7-aa3e-e1768a655b86-operator-scripts\") pod \"nova-api-4326-account-create-update-mx2wn\" (UID: \"ee81095a-fe79-47c7-aa3e-e1768a655b86\") " pod="openstack/nova-api-4326-account-create-update-mx2wn" Dec 11 14:09:14 crc kubenswrapper[5050]: I1211 14:09:14.812416 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-brs2f" Dec 11 14:09:14 crc kubenswrapper[5050]: I1211 14:09:14.817431 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-e20d-account-create-update-6hllh"] Dec 11 14:09:14 crc kubenswrapper[5050]: I1211 14:09:14.818847 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-e20d-account-create-update-6hllh" Dec 11 14:09:14 crc kubenswrapper[5050]: I1211 14:09:14.821478 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Dec 11 14:09:14 crc kubenswrapper[5050]: I1211 14:09:14.828154 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-e20d-account-create-update-6hllh"] Dec 11 14:09:14 crc kubenswrapper[5050]: I1211 14:09:14.829712 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-njn77\" (UniqueName: \"kubernetes.io/projected/ee81095a-fe79-47c7-aa3e-e1768a655b86-kube-api-access-njn77\") pod \"nova-api-4326-account-create-update-mx2wn\" (UID: \"ee81095a-fe79-47c7-aa3e-e1768a655b86\") " pod="openstack/nova-api-4326-account-create-update-mx2wn" Dec 11 14:09:14 crc kubenswrapper[5050]: I1211 14:09:14.910498 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4jzv7\" (UniqueName: \"kubernetes.io/projected/c3f359da-3978-4220-91da-28b53f4cf109-kube-api-access-4jzv7\") pod \"nova-cell1-db-create-97qxs\" (UID: \"c3f359da-3978-4220-91da-28b53f4cf109\") " pod="openstack/nova-cell1-db-create-97qxs" Dec 11 14:09:14 crc kubenswrapper[5050]: I1211 14:09:14.911625 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c3f359da-3978-4220-91da-28b53f4cf109-operator-scripts\") pod \"nova-cell1-db-create-97qxs\" (UID: \"c3f359da-3978-4220-91da-28b53f4cf109\") " pod="openstack/nova-cell1-db-create-97qxs" Dec 11 14:09:14 crc kubenswrapper[5050]: I1211 14:09:14.912447 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c3f359da-3978-4220-91da-28b53f4cf109-operator-scripts\") pod \"nova-cell1-db-create-97qxs\" (UID: \"c3f359da-3978-4220-91da-28b53f4cf109\") " pod="openstack/nova-cell1-db-create-97qxs" Dec 11 14:09:14 crc kubenswrapper[5050]: I1211 14:09:14.938022 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4jzv7\" (UniqueName: \"kubernetes.io/projected/c3f359da-3978-4220-91da-28b53f4cf109-kube-api-access-4jzv7\") pod \"nova-cell1-db-create-97qxs\" (UID: \"c3f359da-3978-4220-91da-28b53f4cf109\") " pod="openstack/nova-cell1-db-create-97qxs" Dec 11 14:09:14 crc kubenswrapper[5050]: I1211 14:09:14.984620 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-4326-account-create-update-mx2wn" Dec 11 14:09:15 crc kubenswrapper[5050]: I1211 14:09:15.009118 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-d6ec-account-create-update-f7xfx"] Dec 11 14:09:15 crc kubenswrapper[5050]: I1211 14:09:15.010837 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-d6ec-account-create-update-f7xfx" Dec 11 14:09:15 crc kubenswrapper[5050]: I1211 14:09:15.013789 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Dec 11 14:09:15 crc kubenswrapper[5050]: I1211 14:09:15.015388 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/81a009ad-1a05-40c1-9c75-7a559592eadf-operator-scripts\") pod \"nova-cell0-e20d-account-create-update-6hllh\" (UID: \"81a009ad-1a05-40c1-9c75-7a559592eadf\") " pod="openstack/nova-cell0-e20d-account-create-update-6hllh" Dec 11 14:09:15 crc kubenswrapper[5050]: I1211 14:09:15.015498 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wb985\" (UniqueName: \"kubernetes.io/projected/81a009ad-1a05-40c1-9c75-7a559592eadf-kube-api-access-wb985\") pod \"nova-cell0-e20d-account-create-update-6hllh\" (UID: \"81a009ad-1a05-40c1-9c75-7a559592eadf\") " pod="openstack/nova-cell0-e20d-account-create-update-6hllh" Dec 11 14:09:15 crc kubenswrapper[5050]: I1211 14:09:15.016759 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-97qxs" Dec 11 14:09:15 crc kubenswrapper[5050]: I1211 14:09:15.019997 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-d6ec-account-create-update-f7xfx"] Dec 11 14:09:15 crc kubenswrapper[5050]: I1211 14:09:15.117534 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/097e2c08-b7fb-4d21-8e0b-efdb0ac0c9a6-operator-scripts\") pod \"nova-cell1-d6ec-account-create-update-f7xfx\" (UID: \"097e2c08-b7fb-4d21-8e0b-efdb0ac0c9a6\") " pod="openstack/nova-cell1-d6ec-account-create-update-f7xfx" Dec 11 14:09:15 crc kubenswrapper[5050]: I1211 14:09:15.117849 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zm8s8\" (UniqueName: \"kubernetes.io/projected/097e2c08-b7fb-4d21-8e0b-efdb0ac0c9a6-kube-api-access-zm8s8\") pod \"nova-cell1-d6ec-account-create-update-f7xfx\" (UID: \"097e2c08-b7fb-4d21-8e0b-efdb0ac0c9a6\") " pod="openstack/nova-cell1-d6ec-account-create-update-f7xfx" Dec 11 14:09:15 crc kubenswrapper[5050]: I1211 14:09:15.118193 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/81a009ad-1a05-40c1-9c75-7a559592eadf-operator-scripts\") pod \"nova-cell0-e20d-account-create-update-6hllh\" (UID: \"81a009ad-1a05-40c1-9c75-7a559592eadf\") " pod="openstack/nova-cell0-e20d-account-create-update-6hllh" Dec 11 14:09:15 crc kubenswrapper[5050]: I1211 14:09:15.118264 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wb985\" (UniqueName: \"kubernetes.io/projected/81a009ad-1a05-40c1-9c75-7a559592eadf-kube-api-access-wb985\") pod \"nova-cell0-e20d-account-create-update-6hllh\" (UID: \"81a009ad-1a05-40c1-9c75-7a559592eadf\") " pod="openstack/nova-cell0-e20d-account-create-update-6hllh" Dec 11 14:09:15 crc kubenswrapper[5050]: I1211 14:09:15.119349 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/81a009ad-1a05-40c1-9c75-7a559592eadf-operator-scripts\") pod 
\"nova-cell0-e20d-account-create-update-6hllh\" (UID: \"81a009ad-1a05-40c1-9c75-7a559592eadf\") " pod="openstack/nova-cell0-e20d-account-create-update-6hllh" Dec 11 14:09:15 crc kubenswrapper[5050]: I1211 14:09:15.166386 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wb985\" (UniqueName: \"kubernetes.io/projected/81a009ad-1a05-40c1-9c75-7a559592eadf-kube-api-access-wb985\") pod \"nova-cell0-e20d-account-create-update-6hllh\" (UID: \"81a009ad-1a05-40c1-9c75-7a559592eadf\") " pod="openstack/nova-cell0-e20d-account-create-update-6hllh" Dec 11 14:09:15 crc kubenswrapper[5050]: I1211 14:09:15.222358 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/097e2c08-b7fb-4d21-8e0b-efdb0ac0c9a6-operator-scripts\") pod \"nova-cell1-d6ec-account-create-update-f7xfx\" (UID: \"097e2c08-b7fb-4d21-8e0b-efdb0ac0c9a6\") " pod="openstack/nova-cell1-d6ec-account-create-update-f7xfx" Dec 11 14:09:15 crc kubenswrapper[5050]: I1211 14:09:15.222445 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zm8s8\" (UniqueName: \"kubernetes.io/projected/097e2c08-b7fb-4d21-8e0b-efdb0ac0c9a6-kube-api-access-zm8s8\") pod \"nova-cell1-d6ec-account-create-update-f7xfx\" (UID: \"097e2c08-b7fb-4d21-8e0b-efdb0ac0c9a6\") " pod="openstack/nova-cell1-d6ec-account-create-update-f7xfx" Dec 11 14:09:15 crc kubenswrapper[5050]: I1211 14:09:15.223672 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/097e2c08-b7fb-4d21-8e0b-efdb0ac0c9a6-operator-scripts\") pod \"nova-cell1-d6ec-account-create-update-f7xfx\" (UID: \"097e2c08-b7fb-4d21-8e0b-efdb0ac0c9a6\") " pod="openstack/nova-cell1-d6ec-account-create-update-f7xfx" Dec 11 14:09:15 crc kubenswrapper[5050]: I1211 14:09:15.300251 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zm8s8\" (UniqueName: \"kubernetes.io/projected/097e2c08-b7fb-4d21-8e0b-efdb0ac0c9a6-kube-api-access-zm8s8\") pod \"nova-cell1-d6ec-account-create-update-f7xfx\" (UID: \"097e2c08-b7fb-4d21-8e0b-efdb0ac0c9a6\") " pod="openstack/nova-cell1-d6ec-account-create-update-f7xfx" Dec 11 14:09:15 crc kubenswrapper[5050]: I1211 14:09:15.345630 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-d6ec-account-create-update-f7xfx" Dec 11 14:09:15 crc kubenswrapper[5050]: I1211 14:09:15.373894 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-brs2f"] Dec 11 14:09:15 crc kubenswrapper[5050]: I1211 14:09:15.406574 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-2cbbr"] Dec 11 14:09:15 crc kubenswrapper[5050]: I1211 14:09:15.441577 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-e20d-account-create-update-6hllh" Dec 11 14:09:15 crc kubenswrapper[5050]: I1211 14:09:15.468178 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0a565dde-2487-4769-8ded-f12f854974a3","Type":"ContainerStarted","Data":"cd238a0ca1f59437094b058cc5204c39ad43032bad56669c8079a6896153cf65"} Dec 11 14:09:15 crc kubenswrapper[5050]: W1211 14:09:15.564270 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4130a67a_7d8d_4eff_b6ea_be9f43992443.slice/crio-2bb5d392d104b5ee03d96eecb639d989508533c2e37efe6759c719eb3cc2b0b1 WatchSource:0}: Error finding container 2bb5d392d104b5ee03d96eecb639d989508533c2e37efe6759c719eb3cc2b0b1: Status 404 returned error can't find the container with id 2bb5d392d104b5ee03d96eecb639d989508533c2e37efe6759c719eb3cc2b0b1 Dec 11 14:09:15 crc kubenswrapper[5050]: I1211 14:09:15.640973 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-4326-account-create-update-mx2wn"] Dec 11 14:09:15 crc kubenswrapper[5050]: I1211 14:09:15.923853 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-97qxs"] Dec 11 14:09:16 crc kubenswrapper[5050]: I1211 14:09:16.028911 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-d6ec-account-create-update-f7xfx"] Dec 11 14:09:16 crc kubenswrapper[5050]: W1211 14:09:16.084242 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod097e2c08_b7fb_4d21_8e0b_efdb0ac0c9a6.slice/crio-940ec0277ca6c1d129f0e9ef0b6de0f537d13f7eb1297de75312d65b70475a87 WatchSource:0}: Error finding container 940ec0277ca6c1d129f0e9ef0b6de0f537d13f7eb1297de75312d65b70475a87: Status 404 returned error can't find the container with id 940ec0277ca6c1d129f0e9ef0b6de0f537d13f7eb1297de75312d65b70475a87 Dec 11 14:09:16 crc kubenswrapper[5050]: I1211 14:09:16.222433 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-e20d-account-create-update-6hllh"] Dec 11 14:09:16 crc kubenswrapper[5050]: W1211 14:09:16.223760 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod81a009ad_1a05_40c1_9c75_7a559592eadf.slice/crio-acab2b77c22490c5010f8345090d6f972aca21dcf6a908f8b6dd96be3e0022a1 WatchSource:0}: Error finding container acab2b77c22490c5010f8345090d6f972aca21dcf6a908f8b6dd96be3e0022a1: Status 404 returned error can't find the container with id acab2b77c22490c5010f8345090d6f972aca21dcf6a908f8b6dd96be3e0022a1 Dec 11 14:09:16 crc kubenswrapper[5050]: I1211 14:09:16.485733 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-97qxs" event={"ID":"c3f359da-3978-4220-91da-28b53f4cf109","Type":"ContainerStarted","Data":"1bef0680c44fff43ab5a9504ecc960a1b4317db8f23fbca332406cda6c7a3be5"} Dec 11 14:09:16 crc kubenswrapper[5050]: I1211 14:09:16.486049 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-97qxs" event={"ID":"c3f359da-3978-4220-91da-28b53f4cf109","Type":"ContainerStarted","Data":"8f4ada786e3a3b7f2e1d4cf993471365f4c0ce76825c12e5af938e3f7c11c59f"} Dec 11 14:09:16 crc kubenswrapper[5050]: I1211 14:09:16.489523 5050 generic.go:334] "Generic (PLEG): container finished" podID="4130a67a-7d8d-4eff-b6ea-be9f43992443" 
containerID="43ee0aafff3805d46aa0d5efae095bd970bdd079d7f8cf85ed78a9ebc421e4cb" exitCode=0 Dec 11 14:09:16 crc kubenswrapper[5050]: I1211 14:09:16.489616 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-brs2f" event={"ID":"4130a67a-7d8d-4eff-b6ea-be9f43992443","Type":"ContainerDied","Data":"43ee0aafff3805d46aa0d5efae095bd970bdd079d7f8cf85ed78a9ebc421e4cb"} Dec 11 14:09:16 crc kubenswrapper[5050]: I1211 14:09:16.489664 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-brs2f" event={"ID":"4130a67a-7d8d-4eff-b6ea-be9f43992443","Type":"ContainerStarted","Data":"2bb5d392d104b5ee03d96eecb639d989508533c2e37efe6759c719eb3cc2b0b1"} Dec 11 14:09:16 crc kubenswrapper[5050]: I1211 14:09:16.506239 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0a565dde-2487-4769-8ded-f12f854974a3","Type":"ContainerStarted","Data":"d9e9e4869e8babe6006b7fb53f05249086077270aef055ad2257d108e07ab995"} Dec 11 14:09:16 crc kubenswrapper[5050]: I1211 14:09:16.507178 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 11 14:09:16 crc kubenswrapper[5050]: I1211 14:09:16.509493 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-d6ec-account-create-update-f7xfx" event={"ID":"097e2c08-b7fb-4d21-8e0b-efdb0ac0c9a6","Type":"ContainerStarted","Data":"fdcecf14f741e53bec9526499e9c4b26c9749197a5f66f8e11b391a11024579f"} Dec 11 14:09:16 crc kubenswrapper[5050]: I1211 14:09:16.509556 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-d6ec-account-create-update-f7xfx" event={"ID":"097e2c08-b7fb-4d21-8e0b-efdb0ac0c9a6","Type":"ContainerStarted","Data":"940ec0277ca6c1d129f0e9ef0b6de0f537d13f7eb1297de75312d65b70475a87"} Dec 11 14:09:16 crc kubenswrapper[5050]: I1211 14:09:16.511117 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-db-create-97qxs" podStartSLOduration=2.511100508 podStartE2EDuration="2.511100508s" podCreationTimestamp="2025-12-11 14:09:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 14:09:16.504972913 +0000 UTC m=+1247.348695509" watchObservedRunningTime="2025-12-11 14:09:16.511100508 +0000 UTC m=+1247.354823094" Dec 11 14:09:16 crc kubenswrapper[5050]: I1211 14:09:16.513445 5050 generic.go:334] "Generic (PLEG): container finished" podID="4affe74d-e417-48c1-9c71-7cca7d0729db" containerID="0fb4881029120d8bb5f547b1ad66c0f186f487a95a32a956e5d51f220e3cca47" exitCode=0 Dec 11 14:09:16 crc kubenswrapper[5050]: I1211 14:09:16.513522 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-2cbbr" event={"ID":"4affe74d-e417-48c1-9c71-7cca7d0729db","Type":"ContainerDied","Data":"0fb4881029120d8bb5f547b1ad66c0f186f487a95a32a956e5d51f220e3cca47"} Dec 11 14:09:16 crc kubenswrapper[5050]: I1211 14:09:16.513552 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-2cbbr" event={"ID":"4affe74d-e417-48c1-9c71-7cca7d0729db","Type":"ContainerStarted","Data":"79730644c3148ebd3752128cebc77023c888014e7acc5668555db20fb075e783"} Dec 11 14:09:16 crc kubenswrapper[5050]: I1211 14:09:16.515916 5050 generic.go:334] "Generic (PLEG): container finished" podID="ee81095a-fe79-47c7-aa3e-e1768a655b86" containerID="81b83b4349b2dc8f2b9b8ea3e181e622e1a06808c372e219e6cfa525077df28b" exitCode=0 Dec 11 14:09:16 crc 
kubenswrapper[5050]: I1211 14:09:16.516037 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-4326-account-create-update-mx2wn" event={"ID":"ee81095a-fe79-47c7-aa3e-e1768a655b86","Type":"ContainerDied","Data":"81b83b4349b2dc8f2b9b8ea3e181e622e1a06808c372e219e6cfa525077df28b"} Dec 11 14:09:16 crc kubenswrapper[5050]: I1211 14:09:16.516053 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-4326-account-create-update-mx2wn" event={"ID":"ee81095a-fe79-47c7-aa3e-e1768a655b86","Type":"ContainerStarted","Data":"356edcabbe6274adaec268a09f4b30b919209447ba1f99cfb6c3fb64216c12df"} Dec 11 14:09:16 crc kubenswrapper[5050]: I1211 14:09:16.517193 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-e20d-account-create-update-6hllh" event={"ID":"81a009ad-1a05-40c1-9c75-7a559592eadf","Type":"ContainerStarted","Data":"acab2b77c22490c5010f8345090d6f972aca21dcf6a908f8b6dd96be3e0022a1"} Dec 11 14:09:16 crc kubenswrapper[5050]: I1211 14:09:16.556499 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.095214157 podStartE2EDuration="6.556288375s" podCreationTimestamp="2025-12-11 14:09:10 +0000 UTC" firstStartedPulling="2025-12-11 14:09:11.343038695 +0000 UTC m=+1242.186761281" lastFinishedPulling="2025-12-11 14:09:15.804112913 +0000 UTC m=+1246.647835499" observedRunningTime="2025-12-11 14:09:16.547802167 +0000 UTC m=+1247.391524753" watchObservedRunningTime="2025-12-11 14:09:16.556288375 +0000 UTC m=+1247.400010961" Dec 11 14:09:16 crc kubenswrapper[5050]: I1211 14:09:16.601794 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-d6ec-account-create-update-f7xfx" podStartSLOduration=2.601768081 podStartE2EDuration="2.601768081s" podCreationTimestamp="2025-12-11 14:09:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 14:09:16.592423139 +0000 UTC m=+1247.436145725" watchObservedRunningTime="2025-12-11 14:09:16.601768081 +0000 UTC m=+1247.445490677" Dec 11 14:09:17 crc kubenswrapper[5050]: I1211 14:09:17.529553 5050 generic.go:334] "Generic (PLEG): container finished" podID="81a009ad-1a05-40c1-9c75-7a559592eadf" containerID="6161a3de9821ffcd08ec84fa22ce5305398be067c2c6d3a6b39a6732c2fc1edb" exitCode=0 Dec 11 14:09:17 crc kubenswrapper[5050]: I1211 14:09:17.529637 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-e20d-account-create-update-6hllh" event={"ID":"81a009ad-1a05-40c1-9c75-7a559592eadf","Type":"ContainerDied","Data":"6161a3de9821ffcd08ec84fa22ce5305398be067c2c6d3a6b39a6732c2fc1edb"} Dec 11 14:09:17 crc kubenswrapper[5050]: I1211 14:09:17.531387 5050 generic.go:334] "Generic (PLEG): container finished" podID="c3f359da-3978-4220-91da-28b53f4cf109" containerID="1bef0680c44fff43ab5a9504ecc960a1b4317db8f23fbca332406cda6c7a3be5" exitCode=0 Dec 11 14:09:17 crc kubenswrapper[5050]: I1211 14:09:17.531516 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-97qxs" event={"ID":"c3f359da-3978-4220-91da-28b53f4cf109","Type":"ContainerDied","Data":"1bef0680c44fff43ab5a9504ecc960a1b4317db8f23fbca332406cda6c7a3be5"} Dec 11 14:09:17 crc kubenswrapper[5050]: I1211 14:09:17.533493 5050 generic.go:334] "Generic (PLEG): container finished" podID="097e2c08-b7fb-4d21-8e0b-efdb0ac0c9a6" containerID="fdcecf14f741e53bec9526499e9c4b26c9749197a5f66f8e11b391a11024579f" 
exitCode=0 Dec 11 14:09:17 crc kubenswrapper[5050]: I1211 14:09:17.533608 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-d6ec-account-create-update-f7xfx" event={"ID":"097e2c08-b7fb-4d21-8e0b-efdb0ac0c9a6","Type":"ContainerDied","Data":"fdcecf14f741e53bec9526499e9c4b26c9749197a5f66f8e11b391a11024579f"} Dec 11 14:09:17 crc kubenswrapper[5050]: I1211 14:09:17.991051 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-4326-account-create-update-mx2wn" Dec 11 14:09:18 crc kubenswrapper[5050]: I1211 14:09:18.103781 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-brs2f" Dec 11 14:09:18 crc kubenswrapper[5050]: I1211 14:09:18.112598 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-2cbbr" Dec 11 14:09:18 crc kubenswrapper[5050]: I1211 14:09:18.124888 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-njn77\" (UniqueName: \"kubernetes.io/projected/ee81095a-fe79-47c7-aa3e-e1768a655b86-kube-api-access-njn77\") pod \"ee81095a-fe79-47c7-aa3e-e1768a655b86\" (UID: \"ee81095a-fe79-47c7-aa3e-e1768a655b86\") " Dec 11 14:09:18 crc kubenswrapper[5050]: I1211 14:09:18.124995 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ee81095a-fe79-47c7-aa3e-e1768a655b86-operator-scripts\") pod \"ee81095a-fe79-47c7-aa3e-e1768a655b86\" (UID: \"ee81095a-fe79-47c7-aa3e-e1768a655b86\") " Dec 11 14:09:18 crc kubenswrapper[5050]: I1211 14:09:18.125925 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee81095a-fe79-47c7-aa3e-e1768a655b86-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ee81095a-fe79-47c7-aa3e-e1768a655b86" (UID: "ee81095a-fe79-47c7-aa3e-e1768a655b86"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 14:09:18 crc kubenswrapper[5050]: I1211 14:09:18.134197 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee81095a-fe79-47c7-aa3e-e1768a655b86-kube-api-access-njn77" (OuterVolumeSpecName: "kube-api-access-njn77") pod "ee81095a-fe79-47c7-aa3e-e1768a655b86" (UID: "ee81095a-fe79-47c7-aa3e-e1768a655b86"). InnerVolumeSpecName "kube-api-access-njn77". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 14:09:18 crc kubenswrapper[5050]: I1211 14:09:18.226871 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4130a67a-7d8d-4eff-b6ea-be9f43992443-operator-scripts\") pod \"4130a67a-7d8d-4eff-b6ea-be9f43992443\" (UID: \"4130a67a-7d8d-4eff-b6ea-be9f43992443\") " Dec 11 14:09:18 crc kubenswrapper[5050]: I1211 14:09:18.227235 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6bfdk\" (UniqueName: \"kubernetes.io/projected/4affe74d-e417-48c1-9c71-7cca7d0729db-kube-api-access-6bfdk\") pod \"4affe74d-e417-48c1-9c71-7cca7d0729db\" (UID: \"4affe74d-e417-48c1-9c71-7cca7d0729db\") " Dec 11 14:09:18 crc kubenswrapper[5050]: I1211 14:09:18.227338 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fsz8q\" (UniqueName: \"kubernetes.io/projected/4130a67a-7d8d-4eff-b6ea-be9f43992443-kube-api-access-fsz8q\") pod \"4130a67a-7d8d-4eff-b6ea-be9f43992443\" (UID: \"4130a67a-7d8d-4eff-b6ea-be9f43992443\") " Dec 11 14:09:18 crc kubenswrapper[5050]: I1211 14:09:18.227427 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4affe74d-e417-48c1-9c71-7cca7d0729db-operator-scripts\") pod \"4affe74d-e417-48c1-9c71-7cca7d0729db\" (UID: \"4affe74d-e417-48c1-9c71-7cca7d0729db\") " Dec 11 14:09:18 crc kubenswrapper[5050]: I1211 14:09:18.227661 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4130a67a-7d8d-4eff-b6ea-be9f43992443-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4130a67a-7d8d-4eff-b6ea-be9f43992443" (UID: "4130a67a-7d8d-4eff-b6ea-be9f43992443"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 14:09:18 crc kubenswrapper[5050]: I1211 14:09:18.227974 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-njn77\" (UniqueName: \"kubernetes.io/projected/ee81095a-fe79-47c7-aa3e-e1768a655b86-kube-api-access-njn77\") on node \"crc\" DevicePath \"\"" Dec 11 14:09:18 crc kubenswrapper[5050]: I1211 14:09:18.227991 5050 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4130a67a-7d8d-4eff-b6ea-be9f43992443-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 14:09:18 crc kubenswrapper[5050]: I1211 14:09:18.228002 5050 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ee81095a-fe79-47c7-aa3e-e1768a655b86-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 14:09:18 crc kubenswrapper[5050]: I1211 14:09:18.229057 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4affe74d-e417-48c1-9c71-7cca7d0729db-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4affe74d-e417-48c1-9c71-7cca7d0729db" (UID: "4affe74d-e417-48c1-9c71-7cca7d0729db"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 14:09:18 crc kubenswrapper[5050]: I1211 14:09:18.232249 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4130a67a-7d8d-4eff-b6ea-be9f43992443-kube-api-access-fsz8q" (OuterVolumeSpecName: "kube-api-access-fsz8q") pod "4130a67a-7d8d-4eff-b6ea-be9f43992443" (UID: "4130a67a-7d8d-4eff-b6ea-be9f43992443"). InnerVolumeSpecName "kube-api-access-fsz8q". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 14:09:18 crc kubenswrapper[5050]: I1211 14:09:18.232413 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4affe74d-e417-48c1-9c71-7cca7d0729db-kube-api-access-6bfdk" (OuterVolumeSpecName: "kube-api-access-6bfdk") pod "4affe74d-e417-48c1-9c71-7cca7d0729db" (UID: "4affe74d-e417-48c1-9c71-7cca7d0729db"). InnerVolumeSpecName "kube-api-access-6bfdk". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 14:09:18 crc kubenswrapper[5050]: I1211 14:09:18.330071 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6bfdk\" (UniqueName: \"kubernetes.io/projected/4affe74d-e417-48c1-9c71-7cca7d0729db-kube-api-access-6bfdk\") on node \"crc\" DevicePath \"\"" Dec 11 14:09:18 crc kubenswrapper[5050]: I1211 14:09:18.330125 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fsz8q\" (UniqueName: \"kubernetes.io/projected/4130a67a-7d8d-4eff-b6ea-be9f43992443-kube-api-access-fsz8q\") on node \"crc\" DevicePath \"\"" Dec 11 14:09:18 crc kubenswrapper[5050]: I1211 14:09:18.330141 5050 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4affe74d-e417-48c1-9c71-7cca7d0729db-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 14:09:18 crc kubenswrapper[5050]: I1211 14:09:18.546253 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-2cbbr" Dec 11 14:09:18 crc kubenswrapper[5050]: I1211 14:09:18.546240 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-2cbbr" event={"ID":"4affe74d-e417-48c1-9c71-7cca7d0729db","Type":"ContainerDied","Data":"79730644c3148ebd3752128cebc77023c888014e7acc5668555db20fb075e783"} Dec 11 14:09:18 crc kubenswrapper[5050]: I1211 14:09:18.546434 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="79730644c3148ebd3752128cebc77023c888014e7acc5668555db20fb075e783" Dec 11 14:09:18 crc kubenswrapper[5050]: I1211 14:09:18.548088 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-4326-account-create-update-mx2wn" event={"ID":"ee81095a-fe79-47c7-aa3e-e1768a655b86","Type":"ContainerDied","Data":"356edcabbe6274adaec268a09f4b30b919209447ba1f99cfb6c3fb64216c12df"} Dec 11 14:09:18 crc kubenswrapper[5050]: I1211 14:09:18.548137 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="356edcabbe6274adaec268a09f4b30b919209447ba1f99cfb6c3fb64216c12df" Dec 11 14:09:18 crc kubenswrapper[5050]: I1211 14:09:18.548095 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-4326-account-create-update-mx2wn" Dec 11 14:09:18 crc kubenswrapper[5050]: I1211 14:09:18.551371 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-brs2f" event={"ID":"4130a67a-7d8d-4eff-b6ea-be9f43992443","Type":"ContainerDied","Data":"2bb5d392d104b5ee03d96eecb639d989508533c2e37efe6759c719eb3cc2b0b1"} Dec 11 14:09:18 crc kubenswrapper[5050]: I1211 14:09:18.551410 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2bb5d392d104b5ee03d96eecb639d989508533c2e37efe6759c719eb3cc2b0b1" Dec 11 14:09:18 crc kubenswrapper[5050]: I1211 14:09:18.551668 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-brs2f" Dec 11 14:09:18 crc kubenswrapper[5050]: I1211 14:09:18.824564 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-d6ec-account-create-update-f7xfx" Dec 11 14:09:18 crc kubenswrapper[5050]: I1211 14:09:18.944152 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/097e2c08-b7fb-4d21-8e0b-efdb0ac0c9a6-operator-scripts\") pod \"097e2c08-b7fb-4d21-8e0b-efdb0ac0c9a6\" (UID: \"097e2c08-b7fb-4d21-8e0b-efdb0ac0c9a6\") " Dec 11 14:09:18 crc kubenswrapper[5050]: I1211 14:09:18.944336 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zm8s8\" (UniqueName: \"kubernetes.io/projected/097e2c08-b7fb-4d21-8e0b-efdb0ac0c9a6-kube-api-access-zm8s8\") pod \"097e2c08-b7fb-4d21-8e0b-efdb0ac0c9a6\" (UID: \"097e2c08-b7fb-4d21-8e0b-efdb0ac0c9a6\") " Dec 11 14:09:18 crc kubenswrapper[5050]: I1211 14:09:18.944694 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/097e2c08-b7fb-4d21-8e0b-efdb0ac0c9a6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "097e2c08-b7fb-4d21-8e0b-efdb0ac0c9a6" (UID: "097e2c08-b7fb-4d21-8e0b-efdb0ac0c9a6"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 14:09:18 crc kubenswrapper[5050]: I1211 14:09:18.945066 5050 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/097e2c08-b7fb-4d21-8e0b-efdb0ac0c9a6-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 14:09:18 crc kubenswrapper[5050]: I1211 14:09:18.951375 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/097e2c08-b7fb-4d21-8e0b-efdb0ac0c9a6-kube-api-access-zm8s8" (OuterVolumeSpecName: "kube-api-access-zm8s8") pod "097e2c08-b7fb-4d21-8e0b-efdb0ac0c9a6" (UID: "097e2c08-b7fb-4d21-8e0b-efdb0ac0c9a6"). InnerVolumeSpecName "kube-api-access-zm8s8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 14:09:19 crc kubenswrapper[5050]: I1211 14:09:19.047838 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-e20d-account-create-update-6hllh" Dec 11 14:09:19 crc kubenswrapper[5050]: I1211 14:09:19.048522 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zm8s8\" (UniqueName: \"kubernetes.io/projected/097e2c08-b7fb-4d21-8e0b-efdb0ac0c9a6-kube-api-access-zm8s8\") on node \"crc\" DevicePath \"\"" Dec 11 14:09:19 crc kubenswrapper[5050]: I1211 14:09:19.060648 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-97qxs" Dec 11 14:09:19 crc kubenswrapper[5050]: I1211 14:09:19.150389 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/81a009ad-1a05-40c1-9c75-7a559592eadf-operator-scripts\") pod \"81a009ad-1a05-40c1-9c75-7a559592eadf\" (UID: \"81a009ad-1a05-40c1-9c75-7a559592eadf\") " Dec 11 14:09:19 crc kubenswrapper[5050]: I1211 14:09:19.150563 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wb985\" (UniqueName: \"kubernetes.io/projected/81a009ad-1a05-40c1-9c75-7a559592eadf-kube-api-access-wb985\") pod \"81a009ad-1a05-40c1-9c75-7a559592eadf\" (UID: \"81a009ad-1a05-40c1-9c75-7a559592eadf\") " Dec 11 14:09:19 crc kubenswrapper[5050]: I1211 14:09:19.150706 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c3f359da-3978-4220-91da-28b53f4cf109-operator-scripts\") pod \"c3f359da-3978-4220-91da-28b53f4cf109\" (UID: \"c3f359da-3978-4220-91da-28b53f4cf109\") " Dec 11 14:09:19 crc kubenswrapper[5050]: I1211 14:09:19.150752 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4jzv7\" (UniqueName: \"kubernetes.io/projected/c3f359da-3978-4220-91da-28b53f4cf109-kube-api-access-4jzv7\") pod \"c3f359da-3978-4220-91da-28b53f4cf109\" (UID: \"c3f359da-3978-4220-91da-28b53f4cf109\") " Dec 11 14:09:19 crc kubenswrapper[5050]: I1211 14:09:19.151077 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81a009ad-1a05-40c1-9c75-7a559592eadf-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "81a009ad-1a05-40c1-9c75-7a559592eadf" (UID: "81a009ad-1a05-40c1-9c75-7a559592eadf"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 14:09:19 crc kubenswrapper[5050]: I1211 14:09:19.151259 5050 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/81a009ad-1a05-40c1-9c75-7a559592eadf-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 14:09:19 crc kubenswrapper[5050]: I1211 14:09:19.151809 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c3f359da-3978-4220-91da-28b53f4cf109-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c3f359da-3978-4220-91da-28b53f4cf109" (UID: "c3f359da-3978-4220-91da-28b53f4cf109"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 14:09:19 crc kubenswrapper[5050]: I1211 14:09:19.154957 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3f359da-3978-4220-91da-28b53f4cf109-kube-api-access-4jzv7" (OuterVolumeSpecName: "kube-api-access-4jzv7") pod "c3f359da-3978-4220-91da-28b53f4cf109" (UID: "c3f359da-3978-4220-91da-28b53f4cf109"). InnerVolumeSpecName "kube-api-access-4jzv7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 14:09:19 crc kubenswrapper[5050]: I1211 14:09:19.155189 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81a009ad-1a05-40c1-9c75-7a559592eadf-kube-api-access-wb985" (OuterVolumeSpecName: "kube-api-access-wb985") pod "81a009ad-1a05-40c1-9c75-7a559592eadf" (UID: "81a009ad-1a05-40c1-9c75-7a559592eadf"). 
InnerVolumeSpecName "kube-api-access-wb985". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 14:09:19 crc kubenswrapper[5050]: I1211 14:09:19.253098 5050 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c3f359da-3978-4220-91da-28b53f4cf109-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 14:09:19 crc kubenswrapper[5050]: I1211 14:09:19.253388 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4jzv7\" (UniqueName: \"kubernetes.io/projected/c3f359da-3978-4220-91da-28b53f4cf109-kube-api-access-4jzv7\") on node \"crc\" DevicePath \"\"" Dec 11 14:09:19 crc kubenswrapper[5050]: I1211 14:09:19.253460 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wb985\" (UniqueName: \"kubernetes.io/projected/81a009ad-1a05-40c1-9c75-7a559592eadf-kube-api-access-wb985\") on node \"crc\" DevicePath \"\"" Dec 11 14:09:19 crc kubenswrapper[5050]: I1211 14:09:19.605143 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-d6ec-account-create-update-f7xfx" event={"ID":"097e2c08-b7fb-4d21-8e0b-efdb0ac0c9a6","Type":"ContainerDied","Data":"940ec0277ca6c1d129f0e9ef0b6de0f537d13f7eb1297de75312d65b70475a87"} Dec 11 14:09:19 crc kubenswrapper[5050]: I1211 14:09:19.605221 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="940ec0277ca6c1d129f0e9ef0b6de0f537d13f7eb1297de75312d65b70475a87" Dec 11 14:09:19 crc kubenswrapper[5050]: I1211 14:09:19.605397 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-d6ec-account-create-update-f7xfx" Dec 11 14:09:19 crc kubenswrapper[5050]: I1211 14:09:19.613657 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-e20d-account-create-update-6hllh" event={"ID":"81a009ad-1a05-40c1-9c75-7a559592eadf","Type":"ContainerDied","Data":"acab2b77c22490c5010f8345090d6f972aca21dcf6a908f8b6dd96be3e0022a1"} Dec 11 14:09:19 crc kubenswrapper[5050]: I1211 14:09:19.613725 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="acab2b77c22490c5010f8345090d6f972aca21dcf6a908f8b6dd96be3e0022a1" Dec 11 14:09:19 crc kubenswrapper[5050]: I1211 14:09:19.614812 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-e20d-account-create-update-6hllh" Dec 11 14:09:19 crc kubenswrapper[5050]: I1211 14:09:19.619898 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-97qxs" event={"ID":"c3f359da-3978-4220-91da-28b53f4cf109","Type":"ContainerDied","Data":"8f4ada786e3a3b7f2e1d4cf993471365f4c0ce76825c12e5af938e3f7c11c59f"} Dec 11 14:09:19 crc kubenswrapper[5050]: I1211 14:09:19.619993 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8f4ada786e3a3b7f2e1d4cf993471365f4c0ce76825c12e5af938e3f7c11c59f" Dec 11 14:09:19 crc kubenswrapper[5050]: I1211 14:09:19.620041 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-97qxs" Dec 11 14:09:22 crc kubenswrapper[5050]: I1211 14:09:22.144534 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 11 14:09:22 crc kubenswrapper[5050]: I1211 14:09:22.148063 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0a565dde-2487-4769-8ded-f12f854974a3" containerName="ceilometer-central-agent" containerID="cri-o://446a31343fe3580f1fcbb0e6eadb32cca066114d34271bddbb6b3e3fe090b7c6" gracePeriod=30 Dec 11 14:09:22 crc kubenswrapper[5050]: I1211 14:09:22.148498 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0a565dde-2487-4769-8ded-f12f854974a3" containerName="proxy-httpd" containerID="cri-o://d9e9e4869e8babe6006b7fb53f05249086077270aef055ad2257d108e07ab995" gracePeriod=30 Dec 11 14:09:22 crc kubenswrapper[5050]: I1211 14:09:22.148669 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0a565dde-2487-4769-8ded-f12f854974a3" containerName="sg-core" containerID="cri-o://cd238a0ca1f59437094b058cc5204c39ad43032bad56669c8079a6896153cf65" gracePeriod=30 Dec 11 14:09:22 crc kubenswrapper[5050]: I1211 14:09:22.148751 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0a565dde-2487-4769-8ded-f12f854974a3" containerName="ceilometer-notification-agent" containerID="cri-o://b1e5c13149d09a48020f93335a1f0829a6e8a80b81e0096094b2ea84ebde3f1c" gracePeriod=30 Dec 11 14:09:22 crc kubenswrapper[5050]: I1211 14:09:22.649907 5050 generic.go:334] "Generic (PLEG): container finished" podID="0a565dde-2487-4769-8ded-f12f854974a3" containerID="d9e9e4869e8babe6006b7fb53f05249086077270aef055ad2257d108e07ab995" exitCode=0 Dec 11 14:09:22 crc kubenswrapper[5050]: I1211 14:09:22.650286 5050 generic.go:334] "Generic (PLEG): container finished" podID="0a565dde-2487-4769-8ded-f12f854974a3" containerID="cd238a0ca1f59437094b058cc5204c39ad43032bad56669c8079a6896153cf65" exitCode=2 Dec 11 14:09:22 crc kubenswrapper[5050]: I1211 14:09:22.649976 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0a565dde-2487-4769-8ded-f12f854974a3","Type":"ContainerDied","Data":"d9e9e4869e8babe6006b7fb53f05249086077270aef055ad2257d108e07ab995"} Dec 11 14:09:22 crc kubenswrapper[5050]: I1211 14:09:22.650332 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0a565dde-2487-4769-8ded-f12f854974a3","Type":"ContainerDied","Data":"cd238a0ca1f59437094b058cc5204c39ad43032bad56669c8079a6896153cf65"} Dec 11 14:09:23 crc kubenswrapper[5050]: I1211 14:09:23.310932 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 11 14:09:23 crc kubenswrapper[5050]: I1211 14:09:23.454326 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a565dde-2487-4769-8ded-f12f854974a3-combined-ca-bundle\") pod \"0a565dde-2487-4769-8ded-f12f854974a3\" (UID: \"0a565dde-2487-4769-8ded-f12f854974a3\") " Dec 11 14:09:23 crc kubenswrapper[5050]: I1211 14:09:23.454684 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0a565dde-2487-4769-8ded-f12f854974a3-scripts\") pod \"0a565dde-2487-4769-8ded-f12f854974a3\" (UID: \"0a565dde-2487-4769-8ded-f12f854974a3\") " Dec 11 14:09:23 crc kubenswrapper[5050]: I1211 14:09:23.454750 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sj7bp\" (UniqueName: \"kubernetes.io/projected/0a565dde-2487-4769-8ded-f12f854974a3-kube-api-access-sj7bp\") pod \"0a565dde-2487-4769-8ded-f12f854974a3\" (UID: \"0a565dde-2487-4769-8ded-f12f854974a3\") " Dec 11 14:09:23 crc kubenswrapper[5050]: I1211 14:09:23.454807 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a565dde-2487-4769-8ded-f12f854974a3-config-data\") pod \"0a565dde-2487-4769-8ded-f12f854974a3\" (UID: \"0a565dde-2487-4769-8ded-f12f854974a3\") " Dec 11 14:09:23 crc kubenswrapper[5050]: I1211 14:09:23.454873 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0a565dde-2487-4769-8ded-f12f854974a3-sg-core-conf-yaml\") pod \"0a565dde-2487-4769-8ded-f12f854974a3\" (UID: \"0a565dde-2487-4769-8ded-f12f854974a3\") " Dec 11 14:09:23 crc kubenswrapper[5050]: I1211 14:09:23.454931 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0a565dde-2487-4769-8ded-f12f854974a3-log-httpd\") pod \"0a565dde-2487-4769-8ded-f12f854974a3\" (UID: \"0a565dde-2487-4769-8ded-f12f854974a3\") " Dec 11 14:09:23 crc kubenswrapper[5050]: I1211 14:09:23.455537 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0a565dde-2487-4769-8ded-f12f854974a3-run-httpd\") pod \"0a565dde-2487-4769-8ded-f12f854974a3\" (UID: \"0a565dde-2487-4769-8ded-f12f854974a3\") " Dec 11 14:09:23 crc kubenswrapper[5050]: I1211 14:09:23.456519 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0a565dde-2487-4769-8ded-f12f854974a3-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "0a565dde-2487-4769-8ded-f12f854974a3" (UID: "0a565dde-2487-4769-8ded-f12f854974a3"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 14:09:23 crc kubenswrapper[5050]: I1211 14:09:23.457002 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0a565dde-2487-4769-8ded-f12f854974a3-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "0a565dde-2487-4769-8ded-f12f854974a3" (UID: "0a565dde-2487-4769-8ded-f12f854974a3"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 14:09:23 crc kubenswrapper[5050]: I1211 14:09:23.464773 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a565dde-2487-4769-8ded-f12f854974a3-kube-api-access-sj7bp" (OuterVolumeSpecName: "kube-api-access-sj7bp") pod "0a565dde-2487-4769-8ded-f12f854974a3" (UID: "0a565dde-2487-4769-8ded-f12f854974a3"). InnerVolumeSpecName "kube-api-access-sj7bp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 14:09:23 crc kubenswrapper[5050]: I1211 14:09:23.490270 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a565dde-2487-4769-8ded-f12f854974a3-scripts" (OuterVolumeSpecName: "scripts") pod "0a565dde-2487-4769-8ded-f12f854974a3" (UID: "0a565dde-2487-4769-8ded-f12f854974a3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:09:23 crc kubenswrapper[5050]: I1211 14:09:23.503691 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a565dde-2487-4769-8ded-f12f854974a3-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "0a565dde-2487-4769-8ded-f12f854974a3" (UID: "0a565dde-2487-4769-8ded-f12f854974a3"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:09:23 crc kubenswrapper[5050]: I1211 14:09:23.558511 5050 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0a565dde-2487-4769-8ded-f12f854974a3-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 14:09:23 crc kubenswrapper[5050]: I1211 14:09:23.558784 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sj7bp\" (UniqueName: \"kubernetes.io/projected/0a565dde-2487-4769-8ded-f12f854974a3-kube-api-access-sj7bp\") on node \"crc\" DevicePath \"\"" Dec 11 14:09:23 crc kubenswrapper[5050]: I1211 14:09:23.558852 5050 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0a565dde-2487-4769-8ded-f12f854974a3-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 11 14:09:23 crc kubenswrapper[5050]: I1211 14:09:23.558910 5050 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0a565dde-2487-4769-8ded-f12f854974a3-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 11 14:09:23 crc kubenswrapper[5050]: I1211 14:09:23.559045 5050 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0a565dde-2487-4769-8ded-f12f854974a3-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 11 14:09:23 crc kubenswrapper[5050]: I1211 14:09:23.584670 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a565dde-2487-4769-8ded-f12f854974a3-config-data" (OuterVolumeSpecName: "config-data") pod "0a565dde-2487-4769-8ded-f12f854974a3" (UID: "0a565dde-2487-4769-8ded-f12f854974a3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:09:23 crc kubenswrapper[5050]: I1211 14:09:23.584732 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a565dde-2487-4769-8ded-f12f854974a3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0a565dde-2487-4769-8ded-f12f854974a3" (UID: "0a565dde-2487-4769-8ded-f12f854974a3"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:09:23 crc kubenswrapper[5050]: I1211 14:09:23.661675 5050 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a565dde-2487-4769-8ded-f12f854974a3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 14:09:23 crc kubenswrapper[5050]: I1211 14:09:23.661730 5050 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a565dde-2487-4769-8ded-f12f854974a3-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 14:09:23 crc kubenswrapper[5050]: I1211 14:09:23.663456 5050 generic.go:334] "Generic (PLEG): container finished" podID="0a565dde-2487-4769-8ded-f12f854974a3" containerID="b1e5c13149d09a48020f93335a1f0829a6e8a80b81e0096094b2ea84ebde3f1c" exitCode=0 Dec 11 14:09:23 crc kubenswrapper[5050]: I1211 14:09:23.663498 5050 generic.go:334] "Generic (PLEG): container finished" podID="0a565dde-2487-4769-8ded-f12f854974a3" containerID="446a31343fe3580f1fcbb0e6eadb32cca066114d34271bddbb6b3e3fe090b7c6" exitCode=0 Dec 11 14:09:23 crc kubenswrapper[5050]: I1211 14:09:23.663524 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0a565dde-2487-4769-8ded-f12f854974a3","Type":"ContainerDied","Data":"b1e5c13149d09a48020f93335a1f0829a6e8a80b81e0096094b2ea84ebde3f1c"} Dec 11 14:09:23 crc kubenswrapper[5050]: I1211 14:09:23.663549 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 11 14:09:23 crc kubenswrapper[5050]: I1211 14:09:23.663560 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0a565dde-2487-4769-8ded-f12f854974a3","Type":"ContainerDied","Data":"446a31343fe3580f1fcbb0e6eadb32cca066114d34271bddbb6b3e3fe090b7c6"} Dec 11 14:09:23 crc kubenswrapper[5050]: I1211 14:09:23.663576 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0a565dde-2487-4769-8ded-f12f854974a3","Type":"ContainerDied","Data":"134f394ef4576e5c9889bd45c118d830e7f81652537068ef79ef7eee38d4c373"} Dec 11 14:09:23 crc kubenswrapper[5050]: I1211 14:09:23.663598 5050 scope.go:117] "RemoveContainer" containerID="d9e9e4869e8babe6006b7fb53f05249086077270aef055ad2257d108e07ab995" Dec 11 14:09:23 crc kubenswrapper[5050]: I1211 14:09:23.692268 5050 scope.go:117] "RemoveContainer" containerID="cd238a0ca1f59437094b058cc5204c39ad43032bad56669c8079a6896153cf65" Dec 11 14:09:23 crc kubenswrapper[5050]: I1211 14:09:23.711218 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 11 14:09:23 crc kubenswrapper[5050]: I1211 14:09:23.715437 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 11 14:09:23 crc kubenswrapper[5050]: I1211 14:09:23.725330 5050 scope.go:117] "RemoveContainer" containerID="b1e5c13149d09a48020f93335a1f0829a6e8a80b81e0096094b2ea84ebde3f1c" Dec 11 14:09:23 crc kubenswrapper[5050]: I1211 14:09:23.768525 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 11 14:09:23 crc kubenswrapper[5050]: E1211 14:09:23.769062 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a565dde-2487-4769-8ded-f12f854974a3" containerName="ceilometer-notification-agent" Dec 11 14:09:23 crc kubenswrapper[5050]: I1211 14:09:23.769078 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a565dde-2487-4769-8ded-f12f854974a3" 
containerName="ceilometer-notification-agent" Dec 11 14:09:23 crc kubenswrapper[5050]: E1211 14:09:23.769095 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="097e2c08-b7fb-4d21-8e0b-efdb0ac0c9a6" containerName="mariadb-account-create-update" Dec 11 14:09:23 crc kubenswrapper[5050]: I1211 14:09:23.769102 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="097e2c08-b7fb-4d21-8e0b-efdb0ac0c9a6" containerName="mariadb-account-create-update" Dec 11 14:09:23 crc kubenswrapper[5050]: E1211 14:09:23.769414 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4affe74d-e417-48c1-9c71-7cca7d0729db" containerName="mariadb-database-create" Dec 11 14:09:23 crc kubenswrapper[5050]: I1211 14:09:23.769426 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="4affe74d-e417-48c1-9c71-7cca7d0729db" containerName="mariadb-database-create" Dec 11 14:09:23 crc kubenswrapper[5050]: E1211 14:09:23.769564 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a565dde-2487-4769-8ded-f12f854974a3" containerName="proxy-httpd" Dec 11 14:09:23 crc kubenswrapper[5050]: I1211 14:09:23.769576 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a565dde-2487-4769-8ded-f12f854974a3" containerName="proxy-httpd" Dec 11 14:09:23 crc kubenswrapper[5050]: E1211 14:09:23.769591 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81a009ad-1a05-40c1-9c75-7a559592eadf" containerName="mariadb-account-create-update" Dec 11 14:09:23 crc kubenswrapper[5050]: I1211 14:09:23.769608 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="81a009ad-1a05-40c1-9c75-7a559592eadf" containerName="mariadb-account-create-update" Dec 11 14:09:23 crc kubenswrapper[5050]: E1211 14:09:23.769624 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee81095a-fe79-47c7-aa3e-e1768a655b86" containerName="mariadb-account-create-update" Dec 11 14:09:23 crc kubenswrapper[5050]: I1211 14:09:23.769630 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee81095a-fe79-47c7-aa3e-e1768a655b86" containerName="mariadb-account-create-update" Dec 11 14:09:23 crc kubenswrapper[5050]: E1211 14:09:23.769655 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a565dde-2487-4769-8ded-f12f854974a3" containerName="sg-core" Dec 11 14:09:23 crc kubenswrapper[5050]: I1211 14:09:23.769664 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a565dde-2487-4769-8ded-f12f854974a3" containerName="sg-core" Dec 11 14:09:23 crc kubenswrapper[5050]: E1211 14:09:23.769675 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4130a67a-7d8d-4eff-b6ea-be9f43992443" containerName="mariadb-database-create" Dec 11 14:09:23 crc kubenswrapper[5050]: I1211 14:09:23.769681 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="4130a67a-7d8d-4eff-b6ea-be9f43992443" containerName="mariadb-database-create" Dec 11 14:09:23 crc kubenswrapper[5050]: E1211 14:09:23.769692 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3f359da-3978-4220-91da-28b53f4cf109" containerName="mariadb-database-create" Dec 11 14:09:23 crc kubenswrapper[5050]: I1211 14:09:23.769697 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3f359da-3978-4220-91da-28b53f4cf109" containerName="mariadb-database-create" Dec 11 14:09:23 crc kubenswrapper[5050]: E1211 14:09:23.769706 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a565dde-2487-4769-8ded-f12f854974a3" containerName="ceilometer-central-agent" Dec 11 
14:09:23 crc kubenswrapper[5050]: I1211 14:09:23.769712 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a565dde-2487-4769-8ded-f12f854974a3" containerName="ceilometer-central-agent" Dec 11 14:09:23 crc kubenswrapper[5050]: I1211 14:09:23.769892 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3f359da-3978-4220-91da-28b53f4cf109" containerName="mariadb-database-create" Dec 11 14:09:23 crc kubenswrapper[5050]: I1211 14:09:23.769905 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="4130a67a-7d8d-4eff-b6ea-be9f43992443" containerName="mariadb-database-create" Dec 11 14:09:23 crc kubenswrapper[5050]: I1211 14:09:23.769915 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a565dde-2487-4769-8ded-f12f854974a3" containerName="ceilometer-central-agent" Dec 11 14:09:23 crc kubenswrapper[5050]: I1211 14:09:23.769924 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a565dde-2487-4769-8ded-f12f854974a3" containerName="sg-core" Dec 11 14:09:23 crc kubenswrapper[5050]: I1211 14:09:23.769933 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a565dde-2487-4769-8ded-f12f854974a3" containerName="ceilometer-notification-agent" Dec 11 14:09:23 crc kubenswrapper[5050]: I1211 14:09:23.769944 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a565dde-2487-4769-8ded-f12f854974a3" containerName="proxy-httpd" Dec 11 14:09:23 crc kubenswrapper[5050]: I1211 14:09:23.769960 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee81095a-fe79-47c7-aa3e-e1768a655b86" containerName="mariadb-account-create-update" Dec 11 14:09:23 crc kubenswrapper[5050]: I1211 14:09:23.769970 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="81a009ad-1a05-40c1-9c75-7a559592eadf" containerName="mariadb-account-create-update" Dec 11 14:09:23 crc kubenswrapper[5050]: I1211 14:09:23.769979 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="097e2c08-b7fb-4d21-8e0b-efdb0ac0c9a6" containerName="mariadb-account-create-update" Dec 11 14:09:23 crc kubenswrapper[5050]: I1211 14:09:23.769988 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="4affe74d-e417-48c1-9c71-7cca7d0729db" containerName="mariadb-database-create" Dec 11 14:09:23 crc kubenswrapper[5050]: I1211 14:09:23.773085 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 11 14:09:23 crc kubenswrapper[5050]: I1211 14:09:23.774426 5050 scope.go:117] "RemoveContainer" containerID="446a31343fe3580f1fcbb0e6eadb32cca066114d34271bddbb6b3e3fe090b7c6" Dec 11 14:09:23 crc kubenswrapper[5050]: I1211 14:09:23.779230 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 11 14:09:23 crc kubenswrapper[5050]: I1211 14:09:23.802953 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 11 14:09:23 crc kubenswrapper[5050]: I1211 14:09:23.803215 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 11 14:09:23 crc kubenswrapper[5050]: I1211 14:09:23.830302 5050 scope.go:117] "RemoveContainer" containerID="d9e9e4869e8babe6006b7fb53f05249086077270aef055ad2257d108e07ab995" Dec 11 14:09:23 crc kubenswrapper[5050]: E1211 14:09:23.834880 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d9e9e4869e8babe6006b7fb53f05249086077270aef055ad2257d108e07ab995\": container with ID starting with d9e9e4869e8babe6006b7fb53f05249086077270aef055ad2257d108e07ab995 not found: ID does not exist" containerID="d9e9e4869e8babe6006b7fb53f05249086077270aef055ad2257d108e07ab995" Dec 11 14:09:23 crc kubenswrapper[5050]: I1211 14:09:23.834921 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d9e9e4869e8babe6006b7fb53f05249086077270aef055ad2257d108e07ab995"} err="failed to get container status \"d9e9e4869e8babe6006b7fb53f05249086077270aef055ad2257d108e07ab995\": rpc error: code = NotFound desc = could not find container \"d9e9e4869e8babe6006b7fb53f05249086077270aef055ad2257d108e07ab995\": container with ID starting with d9e9e4869e8babe6006b7fb53f05249086077270aef055ad2257d108e07ab995 not found: ID does not exist" Dec 11 14:09:23 crc kubenswrapper[5050]: I1211 14:09:23.834946 5050 scope.go:117] "RemoveContainer" containerID="cd238a0ca1f59437094b058cc5204c39ad43032bad56669c8079a6896153cf65" Dec 11 14:09:23 crc kubenswrapper[5050]: E1211 14:09:23.835239 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cd238a0ca1f59437094b058cc5204c39ad43032bad56669c8079a6896153cf65\": container with ID starting with cd238a0ca1f59437094b058cc5204c39ad43032bad56669c8079a6896153cf65 not found: ID does not exist" containerID="cd238a0ca1f59437094b058cc5204c39ad43032bad56669c8079a6896153cf65" Dec 11 14:09:23 crc kubenswrapper[5050]: I1211 14:09:23.835258 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd238a0ca1f59437094b058cc5204c39ad43032bad56669c8079a6896153cf65"} err="failed to get container status \"cd238a0ca1f59437094b058cc5204c39ad43032bad56669c8079a6896153cf65\": rpc error: code = NotFound desc = could not find container \"cd238a0ca1f59437094b058cc5204c39ad43032bad56669c8079a6896153cf65\": container with ID starting with cd238a0ca1f59437094b058cc5204c39ad43032bad56669c8079a6896153cf65 not found: ID does not exist" Dec 11 14:09:23 crc kubenswrapper[5050]: I1211 14:09:23.835273 5050 scope.go:117] "RemoveContainer" containerID="b1e5c13149d09a48020f93335a1f0829a6e8a80b81e0096094b2ea84ebde3f1c" Dec 11 14:09:23 crc kubenswrapper[5050]: E1211 14:09:23.835521 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"b1e5c13149d09a48020f93335a1f0829a6e8a80b81e0096094b2ea84ebde3f1c\": container with ID starting with b1e5c13149d09a48020f93335a1f0829a6e8a80b81e0096094b2ea84ebde3f1c not found: ID does not exist" containerID="b1e5c13149d09a48020f93335a1f0829a6e8a80b81e0096094b2ea84ebde3f1c" Dec 11 14:09:23 crc kubenswrapper[5050]: I1211 14:09:23.835542 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b1e5c13149d09a48020f93335a1f0829a6e8a80b81e0096094b2ea84ebde3f1c"} err="failed to get container status \"b1e5c13149d09a48020f93335a1f0829a6e8a80b81e0096094b2ea84ebde3f1c\": rpc error: code = NotFound desc = could not find container \"b1e5c13149d09a48020f93335a1f0829a6e8a80b81e0096094b2ea84ebde3f1c\": container with ID starting with b1e5c13149d09a48020f93335a1f0829a6e8a80b81e0096094b2ea84ebde3f1c not found: ID does not exist" Dec 11 14:09:23 crc kubenswrapper[5050]: I1211 14:09:23.835555 5050 scope.go:117] "RemoveContainer" containerID="446a31343fe3580f1fcbb0e6eadb32cca066114d34271bddbb6b3e3fe090b7c6" Dec 11 14:09:23 crc kubenswrapper[5050]: E1211 14:09:23.835718 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"446a31343fe3580f1fcbb0e6eadb32cca066114d34271bddbb6b3e3fe090b7c6\": container with ID starting with 446a31343fe3580f1fcbb0e6eadb32cca066114d34271bddbb6b3e3fe090b7c6 not found: ID does not exist" containerID="446a31343fe3580f1fcbb0e6eadb32cca066114d34271bddbb6b3e3fe090b7c6" Dec 11 14:09:23 crc kubenswrapper[5050]: I1211 14:09:23.835737 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"446a31343fe3580f1fcbb0e6eadb32cca066114d34271bddbb6b3e3fe090b7c6"} err="failed to get container status \"446a31343fe3580f1fcbb0e6eadb32cca066114d34271bddbb6b3e3fe090b7c6\": rpc error: code = NotFound desc = could not find container \"446a31343fe3580f1fcbb0e6eadb32cca066114d34271bddbb6b3e3fe090b7c6\": container with ID starting with 446a31343fe3580f1fcbb0e6eadb32cca066114d34271bddbb6b3e3fe090b7c6 not found: ID does not exist" Dec 11 14:09:23 crc kubenswrapper[5050]: I1211 14:09:23.835749 5050 scope.go:117] "RemoveContainer" containerID="d9e9e4869e8babe6006b7fb53f05249086077270aef055ad2257d108e07ab995" Dec 11 14:09:23 crc kubenswrapper[5050]: I1211 14:09:23.835910 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d9e9e4869e8babe6006b7fb53f05249086077270aef055ad2257d108e07ab995"} err="failed to get container status \"d9e9e4869e8babe6006b7fb53f05249086077270aef055ad2257d108e07ab995\": rpc error: code = NotFound desc = could not find container \"d9e9e4869e8babe6006b7fb53f05249086077270aef055ad2257d108e07ab995\": container with ID starting with d9e9e4869e8babe6006b7fb53f05249086077270aef055ad2257d108e07ab995 not found: ID does not exist" Dec 11 14:09:23 crc kubenswrapper[5050]: I1211 14:09:23.835933 5050 scope.go:117] "RemoveContainer" containerID="cd238a0ca1f59437094b058cc5204c39ad43032bad56669c8079a6896153cf65" Dec 11 14:09:23 crc kubenswrapper[5050]: I1211 14:09:23.836100 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd238a0ca1f59437094b058cc5204c39ad43032bad56669c8079a6896153cf65"} err="failed to get container status \"cd238a0ca1f59437094b058cc5204c39ad43032bad56669c8079a6896153cf65\": rpc error: code = NotFound desc = could not find container \"cd238a0ca1f59437094b058cc5204c39ad43032bad56669c8079a6896153cf65\": container with ID starting with 
cd238a0ca1f59437094b058cc5204c39ad43032bad56669c8079a6896153cf65 not found: ID does not exist" Dec 11 14:09:23 crc kubenswrapper[5050]: I1211 14:09:23.836112 5050 scope.go:117] "RemoveContainer" containerID="b1e5c13149d09a48020f93335a1f0829a6e8a80b81e0096094b2ea84ebde3f1c" Dec 11 14:09:23 crc kubenswrapper[5050]: I1211 14:09:23.836252 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b1e5c13149d09a48020f93335a1f0829a6e8a80b81e0096094b2ea84ebde3f1c"} err="failed to get container status \"b1e5c13149d09a48020f93335a1f0829a6e8a80b81e0096094b2ea84ebde3f1c\": rpc error: code = NotFound desc = could not find container \"b1e5c13149d09a48020f93335a1f0829a6e8a80b81e0096094b2ea84ebde3f1c\": container with ID starting with b1e5c13149d09a48020f93335a1f0829a6e8a80b81e0096094b2ea84ebde3f1c not found: ID does not exist" Dec 11 14:09:23 crc kubenswrapper[5050]: I1211 14:09:23.836265 5050 scope.go:117] "RemoveContainer" containerID="446a31343fe3580f1fcbb0e6eadb32cca066114d34271bddbb6b3e3fe090b7c6" Dec 11 14:09:23 crc kubenswrapper[5050]: I1211 14:09:23.836407 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"446a31343fe3580f1fcbb0e6eadb32cca066114d34271bddbb6b3e3fe090b7c6"} err="failed to get container status \"446a31343fe3580f1fcbb0e6eadb32cca066114d34271bddbb6b3e3fe090b7c6\": rpc error: code = NotFound desc = could not find container \"446a31343fe3580f1fcbb0e6eadb32cca066114d34271bddbb6b3e3fe090b7c6\": container with ID starting with 446a31343fe3580f1fcbb0e6eadb32cca066114d34271bddbb6b3e3fe090b7c6 not found: ID does not exist" Dec 11 14:09:23 crc kubenswrapper[5050]: I1211 14:09:23.866173 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2b38c77f-3e0b-4025-be68-7f36f501dbc7-scripts\") pod \"ceilometer-0\" (UID: \"2b38c77f-3e0b-4025-be68-7f36f501dbc7\") " pod="openstack/ceilometer-0" Dec 11 14:09:23 crc kubenswrapper[5050]: I1211 14:09:23.866258 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b38c77f-3e0b-4025-be68-7f36f501dbc7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2b38c77f-3e0b-4025-be68-7f36f501dbc7\") " pod="openstack/ceilometer-0" Dec 11 14:09:23 crc kubenswrapper[5050]: I1211 14:09:23.866328 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2b38c77f-3e0b-4025-be68-7f36f501dbc7-run-httpd\") pod \"ceilometer-0\" (UID: \"2b38c77f-3e0b-4025-be68-7f36f501dbc7\") " pod="openstack/ceilometer-0" Dec 11 14:09:23 crc kubenswrapper[5050]: I1211 14:09:23.866352 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvvrj\" (UniqueName: \"kubernetes.io/projected/2b38c77f-3e0b-4025-be68-7f36f501dbc7-kube-api-access-kvvrj\") pod \"ceilometer-0\" (UID: \"2b38c77f-3e0b-4025-be68-7f36f501dbc7\") " pod="openstack/ceilometer-0" Dec 11 14:09:23 crc kubenswrapper[5050]: I1211 14:09:23.866551 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2b38c77f-3e0b-4025-be68-7f36f501dbc7-log-httpd\") pod \"ceilometer-0\" (UID: \"2b38c77f-3e0b-4025-be68-7f36f501dbc7\") " pod="openstack/ceilometer-0" Dec 11 14:09:23 crc kubenswrapper[5050]: I1211 
14:09:23.866861 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2b38c77f-3e0b-4025-be68-7f36f501dbc7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2b38c77f-3e0b-4025-be68-7f36f501dbc7\") " pod="openstack/ceilometer-0" Dec 11 14:09:23 crc kubenswrapper[5050]: I1211 14:09:23.867024 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b38c77f-3e0b-4025-be68-7f36f501dbc7-config-data\") pod \"ceilometer-0\" (UID: \"2b38c77f-3e0b-4025-be68-7f36f501dbc7\") " pod="openstack/ceilometer-0" Dec 11 14:09:23 crc kubenswrapper[5050]: I1211 14:09:23.968934 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2b38c77f-3e0b-4025-be68-7f36f501dbc7-scripts\") pod \"ceilometer-0\" (UID: \"2b38c77f-3e0b-4025-be68-7f36f501dbc7\") " pod="openstack/ceilometer-0" Dec 11 14:09:23 crc kubenswrapper[5050]: I1211 14:09:23.969061 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b38c77f-3e0b-4025-be68-7f36f501dbc7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2b38c77f-3e0b-4025-be68-7f36f501dbc7\") " pod="openstack/ceilometer-0" Dec 11 14:09:23 crc kubenswrapper[5050]: I1211 14:09:23.969125 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2b38c77f-3e0b-4025-be68-7f36f501dbc7-run-httpd\") pod \"ceilometer-0\" (UID: \"2b38c77f-3e0b-4025-be68-7f36f501dbc7\") " pod="openstack/ceilometer-0" Dec 11 14:09:23 crc kubenswrapper[5050]: I1211 14:09:23.969144 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kvvrj\" (UniqueName: \"kubernetes.io/projected/2b38c77f-3e0b-4025-be68-7f36f501dbc7-kube-api-access-kvvrj\") pod \"ceilometer-0\" (UID: \"2b38c77f-3e0b-4025-be68-7f36f501dbc7\") " pod="openstack/ceilometer-0" Dec 11 14:09:23 crc kubenswrapper[5050]: I1211 14:09:23.969165 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2b38c77f-3e0b-4025-be68-7f36f501dbc7-log-httpd\") pod \"ceilometer-0\" (UID: \"2b38c77f-3e0b-4025-be68-7f36f501dbc7\") " pod="openstack/ceilometer-0" Dec 11 14:09:23 crc kubenswrapper[5050]: I1211 14:09:23.969527 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2b38c77f-3e0b-4025-be68-7f36f501dbc7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2b38c77f-3e0b-4025-be68-7f36f501dbc7\") " pod="openstack/ceilometer-0" Dec 11 14:09:23 crc kubenswrapper[5050]: I1211 14:09:23.969576 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b38c77f-3e0b-4025-be68-7f36f501dbc7-config-data\") pod \"ceilometer-0\" (UID: \"2b38c77f-3e0b-4025-be68-7f36f501dbc7\") " pod="openstack/ceilometer-0" Dec 11 14:09:23 crc kubenswrapper[5050]: I1211 14:09:23.969967 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2b38c77f-3e0b-4025-be68-7f36f501dbc7-run-httpd\") pod \"ceilometer-0\" (UID: \"2b38c77f-3e0b-4025-be68-7f36f501dbc7\") " pod="openstack/ceilometer-0" Dec 11 14:09:23 crc kubenswrapper[5050]: 
I1211 14:09:23.970822 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2b38c77f-3e0b-4025-be68-7f36f501dbc7-log-httpd\") pod \"ceilometer-0\" (UID: \"2b38c77f-3e0b-4025-be68-7f36f501dbc7\") " pod="openstack/ceilometer-0" Dec 11 14:09:23 crc kubenswrapper[5050]: I1211 14:09:23.975754 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2b38c77f-3e0b-4025-be68-7f36f501dbc7-scripts\") pod \"ceilometer-0\" (UID: \"2b38c77f-3e0b-4025-be68-7f36f501dbc7\") " pod="openstack/ceilometer-0" Dec 11 14:09:23 crc kubenswrapper[5050]: I1211 14:09:23.985346 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b38c77f-3e0b-4025-be68-7f36f501dbc7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2b38c77f-3e0b-4025-be68-7f36f501dbc7\") " pod="openstack/ceilometer-0" Dec 11 14:09:23 crc kubenswrapper[5050]: I1211 14:09:23.988308 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2b38c77f-3e0b-4025-be68-7f36f501dbc7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2b38c77f-3e0b-4025-be68-7f36f501dbc7\") " pod="openstack/ceilometer-0" Dec 11 14:09:23 crc kubenswrapper[5050]: I1211 14:09:23.988383 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b38c77f-3e0b-4025-be68-7f36f501dbc7-config-data\") pod \"ceilometer-0\" (UID: \"2b38c77f-3e0b-4025-be68-7f36f501dbc7\") " pod="openstack/ceilometer-0" Dec 11 14:09:23 crc kubenswrapper[5050]: I1211 14:09:23.988698 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kvvrj\" (UniqueName: \"kubernetes.io/projected/2b38c77f-3e0b-4025-be68-7f36f501dbc7-kube-api-access-kvvrj\") pod \"ceilometer-0\" (UID: \"2b38c77f-3e0b-4025-be68-7f36f501dbc7\") " pod="openstack/ceilometer-0" Dec 11 14:09:24 crc kubenswrapper[5050]: I1211 14:09:24.129024 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 11 14:09:24 crc kubenswrapper[5050]: I1211 14:09:24.639273 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 11 14:09:24 crc kubenswrapper[5050]: I1211 14:09:24.676565 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2b38c77f-3e0b-4025-be68-7f36f501dbc7","Type":"ContainerStarted","Data":"fbb2b3c74701959b54490a2bb7b3eb67d88365688353a338fc8434c0a7582dc2"} Dec 11 14:09:25 crc kubenswrapper[5050]: I1211 14:09:25.564509 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a565dde-2487-4769-8ded-f12f854974a3" path="/var/lib/kubelet/pods/0a565dde-2487-4769-8ded-f12f854974a3/volumes" Dec 11 14:09:25 crc kubenswrapper[5050]: I1211 14:09:25.570300 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-w884k"] Dec 11 14:09:25 crc kubenswrapper[5050]: I1211 14:09:25.572171 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-w884k" Dec 11 14:09:25 crc kubenswrapper[5050]: I1211 14:09:25.576708 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Dec 11 14:09:25 crc kubenswrapper[5050]: I1211 14:09:25.577087 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Dec 11 14:09:25 crc kubenswrapper[5050]: I1211 14:09:25.577333 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-kws6c" Dec 11 14:09:25 crc kubenswrapper[5050]: I1211 14:09:25.592941 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-w884k"] Dec 11 14:09:25 crc kubenswrapper[5050]: I1211 14:09:25.718800 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h82kj\" (UniqueName: \"kubernetes.io/projected/a3f691ef-0109-459b-bbb9-eb08838d3dd0-kube-api-access-h82kj\") pod \"nova-cell0-conductor-db-sync-w884k\" (UID: \"a3f691ef-0109-459b-bbb9-eb08838d3dd0\") " pod="openstack/nova-cell0-conductor-db-sync-w884k" Dec 11 14:09:25 crc kubenswrapper[5050]: I1211 14:09:25.718879 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a3f691ef-0109-459b-bbb9-eb08838d3dd0-scripts\") pod \"nova-cell0-conductor-db-sync-w884k\" (UID: \"a3f691ef-0109-459b-bbb9-eb08838d3dd0\") " pod="openstack/nova-cell0-conductor-db-sync-w884k" Dec 11 14:09:25 crc kubenswrapper[5050]: I1211 14:09:25.718910 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3f691ef-0109-459b-bbb9-eb08838d3dd0-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-w884k\" (UID: \"a3f691ef-0109-459b-bbb9-eb08838d3dd0\") " pod="openstack/nova-cell0-conductor-db-sync-w884k" Dec 11 14:09:25 crc kubenswrapper[5050]: I1211 14:09:25.718976 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3f691ef-0109-459b-bbb9-eb08838d3dd0-config-data\") pod \"nova-cell0-conductor-db-sync-w884k\" (UID: \"a3f691ef-0109-459b-bbb9-eb08838d3dd0\") " pod="openstack/nova-cell0-conductor-db-sync-w884k" Dec 11 14:09:25 crc kubenswrapper[5050]: I1211 14:09:25.784643 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 11 14:09:25 crc kubenswrapper[5050]: I1211 14:09:25.820900 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3f691ef-0109-459b-bbb9-eb08838d3dd0-config-data\") pod \"nova-cell0-conductor-db-sync-w884k\" (UID: \"a3f691ef-0109-459b-bbb9-eb08838d3dd0\") " pod="openstack/nova-cell0-conductor-db-sync-w884k" Dec 11 14:09:25 crc kubenswrapper[5050]: I1211 14:09:25.821308 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h82kj\" (UniqueName: \"kubernetes.io/projected/a3f691ef-0109-459b-bbb9-eb08838d3dd0-kube-api-access-h82kj\") pod \"nova-cell0-conductor-db-sync-w884k\" (UID: \"a3f691ef-0109-459b-bbb9-eb08838d3dd0\") " pod="openstack/nova-cell0-conductor-db-sync-w884k" Dec 11 14:09:25 crc kubenswrapper[5050]: I1211 14:09:25.821445 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/a3f691ef-0109-459b-bbb9-eb08838d3dd0-scripts\") pod \"nova-cell0-conductor-db-sync-w884k\" (UID: \"a3f691ef-0109-459b-bbb9-eb08838d3dd0\") " pod="openstack/nova-cell0-conductor-db-sync-w884k" Dec 11 14:09:25 crc kubenswrapper[5050]: I1211 14:09:25.821568 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3f691ef-0109-459b-bbb9-eb08838d3dd0-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-w884k\" (UID: \"a3f691ef-0109-459b-bbb9-eb08838d3dd0\") " pod="openstack/nova-cell0-conductor-db-sync-w884k" Dec 11 14:09:25 crc kubenswrapper[5050]: I1211 14:09:25.828827 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3f691ef-0109-459b-bbb9-eb08838d3dd0-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-w884k\" (UID: \"a3f691ef-0109-459b-bbb9-eb08838d3dd0\") " pod="openstack/nova-cell0-conductor-db-sync-w884k" Dec 11 14:09:25 crc kubenswrapper[5050]: I1211 14:09:25.830802 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3f691ef-0109-459b-bbb9-eb08838d3dd0-config-data\") pod \"nova-cell0-conductor-db-sync-w884k\" (UID: \"a3f691ef-0109-459b-bbb9-eb08838d3dd0\") " pod="openstack/nova-cell0-conductor-db-sync-w884k" Dec 11 14:09:25 crc kubenswrapper[5050]: I1211 14:09:25.835657 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a3f691ef-0109-459b-bbb9-eb08838d3dd0-scripts\") pod \"nova-cell0-conductor-db-sync-w884k\" (UID: \"a3f691ef-0109-459b-bbb9-eb08838d3dd0\") " pod="openstack/nova-cell0-conductor-db-sync-w884k" Dec 11 14:09:25 crc kubenswrapper[5050]: I1211 14:09:25.845637 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h82kj\" (UniqueName: \"kubernetes.io/projected/a3f691ef-0109-459b-bbb9-eb08838d3dd0-kube-api-access-h82kj\") pod \"nova-cell0-conductor-db-sync-w884k\" (UID: \"a3f691ef-0109-459b-bbb9-eb08838d3dd0\") " pod="openstack/nova-cell0-conductor-db-sync-w884k" Dec 11 14:09:25 crc kubenswrapper[5050]: I1211 14:09:25.968095 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-w884k" Dec 11 14:09:26 crc kubenswrapper[5050]: I1211 14:09:26.587305 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-w884k"] Dec 11 14:09:26 crc kubenswrapper[5050]: I1211 14:09:26.706718 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-w884k" event={"ID":"a3f691ef-0109-459b-bbb9-eb08838d3dd0","Type":"ContainerStarted","Data":"92bea3d33ffa9bc0efc0c48a3de1e6fdb5d6bb5794c1c6b9d79b0207a31954a8"} Dec 11 14:09:26 crc kubenswrapper[5050]: I1211 14:09:26.708190 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2b38c77f-3e0b-4025-be68-7f36f501dbc7","Type":"ContainerStarted","Data":"a9c34fc155dd7b9bd45d3a0c8f67086610cccd4efb7f91abfac5988d70c7dbb7"} Dec 11 14:09:27 crc kubenswrapper[5050]: I1211 14:09:27.728697 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2b38c77f-3e0b-4025-be68-7f36f501dbc7","Type":"ContainerStarted","Data":"dd87005a2ab4fae4e858dfd7bb26304f806e5d0ac9d0dee7ead545656d44e2ce"} Dec 11 14:09:27 crc kubenswrapper[5050]: I1211 14:09:27.729072 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2b38c77f-3e0b-4025-be68-7f36f501dbc7","Type":"ContainerStarted","Data":"ff0ec19e8712cde85c153b783577e76c91bcedb05308a446b1bef3bb1ff2e389"} Dec 11 14:09:29 crc kubenswrapper[5050]: I1211 14:09:29.784536 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2b38c77f-3e0b-4025-be68-7f36f501dbc7","Type":"ContainerStarted","Data":"765f8289b92b84048e64f85cc04fb4455558faef5e6e20f4a07d1818c36599b1"} Dec 11 14:09:29 crc kubenswrapper[5050]: I1211 14:09:29.784763 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2b38c77f-3e0b-4025-be68-7f36f501dbc7" containerName="ceilometer-central-agent" containerID="cri-o://a9c34fc155dd7b9bd45d3a0c8f67086610cccd4efb7f91abfac5988d70c7dbb7" gracePeriod=30 Dec 11 14:09:29 crc kubenswrapper[5050]: I1211 14:09:29.785145 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2b38c77f-3e0b-4025-be68-7f36f501dbc7" containerName="ceilometer-notification-agent" containerID="cri-o://ff0ec19e8712cde85c153b783577e76c91bcedb05308a446b1bef3bb1ff2e389" gracePeriod=30 Dec 11 14:09:29 crc kubenswrapper[5050]: I1211 14:09:29.785352 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2b38c77f-3e0b-4025-be68-7f36f501dbc7" containerName="sg-core" containerID="cri-o://dd87005a2ab4fae4e858dfd7bb26304f806e5d0ac9d0dee7ead545656d44e2ce" gracePeriod=30 Dec 11 14:09:29 crc kubenswrapper[5050]: I1211 14:09:29.785770 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 11 14:09:29 crc kubenswrapper[5050]: I1211 14:09:29.786632 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2b38c77f-3e0b-4025-be68-7f36f501dbc7" containerName="proxy-httpd" containerID="cri-o://765f8289b92b84048e64f85cc04fb4455558faef5e6e20f4a07d1818c36599b1" gracePeriod=30 Dec 11 14:09:29 crc kubenswrapper[5050]: I1211 14:09:29.819762 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.810036216 podStartE2EDuration="6.819732904s" 
podCreationTimestamp="2025-12-11 14:09:23 +0000 UTC" firstStartedPulling="2025-12-11 14:09:24.644101756 +0000 UTC m=+1255.487824342" lastFinishedPulling="2025-12-11 14:09:28.653798444 +0000 UTC m=+1259.497521030" observedRunningTime="2025-12-11 14:09:29.814897624 +0000 UTC m=+1260.658620230" watchObservedRunningTime="2025-12-11 14:09:29.819732904 +0000 UTC m=+1260.663455490" Dec 11 14:09:30 crc kubenswrapper[5050]: I1211 14:09:30.803530 5050 generic.go:334] "Generic (PLEG): container finished" podID="2b38c77f-3e0b-4025-be68-7f36f501dbc7" containerID="765f8289b92b84048e64f85cc04fb4455558faef5e6e20f4a07d1818c36599b1" exitCode=0 Dec 11 14:09:30 crc kubenswrapper[5050]: I1211 14:09:30.803847 5050 generic.go:334] "Generic (PLEG): container finished" podID="2b38c77f-3e0b-4025-be68-7f36f501dbc7" containerID="dd87005a2ab4fae4e858dfd7bb26304f806e5d0ac9d0dee7ead545656d44e2ce" exitCode=2 Dec 11 14:09:30 crc kubenswrapper[5050]: I1211 14:09:30.803615 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2b38c77f-3e0b-4025-be68-7f36f501dbc7","Type":"ContainerDied","Data":"765f8289b92b84048e64f85cc04fb4455558faef5e6e20f4a07d1818c36599b1"} Dec 11 14:09:30 crc kubenswrapper[5050]: I1211 14:09:30.803897 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2b38c77f-3e0b-4025-be68-7f36f501dbc7","Type":"ContainerDied","Data":"dd87005a2ab4fae4e858dfd7bb26304f806e5d0ac9d0dee7ead545656d44e2ce"} Dec 11 14:09:31 crc kubenswrapper[5050]: I1211 14:09:31.819394 5050 generic.go:334] "Generic (PLEG): container finished" podID="2b38c77f-3e0b-4025-be68-7f36f501dbc7" containerID="ff0ec19e8712cde85c153b783577e76c91bcedb05308a446b1bef3bb1ff2e389" exitCode=0 Dec 11 14:09:31 crc kubenswrapper[5050]: I1211 14:09:31.819543 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2b38c77f-3e0b-4025-be68-7f36f501dbc7","Type":"ContainerDied","Data":"ff0ec19e8712cde85c153b783577e76c91bcedb05308a446b1bef3bb1ff2e389"} Dec 11 14:09:38 crc kubenswrapper[5050]: I1211 14:09:38.914000 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-w884k" event={"ID":"a3f691ef-0109-459b-bbb9-eb08838d3dd0","Type":"ContainerStarted","Data":"a90ab157b3c623d184d49b39ff73cc98df65330afdc42c004c5a9becbab50b27"} Dec 11 14:09:38 crc kubenswrapper[5050]: I1211 14:09:38.925217 5050 generic.go:334] "Generic (PLEG): container finished" podID="2b38c77f-3e0b-4025-be68-7f36f501dbc7" containerID="a9c34fc155dd7b9bd45d3a0c8f67086610cccd4efb7f91abfac5988d70c7dbb7" exitCode=0 Dec 11 14:09:38 crc kubenswrapper[5050]: I1211 14:09:38.925290 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2b38c77f-3e0b-4025-be68-7f36f501dbc7","Type":"ContainerDied","Data":"a9c34fc155dd7b9bd45d3a0c8f67086610cccd4efb7f91abfac5988d70c7dbb7"} Dec 11 14:09:38 crc kubenswrapper[5050]: I1211 14:09:38.944812 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-w884k" podStartSLOduration=2.580380188 podStartE2EDuration="13.944752615s" podCreationTimestamp="2025-12-11 14:09:25 +0000 UTC" firstStartedPulling="2025-12-11 14:09:26.603661236 +0000 UTC m=+1257.447383832" lastFinishedPulling="2025-12-11 14:09:37.968033673 +0000 UTC m=+1268.811756259" observedRunningTime="2025-12-11 14:09:38.936710989 +0000 UTC m=+1269.780433575" watchObservedRunningTime="2025-12-11 14:09:38.944752615 +0000 UTC m=+1269.788475201" Dec 11 
14:09:39 crc kubenswrapper[5050]: I1211 14:09:39.103209 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 11 14:09:39 crc kubenswrapper[5050]: I1211 14:09:39.301398 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b38c77f-3e0b-4025-be68-7f36f501dbc7-combined-ca-bundle\") pod \"2b38c77f-3e0b-4025-be68-7f36f501dbc7\" (UID: \"2b38c77f-3e0b-4025-be68-7f36f501dbc7\") " Dec 11 14:09:39 crc kubenswrapper[5050]: I1211 14:09:39.301767 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2b38c77f-3e0b-4025-be68-7f36f501dbc7-log-httpd\") pod \"2b38c77f-3e0b-4025-be68-7f36f501dbc7\" (UID: \"2b38c77f-3e0b-4025-be68-7f36f501dbc7\") " Dec 11 14:09:39 crc kubenswrapper[5050]: I1211 14:09:39.301875 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b38c77f-3e0b-4025-be68-7f36f501dbc7-config-data\") pod \"2b38c77f-3e0b-4025-be68-7f36f501dbc7\" (UID: \"2b38c77f-3e0b-4025-be68-7f36f501dbc7\") " Dec 11 14:09:39 crc kubenswrapper[5050]: I1211 14:09:39.302074 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2b38c77f-3e0b-4025-be68-7f36f501dbc7-scripts\") pod \"2b38c77f-3e0b-4025-be68-7f36f501dbc7\" (UID: \"2b38c77f-3e0b-4025-be68-7f36f501dbc7\") " Dec 11 14:09:39 crc kubenswrapper[5050]: I1211 14:09:39.302235 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2b38c77f-3e0b-4025-be68-7f36f501dbc7-run-httpd\") pod \"2b38c77f-3e0b-4025-be68-7f36f501dbc7\" (UID: \"2b38c77f-3e0b-4025-be68-7f36f501dbc7\") " Dec 11 14:09:39 crc kubenswrapper[5050]: I1211 14:09:39.302313 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2b38c77f-3e0b-4025-be68-7f36f501dbc7-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "2b38c77f-3e0b-4025-be68-7f36f501dbc7" (UID: "2b38c77f-3e0b-4025-be68-7f36f501dbc7"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 14:09:39 crc kubenswrapper[5050]: I1211 14:09:39.302443 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2b38c77f-3e0b-4025-be68-7f36f501dbc7-sg-core-conf-yaml\") pod \"2b38c77f-3e0b-4025-be68-7f36f501dbc7\" (UID: \"2b38c77f-3e0b-4025-be68-7f36f501dbc7\") " Dec 11 14:09:39 crc kubenswrapper[5050]: I1211 14:09:39.302652 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kvvrj\" (UniqueName: \"kubernetes.io/projected/2b38c77f-3e0b-4025-be68-7f36f501dbc7-kube-api-access-kvvrj\") pod \"2b38c77f-3e0b-4025-be68-7f36f501dbc7\" (UID: \"2b38c77f-3e0b-4025-be68-7f36f501dbc7\") " Dec 11 14:09:39 crc kubenswrapper[5050]: I1211 14:09:39.302679 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2b38c77f-3e0b-4025-be68-7f36f501dbc7-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "2b38c77f-3e0b-4025-be68-7f36f501dbc7" (UID: "2b38c77f-3e0b-4025-be68-7f36f501dbc7"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 14:09:39 crc kubenswrapper[5050]: I1211 14:09:39.303386 5050 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2b38c77f-3e0b-4025-be68-7f36f501dbc7-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 11 14:09:39 crc kubenswrapper[5050]: I1211 14:09:39.303486 5050 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2b38c77f-3e0b-4025-be68-7f36f501dbc7-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 11 14:09:39 crc kubenswrapper[5050]: I1211 14:09:39.309718 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b38c77f-3e0b-4025-be68-7f36f501dbc7-scripts" (OuterVolumeSpecName: "scripts") pod "2b38c77f-3e0b-4025-be68-7f36f501dbc7" (UID: "2b38c77f-3e0b-4025-be68-7f36f501dbc7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:09:39 crc kubenswrapper[5050]: I1211 14:09:39.311300 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b38c77f-3e0b-4025-be68-7f36f501dbc7-kube-api-access-kvvrj" (OuterVolumeSpecName: "kube-api-access-kvvrj") pod "2b38c77f-3e0b-4025-be68-7f36f501dbc7" (UID: "2b38c77f-3e0b-4025-be68-7f36f501dbc7"). InnerVolumeSpecName "kube-api-access-kvvrj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 14:09:39 crc kubenswrapper[5050]: I1211 14:09:39.333455 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b38c77f-3e0b-4025-be68-7f36f501dbc7-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "2b38c77f-3e0b-4025-be68-7f36f501dbc7" (UID: "2b38c77f-3e0b-4025-be68-7f36f501dbc7"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:09:39 crc kubenswrapper[5050]: I1211 14:09:39.388745 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b38c77f-3e0b-4025-be68-7f36f501dbc7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2b38c77f-3e0b-4025-be68-7f36f501dbc7" (UID: "2b38c77f-3e0b-4025-be68-7f36f501dbc7"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:09:39 crc kubenswrapper[5050]: I1211 14:09:39.405738 5050 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2b38c77f-3e0b-4025-be68-7f36f501dbc7-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 11 14:09:39 crc kubenswrapper[5050]: I1211 14:09:39.406057 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kvvrj\" (UniqueName: \"kubernetes.io/projected/2b38c77f-3e0b-4025-be68-7f36f501dbc7-kube-api-access-kvvrj\") on node \"crc\" DevicePath \"\"" Dec 11 14:09:39 crc kubenswrapper[5050]: I1211 14:09:39.406190 5050 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b38c77f-3e0b-4025-be68-7f36f501dbc7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 14:09:39 crc kubenswrapper[5050]: I1211 14:09:39.406337 5050 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2b38c77f-3e0b-4025-be68-7f36f501dbc7-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 14:09:39 crc kubenswrapper[5050]: I1211 14:09:39.412260 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b38c77f-3e0b-4025-be68-7f36f501dbc7-config-data" (OuterVolumeSpecName: "config-data") pod "2b38c77f-3e0b-4025-be68-7f36f501dbc7" (UID: "2b38c77f-3e0b-4025-be68-7f36f501dbc7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:09:39 crc kubenswrapper[5050]: I1211 14:09:39.509341 5050 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b38c77f-3e0b-4025-be68-7f36f501dbc7-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 14:09:39 crc kubenswrapper[5050]: I1211 14:09:39.940518 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 11 14:09:39 crc kubenswrapper[5050]: I1211 14:09:39.940502 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2b38c77f-3e0b-4025-be68-7f36f501dbc7","Type":"ContainerDied","Data":"fbb2b3c74701959b54490a2bb7b3eb67d88365688353a338fc8434c0a7582dc2"} Dec 11 14:09:39 crc kubenswrapper[5050]: I1211 14:09:39.940734 5050 scope.go:117] "RemoveContainer" containerID="765f8289b92b84048e64f85cc04fb4455558faef5e6e20f4a07d1818c36599b1" Dec 11 14:09:39 crc kubenswrapper[5050]: I1211 14:09:39.973425 5050 scope.go:117] "RemoveContainer" containerID="dd87005a2ab4fae4e858dfd7bb26304f806e5d0ac9d0dee7ead545656d44e2ce" Dec 11 14:09:39 crc kubenswrapper[5050]: I1211 14:09:39.974435 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 11 14:09:39 crc kubenswrapper[5050]: I1211 14:09:39.987551 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 11 14:09:39 crc kubenswrapper[5050]: I1211 14:09:39.998441 5050 scope.go:117] "RemoveContainer" containerID="ff0ec19e8712cde85c153b783577e76c91bcedb05308a446b1bef3bb1ff2e389" Dec 11 14:09:40 crc kubenswrapper[5050]: I1211 14:09:40.010754 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 11 14:09:40 crc kubenswrapper[5050]: E1211 14:09:40.011401 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b38c77f-3e0b-4025-be68-7f36f501dbc7" containerName="ceilometer-central-agent" Dec 11 14:09:40 crc kubenswrapper[5050]: I1211 14:09:40.011498 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b38c77f-3e0b-4025-be68-7f36f501dbc7" containerName="ceilometer-central-agent" Dec 11 14:09:40 crc kubenswrapper[5050]: E1211 14:09:40.011586 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b38c77f-3e0b-4025-be68-7f36f501dbc7" containerName="sg-core" Dec 11 14:09:40 crc kubenswrapper[5050]: I1211 14:09:40.011642 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b38c77f-3e0b-4025-be68-7f36f501dbc7" containerName="sg-core" Dec 11 14:09:40 crc kubenswrapper[5050]: E1211 14:09:40.011699 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b38c77f-3e0b-4025-be68-7f36f501dbc7" containerName="ceilometer-notification-agent" Dec 11 14:09:40 crc kubenswrapper[5050]: I1211 14:09:40.011748 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b38c77f-3e0b-4025-be68-7f36f501dbc7" containerName="ceilometer-notification-agent" Dec 11 14:09:40 crc kubenswrapper[5050]: E1211 14:09:40.011826 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b38c77f-3e0b-4025-be68-7f36f501dbc7" containerName="proxy-httpd" Dec 11 14:09:40 crc kubenswrapper[5050]: I1211 14:09:40.011907 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b38c77f-3e0b-4025-be68-7f36f501dbc7" containerName="proxy-httpd" Dec 11 14:09:40 crc kubenswrapper[5050]: I1211 14:09:40.012195 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b38c77f-3e0b-4025-be68-7f36f501dbc7" containerName="proxy-httpd" Dec 11 14:09:40 crc kubenswrapper[5050]: I1211 14:09:40.012264 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b38c77f-3e0b-4025-be68-7f36f501dbc7" containerName="sg-core" Dec 11 14:09:40 crc kubenswrapper[5050]: I1211 14:09:40.012323 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b38c77f-3e0b-4025-be68-7f36f501dbc7" containerName="ceilometer-notification-agent" Dec 
11 14:09:40 crc kubenswrapper[5050]: I1211 14:09:40.012373 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b38c77f-3e0b-4025-be68-7f36f501dbc7" containerName="ceilometer-central-agent" Dec 11 14:09:40 crc kubenswrapper[5050]: I1211 14:09:40.014241 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 11 14:09:40 crc kubenswrapper[5050]: I1211 14:09:40.017252 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 11 14:09:40 crc kubenswrapper[5050]: I1211 14:09:40.017475 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 11 14:09:40 crc kubenswrapper[5050]: I1211 14:09:40.024590 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ba2dbc4b-cd0e-4ed5-a9d3-0c687e96b5cb-scripts\") pod \"ceilometer-0\" (UID: \"ba2dbc4b-cd0e-4ed5-a9d3-0c687e96b5cb\") " pod="openstack/ceilometer-0" Dec 11 14:09:40 crc kubenswrapper[5050]: I1211 14:09:40.024778 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lwdzd\" (UniqueName: \"kubernetes.io/projected/ba2dbc4b-cd0e-4ed5-a9d3-0c687e96b5cb-kube-api-access-lwdzd\") pod \"ceilometer-0\" (UID: \"ba2dbc4b-cd0e-4ed5-a9d3-0c687e96b5cb\") " pod="openstack/ceilometer-0" Dec 11 14:09:40 crc kubenswrapper[5050]: I1211 14:09:40.025696 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba2dbc4b-cd0e-4ed5-a9d3-0c687e96b5cb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ba2dbc4b-cd0e-4ed5-a9d3-0c687e96b5cb\") " pod="openstack/ceilometer-0" Dec 11 14:09:40 crc kubenswrapper[5050]: I1211 14:09:40.025827 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ba2dbc4b-cd0e-4ed5-a9d3-0c687e96b5cb-log-httpd\") pod \"ceilometer-0\" (UID: \"ba2dbc4b-cd0e-4ed5-a9d3-0c687e96b5cb\") " pod="openstack/ceilometer-0" Dec 11 14:09:40 crc kubenswrapper[5050]: I1211 14:09:40.025917 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba2dbc4b-cd0e-4ed5-a9d3-0c687e96b5cb-config-data\") pod \"ceilometer-0\" (UID: \"ba2dbc4b-cd0e-4ed5-a9d3-0c687e96b5cb\") " pod="openstack/ceilometer-0" Dec 11 14:09:40 crc kubenswrapper[5050]: I1211 14:09:40.026037 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ba2dbc4b-cd0e-4ed5-a9d3-0c687e96b5cb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ba2dbc4b-cd0e-4ed5-a9d3-0c687e96b5cb\") " pod="openstack/ceilometer-0" Dec 11 14:09:40 crc kubenswrapper[5050]: I1211 14:09:40.026159 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ba2dbc4b-cd0e-4ed5-a9d3-0c687e96b5cb-run-httpd\") pod \"ceilometer-0\" (UID: \"ba2dbc4b-cd0e-4ed5-a9d3-0c687e96b5cb\") " pod="openstack/ceilometer-0" Dec 11 14:09:40 crc kubenswrapper[5050]: I1211 14:09:40.049474 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 11 14:09:40 crc kubenswrapper[5050]: I1211 14:09:40.064497 5050 scope.go:117] 
"RemoveContainer" containerID="a9c34fc155dd7b9bd45d3a0c8f67086610cccd4efb7f91abfac5988d70c7dbb7" Dec 11 14:09:40 crc kubenswrapper[5050]: I1211 14:09:40.127848 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ba2dbc4b-cd0e-4ed5-a9d3-0c687e96b5cb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ba2dbc4b-cd0e-4ed5-a9d3-0c687e96b5cb\") " pod="openstack/ceilometer-0" Dec 11 14:09:40 crc kubenswrapper[5050]: I1211 14:09:40.127933 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ba2dbc4b-cd0e-4ed5-a9d3-0c687e96b5cb-run-httpd\") pod \"ceilometer-0\" (UID: \"ba2dbc4b-cd0e-4ed5-a9d3-0c687e96b5cb\") " pod="openstack/ceilometer-0" Dec 11 14:09:40 crc kubenswrapper[5050]: I1211 14:09:40.128011 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ba2dbc4b-cd0e-4ed5-a9d3-0c687e96b5cb-scripts\") pod \"ceilometer-0\" (UID: \"ba2dbc4b-cd0e-4ed5-a9d3-0c687e96b5cb\") " pod="openstack/ceilometer-0" Dec 11 14:09:40 crc kubenswrapper[5050]: I1211 14:09:40.128092 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lwdzd\" (UniqueName: \"kubernetes.io/projected/ba2dbc4b-cd0e-4ed5-a9d3-0c687e96b5cb-kube-api-access-lwdzd\") pod \"ceilometer-0\" (UID: \"ba2dbc4b-cd0e-4ed5-a9d3-0c687e96b5cb\") " pod="openstack/ceilometer-0" Dec 11 14:09:40 crc kubenswrapper[5050]: I1211 14:09:40.128120 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba2dbc4b-cd0e-4ed5-a9d3-0c687e96b5cb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ba2dbc4b-cd0e-4ed5-a9d3-0c687e96b5cb\") " pod="openstack/ceilometer-0" Dec 11 14:09:40 crc kubenswrapper[5050]: I1211 14:09:40.128179 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ba2dbc4b-cd0e-4ed5-a9d3-0c687e96b5cb-log-httpd\") pod \"ceilometer-0\" (UID: \"ba2dbc4b-cd0e-4ed5-a9d3-0c687e96b5cb\") " pod="openstack/ceilometer-0" Dec 11 14:09:40 crc kubenswrapper[5050]: I1211 14:09:40.128209 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba2dbc4b-cd0e-4ed5-a9d3-0c687e96b5cb-config-data\") pod \"ceilometer-0\" (UID: \"ba2dbc4b-cd0e-4ed5-a9d3-0c687e96b5cb\") " pod="openstack/ceilometer-0" Dec 11 14:09:40 crc kubenswrapper[5050]: I1211 14:09:40.130344 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ba2dbc4b-cd0e-4ed5-a9d3-0c687e96b5cb-log-httpd\") pod \"ceilometer-0\" (UID: \"ba2dbc4b-cd0e-4ed5-a9d3-0c687e96b5cb\") " pod="openstack/ceilometer-0" Dec 11 14:09:40 crc kubenswrapper[5050]: I1211 14:09:40.130651 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ba2dbc4b-cd0e-4ed5-a9d3-0c687e96b5cb-run-httpd\") pod \"ceilometer-0\" (UID: \"ba2dbc4b-cd0e-4ed5-a9d3-0c687e96b5cb\") " pod="openstack/ceilometer-0" Dec 11 14:09:40 crc kubenswrapper[5050]: I1211 14:09:40.133184 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba2dbc4b-cd0e-4ed5-a9d3-0c687e96b5cb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: 
\"ba2dbc4b-cd0e-4ed5-a9d3-0c687e96b5cb\") " pod="openstack/ceilometer-0" Dec 11 14:09:40 crc kubenswrapper[5050]: I1211 14:09:40.133965 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ba2dbc4b-cd0e-4ed5-a9d3-0c687e96b5cb-scripts\") pod \"ceilometer-0\" (UID: \"ba2dbc4b-cd0e-4ed5-a9d3-0c687e96b5cb\") " pod="openstack/ceilometer-0" Dec 11 14:09:40 crc kubenswrapper[5050]: I1211 14:09:40.147096 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ba2dbc4b-cd0e-4ed5-a9d3-0c687e96b5cb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ba2dbc4b-cd0e-4ed5-a9d3-0c687e96b5cb\") " pod="openstack/ceilometer-0" Dec 11 14:09:40 crc kubenswrapper[5050]: I1211 14:09:40.149333 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba2dbc4b-cd0e-4ed5-a9d3-0c687e96b5cb-config-data\") pod \"ceilometer-0\" (UID: \"ba2dbc4b-cd0e-4ed5-a9d3-0c687e96b5cb\") " pod="openstack/ceilometer-0" Dec 11 14:09:40 crc kubenswrapper[5050]: I1211 14:09:40.149501 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lwdzd\" (UniqueName: \"kubernetes.io/projected/ba2dbc4b-cd0e-4ed5-a9d3-0c687e96b5cb-kube-api-access-lwdzd\") pod \"ceilometer-0\" (UID: \"ba2dbc4b-cd0e-4ed5-a9d3-0c687e96b5cb\") " pod="openstack/ceilometer-0" Dec 11 14:09:40 crc kubenswrapper[5050]: I1211 14:09:40.371718 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 11 14:09:40 crc kubenswrapper[5050]: I1211 14:09:40.796287 5050 patch_prober.go:28] interesting pod/machine-config-daemon-wcb2s container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 14:09:40 crc kubenswrapper[5050]: I1211 14:09:40.796636 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 14:09:40 crc kubenswrapper[5050]: I1211 14:09:40.796701 5050 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" Dec 11 14:09:40 crc kubenswrapper[5050]: I1211 14:09:40.797629 5050 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9aa9a97e2005f8a9ae70c2d72cef618f936c2d3673f7194a8c95ac3ad3519511"} pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 11 14:09:40 crc kubenswrapper[5050]: I1211 14:09:40.797702 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" containerName="machine-config-daemon" containerID="cri-o://9aa9a97e2005f8a9ae70c2d72cef618f936c2d3673f7194a8c95ac3ad3519511" gracePeriod=600 Dec 11 14:09:40 crc kubenswrapper[5050]: W1211 14:09:40.886148 5050 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podba2dbc4b_cd0e_4ed5_a9d3_0c687e96b5cb.slice/crio-843f0fe77130a7df36177153df1844b062fe430f7d4225766cf4817009e50e38 WatchSource:0}: Error finding container 843f0fe77130a7df36177153df1844b062fe430f7d4225766cf4817009e50e38: Status 404 returned error can't find the container with id 843f0fe77130a7df36177153df1844b062fe430f7d4225766cf4817009e50e38 Dec 11 14:09:40 crc kubenswrapper[5050]: I1211 14:09:40.890069 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 11 14:09:40 crc kubenswrapper[5050]: I1211 14:09:40.962963 5050 generic.go:334] "Generic (PLEG): container finished" podID="7e849b2e-7cd7-4e49-acd2-deab139c699a" containerID="9aa9a97e2005f8a9ae70c2d72cef618f936c2d3673f7194a8c95ac3ad3519511" exitCode=0 Dec 11 14:09:40 crc kubenswrapper[5050]: I1211 14:09:40.963074 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" event={"ID":"7e849b2e-7cd7-4e49-acd2-deab139c699a","Type":"ContainerDied","Data":"9aa9a97e2005f8a9ae70c2d72cef618f936c2d3673f7194a8c95ac3ad3519511"} Dec 11 14:09:40 crc kubenswrapper[5050]: I1211 14:09:40.963115 5050 scope.go:117] "RemoveContainer" containerID="9bb2e8f5b00b062d127a620edb8af7fea8c346ac42e77621e28b4dafad3e5aa5" Dec 11 14:09:40 crc kubenswrapper[5050]: I1211 14:09:40.966644 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ba2dbc4b-cd0e-4ed5-a9d3-0c687e96b5cb","Type":"ContainerStarted","Data":"843f0fe77130a7df36177153df1844b062fe430f7d4225766cf4817009e50e38"} Dec 11 14:09:41 crc kubenswrapper[5050]: I1211 14:09:41.561938 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b38c77f-3e0b-4025-be68-7f36f501dbc7" path="/var/lib/kubelet/pods/2b38c77f-3e0b-4025-be68-7f36f501dbc7/volumes" Dec 11 14:09:41 crc kubenswrapper[5050]: I1211 14:09:41.978902 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" event={"ID":"7e849b2e-7cd7-4e49-acd2-deab139c699a","Type":"ContainerStarted","Data":"806cebb86653f8bb0b24399056d7c69486e67776b18ac98fa0522064fc34a318"} Dec 11 14:09:41 crc kubenswrapper[5050]: I1211 14:09:41.980135 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ba2dbc4b-cd0e-4ed5-a9d3-0c687e96b5cb","Type":"ContainerStarted","Data":"c6b1d62b01b3a95e16aab0b5c64eaac77a278fd8e34b184c55d0da919ec73e08"} Dec 11 14:09:42 crc kubenswrapper[5050]: I1211 14:09:42.171721 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 11 14:09:42 crc kubenswrapper[5050]: I1211 14:09:42.172321 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="e61388f2-3282-4387-a850-9b4bffbf0e2b" containerName="glance-log" containerID="cri-o://03c89d6c92be0c0483362292c60802d54cf7cf479b193165dada5800417ea68f" gracePeriod=30 Dec 11 14:09:42 crc kubenswrapper[5050]: I1211 14:09:42.172345 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="e61388f2-3282-4387-a850-9b4bffbf0e2b" containerName="glance-httpd" containerID="cri-o://0e7deec88bef5db6b7479f0a1d1b0310b574699f8cd3bdca098e09352d918df8" gracePeriod=30 Dec 11 14:09:42 crc kubenswrapper[5050]: I1211 14:09:42.991957 5050 generic.go:334] "Generic (PLEG): container finished" 
podID="e61388f2-3282-4387-a850-9b4bffbf0e2b" containerID="03c89d6c92be0c0483362292c60802d54cf7cf479b193165dada5800417ea68f" exitCode=143 Dec 11 14:09:42 crc kubenswrapper[5050]: I1211 14:09:42.992087 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e61388f2-3282-4387-a850-9b4bffbf0e2b","Type":"ContainerDied","Data":"03c89d6c92be0c0483362292c60802d54cf7cf479b193165dada5800417ea68f"} Dec 11 14:09:43 crc kubenswrapper[5050]: I1211 14:09:43.747833 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 11 14:09:43 crc kubenswrapper[5050]: I1211 14:09:43.748430 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="1553db29-21b6-4403-ab72-67c4d725a99d" containerName="glance-log" containerID="cri-o://fca9a9c9137887d1725a8887e573a39781d41f645e74763e0e567170226b2342" gracePeriod=30 Dec 11 14:09:43 crc kubenswrapper[5050]: I1211 14:09:43.748571 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="1553db29-21b6-4403-ab72-67c4d725a99d" containerName="glance-httpd" containerID="cri-o://36bbba731d297f10aa7e33a81c20476d6cf18cf25132fed5a0399b134ec2f19c" gracePeriod=30 Dec 11 14:09:44 crc kubenswrapper[5050]: I1211 14:09:44.020952 5050 generic.go:334] "Generic (PLEG): container finished" podID="1553db29-21b6-4403-ab72-67c4d725a99d" containerID="fca9a9c9137887d1725a8887e573a39781d41f645e74763e0e567170226b2342" exitCode=143 Dec 11 14:09:44 crc kubenswrapper[5050]: I1211 14:09:44.021065 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1553db29-21b6-4403-ab72-67c4d725a99d","Type":"ContainerDied","Data":"fca9a9c9137887d1725a8887e573a39781d41f645e74763e0e567170226b2342"} Dec 11 14:09:44 crc kubenswrapper[5050]: I1211 14:09:44.024587 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ba2dbc4b-cd0e-4ed5-a9d3-0c687e96b5cb","Type":"ContainerStarted","Data":"2d91a096501ed0bd59ffc8a882e4c06ddb3132b553d46dbf91a11271a12b973a"} Dec 11 14:09:45 crc kubenswrapper[5050]: I1211 14:09:45.058034 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ba2dbc4b-cd0e-4ed5-a9d3-0c687e96b5cb","Type":"ContainerStarted","Data":"1691121130a80c5136049af99bae2de374c9ff3322cf7100415a86a9e8096847"} Dec 11 14:09:45 crc kubenswrapper[5050]: I1211 14:09:45.313837 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 11 14:09:46 crc kubenswrapper[5050]: I1211 14:09:46.073382 5050 generic.go:334] "Generic (PLEG): container finished" podID="e61388f2-3282-4387-a850-9b4bffbf0e2b" containerID="0e7deec88bef5db6b7479f0a1d1b0310b574699f8cd3bdca098e09352d918df8" exitCode=0 Dec 11 14:09:46 crc kubenswrapper[5050]: I1211 14:09:46.073414 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e61388f2-3282-4387-a850-9b4bffbf0e2b","Type":"ContainerDied","Data":"0e7deec88bef5db6b7479f0a1d1b0310b574699f8cd3bdca098e09352d918df8"} Dec 11 14:09:46 crc kubenswrapper[5050]: I1211 14:09:46.468732 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 11 14:09:46 crc kubenswrapper[5050]: I1211 14:09:46.590226 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e61388f2-3282-4387-a850-9b4bffbf0e2b-httpd-run\") pod \"e61388f2-3282-4387-a850-9b4bffbf0e2b\" (UID: \"e61388f2-3282-4387-a850-9b4bffbf0e2b\") " Dec 11 14:09:46 crc kubenswrapper[5050]: I1211 14:09:46.590512 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e61388f2-3282-4387-a850-9b4bffbf0e2b-public-tls-certs\") pod \"e61388f2-3282-4387-a850-9b4bffbf0e2b\" (UID: \"e61388f2-3282-4387-a850-9b4bffbf0e2b\") " Dec 11 14:09:46 crc kubenswrapper[5050]: I1211 14:09:46.590609 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e61388f2-3282-4387-a850-9b4bffbf0e2b-config-data\") pod \"e61388f2-3282-4387-a850-9b4bffbf0e2b\" (UID: \"e61388f2-3282-4387-a850-9b4bffbf0e2b\") " Dec 11 14:09:46 crc kubenswrapper[5050]: I1211 14:09:46.590798 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"e61388f2-3282-4387-a850-9b4bffbf0e2b\" (UID: \"e61388f2-3282-4387-a850-9b4bffbf0e2b\") " Dec 11 14:09:46 crc kubenswrapper[5050]: I1211 14:09:46.590895 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vpw78\" (UniqueName: \"kubernetes.io/projected/e61388f2-3282-4387-a850-9b4bffbf0e2b-kube-api-access-vpw78\") pod \"e61388f2-3282-4387-a850-9b4bffbf0e2b\" (UID: \"e61388f2-3282-4387-a850-9b4bffbf0e2b\") " Dec 11 14:09:46 crc kubenswrapper[5050]: I1211 14:09:46.590993 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e61388f2-3282-4387-a850-9b4bffbf0e2b-combined-ca-bundle\") pod \"e61388f2-3282-4387-a850-9b4bffbf0e2b\" (UID: \"e61388f2-3282-4387-a850-9b4bffbf0e2b\") " Dec 11 14:09:46 crc kubenswrapper[5050]: I1211 14:09:46.591104 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e61388f2-3282-4387-a850-9b4bffbf0e2b-scripts\") pod \"e61388f2-3282-4387-a850-9b4bffbf0e2b\" (UID: \"e61388f2-3282-4387-a850-9b4bffbf0e2b\") " Dec 11 14:09:46 crc kubenswrapper[5050]: I1211 14:09:46.591186 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e61388f2-3282-4387-a850-9b4bffbf0e2b-logs\") pod \"e61388f2-3282-4387-a850-9b4bffbf0e2b\" (UID: \"e61388f2-3282-4387-a850-9b4bffbf0e2b\") " Dec 11 14:09:46 crc kubenswrapper[5050]: I1211 14:09:46.590814 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e61388f2-3282-4387-a850-9b4bffbf0e2b-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "e61388f2-3282-4387-a850-9b4bffbf0e2b" (UID: "e61388f2-3282-4387-a850-9b4bffbf0e2b"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 14:09:46 crc kubenswrapper[5050]: I1211 14:09:46.591817 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e61388f2-3282-4387-a850-9b4bffbf0e2b-logs" (OuterVolumeSpecName: "logs") pod "e61388f2-3282-4387-a850-9b4bffbf0e2b" (UID: "e61388f2-3282-4387-a850-9b4bffbf0e2b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 14:09:46 crc kubenswrapper[5050]: I1211 14:09:46.594139 5050 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e61388f2-3282-4387-a850-9b4bffbf0e2b-logs\") on node \"crc\" DevicePath \"\"" Dec 11 14:09:46 crc kubenswrapper[5050]: I1211 14:09:46.594285 5050 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e61388f2-3282-4387-a850-9b4bffbf0e2b-httpd-run\") on node \"crc\" DevicePath \"\"" Dec 11 14:09:46 crc kubenswrapper[5050]: I1211 14:09:46.596579 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "glance") pod "e61388f2-3282-4387-a850-9b4bffbf0e2b" (UID: "e61388f2-3282-4387-a850-9b4bffbf0e2b"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 11 14:09:46 crc kubenswrapper[5050]: I1211 14:09:46.596971 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e61388f2-3282-4387-a850-9b4bffbf0e2b-scripts" (OuterVolumeSpecName: "scripts") pod "e61388f2-3282-4387-a850-9b4bffbf0e2b" (UID: "e61388f2-3282-4387-a850-9b4bffbf0e2b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:09:46 crc kubenswrapper[5050]: I1211 14:09:46.605482 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e61388f2-3282-4387-a850-9b4bffbf0e2b-kube-api-access-vpw78" (OuterVolumeSpecName: "kube-api-access-vpw78") pod "e61388f2-3282-4387-a850-9b4bffbf0e2b" (UID: "e61388f2-3282-4387-a850-9b4bffbf0e2b"). InnerVolumeSpecName "kube-api-access-vpw78". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 14:09:46 crc kubenswrapper[5050]: I1211 14:09:46.642343 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e61388f2-3282-4387-a850-9b4bffbf0e2b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e61388f2-3282-4387-a850-9b4bffbf0e2b" (UID: "e61388f2-3282-4387-a850-9b4bffbf0e2b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:09:46 crc kubenswrapper[5050]: E1211 14:09:46.666887 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e61388f2-3282-4387-a850-9b4bffbf0e2b-public-tls-certs podName:e61388f2-3282-4387-a850-9b4bffbf0e2b nodeName:}" failed. No retries permitted until 2025-12-11 14:09:47.166851584 +0000 UTC m=+1278.010574170 (durationBeforeRetry 500ms). 
Error: error cleaning subPath mounts for volume "public-tls-certs" (UniqueName: "kubernetes.io/secret/e61388f2-3282-4387-a850-9b4bffbf0e2b-public-tls-certs") pod "e61388f2-3282-4387-a850-9b4bffbf0e2b" (UID: "e61388f2-3282-4387-a850-9b4bffbf0e2b") : error deleting /var/lib/kubelet/pods/e61388f2-3282-4387-a850-9b4bffbf0e2b/volume-subpaths: remove /var/lib/kubelet/pods/e61388f2-3282-4387-a850-9b4bffbf0e2b/volume-subpaths: no such file or directory Dec 11 14:09:46 crc kubenswrapper[5050]: I1211 14:09:46.675630 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e61388f2-3282-4387-a850-9b4bffbf0e2b-config-data" (OuterVolumeSpecName: "config-data") pod "e61388f2-3282-4387-a850-9b4bffbf0e2b" (UID: "e61388f2-3282-4387-a850-9b4bffbf0e2b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:09:46 crc kubenswrapper[5050]: I1211 14:09:46.696299 5050 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e61388f2-3282-4387-a850-9b4bffbf0e2b-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 14:09:46 crc kubenswrapper[5050]: I1211 14:09:46.696347 5050 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Dec 11 14:09:46 crc kubenswrapper[5050]: I1211 14:09:46.696358 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vpw78\" (UniqueName: \"kubernetes.io/projected/e61388f2-3282-4387-a850-9b4bffbf0e2b-kube-api-access-vpw78\") on node \"crc\" DevicePath \"\"" Dec 11 14:09:46 crc kubenswrapper[5050]: I1211 14:09:46.696368 5050 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e61388f2-3282-4387-a850-9b4bffbf0e2b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 14:09:46 crc kubenswrapper[5050]: I1211 14:09:46.696377 5050 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e61388f2-3282-4387-a850-9b4bffbf0e2b-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 14:09:46 crc kubenswrapper[5050]: I1211 14:09:46.728783 5050 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Dec 11 14:09:46 crc kubenswrapper[5050]: I1211 14:09:46.797918 5050 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Dec 11 14:09:47 crc kubenswrapper[5050]: I1211 14:09:47.090449 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ba2dbc4b-cd0e-4ed5-a9d3-0c687e96b5cb","Type":"ContainerStarted","Data":"2f7b9b1af97df1ae415218361bb3e588feb3e7a9629979d22e7a56915ee476e7"} Dec 11 14:09:47 crc kubenswrapper[5050]: I1211 14:09:47.090600 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 11 14:09:47 crc kubenswrapper[5050]: I1211 14:09:47.090612 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ba2dbc4b-cd0e-4ed5-a9d3-0c687e96b5cb" containerName="ceilometer-central-agent" containerID="cri-o://c6b1d62b01b3a95e16aab0b5c64eaac77a278fd8e34b184c55d0da919ec73e08" gracePeriod=30 Dec 11 14:09:47 crc kubenswrapper[5050]: I1211 14:09:47.090687 5050 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ba2dbc4b-cd0e-4ed5-a9d3-0c687e96b5cb" containerName="sg-core" containerID="cri-o://1691121130a80c5136049af99bae2de374c9ff3322cf7100415a86a9e8096847" gracePeriod=30 Dec 11 14:09:47 crc kubenswrapper[5050]: I1211 14:09:47.090692 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ba2dbc4b-cd0e-4ed5-a9d3-0c687e96b5cb" containerName="ceilometer-notification-agent" containerID="cri-o://2d91a096501ed0bd59ffc8a882e4c06ddb3132b553d46dbf91a11271a12b973a" gracePeriod=30 Dec 11 14:09:47 crc kubenswrapper[5050]: I1211 14:09:47.090687 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ba2dbc4b-cd0e-4ed5-a9d3-0c687e96b5cb" containerName="proxy-httpd" containerID="cri-o://2f7b9b1af97df1ae415218361bb3e588feb3e7a9629979d22e7a56915ee476e7" gracePeriod=30 Dec 11 14:09:47 crc kubenswrapper[5050]: I1211 14:09:47.096812 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e61388f2-3282-4387-a850-9b4bffbf0e2b","Type":"ContainerDied","Data":"4475fda2ac479fc444a9017e44f882b794a79273947b47195c2a2f1b5b58374c"} Dec 11 14:09:47 crc kubenswrapper[5050]: I1211 14:09:47.096885 5050 scope.go:117] "RemoveContainer" containerID="0e7deec88bef5db6b7479f0a1d1b0310b574699f8cd3bdca098e09352d918df8" Dec 11 14:09:47 crc kubenswrapper[5050]: I1211 14:09:47.097089 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 11 14:09:47 crc kubenswrapper[5050]: I1211 14:09:47.137535 5050 scope.go:117] "RemoveContainer" containerID="03c89d6c92be0c0483362292c60802d54cf7cf479b193165dada5800417ea68f" Dec 11 14:09:47 crc kubenswrapper[5050]: I1211 14:09:47.206527 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e61388f2-3282-4387-a850-9b4bffbf0e2b-public-tls-certs\") pod \"e61388f2-3282-4387-a850-9b4bffbf0e2b\" (UID: \"e61388f2-3282-4387-a850-9b4bffbf0e2b\") " Dec 11 14:09:47 crc kubenswrapper[5050]: I1211 14:09:47.212746 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e61388f2-3282-4387-a850-9b4bffbf0e2b-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "e61388f2-3282-4387-a850-9b4bffbf0e2b" (UID: "e61388f2-3282-4387-a850-9b4bffbf0e2b"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:09:47 crc kubenswrapper[5050]: I1211 14:09:47.309668 5050 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e61388f2-3282-4387-a850-9b4bffbf0e2b-public-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 11 14:09:47 crc kubenswrapper[5050]: I1211 14:09:47.427729 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.066802312 podStartE2EDuration="8.427705191s" podCreationTimestamp="2025-12-11 14:09:39 +0000 UTC" firstStartedPulling="2025-12-11 14:09:40.888877889 +0000 UTC m=+1271.732600475" lastFinishedPulling="2025-12-11 14:09:46.249780768 +0000 UTC m=+1277.093503354" observedRunningTime="2025-12-11 14:09:47.121771129 +0000 UTC m=+1277.965493715" watchObservedRunningTime="2025-12-11 14:09:47.427705191 +0000 UTC m=+1278.271427777" Dec 11 14:09:47 crc kubenswrapper[5050]: I1211 14:09:47.438784 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 11 14:09:47 crc kubenswrapper[5050]: I1211 14:09:47.452792 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 11 14:09:47 crc kubenswrapper[5050]: I1211 14:09:47.464856 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Dec 11 14:09:47 crc kubenswrapper[5050]: E1211 14:09:47.465422 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e61388f2-3282-4387-a850-9b4bffbf0e2b" containerName="glance-log" Dec 11 14:09:47 crc kubenswrapper[5050]: I1211 14:09:47.465452 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="e61388f2-3282-4387-a850-9b4bffbf0e2b" containerName="glance-log" Dec 11 14:09:47 crc kubenswrapper[5050]: E1211 14:09:47.465501 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e61388f2-3282-4387-a850-9b4bffbf0e2b" containerName="glance-httpd" Dec 11 14:09:47 crc kubenswrapper[5050]: I1211 14:09:47.465511 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="e61388f2-3282-4387-a850-9b4bffbf0e2b" containerName="glance-httpd" Dec 11 14:09:47 crc kubenswrapper[5050]: I1211 14:09:47.465739 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="e61388f2-3282-4387-a850-9b4bffbf0e2b" containerName="glance-httpd" Dec 11 14:09:47 crc kubenswrapper[5050]: I1211 14:09:47.465783 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="e61388f2-3282-4387-a850-9b4bffbf0e2b" containerName="glance-log" Dec 11 14:09:47 crc kubenswrapper[5050]: I1211 14:09:47.469118 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 11 14:09:47 crc kubenswrapper[5050]: I1211 14:09:47.472964 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Dec 11 14:09:47 crc kubenswrapper[5050]: I1211 14:09:47.473957 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Dec 11 14:09:47 crc kubenswrapper[5050]: I1211 14:09:47.482919 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 11 14:09:47 crc kubenswrapper[5050]: I1211 14:09:47.567580 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e61388f2-3282-4387-a850-9b4bffbf0e2b" path="/var/lib/kubelet/pods/e61388f2-3282-4387-a850-9b4bffbf0e2b/volumes" Dec 11 14:09:47 crc kubenswrapper[5050]: I1211 14:09:47.615483 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"213cfec6-ba42-4dbc-bd9c-051b193e4577\") " pod="openstack/glance-default-external-api-0" Dec 11 14:09:47 crc kubenswrapper[5050]: I1211 14:09:47.615556 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p5d5t\" (UniqueName: \"kubernetes.io/projected/213cfec6-ba42-4dbc-bd9c-051b193e4577-kube-api-access-p5d5t\") pod \"glance-default-external-api-0\" (UID: \"213cfec6-ba42-4dbc-bd9c-051b193e4577\") " pod="openstack/glance-default-external-api-0" Dec 11 14:09:47 crc kubenswrapper[5050]: I1211 14:09:47.615590 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/213cfec6-ba42-4dbc-bd9c-051b193e4577-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"213cfec6-ba42-4dbc-bd9c-051b193e4577\") " pod="openstack/glance-default-external-api-0" Dec 11 14:09:47 crc kubenswrapper[5050]: I1211 14:09:47.615627 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/213cfec6-ba42-4dbc-bd9c-051b193e4577-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"213cfec6-ba42-4dbc-bd9c-051b193e4577\") " pod="openstack/glance-default-external-api-0" Dec 11 14:09:47 crc kubenswrapper[5050]: I1211 14:09:47.615698 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/213cfec6-ba42-4dbc-bd9c-051b193e4577-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"213cfec6-ba42-4dbc-bd9c-051b193e4577\") " pod="openstack/glance-default-external-api-0" Dec 11 14:09:47 crc kubenswrapper[5050]: I1211 14:09:47.615784 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/213cfec6-ba42-4dbc-bd9c-051b193e4577-logs\") pod \"glance-default-external-api-0\" (UID: \"213cfec6-ba42-4dbc-bd9c-051b193e4577\") " pod="openstack/glance-default-external-api-0" Dec 11 14:09:47 crc kubenswrapper[5050]: I1211 14:09:47.615873 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/213cfec6-ba42-4dbc-bd9c-051b193e4577-scripts\") pod 
\"glance-default-external-api-0\" (UID: \"213cfec6-ba42-4dbc-bd9c-051b193e4577\") " pod="openstack/glance-default-external-api-0" Dec 11 14:09:47 crc kubenswrapper[5050]: I1211 14:09:47.615946 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/213cfec6-ba42-4dbc-bd9c-051b193e4577-config-data\") pod \"glance-default-external-api-0\" (UID: \"213cfec6-ba42-4dbc-bd9c-051b193e4577\") " pod="openstack/glance-default-external-api-0" Dec 11 14:09:47 crc kubenswrapper[5050]: I1211 14:09:47.717906 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p5d5t\" (UniqueName: \"kubernetes.io/projected/213cfec6-ba42-4dbc-bd9c-051b193e4577-kube-api-access-p5d5t\") pod \"glance-default-external-api-0\" (UID: \"213cfec6-ba42-4dbc-bd9c-051b193e4577\") " pod="openstack/glance-default-external-api-0" Dec 11 14:09:47 crc kubenswrapper[5050]: I1211 14:09:47.717973 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/213cfec6-ba42-4dbc-bd9c-051b193e4577-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"213cfec6-ba42-4dbc-bd9c-051b193e4577\") " pod="openstack/glance-default-external-api-0" Dec 11 14:09:47 crc kubenswrapper[5050]: I1211 14:09:47.718031 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/213cfec6-ba42-4dbc-bd9c-051b193e4577-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"213cfec6-ba42-4dbc-bd9c-051b193e4577\") " pod="openstack/glance-default-external-api-0" Dec 11 14:09:47 crc kubenswrapper[5050]: I1211 14:09:47.718104 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/213cfec6-ba42-4dbc-bd9c-051b193e4577-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"213cfec6-ba42-4dbc-bd9c-051b193e4577\") " pod="openstack/glance-default-external-api-0" Dec 11 14:09:47 crc kubenswrapper[5050]: I1211 14:09:47.718211 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/213cfec6-ba42-4dbc-bd9c-051b193e4577-logs\") pod \"glance-default-external-api-0\" (UID: \"213cfec6-ba42-4dbc-bd9c-051b193e4577\") " pod="openstack/glance-default-external-api-0" Dec 11 14:09:47 crc kubenswrapper[5050]: I1211 14:09:47.718255 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/213cfec6-ba42-4dbc-bd9c-051b193e4577-scripts\") pod \"glance-default-external-api-0\" (UID: \"213cfec6-ba42-4dbc-bd9c-051b193e4577\") " pod="openstack/glance-default-external-api-0" Dec 11 14:09:47 crc kubenswrapper[5050]: I1211 14:09:47.718317 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/213cfec6-ba42-4dbc-bd9c-051b193e4577-config-data\") pod \"glance-default-external-api-0\" (UID: \"213cfec6-ba42-4dbc-bd9c-051b193e4577\") " pod="openstack/glance-default-external-api-0" Dec 11 14:09:47 crc kubenswrapper[5050]: I1211 14:09:47.718351 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"213cfec6-ba42-4dbc-bd9c-051b193e4577\") " 
pod="openstack/glance-default-external-api-0" Dec 11 14:09:47 crc kubenswrapper[5050]: I1211 14:09:47.722335 5050 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"213cfec6-ba42-4dbc-bd9c-051b193e4577\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/glance-default-external-api-0" Dec 11 14:09:47 crc kubenswrapper[5050]: I1211 14:09:47.723257 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/213cfec6-ba42-4dbc-bd9c-051b193e4577-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"213cfec6-ba42-4dbc-bd9c-051b193e4577\") " pod="openstack/glance-default-external-api-0" Dec 11 14:09:47 crc kubenswrapper[5050]: I1211 14:09:47.723449 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/213cfec6-ba42-4dbc-bd9c-051b193e4577-logs\") pod \"glance-default-external-api-0\" (UID: \"213cfec6-ba42-4dbc-bd9c-051b193e4577\") " pod="openstack/glance-default-external-api-0" Dec 11 14:09:47 crc kubenswrapper[5050]: I1211 14:09:47.725914 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/213cfec6-ba42-4dbc-bd9c-051b193e4577-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"213cfec6-ba42-4dbc-bd9c-051b193e4577\") " pod="openstack/glance-default-external-api-0" Dec 11 14:09:47 crc kubenswrapper[5050]: I1211 14:09:47.738948 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/213cfec6-ba42-4dbc-bd9c-051b193e4577-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"213cfec6-ba42-4dbc-bd9c-051b193e4577\") " pod="openstack/glance-default-external-api-0" Dec 11 14:09:47 crc kubenswrapper[5050]: I1211 14:09:47.741092 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/213cfec6-ba42-4dbc-bd9c-051b193e4577-scripts\") pod \"glance-default-external-api-0\" (UID: \"213cfec6-ba42-4dbc-bd9c-051b193e4577\") " pod="openstack/glance-default-external-api-0" Dec 11 14:09:47 crc kubenswrapper[5050]: I1211 14:09:47.741803 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p5d5t\" (UniqueName: \"kubernetes.io/projected/213cfec6-ba42-4dbc-bd9c-051b193e4577-kube-api-access-p5d5t\") pod \"glance-default-external-api-0\" (UID: \"213cfec6-ba42-4dbc-bd9c-051b193e4577\") " pod="openstack/glance-default-external-api-0" Dec 11 14:09:47 crc kubenswrapper[5050]: I1211 14:09:47.756829 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/213cfec6-ba42-4dbc-bd9c-051b193e4577-config-data\") pod \"glance-default-external-api-0\" (UID: \"213cfec6-ba42-4dbc-bd9c-051b193e4577\") " pod="openstack/glance-default-external-api-0" Dec 11 14:09:47 crc kubenswrapper[5050]: I1211 14:09:47.784980 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"213cfec6-ba42-4dbc-bd9c-051b193e4577\") " pod="openstack/glance-default-external-api-0" Dec 11 14:09:47 crc kubenswrapper[5050]: I1211 14:09:47.814194 5050 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 11 14:09:48 crc kubenswrapper[5050]: I1211 14:09:48.138658 5050 generic.go:334] "Generic (PLEG): container finished" podID="1553db29-21b6-4403-ab72-67c4d725a99d" containerID="36bbba731d297f10aa7e33a81c20476d6cf18cf25132fed5a0399b134ec2f19c" exitCode=0 Dec 11 14:09:48 crc kubenswrapper[5050]: I1211 14:09:48.139308 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1553db29-21b6-4403-ab72-67c4d725a99d","Type":"ContainerDied","Data":"36bbba731d297f10aa7e33a81c20476d6cf18cf25132fed5a0399b134ec2f19c"} Dec 11 14:09:48 crc kubenswrapper[5050]: I1211 14:09:48.144922 5050 generic.go:334] "Generic (PLEG): container finished" podID="ba2dbc4b-cd0e-4ed5-a9d3-0c687e96b5cb" containerID="2f7b9b1af97df1ae415218361bb3e588feb3e7a9629979d22e7a56915ee476e7" exitCode=0 Dec 11 14:09:48 crc kubenswrapper[5050]: I1211 14:09:48.144978 5050 generic.go:334] "Generic (PLEG): container finished" podID="ba2dbc4b-cd0e-4ed5-a9d3-0c687e96b5cb" containerID="1691121130a80c5136049af99bae2de374c9ff3322cf7100415a86a9e8096847" exitCode=2 Dec 11 14:09:48 crc kubenswrapper[5050]: I1211 14:09:48.144987 5050 generic.go:334] "Generic (PLEG): container finished" podID="ba2dbc4b-cd0e-4ed5-a9d3-0c687e96b5cb" containerID="2d91a096501ed0bd59ffc8a882e4c06ddb3132b553d46dbf91a11271a12b973a" exitCode=0 Dec 11 14:09:48 crc kubenswrapper[5050]: I1211 14:09:48.145080 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ba2dbc4b-cd0e-4ed5-a9d3-0c687e96b5cb","Type":"ContainerDied","Data":"2f7b9b1af97df1ae415218361bb3e588feb3e7a9629979d22e7a56915ee476e7"} Dec 11 14:09:48 crc kubenswrapper[5050]: I1211 14:09:48.145123 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ba2dbc4b-cd0e-4ed5-a9d3-0c687e96b5cb","Type":"ContainerDied","Data":"1691121130a80c5136049af99bae2de374c9ff3322cf7100415a86a9e8096847"} Dec 11 14:09:48 crc kubenswrapper[5050]: I1211 14:09:48.145134 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ba2dbc4b-cd0e-4ed5-a9d3-0c687e96b5cb","Type":"ContainerDied","Data":"2d91a096501ed0bd59ffc8a882e4c06ddb3132b553d46dbf91a11271a12b973a"} Dec 11 14:09:48 crc kubenswrapper[5050]: I1211 14:09:48.381328 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 11 14:09:48 crc kubenswrapper[5050]: W1211 14:09:48.381328 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod213cfec6_ba42_4dbc_bd9c_051b193e4577.slice/crio-97f6f60fa1e0a707c6a961adcbc21614be41fe1e743674218e2a861231dc6493 WatchSource:0}: Error finding container 97f6f60fa1e0a707c6a961adcbc21614be41fe1e743674218e2a861231dc6493: Status 404 returned error can't find the container with id 97f6f60fa1e0a707c6a961adcbc21614be41fe1e743674218e2a861231dc6493 Dec 11 14:09:49 crc kubenswrapper[5050]: I1211 14:09:49.171761 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"213cfec6-ba42-4dbc-bd9c-051b193e4577","Type":"ContainerStarted","Data":"97f6f60fa1e0a707c6a961adcbc21614be41fe1e743674218e2a861231dc6493"} Dec 11 14:09:50 crc kubenswrapper[5050]: I1211 14:09:50.184195 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 11 14:09:50 crc kubenswrapper[5050]: I1211 14:09:50.202498 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1553db29-21b6-4403-ab72-67c4d725a99d","Type":"ContainerDied","Data":"f567e6adab7ba3318d0dc9a26001aa270bbf3fafa1056dd7029fc390fb591a63"} Dec 11 14:09:50 crc kubenswrapper[5050]: I1211 14:09:50.202569 5050 scope.go:117] "RemoveContainer" containerID="36bbba731d297f10aa7e33a81c20476d6cf18cf25132fed5a0399b134ec2f19c" Dec 11 14:09:50 crc kubenswrapper[5050]: I1211 14:09:50.202642 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 11 14:09:50 crc kubenswrapper[5050]: I1211 14:09:50.227621 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"213cfec6-ba42-4dbc-bd9c-051b193e4577","Type":"ContainerStarted","Data":"ed60fdd58c4339e3164d7f4e317f1114bf6ecb3ec2bef7cd7a80d1158c76ff29"} Dec 11 14:09:50 crc kubenswrapper[5050]: I1211 14:09:50.252622 5050 scope.go:117] "RemoveContainer" containerID="fca9a9c9137887d1725a8887e573a39781d41f645e74763e0e567170226b2342" Dec 11 14:09:50 crc kubenswrapper[5050]: I1211 14:09:50.288329 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1553db29-21b6-4403-ab72-67c4d725a99d-combined-ca-bundle\") pod \"1553db29-21b6-4403-ab72-67c4d725a99d\" (UID: \"1553db29-21b6-4403-ab72-67c4d725a99d\") " Dec 11 14:09:50 crc kubenswrapper[5050]: I1211 14:09:50.288802 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1553db29-21b6-4403-ab72-67c4d725a99d-internal-tls-certs\") pod \"1553db29-21b6-4403-ab72-67c4d725a99d\" (UID: \"1553db29-21b6-4403-ab72-67c4d725a99d\") " Dec 11 14:09:50 crc kubenswrapper[5050]: I1211 14:09:50.288846 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1553db29-21b6-4403-ab72-67c4d725a99d-logs\") pod \"1553db29-21b6-4403-ab72-67c4d725a99d\" (UID: \"1553db29-21b6-4403-ab72-67c4d725a99d\") " Dec 11 14:09:50 crc kubenswrapper[5050]: I1211 14:09:50.288887 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"1553db29-21b6-4403-ab72-67c4d725a99d\" (UID: \"1553db29-21b6-4403-ab72-67c4d725a99d\") " Dec 11 14:09:50 crc kubenswrapper[5050]: I1211 14:09:50.289038 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l84hn\" (UniqueName: \"kubernetes.io/projected/1553db29-21b6-4403-ab72-67c4d725a99d-kube-api-access-l84hn\") pod \"1553db29-21b6-4403-ab72-67c4d725a99d\" (UID: \"1553db29-21b6-4403-ab72-67c4d725a99d\") " Dec 11 14:09:50 crc kubenswrapper[5050]: I1211 14:09:50.289057 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1553db29-21b6-4403-ab72-67c4d725a99d-httpd-run\") pod \"1553db29-21b6-4403-ab72-67c4d725a99d\" (UID: \"1553db29-21b6-4403-ab72-67c4d725a99d\") " Dec 11 14:09:50 crc kubenswrapper[5050]: I1211 14:09:50.289105 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/1553db29-21b6-4403-ab72-67c4d725a99d-config-data\") pod \"1553db29-21b6-4403-ab72-67c4d725a99d\" (UID: \"1553db29-21b6-4403-ab72-67c4d725a99d\") " Dec 11 14:09:50 crc kubenswrapper[5050]: I1211 14:09:50.289180 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1553db29-21b6-4403-ab72-67c4d725a99d-scripts\") pod \"1553db29-21b6-4403-ab72-67c4d725a99d\" (UID: \"1553db29-21b6-4403-ab72-67c4d725a99d\") " Dec 11 14:09:50 crc kubenswrapper[5050]: I1211 14:09:50.290505 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1553db29-21b6-4403-ab72-67c4d725a99d-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "1553db29-21b6-4403-ab72-67c4d725a99d" (UID: "1553db29-21b6-4403-ab72-67c4d725a99d"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 14:09:50 crc kubenswrapper[5050]: I1211 14:09:50.290621 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1553db29-21b6-4403-ab72-67c4d725a99d-logs" (OuterVolumeSpecName: "logs") pod "1553db29-21b6-4403-ab72-67c4d725a99d" (UID: "1553db29-21b6-4403-ab72-67c4d725a99d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 14:09:50 crc kubenswrapper[5050]: I1211 14:09:50.299455 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1553db29-21b6-4403-ab72-67c4d725a99d-kube-api-access-l84hn" (OuterVolumeSpecName: "kube-api-access-l84hn") pod "1553db29-21b6-4403-ab72-67c4d725a99d" (UID: "1553db29-21b6-4403-ab72-67c4d725a99d"). InnerVolumeSpecName "kube-api-access-l84hn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 14:09:50 crc kubenswrapper[5050]: I1211 14:09:50.305067 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1553db29-21b6-4403-ab72-67c4d725a99d-scripts" (OuterVolumeSpecName: "scripts") pod "1553db29-21b6-4403-ab72-67c4d725a99d" (UID: "1553db29-21b6-4403-ab72-67c4d725a99d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:09:50 crc kubenswrapper[5050]: I1211 14:09:50.313502 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage12-crc" (OuterVolumeSpecName: "glance") pod "1553db29-21b6-4403-ab72-67c4d725a99d" (UID: "1553db29-21b6-4403-ab72-67c4d725a99d"). InnerVolumeSpecName "local-storage12-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 11 14:09:50 crc kubenswrapper[5050]: I1211 14:09:50.333761 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1553db29-21b6-4403-ab72-67c4d725a99d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1553db29-21b6-4403-ab72-67c4d725a99d" (UID: "1553db29-21b6-4403-ab72-67c4d725a99d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:09:50 crc kubenswrapper[5050]: I1211 14:09:50.363720 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1553db29-21b6-4403-ab72-67c4d725a99d-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "1553db29-21b6-4403-ab72-67c4d725a99d" (UID: "1553db29-21b6-4403-ab72-67c4d725a99d"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:09:50 crc kubenswrapper[5050]: I1211 14:09:50.372970 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1553db29-21b6-4403-ab72-67c4d725a99d-config-data" (OuterVolumeSpecName: "config-data") pod "1553db29-21b6-4403-ab72-67c4d725a99d" (UID: "1553db29-21b6-4403-ab72-67c4d725a99d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:09:50 crc kubenswrapper[5050]: I1211 14:09:50.391483 5050 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1553db29-21b6-4403-ab72-67c4d725a99d-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 11 14:09:50 crc kubenswrapper[5050]: I1211 14:09:50.391523 5050 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1553db29-21b6-4403-ab72-67c4d725a99d-logs\") on node \"crc\" DevicePath \"\"" Dec 11 14:09:50 crc kubenswrapper[5050]: I1211 14:09:50.391559 5050 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" " Dec 11 14:09:50 crc kubenswrapper[5050]: I1211 14:09:50.391570 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l84hn\" (UniqueName: \"kubernetes.io/projected/1553db29-21b6-4403-ab72-67c4d725a99d-kube-api-access-l84hn\") on node \"crc\" DevicePath \"\"" Dec 11 14:09:50 crc kubenswrapper[5050]: I1211 14:09:50.391581 5050 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1553db29-21b6-4403-ab72-67c4d725a99d-httpd-run\") on node \"crc\" DevicePath \"\"" Dec 11 14:09:50 crc kubenswrapper[5050]: I1211 14:09:50.391591 5050 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1553db29-21b6-4403-ab72-67c4d725a99d-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 14:09:50 crc kubenswrapper[5050]: I1211 14:09:50.391599 5050 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1553db29-21b6-4403-ab72-67c4d725a99d-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 14:09:50 crc kubenswrapper[5050]: I1211 14:09:50.391607 5050 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1553db29-21b6-4403-ab72-67c4d725a99d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 14:09:50 crc kubenswrapper[5050]: I1211 14:09:50.413146 5050 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage12-crc" (UniqueName: "kubernetes.io/local-volume/local-storage12-crc") on node "crc" Dec 11 14:09:50 crc kubenswrapper[5050]: I1211 14:09:50.493468 5050 reconciler_common.go:293] "Volume detached for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" DevicePath \"\"" Dec 11 14:09:50 crc kubenswrapper[5050]: I1211 14:09:50.544249 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 11 14:09:50 crc kubenswrapper[5050]: I1211 14:09:50.556263 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 11 14:09:50 crc kubenswrapper[5050]: I1211 14:09:50.571868 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 11 
14:09:50 crc kubenswrapper[5050]: E1211 14:09:50.572523 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1553db29-21b6-4403-ab72-67c4d725a99d" containerName="glance-httpd" Dec 11 14:09:50 crc kubenswrapper[5050]: I1211 14:09:50.572542 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="1553db29-21b6-4403-ab72-67c4d725a99d" containerName="glance-httpd" Dec 11 14:09:50 crc kubenswrapper[5050]: E1211 14:09:50.572565 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1553db29-21b6-4403-ab72-67c4d725a99d" containerName="glance-log" Dec 11 14:09:50 crc kubenswrapper[5050]: I1211 14:09:50.572570 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="1553db29-21b6-4403-ab72-67c4d725a99d" containerName="glance-log" Dec 11 14:09:50 crc kubenswrapper[5050]: I1211 14:09:50.572773 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="1553db29-21b6-4403-ab72-67c4d725a99d" containerName="glance-httpd" Dec 11 14:09:50 crc kubenswrapper[5050]: I1211 14:09:50.572788 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="1553db29-21b6-4403-ab72-67c4d725a99d" containerName="glance-log" Dec 11 14:09:50 crc kubenswrapper[5050]: I1211 14:09:50.574076 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 11 14:09:50 crc kubenswrapper[5050]: I1211 14:09:50.576102 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Dec 11 14:09:50 crc kubenswrapper[5050]: I1211 14:09:50.577098 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Dec 11 14:09:50 crc kubenswrapper[5050]: I1211 14:09:50.584452 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 11 14:09:50 crc kubenswrapper[5050]: I1211 14:09:50.697687 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"38b1a06e-804a-44dc-8e77-a7d8162f38bd\") " pod="openstack/glance-default-internal-api-0" Dec 11 14:09:50 crc kubenswrapper[5050]: I1211 14:09:50.697764 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/38b1a06e-804a-44dc-8e77-a7d8162f38bd-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"38b1a06e-804a-44dc-8e77-a7d8162f38bd\") " pod="openstack/glance-default-internal-api-0" Dec 11 14:09:50 crc kubenswrapper[5050]: I1211 14:09:50.697812 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/38b1a06e-804a-44dc-8e77-a7d8162f38bd-scripts\") pod \"glance-default-internal-api-0\" (UID: \"38b1a06e-804a-44dc-8e77-a7d8162f38bd\") " pod="openstack/glance-default-internal-api-0" Dec 11 14:09:50 crc kubenswrapper[5050]: I1211 14:09:50.697846 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38b1a06e-804a-44dc-8e77-a7d8162f38bd-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"38b1a06e-804a-44dc-8e77-a7d8162f38bd\") " pod="openstack/glance-default-internal-api-0" Dec 11 14:09:50 crc kubenswrapper[5050]: I1211 14:09:50.698024 5050 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bgv24\" (UniqueName: \"kubernetes.io/projected/38b1a06e-804a-44dc-8e77-a7d8162f38bd-kube-api-access-bgv24\") pod \"glance-default-internal-api-0\" (UID: \"38b1a06e-804a-44dc-8e77-a7d8162f38bd\") " pod="openstack/glance-default-internal-api-0" Dec 11 14:09:50 crc kubenswrapper[5050]: I1211 14:09:50.698093 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38b1a06e-804a-44dc-8e77-a7d8162f38bd-config-data\") pod \"glance-default-internal-api-0\" (UID: \"38b1a06e-804a-44dc-8e77-a7d8162f38bd\") " pod="openstack/glance-default-internal-api-0" Dec 11 14:09:50 crc kubenswrapper[5050]: I1211 14:09:50.698220 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/38b1a06e-804a-44dc-8e77-a7d8162f38bd-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"38b1a06e-804a-44dc-8e77-a7d8162f38bd\") " pod="openstack/glance-default-internal-api-0" Dec 11 14:09:50 crc kubenswrapper[5050]: I1211 14:09:50.698252 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/38b1a06e-804a-44dc-8e77-a7d8162f38bd-logs\") pod \"glance-default-internal-api-0\" (UID: \"38b1a06e-804a-44dc-8e77-a7d8162f38bd\") " pod="openstack/glance-default-internal-api-0" Dec 11 14:09:50 crc kubenswrapper[5050]: I1211 14:09:50.800065 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/38b1a06e-804a-44dc-8e77-a7d8162f38bd-scripts\") pod \"glance-default-internal-api-0\" (UID: \"38b1a06e-804a-44dc-8e77-a7d8162f38bd\") " pod="openstack/glance-default-internal-api-0" Dec 11 14:09:50 crc kubenswrapper[5050]: I1211 14:09:50.800145 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38b1a06e-804a-44dc-8e77-a7d8162f38bd-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"38b1a06e-804a-44dc-8e77-a7d8162f38bd\") " pod="openstack/glance-default-internal-api-0" Dec 11 14:09:50 crc kubenswrapper[5050]: I1211 14:09:50.800171 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bgv24\" (UniqueName: \"kubernetes.io/projected/38b1a06e-804a-44dc-8e77-a7d8162f38bd-kube-api-access-bgv24\") pod \"glance-default-internal-api-0\" (UID: \"38b1a06e-804a-44dc-8e77-a7d8162f38bd\") " pod="openstack/glance-default-internal-api-0" Dec 11 14:09:50 crc kubenswrapper[5050]: I1211 14:09:50.800190 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38b1a06e-804a-44dc-8e77-a7d8162f38bd-config-data\") pod \"glance-default-internal-api-0\" (UID: \"38b1a06e-804a-44dc-8e77-a7d8162f38bd\") " pod="openstack/glance-default-internal-api-0" Dec 11 14:09:50 crc kubenswrapper[5050]: I1211 14:09:50.800218 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/38b1a06e-804a-44dc-8e77-a7d8162f38bd-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"38b1a06e-804a-44dc-8e77-a7d8162f38bd\") " pod="openstack/glance-default-internal-api-0" Dec 11 14:09:50 crc kubenswrapper[5050]: I1211 14:09:50.800242 5050 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/38b1a06e-804a-44dc-8e77-a7d8162f38bd-logs\") pod \"glance-default-internal-api-0\" (UID: \"38b1a06e-804a-44dc-8e77-a7d8162f38bd\") " pod="openstack/glance-default-internal-api-0" Dec 11 14:09:50 crc kubenswrapper[5050]: I1211 14:09:50.800310 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"38b1a06e-804a-44dc-8e77-a7d8162f38bd\") " pod="openstack/glance-default-internal-api-0" Dec 11 14:09:50 crc kubenswrapper[5050]: I1211 14:09:50.800349 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/38b1a06e-804a-44dc-8e77-a7d8162f38bd-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"38b1a06e-804a-44dc-8e77-a7d8162f38bd\") " pod="openstack/glance-default-internal-api-0" Dec 11 14:09:50 crc kubenswrapper[5050]: I1211 14:09:50.800711 5050 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"38b1a06e-804a-44dc-8e77-a7d8162f38bd\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/glance-default-internal-api-0" Dec 11 14:09:50 crc kubenswrapper[5050]: I1211 14:09:50.800930 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/38b1a06e-804a-44dc-8e77-a7d8162f38bd-logs\") pod \"glance-default-internal-api-0\" (UID: \"38b1a06e-804a-44dc-8e77-a7d8162f38bd\") " pod="openstack/glance-default-internal-api-0" Dec 11 14:09:50 crc kubenswrapper[5050]: I1211 14:09:50.801061 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/38b1a06e-804a-44dc-8e77-a7d8162f38bd-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"38b1a06e-804a-44dc-8e77-a7d8162f38bd\") " pod="openstack/glance-default-internal-api-0" Dec 11 14:09:50 crc kubenswrapper[5050]: I1211 14:09:50.806552 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/38b1a06e-804a-44dc-8e77-a7d8162f38bd-scripts\") pod \"glance-default-internal-api-0\" (UID: \"38b1a06e-804a-44dc-8e77-a7d8162f38bd\") " pod="openstack/glance-default-internal-api-0" Dec 11 14:09:50 crc kubenswrapper[5050]: I1211 14:09:50.807330 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/38b1a06e-804a-44dc-8e77-a7d8162f38bd-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"38b1a06e-804a-44dc-8e77-a7d8162f38bd\") " pod="openstack/glance-default-internal-api-0" Dec 11 14:09:50 crc kubenswrapper[5050]: I1211 14:09:50.814365 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38b1a06e-804a-44dc-8e77-a7d8162f38bd-config-data\") pod \"glance-default-internal-api-0\" (UID: \"38b1a06e-804a-44dc-8e77-a7d8162f38bd\") " pod="openstack/glance-default-internal-api-0" Dec 11 14:09:50 crc kubenswrapper[5050]: I1211 14:09:50.818940 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/38b1a06e-804a-44dc-8e77-a7d8162f38bd-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"38b1a06e-804a-44dc-8e77-a7d8162f38bd\") " pod="openstack/glance-default-internal-api-0" Dec 11 14:09:50 crc kubenswrapper[5050]: I1211 14:09:50.834985 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bgv24\" (UniqueName: \"kubernetes.io/projected/38b1a06e-804a-44dc-8e77-a7d8162f38bd-kube-api-access-bgv24\") pod \"glance-default-internal-api-0\" (UID: \"38b1a06e-804a-44dc-8e77-a7d8162f38bd\") " pod="openstack/glance-default-internal-api-0" Dec 11 14:09:50 crc kubenswrapper[5050]: I1211 14:09:50.844119 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"glance-default-internal-api-0\" (UID: \"38b1a06e-804a-44dc-8e77-a7d8162f38bd\") " pod="openstack/glance-default-internal-api-0" Dec 11 14:09:50 crc kubenswrapper[5050]: I1211 14:09:50.892329 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 11 14:09:51 crc kubenswrapper[5050]: I1211 14:09:51.285531 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"213cfec6-ba42-4dbc-bd9c-051b193e4577","Type":"ContainerStarted","Data":"e6dc15c8d2821c9d66fa830b0740353eeecafc3c6002947a42891501a4a72dfd"} Dec 11 14:09:51 crc kubenswrapper[5050]: I1211 14:09:51.351846 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.351820464 podStartE2EDuration="4.351820464s" podCreationTimestamp="2025-12-11 14:09:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 14:09:51.342465292 +0000 UTC m=+1282.186187878" watchObservedRunningTime="2025-12-11 14:09:51.351820464 +0000 UTC m=+1282.195543050" Dec 11 14:09:51 crc kubenswrapper[5050]: I1211 14:09:51.559438 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1553db29-21b6-4403-ab72-67c4d725a99d" path="/var/lib/kubelet/pods/1553db29-21b6-4403-ab72-67c4d725a99d/volumes" Dec 11 14:09:51 crc kubenswrapper[5050]: I1211 14:09:51.676254 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 11 14:09:51 crc kubenswrapper[5050]: W1211 14:09:51.692378 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod38b1a06e_804a_44dc_8e77_a7d8162f38bd.slice/crio-649e31b89346341e7c81146ba46ab96d338648e2c60254b84fa6acf787ca62d7 WatchSource:0}: Error finding container 649e31b89346341e7c81146ba46ab96d338648e2c60254b84fa6acf787ca62d7: Status 404 returned error can't find the container with id 649e31b89346341e7c81146ba46ab96d338648e2c60254b84fa6acf787ca62d7 Dec 11 14:09:52 crc kubenswrapper[5050]: I1211 14:09:52.298121 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"38b1a06e-804a-44dc-8e77-a7d8162f38bd","Type":"ContainerStarted","Data":"649e31b89346341e7c81146ba46ab96d338648e2c60254b84fa6acf787ca62d7"} Dec 11 14:09:53 crc kubenswrapper[5050]: I1211 14:09:53.311146 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"38b1a06e-804a-44dc-8e77-a7d8162f38bd","Type":"ContainerStarted","Data":"4a0dd3bf669f7beb6461a99c18d911c75efcecd8fddb14f47b6513fec2bf9b54"} Dec 11 14:09:54 crc kubenswrapper[5050]: I1211 14:09:54.329114 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"38b1a06e-804a-44dc-8e77-a7d8162f38bd","Type":"ContainerStarted","Data":"0a03b521b37d9fb0f7030177a2bb20787cfd84f0c0449bc65282aede0e194ffc"} Dec 11 14:09:54 crc kubenswrapper[5050]: I1211 14:09:54.357600 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.357582197 podStartE2EDuration="4.357582197s" podCreationTimestamp="2025-12-11 14:09:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 14:09:54.352743166 +0000 UTC m=+1285.196465792" watchObservedRunningTime="2025-12-11 14:09:54.357582197 +0000 UTC m=+1285.201304783" Dec 11 14:09:55 crc kubenswrapper[5050]: I1211 14:09:55.983630 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 11 14:09:56 crc kubenswrapper[5050]: I1211 14:09:56.016467 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ba2dbc4b-cd0e-4ed5-a9d3-0c687e96b5cb-scripts\") pod \"ba2dbc4b-cd0e-4ed5-a9d3-0c687e96b5cb\" (UID: \"ba2dbc4b-cd0e-4ed5-a9d3-0c687e96b5cb\") " Dec 11 14:09:56 crc kubenswrapper[5050]: I1211 14:09:56.016936 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba2dbc4b-cd0e-4ed5-a9d3-0c687e96b5cb-combined-ca-bundle\") pod \"ba2dbc4b-cd0e-4ed5-a9d3-0c687e96b5cb\" (UID: \"ba2dbc4b-cd0e-4ed5-a9d3-0c687e96b5cb\") " Dec 11 14:09:56 crc kubenswrapper[5050]: I1211 14:09:56.017049 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba2dbc4b-cd0e-4ed5-a9d3-0c687e96b5cb-config-data\") pod \"ba2dbc4b-cd0e-4ed5-a9d3-0c687e96b5cb\" (UID: \"ba2dbc4b-cd0e-4ed5-a9d3-0c687e96b5cb\") " Dec 11 14:09:56 crc kubenswrapper[5050]: I1211 14:09:56.017145 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ba2dbc4b-cd0e-4ed5-a9d3-0c687e96b5cb-sg-core-conf-yaml\") pod \"ba2dbc4b-cd0e-4ed5-a9d3-0c687e96b5cb\" (UID: \"ba2dbc4b-cd0e-4ed5-a9d3-0c687e96b5cb\") " Dec 11 14:09:56 crc kubenswrapper[5050]: I1211 14:09:56.017216 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ba2dbc4b-cd0e-4ed5-a9d3-0c687e96b5cb-log-httpd\") pod \"ba2dbc4b-cd0e-4ed5-a9d3-0c687e96b5cb\" (UID: \"ba2dbc4b-cd0e-4ed5-a9d3-0c687e96b5cb\") " Dec 11 14:09:56 crc kubenswrapper[5050]: I1211 14:09:56.017296 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lwdzd\" (UniqueName: \"kubernetes.io/projected/ba2dbc4b-cd0e-4ed5-a9d3-0c687e96b5cb-kube-api-access-lwdzd\") pod \"ba2dbc4b-cd0e-4ed5-a9d3-0c687e96b5cb\" (UID: \"ba2dbc4b-cd0e-4ed5-a9d3-0c687e96b5cb\") " Dec 11 14:09:56 crc kubenswrapper[5050]: I1211 14:09:56.017323 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/ba2dbc4b-cd0e-4ed5-a9d3-0c687e96b5cb-run-httpd\") pod \"ba2dbc4b-cd0e-4ed5-a9d3-0c687e96b5cb\" (UID: \"ba2dbc4b-cd0e-4ed5-a9d3-0c687e96b5cb\") " Dec 11 14:09:56 crc kubenswrapper[5050]: I1211 14:09:56.018780 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ba2dbc4b-cd0e-4ed5-a9d3-0c687e96b5cb-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "ba2dbc4b-cd0e-4ed5-a9d3-0c687e96b5cb" (UID: "ba2dbc4b-cd0e-4ed5-a9d3-0c687e96b5cb"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 14:09:56 crc kubenswrapper[5050]: I1211 14:09:56.019327 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ba2dbc4b-cd0e-4ed5-a9d3-0c687e96b5cb-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "ba2dbc4b-cd0e-4ed5-a9d3-0c687e96b5cb" (UID: "ba2dbc4b-cd0e-4ed5-a9d3-0c687e96b5cb"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 14:09:56 crc kubenswrapper[5050]: I1211 14:09:56.025391 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba2dbc4b-cd0e-4ed5-a9d3-0c687e96b5cb-kube-api-access-lwdzd" (OuterVolumeSpecName: "kube-api-access-lwdzd") pod "ba2dbc4b-cd0e-4ed5-a9d3-0c687e96b5cb" (UID: "ba2dbc4b-cd0e-4ed5-a9d3-0c687e96b5cb"). InnerVolumeSpecName "kube-api-access-lwdzd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 14:09:56 crc kubenswrapper[5050]: I1211 14:09:56.031407 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba2dbc4b-cd0e-4ed5-a9d3-0c687e96b5cb-scripts" (OuterVolumeSpecName: "scripts") pod "ba2dbc4b-cd0e-4ed5-a9d3-0c687e96b5cb" (UID: "ba2dbc4b-cd0e-4ed5-a9d3-0c687e96b5cb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:09:56 crc kubenswrapper[5050]: I1211 14:09:56.071297 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba2dbc4b-cd0e-4ed5-a9d3-0c687e96b5cb-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "ba2dbc4b-cd0e-4ed5-a9d3-0c687e96b5cb" (UID: "ba2dbc4b-cd0e-4ed5-a9d3-0c687e96b5cb"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:09:56 crc kubenswrapper[5050]: I1211 14:09:56.105400 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba2dbc4b-cd0e-4ed5-a9d3-0c687e96b5cb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ba2dbc4b-cd0e-4ed5-a9d3-0c687e96b5cb" (UID: "ba2dbc4b-cd0e-4ed5-a9d3-0c687e96b5cb"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:09:56 crc kubenswrapper[5050]: I1211 14:09:56.119769 5050 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ba2dbc4b-cd0e-4ed5-a9d3-0c687e96b5cb-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 11 14:09:56 crc kubenswrapper[5050]: I1211 14:09:56.119816 5050 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ba2dbc4b-cd0e-4ed5-a9d3-0c687e96b5cb-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 11 14:09:56 crc kubenswrapper[5050]: I1211 14:09:56.119827 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lwdzd\" (UniqueName: \"kubernetes.io/projected/ba2dbc4b-cd0e-4ed5-a9d3-0c687e96b5cb-kube-api-access-lwdzd\") on node \"crc\" DevicePath \"\"" Dec 11 14:09:56 crc kubenswrapper[5050]: I1211 14:09:56.119839 5050 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ba2dbc4b-cd0e-4ed5-a9d3-0c687e96b5cb-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 11 14:09:56 crc kubenswrapper[5050]: I1211 14:09:56.119848 5050 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ba2dbc4b-cd0e-4ed5-a9d3-0c687e96b5cb-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 14:09:56 crc kubenswrapper[5050]: I1211 14:09:56.119857 5050 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba2dbc4b-cd0e-4ed5-a9d3-0c687e96b5cb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 14:09:56 crc kubenswrapper[5050]: I1211 14:09:56.122685 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba2dbc4b-cd0e-4ed5-a9d3-0c687e96b5cb-config-data" (OuterVolumeSpecName: "config-data") pod "ba2dbc4b-cd0e-4ed5-a9d3-0c687e96b5cb" (UID: "ba2dbc4b-cd0e-4ed5-a9d3-0c687e96b5cb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:09:56 crc kubenswrapper[5050]: I1211 14:09:56.222758 5050 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba2dbc4b-cd0e-4ed5-a9d3-0c687e96b5cb-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 14:09:56 crc kubenswrapper[5050]: I1211 14:09:56.355303 5050 generic.go:334] "Generic (PLEG): container finished" podID="ba2dbc4b-cd0e-4ed5-a9d3-0c687e96b5cb" containerID="c6b1d62b01b3a95e16aab0b5c64eaac77a278fd8e34b184c55d0da919ec73e08" exitCode=0 Dec 11 14:09:56 crc kubenswrapper[5050]: I1211 14:09:56.355353 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ba2dbc4b-cd0e-4ed5-a9d3-0c687e96b5cb","Type":"ContainerDied","Data":"c6b1d62b01b3a95e16aab0b5c64eaac77a278fd8e34b184c55d0da919ec73e08"} Dec 11 14:09:56 crc kubenswrapper[5050]: I1211 14:09:56.355388 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ba2dbc4b-cd0e-4ed5-a9d3-0c687e96b5cb","Type":"ContainerDied","Data":"843f0fe77130a7df36177153df1844b062fe430f7d4225766cf4817009e50e38"} Dec 11 14:09:56 crc kubenswrapper[5050]: I1211 14:09:56.355410 5050 scope.go:117] "RemoveContainer" containerID="2f7b9b1af97df1ae415218361bb3e588feb3e7a9629979d22e7a56915ee476e7" Dec 11 14:09:56 crc kubenswrapper[5050]: I1211 14:09:56.355470 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 11 14:09:56 crc kubenswrapper[5050]: I1211 14:09:56.380914 5050 scope.go:117] "RemoveContainer" containerID="1691121130a80c5136049af99bae2de374c9ff3322cf7100415a86a9e8096847" Dec 11 14:09:56 crc kubenswrapper[5050]: I1211 14:09:56.396203 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 11 14:09:56 crc kubenswrapper[5050]: I1211 14:09:56.405182 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 11 14:09:56 crc kubenswrapper[5050]: I1211 14:09:56.432217 5050 scope.go:117] "RemoveContainer" containerID="2d91a096501ed0bd59ffc8a882e4c06ddb3132b553d46dbf91a11271a12b973a" Dec 11 14:09:56 crc kubenswrapper[5050]: I1211 14:09:56.432715 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 11 14:09:56 crc kubenswrapper[5050]: E1211 14:09:56.433361 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba2dbc4b-cd0e-4ed5-a9d3-0c687e96b5cb" containerName="ceilometer-notification-agent" Dec 11 14:09:56 crc kubenswrapper[5050]: I1211 14:09:56.433380 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba2dbc4b-cd0e-4ed5-a9d3-0c687e96b5cb" containerName="ceilometer-notification-agent" Dec 11 14:09:56 crc kubenswrapper[5050]: E1211 14:09:56.433400 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba2dbc4b-cd0e-4ed5-a9d3-0c687e96b5cb" containerName="proxy-httpd" Dec 11 14:09:56 crc kubenswrapper[5050]: I1211 14:09:56.433410 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba2dbc4b-cd0e-4ed5-a9d3-0c687e96b5cb" containerName="proxy-httpd" Dec 11 14:09:56 crc kubenswrapper[5050]: E1211 14:09:56.433435 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba2dbc4b-cd0e-4ed5-a9d3-0c687e96b5cb" containerName="ceilometer-central-agent" Dec 11 14:09:56 crc kubenswrapper[5050]: I1211 14:09:56.433443 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba2dbc4b-cd0e-4ed5-a9d3-0c687e96b5cb" containerName="ceilometer-central-agent" Dec 11 14:09:56 crc kubenswrapper[5050]: E1211 14:09:56.433478 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba2dbc4b-cd0e-4ed5-a9d3-0c687e96b5cb" containerName="sg-core" Dec 11 14:09:56 crc kubenswrapper[5050]: I1211 14:09:56.433485 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba2dbc4b-cd0e-4ed5-a9d3-0c687e96b5cb" containerName="sg-core" Dec 11 14:09:56 crc kubenswrapper[5050]: I1211 14:09:56.433716 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba2dbc4b-cd0e-4ed5-a9d3-0c687e96b5cb" containerName="proxy-httpd" Dec 11 14:09:56 crc kubenswrapper[5050]: I1211 14:09:56.433744 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba2dbc4b-cd0e-4ed5-a9d3-0c687e96b5cb" containerName="ceilometer-notification-agent" Dec 11 14:09:56 crc kubenswrapper[5050]: I1211 14:09:56.433765 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba2dbc4b-cd0e-4ed5-a9d3-0c687e96b5cb" containerName="ceilometer-central-agent" Dec 11 14:09:56 crc kubenswrapper[5050]: I1211 14:09:56.433779 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba2dbc4b-cd0e-4ed5-a9d3-0c687e96b5cb" containerName="sg-core" Dec 11 14:09:56 crc kubenswrapper[5050]: I1211 14:09:56.440020 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 11 14:09:56 crc kubenswrapper[5050]: I1211 14:09:56.444096 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 11 14:09:56 crc kubenswrapper[5050]: I1211 14:09:56.444566 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 11 14:09:56 crc kubenswrapper[5050]: I1211 14:09:56.463371 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 11 14:09:56 crc kubenswrapper[5050]: I1211 14:09:56.472633 5050 scope.go:117] "RemoveContainer" containerID="c6b1d62b01b3a95e16aab0b5c64eaac77a278fd8e34b184c55d0da919ec73e08" Dec 11 14:09:56 crc kubenswrapper[5050]: I1211 14:09:56.506428 5050 scope.go:117] "RemoveContainer" containerID="2f7b9b1af97df1ae415218361bb3e588feb3e7a9629979d22e7a56915ee476e7" Dec 11 14:09:56 crc kubenswrapper[5050]: E1211 14:09:56.507266 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2f7b9b1af97df1ae415218361bb3e588feb3e7a9629979d22e7a56915ee476e7\": container with ID starting with 2f7b9b1af97df1ae415218361bb3e588feb3e7a9629979d22e7a56915ee476e7 not found: ID does not exist" containerID="2f7b9b1af97df1ae415218361bb3e588feb3e7a9629979d22e7a56915ee476e7" Dec 11 14:09:56 crc kubenswrapper[5050]: I1211 14:09:56.507380 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f7b9b1af97df1ae415218361bb3e588feb3e7a9629979d22e7a56915ee476e7"} err="failed to get container status \"2f7b9b1af97df1ae415218361bb3e588feb3e7a9629979d22e7a56915ee476e7\": rpc error: code = NotFound desc = could not find container \"2f7b9b1af97df1ae415218361bb3e588feb3e7a9629979d22e7a56915ee476e7\": container with ID starting with 2f7b9b1af97df1ae415218361bb3e588feb3e7a9629979d22e7a56915ee476e7 not found: ID does not exist" Dec 11 14:09:56 crc kubenswrapper[5050]: I1211 14:09:56.507428 5050 scope.go:117] "RemoveContainer" containerID="1691121130a80c5136049af99bae2de374c9ff3322cf7100415a86a9e8096847" Dec 11 14:09:56 crc kubenswrapper[5050]: E1211 14:09:56.508842 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1691121130a80c5136049af99bae2de374c9ff3322cf7100415a86a9e8096847\": container with ID starting with 1691121130a80c5136049af99bae2de374c9ff3322cf7100415a86a9e8096847 not found: ID does not exist" containerID="1691121130a80c5136049af99bae2de374c9ff3322cf7100415a86a9e8096847" Dec 11 14:09:56 crc kubenswrapper[5050]: I1211 14:09:56.508888 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1691121130a80c5136049af99bae2de374c9ff3322cf7100415a86a9e8096847"} err="failed to get container status \"1691121130a80c5136049af99bae2de374c9ff3322cf7100415a86a9e8096847\": rpc error: code = NotFound desc = could not find container \"1691121130a80c5136049af99bae2de374c9ff3322cf7100415a86a9e8096847\": container with ID starting with 1691121130a80c5136049af99bae2de374c9ff3322cf7100415a86a9e8096847 not found: ID does not exist" Dec 11 14:09:56 crc kubenswrapper[5050]: I1211 14:09:56.508928 5050 scope.go:117] "RemoveContainer" containerID="2d91a096501ed0bd59ffc8a882e4c06ddb3132b553d46dbf91a11271a12b973a" Dec 11 14:09:56 crc kubenswrapper[5050]: E1211 14:09:56.509413 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"2d91a096501ed0bd59ffc8a882e4c06ddb3132b553d46dbf91a11271a12b973a\": container with ID starting with 2d91a096501ed0bd59ffc8a882e4c06ddb3132b553d46dbf91a11271a12b973a not found: ID does not exist" containerID="2d91a096501ed0bd59ffc8a882e4c06ddb3132b553d46dbf91a11271a12b973a" Dec 11 14:09:56 crc kubenswrapper[5050]: I1211 14:09:56.509492 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d91a096501ed0bd59ffc8a882e4c06ddb3132b553d46dbf91a11271a12b973a"} err="failed to get container status \"2d91a096501ed0bd59ffc8a882e4c06ddb3132b553d46dbf91a11271a12b973a\": rpc error: code = NotFound desc = could not find container \"2d91a096501ed0bd59ffc8a882e4c06ddb3132b553d46dbf91a11271a12b973a\": container with ID starting with 2d91a096501ed0bd59ffc8a882e4c06ddb3132b553d46dbf91a11271a12b973a not found: ID does not exist" Dec 11 14:09:56 crc kubenswrapper[5050]: I1211 14:09:56.509527 5050 scope.go:117] "RemoveContainer" containerID="c6b1d62b01b3a95e16aab0b5c64eaac77a278fd8e34b184c55d0da919ec73e08" Dec 11 14:09:56 crc kubenswrapper[5050]: E1211 14:09:56.510000 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c6b1d62b01b3a95e16aab0b5c64eaac77a278fd8e34b184c55d0da919ec73e08\": container with ID starting with c6b1d62b01b3a95e16aab0b5c64eaac77a278fd8e34b184c55d0da919ec73e08 not found: ID does not exist" containerID="c6b1d62b01b3a95e16aab0b5c64eaac77a278fd8e34b184c55d0da919ec73e08" Dec 11 14:09:56 crc kubenswrapper[5050]: I1211 14:09:56.510053 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c6b1d62b01b3a95e16aab0b5c64eaac77a278fd8e34b184c55d0da919ec73e08"} err="failed to get container status \"c6b1d62b01b3a95e16aab0b5c64eaac77a278fd8e34b184c55d0da919ec73e08\": rpc error: code = NotFound desc = could not find container \"c6b1d62b01b3a95e16aab0b5c64eaac77a278fd8e34b184c55d0da919ec73e08\": container with ID starting with c6b1d62b01b3a95e16aab0b5c64eaac77a278fd8e34b184c55d0da919ec73e08 not found: ID does not exist" Dec 11 14:09:56 crc kubenswrapper[5050]: I1211 14:09:56.531393 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krnld\" (UniqueName: \"kubernetes.io/projected/1f0b2c51-9300-49e3-be4f-f08bc63c7e8d-kube-api-access-krnld\") pod \"ceilometer-0\" (UID: \"1f0b2c51-9300-49e3-be4f-f08bc63c7e8d\") " pod="openstack/ceilometer-0" Dec 11 14:09:56 crc kubenswrapper[5050]: I1211 14:09:56.531548 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1f0b2c51-9300-49e3-be4f-f08bc63c7e8d-run-httpd\") pod \"ceilometer-0\" (UID: \"1f0b2c51-9300-49e3-be4f-f08bc63c7e8d\") " pod="openstack/ceilometer-0" Dec 11 14:09:56 crc kubenswrapper[5050]: I1211 14:09:56.531586 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f0b2c51-9300-49e3-be4f-f08bc63c7e8d-config-data\") pod \"ceilometer-0\" (UID: \"1f0b2c51-9300-49e3-be4f-f08bc63c7e8d\") " pod="openstack/ceilometer-0" Dec 11 14:09:56 crc kubenswrapper[5050]: I1211 14:09:56.531620 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1f0b2c51-9300-49e3-be4f-f08bc63c7e8d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: 
\"1f0b2c51-9300-49e3-be4f-f08bc63c7e8d\") " pod="openstack/ceilometer-0" Dec 11 14:09:56 crc kubenswrapper[5050]: I1211 14:09:56.531646 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f0b2c51-9300-49e3-be4f-f08bc63c7e8d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1f0b2c51-9300-49e3-be4f-f08bc63c7e8d\") " pod="openstack/ceilometer-0" Dec 11 14:09:56 crc kubenswrapper[5050]: I1211 14:09:56.531801 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1f0b2c51-9300-49e3-be4f-f08bc63c7e8d-scripts\") pod \"ceilometer-0\" (UID: \"1f0b2c51-9300-49e3-be4f-f08bc63c7e8d\") " pod="openstack/ceilometer-0" Dec 11 14:09:56 crc kubenswrapper[5050]: I1211 14:09:56.531886 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1f0b2c51-9300-49e3-be4f-f08bc63c7e8d-log-httpd\") pod \"ceilometer-0\" (UID: \"1f0b2c51-9300-49e3-be4f-f08bc63c7e8d\") " pod="openstack/ceilometer-0" Dec 11 14:09:56 crc kubenswrapper[5050]: I1211 14:09:56.634610 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1f0b2c51-9300-49e3-be4f-f08bc63c7e8d-log-httpd\") pod \"ceilometer-0\" (UID: \"1f0b2c51-9300-49e3-be4f-f08bc63c7e8d\") " pod="openstack/ceilometer-0" Dec 11 14:09:56 crc kubenswrapper[5050]: I1211 14:09:56.634797 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-krnld\" (UniqueName: \"kubernetes.io/projected/1f0b2c51-9300-49e3-be4f-f08bc63c7e8d-kube-api-access-krnld\") pod \"ceilometer-0\" (UID: \"1f0b2c51-9300-49e3-be4f-f08bc63c7e8d\") " pod="openstack/ceilometer-0" Dec 11 14:09:56 crc kubenswrapper[5050]: I1211 14:09:56.634916 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1f0b2c51-9300-49e3-be4f-f08bc63c7e8d-run-httpd\") pod \"ceilometer-0\" (UID: \"1f0b2c51-9300-49e3-be4f-f08bc63c7e8d\") " pod="openstack/ceilometer-0" Dec 11 14:09:56 crc kubenswrapper[5050]: I1211 14:09:56.635297 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1f0b2c51-9300-49e3-be4f-f08bc63c7e8d-log-httpd\") pod \"ceilometer-0\" (UID: \"1f0b2c51-9300-49e3-be4f-f08bc63c7e8d\") " pod="openstack/ceilometer-0" Dec 11 14:09:56 crc kubenswrapper[5050]: I1211 14:09:56.635697 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f0b2c51-9300-49e3-be4f-f08bc63c7e8d-config-data\") pod \"ceilometer-0\" (UID: \"1f0b2c51-9300-49e3-be4f-f08bc63c7e8d\") " pod="openstack/ceilometer-0" Dec 11 14:09:56 crc kubenswrapper[5050]: I1211 14:09:56.635740 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1f0b2c51-9300-49e3-be4f-f08bc63c7e8d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1f0b2c51-9300-49e3-be4f-f08bc63c7e8d\") " pod="openstack/ceilometer-0" Dec 11 14:09:56 crc kubenswrapper[5050]: I1211 14:09:56.635761 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f0b2c51-9300-49e3-be4f-f08bc63c7e8d-combined-ca-bundle\") 
pod \"ceilometer-0\" (UID: \"1f0b2c51-9300-49e3-be4f-f08bc63c7e8d\") " pod="openstack/ceilometer-0" Dec 11 14:09:56 crc kubenswrapper[5050]: I1211 14:09:56.635796 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1f0b2c51-9300-49e3-be4f-f08bc63c7e8d-scripts\") pod \"ceilometer-0\" (UID: \"1f0b2c51-9300-49e3-be4f-f08bc63c7e8d\") " pod="openstack/ceilometer-0" Dec 11 14:09:56 crc kubenswrapper[5050]: I1211 14:09:56.636003 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1f0b2c51-9300-49e3-be4f-f08bc63c7e8d-run-httpd\") pod \"ceilometer-0\" (UID: \"1f0b2c51-9300-49e3-be4f-f08bc63c7e8d\") " pod="openstack/ceilometer-0" Dec 11 14:09:56 crc kubenswrapper[5050]: I1211 14:09:56.639431 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f0b2c51-9300-49e3-be4f-f08bc63c7e8d-config-data\") pod \"ceilometer-0\" (UID: \"1f0b2c51-9300-49e3-be4f-f08bc63c7e8d\") " pod="openstack/ceilometer-0" Dec 11 14:09:56 crc kubenswrapper[5050]: I1211 14:09:56.640163 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1f0b2c51-9300-49e3-be4f-f08bc63c7e8d-scripts\") pod \"ceilometer-0\" (UID: \"1f0b2c51-9300-49e3-be4f-f08bc63c7e8d\") " pod="openstack/ceilometer-0" Dec 11 14:09:56 crc kubenswrapper[5050]: I1211 14:09:56.642656 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1f0b2c51-9300-49e3-be4f-f08bc63c7e8d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1f0b2c51-9300-49e3-be4f-f08bc63c7e8d\") " pod="openstack/ceilometer-0" Dec 11 14:09:56 crc kubenswrapper[5050]: I1211 14:09:56.644163 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f0b2c51-9300-49e3-be4f-f08bc63c7e8d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1f0b2c51-9300-49e3-be4f-f08bc63c7e8d\") " pod="openstack/ceilometer-0" Dec 11 14:09:56 crc kubenswrapper[5050]: I1211 14:09:56.657997 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-krnld\" (UniqueName: \"kubernetes.io/projected/1f0b2c51-9300-49e3-be4f-f08bc63c7e8d-kube-api-access-krnld\") pod \"ceilometer-0\" (UID: \"1f0b2c51-9300-49e3-be4f-f08bc63c7e8d\") " pod="openstack/ceilometer-0" Dec 11 14:09:56 crc kubenswrapper[5050]: I1211 14:09:56.789087 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 11 14:09:57 crc kubenswrapper[5050]: I1211 14:09:57.292743 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 11 14:09:57 crc kubenswrapper[5050]: W1211 14:09:57.303459 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1f0b2c51_9300_49e3_be4f_f08bc63c7e8d.slice/crio-4c25bd92aecc85e8873ca1cdc15920a93598b5e2e37eea6c5f3f696e0ae932b3 WatchSource:0}: Error finding container 4c25bd92aecc85e8873ca1cdc15920a93598b5e2e37eea6c5f3f696e0ae932b3: Status 404 returned error can't find the container with id 4c25bd92aecc85e8873ca1cdc15920a93598b5e2e37eea6c5f3f696e0ae932b3 Dec 11 14:09:57 crc kubenswrapper[5050]: I1211 14:09:57.307267 5050 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 11 14:09:57 crc kubenswrapper[5050]: I1211 14:09:57.373181 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1f0b2c51-9300-49e3-be4f-f08bc63c7e8d","Type":"ContainerStarted","Data":"4c25bd92aecc85e8873ca1cdc15920a93598b5e2e37eea6c5f3f696e0ae932b3"} Dec 11 14:09:57 crc kubenswrapper[5050]: I1211 14:09:57.563243 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba2dbc4b-cd0e-4ed5-a9d3-0c687e96b5cb" path="/var/lib/kubelet/pods/ba2dbc4b-cd0e-4ed5-a9d3-0c687e96b5cb/volumes" Dec 11 14:09:57 crc kubenswrapper[5050]: I1211 14:09:57.815473 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Dec 11 14:09:57 crc kubenswrapper[5050]: I1211 14:09:57.817518 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Dec 11 14:09:57 crc kubenswrapper[5050]: I1211 14:09:57.852891 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Dec 11 14:09:57 crc kubenswrapper[5050]: I1211 14:09:57.878270 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Dec 11 14:09:58 crc kubenswrapper[5050]: I1211 14:09:58.389355 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1f0b2c51-9300-49e3-be4f-f08bc63c7e8d","Type":"ContainerStarted","Data":"2166c87ee01e02647af3563324f2721942e6a80653c81391ee64da6d659b2eb5"} Dec 11 14:09:58 crc kubenswrapper[5050]: I1211 14:09:58.389796 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Dec 11 14:09:58 crc kubenswrapper[5050]: I1211 14:09:58.389814 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Dec 11 14:09:59 crc kubenswrapper[5050]: I1211 14:09:59.403580 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1f0b2c51-9300-49e3-be4f-f08bc63c7e8d","Type":"ContainerStarted","Data":"d1d520dbb4c4e8a4630669d75c825090adfefa6f450c209bead69d7203ca76ea"} Dec 11 14:10:00 crc kubenswrapper[5050]: I1211 14:10:00.415577 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1f0b2c51-9300-49e3-be4f-f08bc63c7e8d","Type":"ContainerStarted","Data":"83e968fbb2ebfcae094253150ed9e812c5c80149a383c66044b8e8058694c6e5"} Dec 11 14:10:00 crc kubenswrapper[5050]: I1211 14:10:00.418862 5050 generic.go:334] "Generic (PLEG): 
container finished" podID="a3f691ef-0109-459b-bbb9-eb08838d3dd0" containerID="a90ab157b3c623d184d49b39ff73cc98df65330afdc42c004c5a9becbab50b27" exitCode=0 Dec 11 14:10:00 crc kubenswrapper[5050]: I1211 14:10:00.418908 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-w884k" event={"ID":"a3f691ef-0109-459b-bbb9-eb08838d3dd0","Type":"ContainerDied","Data":"a90ab157b3c623d184d49b39ff73cc98df65330afdc42c004c5a9becbab50b27"} Dec 11 14:10:00 crc kubenswrapper[5050]: I1211 14:10:00.892791 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Dec 11 14:10:00 crc kubenswrapper[5050]: I1211 14:10:00.892854 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Dec 11 14:10:00 crc kubenswrapper[5050]: I1211 14:10:00.935611 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Dec 11 14:10:00 crc kubenswrapper[5050]: I1211 14:10:00.952072 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Dec 11 14:10:01 crc kubenswrapper[5050]: I1211 14:10:01.023912 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Dec 11 14:10:01 crc kubenswrapper[5050]: I1211 14:10:01.024061 5050 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 11 14:10:01 crc kubenswrapper[5050]: I1211 14:10:01.025158 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Dec 11 14:10:01 crc kubenswrapper[5050]: I1211 14:10:01.435303 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1f0b2c51-9300-49e3-be4f-f08bc63c7e8d","Type":"ContainerStarted","Data":"ba1c4a582b693a6685972a569e5da562813f6fe559ba28e42bc1ec1ce63e4859"} Dec 11 14:10:01 crc kubenswrapper[5050]: I1211 14:10:01.441525 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Dec 11 14:10:01 crc kubenswrapper[5050]: I1211 14:10:01.441590 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Dec 11 14:10:01 crc kubenswrapper[5050]: I1211 14:10:01.441613 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 11 14:10:01 crc kubenswrapper[5050]: I1211 14:10:01.500197 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.012173959 podStartE2EDuration="5.500166683s" podCreationTimestamp="2025-12-11 14:09:56 +0000 UTC" firstStartedPulling="2025-12-11 14:09:57.306906719 +0000 UTC m=+1288.150629305" lastFinishedPulling="2025-12-11 14:10:00.794899443 +0000 UTC m=+1291.638622029" observedRunningTime="2025-12-11 14:10:01.488370595 +0000 UTC m=+1292.332093201" watchObservedRunningTime="2025-12-11 14:10:01.500166683 +0000 UTC m=+1292.343889269" Dec 11 14:10:01 crc kubenswrapper[5050]: I1211 14:10:01.922381 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-w884k" Dec 11 14:10:01 crc kubenswrapper[5050]: I1211 14:10:01.977445 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a3f691ef-0109-459b-bbb9-eb08838d3dd0-scripts\") pod \"a3f691ef-0109-459b-bbb9-eb08838d3dd0\" (UID: \"a3f691ef-0109-459b-bbb9-eb08838d3dd0\") " Dec 11 14:10:01 crc kubenswrapper[5050]: I1211 14:10:01.977728 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h82kj\" (UniqueName: \"kubernetes.io/projected/a3f691ef-0109-459b-bbb9-eb08838d3dd0-kube-api-access-h82kj\") pod \"a3f691ef-0109-459b-bbb9-eb08838d3dd0\" (UID: \"a3f691ef-0109-459b-bbb9-eb08838d3dd0\") " Dec 11 14:10:01 crc kubenswrapper[5050]: I1211 14:10:01.977776 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3f691ef-0109-459b-bbb9-eb08838d3dd0-config-data\") pod \"a3f691ef-0109-459b-bbb9-eb08838d3dd0\" (UID: \"a3f691ef-0109-459b-bbb9-eb08838d3dd0\") " Dec 11 14:10:01 crc kubenswrapper[5050]: I1211 14:10:01.977838 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3f691ef-0109-459b-bbb9-eb08838d3dd0-combined-ca-bundle\") pod \"a3f691ef-0109-459b-bbb9-eb08838d3dd0\" (UID: \"a3f691ef-0109-459b-bbb9-eb08838d3dd0\") " Dec 11 14:10:01 crc kubenswrapper[5050]: I1211 14:10:01.990538 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3f691ef-0109-459b-bbb9-eb08838d3dd0-kube-api-access-h82kj" (OuterVolumeSpecName: "kube-api-access-h82kj") pod "a3f691ef-0109-459b-bbb9-eb08838d3dd0" (UID: "a3f691ef-0109-459b-bbb9-eb08838d3dd0"). InnerVolumeSpecName "kube-api-access-h82kj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 14:10:02 crc kubenswrapper[5050]: I1211 14:10:02.002911 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3f691ef-0109-459b-bbb9-eb08838d3dd0-scripts" (OuterVolumeSpecName: "scripts") pod "a3f691ef-0109-459b-bbb9-eb08838d3dd0" (UID: "a3f691ef-0109-459b-bbb9-eb08838d3dd0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:10:02 crc kubenswrapper[5050]: I1211 14:10:02.049566 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3f691ef-0109-459b-bbb9-eb08838d3dd0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a3f691ef-0109-459b-bbb9-eb08838d3dd0" (UID: "a3f691ef-0109-459b-bbb9-eb08838d3dd0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:10:02 crc kubenswrapper[5050]: I1211 14:10:02.061247 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3f691ef-0109-459b-bbb9-eb08838d3dd0-config-data" (OuterVolumeSpecName: "config-data") pod "a3f691ef-0109-459b-bbb9-eb08838d3dd0" (UID: "a3f691ef-0109-459b-bbb9-eb08838d3dd0"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:10:02 crc kubenswrapper[5050]: I1211 14:10:02.082230 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h82kj\" (UniqueName: \"kubernetes.io/projected/a3f691ef-0109-459b-bbb9-eb08838d3dd0-kube-api-access-h82kj\") on node \"crc\" DevicePath \"\"" Dec 11 14:10:02 crc kubenswrapper[5050]: I1211 14:10:02.082323 5050 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3f691ef-0109-459b-bbb9-eb08838d3dd0-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 14:10:02 crc kubenswrapper[5050]: I1211 14:10:02.082365 5050 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3f691ef-0109-459b-bbb9-eb08838d3dd0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 14:10:02 crc kubenswrapper[5050]: I1211 14:10:02.082379 5050 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a3f691ef-0109-459b-bbb9-eb08838d3dd0-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 14:10:02 crc kubenswrapper[5050]: I1211 14:10:02.447419 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-w884k" event={"ID":"a3f691ef-0109-459b-bbb9-eb08838d3dd0","Type":"ContainerDied","Data":"92bea3d33ffa9bc0efc0c48a3de1e6fdb5d6bb5794c1c6b9d79b0207a31954a8"} Dec 11 14:10:02 crc kubenswrapper[5050]: I1211 14:10:02.447481 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="92bea3d33ffa9bc0efc0c48a3de1e6fdb5d6bb5794c1c6b9d79b0207a31954a8" Dec 11 14:10:02 crc kubenswrapper[5050]: I1211 14:10:02.448038 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-w884k" Dec 11 14:10:02 crc kubenswrapper[5050]: I1211 14:10:02.571134 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 11 14:10:02 crc kubenswrapper[5050]: E1211 14:10:02.571766 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3f691ef-0109-459b-bbb9-eb08838d3dd0" containerName="nova-cell0-conductor-db-sync" Dec 11 14:10:02 crc kubenswrapper[5050]: I1211 14:10:02.571786 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3f691ef-0109-459b-bbb9-eb08838d3dd0" containerName="nova-cell0-conductor-db-sync" Dec 11 14:10:02 crc kubenswrapper[5050]: I1211 14:10:02.571990 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3f691ef-0109-459b-bbb9-eb08838d3dd0" containerName="nova-cell0-conductor-db-sync" Dec 11 14:10:02 crc kubenswrapper[5050]: I1211 14:10:02.572835 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Dec 11 14:10:02 crc kubenswrapper[5050]: I1211 14:10:02.578134 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Dec 11 14:10:02 crc kubenswrapper[5050]: I1211 14:10:02.578295 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-kws6c" Dec 11 14:10:02 crc kubenswrapper[5050]: I1211 14:10:02.599446 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 11 14:10:02 crc kubenswrapper[5050]: I1211 14:10:02.697416 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c13a1ff-0952-40b8-9157-3f1ba8b232c0-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"7c13a1ff-0952-40b8-9157-3f1ba8b232c0\") " pod="openstack/nova-cell0-conductor-0" Dec 11 14:10:02 crc kubenswrapper[5050]: I1211 14:10:02.697537 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c13a1ff-0952-40b8-9157-3f1ba8b232c0-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"7c13a1ff-0952-40b8-9157-3f1ba8b232c0\") " pod="openstack/nova-cell0-conductor-0" Dec 11 14:10:02 crc kubenswrapper[5050]: I1211 14:10:02.697729 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zk7xp\" (UniqueName: \"kubernetes.io/projected/7c13a1ff-0952-40b8-9157-3f1ba8b232c0-kube-api-access-zk7xp\") pod \"nova-cell0-conductor-0\" (UID: \"7c13a1ff-0952-40b8-9157-3f1ba8b232c0\") " pod="openstack/nova-cell0-conductor-0" Dec 11 14:10:02 crc kubenswrapper[5050]: I1211 14:10:02.800307 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c13a1ff-0952-40b8-9157-3f1ba8b232c0-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"7c13a1ff-0952-40b8-9157-3f1ba8b232c0\") " pod="openstack/nova-cell0-conductor-0" Dec 11 14:10:02 crc kubenswrapper[5050]: I1211 14:10:02.800895 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zk7xp\" (UniqueName: \"kubernetes.io/projected/7c13a1ff-0952-40b8-9157-3f1ba8b232c0-kube-api-access-zk7xp\") pod \"nova-cell0-conductor-0\" (UID: \"7c13a1ff-0952-40b8-9157-3f1ba8b232c0\") " pod="openstack/nova-cell0-conductor-0" Dec 11 14:10:02 crc kubenswrapper[5050]: I1211 14:10:02.801152 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c13a1ff-0952-40b8-9157-3f1ba8b232c0-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"7c13a1ff-0952-40b8-9157-3f1ba8b232c0\") " pod="openstack/nova-cell0-conductor-0" Dec 11 14:10:02 crc kubenswrapper[5050]: I1211 14:10:02.805931 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c13a1ff-0952-40b8-9157-3f1ba8b232c0-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"7c13a1ff-0952-40b8-9157-3f1ba8b232c0\") " pod="openstack/nova-cell0-conductor-0" Dec 11 14:10:02 crc kubenswrapper[5050]: I1211 14:10:02.806531 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c13a1ff-0952-40b8-9157-3f1ba8b232c0-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" 
(UID: \"7c13a1ff-0952-40b8-9157-3f1ba8b232c0\") " pod="openstack/nova-cell0-conductor-0" Dec 11 14:10:02 crc kubenswrapper[5050]: I1211 14:10:02.826768 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zk7xp\" (UniqueName: \"kubernetes.io/projected/7c13a1ff-0952-40b8-9157-3f1ba8b232c0-kube-api-access-zk7xp\") pod \"nova-cell0-conductor-0\" (UID: \"7c13a1ff-0952-40b8-9157-3f1ba8b232c0\") " pod="openstack/nova-cell0-conductor-0" Dec 11 14:10:02 crc kubenswrapper[5050]: I1211 14:10:02.901231 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Dec 11 14:10:03 crc kubenswrapper[5050]: I1211 14:10:03.397228 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 11 14:10:03 crc kubenswrapper[5050]: I1211 14:10:03.461992 5050 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 11 14:10:03 crc kubenswrapper[5050]: I1211 14:10:03.462055 5050 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 11 14:10:03 crc kubenswrapper[5050]: I1211 14:10:03.462762 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"7c13a1ff-0952-40b8-9157-3f1ba8b232c0","Type":"ContainerStarted","Data":"73f0c1f35fdb366392825d7ee60c87a2fe20f55cff52c244eaa4032c06b97a77"} Dec 11 14:10:04 crc kubenswrapper[5050]: I1211 14:10:04.066028 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Dec 11 14:10:04 crc kubenswrapper[5050]: I1211 14:10:04.314939 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Dec 11 14:10:04 crc kubenswrapper[5050]: I1211 14:10:04.476058 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"7c13a1ff-0952-40b8-9157-3f1ba8b232c0","Type":"ContainerStarted","Data":"dcd6241bf625d5260eb81af83e760bdddc10ab5c6ec8bd47adbb59c1809dd3e4"} Dec 11 14:10:04 crc kubenswrapper[5050]: I1211 14:10:04.503160 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.503139911 podStartE2EDuration="2.503139911s" podCreationTimestamp="2025-12-11 14:10:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 14:10:04.496761039 +0000 UTC m=+1295.340483625" watchObservedRunningTime="2025-12-11 14:10:04.503139911 +0000 UTC m=+1295.346862497" Dec 11 14:10:05 crc kubenswrapper[5050]: I1211 14:10:05.487985 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Dec 11 14:10:12 crc kubenswrapper[5050]: I1211 14:10:12.928485 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Dec 11 14:10:13 crc kubenswrapper[5050]: I1211 14:10:13.441470 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-clbrx"] Dec 11 14:10:13 crc kubenswrapper[5050]: I1211 14:10:13.443826 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-clbrx" Dec 11 14:10:13 crc kubenswrapper[5050]: I1211 14:10:13.446973 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Dec 11 14:10:13 crc kubenswrapper[5050]: I1211 14:10:13.447887 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Dec 11 14:10:13 crc kubenswrapper[5050]: I1211 14:10:13.454864 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-clbrx"] Dec 11 14:10:13 crc kubenswrapper[5050]: I1211 14:10:13.496142 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-8688x"] Dec 11 14:10:13 crc kubenswrapper[5050]: I1211 14:10:13.500694 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8688x" Dec 11 14:10:13 crc kubenswrapper[5050]: I1211 14:10:13.534150 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8688x"] Dec 11 14:10:13 crc kubenswrapper[5050]: I1211 14:10:13.641898 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Dec 11 14:10:13 crc kubenswrapper[5050]: I1211 14:10:13.643784 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 11 14:10:13 crc kubenswrapper[5050]: I1211 14:10:13.647057 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Dec 11 14:10:13 crc kubenswrapper[5050]: I1211 14:10:13.649198 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b0ac5e2-e200-454b-928e-93ea9b253ca1-utilities\") pod \"redhat-operators-8688x\" (UID: \"9b0ac5e2-e200-454b-928e-93ea9b253ca1\") " pod="openshift-marketplace/redhat-operators-8688x" Dec 11 14:10:13 crc kubenswrapper[5050]: I1211 14:10:13.649260 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84abe132-b822-4b40-9952-7454c24cf3d0-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-clbrx\" (UID: \"84abe132-b822-4b40-9952-7454c24cf3d0\") " pod="openstack/nova-cell0-cell-mapping-clbrx" Dec 11 14:10:13 crc kubenswrapper[5050]: I1211 14:10:13.649318 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02d4b151-446f-42dc-86ec-edb6eb55b289-config-data\") pod \"nova-api-0\" (UID: \"02d4b151-446f-42dc-86ec-edb6eb55b289\") " pod="openstack/nova-api-0" Dec 11 14:10:13 crc kubenswrapper[5050]: I1211 14:10:13.649367 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/84abe132-b822-4b40-9952-7454c24cf3d0-scripts\") pod \"nova-cell0-cell-mapping-clbrx\" (UID: \"84abe132-b822-4b40-9952-7454c24cf3d0\") " pod="openstack/nova-cell0-cell-mapping-clbrx" Dec 11 14:10:13 crc kubenswrapper[5050]: I1211 14:10:13.649405 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/02d4b151-446f-42dc-86ec-edb6eb55b289-logs\") pod \"nova-api-0\" (UID: \"02d4b151-446f-42dc-86ec-edb6eb55b289\") " pod="openstack/nova-api-0" Dec 11 14:10:13 crc kubenswrapper[5050]: I1211 
14:10:13.649492 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l47cj\" (UniqueName: \"kubernetes.io/projected/9b0ac5e2-e200-454b-928e-93ea9b253ca1-kube-api-access-l47cj\") pod \"redhat-operators-8688x\" (UID: \"9b0ac5e2-e200-454b-928e-93ea9b253ca1\") " pod="openshift-marketplace/redhat-operators-8688x" Dec 11 14:10:13 crc kubenswrapper[5050]: I1211 14:10:13.649524 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84abe132-b822-4b40-9952-7454c24cf3d0-config-data\") pod \"nova-cell0-cell-mapping-clbrx\" (UID: \"84abe132-b822-4b40-9952-7454c24cf3d0\") " pod="openstack/nova-cell0-cell-mapping-clbrx" Dec 11 14:10:13 crc kubenswrapper[5050]: I1211 14:10:13.649567 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4hcc\" (UniqueName: \"kubernetes.io/projected/02d4b151-446f-42dc-86ec-edb6eb55b289-kube-api-access-n4hcc\") pod \"nova-api-0\" (UID: \"02d4b151-446f-42dc-86ec-edb6eb55b289\") " pod="openstack/nova-api-0" Dec 11 14:10:13 crc kubenswrapper[5050]: I1211 14:10:13.649595 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmq7b\" (UniqueName: \"kubernetes.io/projected/84abe132-b822-4b40-9952-7454c24cf3d0-kube-api-access-lmq7b\") pod \"nova-cell0-cell-mapping-clbrx\" (UID: \"84abe132-b822-4b40-9952-7454c24cf3d0\") " pod="openstack/nova-cell0-cell-mapping-clbrx" Dec 11 14:10:13 crc kubenswrapper[5050]: I1211 14:10:13.649628 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02d4b151-446f-42dc-86ec-edb6eb55b289-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"02d4b151-446f-42dc-86ec-edb6eb55b289\") " pod="openstack/nova-api-0" Dec 11 14:10:13 crc kubenswrapper[5050]: I1211 14:10:13.649647 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b0ac5e2-e200-454b-928e-93ea9b253ca1-catalog-content\") pod \"redhat-operators-8688x\" (UID: \"9b0ac5e2-e200-454b-928e-93ea9b253ca1\") " pod="openshift-marketplace/redhat-operators-8688x" Dec 11 14:10:13 crc kubenswrapper[5050]: I1211 14:10:13.700705 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 11 14:10:13 crc kubenswrapper[5050]: I1211 14:10:13.723959 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 11 14:10:13 crc kubenswrapper[5050]: I1211 14:10:13.725750 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 11 14:10:13 crc kubenswrapper[5050]: I1211 14:10:13.737365 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Dec 11 14:10:13 crc kubenswrapper[5050]: I1211 14:10:13.752303 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b0ac5e2-e200-454b-928e-93ea9b253ca1-utilities\") pod \"redhat-operators-8688x\" (UID: \"9b0ac5e2-e200-454b-928e-93ea9b253ca1\") " pod="openshift-marketplace/redhat-operators-8688x" Dec 11 14:10:13 crc kubenswrapper[5050]: I1211 14:10:13.752347 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84abe132-b822-4b40-9952-7454c24cf3d0-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-clbrx\" (UID: \"84abe132-b822-4b40-9952-7454c24cf3d0\") " pod="openstack/nova-cell0-cell-mapping-clbrx" Dec 11 14:10:13 crc kubenswrapper[5050]: I1211 14:10:13.752390 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02d4b151-446f-42dc-86ec-edb6eb55b289-config-data\") pod \"nova-api-0\" (UID: \"02d4b151-446f-42dc-86ec-edb6eb55b289\") " pod="openstack/nova-api-0" Dec 11 14:10:13 crc kubenswrapper[5050]: I1211 14:10:13.752415 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/84abe132-b822-4b40-9952-7454c24cf3d0-scripts\") pod \"nova-cell0-cell-mapping-clbrx\" (UID: \"84abe132-b822-4b40-9952-7454c24cf3d0\") " pod="openstack/nova-cell0-cell-mapping-clbrx" Dec 11 14:10:13 crc kubenswrapper[5050]: I1211 14:10:13.752447 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/02d4b151-446f-42dc-86ec-edb6eb55b289-logs\") pod \"nova-api-0\" (UID: \"02d4b151-446f-42dc-86ec-edb6eb55b289\") " pod="openstack/nova-api-0" Dec 11 14:10:13 crc kubenswrapper[5050]: I1211 14:10:13.752464 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/750efff9-baf7-4feb-9dbc-6fc187a4350f-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"750efff9-baf7-4feb-9dbc-6fc187a4350f\") " pod="openstack/nova-cell1-novncproxy-0" Dec 11 14:10:13 crc kubenswrapper[5050]: I1211 14:10:13.752492 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/750efff9-baf7-4feb-9dbc-6fc187a4350f-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"750efff9-baf7-4feb-9dbc-6fc187a4350f\") " pod="openstack/nova-cell1-novncproxy-0" Dec 11 14:10:13 crc kubenswrapper[5050]: I1211 14:10:13.752511 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l47cj\" (UniqueName: \"kubernetes.io/projected/9b0ac5e2-e200-454b-928e-93ea9b253ca1-kube-api-access-l47cj\") pod \"redhat-operators-8688x\" (UID: \"9b0ac5e2-e200-454b-928e-93ea9b253ca1\") " pod="openshift-marketplace/redhat-operators-8688x" Dec 11 14:10:13 crc kubenswrapper[5050]: I1211 14:10:13.752530 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84abe132-b822-4b40-9952-7454c24cf3d0-config-data\") pod \"nova-cell0-cell-mapping-clbrx\" 
(UID: \"84abe132-b822-4b40-9952-7454c24cf3d0\") " pod="openstack/nova-cell0-cell-mapping-clbrx" Dec 11 14:10:13 crc kubenswrapper[5050]: I1211 14:10:13.752556 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n4hcc\" (UniqueName: \"kubernetes.io/projected/02d4b151-446f-42dc-86ec-edb6eb55b289-kube-api-access-n4hcc\") pod \"nova-api-0\" (UID: \"02d4b151-446f-42dc-86ec-edb6eb55b289\") " pod="openstack/nova-api-0" Dec 11 14:10:13 crc kubenswrapper[5050]: I1211 14:10:13.752576 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lmq7b\" (UniqueName: \"kubernetes.io/projected/84abe132-b822-4b40-9952-7454c24cf3d0-kube-api-access-lmq7b\") pod \"nova-cell0-cell-mapping-clbrx\" (UID: \"84abe132-b822-4b40-9952-7454c24cf3d0\") " pod="openstack/nova-cell0-cell-mapping-clbrx" Dec 11 14:10:13 crc kubenswrapper[5050]: I1211 14:10:13.752599 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02d4b151-446f-42dc-86ec-edb6eb55b289-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"02d4b151-446f-42dc-86ec-edb6eb55b289\") " pod="openstack/nova-api-0" Dec 11 14:10:13 crc kubenswrapper[5050]: I1211 14:10:13.752618 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b0ac5e2-e200-454b-928e-93ea9b253ca1-catalog-content\") pod \"redhat-operators-8688x\" (UID: \"9b0ac5e2-e200-454b-928e-93ea9b253ca1\") " pod="openshift-marketplace/redhat-operators-8688x" Dec 11 14:10:13 crc kubenswrapper[5050]: I1211 14:10:13.752652 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b84pc\" (UniqueName: \"kubernetes.io/projected/750efff9-baf7-4feb-9dbc-6fc187a4350f-kube-api-access-b84pc\") pod \"nova-cell1-novncproxy-0\" (UID: \"750efff9-baf7-4feb-9dbc-6fc187a4350f\") " pod="openstack/nova-cell1-novncproxy-0" Dec 11 14:10:13 crc kubenswrapper[5050]: I1211 14:10:13.760493 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/84abe132-b822-4b40-9952-7454c24cf3d0-scripts\") pod \"nova-cell0-cell-mapping-clbrx\" (UID: \"84abe132-b822-4b40-9952-7454c24cf3d0\") " pod="openstack/nova-cell0-cell-mapping-clbrx" Dec 11 14:10:13 crc kubenswrapper[5050]: I1211 14:10:13.763329 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/02d4b151-446f-42dc-86ec-edb6eb55b289-logs\") pod \"nova-api-0\" (UID: \"02d4b151-446f-42dc-86ec-edb6eb55b289\") " pod="openstack/nova-api-0" Dec 11 14:10:13 crc kubenswrapper[5050]: I1211 14:10:13.763859 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b0ac5e2-e200-454b-928e-93ea9b253ca1-utilities\") pod \"redhat-operators-8688x\" (UID: \"9b0ac5e2-e200-454b-928e-93ea9b253ca1\") " pod="openshift-marketplace/redhat-operators-8688x" Dec 11 14:10:13 crc kubenswrapper[5050]: I1211 14:10:13.764092 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b0ac5e2-e200-454b-928e-93ea9b253ca1-catalog-content\") pod \"redhat-operators-8688x\" (UID: \"9b0ac5e2-e200-454b-928e-93ea9b253ca1\") " pod="openshift-marketplace/redhat-operators-8688x" Dec 11 14:10:13 crc kubenswrapper[5050]: I1211 14:10:13.774436 5050 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84abe132-b822-4b40-9952-7454c24cf3d0-config-data\") pod \"nova-cell0-cell-mapping-clbrx\" (UID: \"84abe132-b822-4b40-9952-7454c24cf3d0\") " pod="openstack/nova-cell0-cell-mapping-clbrx" Dec 11 14:10:13 crc kubenswrapper[5050]: I1211 14:10:13.785779 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02d4b151-446f-42dc-86ec-edb6eb55b289-config-data\") pod \"nova-api-0\" (UID: \"02d4b151-446f-42dc-86ec-edb6eb55b289\") " pod="openstack/nova-api-0" Dec 11 14:10:13 crc kubenswrapper[5050]: I1211 14:10:13.785885 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84abe132-b822-4b40-9952-7454c24cf3d0-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-clbrx\" (UID: \"84abe132-b822-4b40-9952-7454c24cf3d0\") " pod="openstack/nova-cell0-cell-mapping-clbrx" Dec 11 14:10:13 crc kubenswrapper[5050]: I1211 14:10:13.792823 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02d4b151-446f-42dc-86ec-edb6eb55b289-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"02d4b151-446f-42dc-86ec-edb6eb55b289\") " pod="openstack/nova-api-0" Dec 11 14:10:13 crc kubenswrapper[5050]: I1211 14:10:13.798401 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 11 14:10:13 crc kubenswrapper[5050]: I1211 14:10:13.799588 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l47cj\" (UniqueName: \"kubernetes.io/projected/9b0ac5e2-e200-454b-928e-93ea9b253ca1-kube-api-access-l47cj\") pod \"redhat-operators-8688x\" (UID: \"9b0ac5e2-e200-454b-928e-93ea9b253ca1\") " pod="openshift-marketplace/redhat-operators-8688x" Dec 11 14:10:13 crc kubenswrapper[5050]: I1211 14:10:13.803812 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lmq7b\" (UniqueName: \"kubernetes.io/projected/84abe132-b822-4b40-9952-7454c24cf3d0-kube-api-access-lmq7b\") pod \"nova-cell0-cell-mapping-clbrx\" (UID: \"84abe132-b822-4b40-9952-7454c24cf3d0\") " pod="openstack/nova-cell0-cell-mapping-clbrx" Dec 11 14:10:13 crc kubenswrapper[5050]: I1211 14:10:13.815241 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n4hcc\" (UniqueName: \"kubernetes.io/projected/02d4b151-446f-42dc-86ec-edb6eb55b289-kube-api-access-n4hcc\") pod \"nova-api-0\" (UID: \"02d4b151-446f-42dc-86ec-edb6eb55b289\") " pod="openstack/nova-api-0" Dec 11 14:10:13 crc kubenswrapper[5050]: I1211 14:10:13.850608 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8688x" Dec 11 14:10:13 crc kubenswrapper[5050]: I1211 14:10:13.854485 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/750efff9-baf7-4feb-9dbc-6fc187a4350f-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"750efff9-baf7-4feb-9dbc-6fc187a4350f\") " pod="openstack/nova-cell1-novncproxy-0" Dec 11 14:10:13 crc kubenswrapper[5050]: I1211 14:10:13.854549 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/750efff9-baf7-4feb-9dbc-6fc187a4350f-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"750efff9-baf7-4feb-9dbc-6fc187a4350f\") " pod="openstack/nova-cell1-novncproxy-0" Dec 11 14:10:13 crc kubenswrapper[5050]: I1211 14:10:13.854611 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b84pc\" (UniqueName: \"kubernetes.io/projected/750efff9-baf7-4feb-9dbc-6fc187a4350f-kube-api-access-b84pc\") pod \"nova-cell1-novncproxy-0\" (UID: \"750efff9-baf7-4feb-9dbc-6fc187a4350f\") " pod="openstack/nova-cell1-novncproxy-0" Dec 11 14:10:13 crc kubenswrapper[5050]: I1211 14:10:13.867171 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/750efff9-baf7-4feb-9dbc-6fc187a4350f-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"750efff9-baf7-4feb-9dbc-6fc187a4350f\") " pod="openstack/nova-cell1-novncproxy-0" Dec 11 14:10:13 crc kubenswrapper[5050]: I1211 14:10:13.885065 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/750efff9-baf7-4feb-9dbc-6fc187a4350f-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"750efff9-baf7-4feb-9dbc-6fc187a4350f\") " pod="openstack/nova-cell1-novncproxy-0" Dec 11 14:10:13 crc kubenswrapper[5050]: I1211 14:10:13.886785 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Dec 11 14:10:13 crc kubenswrapper[5050]: I1211 14:10:13.888740 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 11 14:10:13 crc kubenswrapper[5050]: I1211 14:10:13.895505 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Dec 11 14:10:13 crc kubenswrapper[5050]: I1211 14:10:13.917967 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b84pc\" (UniqueName: \"kubernetes.io/projected/750efff9-baf7-4feb-9dbc-6fc187a4350f-kube-api-access-b84pc\") pod \"nova-cell1-novncproxy-0\" (UID: \"750efff9-baf7-4feb-9dbc-6fc187a4350f\") " pod="openstack/nova-cell1-novncproxy-0" Dec 11 14:10:13 crc kubenswrapper[5050]: I1211 14:10:13.931932 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 11 14:10:13 crc kubenswrapper[5050]: I1211 14:10:13.957525 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a082a10-0ad5-42cc-9cbe-5c15b93a9af2-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3a082a10-0ad5-42cc-9cbe-5c15b93a9af2\") " pod="openstack/nova-metadata-0" Dec 11 14:10:13 crc kubenswrapper[5050]: I1211 14:10:13.957918 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a082a10-0ad5-42cc-9cbe-5c15b93a9af2-config-data\") pod \"nova-metadata-0\" (UID: \"3a082a10-0ad5-42cc-9cbe-5c15b93a9af2\") " pod="openstack/nova-metadata-0" Dec 11 14:10:13 crc kubenswrapper[5050]: I1211 14:10:13.957987 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-299ws\" (UniqueName: \"kubernetes.io/projected/3a082a10-0ad5-42cc-9cbe-5c15b93a9af2-kube-api-access-299ws\") pod \"nova-metadata-0\" (UID: \"3a082a10-0ad5-42cc-9cbe-5c15b93a9af2\") " pod="openstack/nova-metadata-0" Dec 11 14:10:13 crc kubenswrapper[5050]: I1211 14:10:13.958140 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3a082a10-0ad5-42cc-9cbe-5c15b93a9af2-logs\") pod \"nova-metadata-0\" (UID: \"3a082a10-0ad5-42cc-9cbe-5c15b93a9af2\") " pod="openstack/nova-metadata-0" Dec 11 14:10:13 crc kubenswrapper[5050]: I1211 14:10:13.987143 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 11 14:10:14 crc kubenswrapper[5050]: I1211 14:10:14.017109 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Dec 11 14:10:14 crc kubenswrapper[5050]: I1211 14:10:14.018639 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 11 14:10:14 crc kubenswrapper[5050]: I1211 14:10:14.031699 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Dec 11 14:10:14 crc kubenswrapper[5050]: I1211 14:10:14.033650 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 11 14:10:14 crc kubenswrapper[5050]: I1211 14:10:14.054319 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 11 14:10:14 crc kubenswrapper[5050]: I1211 14:10:14.061033 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a082a10-0ad5-42cc-9cbe-5c15b93a9af2-config-data\") pod \"nova-metadata-0\" (UID: \"3a082a10-0ad5-42cc-9cbe-5c15b93a9af2\") " pod="openstack/nova-metadata-0" Dec 11 14:10:14 crc kubenswrapper[5050]: I1211 14:10:14.061108 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-299ws\" (UniqueName: \"kubernetes.io/projected/3a082a10-0ad5-42cc-9cbe-5c15b93a9af2-kube-api-access-299ws\") pod \"nova-metadata-0\" (UID: \"3a082a10-0ad5-42cc-9cbe-5c15b93a9af2\") " pod="openstack/nova-metadata-0" Dec 11 14:10:14 crc kubenswrapper[5050]: I1211 14:10:14.061140 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b570eb96-751f-4200-ba76-1cd02d524b7d-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"b570eb96-751f-4200-ba76-1cd02d524b7d\") " pod="openstack/nova-scheduler-0" Dec 11 14:10:14 crc kubenswrapper[5050]: I1211 14:10:14.061171 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3a082a10-0ad5-42cc-9cbe-5c15b93a9af2-logs\") pod \"nova-metadata-0\" (UID: \"3a082a10-0ad5-42cc-9cbe-5c15b93a9af2\") " pod="openstack/nova-metadata-0" Dec 11 14:10:14 crc kubenswrapper[5050]: I1211 14:10:14.061226 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b570eb96-751f-4200-ba76-1cd02d524b7d-config-data\") pod \"nova-scheduler-0\" (UID: \"b570eb96-751f-4200-ba76-1cd02d524b7d\") " pod="openstack/nova-scheduler-0" Dec 11 14:10:14 crc kubenswrapper[5050]: I1211 14:10:14.061274 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a082a10-0ad5-42cc-9cbe-5c15b93a9af2-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3a082a10-0ad5-42cc-9cbe-5c15b93a9af2\") " pod="openstack/nova-metadata-0" Dec 11 14:10:14 crc kubenswrapper[5050]: I1211 14:10:14.061302 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjrhd\" (UniqueName: \"kubernetes.io/projected/b570eb96-751f-4200-ba76-1cd02d524b7d-kube-api-access-fjrhd\") pod \"nova-scheduler-0\" (UID: \"b570eb96-751f-4200-ba76-1cd02d524b7d\") " pod="openstack/nova-scheduler-0" Dec 11 14:10:14 crc kubenswrapper[5050]: I1211 14:10:14.062163 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3a082a10-0ad5-42cc-9cbe-5c15b93a9af2-logs\") pod \"nova-metadata-0\" (UID: \"3a082a10-0ad5-42cc-9cbe-5c15b93a9af2\") " pod="openstack/nova-metadata-0" Dec 11 14:10:14 crc kubenswrapper[5050]: I1211 14:10:14.069141 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-clbrx" Dec 11 14:10:14 crc kubenswrapper[5050]: I1211 14:10:14.070918 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a082a10-0ad5-42cc-9cbe-5c15b93a9af2-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3a082a10-0ad5-42cc-9cbe-5c15b93a9af2\") " pod="openstack/nova-metadata-0" Dec 11 14:10:14 crc kubenswrapper[5050]: I1211 14:10:14.073259 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a082a10-0ad5-42cc-9cbe-5c15b93a9af2-config-data\") pod \"nova-metadata-0\" (UID: \"3a082a10-0ad5-42cc-9cbe-5c15b93a9af2\") " pod="openstack/nova-metadata-0" Dec 11 14:10:14 crc kubenswrapper[5050]: I1211 14:10:14.083686 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-299ws\" (UniqueName: \"kubernetes.io/projected/3a082a10-0ad5-42cc-9cbe-5c15b93a9af2-kube-api-access-299ws\") pod \"nova-metadata-0\" (UID: \"3a082a10-0ad5-42cc-9cbe-5c15b93a9af2\") " pod="openstack/nova-metadata-0" Dec 11 14:10:14 crc kubenswrapper[5050]: I1211 14:10:14.116519 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-647df7b8c5-xtfbx"] Dec 11 14:10:14 crc kubenswrapper[5050]: I1211 14:10:14.127085 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-647df7b8c5-xtfbx" Dec 11 14:10:14 crc kubenswrapper[5050]: I1211 14:10:14.163788 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b570eb96-751f-4200-ba76-1cd02d524b7d-config-data\") pod \"nova-scheduler-0\" (UID: \"b570eb96-751f-4200-ba76-1cd02d524b7d\") " pod="openstack/nova-scheduler-0" Dec 11 14:10:14 crc kubenswrapper[5050]: I1211 14:10:14.163904 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fjrhd\" (UniqueName: \"kubernetes.io/projected/b570eb96-751f-4200-ba76-1cd02d524b7d-kube-api-access-fjrhd\") pod \"nova-scheduler-0\" (UID: \"b570eb96-751f-4200-ba76-1cd02d524b7d\") " pod="openstack/nova-scheduler-0" Dec 11 14:10:14 crc kubenswrapper[5050]: I1211 14:10:14.163993 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b570eb96-751f-4200-ba76-1cd02d524b7d-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"b570eb96-751f-4200-ba76-1cd02d524b7d\") " pod="openstack/nova-scheduler-0" Dec 11 14:10:14 crc kubenswrapper[5050]: I1211 14:10:14.171796 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b570eb96-751f-4200-ba76-1cd02d524b7d-config-data\") pod \"nova-scheduler-0\" (UID: \"b570eb96-751f-4200-ba76-1cd02d524b7d\") " pod="openstack/nova-scheduler-0" Dec 11 14:10:14 crc kubenswrapper[5050]: I1211 14:10:14.173884 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b570eb96-751f-4200-ba76-1cd02d524b7d-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"b570eb96-751f-4200-ba76-1cd02d524b7d\") " pod="openstack/nova-scheduler-0" Dec 11 14:10:14 crc kubenswrapper[5050]: I1211 14:10:14.203905 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fjrhd\" (UniqueName: 
\"kubernetes.io/projected/b570eb96-751f-4200-ba76-1cd02d524b7d-kube-api-access-fjrhd\") pod \"nova-scheduler-0\" (UID: \"b570eb96-751f-4200-ba76-1cd02d524b7d\") " pod="openstack/nova-scheduler-0" Dec 11 14:10:14 crc kubenswrapper[5050]: I1211 14:10:14.213228 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-647df7b8c5-xtfbx"] Dec 11 14:10:14 crc kubenswrapper[5050]: I1211 14:10:14.278682 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/39bb53b7-f4e4-4645-b635-62a51e5e286e-dns-swift-storage-0\") pod \"dnsmasq-dns-647df7b8c5-xtfbx\" (UID: \"39bb53b7-f4e4-4645-b635-62a51e5e286e\") " pod="openstack/dnsmasq-dns-647df7b8c5-xtfbx" Dec 11 14:10:14 crc kubenswrapper[5050]: I1211 14:10:14.278747 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75kxw\" (UniqueName: \"kubernetes.io/projected/39bb53b7-f4e4-4645-b635-62a51e5e286e-kube-api-access-75kxw\") pod \"dnsmasq-dns-647df7b8c5-xtfbx\" (UID: \"39bb53b7-f4e4-4645-b635-62a51e5e286e\") " pod="openstack/dnsmasq-dns-647df7b8c5-xtfbx" Dec 11 14:10:14 crc kubenswrapper[5050]: I1211 14:10:14.278801 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/39bb53b7-f4e4-4645-b635-62a51e5e286e-ovsdbserver-sb\") pod \"dnsmasq-dns-647df7b8c5-xtfbx\" (UID: \"39bb53b7-f4e4-4645-b635-62a51e5e286e\") " pod="openstack/dnsmasq-dns-647df7b8c5-xtfbx" Dec 11 14:10:14 crc kubenswrapper[5050]: I1211 14:10:14.279196 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/39bb53b7-f4e4-4645-b635-62a51e5e286e-dns-svc\") pod \"dnsmasq-dns-647df7b8c5-xtfbx\" (UID: \"39bb53b7-f4e4-4645-b635-62a51e5e286e\") " pod="openstack/dnsmasq-dns-647df7b8c5-xtfbx" Dec 11 14:10:14 crc kubenswrapper[5050]: I1211 14:10:14.279325 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/39bb53b7-f4e4-4645-b635-62a51e5e286e-config\") pod \"dnsmasq-dns-647df7b8c5-xtfbx\" (UID: \"39bb53b7-f4e4-4645-b635-62a51e5e286e\") " pod="openstack/dnsmasq-dns-647df7b8c5-xtfbx" Dec 11 14:10:14 crc kubenswrapper[5050]: I1211 14:10:14.279407 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/39bb53b7-f4e4-4645-b635-62a51e5e286e-ovsdbserver-nb\") pod \"dnsmasq-dns-647df7b8c5-xtfbx\" (UID: \"39bb53b7-f4e4-4645-b635-62a51e5e286e\") " pod="openstack/dnsmasq-dns-647df7b8c5-xtfbx" Dec 11 14:10:14 crc kubenswrapper[5050]: I1211 14:10:14.372703 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 11 14:10:14 crc kubenswrapper[5050]: I1211 14:10:14.402642 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Dec 11 14:10:14 crc kubenswrapper[5050]: I1211 14:10:14.403981 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/39bb53b7-f4e4-4645-b635-62a51e5e286e-dns-swift-storage-0\") pod \"dnsmasq-dns-647df7b8c5-xtfbx\" (UID: \"39bb53b7-f4e4-4645-b635-62a51e5e286e\") " pod="openstack/dnsmasq-dns-647df7b8c5-xtfbx" Dec 11 14:10:14 crc kubenswrapper[5050]: I1211 14:10:14.404032 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-75kxw\" (UniqueName: \"kubernetes.io/projected/39bb53b7-f4e4-4645-b635-62a51e5e286e-kube-api-access-75kxw\") pod \"dnsmasq-dns-647df7b8c5-xtfbx\" (UID: \"39bb53b7-f4e4-4645-b635-62a51e5e286e\") " pod="openstack/dnsmasq-dns-647df7b8c5-xtfbx" Dec 11 14:10:14 crc kubenswrapper[5050]: I1211 14:10:14.404075 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/39bb53b7-f4e4-4645-b635-62a51e5e286e-ovsdbserver-sb\") pod \"dnsmasq-dns-647df7b8c5-xtfbx\" (UID: \"39bb53b7-f4e4-4645-b635-62a51e5e286e\") " pod="openstack/dnsmasq-dns-647df7b8c5-xtfbx" Dec 11 14:10:14 crc kubenswrapper[5050]: I1211 14:10:14.404177 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/39bb53b7-f4e4-4645-b635-62a51e5e286e-dns-svc\") pod \"dnsmasq-dns-647df7b8c5-xtfbx\" (UID: \"39bb53b7-f4e4-4645-b635-62a51e5e286e\") " pod="openstack/dnsmasq-dns-647df7b8c5-xtfbx" Dec 11 14:10:14 crc kubenswrapper[5050]: I1211 14:10:14.404213 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/39bb53b7-f4e4-4645-b635-62a51e5e286e-config\") pod \"dnsmasq-dns-647df7b8c5-xtfbx\" (UID: \"39bb53b7-f4e4-4645-b635-62a51e5e286e\") " pod="openstack/dnsmasq-dns-647df7b8c5-xtfbx" Dec 11 14:10:14 crc kubenswrapper[5050]: I1211 14:10:14.404245 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/39bb53b7-f4e4-4645-b635-62a51e5e286e-ovsdbserver-nb\") pod \"dnsmasq-dns-647df7b8c5-xtfbx\" (UID: \"39bb53b7-f4e4-4645-b635-62a51e5e286e\") " pod="openstack/dnsmasq-dns-647df7b8c5-xtfbx" Dec 11 14:10:14 crc kubenswrapper[5050]: I1211 14:10:14.499236 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/39bb53b7-f4e4-4645-b635-62a51e5e286e-ovsdbserver-nb\") pod \"dnsmasq-dns-647df7b8c5-xtfbx\" (UID: \"39bb53b7-f4e4-4645-b635-62a51e5e286e\") " pod="openstack/dnsmasq-dns-647df7b8c5-xtfbx" Dec 11 14:10:14 crc kubenswrapper[5050]: I1211 14:10:14.502237 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/39bb53b7-f4e4-4645-b635-62a51e5e286e-dns-swift-storage-0\") pod \"dnsmasq-dns-647df7b8c5-xtfbx\" (UID: \"39bb53b7-f4e4-4645-b635-62a51e5e286e\") " pod="openstack/dnsmasq-dns-647df7b8c5-xtfbx" Dec 11 14:10:14 crc kubenswrapper[5050]: I1211 14:10:14.502649 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-75kxw\" (UniqueName: \"kubernetes.io/projected/39bb53b7-f4e4-4645-b635-62a51e5e286e-kube-api-access-75kxw\") pod \"dnsmasq-dns-647df7b8c5-xtfbx\" (UID: \"39bb53b7-f4e4-4645-b635-62a51e5e286e\") " pod="openstack/dnsmasq-dns-647df7b8c5-xtfbx" 
Dec 11 14:10:14 crc kubenswrapper[5050]: I1211 14:10:14.505801 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/39bb53b7-f4e4-4645-b635-62a51e5e286e-dns-svc\") pod \"dnsmasq-dns-647df7b8c5-xtfbx\" (UID: \"39bb53b7-f4e4-4645-b635-62a51e5e286e\") " pod="openstack/dnsmasq-dns-647df7b8c5-xtfbx" Dec 11 14:10:14 crc kubenswrapper[5050]: I1211 14:10:14.506736 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/39bb53b7-f4e4-4645-b635-62a51e5e286e-ovsdbserver-sb\") pod \"dnsmasq-dns-647df7b8c5-xtfbx\" (UID: \"39bb53b7-f4e4-4645-b635-62a51e5e286e\") " pod="openstack/dnsmasq-dns-647df7b8c5-xtfbx" Dec 11 14:10:14 crc kubenswrapper[5050]: I1211 14:10:14.510871 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/39bb53b7-f4e4-4645-b635-62a51e5e286e-config\") pod \"dnsmasq-dns-647df7b8c5-xtfbx\" (UID: \"39bb53b7-f4e4-4645-b635-62a51e5e286e\") " pod="openstack/dnsmasq-dns-647df7b8c5-xtfbx" Dec 11 14:10:14 crc kubenswrapper[5050]: I1211 14:10:14.636262 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8688x"] Dec 11 14:10:14 crc kubenswrapper[5050]: I1211 14:10:14.805508 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-647df7b8c5-xtfbx" Dec 11 14:10:15 crc kubenswrapper[5050]: I1211 14:10:15.055808 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 11 14:10:15 crc kubenswrapper[5050]: I1211 14:10:15.121165 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 11 14:10:15 crc kubenswrapper[5050]: I1211 14:10:15.133679 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 11 14:10:15 crc kubenswrapper[5050]: W1211 14:10:15.169713 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb570eb96_751f_4200_ba76_1cd02d524b7d.slice/crio-f31d6a638e4a654221a5a66006ea8add931d173b803bdba99bab253a35d30895 WatchSource:0}: Error finding container f31d6a638e4a654221a5a66006ea8add931d173b803bdba99bab253a35d30895: Status 404 returned error can't find the container with id f31d6a638e4a654221a5a66006ea8add931d173b803bdba99bab253a35d30895 Dec 11 14:10:15 crc kubenswrapper[5050]: I1211 14:10:15.183762 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-clbrx"] Dec 11 14:10:15 crc kubenswrapper[5050]: I1211 14:10:15.312543 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-tccfb"] Dec 11 14:10:15 crc kubenswrapper[5050]: I1211 14:10:15.314863 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-tccfb" Dec 11 14:10:15 crc kubenswrapper[5050]: I1211 14:10:15.317404 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Dec 11 14:10:15 crc kubenswrapper[5050]: I1211 14:10:15.317623 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Dec 11 14:10:15 crc kubenswrapper[5050]: I1211 14:10:15.324720 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-tccfb"] Dec 11 14:10:15 crc kubenswrapper[5050]: I1211 14:10:15.363428 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1627fd8-6a34-432b-a4c8-8a39b534f4f2-config-data\") pod \"nova-cell1-conductor-db-sync-tccfb\" (UID: \"e1627fd8-6a34-432b-a4c8-8a39b534f4f2\") " pod="openstack/nova-cell1-conductor-db-sync-tccfb" Dec 11 14:10:15 crc kubenswrapper[5050]: I1211 14:10:15.363540 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1627fd8-6a34-432b-a4c8-8a39b534f4f2-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-tccfb\" (UID: \"e1627fd8-6a34-432b-a4c8-8a39b534f4f2\") " pod="openstack/nova-cell1-conductor-db-sync-tccfb" Dec 11 14:10:15 crc kubenswrapper[5050]: I1211 14:10:15.363643 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e1627fd8-6a34-432b-a4c8-8a39b534f4f2-scripts\") pod \"nova-cell1-conductor-db-sync-tccfb\" (UID: \"e1627fd8-6a34-432b-a4c8-8a39b534f4f2\") " pod="openstack/nova-cell1-conductor-db-sync-tccfb" Dec 11 14:10:15 crc kubenswrapper[5050]: I1211 14:10:15.363950 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kl74j\" (UniqueName: \"kubernetes.io/projected/e1627fd8-6a34-432b-a4c8-8a39b534f4f2-kube-api-access-kl74j\") pod \"nova-cell1-conductor-db-sync-tccfb\" (UID: \"e1627fd8-6a34-432b-a4c8-8a39b534f4f2\") " pod="openstack/nova-cell1-conductor-db-sync-tccfb" Dec 11 14:10:15 crc kubenswrapper[5050]: I1211 14:10:15.419262 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 11 14:10:15 crc kubenswrapper[5050]: I1211 14:10:15.465989 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kl74j\" (UniqueName: \"kubernetes.io/projected/e1627fd8-6a34-432b-a4c8-8a39b534f4f2-kube-api-access-kl74j\") pod \"nova-cell1-conductor-db-sync-tccfb\" (UID: \"e1627fd8-6a34-432b-a4c8-8a39b534f4f2\") " pod="openstack/nova-cell1-conductor-db-sync-tccfb" Dec 11 14:10:15 crc kubenswrapper[5050]: I1211 14:10:15.466140 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1627fd8-6a34-432b-a4c8-8a39b534f4f2-config-data\") pod \"nova-cell1-conductor-db-sync-tccfb\" (UID: \"e1627fd8-6a34-432b-a4c8-8a39b534f4f2\") " pod="openstack/nova-cell1-conductor-db-sync-tccfb" Dec 11 14:10:15 crc kubenswrapper[5050]: I1211 14:10:15.466187 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1627fd8-6a34-432b-a4c8-8a39b534f4f2-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-tccfb\" (UID: 
\"e1627fd8-6a34-432b-a4c8-8a39b534f4f2\") " pod="openstack/nova-cell1-conductor-db-sync-tccfb" Dec 11 14:10:15 crc kubenswrapper[5050]: I1211 14:10:15.466207 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e1627fd8-6a34-432b-a4c8-8a39b534f4f2-scripts\") pod \"nova-cell1-conductor-db-sync-tccfb\" (UID: \"e1627fd8-6a34-432b-a4c8-8a39b534f4f2\") " pod="openstack/nova-cell1-conductor-db-sync-tccfb" Dec 11 14:10:15 crc kubenswrapper[5050]: I1211 14:10:15.471212 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1627fd8-6a34-432b-a4c8-8a39b534f4f2-config-data\") pod \"nova-cell1-conductor-db-sync-tccfb\" (UID: \"e1627fd8-6a34-432b-a4c8-8a39b534f4f2\") " pod="openstack/nova-cell1-conductor-db-sync-tccfb" Dec 11 14:10:15 crc kubenswrapper[5050]: I1211 14:10:15.471556 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e1627fd8-6a34-432b-a4c8-8a39b534f4f2-scripts\") pod \"nova-cell1-conductor-db-sync-tccfb\" (UID: \"e1627fd8-6a34-432b-a4c8-8a39b534f4f2\") " pod="openstack/nova-cell1-conductor-db-sync-tccfb" Dec 11 14:10:15 crc kubenswrapper[5050]: I1211 14:10:15.471959 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1627fd8-6a34-432b-a4c8-8a39b534f4f2-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-tccfb\" (UID: \"e1627fd8-6a34-432b-a4c8-8a39b534f4f2\") " pod="openstack/nova-cell1-conductor-db-sync-tccfb" Dec 11 14:10:15 crc kubenswrapper[5050]: I1211 14:10:15.485452 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kl74j\" (UniqueName: \"kubernetes.io/projected/e1627fd8-6a34-432b-a4c8-8a39b534f4f2-kube-api-access-kl74j\") pod \"nova-cell1-conductor-db-sync-tccfb\" (UID: \"e1627fd8-6a34-432b-a4c8-8a39b534f4f2\") " pod="openstack/nova-cell1-conductor-db-sync-tccfb" Dec 11 14:10:15 crc kubenswrapper[5050]: I1211 14:10:15.576722 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-647df7b8c5-xtfbx"] Dec 11 14:10:15 crc kubenswrapper[5050]: W1211 14:10:15.580053 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod39bb53b7_f4e4_4645_b635_62a51e5e286e.slice/crio-237a7c537ce9dfabb233fb9cbd732508ac7a9d2421a8df880b8790a2edbd07a6 WatchSource:0}: Error finding container 237a7c537ce9dfabb233fb9cbd732508ac7a9d2421a8df880b8790a2edbd07a6: Status 404 returned error can't find the container with id 237a7c537ce9dfabb233fb9cbd732508ac7a9d2421a8df880b8790a2edbd07a6 Dec 11 14:10:15 crc kubenswrapper[5050]: I1211 14:10:15.654664 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-647df7b8c5-xtfbx" event={"ID":"39bb53b7-f4e4-4645-b635-62a51e5e286e","Type":"ContainerStarted","Data":"237a7c537ce9dfabb233fb9cbd732508ac7a9d2421a8df880b8790a2edbd07a6"} Dec 11 14:10:15 crc kubenswrapper[5050]: I1211 14:10:15.656572 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3a082a10-0ad5-42cc-9cbe-5c15b93a9af2","Type":"ContainerStarted","Data":"f9672b25cef4c126900a518bb364cf33174171386ab7e51996d89e6c12144d57"} Dec 11 14:10:15 crc kubenswrapper[5050]: I1211 14:10:15.657983 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8688x" 
event={"ID":"9b0ac5e2-e200-454b-928e-93ea9b253ca1","Type":"ContainerStarted","Data":"1456c38c50d956f8bee9207bc7ce1f56ff928c81d8a7c2f9d03f682e68aaefaf"} Dec 11 14:10:15 crc kubenswrapper[5050]: I1211 14:10:15.660970 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"750efff9-baf7-4feb-9dbc-6fc187a4350f","Type":"ContainerStarted","Data":"cb25cd7b0973e8fe63f3b10bc0211768a4786d3a720cb5de084babd17e1d34cf"} Dec 11 14:10:15 crc kubenswrapper[5050]: I1211 14:10:15.662965 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-clbrx" event={"ID":"84abe132-b822-4b40-9952-7454c24cf3d0","Type":"ContainerStarted","Data":"350ddf787ff2cc8086c0fe755ff94032747ba4490a2e61f427c4c1b449fc2131"} Dec 11 14:10:15 crc kubenswrapper[5050]: I1211 14:10:15.664444 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"b570eb96-751f-4200-ba76-1cd02d524b7d","Type":"ContainerStarted","Data":"f31d6a638e4a654221a5a66006ea8add931d173b803bdba99bab253a35d30895"} Dec 11 14:10:15 crc kubenswrapper[5050]: I1211 14:10:15.666281 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"02d4b151-446f-42dc-86ec-edb6eb55b289","Type":"ContainerStarted","Data":"41373202fc9a7628ee66fef5c62da4e7fa685cf17353b875d23d69bcafb0ab49"} Dec 11 14:10:15 crc kubenswrapper[5050]: I1211 14:10:15.734448 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-tccfb" Dec 11 14:10:16 crc kubenswrapper[5050]: I1211 14:10:16.352912 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-tccfb"] Dec 11 14:10:16 crc kubenswrapper[5050]: I1211 14:10:16.681314 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-tccfb" event={"ID":"e1627fd8-6a34-432b-a4c8-8a39b534f4f2","Type":"ContainerStarted","Data":"54e4f40b20777c3837b0d80b6c36e06ee009ed62c427e7b2303523b7c76726ba"} Dec 11 14:10:17 crc kubenswrapper[5050]: I1211 14:10:17.936094 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 11 14:10:17 crc kubenswrapper[5050]: I1211 14:10:17.952409 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 11 14:10:18 crc kubenswrapper[5050]: I1211 14:10:18.714123 5050 generic.go:334] "Generic (PLEG): container finished" podID="9b0ac5e2-e200-454b-928e-93ea9b253ca1" containerID="bbd9577d16ba312f99b2c9ebad9e02198fcbe52a92b7a05eea66064381895bcb" exitCode=0 Dec 11 14:10:18 crc kubenswrapper[5050]: I1211 14:10:18.714953 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8688x" event={"ID":"9b0ac5e2-e200-454b-928e-93ea9b253ca1","Type":"ContainerDied","Data":"bbd9577d16ba312f99b2c9ebad9e02198fcbe52a92b7a05eea66064381895bcb"} Dec 11 14:10:18 crc kubenswrapper[5050]: I1211 14:10:18.717397 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-clbrx" event={"ID":"84abe132-b822-4b40-9952-7454c24cf3d0","Type":"ContainerStarted","Data":"2e612e8cce6560b98967bcdbebffe901233053bef927a234d71f282f6b712a13"} Dec 11 14:10:18 crc kubenswrapper[5050]: I1211 14:10:18.719892 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-tccfb" 
event={"ID":"e1627fd8-6a34-432b-a4c8-8a39b534f4f2","Type":"ContainerStarted","Data":"5fce3440d274f6239f70451c81cca3a699ec3610161019c9df0fe303fc7d4623"} Dec 11 14:10:18 crc kubenswrapper[5050]: I1211 14:10:18.727670 5050 generic.go:334] "Generic (PLEG): container finished" podID="39bb53b7-f4e4-4645-b635-62a51e5e286e" containerID="b6251f4991aa3171de6238cafba9994014c4ca38da379499874bc838d785ad3a" exitCode=0 Dec 11 14:10:18 crc kubenswrapper[5050]: I1211 14:10:18.727727 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-647df7b8c5-xtfbx" event={"ID":"39bb53b7-f4e4-4645-b635-62a51e5e286e","Type":"ContainerDied","Data":"b6251f4991aa3171de6238cafba9994014c4ca38da379499874bc838d785ad3a"} Dec 11 14:10:18 crc kubenswrapper[5050]: I1211 14:10:18.794292 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-clbrx" podStartSLOduration=5.79425903 podStartE2EDuration="5.79425903s" podCreationTimestamp="2025-12-11 14:10:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 14:10:18.779795859 +0000 UTC m=+1309.623518465" watchObservedRunningTime="2025-12-11 14:10:18.79425903 +0000 UTC m=+1309.637981636" Dec 11 14:10:18 crc kubenswrapper[5050]: I1211 14:10:18.821882 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-tccfb" podStartSLOduration=3.821863065 podStartE2EDuration="3.821863065s" podCreationTimestamp="2025-12-11 14:10:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 14:10:18.798532515 +0000 UTC m=+1309.642255101" watchObservedRunningTime="2025-12-11 14:10:18.821863065 +0000 UTC m=+1309.665585651" Dec 11 14:10:24 crc kubenswrapper[5050]: I1211 14:10:24.801764 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3a082a10-0ad5-42cc-9cbe-5c15b93a9af2","Type":"ContainerStarted","Data":"80982f5107cdb80b0b0b443dd9966b245e84e492c7a44d0cc5646abce649be9b"} Dec 11 14:10:24 crc kubenswrapper[5050]: I1211 14:10:24.802378 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3a082a10-0ad5-42cc-9cbe-5c15b93a9af2","Type":"ContainerStarted","Data":"c3532ca3f2438e45923351146f742d11dd5521aec22d1bd4d7dab86e5bdda261"} Dec 11 14:10:24 crc kubenswrapper[5050]: I1211 14:10:24.802125 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="3a082a10-0ad5-42cc-9cbe-5c15b93a9af2" containerName="nova-metadata-log" containerID="cri-o://c3532ca3f2438e45923351146f742d11dd5521aec22d1bd4d7dab86e5bdda261" gracePeriod=30 Dec 11 14:10:24 crc kubenswrapper[5050]: I1211 14:10:24.802537 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="3a082a10-0ad5-42cc-9cbe-5c15b93a9af2" containerName="nova-metadata-metadata" containerID="cri-o://80982f5107cdb80b0b0b443dd9966b245e84e492c7a44d0cc5646abce649be9b" gracePeriod=30 Dec 11 14:10:24 crc kubenswrapper[5050]: I1211 14:10:24.813612 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8688x" event={"ID":"9b0ac5e2-e200-454b-928e-93ea9b253ca1","Type":"ContainerStarted","Data":"6bd6802f5b5a2e7cebd94d259b8775fc91e55f79de55a23a27b79983a9421b97"} Dec 11 14:10:24 crc kubenswrapper[5050]: I1211 
14:10:24.819981 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"750efff9-baf7-4feb-9dbc-6fc187a4350f","Type":"ContainerStarted","Data":"99dbb11830672c70c03138355bb7cb30550a454f1f7d54f9f59a6f4c5c10154e"} Dec 11 14:10:24 crc kubenswrapper[5050]: I1211 14:10:24.820430 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="750efff9-baf7-4feb-9dbc-6fc187a4350f" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://99dbb11830672c70c03138355bb7cb30550a454f1f7d54f9f59a6f4c5c10154e" gracePeriod=30 Dec 11 14:10:24 crc kubenswrapper[5050]: I1211 14:10:24.832307 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.620018313 podStartE2EDuration="11.832285904s" podCreationTimestamp="2025-12-11 14:10:13 +0000 UTC" firstStartedPulling="2025-12-11 14:10:15.433946553 +0000 UTC m=+1306.277669139" lastFinishedPulling="2025-12-11 14:10:23.646214144 +0000 UTC m=+1314.489936730" observedRunningTime="2025-12-11 14:10:24.830145996 +0000 UTC m=+1315.673868592" watchObservedRunningTime="2025-12-11 14:10:24.832285904 +0000 UTC m=+1315.676008490" Dec 11 14:10:24 crc kubenswrapper[5050]: I1211 14:10:24.843143 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"b570eb96-751f-4200-ba76-1cd02d524b7d","Type":"ContainerStarted","Data":"591d0909f0600652a524dd46d8fcb1a8d7961f6e73aaed3bbe6427afaacc3272"} Dec 11 14:10:24 crc kubenswrapper[5050]: I1211 14:10:24.863178 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"02d4b151-446f-42dc-86ec-edb6eb55b289","Type":"ContainerStarted","Data":"72ccd5065c62b91e0e2d7a84a47b585566841c17b0d60e37adcda3048ec2e523"} Dec 11 14:10:24 crc kubenswrapper[5050]: I1211 14:10:24.863237 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"02d4b151-446f-42dc-86ec-edb6eb55b289","Type":"ContainerStarted","Data":"f85e35cb5c618541ccfdc4317419e03c23ee1aee7c6c6e02e0c1fbc7e4696583"} Dec 11 14:10:24 crc kubenswrapper[5050]: I1211 14:10:24.875341 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-647df7b8c5-xtfbx" event={"ID":"39bb53b7-f4e4-4645-b635-62a51e5e286e","Type":"ContainerStarted","Data":"3d3cc6d7327de8c7178081e8f184853e2d85c3ba9076ba34794a4a7356e835f2"} Dec 11 14:10:24 crc kubenswrapper[5050]: I1211 14:10:24.876664 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-647df7b8c5-xtfbx" Dec 11 14:10:24 crc kubenswrapper[5050]: I1211 14:10:24.909182 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.469115147 podStartE2EDuration="11.909155361s" podCreationTimestamp="2025-12-11 14:10:13 +0000 UTC" firstStartedPulling="2025-12-11 14:10:15.16178369 +0000 UTC m=+1306.005506276" lastFinishedPulling="2025-12-11 14:10:23.601823904 +0000 UTC m=+1314.445546490" observedRunningTime="2025-12-11 14:10:24.886865419 +0000 UTC m=+1315.730588005" watchObservedRunningTime="2025-12-11 14:10:24.909155361 +0000 UTC m=+1315.752877947" Dec 11 14:10:24 crc kubenswrapper[5050]: I1211 14:10:24.925921 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.496350332 podStartE2EDuration="11.925895743s" podCreationTimestamp="2025-12-11 14:10:13 +0000 UTC" 
firstStartedPulling="2025-12-11 14:10:15.172076818 +0000 UTC m=+1306.015799404" lastFinishedPulling="2025-12-11 14:10:23.601622229 +0000 UTC m=+1314.445344815" observedRunningTime="2025-12-11 14:10:24.909634804 +0000 UTC m=+1315.753357380" watchObservedRunningTime="2025-12-11 14:10:24.925895743 +0000 UTC m=+1315.769618329" Dec 11 14:10:24 crc kubenswrapper[5050]: I1211 14:10:24.933863 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.3524977959999998 podStartE2EDuration="11.933848358s" podCreationTimestamp="2025-12-11 14:10:13 +0000 UTC" firstStartedPulling="2025-12-11 14:10:15.064645016 +0000 UTC m=+1305.908367602" lastFinishedPulling="2025-12-11 14:10:23.645995578 +0000 UTC m=+1314.489718164" observedRunningTime="2025-12-11 14:10:24.930562549 +0000 UTC m=+1315.774285155" watchObservedRunningTime="2025-12-11 14:10:24.933848358 +0000 UTC m=+1315.777570944" Dec 11 14:10:25 crc kubenswrapper[5050]: I1211 14:10:25.889694 5050 generic.go:334] "Generic (PLEG): container finished" podID="3a082a10-0ad5-42cc-9cbe-5c15b93a9af2" containerID="c3532ca3f2438e45923351146f742d11dd5521aec22d1bd4d7dab86e5bdda261" exitCode=143 Dec 11 14:10:25 crc kubenswrapper[5050]: I1211 14:10:25.890087 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3a082a10-0ad5-42cc-9cbe-5c15b93a9af2","Type":"ContainerDied","Data":"c3532ca3f2438e45923351146f742d11dd5521aec22d1bd4d7dab86e5bdda261"} Dec 11 14:10:25 crc kubenswrapper[5050]: I1211 14:10:25.892259 5050 generic.go:334] "Generic (PLEG): container finished" podID="9b0ac5e2-e200-454b-928e-93ea9b253ca1" containerID="6bd6802f5b5a2e7cebd94d259b8775fc91e55f79de55a23a27b79983a9421b97" exitCode=0 Dec 11 14:10:25 crc kubenswrapper[5050]: I1211 14:10:25.892429 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8688x" event={"ID":"9b0ac5e2-e200-454b-928e-93ea9b253ca1","Type":"ContainerDied","Data":"6bd6802f5b5a2e7cebd94d259b8775fc91e55f79de55a23a27b79983a9421b97"} Dec 11 14:10:25 crc kubenswrapper[5050]: I1211 14:10:25.915592 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-647df7b8c5-xtfbx" podStartSLOduration=12.915570569 podStartE2EDuration="12.915570569s" podCreationTimestamp="2025-12-11 14:10:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 14:10:24.962869112 +0000 UTC m=+1315.806591708" watchObservedRunningTime="2025-12-11 14:10:25.915570569 +0000 UTC m=+1316.759293155" Dec 11 14:10:26 crc kubenswrapper[5050]: I1211 14:10:26.811210 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Dec 11 14:10:27 crc kubenswrapper[5050]: I1211 14:10:27.920960 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8688x" event={"ID":"9b0ac5e2-e200-454b-928e-93ea9b253ca1","Type":"ContainerStarted","Data":"079efb91a670ffc024f3c1db2218acc3f9af9f0415bd2c0fdbaa469ca4f8e129"} Dec 11 14:10:27 crc kubenswrapper[5050]: I1211 14:10:27.953036 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-8688x" podStartSLOduration=7.575742376 podStartE2EDuration="14.952993359s" podCreationTimestamp="2025-12-11 14:10:13 +0000 UTC" firstStartedPulling="2025-12-11 14:10:19.470635192 +0000 UTC m=+1310.314357778" lastFinishedPulling="2025-12-11 
14:10:26.847886175 +0000 UTC m=+1317.691608761" observedRunningTime="2025-12-11 14:10:27.939651198 +0000 UTC m=+1318.783373784" watchObservedRunningTime="2025-12-11 14:10:27.952993359 +0000 UTC m=+1318.796715945" Dec 11 14:10:28 crc kubenswrapper[5050]: I1211 14:10:28.935735 5050 generic.go:334] "Generic (PLEG): container finished" podID="84abe132-b822-4b40-9952-7454c24cf3d0" containerID="2e612e8cce6560b98967bcdbebffe901233053bef927a234d71f282f6b712a13" exitCode=0 Dec 11 14:10:28 crc kubenswrapper[5050]: I1211 14:10:28.937250 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-clbrx" event={"ID":"84abe132-b822-4b40-9952-7454c24cf3d0","Type":"ContainerDied","Data":"2e612e8cce6560b98967bcdbebffe901233053bef927a234d71f282f6b712a13"} Dec 11 14:10:29 crc kubenswrapper[5050]: I1211 14:10:29.055546 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Dec 11 14:10:29 crc kubenswrapper[5050]: I1211 14:10:29.373284 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 11 14:10:29 crc kubenswrapper[5050]: I1211 14:10:29.373335 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 11 14:10:29 crc kubenswrapper[5050]: I1211 14:10:29.411479 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Dec 11 14:10:29 crc kubenswrapper[5050]: I1211 14:10:29.811256 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-647df7b8c5-xtfbx" Dec 11 14:10:29 crc kubenswrapper[5050]: I1211 14:10:29.928560 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-75dbb546bf-676wg"] Dec 11 14:10:29 crc kubenswrapper[5050]: I1211 14:10:29.928882 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-75dbb546bf-676wg" podUID="5a0a350f-3ea9-4892-9964-47c591420d28" containerName="dnsmasq-dns" containerID="cri-o://05eef7d50464f5c72bd09f6313e1c614797a2a9afc826e468d2ca258effcfbed" gracePeriod=10 Dec 11 14:10:30 crc kubenswrapper[5050]: I1211 14:10:30.523773 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-clbrx" Dec 11 14:10:30 crc kubenswrapper[5050]: I1211 14:10:30.628096 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-75dbb546bf-676wg" Dec 11 14:10:30 crc kubenswrapper[5050]: I1211 14:10:30.701756 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84abe132-b822-4b40-9952-7454c24cf3d0-config-data\") pod \"84abe132-b822-4b40-9952-7454c24cf3d0\" (UID: \"84abe132-b822-4b40-9952-7454c24cf3d0\") " Dec 11 14:10:30 crc kubenswrapper[5050]: I1211 14:10:30.701810 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a0a350f-3ea9-4892-9964-47c591420d28-config\") pod \"5a0a350f-3ea9-4892-9964-47c591420d28\" (UID: \"5a0a350f-3ea9-4892-9964-47c591420d28\") " Dec 11 14:10:30 crc kubenswrapper[5050]: I1211 14:10:30.701833 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84abe132-b822-4b40-9952-7454c24cf3d0-combined-ca-bundle\") pod \"84abe132-b822-4b40-9952-7454c24cf3d0\" (UID: \"84abe132-b822-4b40-9952-7454c24cf3d0\") " Dec 11 14:10:30 crc kubenswrapper[5050]: I1211 14:10:30.701891 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5a0a350f-3ea9-4892-9964-47c591420d28-ovsdbserver-nb\") pod \"5a0a350f-3ea9-4892-9964-47c591420d28\" (UID: \"5a0a350f-3ea9-4892-9964-47c591420d28\") " Dec 11 14:10:30 crc kubenswrapper[5050]: I1211 14:10:30.701923 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5a0a350f-3ea9-4892-9964-47c591420d28-dns-svc\") pod \"5a0a350f-3ea9-4892-9964-47c591420d28\" (UID: \"5a0a350f-3ea9-4892-9964-47c591420d28\") " Dec 11 14:10:30 crc kubenswrapper[5050]: I1211 14:10:30.701968 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-59br4\" (UniqueName: \"kubernetes.io/projected/5a0a350f-3ea9-4892-9964-47c591420d28-kube-api-access-59br4\") pod \"5a0a350f-3ea9-4892-9964-47c591420d28\" (UID: \"5a0a350f-3ea9-4892-9964-47c591420d28\") " Dec 11 14:10:30 crc kubenswrapper[5050]: I1211 14:10:30.702039 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lmq7b\" (UniqueName: \"kubernetes.io/projected/84abe132-b822-4b40-9952-7454c24cf3d0-kube-api-access-lmq7b\") pod \"84abe132-b822-4b40-9952-7454c24cf3d0\" (UID: \"84abe132-b822-4b40-9952-7454c24cf3d0\") " Dec 11 14:10:30 crc kubenswrapper[5050]: I1211 14:10:30.702065 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5a0a350f-3ea9-4892-9964-47c591420d28-ovsdbserver-sb\") pod \"5a0a350f-3ea9-4892-9964-47c591420d28\" (UID: \"5a0a350f-3ea9-4892-9964-47c591420d28\") " Dec 11 14:10:30 crc kubenswrapper[5050]: I1211 14:10:30.702114 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5a0a350f-3ea9-4892-9964-47c591420d28-dns-swift-storage-0\") pod \"5a0a350f-3ea9-4892-9964-47c591420d28\" (UID: \"5a0a350f-3ea9-4892-9964-47c591420d28\") " Dec 11 14:10:30 crc kubenswrapper[5050]: I1211 14:10:30.702169 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/84abe132-b822-4b40-9952-7454c24cf3d0-scripts\") pod \"84abe132-b822-4b40-9952-7454c24cf3d0\" (UID: 
\"84abe132-b822-4b40-9952-7454c24cf3d0\") " Dec 11 14:10:30 crc kubenswrapper[5050]: I1211 14:10:30.708959 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a0a350f-3ea9-4892-9964-47c591420d28-kube-api-access-59br4" (OuterVolumeSpecName: "kube-api-access-59br4") pod "5a0a350f-3ea9-4892-9964-47c591420d28" (UID: "5a0a350f-3ea9-4892-9964-47c591420d28"). InnerVolumeSpecName "kube-api-access-59br4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 14:10:30 crc kubenswrapper[5050]: I1211 14:10:30.710672 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84abe132-b822-4b40-9952-7454c24cf3d0-scripts" (OuterVolumeSpecName: "scripts") pod "84abe132-b822-4b40-9952-7454c24cf3d0" (UID: "84abe132-b822-4b40-9952-7454c24cf3d0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:10:30 crc kubenswrapper[5050]: I1211 14:10:30.731310 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84abe132-b822-4b40-9952-7454c24cf3d0-kube-api-access-lmq7b" (OuterVolumeSpecName: "kube-api-access-lmq7b") pod "84abe132-b822-4b40-9952-7454c24cf3d0" (UID: "84abe132-b822-4b40-9952-7454c24cf3d0"). InnerVolumeSpecName "kube-api-access-lmq7b". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 14:10:30 crc kubenswrapper[5050]: I1211 14:10:30.784071 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84abe132-b822-4b40-9952-7454c24cf3d0-config-data" (OuterVolumeSpecName: "config-data") pod "84abe132-b822-4b40-9952-7454c24cf3d0" (UID: "84abe132-b822-4b40-9952-7454c24cf3d0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:10:30 crc kubenswrapper[5050]: I1211 14:10:30.792274 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84abe132-b822-4b40-9952-7454c24cf3d0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "84abe132-b822-4b40-9952-7454c24cf3d0" (UID: "84abe132-b822-4b40-9952-7454c24cf3d0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:10:30 crc kubenswrapper[5050]: I1211 14:10:30.798000 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a0a350f-3ea9-4892-9964-47c591420d28-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "5a0a350f-3ea9-4892-9964-47c591420d28" (UID: "5a0a350f-3ea9-4892-9964-47c591420d28"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 14:10:30 crc kubenswrapper[5050]: I1211 14:10:30.805935 5050 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/84abe132-b822-4b40-9952-7454c24cf3d0-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 14:10:30 crc kubenswrapper[5050]: I1211 14:10:30.805975 5050 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84abe132-b822-4b40-9952-7454c24cf3d0-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 14:10:30 crc kubenswrapper[5050]: I1211 14:10:30.805990 5050 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84abe132-b822-4b40-9952-7454c24cf3d0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 14:10:30 crc kubenswrapper[5050]: I1211 14:10:30.806003 5050 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5a0a350f-3ea9-4892-9964-47c591420d28-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 11 14:10:30 crc kubenswrapper[5050]: I1211 14:10:30.806029 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-59br4\" (UniqueName: \"kubernetes.io/projected/5a0a350f-3ea9-4892-9964-47c591420d28-kube-api-access-59br4\") on node \"crc\" DevicePath \"\"" Dec 11 14:10:30 crc kubenswrapper[5050]: I1211 14:10:30.806042 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lmq7b\" (UniqueName: \"kubernetes.io/projected/84abe132-b822-4b40-9952-7454c24cf3d0-kube-api-access-lmq7b\") on node \"crc\" DevicePath \"\"" Dec 11 14:10:30 crc kubenswrapper[5050]: I1211 14:10:30.814718 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a0a350f-3ea9-4892-9964-47c591420d28-config" (OuterVolumeSpecName: "config") pod "5a0a350f-3ea9-4892-9964-47c591420d28" (UID: "5a0a350f-3ea9-4892-9964-47c591420d28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 14:10:30 crc kubenswrapper[5050]: I1211 14:10:30.827789 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a0a350f-3ea9-4892-9964-47c591420d28-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "5a0a350f-3ea9-4892-9964-47c591420d28" (UID: "5a0a350f-3ea9-4892-9964-47c591420d28"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 14:10:30 crc kubenswrapper[5050]: I1211 14:10:30.841835 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a0a350f-3ea9-4892-9964-47c591420d28-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5a0a350f-3ea9-4892-9964-47c591420d28" (UID: "5a0a350f-3ea9-4892-9964-47c591420d28"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 14:10:30 crc kubenswrapper[5050]: I1211 14:10:30.854640 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a0a350f-3ea9-4892-9964-47c591420d28-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "5a0a350f-3ea9-4892-9964-47c591420d28" (UID: "5a0a350f-3ea9-4892-9964-47c591420d28"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 14:10:30 crc kubenswrapper[5050]: I1211 14:10:30.907842 5050 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5a0a350f-3ea9-4892-9964-47c591420d28-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 11 14:10:30 crc kubenswrapper[5050]: I1211 14:10:30.908184 5050 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5a0a350f-3ea9-4892-9964-47c591420d28-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 11 14:10:30 crc kubenswrapper[5050]: I1211 14:10:30.908340 5050 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5a0a350f-3ea9-4892-9964-47c591420d28-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 11 14:10:30 crc kubenswrapper[5050]: I1211 14:10:30.908432 5050 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a0a350f-3ea9-4892-9964-47c591420d28-config\") on node \"crc\" DevicePath \"\"" Dec 11 14:10:30 crc kubenswrapper[5050]: I1211 14:10:30.991170 5050 generic.go:334] "Generic (PLEG): container finished" podID="5a0a350f-3ea9-4892-9964-47c591420d28" containerID="05eef7d50464f5c72bd09f6313e1c614797a2a9afc826e468d2ca258effcfbed" exitCode=0 Dec 11 14:10:30 crc kubenswrapper[5050]: I1211 14:10:30.991249 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75dbb546bf-676wg" event={"ID":"5a0a350f-3ea9-4892-9964-47c591420d28","Type":"ContainerDied","Data":"05eef7d50464f5c72bd09f6313e1c614797a2a9afc826e468d2ca258effcfbed"} Dec 11 14:10:30 crc kubenswrapper[5050]: I1211 14:10:30.991291 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75dbb546bf-676wg" event={"ID":"5a0a350f-3ea9-4892-9964-47c591420d28","Type":"ContainerDied","Data":"cc52d54320148961756c4291b12beab28efb71c3a17d1fcdef4252354a1ee3a9"} Dec 11 14:10:30 crc kubenswrapper[5050]: I1211 14:10:30.991313 5050 scope.go:117] "RemoveContainer" containerID="05eef7d50464f5c72bd09f6313e1c614797a2a9afc826e468d2ca258effcfbed" Dec 11 14:10:30 crc kubenswrapper[5050]: I1211 14:10:30.991504 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75dbb546bf-676wg" Dec 11 14:10:31 crc kubenswrapper[5050]: I1211 14:10:31.000446 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-clbrx" event={"ID":"84abe132-b822-4b40-9952-7454c24cf3d0","Type":"ContainerDied","Data":"350ddf787ff2cc8086c0fe755ff94032747ba4490a2e61f427c4c1b449fc2131"} Dec 11 14:10:31 crc kubenswrapper[5050]: I1211 14:10:31.000493 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="350ddf787ff2cc8086c0fe755ff94032747ba4490a2e61f427c4c1b449fc2131" Dec 11 14:10:31 crc kubenswrapper[5050]: I1211 14:10:31.000561 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-clbrx" Dec 11 14:10:31 crc kubenswrapper[5050]: I1211 14:10:31.041061 5050 scope.go:117] "RemoveContainer" containerID="9e88e10d32974fd1d135728b2b562bba9aa1d54734cb028869c1d07e6f96a31d" Dec 11 14:10:31 crc kubenswrapper[5050]: I1211 14:10:31.043363 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-75dbb546bf-676wg"] Dec 11 14:10:31 crc kubenswrapper[5050]: I1211 14:10:31.075405 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-75dbb546bf-676wg"] Dec 11 14:10:31 crc kubenswrapper[5050]: I1211 14:10:31.150619 5050 scope.go:117] "RemoveContainer" containerID="05eef7d50464f5c72bd09f6313e1c614797a2a9afc826e468d2ca258effcfbed" Dec 11 14:10:31 crc kubenswrapper[5050]: E1211 14:10:31.151245 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"05eef7d50464f5c72bd09f6313e1c614797a2a9afc826e468d2ca258effcfbed\": container with ID starting with 05eef7d50464f5c72bd09f6313e1c614797a2a9afc826e468d2ca258effcfbed not found: ID does not exist" containerID="05eef7d50464f5c72bd09f6313e1c614797a2a9afc826e468d2ca258effcfbed" Dec 11 14:10:31 crc kubenswrapper[5050]: I1211 14:10:31.151282 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"05eef7d50464f5c72bd09f6313e1c614797a2a9afc826e468d2ca258effcfbed"} err="failed to get container status \"05eef7d50464f5c72bd09f6313e1c614797a2a9afc826e468d2ca258effcfbed\": rpc error: code = NotFound desc = could not find container \"05eef7d50464f5c72bd09f6313e1c614797a2a9afc826e468d2ca258effcfbed\": container with ID starting with 05eef7d50464f5c72bd09f6313e1c614797a2a9afc826e468d2ca258effcfbed not found: ID does not exist" Dec 11 14:10:31 crc kubenswrapper[5050]: I1211 14:10:31.151306 5050 scope.go:117] "RemoveContainer" containerID="9e88e10d32974fd1d135728b2b562bba9aa1d54734cb028869c1d07e6f96a31d" Dec 11 14:10:31 crc kubenswrapper[5050]: E1211 14:10:31.151508 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9e88e10d32974fd1d135728b2b562bba9aa1d54734cb028869c1d07e6f96a31d\": container with ID starting with 9e88e10d32974fd1d135728b2b562bba9aa1d54734cb028869c1d07e6f96a31d not found: ID does not exist" containerID="9e88e10d32974fd1d135728b2b562bba9aa1d54734cb028869c1d07e6f96a31d" Dec 11 14:10:31 crc kubenswrapper[5050]: I1211 14:10:31.151537 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e88e10d32974fd1d135728b2b562bba9aa1d54734cb028869c1d07e6f96a31d"} err="failed to get container status \"9e88e10d32974fd1d135728b2b562bba9aa1d54734cb028869c1d07e6f96a31d\": rpc error: code = NotFound desc = could not find container \"9e88e10d32974fd1d135728b2b562bba9aa1d54734cb028869c1d07e6f96a31d\": container with ID starting with 9e88e10d32974fd1d135728b2b562bba9aa1d54734cb028869c1d07e6f96a31d not found: ID does not exist" Dec 11 14:10:31 crc kubenswrapper[5050]: I1211 14:10:31.212396 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 11 14:10:31 crc kubenswrapper[5050]: I1211 14:10:31.212702 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="02d4b151-446f-42dc-86ec-edb6eb55b289" containerName="nova-api-log" containerID="cri-o://f85e35cb5c618541ccfdc4317419e03c23ee1aee7c6c6e02e0c1fbc7e4696583" gracePeriod=30 Dec 11 
14:10:31 crc kubenswrapper[5050]: I1211 14:10:31.212743 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="02d4b151-446f-42dc-86ec-edb6eb55b289" containerName="nova-api-api" containerID="cri-o://72ccd5065c62b91e0e2d7a84a47b585566841c17b0d60e37adcda3048ec2e523" gracePeriod=30 Dec 11 14:10:31 crc kubenswrapper[5050]: I1211 14:10:31.313934 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Dec 11 14:10:31 crc kubenswrapper[5050]: I1211 14:10:31.314646 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="b570eb96-751f-4200-ba76-1cd02d524b7d" containerName="nova-scheduler-scheduler" containerID="cri-o://591d0909f0600652a524dd46d8fcb1a8d7961f6e73aaed3bbe6427afaacc3272" gracePeriod=30 Dec 11 14:10:31 crc kubenswrapper[5050]: I1211 14:10:31.560211 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a0a350f-3ea9-4892-9964-47c591420d28" path="/var/lib/kubelet/pods/5a0a350f-3ea9-4892-9964-47c591420d28/volumes" Dec 11 14:10:31 crc kubenswrapper[5050]: I1211 14:10:31.650033 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 11 14:10:31 crc kubenswrapper[5050]: I1211 14:10:31.650939 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="8e60c3c2-6055-4e50-99b6-4a5f08728b17" containerName="kube-state-metrics" containerID="cri-o://54ab5edbcc14c67a1717bfd1d05ad6d09f2905446ab6d06cdf66777d774f523a" gracePeriod=30 Dec 11 14:10:32 crc kubenswrapper[5050]: I1211 14:10:32.015495 5050 generic.go:334] "Generic (PLEG): container finished" podID="02d4b151-446f-42dc-86ec-edb6eb55b289" containerID="f85e35cb5c618541ccfdc4317419e03c23ee1aee7c6c6e02e0c1fbc7e4696583" exitCode=143 Dec 11 14:10:32 crc kubenswrapper[5050]: I1211 14:10:32.015611 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"02d4b151-446f-42dc-86ec-edb6eb55b289","Type":"ContainerDied","Data":"f85e35cb5c618541ccfdc4317419e03c23ee1aee7c6c6e02e0c1fbc7e4696583"} Dec 11 14:10:32 crc kubenswrapper[5050]: I1211 14:10:32.017636 5050 generic.go:334] "Generic (PLEG): container finished" podID="8e60c3c2-6055-4e50-99b6-4a5f08728b17" containerID="54ab5edbcc14c67a1717bfd1d05ad6d09f2905446ab6d06cdf66777d774f523a" exitCode=2 Dec 11 14:10:32 crc kubenswrapper[5050]: I1211 14:10:32.017707 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"8e60c3c2-6055-4e50-99b6-4a5f08728b17","Type":"ContainerDied","Data":"54ab5edbcc14c67a1717bfd1d05ad6d09f2905446ab6d06cdf66777d774f523a"} Dec 11 14:10:33 crc kubenswrapper[5050]: I1211 14:10:33.029239 5050 generic.go:334] "Generic (PLEG): container finished" podID="02d4b151-446f-42dc-86ec-edb6eb55b289" containerID="72ccd5065c62b91e0e2d7a84a47b585566841c17b0d60e37adcda3048ec2e523" exitCode=0 Dec 11 14:10:33 crc kubenswrapper[5050]: I1211 14:10:33.029589 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"02d4b151-446f-42dc-86ec-edb6eb55b289","Type":"ContainerDied","Data":"72ccd5065c62b91e0e2d7a84a47b585566841c17b0d60e37adcda3048ec2e523"} Dec 11 14:10:33 crc kubenswrapper[5050]: I1211 14:10:33.181578 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 11 14:10:33 crc kubenswrapper[5050]: I1211 14:10:33.376549 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-twq74\" (UniqueName: \"kubernetes.io/projected/8e60c3c2-6055-4e50-99b6-4a5f08728b17-kube-api-access-twq74\") pod \"8e60c3c2-6055-4e50-99b6-4a5f08728b17\" (UID: \"8e60c3c2-6055-4e50-99b6-4a5f08728b17\") " Dec 11 14:10:33 crc kubenswrapper[5050]: I1211 14:10:33.389043 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e60c3c2-6055-4e50-99b6-4a5f08728b17-kube-api-access-twq74" (OuterVolumeSpecName: "kube-api-access-twq74") pod "8e60c3c2-6055-4e50-99b6-4a5f08728b17" (UID: "8e60c3c2-6055-4e50-99b6-4a5f08728b17"). InnerVolumeSpecName "kube-api-access-twq74". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 14:10:33 crc kubenswrapper[5050]: I1211 14:10:33.482618 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-twq74\" (UniqueName: \"kubernetes.io/projected/8e60c3c2-6055-4e50-99b6-4a5f08728b17-kube-api-access-twq74\") on node \"crc\" DevicePath \"\"" Dec 11 14:10:33 crc kubenswrapper[5050]: I1211 14:10:33.683158 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 11 14:10:33 crc kubenswrapper[5050]: I1211 14:10:33.789238 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02d4b151-446f-42dc-86ec-edb6eb55b289-config-data\") pod \"02d4b151-446f-42dc-86ec-edb6eb55b289\" (UID: \"02d4b151-446f-42dc-86ec-edb6eb55b289\") " Dec 11 14:10:33 crc kubenswrapper[5050]: I1211 14:10:33.789325 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n4hcc\" (UniqueName: \"kubernetes.io/projected/02d4b151-446f-42dc-86ec-edb6eb55b289-kube-api-access-n4hcc\") pod \"02d4b151-446f-42dc-86ec-edb6eb55b289\" (UID: \"02d4b151-446f-42dc-86ec-edb6eb55b289\") " Dec 11 14:10:33 crc kubenswrapper[5050]: I1211 14:10:33.789370 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/02d4b151-446f-42dc-86ec-edb6eb55b289-logs\") pod \"02d4b151-446f-42dc-86ec-edb6eb55b289\" (UID: \"02d4b151-446f-42dc-86ec-edb6eb55b289\") " Dec 11 14:10:33 crc kubenswrapper[5050]: I1211 14:10:33.789457 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02d4b151-446f-42dc-86ec-edb6eb55b289-combined-ca-bundle\") pod \"02d4b151-446f-42dc-86ec-edb6eb55b289\" (UID: \"02d4b151-446f-42dc-86ec-edb6eb55b289\") " Dec 11 14:10:33 crc kubenswrapper[5050]: I1211 14:10:33.790408 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/02d4b151-446f-42dc-86ec-edb6eb55b289-logs" (OuterVolumeSpecName: "logs") pod "02d4b151-446f-42dc-86ec-edb6eb55b289" (UID: "02d4b151-446f-42dc-86ec-edb6eb55b289"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 14:10:33 crc kubenswrapper[5050]: I1211 14:10:33.796138 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02d4b151-446f-42dc-86ec-edb6eb55b289-kube-api-access-n4hcc" (OuterVolumeSpecName: "kube-api-access-n4hcc") pod "02d4b151-446f-42dc-86ec-edb6eb55b289" (UID: "02d4b151-446f-42dc-86ec-edb6eb55b289"). 
InnerVolumeSpecName "kube-api-access-n4hcc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 14:10:33 crc kubenswrapper[5050]: I1211 14:10:33.820305 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02d4b151-446f-42dc-86ec-edb6eb55b289-config-data" (OuterVolumeSpecName: "config-data") pod "02d4b151-446f-42dc-86ec-edb6eb55b289" (UID: "02d4b151-446f-42dc-86ec-edb6eb55b289"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:10:33 crc kubenswrapper[5050]: I1211 14:10:33.851231 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-8688x" Dec 11 14:10:33 crc kubenswrapper[5050]: I1211 14:10:33.852080 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-8688x" Dec 11 14:10:33 crc kubenswrapper[5050]: I1211 14:10:33.854233 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02d4b151-446f-42dc-86ec-edb6eb55b289-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "02d4b151-446f-42dc-86ec-edb6eb55b289" (UID: "02d4b151-446f-42dc-86ec-edb6eb55b289"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:10:33 crc kubenswrapper[5050]: I1211 14:10:33.892252 5050 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02d4b151-446f-42dc-86ec-edb6eb55b289-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 14:10:33 crc kubenswrapper[5050]: I1211 14:10:33.892294 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n4hcc\" (UniqueName: \"kubernetes.io/projected/02d4b151-446f-42dc-86ec-edb6eb55b289-kube-api-access-n4hcc\") on node \"crc\" DevicePath \"\"" Dec 11 14:10:33 crc kubenswrapper[5050]: I1211 14:10:33.892304 5050 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/02d4b151-446f-42dc-86ec-edb6eb55b289-logs\") on node \"crc\" DevicePath \"\"" Dec 11 14:10:33 crc kubenswrapper[5050]: I1211 14:10:33.892313 5050 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02d4b151-446f-42dc-86ec-edb6eb55b289-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 14:10:34 crc kubenswrapper[5050]: I1211 14:10:34.042983 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"8e60c3c2-6055-4e50-99b6-4a5f08728b17","Type":"ContainerDied","Data":"8f63bf38a8ce475fd832f14043c670eee1242e8c25f318fc0a497be6c8dad4aa"} Dec 11 14:10:34 crc kubenswrapper[5050]: I1211 14:10:34.043057 5050 scope.go:117] "RemoveContainer" containerID="54ab5edbcc14c67a1717bfd1d05ad6d09f2905446ab6d06cdf66777d774f523a" Dec 11 14:10:34 crc kubenswrapper[5050]: I1211 14:10:34.043073 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 11 14:10:34 crc kubenswrapper[5050]: I1211 14:10:34.045784 5050 generic.go:334] "Generic (PLEG): container finished" podID="b570eb96-751f-4200-ba76-1cd02d524b7d" containerID="591d0909f0600652a524dd46d8fcb1a8d7961f6e73aaed3bbe6427afaacc3272" exitCode=0 Dec 11 14:10:34 crc kubenswrapper[5050]: I1211 14:10:34.045836 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"b570eb96-751f-4200-ba76-1cd02d524b7d","Type":"ContainerDied","Data":"591d0909f0600652a524dd46d8fcb1a8d7961f6e73aaed3bbe6427afaacc3272"} Dec 11 14:10:34 crc kubenswrapper[5050]: I1211 14:10:34.056664 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"02d4b151-446f-42dc-86ec-edb6eb55b289","Type":"ContainerDied","Data":"41373202fc9a7628ee66fef5c62da4e7fa685cf17353b875d23d69bcafb0ab49"} Dec 11 14:10:34 crc kubenswrapper[5050]: I1211 14:10:34.056735 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 11 14:10:34 crc kubenswrapper[5050]: I1211 14:10:34.085002 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 11 14:10:34 crc kubenswrapper[5050]: I1211 14:10:34.114285 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 11 14:10:34 crc kubenswrapper[5050]: I1211 14:10:34.124490 5050 scope.go:117] "RemoveContainer" containerID="72ccd5065c62b91e0e2d7a84a47b585566841c17b0d60e37adcda3048ec2e523" Dec 11 14:10:34 crc kubenswrapper[5050]: I1211 14:10:34.128351 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Dec 11 14:10:34 crc kubenswrapper[5050]: E1211 14:10:34.128927 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a0a350f-3ea9-4892-9964-47c591420d28" containerName="init" Dec 11 14:10:34 crc kubenswrapper[5050]: I1211 14:10:34.128942 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a0a350f-3ea9-4892-9964-47c591420d28" containerName="init" Dec 11 14:10:34 crc kubenswrapper[5050]: E1211 14:10:34.128951 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02d4b151-446f-42dc-86ec-edb6eb55b289" containerName="nova-api-log" Dec 11 14:10:34 crc kubenswrapper[5050]: I1211 14:10:34.128958 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="02d4b151-446f-42dc-86ec-edb6eb55b289" containerName="nova-api-log" Dec 11 14:10:34 crc kubenswrapper[5050]: E1211 14:10:34.128965 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84abe132-b822-4b40-9952-7454c24cf3d0" containerName="nova-manage" Dec 11 14:10:34 crc kubenswrapper[5050]: I1211 14:10:34.128972 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="84abe132-b822-4b40-9952-7454c24cf3d0" containerName="nova-manage" Dec 11 14:10:34 crc kubenswrapper[5050]: E1211 14:10:34.128988 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a0a350f-3ea9-4892-9964-47c591420d28" containerName="dnsmasq-dns" Dec 11 14:10:34 crc kubenswrapper[5050]: I1211 14:10:34.128995 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a0a350f-3ea9-4892-9964-47c591420d28" containerName="dnsmasq-dns" Dec 11 14:10:34 crc kubenswrapper[5050]: E1211 14:10:34.129002 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02d4b151-446f-42dc-86ec-edb6eb55b289" containerName="nova-api-api" Dec 11 14:10:34 crc kubenswrapper[5050]: I1211 14:10:34.129027 5050 
state_mem.go:107] "Deleted CPUSet assignment" podUID="02d4b151-446f-42dc-86ec-edb6eb55b289" containerName="nova-api-api" Dec 11 14:10:34 crc kubenswrapper[5050]: E1211 14:10:34.129044 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e60c3c2-6055-4e50-99b6-4a5f08728b17" containerName="kube-state-metrics" Dec 11 14:10:34 crc kubenswrapper[5050]: I1211 14:10:34.129050 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e60c3c2-6055-4e50-99b6-4a5f08728b17" containerName="kube-state-metrics" Dec 11 14:10:34 crc kubenswrapper[5050]: I1211 14:10:34.129313 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="84abe132-b822-4b40-9952-7454c24cf3d0" containerName="nova-manage" Dec 11 14:10:34 crc kubenswrapper[5050]: I1211 14:10:34.129350 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="02d4b151-446f-42dc-86ec-edb6eb55b289" containerName="nova-api-log" Dec 11 14:10:34 crc kubenswrapper[5050]: I1211 14:10:34.129377 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e60c3c2-6055-4e50-99b6-4a5f08728b17" containerName="kube-state-metrics" Dec 11 14:10:34 crc kubenswrapper[5050]: I1211 14:10:34.129390 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a0a350f-3ea9-4892-9964-47c591420d28" containerName="dnsmasq-dns" Dec 11 14:10:34 crc kubenswrapper[5050]: I1211 14:10:34.129399 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="02d4b151-446f-42dc-86ec-edb6eb55b289" containerName="nova-api-api" Dec 11 14:10:34 crc kubenswrapper[5050]: I1211 14:10:34.130292 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 11 14:10:34 crc kubenswrapper[5050]: I1211 14:10:34.133893 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Dec 11 14:10:34 crc kubenswrapper[5050]: I1211 14:10:34.134106 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Dec 11 14:10:34 crc kubenswrapper[5050]: I1211 14:10:34.141300 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 11 14:10:34 crc kubenswrapper[5050]: I1211 14:10:34.207202 5050 scope.go:117] "RemoveContainer" containerID="f85e35cb5c618541ccfdc4317419e03c23ee1aee7c6c6e02e0c1fbc7e4696583" Dec 11 14:10:34 crc kubenswrapper[5050]: I1211 14:10:34.209418 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 11 14:10:34 crc kubenswrapper[5050]: I1211 14:10:34.219321 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Dec 11 14:10:34 crc kubenswrapper[5050]: I1211 14:10:34.227427 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Dec 11 14:10:34 crc kubenswrapper[5050]: I1211 14:10:34.229502 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 11 14:10:34 crc kubenswrapper[5050]: I1211 14:10:34.237893 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Dec 11 14:10:34 crc kubenswrapper[5050]: I1211 14:10:34.271226 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 11 14:10:34 crc kubenswrapper[5050]: I1211 14:10:34.309580 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/6817c570-f6ff-4b08-825a-027a9c8630b0-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"6817c570-f6ff-4b08-825a-027a9c8630b0\") " pod="openstack/kube-state-metrics-0" Dec 11 14:10:34 crc kubenswrapper[5050]: I1211 14:10:34.310099 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trkj4\" (UniqueName: \"kubernetes.io/projected/6817c570-f6ff-4b08-825a-027a9c8630b0-kube-api-access-trkj4\") pod \"kube-state-metrics-0\" (UID: \"6817c570-f6ff-4b08-825a-027a9c8630b0\") " pod="openstack/kube-state-metrics-0" Dec 11 14:10:34 crc kubenswrapper[5050]: I1211 14:10:34.310246 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6817c570-f6ff-4b08-825a-027a9c8630b0-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"6817c570-f6ff-4b08-825a-027a9c8630b0\") " pod="openstack/kube-state-metrics-0" Dec 11 14:10:34 crc kubenswrapper[5050]: I1211 14:10:34.310365 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/6817c570-f6ff-4b08-825a-027a9c8630b0-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"6817c570-f6ff-4b08-825a-027a9c8630b0\") " pod="openstack/kube-state-metrics-0" Dec 11 14:10:34 crc kubenswrapper[5050]: I1211 14:10:34.378560 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 11 14:10:34 crc kubenswrapper[5050]: I1211 14:10:34.378966 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1f0b2c51-9300-49e3-be4f-f08bc63c7e8d" containerName="ceilometer-central-agent" containerID="cri-o://2166c87ee01e02647af3563324f2721942e6a80653c81391ee64da6d659b2eb5" gracePeriod=30 Dec 11 14:10:34 crc kubenswrapper[5050]: I1211 14:10:34.379138 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1f0b2c51-9300-49e3-be4f-f08bc63c7e8d" containerName="sg-core" containerID="cri-o://83e968fbb2ebfcae094253150ed9e812c5c80149a383c66044b8e8058694c6e5" gracePeriod=30 Dec 11 14:10:34 crc kubenswrapper[5050]: I1211 14:10:34.379184 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1f0b2c51-9300-49e3-be4f-f08bc63c7e8d" containerName="ceilometer-notification-agent" containerID="cri-o://d1d520dbb4c4e8a4630669d75c825090adfefa6f450c209bead69d7203ca76ea" gracePeriod=30 Dec 11 14:10:34 crc kubenswrapper[5050]: I1211 14:10:34.379214 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1f0b2c51-9300-49e3-be4f-f08bc63c7e8d" containerName="proxy-httpd" containerID="cri-o://ba1c4a582b693a6685972a569e5da562813f6fe559ba28e42bc1ec1ce63e4859" gracePeriod=30 Dec 
11 14:10:34 crc kubenswrapper[5050]: I1211 14:10:34.412510 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/89a7ecdc-c4ec-4097-a100-c74ebb8b86ff-logs\") pod \"nova-api-0\" (UID: \"89a7ecdc-c4ec-4097-a100-c74ebb8b86ff\") " pod="openstack/nova-api-0" Dec 11 14:10:34 crc kubenswrapper[5050]: I1211 14:10:34.412601 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/6817c570-f6ff-4b08-825a-027a9c8630b0-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"6817c570-f6ff-4b08-825a-027a9c8630b0\") " pod="openstack/kube-state-metrics-0" Dec 11 14:10:34 crc kubenswrapper[5050]: I1211 14:10:34.412661 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89a7ecdc-c4ec-4097-a100-c74ebb8b86ff-config-data\") pod \"nova-api-0\" (UID: \"89a7ecdc-c4ec-4097-a100-c74ebb8b86ff\") " pod="openstack/nova-api-0" Dec 11 14:10:34 crc kubenswrapper[5050]: I1211 14:10:34.412715 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-trkj4\" (UniqueName: \"kubernetes.io/projected/6817c570-f6ff-4b08-825a-027a9c8630b0-kube-api-access-trkj4\") pod \"kube-state-metrics-0\" (UID: \"6817c570-f6ff-4b08-825a-027a9c8630b0\") " pod="openstack/kube-state-metrics-0" Dec 11 14:10:34 crc kubenswrapper[5050]: I1211 14:10:34.412749 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89a7ecdc-c4ec-4097-a100-c74ebb8b86ff-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"89a7ecdc-c4ec-4097-a100-c74ebb8b86ff\") " pod="openstack/nova-api-0" Dec 11 14:10:34 crc kubenswrapper[5050]: I1211 14:10:34.412776 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6817c570-f6ff-4b08-825a-027a9c8630b0-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"6817c570-f6ff-4b08-825a-027a9c8630b0\") " pod="openstack/kube-state-metrics-0" Dec 11 14:10:34 crc kubenswrapper[5050]: I1211 14:10:34.412797 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pztmg\" (UniqueName: \"kubernetes.io/projected/89a7ecdc-c4ec-4097-a100-c74ebb8b86ff-kube-api-access-pztmg\") pod \"nova-api-0\" (UID: \"89a7ecdc-c4ec-4097-a100-c74ebb8b86ff\") " pod="openstack/nova-api-0" Dec 11 14:10:34 crc kubenswrapper[5050]: I1211 14:10:34.412832 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/6817c570-f6ff-4b08-825a-027a9c8630b0-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"6817c570-f6ff-4b08-825a-027a9c8630b0\") " pod="openstack/kube-state-metrics-0" Dec 11 14:10:34 crc kubenswrapper[5050]: I1211 14:10:34.418822 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/6817c570-f6ff-4b08-825a-027a9c8630b0-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"6817c570-f6ff-4b08-825a-027a9c8630b0\") " pod="openstack/kube-state-metrics-0" Dec 11 14:10:34 crc kubenswrapper[5050]: I1211 14:10:34.419867 5050 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6817c570-f6ff-4b08-825a-027a9c8630b0-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"6817c570-f6ff-4b08-825a-027a9c8630b0\") " pod="openstack/kube-state-metrics-0" Dec 11 14:10:34 crc kubenswrapper[5050]: I1211 14:10:34.419987 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/6817c570-f6ff-4b08-825a-027a9c8630b0-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"6817c570-f6ff-4b08-825a-027a9c8630b0\") " pod="openstack/kube-state-metrics-0" Dec 11 14:10:34 crc kubenswrapper[5050]: I1211 14:10:34.432667 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-trkj4\" (UniqueName: \"kubernetes.io/projected/6817c570-f6ff-4b08-825a-027a9c8630b0-kube-api-access-trkj4\") pod \"kube-state-metrics-0\" (UID: \"6817c570-f6ff-4b08-825a-027a9c8630b0\") " pod="openstack/kube-state-metrics-0" Dec 11 14:10:34 crc kubenswrapper[5050]: I1211 14:10:34.461329 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 11 14:10:34 crc kubenswrapper[5050]: I1211 14:10:34.514701 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89a7ecdc-c4ec-4097-a100-c74ebb8b86ff-config-data\") pod \"nova-api-0\" (UID: \"89a7ecdc-c4ec-4097-a100-c74ebb8b86ff\") " pod="openstack/nova-api-0" Dec 11 14:10:34 crc kubenswrapper[5050]: I1211 14:10:34.514825 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89a7ecdc-c4ec-4097-a100-c74ebb8b86ff-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"89a7ecdc-c4ec-4097-a100-c74ebb8b86ff\") " pod="openstack/nova-api-0" Dec 11 14:10:34 crc kubenswrapper[5050]: I1211 14:10:34.514857 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pztmg\" (UniqueName: \"kubernetes.io/projected/89a7ecdc-c4ec-4097-a100-c74ebb8b86ff-kube-api-access-pztmg\") pod \"nova-api-0\" (UID: \"89a7ecdc-c4ec-4097-a100-c74ebb8b86ff\") " pod="openstack/nova-api-0" Dec 11 14:10:34 crc kubenswrapper[5050]: I1211 14:10:34.514914 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/89a7ecdc-c4ec-4097-a100-c74ebb8b86ff-logs\") pod \"nova-api-0\" (UID: \"89a7ecdc-c4ec-4097-a100-c74ebb8b86ff\") " pod="openstack/nova-api-0" Dec 11 14:10:34 crc kubenswrapper[5050]: I1211 14:10:34.515746 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/89a7ecdc-c4ec-4097-a100-c74ebb8b86ff-logs\") pod \"nova-api-0\" (UID: \"89a7ecdc-c4ec-4097-a100-c74ebb8b86ff\") " pod="openstack/nova-api-0" Dec 11 14:10:34 crc kubenswrapper[5050]: I1211 14:10:34.519407 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89a7ecdc-c4ec-4097-a100-c74ebb8b86ff-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"89a7ecdc-c4ec-4097-a100-c74ebb8b86ff\") " pod="openstack/nova-api-0" Dec 11 14:10:34 crc kubenswrapper[5050]: I1211 14:10:34.521582 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89a7ecdc-c4ec-4097-a100-c74ebb8b86ff-config-data\") pod \"nova-api-0\" (UID: 
\"89a7ecdc-c4ec-4097-a100-c74ebb8b86ff\") " pod="openstack/nova-api-0" Dec 11 14:10:34 crc kubenswrapper[5050]: I1211 14:10:34.542170 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pztmg\" (UniqueName: \"kubernetes.io/projected/89a7ecdc-c4ec-4097-a100-c74ebb8b86ff-kube-api-access-pztmg\") pod \"nova-api-0\" (UID: \"89a7ecdc-c4ec-4097-a100-c74ebb8b86ff\") " pod="openstack/nova-api-0" Dec 11 14:10:34 crc kubenswrapper[5050]: I1211 14:10:34.594828 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 11 14:10:34 crc kubenswrapper[5050]: I1211 14:10:34.668896 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 11 14:10:34 crc kubenswrapper[5050]: I1211 14:10:34.820931 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b570eb96-751f-4200-ba76-1cd02d524b7d-config-data\") pod \"b570eb96-751f-4200-ba76-1cd02d524b7d\" (UID: \"b570eb96-751f-4200-ba76-1cd02d524b7d\") " Dec 11 14:10:34 crc kubenswrapper[5050]: I1211 14:10:34.821404 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b570eb96-751f-4200-ba76-1cd02d524b7d-combined-ca-bundle\") pod \"b570eb96-751f-4200-ba76-1cd02d524b7d\" (UID: \"b570eb96-751f-4200-ba76-1cd02d524b7d\") " Dec 11 14:10:34 crc kubenswrapper[5050]: I1211 14:10:34.821804 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fjrhd\" (UniqueName: \"kubernetes.io/projected/b570eb96-751f-4200-ba76-1cd02d524b7d-kube-api-access-fjrhd\") pod \"b570eb96-751f-4200-ba76-1cd02d524b7d\" (UID: \"b570eb96-751f-4200-ba76-1cd02d524b7d\") " Dec 11 14:10:34 crc kubenswrapper[5050]: I1211 14:10:34.830488 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b570eb96-751f-4200-ba76-1cd02d524b7d-kube-api-access-fjrhd" (OuterVolumeSpecName: "kube-api-access-fjrhd") pod "b570eb96-751f-4200-ba76-1cd02d524b7d" (UID: "b570eb96-751f-4200-ba76-1cd02d524b7d"). InnerVolumeSpecName "kube-api-access-fjrhd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 14:10:34 crc kubenswrapper[5050]: I1211 14:10:34.859707 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b570eb96-751f-4200-ba76-1cd02d524b7d-config-data" (OuterVolumeSpecName: "config-data") pod "b570eb96-751f-4200-ba76-1cd02d524b7d" (UID: "b570eb96-751f-4200-ba76-1cd02d524b7d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:10:34 crc kubenswrapper[5050]: I1211 14:10:34.868367 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b570eb96-751f-4200-ba76-1cd02d524b7d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b570eb96-751f-4200-ba76-1cd02d524b7d" (UID: "b570eb96-751f-4200-ba76-1cd02d524b7d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:10:34 crc kubenswrapper[5050]: I1211 14:10:34.905860 5050 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-8688x" podUID="9b0ac5e2-e200-454b-928e-93ea9b253ca1" containerName="registry-server" probeResult="failure" output=< Dec 11 14:10:34 crc kubenswrapper[5050]: timeout: failed to connect service ":50051" within 1s Dec 11 14:10:34 crc kubenswrapper[5050]: > Dec 11 14:10:34 crc kubenswrapper[5050]: I1211 14:10:34.912452 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 11 14:10:34 crc kubenswrapper[5050]: I1211 14:10:34.924614 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fjrhd\" (UniqueName: \"kubernetes.io/projected/b570eb96-751f-4200-ba76-1cd02d524b7d-kube-api-access-fjrhd\") on node \"crc\" DevicePath \"\"" Dec 11 14:10:34 crc kubenswrapper[5050]: I1211 14:10:34.924662 5050 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b570eb96-751f-4200-ba76-1cd02d524b7d-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 14:10:34 crc kubenswrapper[5050]: I1211 14:10:34.924678 5050 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b570eb96-751f-4200-ba76-1cd02d524b7d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 14:10:34 crc kubenswrapper[5050]: I1211 14:10:34.968159 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 11 14:10:34 crc kubenswrapper[5050]: W1211 14:10:34.977079 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6817c570_f6ff_4b08_825a_027a9c8630b0.slice/crio-be9cff5d067d029402380a5e60a79c3e8a579a7a9b9b4d9e3b7081d07b9b74e2 WatchSource:0}: Error finding container be9cff5d067d029402380a5e60a79c3e8a579a7a9b9b4d9e3b7081d07b9b74e2: Status 404 returned error can't find the container with id be9cff5d067d029402380a5e60a79c3e8a579a7a9b9b4d9e3b7081d07b9b74e2 Dec 11 14:10:35 crc kubenswrapper[5050]: I1211 14:10:35.082131 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"6817c570-f6ff-4b08-825a-027a9c8630b0","Type":"ContainerStarted","Data":"be9cff5d067d029402380a5e60a79c3e8a579a7a9b9b4d9e3b7081d07b9b74e2"} Dec 11 14:10:35 crc kubenswrapper[5050]: I1211 14:10:35.086149 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Dec 11 14:10:35 crc kubenswrapper[5050]: I1211 14:10:35.086425 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"b570eb96-751f-4200-ba76-1cd02d524b7d","Type":"ContainerDied","Data":"f31d6a638e4a654221a5a66006ea8add931d173b803bdba99bab253a35d30895"} Dec 11 14:10:35 crc kubenswrapper[5050]: I1211 14:10:35.086910 5050 scope.go:117] "RemoveContainer" containerID="591d0909f0600652a524dd46d8fcb1a8d7961f6e73aaed3bbe6427afaacc3272" Dec 11 14:10:35 crc kubenswrapper[5050]: I1211 14:10:35.092915 5050 generic.go:334] "Generic (PLEG): container finished" podID="1f0b2c51-9300-49e3-be4f-f08bc63c7e8d" containerID="ba1c4a582b693a6685972a569e5da562813f6fe559ba28e42bc1ec1ce63e4859" exitCode=0 Dec 11 14:10:35 crc kubenswrapper[5050]: I1211 14:10:35.092950 5050 generic.go:334] "Generic (PLEG): container finished" podID="1f0b2c51-9300-49e3-be4f-f08bc63c7e8d" containerID="83e968fbb2ebfcae094253150ed9e812c5c80149a383c66044b8e8058694c6e5" exitCode=2 Dec 11 14:10:35 crc kubenswrapper[5050]: I1211 14:10:35.092960 5050 generic.go:334] "Generic (PLEG): container finished" podID="1f0b2c51-9300-49e3-be4f-f08bc63c7e8d" containerID="2166c87ee01e02647af3563324f2721942e6a80653c81391ee64da6d659b2eb5" exitCode=0 Dec 11 14:10:35 crc kubenswrapper[5050]: I1211 14:10:35.093026 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1f0b2c51-9300-49e3-be4f-f08bc63c7e8d","Type":"ContainerDied","Data":"ba1c4a582b693a6685972a569e5da562813f6fe559ba28e42bc1ec1ce63e4859"} Dec 11 14:10:35 crc kubenswrapper[5050]: I1211 14:10:35.093053 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1f0b2c51-9300-49e3-be4f-f08bc63c7e8d","Type":"ContainerDied","Data":"83e968fbb2ebfcae094253150ed9e812c5c80149a383c66044b8e8058694c6e5"} Dec 11 14:10:35 crc kubenswrapper[5050]: I1211 14:10:35.093064 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1f0b2c51-9300-49e3-be4f-f08bc63c7e8d","Type":"ContainerDied","Data":"2166c87ee01e02647af3563324f2721942e6a80653c81391ee64da6d659b2eb5"} Dec 11 14:10:35 crc kubenswrapper[5050]: I1211 14:10:35.098440 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"89a7ecdc-c4ec-4097-a100-c74ebb8b86ff","Type":"ContainerStarted","Data":"6f6ea4edccb31c0846824bc09921d5ce22176ef4016e83980b93be9c8271819a"} Dec 11 14:10:35 crc kubenswrapper[5050]: I1211 14:10:35.185103 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Dec 11 14:10:35 crc kubenswrapper[5050]: I1211 14:10:35.201824 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Dec 11 14:10:35 crc kubenswrapper[5050]: I1211 14:10:35.214437 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Dec 11 14:10:35 crc kubenswrapper[5050]: E1211 14:10:35.215078 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b570eb96-751f-4200-ba76-1cd02d524b7d" containerName="nova-scheduler-scheduler" Dec 11 14:10:35 crc kubenswrapper[5050]: I1211 14:10:35.215099 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="b570eb96-751f-4200-ba76-1cd02d524b7d" containerName="nova-scheduler-scheduler" Dec 11 14:10:35 crc kubenswrapper[5050]: I1211 14:10:35.215346 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="b570eb96-751f-4200-ba76-1cd02d524b7d" 
containerName="nova-scheduler-scheduler" Dec 11 14:10:35 crc kubenswrapper[5050]: I1211 14:10:35.216136 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 11 14:10:35 crc kubenswrapper[5050]: I1211 14:10:35.221320 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Dec 11 14:10:35 crc kubenswrapper[5050]: I1211 14:10:35.225509 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 11 14:10:35 crc kubenswrapper[5050]: I1211 14:10:35.346672 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2dd3de51-b591-4c54-9479-f94369d70ecf-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"2dd3de51-b591-4c54-9479-f94369d70ecf\") " pod="openstack/nova-scheduler-0" Dec 11 14:10:35 crc kubenswrapper[5050]: I1211 14:10:35.346734 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2dd3de51-b591-4c54-9479-f94369d70ecf-config-data\") pod \"nova-scheduler-0\" (UID: \"2dd3de51-b591-4c54-9479-f94369d70ecf\") " pod="openstack/nova-scheduler-0" Dec 11 14:10:35 crc kubenswrapper[5050]: I1211 14:10:35.346856 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lqlpj\" (UniqueName: \"kubernetes.io/projected/2dd3de51-b591-4c54-9479-f94369d70ecf-kube-api-access-lqlpj\") pod \"nova-scheduler-0\" (UID: \"2dd3de51-b591-4c54-9479-f94369d70ecf\") " pod="openstack/nova-scheduler-0" Dec 11 14:10:35 crc kubenswrapper[5050]: I1211 14:10:35.449269 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lqlpj\" (UniqueName: \"kubernetes.io/projected/2dd3de51-b591-4c54-9479-f94369d70ecf-kube-api-access-lqlpj\") pod \"nova-scheduler-0\" (UID: \"2dd3de51-b591-4c54-9479-f94369d70ecf\") " pod="openstack/nova-scheduler-0" Dec 11 14:10:35 crc kubenswrapper[5050]: I1211 14:10:35.449406 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2dd3de51-b591-4c54-9479-f94369d70ecf-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"2dd3de51-b591-4c54-9479-f94369d70ecf\") " pod="openstack/nova-scheduler-0" Dec 11 14:10:35 crc kubenswrapper[5050]: I1211 14:10:35.449453 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2dd3de51-b591-4c54-9479-f94369d70ecf-config-data\") pod \"nova-scheduler-0\" (UID: \"2dd3de51-b591-4c54-9479-f94369d70ecf\") " pod="openstack/nova-scheduler-0" Dec 11 14:10:35 crc kubenswrapper[5050]: I1211 14:10:35.454490 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2dd3de51-b591-4c54-9479-f94369d70ecf-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"2dd3de51-b591-4c54-9479-f94369d70ecf\") " pod="openstack/nova-scheduler-0" Dec 11 14:10:35 crc kubenswrapper[5050]: I1211 14:10:35.455444 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2dd3de51-b591-4c54-9479-f94369d70ecf-config-data\") pod \"nova-scheduler-0\" (UID: \"2dd3de51-b591-4c54-9479-f94369d70ecf\") " pod="openstack/nova-scheduler-0" Dec 11 14:10:35 crc kubenswrapper[5050]: 
I1211 14:10:35.470872 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lqlpj\" (UniqueName: \"kubernetes.io/projected/2dd3de51-b591-4c54-9479-f94369d70ecf-kube-api-access-lqlpj\") pod \"nova-scheduler-0\" (UID: \"2dd3de51-b591-4c54-9479-f94369d70ecf\") " pod="openstack/nova-scheduler-0" Dec 11 14:10:35 crc kubenswrapper[5050]: I1211 14:10:35.574307 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="02d4b151-446f-42dc-86ec-edb6eb55b289" path="/var/lib/kubelet/pods/02d4b151-446f-42dc-86ec-edb6eb55b289/volumes" Dec 11 14:10:35 crc kubenswrapper[5050]: I1211 14:10:35.575706 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e60c3c2-6055-4e50-99b6-4a5f08728b17" path="/var/lib/kubelet/pods/8e60c3c2-6055-4e50-99b6-4a5f08728b17/volumes" Dec 11 14:10:35 crc kubenswrapper[5050]: I1211 14:10:35.586583 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b570eb96-751f-4200-ba76-1cd02d524b7d" path="/var/lib/kubelet/pods/b570eb96-751f-4200-ba76-1cd02d524b7d/volumes" Dec 11 14:10:35 crc kubenswrapper[5050]: I1211 14:10:35.652100 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 11 14:10:36 crc kubenswrapper[5050]: I1211 14:10:36.111671 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"89a7ecdc-c4ec-4097-a100-c74ebb8b86ff","Type":"ContainerStarted","Data":"59b103688d380bd3ea1ccbe23b8e50d69ce7096f799e44614d95596b4ae9353c"} Dec 11 14:10:36 crc kubenswrapper[5050]: I1211 14:10:36.112223 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"89a7ecdc-c4ec-4097-a100-c74ebb8b86ff","Type":"ContainerStarted","Data":"8101049b849d3c3de301d09aa49baf5e04594ebbf9f8f8020238d54f0b38a06d"} Dec 11 14:10:36 crc kubenswrapper[5050]: I1211 14:10:36.113987 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"6817c570-f6ff-4b08-825a-027a9c8630b0","Type":"ContainerStarted","Data":"99b0a4e0ddebc7695b430edc234ac8f69f475befeae07527d5e1dffee8ce52e4"} Dec 11 14:10:36 crc kubenswrapper[5050]: I1211 14:10:36.114961 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Dec 11 14:10:36 crc kubenswrapper[5050]: I1211 14:10:36.159186 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.159157615 podStartE2EDuration="2.159157615s" podCreationTimestamp="2025-12-11 14:10:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 14:10:36.147358336 +0000 UTC m=+1326.991080922" watchObservedRunningTime="2025-12-11 14:10:36.159157615 +0000 UTC m=+1327.002880201" Dec 11 14:10:36 crc kubenswrapper[5050]: I1211 14:10:36.180123 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=1.576299168 podStartE2EDuration="2.18009982s" podCreationTimestamp="2025-12-11 14:10:34 +0000 UTC" firstStartedPulling="2025-12-11 14:10:34.981084709 +0000 UTC m=+1325.824807295" lastFinishedPulling="2025-12-11 14:10:35.584885361 +0000 UTC m=+1326.428607947" observedRunningTime="2025-12-11 14:10:36.175948098 +0000 UTC m=+1327.019670684" watchObservedRunningTime="2025-12-11 14:10:36.18009982 +0000 UTC m=+1327.023822406" Dec 11 14:10:36 crc kubenswrapper[5050]: I1211 14:10:36.221508 
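
The two pod_startup_latency_tracker entries above are internally consistent and show how the figures relate. For nova-api-0 both pulling timestamps are the zero value (0001-01-01), so no image pull is subtracted and podStartSLOduration equals podStartE2EDuration:

    14:10:36.159157615 (watchObservedRunningTime) - 14:10:34 (podCreationTimestamp) = 2.159157615 s

For kube-state-metrics-0 an image pull did happen, and the SLO figure is the end-to-end time minus that pull window:

    pull window = 14:10:35.584885361 - 14:10:34.981084709 = 0.603800652 s
    podStartSLOduration = 2.18009982 s - 0.603800652 s = 1.576299168 s

both of which match the logged values.
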
5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 11 14:10:37 crc kubenswrapper[5050]: I1211 14:10:37.133207 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"2dd3de51-b591-4c54-9479-f94369d70ecf","Type":"ContainerStarted","Data":"288445757ecc34f547f099e2777c0212196ffb7ac831d9f3f4b9391630a1db24"} Dec 11 14:10:37 crc kubenswrapper[5050]: I1211 14:10:37.133556 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"2dd3de51-b591-4c54-9479-f94369d70ecf","Type":"ContainerStarted","Data":"76c7cc04f94a7671c50208c9b02c0154ba6419c7191d51fa0f8af2ea12eaf5a0"} Dec 11 14:10:37 crc kubenswrapper[5050]: I1211 14:10:37.163092 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.163066335 podStartE2EDuration="2.163066335s" podCreationTimestamp="2025-12-11 14:10:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 14:10:37.152265863 +0000 UTC m=+1327.995988449" watchObservedRunningTime="2025-12-11 14:10:37.163066335 +0000 UTC m=+1328.006788921" Dec 11 14:10:38 crc kubenswrapper[5050]: I1211 14:10:38.150542 5050 generic.go:334] "Generic (PLEG): container finished" podID="1f0b2c51-9300-49e3-be4f-f08bc63c7e8d" containerID="d1d520dbb4c4e8a4630669d75c825090adfefa6f450c209bead69d7203ca76ea" exitCode=0 Dec 11 14:10:38 crc kubenswrapper[5050]: I1211 14:10:38.151672 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1f0b2c51-9300-49e3-be4f-f08bc63c7e8d","Type":"ContainerDied","Data":"d1d520dbb4c4e8a4630669d75c825090adfefa6f450c209bead69d7203ca76ea"} Dec 11 14:10:39 crc kubenswrapper[5050]: I1211 14:10:38.999242 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 11 14:10:39 crc kubenswrapper[5050]: I1211 14:10:39.141916 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f0b2c51-9300-49e3-be4f-f08bc63c7e8d-combined-ca-bundle\") pod \"1f0b2c51-9300-49e3-be4f-f08bc63c7e8d\" (UID: \"1f0b2c51-9300-49e3-be4f-f08bc63c7e8d\") " Dec 11 14:10:39 crc kubenswrapper[5050]: I1211 14:10:39.142049 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1f0b2c51-9300-49e3-be4f-f08bc63c7e8d-log-httpd\") pod \"1f0b2c51-9300-49e3-be4f-f08bc63c7e8d\" (UID: \"1f0b2c51-9300-49e3-be4f-f08bc63c7e8d\") " Dec 11 14:10:39 crc kubenswrapper[5050]: I1211 14:10:39.142132 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f0b2c51-9300-49e3-be4f-f08bc63c7e8d-config-data\") pod \"1f0b2c51-9300-49e3-be4f-f08bc63c7e8d\" (UID: \"1f0b2c51-9300-49e3-be4f-f08bc63c7e8d\") " Dec 11 14:10:39 crc kubenswrapper[5050]: I1211 14:10:39.142185 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-krnld\" (UniqueName: \"kubernetes.io/projected/1f0b2c51-9300-49e3-be4f-f08bc63c7e8d-kube-api-access-krnld\") pod \"1f0b2c51-9300-49e3-be4f-f08bc63c7e8d\" (UID: \"1f0b2c51-9300-49e3-be4f-f08bc63c7e8d\") " Dec 11 14:10:39 crc kubenswrapper[5050]: I1211 14:10:39.142275 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1f0b2c51-9300-49e3-be4f-f08bc63c7e8d-scripts\") pod \"1f0b2c51-9300-49e3-be4f-f08bc63c7e8d\" (UID: \"1f0b2c51-9300-49e3-be4f-f08bc63c7e8d\") " Dec 11 14:10:39 crc kubenswrapper[5050]: I1211 14:10:39.142386 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1f0b2c51-9300-49e3-be4f-f08bc63c7e8d-sg-core-conf-yaml\") pod \"1f0b2c51-9300-49e3-be4f-f08bc63c7e8d\" (UID: \"1f0b2c51-9300-49e3-be4f-f08bc63c7e8d\") " Dec 11 14:10:39 crc kubenswrapper[5050]: I1211 14:10:39.142445 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1f0b2c51-9300-49e3-be4f-f08bc63c7e8d-run-httpd\") pod \"1f0b2c51-9300-49e3-be4f-f08bc63c7e8d\" (UID: \"1f0b2c51-9300-49e3-be4f-f08bc63c7e8d\") " Dec 11 14:10:39 crc kubenswrapper[5050]: I1211 14:10:39.144741 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f0b2c51-9300-49e3-be4f-f08bc63c7e8d-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "1f0b2c51-9300-49e3-be4f-f08bc63c7e8d" (UID: "1f0b2c51-9300-49e3-be4f-f08bc63c7e8d"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 14:10:39 crc kubenswrapper[5050]: I1211 14:10:39.145822 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f0b2c51-9300-49e3-be4f-f08bc63c7e8d-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "1f0b2c51-9300-49e3-be4f-f08bc63c7e8d" (UID: "1f0b2c51-9300-49e3-be4f-f08bc63c7e8d"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 14:10:39 crc kubenswrapper[5050]: I1211 14:10:39.158859 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f0b2c51-9300-49e3-be4f-f08bc63c7e8d-kube-api-access-krnld" (OuterVolumeSpecName: "kube-api-access-krnld") pod "1f0b2c51-9300-49e3-be4f-f08bc63c7e8d" (UID: "1f0b2c51-9300-49e3-be4f-f08bc63c7e8d"). InnerVolumeSpecName "kube-api-access-krnld". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 14:10:39 crc kubenswrapper[5050]: I1211 14:10:39.159345 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f0b2c51-9300-49e3-be4f-f08bc63c7e8d-scripts" (OuterVolumeSpecName: "scripts") pod "1f0b2c51-9300-49e3-be4f-f08bc63c7e8d" (UID: "1f0b2c51-9300-49e3-be4f-f08bc63c7e8d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:10:39 crc kubenswrapper[5050]: I1211 14:10:39.174914 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1f0b2c51-9300-49e3-be4f-f08bc63c7e8d","Type":"ContainerDied","Data":"4c25bd92aecc85e8873ca1cdc15920a93598b5e2e37eea6c5f3f696e0ae932b3"} Dec 11 14:10:39 crc kubenswrapper[5050]: I1211 14:10:39.175049 5050 scope.go:117] "RemoveContainer" containerID="ba1c4a582b693a6685972a569e5da562813f6fe559ba28e42bc1ec1ce63e4859" Dec 11 14:10:39 crc kubenswrapper[5050]: I1211 14:10:39.175076 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 11 14:10:39 crc kubenswrapper[5050]: I1211 14:10:39.181966 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f0b2c51-9300-49e3-be4f-f08bc63c7e8d-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "1f0b2c51-9300-49e3-be4f-f08bc63c7e8d" (UID: "1f0b2c51-9300-49e3-be4f-f08bc63c7e8d"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:10:39 crc kubenswrapper[5050]: I1211 14:10:39.226585 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f0b2c51-9300-49e3-be4f-f08bc63c7e8d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1f0b2c51-9300-49e3-be4f-f08bc63c7e8d" (UID: "1f0b2c51-9300-49e3-be4f-f08bc63c7e8d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:10:39 crc kubenswrapper[5050]: I1211 14:10:39.245452 5050 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1f0b2c51-9300-49e3-be4f-f08bc63c7e8d-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 11 14:10:39 crc kubenswrapper[5050]: I1211 14:10:39.245486 5050 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f0b2c51-9300-49e3-be4f-f08bc63c7e8d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 14:10:39 crc kubenswrapper[5050]: I1211 14:10:39.245498 5050 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1f0b2c51-9300-49e3-be4f-f08bc63c7e8d-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 11 14:10:39 crc kubenswrapper[5050]: I1211 14:10:39.245507 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-krnld\" (UniqueName: \"kubernetes.io/projected/1f0b2c51-9300-49e3-be4f-f08bc63c7e8d-kube-api-access-krnld\") on node \"crc\" DevicePath \"\"" Dec 11 14:10:39 crc kubenswrapper[5050]: I1211 14:10:39.245516 5050 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1f0b2c51-9300-49e3-be4f-f08bc63c7e8d-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 14:10:39 crc kubenswrapper[5050]: I1211 14:10:39.245524 5050 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1f0b2c51-9300-49e3-be4f-f08bc63c7e8d-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 11 14:10:39 crc kubenswrapper[5050]: I1211 14:10:39.275652 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f0b2c51-9300-49e3-be4f-f08bc63c7e8d-config-data" (OuterVolumeSpecName: "config-data") pod "1f0b2c51-9300-49e3-be4f-f08bc63c7e8d" (UID: "1f0b2c51-9300-49e3-be4f-f08bc63c7e8d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:10:39 crc kubenswrapper[5050]: I1211 14:10:39.348138 5050 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f0b2c51-9300-49e3-be4f-f08bc63c7e8d-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 14:10:39 crc kubenswrapper[5050]: I1211 14:10:39.387226 5050 scope.go:117] "RemoveContainer" containerID="83e968fbb2ebfcae094253150ed9e812c5c80149a383c66044b8e8058694c6e5" Dec 11 14:10:39 crc kubenswrapper[5050]: I1211 14:10:39.407210 5050 scope.go:117] "RemoveContainer" containerID="d1d520dbb4c4e8a4630669d75c825090adfefa6f450c209bead69d7203ca76ea" Dec 11 14:10:39 crc kubenswrapper[5050]: I1211 14:10:39.428524 5050 scope.go:117] "RemoveContainer" containerID="2166c87ee01e02647af3563324f2721942e6a80653c81391ee64da6d659b2eb5" Dec 11 14:10:39 crc kubenswrapper[5050]: I1211 14:10:39.537927 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 11 14:10:39 crc kubenswrapper[5050]: I1211 14:10:39.567762 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 11 14:10:39 crc kubenswrapper[5050]: I1211 14:10:39.586274 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 11 14:10:39 crc kubenswrapper[5050]: E1211 14:10:39.586959 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f0b2c51-9300-49e3-be4f-f08bc63c7e8d" containerName="sg-core" Dec 11 14:10:39 crc kubenswrapper[5050]: I1211 14:10:39.586987 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f0b2c51-9300-49e3-be4f-f08bc63c7e8d" containerName="sg-core" Dec 11 14:10:39 crc kubenswrapper[5050]: E1211 14:10:39.587066 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f0b2c51-9300-49e3-be4f-f08bc63c7e8d" containerName="ceilometer-central-agent" Dec 11 14:10:39 crc kubenswrapper[5050]: I1211 14:10:39.587077 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f0b2c51-9300-49e3-be4f-f08bc63c7e8d" containerName="ceilometer-central-agent" Dec 11 14:10:39 crc kubenswrapper[5050]: E1211 14:10:39.587091 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f0b2c51-9300-49e3-be4f-f08bc63c7e8d" containerName="proxy-httpd" Dec 11 14:10:39 crc kubenswrapper[5050]: I1211 14:10:39.587099 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f0b2c51-9300-49e3-be4f-f08bc63c7e8d" containerName="proxy-httpd" Dec 11 14:10:39 crc kubenswrapper[5050]: E1211 14:10:39.587123 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f0b2c51-9300-49e3-be4f-f08bc63c7e8d" containerName="ceilometer-notification-agent" Dec 11 14:10:39 crc kubenswrapper[5050]: I1211 14:10:39.587141 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f0b2c51-9300-49e3-be4f-f08bc63c7e8d" containerName="ceilometer-notification-agent" Dec 11 14:10:39 crc kubenswrapper[5050]: I1211 14:10:39.587600 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f0b2c51-9300-49e3-be4f-f08bc63c7e8d" containerName="ceilometer-central-agent" Dec 11 14:10:39 crc kubenswrapper[5050]: I1211 14:10:39.587643 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f0b2c51-9300-49e3-be4f-f08bc63c7e8d" containerName="ceilometer-notification-agent" Dec 11 14:10:39 crc kubenswrapper[5050]: I1211 14:10:39.587656 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f0b2c51-9300-49e3-be4f-f08bc63c7e8d" containerName="sg-core" Dec 11 14:10:39 crc 
kubenswrapper[5050]: I1211 14:10:39.587669 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f0b2c51-9300-49e3-be4f-f08bc63c7e8d" containerName="proxy-httpd" Dec 11 14:10:39 crc kubenswrapper[5050]: I1211 14:10:39.590108 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 11 14:10:39 crc kubenswrapper[5050]: I1211 14:10:39.593406 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Dec 11 14:10:39 crc kubenswrapper[5050]: I1211 14:10:39.593680 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 11 14:10:39 crc kubenswrapper[5050]: I1211 14:10:39.594444 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 11 14:10:39 crc kubenswrapper[5050]: I1211 14:10:39.597908 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 11 14:10:39 crc kubenswrapper[5050]: I1211 14:10:39.756222 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5af14101-b5b1-4735-9b62-83f55267c624-log-httpd\") pod \"ceilometer-0\" (UID: \"5af14101-b5b1-4735-9b62-83f55267c624\") " pod="openstack/ceilometer-0" Dec 11 14:10:39 crc kubenswrapper[5050]: I1211 14:10:39.756283 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5af14101-b5b1-4735-9b62-83f55267c624-run-httpd\") pod \"ceilometer-0\" (UID: \"5af14101-b5b1-4735-9b62-83f55267c624\") " pod="openstack/ceilometer-0" Dec 11 14:10:39 crc kubenswrapper[5050]: I1211 14:10:39.756310 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5af14101-b5b1-4735-9b62-83f55267c624-scripts\") pod \"ceilometer-0\" (UID: \"5af14101-b5b1-4735-9b62-83f55267c624\") " pod="openstack/ceilometer-0" Dec 11 14:10:39 crc kubenswrapper[5050]: I1211 14:10:39.756333 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5af14101-b5b1-4735-9b62-83f55267c624-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5af14101-b5b1-4735-9b62-83f55267c624\") " pod="openstack/ceilometer-0" Dec 11 14:10:39 crc kubenswrapper[5050]: I1211 14:10:39.756367 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/5af14101-b5b1-4735-9b62-83f55267c624-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"5af14101-b5b1-4735-9b62-83f55267c624\") " pod="openstack/ceilometer-0" Dec 11 14:10:39 crc kubenswrapper[5050]: I1211 14:10:39.756406 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhktj\" (UniqueName: \"kubernetes.io/projected/5af14101-b5b1-4735-9b62-83f55267c624-kube-api-access-xhktj\") pod \"ceilometer-0\" (UID: \"5af14101-b5b1-4735-9b62-83f55267c624\") " pod="openstack/ceilometer-0" Dec 11 14:10:39 crc kubenswrapper[5050]: I1211 14:10:39.756436 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5af14101-b5b1-4735-9b62-83f55267c624-combined-ca-bundle\") pod \"ceilometer-0\" (UID: 
\"5af14101-b5b1-4735-9b62-83f55267c624\") " pod="openstack/ceilometer-0" Dec 11 14:10:39 crc kubenswrapper[5050]: I1211 14:10:39.756960 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5af14101-b5b1-4735-9b62-83f55267c624-config-data\") pod \"ceilometer-0\" (UID: \"5af14101-b5b1-4735-9b62-83f55267c624\") " pod="openstack/ceilometer-0" Dec 11 14:10:39 crc kubenswrapper[5050]: I1211 14:10:39.859601 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xhktj\" (UniqueName: \"kubernetes.io/projected/5af14101-b5b1-4735-9b62-83f55267c624-kube-api-access-xhktj\") pod \"ceilometer-0\" (UID: \"5af14101-b5b1-4735-9b62-83f55267c624\") " pod="openstack/ceilometer-0" Dec 11 14:10:39 crc kubenswrapper[5050]: I1211 14:10:39.859675 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5af14101-b5b1-4735-9b62-83f55267c624-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5af14101-b5b1-4735-9b62-83f55267c624\") " pod="openstack/ceilometer-0" Dec 11 14:10:39 crc kubenswrapper[5050]: I1211 14:10:39.859769 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5af14101-b5b1-4735-9b62-83f55267c624-config-data\") pod \"ceilometer-0\" (UID: \"5af14101-b5b1-4735-9b62-83f55267c624\") " pod="openstack/ceilometer-0" Dec 11 14:10:39 crc kubenswrapper[5050]: I1211 14:10:39.859844 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5af14101-b5b1-4735-9b62-83f55267c624-log-httpd\") pod \"ceilometer-0\" (UID: \"5af14101-b5b1-4735-9b62-83f55267c624\") " pod="openstack/ceilometer-0" Dec 11 14:10:39 crc kubenswrapper[5050]: I1211 14:10:39.859892 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5af14101-b5b1-4735-9b62-83f55267c624-run-httpd\") pod \"ceilometer-0\" (UID: \"5af14101-b5b1-4735-9b62-83f55267c624\") " pod="openstack/ceilometer-0" Dec 11 14:10:39 crc kubenswrapper[5050]: I1211 14:10:39.859922 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5af14101-b5b1-4735-9b62-83f55267c624-scripts\") pod \"ceilometer-0\" (UID: \"5af14101-b5b1-4735-9b62-83f55267c624\") " pod="openstack/ceilometer-0" Dec 11 14:10:39 crc kubenswrapper[5050]: I1211 14:10:39.859956 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5af14101-b5b1-4735-9b62-83f55267c624-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5af14101-b5b1-4735-9b62-83f55267c624\") " pod="openstack/ceilometer-0" Dec 11 14:10:39 crc kubenswrapper[5050]: I1211 14:10:39.860000 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/5af14101-b5b1-4735-9b62-83f55267c624-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"5af14101-b5b1-4735-9b62-83f55267c624\") " pod="openstack/ceilometer-0" Dec 11 14:10:39 crc kubenswrapper[5050]: I1211 14:10:39.860980 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5af14101-b5b1-4735-9b62-83f55267c624-log-httpd\") pod \"ceilometer-0\" (UID: 
\"5af14101-b5b1-4735-9b62-83f55267c624\") " pod="openstack/ceilometer-0" Dec 11 14:10:39 crc kubenswrapper[5050]: I1211 14:10:39.861379 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5af14101-b5b1-4735-9b62-83f55267c624-run-httpd\") pod \"ceilometer-0\" (UID: \"5af14101-b5b1-4735-9b62-83f55267c624\") " pod="openstack/ceilometer-0" Dec 11 14:10:39 crc kubenswrapper[5050]: I1211 14:10:39.864130 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5af14101-b5b1-4735-9b62-83f55267c624-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5af14101-b5b1-4735-9b62-83f55267c624\") " pod="openstack/ceilometer-0" Dec 11 14:10:39 crc kubenswrapper[5050]: I1211 14:10:39.864169 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5af14101-b5b1-4735-9b62-83f55267c624-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5af14101-b5b1-4735-9b62-83f55267c624\") " pod="openstack/ceilometer-0" Dec 11 14:10:39 crc kubenswrapper[5050]: I1211 14:10:39.864499 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/5af14101-b5b1-4735-9b62-83f55267c624-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"5af14101-b5b1-4735-9b62-83f55267c624\") " pod="openstack/ceilometer-0" Dec 11 14:10:39 crc kubenswrapper[5050]: I1211 14:10:39.864802 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5af14101-b5b1-4735-9b62-83f55267c624-scripts\") pod \"ceilometer-0\" (UID: \"5af14101-b5b1-4735-9b62-83f55267c624\") " pod="openstack/ceilometer-0" Dec 11 14:10:39 crc kubenswrapper[5050]: I1211 14:10:39.868222 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5af14101-b5b1-4735-9b62-83f55267c624-config-data\") pod \"ceilometer-0\" (UID: \"5af14101-b5b1-4735-9b62-83f55267c624\") " pod="openstack/ceilometer-0" Dec 11 14:10:39 crc kubenswrapper[5050]: I1211 14:10:39.891724 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xhktj\" (UniqueName: \"kubernetes.io/projected/5af14101-b5b1-4735-9b62-83f55267c624-kube-api-access-xhktj\") pod \"ceilometer-0\" (UID: \"5af14101-b5b1-4735-9b62-83f55267c624\") " pod="openstack/ceilometer-0" Dec 11 14:10:39 crc kubenswrapper[5050]: I1211 14:10:39.933081 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 11 14:10:40 crc kubenswrapper[5050]: I1211 14:10:40.205348 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 11 14:10:40 crc kubenswrapper[5050]: W1211 14:10:40.211075 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5af14101_b5b1_4735_9b62_83f55267c624.slice/crio-ec5cde7a993b0ae9ebab4dc75749d892f0b4d84209a58883f1d91ba0daf287bd WatchSource:0}: Error finding container ec5cde7a993b0ae9ebab4dc75749d892f0b4d84209a58883f1d91ba0daf287bd: Status 404 returned error can't find the container with id ec5cde7a993b0ae9ebab4dc75749d892f0b4d84209a58883f1d91ba0daf287bd Dec 11 14:10:40 crc kubenswrapper[5050]: I1211 14:10:40.651339 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Dec 11 14:10:41 crc kubenswrapper[5050]: I1211 14:10:41.227388 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5af14101-b5b1-4735-9b62-83f55267c624","Type":"ContainerStarted","Data":"ec5cde7a993b0ae9ebab4dc75749d892f0b4d84209a58883f1d91ba0daf287bd"} Dec 11 14:10:41 crc kubenswrapper[5050]: I1211 14:10:41.559414 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f0b2c51-9300-49e3-be4f-f08bc63c7e8d" path="/var/lib/kubelet/pods/1f0b2c51-9300-49e3-be4f-f08bc63c7e8d/volumes" Dec 11 14:10:42 crc kubenswrapper[5050]: I1211 14:10:42.253894 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5af14101-b5b1-4735-9b62-83f55267c624","Type":"ContainerStarted","Data":"f01971a9930b0d296d2f00a4b54b7e230d9d8aa4dfba82f2fd676ccec111c3ca"} Dec 11 14:10:42 crc kubenswrapper[5050]: I1211 14:10:42.257497 5050 generic.go:334] "Generic (PLEG): container finished" podID="e1627fd8-6a34-432b-a4c8-8a39b534f4f2" containerID="5fce3440d274f6239f70451c81cca3a699ec3610161019c9df0fe303fc7d4623" exitCode=0 Dec 11 14:10:42 crc kubenswrapper[5050]: I1211 14:10:42.257558 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-tccfb" event={"ID":"e1627fd8-6a34-432b-a4c8-8a39b534f4f2","Type":"ContainerDied","Data":"5fce3440d274f6239f70451c81cca3a699ec3610161019c9df0fe303fc7d4623"} Dec 11 14:10:43 crc kubenswrapper[5050]: I1211 14:10:43.269808 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5af14101-b5b1-4735-9b62-83f55267c624","Type":"ContainerStarted","Data":"b38da25d4497b4e86576a5d45c1b504d4746d4b019c1057ba04fb00c29adfed1"} Dec 11 14:10:43 crc kubenswrapper[5050]: I1211 14:10:43.677040 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-tccfb" Dec 11 14:10:43 crc kubenswrapper[5050]: I1211 14:10:43.755295 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1627fd8-6a34-432b-a4c8-8a39b534f4f2-config-data\") pod \"e1627fd8-6a34-432b-a4c8-8a39b534f4f2\" (UID: \"e1627fd8-6a34-432b-a4c8-8a39b534f4f2\") " Dec 11 14:10:43 crc kubenswrapper[5050]: I1211 14:10:43.755349 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kl74j\" (UniqueName: \"kubernetes.io/projected/e1627fd8-6a34-432b-a4c8-8a39b534f4f2-kube-api-access-kl74j\") pod \"e1627fd8-6a34-432b-a4c8-8a39b534f4f2\" (UID: \"e1627fd8-6a34-432b-a4c8-8a39b534f4f2\") " Dec 11 14:10:43 crc kubenswrapper[5050]: I1211 14:10:43.755449 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1627fd8-6a34-432b-a4c8-8a39b534f4f2-combined-ca-bundle\") pod \"e1627fd8-6a34-432b-a4c8-8a39b534f4f2\" (UID: \"e1627fd8-6a34-432b-a4c8-8a39b534f4f2\") " Dec 11 14:10:43 crc kubenswrapper[5050]: I1211 14:10:43.755636 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e1627fd8-6a34-432b-a4c8-8a39b534f4f2-scripts\") pod \"e1627fd8-6a34-432b-a4c8-8a39b534f4f2\" (UID: \"e1627fd8-6a34-432b-a4c8-8a39b534f4f2\") " Dec 11 14:10:43 crc kubenswrapper[5050]: I1211 14:10:43.761318 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1627fd8-6a34-432b-a4c8-8a39b534f4f2-kube-api-access-kl74j" (OuterVolumeSpecName: "kube-api-access-kl74j") pod "e1627fd8-6a34-432b-a4c8-8a39b534f4f2" (UID: "e1627fd8-6a34-432b-a4c8-8a39b534f4f2"). InnerVolumeSpecName "kube-api-access-kl74j". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 14:10:43 crc kubenswrapper[5050]: I1211 14:10:43.766107 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1627fd8-6a34-432b-a4c8-8a39b534f4f2-scripts" (OuterVolumeSpecName: "scripts") pod "e1627fd8-6a34-432b-a4c8-8a39b534f4f2" (UID: "e1627fd8-6a34-432b-a4c8-8a39b534f4f2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:10:43 crc kubenswrapper[5050]: I1211 14:10:43.788639 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1627fd8-6a34-432b-a4c8-8a39b534f4f2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e1627fd8-6a34-432b-a4c8-8a39b534f4f2" (UID: "e1627fd8-6a34-432b-a4c8-8a39b534f4f2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:10:43 crc kubenswrapper[5050]: I1211 14:10:43.797303 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1627fd8-6a34-432b-a4c8-8a39b534f4f2-config-data" (OuterVolumeSpecName: "config-data") pod "e1627fd8-6a34-432b-a4c8-8a39b534f4f2" (UID: "e1627fd8-6a34-432b-a4c8-8a39b534f4f2"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:10:43 crc kubenswrapper[5050]: I1211 14:10:43.860455 5050 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1627fd8-6a34-432b-a4c8-8a39b534f4f2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 14:10:43 crc kubenswrapper[5050]: I1211 14:10:43.860503 5050 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e1627fd8-6a34-432b-a4c8-8a39b534f4f2-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 14:10:43 crc kubenswrapper[5050]: I1211 14:10:43.860513 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kl74j\" (UniqueName: \"kubernetes.io/projected/e1627fd8-6a34-432b-a4c8-8a39b534f4f2-kube-api-access-kl74j\") on node \"crc\" DevicePath \"\"" Dec 11 14:10:43 crc kubenswrapper[5050]: I1211 14:10:43.860528 5050 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1627fd8-6a34-432b-a4c8-8a39b534f4f2-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 14:10:43 crc kubenswrapper[5050]: I1211 14:10:43.901747 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-8688x" Dec 11 14:10:43 crc kubenswrapper[5050]: I1211 14:10:43.964441 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-8688x" Dec 11 14:10:44 crc kubenswrapper[5050]: I1211 14:10:44.282455 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5af14101-b5b1-4735-9b62-83f55267c624","Type":"ContainerStarted","Data":"cb959dc2fc3437aa8531276c17c72c1b2256f354bc0fdd721185755e63372e1c"} Dec 11 14:10:44 crc kubenswrapper[5050]: I1211 14:10:44.284923 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-tccfb" Dec 11 14:10:44 crc kubenswrapper[5050]: I1211 14:10:44.284943 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-tccfb" event={"ID":"e1627fd8-6a34-432b-a4c8-8a39b534f4f2","Type":"ContainerDied","Data":"54e4f40b20777c3837b0d80b6c36e06ee009ed62c427e7b2303523b7c76726ba"} Dec 11 14:10:44 crc kubenswrapper[5050]: I1211 14:10:44.285051 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="54e4f40b20777c3837b0d80b6c36e06ee009ed62c427e7b2303523b7c76726ba" Dec 11 14:10:44 crc kubenswrapper[5050]: I1211 14:10:44.380745 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Dec 11 14:10:44 crc kubenswrapper[5050]: E1211 14:10:44.381275 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1627fd8-6a34-432b-a4c8-8a39b534f4f2" containerName="nova-cell1-conductor-db-sync" Dec 11 14:10:44 crc kubenswrapper[5050]: I1211 14:10:44.381295 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1627fd8-6a34-432b-a4c8-8a39b534f4f2" containerName="nova-cell1-conductor-db-sync" Dec 11 14:10:44 crc kubenswrapper[5050]: I1211 14:10:44.381509 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1627fd8-6a34-432b-a4c8-8a39b534f4f2" containerName="nova-cell1-conductor-db-sync" Dec 11 14:10:44 crc kubenswrapper[5050]: I1211 14:10:44.382305 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Dec 11 14:10:44 crc kubenswrapper[5050]: I1211 14:10:44.395526 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Dec 11 14:10:44 crc kubenswrapper[5050]: I1211 14:10:44.424575 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Dec 11 14:10:44 crc kubenswrapper[5050]: I1211 14:10:44.476535 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea5ac39a-6fb6-42bb-8ffd-e0036e93a1d7-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"ea5ac39a-6fb6-42bb-8ffd-e0036e93a1d7\") " pod="openstack/nova-cell1-conductor-0" Dec 11 14:10:44 crc kubenswrapper[5050]: I1211 14:10:44.476603 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea5ac39a-6fb6-42bb-8ffd-e0036e93a1d7-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"ea5ac39a-6fb6-42bb-8ffd-e0036e93a1d7\") " pod="openstack/nova-cell1-conductor-0" Dec 11 14:10:44 crc kubenswrapper[5050]: I1211 14:10:44.476632 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77m6b\" (UniqueName: \"kubernetes.io/projected/ea5ac39a-6fb6-42bb-8ffd-e0036e93a1d7-kube-api-access-77m6b\") pod \"nova-cell1-conductor-0\" (UID: \"ea5ac39a-6fb6-42bb-8ffd-e0036e93a1d7\") " pod="openstack/nova-cell1-conductor-0" Dec 11 14:10:44 crc kubenswrapper[5050]: I1211 14:10:44.477457 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Dec 11 14:10:44 crc kubenswrapper[5050]: I1211 14:10:44.578814 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-77m6b\" (UniqueName: \"kubernetes.io/projected/ea5ac39a-6fb6-42bb-8ffd-e0036e93a1d7-kube-api-access-77m6b\") pod \"nova-cell1-conductor-0\" (UID: \"ea5ac39a-6fb6-42bb-8ffd-e0036e93a1d7\") " pod="openstack/nova-cell1-conductor-0" Dec 11 14:10:44 crc kubenswrapper[5050]: I1211 14:10:44.579149 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea5ac39a-6fb6-42bb-8ffd-e0036e93a1d7-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"ea5ac39a-6fb6-42bb-8ffd-e0036e93a1d7\") " pod="openstack/nova-cell1-conductor-0" Dec 11 14:10:44 crc kubenswrapper[5050]: I1211 14:10:44.579222 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea5ac39a-6fb6-42bb-8ffd-e0036e93a1d7-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"ea5ac39a-6fb6-42bb-8ffd-e0036e93a1d7\") " pod="openstack/nova-cell1-conductor-0" Dec 11 14:10:44 crc kubenswrapper[5050]: I1211 14:10:44.587930 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea5ac39a-6fb6-42bb-8ffd-e0036e93a1d7-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"ea5ac39a-6fb6-42bb-8ffd-e0036e93a1d7\") " pod="openstack/nova-cell1-conductor-0" Dec 11 14:10:44 crc kubenswrapper[5050]: I1211 14:10:44.591350 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea5ac39a-6fb6-42bb-8ffd-e0036e93a1d7-config-data\") pod 
\"nova-cell1-conductor-0\" (UID: \"ea5ac39a-6fb6-42bb-8ffd-e0036e93a1d7\") " pod="openstack/nova-cell1-conductor-0" Dec 11 14:10:44 crc kubenswrapper[5050]: I1211 14:10:44.596417 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 11 14:10:44 crc kubenswrapper[5050]: I1211 14:10:44.596475 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 11 14:10:44 crc kubenswrapper[5050]: I1211 14:10:44.597435 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-77m6b\" (UniqueName: \"kubernetes.io/projected/ea5ac39a-6fb6-42bb-8ffd-e0036e93a1d7-kube-api-access-77m6b\") pod \"nova-cell1-conductor-0\" (UID: \"ea5ac39a-6fb6-42bb-8ffd-e0036e93a1d7\") " pod="openstack/nova-cell1-conductor-0" Dec 11 14:10:44 crc kubenswrapper[5050]: I1211 14:10:44.690999 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8688x"] Dec 11 14:10:44 crc kubenswrapper[5050]: I1211 14:10:44.749139 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Dec 11 14:10:45 crc kubenswrapper[5050]: I1211 14:10:45.237045 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Dec 11 14:10:45 crc kubenswrapper[5050]: I1211 14:10:45.319188 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"ea5ac39a-6fb6-42bb-8ffd-e0036e93a1d7","Type":"ContainerStarted","Data":"af19112315c9608df5732560911f46a812f8af4824b95694609e39556ada9276"} Dec 11 14:10:45 crc kubenswrapper[5050]: I1211 14:10:45.322687 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5af14101-b5b1-4735-9b62-83f55267c624","Type":"ContainerStarted","Data":"63baec29346df5fb0b3439eea0c2936e227931e33f5f093c4945067f2937f575"} Dec 11 14:10:45 crc kubenswrapper[5050]: I1211 14:10:45.322739 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-8688x" podUID="9b0ac5e2-e200-454b-928e-93ea9b253ca1" containerName="registry-server" containerID="cri-o://079efb91a670ffc024f3c1db2218acc3f9af9f0415bd2c0fdbaa469ca4f8e129" gracePeriod=2 Dec 11 14:10:45 crc kubenswrapper[5050]: I1211 14:10:45.356978 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.993491033 podStartE2EDuration="6.35695636s" podCreationTimestamp="2025-12-11 14:10:39 +0000 UTC" firstStartedPulling="2025-12-11 14:10:40.214793606 +0000 UTC m=+1331.058516192" lastFinishedPulling="2025-12-11 14:10:44.578258933 +0000 UTC m=+1335.421981519" observedRunningTime="2025-12-11 14:10:45.349713404 +0000 UTC m=+1336.193436000" watchObservedRunningTime="2025-12-11 14:10:45.35695636 +0000 UTC m=+1336.200678946" Dec 11 14:10:45 crc kubenswrapper[5050]: I1211 14:10:45.652440 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Dec 11 14:10:45 crc kubenswrapper[5050]: I1211 14:10:45.690911 5050 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="89a7ecdc-c4ec-4097-a100-c74ebb8b86ff" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.190:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 14:10:45 crc kubenswrapper[5050]: I1211 14:10:45.691618 5050 prober.go:107] "Probe failed" 
probeType="Startup" pod="openstack/nova-api-0" podUID="89a7ecdc-c4ec-4097-a100-c74ebb8b86ff" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.190:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 14:10:45 crc kubenswrapper[5050]: I1211 14:10:45.704336 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Dec 11 14:10:45 crc kubenswrapper[5050]: I1211 14:10:45.884453 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8688x" Dec 11 14:10:46 crc kubenswrapper[5050]: I1211 14:10:46.025470 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b0ac5e2-e200-454b-928e-93ea9b253ca1-utilities\") pod \"9b0ac5e2-e200-454b-928e-93ea9b253ca1\" (UID: \"9b0ac5e2-e200-454b-928e-93ea9b253ca1\") " Dec 11 14:10:46 crc kubenswrapper[5050]: I1211 14:10:46.026031 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b0ac5e2-e200-454b-928e-93ea9b253ca1-catalog-content\") pod \"9b0ac5e2-e200-454b-928e-93ea9b253ca1\" (UID: \"9b0ac5e2-e200-454b-928e-93ea9b253ca1\") " Dec 11 14:10:46 crc kubenswrapper[5050]: I1211 14:10:46.026140 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l47cj\" (UniqueName: \"kubernetes.io/projected/9b0ac5e2-e200-454b-928e-93ea9b253ca1-kube-api-access-l47cj\") pod \"9b0ac5e2-e200-454b-928e-93ea9b253ca1\" (UID: \"9b0ac5e2-e200-454b-928e-93ea9b253ca1\") " Dec 11 14:10:46 crc kubenswrapper[5050]: I1211 14:10:46.026280 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9b0ac5e2-e200-454b-928e-93ea9b253ca1-utilities" (OuterVolumeSpecName: "utilities") pod "9b0ac5e2-e200-454b-928e-93ea9b253ca1" (UID: "9b0ac5e2-e200-454b-928e-93ea9b253ca1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 14:10:46 crc kubenswrapper[5050]: I1211 14:10:46.027165 5050 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b0ac5e2-e200-454b-928e-93ea9b253ca1-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 14:10:46 crc kubenswrapper[5050]: I1211 14:10:46.033845 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b0ac5e2-e200-454b-928e-93ea9b253ca1-kube-api-access-l47cj" (OuterVolumeSpecName: "kube-api-access-l47cj") pod "9b0ac5e2-e200-454b-928e-93ea9b253ca1" (UID: "9b0ac5e2-e200-454b-928e-93ea9b253ca1"). InnerVolumeSpecName "kube-api-access-l47cj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 14:10:46 crc kubenswrapper[5050]: I1211 14:10:46.139937 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l47cj\" (UniqueName: \"kubernetes.io/projected/9b0ac5e2-e200-454b-928e-93ea9b253ca1-kube-api-access-l47cj\") on node \"crc\" DevicePath \"\"" Dec 11 14:10:46 crc kubenswrapper[5050]: I1211 14:10:46.159145 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9b0ac5e2-e200-454b-928e-93ea9b253ca1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9b0ac5e2-e200-454b-928e-93ea9b253ca1" (UID: "9b0ac5e2-e200-454b-928e-93ea9b253ca1"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 14:10:46 crc kubenswrapper[5050]: I1211 14:10:46.246371 5050 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b0ac5e2-e200-454b-928e-93ea9b253ca1-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 14:10:46 crc kubenswrapper[5050]: I1211 14:10:46.333814 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"ea5ac39a-6fb6-42bb-8ffd-e0036e93a1d7","Type":"ContainerStarted","Data":"927455209d981bf727b535faeebc64bff29620b7370b60af99e0a5354d586a34"} Dec 11 14:10:46 crc kubenswrapper[5050]: I1211 14:10:46.335248 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Dec 11 14:10:46 crc kubenswrapper[5050]: I1211 14:10:46.340027 5050 generic.go:334] "Generic (PLEG): container finished" podID="9b0ac5e2-e200-454b-928e-93ea9b253ca1" containerID="079efb91a670ffc024f3c1db2218acc3f9af9f0415bd2c0fdbaa469ca4f8e129" exitCode=0 Dec 11 14:10:46 crc kubenswrapper[5050]: I1211 14:10:46.340143 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8688x" Dec 11 14:10:46 crc kubenswrapper[5050]: I1211 14:10:46.341714 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8688x" event={"ID":"9b0ac5e2-e200-454b-928e-93ea9b253ca1","Type":"ContainerDied","Data":"079efb91a670ffc024f3c1db2218acc3f9af9f0415bd2c0fdbaa469ca4f8e129"} Dec 11 14:10:46 crc kubenswrapper[5050]: I1211 14:10:46.341771 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 11 14:10:46 crc kubenswrapper[5050]: I1211 14:10:46.341790 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8688x" event={"ID":"9b0ac5e2-e200-454b-928e-93ea9b253ca1","Type":"ContainerDied","Data":"1456c38c50d956f8bee9207bc7ce1f56ff928c81d8a7c2f9d03f682e68aaefaf"} Dec 11 14:10:46 crc kubenswrapper[5050]: I1211 14:10:46.342488 5050 scope.go:117] "RemoveContainer" containerID="079efb91a670ffc024f3c1db2218acc3f9af9f0415bd2c0fdbaa469ca4f8e129" Dec 11 14:10:46 crc kubenswrapper[5050]: I1211 14:10:46.363936 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.363908691 podStartE2EDuration="2.363908691s" podCreationTimestamp="2025-12-11 14:10:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 14:10:46.356281285 +0000 UTC m=+1337.200003871" watchObservedRunningTime="2025-12-11 14:10:46.363908691 +0000 UTC m=+1337.207631277" Dec 11 14:10:46 crc kubenswrapper[5050]: I1211 14:10:46.382835 5050 scope.go:117] "RemoveContainer" containerID="6bd6802f5b5a2e7cebd94d259b8775fc91e55f79de55a23a27b79983a9421b97" Dec 11 14:10:46 crc kubenswrapper[5050]: I1211 14:10:46.388103 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8688x"] Dec 11 14:10:46 crc kubenswrapper[5050]: I1211 14:10:46.389045 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Dec 11 14:10:46 crc kubenswrapper[5050]: I1211 14:10:46.398003 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-8688x"] Dec 11 14:10:46 crc kubenswrapper[5050]: I1211 14:10:46.415765 5050 
scope.go:117] "RemoveContainer" containerID="bbd9577d16ba312f99b2c9ebad9e02198fcbe52a92b7a05eea66064381895bcb" Dec 11 14:10:46 crc kubenswrapper[5050]: I1211 14:10:46.470873 5050 scope.go:117] "RemoveContainer" containerID="079efb91a670ffc024f3c1db2218acc3f9af9f0415bd2c0fdbaa469ca4f8e129" Dec 11 14:10:46 crc kubenswrapper[5050]: E1211 14:10:46.471730 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"079efb91a670ffc024f3c1db2218acc3f9af9f0415bd2c0fdbaa469ca4f8e129\": container with ID starting with 079efb91a670ffc024f3c1db2218acc3f9af9f0415bd2c0fdbaa469ca4f8e129 not found: ID does not exist" containerID="079efb91a670ffc024f3c1db2218acc3f9af9f0415bd2c0fdbaa469ca4f8e129" Dec 11 14:10:46 crc kubenswrapper[5050]: I1211 14:10:46.471779 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"079efb91a670ffc024f3c1db2218acc3f9af9f0415bd2c0fdbaa469ca4f8e129"} err="failed to get container status \"079efb91a670ffc024f3c1db2218acc3f9af9f0415bd2c0fdbaa469ca4f8e129\": rpc error: code = NotFound desc = could not find container \"079efb91a670ffc024f3c1db2218acc3f9af9f0415bd2c0fdbaa469ca4f8e129\": container with ID starting with 079efb91a670ffc024f3c1db2218acc3f9af9f0415bd2c0fdbaa469ca4f8e129 not found: ID does not exist" Dec 11 14:10:46 crc kubenswrapper[5050]: I1211 14:10:46.471812 5050 scope.go:117] "RemoveContainer" containerID="6bd6802f5b5a2e7cebd94d259b8775fc91e55f79de55a23a27b79983a9421b97" Dec 11 14:10:46 crc kubenswrapper[5050]: E1211 14:10:46.472115 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6bd6802f5b5a2e7cebd94d259b8775fc91e55f79de55a23a27b79983a9421b97\": container with ID starting with 6bd6802f5b5a2e7cebd94d259b8775fc91e55f79de55a23a27b79983a9421b97 not found: ID does not exist" containerID="6bd6802f5b5a2e7cebd94d259b8775fc91e55f79de55a23a27b79983a9421b97" Dec 11 14:10:46 crc kubenswrapper[5050]: I1211 14:10:46.472132 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6bd6802f5b5a2e7cebd94d259b8775fc91e55f79de55a23a27b79983a9421b97"} err="failed to get container status \"6bd6802f5b5a2e7cebd94d259b8775fc91e55f79de55a23a27b79983a9421b97\": rpc error: code = NotFound desc = could not find container \"6bd6802f5b5a2e7cebd94d259b8775fc91e55f79de55a23a27b79983a9421b97\": container with ID starting with 6bd6802f5b5a2e7cebd94d259b8775fc91e55f79de55a23a27b79983a9421b97 not found: ID does not exist" Dec 11 14:10:46 crc kubenswrapper[5050]: I1211 14:10:46.472146 5050 scope.go:117] "RemoveContainer" containerID="bbd9577d16ba312f99b2c9ebad9e02198fcbe52a92b7a05eea66064381895bcb" Dec 11 14:10:46 crc kubenswrapper[5050]: E1211 14:10:46.472374 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bbd9577d16ba312f99b2c9ebad9e02198fcbe52a92b7a05eea66064381895bcb\": container with ID starting with bbd9577d16ba312f99b2c9ebad9e02198fcbe52a92b7a05eea66064381895bcb not found: ID does not exist" containerID="bbd9577d16ba312f99b2c9ebad9e02198fcbe52a92b7a05eea66064381895bcb" Dec 11 14:10:46 crc kubenswrapper[5050]: I1211 14:10:46.472391 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bbd9577d16ba312f99b2c9ebad9e02198fcbe52a92b7a05eea66064381895bcb"} err="failed to get container status 
\"bbd9577d16ba312f99b2c9ebad9e02198fcbe52a92b7a05eea66064381895bcb\": rpc error: code = NotFound desc = could not find container \"bbd9577d16ba312f99b2c9ebad9e02198fcbe52a92b7a05eea66064381895bcb\": container with ID starting with bbd9577d16ba312f99b2c9ebad9e02198fcbe52a92b7a05eea66064381895bcb not found: ID does not exist" Dec 11 14:10:47 crc kubenswrapper[5050]: I1211 14:10:47.558646 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b0ac5e2-e200-454b-928e-93ea9b253ca1" path="/var/lib/kubelet/pods/9b0ac5e2-e200-454b-928e-93ea9b253ca1/volumes" Dec 11 14:10:54 crc kubenswrapper[5050]: I1211 14:10:54.601811 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Dec 11 14:10:54 crc kubenswrapper[5050]: I1211 14:10:54.603045 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Dec 11 14:10:54 crc kubenswrapper[5050]: I1211 14:10:54.604288 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Dec 11 14:10:54 crc kubenswrapper[5050]: I1211 14:10:54.609321 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Dec 11 14:10:54 crc kubenswrapper[5050]: I1211 14:10:54.784083 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Dec 11 14:10:55 crc kubenswrapper[5050]: I1211 14:10:55.338558 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 11 14:10:55 crc kubenswrapper[5050]: I1211 14:10:55.342778 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 11 14:10:55 crc kubenswrapper[5050]: I1211 14:10:55.449216 5050 generic.go:334] "Generic (PLEG): container finished" podID="3a082a10-0ad5-42cc-9cbe-5c15b93a9af2" containerID="80982f5107cdb80b0b0b443dd9966b245e84e492c7a44d0cc5646abce649be9b" exitCode=137 Dec 11 14:10:55 crc kubenswrapper[5050]: I1211 14:10:55.449272 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 11 14:10:55 crc kubenswrapper[5050]: I1211 14:10:55.449284 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3a082a10-0ad5-42cc-9cbe-5c15b93a9af2","Type":"ContainerDied","Data":"80982f5107cdb80b0b0b443dd9966b245e84e492c7a44d0cc5646abce649be9b"} Dec 11 14:10:55 crc kubenswrapper[5050]: I1211 14:10:55.449369 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3a082a10-0ad5-42cc-9cbe-5c15b93a9af2","Type":"ContainerDied","Data":"f9672b25cef4c126900a518bb364cf33174171386ab7e51996d89e6c12144d57"} Dec 11 14:10:55 crc kubenswrapper[5050]: I1211 14:10:55.449398 5050 scope.go:117] "RemoveContainer" containerID="80982f5107cdb80b0b0b443dd9966b245e84e492c7a44d0cc5646abce649be9b" Dec 11 14:10:55 crc kubenswrapper[5050]: I1211 14:10:55.451843 5050 generic.go:334] "Generic (PLEG): container finished" podID="750efff9-baf7-4feb-9dbc-6fc187a4350f" containerID="99dbb11830672c70c03138355bb7cb30550a454f1f7d54f9f59a6f4c5c10154e" exitCode=137 Dec 11 14:10:55 crc kubenswrapper[5050]: I1211 14:10:55.451912 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 11 14:10:55 crc kubenswrapper[5050]: I1211 14:10:55.451933 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"750efff9-baf7-4feb-9dbc-6fc187a4350f","Type":"ContainerDied","Data":"99dbb11830672c70c03138355bb7cb30550a454f1f7d54f9f59a6f4c5c10154e"} Dec 11 14:10:55 crc kubenswrapper[5050]: I1211 14:10:55.451996 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"750efff9-baf7-4feb-9dbc-6fc187a4350f","Type":"ContainerDied","Data":"cb25cd7b0973e8fe63f3b10bc0211768a4786d3a720cb5de084babd17e1d34cf"} Dec 11 14:10:55 crc kubenswrapper[5050]: I1211 14:10:55.452273 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Dec 11 14:10:55 crc kubenswrapper[5050]: I1211 14:10:55.458792 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Dec 11 14:10:55 crc kubenswrapper[5050]: I1211 14:10:55.461293 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a082a10-0ad5-42cc-9cbe-5c15b93a9af2-combined-ca-bundle\") pod \"3a082a10-0ad5-42cc-9cbe-5c15b93a9af2\" (UID: \"3a082a10-0ad5-42cc-9cbe-5c15b93a9af2\") " Dec 11 14:10:55 crc kubenswrapper[5050]: I1211 14:10:55.461410 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/750efff9-baf7-4feb-9dbc-6fc187a4350f-config-data\") pod \"750efff9-baf7-4feb-9dbc-6fc187a4350f\" (UID: \"750efff9-baf7-4feb-9dbc-6fc187a4350f\") " Dec 11 14:10:55 crc kubenswrapper[5050]: I1211 14:10:55.461504 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a082a10-0ad5-42cc-9cbe-5c15b93a9af2-config-data\") pod \"3a082a10-0ad5-42cc-9cbe-5c15b93a9af2\" (UID: \"3a082a10-0ad5-42cc-9cbe-5c15b93a9af2\") " Dec 11 14:10:55 crc kubenswrapper[5050]: I1211 14:10:55.461575 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/750efff9-baf7-4feb-9dbc-6fc187a4350f-combined-ca-bundle\") pod \"750efff9-baf7-4feb-9dbc-6fc187a4350f\" (UID: \"750efff9-baf7-4feb-9dbc-6fc187a4350f\") " Dec 11 14:10:55 crc kubenswrapper[5050]: I1211 14:10:55.461599 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b84pc\" (UniqueName: \"kubernetes.io/projected/750efff9-baf7-4feb-9dbc-6fc187a4350f-kube-api-access-b84pc\") pod \"750efff9-baf7-4feb-9dbc-6fc187a4350f\" (UID: \"750efff9-baf7-4feb-9dbc-6fc187a4350f\") " Dec 11 14:10:55 crc kubenswrapper[5050]: I1211 14:10:55.461658 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3a082a10-0ad5-42cc-9cbe-5c15b93a9af2-logs\") pod \"3a082a10-0ad5-42cc-9cbe-5c15b93a9af2\" (UID: \"3a082a10-0ad5-42cc-9cbe-5c15b93a9af2\") " Dec 11 14:10:55 crc kubenswrapper[5050]: I1211 14:10:55.461735 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-299ws\" (UniqueName: \"kubernetes.io/projected/3a082a10-0ad5-42cc-9cbe-5c15b93a9af2-kube-api-access-299ws\") pod \"3a082a10-0ad5-42cc-9cbe-5c15b93a9af2\" (UID: \"3a082a10-0ad5-42cc-9cbe-5c15b93a9af2\") " Dec 11 14:10:55 crc kubenswrapper[5050]: I1211 14:10:55.462875 5050 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a082a10-0ad5-42cc-9cbe-5c15b93a9af2-logs" (OuterVolumeSpecName: "logs") pod "3a082a10-0ad5-42cc-9cbe-5c15b93a9af2" (UID: "3a082a10-0ad5-42cc-9cbe-5c15b93a9af2"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 14:10:55 crc kubenswrapper[5050]: I1211 14:10:55.469064 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a082a10-0ad5-42cc-9cbe-5c15b93a9af2-kube-api-access-299ws" (OuterVolumeSpecName: "kube-api-access-299ws") pod "3a082a10-0ad5-42cc-9cbe-5c15b93a9af2" (UID: "3a082a10-0ad5-42cc-9cbe-5c15b93a9af2"). InnerVolumeSpecName "kube-api-access-299ws". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 14:10:55 crc kubenswrapper[5050]: I1211 14:10:55.471177 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/750efff9-baf7-4feb-9dbc-6fc187a4350f-kube-api-access-b84pc" (OuterVolumeSpecName: "kube-api-access-b84pc") pod "750efff9-baf7-4feb-9dbc-6fc187a4350f" (UID: "750efff9-baf7-4feb-9dbc-6fc187a4350f"). InnerVolumeSpecName "kube-api-access-b84pc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 14:10:55 crc kubenswrapper[5050]: I1211 14:10:55.493408 5050 scope.go:117] "RemoveContainer" containerID="c3532ca3f2438e45923351146f742d11dd5521aec22d1bd4d7dab86e5bdda261" Dec 11 14:10:55 crc kubenswrapper[5050]: I1211 14:10:55.501939 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/750efff9-baf7-4feb-9dbc-6fc187a4350f-config-data" (OuterVolumeSpecName: "config-data") pod "750efff9-baf7-4feb-9dbc-6fc187a4350f" (UID: "750efff9-baf7-4feb-9dbc-6fc187a4350f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:10:55 crc kubenswrapper[5050]: I1211 14:10:55.502846 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a082a10-0ad5-42cc-9cbe-5c15b93a9af2-config-data" (OuterVolumeSpecName: "config-data") pod "3a082a10-0ad5-42cc-9cbe-5c15b93a9af2" (UID: "3a082a10-0ad5-42cc-9cbe-5c15b93a9af2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:10:55 crc kubenswrapper[5050]: I1211 14:10:55.503113 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a082a10-0ad5-42cc-9cbe-5c15b93a9af2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3a082a10-0ad5-42cc-9cbe-5c15b93a9af2" (UID: "3a082a10-0ad5-42cc-9cbe-5c15b93a9af2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:10:55 crc kubenswrapper[5050]: I1211 14:10:55.505207 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/750efff9-baf7-4feb-9dbc-6fc187a4350f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "750efff9-baf7-4feb-9dbc-6fc187a4350f" (UID: "750efff9-baf7-4feb-9dbc-6fc187a4350f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:10:55 crc kubenswrapper[5050]: I1211 14:10:55.565162 5050 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3a082a10-0ad5-42cc-9cbe-5c15b93a9af2-logs\") on node \"crc\" DevicePath \"\"" Dec 11 14:10:55 crc kubenswrapper[5050]: I1211 14:10:55.565637 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-299ws\" (UniqueName: \"kubernetes.io/projected/3a082a10-0ad5-42cc-9cbe-5c15b93a9af2-kube-api-access-299ws\") on node \"crc\" DevicePath \"\"" Dec 11 14:10:55 crc kubenswrapper[5050]: I1211 14:10:55.565652 5050 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a082a10-0ad5-42cc-9cbe-5c15b93a9af2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 14:10:55 crc kubenswrapper[5050]: I1211 14:10:55.565663 5050 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/750efff9-baf7-4feb-9dbc-6fc187a4350f-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 14:10:55 crc kubenswrapper[5050]: I1211 14:10:55.565673 5050 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a082a10-0ad5-42cc-9cbe-5c15b93a9af2-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 14:10:55 crc kubenswrapper[5050]: I1211 14:10:55.565682 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b84pc\" (UniqueName: \"kubernetes.io/projected/750efff9-baf7-4feb-9dbc-6fc187a4350f-kube-api-access-b84pc\") on node \"crc\" DevicePath \"\"" Dec 11 14:10:55 crc kubenswrapper[5050]: I1211 14:10:55.565690 5050 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/750efff9-baf7-4feb-9dbc-6fc187a4350f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 14:10:55 crc kubenswrapper[5050]: I1211 14:10:55.574064 5050 scope.go:117] "RemoveContainer" containerID="80982f5107cdb80b0b0b443dd9966b245e84e492c7a44d0cc5646abce649be9b" Dec 11 14:10:55 crc kubenswrapper[5050]: E1211 14:10:55.583373 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"80982f5107cdb80b0b0b443dd9966b245e84e492c7a44d0cc5646abce649be9b\": container with ID starting with 80982f5107cdb80b0b0b443dd9966b245e84e492c7a44d0cc5646abce649be9b not found: ID does not exist" containerID="80982f5107cdb80b0b0b443dd9966b245e84e492c7a44d0cc5646abce649be9b" Dec 11 14:10:55 crc kubenswrapper[5050]: I1211 14:10:55.583719 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"80982f5107cdb80b0b0b443dd9966b245e84e492c7a44d0cc5646abce649be9b"} err="failed to get container status \"80982f5107cdb80b0b0b443dd9966b245e84e492c7a44d0cc5646abce649be9b\": rpc error: code = NotFound desc = could not find container \"80982f5107cdb80b0b0b443dd9966b245e84e492c7a44d0cc5646abce649be9b\": container with ID starting with 80982f5107cdb80b0b0b443dd9966b245e84e492c7a44d0cc5646abce649be9b not found: ID does not exist" Dec 11 14:10:55 crc kubenswrapper[5050]: I1211 14:10:55.583889 5050 scope.go:117] "RemoveContainer" containerID="c3532ca3f2438e45923351146f742d11dd5521aec22d1bd4d7dab86e5bdda261" Dec 11 14:10:55 crc kubenswrapper[5050]: E1211 14:10:55.584530 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"c3532ca3f2438e45923351146f742d11dd5521aec22d1bd4d7dab86e5bdda261\": container with ID starting with c3532ca3f2438e45923351146f742d11dd5521aec22d1bd4d7dab86e5bdda261 not found: ID does not exist" containerID="c3532ca3f2438e45923351146f742d11dd5521aec22d1bd4d7dab86e5bdda261" Dec 11 14:10:55 crc kubenswrapper[5050]: I1211 14:10:55.584583 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c3532ca3f2438e45923351146f742d11dd5521aec22d1bd4d7dab86e5bdda261"} err="failed to get container status \"c3532ca3f2438e45923351146f742d11dd5521aec22d1bd4d7dab86e5bdda261\": rpc error: code = NotFound desc = could not find container \"c3532ca3f2438e45923351146f742d11dd5521aec22d1bd4d7dab86e5bdda261\": container with ID starting with c3532ca3f2438e45923351146f742d11dd5521aec22d1bd4d7dab86e5bdda261 not found: ID does not exist" Dec 11 14:10:55 crc kubenswrapper[5050]: I1211 14:10:55.584617 5050 scope.go:117] "RemoveContainer" containerID="99dbb11830672c70c03138355bb7cb30550a454f1f7d54f9f59a6f4c5c10154e" Dec 11 14:10:55 crc kubenswrapper[5050]: I1211 14:10:55.655225 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-fcd6f8f8f-b59kh"] Dec 11 14:10:55 crc kubenswrapper[5050]: E1211 14:10:55.655771 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a082a10-0ad5-42cc-9cbe-5c15b93a9af2" containerName="nova-metadata-metadata" Dec 11 14:10:55 crc kubenswrapper[5050]: I1211 14:10:55.655787 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a082a10-0ad5-42cc-9cbe-5c15b93a9af2" containerName="nova-metadata-metadata" Dec 11 14:10:55 crc kubenswrapper[5050]: E1211 14:10:55.655806 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a082a10-0ad5-42cc-9cbe-5c15b93a9af2" containerName="nova-metadata-log" Dec 11 14:10:55 crc kubenswrapper[5050]: I1211 14:10:55.655812 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a082a10-0ad5-42cc-9cbe-5c15b93a9af2" containerName="nova-metadata-log" Dec 11 14:10:55 crc kubenswrapper[5050]: E1211 14:10:55.655832 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b0ac5e2-e200-454b-928e-93ea9b253ca1" containerName="extract-utilities" Dec 11 14:10:55 crc kubenswrapper[5050]: I1211 14:10:55.655841 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b0ac5e2-e200-454b-928e-93ea9b253ca1" containerName="extract-utilities" Dec 11 14:10:55 crc kubenswrapper[5050]: E1211 14:10:55.655859 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b0ac5e2-e200-454b-928e-93ea9b253ca1" containerName="extract-content" Dec 11 14:10:55 crc kubenswrapper[5050]: I1211 14:10:55.655865 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b0ac5e2-e200-454b-928e-93ea9b253ca1" containerName="extract-content" Dec 11 14:10:55 crc kubenswrapper[5050]: E1211 14:10:55.655882 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b0ac5e2-e200-454b-928e-93ea9b253ca1" containerName="registry-server" Dec 11 14:10:55 crc kubenswrapper[5050]: I1211 14:10:55.655917 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b0ac5e2-e200-454b-928e-93ea9b253ca1" containerName="registry-server" Dec 11 14:10:55 crc kubenswrapper[5050]: E1211 14:10:55.655933 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="750efff9-baf7-4feb-9dbc-6fc187a4350f" containerName="nova-cell1-novncproxy-novncproxy" Dec 11 14:10:55 crc kubenswrapper[5050]: I1211 14:10:55.655941 5050 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="750efff9-baf7-4feb-9dbc-6fc187a4350f" containerName="nova-cell1-novncproxy-novncproxy" Dec 11 14:10:55 crc kubenswrapper[5050]: I1211 14:10:55.656155 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a082a10-0ad5-42cc-9cbe-5c15b93a9af2" containerName="nova-metadata-log" Dec 11 14:10:55 crc kubenswrapper[5050]: I1211 14:10:55.656171 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a082a10-0ad5-42cc-9cbe-5c15b93a9af2" containerName="nova-metadata-metadata" Dec 11 14:10:55 crc kubenswrapper[5050]: I1211 14:10:55.656181 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b0ac5e2-e200-454b-928e-93ea9b253ca1" containerName="registry-server" Dec 11 14:10:55 crc kubenswrapper[5050]: I1211 14:10:55.656205 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="750efff9-baf7-4feb-9dbc-6fc187a4350f" containerName="nova-cell1-novncproxy-novncproxy" Dec 11 14:10:55 crc kubenswrapper[5050]: I1211 14:10:55.657343 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-fcd6f8f8f-b59kh" Dec 11 14:10:55 crc kubenswrapper[5050]: I1211 14:10:55.671266 5050 scope.go:117] "RemoveContainer" containerID="99dbb11830672c70c03138355bb7cb30550a454f1f7d54f9f59a6f4c5c10154e" Dec 11 14:10:55 crc kubenswrapper[5050]: E1211 14:10:55.676586 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"99dbb11830672c70c03138355bb7cb30550a454f1f7d54f9f59a6f4c5c10154e\": container with ID starting with 99dbb11830672c70c03138355bb7cb30550a454f1f7d54f9f59a6f4c5c10154e not found: ID does not exist" containerID="99dbb11830672c70c03138355bb7cb30550a454f1f7d54f9f59a6f4c5c10154e" Dec 11 14:10:55 crc kubenswrapper[5050]: I1211 14:10:55.676635 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"99dbb11830672c70c03138355bb7cb30550a454f1f7d54f9f59a6f4c5c10154e"} err="failed to get container status \"99dbb11830672c70c03138355bb7cb30550a454f1f7d54f9f59a6f4c5c10154e\": rpc error: code = NotFound desc = could not find container \"99dbb11830672c70c03138355bb7cb30550a454f1f7d54f9f59a6f4c5c10154e\": container with ID starting with 99dbb11830672c70c03138355bb7cb30550a454f1f7d54f9f59a6f4c5c10154e not found: ID does not exist" Dec 11 14:10:55 crc kubenswrapper[5050]: I1211 14:10:55.680118 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-fcd6f8f8f-b59kh"] Dec 11 14:10:55 crc kubenswrapper[5050]: I1211 14:10:55.770050 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c443a35b-44e5-495f-b23b-75ff35319194-dns-svc\") pod \"dnsmasq-dns-fcd6f8f8f-b59kh\" (UID: \"c443a35b-44e5-495f-b23b-75ff35319194\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-b59kh" Dec 11 14:10:55 crc kubenswrapper[5050]: I1211 14:10:55.770149 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c443a35b-44e5-495f-b23b-75ff35319194-dns-swift-storage-0\") pod \"dnsmasq-dns-fcd6f8f8f-b59kh\" (UID: \"c443a35b-44e5-495f-b23b-75ff35319194\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-b59kh" Dec 11 14:10:55 crc kubenswrapper[5050]: I1211 14:10:55.770219 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/c443a35b-44e5-495f-b23b-75ff35319194-ovsdbserver-nb\") pod \"dnsmasq-dns-fcd6f8f8f-b59kh\" (UID: \"c443a35b-44e5-495f-b23b-75ff35319194\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-b59kh" Dec 11 14:10:55 crc kubenswrapper[5050]: I1211 14:10:55.770241 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7p25b\" (UniqueName: \"kubernetes.io/projected/c443a35b-44e5-495f-b23b-75ff35319194-kube-api-access-7p25b\") pod \"dnsmasq-dns-fcd6f8f8f-b59kh\" (UID: \"c443a35b-44e5-495f-b23b-75ff35319194\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-b59kh" Dec 11 14:10:55 crc kubenswrapper[5050]: I1211 14:10:55.770274 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c443a35b-44e5-495f-b23b-75ff35319194-config\") pod \"dnsmasq-dns-fcd6f8f8f-b59kh\" (UID: \"c443a35b-44e5-495f-b23b-75ff35319194\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-b59kh" Dec 11 14:10:55 crc kubenswrapper[5050]: I1211 14:10:55.770297 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c443a35b-44e5-495f-b23b-75ff35319194-ovsdbserver-sb\") pod \"dnsmasq-dns-fcd6f8f8f-b59kh\" (UID: \"c443a35b-44e5-495f-b23b-75ff35319194\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-b59kh" Dec 11 14:10:55 crc kubenswrapper[5050]: I1211 14:10:55.799618 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 11 14:10:55 crc kubenswrapper[5050]: I1211 14:10:55.818162 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 11 14:10:55 crc kubenswrapper[5050]: I1211 14:10:55.876658 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c443a35b-44e5-495f-b23b-75ff35319194-dns-svc\") pod \"dnsmasq-dns-fcd6f8f8f-b59kh\" (UID: \"c443a35b-44e5-495f-b23b-75ff35319194\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-b59kh" Dec 11 14:10:55 crc kubenswrapper[5050]: I1211 14:10:55.876798 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c443a35b-44e5-495f-b23b-75ff35319194-dns-swift-storage-0\") pod \"dnsmasq-dns-fcd6f8f8f-b59kh\" (UID: \"c443a35b-44e5-495f-b23b-75ff35319194\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-b59kh" Dec 11 14:10:55 crc kubenswrapper[5050]: I1211 14:10:55.876865 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c443a35b-44e5-495f-b23b-75ff35319194-ovsdbserver-nb\") pod \"dnsmasq-dns-fcd6f8f8f-b59kh\" (UID: \"c443a35b-44e5-495f-b23b-75ff35319194\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-b59kh" Dec 11 14:10:55 crc kubenswrapper[5050]: I1211 14:10:55.876905 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7p25b\" (UniqueName: \"kubernetes.io/projected/c443a35b-44e5-495f-b23b-75ff35319194-kube-api-access-7p25b\") pod \"dnsmasq-dns-fcd6f8f8f-b59kh\" (UID: \"c443a35b-44e5-495f-b23b-75ff35319194\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-b59kh" Dec 11 14:10:55 crc kubenswrapper[5050]: I1211 14:10:55.876954 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c443a35b-44e5-495f-b23b-75ff35319194-config\") pod 
\"dnsmasq-dns-fcd6f8f8f-b59kh\" (UID: \"c443a35b-44e5-495f-b23b-75ff35319194\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-b59kh" Dec 11 14:10:55 crc kubenswrapper[5050]: I1211 14:10:55.876978 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c443a35b-44e5-495f-b23b-75ff35319194-ovsdbserver-sb\") pod \"dnsmasq-dns-fcd6f8f8f-b59kh\" (UID: \"c443a35b-44e5-495f-b23b-75ff35319194\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-b59kh" Dec 11 14:10:55 crc kubenswrapper[5050]: I1211 14:10:55.878162 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c443a35b-44e5-495f-b23b-75ff35319194-ovsdbserver-sb\") pod \"dnsmasq-dns-fcd6f8f8f-b59kh\" (UID: \"c443a35b-44e5-495f-b23b-75ff35319194\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-b59kh" Dec 11 14:10:55 crc kubenswrapper[5050]: I1211 14:10:55.879158 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c443a35b-44e5-495f-b23b-75ff35319194-ovsdbserver-nb\") pod \"dnsmasq-dns-fcd6f8f8f-b59kh\" (UID: \"c443a35b-44e5-495f-b23b-75ff35319194\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-b59kh" Dec 11 14:10:55 crc kubenswrapper[5050]: I1211 14:10:55.880443 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c443a35b-44e5-495f-b23b-75ff35319194-config\") pod \"dnsmasq-dns-fcd6f8f8f-b59kh\" (UID: \"c443a35b-44e5-495f-b23b-75ff35319194\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-b59kh" Dec 11 14:10:55 crc kubenswrapper[5050]: I1211 14:10:55.881406 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c443a35b-44e5-495f-b23b-75ff35319194-dns-svc\") pod \"dnsmasq-dns-fcd6f8f8f-b59kh\" (UID: \"c443a35b-44e5-495f-b23b-75ff35319194\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-b59kh" Dec 11 14:10:55 crc kubenswrapper[5050]: I1211 14:10:55.883555 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c443a35b-44e5-495f-b23b-75ff35319194-dns-swift-storage-0\") pod \"dnsmasq-dns-fcd6f8f8f-b59kh\" (UID: \"c443a35b-44e5-495f-b23b-75ff35319194\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-b59kh" Dec 11 14:10:55 crc kubenswrapper[5050]: I1211 14:10:55.901260 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7p25b\" (UniqueName: \"kubernetes.io/projected/c443a35b-44e5-495f-b23b-75ff35319194-kube-api-access-7p25b\") pod \"dnsmasq-dns-fcd6f8f8f-b59kh\" (UID: \"c443a35b-44e5-495f-b23b-75ff35319194\") " pod="openstack/dnsmasq-dns-fcd6f8f8f-b59kh" Dec 11 14:10:55 crc kubenswrapper[5050]: I1211 14:10:55.901374 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 11 14:10:55 crc kubenswrapper[5050]: I1211 14:10:55.935418 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Dec 11 14:10:55 crc kubenswrapper[5050]: I1211 14:10:55.957007 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 11 14:10:55 crc kubenswrapper[5050]: I1211 14:10:55.959573 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 11 14:10:55 crc kubenswrapper[5050]: I1211 14:10:55.963093 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Dec 11 14:10:55 crc kubenswrapper[5050]: I1211 14:10:55.963417 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Dec 11 14:10:55 crc kubenswrapper[5050]: I1211 14:10:55.964849 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Dec 11 14:10:55 crc kubenswrapper[5050]: I1211 14:10:55.976875 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Dec 11 14:10:55 crc kubenswrapper[5050]: I1211 14:10:55.979720 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 11 14:10:55 crc kubenswrapper[5050]: I1211 14:10:55.984782 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Dec 11 14:10:55 crc kubenswrapper[5050]: I1211 14:10:55.984978 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Dec 11 14:10:55 crc kubenswrapper[5050]: I1211 14:10:55.998721 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 11 14:10:56 crc kubenswrapper[5050]: I1211 14:10:56.007653 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-fcd6f8f8f-b59kh" Dec 11 14:10:56 crc kubenswrapper[5050]: I1211 14:10:56.008795 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 11 14:10:56 crc kubenswrapper[5050]: I1211 14:10:56.082123 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2fc263ce-85a4-4182-87ac-a5cd6d48b65a-config-data\") pod \"nova-metadata-0\" (UID: \"2fc263ce-85a4-4182-87ac-a5cd6d48b65a\") " pod="openstack/nova-metadata-0" Dec 11 14:10:56 crc kubenswrapper[5050]: I1211 14:10:56.083373 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ca28ba4-2b37-4836-9d51-8dea84046163-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"0ca28ba4-2b37-4836-9d51-8dea84046163\") " pod="openstack/nova-cell1-novncproxy-0" Dec 11 14:10:56 crc kubenswrapper[5050]: I1211 14:10:56.083528 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/2fc263ce-85a4-4182-87ac-a5cd6d48b65a-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"2fc263ce-85a4-4182-87ac-a5cd6d48b65a\") " pod="openstack/nova-metadata-0" Dec 11 14:10:56 crc kubenswrapper[5050]: I1211 14:10:56.083609 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jzkzr\" (UniqueName: \"kubernetes.io/projected/2fc263ce-85a4-4182-87ac-a5cd6d48b65a-kube-api-access-jzkzr\") pod \"nova-metadata-0\" (UID: \"2fc263ce-85a4-4182-87ac-a5cd6d48b65a\") " pod="openstack/nova-metadata-0" Dec 11 14:10:56 crc kubenswrapper[5050]: I1211 14:10:56.083708 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-plhh6\" (UniqueName: 
\"kubernetes.io/projected/0ca28ba4-2b37-4836-9d51-8dea84046163-kube-api-access-plhh6\") pod \"nova-cell1-novncproxy-0\" (UID: \"0ca28ba4-2b37-4836-9d51-8dea84046163\") " pod="openstack/nova-cell1-novncproxy-0" Dec 11 14:10:56 crc kubenswrapper[5050]: I1211 14:10:56.083841 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ca28ba4-2b37-4836-9d51-8dea84046163-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"0ca28ba4-2b37-4836-9d51-8dea84046163\") " pod="openstack/nova-cell1-novncproxy-0" Dec 11 14:10:56 crc kubenswrapper[5050]: I1211 14:10:56.083942 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2fc263ce-85a4-4182-87ac-a5cd6d48b65a-logs\") pod \"nova-metadata-0\" (UID: \"2fc263ce-85a4-4182-87ac-a5cd6d48b65a\") " pod="openstack/nova-metadata-0" Dec 11 14:10:56 crc kubenswrapper[5050]: I1211 14:10:56.083995 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fc263ce-85a4-4182-87ac-a5cd6d48b65a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"2fc263ce-85a4-4182-87ac-a5cd6d48b65a\") " pod="openstack/nova-metadata-0" Dec 11 14:10:56 crc kubenswrapper[5050]: I1211 14:10:56.084041 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ca28ba4-2b37-4836-9d51-8dea84046163-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"0ca28ba4-2b37-4836-9d51-8dea84046163\") " pod="openstack/nova-cell1-novncproxy-0" Dec 11 14:10:56 crc kubenswrapper[5050]: I1211 14:10:56.084108 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ca28ba4-2b37-4836-9d51-8dea84046163-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"0ca28ba4-2b37-4836-9d51-8dea84046163\") " pod="openstack/nova-cell1-novncproxy-0" Dec 11 14:10:56 crc kubenswrapper[5050]: I1211 14:10:56.189381 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2fc263ce-85a4-4182-87ac-a5cd6d48b65a-config-data\") pod \"nova-metadata-0\" (UID: \"2fc263ce-85a4-4182-87ac-a5cd6d48b65a\") " pod="openstack/nova-metadata-0" Dec 11 14:10:56 crc kubenswrapper[5050]: I1211 14:10:56.189441 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ca28ba4-2b37-4836-9d51-8dea84046163-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"0ca28ba4-2b37-4836-9d51-8dea84046163\") " pod="openstack/nova-cell1-novncproxy-0" Dec 11 14:10:56 crc kubenswrapper[5050]: I1211 14:10:56.189477 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/2fc263ce-85a4-4182-87ac-a5cd6d48b65a-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"2fc263ce-85a4-4182-87ac-a5cd6d48b65a\") " pod="openstack/nova-metadata-0" Dec 11 14:10:56 crc kubenswrapper[5050]: I1211 14:10:56.189503 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jzkzr\" (UniqueName: \"kubernetes.io/projected/2fc263ce-85a4-4182-87ac-a5cd6d48b65a-kube-api-access-jzkzr\") 
pod \"nova-metadata-0\" (UID: \"2fc263ce-85a4-4182-87ac-a5cd6d48b65a\") " pod="openstack/nova-metadata-0" Dec 11 14:10:56 crc kubenswrapper[5050]: I1211 14:10:56.189536 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-plhh6\" (UniqueName: \"kubernetes.io/projected/0ca28ba4-2b37-4836-9d51-8dea84046163-kube-api-access-plhh6\") pod \"nova-cell1-novncproxy-0\" (UID: \"0ca28ba4-2b37-4836-9d51-8dea84046163\") " pod="openstack/nova-cell1-novncproxy-0" Dec 11 14:10:56 crc kubenswrapper[5050]: I1211 14:10:56.189579 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ca28ba4-2b37-4836-9d51-8dea84046163-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"0ca28ba4-2b37-4836-9d51-8dea84046163\") " pod="openstack/nova-cell1-novncproxy-0" Dec 11 14:10:56 crc kubenswrapper[5050]: I1211 14:10:56.189613 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2fc263ce-85a4-4182-87ac-a5cd6d48b65a-logs\") pod \"nova-metadata-0\" (UID: \"2fc263ce-85a4-4182-87ac-a5cd6d48b65a\") " pod="openstack/nova-metadata-0" Dec 11 14:10:56 crc kubenswrapper[5050]: I1211 14:10:56.189633 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fc263ce-85a4-4182-87ac-a5cd6d48b65a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"2fc263ce-85a4-4182-87ac-a5cd6d48b65a\") " pod="openstack/nova-metadata-0" Dec 11 14:10:56 crc kubenswrapper[5050]: I1211 14:10:56.189662 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ca28ba4-2b37-4836-9d51-8dea84046163-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"0ca28ba4-2b37-4836-9d51-8dea84046163\") " pod="openstack/nova-cell1-novncproxy-0" Dec 11 14:10:56 crc kubenswrapper[5050]: I1211 14:10:56.189689 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ca28ba4-2b37-4836-9d51-8dea84046163-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"0ca28ba4-2b37-4836-9d51-8dea84046163\") " pod="openstack/nova-cell1-novncproxy-0" Dec 11 14:10:56 crc kubenswrapper[5050]: I1211 14:10:56.191214 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2fc263ce-85a4-4182-87ac-a5cd6d48b65a-logs\") pod \"nova-metadata-0\" (UID: \"2fc263ce-85a4-4182-87ac-a5cd6d48b65a\") " pod="openstack/nova-metadata-0" Dec 11 14:10:56 crc kubenswrapper[5050]: I1211 14:10:56.196538 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ca28ba4-2b37-4836-9d51-8dea84046163-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"0ca28ba4-2b37-4836-9d51-8dea84046163\") " pod="openstack/nova-cell1-novncproxy-0" Dec 11 14:10:56 crc kubenswrapper[5050]: I1211 14:10:56.198898 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ca28ba4-2b37-4836-9d51-8dea84046163-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"0ca28ba4-2b37-4836-9d51-8dea84046163\") " pod="openstack/nova-cell1-novncproxy-0" Dec 11 14:10:56 crc kubenswrapper[5050]: I1211 14:10:56.199687 5050 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2fc263ce-85a4-4182-87ac-a5cd6d48b65a-config-data\") pod \"nova-metadata-0\" (UID: \"2fc263ce-85a4-4182-87ac-a5cd6d48b65a\") " pod="openstack/nova-metadata-0" Dec 11 14:10:56 crc kubenswrapper[5050]: I1211 14:10:56.200186 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fc263ce-85a4-4182-87ac-a5cd6d48b65a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"2fc263ce-85a4-4182-87ac-a5cd6d48b65a\") " pod="openstack/nova-metadata-0" Dec 11 14:10:56 crc kubenswrapper[5050]: I1211 14:10:56.202407 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ca28ba4-2b37-4836-9d51-8dea84046163-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"0ca28ba4-2b37-4836-9d51-8dea84046163\") " pod="openstack/nova-cell1-novncproxy-0" Dec 11 14:10:56 crc kubenswrapper[5050]: I1211 14:10:56.212854 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ca28ba4-2b37-4836-9d51-8dea84046163-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"0ca28ba4-2b37-4836-9d51-8dea84046163\") " pod="openstack/nova-cell1-novncproxy-0" Dec 11 14:10:56 crc kubenswrapper[5050]: I1211 14:10:56.213239 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/2fc263ce-85a4-4182-87ac-a5cd6d48b65a-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"2fc263ce-85a4-4182-87ac-a5cd6d48b65a\") " pod="openstack/nova-metadata-0" Dec 11 14:10:56 crc kubenswrapper[5050]: I1211 14:10:56.216555 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jzkzr\" (UniqueName: \"kubernetes.io/projected/2fc263ce-85a4-4182-87ac-a5cd6d48b65a-kube-api-access-jzkzr\") pod \"nova-metadata-0\" (UID: \"2fc263ce-85a4-4182-87ac-a5cd6d48b65a\") " pod="openstack/nova-metadata-0" Dec 11 14:10:56 crc kubenswrapper[5050]: I1211 14:10:56.216799 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-plhh6\" (UniqueName: \"kubernetes.io/projected/0ca28ba4-2b37-4836-9d51-8dea84046163-kube-api-access-plhh6\") pod \"nova-cell1-novncproxy-0\" (UID: \"0ca28ba4-2b37-4836-9d51-8dea84046163\") " pod="openstack/nova-cell1-novncproxy-0" Dec 11 14:10:56 crc kubenswrapper[5050]: I1211 14:10:56.334605 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 11 14:10:56 crc kubenswrapper[5050]: I1211 14:10:56.349863 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 11 14:10:56 crc kubenswrapper[5050]: I1211 14:10:56.604544 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-fcd6f8f8f-b59kh"] Dec 11 14:10:56 crc kubenswrapper[5050]: W1211 14:10:56.608188 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc443a35b_44e5_495f_b23b_75ff35319194.slice/crio-6a70c42a01f461d0f0d8ee21f2bb944842009eb72f63c5ed2a6307203f9e4767 WatchSource:0}: Error finding container 6a70c42a01f461d0f0d8ee21f2bb944842009eb72f63c5ed2a6307203f9e4767: Status 404 returned error can't find the container with id 6a70c42a01f461d0f0d8ee21f2bb944842009eb72f63c5ed2a6307203f9e4767 Dec 11 14:10:56 crc kubenswrapper[5050]: I1211 14:10:56.663622 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 11 14:10:56 crc kubenswrapper[5050]: I1211 14:10:56.738179 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 11 14:10:57 crc kubenswrapper[5050]: I1211 14:10:57.523155 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"0ca28ba4-2b37-4836-9d51-8dea84046163","Type":"ContainerStarted","Data":"c999df53c600f82fa92bc84444337d1373326ba3f2b76682afad53362cb34c3d"} Dec 11 14:10:57 crc kubenswrapper[5050]: I1211 14:10:57.523928 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"0ca28ba4-2b37-4836-9d51-8dea84046163","Type":"ContainerStarted","Data":"603a01b16aa18321f03b4cf26f49d47fea569c2a318a7b8a94f0b9703c75750d"} Dec 11 14:10:57 crc kubenswrapper[5050]: I1211 14:10:57.532508 5050 generic.go:334] "Generic (PLEG): container finished" podID="c443a35b-44e5-495f-b23b-75ff35319194" containerID="da35097d4938c80747d4330c14a62405c62267dddef20cb1d8041548fb7caa56" exitCode=0 Dec 11 14:10:57 crc kubenswrapper[5050]: I1211 14:10:57.532632 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fcd6f8f8f-b59kh" event={"ID":"c443a35b-44e5-495f-b23b-75ff35319194","Type":"ContainerDied","Data":"da35097d4938c80747d4330c14a62405c62267dddef20cb1d8041548fb7caa56"} Dec 11 14:10:57 crc kubenswrapper[5050]: I1211 14:10:57.532668 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fcd6f8f8f-b59kh" event={"ID":"c443a35b-44e5-495f-b23b-75ff35319194","Type":"ContainerStarted","Data":"6a70c42a01f461d0f0d8ee21f2bb944842009eb72f63c5ed2a6307203f9e4767"} Dec 11 14:10:57 crc kubenswrapper[5050]: I1211 14:10:57.541648 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2fc263ce-85a4-4182-87ac-a5cd6d48b65a","Type":"ContainerStarted","Data":"907158c68cab64311c7fc08de659e03e3f32e17b40d777736a044df8f88009bc"} Dec 11 14:10:57 crc kubenswrapper[5050]: I1211 14:10:57.541742 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2fc263ce-85a4-4182-87ac-a5cd6d48b65a","Type":"ContainerStarted","Data":"65116f2a8027939135081a58a517923b4acc7016a42da1a1008f62b9e4677834"} Dec 11 14:10:57 crc kubenswrapper[5050]: I1211 14:10:57.541765 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2fc263ce-85a4-4182-87ac-a5cd6d48b65a","Type":"ContainerStarted","Data":"dd4e926bcc126f22d11295efc1781f81ecc1fb76661af81d539ba41f6e8382e5"} Dec 11 14:10:57 crc kubenswrapper[5050]: I1211 14:10:57.549940 5050 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.549917265 podStartE2EDuration="2.549917265s" podCreationTimestamp="2025-12-11 14:10:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 14:10:57.547389747 +0000 UTC m=+1348.391112333" watchObservedRunningTime="2025-12-11 14:10:57.549917265 +0000 UTC m=+1348.393639851" Dec 11 14:10:57 crc kubenswrapper[5050]: I1211 14:10:57.563952 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a082a10-0ad5-42cc-9cbe-5c15b93a9af2" path="/var/lib/kubelet/pods/3a082a10-0ad5-42cc-9cbe-5c15b93a9af2/volumes" Dec 11 14:10:57 crc kubenswrapper[5050]: I1211 14:10:57.564698 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="750efff9-baf7-4feb-9dbc-6fc187a4350f" path="/var/lib/kubelet/pods/750efff9-baf7-4feb-9dbc-6fc187a4350f/volumes" Dec 11 14:10:57 crc kubenswrapper[5050]: I1211 14:10:57.578312 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.578286771 podStartE2EDuration="2.578286771s" podCreationTimestamp="2025-12-11 14:10:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 14:10:57.569219826 +0000 UTC m=+1348.412942412" watchObservedRunningTime="2025-12-11 14:10:57.578286771 +0000 UTC m=+1348.422009357" Dec 11 14:10:58 crc kubenswrapper[5050]: I1211 14:10:58.060386 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 11 14:10:58 crc kubenswrapper[5050]: I1211 14:10:58.060764 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5af14101-b5b1-4735-9b62-83f55267c624" containerName="ceilometer-central-agent" containerID="cri-o://f01971a9930b0d296d2f00a4b54b7e230d9d8aa4dfba82f2fd676ccec111c3ca" gracePeriod=30 Dec 11 14:10:58 crc kubenswrapper[5050]: I1211 14:10:58.061888 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5af14101-b5b1-4735-9b62-83f55267c624" containerName="proxy-httpd" containerID="cri-o://63baec29346df5fb0b3439eea0c2936e227931e33f5f093c4945067f2937f575" gracePeriod=30 Dec 11 14:10:58 crc kubenswrapper[5050]: I1211 14:10:58.061948 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5af14101-b5b1-4735-9b62-83f55267c624" containerName="sg-core" containerID="cri-o://cb959dc2fc3437aa8531276c17c72c1b2256f354bc0fdd721185755e63372e1c" gracePeriod=30 Dec 11 14:10:58 crc kubenswrapper[5050]: I1211 14:10:58.062030 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5af14101-b5b1-4735-9b62-83f55267c624" containerName="ceilometer-notification-agent" containerID="cri-o://b38da25d4497b4e86576a5d45c1b504d4746d4b019c1057ba04fb00c29adfed1" gracePeriod=30 Dec 11 14:10:58 crc kubenswrapper[5050]: I1211 14:10:58.167436 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="5af14101-b5b1-4735-9b62-83f55267c624" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.192:3000/\": read tcp 10.217.0.2:40304->10.217.0.192:3000: read: connection reset by peer" Dec 11 14:10:58 crc kubenswrapper[5050]: I1211 14:10:58.554501 5050 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fcd6f8f8f-b59kh" event={"ID":"c443a35b-44e5-495f-b23b-75ff35319194","Type":"ContainerStarted","Data":"5891d989abb1d991a1a438c7eb2a7b5afaeabc910991c4571e2dac6645358de5"} Dec 11 14:10:58 crc kubenswrapper[5050]: I1211 14:10:58.555131 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-fcd6f8f8f-b59kh" Dec 11 14:10:58 crc kubenswrapper[5050]: I1211 14:10:58.558363 5050 generic.go:334] "Generic (PLEG): container finished" podID="5af14101-b5b1-4735-9b62-83f55267c624" containerID="63baec29346df5fb0b3439eea0c2936e227931e33f5f093c4945067f2937f575" exitCode=0 Dec 11 14:10:58 crc kubenswrapper[5050]: I1211 14:10:58.558404 5050 generic.go:334] "Generic (PLEG): container finished" podID="5af14101-b5b1-4735-9b62-83f55267c624" containerID="cb959dc2fc3437aa8531276c17c72c1b2256f354bc0fdd721185755e63372e1c" exitCode=2 Dec 11 14:10:58 crc kubenswrapper[5050]: I1211 14:10:58.558680 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5af14101-b5b1-4735-9b62-83f55267c624","Type":"ContainerDied","Data":"63baec29346df5fb0b3439eea0c2936e227931e33f5f093c4945067f2937f575"} Dec 11 14:10:58 crc kubenswrapper[5050]: I1211 14:10:58.558811 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5af14101-b5b1-4735-9b62-83f55267c624","Type":"ContainerDied","Data":"cb959dc2fc3437aa8531276c17c72c1b2256f354bc0fdd721185755e63372e1c"} Dec 11 14:10:58 crc kubenswrapper[5050]: I1211 14:10:58.594942 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-fcd6f8f8f-b59kh" podStartSLOduration=3.5949161050000003 podStartE2EDuration="3.594916105s" podCreationTimestamp="2025-12-11 14:10:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 14:10:58.58991759 +0000 UTC m=+1349.433640186" watchObservedRunningTime="2025-12-11 14:10:58.594916105 +0000 UTC m=+1349.438638691" Dec 11 14:10:58 crc kubenswrapper[5050]: I1211 14:10:58.855061 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 11 14:10:58 crc kubenswrapper[5050]: I1211 14:10:58.855483 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="89a7ecdc-c4ec-4097-a100-c74ebb8b86ff" containerName="nova-api-api" containerID="cri-o://59b103688d380bd3ea1ccbe23b8e50d69ce7096f799e44614d95596b4ae9353c" gracePeriod=30 Dec 11 14:10:58 crc kubenswrapper[5050]: I1211 14:10:58.855420 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="89a7ecdc-c4ec-4097-a100-c74ebb8b86ff" containerName="nova-api-log" containerID="cri-o://8101049b849d3c3de301d09aa49baf5e04594ebbf9f8f8020238d54f0b38a06d" gracePeriod=30 Dec 11 14:10:59 crc kubenswrapper[5050]: I1211 14:10:59.573547 5050 generic.go:334] "Generic (PLEG): container finished" podID="5af14101-b5b1-4735-9b62-83f55267c624" containerID="f01971a9930b0d296d2f00a4b54b7e230d9d8aa4dfba82f2fd676ccec111c3ca" exitCode=0 Dec 11 14:10:59 crc kubenswrapper[5050]: I1211 14:10:59.573624 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5af14101-b5b1-4735-9b62-83f55267c624","Type":"ContainerDied","Data":"f01971a9930b0d296d2f00a4b54b7e230d9d8aa4dfba82f2fd676ccec111c3ca"} Dec 11 14:10:59 crc kubenswrapper[5050]: I1211 14:10:59.576678 5050 generic.go:334] 
"Generic (PLEG): container finished" podID="89a7ecdc-c4ec-4097-a100-c74ebb8b86ff" containerID="8101049b849d3c3de301d09aa49baf5e04594ebbf9f8f8020238d54f0b38a06d" exitCode=143 Dec 11 14:10:59 crc kubenswrapper[5050]: I1211 14:10:59.576746 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"89a7ecdc-c4ec-4097-a100-c74ebb8b86ff","Type":"ContainerDied","Data":"8101049b849d3c3de301d09aa49baf5e04594ebbf9f8f8020238d54f0b38a06d"} Dec 11 14:11:00 crc kubenswrapper[5050]: I1211 14:11:00.281931 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 11 14:11:00 crc kubenswrapper[5050]: I1211 14:11:00.398838 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5af14101-b5b1-4735-9b62-83f55267c624-config-data\") pod \"5af14101-b5b1-4735-9b62-83f55267c624\" (UID: \"5af14101-b5b1-4735-9b62-83f55267c624\") " Dec 11 14:11:00 crc kubenswrapper[5050]: I1211 14:11:00.398893 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xhktj\" (UniqueName: \"kubernetes.io/projected/5af14101-b5b1-4735-9b62-83f55267c624-kube-api-access-xhktj\") pod \"5af14101-b5b1-4735-9b62-83f55267c624\" (UID: \"5af14101-b5b1-4735-9b62-83f55267c624\") " Dec 11 14:11:00 crc kubenswrapper[5050]: I1211 14:11:00.398923 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5af14101-b5b1-4735-9b62-83f55267c624-run-httpd\") pod \"5af14101-b5b1-4735-9b62-83f55267c624\" (UID: \"5af14101-b5b1-4735-9b62-83f55267c624\") " Dec 11 14:11:00 crc kubenswrapper[5050]: I1211 14:11:00.398972 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5af14101-b5b1-4735-9b62-83f55267c624-log-httpd\") pod \"5af14101-b5b1-4735-9b62-83f55267c624\" (UID: \"5af14101-b5b1-4735-9b62-83f55267c624\") " Dec 11 14:11:00 crc kubenswrapper[5050]: I1211 14:11:00.399156 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/5af14101-b5b1-4735-9b62-83f55267c624-ceilometer-tls-certs\") pod \"5af14101-b5b1-4735-9b62-83f55267c624\" (UID: \"5af14101-b5b1-4735-9b62-83f55267c624\") " Dec 11 14:11:00 crc kubenswrapper[5050]: I1211 14:11:00.399258 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5af14101-b5b1-4735-9b62-83f55267c624-sg-core-conf-yaml\") pod \"5af14101-b5b1-4735-9b62-83f55267c624\" (UID: \"5af14101-b5b1-4735-9b62-83f55267c624\") " Dec 11 14:11:00 crc kubenswrapper[5050]: I1211 14:11:00.399327 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5af14101-b5b1-4735-9b62-83f55267c624-combined-ca-bundle\") pod \"5af14101-b5b1-4735-9b62-83f55267c624\" (UID: \"5af14101-b5b1-4735-9b62-83f55267c624\") " Dec 11 14:11:00 crc kubenswrapper[5050]: I1211 14:11:00.399395 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5af14101-b5b1-4735-9b62-83f55267c624-scripts\") pod \"5af14101-b5b1-4735-9b62-83f55267c624\" (UID: \"5af14101-b5b1-4735-9b62-83f55267c624\") " Dec 11 14:11:00 crc kubenswrapper[5050]: I1211 14:11:00.400707 5050 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5af14101-b5b1-4735-9b62-83f55267c624-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "5af14101-b5b1-4735-9b62-83f55267c624" (UID: "5af14101-b5b1-4735-9b62-83f55267c624"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 14:11:00 crc kubenswrapper[5050]: I1211 14:11:00.400903 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5af14101-b5b1-4735-9b62-83f55267c624-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "5af14101-b5b1-4735-9b62-83f55267c624" (UID: "5af14101-b5b1-4735-9b62-83f55267c624"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 14:11:00 crc kubenswrapper[5050]: I1211 14:11:00.407904 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5af14101-b5b1-4735-9b62-83f55267c624-kube-api-access-xhktj" (OuterVolumeSpecName: "kube-api-access-xhktj") pod "5af14101-b5b1-4735-9b62-83f55267c624" (UID: "5af14101-b5b1-4735-9b62-83f55267c624"). InnerVolumeSpecName "kube-api-access-xhktj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 14:11:00 crc kubenswrapper[5050]: I1211 14:11:00.408613 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5af14101-b5b1-4735-9b62-83f55267c624-scripts" (OuterVolumeSpecName: "scripts") pod "5af14101-b5b1-4735-9b62-83f55267c624" (UID: "5af14101-b5b1-4735-9b62-83f55267c624"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:11:00 crc kubenswrapper[5050]: I1211 14:11:00.453461 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5af14101-b5b1-4735-9b62-83f55267c624-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "5af14101-b5b1-4735-9b62-83f55267c624" (UID: "5af14101-b5b1-4735-9b62-83f55267c624"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:11:00 crc kubenswrapper[5050]: I1211 14:11:00.492217 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5af14101-b5b1-4735-9b62-83f55267c624-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "5af14101-b5b1-4735-9b62-83f55267c624" (UID: "5af14101-b5b1-4735-9b62-83f55267c624"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:11:00 crc kubenswrapper[5050]: I1211 14:11:00.502595 5050 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5af14101-b5b1-4735-9b62-83f55267c624-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 14:11:00 crc kubenswrapper[5050]: I1211 14:11:00.502635 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xhktj\" (UniqueName: \"kubernetes.io/projected/5af14101-b5b1-4735-9b62-83f55267c624-kube-api-access-xhktj\") on node \"crc\" DevicePath \"\"" Dec 11 14:11:00 crc kubenswrapper[5050]: I1211 14:11:00.502644 5050 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5af14101-b5b1-4735-9b62-83f55267c624-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 11 14:11:00 crc kubenswrapper[5050]: I1211 14:11:00.502654 5050 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5af14101-b5b1-4735-9b62-83f55267c624-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 11 14:11:00 crc kubenswrapper[5050]: I1211 14:11:00.502664 5050 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/5af14101-b5b1-4735-9b62-83f55267c624-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 11 14:11:00 crc kubenswrapper[5050]: I1211 14:11:00.502671 5050 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5af14101-b5b1-4735-9b62-83f55267c624-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 11 14:11:00 crc kubenswrapper[5050]: I1211 14:11:00.532337 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5af14101-b5b1-4735-9b62-83f55267c624-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5af14101-b5b1-4735-9b62-83f55267c624" (UID: "5af14101-b5b1-4735-9b62-83f55267c624"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:11:00 crc kubenswrapper[5050]: I1211 14:11:00.546820 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5af14101-b5b1-4735-9b62-83f55267c624-config-data" (OuterVolumeSpecName: "config-data") pod "5af14101-b5b1-4735-9b62-83f55267c624" (UID: "5af14101-b5b1-4735-9b62-83f55267c624"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:11:00 crc kubenswrapper[5050]: I1211 14:11:00.604687 5050 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5af14101-b5b1-4735-9b62-83f55267c624-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 14:11:00 crc kubenswrapper[5050]: I1211 14:11:00.604727 5050 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5af14101-b5b1-4735-9b62-83f55267c624-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 14:11:00 crc kubenswrapper[5050]: I1211 14:11:00.608781 5050 generic.go:334] "Generic (PLEG): container finished" podID="5af14101-b5b1-4735-9b62-83f55267c624" containerID="b38da25d4497b4e86576a5d45c1b504d4746d4b019c1057ba04fb00c29adfed1" exitCode=0 Dec 11 14:11:00 crc kubenswrapper[5050]: I1211 14:11:00.608842 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5af14101-b5b1-4735-9b62-83f55267c624","Type":"ContainerDied","Data":"b38da25d4497b4e86576a5d45c1b504d4746d4b019c1057ba04fb00c29adfed1"} Dec 11 14:11:00 crc kubenswrapper[5050]: I1211 14:11:00.608882 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5af14101-b5b1-4735-9b62-83f55267c624","Type":"ContainerDied","Data":"ec5cde7a993b0ae9ebab4dc75749d892f0b4d84209a58883f1d91ba0daf287bd"} Dec 11 14:11:00 crc kubenswrapper[5050]: I1211 14:11:00.608906 5050 scope.go:117] "RemoveContainer" containerID="63baec29346df5fb0b3439eea0c2936e227931e33f5f093c4945067f2937f575" Dec 11 14:11:00 crc kubenswrapper[5050]: I1211 14:11:00.608848 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 11 14:11:00 crc kubenswrapper[5050]: I1211 14:11:00.648378 5050 scope.go:117] "RemoveContainer" containerID="cb959dc2fc3437aa8531276c17c72c1b2256f354bc0fdd721185755e63372e1c" Dec 11 14:11:00 crc kubenswrapper[5050]: I1211 14:11:00.661076 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 11 14:11:00 crc kubenswrapper[5050]: I1211 14:11:00.679145 5050 scope.go:117] "RemoveContainer" containerID="b38da25d4497b4e86576a5d45c1b504d4746d4b019c1057ba04fb00c29adfed1" Dec 11 14:11:00 crc kubenswrapper[5050]: I1211 14:11:00.679374 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 11 14:11:00 crc kubenswrapper[5050]: I1211 14:11:00.686790 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 11 14:11:00 crc kubenswrapper[5050]: E1211 14:11:00.687410 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5af14101-b5b1-4735-9b62-83f55267c624" containerName="proxy-httpd" Dec 11 14:11:00 crc kubenswrapper[5050]: I1211 14:11:00.687438 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="5af14101-b5b1-4735-9b62-83f55267c624" containerName="proxy-httpd" Dec 11 14:11:00 crc kubenswrapper[5050]: E1211 14:11:00.687454 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5af14101-b5b1-4735-9b62-83f55267c624" containerName="ceilometer-notification-agent" Dec 11 14:11:00 crc kubenswrapper[5050]: I1211 14:11:00.687466 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="5af14101-b5b1-4735-9b62-83f55267c624" containerName="ceilometer-notification-agent" Dec 11 14:11:00 crc kubenswrapper[5050]: E1211 14:11:00.687478 5050 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="5af14101-b5b1-4735-9b62-83f55267c624" containerName="sg-core" Dec 11 14:11:00 crc kubenswrapper[5050]: I1211 14:11:00.687486 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="5af14101-b5b1-4735-9b62-83f55267c624" containerName="sg-core" Dec 11 14:11:00 crc kubenswrapper[5050]: E1211 14:11:00.687502 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5af14101-b5b1-4735-9b62-83f55267c624" containerName="ceilometer-central-agent" Dec 11 14:11:00 crc kubenswrapper[5050]: I1211 14:11:00.687509 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="5af14101-b5b1-4735-9b62-83f55267c624" containerName="ceilometer-central-agent" Dec 11 14:11:00 crc kubenswrapper[5050]: I1211 14:11:00.687782 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="5af14101-b5b1-4735-9b62-83f55267c624" containerName="sg-core" Dec 11 14:11:00 crc kubenswrapper[5050]: I1211 14:11:00.687812 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="5af14101-b5b1-4735-9b62-83f55267c624" containerName="proxy-httpd" Dec 11 14:11:00 crc kubenswrapper[5050]: I1211 14:11:00.687823 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="5af14101-b5b1-4735-9b62-83f55267c624" containerName="ceilometer-central-agent" Dec 11 14:11:00 crc kubenswrapper[5050]: I1211 14:11:00.687842 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="5af14101-b5b1-4735-9b62-83f55267c624" containerName="ceilometer-notification-agent" Dec 11 14:11:00 crc kubenswrapper[5050]: I1211 14:11:00.690881 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 11 14:11:00 crc kubenswrapper[5050]: I1211 14:11:00.698455 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Dec 11 14:11:00 crc kubenswrapper[5050]: I1211 14:11:00.698724 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 11 14:11:00 crc kubenswrapper[5050]: I1211 14:11:00.698983 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 11 14:11:00 crc kubenswrapper[5050]: I1211 14:11:00.709143 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 11 14:11:00 crc kubenswrapper[5050]: I1211 14:11:00.731712 5050 scope.go:117] "RemoveContainer" containerID="f01971a9930b0d296d2f00a4b54b7e230d9d8aa4dfba82f2fd676ccec111c3ca" Dec 11 14:11:00 crc kubenswrapper[5050]: I1211 14:11:00.769362 5050 scope.go:117] "RemoveContainer" containerID="63baec29346df5fb0b3439eea0c2936e227931e33f5f093c4945067f2937f575" Dec 11 14:11:00 crc kubenswrapper[5050]: E1211 14:11:00.770168 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"63baec29346df5fb0b3439eea0c2936e227931e33f5f093c4945067f2937f575\": container with ID starting with 63baec29346df5fb0b3439eea0c2936e227931e33f5f093c4945067f2937f575 not found: ID does not exist" containerID="63baec29346df5fb0b3439eea0c2936e227931e33f5f093c4945067f2937f575" Dec 11 14:11:00 crc kubenswrapper[5050]: I1211 14:11:00.770224 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"63baec29346df5fb0b3439eea0c2936e227931e33f5f093c4945067f2937f575"} err="failed to get container status \"63baec29346df5fb0b3439eea0c2936e227931e33f5f093c4945067f2937f575\": rpc error: code = NotFound desc = could not find container 
\"63baec29346df5fb0b3439eea0c2936e227931e33f5f093c4945067f2937f575\": container with ID starting with 63baec29346df5fb0b3439eea0c2936e227931e33f5f093c4945067f2937f575 not found: ID does not exist" Dec 11 14:11:00 crc kubenswrapper[5050]: I1211 14:11:00.770258 5050 scope.go:117] "RemoveContainer" containerID="cb959dc2fc3437aa8531276c17c72c1b2256f354bc0fdd721185755e63372e1c" Dec 11 14:11:00 crc kubenswrapper[5050]: E1211 14:11:00.772394 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cb959dc2fc3437aa8531276c17c72c1b2256f354bc0fdd721185755e63372e1c\": container with ID starting with cb959dc2fc3437aa8531276c17c72c1b2256f354bc0fdd721185755e63372e1c not found: ID does not exist" containerID="cb959dc2fc3437aa8531276c17c72c1b2256f354bc0fdd721185755e63372e1c" Dec 11 14:11:00 crc kubenswrapper[5050]: I1211 14:11:00.772447 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb959dc2fc3437aa8531276c17c72c1b2256f354bc0fdd721185755e63372e1c"} err="failed to get container status \"cb959dc2fc3437aa8531276c17c72c1b2256f354bc0fdd721185755e63372e1c\": rpc error: code = NotFound desc = could not find container \"cb959dc2fc3437aa8531276c17c72c1b2256f354bc0fdd721185755e63372e1c\": container with ID starting with cb959dc2fc3437aa8531276c17c72c1b2256f354bc0fdd721185755e63372e1c not found: ID does not exist" Dec 11 14:11:00 crc kubenswrapper[5050]: I1211 14:11:00.772481 5050 scope.go:117] "RemoveContainer" containerID="b38da25d4497b4e86576a5d45c1b504d4746d4b019c1057ba04fb00c29adfed1" Dec 11 14:11:00 crc kubenswrapper[5050]: E1211 14:11:00.772830 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b38da25d4497b4e86576a5d45c1b504d4746d4b019c1057ba04fb00c29adfed1\": container with ID starting with b38da25d4497b4e86576a5d45c1b504d4746d4b019c1057ba04fb00c29adfed1 not found: ID does not exist" containerID="b38da25d4497b4e86576a5d45c1b504d4746d4b019c1057ba04fb00c29adfed1" Dec 11 14:11:00 crc kubenswrapper[5050]: I1211 14:11:00.772872 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b38da25d4497b4e86576a5d45c1b504d4746d4b019c1057ba04fb00c29adfed1"} err="failed to get container status \"b38da25d4497b4e86576a5d45c1b504d4746d4b019c1057ba04fb00c29adfed1\": rpc error: code = NotFound desc = could not find container \"b38da25d4497b4e86576a5d45c1b504d4746d4b019c1057ba04fb00c29adfed1\": container with ID starting with b38da25d4497b4e86576a5d45c1b504d4746d4b019c1057ba04fb00c29adfed1 not found: ID does not exist" Dec 11 14:11:00 crc kubenswrapper[5050]: I1211 14:11:00.772901 5050 scope.go:117] "RemoveContainer" containerID="f01971a9930b0d296d2f00a4b54b7e230d9d8aa4dfba82f2fd676ccec111c3ca" Dec 11 14:11:00 crc kubenswrapper[5050]: E1211 14:11:00.773162 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f01971a9930b0d296d2f00a4b54b7e230d9d8aa4dfba82f2fd676ccec111c3ca\": container with ID starting with f01971a9930b0d296d2f00a4b54b7e230d9d8aa4dfba82f2fd676ccec111c3ca not found: ID does not exist" containerID="f01971a9930b0d296d2f00a4b54b7e230d9d8aa4dfba82f2fd676ccec111c3ca" Dec 11 14:11:00 crc kubenswrapper[5050]: I1211 14:11:00.773198 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f01971a9930b0d296d2f00a4b54b7e230d9d8aa4dfba82f2fd676ccec111c3ca"} 
err="failed to get container status \"f01971a9930b0d296d2f00a4b54b7e230d9d8aa4dfba82f2fd676ccec111c3ca\": rpc error: code = NotFound desc = could not find container \"f01971a9930b0d296d2f00a4b54b7e230d9d8aa4dfba82f2fd676ccec111c3ca\": container with ID starting with f01971a9930b0d296d2f00a4b54b7e230d9d8aa4dfba82f2fd676ccec111c3ca not found: ID does not exist" Dec 11 14:11:00 crc kubenswrapper[5050]: I1211 14:11:00.808722 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ef32727-7bbd-4a50-8292-4740b34107cc-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2ef32727-7bbd-4a50-8292-4740b34107cc\") " pod="openstack/ceilometer-0" Dec 11 14:11:00 crc kubenswrapper[5050]: I1211 14:11:00.808789 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2ef32727-7bbd-4a50-8292-4740b34107cc-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2ef32727-7bbd-4a50-8292-4740b34107cc\") " pod="openstack/ceilometer-0" Dec 11 14:11:00 crc kubenswrapper[5050]: I1211 14:11:00.808824 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ef32727-7bbd-4a50-8292-4740b34107cc-scripts\") pod \"ceilometer-0\" (UID: \"2ef32727-7bbd-4a50-8292-4740b34107cc\") " pod="openstack/ceilometer-0" Dec 11 14:11:00 crc kubenswrapper[5050]: I1211 14:11:00.808933 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ef32727-7bbd-4a50-8292-4740b34107cc-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"2ef32727-7bbd-4a50-8292-4740b34107cc\") " pod="openstack/ceilometer-0" Dec 11 14:11:00 crc kubenswrapper[5050]: I1211 14:11:00.808998 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ef32727-7bbd-4a50-8292-4740b34107cc-config-data\") pod \"ceilometer-0\" (UID: \"2ef32727-7bbd-4a50-8292-4740b34107cc\") " pod="openstack/ceilometer-0" Dec 11 14:11:00 crc kubenswrapper[5050]: I1211 14:11:00.809053 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4bgb9\" (UniqueName: \"kubernetes.io/projected/2ef32727-7bbd-4a50-8292-4740b34107cc-kube-api-access-4bgb9\") pod \"ceilometer-0\" (UID: \"2ef32727-7bbd-4a50-8292-4740b34107cc\") " pod="openstack/ceilometer-0" Dec 11 14:11:00 crc kubenswrapper[5050]: I1211 14:11:00.809085 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2ef32727-7bbd-4a50-8292-4740b34107cc-log-httpd\") pod \"ceilometer-0\" (UID: \"2ef32727-7bbd-4a50-8292-4740b34107cc\") " pod="openstack/ceilometer-0" Dec 11 14:11:00 crc kubenswrapper[5050]: I1211 14:11:00.809123 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2ef32727-7bbd-4a50-8292-4740b34107cc-run-httpd\") pod \"ceilometer-0\" (UID: \"2ef32727-7bbd-4a50-8292-4740b34107cc\") " pod="openstack/ceilometer-0" Dec 11 14:11:00 crc kubenswrapper[5050]: I1211 14:11:00.911294 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/2ef32727-7bbd-4a50-8292-4740b34107cc-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2ef32727-7bbd-4a50-8292-4740b34107cc\") " pod="openstack/ceilometer-0" Dec 11 14:11:00 crc kubenswrapper[5050]: I1211 14:11:00.911344 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2ef32727-7bbd-4a50-8292-4740b34107cc-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2ef32727-7bbd-4a50-8292-4740b34107cc\") " pod="openstack/ceilometer-0" Dec 11 14:11:00 crc kubenswrapper[5050]: I1211 14:11:00.911367 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ef32727-7bbd-4a50-8292-4740b34107cc-scripts\") pod \"ceilometer-0\" (UID: \"2ef32727-7bbd-4a50-8292-4740b34107cc\") " pod="openstack/ceilometer-0" Dec 11 14:11:00 crc kubenswrapper[5050]: I1211 14:11:00.911434 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ef32727-7bbd-4a50-8292-4740b34107cc-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"2ef32727-7bbd-4a50-8292-4740b34107cc\") " pod="openstack/ceilometer-0" Dec 11 14:11:00 crc kubenswrapper[5050]: I1211 14:11:00.911482 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ef32727-7bbd-4a50-8292-4740b34107cc-config-data\") pod \"ceilometer-0\" (UID: \"2ef32727-7bbd-4a50-8292-4740b34107cc\") " pod="openstack/ceilometer-0" Dec 11 14:11:00 crc kubenswrapper[5050]: I1211 14:11:00.911506 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4bgb9\" (UniqueName: \"kubernetes.io/projected/2ef32727-7bbd-4a50-8292-4740b34107cc-kube-api-access-4bgb9\") pod \"ceilometer-0\" (UID: \"2ef32727-7bbd-4a50-8292-4740b34107cc\") " pod="openstack/ceilometer-0" Dec 11 14:11:00 crc kubenswrapper[5050]: I1211 14:11:00.911529 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2ef32727-7bbd-4a50-8292-4740b34107cc-log-httpd\") pod \"ceilometer-0\" (UID: \"2ef32727-7bbd-4a50-8292-4740b34107cc\") " pod="openstack/ceilometer-0" Dec 11 14:11:00 crc kubenswrapper[5050]: I1211 14:11:00.911555 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2ef32727-7bbd-4a50-8292-4740b34107cc-run-httpd\") pod \"ceilometer-0\" (UID: \"2ef32727-7bbd-4a50-8292-4740b34107cc\") " pod="openstack/ceilometer-0" Dec 11 14:11:00 crc kubenswrapper[5050]: I1211 14:11:00.912130 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2ef32727-7bbd-4a50-8292-4740b34107cc-run-httpd\") pod \"ceilometer-0\" (UID: \"2ef32727-7bbd-4a50-8292-4740b34107cc\") " pod="openstack/ceilometer-0" Dec 11 14:11:00 crc kubenswrapper[5050]: I1211 14:11:00.912545 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2ef32727-7bbd-4a50-8292-4740b34107cc-log-httpd\") pod \"ceilometer-0\" (UID: \"2ef32727-7bbd-4a50-8292-4740b34107cc\") " pod="openstack/ceilometer-0" Dec 11 14:11:00 crc kubenswrapper[5050]: I1211 14:11:00.915987 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/2ef32727-7bbd-4a50-8292-4740b34107cc-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2ef32727-7bbd-4a50-8292-4740b34107cc\") " pod="openstack/ceilometer-0" Dec 11 14:11:00 crc kubenswrapper[5050]: I1211 14:11:00.916289 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ef32727-7bbd-4a50-8292-4740b34107cc-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2ef32727-7bbd-4a50-8292-4740b34107cc\") " pod="openstack/ceilometer-0" Dec 11 14:11:00 crc kubenswrapper[5050]: I1211 14:11:00.916394 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ef32727-7bbd-4a50-8292-4740b34107cc-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"2ef32727-7bbd-4a50-8292-4740b34107cc\") " pod="openstack/ceilometer-0" Dec 11 14:11:00 crc kubenswrapper[5050]: I1211 14:11:00.917634 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ef32727-7bbd-4a50-8292-4740b34107cc-config-data\") pod \"ceilometer-0\" (UID: \"2ef32727-7bbd-4a50-8292-4740b34107cc\") " pod="openstack/ceilometer-0" Dec 11 14:11:00 crc kubenswrapper[5050]: I1211 14:11:00.920774 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ef32727-7bbd-4a50-8292-4740b34107cc-scripts\") pod \"ceilometer-0\" (UID: \"2ef32727-7bbd-4a50-8292-4740b34107cc\") " pod="openstack/ceilometer-0" Dec 11 14:11:00 crc kubenswrapper[5050]: I1211 14:11:00.930511 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4bgb9\" (UniqueName: \"kubernetes.io/projected/2ef32727-7bbd-4a50-8292-4740b34107cc-kube-api-access-4bgb9\") pod \"ceilometer-0\" (UID: \"2ef32727-7bbd-4a50-8292-4740b34107cc\") " pod="openstack/ceilometer-0" Dec 11 14:11:01 crc kubenswrapper[5050]: I1211 14:11:01.048954 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 11 14:11:01 crc kubenswrapper[5050]: I1211 14:11:01.336404 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Dec 11 14:11:01 crc kubenswrapper[5050]: I1211 14:11:01.350709 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 11 14:11:01 crc kubenswrapper[5050]: I1211 14:11:01.350950 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 11 14:11:01 crc kubenswrapper[5050]: I1211 14:11:01.543734 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 11 14:11:01 crc kubenswrapper[5050]: I1211 14:11:01.564634 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5af14101-b5b1-4735-9b62-83f55267c624" path="/var/lib/kubelet/pods/5af14101-b5b1-4735-9b62-83f55267c624/volumes" Dec 11 14:11:01 crc kubenswrapper[5050]: I1211 14:11:01.632681 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2ef32727-7bbd-4a50-8292-4740b34107cc","Type":"ContainerStarted","Data":"7bf26eb4fda28e68ea4407e15ac13b2c882d4b170f7d5683b64dce95795af9b9"} Dec 11 14:11:02 crc kubenswrapper[5050]: I1211 14:11:02.647269 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2ef32727-7bbd-4a50-8292-4740b34107cc","Type":"ContainerStarted","Data":"f6d56512f1f5665c8ed3d4a8969144001b8fe4f5868e5cfd74524cc0c96c62f3"} Dec 11 14:11:02 crc kubenswrapper[5050]: I1211 14:11:02.650040 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"89a7ecdc-c4ec-4097-a100-c74ebb8b86ff","Type":"ContainerDied","Data":"59b103688d380bd3ea1ccbe23b8e50d69ce7096f799e44614d95596b4ae9353c"} Dec 11 14:11:02 crc kubenswrapper[5050]: I1211 14:11:02.650029 5050 generic.go:334] "Generic (PLEG): container finished" podID="89a7ecdc-c4ec-4097-a100-c74ebb8b86ff" containerID="59b103688d380bd3ea1ccbe23b8e50d69ce7096f799e44614d95596b4ae9353c" exitCode=0 Dec 11 14:11:02 crc kubenswrapper[5050]: I1211 14:11:02.996863 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 11 14:11:03 crc kubenswrapper[5050]: I1211 14:11:03.067080 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/89a7ecdc-c4ec-4097-a100-c74ebb8b86ff-logs\") pod \"89a7ecdc-c4ec-4097-a100-c74ebb8b86ff\" (UID: \"89a7ecdc-c4ec-4097-a100-c74ebb8b86ff\") " Dec 11 14:11:03 crc kubenswrapper[5050]: I1211 14:11:03.067196 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89a7ecdc-c4ec-4097-a100-c74ebb8b86ff-combined-ca-bundle\") pod \"89a7ecdc-c4ec-4097-a100-c74ebb8b86ff\" (UID: \"89a7ecdc-c4ec-4097-a100-c74ebb8b86ff\") " Dec 11 14:11:03 crc kubenswrapper[5050]: I1211 14:11:03.067625 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pztmg\" (UniqueName: \"kubernetes.io/projected/89a7ecdc-c4ec-4097-a100-c74ebb8b86ff-kube-api-access-pztmg\") pod \"89a7ecdc-c4ec-4097-a100-c74ebb8b86ff\" (UID: \"89a7ecdc-c4ec-4097-a100-c74ebb8b86ff\") " Dec 11 14:11:03 crc kubenswrapper[5050]: I1211 14:11:03.067740 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89a7ecdc-c4ec-4097-a100-c74ebb8b86ff-config-data\") pod \"89a7ecdc-c4ec-4097-a100-c74ebb8b86ff\" (UID: \"89a7ecdc-c4ec-4097-a100-c74ebb8b86ff\") " Dec 11 14:11:03 crc kubenswrapper[5050]: I1211 14:11:03.068759 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/89a7ecdc-c4ec-4097-a100-c74ebb8b86ff-logs" (OuterVolumeSpecName: "logs") pod "89a7ecdc-c4ec-4097-a100-c74ebb8b86ff" (UID: "89a7ecdc-c4ec-4097-a100-c74ebb8b86ff"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 14:11:03 crc kubenswrapper[5050]: I1211 14:11:03.075400 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89a7ecdc-c4ec-4097-a100-c74ebb8b86ff-kube-api-access-pztmg" (OuterVolumeSpecName: "kube-api-access-pztmg") pod "89a7ecdc-c4ec-4097-a100-c74ebb8b86ff" (UID: "89a7ecdc-c4ec-4097-a100-c74ebb8b86ff"). InnerVolumeSpecName "kube-api-access-pztmg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 14:11:03 crc kubenswrapper[5050]: I1211 14:11:03.111029 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89a7ecdc-c4ec-4097-a100-c74ebb8b86ff-config-data" (OuterVolumeSpecName: "config-data") pod "89a7ecdc-c4ec-4097-a100-c74ebb8b86ff" (UID: "89a7ecdc-c4ec-4097-a100-c74ebb8b86ff"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:11:03 crc kubenswrapper[5050]: I1211 14:11:03.121134 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89a7ecdc-c4ec-4097-a100-c74ebb8b86ff-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "89a7ecdc-c4ec-4097-a100-c74ebb8b86ff" (UID: "89a7ecdc-c4ec-4097-a100-c74ebb8b86ff"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:11:03 crc kubenswrapper[5050]: I1211 14:11:03.170544 5050 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/89a7ecdc-c4ec-4097-a100-c74ebb8b86ff-logs\") on node \"crc\" DevicePath \"\"" Dec 11 14:11:03 crc kubenswrapper[5050]: I1211 14:11:03.170578 5050 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89a7ecdc-c4ec-4097-a100-c74ebb8b86ff-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 14:11:03 crc kubenswrapper[5050]: I1211 14:11:03.170592 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pztmg\" (UniqueName: \"kubernetes.io/projected/89a7ecdc-c4ec-4097-a100-c74ebb8b86ff-kube-api-access-pztmg\") on node \"crc\" DevicePath \"\"" Dec 11 14:11:03 crc kubenswrapper[5050]: I1211 14:11:03.170601 5050 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89a7ecdc-c4ec-4097-a100-c74ebb8b86ff-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 14:11:03 crc kubenswrapper[5050]: I1211 14:11:03.670275 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"89a7ecdc-c4ec-4097-a100-c74ebb8b86ff","Type":"ContainerDied","Data":"6f6ea4edccb31c0846824bc09921d5ce22176ef4016e83980b93be9c8271819a"} Dec 11 14:11:03 crc kubenswrapper[5050]: I1211 14:11:03.670722 5050 scope.go:117] "RemoveContainer" containerID="59b103688d380bd3ea1ccbe23b8e50d69ce7096f799e44614d95596b4ae9353c" Dec 11 14:11:03 crc kubenswrapper[5050]: I1211 14:11:03.670352 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 11 14:11:03 crc kubenswrapper[5050]: I1211 14:11:03.725333 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 11 14:11:03 crc kubenswrapper[5050]: I1211 14:11:03.732259 5050 scope.go:117] "RemoveContainer" containerID="8101049b849d3c3de301d09aa49baf5e04594ebbf9f8f8020238d54f0b38a06d" Dec 11 14:11:03 crc kubenswrapper[5050]: I1211 14:11:03.747557 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Dec 11 14:11:03 crc kubenswrapper[5050]: I1211 14:11:03.782111 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Dec 11 14:11:03 crc kubenswrapper[5050]: E1211 14:11:03.782916 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89a7ecdc-c4ec-4097-a100-c74ebb8b86ff" containerName="nova-api-log" Dec 11 14:11:03 crc kubenswrapper[5050]: I1211 14:11:03.782950 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="89a7ecdc-c4ec-4097-a100-c74ebb8b86ff" containerName="nova-api-log" Dec 11 14:11:03 crc kubenswrapper[5050]: E1211 14:11:03.782975 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89a7ecdc-c4ec-4097-a100-c74ebb8b86ff" containerName="nova-api-api" Dec 11 14:11:03 crc kubenswrapper[5050]: I1211 14:11:03.782985 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="89a7ecdc-c4ec-4097-a100-c74ebb8b86ff" containerName="nova-api-api" Dec 11 14:11:03 crc kubenswrapper[5050]: I1211 14:11:03.783318 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="89a7ecdc-c4ec-4097-a100-c74ebb8b86ff" containerName="nova-api-log" Dec 11 14:11:03 crc kubenswrapper[5050]: I1211 14:11:03.783354 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="89a7ecdc-c4ec-4097-a100-c74ebb8b86ff" 
containerName="nova-api-api" Dec 11 14:11:03 crc kubenswrapper[5050]: I1211 14:11:03.784924 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 11 14:11:03 crc kubenswrapper[5050]: I1211 14:11:03.790746 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Dec 11 14:11:03 crc kubenswrapper[5050]: I1211 14:11:03.791485 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Dec 11 14:11:03 crc kubenswrapper[5050]: I1211 14:11:03.791697 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Dec 11 14:11:03 crc kubenswrapper[5050]: I1211 14:11:03.796692 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 11 14:11:03 crc kubenswrapper[5050]: I1211 14:11:03.886485 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e2c30e6-8333-4cea-bf9c-8111be66ed79-config-data\") pod \"nova-api-0\" (UID: \"7e2c30e6-8333-4cea-bf9c-8111be66ed79\") " pod="openstack/nova-api-0" Dec 11 14:11:03 crc kubenswrapper[5050]: I1211 14:11:03.886548 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47thb\" (UniqueName: \"kubernetes.io/projected/7e2c30e6-8333-4cea-bf9c-8111be66ed79-kube-api-access-47thb\") pod \"nova-api-0\" (UID: \"7e2c30e6-8333-4cea-bf9c-8111be66ed79\") " pod="openstack/nova-api-0" Dec 11 14:11:03 crc kubenswrapper[5050]: I1211 14:11:03.886588 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e2c30e6-8333-4cea-bf9c-8111be66ed79-internal-tls-certs\") pod \"nova-api-0\" (UID: \"7e2c30e6-8333-4cea-bf9c-8111be66ed79\") " pod="openstack/nova-api-0" Dec 11 14:11:03 crc kubenswrapper[5050]: I1211 14:11:03.886624 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e2c30e6-8333-4cea-bf9c-8111be66ed79-public-tls-certs\") pod \"nova-api-0\" (UID: \"7e2c30e6-8333-4cea-bf9c-8111be66ed79\") " pod="openstack/nova-api-0" Dec 11 14:11:03 crc kubenswrapper[5050]: I1211 14:11:03.886721 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e2c30e6-8333-4cea-bf9c-8111be66ed79-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"7e2c30e6-8333-4cea-bf9c-8111be66ed79\") " pod="openstack/nova-api-0" Dec 11 14:11:03 crc kubenswrapper[5050]: I1211 14:11:03.886749 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7e2c30e6-8333-4cea-bf9c-8111be66ed79-logs\") pod \"nova-api-0\" (UID: \"7e2c30e6-8333-4cea-bf9c-8111be66ed79\") " pod="openstack/nova-api-0" Dec 11 14:11:03 crc kubenswrapper[5050]: I1211 14:11:03.988861 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e2c30e6-8333-4cea-bf9c-8111be66ed79-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"7e2c30e6-8333-4cea-bf9c-8111be66ed79\") " pod="openstack/nova-api-0" Dec 11 14:11:03 crc kubenswrapper[5050]: I1211 14:11:03.989320 5050 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7e2c30e6-8333-4cea-bf9c-8111be66ed79-logs\") pod \"nova-api-0\" (UID: \"7e2c30e6-8333-4cea-bf9c-8111be66ed79\") " pod="openstack/nova-api-0" Dec 11 14:11:03 crc kubenswrapper[5050]: I1211 14:11:03.989393 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e2c30e6-8333-4cea-bf9c-8111be66ed79-config-data\") pod \"nova-api-0\" (UID: \"7e2c30e6-8333-4cea-bf9c-8111be66ed79\") " pod="openstack/nova-api-0" Dec 11 14:11:03 crc kubenswrapper[5050]: I1211 14:11:03.989413 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-47thb\" (UniqueName: \"kubernetes.io/projected/7e2c30e6-8333-4cea-bf9c-8111be66ed79-kube-api-access-47thb\") pod \"nova-api-0\" (UID: \"7e2c30e6-8333-4cea-bf9c-8111be66ed79\") " pod="openstack/nova-api-0" Dec 11 14:11:03 crc kubenswrapper[5050]: I1211 14:11:03.989444 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e2c30e6-8333-4cea-bf9c-8111be66ed79-internal-tls-certs\") pod \"nova-api-0\" (UID: \"7e2c30e6-8333-4cea-bf9c-8111be66ed79\") " pod="openstack/nova-api-0" Dec 11 14:11:03 crc kubenswrapper[5050]: I1211 14:11:03.989476 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e2c30e6-8333-4cea-bf9c-8111be66ed79-public-tls-certs\") pod \"nova-api-0\" (UID: \"7e2c30e6-8333-4cea-bf9c-8111be66ed79\") " pod="openstack/nova-api-0" Dec 11 14:11:03 crc kubenswrapper[5050]: I1211 14:11:03.996269 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7e2c30e6-8333-4cea-bf9c-8111be66ed79-logs\") pod \"nova-api-0\" (UID: \"7e2c30e6-8333-4cea-bf9c-8111be66ed79\") " pod="openstack/nova-api-0" Dec 11 14:11:03 crc kubenswrapper[5050]: I1211 14:11:03.996982 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e2c30e6-8333-4cea-bf9c-8111be66ed79-public-tls-certs\") pod \"nova-api-0\" (UID: \"7e2c30e6-8333-4cea-bf9c-8111be66ed79\") " pod="openstack/nova-api-0" Dec 11 14:11:03 crc kubenswrapper[5050]: I1211 14:11:03.997542 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e2c30e6-8333-4cea-bf9c-8111be66ed79-config-data\") pod \"nova-api-0\" (UID: \"7e2c30e6-8333-4cea-bf9c-8111be66ed79\") " pod="openstack/nova-api-0" Dec 11 14:11:03 crc kubenswrapper[5050]: I1211 14:11:03.998448 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e2c30e6-8333-4cea-bf9c-8111be66ed79-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"7e2c30e6-8333-4cea-bf9c-8111be66ed79\") " pod="openstack/nova-api-0" Dec 11 14:11:04 crc kubenswrapper[5050]: I1211 14:11:04.000762 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e2c30e6-8333-4cea-bf9c-8111be66ed79-internal-tls-certs\") pod \"nova-api-0\" (UID: \"7e2c30e6-8333-4cea-bf9c-8111be66ed79\") " pod="openstack/nova-api-0" Dec 11 14:11:04 crc kubenswrapper[5050]: I1211 14:11:04.014695 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-47thb\" (UniqueName: 
\"kubernetes.io/projected/7e2c30e6-8333-4cea-bf9c-8111be66ed79-kube-api-access-47thb\") pod \"nova-api-0\" (UID: \"7e2c30e6-8333-4cea-bf9c-8111be66ed79\") " pod="openstack/nova-api-0" Dec 11 14:11:04 crc kubenswrapper[5050]: I1211 14:11:04.116753 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 11 14:11:04 crc kubenswrapper[5050]: I1211 14:11:04.415847 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 11 14:11:04 crc kubenswrapper[5050]: I1211 14:11:04.689091 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7e2c30e6-8333-4cea-bf9c-8111be66ed79","Type":"ContainerStarted","Data":"88dd40b935a4cf133a3d1d782f97c00bc4273d0b55555e184bcebd256e069f78"} Dec 11 14:11:04 crc kubenswrapper[5050]: I1211 14:11:04.692565 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2ef32727-7bbd-4a50-8292-4740b34107cc","Type":"ContainerStarted","Data":"de890e47f39aab39e39d2eca3d15df094ac343a9fec0b6242bea5fd2f19dbaf8"} Dec 11 14:11:05 crc kubenswrapper[5050]: I1211 14:11:05.588715 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89a7ecdc-c4ec-4097-a100-c74ebb8b86ff" path="/var/lib/kubelet/pods/89a7ecdc-c4ec-4097-a100-c74ebb8b86ff/volumes" Dec 11 14:11:05 crc kubenswrapper[5050]: I1211 14:11:05.714583 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2ef32727-7bbd-4a50-8292-4740b34107cc","Type":"ContainerStarted","Data":"8295f391211fffee759bcdccec5b566c3a41abb5db15ca1a987be72e3711b3cf"} Dec 11 14:11:05 crc kubenswrapper[5050]: I1211 14:11:05.718248 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7e2c30e6-8333-4cea-bf9c-8111be66ed79","Type":"ContainerStarted","Data":"b6342cfc4dce56b6a59f67ac4172b491a0e6b90b71daf0925a6a8600b24badf2"} Dec 11 14:11:05 crc kubenswrapper[5050]: I1211 14:11:05.718302 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7e2c30e6-8333-4cea-bf9c-8111be66ed79","Type":"ContainerStarted","Data":"6cca3c7c0251946dd45dfce4cb39a2bd940d4693e3808767f9590bd1643c4a06"} Dec 11 14:11:05 crc kubenswrapper[5050]: I1211 14:11:05.745872 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.745851965 podStartE2EDuration="2.745851965s" podCreationTimestamp="2025-12-11 14:11:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 14:11:05.742036292 +0000 UTC m=+1356.585758868" watchObservedRunningTime="2025-12-11 14:11:05.745851965 +0000 UTC m=+1356.589574551" Dec 11 14:11:06 crc kubenswrapper[5050]: I1211 14:11:06.009534 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-fcd6f8f8f-b59kh" Dec 11 14:11:06 crc kubenswrapper[5050]: I1211 14:11:06.082925 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-647df7b8c5-xtfbx"] Dec 11 14:11:06 crc kubenswrapper[5050]: I1211 14:11:06.083467 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-647df7b8c5-xtfbx" podUID="39bb53b7-f4e4-4645-b635-62a51e5e286e" containerName="dnsmasq-dns" containerID="cri-o://3d3cc6d7327de8c7178081e8f184853e2d85c3ba9076ba34794a4a7356e835f2" gracePeriod=10 Dec 11 14:11:06 crc kubenswrapper[5050]: I1211 14:11:06.336670 
5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Dec 11 14:11:06 crc kubenswrapper[5050]: I1211 14:11:06.351649 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Dec 11 14:11:06 crc kubenswrapper[5050]: I1211 14:11:06.351698 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Dec 11 14:11:06 crc kubenswrapper[5050]: I1211 14:11:06.374446 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Dec 11 14:11:06 crc kubenswrapper[5050]: I1211 14:11:06.658660 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-647df7b8c5-xtfbx" Dec 11 14:11:06 crc kubenswrapper[5050]: I1211 14:11:06.737484 5050 generic.go:334] "Generic (PLEG): container finished" podID="39bb53b7-f4e4-4645-b635-62a51e5e286e" containerID="3d3cc6d7327de8c7178081e8f184853e2d85c3ba9076ba34794a4a7356e835f2" exitCode=0 Dec 11 14:11:06 crc kubenswrapper[5050]: I1211 14:11:06.739465 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-647df7b8c5-xtfbx" Dec 11 14:11:06 crc kubenswrapper[5050]: I1211 14:11:06.740129 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-647df7b8c5-xtfbx" event={"ID":"39bb53b7-f4e4-4645-b635-62a51e5e286e","Type":"ContainerDied","Data":"3d3cc6d7327de8c7178081e8f184853e2d85c3ba9076ba34794a4a7356e835f2"} Dec 11 14:11:06 crc kubenswrapper[5050]: I1211 14:11:06.740181 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-647df7b8c5-xtfbx" event={"ID":"39bb53b7-f4e4-4645-b635-62a51e5e286e","Type":"ContainerDied","Data":"237a7c537ce9dfabb233fb9cbd732508ac7a9d2421a8df880b8790a2edbd07a6"} Dec 11 14:11:06 crc kubenswrapper[5050]: I1211 14:11:06.740212 5050 scope.go:117] "RemoveContainer" containerID="3d3cc6d7327de8c7178081e8f184853e2d85c3ba9076ba34794a4a7356e835f2" Dec 11 14:11:06 crc kubenswrapper[5050]: I1211 14:11:06.764264 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/39bb53b7-f4e4-4645-b635-62a51e5e286e-config\") pod \"39bb53b7-f4e4-4645-b635-62a51e5e286e\" (UID: \"39bb53b7-f4e4-4645-b635-62a51e5e286e\") " Dec 11 14:11:06 crc kubenswrapper[5050]: I1211 14:11:06.764414 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/39bb53b7-f4e4-4645-b635-62a51e5e286e-dns-swift-storage-0\") pod \"39bb53b7-f4e4-4645-b635-62a51e5e286e\" (UID: \"39bb53b7-f4e4-4645-b635-62a51e5e286e\") " Dec 11 14:11:06 crc kubenswrapper[5050]: I1211 14:11:06.764453 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-75kxw\" (UniqueName: \"kubernetes.io/projected/39bb53b7-f4e4-4645-b635-62a51e5e286e-kube-api-access-75kxw\") pod \"39bb53b7-f4e4-4645-b635-62a51e5e286e\" (UID: \"39bb53b7-f4e4-4645-b635-62a51e5e286e\") " Dec 11 14:11:06 crc kubenswrapper[5050]: I1211 14:11:06.764499 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/39bb53b7-f4e4-4645-b635-62a51e5e286e-dns-svc\") pod \"39bb53b7-f4e4-4645-b635-62a51e5e286e\" (UID: \"39bb53b7-f4e4-4645-b635-62a51e5e286e\") " Dec 11 14:11:06 crc kubenswrapper[5050]: I1211 14:11:06.764592 5050 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/39bb53b7-f4e4-4645-b635-62a51e5e286e-ovsdbserver-nb\") pod \"39bb53b7-f4e4-4645-b635-62a51e5e286e\" (UID: \"39bb53b7-f4e4-4645-b635-62a51e5e286e\") " Dec 11 14:11:06 crc kubenswrapper[5050]: I1211 14:11:06.764776 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/39bb53b7-f4e4-4645-b635-62a51e5e286e-ovsdbserver-sb\") pod \"39bb53b7-f4e4-4645-b635-62a51e5e286e\" (UID: \"39bb53b7-f4e4-4645-b635-62a51e5e286e\") " Dec 11 14:11:06 crc kubenswrapper[5050]: I1211 14:11:06.770234 5050 scope.go:117] "RemoveContainer" containerID="b6251f4991aa3171de6238cafba9994014c4ca38da379499874bc838d785ad3a" Dec 11 14:11:06 crc kubenswrapper[5050]: I1211 14:11:06.775315 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39bb53b7-f4e4-4645-b635-62a51e5e286e-kube-api-access-75kxw" (OuterVolumeSpecName: "kube-api-access-75kxw") pod "39bb53b7-f4e4-4645-b635-62a51e5e286e" (UID: "39bb53b7-f4e4-4645-b635-62a51e5e286e"). InnerVolumeSpecName "kube-api-access-75kxw". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 14:11:06 crc kubenswrapper[5050]: I1211 14:11:06.795801 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Dec 11 14:11:06 crc kubenswrapper[5050]: I1211 14:11:06.835768 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/39bb53b7-f4e4-4645-b635-62a51e5e286e-config" (OuterVolumeSpecName: "config") pod "39bb53b7-f4e4-4645-b635-62a51e5e286e" (UID: "39bb53b7-f4e4-4645-b635-62a51e5e286e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 14:11:06 crc kubenswrapper[5050]: I1211 14:11:06.867362 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/39bb53b7-f4e4-4645-b635-62a51e5e286e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "39bb53b7-f4e4-4645-b635-62a51e5e286e" (UID: "39bb53b7-f4e4-4645-b635-62a51e5e286e"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 14:11:06 crc kubenswrapper[5050]: I1211 14:11:06.867860 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/39bb53b7-f4e4-4645-b635-62a51e5e286e-dns-svc\") pod \"39bb53b7-f4e4-4645-b635-62a51e5e286e\" (UID: \"39bb53b7-f4e4-4645-b635-62a51e5e286e\") " Dec 11 14:11:06 crc kubenswrapper[5050]: I1211 14:11:06.868939 5050 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/39bb53b7-f4e4-4645-b635-62a51e5e286e-config\") on node \"crc\" DevicePath \"\"" Dec 11 14:11:06 crc kubenswrapper[5050]: I1211 14:11:06.868965 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-75kxw\" (UniqueName: \"kubernetes.io/projected/39bb53b7-f4e4-4645-b635-62a51e5e286e-kube-api-access-75kxw\") on node \"crc\" DevicePath \"\"" Dec 11 14:11:06 crc kubenswrapper[5050]: W1211 14:11:06.870250 5050 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/39bb53b7-f4e4-4645-b635-62a51e5e286e/volumes/kubernetes.io~configmap/dns-svc Dec 11 14:11:06 crc kubenswrapper[5050]: I1211 14:11:06.870279 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/39bb53b7-f4e4-4645-b635-62a51e5e286e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "39bb53b7-f4e4-4645-b635-62a51e5e286e" (UID: "39bb53b7-f4e4-4645-b635-62a51e5e286e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 14:11:06 crc kubenswrapper[5050]: I1211 14:11:06.872466 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/39bb53b7-f4e4-4645-b635-62a51e5e286e-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "39bb53b7-f4e4-4645-b635-62a51e5e286e" (UID: "39bb53b7-f4e4-4645-b635-62a51e5e286e"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 14:11:06 crc kubenswrapper[5050]: I1211 14:11:06.891776 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/39bb53b7-f4e4-4645-b635-62a51e5e286e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "39bb53b7-f4e4-4645-b635-62a51e5e286e" (UID: "39bb53b7-f4e4-4645-b635-62a51e5e286e"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 14:11:06 crc kubenswrapper[5050]: I1211 14:11:06.926185 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/39bb53b7-f4e4-4645-b635-62a51e5e286e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "39bb53b7-f4e4-4645-b635-62a51e5e286e" (UID: "39bb53b7-f4e4-4645-b635-62a51e5e286e"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 14:11:06 crc kubenswrapper[5050]: I1211 14:11:06.959736 5050 scope.go:117] "RemoveContainer" containerID="3d3cc6d7327de8c7178081e8f184853e2d85c3ba9076ba34794a4a7356e835f2" Dec 11 14:11:06 crc kubenswrapper[5050]: E1211 14:11:06.960323 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3d3cc6d7327de8c7178081e8f184853e2d85c3ba9076ba34794a4a7356e835f2\": container with ID starting with 3d3cc6d7327de8c7178081e8f184853e2d85c3ba9076ba34794a4a7356e835f2 not found: ID does not exist" containerID="3d3cc6d7327de8c7178081e8f184853e2d85c3ba9076ba34794a4a7356e835f2" Dec 11 14:11:06 crc kubenswrapper[5050]: I1211 14:11:06.960399 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d3cc6d7327de8c7178081e8f184853e2d85c3ba9076ba34794a4a7356e835f2"} err="failed to get container status \"3d3cc6d7327de8c7178081e8f184853e2d85c3ba9076ba34794a4a7356e835f2\": rpc error: code = NotFound desc = could not find container \"3d3cc6d7327de8c7178081e8f184853e2d85c3ba9076ba34794a4a7356e835f2\": container with ID starting with 3d3cc6d7327de8c7178081e8f184853e2d85c3ba9076ba34794a4a7356e835f2 not found: ID does not exist" Dec 11 14:11:06 crc kubenswrapper[5050]: I1211 14:11:06.960427 5050 scope.go:117] "RemoveContainer" containerID="b6251f4991aa3171de6238cafba9994014c4ca38da379499874bc838d785ad3a" Dec 11 14:11:06 crc kubenswrapper[5050]: E1211 14:11:06.960753 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b6251f4991aa3171de6238cafba9994014c4ca38da379499874bc838d785ad3a\": container with ID starting with b6251f4991aa3171de6238cafba9994014c4ca38da379499874bc838d785ad3a not found: ID does not exist" containerID="b6251f4991aa3171de6238cafba9994014c4ca38da379499874bc838d785ad3a" Dec 11 14:11:06 crc kubenswrapper[5050]: I1211 14:11:06.960787 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b6251f4991aa3171de6238cafba9994014c4ca38da379499874bc838d785ad3a"} err="failed to get container status \"b6251f4991aa3171de6238cafba9994014c4ca38da379499874bc838d785ad3a\": rpc error: code = NotFound desc = could not find container \"b6251f4991aa3171de6238cafba9994014c4ca38da379499874bc838d785ad3a\": container with ID starting with b6251f4991aa3171de6238cafba9994014c4ca38da379499874bc838d785ad3a not found: ID does not exist" Dec 11 14:11:06 crc kubenswrapper[5050]: I1211 14:11:06.971635 5050 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/39bb53b7-f4e4-4645-b635-62a51e5e286e-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 11 14:11:06 crc kubenswrapper[5050]: I1211 14:11:06.971670 5050 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/39bb53b7-f4e4-4645-b635-62a51e5e286e-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 11 14:11:06 crc kubenswrapper[5050]: I1211 14:11:06.971684 5050 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/39bb53b7-f4e4-4645-b635-62a51e5e286e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 11 14:11:06 crc kubenswrapper[5050]: I1211 14:11:06.971693 5050 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/39bb53b7-f4e4-4645-b635-62a51e5e286e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 11 14:11:07 crc kubenswrapper[5050]: I1211 14:11:07.063850 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-x6sqt"] Dec 11 14:11:07 crc kubenswrapper[5050]: E1211 14:11:07.065000 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39bb53b7-f4e4-4645-b635-62a51e5e286e" containerName="dnsmasq-dns" Dec 11 14:11:07 crc kubenswrapper[5050]: I1211 14:11:07.065041 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="39bb53b7-f4e4-4645-b635-62a51e5e286e" containerName="dnsmasq-dns" Dec 11 14:11:07 crc kubenswrapper[5050]: E1211 14:11:07.065060 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39bb53b7-f4e4-4645-b635-62a51e5e286e" containerName="init" Dec 11 14:11:07 crc kubenswrapper[5050]: I1211 14:11:07.065071 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="39bb53b7-f4e4-4645-b635-62a51e5e286e" containerName="init" Dec 11 14:11:07 crc kubenswrapper[5050]: I1211 14:11:07.065402 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="39bb53b7-f4e4-4645-b635-62a51e5e286e" containerName="dnsmasq-dns" Dec 11 14:11:07 crc kubenswrapper[5050]: I1211 14:11:07.066921 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-x6sqt" Dec 11 14:11:07 crc kubenswrapper[5050]: I1211 14:11:07.073754 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Dec 11 14:11:07 crc kubenswrapper[5050]: I1211 14:11:07.074038 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Dec 11 14:11:07 crc kubenswrapper[5050]: I1211 14:11:07.079293 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-x6sqt"] Dec 11 14:11:07 crc kubenswrapper[5050]: I1211 14:11:07.094248 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-647df7b8c5-xtfbx"] Dec 11 14:11:07 crc kubenswrapper[5050]: I1211 14:11:07.108065 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-647df7b8c5-xtfbx"] Dec 11 14:11:07 crc kubenswrapper[5050]: I1211 14:11:07.178739 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4334061e-4daa-4f87-bbdc-d1ccbfdafa27-scripts\") pod \"nova-cell1-cell-mapping-x6sqt\" (UID: \"4334061e-4daa-4f87-bbdc-d1ccbfdafa27\") " pod="openstack/nova-cell1-cell-mapping-x6sqt" Dec 11 14:11:07 crc kubenswrapper[5050]: I1211 14:11:07.178837 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4334061e-4daa-4f87-bbdc-d1ccbfdafa27-config-data\") pod \"nova-cell1-cell-mapping-x6sqt\" (UID: \"4334061e-4daa-4f87-bbdc-d1ccbfdafa27\") " pod="openstack/nova-cell1-cell-mapping-x6sqt" Dec 11 14:11:07 crc kubenswrapper[5050]: I1211 14:11:07.178868 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9tsph\" (UniqueName: \"kubernetes.io/projected/4334061e-4daa-4f87-bbdc-d1ccbfdafa27-kube-api-access-9tsph\") pod \"nova-cell1-cell-mapping-x6sqt\" (UID: \"4334061e-4daa-4f87-bbdc-d1ccbfdafa27\") " pod="openstack/nova-cell1-cell-mapping-x6sqt" Dec 11 14:11:07 crc kubenswrapper[5050]: I1211 14:11:07.178967 5050 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4334061e-4daa-4f87-bbdc-d1ccbfdafa27-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-x6sqt\" (UID: \"4334061e-4daa-4f87-bbdc-d1ccbfdafa27\") " pod="openstack/nova-cell1-cell-mapping-x6sqt" Dec 11 14:11:07 crc kubenswrapper[5050]: I1211 14:11:07.280997 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4334061e-4daa-4f87-bbdc-d1ccbfdafa27-scripts\") pod \"nova-cell1-cell-mapping-x6sqt\" (UID: \"4334061e-4daa-4f87-bbdc-d1ccbfdafa27\") " pod="openstack/nova-cell1-cell-mapping-x6sqt" Dec 11 14:11:07 crc kubenswrapper[5050]: I1211 14:11:07.281148 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4334061e-4daa-4f87-bbdc-d1ccbfdafa27-config-data\") pod \"nova-cell1-cell-mapping-x6sqt\" (UID: \"4334061e-4daa-4f87-bbdc-d1ccbfdafa27\") " pod="openstack/nova-cell1-cell-mapping-x6sqt" Dec 11 14:11:07 crc kubenswrapper[5050]: I1211 14:11:07.281188 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9tsph\" (UniqueName: \"kubernetes.io/projected/4334061e-4daa-4f87-bbdc-d1ccbfdafa27-kube-api-access-9tsph\") pod \"nova-cell1-cell-mapping-x6sqt\" (UID: \"4334061e-4daa-4f87-bbdc-d1ccbfdafa27\") " pod="openstack/nova-cell1-cell-mapping-x6sqt" Dec 11 14:11:07 crc kubenswrapper[5050]: I1211 14:11:07.281260 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4334061e-4daa-4f87-bbdc-d1ccbfdafa27-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-x6sqt\" (UID: \"4334061e-4daa-4f87-bbdc-d1ccbfdafa27\") " pod="openstack/nova-cell1-cell-mapping-x6sqt" Dec 11 14:11:07 crc kubenswrapper[5050]: I1211 14:11:07.286734 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4334061e-4daa-4f87-bbdc-d1ccbfdafa27-scripts\") pod \"nova-cell1-cell-mapping-x6sqt\" (UID: \"4334061e-4daa-4f87-bbdc-d1ccbfdafa27\") " pod="openstack/nova-cell1-cell-mapping-x6sqt" Dec 11 14:11:07 crc kubenswrapper[5050]: I1211 14:11:07.287286 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4334061e-4daa-4f87-bbdc-d1ccbfdafa27-config-data\") pod \"nova-cell1-cell-mapping-x6sqt\" (UID: \"4334061e-4daa-4f87-bbdc-d1ccbfdafa27\") " pod="openstack/nova-cell1-cell-mapping-x6sqt" Dec 11 14:11:07 crc kubenswrapper[5050]: I1211 14:11:07.287608 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4334061e-4daa-4f87-bbdc-d1ccbfdafa27-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-x6sqt\" (UID: \"4334061e-4daa-4f87-bbdc-d1ccbfdafa27\") " pod="openstack/nova-cell1-cell-mapping-x6sqt" Dec 11 14:11:07 crc kubenswrapper[5050]: I1211 14:11:07.303397 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9tsph\" (UniqueName: \"kubernetes.io/projected/4334061e-4daa-4f87-bbdc-d1ccbfdafa27-kube-api-access-9tsph\") pod \"nova-cell1-cell-mapping-x6sqt\" (UID: \"4334061e-4daa-4f87-bbdc-d1ccbfdafa27\") " pod="openstack/nova-cell1-cell-mapping-x6sqt" Dec 11 14:11:07 crc kubenswrapper[5050]: I1211 14:11:07.372371 5050 prober.go:107] "Probe failed" probeType="Startup" 
pod="openstack/nova-metadata-0" podUID="2fc263ce-85a4-4182-87ac-a5cd6d48b65a" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.196:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 11 14:11:07 crc kubenswrapper[5050]: I1211 14:11:07.372417 5050 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="2fc263ce-85a4-4182-87ac-a5cd6d48b65a" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.196:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 11 14:11:07 crc kubenswrapper[5050]: I1211 14:11:07.394580 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-x6sqt" Dec 11 14:11:07 crc kubenswrapper[5050]: I1211 14:11:07.578476 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="39bb53b7-f4e4-4645-b635-62a51e5e286e" path="/var/lib/kubelet/pods/39bb53b7-f4e4-4645-b635-62a51e5e286e/volumes" Dec 11 14:11:07 crc kubenswrapper[5050]: W1211 14:11:07.893186 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4334061e_4daa_4f87_bbdc_d1ccbfdafa27.slice/crio-f5665562858b97ed69c917e98ec0856c1a00237908d5bfac973f634a488f0826 WatchSource:0}: Error finding container f5665562858b97ed69c917e98ec0856c1a00237908d5bfac973f634a488f0826: Status 404 returned error can't find the container with id f5665562858b97ed69c917e98ec0856c1a00237908d5bfac973f634a488f0826 Dec 11 14:11:07 crc kubenswrapper[5050]: I1211 14:11:07.908220 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-x6sqt"] Dec 11 14:11:08 crc kubenswrapper[5050]: I1211 14:11:08.779696 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-x6sqt" event={"ID":"4334061e-4daa-4f87-bbdc-d1ccbfdafa27","Type":"ContainerStarted","Data":"2ef3cf73755caba84b78f8b5a189480c1997d37bed0c94192044db0751dc4ded"} Dec 11 14:11:08 crc kubenswrapper[5050]: I1211 14:11:08.780260 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-x6sqt" event={"ID":"4334061e-4daa-4f87-bbdc-d1ccbfdafa27","Type":"ContainerStarted","Data":"f5665562858b97ed69c917e98ec0856c1a00237908d5bfac973f634a488f0826"} Dec 11 14:11:08 crc kubenswrapper[5050]: I1211 14:11:08.788031 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2ef32727-7bbd-4a50-8292-4740b34107cc","Type":"ContainerStarted","Data":"2df25eb8394651a0a8bc136ef8c2753058167b5dbecc7a2a0c134fede2448a38"} Dec 11 14:11:08 crc kubenswrapper[5050]: I1211 14:11:08.788519 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 11 14:11:08 crc kubenswrapper[5050]: I1211 14:11:08.807085 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-x6sqt" podStartSLOduration=1.8070520220000001 podStartE2EDuration="1.807052022s" podCreationTimestamp="2025-12-11 14:11:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 14:11:08.799249761 +0000 UTC m=+1359.642972347" watchObservedRunningTime="2025-12-11 14:11:08.807052022 +0000 UTC m=+1359.650774608" Dec 11 14:11:08 crc kubenswrapper[5050]: I1211 14:11:08.830695 5050 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.198258198 podStartE2EDuration="8.83067381s" podCreationTimestamp="2025-12-11 14:11:00 +0000 UTC" firstStartedPulling="2025-12-11 14:11:01.552133813 +0000 UTC m=+1352.395856399" lastFinishedPulling="2025-12-11 14:11:08.184549415 +0000 UTC m=+1359.028272011" observedRunningTime="2025-12-11 14:11:08.822352615 +0000 UTC m=+1359.666075211" watchObservedRunningTime="2025-12-11 14:11:08.83067381 +0000 UTC m=+1359.674396396" Dec 11 14:11:14 crc kubenswrapper[5050]: I1211 14:11:14.117618 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 11 14:11:14 crc kubenswrapper[5050]: I1211 14:11:14.118538 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 11 14:11:14 crc kubenswrapper[5050]: I1211 14:11:14.854860 5050 generic.go:334] "Generic (PLEG): container finished" podID="4334061e-4daa-4f87-bbdc-d1ccbfdafa27" containerID="2ef3cf73755caba84b78f8b5a189480c1997d37bed0c94192044db0751dc4ded" exitCode=0 Dec 11 14:11:14 crc kubenswrapper[5050]: I1211 14:11:14.854920 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-x6sqt" event={"ID":"4334061e-4daa-4f87-bbdc-d1ccbfdafa27","Type":"ContainerDied","Data":"2ef3cf73755caba84b78f8b5a189480c1997d37bed0c94192044db0751dc4ded"} Dec 11 14:11:15 crc kubenswrapper[5050]: I1211 14:11:15.133226 5050 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="7e2c30e6-8333-4cea-bf9c-8111be66ed79" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.198:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 11 14:11:15 crc kubenswrapper[5050]: I1211 14:11:15.133258 5050 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="7e2c30e6-8333-4cea-bf9c-8111be66ed79" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.198:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 11 14:11:16 crc kubenswrapper[5050]: I1211 14:11:16.321782 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-x6sqt" Dec 11 14:11:16 crc kubenswrapper[5050]: I1211 14:11:16.366928 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Dec 11 14:11:16 crc kubenswrapper[5050]: I1211 14:11:16.369645 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Dec 11 14:11:16 crc kubenswrapper[5050]: I1211 14:11:16.375028 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Dec 11 14:11:16 crc kubenswrapper[5050]: I1211 14:11:16.403181 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4334061e-4daa-4f87-bbdc-d1ccbfdafa27-scripts\") pod \"4334061e-4daa-4f87-bbdc-d1ccbfdafa27\" (UID: \"4334061e-4daa-4f87-bbdc-d1ccbfdafa27\") " Dec 11 14:11:16 crc kubenswrapper[5050]: I1211 14:11:16.403405 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4334061e-4daa-4f87-bbdc-d1ccbfdafa27-combined-ca-bundle\") pod \"4334061e-4daa-4f87-bbdc-d1ccbfdafa27\" (UID: \"4334061e-4daa-4f87-bbdc-d1ccbfdafa27\") " Dec 11 14:11:16 crc kubenswrapper[5050]: I1211 14:11:16.403484 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9tsph\" (UniqueName: \"kubernetes.io/projected/4334061e-4daa-4f87-bbdc-d1ccbfdafa27-kube-api-access-9tsph\") pod \"4334061e-4daa-4f87-bbdc-d1ccbfdafa27\" (UID: \"4334061e-4daa-4f87-bbdc-d1ccbfdafa27\") " Dec 11 14:11:16 crc kubenswrapper[5050]: I1211 14:11:16.403556 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4334061e-4daa-4f87-bbdc-d1ccbfdafa27-config-data\") pod \"4334061e-4daa-4f87-bbdc-d1ccbfdafa27\" (UID: \"4334061e-4daa-4f87-bbdc-d1ccbfdafa27\") " Dec 11 14:11:16 crc kubenswrapper[5050]: I1211 14:11:16.420975 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4334061e-4daa-4f87-bbdc-d1ccbfdafa27-kube-api-access-9tsph" (OuterVolumeSpecName: "kube-api-access-9tsph") pod "4334061e-4daa-4f87-bbdc-d1ccbfdafa27" (UID: "4334061e-4daa-4f87-bbdc-d1ccbfdafa27"). InnerVolumeSpecName "kube-api-access-9tsph". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 14:11:16 crc kubenswrapper[5050]: I1211 14:11:16.426547 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4334061e-4daa-4f87-bbdc-d1ccbfdafa27-scripts" (OuterVolumeSpecName: "scripts") pod "4334061e-4daa-4f87-bbdc-d1ccbfdafa27" (UID: "4334061e-4daa-4f87-bbdc-d1ccbfdafa27"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:11:16 crc kubenswrapper[5050]: I1211 14:11:16.439225 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4334061e-4daa-4f87-bbdc-d1ccbfdafa27-config-data" (OuterVolumeSpecName: "config-data") pod "4334061e-4daa-4f87-bbdc-d1ccbfdafa27" (UID: "4334061e-4daa-4f87-bbdc-d1ccbfdafa27"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:11:16 crc kubenswrapper[5050]: I1211 14:11:16.445551 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4334061e-4daa-4f87-bbdc-d1ccbfdafa27-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4334061e-4daa-4f87-bbdc-d1ccbfdafa27" (UID: "4334061e-4daa-4f87-bbdc-d1ccbfdafa27"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:11:16 crc kubenswrapper[5050]: I1211 14:11:16.506739 5050 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4334061e-4daa-4f87-bbdc-d1ccbfdafa27-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 14:11:16 crc kubenswrapper[5050]: I1211 14:11:16.506781 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9tsph\" (UniqueName: \"kubernetes.io/projected/4334061e-4daa-4f87-bbdc-d1ccbfdafa27-kube-api-access-9tsph\") on node \"crc\" DevicePath \"\"" Dec 11 14:11:16 crc kubenswrapper[5050]: I1211 14:11:16.506794 5050 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4334061e-4daa-4f87-bbdc-d1ccbfdafa27-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 14:11:16 crc kubenswrapper[5050]: I1211 14:11:16.506809 5050 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4334061e-4daa-4f87-bbdc-d1ccbfdafa27-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 14:11:16 crc kubenswrapper[5050]: I1211 14:11:16.879116 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-x6sqt" event={"ID":"4334061e-4daa-4f87-bbdc-d1ccbfdafa27","Type":"ContainerDied","Data":"f5665562858b97ed69c917e98ec0856c1a00237908d5bfac973f634a488f0826"} Dec 11 14:11:16 crc kubenswrapper[5050]: I1211 14:11:16.879199 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f5665562858b97ed69c917e98ec0856c1a00237908d5bfac973f634a488f0826" Dec 11 14:11:16 crc kubenswrapper[5050]: I1211 14:11:16.879211 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-x6sqt" Dec 11 14:11:16 crc kubenswrapper[5050]: I1211 14:11:16.885956 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Dec 11 14:11:17 crc kubenswrapper[5050]: I1211 14:11:17.086278 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 11 14:11:17 crc kubenswrapper[5050]: I1211 14:11:17.086627 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="7e2c30e6-8333-4cea-bf9c-8111be66ed79" containerName="nova-api-log" containerID="cri-o://6cca3c7c0251946dd45dfce4cb39a2bd940d4693e3808767f9590bd1643c4a06" gracePeriod=30 Dec 11 14:11:17 crc kubenswrapper[5050]: I1211 14:11:17.086713 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="7e2c30e6-8333-4cea-bf9c-8111be66ed79" containerName="nova-api-api" containerID="cri-o://b6342cfc4dce56b6a59f67ac4172b491a0e6b90b71daf0925a6a8600b24badf2" gracePeriod=30 Dec 11 14:11:17 crc kubenswrapper[5050]: I1211 14:11:17.102876 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Dec 11 14:11:17 crc kubenswrapper[5050]: I1211 14:11:17.103715 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="2dd3de51-b591-4c54-9479-f94369d70ecf" containerName="nova-scheduler-scheduler" containerID="cri-o://288445757ecc34f547f099e2777c0212196ffb7ac831d9f3f4b9391630a1db24" gracePeriod=30 Dec 11 14:11:17 crc kubenswrapper[5050]: I1211 14:11:17.225359 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 11 14:11:17 crc kubenswrapper[5050]: I1211 14:11:17.892524 5050 generic.go:334] "Generic (PLEG): container finished" podID="7e2c30e6-8333-4cea-bf9c-8111be66ed79" containerID="6cca3c7c0251946dd45dfce4cb39a2bd940d4693e3808767f9590bd1643c4a06" exitCode=143 Dec 11 14:11:17 crc kubenswrapper[5050]: I1211 14:11:17.892596 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7e2c30e6-8333-4cea-bf9c-8111be66ed79","Type":"ContainerDied","Data":"6cca3c7c0251946dd45dfce4cb39a2bd940d4693e3808767f9590bd1643c4a06"} Dec 11 14:11:18 crc kubenswrapper[5050]: I1211 14:11:18.904663 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="2fc263ce-85a4-4182-87ac-a5cd6d48b65a" containerName="nova-metadata-log" containerID="cri-o://65116f2a8027939135081a58a517923b4acc7016a42da1a1008f62b9e4677834" gracePeriod=30 Dec 11 14:11:18 crc kubenswrapper[5050]: I1211 14:11:18.904731 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="2fc263ce-85a4-4182-87ac-a5cd6d48b65a" containerName="nova-metadata-metadata" containerID="cri-o://907158c68cab64311c7fc08de659e03e3f32e17b40d777736a044df8f88009bc" gracePeriod=30 Dec 11 14:11:19 crc kubenswrapper[5050]: I1211 14:11:19.919429 5050 generic.go:334] "Generic (PLEG): container finished" podID="2dd3de51-b591-4c54-9479-f94369d70ecf" containerID="288445757ecc34f547f099e2777c0212196ffb7ac831d9f3f4b9391630a1db24" exitCode=0 Dec 11 14:11:19 crc kubenswrapper[5050]: I1211 14:11:19.919521 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"2dd3de51-b591-4c54-9479-f94369d70ecf","Type":"ContainerDied","Data":"288445757ecc34f547f099e2777c0212196ffb7ac831d9f3f4b9391630a1db24"} Dec 
11 14:11:19 crc kubenswrapper[5050]: I1211 14:11:19.925359 5050 generic.go:334] "Generic (PLEG): container finished" podID="2fc263ce-85a4-4182-87ac-a5cd6d48b65a" containerID="65116f2a8027939135081a58a517923b4acc7016a42da1a1008f62b9e4677834" exitCode=143 Dec 11 14:11:19 crc kubenswrapper[5050]: I1211 14:11:19.925419 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2fc263ce-85a4-4182-87ac-a5cd6d48b65a","Type":"ContainerDied","Data":"65116f2a8027939135081a58a517923b4acc7016a42da1a1008f62b9e4677834"} Dec 11 14:11:20 crc kubenswrapper[5050]: I1211 14:11:20.217237 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 11 14:11:20 crc kubenswrapper[5050]: I1211 14:11:20.286765 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2dd3de51-b591-4c54-9479-f94369d70ecf-config-data\") pod \"2dd3de51-b591-4c54-9479-f94369d70ecf\" (UID: \"2dd3de51-b591-4c54-9479-f94369d70ecf\") " Dec 11 14:11:20 crc kubenswrapper[5050]: I1211 14:11:20.287128 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lqlpj\" (UniqueName: \"kubernetes.io/projected/2dd3de51-b591-4c54-9479-f94369d70ecf-kube-api-access-lqlpj\") pod \"2dd3de51-b591-4c54-9479-f94369d70ecf\" (UID: \"2dd3de51-b591-4c54-9479-f94369d70ecf\") " Dec 11 14:11:20 crc kubenswrapper[5050]: I1211 14:11:20.287158 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2dd3de51-b591-4c54-9479-f94369d70ecf-combined-ca-bundle\") pod \"2dd3de51-b591-4c54-9479-f94369d70ecf\" (UID: \"2dd3de51-b591-4c54-9479-f94369d70ecf\") " Dec 11 14:11:20 crc kubenswrapper[5050]: I1211 14:11:20.312273 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2dd3de51-b591-4c54-9479-f94369d70ecf-kube-api-access-lqlpj" (OuterVolumeSpecName: "kube-api-access-lqlpj") pod "2dd3de51-b591-4c54-9479-f94369d70ecf" (UID: "2dd3de51-b591-4c54-9479-f94369d70ecf"). InnerVolumeSpecName "kube-api-access-lqlpj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 14:11:20 crc kubenswrapper[5050]: I1211 14:11:20.323566 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2dd3de51-b591-4c54-9479-f94369d70ecf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2dd3de51-b591-4c54-9479-f94369d70ecf" (UID: "2dd3de51-b591-4c54-9479-f94369d70ecf"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:11:20 crc kubenswrapper[5050]: I1211 14:11:20.337657 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2dd3de51-b591-4c54-9479-f94369d70ecf-config-data" (OuterVolumeSpecName: "config-data") pod "2dd3de51-b591-4c54-9479-f94369d70ecf" (UID: "2dd3de51-b591-4c54-9479-f94369d70ecf"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:11:20 crc kubenswrapper[5050]: I1211 14:11:20.389510 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lqlpj\" (UniqueName: \"kubernetes.io/projected/2dd3de51-b591-4c54-9479-f94369d70ecf-kube-api-access-lqlpj\") on node \"crc\" DevicePath \"\"" Dec 11 14:11:20 crc kubenswrapper[5050]: I1211 14:11:20.389716 5050 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2dd3de51-b591-4c54-9479-f94369d70ecf-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 14:11:20 crc kubenswrapper[5050]: I1211 14:11:20.389827 5050 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2dd3de51-b591-4c54-9479-f94369d70ecf-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 14:11:20 crc kubenswrapper[5050]: I1211 14:11:20.941296 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"2dd3de51-b591-4c54-9479-f94369d70ecf","Type":"ContainerDied","Data":"76c7cc04f94a7671c50208c9b02c0154ba6419c7191d51fa0f8af2ea12eaf5a0"} Dec 11 14:11:20 crc kubenswrapper[5050]: I1211 14:11:20.941360 5050 scope.go:117] "RemoveContainer" containerID="288445757ecc34f547f099e2777c0212196ffb7ac831d9f3f4b9391630a1db24" Dec 11 14:11:20 crc kubenswrapper[5050]: I1211 14:11:20.941513 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 11 14:11:20 crc kubenswrapper[5050]: I1211 14:11:20.967897 5050 generic.go:334] "Generic (PLEG): container finished" podID="7e2c30e6-8333-4cea-bf9c-8111be66ed79" containerID="b6342cfc4dce56b6a59f67ac4172b491a0e6b90b71daf0925a6a8600b24badf2" exitCode=0 Dec 11 14:11:20 crc kubenswrapper[5050]: I1211 14:11:20.967962 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7e2c30e6-8333-4cea-bf9c-8111be66ed79","Type":"ContainerDied","Data":"b6342cfc4dce56b6a59f67ac4172b491a0e6b90b71daf0925a6a8600b24badf2"} Dec 11 14:11:21 crc kubenswrapper[5050]: I1211 14:11:21.012983 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Dec 11 14:11:21 crc kubenswrapper[5050]: I1211 14:11:21.024297 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Dec 11 14:11:21 crc kubenswrapper[5050]: I1211 14:11:21.038319 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Dec 11 14:11:21 crc kubenswrapper[5050]: E1211 14:11:21.039109 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4334061e-4daa-4f87-bbdc-d1ccbfdafa27" containerName="nova-manage" Dec 11 14:11:21 crc kubenswrapper[5050]: I1211 14:11:21.039142 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="4334061e-4daa-4f87-bbdc-d1ccbfdafa27" containerName="nova-manage" Dec 11 14:11:21 crc kubenswrapper[5050]: E1211 14:11:21.039157 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2dd3de51-b591-4c54-9479-f94369d70ecf" containerName="nova-scheduler-scheduler" Dec 11 14:11:21 crc kubenswrapper[5050]: I1211 14:11:21.039168 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="2dd3de51-b591-4c54-9479-f94369d70ecf" containerName="nova-scheduler-scheduler" Dec 11 14:11:21 crc kubenswrapper[5050]: I1211 14:11:21.039480 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="2dd3de51-b591-4c54-9479-f94369d70ecf" containerName="nova-scheduler-scheduler" Dec 11 
14:11:21 crc kubenswrapper[5050]: I1211 14:11:21.039548 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="4334061e-4daa-4f87-bbdc-d1ccbfdafa27" containerName="nova-manage" Dec 11 14:11:21 crc kubenswrapper[5050]: I1211 14:11:21.040825 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 11 14:11:21 crc kubenswrapper[5050]: I1211 14:11:21.048801 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Dec 11 14:11:21 crc kubenswrapper[5050]: I1211 14:11:21.049997 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 11 14:11:21 crc kubenswrapper[5050]: I1211 14:11:21.105783 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87937f27-2525-4fed-88bb-38a90404860c-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"87937f27-2525-4fed-88bb-38a90404860c\") " pod="openstack/nova-scheduler-0" Dec 11 14:11:21 crc kubenswrapper[5050]: I1211 14:11:21.106304 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87937f27-2525-4fed-88bb-38a90404860c-config-data\") pod \"nova-scheduler-0\" (UID: \"87937f27-2525-4fed-88bb-38a90404860c\") " pod="openstack/nova-scheduler-0" Dec 11 14:11:21 crc kubenswrapper[5050]: I1211 14:11:21.106355 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-glxxg\" (UniqueName: \"kubernetes.io/projected/87937f27-2525-4fed-88bb-38a90404860c-kube-api-access-glxxg\") pod \"nova-scheduler-0\" (UID: \"87937f27-2525-4fed-88bb-38a90404860c\") " pod="openstack/nova-scheduler-0" Dec 11 14:11:21 crc kubenswrapper[5050]: I1211 14:11:21.208648 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87937f27-2525-4fed-88bb-38a90404860c-config-data\") pod \"nova-scheduler-0\" (UID: \"87937f27-2525-4fed-88bb-38a90404860c\") " pod="openstack/nova-scheduler-0" Dec 11 14:11:21 crc kubenswrapper[5050]: I1211 14:11:21.208741 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-glxxg\" (UniqueName: \"kubernetes.io/projected/87937f27-2525-4fed-88bb-38a90404860c-kube-api-access-glxxg\") pod \"nova-scheduler-0\" (UID: \"87937f27-2525-4fed-88bb-38a90404860c\") " pod="openstack/nova-scheduler-0" Dec 11 14:11:21 crc kubenswrapper[5050]: I1211 14:11:21.208785 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87937f27-2525-4fed-88bb-38a90404860c-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"87937f27-2525-4fed-88bb-38a90404860c\") " pod="openstack/nova-scheduler-0" Dec 11 14:11:21 crc kubenswrapper[5050]: I1211 14:11:21.225678 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87937f27-2525-4fed-88bb-38a90404860c-config-data\") pod \"nova-scheduler-0\" (UID: \"87937f27-2525-4fed-88bb-38a90404860c\") " pod="openstack/nova-scheduler-0" Dec 11 14:11:21 crc kubenswrapper[5050]: I1211 14:11:21.228235 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/87937f27-2525-4fed-88bb-38a90404860c-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"87937f27-2525-4fed-88bb-38a90404860c\") " pod="openstack/nova-scheduler-0" Dec 11 14:11:21 crc kubenswrapper[5050]: I1211 14:11:21.231716 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-glxxg\" (UniqueName: \"kubernetes.io/projected/87937f27-2525-4fed-88bb-38a90404860c-kube-api-access-glxxg\") pod \"nova-scheduler-0\" (UID: \"87937f27-2525-4fed-88bb-38a90404860c\") " pod="openstack/nova-scheduler-0" Dec 11 14:11:21 crc kubenswrapper[5050]: I1211 14:11:21.323090 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 11 14:11:21 crc kubenswrapper[5050]: I1211 14:11:21.361915 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 11 14:11:21 crc kubenswrapper[5050]: I1211 14:11:21.411628 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-47thb\" (UniqueName: \"kubernetes.io/projected/7e2c30e6-8333-4cea-bf9c-8111be66ed79-kube-api-access-47thb\") pod \"7e2c30e6-8333-4cea-bf9c-8111be66ed79\" (UID: \"7e2c30e6-8333-4cea-bf9c-8111be66ed79\") " Dec 11 14:11:21 crc kubenswrapper[5050]: I1211 14:11:21.411761 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7e2c30e6-8333-4cea-bf9c-8111be66ed79-logs\") pod \"7e2c30e6-8333-4cea-bf9c-8111be66ed79\" (UID: \"7e2c30e6-8333-4cea-bf9c-8111be66ed79\") " Dec 11 14:11:21 crc kubenswrapper[5050]: I1211 14:11:21.411799 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e2c30e6-8333-4cea-bf9c-8111be66ed79-internal-tls-certs\") pod \"7e2c30e6-8333-4cea-bf9c-8111be66ed79\" (UID: \"7e2c30e6-8333-4cea-bf9c-8111be66ed79\") " Dec 11 14:11:21 crc kubenswrapper[5050]: I1211 14:11:21.411872 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e2c30e6-8333-4cea-bf9c-8111be66ed79-public-tls-certs\") pod \"7e2c30e6-8333-4cea-bf9c-8111be66ed79\" (UID: \"7e2c30e6-8333-4cea-bf9c-8111be66ed79\") " Dec 11 14:11:21 crc kubenswrapper[5050]: I1211 14:11:21.411960 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e2c30e6-8333-4cea-bf9c-8111be66ed79-config-data\") pod \"7e2c30e6-8333-4cea-bf9c-8111be66ed79\" (UID: \"7e2c30e6-8333-4cea-bf9c-8111be66ed79\") " Dec 11 14:11:21 crc kubenswrapper[5050]: I1211 14:11:21.412088 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e2c30e6-8333-4cea-bf9c-8111be66ed79-combined-ca-bundle\") pod \"7e2c30e6-8333-4cea-bf9c-8111be66ed79\" (UID: \"7e2c30e6-8333-4cea-bf9c-8111be66ed79\") " Dec 11 14:11:21 crc kubenswrapper[5050]: I1211 14:11:21.415240 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7e2c30e6-8333-4cea-bf9c-8111be66ed79-logs" (OuterVolumeSpecName: "logs") pod "7e2c30e6-8333-4cea-bf9c-8111be66ed79" (UID: "7e2c30e6-8333-4cea-bf9c-8111be66ed79"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 14:11:21 crc kubenswrapper[5050]: I1211 14:11:21.422487 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e2c30e6-8333-4cea-bf9c-8111be66ed79-kube-api-access-47thb" (OuterVolumeSpecName: "kube-api-access-47thb") pod "7e2c30e6-8333-4cea-bf9c-8111be66ed79" (UID: "7e2c30e6-8333-4cea-bf9c-8111be66ed79"). InnerVolumeSpecName "kube-api-access-47thb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 14:11:21 crc kubenswrapper[5050]: I1211 14:11:21.463672 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e2c30e6-8333-4cea-bf9c-8111be66ed79-config-data" (OuterVolumeSpecName: "config-data") pod "7e2c30e6-8333-4cea-bf9c-8111be66ed79" (UID: "7e2c30e6-8333-4cea-bf9c-8111be66ed79"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:11:21 crc kubenswrapper[5050]: I1211 14:11:21.469276 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e2c30e6-8333-4cea-bf9c-8111be66ed79-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7e2c30e6-8333-4cea-bf9c-8111be66ed79" (UID: "7e2c30e6-8333-4cea-bf9c-8111be66ed79"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:11:21 crc kubenswrapper[5050]: I1211 14:11:21.484223 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e2c30e6-8333-4cea-bf9c-8111be66ed79-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "7e2c30e6-8333-4cea-bf9c-8111be66ed79" (UID: "7e2c30e6-8333-4cea-bf9c-8111be66ed79"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:11:21 crc kubenswrapper[5050]: I1211 14:11:21.492633 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e2c30e6-8333-4cea-bf9c-8111be66ed79-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "7e2c30e6-8333-4cea-bf9c-8111be66ed79" (UID: "7e2c30e6-8333-4cea-bf9c-8111be66ed79"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:11:21 crc kubenswrapper[5050]: I1211 14:11:21.515224 5050 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e2c30e6-8333-4cea-bf9c-8111be66ed79-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 14:11:21 crc kubenswrapper[5050]: I1211 14:11:21.515671 5050 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e2c30e6-8333-4cea-bf9c-8111be66ed79-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 14:11:21 crc kubenswrapper[5050]: I1211 14:11:21.515691 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-47thb\" (UniqueName: \"kubernetes.io/projected/7e2c30e6-8333-4cea-bf9c-8111be66ed79-kube-api-access-47thb\") on node \"crc\" DevicePath \"\"" Dec 11 14:11:21 crc kubenswrapper[5050]: I1211 14:11:21.515704 5050 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7e2c30e6-8333-4cea-bf9c-8111be66ed79-logs\") on node \"crc\" DevicePath \"\"" Dec 11 14:11:21 crc kubenswrapper[5050]: I1211 14:11:21.515716 5050 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e2c30e6-8333-4cea-bf9c-8111be66ed79-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 11 14:11:21 crc kubenswrapper[5050]: I1211 14:11:21.515731 5050 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e2c30e6-8333-4cea-bf9c-8111be66ed79-public-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 11 14:11:21 crc kubenswrapper[5050]: I1211 14:11:21.564274 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2dd3de51-b591-4c54-9479-f94369d70ecf" path="/var/lib/kubelet/pods/2dd3de51-b591-4c54-9479-f94369d70ecf/volumes" Dec 11 14:11:21 crc kubenswrapper[5050]: I1211 14:11:21.876501 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 11 14:11:21 crc kubenswrapper[5050]: W1211 14:11:21.882536 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod87937f27_2525_4fed_88bb_38a90404860c.slice/crio-12f511cd7b7e0497850929955d2ffdd39e255588a6ae04bfa636080187e6b832 WatchSource:0}: Error finding container 12f511cd7b7e0497850929955d2ffdd39e255588a6ae04bfa636080187e6b832: Status 404 returned error can't find the container with id 12f511cd7b7e0497850929955d2ffdd39e255588a6ae04bfa636080187e6b832 Dec 11 14:11:21 crc kubenswrapper[5050]: I1211 14:11:21.984295 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7e2c30e6-8333-4cea-bf9c-8111be66ed79","Type":"ContainerDied","Data":"88dd40b935a4cf133a3d1d782f97c00bc4273d0b55555e184bcebd256e069f78"} Dec 11 14:11:21 crc kubenswrapper[5050]: I1211 14:11:21.984367 5050 scope.go:117] "RemoveContainer" containerID="b6342cfc4dce56b6a59f67ac4172b491a0e6b90b71daf0925a6a8600b24badf2" Dec 11 14:11:21 crc kubenswrapper[5050]: I1211 14:11:21.984430 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 11 14:11:22 crc kubenswrapper[5050]: I1211 14:11:21.997222 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"87937f27-2525-4fed-88bb-38a90404860c","Type":"ContainerStarted","Data":"12f511cd7b7e0497850929955d2ffdd39e255588a6ae04bfa636080187e6b832"} Dec 11 14:11:22 crc kubenswrapper[5050]: I1211 14:11:22.048065 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 11 14:11:22 crc kubenswrapper[5050]: I1211 14:11:22.051482 5050 scope.go:117] "RemoveContainer" containerID="6cca3c7c0251946dd45dfce4cb39a2bd940d4693e3808767f9590bd1643c4a06" Dec 11 14:11:22 crc kubenswrapper[5050]: I1211 14:11:22.058049 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="2fc263ce-85a4-4182-87ac-a5cd6d48b65a" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.196:8775/\": read tcp 10.217.0.2:33700->10.217.0.196:8775: read: connection reset by peer" Dec 11 14:11:22 crc kubenswrapper[5050]: I1211 14:11:22.058230 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="2fc263ce-85a4-4182-87ac-a5cd6d48b65a" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.196:8775/\": read tcp 10.217.0.2:33688->10.217.0.196:8775: read: connection reset by peer" Dec 11 14:11:22 crc kubenswrapper[5050]: I1211 14:11:22.099586 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Dec 11 14:11:22 crc kubenswrapper[5050]: I1211 14:11:22.122053 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Dec 11 14:11:22 crc kubenswrapper[5050]: E1211 14:11:22.122611 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e2c30e6-8333-4cea-bf9c-8111be66ed79" containerName="nova-api-api" Dec 11 14:11:22 crc kubenswrapper[5050]: I1211 14:11:22.122633 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e2c30e6-8333-4cea-bf9c-8111be66ed79" containerName="nova-api-api" Dec 11 14:11:22 crc kubenswrapper[5050]: E1211 14:11:22.122675 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e2c30e6-8333-4cea-bf9c-8111be66ed79" containerName="nova-api-log" Dec 11 14:11:22 crc kubenswrapper[5050]: I1211 14:11:22.122682 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e2c30e6-8333-4cea-bf9c-8111be66ed79" containerName="nova-api-log" Dec 11 14:11:22 crc kubenswrapper[5050]: I1211 14:11:22.122938 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e2c30e6-8333-4cea-bf9c-8111be66ed79" containerName="nova-api-api" Dec 11 14:11:22 crc kubenswrapper[5050]: I1211 14:11:22.122968 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e2c30e6-8333-4cea-bf9c-8111be66ed79" containerName="nova-api-log" Dec 11 14:11:22 crc kubenswrapper[5050]: I1211 14:11:22.124386 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 11 14:11:22 crc kubenswrapper[5050]: I1211 14:11:22.127468 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Dec 11 14:11:22 crc kubenswrapper[5050]: I1211 14:11:22.128291 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Dec 11 14:11:22 crc kubenswrapper[5050]: I1211 14:11:22.135037 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Dec 11 14:11:22 crc kubenswrapper[5050]: I1211 14:11:22.137000 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 11 14:11:22 crc kubenswrapper[5050]: I1211 14:11:22.249911 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3386ffea-45ca-41e8-9aa5-61a2923a3394-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"3386ffea-45ca-41e8-9aa5-61a2923a3394\") " pod="openstack/nova-api-0" Dec 11 14:11:22 crc kubenswrapper[5050]: I1211 14:11:22.249969 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzgf4\" (UniqueName: \"kubernetes.io/projected/3386ffea-45ca-41e8-9aa5-61a2923a3394-kube-api-access-wzgf4\") pod \"nova-api-0\" (UID: \"3386ffea-45ca-41e8-9aa5-61a2923a3394\") " pod="openstack/nova-api-0" Dec 11 14:11:22 crc kubenswrapper[5050]: I1211 14:11:22.250073 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3386ffea-45ca-41e8-9aa5-61a2923a3394-internal-tls-certs\") pod \"nova-api-0\" (UID: \"3386ffea-45ca-41e8-9aa5-61a2923a3394\") " pod="openstack/nova-api-0" Dec 11 14:11:22 crc kubenswrapper[5050]: I1211 14:11:22.250424 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3386ffea-45ca-41e8-9aa5-61a2923a3394-logs\") pod \"nova-api-0\" (UID: \"3386ffea-45ca-41e8-9aa5-61a2923a3394\") " pod="openstack/nova-api-0" Dec 11 14:11:22 crc kubenswrapper[5050]: I1211 14:11:22.250609 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3386ffea-45ca-41e8-9aa5-61a2923a3394-config-data\") pod \"nova-api-0\" (UID: \"3386ffea-45ca-41e8-9aa5-61a2923a3394\") " pod="openstack/nova-api-0" Dec 11 14:11:22 crc kubenswrapper[5050]: I1211 14:11:22.250929 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3386ffea-45ca-41e8-9aa5-61a2923a3394-public-tls-certs\") pod \"nova-api-0\" (UID: \"3386ffea-45ca-41e8-9aa5-61a2923a3394\") " pod="openstack/nova-api-0" Dec 11 14:11:22 crc kubenswrapper[5050]: I1211 14:11:22.352609 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3386ffea-45ca-41e8-9aa5-61a2923a3394-logs\") pod \"nova-api-0\" (UID: \"3386ffea-45ca-41e8-9aa5-61a2923a3394\") " pod="openstack/nova-api-0" Dec 11 14:11:22 crc kubenswrapper[5050]: I1211 14:11:22.352703 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3386ffea-45ca-41e8-9aa5-61a2923a3394-config-data\") pod \"nova-api-0\" (UID: 
\"3386ffea-45ca-41e8-9aa5-61a2923a3394\") " pod="openstack/nova-api-0" Dec 11 14:11:22 crc kubenswrapper[5050]: I1211 14:11:22.352799 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3386ffea-45ca-41e8-9aa5-61a2923a3394-public-tls-certs\") pod \"nova-api-0\" (UID: \"3386ffea-45ca-41e8-9aa5-61a2923a3394\") " pod="openstack/nova-api-0" Dec 11 14:11:22 crc kubenswrapper[5050]: I1211 14:11:22.352834 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3386ffea-45ca-41e8-9aa5-61a2923a3394-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"3386ffea-45ca-41e8-9aa5-61a2923a3394\") " pod="openstack/nova-api-0" Dec 11 14:11:22 crc kubenswrapper[5050]: I1211 14:11:22.353593 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wzgf4\" (UniqueName: \"kubernetes.io/projected/3386ffea-45ca-41e8-9aa5-61a2923a3394-kube-api-access-wzgf4\") pod \"nova-api-0\" (UID: \"3386ffea-45ca-41e8-9aa5-61a2923a3394\") " pod="openstack/nova-api-0" Dec 11 14:11:22 crc kubenswrapper[5050]: I1211 14:11:22.353645 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3386ffea-45ca-41e8-9aa5-61a2923a3394-internal-tls-certs\") pod \"nova-api-0\" (UID: \"3386ffea-45ca-41e8-9aa5-61a2923a3394\") " pod="openstack/nova-api-0" Dec 11 14:11:22 crc kubenswrapper[5050]: I1211 14:11:22.353264 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3386ffea-45ca-41e8-9aa5-61a2923a3394-logs\") pod \"nova-api-0\" (UID: \"3386ffea-45ca-41e8-9aa5-61a2923a3394\") " pod="openstack/nova-api-0" Dec 11 14:11:22 crc kubenswrapper[5050]: I1211 14:11:22.359178 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3386ffea-45ca-41e8-9aa5-61a2923a3394-config-data\") pod \"nova-api-0\" (UID: \"3386ffea-45ca-41e8-9aa5-61a2923a3394\") " pod="openstack/nova-api-0" Dec 11 14:11:22 crc kubenswrapper[5050]: I1211 14:11:22.359507 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3386ffea-45ca-41e8-9aa5-61a2923a3394-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"3386ffea-45ca-41e8-9aa5-61a2923a3394\") " pod="openstack/nova-api-0" Dec 11 14:11:22 crc kubenswrapper[5050]: I1211 14:11:22.359898 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3386ffea-45ca-41e8-9aa5-61a2923a3394-internal-tls-certs\") pod \"nova-api-0\" (UID: \"3386ffea-45ca-41e8-9aa5-61a2923a3394\") " pod="openstack/nova-api-0" Dec 11 14:11:22 crc kubenswrapper[5050]: I1211 14:11:22.361560 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3386ffea-45ca-41e8-9aa5-61a2923a3394-public-tls-certs\") pod \"nova-api-0\" (UID: \"3386ffea-45ca-41e8-9aa5-61a2923a3394\") " pod="openstack/nova-api-0" Dec 11 14:11:22 crc kubenswrapper[5050]: I1211 14:11:22.379893 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wzgf4\" (UniqueName: \"kubernetes.io/projected/3386ffea-45ca-41e8-9aa5-61a2923a3394-kube-api-access-wzgf4\") pod \"nova-api-0\" (UID: \"3386ffea-45ca-41e8-9aa5-61a2923a3394\") " 
pod="openstack/nova-api-0" Dec 11 14:11:22 crc kubenswrapper[5050]: I1211 14:11:22.445993 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 11 14:11:23 crc kubenswrapper[5050]: I1211 14:11:23.001638 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 11 14:11:23 crc kubenswrapper[5050]: I1211 14:11:23.015447 5050 generic.go:334] "Generic (PLEG): container finished" podID="2fc263ce-85a4-4182-87ac-a5cd6d48b65a" containerID="907158c68cab64311c7fc08de659e03e3f32e17b40d777736a044df8f88009bc" exitCode=0 Dec 11 14:11:23 crc kubenswrapper[5050]: I1211 14:11:23.015530 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2fc263ce-85a4-4182-87ac-a5cd6d48b65a","Type":"ContainerDied","Data":"907158c68cab64311c7fc08de659e03e3f32e17b40d777736a044df8f88009bc"} Dec 11 14:11:23 crc kubenswrapper[5050]: I1211 14:11:23.017438 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3386ffea-45ca-41e8-9aa5-61a2923a3394","Type":"ContainerStarted","Data":"9ea84eaaf8c76e0514c788691c988324c41b656d2f9f56a1293298caf825c84c"} Dec 11 14:11:23 crc kubenswrapper[5050]: I1211 14:11:23.019350 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"87937f27-2525-4fed-88bb-38a90404860c","Type":"ContainerStarted","Data":"8106f10bed3bf46c894b57795f1b60477931a5aba11442b13a0172969049c89a"} Dec 11 14:11:23 crc kubenswrapper[5050]: I1211 14:11:23.054279 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.054255234 podStartE2EDuration="3.054255234s" podCreationTimestamp="2025-12-11 14:11:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 14:11:23.043511164 +0000 UTC m=+1373.887233760" watchObservedRunningTime="2025-12-11 14:11:23.054255234 +0000 UTC m=+1373.897977820" Dec 11 14:11:23 crc kubenswrapper[5050]: I1211 14:11:23.068345 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 11 14:11:23 crc kubenswrapper[5050]: I1211 14:11:23.173312 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fc263ce-85a4-4182-87ac-a5cd6d48b65a-combined-ca-bundle\") pod \"2fc263ce-85a4-4182-87ac-a5cd6d48b65a\" (UID: \"2fc263ce-85a4-4182-87ac-a5cd6d48b65a\") " Dec 11 14:11:23 crc kubenswrapper[5050]: I1211 14:11:23.173620 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jzkzr\" (UniqueName: \"kubernetes.io/projected/2fc263ce-85a4-4182-87ac-a5cd6d48b65a-kube-api-access-jzkzr\") pod \"2fc263ce-85a4-4182-87ac-a5cd6d48b65a\" (UID: \"2fc263ce-85a4-4182-87ac-a5cd6d48b65a\") " Dec 11 14:11:23 crc kubenswrapper[5050]: I1211 14:11:23.173786 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2fc263ce-85a4-4182-87ac-a5cd6d48b65a-config-data\") pod \"2fc263ce-85a4-4182-87ac-a5cd6d48b65a\" (UID: \"2fc263ce-85a4-4182-87ac-a5cd6d48b65a\") " Dec 11 14:11:23 crc kubenswrapper[5050]: I1211 14:11:23.173860 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/2fc263ce-85a4-4182-87ac-a5cd6d48b65a-nova-metadata-tls-certs\") pod \"2fc263ce-85a4-4182-87ac-a5cd6d48b65a\" (UID: \"2fc263ce-85a4-4182-87ac-a5cd6d48b65a\") " Dec 11 14:11:23 crc kubenswrapper[5050]: I1211 14:11:23.173947 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2fc263ce-85a4-4182-87ac-a5cd6d48b65a-logs\") pod \"2fc263ce-85a4-4182-87ac-a5cd6d48b65a\" (UID: \"2fc263ce-85a4-4182-87ac-a5cd6d48b65a\") " Dec 11 14:11:23 crc kubenswrapper[5050]: I1211 14:11:23.175008 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2fc263ce-85a4-4182-87ac-a5cd6d48b65a-logs" (OuterVolumeSpecName: "logs") pod "2fc263ce-85a4-4182-87ac-a5cd6d48b65a" (UID: "2fc263ce-85a4-4182-87ac-a5cd6d48b65a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 14:11:23 crc kubenswrapper[5050]: I1211 14:11:23.181204 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2fc263ce-85a4-4182-87ac-a5cd6d48b65a-kube-api-access-jzkzr" (OuterVolumeSpecName: "kube-api-access-jzkzr") pod "2fc263ce-85a4-4182-87ac-a5cd6d48b65a" (UID: "2fc263ce-85a4-4182-87ac-a5cd6d48b65a"). InnerVolumeSpecName "kube-api-access-jzkzr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 14:11:23 crc kubenswrapper[5050]: I1211 14:11:23.215496 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2fc263ce-85a4-4182-87ac-a5cd6d48b65a-config-data" (OuterVolumeSpecName: "config-data") pod "2fc263ce-85a4-4182-87ac-a5cd6d48b65a" (UID: "2fc263ce-85a4-4182-87ac-a5cd6d48b65a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:11:23 crc kubenswrapper[5050]: I1211 14:11:23.215507 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2fc263ce-85a4-4182-87ac-a5cd6d48b65a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2fc263ce-85a4-4182-87ac-a5cd6d48b65a" (UID: "2fc263ce-85a4-4182-87ac-a5cd6d48b65a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:11:23 crc kubenswrapper[5050]: I1211 14:11:23.264583 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2fc263ce-85a4-4182-87ac-a5cd6d48b65a-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "2fc263ce-85a4-4182-87ac-a5cd6d48b65a" (UID: "2fc263ce-85a4-4182-87ac-a5cd6d48b65a"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:11:23 crc kubenswrapper[5050]: I1211 14:11:23.277093 5050 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/2fc263ce-85a4-4182-87ac-a5cd6d48b65a-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 11 14:11:23 crc kubenswrapper[5050]: I1211 14:11:23.277325 5050 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2fc263ce-85a4-4182-87ac-a5cd6d48b65a-logs\") on node \"crc\" DevicePath \"\"" Dec 11 14:11:23 crc kubenswrapper[5050]: I1211 14:11:23.277387 5050 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fc263ce-85a4-4182-87ac-a5cd6d48b65a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 14:11:23 crc kubenswrapper[5050]: I1211 14:11:23.277446 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jzkzr\" (UniqueName: \"kubernetes.io/projected/2fc263ce-85a4-4182-87ac-a5cd6d48b65a-kube-api-access-jzkzr\") on node \"crc\" DevicePath \"\"" Dec 11 14:11:23 crc kubenswrapper[5050]: I1211 14:11:23.277503 5050 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2fc263ce-85a4-4182-87ac-a5cd6d48b65a-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 14:11:23 crc kubenswrapper[5050]: I1211 14:11:23.559552 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7e2c30e6-8333-4cea-bf9c-8111be66ed79" path="/var/lib/kubelet/pods/7e2c30e6-8333-4cea-bf9c-8111be66ed79/volumes" Dec 11 14:11:24 crc kubenswrapper[5050]: I1211 14:11:24.033666 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2fc263ce-85a4-4182-87ac-a5cd6d48b65a","Type":"ContainerDied","Data":"dd4e926bcc126f22d11295efc1781f81ecc1fb76661af81d539ba41f6e8382e5"} Dec 11 14:11:24 crc kubenswrapper[5050]: I1211 14:11:24.033717 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 11 14:11:24 crc kubenswrapper[5050]: I1211 14:11:24.034199 5050 scope.go:117] "RemoveContainer" containerID="907158c68cab64311c7fc08de659e03e3f32e17b40d777736a044df8f88009bc" Dec 11 14:11:24 crc kubenswrapper[5050]: I1211 14:11:24.035693 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3386ffea-45ca-41e8-9aa5-61a2923a3394","Type":"ContainerStarted","Data":"b8472b1400fec5a115ddf8be1b9aa9e96f77b6e60231027d5893fbbc8989bdac"} Dec 11 14:11:24 crc kubenswrapper[5050]: I1211 14:11:24.036177 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3386ffea-45ca-41e8-9aa5-61a2923a3394","Type":"ContainerStarted","Data":"26578d5a108fa4d1bfd89b54489a03a4c1c636d76f83751795e52594a63ff439"} Dec 11 14:11:24 crc kubenswrapper[5050]: I1211 14:11:24.067947 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 11 14:11:24 crc kubenswrapper[5050]: I1211 14:11:24.083300 5050 scope.go:117] "RemoveContainer" containerID="65116f2a8027939135081a58a517923b4acc7016a42da1a1008f62b9e4677834" Dec 11 14:11:24 crc kubenswrapper[5050]: I1211 14:11:24.100541 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Dec 11 14:11:24 crc kubenswrapper[5050]: I1211 14:11:24.117960 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Dec 11 14:11:24 crc kubenswrapper[5050]: E1211 14:11:24.118653 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fc263ce-85a4-4182-87ac-a5cd6d48b65a" containerName="nova-metadata-log" Dec 11 14:11:24 crc kubenswrapper[5050]: I1211 14:11:24.118672 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fc263ce-85a4-4182-87ac-a5cd6d48b65a" containerName="nova-metadata-log" Dec 11 14:11:24 crc kubenswrapper[5050]: E1211 14:11:24.118707 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fc263ce-85a4-4182-87ac-a5cd6d48b65a" containerName="nova-metadata-metadata" Dec 11 14:11:24 crc kubenswrapper[5050]: I1211 14:11:24.118717 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fc263ce-85a4-4182-87ac-a5cd6d48b65a" containerName="nova-metadata-metadata" Dec 11 14:11:24 crc kubenswrapper[5050]: I1211 14:11:24.118993 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="2fc263ce-85a4-4182-87ac-a5cd6d48b65a" containerName="nova-metadata-log" Dec 11 14:11:24 crc kubenswrapper[5050]: I1211 14:11:24.119030 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="2fc263ce-85a4-4182-87ac-a5cd6d48b65a" containerName="nova-metadata-metadata" Dec 11 14:11:24 crc kubenswrapper[5050]: I1211 14:11:24.120440 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 11 14:11:24 crc kubenswrapper[5050]: I1211 14:11:24.124784 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Dec 11 14:11:24 crc kubenswrapper[5050]: I1211 14:11:24.125214 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Dec 11 14:11:24 crc kubenswrapper[5050]: I1211 14:11:24.127507 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 11 14:11:24 crc kubenswrapper[5050]: I1211 14:11:24.128671 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.128648289 podStartE2EDuration="2.128648289s" podCreationTimestamp="2025-12-11 14:11:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 14:11:24.10129405 +0000 UTC m=+1374.945016636" watchObservedRunningTime="2025-12-11 14:11:24.128648289 +0000 UTC m=+1374.972370895" Dec 11 14:11:24 crc kubenswrapper[5050]: I1211 14:11:24.201120 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1eb418aa-1d3c-469c-8ff4-2b3c86a71e97-logs\") pod \"nova-metadata-0\" (UID: \"1eb418aa-1d3c-469c-8ff4-2b3c86a71e97\") " pod="openstack/nova-metadata-0" Dec 11 14:11:24 crc kubenswrapper[5050]: I1211 14:11:24.201215 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1eb418aa-1d3c-469c-8ff4-2b3c86a71e97-config-data\") pod \"nova-metadata-0\" (UID: \"1eb418aa-1d3c-469c-8ff4-2b3c86a71e97\") " pod="openstack/nova-metadata-0" Dec 11 14:11:24 crc kubenswrapper[5050]: I1211 14:11:24.201308 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wckq9\" (UniqueName: \"kubernetes.io/projected/1eb418aa-1d3c-469c-8ff4-2b3c86a71e97-kube-api-access-wckq9\") pod \"nova-metadata-0\" (UID: \"1eb418aa-1d3c-469c-8ff4-2b3c86a71e97\") " pod="openstack/nova-metadata-0" Dec 11 14:11:24 crc kubenswrapper[5050]: I1211 14:11:24.201388 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/1eb418aa-1d3c-469c-8ff4-2b3c86a71e97-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"1eb418aa-1d3c-469c-8ff4-2b3c86a71e97\") " pod="openstack/nova-metadata-0" Dec 11 14:11:24 crc kubenswrapper[5050]: I1211 14:11:24.201444 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1eb418aa-1d3c-469c-8ff4-2b3c86a71e97-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"1eb418aa-1d3c-469c-8ff4-2b3c86a71e97\") " pod="openstack/nova-metadata-0" Dec 11 14:11:24 crc kubenswrapper[5050]: I1211 14:11:24.304215 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1eb418aa-1d3c-469c-8ff4-2b3c86a71e97-config-data\") pod \"nova-metadata-0\" (UID: \"1eb418aa-1d3c-469c-8ff4-2b3c86a71e97\") " pod="openstack/nova-metadata-0" Dec 11 14:11:24 crc kubenswrapper[5050]: I1211 14:11:24.304373 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wckq9\" 
(UniqueName: \"kubernetes.io/projected/1eb418aa-1d3c-469c-8ff4-2b3c86a71e97-kube-api-access-wckq9\") pod \"nova-metadata-0\" (UID: \"1eb418aa-1d3c-469c-8ff4-2b3c86a71e97\") " pod="openstack/nova-metadata-0" Dec 11 14:11:24 crc kubenswrapper[5050]: I1211 14:11:24.304458 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/1eb418aa-1d3c-469c-8ff4-2b3c86a71e97-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"1eb418aa-1d3c-469c-8ff4-2b3c86a71e97\") " pod="openstack/nova-metadata-0" Dec 11 14:11:24 crc kubenswrapper[5050]: I1211 14:11:24.304493 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1eb418aa-1d3c-469c-8ff4-2b3c86a71e97-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"1eb418aa-1d3c-469c-8ff4-2b3c86a71e97\") " pod="openstack/nova-metadata-0" Dec 11 14:11:24 crc kubenswrapper[5050]: I1211 14:11:24.304581 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1eb418aa-1d3c-469c-8ff4-2b3c86a71e97-logs\") pod \"nova-metadata-0\" (UID: \"1eb418aa-1d3c-469c-8ff4-2b3c86a71e97\") " pod="openstack/nova-metadata-0" Dec 11 14:11:24 crc kubenswrapper[5050]: I1211 14:11:24.305111 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1eb418aa-1d3c-469c-8ff4-2b3c86a71e97-logs\") pod \"nova-metadata-0\" (UID: \"1eb418aa-1d3c-469c-8ff4-2b3c86a71e97\") " pod="openstack/nova-metadata-0" Dec 11 14:11:24 crc kubenswrapper[5050]: I1211 14:11:24.312595 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/1eb418aa-1d3c-469c-8ff4-2b3c86a71e97-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"1eb418aa-1d3c-469c-8ff4-2b3c86a71e97\") " pod="openstack/nova-metadata-0" Dec 11 14:11:24 crc kubenswrapper[5050]: I1211 14:11:24.313147 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1eb418aa-1d3c-469c-8ff4-2b3c86a71e97-config-data\") pod \"nova-metadata-0\" (UID: \"1eb418aa-1d3c-469c-8ff4-2b3c86a71e97\") " pod="openstack/nova-metadata-0" Dec 11 14:11:24 crc kubenswrapper[5050]: I1211 14:11:24.319838 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1eb418aa-1d3c-469c-8ff4-2b3c86a71e97-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"1eb418aa-1d3c-469c-8ff4-2b3c86a71e97\") " pod="openstack/nova-metadata-0" Dec 11 14:11:24 crc kubenswrapper[5050]: I1211 14:11:24.324741 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wckq9\" (UniqueName: \"kubernetes.io/projected/1eb418aa-1d3c-469c-8ff4-2b3c86a71e97-kube-api-access-wckq9\") pod \"nova-metadata-0\" (UID: \"1eb418aa-1d3c-469c-8ff4-2b3c86a71e97\") " pod="openstack/nova-metadata-0" Dec 11 14:11:24 crc kubenswrapper[5050]: I1211 14:11:24.449875 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 11 14:11:24 crc kubenswrapper[5050]: I1211 14:11:24.936110 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 11 14:11:25 crc kubenswrapper[5050]: I1211 14:11:25.053133 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1eb418aa-1d3c-469c-8ff4-2b3c86a71e97","Type":"ContainerStarted","Data":"b78a33f6104b5f79be70fd93a1678bdfe6caecfc71974569402b8c36abef844e"} Dec 11 14:11:25 crc kubenswrapper[5050]: I1211 14:11:25.571742 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2fc263ce-85a4-4182-87ac-a5cd6d48b65a" path="/var/lib/kubelet/pods/2fc263ce-85a4-4182-87ac-a5cd6d48b65a/volumes" Dec 11 14:11:26 crc kubenswrapper[5050]: I1211 14:11:26.066844 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1eb418aa-1d3c-469c-8ff4-2b3c86a71e97","Type":"ContainerStarted","Data":"c851fe09742b366b6fb0fe111786b9251ff34ac22d585974055bba383135d605"} Dec 11 14:11:26 crc kubenswrapper[5050]: I1211 14:11:26.066907 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1eb418aa-1d3c-469c-8ff4-2b3c86a71e97","Type":"ContainerStarted","Data":"9ed3b2343fe57f3204bfcc9d8b8b6ffb7c52336371d620cd7dace42eedff80a0"} Dec 11 14:11:26 crc kubenswrapper[5050]: I1211 14:11:26.125842 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.12580635 podStartE2EDuration="2.12580635s" podCreationTimestamp="2025-12-11 14:11:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 14:11:26.093535799 +0000 UTC m=+1376.937258405" watchObservedRunningTime="2025-12-11 14:11:26.12580635 +0000 UTC m=+1376.969528946" Dec 11 14:11:26 crc kubenswrapper[5050]: I1211 14:11:26.362070 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Dec 11 14:11:29 crc kubenswrapper[5050]: I1211 14:11:29.450997 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 11 14:11:29 crc kubenswrapper[5050]: I1211 14:11:29.451545 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 11 14:11:31 crc kubenswrapper[5050]: I1211 14:11:31.362839 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Dec 11 14:11:31 crc kubenswrapper[5050]: I1211 14:11:31.378042 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Dec 11 14:11:31 crc kubenswrapper[5050]: I1211 14:11:31.399217 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Dec 11 14:11:32 crc kubenswrapper[5050]: I1211 14:11:32.175401 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Dec 11 14:11:32 crc kubenswrapper[5050]: I1211 14:11:32.447376 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 11 14:11:32 crc kubenswrapper[5050]: I1211 14:11:32.447431 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 11 14:11:33 crc kubenswrapper[5050]: I1211 14:11:33.465284 5050 prober.go:107] "Probe failed" probeType="Startup" 
pod="openstack/nova-api-0" podUID="3386ffea-45ca-41e8-9aa5-61a2923a3394" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.201:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 11 14:11:33 crc kubenswrapper[5050]: I1211 14:11:33.465394 5050 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="3386ffea-45ca-41e8-9aa5-61a2923a3394" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.201:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 11 14:11:34 crc kubenswrapper[5050]: I1211 14:11:34.450267 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Dec 11 14:11:34 crc kubenswrapper[5050]: I1211 14:11:34.450891 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Dec 11 14:11:35 crc kubenswrapper[5050]: I1211 14:11:35.468450 5050 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="1eb418aa-1d3c-469c-8ff4-2b3c86a71e97" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.202:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 14:11:35 crc kubenswrapper[5050]: I1211 14:11:35.468791 5050 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="1eb418aa-1d3c-469c-8ff4-2b3c86a71e97" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.202:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 11 14:11:42 crc kubenswrapper[5050]: I1211 14:11:42.456611 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Dec 11 14:11:42 crc kubenswrapper[5050]: I1211 14:11:42.457644 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Dec 11 14:11:42 crc kubenswrapper[5050]: I1211 14:11:42.458057 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Dec 11 14:11:42 crc kubenswrapper[5050]: I1211 14:11:42.458110 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Dec 11 14:11:42 crc kubenswrapper[5050]: I1211 14:11:42.465299 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Dec 11 14:11:42 crc kubenswrapper[5050]: I1211 14:11:42.467233 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Dec 11 14:11:44 crc kubenswrapper[5050]: I1211 14:11:44.456481 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Dec 11 14:11:44 crc kubenswrapper[5050]: I1211 14:11:44.459215 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Dec 11 14:11:44 crc kubenswrapper[5050]: I1211 14:11:44.467490 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Dec 11 14:11:45 crc kubenswrapper[5050]: I1211 14:11:45.289458 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Dec 11 14:12:01 crc kubenswrapper[5050]: I1211 14:12:01.721784 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-z72zl"] Dec 11 14:12:01 crc 
kubenswrapper[5050]: I1211 14:12:01.732650 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-z72zl" Dec 11 14:12:01 crc kubenswrapper[5050]: I1211 14:12:01.779193 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-z72zl"] Dec 11 14:12:01 crc kubenswrapper[5050]: I1211 14:12:01.881921 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e5dbd8a-7796-49d6-b2e3-23f15d35b7f8-catalog-content\") pod \"certified-operators-z72zl\" (UID: \"3e5dbd8a-7796-49d6-b2e3-23f15d35b7f8\") " pod="openshift-marketplace/certified-operators-z72zl" Dec 11 14:12:01 crc kubenswrapper[5050]: I1211 14:12:01.882031 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9mxx\" (UniqueName: \"kubernetes.io/projected/3e5dbd8a-7796-49d6-b2e3-23f15d35b7f8-kube-api-access-w9mxx\") pod \"certified-operators-z72zl\" (UID: \"3e5dbd8a-7796-49d6-b2e3-23f15d35b7f8\") " pod="openshift-marketplace/certified-operators-z72zl" Dec 11 14:12:01 crc kubenswrapper[5050]: I1211 14:12:01.882114 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e5dbd8a-7796-49d6-b2e3-23f15d35b7f8-utilities\") pod \"certified-operators-z72zl\" (UID: \"3e5dbd8a-7796-49d6-b2e3-23f15d35b7f8\") " pod="openshift-marketplace/certified-operators-z72zl" Dec 11 14:12:01 crc kubenswrapper[5050]: I1211 14:12:01.984760 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w9mxx\" (UniqueName: \"kubernetes.io/projected/3e5dbd8a-7796-49d6-b2e3-23f15d35b7f8-kube-api-access-w9mxx\") pod \"certified-operators-z72zl\" (UID: \"3e5dbd8a-7796-49d6-b2e3-23f15d35b7f8\") " pod="openshift-marketplace/certified-operators-z72zl" Dec 11 14:12:01 crc kubenswrapper[5050]: I1211 14:12:01.984908 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e5dbd8a-7796-49d6-b2e3-23f15d35b7f8-utilities\") pod \"certified-operators-z72zl\" (UID: \"3e5dbd8a-7796-49d6-b2e3-23f15d35b7f8\") " pod="openshift-marketplace/certified-operators-z72zl" Dec 11 14:12:01 crc kubenswrapper[5050]: I1211 14:12:01.984996 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e5dbd8a-7796-49d6-b2e3-23f15d35b7f8-catalog-content\") pod \"certified-operators-z72zl\" (UID: \"3e5dbd8a-7796-49d6-b2e3-23f15d35b7f8\") " pod="openshift-marketplace/certified-operators-z72zl" Dec 11 14:12:01 crc kubenswrapper[5050]: I1211 14:12:01.985901 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e5dbd8a-7796-49d6-b2e3-23f15d35b7f8-utilities\") pod \"certified-operators-z72zl\" (UID: \"3e5dbd8a-7796-49d6-b2e3-23f15d35b7f8\") " pod="openshift-marketplace/certified-operators-z72zl" Dec 11 14:12:01 crc kubenswrapper[5050]: I1211 14:12:01.985957 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e5dbd8a-7796-49d6-b2e3-23f15d35b7f8-catalog-content\") pod \"certified-operators-z72zl\" (UID: \"3e5dbd8a-7796-49d6-b2e3-23f15d35b7f8\") " pod="openshift-marketplace/certified-operators-z72zl" Dec 
11 14:12:02 crc kubenswrapper[5050]: I1211 14:12:02.010179 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w9mxx\" (UniqueName: \"kubernetes.io/projected/3e5dbd8a-7796-49d6-b2e3-23f15d35b7f8-kube-api-access-w9mxx\") pod \"certified-operators-z72zl\" (UID: \"3e5dbd8a-7796-49d6-b2e3-23f15d35b7f8\") " pod="openshift-marketplace/certified-operators-z72zl" Dec 11 14:12:02 crc kubenswrapper[5050]: I1211 14:12:02.064925 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-z72zl" Dec 11 14:12:02 crc kubenswrapper[5050]: I1211 14:12:02.630547 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-z72zl"] Dec 11 14:12:03 crc kubenswrapper[5050]: I1211 14:12:03.467563 5050 generic.go:334] "Generic (PLEG): container finished" podID="3e5dbd8a-7796-49d6-b2e3-23f15d35b7f8" containerID="0d96e38916ee974bfb00beafad9631882e17ac13c98ca0ed36655136da275949" exitCode=0 Dec 11 14:12:03 crc kubenswrapper[5050]: I1211 14:12:03.467871 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z72zl" event={"ID":"3e5dbd8a-7796-49d6-b2e3-23f15d35b7f8","Type":"ContainerDied","Data":"0d96e38916ee974bfb00beafad9631882e17ac13c98ca0ed36655136da275949"} Dec 11 14:12:03 crc kubenswrapper[5050]: I1211 14:12:03.468038 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z72zl" event={"ID":"3e5dbd8a-7796-49d6-b2e3-23f15d35b7f8","Type":"ContainerStarted","Data":"c15320225f101e1a2e912cef7830173e3b944e61d96c75a72c6b39390d43a440"} Dec 11 14:12:03 crc kubenswrapper[5050]: I1211 14:12:03.505074 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Dec 11 14:12:03 crc kubenswrapper[5050]: I1211 14:12:03.505368 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstackclient" podUID="2396be70-52b5-4a91-b8f8-463803fcc4d0" containerName="openstackclient" containerID="cri-o://d2e5c82ae90e1137ec73bef8dd6ce2e374ca6ff4f54e4da5f33502be7443eb03" gracePeriod=2 Dec 11 14:12:03 crc kubenswrapper[5050]: I1211 14:12:03.532335 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Dec 11 14:12:03 crc kubenswrapper[5050]: I1211 14:12:03.797112 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron7594-account-delete-8wjht"] Dec 11 14:12:03 crc kubenswrapper[5050]: E1211 14:12:03.797972 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2396be70-52b5-4a91-b8f8-463803fcc4d0" containerName="openstackclient" Dec 11 14:12:03 crc kubenswrapper[5050]: I1211 14:12:03.797988 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="2396be70-52b5-4a91-b8f8-463803fcc4d0" containerName="openstackclient" Dec 11 14:12:03 crc kubenswrapper[5050]: I1211 14:12:03.798226 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="2396be70-52b5-4a91-b8f8-463803fcc4d0" containerName="openstackclient" Dec 11 14:12:03 crc kubenswrapper[5050]: I1211 14:12:03.799086 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron7594-account-delete-8wjht" Dec 11 14:12:03 crc kubenswrapper[5050]: I1211 14:12:03.833975 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron7594-account-delete-8wjht"] Dec 11 14:12:03 crc kubenswrapper[5050]: I1211 14:12:03.879218 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 11 14:12:03 crc kubenswrapper[5050]: I1211 14:12:03.932759 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9hsj\" (UniqueName: \"kubernetes.io/projected/bc2e956f-6026-4a75-b11a-5106aad626a5-kube-api-access-h9hsj\") pod \"neutron7594-account-delete-8wjht\" (UID: \"bc2e956f-6026-4a75-b11a-5106aad626a5\") " pod="openstack/neutron7594-account-delete-8wjht" Dec 11 14:12:03 crc kubenswrapper[5050]: I1211 14:12:03.932835 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bc2e956f-6026-4a75-b11a-5106aad626a5-operator-scripts\") pod \"neutron7594-account-delete-8wjht\" (UID: \"bc2e956f-6026-4a75-b11a-5106aad626a5\") " pod="openstack/neutron7594-account-delete-8wjht" Dec 11 14:12:04 crc kubenswrapper[5050]: I1211 14:12:04.039585 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h9hsj\" (UniqueName: \"kubernetes.io/projected/bc2e956f-6026-4a75-b11a-5106aad626a5-kube-api-access-h9hsj\") pod \"neutron7594-account-delete-8wjht\" (UID: \"bc2e956f-6026-4a75-b11a-5106aad626a5\") " pod="openstack/neutron7594-account-delete-8wjht" Dec 11 14:12:04 crc kubenswrapper[5050]: I1211 14:12:04.039654 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bc2e956f-6026-4a75-b11a-5106aad626a5-operator-scripts\") pod \"neutron7594-account-delete-8wjht\" (UID: \"bc2e956f-6026-4a75-b11a-5106aad626a5\") " pod="openstack/neutron7594-account-delete-8wjht" Dec 11 14:12:04 crc kubenswrapper[5050]: I1211 14:12:04.040598 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bc2e956f-6026-4a75-b11a-5106aad626a5-operator-scripts\") pod \"neutron7594-account-delete-8wjht\" (UID: \"bc2e956f-6026-4a75-b11a-5106aad626a5\") " pod="openstack/neutron7594-account-delete-8wjht" Dec 11 14:12:04 crc kubenswrapper[5050]: I1211 14:12:04.058326 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-rdbqg"] Dec 11 14:12:04 crc kubenswrapper[5050]: I1211 14:12:04.102564 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-rdbqg"] Dec 11 14:12:04 crc kubenswrapper[5050]: I1211 14:12:04.121243 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h9hsj\" (UniqueName: \"kubernetes.io/projected/bc2e956f-6026-4a75-b11a-5106aad626a5-kube-api-access-h9hsj\") pod \"neutron7594-account-delete-8wjht\" (UID: \"bc2e956f-6026-4a75-b11a-5106aad626a5\") " pod="openstack/neutron7594-account-delete-8wjht" Dec 11 14:12:04 crc kubenswrapper[5050]: E1211 14:12:04.143130 5050 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Dec 11 14:12:04 crc kubenswrapper[5050]: E1211 14:12:04.143208 5050 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/configmap/458f05be-2fd6-44d9-8034-f077356964ce-config-data podName:458f05be-2fd6-44d9-8034-f077356964ce nodeName:}" failed. No retries permitted until 2025-12-11 14:12:04.643182431 +0000 UTC m=+1415.486905017 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/458f05be-2fd6-44d9-8034-f077356964ce-config-data") pod "rabbitmq-cell1-server-0" (UID: "458f05be-2fd6-44d9-8034-f077356964ce") : configmap "rabbitmq-cell1-config-data" not found Dec 11 14:12:04 crc kubenswrapper[5050]: I1211 14:12:04.171152 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron7594-account-delete-8wjht" Dec 11 14:12:04 crc kubenswrapper[5050]: I1211 14:12:04.183540 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder27b7-account-delete-h9wjn"] Dec 11 14:12:04 crc kubenswrapper[5050]: I1211 14:12:04.211138 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder27b7-account-delete-h9wjn" Dec 11 14:12:04 crc kubenswrapper[5050]: I1211 14:12:04.224179 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder27b7-account-delete-h9wjn"] Dec 11 14:12:04 crc kubenswrapper[5050]: I1211 14:12:04.250264 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-sb-0"] Dec 11 14:12:04 crc kubenswrapper[5050]: I1211 14:12:04.250952 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-sb-0" podUID="c928931c-d49d-41dc-9181-11d856ed3bd0" containerName="openstack-network-exporter" containerID="cri-o://3111270185c2d5c0f18227cafa21198528d2d2e6ed0667e53d97a8edfd2d1860" gracePeriod=300 Dec 11 14:12:04 crc kubenswrapper[5050]: I1211 14:12:04.340589 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-7kvxk"] Dec 11 14:12:04 crc kubenswrapper[5050]: I1211 14:12:04.352372 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e365d825-a3cb-42a3-8a00-8a9be42ed290-operator-scripts\") pod \"cinder27b7-account-delete-h9wjn\" (UID: \"e365d825-a3cb-42a3-8a00-8a9be42ed290\") " pod="openstack/cinder27b7-account-delete-h9wjn" Dec 11 14:12:04 crc kubenswrapper[5050]: I1211 14:12:04.352512 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nnlzl\" (UniqueName: \"kubernetes.io/projected/e365d825-a3cb-42a3-8a00-8a9be42ed290-kube-api-access-nnlzl\") pod \"cinder27b7-account-delete-h9wjn\" (UID: \"e365d825-a3cb-42a3-8a00-8a9be42ed290\") " pod="openstack/cinder27b7-account-delete-h9wjn" Dec 11 14:12:04 crc kubenswrapper[5050]: I1211 14:12:04.388392 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-nb-0"] Dec 11 14:12:04 crc kubenswrapper[5050]: I1211 14:12:04.389280 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-nb-0" podUID="e96be66c-07f2-47c0-a784-6af473c8a2a8" containerName="openstack-network-exporter" containerID="cri-o://93fdd4d5510bd166ab18bedc5b8b3ae3ef5a610b722c510e999ff481726b0345" gracePeriod=300 Dec 11 14:12:04 crc kubenswrapper[5050]: I1211 14:12:04.432244 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-northd-0"] Dec 11 14:12:04 crc kubenswrapper[5050]: I1211 14:12:04.432585 5050 kuberuntime_container.go:808] "Killing container with a grace 
period" pod="openstack/ovn-northd-0" podUID="5371a32d-3998-4ddc-93d6-27e9afdb9712" containerName="ovn-northd" containerID="cri-o://43b0ab8e8f0e3806e99b599e730fb8eac2bff4d265f8afa1bea527256b44c445" gracePeriod=30 Dec 11 14:12:04 crc kubenswrapper[5050]: I1211 14:12:04.433153 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-northd-0" podUID="5371a32d-3998-4ddc-93d6-27e9afdb9712" containerName="openstack-network-exporter" containerID="cri-o://7f95d994fd5fc97f391f6f15efe0c185c18faac91b7536e24f460feb81c83897" gracePeriod=30 Dec 11 14:12:04 crc kubenswrapper[5050]: I1211 14:12:04.456780 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e365d825-a3cb-42a3-8a00-8a9be42ed290-operator-scripts\") pod \"cinder27b7-account-delete-h9wjn\" (UID: \"e365d825-a3cb-42a3-8a00-8a9be42ed290\") " pod="openstack/cinder27b7-account-delete-h9wjn" Dec 11 14:12:04 crc kubenswrapper[5050]: I1211 14:12:04.456986 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nnlzl\" (UniqueName: \"kubernetes.io/projected/e365d825-a3cb-42a3-8a00-8a9be42ed290-kube-api-access-nnlzl\") pod \"cinder27b7-account-delete-h9wjn\" (UID: \"e365d825-a3cb-42a3-8a00-8a9be42ed290\") " pod="openstack/cinder27b7-account-delete-h9wjn" Dec 11 14:12:04 crc kubenswrapper[5050]: I1211 14:12:04.457974 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e365d825-a3cb-42a3-8a00-8a9be42ed290-operator-scripts\") pod \"cinder27b7-account-delete-h9wjn\" (UID: \"e365d825-a3cb-42a3-8a00-8a9be42ed290\") " pod="openstack/cinder27b7-account-delete-h9wjn" Dec 11 14:12:04 crc kubenswrapper[5050]: I1211 14:12:04.467290 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-7kvxk"] Dec 11 14:12:04 crc kubenswrapper[5050]: I1211 14:12:04.662399 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-fcd6f8f8f-b59kh"] Dec 11 14:12:04 crc kubenswrapper[5050]: I1211 14:12:04.663068 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-fcd6f8f8f-b59kh" podUID="c443a35b-44e5-495f-b23b-75ff35319194" containerName="dnsmasq-dns" containerID="cri-o://5891d989abb1d991a1a438c7eb2a7b5afaeabc910991c4571e2dac6645358de5" gracePeriod=10 Dec 11 14:12:04 crc kubenswrapper[5050]: I1211 14:12:04.663179 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nnlzl\" (UniqueName: \"kubernetes.io/projected/e365d825-a3cb-42a3-8a00-8a9be42ed290-kube-api-access-nnlzl\") pod \"cinder27b7-account-delete-h9wjn\" (UID: \"e365d825-a3cb-42a3-8a00-8a9be42ed290\") " pod="openstack/cinder27b7-account-delete-h9wjn" Dec 11 14:12:04 crc kubenswrapper[5050]: E1211 14:12:04.708506 5050 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Dec 11 14:12:04 crc kubenswrapper[5050]: E1211 14:12:04.708587 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/458f05be-2fd6-44d9-8034-f077356964ce-config-data podName:458f05be-2fd6-44d9-8034-f077356964ce nodeName:}" failed. No retries permitted until 2025-12-11 14:12:05.708569914 +0000 UTC m=+1416.552292500 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/458f05be-2fd6-44d9-8034-f077356964ce-config-data") pod "rabbitmq-cell1-server-0" (UID: "458f05be-2fd6-44d9-8034-f077356964ce") : configmap "rabbitmq-cell1-config-data" not found Dec 11 14:12:04 crc kubenswrapper[5050]: I1211 14:12:04.731112 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican34ac-account-delete-cd525"] Dec 11 14:12:04 crc kubenswrapper[5050]: I1211 14:12:04.732930 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican34ac-account-delete-cd525" Dec 11 14:12:04 crc kubenswrapper[5050]: I1211 14:12:04.779270 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-6vzzr"] Dec 11 14:12:04 crc kubenswrapper[5050]: I1211 14:12:04.809672 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/66cb4589-6296-417b-87eb-4bcbff7bf580-operator-scripts\") pod \"barbican34ac-account-delete-cd525\" (UID: \"66cb4589-6296-417b-87eb-4bcbff7bf580\") " pod="openstack/barbican34ac-account-delete-cd525" Dec 11 14:12:04 crc kubenswrapper[5050]: I1211 14:12:04.809815 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-956wq\" (UniqueName: \"kubernetes.io/projected/66cb4589-6296-417b-87eb-4bcbff7bf580-kube-api-access-956wq\") pod \"barbican34ac-account-delete-cd525\" (UID: \"66cb4589-6296-417b-87eb-4bcbff7bf580\") " pod="openstack/barbican34ac-account-delete-cd525" Dec 11 14:12:04 crc kubenswrapper[5050]: I1211 14:12:04.812109 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 11 14:12:04 crc kubenswrapper[5050]: I1211 14:12:04.844490 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican34ac-account-delete-cd525"] Dec 11 14:12:04 crc kubenswrapper[5050]: I1211 14:12:04.855270 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder27b7-account-delete-h9wjn" Dec 11 14:12:04 crc kubenswrapper[5050]: I1211 14:12:04.875143 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-zwtmr"] Dec 11 14:12:04 crc kubenswrapper[5050]: I1211 14:12:04.910197 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-6vzzr"] Dec 11 14:12:04 crc kubenswrapper[5050]: I1211 14:12:04.914534 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/66cb4589-6296-417b-87eb-4bcbff7bf580-operator-scripts\") pod \"barbican34ac-account-delete-cd525\" (UID: \"66cb4589-6296-417b-87eb-4bcbff7bf580\") " pod="openstack/barbican34ac-account-delete-cd525" Dec 11 14:12:04 crc kubenswrapper[5050]: I1211 14:12:04.914697 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-956wq\" (UniqueName: \"kubernetes.io/projected/66cb4589-6296-417b-87eb-4bcbff7bf580-kube-api-access-956wq\") pod \"barbican34ac-account-delete-cd525\" (UID: \"66cb4589-6296-417b-87eb-4bcbff7bf580\") " pod="openstack/barbican34ac-account-delete-cd525" Dec 11 14:12:04 crc kubenswrapper[5050]: I1211 14:12:04.915621 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/66cb4589-6296-417b-87eb-4bcbff7bf580-operator-scripts\") pod \"barbican34ac-account-delete-cd525\" (UID: \"66cb4589-6296-417b-87eb-4bcbff7bf580\") " pod="openstack/barbican34ac-account-delete-cd525" Dec 11 14:12:04 crc kubenswrapper[5050]: I1211 14:12:04.969767 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-zwtmr"] Dec 11 14:12:05 crc kubenswrapper[5050]: I1211 14:12:05.021899 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-956wq\" (UniqueName: \"kubernetes.io/projected/66cb4589-6296-417b-87eb-4bcbff7bf580-kube-api-access-956wq\") pod \"barbican34ac-account-delete-cd525\" (UID: \"66cb4589-6296-417b-87eb-4bcbff7bf580\") " pod="openstack/barbican34ac-account-delete-cd525" Dec 11 14:12:05 crc kubenswrapper[5050]: E1211 14:12:05.024328 5050 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Dec 11 14:12:05 crc kubenswrapper[5050]: E1211 14:12:05.024396 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0891f075-8101-475b-b844-e7cb42a4990b-config-data podName:0891f075-8101-475b-b844-e7cb42a4990b nodeName:}" failed. No retries permitted until 2025-12-11 14:12:05.524375306 +0000 UTC m=+1416.368097892 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/0891f075-8101-475b-b844-e7cb42a4990b-config-data") pod "rabbitmq-server-0" (UID: "0891f075-8101-475b-b844-e7cb42a4990b") : configmap "rabbitmq-config-data" not found Dec 11 14:12:05 crc kubenswrapper[5050]: I1211 14:12:05.036057 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placementd2f2-account-delete-njqwh"] Dec 11 14:12:05 crc kubenswrapper[5050]: I1211 14:12:05.061650 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placementd2f2-account-delete-njqwh" Dec 11 14:12:05 crc kubenswrapper[5050]: I1211 14:12:05.104221 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican34ac-account-delete-cd525" Dec 11 14:12:05 crc kubenswrapper[5050]: I1211 14:12:05.130088 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placementd2f2-account-delete-njqwh"] Dec 11 14:12:05 crc kubenswrapper[5050]: I1211 14:12:05.148092 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-ovs-pjzpq"] Dec 11 14:12:05 crc kubenswrapper[5050]: I1211 14:12:05.160716 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-47tvr"] Dec 11 14:12:05 crc kubenswrapper[5050]: I1211 14:12:05.179096 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance2653-account-delete-rgdnl"] Dec 11 14:12:05 crc kubenswrapper[5050]: I1211 14:12:05.180883 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance2653-account-delete-rgdnl" Dec 11 14:12:05 crc kubenswrapper[5050]: I1211 14:12:05.195565 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-7766777c65-2rcww"] Dec 11 14:12:05 crc kubenswrapper[5050]: I1211 14:12:05.195874 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-7766777c65-2rcww" podUID="ee634ad2-5f9a-4183-bddc-d076b6456276" containerName="neutron-api" containerID="cri-o://b54b96dfb39cdff4b8a6a73d84c57d9222db83cd6d5c351fcff1ac130593b742" gracePeriod=30 Dec 11 14:12:05 crc kubenswrapper[5050]: I1211 14:12:05.196059 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-7766777c65-2rcww" podUID="ee634ad2-5f9a-4183-bddc-d076b6456276" containerName="neutron-httpd" containerID="cri-o://61e67897af4ee162906e77cb4c294b5d3db5572b3070be4672c47d685548ceec" gracePeriod=30 Dec 11 14:12:05 crc kubenswrapper[5050]: I1211 14:12:05.232616 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bc8efd61-e4fb-4ec0-834a-b495797039a1-operator-scripts\") pod \"placementd2f2-account-delete-njqwh\" (UID: \"bc8efd61-e4fb-4ec0-834a-b495797039a1\") " pod="openstack/placementd2f2-account-delete-njqwh" Dec 11 14:12:05 crc kubenswrapper[5050]: I1211 14:12:05.232696 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5rsd\" (UniqueName: \"kubernetes.io/projected/bc8efd61-e4fb-4ec0-834a-b495797039a1-kube-api-access-d5rsd\") pod \"placementd2f2-account-delete-njqwh\" (UID: \"bc8efd61-e4fb-4ec0-834a-b495797039a1\") " pod="openstack/placementd2f2-account-delete-njqwh" Dec 11 14:12:05 crc kubenswrapper[5050]: I1211 14:12:05.347155 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hhvqd\" (UniqueName: \"kubernetes.io/projected/1d484c84-7333-4701-a4f3-655c3d2cbfa7-kube-api-access-hhvqd\") pod \"glance2653-account-delete-rgdnl\" (UID: \"1d484c84-7333-4701-a4f3-655c3d2cbfa7\") " pod="openstack/glance2653-account-delete-rgdnl" Dec 11 14:12:05 crc kubenswrapper[5050]: I1211 14:12:05.356419 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bc8efd61-e4fb-4ec0-834a-b495797039a1-operator-scripts\") pod \"placementd2f2-account-delete-njqwh\" (UID: \"bc8efd61-e4fb-4ec0-834a-b495797039a1\") " pod="openstack/placementd2f2-account-delete-njqwh" Dec 11 14:12:05 crc kubenswrapper[5050]: I1211 14:12:05.356512 5050 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d5rsd\" (UniqueName: \"kubernetes.io/projected/bc8efd61-e4fb-4ec0-834a-b495797039a1-kube-api-access-d5rsd\") pod \"placementd2f2-account-delete-njqwh\" (UID: \"bc8efd61-e4fb-4ec0-834a-b495797039a1\") " pod="openstack/placementd2f2-account-delete-njqwh" Dec 11 14:12:05 crc kubenswrapper[5050]: I1211 14:12:05.356905 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1d484c84-7333-4701-a4f3-655c3d2cbfa7-operator-scripts\") pod \"glance2653-account-delete-rgdnl\" (UID: \"1d484c84-7333-4701-a4f3-655c3d2cbfa7\") " pod="openstack/glance2653-account-delete-rgdnl" Dec 11 14:12:05 crc kubenswrapper[5050]: I1211 14:12:05.397999 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bc8efd61-e4fb-4ec0-834a-b495797039a1-operator-scripts\") pod \"placementd2f2-account-delete-njqwh\" (UID: \"bc8efd61-e4fb-4ec0-834a-b495797039a1\") " pod="openstack/placementd2f2-account-delete-njqwh" Dec 11 14:12:05 crc kubenswrapper[5050]: I1211 14:12:05.426218 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-nb-0" podUID="e96be66c-07f2-47c0-a784-6af473c8a2a8" containerName="ovsdbserver-nb" containerID="cri-o://c612f0ddd6bdd2093ab8086d9662d761b9c9ab6d418a4c7403db013b23c7a269" gracePeriod=299 Dec 11 14:12:05 crc kubenswrapper[5050]: I1211 14:12:05.428738 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-j5zml"] Dec 11 14:12:05 crc kubenswrapper[5050]: I1211 14:12:05.461634 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1d484c84-7333-4701-a4f3-655c3d2cbfa7-operator-scripts\") pod \"glance2653-account-delete-rgdnl\" (UID: \"1d484c84-7333-4701-a4f3-655c3d2cbfa7\") " pod="openstack/glance2653-account-delete-rgdnl" Dec 11 14:12:05 crc kubenswrapper[5050]: I1211 14:12:05.461717 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hhvqd\" (UniqueName: \"kubernetes.io/projected/1d484c84-7333-4701-a4f3-655c3d2cbfa7-kube-api-access-hhvqd\") pod \"glance2653-account-delete-rgdnl\" (UID: \"1d484c84-7333-4701-a4f3-655c3d2cbfa7\") " pod="openstack/glance2653-account-delete-rgdnl" Dec 11 14:12:05 crc kubenswrapper[5050]: I1211 14:12:05.466669 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-j5zml"] Dec 11 14:12:05 crc kubenswrapper[5050]: I1211 14:12:05.475724 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d5rsd\" (UniqueName: \"kubernetes.io/projected/bc8efd61-e4fb-4ec0-834a-b495797039a1-kube-api-access-d5rsd\") pod \"placementd2f2-account-delete-njqwh\" (UID: \"bc8efd61-e4fb-4ec0-834a-b495797039a1\") " pod="openstack/placementd2f2-account-delete-njqwh" Dec 11 14:12:05 crc kubenswrapper[5050]: I1211 14:12:05.477163 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1d484c84-7333-4701-a4f3-655c3d2cbfa7-operator-scripts\") pod \"glance2653-account-delete-rgdnl\" (UID: \"1d484c84-7333-4701-a4f3-655c3d2cbfa7\") " pod="openstack/glance2653-account-delete-rgdnl" Dec 11 14:12:05 crc kubenswrapper[5050]: I1211 14:12:05.509301 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/glance2653-account-delete-rgdnl"] Dec 11 14:12:05 crc kubenswrapper[5050]: I1211 14:12:05.536426 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-metrics-7gmrp"] Dec 11 14:12:05 crc kubenswrapper[5050]: I1211 14:12:05.536706 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-metrics-7gmrp" podUID="58cdcd05-e81a-4ed4-8357-249649b17449" containerName="openstack-network-exporter" containerID="cri-o://c4c46a2b59720049a8e8b5a6204d154b1116f9503accccc382e625d7853d1369" gracePeriod=30 Dec 11 14:12:05 crc kubenswrapper[5050]: I1211 14:12:05.561485 5050 generic.go:334] "Generic (PLEG): container finished" podID="c443a35b-44e5-495f-b23b-75ff35319194" containerID="5891d989abb1d991a1a438c7eb2a7b5afaeabc910991c4571e2dac6645358de5" exitCode=0 Dec 11 14:12:05 crc kubenswrapper[5050]: E1211 14:12:05.565797 5050 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Dec 11 14:12:05 crc kubenswrapper[5050]: E1211 14:12:05.565895 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0891f075-8101-475b-b844-e7cb42a4990b-config-data podName:0891f075-8101-475b-b844-e7cb42a4990b nodeName:}" failed. No retries permitted until 2025-12-11 14:12:06.565877354 +0000 UTC m=+1417.409599940 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/0891f075-8101-475b-b844-e7cb42a4990b-config-data") pod "rabbitmq-server-0" (UID: "0891f075-8101-475b-b844-e7cb42a4990b") : configmap "rabbitmq-config-data" not found Dec 11 14:12:05 crc kubenswrapper[5050]: I1211 14:12:05.575960 5050 generic.go:334] "Generic (PLEG): container finished" podID="e96be66c-07f2-47c0-a784-6af473c8a2a8" containerID="93fdd4d5510bd166ab18bedc5b8b3ae3ef5a610b722c510e999ff481726b0345" exitCode=2 Dec 11 14:12:05 crc kubenswrapper[5050]: I1211 14:12:05.595768 5050 generic.go:334] "Generic (PLEG): container finished" podID="5371a32d-3998-4ddc-93d6-27e9afdb9712" containerID="7f95d994fd5fc97f391f6f15efe0c185c18faac91b7536e24f460feb81c83897" exitCode=2 Dec 11 14:12:05 crc kubenswrapper[5050]: I1211 14:12:05.630615 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hhvqd\" (UniqueName: \"kubernetes.io/projected/1d484c84-7333-4701-a4f3-655c3d2cbfa7-kube-api-access-hhvqd\") pod \"glance2653-account-delete-rgdnl\" (UID: \"1d484c84-7333-4701-a4f3-655c3d2cbfa7\") " pod="openstack/glance2653-account-delete-rgdnl" Dec 11 14:12:05 crc kubenswrapper[5050]: I1211 14:12:05.703570 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placementd2f2-account-delete-njqwh" Dec 11 14:12:05 crc kubenswrapper[5050]: E1211 14:12:05.770728 5050 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Dec 11 14:12:05 crc kubenswrapper[5050]: E1211 14:12:05.771552 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/458f05be-2fd6-44d9-8034-f077356964ce-config-data podName:458f05be-2fd6-44d9-8034-f077356964ce nodeName:}" failed. No retries permitted until 2025-12-11 14:12:07.77151994 +0000 UTC m=+1418.615242526 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/458f05be-2fd6-44d9-8034-f077356964ce-config-data") pod "rabbitmq-cell1-server-0" (UID: "458f05be-2fd6-44d9-8034-f077356964ce") : configmap "rabbitmq-cell1-config-data" not found Dec 11 14:12:05 crc kubenswrapper[5050]: I1211 14:12:05.805472 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="c8b3d8cd-9278-4639-86fe-1aa7696fecca" containerName="galera" probeResult="failure" output="command timed out" Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:05.947948 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="107786ed-ea8f-4c2f-ac86-54b1bb504a69" path="/var/lib/kubelet/pods/107786ed-ea8f-4c2f-ac86-54b1bb504a69/volumes" Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:06.010322 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-fcd6f8f8f-b59kh" podUID="c443a35b-44e5-495f-b23b-75ff35319194" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.194:5353: connect: connection refused" Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:06.034331 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-sb-0" podUID="c928931c-d49d-41dc-9181-11d856ed3bd0" containerName="ovsdbserver-sb" containerID="cri-o://3768c3d6cf415867973810bdb14c5966684aab657f8614b9d4062545081db44d" gracePeriod=299 Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:06.051856 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d9200d5-8f1c-46be-a802-995c7f58b754" path="/var/lib/kubelet/pods/2d9200d5-8f1c-46be-a802-995c7f58b754/volumes" Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:06.052694 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="669fc9ec-b625-44f9-bd15-bc8a79158127" path="/var/lib/kubelet/pods/669fc9ec-b625-44f9-bd15-bc8a79158127/volumes" Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:06.053578 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aff61f93-f202-4057-a14e-7b395a73e323" path="/var/lib/kubelet/pods/aff61f93-f202-4057-a14e-7b395a73e323/volumes" Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:06.054971 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f1d16bf5-88f5-4ec7-943a-fc1ec7c15425" path="/var/lib/kubelet/pods/f1d16bf5-88f5-4ec7-943a-fc1ec7c15425/volumes" Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:06.055731 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fcd6f8f8f-b59kh" event={"ID":"c443a35b-44e5-495f-b23b-75ff35319194","Type":"ContainerDied","Data":"5891d989abb1d991a1a438c7eb2a7b5afaeabc910991c4571e2dac6645358de5"} Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:06.055765 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-dl7tx"] Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:06.055785 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"e96be66c-07f2-47c0-a784-6af473c8a2a8","Type":"ContainerDied","Data":"93fdd4d5510bd166ab18bedc5b8b3ae3ef5a610b722c510e999ff481726b0345"} Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:06.055800 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-ring-rebalance-dl7tx"] Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:06.055833 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ovn-northd-0" event={"ID":"5371a32d-3998-4ddc-93d6-27e9afdb9712","Type":"ContainerDied","Data":"7f95d994fd5fc97f391f6f15efe0c185c18faac91b7536e24f460feb81c83897"} Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:06.055847 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-storage-0"] Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:06.055867 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/novacell1d6ec-account-delete-cwf97"] Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:06.058229 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="a5dabf50-534b-45cb-87db-45373930fe82" containerName="account-server" containerID="cri-o://69f5ff7e4ffed5e07ece2e747c39e17e11ce1252b75c01b5c3313338481c02f5" gracePeriod=30 Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:06.059678 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="a5dabf50-534b-45cb-87db-45373930fe82" containerName="swift-recon-cron" containerID="cri-o://44acc22d4dbaf9801a70faf934b08100e13594f1cab4f854bc7c2b3dd8963fb5" gracePeriod=30 Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:06.059779 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="a5dabf50-534b-45cb-87db-45373930fe82" containerName="rsync" containerID="cri-o://bfa20bc6bb25080f92169274679704ad90a7e9f219408ae8226d21d94b1cbce8" gracePeriod=30 Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:06.059992 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="a5dabf50-534b-45cb-87db-45373930fe82" containerName="container-updater" containerID="cri-o://b6c3d2263c2a8d964cc7422913cdc01c0a98e50a91cd20af0a8e5219f5c49d84" gracePeriod=30 Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:06.059908 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="a5dabf50-534b-45cb-87db-45373930fe82" containerName="object-updater" containerID="cri-o://038b0092de538faefca3e8ca1075a18dd7d58853d0c6eb5fdadf157d7e0f2147" gracePeriod=30 Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:06.059924 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="a5dabf50-534b-45cb-87db-45373930fe82" containerName="object-auditor" containerID="cri-o://cf225480f25db60b0e9d83e3b98e796c13673684172c9b6129d91e173f39beb6" gracePeriod=30 Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:06.059879 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="a5dabf50-534b-45cb-87db-45373930fe82" containerName="object-expirer" containerID="cri-o://c59f8bb548eec4e62535766386e180811808e5f7cf7913a3c02582a806b4073f" gracePeriod=30 Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:06.059977 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="a5dabf50-534b-45cb-87db-45373930fe82" containerName="object-server" containerID="cri-o://0cac73e478a996fa3e9d0714853b7480372b37e951d6e3e0667c3722790407c8" gracePeriod=30 Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:06.059959 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="a5dabf50-534b-45cb-87db-45373930fe82" containerName="object-replicator" 
containerID="cri-o://cca45253ddc48fc0f165034563c70e630dd7fac3f3c0cf0ba23d657266869519" gracePeriod=30 Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:06.060060 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="a5dabf50-534b-45cb-87db-45373930fe82" containerName="container-auditor" containerID="cri-o://3c2652501efb162ddb07fcdf676ff7b425046c43c56a32e87cf2a1b7f86d8517" gracePeriod=30 Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:06.060092 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="a5dabf50-534b-45cb-87db-45373930fe82" containerName="container-server" containerID="cri-o://a92c2e4e55be6c0dccf533363df9021ca510e9f14d1f5a908a2795582d914ca4" gracePeriod=30 Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:06.060102 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="a5dabf50-534b-45cb-87db-45373930fe82" containerName="account-reaper" containerID="cri-o://cc96ca859857b932852bab79b175e12e28dd66ea2b3f97528e65f1c394df699c" gracePeriod=30 Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:06.060073 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="a5dabf50-534b-45cb-87db-45373930fe82" containerName="container-replicator" containerID="cri-o://0b5dadf04d75b453da45faaba622c4ca8fa3ea72335c7e1cbe662af9423d5319" gracePeriod=30 Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:06.060117 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="a5dabf50-534b-45cb-87db-45373930fe82" containerName="account-auditor" containerID="cri-o://c1a089eb1d8d523f1a786eee0915def7fa7aab5c3e4514f0c035a46c61eef1cb" gracePeriod=30 Dec 11 14:12:07 crc kubenswrapper[5050]: E1211 14:12:06.061207 5050 handlers.go:78] "Exec lifecycle hook for Container in Pod failed" err="command '/usr/share/ovn/scripts/ovn-ctl stop_controller' exited with 137: " execCommand=["/usr/share/ovn/scripts/ovn-ctl","stop_controller"] containerName="ovn-controller" pod="openstack/ovn-controller-47tvr" message=< Dec 11 14:12:07 crc kubenswrapper[5050]: Exiting ovn-controller (1) [ OK ] Dec 11 14:12:07 crc kubenswrapper[5050]: > Dec 11 14:12:07 crc kubenswrapper[5050]: E1211 14:12:06.061424 5050 kuberuntime_container.go:691] "PreStop hook failed" err="command '/usr/share/ovn/scripts/ovn-ctl stop_controller' exited with 137: " pod="openstack/ovn-controller-47tvr" podUID="01fa4d89-aae5-451a-8798-2700053fe3d4" containerName="ovn-controller" containerID="cri-o://3dc7b12cce104d7f7c7d469a1c36d591c390e0b69268044a2dd0bba6d7255d70" Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:06.061519 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-47tvr" podUID="01fa4d89-aae5-451a-8798-2700053fe3d4" containerName="ovn-controller" containerID="cri-o://3dc7b12cce104d7f7c7d469a1c36d591c390e0b69268044a2dd0bba6d7255d70" gracePeriod=30 Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:06.060145 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="a5dabf50-534b-45cb-87db-45373930fe82" containerName="account-replicator" containerID="cri-o://feac1300b5d8a5ea16c8321c45cc457e5dbf72ac6aab1103080d7accf21709e1" gracePeriod=30 Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:06.066500 5050 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/cinder-scheduler-0"] Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:06.066555 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/novacell1d6ec-account-delete-cwf97"] Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:06.066572 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:06.066598 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-clbrx"] Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:06.066640 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-78ccc9f8bd-jdg2t"] Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:06.066660 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-clbrx"] Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:06.066676 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-x6sqt"] Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:06.066687 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-x6sqt"] Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:06.066708 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:06.067091 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="38b1a06e-804a-44dc-8e77-a7d8162f38bd" containerName="glance-log" containerID="cri-o://4a0dd3bf669f7beb6461a99c18d911c75efcecd8fddb14f47b6513fec2bf9b54" gracePeriod=30 Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:06.067328 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/novacell1d6ec-account-delete-cwf97" Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:06.068879 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="29a26d59-027f-428e-928e-12222b61a350" containerName="cinder-scheduler" containerID="cri-o://68a9a87e998a4bb0563913fd86e150d1605935b84a4da45aa67210b036a699f2" gracePeriod=30 Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:06.069231 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="003b423c-92a0-47f6-8358-003f3ad24ded" containerName="cinder-api-log" containerID="cri-o://80030e514e19d023c1bec72880044d75c75621951af814ba5560c38086dcbc3d" gracePeriod=30 Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:06.069297 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="38b1a06e-804a-44dc-8e77-a7d8162f38bd" containerName="glance-httpd" containerID="cri-o://0a03b521b37d9fb0f7030177a2bb20787cfd84f0c0449bc65282aede0e194ffc" gracePeriod=30 Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:06.069403 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="29a26d59-027f-428e-928e-12222b61a350" containerName="probe" containerID="cri-o://3e6132bd898662eb15caae20bc63d62858df7ed7da6bd64261b666f48768ec52" gracePeriod=30 Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:06.069569 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="003b423c-92a0-47f6-8358-003f3ad24ded" containerName="cinder-api" containerID="cri-o://204ac42ef63a05788b1880c5f6c33e7a413d56ea5c69370c5a87fa4d156de0ba" gracePeriod=30 Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:06.069784 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-78ccc9f8bd-jdg2t" podUID="4de557a0-8b74-4d40-8c91-351ba127eb13" containerName="placement-api" containerID="cri-o://8af0220738d7b4267aab1e60eaa3da9d17f3f47fefe09dc1901f5e2bee442704" gracePeriod=30 Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:06.089236 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-78ccc9f8bd-jdg2t" podUID="4de557a0-8b74-4d40-8c91-351ba127eb13" containerName="placement-log" containerID="cri-o://1977929bc424b057bf59a3155bf7f4cfdfe00b2e3f9856bd807dc72825864a27" gracePeriod=30 Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:06.139616 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance2653-account-delete-rgdnl" Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:06.231484 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e7896842-a23c-4427-ab0e-702138a8cdd0-operator-scripts\") pod \"novacell1d6ec-account-delete-cwf97\" (UID: \"e7896842-a23c-4427-ab0e-702138a8cdd0\") " pod="openstack/novacell1d6ec-account-delete-cwf97" Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:06.232022 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2bh5r\" (UniqueName: \"kubernetes.io/projected/e7896842-a23c-4427-ab0e-702138a8cdd0-kube-api-access-2bh5r\") pod \"novacell1d6ec-account-delete-cwf97\" (UID: \"e7896842-a23c-4427-ab0e-702138a8cdd0\") " pod="openstack/novacell1d6ec-account-delete-cwf97" Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:06.274131 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/novacell0e20d-account-delete-ntvhn"] Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:06.275937 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/novacell0e20d-account-delete-ntvhn" Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:06.301177 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:06.334500 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2bh5r\" (UniqueName: \"kubernetes.io/projected/e7896842-a23c-4427-ab0e-702138a8cdd0-kube-api-access-2bh5r\") pod \"novacell1d6ec-account-delete-cwf97\" (UID: \"e7896842-a23c-4427-ab0e-702138a8cdd0\") " pod="openstack/novacell1d6ec-account-delete-cwf97" Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:06.334576 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ca005c2d-f7de-486a-bbd6-a32443582833-operator-scripts\") pod \"novacell0e20d-account-delete-ntvhn\" (UID: \"ca005c2d-f7de-486a-bbd6-a32443582833\") " pod="openstack/novacell0e20d-account-delete-ntvhn" Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:06.334637 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dcmzh\" (UniqueName: \"kubernetes.io/projected/ca005c2d-f7de-486a-bbd6-a32443582833-kube-api-access-dcmzh\") pod \"novacell0e20d-account-delete-ntvhn\" (UID: \"ca005c2d-f7de-486a-bbd6-a32443582833\") " pod="openstack/novacell0e20d-account-delete-ntvhn" Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:06.334840 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e7896842-a23c-4427-ab0e-702138a8cdd0-operator-scripts\") pod \"novacell1d6ec-account-delete-cwf97\" (UID: \"e7896842-a23c-4427-ab0e-702138a8cdd0\") " pod="openstack/novacell1d6ec-account-delete-cwf97" Dec 11 14:12:07 crc kubenswrapper[5050]: E1211 14:12:06.335400 5050 configmap.go:193] Couldn't get configMap openstack/openstack-cell1-scripts: configmap "openstack-cell1-scripts" not found Dec 11 14:12:07 crc kubenswrapper[5050]: E1211 14:12:06.335470 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e7896842-a23c-4427-ab0e-702138a8cdd0-operator-scripts 
podName:e7896842-a23c-4427-ab0e-702138a8cdd0 nodeName:}" failed. No retries permitted until 2025-12-11 14:12:06.835448964 +0000 UTC m=+1417.679171550 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/e7896842-a23c-4427-ab0e-702138a8cdd0-operator-scripts") pod "novacell1d6ec-account-delete-cwf97" (UID: "e7896842-a23c-4427-ab0e-702138a8cdd0") : configmap "openstack-cell1-scripts" not found Dec 11 14:12:07 crc kubenswrapper[5050]: E1211 14:12:06.349243 5050 projected.go:194] Error preparing data for projected volume kube-api-access-2bh5r for pod openstack/novacell1d6ec-account-delete-cwf97: failed to fetch token: serviceaccounts "galera-openstack-cell1" not found Dec 11 14:12:07 crc kubenswrapper[5050]: E1211 14:12:06.349382 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e7896842-a23c-4427-ab0e-702138a8cdd0-kube-api-access-2bh5r podName:e7896842-a23c-4427-ab0e-702138a8cdd0 nodeName:}" failed. No retries permitted until 2025-12-11 14:12:06.84935593 +0000 UTC m=+1417.693078506 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-2bh5r" (UniqueName: "kubernetes.io/projected/e7896842-a23c-4427-ab0e-702138a8cdd0-kube-api-access-2bh5r") pod "novacell1d6ec-account-delete-cwf97" (UID: "e7896842-a23c-4427-ab0e-702138a8cdd0") : failed to fetch token: serviceaccounts "galera-openstack-cell1" not found Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:06.373714 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-proxy-fcd4b466f-vsss4"] Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:06.374598 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-proxy-fcd4b466f-vsss4" podUID="cffff412-bf3c-4739-8bb8-3d099c8c83fe" containerName="proxy-server" containerID="cri-o://0834c0f817c98ccee2d1fc466612fef74fe94fe1c05828492e898fa4014682d9" gracePeriod=30 Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:06.374540 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-proxy-fcd4b466f-vsss4" podUID="cffff412-bf3c-4739-8bb8-3d099c8c83fe" containerName="proxy-httpd" containerID="cri-o://bab4849e060d80aa6504ac4459a8bc36b1517d2feed47ced5829f252e6c45900" gracePeriod=30 Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:06.408679 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:06.409178 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="213cfec6-ba42-4dbc-bd9c-051b193e4577" containerName="glance-log" containerID="cri-o://ed60fdd58c4339e3164d7f4e317f1114bf6ecb3ec2bef7cd7a80d1158c76ff29" gracePeriod=30 Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:06.409412 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="213cfec6-ba42-4dbc-bd9c-051b193e4577" containerName="glance-httpd" containerID="cri-o://e6dc15c8d2821c9d66fa830b0740353eeecafc3c6002947a42891501a4a72dfd" gracePeriod=30 Dec 11 14:12:07 crc kubenswrapper[5050]: E1211 14:12:06.421087 5050 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc928931c_d49d_41dc_9181_11d856ed3bd0.slice/crio-conmon-3111270185c2d5c0f18227cafa21198528d2d2e6ed0667e53d97a8edfd2d1860.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5371a32d_3998_4ddc_93d6_27e9afdb9712.slice/crio-conmon-43b0ab8e8f0e3806e99b599e730fb8eac2bff4d265f8afa1bea527256b44c445.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod58cdcd05_e81a_4ed4_8357_249649b17449.slice/crio-conmon-c4c46a2b59720049a8e8b5a6204d154b1116f9503accccc382e625d7853d1369.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podee634ad2_5f9a_4183_bddc_d076b6456276.slice/crio-conmon-61e67897af4ee162906e77cb4c294b5d3db5572b3070be4672c47d685548ceec.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode96be66c_07f2_47c0_a784_6af473c8a2a8.slice/crio-conmon-93fdd4d5510bd166ab18bedc5b8b3ae3ef5a610b722c510e999ff481726b0345.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc443a35b_44e5_495f_b23b_75ff35319194.slice/crio-conmon-5891d989abb1d991a1a438c7eb2a7b5afaeabc910991c4571e2dac6645358de5.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod01fa4d89_aae5_451a_8798_2700053fe3d4.slice/crio-3dc7b12cce104d7f7c7d469a1c36d591c390e0b69268044a2dd0bba6d7255d70.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda5dabf50_534b_45cb_87db_45373930fe82.slice/crio-0b5dadf04d75b453da45faaba622c4ca8fa3ea72335c7e1cbe662af9423d5319.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5371a32d_3998_4ddc_93d6_27e9afdb9712.slice/crio-43b0ab8e8f0e3806e99b599e730fb8eac2bff4d265f8afa1bea527256b44c445.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode96be66c_07f2_47c0_a784_6af473c8a2a8.slice/crio-conmon-c612f0ddd6bdd2093ab8086d9662d761b9c9ab6d418a4c7403db013b23c7a269.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2396be70_52b5_4a91_b8f8_463803fcc4d0.slice/crio-d2e5c82ae90e1137ec73bef8dd6ce2e374ca6ff4f54e4da5f33502be7443eb03.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc928931c_d49d_41dc_9181_11d856ed3bd0.slice/crio-3111270185c2d5c0f18227cafa21198528d2d2e6ed0667e53d97a8edfd2d1860.scope\": RecentStats: unable to find data in memory cache]" Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:06.426913 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/novacell0e20d-account-delete-ntvhn"] Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:06.438251 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ca005c2d-f7de-486a-bbd6-a32443582833-operator-scripts\") pod \"novacell0e20d-account-delete-ntvhn\" (UID: \"ca005c2d-f7de-486a-bbd6-a32443582833\") " pod="openstack/novacell0e20d-account-delete-ntvhn" Dec 11 14:12:07 crc 
kubenswrapper[5050]: I1211 14:12:06.438324 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dcmzh\" (UniqueName: \"kubernetes.io/projected/ca005c2d-f7de-486a-bbd6-a32443582833-kube-api-access-dcmzh\") pod \"novacell0e20d-account-delete-ntvhn\" (UID: \"ca005c2d-f7de-486a-bbd6-a32443582833\") " pod="openstack/novacell0e20d-account-delete-ntvhn" Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:06.439614 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ca005c2d-f7de-486a-bbd6-a32443582833-operator-scripts\") pod \"novacell0e20d-account-delete-ntvhn\" (UID: \"ca005c2d-f7de-486a-bbd6-a32443582833\") " pod="openstack/novacell0e20d-account-delete-ntvhn" Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:06.456111 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-655647566b-n2tcs"] Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:06.456545 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-655647566b-n2tcs" podUID="569cb143-086a-42f1-9e8c-6f6f614c9ee2" containerName="barbican-keystone-listener-log" containerID="cri-o://d11f1570a4983e90360fd498bdee9b19f208c7f5acd61496d60bf9cadd7bc16f" gracePeriod=30 Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:06.456740 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-655647566b-n2tcs" podUID="569cb143-086a-42f1-9e8c-6f6f614c9ee2" containerName="barbican-keystone-listener" containerID="cri-o://e402b883c564b7c4156be1691f2f8af60f04df5e1dc8aa45ac6e3435d54ea395" gracePeriod=30 Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:06.464151 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dcmzh\" (UniqueName: \"kubernetes.io/projected/ca005c2d-f7de-486a-bbd6-a32443582833-kube-api-access-dcmzh\") pod \"novacell0e20d-account-delete-ntvhn\" (UID: \"ca005c2d-f7de-486a-bbd6-a32443582833\") " pod="openstack/novacell0e20d-account-delete-ntvhn" Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:06.503986 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/novaapi4326-account-delete-9bmsz"] Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:06.509365 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/novaapi4326-account-delete-9bmsz" Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:06.518663 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="458f05be-2fd6-44d9-8034-f077356964ce" containerName="rabbitmq" containerID="cri-o://7fc0726972676985eb911b818bc159c8c1b12a1ca0e646ddda6558ea21079201" gracePeriod=604800 Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:06.542520 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d917f471-6630-4e96-a0e4-cbde631da4a8-operator-scripts\") pod \"novaapi4326-account-delete-9bmsz\" (UID: \"d917f471-6630-4e96-a0e4-cbde631da4a8\") " pod="openstack/novaapi4326-account-delete-9bmsz" Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:06.542627 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tzt76\" (UniqueName: \"kubernetes.io/projected/d917f471-6630-4e96-a0e4-cbde631da4a8-kube-api-access-tzt76\") pod \"novaapi4326-account-delete-9bmsz\" (UID: \"d917f471-6630-4e96-a0e4-cbde631da4a8\") " pod="openstack/novaapi4326-account-delete-9bmsz" Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:06.546188 5050 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="openstack/barbican-api-57f899fb58-v2lwj" secret="" err="secret \"barbican-barbican-dockercfg-zdpcp\" not found" Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:06.561098 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-54bb9c4d69-975sg"] Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:06.561437 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-54bb9c4d69-975sg" podUID="934cae9e-c75b-434d-b1e1-d566d6fb8b7d" containerName="barbican-worker-log" containerID="cri-o://955a0ee0c9eed128222ddf5d6dedbc74a4c5d1d3bcc7732f13e94db5162a8ca2" gracePeriod=30 Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:06.561991 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-54bb9c4d69-975sg" podUID="934cae9e-c75b-434d-b1e1-d566d6fb8b7d" containerName="barbican-worker" containerID="cri-o://d2f88cb82773ad5f567925e106c60ec7bef84c6e078be7c5e2a9bd340e19b35c" gracePeriod=30 Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:06.644868 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/novaapi4326-account-delete-9bmsz"] Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:06.646726 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d917f471-6630-4e96-a0e4-cbde631da4a8-operator-scripts\") pod \"novaapi4326-account-delete-9bmsz\" (UID: \"d917f471-6630-4e96-a0e4-cbde631da4a8\") " pod="openstack/novaapi4326-account-delete-9bmsz" Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:06.646839 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tzt76\" (UniqueName: \"kubernetes.io/projected/d917f471-6630-4e96-a0e4-cbde631da4a8-kube-api-access-tzt76\") pod \"novaapi4326-account-delete-9bmsz\" (UID: \"d917f471-6630-4e96-a0e4-cbde631da4a8\") " pod="openstack/novaapi4326-account-delete-9bmsz" Dec 11 14:12:07 crc kubenswrapper[5050]: E1211 14:12:06.647903 5050 secret.go:188] Couldn't get secret 
openstack/barbican-api-config-data: secret "barbican-api-config-data" not found Dec 11 14:12:07 crc kubenswrapper[5050]: E1211 14:12:06.647975 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4ad4414f-ca3e-4ff4-9e2a-3ab029df2ebf-config-data-custom podName:4ad4414f-ca3e-4ff4-9e2a-3ab029df2ebf nodeName:}" failed. No retries permitted until 2025-12-11 14:12:07.147950616 +0000 UTC m=+1417.991673202 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-data-custom" (UniqueName: "kubernetes.io/secret/4ad4414f-ca3e-4ff4-9e2a-3ab029df2ebf-config-data-custom") pod "barbican-api-57f899fb58-v2lwj" (UID: "4ad4414f-ca3e-4ff4-9e2a-3ab029df2ebf") : secret "barbican-api-config-data" not found Dec 11 14:12:07 crc kubenswrapper[5050]: E1211 14:12:06.648531 5050 secret.go:188] Couldn't get secret openstack/barbican-config-data: secret "barbican-config-data" not found Dec 11 14:12:07 crc kubenswrapper[5050]: E1211 14:12:06.648586 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4ad4414f-ca3e-4ff4-9e2a-3ab029df2ebf-config-data podName:4ad4414f-ca3e-4ff4-9e2a-3ab029df2ebf nodeName:}" failed. No retries permitted until 2025-12-11 14:12:07.148567723 +0000 UTC m=+1417.992290299 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/4ad4414f-ca3e-4ff4-9e2a-3ab029df2ebf-config-data") pod "barbican-api-57f899fb58-v2lwj" (UID: "4ad4414f-ca3e-4ff4-9e2a-3ab029df2ebf") : secret "barbican-config-data" not found Dec 11 14:12:07 crc kubenswrapper[5050]: E1211 14:12:06.648584 5050 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Dec 11 14:12:07 crc kubenswrapper[5050]: E1211 14:12:06.648673 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0891f075-8101-475b-b844-e7cb42a4990b-config-data podName:0891f075-8101-475b-b844-e7cb42a4990b nodeName:}" failed. No retries permitted until 2025-12-11 14:12:08.648644195 +0000 UTC m=+1419.492367001 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/0891f075-8101-475b-b844-e7cb42a4990b-config-data") pod "rabbitmq-server-0" (UID: "0891f075-8101-475b-b844-e7cb42a4990b") : configmap "rabbitmq-config-data" not found Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:06.661470 5050 generic.go:334] "Generic (PLEG): container finished" podID="2396be70-52b5-4a91-b8f8-463803fcc4d0" containerID="d2e5c82ae90e1137ec73bef8dd6ce2e374ca6ff4f54e4da5f33502be7443eb03" exitCode=137 Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:06.662711 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d917f471-6630-4e96-a0e4-cbde631da4a8-operator-scripts\") pod \"novaapi4326-account-delete-9bmsz\" (UID: \"d917f471-6630-4e96-a0e4-cbde631da4a8\") " pod="openstack/novaapi4326-account-delete-9bmsz" Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:06.672325 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tzt76\" (UniqueName: \"kubernetes.io/projected/d917f471-6630-4e96-a0e4-cbde631da4a8-kube-api-access-tzt76\") pod \"novaapi4326-account-delete-9bmsz\" (UID: \"d917f471-6630-4e96-a0e4-cbde631da4a8\") " pod="openstack/novaapi4326-account-delete-9bmsz" Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:06.684623 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-57f899fb58-v2lwj"] Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:06.692747 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:06.693090 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="1eb418aa-1d3c-469c-8ff4-2b3c86a71e97" containerName="nova-metadata-log" containerID="cri-o://9ed3b2343fe57f3204bfcc9d8b8b6ffb7c52336371d620cd7dace42eedff80a0" gracePeriod=30 Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:06.693693 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="1eb418aa-1d3c-469c-8ff4-2b3c86a71e97" containerName="nova-metadata-metadata" containerID="cri-o://c851fe09742b366b6fb0fe111786b9251ff34ac22d585974055bba383135d605" gracePeriod=30 Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:06.711436 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:06.711707 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="87937f27-2525-4fed-88bb-38a90404860c" containerName="nova-scheduler-scheduler" containerID="cri-o://8106f10bed3bf46c894b57795f1b60477931a5aba11442b13a0172969049c89a" gracePeriod=30 Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:06.722149 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/novacell0e20d-account-delete-ntvhn" Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:06.730949 5050 generic.go:334] "Generic (PLEG): container finished" podID="a5dabf50-534b-45cb-87db-45373930fe82" containerID="bfa20bc6bb25080f92169274679704ad90a7e9f219408ae8226d21d94b1cbce8" exitCode=0 Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:06.731377 5050 generic.go:334] "Generic (PLEG): container finished" podID="a5dabf50-534b-45cb-87db-45373930fe82" containerID="c59f8bb548eec4e62535766386e180811808e5f7cf7913a3c02582a806b4073f" exitCode=0 Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:06.731389 5050 generic.go:334] "Generic (PLEG): container finished" podID="a5dabf50-534b-45cb-87db-45373930fe82" containerID="038b0092de538faefca3e8ca1075a18dd7d58853d0c6eb5fdadf157d7e0f2147" exitCode=0 Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:06.731396 5050 generic.go:334] "Generic (PLEG): container finished" podID="a5dabf50-534b-45cb-87db-45373930fe82" containerID="cf225480f25db60b0e9d83e3b98e796c13673684172c9b6129d91e173f39beb6" exitCode=0 Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:06.731404 5050 generic.go:334] "Generic (PLEG): container finished" podID="a5dabf50-534b-45cb-87db-45373930fe82" containerID="cca45253ddc48fc0f165034563c70e630dd7fac3f3c0cf0ba23d657266869519" exitCode=0 Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:06.731411 5050 generic.go:334] "Generic (PLEG): container finished" podID="a5dabf50-534b-45cb-87db-45373930fe82" containerID="b6c3d2263c2a8d964cc7422913cdc01c0a98e50a91cd20af0a8e5219f5c49d84" exitCode=0 Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:06.731418 5050 generic.go:334] "Generic (PLEG): container finished" podID="a5dabf50-534b-45cb-87db-45373930fe82" containerID="3c2652501efb162ddb07fcdf676ff7b425046c43c56a32e87cf2a1b7f86d8517" exitCode=0 Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:06.731425 5050 generic.go:334] "Generic (PLEG): container finished" podID="a5dabf50-534b-45cb-87db-45373930fe82" containerID="0b5dadf04d75b453da45faaba622c4ca8fa3ea72335c7e1cbe662af9423d5319" exitCode=0 Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:06.731434 5050 generic.go:334] "Generic (PLEG): container finished" podID="a5dabf50-534b-45cb-87db-45373930fe82" containerID="cc96ca859857b932852bab79b175e12e28dd66ea2b3f97528e65f1c394df699c" exitCode=0 Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:06.731442 5050 generic.go:334] "Generic (PLEG): container finished" podID="a5dabf50-534b-45cb-87db-45373930fe82" containerID="c1a089eb1d8d523f1a786eee0915def7fa7aab5c3e4514f0c035a46c61eef1cb" exitCode=0 Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:06.731450 5050 generic.go:334] "Generic (PLEG): container finished" podID="a5dabf50-534b-45cb-87db-45373930fe82" containerID="feac1300b5d8a5ea16c8321c45cc457e5dbf72ac6aab1103080d7accf21709e1" exitCode=0 Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:06.731538 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a5dabf50-534b-45cb-87db-45373930fe82","Type":"ContainerDied","Data":"bfa20bc6bb25080f92169274679704ad90a7e9f219408ae8226d21d94b1cbce8"} Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:06.731569 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a5dabf50-534b-45cb-87db-45373930fe82","Type":"ContainerDied","Data":"c59f8bb548eec4e62535766386e180811808e5f7cf7913a3c02582a806b4073f"} Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:06.731581 
5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a5dabf50-534b-45cb-87db-45373930fe82","Type":"ContainerDied","Data":"038b0092de538faefca3e8ca1075a18dd7d58853d0c6eb5fdadf157d7e0f2147"} Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:06.731591 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a5dabf50-534b-45cb-87db-45373930fe82","Type":"ContainerDied","Data":"cf225480f25db60b0e9d83e3b98e796c13673684172c9b6129d91e173f39beb6"} Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:06.731600 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a5dabf50-534b-45cb-87db-45373930fe82","Type":"ContainerDied","Data":"cca45253ddc48fc0f165034563c70e630dd7fac3f3c0cf0ba23d657266869519"} Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:06.731612 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a5dabf50-534b-45cb-87db-45373930fe82","Type":"ContainerDied","Data":"b6c3d2263c2a8d964cc7422913cdc01c0a98e50a91cd20af0a8e5219f5c49d84"} Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:06.731622 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a5dabf50-534b-45cb-87db-45373930fe82","Type":"ContainerDied","Data":"3c2652501efb162ddb07fcdf676ff7b425046c43c56a32e87cf2a1b7f86d8517"} Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:06.731631 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a5dabf50-534b-45cb-87db-45373930fe82","Type":"ContainerDied","Data":"0b5dadf04d75b453da45faaba622c4ca8fa3ea72335c7e1cbe662af9423d5319"} Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:06.731639 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a5dabf50-534b-45cb-87db-45373930fe82","Type":"ContainerDied","Data":"cc96ca859857b932852bab79b175e12e28dd66ea2b3f97528e65f1c394df699c"} Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:06.731648 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a5dabf50-534b-45cb-87db-45373930fe82","Type":"ContainerDied","Data":"c1a089eb1d8d523f1a786eee0915def7fa7aab5c3e4514f0c035a46c61eef1cb"} Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:06.731658 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a5dabf50-534b-45cb-87db-45373930fe82","Type":"ContainerDied","Data":"feac1300b5d8a5ea16c8321c45cc457e5dbf72ac6aab1103080d7accf21709e1"} Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:06.738355 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/novaapi4326-account-delete-9bmsz" Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:06.749516 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-7gmrp_58cdcd05-e81a-4ed4-8357-249649b17449/openstack-network-exporter/0.log" Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:06.749640 5050 generic.go:334] "Generic (PLEG): container finished" podID="58cdcd05-e81a-4ed4-8357-249649b17449" containerID="c4c46a2b59720049a8e8b5a6204d154b1116f9503accccc382e625d7853d1369" exitCode=2 Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:06.749927 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-7gmrp" event={"ID":"58cdcd05-e81a-4ed4-8357-249649b17449","Type":"ContainerDied","Data":"c4c46a2b59720049a8e8b5a6204d154b1116f9503accccc382e625d7853d1369"} Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:06.770880 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-ovs-pjzpq" podUID="88b4966d-124b-4cf4-b52b-704955059220" containerName="ovs-vswitchd" containerID="cri-o://3713fc357ee4f2a6f119e91ece32b1dd727d67185d218ebb887b345bbd9015d1" gracePeriod=29 Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:06.796080 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:06.796475 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="3386ffea-45ca-41e8-9aa5-61a2923a3394" containerName="nova-api-log" containerID="cri-o://26578d5a108fa4d1bfd89b54489a03a4c1c636d76f83751795e52594a63ff439" gracePeriod=30 Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:06.797182 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="3386ffea-45ca-41e8-9aa5-61a2923a3394" containerName="nova-api-api" containerID="cri-o://b8472b1400fec5a115ddf8be1b9aa9e96f77b6e60231027d5893fbbc8989bdac" gracePeriod=30 Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:06.819706 5050 generic.go:334] "Generic (PLEG): container finished" podID="213cfec6-ba42-4dbc-bd9c-051b193e4577" containerID="ed60fdd58c4339e3164d7f4e317f1114bf6ecb3ec2bef7cd7a80d1158c76ff29" exitCode=143 Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:06.819812 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"213cfec6-ba42-4dbc-bd9c-051b193e4577","Type":"ContainerDied","Data":"ed60fdd58c4339e3164d7f4e317f1114bf6ecb3ec2bef7cd7a80d1158c76ff29"} Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:06.829928 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:06.831319 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="0ca28ba4-2b37-4836-9d51-8dea84046163" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://c999df53c600f82fa92bc84444337d1373326ba3f2b76682afad53362cb34c3d" gracePeriod=30 Dec 11 14:12:07 crc kubenswrapper[5050]: E1211 14:12:06.832162 5050 handlers.go:78] "Exec lifecycle hook for Container in Pod failed" err=< Dec 11 14:12:07 crc kubenswrapper[5050]: command '/usr/local/bin/container-scripts/stop-ovsdb-server.sh' exited with 137: ++ dirname /usr/local/bin/container-scripts/stop-ovsdb-server.sh Dec 11 14:12:07 crc kubenswrapper[5050]: + source 
/usr/local/bin/container-scripts/functions Dec 11 14:12:07 crc kubenswrapper[5050]: ++ OVNBridge=br-int Dec 11 14:12:07 crc kubenswrapper[5050]: ++ OVNRemote=tcp:localhost:6642 Dec 11 14:12:07 crc kubenswrapper[5050]: ++ OVNEncapType=geneve Dec 11 14:12:07 crc kubenswrapper[5050]: ++ OVNAvailabilityZones= Dec 11 14:12:07 crc kubenswrapper[5050]: ++ EnableChassisAsGateway=true Dec 11 14:12:07 crc kubenswrapper[5050]: ++ PhysicalNetworks= Dec 11 14:12:07 crc kubenswrapper[5050]: ++ OVNHostName= Dec 11 14:12:07 crc kubenswrapper[5050]: ++ DB_FILE=/etc/openvswitch/conf.db Dec 11 14:12:07 crc kubenswrapper[5050]: ++ ovs_dir=/var/lib/openvswitch Dec 11 14:12:07 crc kubenswrapper[5050]: ++ FLOWS_RESTORE_SCRIPT=/var/lib/openvswitch/flows-script Dec 11 14:12:07 crc kubenswrapper[5050]: ++ FLOWS_RESTORE_DIR=/var/lib/openvswitch/saved-flows Dec 11 14:12:07 crc kubenswrapper[5050]: ++ SAFE_TO_STOP_OVSDB_SERVER_SEMAPHORE=/var/lib/openvswitch/is_safe_to_stop_ovsdb_server Dec 11 14:12:07 crc kubenswrapper[5050]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Dec 11 14:12:07 crc kubenswrapper[5050]: + sleep 0.5 Dec 11 14:12:07 crc kubenswrapper[5050]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Dec 11 14:12:07 crc kubenswrapper[5050]: + sleep 0.5 Dec 11 14:12:07 crc kubenswrapper[5050]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Dec 11 14:12:07 crc kubenswrapper[5050]: + cleanup_ovsdb_server_semaphore Dec 11 14:12:07 crc kubenswrapper[5050]: + rm -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server Dec 11 14:12:07 crc kubenswrapper[5050]: + /usr/share/openvswitch/scripts/ovs-ctl stop --no-ovs-vswitchd Dec 11 14:12:07 crc kubenswrapper[5050]: > execCommand=["/usr/local/bin/container-scripts/stop-ovsdb-server.sh"] containerName="ovsdb-server" pod="openstack/ovn-controller-ovs-pjzpq" message=< Dec 11 14:12:07 crc kubenswrapper[5050]: Exiting ovsdb-server (5) [ OK ] Dec 11 14:12:07 crc kubenswrapper[5050]: ++ dirname /usr/local/bin/container-scripts/stop-ovsdb-server.sh Dec 11 14:12:07 crc kubenswrapper[5050]: + source /usr/local/bin/container-scripts/functions Dec 11 14:12:07 crc kubenswrapper[5050]: ++ OVNBridge=br-int Dec 11 14:12:07 crc kubenswrapper[5050]: ++ OVNRemote=tcp:localhost:6642 Dec 11 14:12:07 crc kubenswrapper[5050]: ++ OVNEncapType=geneve Dec 11 14:12:07 crc kubenswrapper[5050]: ++ OVNAvailabilityZones= Dec 11 14:12:07 crc kubenswrapper[5050]: ++ EnableChassisAsGateway=true Dec 11 14:12:07 crc kubenswrapper[5050]: ++ PhysicalNetworks= Dec 11 14:12:07 crc kubenswrapper[5050]: ++ OVNHostName= Dec 11 14:12:07 crc kubenswrapper[5050]: ++ DB_FILE=/etc/openvswitch/conf.db Dec 11 14:12:07 crc kubenswrapper[5050]: ++ ovs_dir=/var/lib/openvswitch Dec 11 14:12:07 crc kubenswrapper[5050]: ++ FLOWS_RESTORE_SCRIPT=/var/lib/openvswitch/flows-script Dec 11 14:12:07 crc kubenswrapper[5050]: ++ FLOWS_RESTORE_DIR=/var/lib/openvswitch/saved-flows Dec 11 14:12:07 crc kubenswrapper[5050]: ++ SAFE_TO_STOP_OVSDB_SERVER_SEMAPHORE=/var/lib/openvswitch/is_safe_to_stop_ovsdb_server Dec 11 14:12:07 crc kubenswrapper[5050]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Dec 11 14:12:07 crc kubenswrapper[5050]: + sleep 0.5 Dec 11 14:12:07 crc kubenswrapper[5050]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Dec 11 14:12:07 crc kubenswrapper[5050]: + sleep 0.5 Dec 11 14:12:07 crc kubenswrapper[5050]: + '[' '!' 
-f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Dec 11 14:12:07 crc kubenswrapper[5050]: + cleanup_ovsdb_server_semaphore Dec 11 14:12:07 crc kubenswrapper[5050]: + rm -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server Dec 11 14:12:07 crc kubenswrapper[5050]: + /usr/share/openvswitch/scripts/ovs-ctl stop --no-ovs-vswitchd Dec 11 14:12:07 crc kubenswrapper[5050]: > Dec 11 14:12:07 crc kubenswrapper[5050]: E1211 14:12:06.832247 5050 kuberuntime_container.go:691] "PreStop hook failed" err=< Dec 11 14:12:07 crc kubenswrapper[5050]: command '/usr/local/bin/container-scripts/stop-ovsdb-server.sh' exited with 137: ++ dirname /usr/local/bin/container-scripts/stop-ovsdb-server.sh Dec 11 14:12:07 crc kubenswrapper[5050]: + source /usr/local/bin/container-scripts/functions Dec 11 14:12:07 crc kubenswrapper[5050]: ++ OVNBridge=br-int Dec 11 14:12:07 crc kubenswrapper[5050]: ++ OVNRemote=tcp:localhost:6642 Dec 11 14:12:07 crc kubenswrapper[5050]: ++ OVNEncapType=geneve Dec 11 14:12:07 crc kubenswrapper[5050]: ++ OVNAvailabilityZones= Dec 11 14:12:07 crc kubenswrapper[5050]: ++ EnableChassisAsGateway=true Dec 11 14:12:07 crc kubenswrapper[5050]: ++ PhysicalNetworks= Dec 11 14:12:07 crc kubenswrapper[5050]: ++ OVNHostName= Dec 11 14:12:07 crc kubenswrapper[5050]: ++ DB_FILE=/etc/openvswitch/conf.db Dec 11 14:12:07 crc kubenswrapper[5050]: ++ ovs_dir=/var/lib/openvswitch Dec 11 14:12:07 crc kubenswrapper[5050]: ++ FLOWS_RESTORE_SCRIPT=/var/lib/openvswitch/flows-script Dec 11 14:12:07 crc kubenswrapper[5050]: ++ FLOWS_RESTORE_DIR=/var/lib/openvswitch/saved-flows Dec 11 14:12:07 crc kubenswrapper[5050]: ++ SAFE_TO_STOP_OVSDB_SERVER_SEMAPHORE=/var/lib/openvswitch/is_safe_to_stop_ovsdb_server Dec 11 14:12:07 crc kubenswrapper[5050]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Dec 11 14:12:07 crc kubenswrapper[5050]: + sleep 0.5 Dec 11 14:12:07 crc kubenswrapper[5050]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Dec 11 14:12:07 crc kubenswrapper[5050]: + sleep 0.5 Dec 11 14:12:07 crc kubenswrapper[5050]: + '[' '!' 
-f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Dec 11 14:12:07 crc kubenswrapper[5050]: + cleanup_ovsdb_server_semaphore Dec 11 14:12:07 crc kubenswrapper[5050]: + rm -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server Dec 11 14:12:07 crc kubenswrapper[5050]: + /usr/share/openvswitch/scripts/ovs-ctl stop --no-ovs-vswitchd Dec 11 14:12:07 crc kubenswrapper[5050]: > pod="openstack/ovn-controller-ovs-pjzpq" podUID="88b4966d-124b-4cf4-b52b-704955059220" containerName="ovsdb-server" containerID="cri-o://5ce7ebe2679d23b41ab50f0b7d865ea6308f725963cbd7e3ed9e1c34f506375c" Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:06.832291 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-ovs-pjzpq" podUID="88b4966d-124b-4cf4-b52b-704955059220" containerName="ovsdb-server" containerID="cri-o://5ce7ebe2679d23b41ab50f0b7d865ea6308f725963cbd7e3ed9e1c34f506375c" gracePeriod=29 Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:06.846772 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-cell1-galera-0"] Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:06.867608 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_c928931c-d49d-41dc-9181-11d856ed3bd0/ovsdbserver-sb/0.log" Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:06.867659 5050 generic.go:334] "Generic (PLEG): container finished" podID="c928931c-d49d-41dc-9181-11d856ed3bd0" containerID="3111270185c2d5c0f18227cafa21198528d2d2e6ed0667e53d97a8edfd2d1860" exitCode=2 Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:06.867678 5050 generic.go:334] "Generic (PLEG): container finished" podID="c928931c-d49d-41dc-9181-11d856ed3bd0" containerID="3768c3d6cf415867973810bdb14c5966684aab657f8614b9d4062545081db44d" exitCode=143 Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:06.867801 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-97qxs"] Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:06.867831 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"c928931c-d49d-41dc-9181-11d856ed3bd0","Type":"ContainerDied","Data":"3111270185c2d5c0f18227cafa21198528d2d2e6ed0667e53d97a8edfd2d1860"} Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:06.867854 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"c928931c-d49d-41dc-9181-11d856ed3bd0","Type":"ContainerDied","Data":"3768c3d6cf415867973810bdb14c5966684aab657f8614b9d4062545081db44d"} Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:06.868675 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e7896842-a23c-4427-ab0e-702138a8cdd0-operator-scripts\") pod \"novacell1d6ec-account-delete-cwf97\" (UID: \"e7896842-a23c-4427-ab0e-702138a8cdd0\") " pod="openstack/novacell1d6ec-account-delete-cwf97" Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:06.868991 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2bh5r\" (UniqueName: \"kubernetes.io/projected/e7896842-a23c-4427-ab0e-702138a8cdd0-kube-api-access-2bh5r\") pod \"novacell1d6ec-account-delete-cwf97\" (UID: \"e7896842-a23c-4427-ab0e-702138a8cdd0\") " pod="openstack/novacell1d6ec-account-delete-cwf97" Dec 11 14:12:07 crc kubenswrapper[5050]: E1211 14:12:06.869140 5050 configmap.go:193] Couldn't get configMap openstack/openstack-cell1-scripts: 
configmap "openstack-cell1-scripts" not found Dec 11 14:12:07 crc kubenswrapper[5050]: E1211 14:12:06.882459 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e7896842-a23c-4427-ab0e-702138a8cdd0-operator-scripts podName:e7896842-a23c-4427-ab0e-702138a8cdd0 nodeName:}" failed. No retries permitted until 2025-12-11 14:12:07.88242858 +0000 UTC m=+1418.726151156 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/e7896842-a23c-4427-ab0e-702138a8cdd0-operator-scripts") pod "novacell1d6ec-account-delete-cwf97" (UID: "e7896842-a23c-4427-ab0e-702138a8cdd0") : configmap "openstack-cell1-scripts" not found Dec 11 14:12:07 crc kubenswrapper[5050]: E1211 14:12:06.875586 5050 projected.go:194] Error preparing data for projected volume kube-api-access-2bh5r for pod openstack/novacell1d6ec-account-delete-cwf97: failed to fetch token: serviceaccounts "galera-openstack-cell1" not found Dec 11 14:12:07 crc kubenswrapper[5050]: E1211 14:12:06.882516 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e7896842-a23c-4427-ab0e-702138a8cdd0-kube-api-access-2bh5r podName:e7896842-a23c-4427-ab0e-702138a8cdd0 nodeName:}" failed. No retries permitted until 2025-12-11 14:12:07.882508373 +0000 UTC m=+1418.726230959 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-2bh5r" (UniqueName: "kubernetes.io/projected/e7896842-a23c-4427-ab0e-702138a8cdd0-kube-api-access-2bh5r") pod "novacell1d6ec-account-delete-cwf97" (UID: "e7896842-a23c-4427-ab0e-702138a8cdd0") : failed to fetch token: serviceaccounts "galera-openstack-cell1" not found Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:06.886451 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-97qxs"] Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:06.899179 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/novacell1d6ec-account-delete-cwf97"] Dec 11 14:12:07 crc kubenswrapper[5050]: E1211 14:12:06.900665 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-2bh5r operator-scripts], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/novacell1d6ec-account-delete-cwf97" podUID="e7896842-a23c-4427-ab0e-702138a8cdd0" Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:06.914673 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-d6ec-account-create-update-f7xfx"] Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:06.915749 5050 generic.go:334] "Generic (PLEG): container finished" podID="003b423c-92a0-47f6-8358-003f3ad24ded" containerID="80030e514e19d023c1bec72880044d75c75621951af814ba5560c38086dcbc3d" exitCode=143 Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:06.915844 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"003b423c-92a0-47f6-8358-003f3ad24ded","Type":"ContainerDied","Data":"80030e514e19d023c1bec72880044d75c75621951af814ba5560c38086dcbc3d"} Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:06.929620 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-d6ec-account-create-update-f7xfx"] Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:06.943466 5050 generic.go:334] "Generic (PLEG): container finished" podID="38b1a06e-804a-44dc-8e77-a7d8162f38bd" containerID="4a0dd3bf669f7beb6461a99c18d911c75efcecd8fddb14f47b6513fec2bf9b54" 
exitCode=143 Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:06.943572 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"38b1a06e-804a-44dc-8e77-a7d8162f38bd","Type":"ContainerDied","Data":"4a0dd3bf669f7beb6461a99c18d911c75efcecd8fddb14f47b6513fec2bf9b54"} Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:06.952912 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:06.969659 5050 generic.go:334] "Generic (PLEG): container finished" podID="01fa4d89-aae5-451a-8798-2700053fe3d4" containerID="3dc7b12cce104d7f7c7d469a1c36d591c390e0b69268044a2dd0bba6d7255d70" exitCode=0 Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:06.969808 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-47tvr" event={"ID":"01fa4d89-aae5-451a-8798-2700053fe3d4","Type":"ContainerDied","Data":"3dc7b12cce104d7f7c7d469a1c36d591c390e0b69268044a2dd0bba6d7255d70"} Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:06.978595 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_e96be66c-07f2-47c0-a784-6af473c8a2a8/ovsdbserver-nb/0.log" Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:06.978645 5050 generic.go:334] "Generic (PLEG): container finished" podID="e96be66c-07f2-47c0-a784-6af473c8a2a8" containerID="c612f0ddd6bdd2093ab8086d9662d761b9c9ab6d418a4c7403db013b23c7a269" exitCode=143 Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:06.978701 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"e96be66c-07f2-47c0-a784-6af473c8a2a8","Type":"ContainerDied","Data":"c612f0ddd6bdd2093ab8086d9662d761b9c9ab6d418a4c7403db013b23c7a269"} Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:06.980701 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_5371a32d-3998-4ddc-93d6-27e9afdb9712/ovn-northd/0.log" Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:06.980724 5050 generic.go:334] "Generic (PLEG): container finished" podID="5371a32d-3998-4ddc-93d6-27e9afdb9712" containerID="43b0ab8e8f0e3806e99b599e730fb8eac2bff4d265f8afa1bea527256b44c445" exitCode=143 Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:06.980757 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"5371a32d-3998-4ddc-93d6-27e9afdb9712","Type":"ContainerDied","Data":"43b0ab8e8f0e3806e99b599e730fb8eac2bff4d265f8afa1bea527256b44c445"} Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:06.995871 5050 generic.go:334] "Generic (PLEG): container finished" podID="4de557a0-8b74-4d40-8c91-351ba127eb13" containerID="1977929bc424b057bf59a3155bf7f4cfdfe00b2e3f9856bd807dc72825864a27" exitCode=143 Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:06.995946 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-78ccc9f8bd-jdg2t" event={"ID":"4de557a0-8b74-4d40-8c91-351ba127eb13","Type":"ContainerDied","Data":"1977929bc424b057bf59a3155bf7f4cfdfe00b2e3f9856bd807dc72825864a27"} Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:07.008341 5050 generic.go:334] "Generic (PLEG): container finished" podID="ee634ad2-5f9a-4183-bddc-d076b6456276" containerID="61e67897af4ee162906e77cb4c294b5d3db5572b3070be4672c47d685548ceec" exitCode=0 Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:07.008589 5050 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/barbican-api-57f899fb58-v2lwj" podUID="4ad4414f-ca3e-4ff4-9e2a-3ab029df2ebf" containerName="barbican-api-log" containerID="cri-o://887182e7bdf510cf5f8d29d8def14429f4899834fa471d481e28b9675086a309" gracePeriod=30 Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:07.008722 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7766777c65-2rcww" event={"ID":"ee634ad2-5f9a-4183-bddc-d076b6456276","Type":"ContainerDied","Data":"61e67897af4ee162906e77cb4c294b5d3db5572b3070be4672c47d685548ceec"} Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:07.009062 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-57f899fb58-v2lwj" podUID="4ad4414f-ca3e-4ff4-9e2a-3ab029df2ebf" containerName="barbican-api" containerID="cri-o://9048b99f225c02588f0acf6ab078d23ba9d748c49478356c85f61a74df87c960" gracePeriod=30 Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:07.018839 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="0891f075-8101-475b-b844-e7cb42a4990b" containerName="rabbitmq" containerID="cri-o://db23d3f3f27190827f163f21b2da4cd0ca1fc9aa0bfb390a14b8c83a5ed2ee47" gracePeriod=604800 Dec 11 14:12:07 crc kubenswrapper[5050]: E1211 14:12:07.186856 5050 secret.go:188] Couldn't get secret openstack/barbican-api-config-data: secret "barbican-api-config-data" not found Dec 11 14:12:07 crc kubenswrapper[5050]: E1211 14:12:07.187312 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4ad4414f-ca3e-4ff4-9e2a-3ab029df2ebf-config-data-custom podName:4ad4414f-ca3e-4ff4-9e2a-3ab029df2ebf nodeName:}" failed. No retries permitted until 2025-12-11 14:12:08.187293336 +0000 UTC m=+1419.031015912 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-data-custom" (UniqueName: "kubernetes.io/secret/4ad4414f-ca3e-4ff4-9e2a-3ab029df2ebf-config-data-custom") pod "barbican-api-57f899fb58-v2lwj" (UID: "4ad4414f-ca3e-4ff4-9e2a-3ab029df2ebf") : secret "barbican-api-config-data" not found Dec 11 14:12:07 crc kubenswrapper[5050]: E1211 14:12:07.187367 5050 secret.go:188] Couldn't get secret openstack/barbican-config-data: secret "barbican-config-data" not found Dec 11 14:12:07 crc kubenswrapper[5050]: E1211 14:12:07.187389 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4ad4414f-ca3e-4ff4-9e2a-3ab029df2ebf-config-data podName:4ad4414f-ca3e-4ff4-9e2a-3ab029df2ebf nodeName:}" failed. No retries permitted until 2025-12-11 14:12:08.187383499 +0000 UTC m=+1419.031106085 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/4ad4414f-ca3e-4ff4-9e2a-3ab029df2ebf-config-data") pod "barbican-api-57f899fb58-v2lwj" (UID: "4ad4414f-ca3e-4ff4-9e2a-3ab029df2ebf") : secret "barbican-config-data" not found Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:07.348633 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-cell1-galera-0" podUID="c8b3d8cd-9278-4639-86fe-1aa7696fecca" containerName="galera" containerID="cri-o://8498e742367424482ed9a44ca42a11a58844241a90788c1a5e431a1e93f23131" gracePeriod=30 Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:07.573368 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="097e2c08-b7fb-4d21-8e0b-efdb0ac0c9a6" path="/var/lib/kubelet/pods/097e2c08-b7fb-4d21-8e0b-efdb0ac0c9a6/volumes" Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:07.574712 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="401d393f-46fc-4150-b785-313d42022d95" path="/var/lib/kubelet/pods/401d393f-46fc-4150-b785-313d42022d95/volumes" Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:07.576512 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4334061e-4daa-4f87-bbdc-d1ccbfdafa27" path="/var/lib/kubelet/pods/4334061e-4daa-4f87-bbdc-d1ccbfdafa27/volumes" Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:07.583491 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="84abe132-b822-4b40-9952-7454c24cf3d0" path="/var/lib/kubelet/pods/84abe132-b822-4b40-9952-7454c24cf3d0/volumes" Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:07.591868 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_5371a32d-3998-4ddc-93d6-27e9afdb9712/ovn-northd/0.log" Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:07.592227 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:07.613265 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-7gmrp_58cdcd05-e81a-4ed4-8357-249649b17449/openstack-network-exporter/0.log" Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:07.613395 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-7gmrp" Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:07.630276 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c3f359da-3978-4220-91da-28b53f4cf109" path="/var/lib/kubelet/pods/c3f359da-3978-4220-91da-28b53f4cf109/volumes" Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:07.638539 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-tccfb"] Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:07.638711 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:07.639288 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-conductor-0" podUID="ea5ac39a-6fb6-42bb-8ffd-e0036e93a1d7" containerName="nova-cell1-conductor-conductor" containerID="cri-o://927455209d981bf727b535faeebc64bff29620b7370b60af99e0a5354d586a34" gracePeriod=30 Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:07.672759 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-tccfb"] Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:07.677976 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:07.681490 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell0-conductor-0" podUID="7c13a1ff-0952-40b8-9157-3f1ba8b232c0" containerName="nova-cell0-conductor-conductor" containerID="cri-o://dcd6241bf625d5260eb81af83e760bdddc10ab5c6ec8bd47adbb59c1809dd3e4" gracePeriod=30 Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:07.695478 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-w884k"] Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:07.709162 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-w884k"] Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:07.717755 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/5371a32d-3998-4ddc-93d6-27e9afdb9712-ovn-northd-tls-certs\") pod \"5371a32d-3998-4ddc-93d6-27e9afdb9712\" (UID: \"5371a32d-3998-4ddc-93d6-27e9afdb9712\") " Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:07.717821 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8jdmv\" (UniqueName: \"kubernetes.io/projected/5371a32d-3998-4ddc-93d6-27e9afdb9712-kube-api-access-8jdmv\") pod \"5371a32d-3998-4ddc-93d6-27e9afdb9712\" (UID: \"5371a32d-3998-4ddc-93d6-27e9afdb9712\") " Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:07.730288 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/58cdcd05-e81a-4ed4-8357-249649b17449-config\") pod \"58cdcd05-e81a-4ed4-8357-249649b17449\" (UID: \"58cdcd05-e81a-4ed4-8357-249649b17449\") " Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:07.730497 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58cdcd05-e81a-4ed4-8357-249649b17449-combined-ca-bundle\") pod \"58cdcd05-e81a-4ed4-8357-249649b17449\" (UID: \"58cdcd05-e81a-4ed4-8357-249649b17449\") " Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:07.730536 
5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lxgn8\" (UniqueName: \"kubernetes.io/projected/58cdcd05-e81a-4ed4-8357-249649b17449-kube-api-access-lxgn8\") pod \"58cdcd05-e81a-4ed4-8357-249649b17449\" (UID: \"58cdcd05-e81a-4ed4-8357-249649b17449\") " Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:07.730592 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/5371a32d-3998-4ddc-93d6-27e9afdb9712-ovn-rundir\") pod \"5371a32d-3998-4ddc-93d6-27e9afdb9712\" (UID: \"5371a32d-3998-4ddc-93d6-27e9afdb9712\") " Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:07.730710 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/5371a32d-3998-4ddc-93d6-27e9afdb9712-metrics-certs-tls-certs\") pod \"5371a32d-3998-4ddc-93d6-27e9afdb9712\" (UID: \"5371a32d-3998-4ddc-93d6-27e9afdb9712\") " Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:07.730739 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/58cdcd05-e81a-4ed4-8357-249649b17449-metrics-certs-tls-certs\") pod \"58cdcd05-e81a-4ed4-8357-249649b17449\" (UID: \"58cdcd05-e81a-4ed4-8357-249649b17449\") " Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:07.730805 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5371a32d-3998-4ddc-93d6-27e9afdb9712-config\") pod \"5371a32d-3998-4ddc-93d6-27e9afdb9712\" (UID: \"5371a32d-3998-4ddc-93d6-27e9afdb9712\") " Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:07.730830 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/58cdcd05-e81a-4ed4-8357-249649b17449-ovs-rundir\") pod \"58cdcd05-e81a-4ed4-8357-249649b17449\" (UID: \"58cdcd05-e81a-4ed4-8357-249649b17449\") " Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:07.730875 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5371a32d-3998-4ddc-93d6-27e9afdb9712-combined-ca-bundle\") pod \"5371a32d-3998-4ddc-93d6-27e9afdb9712\" (UID: \"5371a32d-3998-4ddc-93d6-27e9afdb9712\") " Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:07.730959 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/58cdcd05-e81a-4ed4-8357-249649b17449-ovn-rundir\") pod \"58cdcd05-e81a-4ed4-8357-249649b17449\" (UID: \"58cdcd05-e81a-4ed4-8357-249649b17449\") " Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:07.730987 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5371a32d-3998-4ddc-93d6-27e9afdb9712-scripts\") pod \"5371a32d-3998-4ddc-93d6-27e9afdb9712\" (UID: \"5371a32d-3998-4ddc-93d6-27e9afdb9712\") " Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:07.733299 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5371a32d-3998-4ddc-93d6-27e9afdb9712-scripts" (OuterVolumeSpecName: "scripts") pod "5371a32d-3998-4ddc-93d6-27e9afdb9712" (UID: "5371a32d-3998-4ddc-93d6-27e9afdb9712"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:07.736625 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5371a32d-3998-4ddc-93d6-27e9afdb9712-ovn-rundir" (OuterVolumeSpecName: "ovn-rundir") pod "5371a32d-3998-4ddc-93d6-27e9afdb9712" (UID: "5371a32d-3998-4ddc-93d6-27e9afdb9712"). InnerVolumeSpecName "ovn-rundir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:07.736863 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/58cdcd05-e81a-4ed4-8357-249649b17449-ovs-rundir" (OuterVolumeSpecName: "ovs-rundir") pod "58cdcd05-e81a-4ed4-8357-249649b17449" (UID: "58cdcd05-e81a-4ed4-8357-249649b17449"). InnerVolumeSpecName "ovs-rundir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:07.738199 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5371a32d-3998-4ddc-93d6-27e9afdb9712-config" (OuterVolumeSpecName: "config") pod "5371a32d-3998-4ddc-93d6-27e9afdb9712" (UID: "5371a32d-3998-4ddc-93d6-27e9afdb9712"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:07.739619 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/58cdcd05-e81a-4ed4-8357-249649b17449-ovn-rundir" (OuterVolumeSpecName: "ovn-rundir") pod "58cdcd05-e81a-4ed4-8357-249649b17449" (UID: "58cdcd05-e81a-4ed4-8357-249649b17449"). InnerVolumeSpecName "ovn-rundir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:07.747668 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/58cdcd05-e81a-4ed4-8357-249649b17449-config" (OuterVolumeSpecName: "config") pod "58cdcd05-e81a-4ed4-8357-249649b17449" (UID: "58cdcd05-e81a-4ed4-8357-249649b17449"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:07.755607 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58cdcd05-e81a-4ed4-8357-249649b17449-kube-api-access-lxgn8" (OuterVolumeSpecName: "kube-api-access-lxgn8") pod "58cdcd05-e81a-4ed4-8357-249649b17449" (UID: "58cdcd05-e81a-4ed4-8357-249649b17449"). InnerVolumeSpecName "kube-api-access-lxgn8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:07.757938 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5371a32d-3998-4ddc-93d6-27e9afdb9712-kube-api-access-8jdmv" (OuterVolumeSpecName: "kube-api-access-8jdmv") pod "5371a32d-3998-4ddc-93d6-27e9afdb9712" (UID: "5371a32d-3998-4ddc-93d6-27e9afdb9712"). InnerVolumeSpecName "kube-api-access-8jdmv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:07.830946 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_c928931c-d49d-41dc-9181-11d856ed3bd0/ovsdbserver-sb/0.log" Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:07.831384 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:07.834373 5050 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/58cdcd05-e81a-4ed4-8357-249649b17449-config\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:07.834415 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lxgn8\" (UniqueName: \"kubernetes.io/projected/58cdcd05-e81a-4ed4-8357-249649b17449-kube-api-access-lxgn8\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:07.834428 5050 reconciler_common.go:293] "Volume detached for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/5371a32d-3998-4ddc-93d6-27e9afdb9712-ovn-rundir\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:07.834442 5050 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5371a32d-3998-4ddc-93d6-27e9afdb9712-config\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:07.834452 5050 reconciler_common.go:293] "Volume detached for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/58cdcd05-e81a-4ed4-8357-249649b17449-ovs-rundir\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:07.834461 5050 reconciler_common.go:293] "Volume detached for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/58cdcd05-e81a-4ed4-8357-249649b17449-ovn-rundir\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:07.834470 5050 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5371a32d-3998-4ddc-93d6-27e9afdb9712-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:07.834479 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8jdmv\" (UniqueName: \"kubernetes.io/projected/5371a32d-3998-4ddc-93d6-27e9afdb9712-kube-api-access-8jdmv\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:07 crc kubenswrapper[5050]: E1211 14:12:07.834555 5050 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Dec 11 14:12:07 crc kubenswrapper[5050]: E1211 14:12:07.834617 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/458f05be-2fd6-44d9-8034-f077356964ce-config-data podName:458f05be-2fd6-44d9-8034-f077356964ce nodeName:}" failed. No retries permitted until 2025-12-11 14:12:11.834597202 +0000 UTC m=+1422.678319788 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/458f05be-2fd6-44d9-8034-f077356964ce-config-data") pod "rabbitmq-cell1-server-0" (UID: "458f05be-2fd6-44d9-8034-f077356964ce") : configmap "rabbitmq-cell1-config-data" not found Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:07.837045 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-47tvr" Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:07.869179 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5371a32d-3998-4ddc-93d6-27e9afdb9712-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5371a32d-3998-4ddc-93d6-27e9afdb9712" (UID: "5371a32d-3998-4ddc-93d6-27e9afdb9712"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:07.874862 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-fcd6f8f8f-b59kh" Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:07.876333 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58cdcd05-e81a-4ed4-8357-249649b17449-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "58cdcd05-e81a-4ed4-8357-249649b17449" (UID: "58cdcd05-e81a-4ed4-8357-249649b17449"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:07.898855 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Dec 11 14:12:07 crc kubenswrapper[5050]: E1211 14:12:07.922162 5050 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="dcd6241bf625d5260eb81af83e760bdddc10ab5c6ec8bd47adbb59c1809dd3e4" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Dec 11 14:12:07 crc kubenswrapper[5050]: E1211 14:12:07.938678 5050 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="dcd6241bf625d5260eb81af83e760bdddc10ab5c6ec8bd47adbb59c1809dd3e4" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:07.939253 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/01fa4d89-aae5-451a-8798-2700053fe3d4-ovn-controller-tls-certs\") pod \"01fa4d89-aae5-451a-8798-2700053fe3d4\" (UID: \"01fa4d89-aae5-451a-8798-2700053fe3d4\") " Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:07.939328 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7p25b\" (UniqueName: \"kubernetes.io/projected/c443a35b-44e5-495f-b23b-75ff35319194-kube-api-access-7p25b\") pod \"c443a35b-44e5-495f-b23b-75ff35319194\" (UID: \"c443a35b-44e5-495f-b23b-75ff35319194\") " Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:07.939409 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c443a35b-44e5-495f-b23b-75ff35319194-ovsdbserver-nb\") pod \"c443a35b-44e5-495f-b23b-75ff35319194\" (UID: \"c443a35b-44e5-495f-b23b-75ff35319194\") " Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:07.939464 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/01fa4d89-aae5-451a-8798-2700053fe3d4-var-run\") pod \"01fa4d89-aae5-451a-8798-2700053fe3d4\" (UID: \"01fa4d89-aae5-451a-8798-2700053fe3d4\") " Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:07.939491 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kxnd5\" (UniqueName: \"kubernetes.io/projected/01fa4d89-aae5-451a-8798-2700053fe3d4-kube-api-access-kxnd5\") pod \"01fa4d89-aae5-451a-8798-2700053fe3d4\" (UID: \"01fa4d89-aae5-451a-8798-2700053fe3d4\") " Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 
14:12:07.939518 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c443a35b-44e5-495f-b23b-75ff35319194-config\") pod \"c443a35b-44e5-495f-b23b-75ff35319194\" (UID: \"c443a35b-44e5-495f-b23b-75ff35319194\") " Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:07.939563 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c443a35b-44e5-495f-b23b-75ff35319194-dns-swift-storage-0\") pod \"c443a35b-44e5-495f-b23b-75ff35319194\" (UID: \"c443a35b-44e5-495f-b23b-75ff35319194\") " Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:07.939585 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/01fa4d89-aae5-451a-8798-2700053fe3d4-var-log-ovn\") pod \"01fa4d89-aae5-451a-8798-2700053fe3d4\" (UID: \"01fa4d89-aae5-451a-8798-2700053fe3d4\") " Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:07.939607 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c928931c-d49d-41dc-9181-11d856ed3bd0-config\") pod \"c928931c-d49d-41dc-9181-11d856ed3bd0\" (UID: \"c928931c-d49d-41dc-9181-11d856ed3bd0\") " Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:07.939663 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c9gv9\" (UniqueName: \"kubernetes.io/projected/c928931c-d49d-41dc-9181-11d856ed3bd0-kube-api-access-c9gv9\") pod \"c928931c-d49d-41dc-9181-11d856ed3bd0\" (UID: \"c928931c-d49d-41dc-9181-11d856ed3bd0\") " Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:07.939820 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c443a35b-44e5-495f-b23b-75ff35319194-dns-svc\") pod \"c443a35b-44e5-495f-b23b-75ff35319194\" (UID: \"c443a35b-44e5-495f-b23b-75ff35319194\") " Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:07.939856 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c928931c-d49d-41dc-9181-11d856ed3bd0-combined-ca-bundle\") pod \"c928931c-d49d-41dc-9181-11d856ed3bd0\" (UID: \"c928931c-d49d-41dc-9181-11d856ed3bd0\") " Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:07.940073 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c928931c-d49d-41dc-9181-11d856ed3bd0-ovsdb-rundir\") pod \"c928931c-d49d-41dc-9181-11d856ed3bd0\" (UID: \"c928931c-d49d-41dc-9181-11d856ed3bd0\") " Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:07.940164 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c928931c-d49d-41dc-9181-11d856ed3bd0-metrics-certs-tls-certs\") pod \"c928931c-d49d-41dc-9181-11d856ed3bd0\" (UID: \"c928931c-d49d-41dc-9181-11d856ed3bd0\") " Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:07.940277 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/01fa4d89-aae5-451a-8798-2700053fe3d4-var-run-ovn\") pod \"01fa4d89-aae5-451a-8798-2700053fe3d4\" (UID: \"01fa4d89-aae5-451a-8798-2700053fe3d4\") " Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:07.940342 5050 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01fa4d89-aae5-451a-8798-2700053fe3d4-combined-ca-bundle\") pod \"01fa4d89-aae5-451a-8798-2700053fe3d4\" (UID: \"01fa4d89-aae5-451a-8798-2700053fe3d4\") " Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:07.940442 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c443a35b-44e5-495f-b23b-75ff35319194-ovsdbserver-sb\") pod \"c443a35b-44e5-495f-b23b-75ff35319194\" (UID: \"c443a35b-44e5-495f-b23b-75ff35319194\") " Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:07.940474 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndbcluster-sb-etc-ovn\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"c928931c-d49d-41dc-9181-11d856ed3bd0\" (UID: \"c928931c-d49d-41dc-9181-11d856ed3bd0\") " Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:07.940741 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c928931c-d49d-41dc-9181-11d856ed3bd0-ovsdbserver-sb-tls-certs\") pod \"c928931c-d49d-41dc-9181-11d856ed3bd0\" (UID: \"c928931c-d49d-41dc-9181-11d856ed3bd0\") " Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:07.940792 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c928931c-d49d-41dc-9181-11d856ed3bd0-scripts\") pod \"c928931c-d49d-41dc-9181-11d856ed3bd0\" (UID: \"c928931c-d49d-41dc-9181-11d856ed3bd0\") " Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:07.940839 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/01fa4d89-aae5-451a-8798-2700053fe3d4-scripts\") pod \"01fa4d89-aae5-451a-8798-2700053fe3d4\" (UID: \"01fa4d89-aae5-451a-8798-2700053fe3d4\") " Dec 11 14:12:07 crc kubenswrapper[5050]: E1211 14:12:07.941587 5050 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="dcd6241bf625d5260eb81af83e760bdddc10ab5c6ec8bd47adbb59c1809dd3e4" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Dec 11 14:12:07 crc kubenswrapper[5050]: E1211 14:12:07.941651 5050 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="7c13a1ff-0952-40b8-9157-3f1ba8b232c0" containerName="nova-cell0-conductor-conductor" Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:07.942426 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2bh5r\" (UniqueName: \"kubernetes.io/projected/e7896842-a23c-4427-ab0e-702138a8cdd0-kube-api-access-2bh5r\") pod \"novacell1d6ec-account-delete-cwf97\" (UID: \"e7896842-a23c-4427-ab0e-702138a8cdd0\") " pod="openstack/novacell1d6ec-account-delete-cwf97" Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:07.942598 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/01fa4d89-aae5-451a-8798-2700053fe3d4-var-run" (OuterVolumeSpecName: "var-run") pod "01fa4d89-aae5-451a-8798-2700053fe3d4" (UID: 
"01fa4d89-aae5-451a-8798-2700053fe3d4"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:07.942850 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e7896842-a23c-4427-ab0e-702138a8cdd0-operator-scripts\") pod \"novacell1d6ec-account-delete-cwf97\" (UID: \"e7896842-a23c-4427-ab0e-702138a8cdd0\") " pod="openstack/novacell1d6ec-account-delete-cwf97" Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:07.943181 5050 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/58cdcd05-e81a-4ed4-8357-249649b17449-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:07.943200 5050 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5371a32d-3998-4ddc-93d6-27e9afdb9712-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:07.943215 5050 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/01fa4d89-aae5-451a-8798-2700053fe3d4-var-run\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:07 crc kubenswrapper[5050]: E1211 14:12:07.943289 5050 configmap.go:193] Couldn't get configMap openstack/openstack-cell1-scripts: configmap "openstack-cell1-scripts" not found Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:07.943316 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/01fa4d89-aae5-451a-8798-2700053fe3d4-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "01fa4d89-aae5-451a-8798-2700053fe3d4" (UID: "01fa4d89-aae5-451a-8798-2700053fe3d4"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 14:12:07 crc kubenswrapper[5050]: E1211 14:12:07.943347 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e7896842-a23c-4427-ab0e-702138a8cdd0-operator-scripts podName:e7896842-a23c-4427-ab0e-702138a8cdd0 nodeName:}" failed. No retries permitted until 2025-12-11 14:12:09.94332466 +0000 UTC m=+1420.787047246 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/e7896842-a23c-4427-ab0e-702138a8cdd0-operator-scripts") pod "novacell1d6ec-account-delete-cwf97" (UID: "e7896842-a23c-4427-ab0e-702138a8cdd0") : configmap "openstack-cell1-scripts" not found Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:07.944039 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/01fa4d89-aae5-451a-8798-2700053fe3d4-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "01fa4d89-aae5-451a-8798-2700053fe3d4" (UID: "01fa4d89-aae5-451a-8798-2700053fe3d4"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:07.947928 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_e96be66c-07f2-47c0-a784-6af473c8a2a8/ovsdbserver-nb/0.log" Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:07.948097 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:07.948262 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c928931c-d49d-41dc-9181-11d856ed3bd0-scripts" (OuterVolumeSpecName: "scripts") pod "c928931c-d49d-41dc-9181-11d856ed3bd0" (UID: "c928931c-d49d-41dc-9181-11d856ed3bd0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:07.948560 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c928931c-d49d-41dc-9181-11d856ed3bd0-config" (OuterVolumeSpecName: "config") pod "c928931c-d49d-41dc-9181-11d856ed3bd0" (UID: "c928931c-d49d-41dc-9181-11d856ed3bd0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 14:12:07 crc kubenswrapper[5050]: E1211 14:12:07.949253 5050 projected.go:194] Error preparing data for projected volume kube-api-access-2bh5r for pod openstack/novacell1d6ec-account-delete-cwf97: failed to fetch token: serviceaccounts "galera-openstack-cell1" not found Dec 11 14:12:07 crc kubenswrapper[5050]: E1211 14:12:07.949344 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e7896842-a23c-4427-ab0e-702138a8cdd0-kube-api-access-2bh5r podName:e7896842-a23c-4427-ab0e-702138a8cdd0 nodeName:}" failed. No retries permitted until 2025-12-11 14:12:09.949317821 +0000 UTC m=+1420.793040467 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-2bh5r" (UniqueName: "kubernetes.io/projected/e7896842-a23c-4427-ab0e-702138a8cdd0-kube-api-access-2bh5r") pod "novacell1d6ec-account-delete-cwf97" (UID: "e7896842-a23c-4427-ab0e-702138a8cdd0") : failed to fetch token: serviceaccounts "galera-openstack-cell1" not found Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:07.949702 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01fa4d89-aae5-451a-8798-2700053fe3d4-scripts" (OuterVolumeSpecName: "scripts") pod "01fa4d89-aae5-451a-8798-2700053fe3d4" (UID: "01fa4d89-aae5-451a-8798-2700053fe3d4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:07.951024 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c928931c-d49d-41dc-9181-11d856ed3bd0-ovsdb-rundir" (OuterVolumeSpecName: "ovsdb-rundir") pod "c928931c-d49d-41dc-9181-11d856ed3bd0" (UID: "c928931c-d49d-41dc-9181-11d856ed3bd0"). InnerVolumeSpecName "ovsdb-rundir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:07.956416 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5371a32d-3998-4ddc-93d6-27e9afdb9712-ovn-northd-tls-certs" (OuterVolumeSpecName: "ovn-northd-tls-certs") pod "5371a32d-3998-4ddc-93d6-27e9afdb9712" (UID: "5371a32d-3998-4ddc-93d6-27e9afdb9712"). InnerVolumeSpecName "ovn-northd-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:07.969267 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "ovndbcluster-sb-etc-ovn") pod "c928931c-d49d-41dc-9181-11d856ed3bd0" (UID: "c928931c-d49d-41dc-9181-11d856ed3bd0"). 
InnerVolumeSpecName "local-storage02-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:07.977383 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01fa4d89-aae5-451a-8798-2700053fe3d4-kube-api-access-kxnd5" (OuterVolumeSpecName: "kube-api-access-kxnd5") pod "01fa4d89-aae5-451a-8798-2700053fe3d4" (UID: "01fa4d89-aae5-451a-8798-2700053fe3d4"). InnerVolumeSpecName "kube-api-access-kxnd5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 14:12:07 crc kubenswrapper[5050]: I1211 14:12:07.994949 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c928931c-d49d-41dc-9181-11d856ed3bd0-kube-api-access-c9gv9" (OuterVolumeSpecName: "kube-api-access-c9gv9") pod "c928931c-d49d-41dc-9181-11d856ed3bd0" (UID: "c928931c-d49d-41dc-9181-11d856ed3bd0"). InnerVolumeSpecName "kube-api-access-c9gv9". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 14:12:08 crc kubenswrapper[5050]: I1211 14:12:08.013828 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c443a35b-44e5-495f-b23b-75ff35319194-kube-api-access-7p25b" (OuterVolumeSpecName: "kube-api-access-7p25b") pod "c443a35b-44e5-495f-b23b-75ff35319194" (UID: "c443a35b-44e5-495f-b23b-75ff35319194"). InnerVolumeSpecName "kube-api-access-7p25b". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 14:12:08 crc kubenswrapper[5050]: I1211 14:12:08.023805 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-47tvr" event={"ID":"01fa4d89-aae5-451a-8798-2700053fe3d4","Type":"ContainerDied","Data":"1ade635b19d02c0aef64a1546e17b2e6fa10bdb422590899fc1a126ab22d4372"} Dec 11 14:12:08 crc kubenswrapper[5050]: I1211 14:12:08.023879 5050 scope.go:117] "RemoveContainer" containerID="3dc7b12cce104d7f7c7d469a1c36d591c390e0b69268044a2dd0bba6d7255d70" Dec 11 14:12:08 crc kubenswrapper[5050]: I1211 14:12:08.024084 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-47tvr" Dec 11 14:12:08 crc kubenswrapper[5050]: I1211 14:12:08.040783 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5371a32d-3998-4ddc-93d6-27e9afdb9712-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "5371a32d-3998-4ddc-93d6-27e9afdb9712" (UID: "5371a32d-3998-4ddc-93d6-27e9afdb9712"). InnerVolumeSpecName "metrics-certs-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:12:08 crc kubenswrapper[5050]: I1211 14:12:08.042051 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_e96be66c-07f2-47c0-a784-6af473c8a2a8/ovsdbserver-nb/0.log" Dec 11 14:12:08 crc kubenswrapper[5050]: I1211 14:12:08.042136 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"e96be66c-07f2-47c0-a784-6af473c8a2a8","Type":"ContainerDied","Data":"cfb0a77aeeda20dcd88cfbfdd07fabd66839af79463ba0d77f3fc7604c35e830"} Dec 11 14:12:08 crc kubenswrapper[5050]: I1211 14:12:08.042231 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Dec 11 14:12:08 crc kubenswrapper[5050]: I1211 14:12:08.046144 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndbcluster-nb-etc-ovn\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"e96be66c-07f2-47c0-a784-6af473c8a2a8\" (UID: \"e96be66c-07f2-47c0-a784-6af473c8a2a8\") " Dec 11 14:12:08 crc kubenswrapper[5050]: I1211 14:12:08.046204 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e96be66c-07f2-47c0-a784-6af473c8a2a8-config\") pod \"e96be66c-07f2-47c0-a784-6af473c8a2a8\" (UID: \"e96be66c-07f2-47c0-a784-6af473c8a2a8\") " Dec 11 14:12:08 crc kubenswrapper[5050]: I1211 14:12:08.046272 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e96be66c-07f2-47c0-a784-6af473c8a2a8-combined-ca-bundle\") pod \"e96be66c-07f2-47c0-a784-6af473c8a2a8\" (UID: \"e96be66c-07f2-47c0-a784-6af473c8a2a8\") " Dec 11 14:12:08 crc kubenswrapper[5050]: I1211 14:12:08.046395 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2396be70-52b5-4a91-b8f8-463803fcc4d0-combined-ca-bundle\") pod \"2396be70-52b5-4a91-b8f8-463803fcc4d0\" (UID: \"2396be70-52b5-4a91-b8f8-463803fcc4d0\") " Dec 11 14:12:08 crc kubenswrapper[5050]: I1211 14:12:08.046458 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vhkkd\" (UniqueName: \"kubernetes.io/projected/e96be66c-07f2-47c0-a784-6af473c8a2a8-kube-api-access-vhkkd\") pod \"e96be66c-07f2-47c0-a784-6af473c8a2a8\" (UID: \"e96be66c-07f2-47c0-a784-6af473c8a2a8\") " Dec 11 14:12:08 crc kubenswrapper[5050]: I1211 14:12:08.046484 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/2396be70-52b5-4a91-b8f8-463803fcc4d0-openstack-config\") pod \"2396be70-52b5-4a91-b8f8-463803fcc4d0\" (UID: \"2396be70-52b5-4a91-b8f8-463803fcc4d0\") " Dec 11 14:12:08 crc kubenswrapper[5050]: I1211 14:12:08.046562 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/e96be66c-07f2-47c0-a784-6af473c8a2a8-ovsdb-rundir\") pod \"e96be66c-07f2-47c0-a784-6af473c8a2a8\" (UID: \"e96be66c-07f2-47c0-a784-6af473c8a2a8\") " Dec 11 14:12:08 crc kubenswrapper[5050]: I1211 14:12:08.046627 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e96be66c-07f2-47c0-a784-6af473c8a2a8-scripts\") pod \"e96be66c-07f2-47c0-a784-6af473c8a2a8\" (UID: \"e96be66c-07f2-47c0-a784-6af473c8a2a8\") " Dec 11 14:12:08 crc kubenswrapper[5050]: I1211 14:12:08.046734 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-clgnq\" (UniqueName: \"kubernetes.io/projected/2396be70-52b5-4a91-b8f8-463803fcc4d0-kube-api-access-clgnq\") pod \"2396be70-52b5-4a91-b8f8-463803fcc4d0\" (UID: \"2396be70-52b5-4a91-b8f8-463803fcc4d0\") " Dec 11 14:12:08 crc kubenswrapper[5050]: I1211 14:12:08.046777 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/e96be66c-07f2-47c0-a784-6af473c8a2a8-metrics-certs-tls-certs\") pod \"e96be66c-07f2-47c0-a784-6af473c8a2a8\" 
(UID: \"e96be66c-07f2-47c0-a784-6af473c8a2a8\") " Dec 11 14:12:08 crc kubenswrapper[5050]: I1211 14:12:08.046818 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e96be66c-07f2-47c0-a784-6af473c8a2a8-ovsdbserver-nb-tls-certs\") pod \"e96be66c-07f2-47c0-a784-6af473c8a2a8\" (UID: \"e96be66c-07f2-47c0-a784-6af473c8a2a8\") " Dec 11 14:12:08 crc kubenswrapper[5050]: I1211 14:12:08.046851 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/2396be70-52b5-4a91-b8f8-463803fcc4d0-openstack-config-secret\") pod \"2396be70-52b5-4a91-b8f8-463803fcc4d0\" (UID: \"2396be70-52b5-4a91-b8f8-463803fcc4d0\") " Dec 11 14:12:08 crc kubenswrapper[5050]: I1211 14:12:08.047608 5050 reconciler_common.go:293] "Volume detached for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c928931c-d49d-41dc-9181-11d856ed3bd0-ovsdb-rundir\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:08 crc kubenswrapper[5050]: I1211 14:12:08.047633 5050 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/01fa4d89-aae5-451a-8798-2700053fe3d4-var-run-ovn\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:08 crc kubenswrapper[5050]: I1211 14:12:08.047646 5050 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/5371a32d-3998-4ddc-93d6-27e9afdb9712-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:08 crc kubenswrapper[5050]: I1211 14:12:08.047673 5050 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Dec 11 14:12:08 crc kubenswrapper[5050]: I1211 14:12:08.047686 5050 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c928931c-d49d-41dc-9181-11d856ed3bd0-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:08 crc kubenswrapper[5050]: I1211 14:12:08.047700 5050 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/01fa4d89-aae5-451a-8798-2700053fe3d4-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:08 crc kubenswrapper[5050]: I1211 14:12:08.047713 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7p25b\" (UniqueName: \"kubernetes.io/projected/c443a35b-44e5-495f-b23b-75ff35319194-kube-api-access-7p25b\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:08 crc kubenswrapper[5050]: I1211 14:12:08.047726 5050 reconciler_common.go:293] "Volume detached for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/5371a32d-3998-4ddc-93d6-27e9afdb9712-ovn-northd-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:08 crc kubenswrapper[5050]: I1211 14:12:08.047740 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kxnd5\" (UniqueName: \"kubernetes.io/projected/01fa4d89-aae5-451a-8798-2700053fe3d4-kube-api-access-kxnd5\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:08 crc kubenswrapper[5050]: I1211 14:12:08.047752 5050 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/01fa4d89-aae5-451a-8798-2700053fe3d4-var-log-ovn\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:08 crc kubenswrapper[5050]: I1211 14:12:08.047763 5050 
reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c928931c-d49d-41dc-9181-11d856ed3bd0-config\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:08 crc kubenswrapper[5050]: I1211 14:12:08.047775 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c9gv9\" (UniqueName: \"kubernetes.io/projected/c928931c-d49d-41dc-9181-11d856ed3bd0-kube-api-access-c9gv9\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:08 crc kubenswrapper[5050]: I1211 14:12:08.050601 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e96be66c-07f2-47c0-a784-6af473c8a2a8-config" (OuterVolumeSpecName: "config") pod "e96be66c-07f2-47c0-a784-6af473c8a2a8" (UID: "e96be66c-07f2-47c0-a784-6af473c8a2a8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 14:12:08 crc kubenswrapper[5050]: I1211 14:12:08.052187 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e96be66c-07f2-47c0-a784-6af473c8a2a8-ovsdb-rundir" (OuterVolumeSpecName: "ovsdb-rundir") pod "e96be66c-07f2-47c0-a784-6af473c8a2a8" (UID: "e96be66c-07f2-47c0-a784-6af473c8a2a8"). InnerVolumeSpecName "ovsdb-rundir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 14:12:08 crc kubenswrapper[5050]: I1211 14:12:08.064501 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-fcd4b466f-vsss4" Dec 11 14:12:08 crc kubenswrapper[5050]: I1211 14:12:08.065612 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e96be66c-07f2-47c0-a784-6af473c8a2a8-scripts" (OuterVolumeSpecName: "scripts") pod "e96be66c-07f2-47c0-a784-6af473c8a2a8" (UID: "e96be66c-07f2-47c0-a784-6af473c8a2a8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 14:12:08 crc kubenswrapper[5050]: I1211 14:12:08.069226 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58cdcd05-e81a-4ed4-8357-249649b17449-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "58cdcd05-e81a-4ed4-8357-249649b17449" (UID: "58cdcd05-e81a-4ed4-8357-249649b17449"). InnerVolumeSpecName "metrics-certs-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:12:08 crc kubenswrapper[5050]: I1211 14:12:08.076677 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2396be70-52b5-4a91-b8f8-463803fcc4d0-kube-api-access-clgnq" (OuterVolumeSpecName: "kube-api-access-clgnq") pod "2396be70-52b5-4a91-b8f8-463803fcc4d0" (UID: "2396be70-52b5-4a91-b8f8-463803fcc4d0"). InnerVolumeSpecName "kube-api-access-clgnq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 14:12:08 crc kubenswrapper[5050]: I1211 14:12:08.082418 5050 generic.go:334] "Generic (PLEG): container finished" podID="1eb418aa-1d3c-469c-8ff4-2b3c86a71e97" containerID="9ed3b2343fe57f3204bfcc9d8b8b6ffb7c52336371d620cd7dace42eedff80a0" exitCode=143 Dec 11 14:12:08 crc kubenswrapper[5050]: I1211 14:12:08.082511 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1eb418aa-1d3c-469c-8ff4-2b3c86a71e97","Type":"ContainerDied","Data":"9ed3b2343fe57f3204bfcc9d8b8b6ffb7c52336371d620cd7dace42eedff80a0"} Dec 11 14:12:08 crc kubenswrapper[5050]: I1211 14:12:08.089330 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e96be66c-07f2-47c0-a784-6af473c8a2a8-kube-api-access-vhkkd" (OuterVolumeSpecName: "kube-api-access-vhkkd") pod "e96be66c-07f2-47c0-a784-6af473c8a2a8" (UID: "e96be66c-07f2-47c0-a784-6af473c8a2a8"). InnerVolumeSpecName "kube-api-access-vhkkd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 14:12:08 crc kubenswrapper[5050]: I1211 14:12:08.090713 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "ovndbcluster-nb-etc-ovn") pod "e96be66c-07f2-47c0-a784-6af473c8a2a8" (UID: "e96be66c-07f2-47c0-a784-6af473c8a2a8"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 11 14:12:08 crc kubenswrapper[5050]: I1211 14:12:08.127000 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_5371a32d-3998-4ddc-93d6-27e9afdb9712/ovn-northd/0.log" Dec 11 14:12:08 crc kubenswrapper[5050]: I1211 14:12:08.127405 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Dec 11 14:12:08 crc kubenswrapper[5050]: I1211 14:12:08.129079 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"5371a32d-3998-4ddc-93d6-27e9afdb9712","Type":"ContainerDied","Data":"9060d643ec7e71ef960f3be115d349c7b177a56db9bd83b2fa67c2629f764c76"} Dec 11 14:12:08 crc kubenswrapper[5050]: I1211 14:12:08.141355 5050 scope.go:117] "RemoveContainer" containerID="93fdd4d5510bd166ab18bedc5b8b3ae3ef5a610b722c510e999ff481726b0345" Dec 11 14:12:08 crc kubenswrapper[5050]: I1211 14:12:08.169406 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cffff412-bf3c-4739-8bb8-3d099c8c83fe-combined-ca-bundle\") pod \"cffff412-bf3c-4739-8bb8-3d099c8c83fe\" (UID: \"cffff412-bf3c-4739-8bb8-3d099c8c83fe\") " Dec 11 14:12:08 crc kubenswrapper[5050]: I1211 14:12:08.169584 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/cffff412-bf3c-4739-8bb8-3d099c8c83fe-etc-swift\") pod \"cffff412-bf3c-4739-8bb8-3d099c8c83fe\" (UID: \"cffff412-bf3c-4739-8bb8-3d099c8c83fe\") " Dec 11 14:12:08 crc kubenswrapper[5050]: I1211 14:12:08.169649 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cffff412-bf3c-4739-8bb8-3d099c8c83fe-config-data\") pod \"cffff412-bf3c-4739-8bb8-3d099c8c83fe\" (UID: \"cffff412-bf3c-4739-8bb8-3d099c8c83fe\") " Dec 11 14:12:08 crc kubenswrapper[5050]: I1211 14:12:08.169689 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b6h8b\" (UniqueName: \"kubernetes.io/projected/cffff412-bf3c-4739-8bb8-3d099c8c83fe-kube-api-access-b6h8b\") pod \"cffff412-bf3c-4739-8bb8-3d099c8c83fe\" (UID: \"cffff412-bf3c-4739-8bb8-3d099c8c83fe\") " Dec 11 14:12:08 crc kubenswrapper[5050]: I1211 14:12:08.169908 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cffff412-bf3c-4739-8bb8-3d099c8c83fe-internal-tls-certs\") pod \"cffff412-bf3c-4739-8bb8-3d099c8c83fe\" (UID: \"cffff412-bf3c-4739-8bb8-3d099c8c83fe\") " Dec 11 14:12:08 crc kubenswrapper[5050]: I1211 14:12:08.169943 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cffff412-bf3c-4739-8bb8-3d099c8c83fe-public-tls-certs\") pod \"cffff412-bf3c-4739-8bb8-3d099c8c83fe\" (UID: \"cffff412-bf3c-4739-8bb8-3d099c8c83fe\") " Dec 11 14:12:08 crc kubenswrapper[5050]: I1211 14:12:08.169979 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cffff412-bf3c-4739-8bb8-3d099c8c83fe-run-httpd\") pod \"cffff412-bf3c-4739-8bb8-3d099c8c83fe\" (UID: \"cffff412-bf3c-4739-8bb8-3d099c8c83fe\") " Dec 11 14:12:08 crc kubenswrapper[5050]: I1211 14:12:08.170100 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cffff412-bf3c-4739-8bb8-3d099c8c83fe-log-httpd\") pod \"cffff412-bf3c-4739-8bb8-3d099c8c83fe\" (UID: \"cffff412-bf3c-4739-8bb8-3d099c8c83fe\") " Dec 11 14:12:08 crc kubenswrapper[5050]: I1211 14:12:08.171569 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Dec 11 14:12:08 crc kubenswrapper[5050]: I1211 14:12:08.178262 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cffff412-bf3c-4739-8bb8-3d099c8c83fe-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "cffff412-bf3c-4739-8bb8-3d099c8c83fe" (UID: "cffff412-bf3c-4739-8bb8-3d099c8c83fe"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 14:12:08 crc kubenswrapper[5050]: I1211 14:12:08.179627 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cffff412-bf3c-4739-8bb8-3d099c8c83fe-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "cffff412-bf3c-4739-8bb8-3d099c8c83fe" (UID: "cffff412-bf3c-4739-8bb8-3d099c8c83fe"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 14:12:08 crc kubenswrapper[5050]: I1211 14:12:08.180129 5050 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e96be66c-07f2-47c0-a784-6af473c8a2a8-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:08 crc kubenswrapper[5050]: I1211 14:12:08.180154 5050 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/58cdcd05-e81a-4ed4-8357-249649b17449-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:08 crc kubenswrapper[5050]: I1211 14:12:08.180170 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-clgnq\" (UniqueName: \"kubernetes.io/projected/2396be70-52b5-4a91-b8f8-463803fcc4d0-kube-api-access-clgnq\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:08 crc kubenswrapper[5050]: I1211 14:12:08.180186 5050 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cffff412-bf3c-4739-8bb8-3d099c8c83fe-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:08 crc kubenswrapper[5050]: I1211 14:12:08.180199 5050 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cffff412-bf3c-4739-8bb8-3d099c8c83fe-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:08 crc kubenswrapper[5050]: I1211 14:12:08.180227 5050 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Dec 11 14:12:08 crc kubenswrapper[5050]: I1211 14:12:08.180426 5050 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e96be66c-07f2-47c0-a784-6af473c8a2a8-config\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:08 crc kubenswrapper[5050]: I1211 14:12:08.180441 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vhkkd\" (UniqueName: \"kubernetes.io/projected/e96be66c-07f2-47c0-a784-6af473c8a2a8-kube-api-access-vhkkd\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:08 crc kubenswrapper[5050]: I1211 14:12:08.180465 5050 reconciler_common.go:293] "Volume detached for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/e96be66c-07f2-47c0-a784-6af473c8a2a8-ovsdb-rundir\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:08 crc kubenswrapper[5050]: I1211 14:12:08.197499 5050 generic.go:334] "Generic (PLEG): container finished" podID="934cae9e-c75b-434d-b1e1-d566d6fb8b7d" containerID="955a0ee0c9eed128222ddf5d6dedbc74a4c5d1d3bcc7732f13e94db5162a8ca2" exitCode=143 
Dec 11 14:12:08 crc kubenswrapper[5050]: I1211 14:12:08.197592 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-54bb9c4d69-975sg" event={"ID":"934cae9e-c75b-434d-b1e1-d566d6fb8b7d","Type":"ContainerDied","Data":"955a0ee0c9eed128222ddf5d6dedbc74a4c5d1d3bcc7732f13e94db5162a8ca2"} Dec 11 14:12:08 crc kubenswrapper[5050]: I1211 14:12:08.202228 5050 generic.go:334] "Generic (PLEG): container finished" podID="4ad4414f-ca3e-4ff4-9e2a-3ab029df2ebf" containerID="887182e7bdf510cf5f8d29d8def14429f4899834fa471d481e28b9675086a309" exitCode=143 Dec 11 14:12:08 crc kubenswrapper[5050]: I1211 14:12:08.202329 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-57f899fb58-v2lwj" event={"ID":"4ad4414f-ca3e-4ff4-9e2a-3ab029df2ebf","Type":"ContainerDied","Data":"887182e7bdf510cf5f8d29d8def14429f4899834fa471d481e28b9675086a309"} Dec 11 14:12:08 crc kubenswrapper[5050]: I1211 14:12:08.205462 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-7gmrp_58cdcd05-e81a-4ed4-8357-249649b17449/openstack-network-exporter/0.log" Dec 11 14:12:08 crc kubenswrapper[5050]: I1211 14:12:08.205570 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-7gmrp" event={"ID":"58cdcd05-e81a-4ed4-8357-249649b17449","Type":"ContainerDied","Data":"70b268aba91e4e02de538365adb4c126705681e130601bd8739cdafa467c2a68"} Dec 11 14:12:08 crc kubenswrapper[5050]: I1211 14:12:08.205597 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-7gmrp" Dec 11 14:12:08 crc kubenswrapper[5050]: I1211 14:12:08.208128 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cffff412-bf3c-4739-8bb8-3d099c8c83fe-kube-api-access-b6h8b" (OuterVolumeSpecName: "kube-api-access-b6h8b") pod "cffff412-bf3c-4739-8bb8-3d099c8c83fe" (UID: "cffff412-bf3c-4739-8bb8-3d099c8c83fe"). InnerVolumeSpecName "kube-api-access-b6h8b". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 14:12:08 crc kubenswrapper[5050]: I1211 14:12:08.218983 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_c928931c-d49d-41dc-9181-11d856ed3bd0/ovsdbserver-sb/0.log" Dec 11 14:12:08 crc kubenswrapper[5050]: I1211 14:12:08.219407 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Dec 11 14:12:08 crc kubenswrapper[5050]: I1211 14:12:08.220689 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"c928931c-d49d-41dc-9181-11d856ed3bd0","Type":"ContainerDied","Data":"1cea5ee03c4b98e2dd708e2afdf4fa98c7bbe1a680f5dc49724ce2f1716f81ee"} Dec 11 14:12:08 crc kubenswrapper[5050]: I1211 14:12:08.227341 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cffff412-bf3c-4739-8bb8-3d099c8c83fe-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "cffff412-bf3c-4739-8bb8-3d099c8c83fe" (UID: "cffff412-bf3c-4739-8bb8-3d099c8c83fe"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 14:12:08 crc kubenswrapper[5050]: I1211 14:12:08.229674 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e96be66c-07f2-47c0-a784-6af473c8a2a8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e96be66c-07f2-47c0-a784-6af473c8a2a8" (UID: "e96be66c-07f2-47c0-a784-6af473c8a2a8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:12:08 crc kubenswrapper[5050]: I1211 14:12:08.241075 5050 generic.go:334] "Generic (PLEG): container finished" podID="3386ffea-45ca-41e8-9aa5-61a2923a3394" containerID="26578d5a108fa4d1bfd89b54489a03a4c1c636d76f83751795e52594a63ff439" exitCode=143 Dec 11 14:12:08 crc kubenswrapper[5050]: I1211 14:12:08.241161 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3386ffea-45ca-41e8-9aa5-61a2923a3394","Type":"ContainerDied","Data":"26578d5a108fa4d1bfd89b54489a03a4c1c636d76f83751795e52594a63ff439"} Dec 11 14:12:08 crc kubenswrapper[5050]: I1211 14:12:08.243588 5050 generic.go:334] "Generic (PLEG): container finished" podID="569cb143-086a-42f1-9e8c-6f6f614c9ee2" containerID="d11f1570a4983e90360fd498bdee9b19f208c7f5acd61496d60bf9cadd7bc16f" exitCode=143 Dec 11 14:12:08 crc kubenswrapper[5050]: I1211 14:12:08.243650 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-655647566b-n2tcs" event={"ID":"569cb143-086a-42f1-9e8c-6f6f614c9ee2","Type":"ContainerDied","Data":"d11f1570a4983e90360fd498bdee9b19f208c7f5acd61496d60bf9cadd7bc16f"} Dec 11 14:12:08 crc kubenswrapper[5050]: I1211 14:12:08.245084 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fcd6f8f8f-b59kh" event={"ID":"c443a35b-44e5-495f-b23b-75ff35319194","Type":"ContainerDied","Data":"6a70c42a01f461d0f0d8ee21f2bb944842009eb72f63c5ed2a6307203f9e4767"} Dec 11 14:12:08 crc kubenswrapper[5050]: I1211 14:12:08.245166 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-fcd6f8f8f-b59kh" Dec 11 14:12:08 crc kubenswrapper[5050]: I1211 14:12:08.285740 5050 generic.go:334] "Generic (PLEG): container finished" podID="29a26d59-027f-428e-928e-12222b61a350" containerID="3e6132bd898662eb15caae20bc63d62858df7ed7da6bd64261b666f48768ec52" exitCode=0 Dec 11 14:12:08 crc kubenswrapper[5050]: I1211 14:12:08.285957 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"29a26d59-027f-428e-928e-12222b61a350","Type":"ContainerDied","Data":"3e6132bd898662eb15caae20bc63d62858df7ed7da6bd64261b666f48768ec52"} Dec 11 14:12:08 crc kubenswrapper[5050]: I1211 14:12:08.291792 5050 generic.go:334] "Generic (PLEG): container finished" podID="cffff412-bf3c-4739-8bb8-3d099c8c83fe" containerID="0834c0f817c98ccee2d1fc466612fef74fe94fe1c05828492e898fa4014682d9" exitCode=0 Dec 11 14:12:08 crc kubenswrapper[5050]: I1211 14:12:08.291838 5050 generic.go:334] "Generic (PLEG): container finished" podID="cffff412-bf3c-4739-8bb8-3d099c8c83fe" containerID="bab4849e060d80aa6504ac4459a8bc36b1517d2feed47ced5829f252e6c45900" exitCode=0 Dec 11 14:12:08 crc kubenswrapper[5050]: I1211 14:12:08.291891 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-fcd4b466f-vsss4" event={"ID":"cffff412-bf3c-4739-8bb8-3d099c8c83fe","Type":"ContainerDied","Data":"0834c0f817c98ccee2d1fc466612fef74fe94fe1c05828492e898fa4014682d9"} Dec 11 14:12:08 crc kubenswrapper[5050]: I1211 14:12:08.291932 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-fcd4b466f-vsss4" event={"ID":"cffff412-bf3c-4739-8bb8-3d099c8c83fe","Type":"ContainerDied","Data":"bab4849e060d80aa6504ac4459a8bc36b1517d2feed47ced5829f252e6c45900"} Dec 11 14:12:08 crc kubenswrapper[5050]: I1211 14:12:08.292002 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-fcd4b466f-vsss4" Dec 11 14:12:08 crc kubenswrapper[5050]: I1211 14:12:08.296833 5050 generic.go:334] "Generic (PLEG): container finished" podID="0ca28ba4-2b37-4836-9d51-8dea84046163" containerID="c999df53c600f82fa92bc84444337d1373326ba3f2b76682afad53362cb34c3d" exitCode=0 Dec 11 14:12:08 crc kubenswrapper[5050]: I1211 14:12:08.296883 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"0ca28ba4-2b37-4836-9d51-8dea84046163","Type":"ContainerDied","Data":"c999df53c600f82fa92bc84444337d1373326ba3f2b76682afad53362cb34c3d"} Dec 11 14:12:08 crc kubenswrapper[5050]: I1211 14:12:08.298889 5050 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e96be66c-07f2-47c0-a784-6af473c8a2a8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:08 crc kubenswrapper[5050]: I1211 14:12:08.298936 5050 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/cffff412-bf3c-4739-8bb8-3d099c8c83fe-etc-swift\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:08 crc kubenswrapper[5050]: I1211 14:12:08.298948 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b6h8b\" (UniqueName: \"kubernetes.io/projected/cffff412-bf3c-4739-8bb8-3d099c8c83fe-kube-api-access-b6h8b\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:08 crc kubenswrapper[5050]: E1211 14:12:08.299120 5050 secret.go:188] Couldn't get secret openstack/barbican-api-config-data: secret "barbican-api-config-data" not found Dec 11 14:12:08 crc kubenswrapper[5050]: E1211 14:12:08.299220 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4ad4414f-ca3e-4ff4-9e2a-3ab029df2ebf-config-data-custom podName:4ad4414f-ca3e-4ff4-9e2a-3ab029df2ebf nodeName:}" failed. No retries permitted until 2025-12-11 14:12:10.299177983 +0000 UTC m=+1421.142900569 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-data-custom" (UniqueName: "kubernetes.io/secret/4ad4414f-ca3e-4ff4-9e2a-3ab029df2ebf-config-data-custom") pod "barbican-api-57f899fb58-v2lwj" (UID: "4ad4414f-ca3e-4ff4-9e2a-3ab029df2ebf") : secret "barbican-api-config-data" not found Dec 11 14:12:08 crc kubenswrapper[5050]: E1211 14:12:08.299257 5050 secret.go:188] Couldn't get secret openstack/barbican-config-data: secret "barbican-config-data" not found Dec 11 14:12:08 crc kubenswrapper[5050]: E1211 14:12:08.299340 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4ad4414f-ca3e-4ff4-9e2a-3ab029df2ebf-config-data podName:4ad4414f-ca3e-4ff4-9e2a-3ab029df2ebf nodeName:}" failed. No retries permitted until 2025-12-11 14:12:10.299311856 +0000 UTC m=+1421.143034632 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/4ad4414f-ca3e-4ff4-9e2a-3ab029df2ebf-config-data") pod "barbican-api-57f899fb58-v2lwj" (UID: "4ad4414f-ca3e-4ff4-9e2a-3ab029df2ebf") : secret "barbican-config-data" not found Dec 11 14:12:08 crc kubenswrapper[5050]: I1211 14:12:08.342262 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01fa4d89-aae5-451a-8798-2700053fe3d4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "01fa4d89-aae5-451a-8798-2700053fe3d4" (UID: "01fa4d89-aae5-451a-8798-2700053fe3d4"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:12:08 crc kubenswrapper[5050]: I1211 14:12:08.346095 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c928931c-d49d-41dc-9181-11d856ed3bd0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c928931c-d49d-41dc-9181-11d856ed3bd0" (UID: "c928931c-d49d-41dc-9181-11d856ed3bd0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:12:08 crc kubenswrapper[5050]: I1211 14:12:08.346759 5050 generic.go:334] "Generic (PLEG): container finished" podID="88b4966d-124b-4cf4-b52b-704955059220" containerID="5ce7ebe2679d23b41ab50f0b7d865ea6308f725963cbd7e3ed9e1c34f506375c" exitCode=0 Dec 11 14:12:08 crc kubenswrapper[5050]: I1211 14:12:08.346854 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-pjzpq" event={"ID":"88b4966d-124b-4cf4-b52b-704955059220","Type":"ContainerDied","Data":"5ce7ebe2679d23b41ab50f0b7d865ea6308f725963cbd7e3ed9e1c34f506375c"} Dec 11 14:12:08 crc kubenswrapper[5050]: I1211 14:12:08.392401 5050 generic.go:334] "Generic (PLEG): container finished" podID="3e5dbd8a-7796-49d6-b2e3-23f15d35b7f8" containerID="09344e4a97a8ab6a4dd14bdb19ffc7b857d5e251f7e429a179a74c21dc2fac38" exitCode=0 Dec 11 14:12:08 crc kubenswrapper[5050]: I1211 14:12:08.392733 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z72zl" event={"ID":"3e5dbd8a-7796-49d6-b2e3-23f15d35b7f8","Type":"ContainerDied","Data":"09344e4a97a8ab6a4dd14bdb19ffc7b857d5e251f7e429a179a74c21dc2fac38"} Dec 11 14:12:08 crc kubenswrapper[5050]: I1211 14:12:08.401391 5050 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c928931c-d49d-41dc-9181-11d856ed3bd0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:08 crc kubenswrapper[5050]: I1211 14:12:08.401426 5050 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01fa4d89-aae5-451a-8798-2700053fe3d4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:08 crc kubenswrapper[5050]: I1211 14:12:08.443371 5050 generic.go:334] "Generic (PLEG): container finished" podID="a5dabf50-534b-45cb-87db-45373930fe82" containerID="0cac73e478a996fa3e9d0714853b7480372b37e951d6e3e0667c3722790407c8" exitCode=0 Dec 11 14:12:08 crc kubenswrapper[5050]: I1211 14:12:08.451291 5050 generic.go:334] "Generic (PLEG): container finished" podID="a5dabf50-534b-45cb-87db-45373930fe82" containerID="a92c2e4e55be6c0dccf533363df9021ca510e9f14d1f5a908a2795582d914ca4" exitCode=0 Dec 11 14:12:08 crc kubenswrapper[5050]: I1211 14:12:08.451396 5050 generic.go:334] "Generic (PLEG): container finished" podID="a5dabf50-534b-45cb-87db-45373930fe82" containerID="69f5ff7e4ffed5e07ece2e747c39e17e11ce1252b75c01b5c3313338481c02f5" exitCode=0 Dec 11 14:12:08 crc kubenswrapper[5050]: I1211 14:12:08.451566 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/novacell1d6ec-account-delete-cwf97" Dec 11 14:12:08 crc kubenswrapper[5050]: I1211 14:12:08.450881 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a5dabf50-534b-45cb-87db-45373930fe82","Type":"ContainerDied","Data":"0cac73e478a996fa3e9d0714853b7480372b37e951d6e3e0667c3722790407c8"} Dec 11 14:12:08 crc kubenswrapper[5050]: I1211 14:12:08.451880 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a5dabf50-534b-45cb-87db-45373930fe82","Type":"ContainerDied","Data":"a92c2e4e55be6c0dccf533363df9021ca510e9f14d1f5a908a2795582d914ca4"} Dec 11 14:12:08 crc kubenswrapper[5050]: I1211 14:12:08.451946 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a5dabf50-534b-45cb-87db-45373930fe82","Type":"ContainerDied","Data":"69f5ff7e4ffed5e07ece2e747c39e17e11ce1252b75c01b5c3313338481c02f5"} Dec 11 14:12:08 crc kubenswrapper[5050]: I1211 14:12:08.465607 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c443a35b-44e5-495f-b23b-75ff35319194-config" (OuterVolumeSpecName: "config") pod "c443a35b-44e5-495f-b23b-75ff35319194" (UID: "c443a35b-44e5-495f-b23b-75ff35319194"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 14:12:08 crc kubenswrapper[5050]: I1211 14:12:08.490810 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2396be70-52b5-4a91-b8f8-463803fcc4d0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2396be70-52b5-4a91-b8f8-463803fcc4d0" (UID: "2396be70-52b5-4a91-b8f8-463803fcc4d0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:12:08 crc kubenswrapper[5050]: I1211 14:12:08.501650 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron7594-account-delete-8wjht"] Dec 11 14:12:08 crc kubenswrapper[5050]: I1211 14:12:08.504295 5050 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c443a35b-44e5-495f-b23b-75ff35319194-config\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:08 crc kubenswrapper[5050]: I1211 14:12:08.506069 5050 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2396be70-52b5-4a91-b8f8-463803fcc4d0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:08 crc kubenswrapper[5050]: I1211 14:12:08.550604 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c443a35b-44e5-495f-b23b-75ff35319194-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c443a35b-44e5-495f-b23b-75ff35319194" (UID: "c443a35b-44e5-495f-b23b-75ff35319194"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 14:12:08 crc kubenswrapper[5050]: I1211 14:12:08.552467 5050 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Dec 11 14:12:08 crc kubenswrapper[5050]: I1211 14:12:08.609087 5050 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:08 crc kubenswrapper[5050]: I1211 14:12:08.609777 5050 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c443a35b-44e5-495f-b23b-75ff35319194-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:08 crc kubenswrapper[5050]: I1211 14:12:08.609800 5050 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Dec 11 14:12:08 crc kubenswrapper[5050]: I1211 14:12:08.618630 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2396be70-52b5-4a91-b8f8-463803fcc4d0-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "2396be70-52b5-4a91-b8f8-463803fcc4d0" (UID: "2396be70-52b5-4a91-b8f8-463803fcc4d0"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 14:12:08 crc kubenswrapper[5050]: I1211 14:12:08.669410 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2396be70-52b5-4a91-b8f8-463803fcc4d0-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "2396be70-52b5-4a91-b8f8-463803fcc4d0" (UID: "2396be70-52b5-4a91-b8f8-463803fcc4d0"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:12:08 crc kubenswrapper[5050]: I1211 14:12:08.672800 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c443a35b-44e5-495f-b23b-75ff35319194-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "c443a35b-44e5-495f-b23b-75ff35319194" (UID: "c443a35b-44e5-495f-b23b-75ff35319194"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 14:12:08 crc kubenswrapper[5050]: I1211 14:12:08.688935 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c443a35b-44e5-495f-b23b-75ff35319194-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c443a35b-44e5-495f-b23b-75ff35319194" (UID: "c443a35b-44e5-495f-b23b-75ff35319194"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 14:12:08 crc kubenswrapper[5050]: I1211 14:12:08.716666 5050 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c443a35b-44e5-495f-b23b-75ff35319194-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:08 crc kubenswrapper[5050]: I1211 14:12:08.716708 5050 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/2396be70-52b5-4a91-b8f8-463803fcc4d0-openstack-config\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:08 crc kubenswrapper[5050]: I1211 14:12:08.716722 5050 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:08 crc kubenswrapper[5050]: I1211 14:12:08.716737 5050 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/2396be70-52b5-4a91-b8f8-463803fcc4d0-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:08 crc kubenswrapper[5050]: I1211 14:12:08.716748 5050 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c443a35b-44e5-495f-b23b-75ff35319194-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:08 crc kubenswrapper[5050]: E1211 14:12:08.718202 5050 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Dec 11 14:12:08 crc kubenswrapper[5050]: E1211 14:12:08.718261 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0891f075-8101-475b-b844-e7cb42a4990b-config-data podName:0891f075-8101-475b-b844-e7cb42a4990b nodeName:}" failed. No retries permitted until 2025-12-11 14:12:12.718236883 +0000 UTC m=+1423.561959469 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/0891f075-8101-475b-b844-e7cb42a4990b-config-data") pod "rabbitmq-server-0" (UID: "0891f075-8101-475b-b844-e7cb42a4990b") : configmap "rabbitmq-config-data" not found Dec 11 14:12:08 crc kubenswrapper[5050]: I1211 14:12:08.719376 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cffff412-bf3c-4739-8bb8-3d099c8c83fe-config-data" (OuterVolumeSpecName: "config-data") pod "cffff412-bf3c-4739-8bb8-3d099c8c83fe" (UID: "cffff412-bf3c-4739-8bb8-3d099c8c83fe"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:12:08 crc kubenswrapper[5050]: I1211 14:12:08.727613 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cffff412-bf3c-4739-8bb8-3d099c8c83fe-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cffff412-bf3c-4739-8bb8-3d099c8c83fe" (UID: "cffff412-bf3c-4739-8bb8-3d099c8c83fe"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:12:08 crc kubenswrapper[5050]: I1211 14:12:08.739213 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cffff412-bf3c-4739-8bb8-3d099c8c83fe-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "cffff412-bf3c-4739-8bb8-3d099c8c83fe" (UID: "cffff412-bf3c-4739-8bb8-3d099c8c83fe"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:12:08 crc kubenswrapper[5050]: I1211 14:12:08.750558 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c928931c-d49d-41dc-9181-11d856ed3bd0-ovsdbserver-sb-tls-certs" (OuterVolumeSpecName: "ovsdbserver-sb-tls-certs") pod "c928931c-d49d-41dc-9181-11d856ed3bd0" (UID: "c928931c-d49d-41dc-9181-11d856ed3bd0"). InnerVolumeSpecName "ovsdbserver-sb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:12:08 crc kubenswrapper[5050]: I1211 14:12:08.753126 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cffff412-bf3c-4739-8bb8-3d099c8c83fe-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "cffff412-bf3c-4739-8bb8-3d099c8c83fe" (UID: "cffff412-bf3c-4739-8bb8-3d099c8c83fe"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:12:08 crc kubenswrapper[5050]: I1211 14:12:08.753971 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01fa4d89-aae5-451a-8798-2700053fe3d4-ovn-controller-tls-certs" (OuterVolumeSpecName: "ovn-controller-tls-certs") pod "01fa4d89-aae5-451a-8798-2700053fe3d4" (UID: "01fa4d89-aae5-451a-8798-2700053fe3d4"). InnerVolumeSpecName "ovn-controller-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:12:08 crc kubenswrapper[5050]: I1211 14:12:08.754879 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c443a35b-44e5-495f-b23b-75ff35319194-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c443a35b-44e5-495f-b23b-75ff35319194" (UID: "c443a35b-44e5-495f-b23b-75ff35319194"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 14:12:08 crc kubenswrapper[5050]: I1211 14:12:08.757339 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c928931c-d49d-41dc-9181-11d856ed3bd0-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "c928931c-d49d-41dc-9181-11d856ed3bd0" (UID: "c928931c-d49d-41dc-9181-11d856ed3bd0"). InnerVolumeSpecName "metrics-certs-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:12:08 crc kubenswrapper[5050]: I1211 14:12:08.764202 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e96be66c-07f2-47c0-a784-6af473c8a2a8-ovsdbserver-nb-tls-certs" (OuterVolumeSpecName: "ovsdbserver-nb-tls-certs") pod "e96be66c-07f2-47c0-a784-6af473c8a2a8" (UID: "e96be66c-07f2-47c0-a784-6af473c8a2a8"). InnerVolumeSpecName "ovsdbserver-nb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:12:08 crc kubenswrapper[5050]: I1211 14:12:08.797064 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e96be66c-07f2-47c0-a784-6af473c8a2a8-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "e96be66c-07f2-47c0-a784-6af473c8a2a8" (UID: "e96be66c-07f2-47c0-a784-6af473c8a2a8"). InnerVolumeSpecName "metrics-certs-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:12:08 crc kubenswrapper[5050]: I1211 14:12:08.821184 5050 reconciler_common.go:293] "Volume detached for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/01fa4d89-aae5-451a-8798-2700053fe3d4-ovn-controller-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:08 crc kubenswrapper[5050]: I1211 14:12:08.821223 5050 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c443a35b-44e5-495f-b23b-75ff35319194-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:08 crc kubenswrapper[5050]: I1211 14:12:08.821233 5050 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cffff412-bf3c-4739-8bb8-3d099c8c83fe-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:08 crc kubenswrapper[5050]: I1211 14:12:08.821241 5050 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c928931c-d49d-41dc-9181-11d856ed3bd0-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:08 crc kubenswrapper[5050]: I1211 14:12:08.821250 5050 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cffff412-bf3c-4739-8bb8-3d099c8c83fe-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:08 crc kubenswrapper[5050]: I1211 14:12:08.821264 5050 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/e96be66c-07f2-47c0-a784-6af473c8a2a8-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:08 crc kubenswrapper[5050]: I1211 14:12:08.821272 5050 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e96be66c-07f2-47c0-a784-6af473c8a2a8-ovsdbserver-nb-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:08 crc kubenswrapper[5050]: I1211 14:12:08.821281 5050 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c928931c-d49d-41dc-9181-11d856ed3bd0-ovsdbserver-sb-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:08 crc kubenswrapper[5050]: I1211 14:12:08.821289 5050 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cffff412-bf3c-4739-8bb8-3d099c8c83fe-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:08 crc kubenswrapper[5050]: I1211 14:12:08.821299 5050 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cffff412-bf3c-4739-8bb8-3d099c8c83fe-public-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:08 crc kubenswrapper[5050]: I1211 14:12:08.910845 5050 scope.go:117] "RemoveContainer" containerID="c612f0ddd6bdd2093ab8086d9662d761b9c9ab6d418a4c7403db013b23c7a269" Dec 11 14:12:08 crc kubenswrapper[5050]: I1211 14:12:08.926660 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/novacell1d6ec-account-delete-cwf97" Dec 11 14:12:08 crc kubenswrapper[5050]: E1211 14:12:08.953059 5050 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3713fc357ee4f2a6f119e91ece32b1dd727d67185d218ebb887b345bbd9015d1" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Dec 11 14:12:08 crc kubenswrapper[5050]: E1211 14:12:08.953264 5050 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5ce7ebe2679d23b41ab50f0b7d865ea6308f725963cbd7e3ed9e1c34f506375c is running failed: container process not found" containerID="5ce7ebe2679d23b41ab50f0b7d865ea6308f725963cbd7e3ed9e1c34f506375c" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Dec 11 14:12:08 crc kubenswrapper[5050]: E1211 14:12:08.965319 5050 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3713fc357ee4f2a6f119e91ece32b1dd727d67185d218ebb887b345bbd9015d1" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Dec 11 14:12:08 crc kubenswrapper[5050]: E1211 14:12:08.965398 5050 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5ce7ebe2679d23b41ab50f0b7d865ea6308f725963cbd7e3ed9e1c34f506375c is running failed: container process not found" containerID="5ce7ebe2679d23b41ab50f0b7d865ea6308f725963cbd7e3ed9e1c34f506375c" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Dec 11 14:12:08 crc kubenswrapper[5050]: E1211 14:12:08.970217 5050 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5ce7ebe2679d23b41ab50f0b7d865ea6308f725963cbd7e3ed9e1c34f506375c is running failed: container process not found" containerID="5ce7ebe2679d23b41ab50f0b7d865ea6308f725963cbd7e3ed9e1c34f506375c" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Dec 11 14:12:08 crc kubenswrapper[5050]: E1211 14:12:08.970318 5050 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5ce7ebe2679d23b41ab50f0b7d865ea6308f725963cbd7e3ed9e1c34f506375c is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-pjzpq" podUID="88b4966d-124b-4cf4-b52b-704955059220" containerName="ovsdb-server" Dec 11 14:12:08 crc kubenswrapper[5050]: E1211 14:12:08.977432 5050 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3713fc357ee4f2a6f119e91ece32b1dd727d67185d218ebb887b345bbd9015d1" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Dec 11 14:12:08 crc kubenswrapper[5050]: E1211 14:12:08.977570 5050 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-pjzpq" podUID="88b4966d-124b-4cf4-b52b-704955059220" containerName="ovs-vswitchd" Dec 11 14:12:09 crc 
kubenswrapper[5050]: I1211 14:12:09.031917 5050 scope.go:117] "RemoveContainer" containerID="7f95d994fd5fc97f391f6f15efe0c185c18faac91b7536e24f460feb81c83897" Dec 11 14:12:09 crc kubenswrapper[5050]: I1211 14:12:09.078386 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 11 14:12:09 crc kubenswrapper[5050]: I1211 14:12:09.125828 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-proxy-fcd4b466f-vsss4"] Dec 11 14:12:09 crc kubenswrapper[5050]: I1211 14:12:09.146352 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-proxy-fcd4b466f-vsss4"] Dec 11 14:12:09 crc kubenswrapper[5050]: I1211 14:12:09.184184 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-metrics-7gmrp"] Dec 11 14:12:09 crc kubenswrapper[5050]: I1211 14:12:09.205340 5050 scope.go:117] "RemoveContainer" containerID="43b0ab8e8f0e3806e99b599e730fb8eac2bff4d265f8afa1bea527256b44c445" Dec 11 14:12:09 crc kubenswrapper[5050]: I1211 14:12:09.235766 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ca28ba4-2b37-4836-9d51-8dea84046163-combined-ca-bundle\") pod \"0ca28ba4-2b37-4836-9d51-8dea84046163\" (UID: \"0ca28ba4-2b37-4836-9d51-8dea84046163\") " Dec 11 14:12:09 crc kubenswrapper[5050]: I1211 14:12:09.235827 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ca28ba4-2b37-4836-9d51-8dea84046163-config-data\") pod \"0ca28ba4-2b37-4836-9d51-8dea84046163\" (UID: \"0ca28ba4-2b37-4836-9d51-8dea84046163\") " Dec 11 14:12:09 crc kubenswrapper[5050]: I1211 14:12:09.236445 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ca28ba4-2b37-4836-9d51-8dea84046163-vencrypt-tls-certs\") pod \"0ca28ba4-2b37-4836-9d51-8dea84046163\" (UID: \"0ca28ba4-2b37-4836-9d51-8dea84046163\") " Dec 11 14:12:09 crc kubenswrapper[5050]: I1211 14:12:09.236510 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ca28ba4-2b37-4836-9d51-8dea84046163-nova-novncproxy-tls-certs\") pod \"0ca28ba4-2b37-4836-9d51-8dea84046163\" (UID: \"0ca28ba4-2b37-4836-9d51-8dea84046163\") " Dec 11 14:12:09 crc kubenswrapper[5050]: I1211 14:12:09.236536 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-plhh6\" (UniqueName: \"kubernetes.io/projected/0ca28ba4-2b37-4836-9d51-8dea84046163-kube-api-access-plhh6\") pod \"0ca28ba4-2b37-4836-9d51-8dea84046163\" (UID: \"0ca28ba4-2b37-4836-9d51-8dea84046163\") " Dec 11 14:12:09 crc kubenswrapper[5050]: I1211 14:12:09.256191 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-metrics-7gmrp"] Dec 11 14:12:09 crc kubenswrapper[5050]: W1211 14:12:09.262392 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podca005c2d_f7de_486a_bbd6_a32443582833.slice/crio-2c85789ec5a0ae6495c4ef59c97eea63484bc9c142ec4843fc3fdc3c1aa20aeb WatchSource:0}: Error finding container 2c85789ec5a0ae6495c4ef59c97eea63484bc9c142ec4843fc3fdc3c1aa20aeb: Status 404 returned error can't find the container with id 2c85789ec5a0ae6495c4ef59c97eea63484bc9c142ec4843fc3fdc3c1aa20aeb Dec 11 14:12:09 crc 
kubenswrapper[5050]: I1211 14:12:09.291536 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder27b7-account-delete-h9wjn"] Dec 11 14:12:09 crc kubenswrapper[5050]: I1211 14:12:09.293430 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ca28ba4-2b37-4836-9d51-8dea84046163-kube-api-access-plhh6" (OuterVolumeSpecName: "kube-api-access-plhh6") pod "0ca28ba4-2b37-4836-9d51-8dea84046163" (UID: "0ca28ba4-2b37-4836-9d51-8dea84046163"). InnerVolumeSpecName "kube-api-access-plhh6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 14:12:09 crc kubenswrapper[5050]: W1211 14:12:09.316766 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd917f471_6630_4e96_a0e4_cbde631da4a8.slice/crio-b36256477183642b6fa08f2717a1915fe2ce3b93abc1181edc865af5e306f1f4 WatchSource:0}: Error finding container b36256477183642b6fa08f2717a1915fe2ce3b93abc1181edc865af5e306f1f4: Status 404 returned error can't find the container with id b36256477183642b6fa08f2717a1915fe2ce3b93abc1181edc865af5e306f1f4 Dec 11 14:12:09 crc kubenswrapper[5050]: I1211 14:12:09.346584 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-plhh6\" (UniqueName: \"kubernetes.io/projected/0ca28ba4-2b37-4836-9d51-8dea84046163-kube-api-access-plhh6\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:09 crc kubenswrapper[5050]: I1211 14:12:09.377488 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/novaapi4326-account-delete-9bmsz"] Dec 11 14:12:09 crc kubenswrapper[5050]: I1211 14:12:09.383324 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ca28ba4-2b37-4836-9d51-8dea84046163-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0ca28ba4-2b37-4836-9d51-8dea84046163" (UID: "0ca28ba4-2b37-4836-9d51-8dea84046163"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:12:09 crc kubenswrapper[5050]: I1211 14:12:09.387951 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/novacell0e20d-account-delete-ntvhn"] Dec 11 14:12:09 crc kubenswrapper[5050]: I1211 14:12:09.405214 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ca28ba4-2b37-4836-9d51-8dea84046163-config-data" (OuterVolumeSpecName: "config-data") pod "0ca28ba4-2b37-4836-9d51-8dea84046163" (UID: "0ca28ba4-2b37-4836-9d51-8dea84046163"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:12:09 crc kubenswrapper[5050]: W1211 14:12:09.409735 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1d484c84_7333_4701_a4f3_655c3d2cbfa7.slice/crio-e9083430e24059def49fce8b8dffea65371542e4f2d51b5504b0837c83c03e56 WatchSource:0}: Error finding container e9083430e24059def49fce8b8dffea65371542e4f2d51b5504b0837c83c03e56: Status 404 returned error can't find the container with id e9083430e24059def49fce8b8dffea65371542e4f2d51b5504b0837c83c03e56 Dec 11 14:12:09 crc kubenswrapper[5050]: I1211 14:12:09.427246 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-northd-0"] Dec 11 14:12:09 crc kubenswrapper[5050]: I1211 14:12:09.445591 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placementd2f2-account-delete-njqwh"] Dec 11 14:12:09 crc kubenswrapper[5050]: I1211 14:12:09.448953 5050 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ca28ba4-2b37-4836-9d51-8dea84046163-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:09 crc kubenswrapper[5050]: I1211 14:12:09.448983 5050 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ca28ba4-2b37-4836-9d51-8dea84046163-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:09 crc kubenswrapper[5050]: I1211 14:12:09.470519 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ca28ba4-2b37-4836-9d51-8dea84046163-nova-novncproxy-tls-certs" (OuterVolumeSpecName: "nova-novncproxy-tls-certs") pod "0ca28ba4-2b37-4836-9d51-8dea84046163" (UID: "0ca28ba4-2b37-4836-9d51-8dea84046163"). InnerVolumeSpecName "nova-novncproxy-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:12:09 crc kubenswrapper[5050]: I1211 14:12:09.493933 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ca28ba4-2b37-4836-9d51-8dea84046163-vencrypt-tls-certs" (OuterVolumeSpecName: "vencrypt-tls-certs") pod "0ca28ba4-2b37-4836-9d51-8dea84046163" (UID: "0ca28ba4-2b37-4836-9d51-8dea84046163"). InnerVolumeSpecName "vencrypt-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:12:09 crc kubenswrapper[5050]: I1211 14:12:09.502392 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-northd-0"] Dec 11 14:12:09 crc kubenswrapper[5050]: I1211 14:12:09.513718 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placementd2f2-account-delete-njqwh" event={"ID":"bc8efd61-e4fb-4ec0-834a-b495797039a1","Type":"ContainerStarted","Data":"a780d8823ffe3c5dd9906301489f93f6eb3e416a0b447863f72599192f111c37"} Dec 11 14:12:09 crc kubenswrapper[5050]: I1211 14:12:09.520001 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican34ac-account-delete-cd525"] Dec 11 14:12:09 crc kubenswrapper[5050]: I1211 14:12:09.531118 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-fcd6f8f8f-b59kh"] Dec 11 14:12:09 crc kubenswrapper[5050]: I1211 14:12:09.545056 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-fcd6f8f8f-b59kh"] Dec 11 14:12:09 crc kubenswrapper[5050]: I1211 14:12:09.557689 5050 reconciler_common.go:293] "Volume detached for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ca28ba4-2b37-4836-9d51-8dea84046163-vencrypt-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:09 crc kubenswrapper[5050]: I1211 14:12:09.557719 5050 reconciler_common.go:293] "Volume detached for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ca28ba4-2b37-4836-9d51-8dea84046163-nova-novncproxy-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:09 crc kubenswrapper[5050]: I1211 14:12:09.576499 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2396be70-52b5-4a91-b8f8-463803fcc4d0" path="/var/lib/kubelet/pods/2396be70-52b5-4a91-b8f8-463803fcc4d0/volumes" Dec 11 14:12:09 crc kubenswrapper[5050]: I1211 14:12:09.580492 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5371a32d-3998-4ddc-93d6-27e9afdb9712" path="/var/lib/kubelet/pods/5371a32d-3998-4ddc-93d6-27e9afdb9712/volumes" Dec 11 14:12:09 crc kubenswrapper[5050]: I1211 14:12:09.583083 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="58cdcd05-e81a-4ed4-8357-249649b17449" path="/var/lib/kubelet/pods/58cdcd05-e81a-4ed4-8357-249649b17449/volumes" Dec 11 14:12:09 crc kubenswrapper[5050]: I1211 14:12:09.586632 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a3f691ef-0109-459b-bbb9-eb08838d3dd0" path="/var/lib/kubelet/pods/a3f691ef-0109-459b-bbb9-eb08838d3dd0/volumes" Dec 11 14:12:09 crc kubenswrapper[5050]: I1211 14:12:09.587534 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c443a35b-44e5-495f-b23b-75ff35319194" path="/var/lib/kubelet/pods/c443a35b-44e5-495f-b23b-75ff35319194/volumes" Dec 11 14:12:09 crc kubenswrapper[5050]: I1211 14:12:09.599759 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cffff412-bf3c-4739-8bb8-3d099c8c83fe" path="/var/lib/kubelet/pods/cffff412-bf3c-4739-8bb8-3d099c8c83fe/volumes" Dec 11 14:12:09 crc kubenswrapper[5050]: I1211 14:12:09.615944 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 11 14:12:09 crc kubenswrapper[5050]: I1211 14:12:09.628426 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1627fd8-6a34-432b-a4c8-8a39b534f4f2" path="/var/lib/kubelet/pods/e1627fd8-6a34-432b-a4c8-8a39b534f4f2/volumes" Dec 11 14:12:09 crc kubenswrapper[5050]: I1211 14:12:09.628446 5050 generic.go:334] "Generic (PLEG): container finished" podID="7c13a1ff-0952-40b8-9157-3f1ba8b232c0" containerID="dcd6241bf625d5260eb81af83e760bdddc10ab5c6ec8bd47adbb59c1809dd3e4" exitCode=0 Dec 11 14:12:09 crc kubenswrapper[5050]: I1211 14:12:09.629872 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron7594-account-delete-8wjht" event={"ID":"bc2e956f-6026-4a75-b11a-5106aad626a5","Type":"ContainerStarted","Data":"117a329727dfb44face47ce70f00fb31fbaad2c13a6f99a74d30819f0877a421"} Dec 11 14:12:09 crc kubenswrapper[5050]: I1211 14:12:09.629909 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance2653-account-delete-rgdnl"] Dec 11 14:12:09 crc kubenswrapper[5050]: I1211 14:12:09.629933 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-nb-0"] Dec 11 14:12:09 crc kubenswrapper[5050]: I1211 14:12:09.629946 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron7594-account-delete-8wjht" event={"ID":"bc2e956f-6026-4a75-b11a-5106aad626a5","Type":"ContainerStarted","Data":"aca8a1f074ec9c79e910d77e4d9193bfb252981c026235cc3d9b6067c7e6325d"} Dec 11 14:12:09 crc kubenswrapper[5050]: I1211 14:12:09.629958 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"0ca28ba4-2b37-4836-9d51-8dea84046163","Type":"ContainerDied","Data":"603a01b16aa18321f03b4cf26f49d47fea569c2a318a7b8a94f0b9703c75750d"} Dec 11 14:12:09 crc kubenswrapper[5050]: I1211 14:12:09.629974 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"7c13a1ff-0952-40b8-9157-3f1ba8b232c0","Type":"ContainerDied","Data":"dcd6241bf625d5260eb81af83e760bdddc10ab5c6ec8bd47adbb59c1809dd3e4"} Dec 11 14:12:09 crc kubenswrapper[5050]: I1211 14:12:09.629992 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovsdbserver-nb-0"] Dec 11 14:12:09 crc kubenswrapper[5050]: I1211 14:12:09.643839 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance2653-account-delete-rgdnl" event={"ID":"1d484c84-7333-4701-a4f3-655c3d2cbfa7","Type":"ContainerStarted","Data":"e9083430e24059def49fce8b8dffea65371542e4f2d51b5504b0837c83c03e56"} Dec 11 14:12:09 crc kubenswrapper[5050]: I1211 14:12:09.644684 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-47tvr"] Dec 11 14:12:09 crc kubenswrapper[5050]: I1211 14:12:09.657554 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder27b7-account-delete-h9wjn" event={"ID":"e365d825-a3cb-42a3-8a00-8a9be42ed290","Type":"ContainerStarted","Data":"85159ba9b1716a8a69e6686b8adf1ee117c8e387a73c07fba65098f82adf2cd6"} Dec 11 14:12:09 crc kubenswrapper[5050]: I1211 14:12:09.664703 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="003b423c-92a0-47f6-8358-003f3ad24ded" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.163:8776/healthcheck\": read tcp 10.217.0.2:58814->10.217.0.163:8776: read: connection reset by peer" Dec 11 14:12:09 crc kubenswrapper[5050]: I1211 14:12:09.672361 5050 generic.go:334] "Generic 
(PLEG): container finished" podID="29a26d59-027f-428e-928e-12222b61a350" containerID="68a9a87e998a4bb0563913fd86e150d1605935b84a4da45aa67210b036a699f2" exitCode=0 Dec 11 14:12:09 crc kubenswrapper[5050]: I1211 14:12:09.672911 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"29a26d59-027f-428e-928e-12222b61a350","Type":"ContainerDied","Data":"68a9a87e998a4bb0563913fd86e150d1605935b84a4da45aa67210b036a699f2"} Dec 11 14:12:09 crc kubenswrapper[5050]: I1211 14:12:09.684050 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/novaapi4326-account-delete-9bmsz" event={"ID":"d917f471-6630-4e96-a0e4-cbde631da4a8","Type":"ContainerStarted","Data":"b36256477183642b6fa08f2717a1915fe2ce3b93abc1181edc865af5e306f1f4"} Dec 11 14:12:09 crc kubenswrapper[5050]: I1211 14:12:09.695711 5050 generic.go:334] "Generic (PLEG): container finished" podID="c8b3d8cd-9278-4639-86fe-1aa7696fecca" containerID="8498e742367424482ed9a44ca42a11a58844241a90788c1a5e431a1e93f23131" exitCode=0 Dec 11 14:12:09 crc kubenswrapper[5050]: I1211 14:12:09.695823 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"c8b3d8cd-9278-4639-86fe-1aa7696fecca","Type":"ContainerDied","Data":"8498e742367424482ed9a44ca42a11a58844241a90788c1a5e431a1e93f23131"} Dec 11 14:12:09 crc kubenswrapper[5050]: I1211 14:12:09.695907 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"c8b3d8cd-9278-4639-86fe-1aa7696fecca","Type":"ContainerDied","Data":"60fc2ae7bd55a26f15c78cb791549aa82b5904e02f97e0b62c1c2744d47e12eb"} Dec 11 14:12:09 crc kubenswrapper[5050]: I1211 14:12:09.695925 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="60fc2ae7bd55a26f15c78cb791549aa82b5904e02f97e0b62c1c2744d47e12eb" Dec 11 14:12:09 crc kubenswrapper[5050]: I1211 14:12:09.709055 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/novacell1d6ec-account-delete-cwf97" Dec 11 14:12:09 crc kubenswrapper[5050]: I1211 14:12:09.709142 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/novacell0e20d-account-delete-ntvhn" event={"ID":"ca005c2d-f7de-486a-bbd6-a32443582833","Type":"ContainerStarted","Data":"2c85789ec5a0ae6495c4ef59c97eea63484bc9c142ec4843fc3fdc3c1aa20aeb"} Dec 11 14:12:09 crc kubenswrapper[5050]: I1211 14:12:09.714365 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-47tvr"] Dec 11 14:12:09 crc kubenswrapper[5050]: I1211 14:12:09.738116 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-sb-0"] Dec 11 14:12:09 crc kubenswrapper[5050]: I1211 14:12:09.752202 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovsdbserver-sb-0"] Dec 11 14:12:09 crc kubenswrapper[5050]: E1211 14:12:09.779281 5050 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="927455209d981bf727b535faeebc64bff29620b7370b60af99e0a5354d586a34" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Dec 11 14:12:09 crc kubenswrapper[5050]: E1211 14:12:09.787616 5050 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="927455209d981bf727b535faeebc64bff29620b7370b60af99e0a5354d586a34" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Dec 11 14:12:09 crc kubenswrapper[5050]: E1211 14:12:09.792410 5050 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="927455209d981bf727b535faeebc64bff29620b7370b60af99e0a5354d586a34" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Dec 11 14:12:09 crc kubenswrapper[5050]: E1211 14:12:09.792507 5050 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell1-conductor-0" podUID="ea5ac39a-6fb6-42bb-8ffd-e0036e93a1d7" containerName="nova-cell1-conductor-conductor" Dec 11 14:12:09 crc kubenswrapper[5050]: I1211 14:12:09.829296 5050 scope.go:117] "RemoveContainer" containerID="d2e5c82ae90e1137ec73bef8dd6ce2e374ca6ff4f54e4da5f33502be7443eb03" Dec 11 14:12:09 crc kubenswrapper[5050]: I1211 14:12:09.846886 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron7594-account-delete-8wjht" podStartSLOduration=6.846857093 podStartE2EDuration="6.846857093s" podCreationTimestamp="2025-12-11 14:12:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 14:12:09.709640286 +0000 UTC m=+1420.553362882" watchObservedRunningTime="2025-12-11 14:12:09.846857093 +0000 UTC m=+1420.690579669" Dec 11 14:12:09 crc kubenswrapper[5050]: I1211 14:12:09.967620 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e7896842-a23c-4427-ab0e-702138a8cdd0-operator-scripts\") pod \"novacell1d6ec-account-delete-cwf97\" (UID: \"e7896842-a23c-4427-ab0e-702138a8cdd0\") " 
pod="openstack/novacell1d6ec-account-delete-cwf97" Dec 11 14:12:09 crc kubenswrapper[5050]: I1211 14:12:09.967732 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2bh5r\" (UniqueName: \"kubernetes.io/projected/e7896842-a23c-4427-ab0e-702138a8cdd0-kube-api-access-2bh5r\") pod \"novacell1d6ec-account-delete-cwf97\" (UID: \"e7896842-a23c-4427-ab0e-702138a8cdd0\") " pod="openstack/novacell1d6ec-account-delete-cwf97" Dec 11 14:12:09 crc kubenswrapper[5050]: E1211 14:12:09.967853 5050 configmap.go:193] Couldn't get configMap openstack/openstack-cell1-scripts: configmap "openstack-cell1-scripts" not found Dec 11 14:12:09 crc kubenswrapper[5050]: E1211 14:12:09.967970 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e7896842-a23c-4427-ab0e-702138a8cdd0-operator-scripts podName:e7896842-a23c-4427-ab0e-702138a8cdd0 nodeName:}" failed. No retries permitted until 2025-12-11 14:12:13.967939564 +0000 UTC m=+1424.811662220 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/e7896842-a23c-4427-ab0e-702138a8cdd0-operator-scripts") pod "novacell1d6ec-account-delete-cwf97" (UID: "e7896842-a23c-4427-ab0e-702138a8cdd0") : configmap "openstack-cell1-scripts" not found Dec 11 14:12:09 crc kubenswrapper[5050]: E1211 14:12:09.973568 5050 projected.go:194] Error preparing data for projected volume kube-api-access-2bh5r for pod openstack/novacell1d6ec-account-delete-cwf97: failed to fetch token: serviceaccounts "galera-openstack-cell1" not found Dec 11 14:12:09 crc kubenswrapper[5050]: E1211 14:12:09.973647 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e7896842-a23c-4427-ab0e-702138a8cdd0-kube-api-access-2bh5r podName:e7896842-a23c-4427-ab0e-702138a8cdd0 nodeName:}" failed. No retries permitted until 2025-12-11 14:12:13.973625728 +0000 UTC m=+1424.817348314 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-2bh5r" (UniqueName: "kubernetes.io/projected/e7896842-a23c-4427-ab0e-702138a8cdd0-kube-api-access-2bh5r") pod "novacell1d6ec-account-delete-cwf97" (UID: "e7896842-a23c-4427-ab0e-702138a8cdd0") : failed to fetch token: serviceaccounts "galera-openstack-cell1" not found Dec 11 14:12:10 crc kubenswrapper[5050]: I1211 14:12:10.225643 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Dec 11 14:12:10 crc kubenswrapper[5050]: I1211 14:12:10.267252 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="1eb418aa-1d3c-469c-8ff4-2b3c86a71e97" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.202:8775/\": read tcp 10.217.0.2:53066->10.217.0.202:8775: read: connection reset by peer" Dec 11 14:12:10 crc kubenswrapper[5050]: I1211 14:12:10.267335 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="1eb418aa-1d3c-469c-8ff4-2b3c86a71e97" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.202:8775/\": read tcp 10.217.0.2:53072->10.217.0.202:8775: read: connection reset by peer" Dec 11 14:12:10 crc kubenswrapper[5050]: I1211 14:12:10.295745 5050 scope.go:117] "RemoveContainer" containerID="c4c46a2b59720049a8e8b5a6204d154b1116f9503accccc382e625d7853d1369" Dec 11 14:12:10 crc kubenswrapper[5050]: I1211 14:12:10.298960 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Dec 11 14:12:10 crc kubenswrapper[5050]: I1211 14:12:10.388341 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/c8b3d8cd-9278-4639-86fe-1aa7696fecca-config-data-default\") pod \"c8b3d8cd-9278-4639-86fe-1aa7696fecca\" (UID: \"c8b3d8cd-9278-4639-86fe-1aa7696fecca\") " Dec 11 14:12:10 crc kubenswrapper[5050]: I1211 14:12:10.388578 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c8b3d8cd-9278-4639-86fe-1aa7696fecca-kolla-config\") pod \"c8b3d8cd-9278-4639-86fe-1aa7696fecca\" (UID: \"c8b3d8cd-9278-4639-86fe-1aa7696fecca\") " Dec 11 14:12:10 crc kubenswrapper[5050]: I1211 14:12:10.388619 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c13a1ff-0952-40b8-9157-3f1ba8b232c0-config-data\") pod \"7c13a1ff-0952-40b8-9157-3f1ba8b232c0\" (UID: \"7c13a1ff-0952-40b8-9157-3f1ba8b232c0\") " Dec 11 14:12:10 crc kubenswrapper[5050]: I1211 14:12:10.388656 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mysql-db\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"c8b3d8cd-9278-4639-86fe-1aa7696fecca\" (UID: \"c8b3d8cd-9278-4639-86fe-1aa7696fecca\") " Dec 11 14:12:10 crc kubenswrapper[5050]: I1211 14:12:10.388697 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/c8b3d8cd-9278-4639-86fe-1aa7696fecca-config-data-generated\") pod \"c8b3d8cd-9278-4639-86fe-1aa7696fecca\" (UID: \"c8b3d8cd-9278-4639-86fe-1aa7696fecca\") " Dec 11 14:12:10 crc kubenswrapper[5050]: I1211 14:12:10.388813 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zk7xp\" (UniqueName: \"kubernetes.io/projected/7c13a1ff-0952-40b8-9157-3f1ba8b232c0-kube-api-access-zk7xp\") pod \"7c13a1ff-0952-40b8-9157-3f1ba8b232c0\" (UID: \"7c13a1ff-0952-40b8-9157-3f1ba8b232c0\") " Dec 11 14:12:10 crc kubenswrapper[5050]: I1211 14:12:10.388895 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/c8b3d8cd-9278-4639-86fe-1aa7696fecca-combined-ca-bundle\") pod \"c8b3d8cd-9278-4639-86fe-1aa7696fecca\" (UID: \"c8b3d8cd-9278-4639-86fe-1aa7696fecca\") " Dec 11 14:12:10 crc kubenswrapper[5050]: I1211 14:12:10.388937 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c13a1ff-0952-40b8-9157-3f1ba8b232c0-combined-ca-bundle\") pod \"7c13a1ff-0952-40b8-9157-3f1ba8b232c0\" (UID: \"7c13a1ff-0952-40b8-9157-3f1ba8b232c0\") " Dec 11 14:12:10 crc kubenswrapper[5050]: I1211 14:12:10.388995 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c8b3d8cd-9278-4639-86fe-1aa7696fecca-operator-scripts\") pod \"c8b3d8cd-9278-4639-86fe-1aa7696fecca\" (UID: \"c8b3d8cd-9278-4639-86fe-1aa7696fecca\") " Dec 11 14:12:10 crc kubenswrapper[5050]: I1211 14:12:10.389042 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-twplg\" (UniqueName: \"kubernetes.io/projected/c8b3d8cd-9278-4639-86fe-1aa7696fecca-kube-api-access-twplg\") pod \"c8b3d8cd-9278-4639-86fe-1aa7696fecca\" (UID: \"c8b3d8cd-9278-4639-86fe-1aa7696fecca\") " Dec 11 14:12:10 crc kubenswrapper[5050]: I1211 14:12:10.389092 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/c8b3d8cd-9278-4639-86fe-1aa7696fecca-galera-tls-certs\") pod \"c8b3d8cd-9278-4639-86fe-1aa7696fecca\" (UID: \"c8b3d8cd-9278-4639-86fe-1aa7696fecca\") " Dec 11 14:12:10 crc kubenswrapper[5050]: E1211 14:12:10.389836 5050 secret.go:188] Couldn't get secret openstack/barbican-api-config-data: secret "barbican-api-config-data" not found Dec 11 14:12:10 crc kubenswrapper[5050]: E1211 14:12:10.389932 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4ad4414f-ca3e-4ff4-9e2a-3ab029df2ebf-config-data-custom podName:4ad4414f-ca3e-4ff4-9e2a-3ab029df2ebf nodeName:}" failed. No retries permitted until 2025-12-11 14:12:14.389910724 +0000 UTC m=+1425.233633320 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-data-custom" (UniqueName: "kubernetes.io/secret/4ad4414f-ca3e-4ff4-9e2a-3ab029df2ebf-config-data-custom") pod "barbican-api-57f899fb58-v2lwj" (UID: "4ad4414f-ca3e-4ff4-9e2a-3ab029df2ebf") : secret "barbican-api-config-data" not found Dec 11 14:12:10 crc kubenswrapper[5050]: I1211 14:12:10.391529 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c8b3d8cd-9278-4639-86fe-1aa7696fecca-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "c8b3d8cd-9278-4639-86fe-1aa7696fecca" (UID: "c8b3d8cd-9278-4639-86fe-1aa7696fecca"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 14:12:10 crc kubenswrapper[5050]: I1211 14:12:10.391467 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c8b3d8cd-9278-4639-86fe-1aa7696fecca-config-data-generated" (OuterVolumeSpecName: "config-data-generated") pod "c8b3d8cd-9278-4639-86fe-1aa7696fecca" (UID: "c8b3d8cd-9278-4639-86fe-1aa7696fecca"). InnerVolumeSpecName "config-data-generated". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 14:12:10 crc kubenswrapper[5050]: I1211 14:12:10.393393 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c8b3d8cd-9278-4639-86fe-1aa7696fecca-config-data-default" (OuterVolumeSpecName: "config-data-default") pod "c8b3d8cd-9278-4639-86fe-1aa7696fecca" (UID: "c8b3d8cd-9278-4639-86fe-1aa7696fecca"). InnerVolumeSpecName "config-data-default". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 14:12:10 crc kubenswrapper[5050]: E1211 14:12:10.397403 5050 secret.go:188] Couldn't get secret openstack/barbican-config-data: secret "barbican-config-data" not found Dec 11 14:12:10 crc kubenswrapper[5050]: E1211 14:12:10.397472 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4ad4414f-ca3e-4ff4-9e2a-3ab029df2ebf-config-data podName:4ad4414f-ca3e-4ff4-9e2a-3ab029df2ebf nodeName:}" failed. No retries permitted until 2025-12-11 14:12:14.397448207 +0000 UTC m=+1425.241170793 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/4ad4414f-ca3e-4ff4-9e2a-3ab029df2ebf-config-data") pod "barbican-api-57f899fb58-v2lwj" (UID: "4ad4414f-ca3e-4ff4-9e2a-3ab029df2ebf") : secret "barbican-config-data" not found Dec 11 14:12:10 crc kubenswrapper[5050]: I1211 14:12:10.401532 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c8b3d8cd-9278-4639-86fe-1aa7696fecca-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c8b3d8cd-9278-4639-86fe-1aa7696fecca" (UID: "c8b3d8cd-9278-4639-86fe-1aa7696fecca"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 14:12:10 crc kubenswrapper[5050]: I1211 14:12:10.422800 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8b3d8cd-9278-4639-86fe-1aa7696fecca-kube-api-access-twplg" (OuterVolumeSpecName: "kube-api-access-twplg") pod "c8b3d8cd-9278-4639-86fe-1aa7696fecca" (UID: "c8b3d8cd-9278-4639-86fe-1aa7696fecca"). InnerVolumeSpecName "kube-api-access-twplg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 14:12:10 crc kubenswrapper[5050]: I1211 14:12:10.423354 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c13a1ff-0952-40b8-9157-3f1ba8b232c0-kube-api-access-zk7xp" (OuterVolumeSpecName: "kube-api-access-zk7xp") pod "7c13a1ff-0952-40b8-9157-3f1ba8b232c0" (UID: "7c13a1ff-0952-40b8-9157-3f1ba8b232c0"). InnerVolumeSpecName "kube-api-access-zk7xp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 14:12:10 crc kubenswrapper[5050]: I1211 14:12:10.487408 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c13a1ff-0952-40b8-9157-3f1ba8b232c0-config-data" (OuterVolumeSpecName: "config-data") pod "7c13a1ff-0952-40b8-9157-3f1ba8b232c0" (UID: "7c13a1ff-0952-40b8-9157-3f1ba8b232c0"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:12:10 crc kubenswrapper[5050]: I1211 14:12:10.491819 5050 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c8b3d8cd-9278-4639-86fe-1aa7696fecca-kolla-config\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:10 crc kubenswrapper[5050]: I1211 14:12:10.491874 5050 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c13a1ff-0952-40b8-9157-3f1ba8b232c0-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:10 crc kubenswrapper[5050]: I1211 14:12:10.491884 5050 reconciler_common.go:293] "Volume detached for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/c8b3d8cd-9278-4639-86fe-1aa7696fecca-config-data-generated\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:10 crc kubenswrapper[5050]: I1211 14:12:10.491895 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zk7xp\" (UniqueName: \"kubernetes.io/projected/7c13a1ff-0952-40b8-9157-3f1ba8b232c0-kube-api-access-zk7xp\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:10 crc kubenswrapper[5050]: I1211 14:12:10.491905 5050 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c8b3d8cd-9278-4639-86fe-1aa7696fecca-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:10 crc kubenswrapper[5050]: I1211 14:12:10.491914 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-twplg\" (UniqueName: \"kubernetes.io/projected/c8b3d8cd-9278-4639-86fe-1aa7696fecca-kube-api-access-twplg\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:10 crc kubenswrapper[5050]: I1211 14:12:10.491923 5050 reconciler_common.go:293] "Volume detached for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/c8b3d8cd-9278-4639-86fe-1aa7696fecca-config-data-default\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:10 crc kubenswrapper[5050]: I1211 14:12:10.500265 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c13a1ff-0952-40b8-9157-3f1ba8b232c0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7c13a1ff-0952-40b8-9157-3f1ba8b232c0" (UID: "7c13a1ff-0952-40b8-9157-3f1ba8b232c0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:12:10 crc kubenswrapper[5050]: I1211 14:12:10.508433 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "mysql-db") pod "c8b3d8cd-9278-4639-86fe-1aa7696fecca" (UID: "c8b3d8cd-9278-4639-86fe-1aa7696fecca"). InnerVolumeSpecName "local-storage10-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 11 14:12:10 crc kubenswrapper[5050]: I1211 14:12:10.567160 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-57f899fb58-v2lwj" podUID="4ad4414f-ca3e-4ff4-9e2a-3ab029df2ebf" containerName="barbican-api" probeResult="failure" output="Get \"https://10.217.0.158:9311/healthcheck\": dial tcp 10.217.0.158:9311: connect: connection refused" Dec 11 14:12:10 crc kubenswrapper[5050]: I1211 14:12:10.578102 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-57f899fb58-v2lwj" podUID="4ad4414f-ca3e-4ff4-9e2a-3ab029df2ebf" containerName="barbican-api-log" probeResult="failure" output="Get \"https://10.217.0.158:9311/healthcheck\": dial tcp 10.217.0.158:9311: connect: connection refused" Dec 11 14:12:10 crc kubenswrapper[5050]: I1211 14:12:10.595454 5050 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Dec 11 14:12:10 crc kubenswrapper[5050]: I1211 14:12:10.595519 5050 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c13a1ff-0952-40b8-9157-3f1ba8b232c0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:10 crc kubenswrapper[5050]: I1211 14:12:10.634445 5050 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Dec 11 14:12:10 crc kubenswrapper[5050]: I1211 14:12:10.652898 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8b3d8cd-9278-4639-86fe-1aa7696fecca-galera-tls-certs" (OuterVolumeSpecName: "galera-tls-certs") pod "c8b3d8cd-9278-4639-86fe-1aa7696fecca" (UID: "c8b3d8cd-9278-4639-86fe-1aa7696fecca"). InnerVolumeSpecName "galera-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:12:10 crc kubenswrapper[5050]: I1211 14:12:10.710105 5050 reconciler_common.go:293] "Volume detached for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/c8b3d8cd-9278-4639-86fe-1aa7696fecca-galera-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:10 crc kubenswrapper[5050]: I1211 14:12:10.710175 5050 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:10 crc kubenswrapper[5050]: I1211 14:12:10.723586 5050 generic.go:334] "Generic (PLEG): container finished" podID="4ad4414f-ca3e-4ff4-9e2a-3ab029df2ebf" containerID="9048b99f225c02588f0acf6ab078d23ba9d748c49478356c85f61a74df87c960" exitCode=0 Dec 11 14:12:10 crc kubenswrapper[5050]: I1211 14:12:10.723651 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-57f899fb58-v2lwj" event={"ID":"4ad4414f-ca3e-4ff4-9e2a-3ab029df2ebf","Type":"ContainerDied","Data":"9048b99f225c02588f0acf6ab078d23ba9d748c49478356c85f61a74df87c960"} Dec 11 14:12:10 crc kubenswrapper[5050]: I1211 14:12:10.726263 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican34ac-account-delete-cd525" event={"ID":"66cb4589-6296-417b-87eb-4bcbff7bf580","Type":"ContainerStarted","Data":"a5cbbc32ab9e17e0b16cae9b6c24bc2ae8a263c163cc7bf6a747898f4c3c76e8"} Dec 11 14:12:10 crc kubenswrapper[5050]: I1211 14:12:10.734234 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"29a26d59-027f-428e-928e-12222b61a350","Type":"ContainerDied","Data":"a321c9b52046a17c9cfc26a8db814515a1542befeca7b048ebd9bb1061d031ec"} Dec 11 14:12:10 crc kubenswrapper[5050]: I1211 14:12:10.734289 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a321c9b52046a17c9cfc26a8db814515a1542befeca7b048ebd9bb1061d031ec" Dec 11 14:12:10 crc kubenswrapper[5050]: I1211 14:12:10.738609 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"7c13a1ff-0952-40b8-9157-3f1ba8b232c0","Type":"ContainerDied","Data":"73f0c1f35fdb366392825d7ee60c87a2fe20f55cff52c244eaa4032c06b97a77"} Dec 11 14:12:10 crc kubenswrapper[5050]: I1211 14:12:10.738769 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Dec 11 14:12:10 crc kubenswrapper[5050]: I1211 14:12:10.746710 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8b3d8cd-9278-4639-86fe-1aa7696fecca-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c8b3d8cd-9278-4639-86fe-1aa7696fecca" (UID: "c8b3d8cd-9278-4639-86fe-1aa7696fecca"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:12:10 crc kubenswrapper[5050]: I1211 14:12:10.761615 5050 generic.go:334] "Generic (PLEG): container finished" podID="1eb418aa-1d3c-469c-8ff4-2b3c86a71e97" containerID="c851fe09742b366b6fb0fe111786b9251ff34ac22d585974055bba383135d605" exitCode=0 Dec 11 14:12:10 crc kubenswrapper[5050]: I1211 14:12:10.761741 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1eb418aa-1d3c-469c-8ff4-2b3c86a71e97","Type":"ContainerDied","Data":"c851fe09742b366b6fb0fe111786b9251ff34ac22d585974055bba383135d605"} Dec 11 14:12:10 crc kubenswrapper[5050]: I1211 14:12:10.769331 5050 scope.go:117] "RemoveContainer" containerID="3111270185c2d5c0f18227cafa21198528d2d2e6ed0667e53d97a8edfd2d1860" Dec 11 14:12:10 crc kubenswrapper[5050]: I1211 14:12:10.772431 5050 generic.go:334] "Generic (PLEG): container finished" podID="bc2e956f-6026-4a75-b11a-5106aad626a5" containerID="117a329727dfb44face47ce70f00fb31fbaad2c13a6f99a74d30819f0877a421" exitCode=0 Dec 11 14:12:10 crc kubenswrapper[5050]: I1211 14:12:10.772665 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron7594-account-delete-8wjht" event={"ID":"bc2e956f-6026-4a75-b11a-5106aad626a5","Type":"ContainerDied","Data":"117a329727dfb44face47ce70f00fb31fbaad2c13a6f99a74d30819f0877a421"} Dec 11 14:12:10 crc kubenswrapper[5050]: I1211 14:12:10.786764 5050 generic.go:334] "Generic (PLEG): container finished" podID="213cfec6-ba42-4dbc-bd9c-051b193e4577" containerID="e6dc15c8d2821c9d66fa830b0740353eeecafc3c6002947a42891501a4a72dfd" exitCode=0 Dec 11 14:12:10 crc kubenswrapper[5050]: I1211 14:12:10.786913 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"213cfec6-ba42-4dbc-bd9c-051b193e4577","Type":"ContainerDied","Data":"e6dc15c8d2821c9d66fa830b0740353eeecafc3c6002947a42891501a4a72dfd"} Dec 11 14:12:10 crc kubenswrapper[5050]: I1211 14:12:10.796777 5050 patch_prober.go:28] interesting pod/machine-config-daemon-wcb2s container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 14:12:10 crc kubenswrapper[5050]: I1211 14:12:10.796858 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 14:12:10 crc kubenswrapper[5050]: I1211 14:12:10.818530 5050 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8b3d8cd-9278-4639-86fe-1aa7696fecca-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:10 crc kubenswrapper[5050]: I1211 14:12:10.827705 5050 generic.go:334] "Generic (PLEG): container finished" podID="3386ffea-45ca-41e8-9aa5-61a2923a3394" containerID="b8472b1400fec5a115ddf8be1b9aa9e96f77b6e60231027d5893fbbc8989bdac" exitCode=0 Dec 11 14:12:10 crc kubenswrapper[5050]: I1211 14:12:10.827804 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3386ffea-45ca-41e8-9aa5-61a2923a3394","Type":"ContainerDied","Data":"b8472b1400fec5a115ddf8be1b9aa9e96f77b6e60231027d5893fbbc8989bdac"} Dec 11 14:12:10 crc kubenswrapper[5050]: I1211 
14:12:10.837067 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 11 14:12:10 crc kubenswrapper[5050]: I1211 14:12:10.838642 5050 generic.go:334] "Generic (PLEG): container finished" podID="38b1a06e-804a-44dc-8e77-a7d8162f38bd" containerID="0a03b521b37d9fb0f7030177a2bb20787cfd84f0c0449bc65282aede0e194ffc" exitCode=0 Dec 11 14:12:10 crc kubenswrapper[5050]: I1211 14:12:10.838697 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"38b1a06e-804a-44dc-8e77-a7d8162f38bd","Type":"ContainerDied","Data":"0a03b521b37d9fb0f7030177a2bb20787cfd84f0c0449bc65282aede0e194ffc"} Dec 11 14:12:10 crc kubenswrapper[5050]: I1211 14:12:10.840664 5050 generic.go:334] "Generic (PLEG): container finished" podID="003b423c-92a0-47f6-8358-003f3ad24ded" containerID="204ac42ef63a05788b1880c5f6c33e7a413d56ea5c69370c5a87fa4d156de0ba" exitCode=0 Dec 11 14:12:10 crc kubenswrapper[5050]: I1211 14:12:10.840704 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"003b423c-92a0-47f6-8358-003f3ad24ded","Type":"ContainerDied","Data":"204ac42ef63a05788b1880c5f6c33e7a413d56ea5c69370c5a87fa4d156de0ba"} Dec 11 14:12:10 crc kubenswrapper[5050]: I1211 14:12:10.840718 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"003b423c-92a0-47f6-8358-003f3ad24ded","Type":"ContainerDied","Data":"cd6e59093115a20d14dadf8024c232fb89638e93979b549a2eb575875c007b09"} Dec 11 14:12:10 crc kubenswrapper[5050]: I1211 14:12:10.840731 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cd6e59093115a20d14dadf8024c232fb89638e93979b549a2eb575875c007b09" Dec 11 14:12:10 crc kubenswrapper[5050]: I1211 14:12:10.842112 5050 generic.go:334] "Generic (PLEG): container finished" podID="4de557a0-8b74-4d40-8c91-351ba127eb13" containerID="8af0220738d7b4267aab1e60eaa3da9d17f3f47fefe09dc1901f5e2bee442704" exitCode=0 Dec 11 14:12:10 crc kubenswrapper[5050]: I1211 14:12:10.842190 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Dec 11 14:12:10 crc kubenswrapper[5050]: I1211 14:12:10.842286 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-78ccc9f8bd-jdg2t" event={"ID":"4de557a0-8b74-4d40-8c91-351ba127eb13","Type":"ContainerDied","Data":"8af0220738d7b4267aab1e60eaa3da9d17f3f47fefe09dc1901f5e2bee442704"} Dec 11 14:12:10 crc kubenswrapper[5050]: I1211 14:12:10.842314 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-78ccc9f8bd-jdg2t" event={"ID":"4de557a0-8b74-4d40-8c91-351ba127eb13","Type":"ContainerDied","Data":"990e9b23f5d093ce7839248fb892b423f7090078e0fff3c4669e7c3068b46283"} Dec 11 14:12:10 crc kubenswrapper[5050]: I1211 14:12:10.842329 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="990e9b23f5d093ce7839248fb892b423f7090078e0fff3c4669e7c3068b46283" Dec 11 14:12:10 crc kubenswrapper[5050]: I1211 14:12:10.843881 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="458f05be-2fd6-44d9-8034-f077356964ce" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.102:5671: connect: connection refused" Dec 11 14:12:10 crc kubenswrapper[5050]: I1211 14:12:10.920193 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/29a26d59-027f-428e-928e-12222b61a350-scripts\") pod \"29a26d59-027f-428e-928e-12222b61a350\" (UID: \"29a26d59-027f-428e-928e-12222b61a350\") " Dec 11 14:12:10 crc kubenswrapper[5050]: I1211 14:12:10.920308 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29a26d59-027f-428e-928e-12222b61a350-config-data\") pod \"29a26d59-027f-428e-928e-12222b61a350\" (UID: \"29a26d59-027f-428e-928e-12222b61a350\") " Dec 11 14:12:10 crc kubenswrapper[5050]: I1211 14:12:10.920441 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/29a26d59-027f-428e-928e-12222b61a350-config-data-custom\") pod \"29a26d59-027f-428e-928e-12222b61a350\" (UID: \"29a26d59-027f-428e-928e-12222b61a350\") " Dec 11 14:12:10 crc kubenswrapper[5050]: I1211 14:12:10.920581 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29a26d59-027f-428e-928e-12222b61a350-combined-ca-bundle\") pod \"29a26d59-027f-428e-928e-12222b61a350\" (UID: \"29a26d59-027f-428e-928e-12222b61a350\") " Dec 11 14:12:10 crc kubenswrapper[5050]: I1211 14:12:10.920631 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/29a26d59-027f-428e-928e-12222b61a350-etc-machine-id\") pod \"29a26d59-027f-428e-928e-12222b61a350\" (UID: \"29a26d59-027f-428e-928e-12222b61a350\") " Dec 11 14:12:10 crc kubenswrapper[5050]: I1211 14:12:10.920683 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xv72n\" (UniqueName: \"kubernetes.io/projected/29a26d59-027f-428e-928e-12222b61a350-kube-api-access-xv72n\") pod \"29a26d59-027f-428e-928e-12222b61a350\" (UID: \"29a26d59-027f-428e-928e-12222b61a350\") " Dec 11 14:12:10 crc kubenswrapper[5050]: I1211 14:12:10.921238 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/29a26d59-027f-428e-928e-12222b61a350-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "29a26d59-027f-428e-928e-12222b61a350" (UID: "29a26d59-027f-428e-928e-12222b61a350"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 14:12:10 crc kubenswrapper[5050]: I1211 14:12:10.921598 5050 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/29a26d59-027f-428e-928e-12222b61a350-etc-machine-id\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:10 crc kubenswrapper[5050]: I1211 14:12:10.929995 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29a26d59-027f-428e-928e-12222b61a350-kube-api-access-xv72n" (OuterVolumeSpecName: "kube-api-access-xv72n") pod "29a26d59-027f-428e-928e-12222b61a350" (UID: "29a26d59-027f-428e-928e-12222b61a350"). InnerVolumeSpecName "kube-api-access-xv72n". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 14:12:10 crc kubenswrapper[5050]: I1211 14:12:10.931278 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29a26d59-027f-428e-928e-12222b61a350-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "29a26d59-027f-428e-928e-12222b61a350" (UID: "29a26d59-027f-428e-928e-12222b61a350"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:12:10 crc kubenswrapper[5050]: I1211 14:12:10.933258 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29a26d59-027f-428e-928e-12222b61a350-scripts" (OuterVolumeSpecName: "scripts") pod "29a26d59-027f-428e-928e-12222b61a350" (UID: "29a26d59-027f-428e-928e-12222b61a350"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:12:10 crc kubenswrapper[5050]: I1211 14:12:10.952339 5050 scope.go:117] "RemoveContainer" containerID="3768c3d6cf415867973810bdb14c5966684aab657f8614b9d4062545081db44d" Dec 11 14:12:11 crc kubenswrapper[5050]: I1211 14:12:11.031999 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xv72n\" (UniqueName: \"kubernetes.io/projected/29a26d59-027f-428e-928e-12222b61a350-kube-api-access-xv72n\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:11 crc kubenswrapper[5050]: I1211 14:12:11.032050 5050 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/29a26d59-027f-428e-928e-12222b61a350-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:11 crc kubenswrapper[5050]: I1211 14:12:11.032065 5050 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/29a26d59-027f-428e-928e-12222b61a350-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:11 crc kubenswrapper[5050]: I1211 14:12:11.035255 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-78ccc9f8bd-jdg2t" Dec 11 14:12:11 crc kubenswrapper[5050]: I1211 14:12:11.057460 5050 scope.go:117] "RemoveContainer" containerID="5891d989abb1d991a1a438c7eb2a7b5afaeabc910991c4571e2dac6645358de5" Dec 11 14:12:11 crc kubenswrapper[5050]: I1211 14:12:11.111274 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/novacell1d6ec-account-delete-cwf97"] Dec 11 14:12:11 crc kubenswrapper[5050]: I1211 14:12:11.119231 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29a26d59-027f-428e-928e-12222b61a350-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "29a26d59-027f-428e-928e-12222b61a350" (UID: "29a26d59-027f-428e-928e-12222b61a350"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:12:11 crc kubenswrapper[5050]: I1211 14:12:11.128410 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/novacell1d6ec-account-delete-cwf97"] Dec 11 14:12:11 crc kubenswrapper[5050]: I1211 14:12:11.140391 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4de557a0-8b74-4d40-8c91-351ba127eb13-logs\") pod \"4de557a0-8b74-4d40-8c91-351ba127eb13\" (UID: \"4de557a0-8b74-4d40-8c91-351ba127eb13\") " Dec 11 14:12:11 crc kubenswrapper[5050]: I1211 14:12:11.140618 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4de557a0-8b74-4d40-8c91-351ba127eb13-public-tls-certs\") pod \"4de557a0-8b74-4d40-8c91-351ba127eb13\" (UID: \"4de557a0-8b74-4d40-8c91-351ba127eb13\") " Dec 11 14:12:11 crc kubenswrapper[5050]: I1211 14:12:11.140686 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4de557a0-8b74-4d40-8c91-351ba127eb13-config-data\") pod \"4de557a0-8b74-4d40-8c91-351ba127eb13\" (UID: \"4de557a0-8b74-4d40-8c91-351ba127eb13\") " Dec 11 14:12:11 crc kubenswrapper[5050]: I1211 14:12:11.140782 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vzrcd\" (UniqueName: \"kubernetes.io/projected/4de557a0-8b74-4d40-8c91-351ba127eb13-kube-api-access-vzrcd\") pod \"4de557a0-8b74-4d40-8c91-351ba127eb13\" (UID: \"4de557a0-8b74-4d40-8c91-351ba127eb13\") " Dec 11 14:12:11 crc kubenswrapper[5050]: I1211 14:12:11.140862 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4de557a0-8b74-4d40-8c91-351ba127eb13-internal-tls-certs\") pod \"4de557a0-8b74-4d40-8c91-351ba127eb13\" (UID: \"4de557a0-8b74-4d40-8c91-351ba127eb13\") " Dec 11 14:12:11 crc kubenswrapper[5050]: I1211 14:12:11.140932 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4de557a0-8b74-4d40-8c91-351ba127eb13-scripts\") pod \"4de557a0-8b74-4d40-8c91-351ba127eb13\" (UID: \"4de557a0-8b74-4d40-8c91-351ba127eb13\") " Dec 11 14:12:11 crc kubenswrapper[5050]: I1211 14:12:11.141156 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4de557a0-8b74-4d40-8c91-351ba127eb13-combined-ca-bundle\") pod \"4de557a0-8b74-4d40-8c91-351ba127eb13\" (UID: \"4de557a0-8b74-4d40-8c91-351ba127eb13\") " Dec 11 14:12:11 crc kubenswrapper[5050]: I1211 14:12:11.141540 
5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4de557a0-8b74-4d40-8c91-351ba127eb13-logs" (OuterVolumeSpecName: "logs") pod "4de557a0-8b74-4d40-8c91-351ba127eb13" (UID: "4de557a0-8b74-4d40-8c91-351ba127eb13"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 14:12:11 crc kubenswrapper[5050]: I1211 14:12:11.144665 5050 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29a26d59-027f-428e-928e-12222b61a350-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:11 crc kubenswrapper[5050]: I1211 14:12:11.145793 5050 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4de557a0-8b74-4d40-8c91-351ba127eb13-logs\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:11 crc kubenswrapper[5050]: I1211 14:12:11.145588 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Dec 11 14:12:11 crc kubenswrapper[5050]: I1211 14:12:11.147332 5050 scope.go:117] "RemoveContainer" containerID="da35097d4938c80747d4330c14a62405c62267dddef20cb1d8041548fb7caa56" Dec 11 14:12:11 crc kubenswrapper[5050]: I1211 14:12:11.153891 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 11 14:12:11 crc kubenswrapper[5050]: I1211 14:12:11.163346 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 11 14:12:11 crc kubenswrapper[5050]: I1211 14:12:11.165945 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="0891f075-8101-475b-b844-e7cb42a4990b" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.103:5671: connect: connection refused" Dec 11 14:12:11 crc kubenswrapper[5050]: I1211 14:12:11.177517 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-cell1-galera-0"] Dec 11 14:12:11 crc kubenswrapper[5050]: I1211 14:12:11.184731 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstack-cell1-galera-0"] Dec 11 14:12:11 crc kubenswrapper[5050]: I1211 14:12:11.195920 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 11 14:12:11 crc kubenswrapper[5050]: I1211 14:12:11.203764 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 11 14:12:11 crc kubenswrapper[5050]: I1211 14:12:11.226550 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4de557a0-8b74-4d40-8c91-351ba127eb13-kube-api-access-vzrcd" (OuterVolumeSpecName: "kube-api-access-vzrcd") pod "4de557a0-8b74-4d40-8c91-351ba127eb13" (UID: "4de557a0-8b74-4d40-8c91-351ba127eb13"). InnerVolumeSpecName "kube-api-access-vzrcd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 14:12:11 crc kubenswrapper[5050]: I1211 14:12:11.243410 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4de557a0-8b74-4d40-8c91-351ba127eb13-scripts" (OuterVolumeSpecName: "scripts") pod "4de557a0-8b74-4d40-8c91-351ba127eb13" (UID: "4de557a0-8b74-4d40-8c91-351ba127eb13"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:12:11 crc kubenswrapper[5050]: I1211 14:12:11.247654 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/003b423c-92a0-47f6-8358-003f3ad24ded-config-data-custom\") pod \"003b423c-92a0-47f6-8358-003f3ad24ded\" (UID: \"003b423c-92a0-47f6-8358-003f3ad24ded\") " Dec 11 14:12:11 crc kubenswrapper[5050]: I1211 14:12:11.247748 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/003b423c-92a0-47f6-8358-003f3ad24ded-combined-ca-bundle\") pod \"003b423c-92a0-47f6-8358-003f3ad24ded\" (UID: \"003b423c-92a0-47f6-8358-003f3ad24ded\") " Dec 11 14:12:11 crc kubenswrapper[5050]: I1211 14:12:11.247856 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/003b423c-92a0-47f6-8358-003f3ad24ded-public-tls-certs\") pod \"003b423c-92a0-47f6-8358-003f3ad24ded\" (UID: \"003b423c-92a0-47f6-8358-003f3ad24ded\") " Dec 11 14:12:11 crc kubenswrapper[5050]: I1211 14:12:11.247911 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/003b423c-92a0-47f6-8358-003f3ad24ded-internal-tls-certs\") pod \"003b423c-92a0-47f6-8358-003f3ad24ded\" (UID: \"003b423c-92a0-47f6-8358-003f3ad24ded\") " Dec 11 14:12:11 crc kubenswrapper[5050]: I1211 14:12:11.247929 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/003b423c-92a0-47f6-8358-003f3ad24ded-scripts\") pod \"003b423c-92a0-47f6-8358-003f3ad24ded\" (UID: \"003b423c-92a0-47f6-8358-003f3ad24ded\") " Dec 11 14:12:11 crc kubenswrapper[5050]: I1211 14:12:11.248065 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/003b423c-92a0-47f6-8358-003f3ad24ded-etc-machine-id\") pod \"003b423c-92a0-47f6-8358-003f3ad24ded\" (UID: \"003b423c-92a0-47f6-8358-003f3ad24ded\") " Dec 11 14:12:11 crc kubenswrapper[5050]: I1211 14:12:11.248093 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n52gw\" (UniqueName: \"kubernetes.io/projected/003b423c-92a0-47f6-8358-003f3ad24ded-kube-api-access-n52gw\") pod \"003b423c-92a0-47f6-8358-003f3ad24ded\" (UID: \"003b423c-92a0-47f6-8358-003f3ad24ded\") " Dec 11 14:12:11 crc kubenswrapper[5050]: I1211 14:12:11.248143 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/003b423c-92a0-47f6-8358-003f3ad24ded-config-data\") pod \"003b423c-92a0-47f6-8358-003f3ad24ded\" (UID: \"003b423c-92a0-47f6-8358-003f3ad24ded\") " Dec 11 14:12:11 crc kubenswrapper[5050]: I1211 14:12:11.248186 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/003b423c-92a0-47f6-8358-003f3ad24ded-logs\") pod \"003b423c-92a0-47f6-8358-003f3ad24ded\" (UID: \"003b423c-92a0-47f6-8358-003f3ad24ded\") " Dec 11 14:12:11 crc kubenswrapper[5050]: I1211 14:12:11.248636 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vzrcd\" (UniqueName: \"kubernetes.io/projected/4de557a0-8b74-4d40-8c91-351ba127eb13-kube-api-access-vzrcd\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:11 crc kubenswrapper[5050]: I1211 
14:12:11.248650 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2bh5r\" (UniqueName: \"kubernetes.io/projected/e7896842-a23c-4427-ab0e-702138a8cdd0-kube-api-access-2bh5r\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:11 crc kubenswrapper[5050]: I1211 14:12:11.248660 5050 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4de557a0-8b74-4d40-8c91-351ba127eb13-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:11 crc kubenswrapper[5050]: I1211 14:12:11.248670 5050 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e7896842-a23c-4427-ab0e-702138a8cdd0-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:11 crc kubenswrapper[5050]: I1211 14:12:11.249726 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/003b423c-92a0-47f6-8358-003f3ad24ded-logs" (OuterVolumeSpecName: "logs") pod "003b423c-92a0-47f6-8358-003f3ad24ded" (UID: "003b423c-92a0-47f6-8358-003f3ad24ded"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 14:12:11 crc kubenswrapper[5050]: I1211 14:12:11.249768 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/003b423c-92a0-47f6-8358-003f3ad24ded-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "003b423c-92a0-47f6-8358-003f3ad24ded" (UID: "003b423c-92a0-47f6-8358-003f3ad24ded"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 14:12:11 crc kubenswrapper[5050]: I1211 14:12:11.286726 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/003b423c-92a0-47f6-8358-003f3ad24ded-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "003b423c-92a0-47f6-8358-003f3ad24ded" (UID: "003b423c-92a0-47f6-8358-003f3ad24ded"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:12:11 crc kubenswrapper[5050]: I1211 14:12:11.300920 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/003b423c-92a0-47f6-8358-003f3ad24ded-kube-api-access-n52gw" (OuterVolumeSpecName: "kube-api-access-n52gw") pod "003b423c-92a0-47f6-8358-003f3ad24ded" (UID: "003b423c-92a0-47f6-8358-003f3ad24ded"). InnerVolumeSpecName "kube-api-access-n52gw". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 14:12:11 crc kubenswrapper[5050]: I1211 14:12:11.308096 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/003b423c-92a0-47f6-8358-003f3ad24ded-scripts" (OuterVolumeSpecName: "scripts") pod "003b423c-92a0-47f6-8358-003f3ad24ded" (UID: "003b423c-92a0-47f6-8358-003f3ad24ded"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:12:11 crc kubenswrapper[5050]: I1211 14:12:11.351473 5050 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/003b423c-92a0-47f6-8358-003f3ad24ded-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:11 crc kubenswrapper[5050]: I1211 14:12:11.351510 5050 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/003b423c-92a0-47f6-8358-003f3ad24ded-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:11 crc kubenswrapper[5050]: I1211 14:12:11.351525 5050 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/003b423c-92a0-47f6-8358-003f3ad24ded-etc-machine-id\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:11 crc kubenswrapper[5050]: I1211 14:12:11.351540 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n52gw\" (UniqueName: \"kubernetes.io/projected/003b423c-92a0-47f6-8358-003f3ad24ded-kube-api-access-n52gw\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:11 crc kubenswrapper[5050]: I1211 14:12:11.351555 5050 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/003b423c-92a0-47f6-8358-003f3ad24ded-logs\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:11 crc kubenswrapper[5050]: E1211 14:12:11.364698 5050 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="8106f10bed3bf46c894b57795f1b60477931a5aba11442b13a0172969049c89a" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 11 14:12:11 crc kubenswrapper[5050]: E1211 14:12:11.366757 5050 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="8106f10bed3bf46c894b57795f1b60477931a5aba11442b13a0172969049c89a" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 11 14:12:11 crc kubenswrapper[5050]: E1211 14:12:11.370352 5050 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="8106f10bed3bf46c894b57795f1b60477931a5aba11442b13a0172969049c89a" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 11 14:12:11 crc kubenswrapper[5050]: E1211 14:12:11.370397 5050 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="87937f27-2525-4fed-88bb-38a90404860c" containerName="nova-scheduler-scheduler" Dec 11 14:12:11 crc kubenswrapper[5050]: I1211 14:12:11.473146 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 11 14:12:11 crc kubenswrapper[5050]: I1211 14:12:11.473792 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2ef32727-7bbd-4a50-8292-4740b34107cc" containerName="ceilometer-central-agent" containerID="cri-o://f6d56512f1f5665c8ed3d4a8969144001b8fe4f5868e5cfd74524cc0c96c62f3" gracePeriod=30 Dec 11 14:12:11 crc kubenswrapper[5050]: I1211 14:12:11.474050 5050 kuberuntime_container.go:808] "Killing container 
with a grace period" pod="openstack/ceilometer-0" podUID="2ef32727-7bbd-4a50-8292-4740b34107cc" containerName="proxy-httpd" containerID="cri-o://2df25eb8394651a0a8bc136ef8c2753058167b5dbecc7a2a0c134fede2448a38" gracePeriod=30 Dec 11 14:12:11 crc kubenswrapper[5050]: I1211 14:12:11.474109 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2ef32727-7bbd-4a50-8292-4740b34107cc" containerName="sg-core" containerID="cri-o://8295f391211fffee759bcdccec5b566c3a41abb5db15ca1a987be72e3711b3cf" gracePeriod=30 Dec 11 14:12:11 crc kubenswrapper[5050]: I1211 14:12:11.474297 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2ef32727-7bbd-4a50-8292-4740b34107cc" containerName="ceilometer-notification-agent" containerID="cri-o://de890e47f39aab39e39d2eca3d15df094ac343a9fec0b6242bea5fd2f19dbaf8" gracePeriod=30 Dec 11 14:12:11 crc kubenswrapper[5050]: I1211 14:12:11.498520 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 11 14:12:11 crc kubenswrapper[5050]: I1211 14:12:11.498765 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="6817c570-f6ff-4b08-825a-027a9c8630b0" containerName="kube-state-metrics" containerID="cri-o://99b0a4e0ddebc7695b430edc234ac8f69f475befeae07527d5e1dffee8ce52e4" gracePeriod=30 Dec 11 14:12:11 crc kubenswrapper[5050]: I1211 14:12:11.590393 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01fa4d89-aae5-451a-8798-2700053fe3d4" path="/var/lib/kubelet/pods/01fa4d89-aae5-451a-8798-2700053fe3d4/volumes" Dec 11 14:12:11 crc kubenswrapper[5050]: I1211 14:12:11.600774 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ca28ba4-2b37-4836-9d51-8dea84046163" path="/var/lib/kubelet/pods/0ca28ba4-2b37-4836-9d51-8dea84046163/volumes" Dec 11 14:12:11 crc kubenswrapper[5050]: I1211 14:12:11.602332 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c13a1ff-0952-40b8-9157-3f1ba8b232c0" path="/var/lib/kubelet/pods/7c13a1ff-0952-40b8-9157-3f1ba8b232c0/volumes" Dec 11 14:12:11 crc kubenswrapper[5050]: I1211 14:12:11.603003 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c8b3d8cd-9278-4639-86fe-1aa7696fecca" path="/var/lib/kubelet/pods/c8b3d8cd-9278-4639-86fe-1aa7696fecca/volumes" Dec 11 14:12:11 crc kubenswrapper[5050]: I1211 14:12:11.614996 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c928931c-d49d-41dc-9181-11d856ed3bd0" path="/var/lib/kubelet/pods/c928931c-d49d-41dc-9181-11d856ed3bd0/volumes" Dec 11 14:12:11 crc kubenswrapper[5050]: I1211 14:12:11.628141 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7896842-a23c-4427-ab0e-702138a8cdd0" path="/var/lib/kubelet/pods/e7896842-a23c-4427-ab0e-702138a8cdd0/volumes" Dec 11 14:12:11 crc kubenswrapper[5050]: I1211 14:12:11.635945 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e96be66c-07f2-47c0-a784-6af473c8a2a8" path="/var/lib/kubelet/pods/e96be66c-07f2-47c0-a784-6af473c8a2a8/volumes" Dec 11 14:12:11 crc kubenswrapper[5050]: E1211 14:12:11.890654 5050 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Dec 11 14:12:11 crc kubenswrapper[5050]: E1211 14:12:11.890735 5050 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/configmap/458f05be-2fd6-44d9-8034-f077356964ce-config-data podName:458f05be-2fd6-44d9-8034-f077356964ce nodeName:}" failed. No retries permitted until 2025-12-11 14:12:19.890712396 +0000 UTC m=+1430.734434992 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/458f05be-2fd6-44d9-8034-f077356964ce-config-data") pod "rabbitmq-cell1-server-0" (UID: "458f05be-2fd6-44d9-8034-f077356964ce") : configmap "rabbitmq-cell1-config-data" not found Dec 11 14:12:11 crc kubenswrapper[5050]: I1211 14:12:11.909446 5050 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="openstack/barbican34ac-account-delete-cd525" secret="" err="secret \"galera-openstack-dockercfg-mgt2f\" not found" Dec 11 14:12:11 crc kubenswrapper[5050]: I1211 14:12:11.961148 5050 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="openstack/novacell0e20d-account-delete-ntvhn" secret="" err="secret \"galera-openstack-dockercfg-mgt2f\" not found" Dec 11 14:12:12 crc kubenswrapper[5050]: I1211 14:12:12.033523 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican34ac-account-delete-cd525" podStartSLOduration=8.033493023 podStartE2EDuration="8.033493023s" podCreationTimestamp="2025-12-11 14:12:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 14:12:11.933336848 +0000 UTC m=+1422.777059434" watchObservedRunningTime="2025-12-11 14:12:12.033493023 +0000 UTC m=+1422.877215609" Dec 11 14:12:12 crc kubenswrapper[5050]: I1211 14:12:12.039453 5050 generic.go:334] "Generic (PLEG): container finished" podID="6817c570-f6ff-4b08-825a-027a9c8630b0" containerID="99b0a4e0ddebc7695b430edc234ac8f69f475befeae07527d5e1dffee8ce52e4" exitCode=2 Dec 11 14:12:12 crc kubenswrapper[5050]: I1211 14:12:12.052639 5050 generic.go:334] "Generic (PLEG): container finished" podID="ea5ac39a-6fb6-42bb-8ffd-e0036e93a1d7" containerID="927455209d981bf727b535faeebc64bff29620b7370b60af99e0a5354d586a34" exitCode=0 Dec 11 14:12:12 crc kubenswrapper[5050]: I1211 14:12:12.063224 5050 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="openstack/glance2653-account-delete-rgdnl" secret="" err="secret \"galera-openstack-dockercfg-mgt2f\" not found" Dec 11 14:12:12 crc kubenswrapper[5050]: I1211 14:12:12.085923 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/novacell0e20d-account-delete-ntvhn" podStartSLOduration=7.085885959 podStartE2EDuration="7.085885959s" podCreationTimestamp="2025-12-11 14:12:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 14:12:11.987939293 +0000 UTC m=+1422.831661879" watchObservedRunningTime="2025-12-11 14:12:12.085885959 +0000 UTC m=+1422.929608545" Dec 11 14:12:12 crc kubenswrapper[5050]: E1211 14:12:12.142579 5050 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Dec 11 14:12:12 crc kubenswrapper[5050]: E1211 14:12:12.142675 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ca005c2d-f7de-486a-bbd6-a32443582833-operator-scripts podName:ca005c2d-f7de-486a-bbd6-a32443582833 nodeName:}" failed. 
No retries permitted until 2025-12-11 14:12:12.642657972 +0000 UTC m=+1423.486380558 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/ca005c2d-f7de-486a-bbd6-a32443582833-operator-scripts") pod "novacell0e20d-account-delete-ntvhn" (UID: "ca005c2d-f7de-486a-bbd6-a32443582833") : configmap "openstack-scripts" not found Dec 11 14:12:12 crc kubenswrapper[5050]: E1211 14:12:12.143428 5050 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Dec 11 14:12:12 crc kubenswrapper[5050]: E1211 14:12:12.143543 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/66cb4589-6296-417b-87eb-4bcbff7bf580-operator-scripts podName:66cb4589-6296-417b-87eb-4bcbff7bf580 nodeName:}" failed. No retries permitted until 2025-12-11 14:12:12.643512075 +0000 UTC m=+1423.487234661 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/66cb4589-6296-417b-87eb-4bcbff7bf580-operator-scripts") pod "barbican34ac-account-delete-cd525" (UID: "66cb4589-6296-417b-87eb-4bcbff7bf580") : configmap "openstack-scripts" not found Dec 11 14:12:12 crc kubenswrapper[5050]: I1211 14:12:12.185311 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance2653-account-delete-rgdnl" podStartSLOduration=8.185283464 podStartE2EDuration="8.185283464s" podCreationTimestamp="2025-12-11 14:12:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 14:12:12.13331819 +0000 UTC m=+1422.977040786" watchObservedRunningTime="2025-12-11 14:12:12.185283464 +0000 UTC m=+1423.029006050" Dec 11 14:12:12 crc kubenswrapper[5050]: I1211 14:12:12.201245 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-z72zl" podStartSLOduration=5.8090886600000005 podStartE2EDuration="11.201216735s" podCreationTimestamp="2025-12-11 14:12:01 +0000 UTC" firstStartedPulling="2025-12-11 14:12:03.471807094 +0000 UTC m=+1414.315529680" lastFinishedPulling="2025-12-11 14:12:08.863935169 +0000 UTC m=+1419.707657755" observedRunningTime="2025-12-11 14:12:12.164467272 +0000 UTC m=+1423.008189848" watchObservedRunningTime="2025-12-11 14:12:12.201216735 +0000 UTC m=+1423.044939321" Dec 11 14:12:12 crc kubenswrapper[5050]: I1211 14:12:12.201565 5050 generic.go:334] "Generic (PLEG): container finished" podID="2ef32727-7bbd-4a50-8292-4740b34107cc" containerID="8295f391211fffee759bcdccec5b566c3a41abb5db15ca1a987be72e3711b3cf" exitCode=2 Dec 11 14:12:12 crc kubenswrapper[5050]: I1211 14:12:12.201700 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Dec 11 14:12:12 crc kubenswrapper[5050]: I1211 14:12:12.202743 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-78ccc9f8bd-jdg2t" Dec 11 14:12:12 crc kubenswrapper[5050]: I1211 14:12:12.202753 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 11 14:12:12 crc kubenswrapper[5050]: E1211 14:12:12.249493 5050 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Dec 11 14:12:12 crc kubenswrapper[5050]: E1211 14:12:12.250202 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1d484c84-7333-4701-a4f3-655c3d2cbfa7-operator-scripts podName:1d484c84-7333-4701-a4f3-655c3d2cbfa7 nodeName:}" failed. No retries permitted until 2025-12-11 14:12:12.750172337 +0000 UTC m=+1423.593894923 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/1d484c84-7333-4701-a4f3-655c3d2cbfa7-operator-scripts") pod "glance2653-account-delete-rgdnl" (UID: "1d484c84-7333-4701-a4f3-655c3d2cbfa7") : configmap "openstack-scripts" not found Dec 11 14:12:12 crc kubenswrapper[5050]: I1211 14:12:12.278974 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/003b423c-92a0-47f6-8358-003f3ad24ded-config-data" (OuterVolumeSpecName: "config-data") pod "003b423c-92a0-47f6-8358-003f3ad24ded" (UID: "003b423c-92a0-47f6-8358-003f3ad24ded"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:12:12 crc kubenswrapper[5050]: I1211 14:12:12.291167 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/003b423c-92a0-47f6-8358-003f3ad24ded-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "003b423c-92a0-47f6-8358-003f3ad24ded" (UID: "003b423c-92a0-47f6-8358-003f3ad24ded"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:12:12 crc kubenswrapper[5050]: I1211 14:12:12.369392 5050 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/003b423c-92a0-47f6-8358-003f3ad24ded-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:12 crc kubenswrapper[5050]: I1211 14:12:12.369424 5050 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/003b423c-92a0-47f6-8358-003f3ad24ded-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:12 crc kubenswrapper[5050]: I1211 14:12:12.394750 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4de557a0-8b74-4d40-8c91-351ba127eb13-config-data" (OuterVolumeSpecName: "config-data") pod "4de557a0-8b74-4d40-8c91-351ba127eb13" (UID: "4de557a0-8b74-4d40-8c91-351ba127eb13"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:12:12 crc kubenswrapper[5050]: I1211 14:12:12.417369 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4de557a0-8b74-4d40-8c91-351ba127eb13-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4de557a0-8b74-4d40-8c91-351ba127eb13" (UID: "4de557a0-8b74-4d40-8c91-351ba127eb13"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:12:12 crc kubenswrapper[5050]: I1211 14:12:12.446496 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29a26d59-027f-428e-928e-12222b61a350-config-data" (OuterVolumeSpecName: "config-data") pod "29a26d59-027f-428e-928e-12222b61a350" (UID: "29a26d59-027f-428e-928e-12222b61a350"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:12:12 crc kubenswrapper[5050]: I1211 14:12:12.470198 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/003b423c-92a0-47f6-8358-003f3ad24ded-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "003b423c-92a0-47f6-8358-003f3ad24ded" (UID: "003b423c-92a0-47f6-8358-003f3ad24ded"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:12:12 crc kubenswrapper[5050]: I1211 14:12:12.472093 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/003b423c-92a0-47f6-8358-003f3ad24ded-public-tls-certs\") pod \"003b423c-92a0-47f6-8358-003f3ad24ded\" (UID: \"003b423c-92a0-47f6-8358-003f3ad24ded\") " Dec 11 14:12:12 crc kubenswrapper[5050]: I1211 14:12:12.473691 5050 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4de557a0-8b74-4d40-8c91-351ba127eb13-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:12 crc kubenswrapper[5050]: I1211 14:12:12.473725 5050 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29a26d59-027f-428e-928e-12222b61a350-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:12 crc kubenswrapper[5050]: I1211 14:12:12.473759 5050 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4de557a0-8b74-4d40-8c91-351ba127eb13-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:12 crc kubenswrapper[5050]: W1211 14:12:12.473896 5050 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/003b423c-92a0-47f6-8358-003f3ad24ded/volumes/kubernetes.io~secret/public-tls-certs Dec 11 14:12:12 crc kubenswrapper[5050]: I1211 14:12:12.473933 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/003b423c-92a0-47f6-8358-003f3ad24ded-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "003b423c-92a0-47f6-8358-003f3ad24ded" (UID: "003b423c-92a0-47f6-8358-003f3ad24ded"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:12:12 crc kubenswrapper[5050]: I1211 14:12:12.495779 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/003b423c-92a0-47f6-8358-003f3ad24ded-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "003b423c-92a0-47f6-8358-003f3ad24ded" (UID: "003b423c-92a0-47f6-8358-003f3ad24ded"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:12:12 crc kubenswrapper[5050]: I1211 14:12:12.499389 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4de557a0-8b74-4d40-8c91-351ba127eb13-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "4de557a0-8b74-4d40-8c91-351ba127eb13" (UID: "4de557a0-8b74-4d40-8c91-351ba127eb13"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:12:12 crc kubenswrapper[5050]: I1211 14:12:12.560284 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4de557a0-8b74-4d40-8c91-351ba127eb13-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "4de557a0-8b74-4d40-8c91-351ba127eb13" (UID: "4de557a0-8b74-4d40-8c91-351ba127eb13"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:12:12 crc kubenswrapper[5050]: I1211 14:12:12.575894 5050 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/003b423c-92a0-47f6-8358-003f3ad24ded-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:12 crc kubenswrapper[5050]: I1211 14:12:12.575947 5050 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4de557a0-8b74-4d40-8c91-351ba127eb13-public-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:12 crc kubenswrapper[5050]: I1211 14:12:12.575961 5050 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4de557a0-8b74-4d40-8c91-351ba127eb13-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:12 crc kubenswrapper[5050]: I1211 14:12:12.575974 5050 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/003b423c-92a0-47f6-8358-003f3ad24ded-public-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:12 crc kubenswrapper[5050]: E1211 14:12:12.677864 5050 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Dec 11 14:12:12 crc kubenswrapper[5050]: E1211 14:12:12.677976 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ca005c2d-f7de-486a-bbd6-a32443582833-operator-scripts podName:ca005c2d-f7de-486a-bbd6-a32443582833 nodeName:}" failed. No retries permitted until 2025-12-11 14:12:13.677959784 +0000 UTC m=+1424.521682370 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/ca005c2d-f7de-486a-bbd6-a32443582833-operator-scripts") pod "novacell0e20d-account-delete-ntvhn" (UID: "ca005c2d-f7de-486a-bbd6-a32443582833") : configmap "openstack-scripts" not found Dec 11 14:12:12 crc kubenswrapper[5050]: E1211 14:12:12.678061 5050 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Dec 11 14:12:12 crc kubenswrapper[5050]: E1211 14:12:12.678082 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/66cb4589-6296-417b-87eb-4bcbff7bf580-operator-scripts podName:66cb4589-6296-417b-87eb-4bcbff7bf580 nodeName:}" failed. No retries permitted until 2025-12-11 14:12:13.678076437 +0000 UTC m=+1424.521799013 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/66cb4589-6296-417b-87eb-4bcbff7bf580-operator-scripts") pod "barbican34ac-account-delete-cd525" (UID: "66cb4589-6296-417b-87eb-4bcbff7bf580") : configmap "openstack-scripts" not found Dec 11 14:12:12 crc kubenswrapper[5050]: E1211 14:12:12.780252 5050 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Dec 11 14:12:12 crc kubenswrapper[5050]: E1211 14:12:12.780306 5050 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Dec 11 14:12:12 crc kubenswrapper[5050]: E1211 14:12:12.780423 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0891f075-8101-475b-b844-e7cb42a4990b-config-data podName:0891f075-8101-475b-b844-e7cb42a4990b nodeName:}" failed. No retries permitted until 2025-12-11 14:12:20.780397001 +0000 UTC m=+1431.624119577 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/0891f075-8101-475b-b844-e7cb42a4990b-config-data") pod "rabbitmq-server-0" (UID: "0891f075-8101-475b-b844-e7cb42a4990b") : configmap "rabbitmq-config-data" not found Dec 11 14:12:12 crc kubenswrapper[5050]: E1211 14:12:12.780469 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1d484c84-7333-4701-a4f3-655c3d2cbfa7-operator-scripts podName:1d484c84-7333-4701-a4f3-655c3d2cbfa7 nodeName:}" failed. No retries permitted until 2025-12-11 14:12:13.780461003 +0000 UTC m=+1424.624183589 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/1d484c84-7333-4701-a4f3-655c3d2cbfa7-operator-scripts") pod "glance2653-account-delete-rgdnl" (UID: "1d484c84-7333-4701-a4f3-655c3d2cbfa7") : configmap "openstack-scripts" not found Dec 11 14:12:12 crc kubenswrapper[5050]: E1211 14:12:12.855322 5050 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.308s" Dec 11 14:12:12 crc kubenswrapper[5050]: I1211 14:12:12.855367 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican34ac-account-delete-cd525" event={"ID":"66cb4589-6296-417b-87eb-4bcbff7bf580","Type":"ContainerStarted","Data":"75e75d11fd4bc8154957dc778a581689e55af1797f4b8a6f72486493adf59c2a"} Dec 11 14:12:12 crc kubenswrapper[5050]: I1211 14:12:12.855409 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"3386ffea-45ca-41e8-9aa5-61a2923a3394","Type":"ContainerDied","Data":"9ea84eaaf8c76e0514c788691c988324c41b656d2f9f56a1293298caf825c84c"} Dec 11 14:12:12 crc kubenswrapper[5050]: I1211 14:12:12.855428 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9ea84eaaf8c76e0514c788691c988324c41b656d2f9f56a1293298caf825c84c" Dec 11 14:12:12 crc kubenswrapper[5050]: I1211 14:12:12.855440 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/memcached-0"] Dec 11 14:12:12 crc kubenswrapper[5050]: I1211 14:12:12.855460 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-kfqnl"] Dec 11 14:12:12 crc kubenswrapper[5050]: I1211 14:12:12.855520 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"38b1a06e-804a-44dc-8e77-a7d8162f38bd","Type":"ContainerDied","Data":"649e31b89346341e7c81146ba46ab96d338648e2c60254b84fa6acf787ca62d7"} Dec 11 14:12:12 crc kubenswrapper[5050]: I1211 14:12:12.855532 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="649e31b89346341e7c81146ba46ab96d338648e2c60254b84fa6acf787ca62d7" Dec 11 14:12:12 crc kubenswrapper[5050]: I1211 14:12:12.855543 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-kfqnl"] Dec 11 14:12:12 crc kubenswrapper[5050]: I1211 14:12:12.855556 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/novacell0e20d-account-delete-ntvhn" event={"ID":"ca005c2d-f7de-486a-bbd6-a32443582833","Type":"ContainerStarted","Data":"5c8175cc977b1ffa644105126aaa9c04e5ea135e96b268ad6d76b8186e59e7ee"} Dec 11 14:12:12 crc kubenswrapper[5050]: I1211 14:12:12.855567 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1eb418aa-1d3c-469c-8ff4-2b3c86a71e97","Type":"ContainerDied","Data":"b78a33f6104b5f79be70fd93a1678bdfe6caecfc71974569402b8c36abef844e"} Dec 11 14:12:12 crc kubenswrapper[5050]: I1211 14:12:12.855582 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b78a33f6104b5f79be70fd93a1678bdfe6caecfc71974569402b8c36abef844e" Dec 11 14:12:12 crc kubenswrapper[5050]: I1211 14:12:12.855592 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-tnv94"] Dec 11 14:12:12 crc kubenswrapper[5050]: I1211 14:12:12.855602 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"6817c570-f6ff-4b08-825a-027a9c8630b0","Type":"ContainerDied","Data":"99b0a4e0ddebc7695b430edc234ac8f69f475befeae07527d5e1dffee8ce52e4"} Dec 11 14:12:12 crc kubenswrapper[5050]: I1211 14:12:12.855615 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"ea5ac39a-6fb6-42bb-8ffd-e0036e93a1d7","Type":"ContainerDied","Data":"927455209d981bf727b535faeebc64bff29620b7370b60af99e0a5354d586a34"} Dec 11 14:12:12 crc kubenswrapper[5050]: I1211 14:12:12.855629 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance2653-account-delete-rgdnl" event={"ID":"1d484c84-7333-4701-a4f3-655c3d2cbfa7","Type":"ContainerStarted","Data":"aa73f68df593bc0e997d1c636dc2b25763537ada8a021dc55f2b2753a9b161de"} Dec 11 14:12:12 crc kubenswrapper[5050]: I1211 14:12:12.855644 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-7f54bc974d-nvhbp"] Dec 11 14:12:12 crc kubenswrapper[5050]: I1211 14:12:12.855660 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-tnv94"] Dec 11 14:12:12 crc kubenswrapper[5050]: I1211 14:12:12.855672 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"213cfec6-ba42-4dbc-bd9c-051b193e4577","Type":"ContainerDied","Data":"97f6f60fa1e0a707c6a961adcbc21614be41fe1e743674218e2a861231dc6493"} Dec 11 14:12:12 crc kubenswrapper[5050]: I1211 14:12:12.855684 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="97f6f60fa1e0a707c6a961adcbc21614be41fe1e743674218e2a861231dc6493" Dec 11 14:12:12 crc kubenswrapper[5050]: I1211 14:12:12.855692 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-galera-0"] Dec 11 14:12:12 crc kubenswrapper[5050]: I1211 14:12:12.855707 5050 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openstack/keystone-db-create-j9vlr"] Dec 11 14:12:12 crc kubenswrapper[5050]: I1211 14:12:12.855718 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-j9vlr"] Dec 11 14:12:12 crc kubenswrapper[5050]: I1211 14:12:12.855728 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-4d4b-account-create-update-4tzvj"] Dec 11 14:12:12 crc kubenswrapper[5050]: I1211 14:12:12.855738 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-4d4b-account-create-update-4tzvj"] Dec 11 14:12:12 crc kubenswrapper[5050]: I1211 14:12:12.855749 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z72zl" event={"ID":"3e5dbd8a-7796-49d6-b2e3-23f15d35b7f8","Type":"ContainerStarted","Data":"f2162cdc516afc0854a82b78677134d83e4dfa653ed135b76204de329c5a08c2"} Dec 11 14:12:12 crc kubenswrapper[5050]: I1211 14:12:12.855760 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-57f899fb58-v2lwj" event={"ID":"4ad4414f-ca3e-4ff4-9e2a-3ab029df2ebf","Type":"ContainerDied","Data":"963f1743399fd3044bb51fa705a06e61782331edb27271c64f977e938ec02d43"} Dec 11 14:12:12 crc kubenswrapper[5050]: I1211 14:12:12.855772 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="963f1743399fd3044bb51fa705a06e61782331edb27271c64f977e938ec02d43" Dec 11 14:12:12 crc kubenswrapper[5050]: I1211 14:12:12.855784 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2ef32727-7bbd-4a50-8292-4740b34107cc","Type":"ContainerDied","Data":"8295f391211fffee759bcdccec5b566c3a41abb5db15ca1a987be72e3711b3cf"} Dec 11 14:12:12 crc kubenswrapper[5050]: I1211 14:12:12.858549 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/memcached-0" podUID="defedffb-9310-4b18-b7ee-b54040aa5447" containerName="memcached" containerID="cri-o://2a9d3c07c9884ff572de5d859d886e5e90497f2bc5adb397f3f64151ee6e7fd3" gracePeriod=30 Dec 11 14:12:12 crc kubenswrapper[5050]: I1211 14:12:12.860586 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/keystone-7f54bc974d-nvhbp" podUID="36acbdf3-346e-4207-8391-b2a03ef839e5" containerName="keystone-api" containerID="cri-o://5f2c03aa348522be8e65276f7ae37004bcd483651a98c207f1fa66f6b76162d4" gracePeriod=30 Dec 11 14:12:12 crc kubenswrapper[5050]: I1211 14:12:12.896003 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 11 14:12:12 crc kubenswrapper[5050]: I1211 14:12:12.929470 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 11 14:12:12 crc kubenswrapper[5050]: I1211 14:12:12.931986 5050 scope.go:117] "RemoveContainer" containerID="0834c0f817c98ccee2d1fc466612fef74fe94fe1c05828492e898fa4014682d9" Dec 11 14:12:12 crc kubenswrapper[5050]: I1211 14:12:12.952506 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Dec 11 14:12:12 crc kubenswrapper[5050]: I1211 14:12:12.954438 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 11 14:12:12 crc kubenswrapper[5050]: I1211 14:12:12.983908 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/38b1a06e-804a-44dc-8e77-a7d8162f38bd-logs\") pod \"38b1a06e-804a-44dc-8e77-a7d8162f38bd\" (UID: \"38b1a06e-804a-44dc-8e77-a7d8162f38bd\") " Dec 11 14:12:12 crc kubenswrapper[5050]: I1211 14:12:12.984093 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bgv24\" (UniqueName: \"kubernetes.io/projected/38b1a06e-804a-44dc-8e77-a7d8162f38bd-kube-api-access-bgv24\") pod \"38b1a06e-804a-44dc-8e77-a7d8162f38bd\" (UID: \"38b1a06e-804a-44dc-8e77-a7d8162f38bd\") " Dec 11 14:12:12 crc kubenswrapper[5050]: I1211 14:12:12.984152 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/38b1a06e-804a-44dc-8e77-a7d8162f38bd-scripts\") pod \"38b1a06e-804a-44dc-8e77-a7d8162f38bd\" (UID: \"38b1a06e-804a-44dc-8e77-a7d8162f38bd\") " Dec 11 14:12:12 crc kubenswrapper[5050]: I1211 14:12:12.984320 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38b1a06e-804a-44dc-8e77-a7d8162f38bd-combined-ca-bundle\") pod \"38b1a06e-804a-44dc-8e77-a7d8162f38bd\" (UID: \"38b1a06e-804a-44dc-8e77-a7d8162f38bd\") " Dec 11 14:12:12 crc kubenswrapper[5050]: I1211 14:12:12.984441 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/38b1a06e-804a-44dc-8e77-a7d8162f38bd-httpd-run\") pod \"38b1a06e-804a-44dc-8e77-a7d8162f38bd\" (UID: \"38b1a06e-804a-44dc-8e77-a7d8162f38bd\") " Dec 11 14:12:12 crc kubenswrapper[5050]: I1211 14:12:12.984593 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38b1a06e-804a-44dc-8e77-a7d8162f38bd-config-data\") pod \"38b1a06e-804a-44dc-8e77-a7d8162f38bd\" (UID: \"38b1a06e-804a-44dc-8e77-a7d8162f38bd\") " Dec 11 14:12:12 crc kubenswrapper[5050]: I1211 14:12:12.984702 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/38b1a06e-804a-44dc-8e77-a7d8162f38bd-internal-tls-certs\") pod \"38b1a06e-804a-44dc-8e77-a7d8162f38bd\" (UID: \"38b1a06e-804a-44dc-8e77-a7d8162f38bd\") " Dec 11 14:12:12 crc kubenswrapper[5050]: I1211 14:12:12.984851 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"38b1a06e-804a-44dc-8e77-a7d8162f38bd\" (UID: \"38b1a06e-804a-44dc-8e77-a7d8162f38bd\") " Dec 11 14:12:12 crc kubenswrapper[5050]: I1211 14:12:12.990274 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Dec 11 14:12:12 crc kubenswrapper[5050]: I1211 14:12:12.991149 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/38b1a06e-804a-44dc-8e77-a7d8162f38bd-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "38b1a06e-804a-44dc-8e77-a7d8162f38bd" (UID: "38b1a06e-804a-44dc-8e77-a7d8162f38bd"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 14:12:12 crc kubenswrapper[5050]: I1211 14:12:12.994507 5050 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/38b1a06e-804a-44dc-8e77-a7d8162f38bd-httpd-run\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.003351 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/38b1a06e-804a-44dc-8e77-a7d8162f38bd-logs" (OuterVolumeSpecName: "logs") pod "38b1a06e-804a-44dc-8e77-a7d8162f38bd" (UID: "38b1a06e-804a-44dc-8e77-a7d8162f38bd"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.004410 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38b1a06e-804a-44dc-8e77-a7d8162f38bd-scripts" (OuterVolumeSpecName: "scripts") pod "38b1a06e-804a-44dc-8e77-a7d8162f38bd" (UID: "38b1a06e-804a-44dc-8e77-a7d8162f38bd"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.009919 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage12-crc" (OuterVolumeSpecName: "glance") pod "38b1a06e-804a-44dc-8e77-a7d8162f38bd" (UID: "38b1a06e-804a-44dc-8e77-a7d8162f38bd"). InnerVolumeSpecName "local-storage12-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.013164 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.015407 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38b1a06e-804a-44dc-8e77-a7d8162f38bd-kube-api-access-bgv24" (OuterVolumeSpecName: "kube-api-access-bgv24") pod "38b1a06e-804a-44dc-8e77-a7d8162f38bd" (UID: "38b1a06e-804a-44dc-8e77-a7d8162f38bd"). InnerVolumeSpecName "kube-api-access-bgv24". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.020886 5050 scope.go:117] "RemoveContainer" containerID="bab4849e060d80aa6504ac4459a8bc36b1517d2feed47ced5829f252e6c45900" Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.027952 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.028715 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.034687 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38b1a06e-804a-44dc-8e77-a7d8162f38bd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "38b1a06e-804a-44dc-8e77-a7d8162f38bd" (UID: "38b1a06e-804a-44dc-8e77-a7d8162f38bd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.036672 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.036968 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-57f899fb58-v2lwj" Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.050942 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.084791 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron7594-account-delete-8wjht" Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.092745 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38b1a06e-804a-44dc-8e77-a7d8162f38bd-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "38b1a06e-804a-44dc-8e77-a7d8162f38bd" (UID: "38b1a06e-804a-44dc-8e77-a7d8162f38bd"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.093458 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38b1a06e-804a-44dc-8e77-a7d8162f38bd-config-data" (OuterVolumeSpecName: "config-data") pod "38b1a06e-804a-44dc-8e77-a7d8162f38bd" (UID: "38b1a06e-804a-44dc-8e77-a7d8162f38bd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.093980 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-78ccc9f8bd-jdg2t"] Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.097536 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wzgf4\" (UniqueName: \"kubernetes.io/projected/3386ffea-45ca-41e8-9aa5-61a2923a3394-kube-api-access-wzgf4\") pod \"3386ffea-45ca-41e8-9aa5-61a2923a3394\" (UID: \"3386ffea-45ca-41e8-9aa5-61a2923a3394\") " Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.097616 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3386ffea-45ca-41e8-9aa5-61a2923a3394-logs\") pod \"3386ffea-45ca-41e8-9aa5-61a2923a3394\" (UID: \"3386ffea-45ca-41e8-9aa5-61a2923a3394\") " Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.097666 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3386ffea-45ca-41e8-9aa5-61a2923a3394-combined-ca-bundle\") pod \"3386ffea-45ca-41e8-9aa5-61a2923a3394\" (UID: \"3386ffea-45ca-41e8-9aa5-61a2923a3394\") " Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.097766 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/213cfec6-ba42-4dbc-bd9c-051b193e4577-combined-ca-bundle\") pod \"213cfec6-ba42-4dbc-bd9c-051b193e4577\" (UID: \"213cfec6-ba42-4dbc-bd9c-051b193e4577\") " Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.097796 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/213cfec6-ba42-4dbc-bd9c-051b193e4577-public-tls-certs\") pod \"213cfec6-ba42-4dbc-bd9c-051b193e4577\" (UID: \"213cfec6-ba42-4dbc-bd9c-051b193e4577\") " Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.097876 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"213cfec6-ba42-4dbc-bd9c-051b193e4577\" (UID: 
\"213cfec6-ba42-4dbc-bd9c-051b193e4577\") " Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.097914 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3386ffea-45ca-41e8-9aa5-61a2923a3394-internal-tls-certs\") pod \"3386ffea-45ca-41e8-9aa5-61a2923a3394\" (UID: \"3386ffea-45ca-41e8-9aa5-61a2923a3394\") " Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.097980 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p5d5t\" (UniqueName: \"kubernetes.io/projected/213cfec6-ba42-4dbc-bd9c-051b193e4577-kube-api-access-p5d5t\") pod \"213cfec6-ba42-4dbc-bd9c-051b193e4577\" (UID: \"213cfec6-ba42-4dbc-bd9c-051b193e4577\") " Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.098028 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/213cfec6-ba42-4dbc-bd9c-051b193e4577-scripts\") pod \"213cfec6-ba42-4dbc-bd9c-051b193e4577\" (UID: \"213cfec6-ba42-4dbc-bd9c-051b193e4577\") " Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.098097 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3386ffea-45ca-41e8-9aa5-61a2923a3394-config-data\") pod \"3386ffea-45ca-41e8-9aa5-61a2923a3394\" (UID: \"3386ffea-45ca-41e8-9aa5-61a2923a3394\") " Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.098115 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/213cfec6-ba42-4dbc-bd9c-051b193e4577-logs\") pod \"213cfec6-ba42-4dbc-bd9c-051b193e4577\" (UID: \"213cfec6-ba42-4dbc-bd9c-051b193e4577\") " Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.098151 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/213cfec6-ba42-4dbc-bd9c-051b193e4577-httpd-run\") pod \"213cfec6-ba42-4dbc-bd9c-051b193e4577\" (UID: \"213cfec6-ba42-4dbc-bd9c-051b193e4577\") " Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.098204 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/213cfec6-ba42-4dbc-bd9c-051b193e4577-config-data\") pod \"213cfec6-ba42-4dbc-bd9c-051b193e4577\" (UID: \"213cfec6-ba42-4dbc-bd9c-051b193e4577\") " Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.098227 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3386ffea-45ca-41e8-9aa5-61a2923a3394-public-tls-certs\") pod \"3386ffea-45ca-41e8-9aa5-61a2923a3394\" (UID: \"3386ffea-45ca-41e8-9aa5-61a2923a3394\") " Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.098544 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3386ffea-45ca-41e8-9aa5-61a2923a3394-logs" (OuterVolumeSpecName: "logs") pod "3386ffea-45ca-41e8-9aa5-61a2923a3394" (UID: "3386ffea-45ca-41e8-9aa5-61a2923a3394"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.099098 5050 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38b1a06e-804a-44dc-8e77-a7d8162f38bd-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.099121 5050 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/38b1a06e-804a-44dc-8e77-a7d8162f38bd-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.099150 5050 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" " Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.099162 5050 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/38b1a06e-804a-44dc-8e77-a7d8162f38bd-logs\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.099172 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bgv24\" (UniqueName: \"kubernetes.io/projected/38b1a06e-804a-44dc-8e77-a7d8162f38bd-kube-api-access-bgv24\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.099224 5050 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/38b1a06e-804a-44dc-8e77-a7d8162f38bd-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.099234 5050 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38b1a06e-804a-44dc-8e77-a7d8162f38bd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.099244 5050 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3386ffea-45ca-41e8-9aa5-61a2923a3394-logs\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.101304 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/213cfec6-ba42-4dbc-bd9c-051b193e4577-logs" (OuterVolumeSpecName: "logs") pod "213cfec6-ba42-4dbc-bd9c-051b193e4577" (UID: "213cfec6-ba42-4dbc-bd9c-051b193e4577"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.103148 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/213cfec6-ba42-4dbc-bd9c-051b193e4577-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "213cfec6-ba42-4dbc-bd9c-051b193e4577" (UID: "213cfec6-ba42-4dbc-bd9c-051b193e4577"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.113646 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3386ffea-45ca-41e8-9aa5-61a2923a3394-kube-api-access-wzgf4" (OuterVolumeSpecName: "kube-api-access-wzgf4") pod "3386ffea-45ca-41e8-9aa5-61a2923a3394" (UID: "3386ffea-45ca-41e8-9aa5-61a2923a3394"). InnerVolumeSpecName "kube-api-access-wzgf4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.115078 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/213cfec6-ba42-4dbc-bd9c-051b193e4577-kube-api-access-p5d5t" (OuterVolumeSpecName: "kube-api-access-p5d5t") pod "213cfec6-ba42-4dbc-bd9c-051b193e4577" (UID: "213cfec6-ba42-4dbc-bd9c-051b193e4577"). InnerVolumeSpecName "kube-api-access-p5d5t". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.115483 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-78ccc9f8bd-jdg2t"] Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.121719 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "glance") pod "213cfec6-ba42-4dbc-bd9c-051b193e4577" (UID: "213cfec6-ba42-4dbc-bd9c-051b193e4577"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.127885 5050 scope.go:117] "RemoveContainer" containerID="0834c0f817c98ccee2d1fc466612fef74fe94fe1c05828492e898fa4014682d9" Dec 11 14:12:13 crc kubenswrapper[5050]: E1211 14:12:13.131254 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0834c0f817c98ccee2d1fc466612fef74fe94fe1c05828492e898fa4014682d9\": container with ID starting with 0834c0f817c98ccee2d1fc466612fef74fe94fe1c05828492e898fa4014682d9 not found: ID does not exist" containerID="0834c0f817c98ccee2d1fc466612fef74fe94fe1c05828492e898fa4014682d9" Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.131328 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0834c0f817c98ccee2d1fc466612fef74fe94fe1c05828492e898fa4014682d9"} err="failed to get container status \"0834c0f817c98ccee2d1fc466612fef74fe94fe1c05828492e898fa4014682d9\": rpc error: code = NotFound desc = could not find container \"0834c0f817c98ccee2d1fc466612fef74fe94fe1c05828492e898fa4014682d9\": container with ID starting with 0834c0f817c98ccee2d1fc466612fef74fe94fe1c05828492e898fa4014682d9 not found: ID does not exist" Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.131374 5050 scope.go:117] "RemoveContainer" containerID="bab4849e060d80aa6504ac4459a8bc36b1517d2feed47ced5829f252e6c45900" Dec 11 14:12:13 crc kubenswrapper[5050]: E1211 14:12:13.131929 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bab4849e060d80aa6504ac4459a8bc36b1517d2feed47ced5829f252e6c45900\": container with ID starting with bab4849e060d80aa6504ac4459a8bc36b1517d2feed47ced5829f252e6c45900 not found: ID does not exist" containerID="bab4849e060d80aa6504ac4459a8bc36b1517d2feed47ced5829f252e6c45900" Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.131978 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bab4849e060d80aa6504ac4459a8bc36b1517d2feed47ced5829f252e6c45900"} err="failed to get container status \"bab4849e060d80aa6504ac4459a8bc36b1517d2feed47ced5829f252e6c45900\": rpc error: code = NotFound desc = could not find container \"bab4849e060d80aa6504ac4459a8bc36b1517d2feed47ced5829f252e6c45900\": container with ID starting with bab4849e060d80aa6504ac4459a8bc36b1517d2feed47ced5829f252e6c45900 not found: ID 
does not exist" Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.132032 5050 scope.go:117] "RemoveContainer" containerID="0834c0f817c98ccee2d1fc466612fef74fe94fe1c05828492e898fa4014682d9" Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.132182 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/213cfec6-ba42-4dbc-bd9c-051b193e4577-scripts" (OuterVolumeSpecName: "scripts") pod "213cfec6-ba42-4dbc-bd9c-051b193e4577" (UID: "213cfec6-ba42-4dbc-bd9c-051b193e4577"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.133232 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0834c0f817c98ccee2d1fc466612fef74fe94fe1c05828492e898fa4014682d9"} err="failed to get container status \"0834c0f817c98ccee2d1fc466612fef74fe94fe1c05828492e898fa4014682d9\": rpc error: code = NotFound desc = could not find container \"0834c0f817c98ccee2d1fc466612fef74fe94fe1c05828492e898fa4014682d9\": container with ID starting with 0834c0f817c98ccee2d1fc466612fef74fe94fe1c05828492e898fa4014682d9 not found: ID does not exist" Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.133268 5050 scope.go:117] "RemoveContainer" containerID="bab4849e060d80aa6504ac4459a8bc36b1517d2feed47ced5829f252e6c45900" Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.140692 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bab4849e060d80aa6504ac4459a8bc36b1517d2feed47ced5829f252e6c45900"} err="failed to get container status \"bab4849e060d80aa6504ac4459a8bc36b1517d2feed47ced5829f252e6c45900\": rpc error: code = NotFound desc = could not find container \"bab4849e060d80aa6504ac4459a8bc36b1517d2feed47ced5829f252e6c45900\": container with ID starting with bab4849e060d80aa6504ac4459a8bc36b1517d2feed47ced5829f252e6c45900 not found: ID does not exist" Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.140756 5050 scope.go:117] "RemoveContainer" containerID="c999df53c600f82fa92bc84444337d1373326ba3f2b76682afad53362cb34c3d" Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.142256 5050 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage12-crc" (UniqueName: "kubernetes.io/local-volume/local-storage12-crc") on node "crc" Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.160000 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3386ffea-45ca-41e8-9aa5-61a2923a3394-config-data" (OuterVolumeSpecName: "config-data") pod "3386ffea-45ca-41e8-9aa5-61a2923a3394" (UID: "3386ffea-45ca-41e8-9aa5-61a2923a3394"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.184439 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/213cfec6-ba42-4dbc-bd9c-051b193e4577-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "213cfec6-ba42-4dbc-bd9c-051b193e4577" (UID: "213cfec6-ba42-4dbc-bd9c-051b193e4577"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.200761 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6817c570-f6ff-4b08-825a-027a9c8630b0-combined-ca-bundle\") pod \"6817c570-f6ff-4b08-825a-027a9c8630b0\" (UID: \"6817c570-f6ff-4b08-825a-027a9c8630b0\") " Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.200818 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ad4414f-ca3e-4ff4-9e2a-3ab029df2ebf-internal-tls-certs\") pod \"4ad4414f-ca3e-4ff4-9e2a-3ab029df2ebf\" (UID: \"4ad4414f-ca3e-4ff4-9e2a-3ab029df2ebf\") " Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.200880 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4ad4414f-ca3e-4ff4-9e2a-3ab029df2ebf-config-data-custom\") pod \"4ad4414f-ca3e-4ff4-9e2a-3ab029df2ebf\" (UID: \"4ad4414f-ca3e-4ff4-9e2a-3ab029df2ebf\") " Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.200900 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ad4414f-ca3e-4ff4-9e2a-3ab029df2ebf-combined-ca-bundle\") pod \"4ad4414f-ca3e-4ff4-9e2a-3ab029df2ebf\" (UID: \"4ad4414f-ca3e-4ff4-9e2a-3ab029df2ebf\") " Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.200923 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ad4414f-ca3e-4ff4-9e2a-3ab029df2ebf-config-data\") pod \"4ad4414f-ca3e-4ff4-9e2a-3ab029df2ebf\" (UID: \"4ad4414f-ca3e-4ff4-9e2a-3ab029df2ebf\") " Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.200951 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kg874\" (UniqueName: \"kubernetes.io/projected/4ad4414f-ca3e-4ff4-9e2a-3ab029df2ebf-kube-api-access-kg874\") pod \"4ad4414f-ca3e-4ff4-9e2a-3ab029df2ebf\" (UID: \"4ad4414f-ca3e-4ff4-9e2a-3ab029df2ebf\") " Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.200979 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1eb418aa-1d3c-469c-8ff4-2b3c86a71e97-logs\") pod \"1eb418aa-1d3c-469c-8ff4-2b3c86a71e97\" (UID: \"1eb418aa-1d3c-469c-8ff4-2b3c86a71e97\") " Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.201047 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h9hsj\" (UniqueName: \"kubernetes.io/projected/bc2e956f-6026-4a75-b11a-5106aad626a5-kube-api-access-h9hsj\") pod \"bc2e956f-6026-4a75-b11a-5106aad626a5\" (UID: \"bc2e956f-6026-4a75-b11a-5106aad626a5\") " Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.201091 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-trkj4\" (UniqueName: \"kubernetes.io/projected/6817c570-f6ff-4b08-825a-027a9c8630b0-kube-api-access-trkj4\") pod \"6817c570-f6ff-4b08-825a-027a9c8630b0\" (UID: \"6817c570-f6ff-4b08-825a-027a9c8630b0\") " Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.201126 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1eb418aa-1d3c-469c-8ff4-2b3c86a71e97-combined-ca-bundle\") pod 
\"1eb418aa-1d3c-469c-8ff4-2b3c86a71e97\" (UID: \"1eb418aa-1d3c-469c-8ff4-2b3c86a71e97\") " Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.201158 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1eb418aa-1d3c-469c-8ff4-2b3c86a71e97-config-data\") pod \"1eb418aa-1d3c-469c-8ff4-2b3c86a71e97\" (UID: \"1eb418aa-1d3c-469c-8ff4-2b3c86a71e97\") " Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.201245 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bc2e956f-6026-4a75-b11a-5106aad626a5-operator-scripts\") pod \"bc2e956f-6026-4a75-b11a-5106aad626a5\" (UID: \"bc2e956f-6026-4a75-b11a-5106aad626a5\") " Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.201307 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/6817c570-f6ff-4b08-825a-027a9c8630b0-kube-state-metrics-tls-config\") pod \"6817c570-f6ff-4b08-825a-027a9c8630b0\" (UID: \"6817c570-f6ff-4b08-825a-027a9c8630b0\") " Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.201385 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wckq9\" (UniqueName: \"kubernetes.io/projected/1eb418aa-1d3c-469c-8ff4-2b3c86a71e97-kube-api-access-wckq9\") pod \"1eb418aa-1d3c-469c-8ff4-2b3c86a71e97\" (UID: \"1eb418aa-1d3c-469c-8ff4-2b3c86a71e97\") " Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.201405 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea5ac39a-6fb6-42bb-8ffd-e0036e93a1d7-combined-ca-bundle\") pod \"ea5ac39a-6fb6-42bb-8ffd-e0036e93a1d7\" (UID: \"ea5ac39a-6fb6-42bb-8ffd-e0036e93a1d7\") " Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.201437 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/6817c570-f6ff-4b08-825a-027a9c8630b0-kube-state-metrics-tls-certs\") pod \"6817c570-f6ff-4b08-825a-027a9c8630b0\" (UID: \"6817c570-f6ff-4b08-825a-027a9c8630b0\") " Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.201483 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ad4414f-ca3e-4ff4-9e2a-3ab029df2ebf-public-tls-certs\") pod \"4ad4414f-ca3e-4ff4-9e2a-3ab029df2ebf\" (UID: \"4ad4414f-ca3e-4ff4-9e2a-3ab029df2ebf\") " Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.201501 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea5ac39a-6fb6-42bb-8ffd-e0036e93a1d7-config-data\") pod \"ea5ac39a-6fb6-42bb-8ffd-e0036e93a1d7\" (UID: \"ea5ac39a-6fb6-42bb-8ffd-e0036e93a1d7\") " Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.201522 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/1eb418aa-1d3c-469c-8ff4-2b3c86a71e97-nova-metadata-tls-certs\") pod \"1eb418aa-1d3c-469c-8ff4-2b3c86a71e97\" (UID: \"1eb418aa-1d3c-469c-8ff4-2b3c86a71e97\") " Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.201561 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-77m6b\" (UniqueName: 
\"kubernetes.io/projected/ea5ac39a-6fb6-42bb-8ffd-e0036e93a1d7-kube-api-access-77m6b\") pod \"ea5ac39a-6fb6-42bb-8ffd-e0036e93a1d7\" (UID: \"ea5ac39a-6fb6-42bb-8ffd-e0036e93a1d7\") " Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.201581 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4ad4414f-ca3e-4ff4-9e2a-3ab029df2ebf-logs\") pod \"4ad4414f-ca3e-4ff4-9e2a-3ab029df2ebf\" (UID: \"4ad4414f-ca3e-4ff4-9e2a-3ab029df2ebf\") " Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.202076 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wzgf4\" (UniqueName: \"kubernetes.io/projected/3386ffea-45ca-41e8-9aa5-61a2923a3394-kube-api-access-wzgf4\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.202100 5050 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/213cfec6-ba42-4dbc-bd9c-051b193e4577-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.202112 5050 reconciler_common.go:293] "Volume detached for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.202137 5050 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.202148 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p5d5t\" (UniqueName: \"kubernetes.io/projected/213cfec6-ba42-4dbc-bd9c-051b193e4577-kube-api-access-p5d5t\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.202158 5050 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/213cfec6-ba42-4dbc-bd9c-051b193e4577-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.202167 5050 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3386ffea-45ca-41e8-9aa5-61a2923a3394-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.202177 5050 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/213cfec6-ba42-4dbc-bd9c-051b193e4577-logs\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.202188 5050 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/213cfec6-ba42-4dbc-bd9c-051b193e4577-httpd-run\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.204303 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3386ffea-45ca-41e8-9aa5-61a2923a3394-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3386ffea-45ca-41e8-9aa5-61a2923a3394" (UID: "3386ffea-45ca-41e8-9aa5-61a2923a3394"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.211296 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc2e956f-6026-4a75-b11a-5106aad626a5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bc2e956f-6026-4a75-b11a-5106aad626a5" (UID: "bc2e956f-6026-4a75-b11a-5106aad626a5"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.212120 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1eb418aa-1d3c-469c-8ff4-2b3c86a71e97-logs" (OuterVolumeSpecName: "logs") pod "1eb418aa-1d3c-469c-8ff4-2b3c86a71e97" (UID: "1eb418aa-1d3c-469c-8ff4-2b3c86a71e97"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.214597 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1eb418aa-1d3c-469c-8ff4-2b3c86a71e97-kube-api-access-wckq9" (OuterVolumeSpecName: "kube-api-access-wckq9") pod "1eb418aa-1d3c-469c-8ff4-2b3c86a71e97" (UID: "1eb418aa-1d3c-469c-8ff4-2b3c86a71e97"). InnerVolumeSpecName "kube-api-access-wckq9". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.215281 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ad4414f-ca3e-4ff4-9e2a-3ab029df2ebf-logs" (OuterVolumeSpecName: "logs") pod "4ad4414f-ca3e-4ff4-9e2a-3ab029df2ebf" (UID: "4ad4414f-ca3e-4ff4-9e2a-3ab029df2ebf"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.222135 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ad4414f-ca3e-4ff4-9e2a-3ab029df2ebf-kube-api-access-kg874" (OuterVolumeSpecName: "kube-api-access-kg874") pod "4ad4414f-ca3e-4ff4-9e2a-3ab029df2ebf" (UID: "4ad4414f-ca3e-4ff4-9e2a-3ab029df2ebf"). InnerVolumeSpecName "kube-api-access-kg874". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.222936 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ad4414f-ca3e-4ff4-9e2a-3ab029df2ebf-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "4ad4414f-ca3e-4ff4-9e2a-3ab029df2ebf" (UID: "4ad4414f-ca3e-4ff4-9e2a-3ab029df2ebf"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.227283 5050 generic.go:334] "Generic (PLEG): container finished" podID="d917f471-6630-4e96-a0e4-cbde631da4a8" containerID="b9c35ee719250fa6dc750f32fbbe1dbad446b365702d9a03226e35b24b159714" exitCode=1 Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.227539 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/novaapi4326-account-delete-9bmsz" event={"ID":"d917f471-6630-4e96-a0e4-cbde631da4a8","Type":"ContainerDied","Data":"b9c35ee719250fa6dc750f32fbbe1dbad446b365702d9a03226e35b24b159714"} Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.228937 5050 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." 
pod="openstack/novaapi4326-account-delete-9bmsz" secret="" err="secret \"galera-openstack-dockercfg-mgt2f\" not found" Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.228994 5050 scope.go:117] "RemoveContainer" containerID="b9c35ee719250fa6dc750f32fbbe1dbad446b365702d9a03226e35b24b159714" Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.244779 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea5ac39a-6fb6-42bb-8ffd-e0036e93a1d7-kube-api-access-77m6b" (OuterVolumeSpecName: "kube-api-access-77m6b") pod "ea5ac39a-6fb6-42bb-8ffd-e0036e93a1d7" (UID: "ea5ac39a-6fb6-42bb-8ffd-e0036e93a1d7"). InnerVolumeSpecName "kube-api-access-77m6b". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.252243 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/213cfec6-ba42-4dbc-bd9c-051b193e4577-config-data" (OuterVolumeSpecName: "config-data") pod "213cfec6-ba42-4dbc-bd9c-051b193e4577" (UID: "213cfec6-ba42-4dbc-bd9c-051b193e4577"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.260653 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc2e956f-6026-4a75-b11a-5106aad626a5-kube-api-access-h9hsj" (OuterVolumeSpecName: "kube-api-access-h9hsj") pod "bc2e956f-6026-4a75-b11a-5106aad626a5" (UID: "bc2e956f-6026-4a75-b11a-5106aad626a5"). InnerVolumeSpecName "kube-api-access-h9hsj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.270656 5050 generic.go:334] "Generic (PLEG): container finished" podID="e365d825-a3cb-42a3-8a00-8a9be42ed290" containerID="38bea9b8fa28a96b68c698d0a2cad56e8bf2ac240b8c07d2c11031d667e89fc2" exitCode=1 Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.270767 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder27b7-account-delete-h9wjn" event={"ID":"e365d825-a3cb-42a3-8a00-8a9be42ed290","Type":"ContainerDied","Data":"38bea9b8fa28a96b68c698d0a2cad56e8bf2ac240b8c07d2c11031d667e89fc2"} Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.271499 5050 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="openstack/cinder27b7-account-delete-h9wjn" secret="" err="secret \"galera-openstack-dockercfg-mgt2f\" not found" Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.271564 5050 scope.go:117] "RemoveContainer" containerID="38bea9b8fa28a96b68c698d0a2cad56e8bf2ac240b8c07d2c11031d667e89fc2" Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.282740 5050 generic.go:334] "Generic (PLEG): container finished" podID="66cb4589-6296-417b-87eb-4bcbff7bf580" containerID="75e75d11fd4bc8154957dc778a581689e55af1797f4b8a6f72486493adf59c2a" exitCode=1 Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.282853 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican34ac-account-delete-cd525" event={"ID":"66cb4589-6296-417b-87eb-4bcbff7bf580","Type":"ContainerDied","Data":"75e75d11fd4bc8154957dc778a581689e55af1797f4b8a6f72486493adf59c2a"} Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.284974 5050 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." 
pod="openstack/barbican34ac-account-delete-cd525" secret="" err="secret \"galera-openstack-dockercfg-mgt2f\" not found" Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.285049 5050 scope.go:117] "RemoveContainer" containerID="75e75d11fd4bc8154957dc778a581689e55af1797f4b8a6f72486493adf59c2a" Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.300221 5050 generic.go:334] "Generic (PLEG): container finished" podID="ca005c2d-f7de-486a-bbd6-a32443582833" containerID="5c8175cc977b1ffa644105126aaa9c04e5ea135e96b268ad6d76b8186e59e7ee" exitCode=1 Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.300321 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/novacell0e20d-account-delete-ntvhn" event={"ID":"ca005c2d-f7de-486a-bbd6-a32443582833","Type":"ContainerDied","Data":"5c8175cc977b1ffa644105126aaa9c04e5ea135e96b268ad6d76b8186e59e7ee"} Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.301176 5050 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="openstack/novacell0e20d-account-delete-ntvhn" secret="" err="secret \"galera-openstack-dockercfg-mgt2f\" not found" Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.301232 5050 scope.go:117] "RemoveContainer" containerID="5c8175cc977b1ffa644105126aaa9c04e5ea135e96b268ad6d76b8186e59e7ee" Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.305198 5050 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4ad4414f-ca3e-4ff4-9e2a-3ab029df2ebf-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.305241 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kg874\" (UniqueName: \"kubernetes.io/projected/4ad4414f-ca3e-4ff4-9e2a-3ab029df2ebf-kube-api-access-kg874\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.305256 5050 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1eb418aa-1d3c-469c-8ff4-2b3c86a71e97-logs\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.305269 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h9hsj\" (UniqueName: \"kubernetes.io/projected/bc2e956f-6026-4a75-b11a-5106aad626a5-kube-api-access-h9hsj\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.305281 5050 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bc2e956f-6026-4a75-b11a-5106aad626a5-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.305292 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wckq9\" (UniqueName: \"kubernetes.io/projected/1eb418aa-1d3c-469c-8ff4-2b3c86a71e97-kube-api-access-wckq9\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.305304 5050 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/213cfec6-ba42-4dbc-bd9c-051b193e4577-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.305314 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-77m6b\" (UniqueName: \"kubernetes.io/projected/ea5ac39a-6fb6-42bb-8ffd-e0036e93a1d7-kube-api-access-77m6b\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:13 crc kubenswrapper[5050]: 
I1211 14:12:13.305324 5050 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4ad4414f-ca3e-4ff4-9e2a-3ab029df2ebf-logs\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.305333 5050 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3386ffea-45ca-41e8-9aa5-61a2923a3394-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:13 crc kubenswrapper[5050]: E1211 14:12:13.305342 5050 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Dec 11 14:12:13 crc kubenswrapper[5050]: E1211 14:12:13.305445 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d917f471-6630-4e96-a0e4-cbde631da4a8-operator-scripts podName:d917f471-6630-4e96-a0e4-cbde631da4a8 nodeName:}" failed. No retries permitted until 2025-12-11 14:12:13.805414494 +0000 UTC m=+1424.649137280 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/d917f471-6630-4e96-a0e4-cbde631da4a8-operator-scripts") pod "novaapi4326-account-delete-9bmsz" (UID: "d917f471-6630-4e96-a0e4-cbde631da4a8") : configmap "openstack-scripts" not found Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.307477 5050 generic.go:334] "Generic (PLEG): container finished" podID="bc8efd61-e4fb-4ec0-834a-b495797039a1" containerID="d5af5116657e860183a9cb6b884cd9869678b597a90e497443cb7f1f22d522ac" exitCode=1 Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.308121 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placementd2f2-account-delete-njqwh" event={"ID":"bc8efd61-e4fb-4ec0-834a-b495797039a1","Type":"ContainerDied","Data":"d5af5116657e860183a9cb6b884cd9869678b597a90e497443cb7f1f22d522ac"} Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.308247 5050 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="openstack/placementd2f2-account-delete-njqwh" secret="" err="secret \"galera-openstack-dockercfg-mgt2f\" not found" Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.308284 5050 scope.go:117] "RemoveContainer" containerID="d5af5116657e860183a9cb6b884cd9869678b597a90e497443cb7f1f22d522ac" Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.309296 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6817c570-f6ff-4b08-825a-027a9c8630b0-kube-api-access-trkj4" (OuterVolumeSpecName: "kube-api-access-trkj4") pod "6817c570-f6ff-4b08-825a-027a9c8630b0" (UID: "6817c570-f6ff-4b08-825a-027a9c8630b0"). InnerVolumeSpecName "kube-api-access-trkj4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.319360 5050 generic.go:334] "Generic (PLEG): container finished" podID="1d484c84-7333-4701-a4f3-655c3d2cbfa7" containerID="aa73f68df593bc0e997d1c636dc2b25763537ada8a021dc55f2b2753a9b161de" exitCode=1 Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.319484 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance2653-account-delete-rgdnl" event={"ID":"1d484c84-7333-4701-a4f3-655c3d2cbfa7","Type":"ContainerDied","Data":"aa73f68df593bc0e997d1c636dc2b25763537ada8a021dc55f2b2753a9b161de"} Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.320788 5050 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." 
pod="openstack/glance2653-account-delete-rgdnl" secret="" err="secret \"galera-openstack-dockercfg-mgt2f\" not found" Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.320868 5050 scope.go:117] "RemoveContainer" containerID="aa73f68df593bc0e997d1c636dc2b25763537ada8a021dc55f2b2753a9b161de" Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.332938 5050 generic.go:334] "Generic (PLEG): container finished" podID="2ef32727-7bbd-4a50-8292-4740b34107cc" containerID="2df25eb8394651a0a8bc136ef8c2753058167b5dbecc7a2a0c134fede2448a38" exitCode=0 Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.332974 5050 generic.go:334] "Generic (PLEG): container finished" podID="2ef32727-7bbd-4a50-8292-4740b34107cc" containerID="f6d56512f1f5665c8ed3d4a8969144001b8fe4f5868e5cfd74524cc0c96c62f3" exitCode=0 Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.333047 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2ef32727-7bbd-4a50-8292-4740b34107cc","Type":"ContainerDied","Data":"2df25eb8394651a0a8bc136ef8c2753058167b5dbecc7a2a0c134fede2448a38"} Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.333082 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2ef32727-7bbd-4a50-8292-4740b34107cc","Type":"ContainerDied","Data":"f6d56512f1f5665c8ed3d4a8969144001b8fe4f5868e5cfd74524cc0c96c62f3"} Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.335078 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron7594-account-delete-8wjht" event={"ID":"bc2e956f-6026-4a75-b11a-5106aad626a5","Type":"ContainerDied","Data":"aca8a1f074ec9c79e910d77e4d9193bfb252981c026235cc3d9b6067c7e6325d"} Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.335099 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aca8a1f074ec9c79e910d77e4d9193bfb252981c026235cc3d9b6067c7e6325d" Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.335160 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron7594-account-delete-8wjht" Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.348323 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.348608 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"6817c570-f6ff-4b08-825a-027a9c8630b0","Type":"ContainerDied","Data":"be9cff5d067d029402380a5e60a79c3e8a579a7a9b9b4d9e3b7081d07b9b74e2"} Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.354597 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"ea5ac39a-6fb6-42bb-8ffd-e0036e93a1d7","Type":"ContainerDied","Data":"af19112315c9608df5732560911f46a812f8af4824b95694609e39556ada9276"} Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.354618 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.376953 5050 generic.go:334] "Generic (PLEG): container finished" podID="458f05be-2fd6-44d9-8034-f077356964ce" containerID="7fc0726972676985eb911b818bc159c8c1b12a1ca0e646ddda6558ea21079201" exitCode=0 Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.377071 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.377575 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"458f05be-2fd6-44d9-8034-f077356964ce","Type":"ContainerDied","Data":"7fc0726972676985eb911b818bc159c8c1b12a1ca0e646ddda6558ea21079201"} Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.378566 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.380228 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.380919 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.381503 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-57f899fb58-v2lwj" Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.407077 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-trkj4\" (UniqueName: \"kubernetes.io/projected/6817c570-f6ff-4b08-825a-027a9c8630b0-kube-api-access-trkj4\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:13 crc kubenswrapper[5050]: E1211 14:12:13.407568 5050 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Dec 11 14:12:13 crc kubenswrapper[5050]: E1211 14:12:13.407624 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/bc8efd61-e4fb-4ec0-834a-b495797039a1-operator-scripts podName:bc8efd61-e4fb-4ec0-834a-b495797039a1 nodeName:}" failed. No retries permitted until 2025-12-11 14:12:13.907608575 +0000 UTC m=+1424.751331161 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/bc8efd61-e4fb-4ec0-834a-b495797039a1-operator-scripts") pod "placementd2f2-account-delete-njqwh" (UID: "bc8efd61-e4fb-4ec0-834a-b495797039a1") : configmap "openstack-scripts" not found Dec 11 14:12:13 crc kubenswrapper[5050]: E1211 14:12:13.407867 5050 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Dec 11 14:12:13 crc kubenswrapper[5050]: E1211 14:12:13.407896 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e365d825-a3cb-42a3-8a00-8a9be42ed290-operator-scripts podName:e365d825-a3cb-42a3-8a00-8a9be42ed290 nodeName:}" failed. No retries permitted until 2025-12-11 14:12:13.907889442 +0000 UTC m=+1424.751612018 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/e365d825-a3cb-42a3-8a00-8a9be42ed290-operator-scripts") pod "cinder27b7-account-delete-h9wjn" (UID: "e365d825-a3cb-42a3-8a00-8a9be42ed290") : configmap "openstack-scripts" not found Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.495112 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-galera-0" podUID="55b7e535-46f6-403b-9cdf-bf172dba97b6" containerName="galera" containerID="cri-o://867f3427f68bf3282e541c138203522b401306d30e252d2e6f7d84ff42514106" gracePeriod=30 Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.514777 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3386ffea-45ca-41e8-9aa5-61a2923a3394-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "3386ffea-45ca-41e8-9aa5-61a2923a3394" (UID: "3386ffea-45ca-41e8-9aa5-61a2923a3394"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.515662 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1eb418aa-1d3c-469c-8ff4-2b3c86a71e97-config-data" (OuterVolumeSpecName: "config-data") pod "1eb418aa-1d3c-469c-8ff4-2b3c86a71e97" (UID: "1eb418aa-1d3c-469c-8ff4-2b3c86a71e97"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.516419 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3386ffea-45ca-41e8-9aa5-61a2923a3394-public-tls-certs\") pod \"3386ffea-45ca-41e8-9aa5-61a2923a3394\" (UID: \"3386ffea-45ca-41e8-9aa5-61a2923a3394\") " Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.527070 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/213cfec6-ba42-4dbc-bd9c-051b193e4577-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "213cfec6-ba42-4dbc-bd9c-051b193e4577" (UID: "213cfec6-ba42-4dbc-bd9c-051b193e4577"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.540517 5050 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.544096 5050 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/213cfec6-ba42-4dbc-bd9c-051b193e4577-public-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.544147 5050 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1eb418aa-1d3c-469c-8ff4-2b3c86a71e97-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.544159 5050 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.574104 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="003b423c-92a0-47f6-8358-003f3ad24ded" path="/var/lib/kubelet/pods/003b423c-92a0-47f6-8358-003f3ad24ded/volumes" Dec 11 14:12:13 crc kubenswrapper[5050]: W1211 14:12:13.574393 5050 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/3386ffea-45ca-41e8-9aa5-61a2923a3394/volumes/kubernetes.io~secret/public-tls-certs Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.574459 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3386ffea-45ca-41e8-9aa5-61a2923a3394-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "3386ffea-45ca-41e8-9aa5-61a2923a3394" (UID: "3386ffea-45ca-41e8-9aa5-61a2923a3394"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.575537 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bff77bb-7533-45dd-9c1c-d20368964bc6" path="/var/lib/kubelet/pods/1bff77bb-7533-45dd-9c1c-d20368964bc6/volumes" Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.576542 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="29a26d59-027f-428e-928e-12222b61a350" path="/var/lib/kubelet/pods/29a26d59-027f-428e-928e-12222b61a350/volumes" Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.578603 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4de557a0-8b74-4d40-8c91-351ba127eb13" path="/var/lib/kubelet/pods/4de557a0-8b74-4d40-8c91-351ba127eb13/volumes" Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.580128 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab97d90f-f85d-4d2b-8b8e-6c62d74b7a07" path="/var/lib/kubelet/pods/ab97d90f-f85d-4d2b-8b8e-6c62d74b7a07/volumes" Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.581271 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c69742c2-ef6f-478b-bf96-754808e9a127" path="/var/lib/kubelet/pods/c69742c2-ef6f-478b-bf96-754808e9a127/volumes" Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.583140 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e9f0ade5-6144-4596-a78b-afeca167af55" path="/var/lib/kubelet/pods/e9f0ade5-6144-4596-a78b-afeca167af55/volumes" Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.618977 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea5ac39a-6fb6-42bb-8ffd-e0036e93a1d7-config-data" (OuterVolumeSpecName: "config-data") pod "ea5ac39a-6fb6-42bb-8ffd-e0036e93a1d7" (UID: "ea5ac39a-6fb6-42bb-8ffd-e0036e93a1d7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.619922 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1eb418aa-1d3c-469c-8ff4-2b3c86a71e97-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1eb418aa-1d3c-469c-8ff4-2b3c86a71e97" (UID: "1eb418aa-1d3c-469c-8ff4-2b3c86a71e97"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.642374 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6817c570-f6ff-4b08-825a-027a9c8630b0-kube-state-metrics-tls-config" (OuterVolumeSpecName: "kube-state-metrics-tls-config") pod "6817c570-f6ff-4b08-825a-027a9c8630b0" (UID: "6817c570-f6ff-4b08-825a-027a9c8630b0"). InnerVolumeSpecName "kube-state-metrics-tls-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.646655 5050 reconciler_common.go:293] "Volume detached for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/6817c570-f6ff-4b08-825a-027a9c8630b0-kube-state-metrics-tls-config\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.646842 5050 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea5ac39a-6fb6-42bb-8ffd-e0036e93a1d7-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.646950 5050 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3386ffea-45ca-41e8-9aa5-61a2923a3394-public-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.647051 5050 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1eb418aa-1d3c-469c-8ff4-2b3c86a71e97-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.661047 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6817c570-f6ff-4b08-825a-027a9c8630b0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6817c570-f6ff-4b08-825a-027a9c8630b0" (UID: "6817c570-f6ff-4b08-825a-027a9c8630b0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.663555 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea5ac39a-6fb6-42bb-8ffd-e0036e93a1d7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ea5ac39a-6fb6-42bb-8ffd-e0036e93a1d7" (UID: "ea5ac39a-6fb6-42bb-8ffd-e0036e93a1d7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.674039 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ad4414f-ca3e-4ff4-9e2a-3ab029df2ebf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4ad4414f-ca3e-4ff4-9e2a-3ab029df2ebf" (UID: "4ad4414f-ca3e-4ff4-9e2a-3ab029df2ebf"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.712492 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6817c570-f6ff-4b08-825a-027a9c8630b0-kube-state-metrics-tls-certs" (OuterVolumeSpecName: "kube-state-metrics-tls-certs") pod "6817c570-f6ff-4b08-825a-027a9c8630b0" (UID: "6817c570-f6ff-4b08-825a-027a9c8630b0"). InnerVolumeSpecName "kube-state-metrics-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.724469 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1eb418aa-1d3c-469c-8ff4-2b3c86a71e97-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "1eb418aa-1d3c-469c-8ff4-2b3c86a71e97" (UID: "1eb418aa-1d3c-469c-8ff4-2b3c86a71e97"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.726162 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ad4414f-ca3e-4ff4-9e2a-3ab029df2ebf-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "4ad4414f-ca3e-4ff4-9e2a-3ab029df2ebf" (UID: "4ad4414f-ca3e-4ff4-9e2a-3ab029df2ebf"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.727185 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3386ffea-45ca-41e8-9aa5-61a2923a3394-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "3386ffea-45ca-41e8-9aa5-61a2923a3394" (UID: "3386ffea-45ca-41e8-9aa5-61a2923a3394"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.738976 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ad4414f-ca3e-4ff4-9e2a-3ab029df2ebf-config-data" (OuterVolumeSpecName: "config-data") pod "4ad4414f-ca3e-4ff4-9e2a-3ab029df2ebf" (UID: "4ad4414f-ca3e-4ff4-9e2a-3ab029df2ebf"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.740485 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ad4414f-ca3e-4ff4-9e2a-3ab029df2ebf-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "4ad4414f-ca3e-4ff4-9e2a-3ab029df2ebf" (UID: "4ad4414f-ca3e-4ff4-9e2a-3ab029df2ebf"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.751158 5050 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea5ac39a-6fb6-42bb-8ffd-e0036e93a1d7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.751330 5050 reconciler_common.go:293] "Volume detached for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/6817c570-f6ff-4b08-825a-027a9c8630b0-kube-state-metrics-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.751355 5050 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ad4414f-ca3e-4ff4-9e2a-3ab029df2ebf-public-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.751373 5050 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/1eb418aa-1d3c-469c-8ff4-2b3c86a71e97-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.751388 5050 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6817c570-f6ff-4b08-825a-027a9c8630b0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.751401 5050 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4ad4414f-ca3e-4ff4-9e2a-3ab029df2ebf-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.751415 5050 
reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ad4414f-ca3e-4ff4-9e2a-3ab029df2ebf-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.751431 5050 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ad4414f-ca3e-4ff4-9e2a-3ab029df2ebf-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.751447 5050 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3386ffea-45ca-41e8-9aa5-61a2923a3394-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:13 crc kubenswrapper[5050]: E1211 14:12:13.751567 5050 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Dec 11 14:12:13 crc kubenswrapper[5050]: E1211 14:12:13.751646 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/66cb4589-6296-417b-87eb-4bcbff7bf580-operator-scripts podName:66cb4589-6296-417b-87eb-4bcbff7bf580 nodeName:}" failed. No retries permitted until 2025-12-11 14:12:15.751618898 +0000 UTC m=+1426.595341494 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/66cb4589-6296-417b-87eb-4bcbff7bf580-operator-scripts") pod "barbican34ac-account-delete-cd525" (UID: "66cb4589-6296-417b-87eb-4bcbff7bf580") : configmap "openstack-scripts" not found Dec 11 14:12:13 crc kubenswrapper[5050]: E1211 14:12:13.753420 5050 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Dec 11 14:12:13 crc kubenswrapper[5050]: E1211 14:12:13.753503 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ca005c2d-f7de-486a-bbd6-a32443582833-operator-scripts podName:ca005c2d-f7de-486a-bbd6-a32443582833 nodeName:}" failed. No retries permitted until 2025-12-11 14:12:15.753482809 +0000 UTC m=+1426.597205575 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/ca005c2d-f7de-486a-bbd6-a32443582833-operator-scripts") pod "novacell0e20d-account-delete-ntvhn" (UID: "ca005c2d-f7de-486a-bbd6-a32443582833") : configmap "openstack-scripts" not found Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.835872 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-44xtk"] Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.835951 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-44xtk"] Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.842723 5050 scope.go:117] "RemoveContainer" containerID="dcd6241bf625d5260eb81af83e760bdddc10ab5c6ec8bd47adbb59c1809dd3e4" Dec 11 14:12:13 crc kubenswrapper[5050]: E1211 14:12:13.853657 5050 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Dec 11 14:12:13 crc kubenswrapper[5050]: E1211 14:12:13.853769 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d917f471-6630-4e96-a0e4-cbde631da4a8-operator-scripts podName:d917f471-6630-4e96-a0e4-cbde631da4a8 nodeName:}" failed. No retries permitted until 2025-12-11 14:12:14.853744077 +0000 UTC m=+1425.697466663 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/d917f471-6630-4e96-a0e4-cbde631da4a8-operator-scripts") pod "novaapi4326-account-delete-9bmsz" (UID: "d917f471-6630-4e96-a0e4-cbde631da4a8") : configmap "openstack-scripts" not found Dec 11 14:12:13 crc kubenswrapper[5050]: E1211 14:12:13.854324 5050 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Dec 11 14:12:13 crc kubenswrapper[5050]: E1211 14:12:13.854413 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1d484c84-7333-4701-a4f3-655c3d2cbfa7-operator-scripts podName:1d484c84-7333-4701-a4f3-655c3d2cbfa7 nodeName:}" failed. No retries permitted until 2025-12-11 14:12:15.854356764 +0000 UTC m=+1426.698079350 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/1d484c84-7333-4701-a4f3-655c3d2cbfa7-operator-scripts") pod "glance2653-account-delete-rgdnl" (UID: "1d484c84-7333-4701-a4f3-655c3d2cbfa7") : configmap "openstack-scripts" not found Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.861542 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-7594-account-create-update-9fmt4"] Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.870634 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron7594-account-delete-8wjht"] Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.880128 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron7594-account-delete-8wjht"] Dec 11 14:12:13 crc kubenswrapper[5050]: I1211 14:12:13.888452 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-7594-account-create-update-9fmt4"] Dec 11 14:12:13 crc kubenswrapper[5050]: E1211 14:12:13.947331 5050 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5ce7ebe2679d23b41ab50f0b7d865ea6308f725963cbd7e3ed9e1c34f506375c is running failed: container process not found" containerID="5ce7ebe2679d23b41ab50f0b7d865ea6308f725963cbd7e3ed9e1c34f506375c" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Dec 11 14:12:13 crc kubenswrapper[5050]: E1211 14:12:13.948558 5050 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5ce7ebe2679d23b41ab50f0b7d865ea6308f725963cbd7e3ed9e1c34f506375c is running failed: container process not found" containerID="5ce7ebe2679d23b41ab50f0b7d865ea6308f725963cbd7e3ed9e1c34f506375c" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Dec 11 14:12:13 crc kubenswrapper[5050]: E1211 14:12:13.948896 5050 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3713fc357ee4f2a6f119e91ece32b1dd727d67185d218ebb887b345bbd9015d1" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Dec 11 14:12:13 crc kubenswrapper[5050]: E1211 14:12:13.949537 5050 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5ce7ebe2679d23b41ab50f0b7d865ea6308f725963cbd7e3ed9e1c34f506375c is running failed: container process not found" 
containerID="5ce7ebe2679d23b41ab50f0b7d865ea6308f725963cbd7e3ed9e1c34f506375c" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Dec 11 14:12:13 crc kubenswrapper[5050]: E1211 14:12:13.949646 5050 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5ce7ebe2679d23b41ab50f0b7d865ea6308f725963cbd7e3ed9e1c34f506375c is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-pjzpq" podUID="88b4966d-124b-4cf4-b52b-704955059220" containerName="ovsdb-server" Dec 11 14:12:13 crc kubenswrapper[5050]: E1211 14:12:13.957036 5050 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Dec 11 14:12:13 crc kubenswrapper[5050]: E1211 14:12:13.957183 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/bc8efd61-e4fb-4ec0-834a-b495797039a1-operator-scripts podName:bc8efd61-e4fb-4ec0-834a-b495797039a1 nodeName:}" failed. No retries permitted until 2025-12-11 14:12:14.9571426 +0000 UTC m=+1425.800865186 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/bc8efd61-e4fb-4ec0-834a-b495797039a1-operator-scripts") pod "placementd2f2-account-delete-njqwh" (UID: "bc8efd61-e4fb-4ec0-834a-b495797039a1") : configmap "openstack-scripts" not found Dec 11 14:12:13 crc kubenswrapper[5050]: E1211 14:12:13.957066 5050 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Dec 11 14:12:13 crc kubenswrapper[5050]: E1211 14:12:13.957286 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e365d825-a3cb-42a3-8a00-8a9be42ed290-operator-scripts podName:e365d825-a3cb-42a3-8a00-8a9be42ed290 nodeName:}" failed. No retries permitted until 2025-12-11 14:12:14.957261994 +0000 UTC m=+1425.800984570 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/e365d825-a3cb-42a3-8a00-8a9be42ed290-operator-scripts") pod "cinder27b7-account-delete-h9wjn" (UID: "e365d825-a3cb-42a3-8a00-8a9be42ed290") : configmap "openstack-scripts" not found Dec 11 14:12:13 crc kubenswrapper[5050]: E1211 14:12:13.962906 5050 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3713fc357ee4f2a6f119e91ece32b1dd727d67185d218ebb887b345bbd9015d1" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Dec 11 14:12:13 crc kubenswrapper[5050]: E1211 14:12:13.965714 5050 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3713fc357ee4f2a6f119e91ece32b1dd727d67185d218ebb887b345bbd9015d1" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Dec 11 14:12:13 crc kubenswrapper[5050]: E1211 14:12:13.965821 5050 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-pjzpq" podUID="88b4966d-124b-4cf4-b52b-704955059220" containerName="ovs-vswitchd" Dec 11 14:12:14 crc kubenswrapper[5050]: I1211 14:12:14.142106 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-862c9"] Dec 11 14:12:14 crc kubenswrapper[5050]: I1211 14:12:14.168077 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-862c9"] Dec 11 14:12:14 crc kubenswrapper[5050]: I1211 14:12:14.186061 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder27b7-account-delete-h9wjn"] Dec 11 14:12:14 crc kubenswrapper[5050]: I1211 14:12:14.195436 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-27b7-account-create-update-7xmsw"] Dec 11 14:12:14 crc kubenswrapper[5050]: I1211 14:12:14.204439 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-27b7-account-create-update-7xmsw"] Dec 11 14:12:14 crc kubenswrapper[5050]: I1211 14:12:14.389895 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 11 14:12:14 crc kubenswrapper[5050]: I1211 14:12:14.435168 5050 generic.go:334] "Generic (PLEG): container finished" podID="defedffb-9310-4b18-b7ee-b54040aa5447" containerID="2a9d3c07c9884ff572de5d859d886e5e90497f2bc5adb397f3f64151ee6e7fd3" exitCode=0 Dec 11 14:12:14 crc kubenswrapper[5050]: I1211 14:12:14.435317 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-hblvw"] Dec 11 14:12:14 crc kubenswrapper[5050]: I1211 14:12:14.435349 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"defedffb-9310-4b18-b7ee-b54040aa5447","Type":"ContainerDied","Data":"2a9d3c07c9884ff572de5d859d886e5e90497f2bc5adb397f3f64151ee6e7fd3"} Dec 11 14:12:14 crc kubenswrapper[5050]: I1211 14:12:14.440495 5050 generic.go:334] "Generic (PLEG): container finished" podID="0891f075-8101-475b-b844-e7cb42a4990b" containerID="db23d3f3f27190827f163f21b2da4cd0ca1fc9aa0bfb390a14b8c83a5ed2ee47" exitCode=0 Dec 11 14:12:14 crc kubenswrapper[5050]: I1211 14:12:14.440587 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"0891f075-8101-475b-b844-e7cb42a4990b","Type":"ContainerDied","Data":"db23d3f3f27190827f163f21b2da4cd0ca1fc9aa0bfb390a14b8c83a5ed2ee47"} Dec 11 14:12:14 crc kubenswrapper[5050]: I1211 14:12:14.440621 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"0891f075-8101-475b-b844-e7cb42a4990b","Type":"ContainerDied","Data":"76671437bcb30dc3b3007bb03f71b029d9481abc8524be43f277c131fcc5cd96"} Dec 11 14:12:14 crc kubenswrapper[5050]: I1211 14:12:14.440632 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="76671437bcb30dc3b3007bb03f71b029d9481abc8524be43f277c131fcc5cd96" Dec 11 14:12:14 crc kubenswrapper[5050]: I1211 14:12:14.442780 5050 scope.go:117] "RemoveContainer" containerID="99b0a4e0ddebc7695b430edc234ac8f69f475befeae07527d5e1dffee8ce52e4" Dec 11 14:12:14 crc kubenswrapper[5050]: I1211 14:12:14.456704 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 11 14:12:14 crc kubenswrapper[5050]: I1211 14:12:14.457053 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"458f05be-2fd6-44d9-8034-f077356964ce","Type":"ContainerDied","Data":"2d0451eab14fb448dcdc0b7ce30cc6a358bc5517d182993f5b4b8a3785edf30b"} Dec 11 14:12:14 crc kubenswrapper[5050]: I1211 14:12:14.463243 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 11 14:12:14 crc kubenswrapper[5050]: I1211 14:12:14.469271 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-hblvw"] Dec 11 14:12:14 crc kubenswrapper[5050]: I1211 14:12:14.481692 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 11 14:12:14 crc kubenswrapper[5050]: I1211 14:12:14.495287 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 11 14:12:14 crc kubenswrapper[5050]: I1211 14:12:14.529291 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican34ac-account-delete-cd525"] Dec 11 14:12:14 crc kubenswrapper[5050]: I1211 14:12:14.568264 5050 scope.go:117] "RemoveContainer" containerID="927455209d981bf727b535faeebc64bff29620b7370b60af99e0a5354d586a34" Dec 11 14:12:14 crc kubenswrapper[5050]: I1211 14:12:14.572709 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0891f075-8101-475b-b844-e7cb42a4990b-server-conf\") pod \"0891f075-8101-475b-b844-e7cb42a4990b\" (UID: \"0891f075-8101-475b-b844-e7cb42a4990b\") " Dec 11 14:12:14 crc kubenswrapper[5050]: I1211 14:12:14.572808 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0891f075-8101-475b-b844-e7cb42a4990b-rabbitmq-erlang-cookie\") pod \"0891f075-8101-475b-b844-e7cb42a4990b\" (UID: \"0891f075-8101-475b-b844-e7cb42a4990b\") " Dec 11 14:12:14 crc kubenswrapper[5050]: I1211 14:12:14.572843 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"458f05be-2fd6-44d9-8034-f077356964ce\" (UID: \"458f05be-2fd6-44d9-8034-f077356964ce\") " Dec 11 14:12:14 crc kubenswrapper[5050]: I1211 14:12:14.572865 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0891f075-8101-475b-b844-e7cb42a4990b-erlang-cookie-secret\") pod \"0891f075-8101-475b-b844-e7cb42a4990b\" (UID: \"0891f075-8101-475b-b844-e7cb42a4990b\") " Dec 11 14:12:14 crc kubenswrapper[5050]: I1211 14:12:14.572915 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"0891f075-8101-475b-b844-e7cb42a4990b\" (UID: \"0891f075-8101-475b-b844-e7cb42a4990b\") " Dec 11 14:12:14 crc kubenswrapper[5050]: I1211 14:12:14.572975 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z6p5z\" (UniqueName: \"kubernetes.io/projected/458f05be-2fd6-44d9-8034-f077356964ce-kube-api-access-z6p5z\") pod \"458f05be-2fd6-44d9-8034-f077356964ce\" (UID: \"458f05be-2fd6-44d9-8034-f077356964ce\") " Dec 11 14:12:14 crc kubenswrapper[5050]: I1211 14:12:14.573569 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/458f05be-2fd6-44d9-8034-f077356964ce-erlang-cookie-secret\") pod \"458f05be-2fd6-44d9-8034-f077356964ce\" (UID: \"458f05be-2fd6-44d9-8034-f077356964ce\") " Dec 11 14:12:14 crc kubenswrapper[5050]: I1211 14:12:14.573638 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: 
\"kubernetes.io/downward-api/458f05be-2fd6-44d9-8034-f077356964ce-pod-info\") pod \"458f05be-2fd6-44d9-8034-f077356964ce\" (UID: \"458f05be-2fd6-44d9-8034-f077356964ce\") " Dec 11 14:12:14 crc kubenswrapper[5050]: I1211 14:12:14.573668 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0891f075-8101-475b-b844-e7cb42a4990b-plugins-conf\") pod \"0891f075-8101-475b-b844-e7cb42a4990b\" (UID: \"0891f075-8101-475b-b844-e7cb42a4990b\") " Dec 11 14:12:14 crc kubenswrapper[5050]: I1211 14:12:14.573722 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/458f05be-2fd6-44d9-8034-f077356964ce-config-data\") pod \"458f05be-2fd6-44d9-8034-f077356964ce\" (UID: \"458f05be-2fd6-44d9-8034-f077356964ce\") " Dec 11 14:12:14 crc kubenswrapper[5050]: I1211 14:12:14.573758 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0891f075-8101-475b-b844-e7cb42a4990b-rabbitmq-confd\") pod \"0891f075-8101-475b-b844-e7cb42a4990b\" (UID: \"0891f075-8101-475b-b844-e7cb42a4990b\") " Dec 11 14:12:14 crc kubenswrapper[5050]: I1211 14:12:14.573777 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0891f075-8101-475b-b844-e7cb42a4990b-config-data\") pod \"0891f075-8101-475b-b844-e7cb42a4990b\" (UID: \"0891f075-8101-475b-b844-e7cb42a4990b\") " Dec 11 14:12:14 crc kubenswrapper[5050]: I1211 14:12:14.573800 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0891f075-8101-475b-b844-e7cb42a4990b-rabbitmq-plugins\") pod \"0891f075-8101-475b-b844-e7cb42a4990b\" (UID: \"0891f075-8101-475b-b844-e7cb42a4990b\") " Dec 11 14:12:14 crc kubenswrapper[5050]: I1211 14:12:14.573827 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0891f075-8101-475b-b844-e7cb42a4990b-rabbitmq-tls\") pod \"0891f075-8101-475b-b844-e7cb42a4990b\" (UID: \"0891f075-8101-475b-b844-e7cb42a4990b\") " Dec 11 14:12:14 crc kubenswrapper[5050]: I1211 14:12:14.573848 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/458f05be-2fd6-44d9-8034-f077356964ce-rabbitmq-plugins\") pod \"458f05be-2fd6-44d9-8034-f077356964ce\" (UID: \"458f05be-2fd6-44d9-8034-f077356964ce\") " Dec 11 14:12:14 crc kubenswrapper[5050]: I1211 14:12:14.576408 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/458f05be-2fd6-44d9-8034-f077356964ce-server-conf\") pod \"458f05be-2fd6-44d9-8034-f077356964ce\" (UID: \"458f05be-2fd6-44d9-8034-f077356964ce\") " Dec 11 14:12:14 crc kubenswrapper[5050]: I1211 14:12:14.576468 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/458f05be-2fd6-44d9-8034-f077356964ce-rabbitmq-tls\") pod \"458f05be-2fd6-44d9-8034-f077356964ce\" (UID: \"458f05be-2fd6-44d9-8034-f077356964ce\") " Dec 11 14:12:14 crc kubenswrapper[5050]: I1211 14:12:14.576513 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: 
\"kubernetes.io/downward-api/0891f075-8101-475b-b844-e7cb42a4990b-pod-info\") pod \"0891f075-8101-475b-b844-e7cb42a4990b\" (UID: \"0891f075-8101-475b-b844-e7cb42a4990b\") " Dec 11 14:12:14 crc kubenswrapper[5050]: I1211 14:12:14.576539 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/458f05be-2fd6-44d9-8034-f077356964ce-rabbitmq-confd\") pod \"458f05be-2fd6-44d9-8034-f077356964ce\" (UID: \"458f05be-2fd6-44d9-8034-f077356964ce\") " Dec 11 14:12:14 crc kubenswrapper[5050]: I1211 14:12:14.576581 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-548z5\" (UniqueName: \"kubernetes.io/projected/0891f075-8101-475b-b844-e7cb42a4990b-kube-api-access-548z5\") pod \"0891f075-8101-475b-b844-e7cb42a4990b\" (UID: \"0891f075-8101-475b-b844-e7cb42a4990b\") " Dec 11 14:12:14 crc kubenswrapper[5050]: I1211 14:12:14.576638 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/458f05be-2fd6-44d9-8034-f077356964ce-plugins-conf\") pod \"458f05be-2fd6-44d9-8034-f077356964ce\" (UID: \"458f05be-2fd6-44d9-8034-f077356964ce\") " Dec 11 14:12:14 crc kubenswrapper[5050]: I1211 14:12:14.576679 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/458f05be-2fd6-44d9-8034-f077356964ce-rabbitmq-erlang-cookie\") pod \"458f05be-2fd6-44d9-8034-f077356964ce\" (UID: \"458f05be-2fd6-44d9-8034-f077356964ce\") " Dec 11 14:12:14 crc kubenswrapper[5050]: I1211 14:12:14.584183 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/458f05be-2fd6-44d9-8034-f077356964ce-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "458f05be-2fd6-44d9-8034-f077356964ce" (UID: "458f05be-2fd6-44d9-8034-f077356964ce"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 14:12:14 crc kubenswrapper[5050]: I1211 14:12:14.584208 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0891f075-8101-475b-b844-e7cb42a4990b-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "0891f075-8101-475b-b844-e7cb42a4990b" (UID: "0891f075-8101-475b-b844-e7cb42a4990b"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 14:12:14 crc kubenswrapper[5050]: I1211 14:12:14.585920 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0891f075-8101-475b-b844-e7cb42a4990b-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "0891f075-8101-475b-b844-e7cb42a4990b" (UID: "0891f075-8101-475b-b844-e7cb42a4990b"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 14:12:14 crc kubenswrapper[5050]: I1211 14:12:14.594029 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-34ac-account-create-update-q8nhn"] Dec 11 14:12:14 crc kubenswrapper[5050]: I1211 14:12:14.597697 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0891f075-8101-475b-b844-e7cb42a4990b-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "0891f075-8101-475b-b844-e7cb42a4990b" (UID: "0891f075-8101-475b-b844-e7cb42a4990b"). InnerVolumeSpecName "plugins-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 14:12:14 crc kubenswrapper[5050]: I1211 14:12:14.598831 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/458f05be-2fd6-44d9-8034-f077356964ce-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "458f05be-2fd6-44d9-8034-f077356964ce" (UID: "458f05be-2fd6-44d9-8034-f077356964ce"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 14:12:14 crc kubenswrapper[5050]: I1211 14:12:14.599294 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0891f075-8101-475b-b844-e7cb42a4990b-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "0891f075-8101-475b-b844-e7cb42a4990b" (UID: "0891f075-8101-475b-b844-e7cb42a4990b"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:12:14 crc kubenswrapper[5050]: I1211 14:12:14.604100 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-34ac-account-create-update-q8nhn"] Dec 11 14:12:14 crc kubenswrapper[5050]: I1211 14:12:14.604862 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/458f05be-2fd6-44d9-8034-f077356964ce-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "458f05be-2fd6-44d9-8034-f077356964ce" (UID: "458f05be-2fd6-44d9-8034-f077356964ce"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 14:12:14 crc kubenswrapper[5050]: I1211 14:12:14.613511 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "persistence") pod "458f05be-2fd6-44d9-8034-f077356964ce" (UID: "458f05be-2fd6-44d9-8034-f077356964ce"). InnerVolumeSpecName "local-storage07-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 11 14:12:14 crc kubenswrapper[5050]: I1211 14:12:14.613621 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/458f05be-2fd6-44d9-8034-f077356964ce-kube-api-access-z6p5z" (OuterVolumeSpecName: "kube-api-access-z6p5z") pod "458f05be-2fd6-44d9-8034-f077356964ce" (UID: "458f05be-2fd6-44d9-8034-f077356964ce"). InnerVolumeSpecName "kube-api-access-z6p5z". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 14:12:14 crc kubenswrapper[5050]: I1211 14:12:14.637379 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/458f05be-2fd6-44d9-8034-f077356964ce-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "458f05be-2fd6-44d9-8034-f077356964ce" (UID: "458f05be-2fd6-44d9-8034-f077356964ce"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 14:12:14 crc kubenswrapper[5050]: I1211 14:12:14.637458 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "persistence") pod "0891f075-8101-475b-b844-e7cb42a4990b" (UID: "0891f075-8101-475b-b844-e7cb42a4990b"). InnerVolumeSpecName "local-storage09-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 11 14:12:14 crc kubenswrapper[5050]: I1211 14:12:14.637643 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/458f05be-2fd6-44d9-8034-f077356964ce-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "458f05be-2fd6-44d9-8034-f077356964ce" (UID: "458f05be-2fd6-44d9-8034-f077356964ce"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:12:14 crc kubenswrapper[5050]: I1211 14:12:14.646935 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-p6ttc"] Dec 11 14:12:14 crc kubenswrapper[5050]: I1211 14:12:14.647730 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0891f075-8101-475b-b844-e7cb42a4990b-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "0891f075-8101-475b-b844-e7cb42a4990b" (UID: "0891f075-8101-475b-b844-e7cb42a4990b"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 14:12:14 crc kubenswrapper[5050]: I1211 14:12:14.649741 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/458f05be-2fd6-44d9-8034-f077356964ce-pod-info" (OuterVolumeSpecName: "pod-info") pod "458f05be-2fd6-44d9-8034-f077356964ce" (UID: "458f05be-2fd6-44d9-8034-f077356964ce"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Dec 11 14:12:14 crc kubenswrapper[5050]: I1211 14:12:14.650503 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/0891f075-8101-475b-b844-e7cb42a4990b-pod-info" (OuterVolumeSpecName: "pod-info") pod "0891f075-8101-475b-b844-e7cb42a4990b" (UID: "0891f075-8101-475b-b844-e7cb42a4990b"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Dec 11 14:12:14 crc kubenswrapper[5050]: I1211 14:12:14.665233 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-p6ttc"] Dec 11 14:12:14 crc kubenswrapper[5050]: I1211 14:12:14.666164 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0891f075-8101-475b-b844-e7cb42a4990b-kube-api-access-548z5" (OuterVolumeSpecName: "kube-api-access-548z5") pod "0891f075-8101-475b-b844-e7cb42a4990b" (UID: "0891f075-8101-475b-b844-e7cb42a4990b"). InnerVolumeSpecName "kube-api-access-548z5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 14:12:14 crc kubenswrapper[5050]: I1211 14:12:14.686411 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-d2f2-account-create-update-v2nsg"] Dec 11 14:12:14 crc kubenswrapper[5050]: I1211 14:12:14.687512 5050 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/458f05be-2fd6-44d9-8034-f077356964ce-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:14 crc kubenswrapper[5050]: I1211 14:12:14.687569 5050 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/458f05be-2fd6-44d9-8034-f077356964ce-pod-info\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:14 crc kubenswrapper[5050]: I1211 14:12:14.687584 5050 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/0891f075-8101-475b-b844-e7cb42a4990b-plugins-conf\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:14 crc kubenswrapper[5050]: I1211 14:12:14.687621 5050 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/0891f075-8101-475b-b844-e7cb42a4990b-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:14 crc kubenswrapper[5050]: I1211 14:12:14.687635 5050 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/0891f075-8101-475b-b844-e7cb42a4990b-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:14 crc kubenswrapper[5050]: I1211 14:12:14.687646 5050 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/458f05be-2fd6-44d9-8034-f077356964ce-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:14 crc kubenswrapper[5050]: I1211 14:12:14.687657 5050 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/458f05be-2fd6-44d9-8034-f077356964ce-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:14 crc kubenswrapper[5050]: I1211 14:12:14.687669 5050 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/0891f075-8101-475b-b844-e7cb42a4990b-pod-info\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:14 crc kubenswrapper[5050]: I1211 14:12:14.687681 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-548z5\" (UniqueName: \"kubernetes.io/projected/0891f075-8101-475b-b844-e7cb42a4990b-kube-api-access-548z5\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:14 crc kubenswrapper[5050]: I1211 14:12:14.687697 5050 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/458f05be-2fd6-44d9-8034-f077356964ce-plugins-conf\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:14 crc kubenswrapper[5050]: I1211 14:12:14.687708 5050 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/458f05be-2fd6-44d9-8034-f077356964ce-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:14 crc kubenswrapper[5050]: I1211 14:12:14.687720 5050 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/0891f075-8101-475b-b844-e7cb42a4990b-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:14 crc kubenswrapper[5050]: I1211 14:12:14.687747 5050 
reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" " Dec 11 14:12:14 crc kubenswrapper[5050]: I1211 14:12:14.687764 5050 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/0891f075-8101-475b-b844-e7cb42a4990b-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:14 crc kubenswrapper[5050]: I1211 14:12:14.687786 5050 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Dec 11 14:12:14 crc kubenswrapper[5050]: I1211 14:12:14.687801 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z6p5z\" (UniqueName: \"kubernetes.io/projected/458f05be-2fd6-44d9-8034-f077356964ce-kube-api-access-z6p5z\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:14 crc kubenswrapper[5050]: I1211 14:12:14.695560 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placementd2f2-account-delete-njqwh"] Dec 11 14:12:14 crc kubenswrapper[5050]: I1211 14:12:14.704393 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-d2f2-account-create-update-v2nsg"] Dec 11 14:12:14 crc kubenswrapper[5050]: I1211 14:12:14.853943 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-ql2cg"] Dec 11 14:12:14 crc kubenswrapper[5050]: I1211 14:12:14.872810 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-ql2cg"] Dec 11 14:12:14 crc kubenswrapper[5050]: E1211 14:12:14.893360 5050 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Dec 11 14:12:14 crc kubenswrapper[5050]: E1211 14:12:14.893457 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d917f471-6630-4e96-a0e4-cbde631da4a8-operator-scripts podName:d917f471-6630-4e96-a0e4-cbde631da4a8 nodeName:}" failed. No retries permitted until 2025-12-11 14:12:16.893435214 +0000 UTC m=+1427.737157800 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/d917f471-6630-4e96-a0e4-cbde631da4a8-operator-scripts") pod "novaapi4326-account-delete-9bmsz" (UID: "d917f471-6630-4e96-a0e4-cbde631da4a8") : configmap "openstack-scripts" not found Dec 11 14:12:14 crc kubenswrapper[5050]: I1211 14:12:14.896951 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-2653-account-create-update-gzdx8"] Dec 11 14:12:14 crc kubenswrapper[5050]: I1211 14:12:14.907223 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance2653-account-delete-rgdnl"] Dec 11 14:12:14 crc kubenswrapper[5050]: I1211 14:12:14.914983 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-2653-account-create-update-gzdx8"] Dec 11 14:12:14 crc kubenswrapper[5050]: I1211 14:12:14.983242 5050 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc" Dec 11 14:12:14 crc kubenswrapper[5050]: I1211 14:12:14.995172 5050 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:14 crc kubenswrapper[5050]: E1211 14:12:14.995264 5050 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Dec 11 14:12:14 crc kubenswrapper[5050]: E1211 14:12:14.995307 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e365d825-a3cb-42a3-8a00-8a9be42ed290-operator-scripts podName:e365d825-a3cb-42a3-8a00-8a9be42ed290 nodeName:}" failed. No retries permitted until 2025-12-11 14:12:16.995292435 +0000 UTC m=+1427.839015021 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/e365d825-a3cb-42a3-8a00-8a9be42ed290-operator-scripts") pod "cinder27b7-account-delete-h9wjn" (UID: "e365d825-a3cb-42a3-8a00-8a9be42ed290") : configmap "openstack-scripts" not found Dec 11 14:12:14 crc kubenswrapper[5050]: E1211 14:12:14.995336 5050 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Dec 11 14:12:14 crc kubenswrapper[5050]: E1211 14:12:14.995354 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/bc8efd61-e4fb-4ec0-834a-b495797039a1-operator-scripts podName:bc8efd61-e4fb-4ec0-834a-b495797039a1 nodeName:}" failed. No retries permitted until 2025-12-11 14:12:16.995348676 +0000 UTC m=+1427.839071262 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/bc8efd61-e4fb-4ec0-834a-b495797039a1-operator-scripts") pod "placementd2f2-account-delete-njqwh" (UID: "bc8efd61-e4fb-4ec0-834a-b495797039a1") : configmap "openstack-scripts" not found Dec 11 14:12:15 crc kubenswrapper[5050]: I1211 14:12:15.027126 5050 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Dec 11 14:12:15 crc kubenswrapper[5050]: I1211 14:12:15.100431 5050 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:15 crc kubenswrapper[5050]: I1211 14:12:15.239768 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0891f075-8101-475b-b844-e7cb42a4990b-config-data" (OuterVolumeSpecName: "config-data") pod "0891f075-8101-475b-b844-e7cb42a4990b" (UID: "0891f075-8101-475b-b844-e7cb42a4990b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 14:12:15 crc kubenswrapper[5050]: I1211 14:12:15.308082 5050 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0891f075-8101-475b-b844-e7cb42a4990b-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:15 crc kubenswrapper[5050]: I1211 14:12:15.398177 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-brs2f"] Dec 11 14:12:15 crc kubenswrapper[5050]: I1211 14:12:15.421124 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-brs2f"] Dec 11 14:12:15 crc kubenswrapper[5050]: I1211 14:12:15.422468 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/458f05be-2fd6-44d9-8034-f077356964ce-config-data" (OuterVolumeSpecName: "config-data") pod "458f05be-2fd6-44d9-8034-f077356964ce" (UID: "458f05be-2fd6-44d9-8034-f077356964ce"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 14:12:15 crc kubenswrapper[5050]: I1211 14:12:15.469370 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0891f075-8101-475b-b844-e7cb42a4990b-server-conf" (OuterVolumeSpecName: "server-conf") pod "0891f075-8101-475b-b844-e7cb42a4990b" (UID: "0891f075-8101-475b-b844-e7cb42a4990b"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 14:12:15 crc kubenswrapper[5050]: I1211 14:12:15.475573 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/novacell0e20d-account-delete-ntvhn"] Dec 11 14:12:15 crc kubenswrapper[5050]: I1211 14:12:15.486156 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/458f05be-2fd6-44d9-8034-f077356964ce-server-conf" (OuterVolumeSpecName: "server-conf") pod "458f05be-2fd6-44d9-8034-f077356964ce" (UID: "458f05be-2fd6-44d9-8034-f077356964ce"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 14:12:15 crc kubenswrapper[5050]: I1211 14:12:15.513162 5050 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/458f05be-2fd6-44d9-8034-f077356964ce-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:15 crc kubenswrapper[5050]: I1211 14:12:15.513195 5050 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/458f05be-2fd6-44d9-8034-f077356964ce-server-conf\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:15 crc kubenswrapper[5050]: I1211 14:12:15.513207 5050 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/0891f075-8101-475b-b844-e7cb42a4990b-server-conf\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:15 crc kubenswrapper[5050]: I1211 14:12:15.520761 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-e20d-account-create-update-6hllh"] Dec 11 14:12:15 crc kubenswrapper[5050]: I1211 14:12:15.538426 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-e20d-account-create-update-6hllh"] Dec 11 14:12:15 crc kubenswrapper[5050]: I1211 14:12:15.551394 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/458f05be-2fd6-44d9-8034-f077356964ce-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "458f05be-2fd6-44d9-8034-f077356964ce" (UID: "458f05be-2fd6-44d9-8034-f077356964ce"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 14:12:15 crc kubenswrapper[5050]: I1211 14:12:15.562919 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance2653-account-delete-rgdnl" podUID="1d484c84-7333-4701-a4f3-655c3d2cbfa7" containerName="mariadb-account-delete" containerID="cri-o://da09975c6998c658bb6956da61c91b298081b08244ab21dcf8062177373472dc" gracePeriod=30 Dec 11 14:12:15 crc kubenswrapper[5050]: I1211 14:12:15.583086 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="118a06f4-3d12-4a10-8de7-bfcb56b3f237" path="/var/lib/kubelet/pods/118a06f4-3d12-4a10-8de7-bfcb56b3f237/volumes" Dec 11 14:12:15 crc kubenswrapper[5050]: I1211 14:12:15.584470 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="38b1a06e-804a-44dc-8e77-a7d8162f38bd" path="/var/lib/kubelet/pods/38b1a06e-804a-44dc-8e77-a7d8162f38bd/volumes" Dec 11 14:12:15 crc kubenswrapper[5050]: I1211 14:12:15.586310 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="392d68e3-ec0f-4e16-b58f-d1bbdbce674f" path="/var/lib/kubelet/pods/392d68e3-ec0f-4e16-b58f-d1bbdbce674f/volumes" Dec 11 14:12:15 crc kubenswrapper[5050]: I1211 14:12:15.593843 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4130a67a-7d8d-4eff-b6ea-be9f43992443" path="/var/lib/kubelet/pods/4130a67a-7d8d-4eff-b6ea-be9f43992443/volumes" Dec 11 14:12:15 crc kubenswrapper[5050]: I1211 14:12:15.594997 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8169d760-5539-44ed-9586-6dd71f7fcda5" path="/var/lib/kubelet/pods/8169d760-5539-44ed-9586-6dd71f7fcda5/volumes" Dec 11 14:12:15 crc kubenswrapper[5050]: I1211 14:12:15.595604 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81a009ad-1a05-40c1-9c75-7a559592eadf" path="/var/lib/kubelet/pods/81a009ad-1a05-40c1-9c75-7a559592eadf/volumes" Dec 11 14:12:15 crc 
kubenswrapper[5050]: I1211 14:12:15.596247 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a71df796-b040-4319-bc57-96a894dada33" path="/var/lib/kubelet/pods/a71df796-b040-4319-bc57-96a894dada33/volumes" Dec 11 14:12:15 crc kubenswrapper[5050]: I1211 14:12:15.599182 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae8fd88c-6bf8-483c-950f-1466ea49c607" path="/var/lib/kubelet/pods/ae8fd88c-6bf8-483c-950f-1466ea49c607/volumes" Dec 11 14:12:15 crc kubenswrapper[5050]: I1211 14:12:15.599889 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc2e956f-6026-4a75-b11a-5106aad626a5" path="/var/lib/kubelet/pods/bc2e956f-6026-4a75-b11a-5106aad626a5/volumes" Dec 11 14:12:15 crc kubenswrapper[5050]: I1211 14:12:15.600453 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf2cdf1b-29dd-484e-ad40-e287454d8534" path="/var/lib/kubelet/pods/bf2cdf1b-29dd-484e-ad40-e287454d8534/volumes" Dec 11 14:12:15 crc kubenswrapper[5050]: I1211 14:12:15.607908 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cca0032a-ceed-4a6a-9d4e-9a782c3bfe55" path="/var/lib/kubelet/pods/cca0032a-ceed-4a6a-9d4e-9a782c3bfe55/volumes" Dec 11 14:12:15 crc kubenswrapper[5050]: I1211 14:12:15.608515 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7f3c014-a9d3-4424-be41-e87a3736a58d" path="/var/lib/kubelet/pods/e7f3c014-a9d3-4424-be41-e87a3736a58d/volumes" Dec 11 14:12:15 crc kubenswrapper[5050]: I1211 14:12:15.609001 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f6bb80a3-78fe-4854-91bf-69a0f93a2f48" path="/var/lib/kubelet/pods/f6bb80a3-78fe-4854-91bf-69a0f93a2f48/volumes" Dec 11 14:12:15 crc kubenswrapper[5050]: I1211 14:12:15.612721 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f9d5e3e1-bee1-4425-a1ed-a6234cf3db49" path="/var/lib/kubelet/pods/f9d5e3e1-bee1-4425-a1ed-a6234cf3db49/volumes" Dec 11 14:12:15 crc kubenswrapper[5050]: I1211 14:12:15.615401 5050 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/458f05be-2fd6-44d9-8034-f077356964ce-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:15 crc kubenswrapper[5050]: I1211 14:12:15.617315 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0891f075-8101-475b-b844-e7cb42a4990b-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "0891f075-8101-475b-b844-e7cb42a4990b" (UID: "0891f075-8101-475b-b844-e7cb42a4990b"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 14:12:15 crc kubenswrapper[5050]: I1211 14:12:15.624071 5050 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." 
pod="openstack/novaapi4326-account-delete-9bmsz" secret="" err="secret \"galera-openstack-dockercfg-mgt2f\" not found" Dec 11 14:12:15 crc kubenswrapper[5050]: I1211 14:12:15.660963 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/novaapi4326-account-delete-9bmsz" podStartSLOduration=10.66093119 podStartE2EDuration="10.66093119s" podCreationTimestamp="2025-12-11 14:12:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 14:12:15.653415808 +0000 UTC m=+1426.497138394" watchObservedRunningTime="2025-12-11 14:12:15.66093119 +0000 UTC m=+1426.504653766" Dec 11 14:12:15 crc kubenswrapper[5050]: I1211 14:12:15.688683 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder27b7-account-delete-h9wjn" podUID="e365d825-a3cb-42a3-8a00-8a9be42ed290" containerName="mariadb-account-delete" containerID="cri-o://71ab99722ee6cf9688769ef5f2db696be7f9ef481521ac4426c154a7f02debc7" gracePeriod=30 Dec 11 14:12:15 crc kubenswrapper[5050]: I1211 14:12:15.696158 5050 generic.go:334] "Generic (PLEG): container finished" podID="87937f27-2525-4fed-88bb-38a90404860c" containerID="8106f10bed3bf46c894b57795f1b60477931a5aba11442b13a0172969049c89a" exitCode=0 Dec 11 14:12:15 crc kubenswrapper[5050]: I1211 14:12:15.707459 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder27b7-account-delete-h9wjn" podStartSLOduration=12.707441112 podStartE2EDuration="12.707441112s" podCreationTimestamp="2025-12-11 14:12:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 14:12:15.705642043 +0000 UTC m=+1426.549364629" watchObservedRunningTime="2025-12-11 14:12:15.707441112 +0000 UTC m=+1426.551163698" Dec 11 14:12:15 crc kubenswrapper[5050]: I1211 14:12:15.710345 5050 generic.go:334] "Generic (PLEG): container finished" podID="569cb143-086a-42f1-9e8c-6f6f614c9ee2" containerID="e402b883c564b7c4156be1691f2f8af60f04df5e1dc8aa45ac6e3435d54ea395" exitCode=0 Dec 11 14:12:15 crc kubenswrapper[5050]: I1211 14:12:15.717523 5050 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/0891f075-8101-475b-b844-e7cb42a4990b-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:15 crc kubenswrapper[5050]: I1211 14:12:15.743846 5050 generic.go:334] "Generic (PLEG): container finished" podID="934cae9e-c75b-434d-b1e1-d566d6fb8b7d" containerID="d2f88cb82773ad5f567925e106c60ec7bef84c6e078be7c5e2a9bd340e19b35c" exitCode=0 Dec 11 14:12:15 crc kubenswrapper[5050]: I1211 14:12:15.745784 5050 generic.go:334] "Generic (PLEG): container finished" podID="bc8efd61-e4fb-4ec0-834a-b495797039a1" containerID="820172aaadd0f74b40c79e4bb0bcfc72118d1d7fa7c79271701f0e5c49f8af53" exitCode=1 Dec 11 14:12:15 crc kubenswrapper[5050]: I1211 14:12:15.781713 5050 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="openstack/novacell0e20d-account-delete-ntvhn" secret="" err="secret \"galera-openstack-dockercfg-mgt2f\" not found" Dec 11 14:12:15 crc kubenswrapper[5050]: I1211 14:12:15.814758 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 11 14:12:15 crc kubenswrapper[5050]: E1211 14:12:15.837953 5050 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Dec 11 14:12:15 crc kubenswrapper[5050]: E1211 14:12:15.838106 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/66cb4589-6296-417b-87eb-4bcbff7bf580-operator-scripts podName:66cb4589-6296-417b-87eb-4bcbff7bf580 nodeName:}" failed. No retries permitted until 2025-12-11 14:12:19.838088307 +0000 UTC m=+1430.681810893 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/66cb4589-6296-417b-87eb-4bcbff7bf580-operator-scripts") pod "barbican34ac-account-delete-cd525" (UID: "66cb4589-6296-417b-87eb-4bcbff7bf580") : configmap "openstack-scripts" not found Dec 11 14:12:15 crc kubenswrapper[5050]: E1211 14:12:15.838556 5050 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Dec 11 14:12:15 crc kubenswrapper[5050]: E1211 14:12:15.838603 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ca005c2d-f7de-486a-bbd6-a32443582833-operator-scripts podName:ca005c2d-f7de-486a-bbd6-a32443582833 nodeName:}" failed. No retries permitted until 2025-12-11 14:12:19.83858705 +0000 UTC m=+1430.682309636 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/ca005c2d-f7de-486a-bbd6-a32443582833-operator-scripts") pod "novacell0e20d-account-delete-ntvhn" (UID: "ca005c2d-f7de-486a-bbd6-a32443582833") : configmap "openstack-scripts" not found Dec 11 14:12:15 crc kubenswrapper[5050]: E1211 14:12:15.943048 5050 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Dec 11 14:12:15 crc kubenswrapper[5050]: E1211 14:12:15.943174 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1d484c84-7333-4701-a4f3-655c3d2cbfa7-operator-scripts podName:1d484c84-7333-4701-a4f3-655c3d2cbfa7 nodeName:}" failed. No retries permitted until 2025-12-11 14:12:19.943146114 +0000 UTC m=+1430.786868700 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/1d484c84-7333-4701-a4f3-655c3d2cbfa7-operator-scripts") pod "glance2653-account-delete-rgdnl" (UID: "1d484c84-7333-4701-a4f3-655c3d2cbfa7") : configmap "openstack-scripts" not found Dec 11 14:12:16 crc kubenswrapper[5050]: I1211 14:12:16.307469 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance2653-account-delete-rgdnl" event={"ID":"1d484c84-7333-4701-a4f3-655c3d2cbfa7","Type":"ContainerStarted","Data":"da09975c6998c658bb6956da61c91b298081b08244ab21dcf8062177373472dc"} Dec 11 14:12:16 crc kubenswrapper[5050]: I1211 14:12:16.307977 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/novaapi4326-account-delete-9bmsz" event={"ID":"d917f471-6630-4e96-a0e4-cbde631da4a8","Type":"ContainerStarted","Data":"e30b2c4e6ff129a1294024884ec6faf856995da2c5c40ac6cf9f3cdc4cf71b8a"} Dec 11 14:12:16 crc kubenswrapper[5050]: I1211 14:12:16.308002 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-2cbbr"] Dec 11 14:12:16 crc kubenswrapper[5050]: I1211 14:12:16.308047 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder27b7-account-delete-h9wjn" event={"ID":"e365d825-a3cb-42a3-8a00-8a9be42ed290","Type":"ContainerStarted","Data":"71ab99722ee6cf9688769ef5f2db696be7f9ef481521ac4426c154a7f02debc7"} Dec 11 14:12:16 crc kubenswrapper[5050]: I1211 14:12:16.308070 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-2cbbr"] Dec 11 14:12:16 crc kubenswrapper[5050]: I1211 14:12:16.308098 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"87937f27-2525-4fed-88bb-38a90404860c","Type":"ContainerDied","Data":"8106f10bed3bf46c894b57795f1b60477931a5aba11442b13a0172969049c89a"} Dec 11 14:12:16 crc kubenswrapper[5050]: I1211 14:12:16.308122 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-655647566b-n2tcs" event={"ID":"569cb143-086a-42f1-9e8c-6f6f614c9ee2","Type":"ContainerDied","Data":"e402b883c564b7c4156be1691f2f8af60f04df5e1dc8aa45ac6e3435d54ea395"} Dec 11 14:12:16 crc kubenswrapper[5050]: I1211 14:12:16.308145 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/novaapi4326-account-delete-9bmsz"] Dec 11 14:12:16 crc kubenswrapper[5050]: I1211 14:12:16.308168 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"defedffb-9310-4b18-b7ee-b54040aa5447","Type":"ContainerDied","Data":"aa551191fb3a0ea98347fca4525dc93cee1f4c93fbca070cbdc38382a4dcbbc2"} Dec 11 14:12:16 crc kubenswrapper[5050]: I1211 14:12:16.308186 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aa551191fb3a0ea98347fca4525dc93cee1f4c93fbca070cbdc38382a4dcbbc2" Dec 11 14:12:16 crc kubenswrapper[5050]: I1211 14:12:16.308201 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-54bb9c4d69-975sg" event={"ID":"934cae9e-c75b-434d-b1e1-d566d6fb8b7d","Type":"ContainerDied","Data":"d2f88cb82773ad5f567925e106c60ec7bef84c6e078be7c5e2a9bd340e19b35c"} Dec 11 14:12:16 crc kubenswrapper[5050]: I1211 14:12:16.308219 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-54bb9c4d69-975sg" event={"ID":"934cae9e-c75b-434d-b1e1-d566d6fb8b7d","Type":"ContainerDied","Data":"abdd71658e59797dc9fcd008fc688f874bf25398b05b5bbc4d1e561c2d75f9c0"} Dec 11 14:12:16 crc kubenswrapper[5050]: I1211 
14:12:16.308230 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="abdd71658e59797dc9fcd008fc688f874bf25398b05b5bbc4d1e561c2d75f9c0" Dec 11 14:12:16 crc kubenswrapper[5050]: I1211 14:12:16.308242 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placementd2f2-account-delete-njqwh" event={"ID":"bc8efd61-e4fb-4ec0-834a-b495797039a1","Type":"ContainerDied","Data":"820172aaadd0f74b40c79e4bb0bcfc72118d1d7fa7c79271701f0e5c49f8af53"} Dec 11 14:12:16 crc kubenswrapper[5050]: I1211 14:12:16.308262 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-4326-account-create-update-mx2wn"] Dec 11 14:12:16 crc kubenswrapper[5050]: I1211 14:12:16.308278 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican34ac-account-delete-cd525" event={"ID":"66cb4589-6296-417b-87eb-4bcbff7bf580","Type":"ContainerStarted","Data":"91aedb14b57d8c859204563249993412033e4ca1945b5f8067581ca33e70d57a"} Dec 11 14:12:16 crc kubenswrapper[5050]: I1211 14:12:16.308296 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-4326-account-create-update-mx2wn"] Dec 11 14:12:16 crc kubenswrapper[5050]: I1211 14:12:16.308313 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/novacell0e20d-account-delete-ntvhn" event={"ID":"ca005c2d-f7de-486a-bbd6-a32443582833","Type":"ContainerStarted","Data":"256a10ae9fc0b3f305dca6de262f804a0556a28126926031e10ecf611af1ad9a"} Dec 11 14:12:16 crc kubenswrapper[5050]: I1211 14:12:16.323477 5050 scope.go:117] "RemoveContainer" containerID="7fc0726972676985eb911b818bc159c8c1b12a1ca0e646ddda6558ea21079201" Dec 11 14:12:16 crc kubenswrapper[5050]: I1211 14:12:16.329934 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Dec 11 14:12:16 crc kubenswrapper[5050]: E1211 14:12:16.363290 5050 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 8106f10bed3bf46c894b57795f1b60477931a5aba11442b13a0172969049c89a is running failed: container process not found" containerID="8106f10bed3bf46c894b57795f1b60477931a5aba11442b13a0172969049c89a" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 11 14:12:16 crc kubenswrapper[5050]: E1211 14:12:16.363724 5050 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 8106f10bed3bf46c894b57795f1b60477931a5aba11442b13a0172969049c89a is running failed: container process not found" containerID="8106f10bed3bf46c894b57795f1b60477931a5aba11442b13a0172969049c89a" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 11 14:12:16 crc kubenswrapper[5050]: E1211 14:12:16.364187 5050 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 8106f10bed3bf46c894b57795f1b60477931a5aba11442b13a0172969049c89a is running failed: container process not found" containerID="8106f10bed3bf46c894b57795f1b60477931a5aba11442b13a0172969049c89a" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 11 14:12:16 crc kubenswrapper[5050]: E1211 14:12:16.364278 5050 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 8106f10bed3bf46c894b57795f1b60477931a5aba11442b13a0172969049c89a is running failed: container process not found" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="87937f27-2525-4fed-88bb-38a90404860c" containerName="nova-scheduler-scheduler" Dec 11 14:12:16 crc kubenswrapper[5050]: I1211 14:12:16.407029 5050 scope.go:117] "RemoveContainer" containerID="510949b6fa4514794979cb46d1baa4411178e70e74985dbfb206b0b3da3f4cc4" Dec 11 14:12:16 crc kubenswrapper[5050]: I1211 14:12:16.458850 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-54bb9c4d69-975sg" Dec 11 14:12:16 crc kubenswrapper[5050]: I1211 14:12:16.464081 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/defedffb-9310-4b18-b7ee-b54040aa5447-memcached-tls-certs\") pod \"defedffb-9310-4b18-b7ee-b54040aa5447\" (UID: \"defedffb-9310-4b18-b7ee-b54040aa5447\") " Dec 11 14:12:16 crc kubenswrapper[5050]: I1211 14:12:16.464153 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/defedffb-9310-4b18-b7ee-b54040aa5447-combined-ca-bundle\") pod \"defedffb-9310-4b18-b7ee-b54040aa5447\" (UID: \"defedffb-9310-4b18-b7ee-b54040aa5447\") " Dec 11 14:12:16 crc kubenswrapper[5050]: I1211 14:12:16.464350 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/defedffb-9310-4b18-b7ee-b54040aa5447-kolla-config\") pod \"defedffb-9310-4b18-b7ee-b54040aa5447\" (UID: \"defedffb-9310-4b18-b7ee-b54040aa5447\") " Dec 11 14:12:16 crc kubenswrapper[5050]: I1211 14:12:16.464460 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lcwl9\" (UniqueName: \"kubernetes.io/projected/defedffb-9310-4b18-b7ee-b54040aa5447-kube-api-access-lcwl9\") pod \"defedffb-9310-4b18-b7ee-b54040aa5447\" (UID: \"defedffb-9310-4b18-b7ee-b54040aa5447\") " Dec 11 14:12:16 crc kubenswrapper[5050]: I1211 14:12:16.464551 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/defedffb-9310-4b18-b7ee-b54040aa5447-config-data\") pod \"defedffb-9310-4b18-b7ee-b54040aa5447\" (UID: \"defedffb-9310-4b18-b7ee-b54040aa5447\") " Dec 11 14:12:16 crc kubenswrapper[5050]: I1211 14:12:16.466027 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/defedffb-9310-4b18-b7ee-b54040aa5447-config-data" (OuterVolumeSpecName: "config-data") pod "defedffb-9310-4b18-b7ee-b54040aa5447" (UID: "defedffb-9310-4b18-b7ee-b54040aa5447"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 14:12:16 crc kubenswrapper[5050]: I1211 14:12:16.466989 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/defedffb-9310-4b18-b7ee-b54040aa5447-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "defedffb-9310-4b18-b7ee-b54040aa5447" (UID: "defedffb-9310-4b18-b7ee-b54040aa5447"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 14:12:16 crc kubenswrapper[5050]: I1211 14:12:16.472607 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/defedffb-9310-4b18-b7ee-b54040aa5447-kube-api-access-lcwl9" (OuterVolumeSpecName: "kube-api-access-lcwl9") pod "defedffb-9310-4b18-b7ee-b54040aa5447" (UID: "defedffb-9310-4b18-b7ee-b54040aa5447"). InnerVolumeSpecName "kube-api-access-lcwl9". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 14:12:16 crc kubenswrapper[5050]: I1211 14:12:16.518114 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/defedffb-9310-4b18-b7ee-b54040aa5447-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "defedffb-9310-4b18-b7ee-b54040aa5447" (UID: "defedffb-9310-4b18-b7ee-b54040aa5447"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:12:16 crc kubenswrapper[5050]: I1211 14:12:16.566207 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/934cae9e-c75b-434d-b1e1-d566d6fb8b7d-config-data-custom\") pod \"934cae9e-c75b-434d-b1e1-d566d6fb8b7d\" (UID: \"934cae9e-c75b-434d-b1e1-d566d6fb8b7d\") " Dec 11 14:12:16 crc kubenswrapper[5050]: I1211 14:12:16.566251 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dlt4x\" (UniqueName: \"kubernetes.io/projected/934cae9e-c75b-434d-b1e1-d566d6fb8b7d-kube-api-access-dlt4x\") pod \"934cae9e-c75b-434d-b1e1-d566d6fb8b7d\" (UID: \"934cae9e-c75b-434d-b1e1-d566d6fb8b7d\") " Dec 11 14:12:16 crc kubenswrapper[5050]: I1211 14:12:16.566463 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/934cae9e-c75b-434d-b1e1-d566d6fb8b7d-combined-ca-bundle\") pod \"934cae9e-c75b-434d-b1e1-d566d6fb8b7d\" (UID: \"934cae9e-c75b-434d-b1e1-d566d6fb8b7d\") " Dec 11 14:12:16 crc kubenswrapper[5050]: I1211 14:12:16.566619 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/934cae9e-c75b-434d-b1e1-d566d6fb8b7d-config-data\") pod \"934cae9e-c75b-434d-b1e1-d566d6fb8b7d\" (UID: \"934cae9e-c75b-434d-b1e1-d566d6fb8b7d\") " Dec 11 14:12:16 crc kubenswrapper[5050]: I1211 14:12:16.566659 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/934cae9e-c75b-434d-b1e1-d566d6fb8b7d-logs\") pod \"934cae9e-c75b-434d-b1e1-d566d6fb8b7d\" (UID: \"934cae9e-c75b-434d-b1e1-d566d6fb8b7d\") " Dec 11 14:12:16 crc kubenswrapper[5050]: I1211 14:12:16.567300 5050 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/defedffb-9310-4b18-b7ee-b54040aa5447-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:16 crc kubenswrapper[5050]: I1211 14:12:16.567315 5050 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/defedffb-9310-4b18-b7ee-b54040aa5447-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:16 crc kubenswrapper[5050]: I1211 14:12:16.567327 5050 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/defedffb-9310-4b18-b7ee-b54040aa5447-kolla-config\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:16 crc kubenswrapper[5050]: I1211 14:12:16.567356 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lcwl9\" (UniqueName: \"kubernetes.io/projected/defedffb-9310-4b18-b7ee-b54040aa5447-kube-api-access-lcwl9\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:16 crc kubenswrapper[5050]: I1211 14:12:16.567744 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/934cae9e-c75b-434d-b1e1-d566d6fb8b7d-logs" (OuterVolumeSpecName: "logs") pod "934cae9e-c75b-434d-b1e1-d566d6fb8b7d" (UID: "934cae9e-c75b-434d-b1e1-d566d6fb8b7d"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 14:12:16 crc kubenswrapper[5050]: I1211 14:12:16.568179 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/defedffb-9310-4b18-b7ee-b54040aa5447-memcached-tls-certs" (OuterVolumeSpecName: "memcached-tls-certs") pod "defedffb-9310-4b18-b7ee-b54040aa5447" (UID: "defedffb-9310-4b18-b7ee-b54040aa5447"). InnerVolumeSpecName "memcached-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:12:16 crc kubenswrapper[5050]: I1211 14:12:16.571273 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/934cae9e-c75b-434d-b1e1-d566d6fb8b7d-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "934cae9e-c75b-434d-b1e1-d566d6fb8b7d" (UID: "934cae9e-c75b-434d-b1e1-d566d6fb8b7d"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:12:16 crc kubenswrapper[5050]: I1211 14:12:16.572345 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/934cae9e-c75b-434d-b1e1-d566d6fb8b7d-kube-api-access-dlt4x" (OuterVolumeSpecName: "kube-api-access-dlt4x") pod "934cae9e-c75b-434d-b1e1-d566d6fb8b7d" (UID: "934cae9e-c75b-434d-b1e1-d566d6fb8b7d"). InnerVolumeSpecName "kube-api-access-dlt4x". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 14:12:16 crc kubenswrapper[5050]: I1211 14:12:16.594156 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/934cae9e-c75b-434d-b1e1-d566d6fb8b7d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "934cae9e-c75b-434d-b1e1-d566d6fb8b7d" (UID: "934cae9e-c75b-434d-b1e1-d566d6fb8b7d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:12:16 crc kubenswrapper[5050]: I1211 14:12:16.649190 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/934cae9e-c75b-434d-b1e1-d566d6fb8b7d-config-data" (OuterVolumeSpecName: "config-data") pod "934cae9e-c75b-434d-b1e1-d566d6fb8b7d" (UID: "934cae9e-c75b-434d-b1e1-d566d6fb8b7d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:12:16 crc kubenswrapper[5050]: I1211 14:12:16.668600 5050 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/934cae9e-c75b-434d-b1e1-d566d6fb8b7d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:16 crc kubenswrapper[5050]: I1211 14:12:16.668750 5050 reconciler_common.go:293] "Volume detached for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/defedffb-9310-4b18-b7ee-b54040aa5447-memcached-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:16 crc kubenswrapper[5050]: I1211 14:12:16.668895 5050 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/934cae9e-c75b-434d-b1e1-d566d6fb8b7d-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:16 crc kubenswrapper[5050]: I1211 14:12:16.668943 5050 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/934cae9e-c75b-434d-b1e1-d566d6fb8b7d-logs\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:16 crc kubenswrapper[5050]: I1211 14:12:16.668958 5050 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/934cae9e-c75b-434d-b1e1-d566d6fb8b7d-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:16 crc kubenswrapper[5050]: I1211 14:12:16.668969 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dlt4x\" (UniqueName: \"kubernetes.io/projected/934cae9e-c75b-434d-b1e1-d566d6fb8b7d-kube-api-access-dlt4x\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:16 crc kubenswrapper[5050]: I1211 14:12:16.737295 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-655647566b-n2tcs" Dec 11 14:12:16 crc kubenswrapper[5050]: I1211 14:12:16.749713 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Dec 11 14:12:16 crc kubenswrapper[5050]: I1211 14:12:16.750329 5050 scope.go:117] "RemoveContainer" containerID="d5af5116657e860183a9cb6b884cd9869678b597a90e497443cb7f1f22d522ac" Dec 11 14:12:16 crc kubenswrapper[5050]: I1211 14:12:16.758715 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 11 14:12:16 crc kubenswrapper[5050]: E1211 14:12:16.775911 5050 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod458f05be_2fd6_44d9_8034_f077356964ce.slice\": RecentStats: unable to find data in memory cache]" Dec 11 14:12:16 crc kubenswrapper[5050]: I1211 14:12:16.781936 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Dec 11 14:12:16 crc kubenswrapper[5050]: I1211 14:12:16.788034 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-0"] Dec 11 14:12:16 crc kubenswrapper[5050]: I1211 14:12:16.795838 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican34ac-account-delete-cd525" Dec 11 14:12:16 crc kubenswrapper[5050]: I1211 14:12:16.801142 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 11 14:12:16 crc kubenswrapper[5050]: I1211 14:12:16.807896 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Dec 11 14:12:16 crc kubenswrapper[5050]: I1211 14:12:16.844956 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 11 14:12:16 crc kubenswrapper[5050]: I1211 14:12:16.850356 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder27b7-account-delete-h9wjn" Dec 11 14:12:16 crc kubenswrapper[5050]: I1211 14:12:16.856717 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 11 14:12:16 crc kubenswrapper[5050]: I1211 14:12:16.866219 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 11 14:12:16 crc kubenswrapper[5050]: I1211 14:12:16.867662 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placementd2f2-account-delete-njqwh" Dec 11 14:12:16 crc kubenswrapper[5050]: I1211 14:12:16.872923 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-655647566b-n2tcs" event={"ID":"569cb143-086a-42f1-9e8c-6f6f614c9ee2","Type":"ContainerDied","Data":"6fac34aac4fda442dfc8957aad3488b09240cc41d9a9c76bd6bf15fff9fd9fc8"} Dec 11 14:12:16 crc kubenswrapper[5050]: I1211 14:12:16.872978 5050 scope.go:117] "RemoveContainer" containerID="e402b883c564b7c4156be1691f2f8af60f04df5e1dc8aa45ac6e3435d54ea395" Dec 11 14:12:16 crc kubenswrapper[5050]: I1211 14:12:16.873271 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-655647566b-n2tcs" Dec 11 14:12:16 crc kubenswrapper[5050]: I1211 14:12:16.877842 5050 generic.go:334] "Generic (PLEG): container finished" podID="36acbdf3-346e-4207-8391-b2a03ef839e5" containerID="5f2c03aa348522be8e65276f7ae37004bcd483651a98c207f1fa66f6b76162d4" exitCode=0 Dec 11 14:12:16 crc kubenswrapper[5050]: I1211 14:12:16.877908 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7f54bc974d-nvhbp" event={"ID":"36acbdf3-346e-4207-8391-b2a03ef839e5","Type":"ContainerDied","Data":"5f2c03aa348522be8e65276f7ae37004bcd483651a98c207f1fa66f6b76162d4"} Dec 11 14:12:16 crc kubenswrapper[5050]: I1211 14:12:16.878285 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance2653-account-delete-rgdnl" Dec 11 14:12:16 crc kubenswrapper[5050]: I1211 14:12:16.878640 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lrsfm\" (UniqueName: \"kubernetes.io/projected/55b7e535-46f6-403b-9cdf-bf172dba97b6-kube-api-access-lrsfm\") pod \"55b7e535-46f6-403b-9cdf-bf172dba97b6\" (UID: \"55b7e535-46f6-403b-9cdf-bf172dba97b6\") " Dec 11 14:12:16 crc kubenswrapper[5050]: I1211 14:12:16.878689 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/569cb143-086a-42f1-9e8c-6f6f614c9ee2-combined-ca-bundle\") pod \"569cb143-086a-42f1-9e8c-6f6f614c9ee2\" (UID: \"569cb143-086a-42f1-9e8c-6f6f614c9ee2\") " Dec 11 14:12:16 crc kubenswrapper[5050]: I1211 14:12:16.878719 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/55b7e535-46f6-403b-9cdf-bf172dba97b6-config-data-default\") pod \"55b7e535-46f6-403b-9cdf-bf172dba97b6\" (UID: \"55b7e535-46f6-403b-9cdf-bf172dba97b6\") " Dec 11 14:12:16 crc kubenswrapper[5050]: I1211 14:12:16.878806 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-glxxg\" (UniqueName: \"kubernetes.io/projected/87937f27-2525-4fed-88bb-38a90404860c-kube-api-access-glxxg\") pod \"87937f27-2525-4fed-88bb-38a90404860c\" (UID: \"87937f27-2525-4fed-88bb-38a90404860c\") " Dec 11 14:12:16 crc kubenswrapper[5050]: I1211 14:12:16.878835 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/66cb4589-6296-417b-87eb-4bcbff7bf580-operator-scripts\") pod \"66cb4589-6296-417b-87eb-4bcbff7bf580\" (UID: \"66cb4589-6296-417b-87eb-4bcbff7bf580\") " Dec 11 14:12:16 crc kubenswrapper[5050]: I1211 14:12:16.878866 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/569cb143-086a-42f1-9e8c-6f6f614c9ee2-logs\") pod \"569cb143-086a-42f1-9e8c-6f6f614c9ee2\" (UID: \"569cb143-086a-42f1-9e8c-6f6f614c9ee2\") " Dec 11 14:12:16 crc kubenswrapper[5050]: I1211 14:12:16.878900 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mysql-db\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"55b7e535-46f6-403b-9cdf-bf172dba97b6\" (UID: \"55b7e535-46f6-403b-9cdf-bf172dba97b6\") " Dec 11 14:12:16 crc kubenswrapper[5050]: I1211 14:12:16.878921 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/55b7e535-46f6-403b-9cdf-bf172dba97b6-config-data-generated\") pod \"55b7e535-46f6-403b-9cdf-bf172dba97b6\" (UID: \"55b7e535-46f6-403b-9cdf-bf172dba97b6\") " Dec 11 14:12:16 crc kubenswrapper[5050]: I1211 14:12:16.878985 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/569cb143-086a-42f1-9e8c-6f6f614c9ee2-config-data-custom\") pod \"569cb143-086a-42f1-9e8c-6f6f614c9ee2\" (UID: \"569cb143-086a-42f1-9e8c-6f6f614c9ee2\") " Dec 11 14:12:16 crc kubenswrapper[5050]: I1211 14:12:16.879216 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/55b7e535-46f6-403b-9cdf-bf172dba97b6-galera-tls-certs\") pod 
\"55b7e535-46f6-403b-9cdf-bf172dba97b6\" (UID: \"55b7e535-46f6-403b-9cdf-bf172dba97b6\") " Dec 11 14:12:16 crc kubenswrapper[5050]: I1211 14:12:16.879257 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/55b7e535-46f6-403b-9cdf-bf172dba97b6-kolla-config\") pod \"55b7e535-46f6-403b-9cdf-bf172dba97b6\" (UID: \"55b7e535-46f6-403b-9cdf-bf172dba97b6\") " Dec 11 14:12:16 crc kubenswrapper[5050]: I1211 14:12:16.879276 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87937f27-2525-4fed-88bb-38a90404860c-combined-ca-bundle\") pod \"87937f27-2525-4fed-88bb-38a90404860c\" (UID: \"87937f27-2525-4fed-88bb-38a90404860c\") " Dec 11 14:12:16 crc kubenswrapper[5050]: I1211 14:12:16.879353 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/55b7e535-46f6-403b-9cdf-bf172dba97b6-operator-scripts\") pod \"55b7e535-46f6-403b-9cdf-bf172dba97b6\" (UID: \"55b7e535-46f6-403b-9cdf-bf172dba97b6\") " Dec 11 14:12:16 crc kubenswrapper[5050]: I1211 14:12:16.879379 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/569cb143-086a-42f1-9e8c-6f6f614c9ee2-config-data\") pod \"569cb143-086a-42f1-9e8c-6f6f614c9ee2\" (UID: \"569cb143-086a-42f1-9e8c-6f6f614c9ee2\") " Dec 11 14:12:16 crc kubenswrapper[5050]: I1211 14:12:16.879397 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87937f27-2525-4fed-88bb-38a90404860c-config-data\") pod \"87937f27-2525-4fed-88bb-38a90404860c\" (UID: \"87937f27-2525-4fed-88bb-38a90404860c\") " Dec 11 14:12:16 crc kubenswrapper[5050]: I1211 14:12:16.879438 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-956wq\" (UniqueName: \"kubernetes.io/projected/66cb4589-6296-417b-87eb-4bcbff7bf580-kube-api-access-956wq\") pod \"66cb4589-6296-417b-87eb-4bcbff7bf580\" (UID: \"66cb4589-6296-417b-87eb-4bcbff7bf580\") " Dec 11 14:12:16 crc kubenswrapper[5050]: I1211 14:12:16.879463 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-72shl\" (UniqueName: \"kubernetes.io/projected/569cb143-086a-42f1-9e8c-6f6f614c9ee2-kube-api-access-72shl\") pod \"569cb143-086a-42f1-9e8c-6f6f614c9ee2\" (UID: \"569cb143-086a-42f1-9e8c-6f6f614c9ee2\") " Dec 11 14:12:16 crc kubenswrapper[5050]: I1211 14:12:16.879493 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55b7e535-46f6-403b-9cdf-bf172dba97b6-combined-ca-bundle\") pod \"55b7e535-46f6-403b-9cdf-bf172dba97b6\" (UID: \"55b7e535-46f6-403b-9cdf-bf172dba97b6\") " Dec 11 14:12:16 crc kubenswrapper[5050]: I1211 14:12:16.883162 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55b7e535-46f6-403b-9cdf-bf172dba97b6-kube-api-access-lrsfm" (OuterVolumeSpecName: "kube-api-access-lrsfm") pod "55b7e535-46f6-403b-9cdf-bf172dba97b6" (UID: "55b7e535-46f6-403b-9cdf-bf172dba97b6"). InnerVolumeSpecName "kube-api-access-lrsfm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 14:12:16 crc kubenswrapper[5050]: I1211 14:12:16.883386 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 11 14:12:16 crc kubenswrapper[5050]: I1211 14:12:16.883870 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/66cb4589-6296-417b-87eb-4bcbff7bf580-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "66cb4589-6296-417b-87eb-4bcbff7bf580" (UID: "66cb4589-6296-417b-87eb-4bcbff7bf580"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 14:12:16 crc kubenswrapper[5050]: I1211 14:12:16.884383 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/55b7e535-46f6-403b-9cdf-bf172dba97b6-config-data-default" (OuterVolumeSpecName: "config-data-default") pod "55b7e535-46f6-403b-9cdf-bf172dba97b6" (UID: "55b7e535-46f6-403b-9cdf-bf172dba97b6"). InnerVolumeSpecName "config-data-default". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 14:12:16 crc kubenswrapper[5050]: I1211 14:12:16.885061 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/55b7e535-46f6-403b-9cdf-bf172dba97b6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "55b7e535-46f6-403b-9cdf-bf172dba97b6" (UID: "55b7e535-46f6-403b-9cdf-bf172dba97b6"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 14:12:16 crc kubenswrapper[5050]: I1211 14:12:16.885501 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/55b7e535-46f6-403b-9cdf-bf172dba97b6-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "55b7e535-46f6-403b-9cdf-bf172dba97b6" (UID: "55b7e535-46f6-403b-9cdf-bf172dba97b6"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 14:12:16 crc kubenswrapper[5050]: I1211 14:12:16.887327 5050 generic.go:334] "Generic (PLEG): container finished" podID="d917f471-6630-4e96-a0e4-cbde631da4a8" containerID="e30b2c4e6ff129a1294024884ec6faf856995da2c5c40ac6cf9f3cdc4cf71b8a" exitCode=1 Dec 11 14:12:16 crc kubenswrapper[5050]: I1211 14:12:16.887602 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/novaapi4326-account-delete-9bmsz" event={"ID":"d917f471-6630-4e96-a0e4-cbde631da4a8","Type":"ContainerDied","Data":"e30b2c4e6ff129a1294024884ec6faf856995da2c5c40ac6cf9f3cdc4cf71b8a"} Dec 11 14:12:16 crc kubenswrapper[5050]: I1211 14:12:16.890928 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87937f27-2525-4fed-88bb-38a90404860c-kube-api-access-glxxg" (OuterVolumeSpecName: "kube-api-access-glxxg") pod "87937f27-2525-4fed-88bb-38a90404860c" (UID: "87937f27-2525-4fed-88bb-38a90404860c"). InnerVolumeSpecName "kube-api-access-glxxg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 14:12:16 crc kubenswrapper[5050]: I1211 14:12:16.891336 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/569cb143-086a-42f1-9e8c-6f6f614c9ee2-logs" (OuterVolumeSpecName: "logs") pod "569cb143-086a-42f1-9e8c-6f6f614c9ee2" (UID: "569cb143-086a-42f1-9e8c-6f6f614c9ee2"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 14:12:16 crc kubenswrapper[5050]: I1211 14:12:16.891963 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/55b7e535-46f6-403b-9cdf-bf172dba97b6-config-data-generated" (OuterVolumeSpecName: "config-data-generated") pod "55b7e535-46f6-403b-9cdf-bf172dba97b6" (UID: "55b7e535-46f6-403b-9cdf-bf172dba97b6"). InnerVolumeSpecName "config-data-generated". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 14:12:16 crc kubenswrapper[5050]: I1211 14:12:16.896908 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/66cb4589-6296-417b-87eb-4bcbff7bf580-kube-api-access-956wq" (OuterVolumeSpecName: "kube-api-access-956wq") pod "66cb4589-6296-417b-87eb-4bcbff7bf580" (UID: "66cb4589-6296-417b-87eb-4bcbff7bf580"). InnerVolumeSpecName "kube-api-access-956wq". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 14:12:16 crc kubenswrapper[5050]: I1211 14:12:16.899297 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/569cb143-086a-42f1-9e8c-6f6f614c9ee2-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "569cb143-086a-42f1-9e8c-6f6f614c9ee2" (UID: "569cb143-086a-42f1-9e8c-6f6f614c9ee2"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:12:16 crc kubenswrapper[5050]: I1211 14:12:16.900236 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-7f54bc974d-nvhbp" Dec 11 14:12:16 crc kubenswrapper[5050]: I1211 14:12:16.901840 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "mysql-db") pod "55b7e535-46f6-403b-9cdf-bf172dba97b6" (UID: "55b7e535-46f6-403b-9cdf-bf172dba97b6"). InnerVolumeSpecName "local-storage06-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 11 14:12:16 crc kubenswrapper[5050]: I1211 14:12:16.904194 5050 generic.go:334] "Generic (PLEG): container finished" podID="66cb4589-6296-417b-87eb-4bcbff7bf580" containerID="91aedb14b57d8c859204563249993412033e4ca1945b5f8067581ca33e70d57a" exitCode=1 Dec 11 14:12:16 crc kubenswrapper[5050]: I1211 14:12:16.904353 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican34ac-account-delete-cd525" Dec 11 14:12:16 crc kubenswrapper[5050]: I1211 14:12:16.904969 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican34ac-account-delete-cd525" event={"ID":"66cb4589-6296-417b-87eb-4bcbff7bf580","Type":"ContainerDied","Data":"91aedb14b57d8c859204563249993412033e4ca1945b5f8067581ca33e70d57a"} Dec 11 14:12:16 crc kubenswrapper[5050]: I1211 14:12:16.905002 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican34ac-account-delete-cd525" event={"ID":"66cb4589-6296-417b-87eb-4bcbff7bf580","Type":"ContainerDied","Data":"a5cbbc32ab9e17e0b16cae9b6c24bc2ae8a263c163cc7bf6a747898f4c3c76e8"} Dec 11 14:12:16 crc kubenswrapper[5050]: I1211 14:12:16.909045 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Dec 11 14:12:16 crc kubenswrapper[5050]: I1211 14:12:16.920820 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/569cb143-086a-42f1-9e8c-6f6f614c9ee2-kube-api-access-72shl" (OuterVolumeSpecName: "kube-api-access-72shl") pod "569cb143-086a-42f1-9e8c-6f6f614c9ee2" (UID: "569cb143-086a-42f1-9e8c-6f6f614c9ee2"). InnerVolumeSpecName "kube-api-access-72shl". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 14:12:16 crc kubenswrapper[5050]: I1211 14:12:16.925903 5050 scope.go:117] "RemoveContainer" containerID="d11f1570a4983e90360fd498bdee9b19f208c7f5acd61496d60bf9cadd7bc16f" Dec 11 14:12:16 crc kubenswrapper[5050]: I1211 14:12:16.926340 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-57f899fb58-v2lwj"] Dec 11 14:12:16 crc kubenswrapper[5050]: I1211 14:12:16.927320 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/569cb143-086a-42f1-9e8c-6f6f614c9ee2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "569cb143-086a-42f1-9e8c-6f6f614c9ee2" (UID: "569cb143-086a-42f1-9e8c-6f6f614c9ee2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:12:16 crc kubenswrapper[5050]: I1211 14:12:16.939225 5050 generic.go:334] "Generic (PLEG): container finished" podID="ca005c2d-f7de-486a-bbd6-a32443582833" containerID="256a10ae9fc0b3f305dca6de262f804a0556a28126926031e10ecf611af1ad9a" exitCode=1 Dec 11 14:12:16 crc kubenswrapper[5050]: I1211 14:12:16.939306 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/novacell0e20d-account-delete-ntvhn" event={"ID":"ca005c2d-f7de-486a-bbd6-a32443582833","Type":"ContainerDied","Data":"256a10ae9fc0b3f305dca6de262f804a0556a28126926031e10ecf611af1ad9a"} Dec 11 14:12:16 crc kubenswrapper[5050]: I1211 14:12:16.943477 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-57f899fb58-v2lwj"] Dec 11 14:12:16 crc kubenswrapper[5050]: I1211 14:12:16.947282 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55b7e535-46f6-403b-9cdf-bf172dba97b6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "55b7e535-46f6-403b-9cdf-bf172dba97b6" (UID: "55b7e535-46f6-403b-9cdf-bf172dba97b6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:12:16 crc kubenswrapper[5050]: I1211 14:12:16.947638 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placementd2f2-account-delete-njqwh" Dec 11 14:12:16 crc kubenswrapper[5050]: I1211 14:12:16.947759 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placementd2f2-account-delete-njqwh" event={"ID":"bc8efd61-e4fb-4ec0-834a-b495797039a1","Type":"ContainerDied","Data":"a780d8823ffe3c5dd9906301489f93f6eb3e416a0b447863f72599192f111c37"} Dec 11 14:12:16 crc kubenswrapper[5050]: I1211 14:12:16.959145 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 11 14:12:16 crc kubenswrapper[5050]: I1211 14:12:16.965258 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87937f27-2525-4fed-88bb-38a90404860c-config-data" (OuterVolumeSpecName: "config-data") pod "87937f27-2525-4fed-88bb-38a90404860c" (UID: "87937f27-2525-4fed-88bb-38a90404860c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:12:16 crc kubenswrapper[5050]: I1211 14:12:16.965422 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87937f27-2525-4fed-88bb-38a90404860c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "87937f27-2525-4fed-88bb-38a90404860c" (UID: "87937f27-2525-4fed-88bb-38a90404860c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:12:16 crc kubenswrapper[5050]: I1211 14:12:16.965830 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 11 14:12:16 crc kubenswrapper[5050]: I1211 14:12:16.975796 5050 generic.go:334] "Generic (PLEG): container finished" podID="1d484c84-7333-4701-a4f3-655c3d2cbfa7" containerID="da09975c6998c658bb6956da61c91b298081b08244ab21dcf8062177373472dc" exitCode=1 Dec 11 14:12:16 crc kubenswrapper[5050]: I1211 14:12:16.975924 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance2653-account-delete-rgdnl" event={"ID":"1d484c84-7333-4701-a4f3-655c3d2cbfa7","Type":"ContainerDied","Data":"da09975c6998c658bb6956da61c91b298081b08244ab21dcf8062177373472dc"} Dec 11 14:12:16 crc kubenswrapper[5050]: I1211 14:12:16.975958 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance2653-account-delete-rgdnl" event={"ID":"1d484c84-7333-4701-a4f3-655c3d2cbfa7","Type":"ContainerDied","Data":"e9083430e24059def49fce8b8dffea65371542e4f2d51b5504b0837c83c03e56"} Dec 11 14:12:16 crc kubenswrapper[5050]: I1211 14:12:16.976067 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance2653-account-delete-rgdnl" Dec 11 14:12:16 crc kubenswrapper[5050]: I1211 14:12:16.980289 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e365d825-a3cb-42a3-8a00-8a9be42ed290-operator-scripts\") pod \"e365d825-a3cb-42a3-8a00-8a9be42ed290\" (UID: \"e365d825-a3cb-42a3-8a00-8a9be42ed290\") " Dec 11 14:12:16 crc kubenswrapper[5050]: I1211 14:12:16.980345 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ef32727-7bbd-4a50-8292-4740b34107cc-config-data\") pod \"2ef32727-7bbd-4a50-8292-4740b34107cc\" (UID: \"2ef32727-7bbd-4a50-8292-4740b34107cc\") " Dec 11 14:12:16 crc kubenswrapper[5050]: I1211 14:12:16.980375 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/36acbdf3-346e-4207-8391-b2a03ef839e5-internal-tls-certs\") pod \"36acbdf3-346e-4207-8391-b2a03ef839e5\" (UID: \"36acbdf3-346e-4207-8391-b2a03ef839e5\") " Dec 11 14:12:16 crc kubenswrapper[5050]: I1211 14:12:16.980414 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/36acbdf3-346e-4207-8391-b2a03ef839e5-scripts\") pod \"36acbdf3-346e-4207-8391-b2a03ef839e5\" (UID: \"36acbdf3-346e-4207-8391-b2a03ef839e5\") " Dec 11 14:12:16 crc kubenswrapper[5050]: I1211 14:12:16.980437 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nnlzl\" (UniqueName: \"kubernetes.io/projected/e365d825-a3cb-42a3-8a00-8a9be42ed290-kube-api-access-nnlzl\") pod \"e365d825-a3cb-42a3-8a00-8a9be42ed290\" (UID: \"e365d825-a3cb-42a3-8a00-8a9be42ed290\") " Dec 11 14:12:16 crc kubenswrapper[5050]: I1211 14:12:16.980484 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/36acbdf3-346e-4207-8391-b2a03ef839e5-fernet-keys\") pod \"36acbdf3-346e-4207-8391-b2a03ef839e5\" (UID: \"36acbdf3-346e-4207-8391-b2a03ef839e5\") " Dec 11 14:12:16 crc kubenswrapper[5050]: I1211 14:12:16.980523 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/36acbdf3-346e-4207-8391-b2a03ef839e5-credential-keys\") pod \"36acbdf3-346e-4207-8391-b2a03ef839e5\" (UID: \"36acbdf3-346e-4207-8391-b2a03ef839e5\") " Dec 11 14:12:16 crc kubenswrapper[5050]: I1211 14:12:16.980585 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hhvqd\" (UniqueName: \"kubernetes.io/projected/1d484c84-7333-4701-a4f3-655c3d2cbfa7-kube-api-access-hhvqd\") pod \"1d484c84-7333-4701-a4f3-655c3d2cbfa7\" (UID: \"1d484c84-7333-4701-a4f3-655c3d2cbfa7\") " Dec 11 14:12:16 crc kubenswrapper[5050]: I1211 14:12:16.980606 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1d484c84-7333-4701-a4f3-655c3d2cbfa7-operator-scripts\") pod \"1d484c84-7333-4701-a4f3-655c3d2cbfa7\" (UID: \"1d484c84-7333-4701-a4f3-655c3d2cbfa7\") " Dec 11 14:12:16 crc kubenswrapper[5050]: I1211 14:12:16.980640 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36acbdf3-346e-4207-8391-b2a03ef839e5-combined-ca-bundle\") pod 
\"36acbdf3-346e-4207-8391-b2a03ef839e5\" (UID: \"36acbdf3-346e-4207-8391-b2a03ef839e5\") " Dec 11 14:12:16 crc kubenswrapper[5050]: I1211 14:12:16.980673 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d5rsd\" (UniqueName: \"kubernetes.io/projected/bc8efd61-e4fb-4ec0-834a-b495797039a1-kube-api-access-d5rsd\") pod \"bc8efd61-e4fb-4ec0-834a-b495797039a1\" (UID: \"bc8efd61-e4fb-4ec0-834a-b495797039a1\") " Dec 11 14:12:16 crc kubenswrapper[5050]: I1211 14:12:16.980713 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ef32727-7bbd-4a50-8292-4740b34107cc-scripts\") pod \"2ef32727-7bbd-4a50-8292-4740b34107cc\" (UID: \"2ef32727-7bbd-4a50-8292-4740b34107cc\") " Dec 11 14:12:16 crc kubenswrapper[5050]: I1211 14:12:16.980749 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2ef32727-7bbd-4a50-8292-4740b34107cc-log-httpd\") pod \"2ef32727-7bbd-4a50-8292-4740b34107cc\" (UID: \"2ef32727-7bbd-4a50-8292-4740b34107cc\") " Dec 11 14:12:16 crc kubenswrapper[5050]: I1211 14:12:16.980778 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ef32727-7bbd-4a50-8292-4740b34107cc-combined-ca-bundle\") pod \"2ef32727-7bbd-4a50-8292-4740b34107cc\" (UID: \"2ef32727-7bbd-4a50-8292-4740b34107cc\") " Dec 11 14:12:16 crc kubenswrapper[5050]: I1211 14:12:16.980824 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4bgb9\" (UniqueName: \"kubernetes.io/projected/2ef32727-7bbd-4a50-8292-4740b34107cc-kube-api-access-4bgb9\") pod \"2ef32727-7bbd-4a50-8292-4740b34107cc\" (UID: \"2ef32727-7bbd-4a50-8292-4740b34107cc\") " Dec 11 14:12:16 crc kubenswrapper[5050]: I1211 14:12:16.980850 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ef32727-7bbd-4a50-8292-4740b34107cc-ceilometer-tls-certs\") pod \"2ef32727-7bbd-4a50-8292-4740b34107cc\" (UID: \"2ef32727-7bbd-4a50-8292-4740b34107cc\") " Dec 11 14:12:16 crc kubenswrapper[5050]: I1211 14:12:16.980873 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/36acbdf3-346e-4207-8391-b2a03ef839e5-public-tls-certs\") pod \"36acbdf3-346e-4207-8391-b2a03ef839e5\" (UID: \"36acbdf3-346e-4207-8391-b2a03ef839e5\") " Dec 11 14:12:16 crc kubenswrapper[5050]: I1211 14:12:16.980928 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2ef32727-7bbd-4a50-8292-4740b34107cc-run-httpd\") pod \"2ef32727-7bbd-4a50-8292-4740b34107cc\" (UID: \"2ef32727-7bbd-4a50-8292-4740b34107cc\") " Dec 11 14:12:16 crc kubenswrapper[5050]: I1211 14:12:16.980965 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2ef32727-7bbd-4a50-8292-4740b34107cc-sg-core-conf-yaml\") pod \"2ef32727-7bbd-4a50-8292-4740b34107cc\" (UID: \"2ef32727-7bbd-4a50-8292-4740b34107cc\") " Dec 11 14:12:16 crc kubenswrapper[5050]: I1211 14:12:16.980997 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6c868\" (UniqueName: 
\"kubernetes.io/projected/36acbdf3-346e-4207-8391-b2a03ef839e5-kube-api-access-6c868\") pod \"36acbdf3-346e-4207-8391-b2a03ef839e5\" (UID: \"36acbdf3-346e-4207-8391-b2a03ef839e5\") " Dec 11 14:12:16 crc kubenswrapper[5050]: I1211 14:12:16.981074 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bc8efd61-e4fb-4ec0-834a-b495797039a1-operator-scripts\") pod \"bc8efd61-e4fb-4ec0-834a-b495797039a1\" (UID: \"bc8efd61-e4fb-4ec0-834a-b495797039a1\") " Dec 11 14:12:16 crc kubenswrapper[5050]: I1211 14:12:16.981114 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36acbdf3-346e-4207-8391-b2a03ef839e5-config-data\") pod \"36acbdf3-346e-4207-8391-b2a03ef839e5\" (UID: \"36acbdf3-346e-4207-8391-b2a03ef839e5\") " Dec 11 14:12:16 crc kubenswrapper[5050]: I1211 14:12:16.981458 5050 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/66cb4589-6296-417b-87eb-4bcbff7bf580-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:16 crc kubenswrapper[5050]: I1211 14:12:16.981470 5050 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/569cb143-086a-42f1-9e8c-6f6f614c9ee2-logs\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:16 crc kubenswrapper[5050]: I1211 14:12:16.981492 5050 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Dec 11 14:12:16 crc kubenswrapper[5050]: I1211 14:12:16.981504 5050 reconciler_common.go:293] "Volume detached for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/55b7e535-46f6-403b-9cdf-bf172dba97b6-config-data-generated\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:16 crc kubenswrapper[5050]: I1211 14:12:16.981515 5050 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/569cb143-086a-42f1-9e8c-6f6f614c9ee2-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:16 crc kubenswrapper[5050]: I1211 14:12:16.981525 5050 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/55b7e535-46f6-403b-9cdf-bf172dba97b6-kolla-config\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:16 crc kubenswrapper[5050]: I1211 14:12:16.981536 5050 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87937f27-2525-4fed-88bb-38a90404860c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:16 crc kubenswrapper[5050]: I1211 14:12:16.981545 5050 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/55b7e535-46f6-403b-9cdf-bf172dba97b6-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:16 crc kubenswrapper[5050]: I1211 14:12:16.981555 5050 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87937f27-2525-4fed-88bb-38a90404860c-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:16 crc kubenswrapper[5050]: I1211 14:12:16.981566 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-956wq\" (UniqueName: \"kubernetes.io/projected/66cb4589-6296-417b-87eb-4bcbff7bf580-kube-api-access-956wq\") on node \"crc\" 
DevicePath \"\"" Dec 11 14:12:16 crc kubenswrapper[5050]: I1211 14:12:16.981575 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-72shl\" (UniqueName: \"kubernetes.io/projected/569cb143-086a-42f1-9e8c-6f6f614c9ee2-kube-api-access-72shl\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:16 crc kubenswrapper[5050]: I1211 14:12:16.981586 5050 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55b7e535-46f6-403b-9cdf-bf172dba97b6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:16 crc kubenswrapper[5050]: I1211 14:12:16.981595 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lrsfm\" (UniqueName: \"kubernetes.io/projected/55b7e535-46f6-403b-9cdf-bf172dba97b6-kube-api-access-lrsfm\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:16 crc kubenswrapper[5050]: I1211 14:12:16.981604 5050 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/569cb143-086a-42f1-9e8c-6f6f614c9ee2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:16 crc kubenswrapper[5050]: I1211 14:12:16.981612 5050 reconciler_common.go:293] "Volume detached for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/55b7e535-46f6-403b-9cdf-bf172dba97b6-config-data-default\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:16 crc kubenswrapper[5050]: I1211 14:12:16.981621 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-glxxg\" (UniqueName: \"kubernetes.io/projected/87937f27-2525-4fed-88bb-38a90404860c-kube-api-access-glxxg\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:16 crc kubenswrapper[5050]: I1211 14:12:16.981646 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e365d825-a3cb-42a3-8a00-8a9be42ed290-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e365d825-a3cb-42a3-8a00-8a9be42ed290" (UID: "e365d825-a3cb-42a3-8a00-8a9be42ed290"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 14:12:16 crc kubenswrapper[5050]: I1211 14:12:16.986266 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d484c84-7333-4701-a4f3-655c3d2cbfa7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1d484c84-7333-4701-a4f3-655c3d2cbfa7" (UID: "1d484c84-7333-4701-a4f3-655c3d2cbfa7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 14:12:16 crc kubenswrapper[5050]: I1211 14:12:16.991959 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2ef32727-7bbd-4a50-8292-4740b34107cc-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "2ef32727-7bbd-4a50-8292-4740b34107cc" (UID: "2ef32727-7bbd-4a50-8292-4740b34107cc"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 14:12:16 crc kubenswrapper[5050]: I1211 14:12:16.992866 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2ef32727-7bbd-4a50-8292-4740b34107cc-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "2ef32727-7bbd-4a50-8292-4740b34107cc" (UID: "2ef32727-7bbd-4a50-8292-4740b34107cc"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 14:12:17 crc kubenswrapper[5050]: I1211 14:12:16.998267 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d484c84-7333-4701-a4f3-655c3d2cbfa7-kube-api-access-hhvqd" (OuterVolumeSpecName: "kube-api-access-hhvqd") pod "1d484c84-7333-4701-a4f3-655c3d2cbfa7" (UID: "1d484c84-7333-4701-a4f3-655c3d2cbfa7"). InnerVolumeSpecName "kube-api-access-hhvqd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 14:12:17 crc kubenswrapper[5050]: I1211 14:12:17.002320 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc8efd61-e4fb-4ec0-834a-b495797039a1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bc8efd61-e4fb-4ec0-834a-b495797039a1" (UID: "bc8efd61-e4fb-4ec0-834a-b495797039a1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 14:12:17 crc kubenswrapper[5050]: E1211 14:12:17.003646 5050 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Dec 11 14:12:17 crc kubenswrapper[5050]: E1211 14:12:17.003802 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d917f471-6630-4e96-a0e4-cbde631da4a8-operator-scripts podName:d917f471-6630-4e96-a0e4-cbde631da4a8 nodeName:}" failed. No retries permitted until 2025-12-11 14:12:21.003748972 +0000 UTC m=+1431.847471748 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/d917f471-6630-4e96-a0e4-cbde631da4a8-operator-scripts") pod "novaapi4326-account-delete-9bmsz" (UID: "d917f471-6630-4e96-a0e4-cbde631da4a8") : configmap "openstack-scripts" not found Dec 11 14:12:17 crc kubenswrapper[5050]: I1211 14:12:17.004367 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ef32727-7bbd-4a50-8292-4740b34107cc-scripts" (OuterVolumeSpecName: "scripts") pod "2ef32727-7bbd-4a50-8292-4740b34107cc" (UID: "2ef32727-7bbd-4a50-8292-4740b34107cc"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:12:17 crc kubenswrapper[5050]: I1211 14:12:17.006542 5050 scope.go:117] "RemoveContainer" containerID="b9c35ee719250fa6dc750f32fbbe1dbad446b365702d9a03226e35b24b159714" Dec 11 14:12:17 crc kubenswrapper[5050]: I1211 14:12:17.016347 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc8efd61-e4fb-4ec0-834a-b495797039a1-kube-api-access-d5rsd" (OuterVolumeSpecName: "kube-api-access-d5rsd") pod "bc8efd61-e4fb-4ec0-834a-b495797039a1" (UID: "bc8efd61-e4fb-4ec0-834a-b495797039a1"). InnerVolumeSpecName "kube-api-access-d5rsd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 14:12:17 crc kubenswrapper[5050]: I1211 14:12:17.019110 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36acbdf3-346e-4207-8391-b2a03ef839e5-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "36acbdf3-346e-4207-8391-b2a03ef839e5" (UID: "36acbdf3-346e-4207-8391-b2a03ef839e5"). InnerVolumeSpecName "credential-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:12:17 crc kubenswrapper[5050]: I1211 14:12:17.040797 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36acbdf3-346e-4207-8391-b2a03ef839e5-scripts" (OuterVolumeSpecName: "scripts") pod "36acbdf3-346e-4207-8391-b2a03ef839e5" (UID: "36acbdf3-346e-4207-8391-b2a03ef839e5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:12:17 crc kubenswrapper[5050]: I1211 14:12:17.044825 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ef32727-7bbd-4a50-8292-4740b34107cc-kube-api-access-4bgb9" (OuterVolumeSpecName: "kube-api-access-4bgb9") pod "2ef32727-7bbd-4a50-8292-4740b34107cc" (UID: "2ef32727-7bbd-4a50-8292-4740b34107cc"). InnerVolumeSpecName "kube-api-access-4bgb9". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 14:12:17 crc kubenswrapper[5050]: I1211 14:12:17.044976 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e365d825-a3cb-42a3-8a00-8a9be42ed290-kube-api-access-nnlzl" (OuterVolumeSpecName: "kube-api-access-nnlzl") pod "e365d825-a3cb-42a3-8a00-8a9be42ed290" (UID: "e365d825-a3cb-42a3-8a00-8a9be42ed290"). InnerVolumeSpecName "kube-api-access-nnlzl". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 14:12:17 crc kubenswrapper[5050]: I1211 14:12:17.045737 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55b7e535-46f6-403b-9cdf-bf172dba97b6-galera-tls-certs" (OuterVolumeSpecName: "galera-tls-certs") pod "55b7e535-46f6-403b-9cdf-bf172dba97b6" (UID: "55b7e535-46f6-403b-9cdf-bf172dba97b6"). InnerVolumeSpecName "galera-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:12:17 crc kubenswrapper[5050]: I1211 14:12:17.047651 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36acbdf3-346e-4207-8391-b2a03ef839e5-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "36acbdf3-346e-4207-8391-b2a03ef839e5" (UID: "36acbdf3-346e-4207-8391-b2a03ef839e5"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:12:17 crc kubenswrapper[5050]: I1211 14:12:17.047901 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36acbdf3-346e-4207-8391-b2a03ef839e5-kube-api-access-6c868" (OuterVolumeSpecName: "kube-api-access-6c868") pod "36acbdf3-346e-4207-8391-b2a03ef839e5" (UID: "36acbdf3-346e-4207-8391-b2a03ef839e5"). InnerVolumeSpecName "kube-api-access-6c868". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 14:12:17 crc kubenswrapper[5050]: I1211 14:12:17.048870 5050 generic.go:334] "Generic (PLEG): container finished" podID="2ef32727-7bbd-4a50-8292-4740b34107cc" containerID="de890e47f39aab39e39d2eca3d15df094ac343a9fec0b6242bea5fd2f19dbaf8" exitCode=0 Dec 11 14:12:17 crc kubenswrapper[5050]: I1211 14:12:17.048949 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2ef32727-7bbd-4a50-8292-4740b34107cc","Type":"ContainerDied","Data":"de890e47f39aab39e39d2eca3d15df094ac343a9fec0b6242bea5fd2f19dbaf8"} Dec 11 14:12:17 crc kubenswrapper[5050]: I1211 14:12:17.048985 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2ef32727-7bbd-4a50-8292-4740b34107cc","Type":"ContainerDied","Data":"7bf26eb4fda28e68ea4407e15ac13b2c882d4b170f7d5683b64dce95795af9b9"} Dec 11 14:12:17 crc kubenswrapper[5050]: I1211 14:12:17.049135 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 11 14:12:17 crc kubenswrapper[5050]: I1211 14:12:17.066521 5050 generic.go:334] "Generic (PLEG): container finished" podID="55b7e535-46f6-403b-9cdf-bf172dba97b6" containerID="867f3427f68bf3282e541c138203522b401306d30e252d2e6f7d84ff42514106" exitCode=0 Dec 11 14:12:17 crc kubenswrapper[5050]: I1211 14:12:17.066688 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"55b7e535-46f6-403b-9cdf-bf172dba97b6","Type":"ContainerDied","Data":"867f3427f68bf3282e541c138203522b401306d30e252d2e6f7d84ff42514106"} Dec 11 14:12:17 crc kubenswrapper[5050]: I1211 14:12:17.066730 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"55b7e535-46f6-403b-9cdf-bf172dba97b6","Type":"ContainerDied","Data":"40067b474165e1cb78f1321cfd27b60fed349c9f2208ca7671993f05feb28cf8"} Dec 11 14:12:17 crc kubenswrapper[5050]: I1211 14:12:17.066882 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Dec 11 14:12:17 crc kubenswrapper[5050]: I1211 14:12:17.082702 5050 generic.go:334] "Generic (PLEG): container finished" podID="e365d825-a3cb-42a3-8a00-8a9be42ed290" containerID="71ab99722ee6cf9688769ef5f2db696be7f9ef481521ac4426c154a7f02debc7" exitCode=1 Dec 11 14:12:17 crc kubenswrapper[5050]: I1211 14:12:17.082844 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder27b7-account-delete-h9wjn" event={"ID":"e365d825-a3cb-42a3-8a00-8a9be42ed290","Type":"ContainerDied","Data":"71ab99722ee6cf9688769ef5f2db696be7f9ef481521ac4426c154a7f02debc7"} Dec 11 14:12:17 crc kubenswrapper[5050]: I1211 14:12:17.082884 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder27b7-account-delete-h9wjn" event={"ID":"e365d825-a3cb-42a3-8a00-8a9be42ed290","Type":"ContainerDied","Data":"85159ba9b1716a8a69e6686b8adf1ee117c8e387a73c07fba65098f82adf2cd6"} Dec 11 14:12:17 crc kubenswrapper[5050]: I1211 14:12:17.083004 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder27b7-account-delete-h9wjn" Dec 11 14:12:17 crc kubenswrapper[5050]: I1211 14:12:17.092210 5050 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2ef32727-7bbd-4a50-8292-4740b34107cc-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:17 crc kubenswrapper[5050]: I1211 14:12:17.092250 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6c868\" (UniqueName: \"kubernetes.io/projected/36acbdf3-346e-4207-8391-b2a03ef839e5-kube-api-access-6c868\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:17 crc kubenswrapper[5050]: I1211 14:12:17.092267 5050 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bc8efd61-e4fb-4ec0-834a-b495797039a1-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:17 crc kubenswrapper[5050]: I1211 14:12:17.092281 5050 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e365d825-a3cb-42a3-8a00-8a9be42ed290-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:17 crc kubenswrapper[5050]: I1211 14:12:17.092292 5050 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/36acbdf3-346e-4207-8391-b2a03ef839e5-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:17 crc kubenswrapper[5050]: I1211 14:12:17.092302 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nnlzl\" (UniqueName: \"kubernetes.io/projected/e365d825-a3cb-42a3-8a00-8a9be42ed290-kube-api-access-nnlzl\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:17 crc kubenswrapper[5050]: I1211 14:12:17.092315 5050 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/36acbdf3-346e-4207-8391-b2a03ef839e5-fernet-keys\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:17 crc kubenswrapper[5050]: I1211 14:12:17.092324 5050 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/36acbdf3-346e-4207-8391-b2a03ef839e5-credential-keys\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:17 crc kubenswrapper[5050]: I1211 14:12:17.092334 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hhvqd\" (UniqueName: \"kubernetes.io/projected/1d484c84-7333-4701-a4f3-655c3d2cbfa7-kube-api-access-hhvqd\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:17 crc kubenswrapper[5050]: I1211 14:12:17.092348 5050 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1d484c84-7333-4701-a4f3-655c3d2cbfa7-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:17 crc kubenswrapper[5050]: I1211 14:12:17.092358 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d5rsd\" (UniqueName: \"kubernetes.io/projected/bc8efd61-e4fb-4ec0-834a-b495797039a1-kube-api-access-d5rsd\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:17 crc kubenswrapper[5050]: I1211 14:12:17.092368 5050 reconciler_common.go:293] "Volume detached for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/55b7e535-46f6-403b-9cdf-bf172dba97b6-galera-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:17 crc kubenswrapper[5050]: I1211 14:12:17.092377 5050 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ef32727-7bbd-4a50-8292-4740b34107cc-scripts\") on node \"crc\" 
DevicePath \"\"" Dec 11 14:12:17 crc kubenswrapper[5050]: I1211 14:12:17.092389 5050 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2ef32727-7bbd-4a50-8292-4740b34107cc-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:17 crc kubenswrapper[5050]: I1211 14:12:17.092398 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4bgb9\" (UniqueName: \"kubernetes.io/projected/2ef32727-7bbd-4a50-8292-4740b34107cc-kube-api-access-4bgb9\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:17 crc kubenswrapper[5050]: I1211 14:12:17.120273 5050 scope.go:117] "RemoveContainer" containerID="91aedb14b57d8c859204563249993412033e4ca1945b5f8067581ca33e70d57a" Dec 11 14:12:17 crc kubenswrapper[5050]: I1211 14:12:17.141495 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/569cb143-086a-42f1-9e8c-6f6f614c9ee2-config-data" (OuterVolumeSpecName: "config-data") pod "569cb143-086a-42f1-9e8c-6f6f614c9ee2" (UID: "569cb143-086a-42f1-9e8c-6f6f614c9ee2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:12:17 crc kubenswrapper[5050]: I1211 14:12:17.154647 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36acbdf3-346e-4207-8391-b2a03ef839e5-config-data" (OuterVolumeSpecName: "config-data") pod "36acbdf3-346e-4207-8391-b2a03ef839e5" (UID: "36acbdf3-346e-4207-8391-b2a03ef839e5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:12:17 crc kubenswrapper[5050]: I1211 14:12:17.156559 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-54bb9c4d69-975sg" Dec 11 14:12:17 crc kubenswrapper[5050]: I1211 14:12:17.156746 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 11 14:12:17 crc kubenswrapper[5050]: I1211 14:12:17.156807 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"87937f27-2525-4fed-88bb-38a90404860c","Type":"ContainerDied","Data":"12f511cd7b7e0497850929955d2ffdd39e255588a6ae04bfa636080187e6b832"} Dec 11 14:12:17 crc kubenswrapper[5050]: I1211 14:12:17.156869 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Dec 11 14:12:17 crc kubenswrapper[5050]: I1211 14:12:17.166561 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36acbdf3-346e-4207-8391-b2a03ef839e5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "36acbdf3-346e-4207-8391-b2a03ef839e5" (UID: "36acbdf3-346e-4207-8391-b2a03ef839e5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:12:17 crc kubenswrapper[5050]: I1211 14:12:17.168781 5050 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Dec 11 14:12:17 crc kubenswrapper[5050]: I1211 14:12:17.183417 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36acbdf3-346e-4207-8391-b2a03ef839e5-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "36acbdf3-346e-4207-8391-b2a03ef839e5" (UID: "36acbdf3-346e-4207-8391-b2a03ef839e5"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:12:17 crc kubenswrapper[5050]: I1211 14:12:17.183540 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican34ac-account-delete-cd525"] Dec 11 14:12:17 crc kubenswrapper[5050]: I1211 14:12:17.185571 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ef32727-7bbd-4a50-8292-4740b34107cc-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "2ef32727-7bbd-4a50-8292-4740b34107cc" (UID: "2ef32727-7bbd-4a50-8292-4740b34107cc"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:12:17 crc kubenswrapper[5050]: I1211 14:12:17.187711 5050 scope.go:117] "RemoveContainer" containerID="75e75d11fd4bc8154957dc778a581689e55af1797f4b8a6f72486493adf59c2a" Dec 11 14:12:17 crc kubenswrapper[5050]: I1211 14:12:17.192421 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican34ac-account-delete-cd525"] Dec 11 14:12:17 crc kubenswrapper[5050]: I1211 14:12:17.193640 5050 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36acbdf3-346e-4207-8391-b2a03ef839e5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:17 crc kubenswrapper[5050]: I1211 14:12:17.193660 5050 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2ef32727-7bbd-4a50-8292-4740b34107cc-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:17 crc kubenswrapper[5050]: I1211 14:12:17.193670 5050 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/569cb143-086a-42f1-9e8c-6f6f614c9ee2-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:17 crc kubenswrapper[5050]: I1211 14:12:17.193684 5050 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36acbdf3-346e-4207-8391-b2a03ef839e5-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:17 crc kubenswrapper[5050]: I1211 14:12:17.193693 5050 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/36acbdf3-346e-4207-8391-b2a03ef839e5-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:17 crc kubenswrapper[5050]: I1211 14:12:17.193702 5050 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:17 crc kubenswrapper[5050]: I1211 14:12:17.194205 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36acbdf3-346e-4207-8391-b2a03ef839e5-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "36acbdf3-346e-4207-8391-b2a03ef839e5" (UID: "36acbdf3-346e-4207-8391-b2a03ef839e5"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:12:17 crc kubenswrapper[5050]: I1211 14:12:17.194710 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ef32727-7bbd-4a50-8292-4740b34107cc-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "2ef32727-7bbd-4a50-8292-4740b34107cc" (UID: "2ef32727-7bbd-4a50-8292-4740b34107cc"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:12:17 crc kubenswrapper[5050]: I1211 14:12:17.224800 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ef32727-7bbd-4a50-8292-4740b34107cc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2ef32727-7bbd-4a50-8292-4740b34107cc" (UID: "2ef32727-7bbd-4a50-8292-4740b34107cc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:12:17 crc kubenswrapper[5050]: I1211 14:12:17.228138 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder27b7-account-delete-h9wjn"] Dec 11 14:12:17 crc kubenswrapper[5050]: I1211 14:12:17.241466 5050 scope.go:117] "RemoveContainer" containerID="91aedb14b57d8c859204563249993412033e4ca1945b5f8067581ca33e70d57a" Dec 11 14:12:17 crc kubenswrapper[5050]: E1211 14:12:17.242397 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"91aedb14b57d8c859204563249993412033e4ca1945b5f8067581ca33e70d57a\": container with ID starting with 91aedb14b57d8c859204563249993412033e4ca1945b5f8067581ca33e70d57a not found: ID does not exist" containerID="91aedb14b57d8c859204563249993412033e4ca1945b5f8067581ca33e70d57a" Dec 11 14:12:17 crc kubenswrapper[5050]: I1211 14:12:17.242451 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"91aedb14b57d8c859204563249993412033e4ca1945b5f8067581ca33e70d57a"} err="failed to get container status \"91aedb14b57d8c859204563249993412033e4ca1945b5f8067581ca33e70d57a\": rpc error: code = NotFound desc = could not find container \"91aedb14b57d8c859204563249993412033e4ca1945b5f8067581ca33e70d57a\": container with ID starting with 91aedb14b57d8c859204563249993412033e4ca1945b5f8067581ca33e70d57a not found: ID does not exist" Dec 11 14:12:17 crc kubenswrapper[5050]: I1211 14:12:17.242486 5050 scope.go:117] "RemoveContainer" containerID="75e75d11fd4bc8154957dc778a581689e55af1797f4b8a6f72486493adf59c2a" Dec 11 14:12:17 crc kubenswrapper[5050]: I1211 14:12:17.242642 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder27b7-account-delete-h9wjn"] Dec 11 14:12:17 crc kubenswrapper[5050]: E1211 14:12:17.244964 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"75e75d11fd4bc8154957dc778a581689e55af1797f4b8a6f72486493adf59c2a\": container with ID starting with 75e75d11fd4bc8154957dc778a581689e55af1797f4b8a6f72486493adf59c2a not found: ID does not exist" containerID="75e75d11fd4bc8154957dc778a581689e55af1797f4b8a6f72486493adf59c2a" Dec 11 14:12:17 crc kubenswrapper[5050]: I1211 14:12:17.245006 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"75e75d11fd4bc8154957dc778a581689e55af1797f4b8a6f72486493adf59c2a"} err="failed to get container status \"75e75d11fd4bc8154957dc778a581689e55af1797f4b8a6f72486493adf59c2a\": rpc error: code = NotFound desc = could not find container \"75e75d11fd4bc8154957dc778a581689e55af1797f4b8a6f72486493adf59c2a\": container with ID starting with 75e75d11fd4bc8154957dc778a581689e55af1797f4b8a6f72486493adf59c2a not found: ID does not exist" Dec 11 14:12:17 crc kubenswrapper[5050]: I1211 14:12:17.245081 5050 scope.go:117] "RemoveContainer" containerID="5c8175cc977b1ffa644105126aaa9c04e5ea135e96b268ad6d76b8186e59e7ee" Dec 11 14:12:17 crc kubenswrapper[5050]: I1211 14:12:17.255743 
5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Dec 11 14:12:17 crc kubenswrapper[5050]: I1211 14:12:17.256641 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Dec 11 14:12:17 crc kubenswrapper[5050]: I1211 14:12:17.257559 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ef32727-7bbd-4a50-8292-4740b34107cc-config-data" (OuterVolumeSpecName: "config-data") pod "2ef32727-7bbd-4a50-8292-4740b34107cc" (UID: "2ef32727-7bbd-4a50-8292-4740b34107cc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:12:17 crc kubenswrapper[5050]: I1211 14:12:17.292880 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-galera-0"] Dec 11 14:12:17 crc kubenswrapper[5050]: I1211 14:12:17.302988 5050 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ef32727-7bbd-4a50-8292-4740b34107cc-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:17 crc kubenswrapper[5050]: I1211 14:12:17.303060 5050 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ef32727-7bbd-4a50-8292-4740b34107cc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:17 crc kubenswrapper[5050]: I1211 14:12:17.303071 5050 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ef32727-7bbd-4a50-8292-4740b34107cc-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:17 crc kubenswrapper[5050]: I1211 14:12:17.303082 5050 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/36acbdf3-346e-4207-8391-b2a03ef839e5-public-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:17 crc kubenswrapper[5050]: I1211 14:12:17.305659 5050 scope.go:117] "RemoveContainer" containerID="820172aaadd0f74b40c79e4bb0bcfc72118d1d7fa7c79271701f0e5c49f8af53" Dec 11 14:12:17 crc kubenswrapper[5050]: I1211 14:12:17.307544 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstack-galera-0"] Dec 11 14:12:17 crc kubenswrapper[5050]: I1211 14:12:17.334217 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/memcached-0"] Dec 11 14:12:17 crc kubenswrapper[5050]: I1211 14:12:17.355504 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/memcached-0"] Dec 11 14:12:17 crc kubenswrapper[5050]: I1211 14:12:17.361517 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/novaapi4326-account-delete-9bmsz" Dec 11 14:12:17 crc kubenswrapper[5050]: I1211 14:12:17.363054 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-54bb9c4d69-975sg"] Dec 11 14:12:17 crc kubenswrapper[5050]: I1211 14:12:17.380704 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-worker-54bb9c4d69-975sg"] Dec 11 14:12:17 crc kubenswrapper[5050]: I1211 14:12:17.403637 5050 scope.go:117] "RemoveContainer" containerID="da09975c6998c658bb6956da61c91b298081b08244ab21dcf8062177373472dc" Dec 11 14:12:17 crc kubenswrapper[5050]: I1211 14:12:17.403947 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d917f471-6630-4e96-a0e4-cbde631da4a8-operator-scripts\") pod \"d917f471-6630-4e96-a0e4-cbde631da4a8\" (UID: \"d917f471-6630-4e96-a0e4-cbde631da4a8\") " Dec 11 14:12:17 crc kubenswrapper[5050]: I1211 14:12:17.404112 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tzt76\" (UniqueName: \"kubernetes.io/projected/d917f471-6630-4e96-a0e4-cbde631da4a8-kube-api-access-tzt76\") pod \"d917f471-6630-4e96-a0e4-cbde631da4a8\" (UID: \"d917f471-6630-4e96-a0e4-cbde631da4a8\") " Dec 11 14:12:17 crc kubenswrapper[5050]: I1211 14:12:17.405030 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placementd2f2-account-delete-njqwh"] Dec 11 14:12:17 crc kubenswrapper[5050]: I1211 14:12:17.405134 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d917f471-6630-4e96-a0e4-cbde631da4a8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d917f471-6630-4e96-a0e4-cbde631da4a8" (UID: "d917f471-6630-4e96-a0e4-cbde631da4a8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 14:12:17 crc kubenswrapper[5050]: I1211 14:12:17.413483 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placementd2f2-account-delete-njqwh"] Dec 11 14:12:17 crc kubenswrapper[5050]: I1211 14:12:17.423285 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d917f471-6630-4e96-a0e4-cbde631da4a8-kube-api-access-tzt76" (OuterVolumeSpecName: "kube-api-access-tzt76") pod "d917f471-6630-4e96-a0e4-cbde631da4a8" (UID: "d917f471-6630-4e96-a0e4-cbde631da4a8"). InnerVolumeSpecName "kube-api-access-tzt76". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 14:12:17 crc kubenswrapper[5050]: I1211 14:12:17.423786 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-655647566b-n2tcs"] Dec 11 14:12:17 crc kubenswrapper[5050]: I1211 14:12:17.431116 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-keystone-listener-655647566b-n2tcs"] Dec 11 14:12:17 crc kubenswrapper[5050]: I1211 14:12:17.439447 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance2653-account-delete-rgdnl"] Dec 11 14:12:17 crc kubenswrapper[5050]: I1211 14:12:17.445250 5050 scope.go:117] "RemoveContainer" containerID="aa73f68df593bc0e997d1c636dc2b25763537ada8a021dc55f2b2753a9b161de" Dec 11 14:12:17 crc kubenswrapper[5050]: I1211 14:12:17.447500 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance2653-account-delete-rgdnl"] Dec 11 14:12:17 crc kubenswrapper[5050]: I1211 14:12:17.465382 5050 scope.go:117] "RemoveContainer" containerID="da09975c6998c658bb6956da61c91b298081b08244ab21dcf8062177373472dc" Dec 11 14:12:17 crc kubenswrapper[5050]: E1211 14:12:17.465866 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"da09975c6998c658bb6956da61c91b298081b08244ab21dcf8062177373472dc\": container with ID starting with da09975c6998c658bb6956da61c91b298081b08244ab21dcf8062177373472dc not found: ID does not exist" containerID="da09975c6998c658bb6956da61c91b298081b08244ab21dcf8062177373472dc" Dec 11 14:12:17 crc kubenswrapper[5050]: I1211 14:12:17.465927 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da09975c6998c658bb6956da61c91b298081b08244ab21dcf8062177373472dc"} err="failed to get container status \"da09975c6998c658bb6956da61c91b298081b08244ab21dcf8062177373472dc\": rpc error: code = NotFound desc = could not find container \"da09975c6998c658bb6956da61c91b298081b08244ab21dcf8062177373472dc\": container with ID starting with da09975c6998c658bb6956da61c91b298081b08244ab21dcf8062177373472dc not found: ID does not exist" Dec 11 14:12:17 crc kubenswrapper[5050]: I1211 14:12:17.465965 5050 scope.go:117] "RemoveContainer" containerID="aa73f68df593bc0e997d1c636dc2b25763537ada8a021dc55f2b2753a9b161de" Dec 11 14:12:17 crc kubenswrapper[5050]: I1211 14:12:17.466226 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 11 14:12:17 crc kubenswrapper[5050]: E1211 14:12:17.466393 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aa73f68df593bc0e997d1c636dc2b25763537ada8a021dc55f2b2753a9b161de\": container with ID starting with aa73f68df593bc0e997d1c636dc2b25763537ada8a021dc55f2b2753a9b161de not found: ID does not exist" containerID="aa73f68df593bc0e997d1c636dc2b25763537ada8a021dc55f2b2753a9b161de" Dec 11 14:12:17 crc kubenswrapper[5050]: I1211 14:12:17.466495 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa73f68df593bc0e997d1c636dc2b25763537ada8a021dc55f2b2753a9b161de"} err="failed to get container status \"aa73f68df593bc0e997d1c636dc2b25763537ada8a021dc55f2b2753a9b161de\": rpc error: code = NotFound desc = could not find container \"aa73f68df593bc0e997d1c636dc2b25763537ada8a021dc55f2b2753a9b161de\": container with ID starting with aa73f68df593bc0e997d1c636dc2b25763537ada8a021dc55f2b2753a9b161de not found: ID does not exist" Dec 
11 14:12:17 crc kubenswrapper[5050]: I1211 14:12:17.466600 5050 scope.go:117] "RemoveContainer" containerID="2df25eb8394651a0a8bc136ef8c2753058167b5dbecc7a2a0c134fede2448a38" Dec 11 14:12:17 crc kubenswrapper[5050]: I1211 14:12:17.473877 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 11 14:12:17 crc kubenswrapper[5050]: I1211 14:12:17.510866 5050 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d917f471-6630-4e96-a0e4-cbde631da4a8-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:17 crc kubenswrapper[5050]: I1211 14:12:17.510904 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tzt76\" (UniqueName: \"kubernetes.io/projected/d917f471-6630-4e96-a0e4-cbde631da4a8-kube-api-access-tzt76\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:17 crc kubenswrapper[5050]: I1211 14:12:17.564104 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0891f075-8101-475b-b844-e7cb42a4990b" path="/var/lib/kubelet/pods/0891f075-8101-475b-b844-e7cb42a4990b/volumes" Dec 11 14:12:17 crc kubenswrapper[5050]: I1211 14:12:17.571564 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d484c84-7333-4701-a4f3-655c3d2cbfa7" path="/var/lib/kubelet/pods/1d484c84-7333-4701-a4f3-655c3d2cbfa7/volumes" Dec 11 14:12:17 crc kubenswrapper[5050]: I1211 14:12:17.572178 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1eb418aa-1d3c-469c-8ff4-2b3c86a71e97" path="/var/lib/kubelet/pods/1eb418aa-1d3c-469c-8ff4-2b3c86a71e97/volumes" Dec 11 14:12:17 crc kubenswrapper[5050]: I1211 14:12:17.572805 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ef32727-7bbd-4a50-8292-4740b34107cc" path="/var/lib/kubelet/pods/2ef32727-7bbd-4a50-8292-4740b34107cc/volumes" Dec 11 14:12:17 crc kubenswrapper[5050]: I1211 14:12:17.576693 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/novacell0e20d-account-delete-ntvhn" Dec 11 14:12:17 crc kubenswrapper[5050]: I1211 14:12:17.579056 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3386ffea-45ca-41e8-9aa5-61a2923a3394" path="/var/lib/kubelet/pods/3386ffea-45ca-41e8-9aa5-61a2923a3394/volumes" Dec 11 14:12:17 crc kubenswrapper[5050]: I1211 14:12:17.579860 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ad4414f-ca3e-4ff4-9e2a-3ab029df2ebf" path="/var/lib/kubelet/pods/4ad4414f-ca3e-4ff4-9e2a-3ab029df2ebf/volumes" Dec 11 14:12:17 crc kubenswrapper[5050]: I1211 14:12:17.584316 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4affe74d-e417-48c1-9c71-7cca7d0729db" path="/var/lib/kubelet/pods/4affe74d-e417-48c1-9c71-7cca7d0729db/volumes" Dec 11 14:12:17 crc kubenswrapper[5050]: I1211 14:12:17.585399 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="55b7e535-46f6-403b-9cdf-bf172dba97b6" path="/var/lib/kubelet/pods/55b7e535-46f6-403b-9cdf-bf172dba97b6/volumes" Dec 11 14:12:17 crc kubenswrapper[5050]: I1211 14:12:17.586347 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="569cb143-086a-42f1-9e8c-6f6f614c9ee2" path="/var/lib/kubelet/pods/569cb143-086a-42f1-9e8c-6f6f614c9ee2/volumes" Dec 11 14:12:17 crc kubenswrapper[5050]: I1211 14:12:17.588120 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="66cb4589-6296-417b-87eb-4bcbff7bf580" path="/var/lib/kubelet/pods/66cb4589-6296-417b-87eb-4bcbff7bf580/volumes" Dec 11 14:12:17 crc kubenswrapper[5050]: I1211 14:12:17.588863 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6817c570-f6ff-4b08-825a-027a9c8630b0" path="/var/lib/kubelet/pods/6817c570-f6ff-4b08-825a-027a9c8630b0/volumes" Dec 11 14:12:17 crc kubenswrapper[5050]: I1211 14:12:17.589543 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87937f27-2525-4fed-88bb-38a90404860c" path="/var/lib/kubelet/pods/87937f27-2525-4fed-88bb-38a90404860c/volumes" Dec 11 14:12:17 crc kubenswrapper[5050]: I1211 14:12:17.591894 5050 scope.go:117] "RemoveContainer" containerID="8295f391211fffee759bcdccec5b566c3a41abb5db15ca1a987be72e3711b3cf" Dec 11 14:12:17 crc kubenswrapper[5050]: I1211 14:12:17.592394 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="934cae9e-c75b-434d-b1e1-d566d6fb8b7d" path="/var/lib/kubelet/pods/934cae9e-c75b-434d-b1e1-d566d6fb8b7d/volumes" Dec 11 14:12:17 crc kubenswrapper[5050]: I1211 14:12:17.593594 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc8efd61-e4fb-4ec0-834a-b495797039a1" path="/var/lib/kubelet/pods/bc8efd61-e4fb-4ec0-834a-b495797039a1/volumes" Dec 11 14:12:17 crc kubenswrapper[5050]: I1211 14:12:17.598926 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="defedffb-9310-4b18-b7ee-b54040aa5447" path="/var/lib/kubelet/pods/defedffb-9310-4b18-b7ee-b54040aa5447/volumes" Dec 11 14:12:17 crc kubenswrapper[5050]: I1211 14:12:17.600407 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e365d825-a3cb-42a3-8a00-8a9be42ed290" path="/var/lib/kubelet/pods/e365d825-a3cb-42a3-8a00-8a9be42ed290/volumes" Dec 11 14:12:17 crc kubenswrapper[5050]: I1211 14:12:17.601328 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ea5ac39a-6fb6-42bb-8ffd-e0036e93a1d7" path="/var/lib/kubelet/pods/ea5ac39a-6fb6-42bb-8ffd-e0036e93a1d7/volumes" Dec 11 14:12:17 crc kubenswrapper[5050]: I1211 
14:12:17.601999 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee81095a-fe79-47c7-aa3e-e1768a655b86" path="/var/lib/kubelet/pods/ee81095a-fe79-47c7-aa3e-e1768a655b86/volumes" Dec 11 14:12:17 crc kubenswrapper[5050]: I1211 14:12:17.636032 5050 scope.go:117] "RemoveContainer" containerID="de890e47f39aab39e39d2eca3d15df094ac343a9fec0b6242bea5fd2f19dbaf8" Dec 11 14:12:17 crc kubenswrapper[5050]: I1211 14:12:17.661255 5050 scope.go:117] "RemoveContainer" containerID="f6d56512f1f5665c8ed3d4a8969144001b8fe4f5868e5cfd74524cc0c96c62f3" Dec 11 14:12:17 crc kubenswrapper[5050]: I1211 14:12:17.682836 5050 scope.go:117] "RemoveContainer" containerID="2df25eb8394651a0a8bc136ef8c2753058167b5dbecc7a2a0c134fede2448a38" Dec 11 14:12:17 crc kubenswrapper[5050]: E1211 14:12:17.683549 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2df25eb8394651a0a8bc136ef8c2753058167b5dbecc7a2a0c134fede2448a38\": container with ID starting with 2df25eb8394651a0a8bc136ef8c2753058167b5dbecc7a2a0c134fede2448a38 not found: ID does not exist" containerID="2df25eb8394651a0a8bc136ef8c2753058167b5dbecc7a2a0c134fede2448a38" Dec 11 14:12:17 crc kubenswrapper[5050]: I1211 14:12:17.683711 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2df25eb8394651a0a8bc136ef8c2753058167b5dbecc7a2a0c134fede2448a38"} err="failed to get container status \"2df25eb8394651a0a8bc136ef8c2753058167b5dbecc7a2a0c134fede2448a38\": rpc error: code = NotFound desc = could not find container \"2df25eb8394651a0a8bc136ef8c2753058167b5dbecc7a2a0c134fede2448a38\": container with ID starting with 2df25eb8394651a0a8bc136ef8c2753058167b5dbecc7a2a0c134fede2448a38 not found: ID does not exist" Dec 11 14:12:17 crc kubenswrapper[5050]: I1211 14:12:17.683929 5050 scope.go:117] "RemoveContainer" containerID="8295f391211fffee759bcdccec5b566c3a41abb5db15ca1a987be72e3711b3cf" Dec 11 14:12:17 crc kubenswrapper[5050]: E1211 14:12:17.684555 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8295f391211fffee759bcdccec5b566c3a41abb5db15ca1a987be72e3711b3cf\": container with ID starting with 8295f391211fffee759bcdccec5b566c3a41abb5db15ca1a987be72e3711b3cf not found: ID does not exist" containerID="8295f391211fffee759bcdccec5b566c3a41abb5db15ca1a987be72e3711b3cf" Dec 11 14:12:17 crc kubenswrapper[5050]: I1211 14:12:17.684594 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8295f391211fffee759bcdccec5b566c3a41abb5db15ca1a987be72e3711b3cf"} err="failed to get container status \"8295f391211fffee759bcdccec5b566c3a41abb5db15ca1a987be72e3711b3cf\": rpc error: code = NotFound desc = could not find container \"8295f391211fffee759bcdccec5b566c3a41abb5db15ca1a987be72e3711b3cf\": container with ID starting with 8295f391211fffee759bcdccec5b566c3a41abb5db15ca1a987be72e3711b3cf not found: ID does not exist" Dec 11 14:12:17 crc kubenswrapper[5050]: I1211 14:12:17.684623 5050 scope.go:117] "RemoveContainer" containerID="de890e47f39aab39e39d2eca3d15df094ac343a9fec0b6242bea5fd2f19dbaf8" Dec 11 14:12:17 crc kubenswrapper[5050]: E1211 14:12:17.684916 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de890e47f39aab39e39d2eca3d15df094ac343a9fec0b6242bea5fd2f19dbaf8\": container with ID starting with 
de890e47f39aab39e39d2eca3d15df094ac343a9fec0b6242bea5fd2f19dbaf8 not found: ID does not exist" containerID="de890e47f39aab39e39d2eca3d15df094ac343a9fec0b6242bea5fd2f19dbaf8" Dec 11 14:12:17 crc kubenswrapper[5050]: I1211 14:12:17.684946 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de890e47f39aab39e39d2eca3d15df094ac343a9fec0b6242bea5fd2f19dbaf8"} err="failed to get container status \"de890e47f39aab39e39d2eca3d15df094ac343a9fec0b6242bea5fd2f19dbaf8\": rpc error: code = NotFound desc = could not find container \"de890e47f39aab39e39d2eca3d15df094ac343a9fec0b6242bea5fd2f19dbaf8\": container with ID starting with de890e47f39aab39e39d2eca3d15df094ac343a9fec0b6242bea5fd2f19dbaf8 not found: ID does not exist" Dec 11 14:12:17 crc kubenswrapper[5050]: I1211 14:12:17.684965 5050 scope.go:117] "RemoveContainer" containerID="f6d56512f1f5665c8ed3d4a8969144001b8fe4f5868e5cfd74524cc0c96c62f3" Dec 11 14:12:17 crc kubenswrapper[5050]: E1211 14:12:17.685875 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f6d56512f1f5665c8ed3d4a8969144001b8fe4f5868e5cfd74524cc0c96c62f3\": container with ID starting with f6d56512f1f5665c8ed3d4a8969144001b8fe4f5868e5cfd74524cc0c96c62f3 not found: ID does not exist" containerID="f6d56512f1f5665c8ed3d4a8969144001b8fe4f5868e5cfd74524cc0c96c62f3" Dec 11 14:12:17 crc kubenswrapper[5050]: I1211 14:12:17.686002 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f6d56512f1f5665c8ed3d4a8969144001b8fe4f5868e5cfd74524cc0c96c62f3"} err="failed to get container status \"f6d56512f1f5665c8ed3d4a8969144001b8fe4f5868e5cfd74524cc0c96c62f3\": rpc error: code = NotFound desc = could not find container \"f6d56512f1f5665c8ed3d4a8969144001b8fe4f5868e5cfd74524cc0c96c62f3\": container with ID starting with f6d56512f1f5665c8ed3d4a8969144001b8fe4f5868e5cfd74524cc0c96c62f3 not found: ID does not exist" Dec 11 14:12:17 crc kubenswrapper[5050]: I1211 14:12:17.686150 5050 scope.go:117] "RemoveContainer" containerID="867f3427f68bf3282e541c138203522b401306d30e252d2e6f7d84ff42514106" Dec 11 14:12:17 crc kubenswrapper[5050]: I1211 14:12:17.713706 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dcmzh\" (UniqueName: \"kubernetes.io/projected/ca005c2d-f7de-486a-bbd6-a32443582833-kube-api-access-dcmzh\") pod \"ca005c2d-f7de-486a-bbd6-a32443582833\" (UID: \"ca005c2d-f7de-486a-bbd6-a32443582833\") " Dec 11 14:12:17 crc kubenswrapper[5050]: I1211 14:12:17.714086 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ca005c2d-f7de-486a-bbd6-a32443582833-operator-scripts\") pod \"ca005c2d-f7de-486a-bbd6-a32443582833\" (UID: \"ca005c2d-f7de-486a-bbd6-a32443582833\") " Dec 11 14:12:17 crc kubenswrapper[5050]: I1211 14:12:17.714569 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ca005c2d-f7de-486a-bbd6-a32443582833-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ca005c2d-f7de-486a-bbd6-a32443582833" (UID: "ca005c2d-f7de-486a-bbd6-a32443582833"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 14:12:17 crc kubenswrapper[5050]: I1211 14:12:17.714835 5050 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ca005c2d-f7de-486a-bbd6-a32443582833-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:17 crc kubenswrapper[5050]: I1211 14:12:17.719278 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca005c2d-f7de-486a-bbd6-a32443582833-kube-api-access-dcmzh" (OuterVolumeSpecName: "kube-api-access-dcmzh") pod "ca005c2d-f7de-486a-bbd6-a32443582833" (UID: "ca005c2d-f7de-486a-bbd6-a32443582833"). InnerVolumeSpecName "kube-api-access-dcmzh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 14:12:17 crc kubenswrapper[5050]: I1211 14:12:17.722166 5050 scope.go:117] "RemoveContainer" containerID="48b2599eb07fbfc258247376195b9fee5c9054df813653e7696ec82ef8a7ca4b" Dec 11 14:12:17 crc kubenswrapper[5050]: I1211 14:12:17.768203 5050 scope.go:117] "RemoveContainer" containerID="867f3427f68bf3282e541c138203522b401306d30e252d2e6f7d84ff42514106" Dec 11 14:12:17 crc kubenswrapper[5050]: E1211 14:12:17.769058 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"867f3427f68bf3282e541c138203522b401306d30e252d2e6f7d84ff42514106\": container with ID starting with 867f3427f68bf3282e541c138203522b401306d30e252d2e6f7d84ff42514106 not found: ID does not exist" containerID="867f3427f68bf3282e541c138203522b401306d30e252d2e6f7d84ff42514106" Dec 11 14:12:17 crc kubenswrapper[5050]: I1211 14:12:17.769101 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"867f3427f68bf3282e541c138203522b401306d30e252d2e6f7d84ff42514106"} err="failed to get container status \"867f3427f68bf3282e541c138203522b401306d30e252d2e6f7d84ff42514106\": rpc error: code = NotFound desc = could not find container \"867f3427f68bf3282e541c138203522b401306d30e252d2e6f7d84ff42514106\": container with ID starting with 867f3427f68bf3282e541c138203522b401306d30e252d2e6f7d84ff42514106 not found: ID does not exist" Dec 11 14:12:17 crc kubenswrapper[5050]: I1211 14:12:17.769133 5050 scope.go:117] "RemoveContainer" containerID="48b2599eb07fbfc258247376195b9fee5c9054df813653e7696ec82ef8a7ca4b" Dec 11 14:12:17 crc kubenswrapper[5050]: E1211 14:12:17.769494 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"48b2599eb07fbfc258247376195b9fee5c9054df813653e7696ec82ef8a7ca4b\": container with ID starting with 48b2599eb07fbfc258247376195b9fee5c9054df813653e7696ec82ef8a7ca4b not found: ID does not exist" containerID="48b2599eb07fbfc258247376195b9fee5c9054df813653e7696ec82ef8a7ca4b" Dec 11 14:12:17 crc kubenswrapper[5050]: I1211 14:12:17.769512 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48b2599eb07fbfc258247376195b9fee5c9054df813653e7696ec82ef8a7ca4b"} err="failed to get container status \"48b2599eb07fbfc258247376195b9fee5c9054df813653e7696ec82ef8a7ca4b\": rpc error: code = NotFound desc = could not find container \"48b2599eb07fbfc258247376195b9fee5c9054df813653e7696ec82ef8a7ca4b\": container with ID starting with 48b2599eb07fbfc258247376195b9fee5c9054df813653e7696ec82ef8a7ca4b not found: ID does not exist" Dec 11 14:12:17 crc kubenswrapper[5050]: I1211 14:12:17.769531 5050 scope.go:117] "RemoveContainer" 
containerID="71ab99722ee6cf9688769ef5f2db696be7f9ef481521ac4426c154a7f02debc7" Dec 11 14:12:17 crc kubenswrapper[5050]: I1211 14:12:17.793602 5050 scope.go:117] "RemoveContainer" containerID="38bea9b8fa28a96b68c698d0a2cad56e8bf2ac240b8c07d2c11031d667e89fc2" Dec 11 14:12:17 crc kubenswrapper[5050]: I1211 14:12:17.816193 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dcmzh\" (UniqueName: \"kubernetes.io/projected/ca005c2d-f7de-486a-bbd6-a32443582833-kube-api-access-dcmzh\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:17 crc kubenswrapper[5050]: I1211 14:12:17.831131 5050 scope.go:117] "RemoveContainer" containerID="71ab99722ee6cf9688769ef5f2db696be7f9ef481521ac4426c154a7f02debc7" Dec 11 14:12:17 crc kubenswrapper[5050]: E1211 14:12:17.831924 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"71ab99722ee6cf9688769ef5f2db696be7f9ef481521ac4426c154a7f02debc7\": container with ID starting with 71ab99722ee6cf9688769ef5f2db696be7f9ef481521ac4426c154a7f02debc7 not found: ID does not exist" containerID="71ab99722ee6cf9688769ef5f2db696be7f9ef481521ac4426c154a7f02debc7" Dec 11 14:12:17 crc kubenswrapper[5050]: I1211 14:12:17.831971 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"71ab99722ee6cf9688769ef5f2db696be7f9ef481521ac4426c154a7f02debc7"} err="failed to get container status \"71ab99722ee6cf9688769ef5f2db696be7f9ef481521ac4426c154a7f02debc7\": rpc error: code = NotFound desc = could not find container \"71ab99722ee6cf9688769ef5f2db696be7f9ef481521ac4426c154a7f02debc7\": container with ID starting with 71ab99722ee6cf9688769ef5f2db696be7f9ef481521ac4426c154a7f02debc7 not found: ID does not exist" Dec 11 14:12:17 crc kubenswrapper[5050]: I1211 14:12:17.831997 5050 scope.go:117] "RemoveContainer" containerID="38bea9b8fa28a96b68c698d0a2cad56e8bf2ac240b8c07d2c11031d667e89fc2" Dec 11 14:12:17 crc kubenswrapper[5050]: E1211 14:12:17.832376 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"38bea9b8fa28a96b68c698d0a2cad56e8bf2ac240b8c07d2c11031d667e89fc2\": container with ID starting with 38bea9b8fa28a96b68c698d0a2cad56e8bf2ac240b8c07d2c11031d667e89fc2 not found: ID does not exist" containerID="38bea9b8fa28a96b68c698d0a2cad56e8bf2ac240b8c07d2c11031d667e89fc2" Dec 11 14:12:17 crc kubenswrapper[5050]: I1211 14:12:17.832403 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"38bea9b8fa28a96b68c698d0a2cad56e8bf2ac240b8c07d2c11031d667e89fc2"} err="failed to get container status \"38bea9b8fa28a96b68c698d0a2cad56e8bf2ac240b8c07d2c11031d667e89fc2\": rpc error: code = NotFound desc = could not find container \"38bea9b8fa28a96b68c698d0a2cad56e8bf2ac240b8c07d2c11031d667e89fc2\": container with ID starting with 38bea9b8fa28a96b68c698d0a2cad56e8bf2ac240b8c07d2c11031d667e89fc2 not found: ID does not exist" Dec 11 14:12:17 crc kubenswrapper[5050]: I1211 14:12:17.832418 5050 scope.go:117] "RemoveContainer" containerID="8106f10bed3bf46c894b57795f1b60477931a5aba11442b13a0172969049c89a" Dec 11 14:12:18 crc kubenswrapper[5050]: I1211 14:12:18.178673 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/novacell0e20d-account-delete-ntvhn" event={"ID":"ca005c2d-f7de-486a-bbd6-a32443582833","Type":"ContainerDied","Data":"2c85789ec5a0ae6495c4ef59c97eea63484bc9c142ec4843fc3fdc3c1aa20aeb"} Dec 11 14:12:18 crc kubenswrapper[5050]: 
I1211 14:12:18.178728 5050 scope.go:117] "RemoveContainer" containerID="256a10ae9fc0b3f305dca6de262f804a0556a28126926031e10ecf611af1ad9a" Dec 11 14:12:18 crc kubenswrapper[5050]: I1211 14:12:18.178850 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/novacell0e20d-account-delete-ntvhn" Dec 11 14:12:18 crc kubenswrapper[5050]: I1211 14:12:18.215713 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7f54bc974d-nvhbp" event={"ID":"36acbdf3-346e-4207-8391-b2a03ef839e5","Type":"ContainerDied","Data":"f829f2cd18d36054e8757d223545d93ec81ebd794ad7abd983c707e9f0df6efd"} Dec 11 14:12:18 crc kubenswrapper[5050]: I1211 14:12:18.215813 5050 scope.go:117] "RemoveContainer" containerID="5f2c03aa348522be8e65276f7ae37004bcd483651a98c207f1fa66f6b76162d4" Dec 11 14:12:18 crc kubenswrapper[5050]: I1211 14:12:18.216068 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-7f54bc974d-nvhbp" Dec 11 14:12:18 crc kubenswrapper[5050]: I1211 14:12:18.225654 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/novaapi4326-account-delete-9bmsz" Dec 11 14:12:18 crc kubenswrapper[5050]: I1211 14:12:18.225864 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/novaapi4326-account-delete-9bmsz" event={"ID":"d917f471-6630-4e96-a0e4-cbde631da4a8","Type":"ContainerDied","Data":"b36256477183642b6fa08f2717a1915fe2ce3b93abc1181edc865af5e306f1f4"} Dec 11 14:12:18 crc kubenswrapper[5050]: I1211 14:12:18.258558 5050 scope.go:117] "RemoveContainer" containerID="e30b2c4e6ff129a1294024884ec6faf856995da2c5c40ac6cf9f3cdc4cf71b8a" Dec 11 14:12:18 crc kubenswrapper[5050]: I1211 14:12:18.260002 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/novacell0e20d-account-delete-ntvhn"] Dec 11 14:12:18 crc kubenswrapper[5050]: I1211 14:12:18.277017 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/novacell0e20d-account-delete-ntvhn"] Dec 11 14:12:18 crc kubenswrapper[5050]: I1211 14:12:18.287831 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/novaapi4326-account-delete-9bmsz"] Dec 11 14:12:18 crc kubenswrapper[5050]: I1211 14:12:18.296032 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/novaapi4326-account-delete-9bmsz"] Dec 11 14:12:18 crc kubenswrapper[5050]: I1211 14:12:18.302735 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-7f54bc974d-nvhbp"] Dec 11 14:12:18 crc kubenswrapper[5050]: I1211 14:12:18.307693 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-7f54bc974d-nvhbp"] Dec 11 14:12:18 crc kubenswrapper[5050]: E1211 14:12:18.946671 5050 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5ce7ebe2679d23b41ab50f0b7d865ea6308f725963cbd7e3ed9e1c34f506375c is running failed: container process not found" containerID="5ce7ebe2679d23b41ab50f0b7d865ea6308f725963cbd7e3ed9e1c34f506375c" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Dec 11 14:12:18 crc kubenswrapper[5050]: E1211 14:12:18.947199 5050 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5ce7ebe2679d23b41ab50f0b7d865ea6308f725963cbd7e3ed9e1c34f506375c is running failed: container process not found" 
containerID="5ce7ebe2679d23b41ab50f0b7d865ea6308f725963cbd7e3ed9e1c34f506375c" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Dec 11 14:12:18 crc kubenswrapper[5050]: E1211 14:12:18.947546 5050 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5ce7ebe2679d23b41ab50f0b7d865ea6308f725963cbd7e3ed9e1c34f506375c is running failed: container process not found" containerID="5ce7ebe2679d23b41ab50f0b7d865ea6308f725963cbd7e3ed9e1c34f506375c" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Dec 11 14:12:18 crc kubenswrapper[5050]: E1211 14:12:18.947593 5050 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5ce7ebe2679d23b41ab50f0b7d865ea6308f725963cbd7e3ed9e1c34f506375c is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-pjzpq" podUID="88b4966d-124b-4cf4-b52b-704955059220" containerName="ovsdb-server" Dec 11 14:12:18 crc kubenswrapper[5050]: E1211 14:12:18.950084 5050 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3713fc357ee4f2a6f119e91ece32b1dd727d67185d218ebb887b345bbd9015d1" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Dec 11 14:12:18 crc kubenswrapper[5050]: E1211 14:12:18.951663 5050 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3713fc357ee4f2a6f119e91ece32b1dd727d67185d218ebb887b345bbd9015d1" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Dec 11 14:12:18 crc kubenswrapper[5050]: E1211 14:12:18.953231 5050 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3713fc357ee4f2a6f119e91ece32b1dd727d67185d218ebb887b345bbd9015d1" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Dec 11 14:12:18 crc kubenswrapper[5050]: E1211 14:12:18.953305 5050 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-pjzpq" podUID="88b4966d-124b-4cf4-b52b-704955059220" containerName="ovs-vswitchd" Dec 11 14:12:19 crc kubenswrapper[5050]: I1211 14:12:19.560098 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="36acbdf3-346e-4207-8391-b2a03ef839e5" path="/var/lib/kubelet/pods/36acbdf3-346e-4207-8391-b2a03ef839e5/volumes" Dec 11 14:12:19 crc kubenswrapper[5050]: I1211 14:12:19.560929 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca005c2d-f7de-486a-bbd6-a32443582833" path="/var/lib/kubelet/pods/ca005c2d-f7de-486a-bbd6-a32443582833/volumes" Dec 11 14:12:19 crc kubenswrapper[5050]: I1211 14:12:19.561623 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d917f471-6630-4e96-a0e4-cbde631da4a8" path="/var/lib/kubelet/pods/d917f471-6630-4e96-a0e4-cbde631da4a8/volumes" Dec 11 14:12:21 crc kubenswrapper[5050]: I1211 14:12:21.855832 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-7766777c65-2rcww" Dec 11 14:12:22 crc kubenswrapper[5050]: I1211 14:12:22.001685 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnh6w\" (UniqueName: \"kubernetes.io/projected/ee634ad2-5f9a-4183-bddc-d076b6456276-kube-api-access-mnh6w\") pod \"ee634ad2-5f9a-4183-bddc-d076b6456276\" (UID: \"ee634ad2-5f9a-4183-bddc-d076b6456276\") " Dec 11 14:12:22 crc kubenswrapper[5050]: I1211 14:12:22.001806 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee634ad2-5f9a-4183-bddc-d076b6456276-ovndb-tls-certs\") pod \"ee634ad2-5f9a-4183-bddc-d076b6456276\" (UID: \"ee634ad2-5f9a-4183-bddc-d076b6456276\") " Dec 11 14:12:22 crc kubenswrapper[5050]: I1211 14:12:22.001883 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ee634ad2-5f9a-4183-bddc-d076b6456276-config\") pod \"ee634ad2-5f9a-4183-bddc-d076b6456276\" (UID: \"ee634ad2-5f9a-4183-bddc-d076b6456276\") " Dec 11 14:12:22 crc kubenswrapper[5050]: I1211 14:12:22.001990 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee634ad2-5f9a-4183-bddc-d076b6456276-public-tls-certs\") pod \"ee634ad2-5f9a-4183-bddc-d076b6456276\" (UID: \"ee634ad2-5f9a-4183-bddc-d076b6456276\") " Dec 11 14:12:22 crc kubenswrapper[5050]: I1211 14:12:22.002038 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee634ad2-5f9a-4183-bddc-d076b6456276-internal-tls-certs\") pod \"ee634ad2-5f9a-4183-bddc-d076b6456276\" (UID: \"ee634ad2-5f9a-4183-bddc-d076b6456276\") " Dec 11 14:12:22 crc kubenswrapper[5050]: I1211 14:12:22.002068 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/ee634ad2-5f9a-4183-bddc-d076b6456276-httpd-config\") pod \"ee634ad2-5f9a-4183-bddc-d076b6456276\" (UID: \"ee634ad2-5f9a-4183-bddc-d076b6456276\") " Dec 11 14:12:22 crc kubenswrapper[5050]: I1211 14:12:22.002139 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee634ad2-5f9a-4183-bddc-d076b6456276-combined-ca-bundle\") pod \"ee634ad2-5f9a-4183-bddc-d076b6456276\" (UID: \"ee634ad2-5f9a-4183-bddc-d076b6456276\") " Dec 11 14:12:22 crc kubenswrapper[5050]: I1211 14:12:22.008919 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee634ad2-5f9a-4183-bddc-d076b6456276-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "ee634ad2-5f9a-4183-bddc-d076b6456276" (UID: "ee634ad2-5f9a-4183-bddc-d076b6456276"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:12:22 crc kubenswrapper[5050]: I1211 14:12:22.014803 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee634ad2-5f9a-4183-bddc-d076b6456276-kube-api-access-mnh6w" (OuterVolumeSpecName: "kube-api-access-mnh6w") pod "ee634ad2-5f9a-4183-bddc-d076b6456276" (UID: "ee634ad2-5f9a-4183-bddc-d076b6456276"). InnerVolumeSpecName "kube-api-access-mnh6w". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 14:12:22 crc kubenswrapper[5050]: I1211 14:12:22.057132 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee634ad2-5f9a-4183-bddc-d076b6456276-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "ee634ad2-5f9a-4183-bddc-d076b6456276" (UID: "ee634ad2-5f9a-4183-bddc-d076b6456276"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:12:22 crc kubenswrapper[5050]: I1211 14:12:22.059285 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee634ad2-5f9a-4183-bddc-d076b6456276-config" (OuterVolumeSpecName: "config") pod "ee634ad2-5f9a-4183-bddc-d076b6456276" (UID: "ee634ad2-5f9a-4183-bddc-d076b6456276"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:12:22 crc kubenswrapper[5050]: I1211 14:12:22.062285 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee634ad2-5f9a-4183-bddc-d076b6456276-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "ee634ad2-5f9a-4183-bddc-d076b6456276" (UID: "ee634ad2-5f9a-4183-bddc-d076b6456276"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:12:22 crc kubenswrapper[5050]: I1211 14:12:22.066601 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-z72zl" Dec 11 14:12:22 crc kubenswrapper[5050]: I1211 14:12:22.066991 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-z72zl" Dec 11 14:12:22 crc kubenswrapper[5050]: I1211 14:12:22.077176 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee634ad2-5f9a-4183-bddc-d076b6456276-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ee634ad2-5f9a-4183-bddc-d076b6456276" (UID: "ee634ad2-5f9a-4183-bddc-d076b6456276"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:12:22 crc kubenswrapper[5050]: I1211 14:12:22.085181 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee634ad2-5f9a-4183-bddc-d076b6456276-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "ee634ad2-5f9a-4183-bddc-d076b6456276" (UID: "ee634ad2-5f9a-4183-bddc-d076b6456276"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:12:22 crc kubenswrapper[5050]: I1211 14:12:22.105959 5050 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee634ad2-5f9a-4183-bddc-d076b6456276-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:22 crc kubenswrapper[5050]: I1211 14:12:22.106029 5050 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/ee634ad2-5f9a-4183-bddc-d076b6456276-config\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:22 crc kubenswrapper[5050]: I1211 14:12:22.106043 5050 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee634ad2-5f9a-4183-bddc-d076b6456276-public-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:22 crc kubenswrapper[5050]: I1211 14:12:22.106059 5050 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee634ad2-5f9a-4183-bddc-d076b6456276-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:22 crc kubenswrapper[5050]: I1211 14:12:22.106072 5050 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/ee634ad2-5f9a-4183-bddc-d076b6456276-httpd-config\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:22 crc kubenswrapper[5050]: I1211 14:12:22.106083 5050 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee634ad2-5f9a-4183-bddc-d076b6456276-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:22 crc kubenswrapper[5050]: I1211 14:12:22.106093 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnh6w\" (UniqueName: \"kubernetes.io/projected/ee634ad2-5f9a-4183-bddc-d076b6456276-kube-api-access-mnh6w\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:22 crc kubenswrapper[5050]: I1211 14:12:22.126308 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-z72zl" Dec 11 14:12:22 crc kubenswrapper[5050]: I1211 14:12:22.275147 5050 generic.go:334] "Generic (PLEG): container finished" podID="ee634ad2-5f9a-4183-bddc-d076b6456276" containerID="b54b96dfb39cdff4b8a6a73d84c57d9222db83cd6d5c351fcff1ac130593b742" exitCode=0 Dec 11 14:12:22 crc kubenswrapper[5050]: I1211 14:12:22.276213 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-7766777c65-2rcww" Dec 11 14:12:22 crc kubenswrapper[5050]: I1211 14:12:22.285199 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7766777c65-2rcww" event={"ID":"ee634ad2-5f9a-4183-bddc-d076b6456276","Type":"ContainerDied","Data":"b54b96dfb39cdff4b8a6a73d84c57d9222db83cd6d5c351fcff1ac130593b742"} Dec 11 14:12:22 crc kubenswrapper[5050]: I1211 14:12:22.285247 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7766777c65-2rcww" event={"ID":"ee634ad2-5f9a-4183-bddc-d076b6456276","Type":"ContainerDied","Data":"cdad9081ffb3f2872de560a2ff42fb6d940f170002761e5918354dad1b365fd6"} Dec 11 14:12:22 crc kubenswrapper[5050]: I1211 14:12:22.285272 5050 scope.go:117] "RemoveContainer" containerID="61e67897af4ee162906e77cb4c294b5d3db5572b3070be4672c47d685548ceec" Dec 11 14:12:22 crc kubenswrapper[5050]: I1211 14:12:22.320209 5050 scope.go:117] "RemoveContainer" containerID="b54b96dfb39cdff4b8a6a73d84c57d9222db83cd6d5c351fcff1ac130593b742" Dec 11 14:12:22 crc kubenswrapper[5050]: I1211 14:12:22.317784 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-7766777c65-2rcww"] Dec 11 14:12:22 crc kubenswrapper[5050]: I1211 14:12:22.325628 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-7766777c65-2rcww"] Dec 11 14:12:22 crc kubenswrapper[5050]: I1211 14:12:22.343838 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-z72zl" Dec 11 14:12:22 crc kubenswrapper[5050]: I1211 14:12:22.347342 5050 scope.go:117] "RemoveContainer" containerID="61e67897af4ee162906e77cb4c294b5d3db5572b3070be4672c47d685548ceec" Dec 11 14:12:22 crc kubenswrapper[5050]: E1211 14:12:22.347915 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"61e67897af4ee162906e77cb4c294b5d3db5572b3070be4672c47d685548ceec\": container with ID starting with 61e67897af4ee162906e77cb4c294b5d3db5572b3070be4672c47d685548ceec not found: ID does not exist" containerID="61e67897af4ee162906e77cb4c294b5d3db5572b3070be4672c47d685548ceec" Dec 11 14:12:22 crc kubenswrapper[5050]: I1211 14:12:22.347973 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"61e67897af4ee162906e77cb4c294b5d3db5572b3070be4672c47d685548ceec"} err="failed to get container status \"61e67897af4ee162906e77cb4c294b5d3db5572b3070be4672c47d685548ceec\": rpc error: code = NotFound desc = could not find container \"61e67897af4ee162906e77cb4c294b5d3db5572b3070be4672c47d685548ceec\": container with ID starting with 61e67897af4ee162906e77cb4c294b5d3db5572b3070be4672c47d685548ceec not found: ID does not exist" Dec 11 14:12:22 crc kubenswrapper[5050]: I1211 14:12:22.348000 5050 scope.go:117] "RemoveContainer" containerID="b54b96dfb39cdff4b8a6a73d84c57d9222db83cd6d5c351fcff1ac130593b742" Dec 11 14:12:22 crc kubenswrapper[5050]: E1211 14:12:22.348377 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b54b96dfb39cdff4b8a6a73d84c57d9222db83cd6d5c351fcff1ac130593b742\": container with ID starting with b54b96dfb39cdff4b8a6a73d84c57d9222db83cd6d5c351fcff1ac130593b742 not found: ID does not exist" containerID="b54b96dfb39cdff4b8a6a73d84c57d9222db83cd6d5c351fcff1ac130593b742" Dec 11 14:12:22 crc kubenswrapper[5050]: I1211 14:12:22.348404 5050 pod_container_deletor.go:53] "DeleteContainer returned 
error" containerID={"Type":"cri-o","ID":"b54b96dfb39cdff4b8a6a73d84c57d9222db83cd6d5c351fcff1ac130593b742"} err="failed to get container status \"b54b96dfb39cdff4b8a6a73d84c57d9222db83cd6d5c351fcff1ac130593b742\": rpc error: code = NotFound desc = could not find container \"b54b96dfb39cdff4b8a6a73d84c57d9222db83cd6d5c351fcff1ac130593b742\": container with ID starting with b54b96dfb39cdff4b8a6a73d84c57d9222db83cd6d5c351fcff1ac130593b742 not found: ID does not exist" Dec 11 14:12:22 crc kubenswrapper[5050]: I1211 14:12:22.407943 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-z72zl"] Dec 11 14:12:23 crc kubenswrapper[5050]: I1211 14:12:23.559614 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee634ad2-5f9a-4183-bddc-d076b6456276" path="/var/lib/kubelet/pods/ee634ad2-5f9a-4183-bddc-d076b6456276/volumes" Dec 11 14:12:23 crc kubenswrapper[5050]: E1211 14:12:23.947529 5050 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5ce7ebe2679d23b41ab50f0b7d865ea6308f725963cbd7e3ed9e1c34f506375c is running failed: container process not found" containerID="5ce7ebe2679d23b41ab50f0b7d865ea6308f725963cbd7e3ed9e1c34f506375c" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Dec 11 14:12:23 crc kubenswrapper[5050]: E1211 14:12:23.949204 5050 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5ce7ebe2679d23b41ab50f0b7d865ea6308f725963cbd7e3ed9e1c34f506375c is running failed: container process not found" containerID="5ce7ebe2679d23b41ab50f0b7d865ea6308f725963cbd7e3ed9e1c34f506375c" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Dec 11 14:12:23 crc kubenswrapper[5050]: E1211 14:12:23.949785 5050 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5ce7ebe2679d23b41ab50f0b7d865ea6308f725963cbd7e3ed9e1c34f506375c is running failed: container process not found" containerID="5ce7ebe2679d23b41ab50f0b7d865ea6308f725963cbd7e3ed9e1c34f506375c" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Dec 11 14:12:23 crc kubenswrapper[5050]: E1211 14:12:23.949887 5050 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5ce7ebe2679d23b41ab50f0b7d865ea6308f725963cbd7e3ed9e1c34f506375c is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-pjzpq" podUID="88b4966d-124b-4cf4-b52b-704955059220" containerName="ovsdb-server" Dec 11 14:12:23 crc kubenswrapper[5050]: E1211 14:12:23.949950 5050 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3713fc357ee4f2a6f119e91ece32b1dd727d67185d218ebb887b345bbd9015d1" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Dec 11 14:12:23 crc kubenswrapper[5050]: E1211 14:12:23.952343 5050 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3713fc357ee4f2a6f119e91ece32b1dd727d67185d218ebb887b345bbd9015d1" 
cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Dec 11 14:12:23 crc kubenswrapper[5050]: E1211 14:12:23.954251 5050 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3713fc357ee4f2a6f119e91ece32b1dd727d67185d218ebb887b345bbd9015d1" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Dec 11 14:12:23 crc kubenswrapper[5050]: E1211 14:12:23.954335 5050 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-pjzpq" podUID="88b4966d-124b-4cf4-b52b-704955059220" containerName="ovs-vswitchd" Dec 11 14:12:24 crc kubenswrapper[5050]: I1211 14:12:24.298663 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-z72zl" podUID="3e5dbd8a-7796-49d6-b2e3-23f15d35b7f8" containerName="registry-server" containerID="cri-o://f2162cdc516afc0854a82b78677134d83e4dfa653ed135b76204de329c5a08c2" gracePeriod=2 Dec 11 14:12:25 crc kubenswrapper[5050]: I1211 14:12:25.308134 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-z72zl" Dec 11 14:12:25 crc kubenswrapper[5050]: I1211 14:12:25.309352 5050 generic.go:334] "Generic (PLEG): container finished" podID="3e5dbd8a-7796-49d6-b2e3-23f15d35b7f8" containerID="f2162cdc516afc0854a82b78677134d83e4dfa653ed135b76204de329c5a08c2" exitCode=0 Dec 11 14:12:25 crc kubenswrapper[5050]: I1211 14:12:25.309403 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z72zl" event={"ID":"3e5dbd8a-7796-49d6-b2e3-23f15d35b7f8","Type":"ContainerDied","Data":"f2162cdc516afc0854a82b78677134d83e4dfa653ed135b76204de329c5a08c2"} Dec 11 14:12:25 crc kubenswrapper[5050]: I1211 14:12:25.309439 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-z72zl" event={"ID":"3e5dbd8a-7796-49d6-b2e3-23f15d35b7f8","Type":"ContainerDied","Data":"c15320225f101e1a2e912cef7830173e3b944e61d96c75a72c6b39390d43a440"} Dec 11 14:12:25 crc kubenswrapper[5050]: I1211 14:12:25.309458 5050 scope.go:117] "RemoveContainer" containerID="f2162cdc516afc0854a82b78677134d83e4dfa653ed135b76204de329c5a08c2" Dec 11 14:12:25 crc kubenswrapper[5050]: I1211 14:12:25.337452 5050 scope.go:117] "RemoveContainer" containerID="09344e4a97a8ab6a4dd14bdb19ffc7b857d5e251f7e429a179a74c21dc2fac38" Dec 11 14:12:25 crc kubenswrapper[5050]: I1211 14:12:25.377051 5050 scope.go:117] "RemoveContainer" containerID="0d96e38916ee974bfb00beafad9631882e17ac13c98ca0ed36655136da275949" Dec 11 14:12:25 crc kubenswrapper[5050]: I1211 14:12:25.419266 5050 scope.go:117] "RemoveContainer" containerID="f2162cdc516afc0854a82b78677134d83e4dfa653ed135b76204de329c5a08c2" Dec 11 14:12:25 crc kubenswrapper[5050]: E1211 14:12:25.421462 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f2162cdc516afc0854a82b78677134d83e4dfa653ed135b76204de329c5a08c2\": container with ID starting with f2162cdc516afc0854a82b78677134d83e4dfa653ed135b76204de329c5a08c2 not found: ID does not exist" containerID="f2162cdc516afc0854a82b78677134d83e4dfa653ed135b76204de329c5a08c2" Dec 11 14:12:25 crc kubenswrapper[5050]: I1211 14:12:25.421517 
5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f2162cdc516afc0854a82b78677134d83e4dfa653ed135b76204de329c5a08c2"} err="failed to get container status \"f2162cdc516afc0854a82b78677134d83e4dfa653ed135b76204de329c5a08c2\": rpc error: code = NotFound desc = could not find container \"f2162cdc516afc0854a82b78677134d83e4dfa653ed135b76204de329c5a08c2\": container with ID starting with f2162cdc516afc0854a82b78677134d83e4dfa653ed135b76204de329c5a08c2 not found: ID does not exist" Dec 11 14:12:25 crc kubenswrapper[5050]: I1211 14:12:25.421552 5050 scope.go:117] "RemoveContainer" containerID="09344e4a97a8ab6a4dd14bdb19ffc7b857d5e251f7e429a179a74c21dc2fac38" Dec 11 14:12:25 crc kubenswrapper[5050]: E1211 14:12:25.422942 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"09344e4a97a8ab6a4dd14bdb19ffc7b857d5e251f7e429a179a74c21dc2fac38\": container with ID starting with 09344e4a97a8ab6a4dd14bdb19ffc7b857d5e251f7e429a179a74c21dc2fac38 not found: ID does not exist" containerID="09344e4a97a8ab6a4dd14bdb19ffc7b857d5e251f7e429a179a74c21dc2fac38" Dec 11 14:12:25 crc kubenswrapper[5050]: I1211 14:12:25.423004 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"09344e4a97a8ab6a4dd14bdb19ffc7b857d5e251f7e429a179a74c21dc2fac38"} err="failed to get container status \"09344e4a97a8ab6a4dd14bdb19ffc7b857d5e251f7e429a179a74c21dc2fac38\": rpc error: code = NotFound desc = could not find container \"09344e4a97a8ab6a4dd14bdb19ffc7b857d5e251f7e429a179a74c21dc2fac38\": container with ID starting with 09344e4a97a8ab6a4dd14bdb19ffc7b857d5e251f7e429a179a74c21dc2fac38 not found: ID does not exist" Dec 11 14:12:25 crc kubenswrapper[5050]: I1211 14:12:25.423054 5050 scope.go:117] "RemoveContainer" containerID="0d96e38916ee974bfb00beafad9631882e17ac13c98ca0ed36655136da275949" Dec 11 14:12:25 crc kubenswrapper[5050]: E1211 14:12:25.424309 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0d96e38916ee974bfb00beafad9631882e17ac13c98ca0ed36655136da275949\": container with ID starting with 0d96e38916ee974bfb00beafad9631882e17ac13c98ca0ed36655136da275949 not found: ID does not exist" containerID="0d96e38916ee974bfb00beafad9631882e17ac13c98ca0ed36655136da275949" Dec 11 14:12:25 crc kubenswrapper[5050]: I1211 14:12:25.424348 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d96e38916ee974bfb00beafad9631882e17ac13c98ca0ed36655136da275949"} err="failed to get container status \"0d96e38916ee974bfb00beafad9631882e17ac13c98ca0ed36655136da275949\": rpc error: code = NotFound desc = could not find container \"0d96e38916ee974bfb00beafad9631882e17ac13c98ca0ed36655136da275949\": container with ID starting with 0d96e38916ee974bfb00beafad9631882e17ac13c98ca0ed36655136da275949 not found: ID does not exist" Dec 11 14:12:25 crc kubenswrapper[5050]: I1211 14:12:25.471093 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e5dbd8a-7796-49d6-b2e3-23f15d35b7f8-catalog-content\") pod \"3e5dbd8a-7796-49d6-b2e3-23f15d35b7f8\" (UID: \"3e5dbd8a-7796-49d6-b2e3-23f15d35b7f8\") " Dec 11 14:12:25 crc kubenswrapper[5050]: I1211 14:12:25.471301 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/3e5dbd8a-7796-49d6-b2e3-23f15d35b7f8-utilities\") pod \"3e5dbd8a-7796-49d6-b2e3-23f15d35b7f8\" (UID: \"3e5dbd8a-7796-49d6-b2e3-23f15d35b7f8\") " Dec 11 14:12:25 crc kubenswrapper[5050]: I1211 14:12:25.471427 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9mxx\" (UniqueName: \"kubernetes.io/projected/3e5dbd8a-7796-49d6-b2e3-23f15d35b7f8-kube-api-access-w9mxx\") pod \"3e5dbd8a-7796-49d6-b2e3-23f15d35b7f8\" (UID: \"3e5dbd8a-7796-49d6-b2e3-23f15d35b7f8\") " Dec 11 14:12:25 crc kubenswrapper[5050]: I1211 14:12:25.472439 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3e5dbd8a-7796-49d6-b2e3-23f15d35b7f8-utilities" (OuterVolumeSpecName: "utilities") pod "3e5dbd8a-7796-49d6-b2e3-23f15d35b7f8" (UID: "3e5dbd8a-7796-49d6-b2e3-23f15d35b7f8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 14:12:25 crc kubenswrapper[5050]: I1211 14:12:25.480774 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e5dbd8a-7796-49d6-b2e3-23f15d35b7f8-kube-api-access-w9mxx" (OuterVolumeSpecName: "kube-api-access-w9mxx") pod "3e5dbd8a-7796-49d6-b2e3-23f15d35b7f8" (UID: "3e5dbd8a-7796-49d6-b2e3-23f15d35b7f8"). InnerVolumeSpecName "kube-api-access-w9mxx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 14:12:25 crc kubenswrapper[5050]: I1211 14:12:25.542552 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3e5dbd8a-7796-49d6-b2e3-23f15d35b7f8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3e5dbd8a-7796-49d6-b2e3-23f15d35b7f8" (UID: "3e5dbd8a-7796-49d6-b2e3-23f15d35b7f8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 14:12:25 crc kubenswrapper[5050]: I1211 14:12:25.573375 5050 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e5dbd8a-7796-49d6-b2e3-23f15d35b7f8-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:25 crc kubenswrapper[5050]: I1211 14:12:25.573413 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9mxx\" (UniqueName: \"kubernetes.io/projected/3e5dbd8a-7796-49d6-b2e3-23f15d35b7f8-kube-api-access-w9mxx\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:25 crc kubenswrapper[5050]: I1211 14:12:25.573425 5050 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e5dbd8a-7796-49d6-b2e3-23f15d35b7f8-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:26 crc kubenswrapper[5050]: I1211 14:12:26.323592 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-z72zl" Dec 11 14:12:26 crc kubenswrapper[5050]: I1211 14:12:26.348744 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-z72zl"] Dec 11 14:12:26 crc kubenswrapper[5050]: I1211 14:12:26.356431 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-z72zl"] Dec 11 14:12:27 crc kubenswrapper[5050]: I1211 14:12:27.558500 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e5dbd8a-7796-49d6-b2e3-23f15d35b7f8" path="/var/lib/kubelet/pods/3e5dbd8a-7796-49d6-b2e3-23f15d35b7f8/volumes" Dec 11 14:12:28 crc kubenswrapper[5050]: E1211 14:12:28.946947 5050 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5ce7ebe2679d23b41ab50f0b7d865ea6308f725963cbd7e3ed9e1c34f506375c is running failed: container process not found" containerID="5ce7ebe2679d23b41ab50f0b7d865ea6308f725963cbd7e3ed9e1c34f506375c" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Dec 11 14:12:28 crc kubenswrapper[5050]: E1211 14:12:28.948369 5050 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5ce7ebe2679d23b41ab50f0b7d865ea6308f725963cbd7e3ed9e1c34f506375c is running failed: container process not found" containerID="5ce7ebe2679d23b41ab50f0b7d865ea6308f725963cbd7e3ed9e1c34f506375c" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Dec 11 14:12:28 crc kubenswrapper[5050]: E1211 14:12:28.948641 5050 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3713fc357ee4f2a6f119e91ece32b1dd727d67185d218ebb887b345bbd9015d1" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Dec 11 14:12:28 crc kubenswrapper[5050]: E1211 14:12:28.948831 5050 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5ce7ebe2679d23b41ab50f0b7d865ea6308f725963cbd7e3ed9e1c34f506375c is running failed: container process not found" containerID="5ce7ebe2679d23b41ab50f0b7d865ea6308f725963cbd7e3ed9e1c34f506375c" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Dec 11 14:12:28 crc kubenswrapper[5050]: E1211 14:12:28.949068 5050 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5ce7ebe2679d23b41ab50f0b7d865ea6308f725963cbd7e3ed9e1c34f506375c is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-pjzpq" podUID="88b4966d-124b-4cf4-b52b-704955059220" containerName="ovsdb-server" Dec 11 14:12:28 crc kubenswrapper[5050]: E1211 14:12:28.950168 5050 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3713fc357ee4f2a6f119e91ece32b1dd727d67185d218ebb887b345bbd9015d1" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Dec 11 14:12:28 crc kubenswrapper[5050]: E1211 14:12:28.951415 5050 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec 
PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3713fc357ee4f2a6f119e91ece32b1dd727d67185d218ebb887b345bbd9015d1" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Dec 11 14:12:28 crc kubenswrapper[5050]: E1211 14:12:28.951456 5050 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-pjzpq" podUID="88b4966d-124b-4cf4-b52b-704955059220" containerName="ovs-vswitchd" Dec 11 14:12:33 crc kubenswrapper[5050]: E1211 14:12:33.947776 5050 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5ce7ebe2679d23b41ab50f0b7d865ea6308f725963cbd7e3ed9e1c34f506375c is running failed: container process not found" containerID="5ce7ebe2679d23b41ab50f0b7d865ea6308f725963cbd7e3ed9e1c34f506375c" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Dec 11 14:12:33 crc kubenswrapper[5050]: E1211 14:12:33.949065 5050 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5ce7ebe2679d23b41ab50f0b7d865ea6308f725963cbd7e3ed9e1c34f506375c is running failed: container process not found" containerID="5ce7ebe2679d23b41ab50f0b7d865ea6308f725963cbd7e3ed9e1c34f506375c" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Dec 11 14:12:33 crc kubenswrapper[5050]: E1211 14:12:33.949363 5050 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3713fc357ee4f2a6f119e91ece32b1dd727d67185d218ebb887b345bbd9015d1" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Dec 11 14:12:33 crc kubenswrapper[5050]: E1211 14:12:33.949525 5050 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5ce7ebe2679d23b41ab50f0b7d865ea6308f725963cbd7e3ed9e1c34f506375c is running failed: container process not found" containerID="5ce7ebe2679d23b41ab50f0b7d865ea6308f725963cbd7e3ed9e1c34f506375c" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Dec 11 14:12:33 crc kubenswrapper[5050]: E1211 14:12:33.949575 5050 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5ce7ebe2679d23b41ab50f0b7d865ea6308f725963cbd7e3ed9e1c34f506375c is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-pjzpq" podUID="88b4966d-124b-4cf4-b52b-704955059220" containerName="ovsdb-server" Dec 11 14:12:33 crc kubenswrapper[5050]: E1211 14:12:33.951971 5050 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3713fc357ee4f2a6f119e91ece32b1dd727d67185d218ebb887b345bbd9015d1" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Dec 11 14:12:33 crc kubenswrapper[5050]: E1211 14:12:33.953874 5050 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" 
containerID="3713fc357ee4f2a6f119e91ece32b1dd727d67185d218ebb887b345bbd9015d1" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Dec 11 14:12:33 crc kubenswrapper[5050]: E1211 14:12:33.953982 5050 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-pjzpq" podUID="88b4966d-124b-4cf4-b52b-704955059220" containerName="ovs-vswitchd" Dec 11 14:12:36 crc kubenswrapper[5050]: I1211 14:12:36.437319 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-pjzpq_88b4966d-124b-4cf4-b52b-704955059220/ovs-vswitchd/0.log" Dec 11 14:12:36 crc kubenswrapper[5050]: I1211 14:12:36.440218 5050 generic.go:334] "Generic (PLEG): container finished" podID="88b4966d-124b-4cf4-b52b-704955059220" containerID="3713fc357ee4f2a6f119e91ece32b1dd727d67185d218ebb887b345bbd9015d1" exitCode=137 Dec 11 14:12:36 crc kubenswrapper[5050]: I1211 14:12:36.440313 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-pjzpq" event={"ID":"88b4966d-124b-4cf4-b52b-704955059220","Type":"ContainerDied","Data":"3713fc357ee4f2a6f119e91ece32b1dd727d67185d218ebb887b345bbd9015d1"} Dec 11 14:12:36 crc kubenswrapper[5050]: I1211 14:12:36.452489 5050 generic.go:334] "Generic (PLEG): container finished" podID="a5dabf50-534b-45cb-87db-45373930fe82" containerID="44acc22d4dbaf9801a70faf934b08100e13594f1cab4f854bc7c2b3dd8963fb5" exitCode=137 Dec 11 14:12:36 crc kubenswrapper[5050]: I1211 14:12:36.452542 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a5dabf50-534b-45cb-87db-45373930fe82","Type":"ContainerDied","Data":"44acc22d4dbaf9801a70faf934b08100e13594f1cab4f854bc7c2b3dd8963fb5"} Dec 11 14:12:36 crc kubenswrapper[5050]: I1211 14:12:36.578685 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Dec 11 14:12:36 crc kubenswrapper[5050]: I1211 14:12:36.661474 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jtxbm\" (UniqueName: \"kubernetes.io/projected/a5dabf50-534b-45cb-87db-45373930fe82-kube-api-access-jtxbm\") pod \"a5dabf50-534b-45cb-87db-45373930fe82\" (UID: \"a5dabf50-534b-45cb-87db-45373930fe82\") " Dec 11 14:12:36 crc kubenswrapper[5050]: I1211 14:12:36.661539 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/a5dabf50-534b-45cb-87db-45373930fe82-etc-swift\") pod \"a5dabf50-534b-45cb-87db-45373930fe82\" (UID: \"a5dabf50-534b-45cb-87db-45373930fe82\") " Dec 11 14:12:36 crc kubenswrapper[5050]: I1211 14:12:36.661642 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/a5dabf50-534b-45cb-87db-45373930fe82-cache\") pod \"a5dabf50-534b-45cb-87db-45373930fe82\" (UID: \"a5dabf50-534b-45cb-87db-45373930fe82\") " Dec 11 14:12:36 crc kubenswrapper[5050]: I1211 14:12:36.661847 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swift\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"a5dabf50-534b-45cb-87db-45373930fe82\" (UID: \"a5dabf50-534b-45cb-87db-45373930fe82\") " Dec 11 14:12:36 crc kubenswrapper[5050]: I1211 14:12:36.661942 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/a5dabf50-534b-45cb-87db-45373930fe82-lock\") pod \"a5dabf50-534b-45cb-87db-45373930fe82\" (UID: \"a5dabf50-534b-45cb-87db-45373930fe82\") " Dec 11 14:12:36 crc kubenswrapper[5050]: I1211 14:12:36.662385 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a5dabf50-534b-45cb-87db-45373930fe82-cache" (OuterVolumeSpecName: "cache") pod "a5dabf50-534b-45cb-87db-45373930fe82" (UID: "a5dabf50-534b-45cb-87db-45373930fe82"). InnerVolumeSpecName "cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 14:12:36 crc kubenswrapper[5050]: I1211 14:12:36.662862 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a5dabf50-534b-45cb-87db-45373930fe82-lock" (OuterVolumeSpecName: "lock") pod "a5dabf50-534b-45cb-87db-45373930fe82" (UID: "a5dabf50-534b-45cb-87db-45373930fe82"). InnerVolumeSpecName "lock". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 14:12:36 crc kubenswrapper[5050]: I1211 14:12:36.663202 5050 reconciler_common.go:293] "Volume detached for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/a5dabf50-534b-45cb-87db-45373930fe82-lock\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:36 crc kubenswrapper[5050]: I1211 14:12:36.663219 5050 reconciler_common.go:293] "Volume detached for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/a5dabf50-534b-45cb-87db-45373930fe82-cache\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:36 crc kubenswrapper[5050]: I1211 14:12:36.666985 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a5dabf50-534b-45cb-87db-45373930fe82-kube-api-access-jtxbm" (OuterVolumeSpecName: "kube-api-access-jtxbm") pod "a5dabf50-534b-45cb-87db-45373930fe82" (UID: "a5dabf50-534b-45cb-87db-45373930fe82"). InnerVolumeSpecName "kube-api-access-jtxbm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 14:12:36 crc kubenswrapper[5050]: I1211 14:12:36.673555 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "swift") pod "a5dabf50-534b-45cb-87db-45373930fe82" (UID: "a5dabf50-534b-45cb-87db-45373930fe82"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 11 14:12:36 crc kubenswrapper[5050]: I1211 14:12:36.679958 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a5dabf50-534b-45cb-87db-45373930fe82-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "a5dabf50-534b-45cb-87db-45373930fe82" (UID: "a5dabf50-534b-45cb-87db-45373930fe82"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 14:12:36 crc kubenswrapper[5050]: I1211 14:12:36.765067 5050 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Dec 11 14:12:36 crc kubenswrapper[5050]: I1211 14:12:36.765103 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jtxbm\" (UniqueName: \"kubernetes.io/projected/a5dabf50-534b-45cb-87db-45373930fe82-kube-api-access-jtxbm\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:36 crc kubenswrapper[5050]: I1211 14:12:36.765117 5050 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/a5dabf50-534b-45cb-87db-45373930fe82-etc-swift\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:36 crc kubenswrapper[5050]: I1211 14:12:36.771264 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-pjzpq_88b4966d-124b-4cf4-b52b-704955059220/ovs-vswitchd/0.log" Dec 11 14:12:36 crc kubenswrapper[5050]: I1211 14:12:36.772331 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-pjzpq" Dec 11 14:12:36 crc kubenswrapper[5050]: I1211 14:12:36.780136 5050 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Dec 11 14:12:36 crc kubenswrapper[5050]: I1211 14:12:36.866458 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/88b4966d-124b-4cf4-b52b-704955059220-scripts\") pod \"88b4966d-124b-4cf4-b52b-704955059220\" (UID: \"88b4966d-124b-4cf4-b52b-704955059220\") " Dec 11 14:12:36 crc kubenswrapper[5050]: I1211 14:12:36.866541 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/88b4966d-124b-4cf4-b52b-704955059220-var-lib\") pod \"88b4966d-124b-4cf4-b52b-704955059220\" (UID: \"88b4966d-124b-4cf4-b52b-704955059220\") " Dec 11 14:12:36 crc kubenswrapper[5050]: I1211 14:12:36.866568 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcvpj\" (UniqueName: \"kubernetes.io/projected/88b4966d-124b-4cf4-b52b-704955059220-kube-api-access-fcvpj\") pod \"88b4966d-124b-4cf4-b52b-704955059220\" (UID: \"88b4966d-124b-4cf4-b52b-704955059220\") " Dec 11 14:12:36 crc kubenswrapper[5050]: I1211 14:12:36.866593 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/88b4966d-124b-4cf4-b52b-704955059220-etc-ovs\") pod \"88b4966d-124b-4cf4-b52b-704955059220\" (UID: \"88b4966d-124b-4cf4-b52b-704955059220\") " Dec 11 14:12:36 crc kubenswrapper[5050]: I1211 14:12:36.866632 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/88b4966d-124b-4cf4-b52b-704955059220-var-run\") pod \"88b4966d-124b-4cf4-b52b-704955059220\" (UID: \"88b4966d-124b-4cf4-b52b-704955059220\") " Dec 11 14:12:36 crc kubenswrapper[5050]: I1211 14:12:36.866776 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/88b4966d-124b-4cf4-b52b-704955059220-var-log\") pod \"88b4966d-124b-4cf4-b52b-704955059220\" (UID: \"88b4966d-124b-4cf4-b52b-704955059220\") " Dec 11 14:12:36 crc kubenswrapper[5050]: I1211 14:12:36.866788 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/88b4966d-124b-4cf4-b52b-704955059220-var-lib" (OuterVolumeSpecName: "var-lib") pod "88b4966d-124b-4cf4-b52b-704955059220" (UID: "88b4966d-124b-4cf4-b52b-704955059220"). InnerVolumeSpecName "var-lib". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 14:12:36 crc kubenswrapper[5050]: I1211 14:12:36.866807 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/88b4966d-124b-4cf4-b52b-704955059220-etc-ovs" (OuterVolumeSpecName: "etc-ovs") pod "88b4966d-124b-4cf4-b52b-704955059220" (UID: "88b4966d-124b-4cf4-b52b-704955059220"). InnerVolumeSpecName "etc-ovs". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 14:12:36 crc kubenswrapper[5050]: I1211 14:12:36.866850 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/88b4966d-124b-4cf4-b52b-704955059220-var-run" (OuterVolumeSpecName: "var-run") pod "88b4966d-124b-4cf4-b52b-704955059220" (UID: "88b4966d-124b-4cf4-b52b-704955059220"). 
InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 14:12:36 crc kubenswrapper[5050]: I1211 14:12:36.866954 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/88b4966d-124b-4cf4-b52b-704955059220-var-log" (OuterVolumeSpecName: "var-log") pod "88b4966d-124b-4cf4-b52b-704955059220" (UID: "88b4966d-124b-4cf4-b52b-704955059220"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 14:12:36 crc kubenswrapper[5050]: I1211 14:12:36.867122 5050 reconciler_common.go:293] "Volume detached for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/88b4966d-124b-4cf4-b52b-704955059220-var-lib\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:36 crc kubenswrapper[5050]: I1211 14:12:36.867141 5050 reconciler_common.go:293] "Volume detached for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/88b4966d-124b-4cf4-b52b-704955059220-etc-ovs\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:36 crc kubenswrapper[5050]: I1211 14:12:36.867152 5050 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/88b4966d-124b-4cf4-b52b-704955059220-var-run\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:36 crc kubenswrapper[5050]: I1211 14:12:36.867162 5050 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:36 crc kubenswrapper[5050]: I1211 14:12:36.867174 5050 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/88b4966d-124b-4cf4-b52b-704955059220-var-log\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:36 crc kubenswrapper[5050]: I1211 14:12:36.870105 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/88b4966d-124b-4cf4-b52b-704955059220-scripts" (OuterVolumeSpecName: "scripts") pod "88b4966d-124b-4cf4-b52b-704955059220" (UID: "88b4966d-124b-4cf4-b52b-704955059220"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 14:12:36 crc kubenswrapper[5050]: I1211 14:12:36.870942 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88b4966d-124b-4cf4-b52b-704955059220-kube-api-access-fcvpj" (OuterVolumeSpecName: "kube-api-access-fcvpj") pod "88b4966d-124b-4cf4-b52b-704955059220" (UID: "88b4966d-124b-4cf4-b52b-704955059220"). InnerVolumeSpecName "kube-api-access-fcvpj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 14:12:36 crc kubenswrapper[5050]: I1211 14:12:36.968159 5050 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/88b4966d-124b-4cf4-b52b-704955059220-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:36 crc kubenswrapper[5050]: I1211 14:12:36.968210 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcvpj\" (UniqueName: \"kubernetes.io/projected/88b4966d-124b-4cf4-b52b-704955059220-kube-api-access-fcvpj\") on node \"crc\" DevicePath \"\"" Dec 11 14:12:37 crc kubenswrapper[5050]: I1211 14:12:37.466806 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-pjzpq_88b4966d-124b-4cf4-b52b-704955059220/ovs-vswitchd/0.log" Dec 11 14:12:37 crc kubenswrapper[5050]: I1211 14:12:37.468075 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-pjzpq" event={"ID":"88b4966d-124b-4cf4-b52b-704955059220","Type":"ContainerDied","Data":"31794c1b02c4feda1b83378dcdbbca471105b28edc8768160a171cede3872d9f"} Dec 11 14:12:37 crc kubenswrapper[5050]: I1211 14:12:37.468136 5050 scope.go:117] "RemoveContainer" containerID="3713fc357ee4f2a6f119e91ece32b1dd727d67185d218ebb887b345bbd9015d1" Dec 11 14:12:37 crc kubenswrapper[5050]: I1211 14:12:37.468093 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-pjzpq" Dec 11 14:12:37 crc kubenswrapper[5050]: I1211 14:12:37.477131 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a5dabf50-534b-45cb-87db-45373930fe82","Type":"ContainerDied","Data":"e4960e7eda3e1efa4061f07af255476e8516177687e0f468f6a9a0c6571c04a9"} Dec 11 14:12:37 crc kubenswrapper[5050]: I1211 14:12:37.477238 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Dec 11 14:12:37 crc kubenswrapper[5050]: I1211 14:12:37.506257 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-ovs-pjzpq"] Dec 11 14:12:37 crc kubenswrapper[5050]: I1211 14:12:37.512665 5050 scope.go:117] "RemoveContainer" containerID="5ce7ebe2679d23b41ab50f0b7d865ea6308f725963cbd7e3ed9e1c34f506375c" Dec 11 14:12:37 crc kubenswrapper[5050]: I1211 14:12:37.517881 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-ovs-pjzpq"] Dec 11 14:12:37 crc kubenswrapper[5050]: I1211 14:12:37.526002 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-storage-0"] Dec 11 14:12:37 crc kubenswrapper[5050]: I1211 14:12:37.537411 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-storage-0"] Dec 11 14:12:37 crc kubenswrapper[5050]: I1211 14:12:37.544407 5050 scope.go:117] "RemoveContainer" containerID="869a4abfd180a5e436ede22ec0513f606237aa2e6ae7e715fd1ae502f1b97492" Dec 11 14:12:37 crc kubenswrapper[5050]: I1211 14:12:37.558561 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="88b4966d-124b-4cf4-b52b-704955059220" path="/var/lib/kubelet/pods/88b4966d-124b-4cf4-b52b-704955059220/volumes" Dec 11 14:12:37 crc kubenswrapper[5050]: I1211 14:12:37.559792 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a5dabf50-534b-45cb-87db-45373930fe82" path="/var/lib/kubelet/pods/a5dabf50-534b-45cb-87db-45373930fe82/volumes" Dec 11 14:12:37 crc kubenswrapper[5050]: I1211 14:12:37.571322 5050 scope.go:117] "RemoveContainer" containerID="44acc22d4dbaf9801a70faf934b08100e13594f1cab4f854bc7c2b3dd8963fb5" Dec 11 14:12:37 crc kubenswrapper[5050]: I1211 14:12:37.595912 5050 scope.go:117] "RemoveContainer" containerID="bfa20bc6bb25080f92169274679704ad90a7e9f219408ae8226d21d94b1cbce8" Dec 11 14:12:37 crc kubenswrapper[5050]: I1211 14:12:37.612675 5050 scope.go:117] "RemoveContainer" containerID="c59f8bb548eec4e62535766386e180811808e5f7cf7913a3c02582a806b4073f" Dec 11 14:12:37 crc kubenswrapper[5050]: I1211 14:12:37.630268 5050 scope.go:117] "RemoveContainer" containerID="038b0092de538faefca3e8ca1075a18dd7d58853d0c6eb5fdadf157d7e0f2147" Dec 11 14:12:37 crc kubenswrapper[5050]: I1211 14:12:37.648988 5050 scope.go:117] "RemoveContainer" containerID="cf225480f25db60b0e9d83e3b98e796c13673684172c9b6129d91e173f39beb6" Dec 11 14:12:37 crc kubenswrapper[5050]: I1211 14:12:37.667073 5050 scope.go:117] "RemoveContainer" containerID="cca45253ddc48fc0f165034563c70e630dd7fac3f3c0cf0ba23d657266869519" Dec 11 14:12:37 crc kubenswrapper[5050]: I1211 14:12:37.684639 5050 scope.go:117] "RemoveContainer" containerID="0cac73e478a996fa3e9d0714853b7480372b37e951d6e3e0667c3722790407c8" Dec 11 14:12:37 crc kubenswrapper[5050]: I1211 14:12:37.701218 5050 scope.go:117] "RemoveContainer" containerID="b6c3d2263c2a8d964cc7422913cdc01c0a98e50a91cd20af0a8e5219f5c49d84" Dec 11 14:12:37 crc kubenswrapper[5050]: I1211 14:12:37.719319 5050 scope.go:117] "RemoveContainer" containerID="3c2652501efb162ddb07fcdf676ff7b425046c43c56a32e87cf2a1b7f86d8517" Dec 11 14:12:37 crc kubenswrapper[5050]: I1211 14:12:37.741404 5050 scope.go:117] "RemoveContainer" containerID="0b5dadf04d75b453da45faaba622c4ca8fa3ea72335c7e1cbe662af9423d5319" Dec 11 14:12:37 crc kubenswrapper[5050]: I1211 14:12:37.767229 5050 scope.go:117] "RemoveContainer" containerID="a92c2e4e55be6c0dccf533363df9021ca510e9f14d1f5a908a2795582d914ca4" Dec 11 14:12:37 crc kubenswrapper[5050]: I1211 
14:12:37.786361 5050 scope.go:117] "RemoveContainer" containerID="cc96ca859857b932852bab79b175e12e28dd66ea2b3f97528e65f1c394df699c" Dec 11 14:12:37 crc kubenswrapper[5050]: I1211 14:12:37.805259 5050 scope.go:117] "RemoveContainer" containerID="c1a089eb1d8d523f1a786eee0915def7fa7aab5c3e4514f0c035a46c61eef1cb" Dec 11 14:12:37 crc kubenswrapper[5050]: I1211 14:12:37.826639 5050 scope.go:117] "RemoveContainer" containerID="feac1300b5d8a5ea16c8321c45cc457e5dbf72ac6aab1103080d7accf21709e1" Dec 11 14:12:37 crc kubenswrapper[5050]: I1211 14:12:37.848098 5050 scope.go:117] "RemoveContainer" containerID="69f5ff7e4ffed5e07ece2e747c39e17e11ce1252b75c01b5c3313338481c02f5" Dec 11 14:12:38 crc kubenswrapper[5050]: I1211 14:12:38.493034 5050 scope.go:117] "RemoveContainer" containerID="8498e742367424482ed9a44ca42a11a58844241a90788c1a5e431a1e93f23131" Dec 11 14:12:38 crc kubenswrapper[5050]: I1211 14:12:38.523213 5050 scope.go:117] "RemoveContainer" containerID="9363f944bb00bf65a18a77d105a5c3acb2935d1c6a51699593ad0beb061d83c4" Dec 11 14:12:38 crc kubenswrapper[5050]: I1211 14:12:38.552978 5050 scope.go:117] "RemoveContainer" containerID="c4d83d8bcd5be1a638da2b5e58c918cebe164f68a9a419b211a09b8c18d559ca" Dec 11 14:12:38 crc kubenswrapper[5050]: I1211 14:12:38.585484 5050 scope.go:117] "RemoveContainer" containerID="c1583bcb8328969c751cd4b4397c74eb88bd573926b6bcd1b686432e9ee9696e" Dec 11 14:12:38 crc kubenswrapper[5050]: I1211 14:12:38.633856 5050 scope.go:117] "RemoveContainer" containerID="0dcfe8c85171116ddfd570f8fd726877506b5485bc64bfae0b8fa6e75c5ea7d8" Dec 11 14:12:38 crc kubenswrapper[5050]: I1211 14:12:38.668803 5050 scope.go:117] "RemoveContainer" containerID="b030d9ea1d520c633a941cacbfc01b8167a3e4ea9d95d099c782876dc0ce6862" Dec 11 14:12:38 crc kubenswrapper[5050]: I1211 14:12:38.701568 5050 scope.go:117] "RemoveContainer" containerID="3f6ab43d7c44f6f5b8c73954b9c98393b51e2f88daf2fc69efb6768d87c72dd3" Dec 11 14:12:38 crc kubenswrapper[5050]: I1211 14:12:38.724549 5050 scope.go:117] "RemoveContainer" containerID="2a9d3c07c9884ff572de5d859d886e5e90497f2bc5adb397f3f64151ee6e7fd3" Dec 11 14:12:38 crc kubenswrapper[5050]: I1211 14:12:38.749509 5050 scope.go:117] "RemoveContainer" containerID="0815f59a5ea8b9a1ce5a7cc867a781d8ee6b9ccda7be00873eebb4be9026b907" Dec 11 14:12:40 crc kubenswrapper[5050]: I1211 14:12:40.221924 5050 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","pod0ca28ba4-2b37-4836-9d51-8dea84046163"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort pod0ca28ba4-2b37-4836-9d51-8dea84046163] : Timed out while waiting for systemd to remove kubepods-besteffort-pod0ca28ba4_2b37_4836_9d51_8dea84046163.slice" Dec 11 14:12:40 crc kubenswrapper[5050]: I1211 14:12:40.796376 5050 patch_prober.go:28] interesting pod/machine-config-daemon-wcb2s container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 14:12:40 crc kubenswrapper[5050]: I1211 14:12:40.796494 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 14:12:44 crc kubenswrapper[5050]: I1211 14:12:44.361670 5050 
pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","pod213cfec6-ba42-4dbc-bd9c-051b193e4577"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort pod213cfec6-ba42-4dbc-bd9c-051b193e4577] : Timed out while waiting for systemd to remove kubepods-besteffort-pod213cfec6_ba42_4dbc_bd9c_051b193e4577.slice" Dec 11 14:12:44 crc kubenswrapper[5050]: E1211 14:12:44.362236 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to delete cgroup paths for [kubepods besteffort pod213cfec6-ba42-4dbc-bd9c-051b193e4577] : unable to destroy cgroup paths for cgroup [kubepods besteffort pod213cfec6-ba42-4dbc-bd9c-051b193e4577] : Timed out while waiting for systemd to remove kubepods-besteffort-pod213cfec6_ba42_4dbc_bd9c_051b193e4577.slice" pod="openstack/glance-default-external-api-0" podUID="213cfec6-ba42-4dbc-bd9c-051b193e4577" Dec 11 14:12:44 crc kubenswrapper[5050]: I1211 14:12:44.570388 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 11 14:12:44 crc kubenswrapper[5050]: I1211 14:12:44.592676 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 11 14:12:44 crc kubenswrapper[5050]: I1211 14:12:44.597991 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 11 14:12:45 crc kubenswrapper[5050]: I1211 14:12:45.557941 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="213cfec6-ba42-4dbc-bd9c-051b193e4577" path="/var/lib/kubelet/pods/213cfec6-ba42-4dbc-bd9c-051b193e4577/volumes" Dec 11 14:12:46 crc kubenswrapper[5050]: I1211 14:12:46.079754 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-ph492"] Dec 11 14:12:46 crc kubenswrapper[5050]: E1211 14:12:46.080175 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cffff412-bf3c-4739-8bb8-3d099c8c83fe" containerName="proxy-httpd" Dec 11 14:12:46 crc kubenswrapper[5050]: I1211 14:12:46.080198 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="cffff412-bf3c-4739-8bb8-3d099c8c83fe" containerName="proxy-httpd" Dec 11 14:12:46 crc kubenswrapper[5050]: E1211 14:12:46.080220 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ef32727-7bbd-4a50-8292-4740b34107cc" containerName="sg-core" Dec 11 14:12:46 crc kubenswrapper[5050]: I1211 14:12:46.080231 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ef32727-7bbd-4a50-8292-4740b34107cc" containerName="sg-core" Dec 11 14:12:46 crc kubenswrapper[5050]: E1211 14:12:46.080247 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66cb4589-6296-417b-87eb-4bcbff7bf580" containerName="mariadb-account-delete" Dec 11 14:12:46 crc kubenswrapper[5050]: I1211 14:12:46.080255 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="66cb4589-6296-417b-87eb-4bcbff7bf580" containerName="mariadb-account-delete" Dec 11 14:12:46 crc kubenswrapper[5050]: E1211 14:12:46.080266 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e365d825-a3cb-42a3-8a00-8a9be42ed290" containerName="mariadb-account-delete" Dec 11 14:12:46 crc kubenswrapper[5050]: I1211 14:12:46.080274 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="e365d825-a3cb-42a3-8a00-8a9be42ed290" containerName="mariadb-account-delete" Dec 11 14:12:46 crc kubenswrapper[5050]: E1211 14:12:46.080284 5050 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="a5dabf50-534b-45cb-87db-45373930fe82" containerName="rsync" Dec 11 14:12:46 crc kubenswrapper[5050]: I1211 14:12:46.080291 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5dabf50-534b-45cb-87db-45373930fe82" containerName="rsync" Dec 11 14:12:46 crc kubenswrapper[5050]: E1211 14:12:46.080298 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="569cb143-086a-42f1-9e8c-6f6f614c9ee2" containerName="barbican-keystone-listener" Dec 11 14:12:46 crc kubenswrapper[5050]: I1211 14:12:46.080307 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="569cb143-086a-42f1-9e8c-6f6f614c9ee2" containerName="barbican-keystone-listener" Dec 11 14:12:46 crc kubenswrapper[5050]: E1211 14:12:46.080318 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55b7e535-46f6-403b-9cdf-bf172dba97b6" containerName="galera" Dec 11 14:12:46 crc kubenswrapper[5050]: I1211 14:12:46.080325 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="55b7e535-46f6-403b-9cdf-bf172dba97b6" containerName="galera" Dec 11 14:12:46 crc kubenswrapper[5050]: E1211 14:12:46.080336 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5dabf50-534b-45cb-87db-45373930fe82" containerName="container-auditor" Dec 11 14:12:46 crc kubenswrapper[5050]: I1211 14:12:46.080343 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5dabf50-534b-45cb-87db-45373930fe82" containerName="container-auditor" Dec 11 14:12:46 crc kubenswrapper[5050]: E1211 14:12:46.080408 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="934cae9e-c75b-434d-b1e1-d566d6fb8b7d" containerName="barbican-worker-log" Dec 11 14:12:46 crc kubenswrapper[5050]: I1211 14:12:46.080418 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="934cae9e-c75b-434d-b1e1-d566d6fb8b7d" containerName="barbican-worker-log" Dec 11 14:12:46 crc kubenswrapper[5050]: E1211 14:12:46.080429 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88b4966d-124b-4cf4-b52b-704955059220" containerName="ovs-vswitchd" Dec 11 14:12:46 crc kubenswrapper[5050]: I1211 14:12:46.080435 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="88b4966d-124b-4cf4-b52b-704955059220" containerName="ovs-vswitchd" Dec 11 14:12:46 crc kubenswrapper[5050]: E1211 14:12:46.080448 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0891f075-8101-475b-b844-e7cb42a4990b" containerName="setup-container" Dec 11 14:12:46 crc kubenswrapper[5050]: I1211 14:12:46.080456 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="0891f075-8101-475b-b844-e7cb42a4990b" containerName="setup-container" Dec 11 14:12:46 crc kubenswrapper[5050]: E1211 14:12:46.080469 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc2e956f-6026-4a75-b11a-5106aad626a5" containerName="mariadb-account-delete" Dec 11 14:12:46 crc kubenswrapper[5050]: I1211 14:12:46.080476 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc2e956f-6026-4a75-b11a-5106aad626a5" containerName="mariadb-account-delete" Dec 11 14:12:46 crc kubenswrapper[5050]: E1211 14:12:46.080493 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01fa4d89-aae5-451a-8798-2700053fe3d4" containerName="ovn-controller" Dec 11 14:12:46 crc kubenswrapper[5050]: I1211 14:12:46.080502 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="01fa4d89-aae5-451a-8798-2700053fe3d4" containerName="ovn-controller" Dec 11 14:12:46 crc kubenswrapper[5050]: E1211 14:12:46.080521 5050 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="213cfec6-ba42-4dbc-bd9c-051b193e4577" containerName="glance-httpd" Dec 11 14:12:46 crc kubenswrapper[5050]: I1211 14:12:46.080528 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="213cfec6-ba42-4dbc-bd9c-051b193e4577" containerName="glance-httpd" Dec 11 14:12:46 crc kubenswrapper[5050]: E1211 14:12:46.080537 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c443a35b-44e5-495f-b23b-75ff35319194" containerName="init" Dec 11 14:12:46 crc kubenswrapper[5050]: I1211 14:12:46.080545 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="c443a35b-44e5-495f-b23b-75ff35319194" containerName="init" Dec 11 14:12:46 crc kubenswrapper[5050]: E1211 14:12:46.080557 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="569cb143-086a-42f1-9e8c-6f6f614c9ee2" containerName="barbican-keystone-listener-log" Dec 11 14:12:46 crc kubenswrapper[5050]: I1211 14:12:46.080564 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="569cb143-086a-42f1-9e8c-6f6f614c9ee2" containerName="barbican-keystone-listener-log" Dec 11 14:12:46 crc kubenswrapper[5050]: E1211 14:12:46.080574 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c928931c-d49d-41dc-9181-11d856ed3bd0" containerName="openstack-network-exporter" Dec 11 14:12:46 crc kubenswrapper[5050]: I1211 14:12:46.080581 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="c928931c-d49d-41dc-9181-11d856ed3bd0" containerName="openstack-network-exporter" Dec 11 14:12:46 crc kubenswrapper[5050]: E1211 14:12:46.080595 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="934cae9e-c75b-434d-b1e1-d566d6fb8b7d" containerName="barbican-worker" Dec 11 14:12:46 crc kubenswrapper[5050]: I1211 14:12:46.080602 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="934cae9e-c75b-434d-b1e1-d566d6fb8b7d" containerName="barbican-worker" Dec 11 14:12:46 crc kubenswrapper[5050]: E1211 14:12:46.080617 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c13a1ff-0952-40b8-9157-3f1ba8b232c0" containerName="nova-cell0-conductor-conductor" Dec 11 14:12:46 crc kubenswrapper[5050]: I1211 14:12:46.080624 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c13a1ff-0952-40b8-9157-3f1ba8b232c0" containerName="nova-cell0-conductor-conductor" Dec 11 14:12:46 crc kubenswrapper[5050]: E1211 14:12:46.080634 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ca28ba4-2b37-4836-9d51-8dea84046163" containerName="nova-cell1-novncproxy-novncproxy" Dec 11 14:12:46 crc kubenswrapper[5050]: I1211 14:12:46.080644 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ca28ba4-2b37-4836-9d51-8dea84046163" containerName="nova-cell1-novncproxy-novncproxy" Dec 11 14:12:46 crc kubenswrapper[5050]: E1211 14:12:46.080653 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87937f27-2525-4fed-88bb-38a90404860c" containerName="nova-scheduler-scheduler" Dec 11 14:12:46 crc kubenswrapper[5050]: I1211 14:12:46.080659 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="87937f27-2525-4fed-88bb-38a90404860c" containerName="nova-scheduler-scheduler" Dec 11 14:12:46 crc kubenswrapper[5050]: E1211 14:12:46.080670 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca005c2d-f7de-486a-bbd6-a32443582833" containerName="mariadb-account-delete" Dec 11 14:12:46 crc kubenswrapper[5050]: I1211 14:12:46.080676 5050 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="ca005c2d-f7de-486a-bbd6-a32443582833" containerName="mariadb-account-delete" Dec 11 14:12:46 crc kubenswrapper[5050]: E1211 14:12:46.080684 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d917f471-6630-4e96-a0e4-cbde631da4a8" containerName="mariadb-account-delete" Dec 11 14:12:46 crc kubenswrapper[5050]: I1211 14:12:46.080691 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="d917f471-6630-4e96-a0e4-cbde631da4a8" containerName="mariadb-account-delete" Dec 11 14:12:46 crc kubenswrapper[5050]: E1211 14:12:46.080703 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5dabf50-534b-45cb-87db-45373930fe82" containerName="account-server" Dec 11 14:12:46 crc kubenswrapper[5050]: I1211 14:12:46.080711 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5dabf50-534b-45cb-87db-45373930fe82" containerName="account-server" Dec 11 14:12:46 crc kubenswrapper[5050]: E1211 14:12:46.080724 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e96be66c-07f2-47c0-a784-6af473c8a2a8" containerName="ovsdbserver-nb" Dec 11 14:12:46 crc kubenswrapper[5050]: I1211 14:12:46.080731 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="e96be66c-07f2-47c0-a784-6af473c8a2a8" containerName="ovsdbserver-nb" Dec 11 14:12:46 crc kubenswrapper[5050]: E1211 14:12:46.080741 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e96be66c-07f2-47c0-a784-6af473c8a2a8" containerName="openstack-network-exporter" Dec 11 14:12:46 crc kubenswrapper[5050]: I1211 14:12:46.080747 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="e96be66c-07f2-47c0-a784-6af473c8a2a8" containerName="openstack-network-exporter" Dec 11 14:12:46 crc kubenswrapper[5050]: E1211 14:12:46.080754 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0891f075-8101-475b-b844-e7cb42a4990b" containerName="rabbitmq" Dec 11 14:12:46 crc kubenswrapper[5050]: I1211 14:12:46.080759 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="0891f075-8101-475b-b844-e7cb42a4990b" containerName="rabbitmq" Dec 11 14:12:46 crc kubenswrapper[5050]: E1211 14:12:46.080766 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5dabf50-534b-45cb-87db-45373930fe82" containerName="container-updater" Dec 11 14:12:46 crc kubenswrapper[5050]: I1211 14:12:46.080772 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5dabf50-534b-45cb-87db-45373930fe82" containerName="container-updater" Dec 11 14:12:46 crc kubenswrapper[5050]: E1211 14:12:46.080780 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5dabf50-534b-45cb-87db-45373930fe82" containerName="object-replicator" Dec 11 14:12:46 crc kubenswrapper[5050]: I1211 14:12:46.080786 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5dabf50-534b-45cb-87db-45373930fe82" containerName="object-replicator" Dec 11 14:12:46 crc kubenswrapper[5050]: E1211 14:12:46.080796 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55b7e535-46f6-403b-9cdf-bf172dba97b6" containerName="mysql-bootstrap" Dec 11 14:12:46 crc kubenswrapper[5050]: I1211 14:12:46.080802 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="55b7e535-46f6-403b-9cdf-bf172dba97b6" containerName="mysql-bootstrap" Dec 11 14:12:46 crc kubenswrapper[5050]: E1211 14:12:46.080808 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c928931c-d49d-41dc-9181-11d856ed3bd0" containerName="ovsdbserver-sb" Dec 11 14:12:46 crc kubenswrapper[5050]: I1211 14:12:46.080814 
5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="c928931c-d49d-41dc-9181-11d856ed3bd0" containerName="ovsdbserver-sb" Dec 11 14:12:46 crc kubenswrapper[5050]: E1211 14:12:46.080824 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8b3d8cd-9278-4639-86fe-1aa7696fecca" containerName="mysql-bootstrap" Dec 11 14:12:46 crc kubenswrapper[5050]: I1211 14:12:46.080829 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8b3d8cd-9278-4639-86fe-1aa7696fecca" containerName="mysql-bootstrap" Dec 11 14:12:46 crc kubenswrapper[5050]: E1211 14:12:46.080838 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee634ad2-5f9a-4183-bddc-d076b6456276" containerName="neutron-httpd" Dec 11 14:12:46 crc kubenswrapper[5050]: I1211 14:12:46.080845 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee634ad2-5f9a-4183-bddc-d076b6456276" containerName="neutron-httpd" Dec 11 14:12:46 crc kubenswrapper[5050]: E1211 14:12:46.080852 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6817c570-f6ff-4b08-825a-027a9c8630b0" containerName="kube-state-metrics" Dec 11 14:12:46 crc kubenswrapper[5050]: I1211 14:12:46.080859 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="6817c570-f6ff-4b08-825a-027a9c8630b0" containerName="kube-state-metrics" Dec 11 14:12:46 crc kubenswrapper[5050]: E1211 14:12:46.080865 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ef32727-7bbd-4a50-8292-4740b34107cc" containerName="proxy-httpd" Dec 11 14:12:46 crc kubenswrapper[5050]: I1211 14:12:46.080870 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ef32727-7bbd-4a50-8292-4740b34107cc" containerName="proxy-httpd" Dec 11 14:12:46 crc kubenswrapper[5050]: E1211 14:12:46.080880 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cffff412-bf3c-4739-8bb8-3d099c8c83fe" containerName="proxy-server" Dec 11 14:12:46 crc kubenswrapper[5050]: I1211 14:12:46.080886 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="cffff412-bf3c-4739-8bb8-3d099c8c83fe" containerName="proxy-server" Dec 11 14:12:46 crc kubenswrapper[5050]: E1211 14:12:46.080894 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38b1a06e-804a-44dc-8e77-a7d8162f38bd" containerName="glance-log" Dec 11 14:12:46 crc kubenswrapper[5050]: I1211 14:12:46.080900 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="38b1a06e-804a-44dc-8e77-a7d8162f38bd" containerName="glance-log" Dec 11 14:12:46 crc kubenswrapper[5050]: E1211 14:12:46.080907 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ef32727-7bbd-4a50-8292-4740b34107cc" containerName="ceilometer-notification-agent" Dec 11 14:12:46 crc kubenswrapper[5050]: I1211 14:12:46.080914 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ef32727-7bbd-4a50-8292-4740b34107cc" containerName="ceilometer-notification-agent" Dec 11 14:12:46 crc kubenswrapper[5050]: E1211 14:12:46.080924 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d917f471-6630-4e96-a0e4-cbde631da4a8" containerName="mariadb-account-delete" Dec 11 14:12:46 crc kubenswrapper[5050]: I1211 14:12:46.080931 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="d917f471-6630-4e96-a0e4-cbde631da4a8" containerName="mariadb-account-delete" Dec 11 14:12:46 crc kubenswrapper[5050]: E1211 14:12:46.080941 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc8efd61-e4fb-4ec0-834a-b495797039a1" containerName="mariadb-account-delete" Dec 11 14:12:46 
crc kubenswrapper[5050]: I1211 14:12:46.080947 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc8efd61-e4fb-4ec0-834a-b495797039a1" containerName="mariadb-account-delete" Dec 11 14:12:46 crc kubenswrapper[5050]: E1211 14:12:46.080954 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ef32727-7bbd-4a50-8292-4740b34107cc" containerName="ceilometer-central-agent" Dec 11 14:12:46 crc kubenswrapper[5050]: I1211 14:12:46.080960 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ef32727-7bbd-4a50-8292-4740b34107cc" containerName="ceilometer-central-agent" Dec 11 14:12:46 crc kubenswrapper[5050]: E1211 14:12:46.080969 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5dabf50-534b-45cb-87db-45373930fe82" containerName="account-auditor" Dec 11 14:12:46 crc kubenswrapper[5050]: I1211 14:12:46.080975 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5dabf50-534b-45cb-87db-45373930fe82" containerName="account-auditor" Dec 11 14:12:46 crc kubenswrapper[5050]: E1211 14:12:46.080983 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea5ac39a-6fb6-42bb-8ffd-e0036e93a1d7" containerName="nova-cell1-conductor-conductor" Dec 11 14:12:46 crc kubenswrapper[5050]: I1211 14:12:46.080990 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea5ac39a-6fb6-42bb-8ffd-e0036e93a1d7" containerName="nova-cell1-conductor-conductor" Dec 11 14:12:46 crc kubenswrapper[5050]: E1211 14:12:46.080999 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="003b423c-92a0-47f6-8358-003f3ad24ded" containerName="cinder-api-log" Dec 11 14:12:46 crc kubenswrapper[5050]: I1211 14:12:46.081005 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="003b423c-92a0-47f6-8358-003f3ad24ded" containerName="cinder-api-log" Dec 11 14:12:46 crc kubenswrapper[5050]: E1211 14:12:46.081032 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88b4966d-124b-4cf4-b52b-704955059220" containerName="ovsdb-server" Dec 11 14:12:46 crc kubenswrapper[5050]: I1211 14:12:46.081039 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="88b4966d-124b-4cf4-b52b-704955059220" containerName="ovsdb-server" Dec 11 14:12:46 crc kubenswrapper[5050]: E1211 14:12:46.081047 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3386ffea-45ca-41e8-9aa5-61a2923a3394" containerName="nova-api-log" Dec 11 14:12:46 crc kubenswrapper[5050]: I1211 14:12:46.081053 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="3386ffea-45ca-41e8-9aa5-61a2923a3394" containerName="nova-api-log" Dec 11 14:12:46 crc kubenswrapper[5050]: E1211 14:12:46.081062 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5dabf50-534b-45cb-87db-45373930fe82" containerName="container-server" Dec 11 14:12:46 crc kubenswrapper[5050]: I1211 14:12:46.081067 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5dabf50-534b-45cb-87db-45373930fe82" containerName="container-server" Dec 11 14:12:46 crc kubenswrapper[5050]: E1211 14:12:46.081076 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29a26d59-027f-428e-928e-12222b61a350" containerName="probe" Dec 11 14:12:46 crc kubenswrapper[5050]: I1211 14:12:46.081083 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="29a26d59-027f-428e-928e-12222b61a350" containerName="probe" Dec 11 14:12:46 crc kubenswrapper[5050]: E1211 14:12:46.081092 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="458f05be-2fd6-44d9-8034-f077356964ce" 
containerName="rabbitmq" Dec 11 14:12:46 crc kubenswrapper[5050]: I1211 14:12:46.081099 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="458f05be-2fd6-44d9-8034-f077356964ce" containerName="rabbitmq" Dec 11 14:12:46 crc kubenswrapper[5050]: E1211 14:12:46.081109 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3386ffea-45ca-41e8-9aa5-61a2923a3394" containerName="nova-api-api" Dec 11 14:12:46 crc kubenswrapper[5050]: I1211 14:12:46.081116 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="3386ffea-45ca-41e8-9aa5-61a2923a3394" containerName="nova-api-api" Dec 11 14:12:46 crc kubenswrapper[5050]: E1211 14:12:46.081127 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d484c84-7333-4701-a4f3-655c3d2cbfa7" containerName="mariadb-account-delete" Dec 11 14:12:46 crc kubenswrapper[5050]: I1211 14:12:46.081133 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d484c84-7333-4701-a4f3-655c3d2cbfa7" containerName="mariadb-account-delete" Dec 11 14:12:46 crc kubenswrapper[5050]: E1211 14:12:46.081141 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4de557a0-8b74-4d40-8c91-351ba127eb13" containerName="placement-api" Dec 11 14:12:46 crc kubenswrapper[5050]: I1211 14:12:46.081146 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="4de557a0-8b74-4d40-8c91-351ba127eb13" containerName="placement-api" Dec 11 14:12:46 crc kubenswrapper[5050]: E1211 14:12:46.081154 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5371a32d-3998-4ddc-93d6-27e9afdb9712" containerName="openstack-network-exporter" Dec 11 14:12:46 crc kubenswrapper[5050]: I1211 14:12:46.081159 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="5371a32d-3998-4ddc-93d6-27e9afdb9712" containerName="openstack-network-exporter" Dec 11 14:12:46 crc kubenswrapper[5050]: E1211 14:12:46.081165 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5dabf50-534b-45cb-87db-45373930fe82" containerName="object-expirer" Dec 11 14:12:46 crc kubenswrapper[5050]: I1211 14:12:46.081170 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5dabf50-534b-45cb-87db-45373930fe82" containerName="object-expirer" Dec 11 14:12:46 crc kubenswrapper[5050]: E1211 14:12:46.081178 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36acbdf3-346e-4207-8391-b2a03ef839e5" containerName="keystone-api" Dec 11 14:12:46 crc kubenswrapper[5050]: I1211 14:12:46.081187 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="36acbdf3-346e-4207-8391-b2a03ef839e5" containerName="keystone-api" Dec 11 14:12:46 crc kubenswrapper[5050]: E1211 14:12:46.081198 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e5dbd8a-7796-49d6-b2e3-23f15d35b7f8" containerName="registry-server" Dec 11 14:12:46 crc kubenswrapper[5050]: I1211 14:12:46.081205 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e5dbd8a-7796-49d6-b2e3-23f15d35b7f8" containerName="registry-server" Dec 11 14:12:46 crc kubenswrapper[5050]: E1211 14:12:46.081218 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5dabf50-534b-45cb-87db-45373930fe82" containerName="object-auditor" Dec 11 14:12:46 crc kubenswrapper[5050]: I1211 14:12:46.081226 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5dabf50-534b-45cb-87db-45373930fe82" containerName="object-auditor" Dec 11 14:12:46 crc kubenswrapper[5050]: E1211 14:12:46.081235 5050 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="1eb418aa-1d3c-469c-8ff4-2b3c86a71e97" containerName="nova-metadata-metadata" Dec 11 14:12:46 crc kubenswrapper[5050]: I1211 14:12:46.081242 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="1eb418aa-1d3c-469c-8ff4-2b3c86a71e97" containerName="nova-metadata-metadata" Dec 11 14:12:46 crc kubenswrapper[5050]: E1211 14:12:46.081253 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d484c84-7333-4701-a4f3-655c3d2cbfa7" containerName="mariadb-account-delete" Dec 11 14:12:46 crc kubenswrapper[5050]: I1211 14:12:46.081260 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d484c84-7333-4701-a4f3-655c3d2cbfa7" containerName="mariadb-account-delete" Dec 11 14:12:46 crc kubenswrapper[5050]: E1211 14:12:46.081272 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5dabf50-534b-45cb-87db-45373930fe82" containerName="account-reaper" Dec 11 14:12:46 crc kubenswrapper[5050]: I1211 14:12:46.081279 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5dabf50-534b-45cb-87db-45373930fe82" containerName="account-reaper" Dec 11 14:12:46 crc kubenswrapper[5050]: E1211 14:12:46.081290 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88b4966d-124b-4cf4-b52b-704955059220" containerName="ovsdb-server-init" Dec 11 14:12:46 crc kubenswrapper[5050]: I1211 14:12:46.081297 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="88b4966d-124b-4cf4-b52b-704955059220" containerName="ovsdb-server-init" Dec 11 14:12:46 crc kubenswrapper[5050]: E1211 14:12:46.081305 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5dabf50-534b-45cb-87db-45373930fe82" containerName="object-server" Dec 11 14:12:46 crc kubenswrapper[5050]: I1211 14:12:46.081311 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5dabf50-534b-45cb-87db-45373930fe82" containerName="object-server" Dec 11 14:12:46 crc kubenswrapper[5050]: E1211 14:12:46.081323 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="defedffb-9310-4b18-b7ee-b54040aa5447" containerName="memcached" Dec 11 14:12:46 crc kubenswrapper[5050]: I1211 14:12:46.081329 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="defedffb-9310-4b18-b7ee-b54040aa5447" containerName="memcached" Dec 11 14:12:46 crc kubenswrapper[5050]: E1211 14:12:46.081343 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee634ad2-5f9a-4183-bddc-d076b6456276" containerName="neutron-api" Dec 11 14:12:46 crc kubenswrapper[5050]: I1211 14:12:46.081351 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee634ad2-5f9a-4183-bddc-d076b6456276" containerName="neutron-api" Dec 11 14:12:46 crc kubenswrapper[5050]: E1211 14:12:46.081362 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ad4414f-ca3e-4ff4-9e2a-3ab029df2ebf" containerName="barbican-api" Dec 11 14:12:46 crc kubenswrapper[5050]: I1211 14:12:46.081376 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ad4414f-ca3e-4ff4-9e2a-3ab029df2ebf" containerName="barbican-api" Dec 11 14:12:46 crc kubenswrapper[5050]: E1211 14:12:46.081385 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="213cfec6-ba42-4dbc-bd9c-051b193e4577" containerName="glance-log" Dec 11 14:12:46 crc kubenswrapper[5050]: I1211 14:12:46.081393 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="213cfec6-ba42-4dbc-bd9c-051b193e4577" containerName="glance-log" Dec 11 14:12:46 crc kubenswrapper[5050]: E1211 14:12:46.081405 5050 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="ca005c2d-f7de-486a-bbd6-a32443582833" containerName="mariadb-account-delete" Dec 11 14:12:46 crc kubenswrapper[5050]: I1211 14:12:46.081413 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca005c2d-f7de-486a-bbd6-a32443582833" containerName="mariadb-account-delete" Dec 11 14:12:46 crc kubenswrapper[5050]: E1211 14:12:46.081422 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5371a32d-3998-4ddc-93d6-27e9afdb9712" containerName="ovn-northd" Dec 11 14:12:46 crc kubenswrapper[5050]: I1211 14:12:46.081430 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="5371a32d-3998-4ddc-93d6-27e9afdb9712" containerName="ovn-northd" Dec 11 14:12:46 crc kubenswrapper[5050]: E1211 14:12:46.081440 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4de557a0-8b74-4d40-8c91-351ba127eb13" containerName="placement-log" Dec 11 14:12:46 crc kubenswrapper[5050]: I1211 14:12:46.081447 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="4de557a0-8b74-4d40-8c91-351ba127eb13" containerName="placement-log" Dec 11 14:12:46 crc kubenswrapper[5050]: E1211 14:12:46.081866 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="003b423c-92a0-47f6-8358-003f3ad24ded" containerName="cinder-api" Dec 11 14:12:46 crc kubenswrapper[5050]: I1211 14:12:46.081877 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="003b423c-92a0-47f6-8358-003f3ad24ded" containerName="cinder-api" Dec 11 14:12:46 crc kubenswrapper[5050]: E1211 14:12:46.081888 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="458f05be-2fd6-44d9-8034-f077356964ce" containerName="setup-container" Dec 11 14:12:46 crc kubenswrapper[5050]: I1211 14:12:46.081895 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="458f05be-2fd6-44d9-8034-f077356964ce" containerName="setup-container" Dec 11 14:12:46 crc kubenswrapper[5050]: E1211 14:12:46.081908 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ad4414f-ca3e-4ff4-9e2a-3ab029df2ebf" containerName="barbican-api-log" Dec 11 14:12:46 crc kubenswrapper[5050]: I1211 14:12:46.081915 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ad4414f-ca3e-4ff4-9e2a-3ab029df2ebf" containerName="barbican-api-log" Dec 11 14:12:46 crc kubenswrapper[5050]: E1211 14:12:46.081925 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c443a35b-44e5-495f-b23b-75ff35319194" containerName="dnsmasq-dns" Dec 11 14:12:46 crc kubenswrapper[5050]: I1211 14:12:46.081932 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="c443a35b-44e5-495f-b23b-75ff35319194" containerName="dnsmasq-dns" Dec 11 14:12:46 crc kubenswrapper[5050]: E1211 14:12:46.081944 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e5dbd8a-7796-49d6-b2e3-23f15d35b7f8" containerName="extract-utilities" Dec 11 14:12:46 crc kubenswrapper[5050]: I1211 14:12:46.081953 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e5dbd8a-7796-49d6-b2e3-23f15d35b7f8" containerName="extract-utilities" Dec 11 14:12:46 crc kubenswrapper[5050]: E1211 14:12:46.081962 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e5dbd8a-7796-49d6-b2e3-23f15d35b7f8" containerName="extract-content" Dec 11 14:12:46 crc kubenswrapper[5050]: I1211 14:12:46.081969 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e5dbd8a-7796-49d6-b2e3-23f15d35b7f8" containerName="extract-content" Dec 11 14:12:46 crc kubenswrapper[5050]: E1211 14:12:46.081981 5050 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="a5dabf50-534b-45cb-87db-45373930fe82" containerName="object-updater" Dec 11 14:12:46 crc kubenswrapper[5050]: I1211 14:12:46.081987 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5dabf50-534b-45cb-87db-45373930fe82" containerName="object-updater" Dec 11 14:12:46 crc kubenswrapper[5050]: E1211 14:12:46.081993 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29a26d59-027f-428e-928e-12222b61a350" containerName="cinder-scheduler" Dec 11 14:12:46 crc kubenswrapper[5050]: I1211 14:12:46.081999 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="29a26d59-027f-428e-928e-12222b61a350" containerName="cinder-scheduler" Dec 11 14:12:46 crc kubenswrapper[5050]: E1211 14:12:46.082062 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1eb418aa-1d3c-469c-8ff4-2b3c86a71e97" containerName="nova-metadata-log" Dec 11 14:12:46 crc kubenswrapper[5050]: I1211 14:12:46.082072 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="1eb418aa-1d3c-469c-8ff4-2b3c86a71e97" containerName="nova-metadata-log" Dec 11 14:12:46 crc kubenswrapper[5050]: E1211 14:12:46.082105 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8b3d8cd-9278-4639-86fe-1aa7696fecca" containerName="galera" Dec 11 14:12:46 crc kubenswrapper[5050]: I1211 14:12:46.082113 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8b3d8cd-9278-4639-86fe-1aa7696fecca" containerName="galera" Dec 11 14:12:46 crc kubenswrapper[5050]: E1211 14:12:46.082126 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58cdcd05-e81a-4ed4-8357-249649b17449" containerName="openstack-network-exporter" Dec 11 14:12:46 crc kubenswrapper[5050]: I1211 14:12:46.082133 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="58cdcd05-e81a-4ed4-8357-249649b17449" containerName="openstack-network-exporter" Dec 11 14:12:46 crc kubenswrapper[5050]: E1211 14:12:46.082145 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5dabf50-534b-45cb-87db-45373930fe82" containerName="container-replicator" Dec 11 14:12:46 crc kubenswrapper[5050]: I1211 14:12:46.082152 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5dabf50-534b-45cb-87db-45373930fe82" containerName="container-replicator" Dec 11 14:12:46 crc kubenswrapper[5050]: E1211 14:12:46.082162 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5dabf50-534b-45cb-87db-45373930fe82" containerName="swift-recon-cron" Dec 11 14:12:46 crc kubenswrapper[5050]: I1211 14:12:46.082169 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5dabf50-534b-45cb-87db-45373930fe82" containerName="swift-recon-cron" Dec 11 14:12:46 crc kubenswrapper[5050]: E1211 14:12:46.082183 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5dabf50-534b-45cb-87db-45373930fe82" containerName="account-replicator" Dec 11 14:12:46 crc kubenswrapper[5050]: I1211 14:12:46.082203 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5dabf50-534b-45cb-87db-45373930fe82" containerName="account-replicator" Dec 11 14:12:46 crc kubenswrapper[5050]: E1211 14:12:46.082216 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38b1a06e-804a-44dc-8e77-a7d8162f38bd" containerName="glance-httpd" Dec 11 14:12:46 crc kubenswrapper[5050]: I1211 14:12:46.082224 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="38b1a06e-804a-44dc-8e77-a7d8162f38bd" containerName="glance-httpd" Dec 11 14:12:46 crc kubenswrapper[5050]: I1211 14:12:46.082458 5050 
memory_manager.go:354] "RemoveStaleState removing state" podUID="5371a32d-3998-4ddc-93d6-27e9afdb9712" containerName="ovn-northd" Dec 11 14:12:46 crc kubenswrapper[5050]: I1211 14:12:46.082470 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="87937f27-2525-4fed-88bb-38a90404860c" containerName="nova-scheduler-scheduler" Dec 11 14:12:46 crc kubenswrapper[5050]: I1211 14:12:46.082483 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="88b4966d-124b-4cf4-b52b-704955059220" containerName="ovs-vswitchd" Dec 11 14:12:46 crc kubenswrapper[5050]: I1211 14:12:46.082495 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5dabf50-534b-45cb-87db-45373930fe82" containerName="account-reaper" Dec 11 14:12:46 crc kubenswrapper[5050]: I1211 14:12:46.082502 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="213cfec6-ba42-4dbc-bd9c-051b193e4577" containerName="glance-httpd" Dec 11 14:12:46 crc kubenswrapper[5050]: I1211 14:12:46.082516 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5dabf50-534b-45cb-87db-45373930fe82" containerName="account-auditor" Dec 11 14:12:46 crc kubenswrapper[5050]: I1211 14:12:46.082526 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ad4414f-ca3e-4ff4-9e2a-3ab029df2ebf" containerName="barbican-api-log" Dec 11 14:12:46 crc kubenswrapper[5050]: I1211 14:12:46.082537 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5dabf50-534b-45cb-87db-45373930fe82" containerName="container-auditor" Dec 11 14:12:46 crc kubenswrapper[5050]: I1211 14:12:46.082545 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="88b4966d-124b-4cf4-b52b-704955059220" containerName="ovsdb-server" Dec 11 14:12:46 crc kubenswrapper[5050]: I1211 14:12:46.082557 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5dabf50-534b-45cb-87db-45373930fe82" containerName="swift-recon-cron" Dec 11 14:12:46 crc kubenswrapper[5050]: I1211 14:12:46.082569 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="55b7e535-46f6-403b-9cdf-bf172dba97b6" containerName="galera" Dec 11 14:12:46 crc kubenswrapper[5050]: I1211 14:12:46.082582 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="01fa4d89-aae5-451a-8798-2700053fe3d4" containerName="ovn-controller" Dec 11 14:12:46 crc kubenswrapper[5050]: I1211 14:12:46.082590 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="934cae9e-c75b-434d-b1e1-d566d6fb8b7d" containerName="barbican-worker-log" Dec 11 14:12:46 crc kubenswrapper[5050]: I1211 14:12:46.082600 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="458f05be-2fd6-44d9-8034-f077356964ce" containerName="rabbitmq" Dec 11 14:12:46 crc kubenswrapper[5050]: I1211 14:12:46.082613 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc2e956f-6026-4a75-b11a-5106aad626a5" containerName="mariadb-account-delete" Dec 11 14:12:46 crc kubenswrapper[5050]: I1211 14:12:46.082621 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="1eb418aa-1d3c-469c-8ff4-2b3c86a71e97" containerName="nova-metadata-log" Dec 11 14:12:46 crc kubenswrapper[5050]: I1211 14:12:46.082628 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="5371a32d-3998-4ddc-93d6-27e9afdb9712" containerName="openstack-network-exporter" Dec 11 14:12:46 crc kubenswrapper[5050]: I1211 14:12:46.082639 5050 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="3386ffea-45ca-41e8-9aa5-61a2923a3394" containerName="nova-api-api" Dec 11 14:12:46 crc kubenswrapper[5050]: I1211 14:12:46.082648 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8b3d8cd-9278-4639-86fe-1aa7696fecca" containerName="galera" Dec 11 14:12:46 crc kubenswrapper[5050]: I1211 14:12:46.082655 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5dabf50-534b-45cb-87db-45373930fe82" containerName="rsync" Dec 11 14:12:46 crc kubenswrapper[5050]: I1211 14:12:46.082662 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc8efd61-e4fb-4ec0-834a-b495797039a1" containerName="mariadb-account-delete" Dec 11 14:12:46 crc kubenswrapper[5050]: I1211 14:12:46.082670 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5dabf50-534b-45cb-87db-45373930fe82" containerName="account-server" Dec 11 14:12:46 crc kubenswrapper[5050]: I1211 14:12:46.082685 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5dabf50-534b-45cb-87db-45373930fe82" containerName="container-updater" Dec 11 14:12:46 crc kubenswrapper[5050]: I1211 14:12:46.082695 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5dabf50-534b-45cb-87db-45373930fe82" containerName="object-replicator" Dec 11 14:12:46 crc kubenswrapper[5050]: I1211 14:12:46.082702 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c13a1ff-0952-40b8-9157-3f1ba8b232c0" containerName="nova-cell0-conductor-conductor" Dec 11 14:12:46 crc kubenswrapper[5050]: I1211 14:12:46.082714 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e5dbd8a-7796-49d6-b2e3-23f15d35b7f8" containerName="registry-server" Dec 11 14:12:46 crc kubenswrapper[5050]: I1211 14:12:46.082724 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="e365d825-a3cb-42a3-8a00-8a9be42ed290" containerName="mariadb-account-delete" Dec 11 14:12:46 crc kubenswrapper[5050]: I1211 14:12:46.082733 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="e365d825-a3cb-42a3-8a00-8a9be42ed290" containerName="mariadb-account-delete" Dec 11 14:12:46 crc kubenswrapper[5050]: I1211 14:12:46.082744 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5dabf50-534b-45cb-87db-45373930fe82" containerName="account-replicator" Dec 11 14:12:46 crc kubenswrapper[5050]: I1211 14:12:46.082752 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="58cdcd05-e81a-4ed4-8357-249649b17449" containerName="openstack-network-exporter" Dec 11 14:12:46 crc kubenswrapper[5050]: I1211 14:12:46.082761 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee634ad2-5f9a-4183-bddc-d076b6456276" containerName="neutron-api" Dec 11 14:12:46 crc kubenswrapper[5050]: I1211 14:12:46.082768 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ef32727-7bbd-4a50-8292-4740b34107cc" containerName="proxy-httpd" Dec 11 14:12:46 crc kubenswrapper[5050]: I1211 14:12:46.082778 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="934cae9e-c75b-434d-b1e1-d566d6fb8b7d" containerName="barbican-worker" Dec 11 14:12:46 crc kubenswrapper[5050]: I1211 14:12:46.082788 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="569cb143-086a-42f1-9e8c-6f6f614c9ee2" containerName="barbican-keystone-listener" Dec 11 14:12:46 crc kubenswrapper[5050]: I1211 14:12:46.082797 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca005c2d-f7de-486a-bbd6-a32443582833" 
containerName="mariadb-account-delete" Dec 11 14:12:46 crc kubenswrapper[5050]: I1211 14:12:46.082805 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="003b423c-92a0-47f6-8358-003f3ad24ded" containerName="cinder-api-log" Dec 11 14:12:46 crc kubenswrapper[5050]: I1211 14:12:46.082815 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca005c2d-f7de-486a-bbd6-a32443582833" containerName="mariadb-account-delete" Dec 11 14:12:46 crc kubenswrapper[5050]: I1211 14:12:46.082829 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5dabf50-534b-45cb-87db-45373930fe82" containerName="container-server" Dec 11 14:12:46 crc kubenswrapper[5050]: I1211 14:12:46.082838 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ef32727-7bbd-4a50-8292-4740b34107cc" containerName="sg-core" Dec 11 14:12:46 crc kubenswrapper[5050]: I1211 14:12:46.082847 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ad4414f-ca3e-4ff4-9e2a-3ab029df2ebf" containerName="barbican-api" Dec 11 14:12:46 crc kubenswrapper[5050]: I1211 14:12:46.082857 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="29a26d59-027f-428e-928e-12222b61a350" containerName="probe" Dec 11 14:12:46 crc kubenswrapper[5050]: I1211 14:12:46.082865 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="d917f471-6630-4e96-a0e4-cbde631da4a8" containerName="mariadb-account-delete" Dec 11 14:12:46 crc kubenswrapper[5050]: I1211 14:12:46.082873 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="213cfec6-ba42-4dbc-bd9c-051b193e4577" containerName="glance-log" Dec 11 14:12:46 crc kubenswrapper[5050]: I1211 14:12:46.082884 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d484c84-7333-4701-a4f3-655c3d2cbfa7" containerName="mariadb-account-delete" Dec 11 14:12:46 crc kubenswrapper[5050]: I1211 14:12:46.082895 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5dabf50-534b-45cb-87db-45373930fe82" containerName="container-replicator" Dec 11 14:12:46 crc kubenswrapper[5050]: I1211 14:12:46.082907 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5dabf50-534b-45cb-87db-45373930fe82" containerName="object-auditor" Dec 11 14:12:46 crc kubenswrapper[5050]: I1211 14:12:46.082917 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="c928931c-d49d-41dc-9181-11d856ed3bd0" containerName="ovsdbserver-sb" Dec 11 14:12:46 crc kubenswrapper[5050]: I1211 14:12:46.082928 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="36acbdf3-346e-4207-8391-b2a03ef839e5" containerName="keystone-api" Dec 11 14:12:46 crc kubenswrapper[5050]: I1211 14:12:46.082936 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="cffff412-bf3c-4739-8bb8-3d099c8c83fe" containerName="proxy-server" Dec 11 14:12:46 crc kubenswrapper[5050]: I1211 14:12:46.082945 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc8efd61-e4fb-4ec0-834a-b495797039a1" containerName="mariadb-account-delete" Dec 11 14:12:46 crc kubenswrapper[5050]: I1211 14:12:46.082955 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="cffff412-bf3c-4739-8bb8-3d099c8c83fe" containerName="proxy-httpd" Dec 11 14:12:46 crc kubenswrapper[5050]: I1211 14:12:46.082963 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d484c84-7333-4701-a4f3-655c3d2cbfa7" containerName="mariadb-account-delete" Dec 11 14:12:46 crc kubenswrapper[5050]: I1211 14:12:46.082974 
5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ca28ba4-2b37-4836-9d51-8dea84046163" containerName="nova-cell1-novncproxy-novncproxy" Dec 11 14:12:46 crc kubenswrapper[5050]: I1211 14:12:46.082985 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5dabf50-534b-45cb-87db-45373930fe82" containerName="object-server" Dec 11 14:12:46 crc kubenswrapper[5050]: I1211 14:12:46.082994 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ef32727-7bbd-4a50-8292-4740b34107cc" containerName="ceilometer-notification-agent" Dec 11 14:12:46 crc kubenswrapper[5050]: I1211 14:12:46.083003 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="66cb4589-6296-417b-87eb-4bcbff7bf580" containerName="mariadb-account-delete" Dec 11 14:12:46 crc kubenswrapper[5050]: I1211 14:12:46.083035 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="66cb4589-6296-417b-87eb-4bcbff7bf580" containerName="mariadb-account-delete" Dec 11 14:12:46 crc kubenswrapper[5050]: I1211 14:12:46.083047 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="e96be66c-07f2-47c0-a784-6af473c8a2a8" containerName="openstack-network-exporter" Dec 11 14:12:46 crc kubenswrapper[5050]: I1211 14:12:46.083059 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="569cb143-086a-42f1-9e8c-6f6f614c9ee2" containerName="barbican-keystone-listener-log" Dec 11 14:12:46 crc kubenswrapper[5050]: I1211 14:12:46.083069 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="c443a35b-44e5-495f-b23b-75ff35319194" containerName="dnsmasq-dns" Dec 11 14:12:46 crc kubenswrapper[5050]: I1211 14:12:46.083080 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="003b423c-92a0-47f6-8358-003f3ad24ded" containerName="cinder-api" Dec 11 14:12:46 crc kubenswrapper[5050]: I1211 14:12:46.083090 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="38b1a06e-804a-44dc-8e77-a7d8162f38bd" containerName="glance-log" Dec 11 14:12:46 crc kubenswrapper[5050]: I1211 14:12:46.083099 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5dabf50-534b-45cb-87db-45373930fe82" containerName="object-expirer" Dec 11 14:12:46 crc kubenswrapper[5050]: I1211 14:12:46.083106 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee634ad2-5f9a-4183-bddc-d076b6456276" containerName="neutron-httpd" Dec 11 14:12:46 crc kubenswrapper[5050]: I1211 14:12:46.083116 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="d917f471-6630-4e96-a0e4-cbde631da4a8" containerName="mariadb-account-delete" Dec 11 14:12:46 crc kubenswrapper[5050]: I1211 14:12:46.083125 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="4de557a0-8b74-4d40-8c91-351ba127eb13" containerName="placement-log" Dec 11 14:12:46 crc kubenswrapper[5050]: I1211 14:12:46.083137 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea5ac39a-6fb6-42bb-8ffd-e0036e93a1d7" containerName="nova-cell1-conductor-conductor" Dec 11 14:12:46 crc kubenswrapper[5050]: I1211 14:12:46.083148 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="1eb418aa-1d3c-469c-8ff4-2b3c86a71e97" containerName="nova-metadata-metadata" Dec 11 14:12:46 crc kubenswrapper[5050]: I1211 14:12:46.083159 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="3386ffea-45ca-41e8-9aa5-61a2923a3394" containerName="nova-api-log" Dec 11 14:12:46 crc kubenswrapper[5050]: I1211 14:12:46.083170 5050 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="e96be66c-07f2-47c0-a784-6af473c8a2a8" containerName="ovsdbserver-nb" Dec 11 14:12:46 crc kubenswrapper[5050]: I1211 14:12:46.083178 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ef32727-7bbd-4a50-8292-4740b34107cc" containerName="ceilometer-central-agent" Dec 11 14:12:46 crc kubenswrapper[5050]: I1211 14:12:46.083188 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="defedffb-9310-4b18-b7ee-b54040aa5447" containerName="memcached" Dec 11 14:12:46 crc kubenswrapper[5050]: I1211 14:12:46.083199 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="6817c570-f6ff-4b08-825a-027a9c8630b0" containerName="kube-state-metrics" Dec 11 14:12:46 crc kubenswrapper[5050]: I1211 14:12:46.083208 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="38b1a06e-804a-44dc-8e77-a7d8162f38bd" containerName="glance-httpd" Dec 11 14:12:46 crc kubenswrapper[5050]: I1211 14:12:46.083218 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="c928931c-d49d-41dc-9181-11d856ed3bd0" containerName="openstack-network-exporter" Dec 11 14:12:46 crc kubenswrapper[5050]: I1211 14:12:46.083226 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="0891f075-8101-475b-b844-e7cb42a4990b" containerName="rabbitmq" Dec 11 14:12:46 crc kubenswrapper[5050]: I1211 14:12:46.083235 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="29a26d59-027f-428e-928e-12222b61a350" containerName="cinder-scheduler" Dec 11 14:12:46 crc kubenswrapper[5050]: I1211 14:12:46.083246 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="4de557a0-8b74-4d40-8c91-351ba127eb13" containerName="placement-api" Dec 11 14:12:46 crc kubenswrapper[5050]: I1211 14:12:46.083257 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5dabf50-534b-45cb-87db-45373930fe82" containerName="object-updater" Dec 11 14:12:46 crc kubenswrapper[5050]: E1211 14:12:46.083433 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc8efd61-e4fb-4ec0-834a-b495797039a1" containerName="mariadb-account-delete" Dec 11 14:12:46 crc kubenswrapper[5050]: I1211 14:12:46.083443 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc8efd61-e4fb-4ec0-834a-b495797039a1" containerName="mariadb-account-delete" Dec 11 14:12:46 crc kubenswrapper[5050]: E1211 14:12:46.083466 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66cb4589-6296-417b-87eb-4bcbff7bf580" containerName="mariadb-account-delete" Dec 11 14:12:46 crc kubenswrapper[5050]: I1211 14:12:46.083475 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="66cb4589-6296-417b-87eb-4bcbff7bf580" containerName="mariadb-account-delete" Dec 11 14:12:46 crc kubenswrapper[5050]: E1211 14:12:46.083486 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e365d825-a3cb-42a3-8a00-8a9be42ed290" containerName="mariadb-account-delete" Dec 11 14:12:46 crc kubenswrapper[5050]: I1211 14:12:46.083495 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="e365d825-a3cb-42a3-8a00-8a9be42ed290" containerName="mariadb-account-delete" Dec 11 14:12:46 crc kubenswrapper[5050]: I1211 14:12:46.084669 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ph492" Dec 11 14:12:46 crc kubenswrapper[5050]: I1211 14:12:46.095917 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ph492"] Dec 11 14:12:46 crc kubenswrapper[5050]: I1211 14:12:46.223122 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/173d34cc-03fb-4f77-b375-53a00825480f-catalog-content\") pod \"community-operators-ph492\" (UID: \"173d34cc-03fb-4f77-b375-53a00825480f\") " pod="openshift-marketplace/community-operators-ph492" Dec 11 14:12:46 crc kubenswrapper[5050]: I1211 14:12:46.223434 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/173d34cc-03fb-4f77-b375-53a00825480f-utilities\") pod \"community-operators-ph492\" (UID: \"173d34cc-03fb-4f77-b375-53a00825480f\") " pod="openshift-marketplace/community-operators-ph492" Dec 11 14:12:46 crc kubenswrapper[5050]: I1211 14:12:46.223494 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7gxcm\" (UniqueName: \"kubernetes.io/projected/173d34cc-03fb-4f77-b375-53a00825480f-kube-api-access-7gxcm\") pod \"community-operators-ph492\" (UID: \"173d34cc-03fb-4f77-b375-53a00825480f\") " pod="openshift-marketplace/community-operators-ph492" Dec 11 14:12:46 crc kubenswrapper[5050]: I1211 14:12:46.324956 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7gxcm\" (UniqueName: \"kubernetes.io/projected/173d34cc-03fb-4f77-b375-53a00825480f-kube-api-access-7gxcm\") pod \"community-operators-ph492\" (UID: \"173d34cc-03fb-4f77-b375-53a00825480f\") " pod="openshift-marketplace/community-operators-ph492" Dec 11 14:12:46 crc kubenswrapper[5050]: I1211 14:12:46.325077 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/173d34cc-03fb-4f77-b375-53a00825480f-catalog-content\") pod \"community-operators-ph492\" (UID: \"173d34cc-03fb-4f77-b375-53a00825480f\") " pod="openshift-marketplace/community-operators-ph492" Dec 11 14:12:46 crc kubenswrapper[5050]: I1211 14:12:46.325116 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/173d34cc-03fb-4f77-b375-53a00825480f-utilities\") pod \"community-operators-ph492\" (UID: \"173d34cc-03fb-4f77-b375-53a00825480f\") " pod="openshift-marketplace/community-operators-ph492" Dec 11 14:12:46 crc kubenswrapper[5050]: I1211 14:12:46.325675 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/173d34cc-03fb-4f77-b375-53a00825480f-catalog-content\") pod \"community-operators-ph492\" (UID: \"173d34cc-03fb-4f77-b375-53a00825480f\") " pod="openshift-marketplace/community-operators-ph492" Dec 11 14:12:46 crc kubenswrapper[5050]: I1211 14:12:46.325792 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/173d34cc-03fb-4f77-b375-53a00825480f-utilities\") pod \"community-operators-ph492\" (UID: \"173d34cc-03fb-4f77-b375-53a00825480f\") " pod="openshift-marketplace/community-operators-ph492" Dec 11 14:12:46 crc kubenswrapper[5050]: I1211 14:12:46.354500 5050 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-7gxcm\" (UniqueName: \"kubernetes.io/projected/173d34cc-03fb-4f77-b375-53a00825480f-kube-api-access-7gxcm\") pod \"community-operators-ph492\" (UID: \"173d34cc-03fb-4f77-b375-53a00825480f\") " pod="openshift-marketplace/community-operators-ph492" Dec 11 14:12:46 crc kubenswrapper[5050]: I1211 14:12:46.450333 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ph492" Dec 11 14:12:46 crc kubenswrapper[5050]: I1211 14:12:46.726859 5050 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","burstable","pod458f05be-2fd6-44d9-8034-f077356964ce"] err="unable to destroy cgroup paths for cgroup [kubepods burstable pod458f05be-2fd6-44d9-8034-f077356964ce] : Timed out while waiting for systemd to remove kubepods-burstable-pod458f05be_2fd6_44d9_8034_f077356964ce.slice" Dec 11 14:12:46 crc kubenswrapper[5050]: E1211 14:12:46.727228 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to delete cgroup paths for [kubepods burstable pod458f05be-2fd6-44d9-8034-f077356964ce] : unable to destroy cgroup paths for cgroup [kubepods burstable pod458f05be-2fd6-44d9-8034-f077356964ce] : Timed out while waiting for systemd to remove kubepods-burstable-pod458f05be_2fd6_44d9_8034_f077356964ce.slice" pod="openstack/rabbitmq-cell1-server-0" podUID="458f05be-2fd6-44d9-8034-f077356964ce" Dec 11 14:12:46 crc kubenswrapper[5050]: I1211 14:12:46.929348 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ph492"] Dec 11 14:12:46 crc kubenswrapper[5050]: W1211 14:12:46.937120 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod173d34cc_03fb_4f77_b375_53a00825480f.slice/crio-0fe00107c55dfd60e02e58a24c15d874b0f0e36a79a910f50414a0aaa4639f79 WatchSource:0}: Error finding container 0fe00107c55dfd60e02e58a24c15d874b0f0e36a79a910f50414a0aaa4639f79: Status 404 returned error can't find the container with id 0fe00107c55dfd60e02e58a24c15d874b0f0e36a79a910f50414a0aaa4639f79 Dec 11 14:12:47 crc kubenswrapper[5050]: I1211 14:12:47.603137 5050 generic.go:334] "Generic (PLEG): container finished" podID="173d34cc-03fb-4f77-b375-53a00825480f" containerID="874f146b79acd90bff7b00d5cf861616fb26e7164b195c7b1c5590136b61eb06" exitCode=0 Dec 11 14:12:47 crc kubenswrapper[5050]: I1211 14:12:47.603229 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ph492" event={"ID":"173d34cc-03fb-4f77-b375-53a00825480f","Type":"ContainerDied","Data":"874f146b79acd90bff7b00d5cf861616fb26e7164b195c7b1c5590136b61eb06"} Dec 11 14:12:47 crc kubenswrapper[5050]: I1211 14:12:47.604351 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 11 14:12:47 crc kubenswrapper[5050]: I1211 14:12:47.604416 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ph492" event={"ID":"173d34cc-03fb-4f77-b375-53a00825480f","Type":"ContainerStarted","Data":"0fe00107c55dfd60e02e58a24c15d874b0f0e36a79a910f50414a0aaa4639f79"} Dec 11 14:12:47 crc kubenswrapper[5050]: I1211 14:12:47.678532 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 11 14:12:47 crc kubenswrapper[5050]: I1211 14:12:47.685153 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 11 14:12:48 crc kubenswrapper[5050]: I1211 14:12:48.614523 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ph492" event={"ID":"173d34cc-03fb-4f77-b375-53a00825480f","Type":"ContainerStarted","Data":"e26ddbdecc426cbd6b39e20bbab6854ac8101f8f5b9d60fd7b80bf888efe4703"} Dec 11 14:12:49 crc kubenswrapper[5050]: I1211 14:12:49.557372 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="458f05be-2fd6-44d9-8034-f077356964ce" path="/var/lib/kubelet/pods/458f05be-2fd6-44d9-8034-f077356964ce/volumes" Dec 11 14:12:49 crc kubenswrapper[5050]: I1211 14:12:49.628803 5050 generic.go:334] "Generic (PLEG): container finished" podID="173d34cc-03fb-4f77-b375-53a00825480f" containerID="e26ddbdecc426cbd6b39e20bbab6854ac8101f8f5b9d60fd7b80bf888efe4703" exitCode=0 Dec 11 14:12:49 crc kubenswrapper[5050]: I1211 14:12:49.628851 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ph492" event={"ID":"173d34cc-03fb-4f77-b375-53a00825480f","Type":"ContainerDied","Data":"e26ddbdecc426cbd6b39e20bbab6854ac8101f8f5b9d60fd7b80bf888efe4703"} Dec 11 14:12:50 crc kubenswrapper[5050]: I1211 14:12:50.644856 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ph492" event={"ID":"173d34cc-03fb-4f77-b375-53a00825480f","Type":"ContainerStarted","Data":"46c2462906cf1137fc7b18108522d0b852c03164c1f9de58ad680e3313bb9bb4"} Dec 11 14:12:50 crc kubenswrapper[5050]: I1211 14:12:50.666035 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-ph492" podStartSLOduration=1.910404161 podStartE2EDuration="4.665989027s" podCreationTimestamp="2025-12-11 14:12:46 +0000 UTC" firstStartedPulling="2025-12-11 14:12:47.606120894 +0000 UTC m=+1458.449843480" lastFinishedPulling="2025-12-11 14:12:50.36170576 +0000 UTC m=+1461.205428346" observedRunningTime="2025-12-11 14:12:50.660229393 +0000 UTC m=+1461.503951979" watchObservedRunningTime="2025-12-11 14:12:50.665989027 +0000 UTC m=+1461.509711613" Dec 11 14:12:56 crc kubenswrapper[5050]: I1211 14:12:56.451027 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-ph492" Dec 11 14:12:56 crc kubenswrapper[5050]: I1211 14:12:56.451990 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-ph492" Dec 11 14:12:56 crc kubenswrapper[5050]: I1211 14:12:56.504671 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-ph492" Dec 11 14:12:56 crc kubenswrapper[5050]: I1211 14:12:56.782300 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/community-operators-ph492" Dec 11 14:12:56 crc kubenswrapper[5050]: I1211 14:12:56.841112 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ph492"] Dec 11 14:12:58 crc kubenswrapper[5050]: I1211 14:12:58.743028 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-ph492" podUID="173d34cc-03fb-4f77-b375-53a00825480f" containerName="registry-server" containerID="cri-o://46c2462906cf1137fc7b18108522d0b852c03164c1f9de58ad680e3313bb9bb4" gracePeriod=2 Dec 11 14:12:59 crc kubenswrapper[5050]: I1211 14:12:59.755748 5050 generic.go:334] "Generic (PLEG): container finished" podID="173d34cc-03fb-4f77-b375-53a00825480f" containerID="46c2462906cf1137fc7b18108522d0b852c03164c1f9de58ad680e3313bb9bb4" exitCode=0 Dec 11 14:12:59 crc kubenswrapper[5050]: I1211 14:12:59.755974 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ph492" event={"ID":"173d34cc-03fb-4f77-b375-53a00825480f","Type":"ContainerDied","Data":"46c2462906cf1137fc7b18108522d0b852c03164c1f9de58ad680e3313bb9bb4"} Dec 11 14:13:00 crc kubenswrapper[5050]: I1211 14:13:00.260372 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ph492" Dec 11 14:13:00 crc kubenswrapper[5050]: I1211 14:13:00.376300 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7gxcm\" (UniqueName: \"kubernetes.io/projected/173d34cc-03fb-4f77-b375-53a00825480f-kube-api-access-7gxcm\") pod \"173d34cc-03fb-4f77-b375-53a00825480f\" (UID: \"173d34cc-03fb-4f77-b375-53a00825480f\") " Dec 11 14:13:00 crc kubenswrapper[5050]: I1211 14:13:00.376395 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/173d34cc-03fb-4f77-b375-53a00825480f-catalog-content\") pod \"173d34cc-03fb-4f77-b375-53a00825480f\" (UID: \"173d34cc-03fb-4f77-b375-53a00825480f\") " Dec 11 14:13:00 crc kubenswrapper[5050]: I1211 14:13:00.376462 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/173d34cc-03fb-4f77-b375-53a00825480f-utilities\") pod \"173d34cc-03fb-4f77-b375-53a00825480f\" (UID: \"173d34cc-03fb-4f77-b375-53a00825480f\") " Dec 11 14:13:00 crc kubenswrapper[5050]: I1211 14:13:00.377777 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/173d34cc-03fb-4f77-b375-53a00825480f-utilities" (OuterVolumeSpecName: "utilities") pod "173d34cc-03fb-4f77-b375-53a00825480f" (UID: "173d34cc-03fb-4f77-b375-53a00825480f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 14:13:00 crc kubenswrapper[5050]: I1211 14:13:00.387634 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/173d34cc-03fb-4f77-b375-53a00825480f-kube-api-access-7gxcm" (OuterVolumeSpecName: "kube-api-access-7gxcm") pod "173d34cc-03fb-4f77-b375-53a00825480f" (UID: "173d34cc-03fb-4f77-b375-53a00825480f"). InnerVolumeSpecName "kube-api-access-7gxcm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 14:13:00 crc kubenswrapper[5050]: I1211 14:13:00.437347 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/173d34cc-03fb-4f77-b375-53a00825480f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "173d34cc-03fb-4f77-b375-53a00825480f" (UID: "173d34cc-03fb-4f77-b375-53a00825480f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 14:13:00 crc kubenswrapper[5050]: I1211 14:13:00.478121 5050 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/173d34cc-03fb-4f77-b375-53a00825480f-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 14:13:00 crc kubenswrapper[5050]: I1211 14:13:00.478151 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7gxcm\" (UniqueName: \"kubernetes.io/projected/173d34cc-03fb-4f77-b375-53a00825480f-kube-api-access-7gxcm\") on node \"crc\" DevicePath \"\"" Dec 11 14:13:00 crc kubenswrapper[5050]: I1211 14:13:00.478162 5050 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/173d34cc-03fb-4f77-b375-53a00825480f-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 14:13:00 crc kubenswrapper[5050]: I1211 14:13:00.768979 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ph492" event={"ID":"173d34cc-03fb-4f77-b375-53a00825480f","Type":"ContainerDied","Data":"0fe00107c55dfd60e02e58a24c15d874b0f0e36a79a910f50414a0aaa4639f79"} Dec 11 14:13:00 crc kubenswrapper[5050]: I1211 14:13:00.769087 5050 scope.go:117] "RemoveContainer" containerID="46c2462906cf1137fc7b18108522d0b852c03164c1f9de58ad680e3313bb9bb4" Dec 11 14:13:00 crc kubenswrapper[5050]: I1211 14:13:00.769147 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ph492" Dec 11 14:13:00 crc kubenswrapper[5050]: I1211 14:13:00.797092 5050 scope.go:117] "RemoveContainer" containerID="e26ddbdecc426cbd6b39e20bbab6854ac8101f8f5b9d60fd7b80bf888efe4703" Dec 11 14:13:00 crc kubenswrapper[5050]: I1211 14:13:00.819102 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ph492"] Dec 11 14:13:00 crc kubenswrapper[5050]: I1211 14:13:00.825350 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-ph492"] Dec 11 14:13:00 crc kubenswrapper[5050]: I1211 14:13:00.845697 5050 scope.go:117] "RemoveContainer" containerID="874f146b79acd90bff7b00d5cf861616fb26e7164b195c7b1c5590136b61eb06" Dec 11 14:13:01 crc kubenswrapper[5050]: I1211 14:13:01.559882 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="173d34cc-03fb-4f77-b375-53a00825480f" path="/var/lib/kubelet/pods/173d34cc-03fb-4f77-b375-53a00825480f/volumes" Dec 11 14:13:10 crc kubenswrapper[5050]: I1211 14:13:10.796999 5050 patch_prober.go:28] interesting pod/machine-config-daemon-wcb2s container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 14:13:10 crc kubenswrapper[5050]: I1211 14:13:10.797907 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 14:13:10 crc kubenswrapper[5050]: I1211 14:13:10.797980 5050 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" Dec 11 14:13:10 crc kubenswrapper[5050]: I1211 14:13:10.799177 5050 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"806cebb86653f8bb0b24399056d7c69486e67776b18ac98fa0522064fc34a318"} pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 11 14:13:10 crc kubenswrapper[5050]: I1211 14:13:10.799262 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" containerName="machine-config-daemon" containerID="cri-o://806cebb86653f8bb0b24399056d7c69486e67776b18ac98fa0522064fc34a318" gracePeriod=600 Dec 11 14:13:11 crc kubenswrapper[5050]: I1211 14:13:11.898743 5050 generic.go:334] "Generic (PLEG): container finished" podID="7e849b2e-7cd7-4e49-acd2-deab139c699a" containerID="806cebb86653f8bb0b24399056d7c69486e67776b18ac98fa0522064fc34a318" exitCode=0 Dec 11 14:13:11 crc kubenswrapper[5050]: I1211 14:13:11.898762 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" event={"ID":"7e849b2e-7cd7-4e49-acd2-deab139c699a","Type":"ContainerDied","Data":"806cebb86653f8bb0b24399056d7c69486e67776b18ac98fa0522064fc34a318"} Dec 11 14:13:11 crc kubenswrapper[5050]: I1211 14:13:11.899374 5050 scope.go:117] "RemoveContainer" 
containerID="9aa9a97e2005f8a9ae70c2d72cef618f936c2d3673f7194a8c95ac3ad3519511" Dec 11 14:13:12 crc kubenswrapper[5050]: I1211 14:13:12.913219 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" event={"ID":"7e849b2e-7cd7-4e49-acd2-deab139c699a","Type":"ContainerStarted","Data":"a6b9413520fd75e49a892cc5dba8a8fc1b927a604949a4d2b85854c65912d95e"} Dec 11 14:13:39 crc kubenswrapper[5050]: I1211 14:13:39.314494 5050 scope.go:117] "RemoveContainer" containerID="9c0ae62c16c8df7398252c5d9b6936e3fe54053471442f1e95b49a66210f7004" Dec 11 14:13:39 crc kubenswrapper[5050]: I1211 14:13:39.369889 5050 scope.go:117] "RemoveContainer" containerID="9438fa89f17dcb4f6482af1d497bd7752b9ebcbd02295a6a5a1d83d614b1180a" Dec 11 14:13:39 crc kubenswrapper[5050]: I1211 14:13:39.393468 5050 scope.go:117] "RemoveContainer" containerID="81545cbc54d359524dfbf5ab0186a09ed8e7e6cc553c36752bf700cb488097c1" Dec 11 14:13:39 crc kubenswrapper[5050]: I1211 14:13:39.425491 5050 scope.go:117] "RemoveContainer" containerID="88c804a2ae7858f1fafb5a1d5c8ca6fe31381dd1f0e6ee9034716872440fe5b4" Dec 11 14:13:39 crc kubenswrapper[5050]: I1211 14:13:39.446662 5050 scope.go:117] "RemoveContainer" containerID="db23d3f3f27190827f163f21b2da4cd0ca1fc9aa0bfb390a14b8c83a5ed2ee47" Dec 11 14:13:39 crc kubenswrapper[5050]: I1211 14:13:39.490699 5050 scope.go:117] "RemoveContainer" containerID="3624e6393f8a6eadd5c4286428ab748ba1155fa0943854327019eb997eadc689" Dec 11 14:13:39 crc kubenswrapper[5050]: I1211 14:13:39.532401 5050 scope.go:117] "RemoveContainer" containerID="70921762a5cd41a13f21b3df228b676e8d09da1a291c372ac76bbe1b1e001aa6" Dec 11 14:13:39 crc kubenswrapper[5050]: I1211 14:13:39.563057 5050 scope.go:117] "RemoveContainer" containerID="a1fbc95eb3b3987970a436f95d6c365fceacbe66e3d72a6ce3cf2ff678c4bb9f" Dec 11 14:13:39 crc kubenswrapper[5050]: I1211 14:13:39.621870 5050 scope.go:117] "RemoveContainer" containerID="38e653b3d3373170f5f490629772c2956f706a2d09203fe68bfcbb06130e8f4e" Dec 11 14:13:39 crc kubenswrapper[5050]: I1211 14:13:39.645485 5050 scope.go:117] "RemoveContainer" containerID="7b47caabbd51a8e4fa31011df4b0b71c1cfb2074ee3115985077cd353b1679e4" Dec 11 14:13:39 crc kubenswrapper[5050]: I1211 14:13:39.671103 5050 scope.go:117] "RemoveContainer" containerID="68ec03639f4c9549411c965fea1c418136ebf64d20c05ca0423c32eb7e1ab199" Dec 11 14:13:39 crc kubenswrapper[5050]: I1211 14:13:39.703621 5050 scope.go:117] "RemoveContainer" containerID="2f62aeb6162bd178083d9d882bc213a977e97d767cf138a5471e9b6d54190929" Dec 11 14:13:39 crc kubenswrapper[5050]: I1211 14:13:39.729062 5050 scope.go:117] "RemoveContainer" containerID="1791df88a6b816e0b72db7b665200275015d5dbb7dca85bbeb168f77b2438276" Dec 11 14:13:39 crc kubenswrapper[5050]: I1211 14:13:39.756630 5050 scope.go:117] "RemoveContainer" containerID="08bfa765d647f306601d0abaff12d769ea8332592ea0f0283de458df6c5e5537" Dec 11 14:14:40 crc kubenswrapper[5050]: I1211 14:14:40.060897 5050 scope.go:117] "RemoveContainer" containerID="9048b99f225c02588f0acf6ab078d23ba9d748c49478356c85f61a74df87c960" Dec 11 14:14:40 crc kubenswrapper[5050]: I1211 14:14:40.086358 5050 scope.go:117] "RemoveContainer" containerID="955a0ee0c9eed128222ddf5d6dedbc74a4c5d1d3bcc7732f13e94db5162a8ca2" Dec 11 14:14:40 crc kubenswrapper[5050]: I1211 14:14:40.105969 5050 scope.go:117] "RemoveContainer" containerID="887182e7bdf510cf5f8d29d8def14429f4899834fa471d481e28b9675086a309" Dec 11 14:14:40 crc kubenswrapper[5050]: I1211 14:14:40.125210 5050 scope.go:117] 
"RemoveContainer" containerID="7f6f169f1e21cd536cd2066b6883085b8233e7f19f2c348689687c417f9d7905" Dec 11 14:14:40 crc kubenswrapper[5050]: I1211 14:14:40.142687 5050 scope.go:117] "RemoveContainer" containerID="add249db91788c64fc0bc9abe12d8ebe0bbd0ac4df87c1a680f9ba5d9cae0685" Dec 11 14:14:40 crc kubenswrapper[5050]: I1211 14:14:40.192383 5050 scope.go:117] "RemoveContainer" containerID="bacc497a7091ad5c398a0eaf800ba5f2b65b322bae0d2d68a8be1b183b7f3d6f" Dec 11 14:14:40 crc kubenswrapper[5050]: I1211 14:14:40.213492 5050 scope.go:117] "RemoveContainer" containerID="d2f88cb82773ad5f567925e106c60ec7bef84c6e078be7c5e2a9bd340e19b35c" Dec 11 14:14:40 crc kubenswrapper[5050]: I1211 14:14:40.234198 5050 scope.go:117] "RemoveContainer" containerID="363bef5fc02a72922b8027ac00256b6492310726e090e9dab94b12db5a9c9a9e" Dec 11 14:14:40 crc kubenswrapper[5050]: I1211 14:14:40.259503 5050 scope.go:117] "RemoveContainer" containerID="a4e0e87c678bd4b93a483f4f15d4562ac37b3aa202a6b10e620dc13a9773d991" Dec 11 14:14:40 crc kubenswrapper[5050]: I1211 14:14:40.305337 5050 scope.go:117] "RemoveContainer" containerID="8af0220738d7b4267aab1e60eaa3da9d17f3f47fefe09dc1901f5e2bee442704" Dec 11 14:14:40 crc kubenswrapper[5050]: I1211 14:14:40.338033 5050 scope.go:117] "RemoveContainer" containerID="1977929bc424b057bf59a3155bf7f4cfdfe00b2e3f9856bd807dc72825864a27" Dec 11 14:14:40 crc kubenswrapper[5050]: I1211 14:14:40.366425 5050 scope.go:117] "RemoveContainer" containerID="7dd54d0b1083881060ea7b32dfadc3a16d5333ec68ac6f4cea282f6da888ac9d" Dec 11 14:14:40 crc kubenswrapper[5050]: I1211 14:14:40.390818 5050 scope.go:117] "RemoveContainer" containerID="673029f70ba162cc7b362003c7987e48d633f387b63e6ba5c4b0b70b4937b5a3" Dec 11 14:14:40 crc kubenswrapper[5050]: I1211 14:14:40.442409 5050 scope.go:117] "RemoveContainer" containerID="54c25f0b8e6964a434a0a013a4076df6309838c2adfc994b7ac912ba272f2845" Dec 11 14:14:40 crc kubenswrapper[5050]: I1211 14:14:40.475578 5050 scope.go:117] "RemoveContainer" containerID="bc5bd4da507e5e98c22354d37317440ad1b08d0fdef5e93aee4a399f722d5c89" Dec 11 14:15:00 crc kubenswrapper[5050]: I1211 14:15:00.153501 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29424375-w2t68"] Dec 11 14:15:00 crc kubenswrapper[5050]: E1211 14:15:00.154738 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="173d34cc-03fb-4f77-b375-53a00825480f" containerName="registry-server" Dec 11 14:15:00 crc kubenswrapper[5050]: I1211 14:15:00.154762 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="173d34cc-03fb-4f77-b375-53a00825480f" containerName="registry-server" Dec 11 14:15:00 crc kubenswrapper[5050]: E1211 14:15:00.154772 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="173d34cc-03fb-4f77-b375-53a00825480f" containerName="extract-content" Dec 11 14:15:00 crc kubenswrapper[5050]: I1211 14:15:00.154780 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="173d34cc-03fb-4f77-b375-53a00825480f" containerName="extract-content" Dec 11 14:15:00 crc kubenswrapper[5050]: E1211 14:15:00.154794 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="173d34cc-03fb-4f77-b375-53a00825480f" containerName="extract-utilities" Dec 11 14:15:00 crc kubenswrapper[5050]: I1211 14:15:00.154802 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="173d34cc-03fb-4f77-b375-53a00825480f" containerName="extract-utilities" Dec 11 14:15:00 crc kubenswrapper[5050]: I1211 14:15:00.154987 5050 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="173d34cc-03fb-4f77-b375-53a00825480f" containerName="registry-server" Dec 11 14:15:00 crc kubenswrapper[5050]: I1211 14:15:00.155649 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29424375-w2t68" Dec 11 14:15:00 crc kubenswrapper[5050]: I1211 14:15:00.159342 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 11 14:15:00 crc kubenswrapper[5050]: I1211 14:15:00.161561 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 11 14:15:00 crc kubenswrapper[5050]: I1211 14:15:00.176357 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29424375-w2t68"] Dec 11 14:15:00 crc kubenswrapper[5050]: I1211 14:15:00.235006 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7574660d-3967-453b-8cd4-6aa447aff652-secret-volume\") pod \"collect-profiles-29424375-w2t68\" (UID: \"7574660d-3967-453b-8cd4-6aa447aff652\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424375-w2t68" Dec 11 14:15:00 crc kubenswrapper[5050]: I1211 14:15:00.235138 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7574660d-3967-453b-8cd4-6aa447aff652-config-volume\") pod \"collect-profiles-29424375-w2t68\" (UID: \"7574660d-3967-453b-8cd4-6aa447aff652\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424375-w2t68" Dec 11 14:15:00 crc kubenswrapper[5050]: I1211 14:15:00.235453 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87wwp\" (UniqueName: \"kubernetes.io/projected/7574660d-3967-453b-8cd4-6aa447aff652-kube-api-access-87wwp\") pod \"collect-profiles-29424375-w2t68\" (UID: \"7574660d-3967-453b-8cd4-6aa447aff652\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424375-w2t68" Dec 11 14:15:00 crc kubenswrapper[5050]: I1211 14:15:00.337168 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-87wwp\" (UniqueName: \"kubernetes.io/projected/7574660d-3967-453b-8cd4-6aa447aff652-kube-api-access-87wwp\") pod \"collect-profiles-29424375-w2t68\" (UID: \"7574660d-3967-453b-8cd4-6aa447aff652\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424375-w2t68" Dec 11 14:15:00 crc kubenswrapper[5050]: I1211 14:15:00.337251 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7574660d-3967-453b-8cd4-6aa447aff652-secret-volume\") pod \"collect-profiles-29424375-w2t68\" (UID: \"7574660d-3967-453b-8cd4-6aa447aff652\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424375-w2t68" Dec 11 14:15:00 crc kubenswrapper[5050]: I1211 14:15:00.337270 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7574660d-3967-453b-8cd4-6aa447aff652-config-volume\") pod \"collect-profiles-29424375-w2t68\" (UID: \"7574660d-3967-453b-8cd4-6aa447aff652\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424375-w2t68" Dec 11 14:15:00 crc 
kubenswrapper[5050]: I1211 14:15:00.338505 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7574660d-3967-453b-8cd4-6aa447aff652-config-volume\") pod \"collect-profiles-29424375-w2t68\" (UID: \"7574660d-3967-453b-8cd4-6aa447aff652\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424375-w2t68" Dec 11 14:15:00 crc kubenswrapper[5050]: I1211 14:15:00.343751 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7574660d-3967-453b-8cd4-6aa447aff652-secret-volume\") pod \"collect-profiles-29424375-w2t68\" (UID: \"7574660d-3967-453b-8cd4-6aa447aff652\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424375-w2t68" Dec 11 14:15:00 crc kubenswrapper[5050]: I1211 14:15:00.357291 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-87wwp\" (UniqueName: \"kubernetes.io/projected/7574660d-3967-453b-8cd4-6aa447aff652-kube-api-access-87wwp\") pod \"collect-profiles-29424375-w2t68\" (UID: \"7574660d-3967-453b-8cd4-6aa447aff652\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424375-w2t68" Dec 11 14:15:00 crc kubenswrapper[5050]: I1211 14:15:00.497795 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29424375-w2t68" Dec 11 14:15:00 crc kubenswrapper[5050]: I1211 14:15:00.946685 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29424375-w2t68"] Dec 11 14:15:01 crc kubenswrapper[5050]: I1211 14:15:01.940504 5050 generic.go:334] "Generic (PLEG): container finished" podID="7574660d-3967-453b-8cd4-6aa447aff652" containerID="f69005345c1398c48053c76b41d03dd74e0ffaac52e6c09e7dc98a7000961900" exitCode=0 Dec 11 14:15:01 crc kubenswrapper[5050]: I1211 14:15:01.940715 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29424375-w2t68" event={"ID":"7574660d-3967-453b-8cd4-6aa447aff652","Type":"ContainerDied","Data":"f69005345c1398c48053c76b41d03dd74e0ffaac52e6c09e7dc98a7000961900"} Dec 11 14:15:01 crc kubenswrapper[5050]: I1211 14:15:01.941074 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29424375-w2t68" event={"ID":"7574660d-3967-453b-8cd4-6aa447aff652","Type":"ContainerStarted","Data":"89932be6b1f7e25b8aa2a5eb99b40040deda37059c7db824314bbceac58291d9"} Dec 11 14:15:03 crc kubenswrapper[5050]: I1211 14:15:03.272336 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29424375-w2t68" Dec 11 14:15:03 crc kubenswrapper[5050]: I1211 14:15:03.387812 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-87wwp\" (UniqueName: \"kubernetes.io/projected/7574660d-3967-453b-8cd4-6aa447aff652-kube-api-access-87wwp\") pod \"7574660d-3967-453b-8cd4-6aa447aff652\" (UID: \"7574660d-3967-453b-8cd4-6aa447aff652\") " Dec 11 14:15:03 crc kubenswrapper[5050]: I1211 14:15:03.387930 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7574660d-3967-453b-8cd4-6aa447aff652-secret-volume\") pod \"7574660d-3967-453b-8cd4-6aa447aff652\" (UID: \"7574660d-3967-453b-8cd4-6aa447aff652\") " Dec 11 14:15:03 crc kubenswrapper[5050]: I1211 14:15:03.389603 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7574660d-3967-453b-8cd4-6aa447aff652-config-volume\") pod \"7574660d-3967-453b-8cd4-6aa447aff652\" (UID: \"7574660d-3967-453b-8cd4-6aa447aff652\") " Dec 11 14:15:03 crc kubenswrapper[5050]: I1211 14:15:03.390542 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7574660d-3967-453b-8cd4-6aa447aff652-config-volume" (OuterVolumeSpecName: "config-volume") pod "7574660d-3967-453b-8cd4-6aa447aff652" (UID: "7574660d-3967-453b-8cd4-6aa447aff652"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 14:15:03 crc kubenswrapper[5050]: I1211 14:15:03.396456 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7574660d-3967-453b-8cd4-6aa447aff652-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "7574660d-3967-453b-8cd4-6aa447aff652" (UID: "7574660d-3967-453b-8cd4-6aa447aff652"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:15:03 crc kubenswrapper[5050]: I1211 14:15:03.396858 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7574660d-3967-453b-8cd4-6aa447aff652-kube-api-access-87wwp" (OuterVolumeSpecName: "kube-api-access-87wwp") pod "7574660d-3967-453b-8cd4-6aa447aff652" (UID: "7574660d-3967-453b-8cd4-6aa447aff652"). InnerVolumeSpecName "kube-api-access-87wwp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 14:15:03 crc kubenswrapper[5050]: I1211 14:15:03.491452 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-87wwp\" (UniqueName: \"kubernetes.io/projected/7574660d-3967-453b-8cd4-6aa447aff652-kube-api-access-87wwp\") on node \"crc\" DevicePath \"\"" Dec 11 14:15:03 crc kubenswrapper[5050]: I1211 14:15:03.491494 5050 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7574660d-3967-453b-8cd4-6aa447aff652-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 11 14:15:03 crc kubenswrapper[5050]: I1211 14:15:03.491504 5050 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7574660d-3967-453b-8cd4-6aa447aff652-config-volume\") on node \"crc\" DevicePath \"\"" Dec 11 14:15:03 crc kubenswrapper[5050]: I1211 14:15:03.961819 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29424375-w2t68" event={"ID":"7574660d-3967-453b-8cd4-6aa447aff652","Type":"ContainerDied","Data":"89932be6b1f7e25b8aa2a5eb99b40040deda37059c7db824314bbceac58291d9"} Dec 11 14:15:03 crc kubenswrapper[5050]: I1211 14:15:03.962297 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="89932be6b1f7e25b8aa2a5eb99b40040deda37059c7db824314bbceac58291d9" Dec 11 14:15:03 crc kubenswrapper[5050]: I1211 14:15:03.961959 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29424375-w2t68" Dec 11 14:15:40 crc kubenswrapper[5050]: I1211 14:15:40.641729 5050 scope.go:117] "RemoveContainer" containerID="68a9a87e998a4bb0563913fd86e150d1605935b84a4da45aa67210b036a699f2" Dec 11 14:15:40 crc kubenswrapper[5050]: I1211 14:15:40.673943 5050 scope.go:117] "RemoveContainer" containerID="1bef0680c44fff43ab5a9504ecc960a1b4317db8f23fbca332406cda6c7a3be5" Dec 11 14:15:40 crc kubenswrapper[5050]: I1211 14:15:40.727967 5050 scope.go:117] "RemoveContainer" containerID="204ac42ef63a05788b1880c5f6c33e7a413d56ea5c69370c5a87fa4d156de0ba" Dec 11 14:15:40 crc kubenswrapper[5050]: I1211 14:15:40.763471 5050 scope.go:117] "RemoveContainer" containerID="0fb4881029120d8bb5f547b1ad66c0f186f487a95a32a956e5d51f220e3cca47" Dec 11 14:15:40 crc kubenswrapper[5050]: I1211 14:15:40.781607 5050 scope.go:117] "RemoveContainer" containerID="80030e514e19d023c1bec72880044d75c75621951af814ba5560c38086dcbc3d" Dec 11 14:15:40 crc kubenswrapper[5050]: I1211 14:15:40.796230 5050 patch_prober.go:28] interesting pod/machine-config-daemon-wcb2s container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 14:15:40 crc kubenswrapper[5050]: I1211 14:15:40.796316 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 14:15:40 crc kubenswrapper[5050]: I1211 14:15:40.809546 5050 scope.go:117] "RemoveContainer" containerID="3e6132bd898662eb15caae20bc63d62858df7ed7da6bd64261b666f48768ec52" Dec 11 14:15:40 crc kubenswrapper[5050]: I1211 14:15:40.830447 5050 scope.go:117] 
"RemoveContainer" containerID="fdcecf14f741e53bec9526499e9c4b26c9749197a5f66f8e11b391a11024579f" Dec 11 14:15:40 crc kubenswrapper[5050]: I1211 14:15:40.854203 5050 scope.go:117] "RemoveContainer" containerID="6161a3de9821ffcd08ec84fa22ce5305398be067c2c6d3a6b39a6732c2fc1edb" Dec 11 14:15:40 crc kubenswrapper[5050]: I1211 14:15:40.886317 5050 scope.go:117] "RemoveContainer" containerID="a90ab157b3c623d184d49b39ff73cc98df65330afdc42c004c5a9becbab50b27" Dec 11 14:15:40 crc kubenswrapper[5050]: I1211 14:15:40.931868 5050 scope.go:117] "RemoveContainer" containerID="81b83b4349b2dc8f2b9b8ea3e181e622e1a06808c372e219e6cfa525077df28b" Dec 11 14:15:40 crc kubenswrapper[5050]: I1211 14:15:40.953597 5050 scope.go:117] "RemoveContainer" containerID="43ee0aafff3805d46aa0d5efae095bd970bdd079d7f8cf85ed78a9ebc421e4cb" Dec 11 14:16:10 crc kubenswrapper[5050]: I1211 14:16:10.796472 5050 patch_prober.go:28] interesting pod/machine-config-daemon-wcb2s container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 14:16:10 crc kubenswrapper[5050]: I1211 14:16:10.797389 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 14:16:40 crc kubenswrapper[5050]: I1211 14:16:40.796755 5050 patch_prober.go:28] interesting pod/machine-config-daemon-wcb2s container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 14:16:40 crc kubenswrapper[5050]: I1211 14:16:40.797520 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 14:16:40 crc kubenswrapper[5050]: I1211 14:16:40.797587 5050 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" Dec 11 14:16:40 crc kubenswrapper[5050]: I1211 14:16:40.798424 5050 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a6b9413520fd75e49a892cc5dba8a8fc1b927a604949a4d2b85854c65912d95e"} pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 11 14:16:40 crc kubenswrapper[5050]: I1211 14:16:40.798482 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" containerName="machine-config-daemon" containerID="cri-o://a6b9413520fd75e49a892cc5dba8a8fc1b927a604949a4d2b85854c65912d95e" gracePeriod=600 Dec 11 14:16:40 crc kubenswrapper[5050]: E1211 14:16:40.924781 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 14:16:41 crc kubenswrapper[5050]: I1211 14:16:41.123151 5050 scope.go:117] "RemoveContainer" containerID="5fce3440d274f6239f70451c81cca3a699ec3610161019c9df0fe303fc7d4623" Dec 11 14:16:41 crc kubenswrapper[5050]: I1211 14:16:41.175772 5050 scope.go:117] "RemoveContainer" containerID="ed60fdd58c4339e3164d7f4e317f1114bf6ecb3ec2bef7cd7a80d1158c76ff29" Dec 11 14:16:41 crc kubenswrapper[5050]: I1211 14:16:41.196348 5050 scope.go:117] "RemoveContainer" containerID="0a03b521b37d9fb0f7030177a2bb20787cfd84f0c0449bc65282aede0e194ffc" Dec 11 14:16:41 crc kubenswrapper[5050]: I1211 14:16:41.228829 5050 scope.go:117] "RemoveContainer" containerID="e6dc15c8d2821c9d66fa830b0740353eeecafc3c6002947a42891501a4a72dfd" Dec 11 14:16:41 crc kubenswrapper[5050]: I1211 14:16:41.250853 5050 scope.go:117] "RemoveContainer" containerID="4a0dd3bf669f7beb6461a99c18d911c75efcecd8fddb14f47b6513fec2bf9b54" Dec 11 14:16:41 crc kubenswrapper[5050]: I1211 14:16:41.269833 5050 scope.go:117] "RemoveContainer" containerID="2e612e8cce6560b98967bcdbebffe901233053bef927a234d71f282f6b712a13" Dec 11 14:16:41 crc kubenswrapper[5050]: I1211 14:16:41.918936 5050 generic.go:334] "Generic (PLEG): container finished" podID="7e849b2e-7cd7-4e49-acd2-deab139c699a" containerID="a6b9413520fd75e49a892cc5dba8a8fc1b927a604949a4d2b85854c65912d95e" exitCode=0 Dec 11 14:16:41 crc kubenswrapper[5050]: I1211 14:16:41.918992 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" event={"ID":"7e849b2e-7cd7-4e49-acd2-deab139c699a","Type":"ContainerDied","Data":"a6b9413520fd75e49a892cc5dba8a8fc1b927a604949a4d2b85854c65912d95e"} Dec 11 14:16:41 crc kubenswrapper[5050]: I1211 14:16:41.919062 5050 scope.go:117] "RemoveContainer" containerID="806cebb86653f8bb0b24399056d7c69486e67776b18ac98fa0522064fc34a318" Dec 11 14:16:41 crc kubenswrapper[5050]: I1211 14:16:41.921572 5050 scope.go:117] "RemoveContainer" containerID="a6b9413520fd75e49a892cc5dba8a8fc1b927a604949a4d2b85854c65912d95e" Dec 11 14:16:41 crc kubenswrapper[5050]: E1211 14:16:41.922151 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 14:16:54 crc kubenswrapper[5050]: I1211 14:16:54.545737 5050 scope.go:117] "RemoveContainer" containerID="a6b9413520fd75e49a892cc5dba8a8fc1b927a604949a4d2b85854c65912d95e" Dec 11 14:16:54 crc kubenswrapper[5050]: E1211 14:16:54.546825 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 14:17:06 crc kubenswrapper[5050]: I1211 14:17:06.546245 5050 
scope.go:117] "RemoveContainer" containerID="a6b9413520fd75e49a892cc5dba8a8fc1b927a604949a4d2b85854c65912d95e" Dec 11 14:17:06 crc kubenswrapper[5050]: E1211 14:17:06.547309 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 14:17:21 crc kubenswrapper[5050]: I1211 14:17:21.548861 5050 scope.go:117] "RemoveContainer" containerID="a6b9413520fd75e49a892cc5dba8a8fc1b927a604949a4d2b85854c65912d95e" Dec 11 14:17:21 crc kubenswrapper[5050]: E1211 14:17:21.550563 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 14:17:32 crc kubenswrapper[5050]: I1211 14:17:32.546513 5050 scope.go:117] "RemoveContainer" containerID="a6b9413520fd75e49a892cc5dba8a8fc1b927a604949a4d2b85854c65912d95e" Dec 11 14:17:32 crc kubenswrapper[5050]: E1211 14:17:32.547491 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 14:17:41 crc kubenswrapper[5050]: I1211 14:17:41.365420 5050 scope.go:117] "RemoveContainer" containerID="9ed3b2343fe57f3204bfcc9d8b8b6ffb7c52336371d620cd7dace42eedff80a0" Dec 11 14:17:41 crc kubenswrapper[5050]: I1211 14:17:41.394089 5050 scope.go:117] "RemoveContainer" containerID="26578d5a108fa4d1bfd89b54489a03a4c1c636d76f83751795e52594a63ff439" Dec 11 14:17:41 crc kubenswrapper[5050]: I1211 14:17:41.415740 5050 scope.go:117] "RemoveContainer" containerID="2ef3cf73755caba84b78f8b5a189480c1997d37bed0c94192044db0751dc4ded" Dec 11 14:17:41 crc kubenswrapper[5050]: I1211 14:17:41.467933 5050 scope.go:117] "RemoveContainer" containerID="b8472b1400fec5a115ddf8be1b9aa9e96f77b6e60231027d5893fbbc8989bdac" Dec 11 14:17:41 crc kubenswrapper[5050]: I1211 14:17:41.496684 5050 scope.go:117] "RemoveContainer" containerID="c851fe09742b366b6fb0fe111786b9251ff34ac22d585974055bba383135d605" Dec 11 14:17:45 crc kubenswrapper[5050]: I1211 14:17:45.546733 5050 scope.go:117] "RemoveContainer" containerID="a6b9413520fd75e49a892cc5dba8a8fc1b927a604949a4d2b85854c65912d95e" Dec 11 14:17:45 crc kubenswrapper[5050]: E1211 14:17:45.547503 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 14:18:00 crc 
kubenswrapper[5050]: I1211 14:18:00.546391 5050 scope.go:117] "RemoveContainer" containerID="a6b9413520fd75e49a892cc5dba8a8fc1b927a604949a4d2b85854c65912d95e" Dec 11 14:18:00 crc kubenswrapper[5050]: E1211 14:18:00.548327 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 14:18:12 crc kubenswrapper[5050]: I1211 14:18:12.545818 5050 scope.go:117] "RemoveContainer" containerID="a6b9413520fd75e49a892cc5dba8a8fc1b927a604949a4d2b85854c65912d95e" Dec 11 14:18:12 crc kubenswrapper[5050]: E1211 14:18:12.546809 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 14:18:23 crc kubenswrapper[5050]: I1211 14:18:23.545762 5050 scope.go:117] "RemoveContainer" containerID="a6b9413520fd75e49a892cc5dba8a8fc1b927a604949a4d2b85854c65912d95e" Dec 11 14:18:23 crc kubenswrapper[5050]: E1211 14:18:23.546509 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 14:18:35 crc kubenswrapper[5050]: I1211 14:18:35.545887 5050 scope.go:117] "RemoveContainer" containerID="a6b9413520fd75e49a892cc5dba8a8fc1b927a604949a4d2b85854c65912d95e" Dec 11 14:18:35 crc kubenswrapper[5050]: E1211 14:18:35.546706 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 14:18:41 crc kubenswrapper[5050]: I1211 14:18:41.587196 5050 scope.go:117] "RemoveContainer" containerID="117a329727dfb44face47ce70f00fb31fbaad2c13a6f99a74d30819f0877a421" Dec 11 14:18:46 crc kubenswrapper[5050]: I1211 14:18:46.546532 5050 scope.go:117] "RemoveContainer" containerID="a6b9413520fd75e49a892cc5dba8a8fc1b927a604949a4d2b85854c65912d95e" Dec 11 14:18:46 crc kubenswrapper[5050]: E1211 14:18:46.547073 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 14:18:59 crc 
kubenswrapper[5050]: I1211 14:18:59.555679 5050 scope.go:117] "RemoveContainer" containerID="a6b9413520fd75e49a892cc5dba8a8fc1b927a604949a4d2b85854c65912d95e" Dec 11 14:18:59 crc kubenswrapper[5050]: E1211 14:18:59.558206 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 14:19:12 crc kubenswrapper[5050]: I1211 14:19:12.546606 5050 scope.go:117] "RemoveContainer" containerID="a6b9413520fd75e49a892cc5dba8a8fc1b927a604949a4d2b85854c65912d95e" Dec 11 14:19:12 crc kubenswrapper[5050]: E1211 14:19:12.547469 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 14:19:23 crc kubenswrapper[5050]: I1211 14:19:23.545623 5050 scope.go:117] "RemoveContainer" containerID="a6b9413520fd75e49a892cc5dba8a8fc1b927a604949a4d2b85854c65912d95e" Dec 11 14:19:23 crc kubenswrapper[5050]: E1211 14:19:23.547520 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 14:19:38 crc kubenswrapper[5050]: I1211 14:19:38.545758 5050 scope.go:117] "RemoveContainer" containerID="a6b9413520fd75e49a892cc5dba8a8fc1b927a604949a4d2b85854c65912d95e" Dec 11 14:19:38 crc kubenswrapper[5050]: E1211 14:19:38.557900 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 14:19:51 crc kubenswrapper[5050]: I1211 14:19:51.547363 5050 scope.go:117] "RemoveContainer" containerID="a6b9413520fd75e49a892cc5dba8a8fc1b927a604949a4d2b85854c65912d95e" Dec 11 14:19:51 crc kubenswrapper[5050]: E1211 14:19:51.548470 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 14:20:06 crc kubenswrapper[5050]: I1211 14:20:06.545711 5050 scope.go:117] "RemoveContainer" containerID="a6b9413520fd75e49a892cc5dba8a8fc1b927a604949a4d2b85854c65912d95e" Dec 11 14:20:06 crc 
kubenswrapper[5050]: E1211 14:20:06.546692 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 14:20:20 crc kubenswrapper[5050]: I1211 14:20:20.545884 5050 scope.go:117] "RemoveContainer" containerID="a6b9413520fd75e49a892cc5dba8a8fc1b927a604949a4d2b85854c65912d95e" Dec 11 14:20:20 crc kubenswrapper[5050]: E1211 14:20:20.546767 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 14:20:35 crc kubenswrapper[5050]: I1211 14:20:35.546591 5050 scope.go:117] "RemoveContainer" containerID="a6b9413520fd75e49a892cc5dba8a8fc1b927a604949a4d2b85854c65912d95e" Dec 11 14:20:35 crc kubenswrapper[5050]: E1211 14:20:35.547542 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 14:20:36 crc kubenswrapper[5050]: I1211 14:20:36.692696 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-nl9vn"] Dec 11 14:20:36 crc kubenswrapper[5050]: E1211 14:20:36.693603 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7574660d-3967-453b-8cd4-6aa447aff652" containerName="collect-profiles" Dec 11 14:20:36 crc kubenswrapper[5050]: I1211 14:20:36.693621 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="7574660d-3967-453b-8cd4-6aa447aff652" containerName="collect-profiles" Dec 11 14:20:36 crc kubenswrapper[5050]: I1211 14:20:36.694052 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="7574660d-3967-453b-8cd4-6aa447aff652" containerName="collect-profiles" Dec 11 14:20:36 crc kubenswrapper[5050]: I1211 14:20:36.695560 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-nl9vn" Dec 11 14:20:36 crc kubenswrapper[5050]: I1211 14:20:36.704118 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nl9vn"] Dec 11 14:20:36 crc kubenswrapper[5050]: I1211 14:20:36.882566 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55349e7c-e3ae-496c-8a29-686e17df74af-utilities\") pod \"redhat-operators-nl9vn\" (UID: \"55349e7c-e3ae-496c-8a29-686e17df74af\") " pod="openshift-marketplace/redhat-operators-nl9vn" Dec 11 14:20:36 crc kubenswrapper[5050]: I1211 14:20:36.882622 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55349e7c-e3ae-496c-8a29-686e17df74af-catalog-content\") pod \"redhat-operators-nl9vn\" (UID: \"55349e7c-e3ae-496c-8a29-686e17df74af\") " pod="openshift-marketplace/redhat-operators-nl9vn" Dec 11 14:20:36 crc kubenswrapper[5050]: I1211 14:20:36.882657 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tzkrj\" (UniqueName: \"kubernetes.io/projected/55349e7c-e3ae-496c-8a29-686e17df74af-kube-api-access-tzkrj\") pod \"redhat-operators-nl9vn\" (UID: \"55349e7c-e3ae-496c-8a29-686e17df74af\") " pod="openshift-marketplace/redhat-operators-nl9vn" Dec 11 14:20:36 crc kubenswrapper[5050]: I1211 14:20:36.983831 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55349e7c-e3ae-496c-8a29-686e17df74af-catalog-content\") pod \"redhat-operators-nl9vn\" (UID: \"55349e7c-e3ae-496c-8a29-686e17df74af\") " pod="openshift-marketplace/redhat-operators-nl9vn" Dec 11 14:20:36 crc kubenswrapper[5050]: I1211 14:20:36.984208 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tzkrj\" (UniqueName: \"kubernetes.io/projected/55349e7c-e3ae-496c-8a29-686e17df74af-kube-api-access-tzkrj\") pod \"redhat-operators-nl9vn\" (UID: \"55349e7c-e3ae-496c-8a29-686e17df74af\") " pod="openshift-marketplace/redhat-operators-nl9vn" Dec 11 14:20:36 crc kubenswrapper[5050]: I1211 14:20:36.984387 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55349e7c-e3ae-496c-8a29-686e17df74af-utilities\") pod \"redhat-operators-nl9vn\" (UID: \"55349e7c-e3ae-496c-8a29-686e17df74af\") " pod="openshift-marketplace/redhat-operators-nl9vn" Dec 11 14:20:36 crc kubenswrapper[5050]: I1211 14:20:36.984382 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55349e7c-e3ae-496c-8a29-686e17df74af-catalog-content\") pod \"redhat-operators-nl9vn\" (UID: \"55349e7c-e3ae-496c-8a29-686e17df74af\") " pod="openshift-marketplace/redhat-operators-nl9vn" Dec 11 14:20:36 crc kubenswrapper[5050]: I1211 14:20:36.984594 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55349e7c-e3ae-496c-8a29-686e17df74af-utilities\") pod \"redhat-operators-nl9vn\" (UID: \"55349e7c-e3ae-496c-8a29-686e17df74af\") " pod="openshift-marketplace/redhat-operators-nl9vn" Dec 11 14:20:37 crc kubenswrapper[5050]: I1211 14:20:37.003883 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-tzkrj\" (UniqueName: \"kubernetes.io/projected/55349e7c-e3ae-496c-8a29-686e17df74af-kube-api-access-tzkrj\") pod \"redhat-operators-nl9vn\" (UID: \"55349e7c-e3ae-496c-8a29-686e17df74af\") " pod="openshift-marketplace/redhat-operators-nl9vn" Dec 11 14:20:37 crc kubenswrapper[5050]: I1211 14:20:37.020778 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nl9vn" Dec 11 14:20:37 crc kubenswrapper[5050]: I1211 14:20:37.243486 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nl9vn"] Dec 11 14:20:38 crc kubenswrapper[5050]: I1211 14:20:38.157300 5050 generic.go:334] "Generic (PLEG): container finished" podID="55349e7c-e3ae-496c-8a29-686e17df74af" containerID="44cbb2aa491114f176670a6b4a369f7a0571d64fbeb0bc6c51d84bee864ad966" exitCode=0 Dec 11 14:20:38 crc kubenswrapper[5050]: I1211 14:20:38.157367 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nl9vn" event={"ID":"55349e7c-e3ae-496c-8a29-686e17df74af","Type":"ContainerDied","Data":"44cbb2aa491114f176670a6b4a369f7a0571d64fbeb0bc6c51d84bee864ad966"} Dec 11 14:20:38 crc kubenswrapper[5050]: I1211 14:20:38.157598 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nl9vn" event={"ID":"55349e7c-e3ae-496c-8a29-686e17df74af","Type":"ContainerStarted","Data":"5931faeefb6e26949ba14f7d9b4c8f75f61d77980b1f14ad9015f7e99ddf0d78"} Dec 11 14:20:38 crc kubenswrapper[5050]: I1211 14:20:38.159575 5050 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 11 14:20:39 crc kubenswrapper[5050]: I1211 14:20:39.893448 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-26ljv"] Dec 11 14:20:39 crc kubenswrapper[5050]: I1211 14:20:39.895392 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-26ljv" Dec 11 14:20:39 crc kubenswrapper[5050]: I1211 14:20:39.901932 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-26ljv"] Dec 11 14:20:40 crc kubenswrapper[5050]: I1211 14:20:40.034847 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/47449e7f-c2a9-4e7c-a359-a16d2cab16eb-utilities\") pod \"redhat-marketplace-26ljv\" (UID: \"47449e7f-c2a9-4e7c-a359-a16d2cab16eb\") " pod="openshift-marketplace/redhat-marketplace-26ljv" Dec 11 14:20:40 crc kubenswrapper[5050]: I1211 14:20:40.034917 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mghp\" (UniqueName: \"kubernetes.io/projected/47449e7f-c2a9-4e7c-a359-a16d2cab16eb-kube-api-access-9mghp\") pod \"redhat-marketplace-26ljv\" (UID: \"47449e7f-c2a9-4e7c-a359-a16d2cab16eb\") " pod="openshift-marketplace/redhat-marketplace-26ljv" Dec 11 14:20:40 crc kubenswrapper[5050]: I1211 14:20:40.035227 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/47449e7f-c2a9-4e7c-a359-a16d2cab16eb-catalog-content\") pod \"redhat-marketplace-26ljv\" (UID: \"47449e7f-c2a9-4e7c-a359-a16d2cab16eb\") " pod="openshift-marketplace/redhat-marketplace-26ljv" Dec 11 14:20:40 crc kubenswrapper[5050]: I1211 14:20:40.137278 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/47449e7f-c2a9-4e7c-a359-a16d2cab16eb-catalog-content\") pod \"redhat-marketplace-26ljv\" (UID: \"47449e7f-c2a9-4e7c-a359-a16d2cab16eb\") " pod="openshift-marketplace/redhat-marketplace-26ljv" Dec 11 14:20:40 crc kubenswrapper[5050]: I1211 14:20:40.137357 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/47449e7f-c2a9-4e7c-a359-a16d2cab16eb-utilities\") pod \"redhat-marketplace-26ljv\" (UID: \"47449e7f-c2a9-4e7c-a359-a16d2cab16eb\") " pod="openshift-marketplace/redhat-marketplace-26ljv" Dec 11 14:20:40 crc kubenswrapper[5050]: I1211 14:20:40.137419 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9mghp\" (UniqueName: \"kubernetes.io/projected/47449e7f-c2a9-4e7c-a359-a16d2cab16eb-kube-api-access-9mghp\") pod \"redhat-marketplace-26ljv\" (UID: \"47449e7f-c2a9-4e7c-a359-a16d2cab16eb\") " pod="openshift-marketplace/redhat-marketplace-26ljv" Dec 11 14:20:40 crc kubenswrapper[5050]: I1211 14:20:40.137898 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/47449e7f-c2a9-4e7c-a359-a16d2cab16eb-catalog-content\") pod \"redhat-marketplace-26ljv\" (UID: \"47449e7f-c2a9-4e7c-a359-a16d2cab16eb\") " pod="openshift-marketplace/redhat-marketplace-26ljv" Dec 11 14:20:40 crc kubenswrapper[5050]: I1211 14:20:40.138265 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/47449e7f-c2a9-4e7c-a359-a16d2cab16eb-utilities\") pod \"redhat-marketplace-26ljv\" (UID: \"47449e7f-c2a9-4e7c-a359-a16d2cab16eb\") " pod="openshift-marketplace/redhat-marketplace-26ljv" Dec 11 14:20:40 crc kubenswrapper[5050]: I1211 14:20:40.159224 5050 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-9mghp\" (UniqueName: \"kubernetes.io/projected/47449e7f-c2a9-4e7c-a359-a16d2cab16eb-kube-api-access-9mghp\") pod \"redhat-marketplace-26ljv\" (UID: \"47449e7f-c2a9-4e7c-a359-a16d2cab16eb\") " pod="openshift-marketplace/redhat-marketplace-26ljv" Dec 11 14:20:40 crc kubenswrapper[5050]: I1211 14:20:40.174928 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nl9vn" event={"ID":"55349e7c-e3ae-496c-8a29-686e17df74af","Type":"ContainerStarted","Data":"77212be2d593450213e8410a1e632f5ea0a70231958ebade68046be57f2a6918"} Dec 11 14:20:40 crc kubenswrapper[5050]: I1211 14:20:40.217135 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-26ljv" Dec 11 14:20:40 crc kubenswrapper[5050]: I1211 14:20:40.477916 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-26ljv"] Dec 11 14:20:41 crc kubenswrapper[5050]: I1211 14:20:41.202901 5050 generic.go:334] "Generic (PLEG): container finished" podID="47449e7f-c2a9-4e7c-a359-a16d2cab16eb" containerID="9dfcd134457046ac27f80c2c2d55bca9f262d28846ae63617eeea3a099fabc5a" exitCode=0 Dec 11 14:20:41 crc kubenswrapper[5050]: I1211 14:20:41.203177 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-26ljv" event={"ID":"47449e7f-c2a9-4e7c-a359-a16d2cab16eb","Type":"ContainerDied","Data":"9dfcd134457046ac27f80c2c2d55bca9f262d28846ae63617eeea3a099fabc5a"} Dec 11 14:20:41 crc kubenswrapper[5050]: I1211 14:20:41.203237 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-26ljv" event={"ID":"47449e7f-c2a9-4e7c-a359-a16d2cab16eb","Type":"ContainerStarted","Data":"ba6df073404fb76332fe2552fce11e387d160d5c01a20c774c436437500b8481"} Dec 11 14:20:41 crc kubenswrapper[5050]: I1211 14:20:41.205343 5050 generic.go:334] "Generic (PLEG): container finished" podID="55349e7c-e3ae-496c-8a29-686e17df74af" containerID="77212be2d593450213e8410a1e632f5ea0a70231958ebade68046be57f2a6918" exitCode=0 Dec 11 14:20:41 crc kubenswrapper[5050]: I1211 14:20:41.205394 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nl9vn" event={"ID":"55349e7c-e3ae-496c-8a29-686e17df74af","Type":"ContainerDied","Data":"77212be2d593450213e8410a1e632f5ea0a70231958ebade68046be57f2a6918"} Dec 11 14:20:42 crc kubenswrapper[5050]: I1211 14:20:42.215394 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nl9vn" event={"ID":"55349e7c-e3ae-496c-8a29-686e17df74af","Type":"ContainerStarted","Data":"30374d78273b4aaa023299510a7387647eb5bce0a739d0b145b6bf7b0b014208"} Dec 11 14:20:42 crc kubenswrapper[5050]: I1211 14:20:42.238640 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-nl9vn" podStartSLOduration=2.684226822 podStartE2EDuration="6.238588297s" podCreationTimestamp="2025-12-11 14:20:36 +0000 UTC" firstStartedPulling="2025-12-11 14:20:38.159247892 +0000 UTC m=+1929.002970488" lastFinishedPulling="2025-12-11 14:20:41.713609357 +0000 UTC m=+1932.557331963" observedRunningTime="2025-12-11 14:20:42.234556158 +0000 UTC m=+1933.078278744" watchObservedRunningTime="2025-12-11 14:20:42.238588297 +0000 UTC m=+1933.082310883" Dec 11 14:20:43 crc kubenswrapper[5050]: I1211 14:20:43.228767 5050 generic.go:334] "Generic (PLEG): container finished" 
podID="47449e7f-c2a9-4e7c-a359-a16d2cab16eb" containerID="473388f8482557943ecb729d6f4311d9bb363bea2a741671b6167506a971d304" exitCode=0 Dec 11 14:20:43 crc kubenswrapper[5050]: I1211 14:20:43.228842 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-26ljv" event={"ID":"47449e7f-c2a9-4e7c-a359-a16d2cab16eb","Type":"ContainerDied","Data":"473388f8482557943ecb729d6f4311d9bb363bea2a741671b6167506a971d304"} Dec 11 14:20:44 crc kubenswrapper[5050]: I1211 14:20:44.239251 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-26ljv" event={"ID":"47449e7f-c2a9-4e7c-a359-a16d2cab16eb","Type":"ContainerStarted","Data":"a49b4b74c2aa0fef93b4c4ad92c9b2b14d5df7c9cdfaa0a6288fe12aca6aa08a"} Dec 11 14:20:44 crc kubenswrapper[5050]: I1211 14:20:44.260111 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-26ljv" podStartSLOduration=2.470915657 podStartE2EDuration="5.260088479s" podCreationTimestamp="2025-12-11 14:20:39 +0000 UTC" firstStartedPulling="2025-12-11 14:20:41.207139057 +0000 UTC m=+1932.050861643" lastFinishedPulling="2025-12-11 14:20:43.996311879 +0000 UTC m=+1934.840034465" observedRunningTime="2025-12-11 14:20:44.255207157 +0000 UTC m=+1935.098929763" watchObservedRunningTime="2025-12-11 14:20:44.260088479 +0000 UTC m=+1935.103811065" Dec 11 14:20:47 crc kubenswrapper[5050]: I1211 14:20:47.021166 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-nl9vn" Dec 11 14:20:47 crc kubenswrapper[5050]: I1211 14:20:47.022243 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-nl9vn" Dec 11 14:20:47 crc kubenswrapper[5050]: I1211 14:20:47.546731 5050 scope.go:117] "RemoveContainer" containerID="a6b9413520fd75e49a892cc5dba8a8fc1b927a604949a4d2b85854c65912d95e" Dec 11 14:20:47 crc kubenswrapper[5050]: E1211 14:20:47.547342 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 14:20:48 crc kubenswrapper[5050]: I1211 14:20:48.073052 5050 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-nl9vn" podUID="55349e7c-e3ae-496c-8a29-686e17df74af" containerName="registry-server" probeResult="failure" output=< Dec 11 14:20:48 crc kubenswrapper[5050]: timeout: failed to connect service ":50051" within 1s Dec 11 14:20:48 crc kubenswrapper[5050]: > Dec 11 14:20:50 crc kubenswrapper[5050]: I1211 14:20:50.217298 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-26ljv" Dec 11 14:20:50 crc kubenswrapper[5050]: I1211 14:20:50.218545 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-26ljv" Dec 11 14:20:50 crc kubenswrapper[5050]: I1211 14:20:50.259173 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-26ljv" Dec 11 14:20:50 crc kubenswrapper[5050]: I1211 14:20:50.328927 5050 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-26ljv" Dec 11 14:20:50 crc kubenswrapper[5050]: I1211 14:20:50.495396 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-26ljv"] Dec 11 14:20:52 crc kubenswrapper[5050]: I1211 14:20:52.304738 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-26ljv" podUID="47449e7f-c2a9-4e7c-a359-a16d2cab16eb" containerName="registry-server" containerID="cri-o://a49b4b74c2aa0fef93b4c4ad92c9b2b14d5df7c9cdfaa0a6288fe12aca6aa08a" gracePeriod=2 Dec 11 14:20:53 crc kubenswrapper[5050]: I1211 14:20:53.318301 5050 generic.go:334] "Generic (PLEG): container finished" podID="47449e7f-c2a9-4e7c-a359-a16d2cab16eb" containerID="a49b4b74c2aa0fef93b4c4ad92c9b2b14d5df7c9cdfaa0a6288fe12aca6aa08a" exitCode=0 Dec 11 14:20:53 crc kubenswrapper[5050]: I1211 14:20:53.318423 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-26ljv" event={"ID":"47449e7f-c2a9-4e7c-a359-a16d2cab16eb","Type":"ContainerDied","Data":"a49b4b74c2aa0fef93b4c4ad92c9b2b14d5df7c9cdfaa0a6288fe12aca6aa08a"} Dec 11 14:20:53 crc kubenswrapper[5050]: I1211 14:20:53.825830 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-26ljv" Dec 11 14:20:54 crc kubenswrapper[5050]: I1211 14:20:54.002193 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/47449e7f-c2a9-4e7c-a359-a16d2cab16eb-catalog-content\") pod \"47449e7f-c2a9-4e7c-a359-a16d2cab16eb\" (UID: \"47449e7f-c2a9-4e7c-a359-a16d2cab16eb\") " Dec 11 14:20:54 crc kubenswrapper[5050]: I1211 14:20:54.002475 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/47449e7f-c2a9-4e7c-a359-a16d2cab16eb-utilities\") pod \"47449e7f-c2a9-4e7c-a359-a16d2cab16eb\" (UID: \"47449e7f-c2a9-4e7c-a359-a16d2cab16eb\") " Dec 11 14:20:54 crc kubenswrapper[5050]: I1211 14:20:54.002621 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9mghp\" (UniqueName: \"kubernetes.io/projected/47449e7f-c2a9-4e7c-a359-a16d2cab16eb-kube-api-access-9mghp\") pod \"47449e7f-c2a9-4e7c-a359-a16d2cab16eb\" (UID: \"47449e7f-c2a9-4e7c-a359-a16d2cab16eb\") " Dec 11 14:20:54 crc kubenswrapper[5050]: I1211 14:20:54.003466 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/47449e7f-c2a9-4e7c-a359-a16d2cab16eb-utilities" (OuterVolumeSpecName: "utilities") pod "47449e7f-c2a9-4e7c-a359-a16d2cab16eb" (UID: "47449e7f-c2a9-4e7c-a359-a16d2cab16eb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 14:20:54 crc kubenswrapper[5050]: I1211 14:20:54.007948 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47449e7f-c2a9-4e7c-a359-a16d2cab16eb-kube-api-access-9mghp" (OuterVolumeSpecName: "kube-api-access-9mghp") pod "47449e7f-c2a9-4e7c-a359-a16d2cab16eb" (UID: "47449e7f-c2a9-4e7c-a359-a16d2cab16eb"). InnerVolumeSpecName "kube-api-access-9mghp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 14:20:54 crc kubenswrapper[5050]: I1211 14:20:54.025787 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/47449e7f-c2a9-4e7c-a359-a16d2cab16eb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "47449e7f-c2a9-4e7c-a359-a16d2cab16eb" (UID: "47449e7f-c2a9-4e7c-a359-a16d2cab16eb"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 14:20:54 crc kubenswrapper[5050]: I1211 14:20:54.103969 5050 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/47449e7f-c2a9-4e7c-a359-a16d2cab16eb-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 14:20:54 crc kubenswrapper[5050]: I1211 14:20:54.104028 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9mghp\" (UniqueName: \"kubernetes.io/projected/47449e7f-c2a9-4e7c-a359-a16d2cab16eb-kube-api-access-9mghp\") on node \"crc\" DevicePath \"\"" Dec 11 14:20:54 crc kubenswrapper[5050]: I1211 14:20:54.104038 5050 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/47449e7f-c2a9-4e7c-a359-a16d2cab16eb-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 14:20:54 crc kubenswrapper[5050]: I1211 14:20:54.328030 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-26ljv" event={"ID":"47449e7f-c2a9-4e7c-a359-a16d2cab16eb","Type":"ContainerDied","Data":"ba6df073404fb76332fe2552fce11e387d160d5c01a20c774c436437500b8481"} Dec 11 14:20:54 crc kubenswrapper[5050]: I1211 14:20:54.328098 5050 scope.go:117] "RemoveContainer" containerID="a49b4b74c2aa0fef93b4c4ad92c9b2b14d5df7c9cdfaa0a6288fe12aca6aa08a" Dec 11 14:20:54 crc kubenswrapper[5050]: I1211 14:20:54.328160 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-26ljv" Dec 11 14:20:54 crc kubenswrapper[5050]: I1211 14:20:54.347638 5050 scope.go:117] "RemoveContainer" containerID="473388f8482557943ecb729d6f4311d9bb363bea2a741671b6167506a971d304" Dec 11 14:20:54 crc kubenswrapper[5050]: I1211 14:20:54.366320 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-26ljv"] Dec 11 14:20:54 crc kubenswrapper[5050]: I1211 14:20:54.372531 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-26ljv"] Dec 11 14:20:54 crc kubenswrapper[5050]: I1211 14:20:54.390243 5050 scope.go:117] "RemoveContainer" containerID="9dfcd134457046ac27f80c2c2d55bca9f262d28846ae63617eeea3a099fabc5a" Dec 11 14:20:55 crc kubenswrapper[5050]: I1211 14:20:55.556490 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="47449e7f-c2a9-4e7c-a359-a16d2cab16eb" path="/var/lib/kubelet/pods/47449e7f-c2a9-4e7c-a359-a16d2cab16eb/volumes" Dec 11 14:20:57 crc kubenswrapper[5050]: I1211 14:20:57.063400 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-nl9vn" Dec 11 14:20:57 crc kubenswrapper[5050]: I1211 14:20:57.115708 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-nl9vn" Dec 11 14:20:57 crc kubenswrapper[5050]: I1211 14:20:57.302171 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-nl9vn"] Dec 11 14:20:58 crc kubenswrapper[5050]: I1211 14:20:58.360077 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-nl9vn" podUID="55349e7c-e3ae-496c-8a29-686e17df74af" containerName="registry-server" containerID="cri-o://30374d78273b4aaa023299510a7387647eb5bce0a739d0b145b6bf7b0b014208" gracePeriod=2 Dec 11 14:20:58 crc kubenswrapper[5050]: I1211 14:20:58.750883 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nl9vn" Dec 11 14:20:58 crc kubenswrapper[5050]: I1211 14:20:58.873641 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55349e7c-e3ae-496c-8a29-686e17df74af-catalog-content\") pod \"55349e7c-e3ae-496c-8a29-686e17df74af\" (UID: \"55349e7c-e3ae-496c-8a29-686e17df74af\") " Dec 11 14:20:58 crc kubenswrapper[5050]: I1211 14:20:58.874923 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tzkrj\" (UniqueName: \"kubernetes.io/projected/55349e7c-e3ae-496c-8a29-686e17df74af-kube-api-access-tzkrj\") pod \"55349e7c-e3ae-496c-8a29-686e17df74af\" (UID: \"55349e7c-e3ae-496c-8a29-686e17df74af\") " Dec 11 14:20:58 crc kubenswrapper[5050]: I1211 14:20:58.875092 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55349e7c-e3ae-496c-8a29-686e17df74af-utilities\") pod \"55349e7c-e3ae-496c-8a29-686e17df74af\" (UID: \"55349e7c-e3ae-496c-8a29-686e17df74af\") " Dec 11 14:20:58 crc kubenswrapper[5050]: I1211 14:20:58.876026 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/55349e7c-e3ae-496c-8a29-686e17df74af-utilities" (OuterVolumeSpecName: "utilities") pod "55349e7c-e3ae-496c-8a29-686e17df74af" (UID: "55349e7c-e3ae-496c-8a29-686e17df74af"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 14:20:58 crc kubenswrapper[5050]: I1211 14:20:58.880221 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55349e7c-e3ae-496c-8a29-686e17df74af-kube-api-access-tzkrj" (OuterVolumeSpecName: "kube-api-access-tzkrj") pod "55349e7c-e3ae-496c-8a29-686e17df74af" (UID: "55349e7c-e3ae-496c-8a29-686e17df74af"). InnerVolumeSpecName "kube-api-access-tzkrj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 14:20:58 crc kubenswrapper[5050]: I1211 14:20:58.976770 5050 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55349e7c-e3ae-496c-8a29-686e17df74af-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 14:20:58 crc kubenswrapper[5050]: I1211 14:20:58.976822 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tzkrj\" (UniqueName: \"kubernetes.io/projected/55349e7c-e3ae-496c-8a29-686e17df74af-kube-api-access-tzkrj\") on node \"crc\" DevicePath \"\"" Dec 11 14:20:58 crc kubenswrapper[5050]: I1211 14:20:58.995299 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/55349e7c-e3ae-496c-8a29-686e17df74af-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "55349e7c-e3ae-496c-8a29-686e17df74af" (UID: "55349e7c-e3ae-496c-8a29-686e17df74af"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 14:20:59 crc kubenswrapper[5050]: I1211 14:20:59.077730 5050 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55349e7c-e3ae-496c-8a29-686e17df74af-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 14:20:59 crc kubenswrapper[5050]: I1211 14:20:59.369557 5050 generic.go:334] "Generic (PLEG): container finished" podID="55349e7c-e3ae-496c-8a29-686e17df74af" containerID="30374d78273b4aaa023299510a7387647eb5bce0a739d0b145b6bf7b0b014208" exitCode=0 Dec 11 14:20:59 crc kubenswrapper[5050]: I1211 14:20:59.369615 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nl9vn" event={"ID":"55349e7c-e3ae-496c-8a29-686e17df74af","Type":"ContainerDied","Data":"30374d78273b4aaa023299510a7387647eb5bce0a739d0b145b6bf7b0b014208"} Dec 11 14:20:59 crc kubenswrapper[5050]: I1211 14:20:59.369646 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nl9vn" event={"ID":"55349e7c-e3ae-496c-8a29-686e17df74af","Type":"ContainerDied","Data":"5931faeefb6e26949ba14f7d9b4c8f75f61d77980b1f14ad9015f7e99ddf0d78"} Dec 11 14:20:59 crc kubenswrapper[5050]: I1211 14:20:59.369663 5050 scope.go:117] "RemoveContainer" containerID="30374d78273b4aaa023299510a7387647eb5bce0a739d0b145b6bf7b0b014208" Dec 11 14:20:59 crc kubenswrapper[5050]: I1211 14:20:59.369868 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-nl9vn" Dec 11 14:20:59 crc kubenswrapper[5050]: I1211 14:20:59.412912 5050 scope.go:117] "RemoveContainer" containerID="77212be2d593450213e8410a1e632f5ea0a70231958ebade68046be57f2a6918" Dec 11 14:20:59 crc kubenswrapper[5050]: I1211 14:20:59.419684 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-nl9vn"] Dec 11 14:20:59 crc kubenswrapper[5050]: I1211 14:20:59.427694 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-nl9vn"] Dec 11 14:20:59 crc kubenswrapper[5050]: I1211 14:20:59.435841 5050 scope.go:117] "RemoveContainer" containerID="44cbb2aa491114f176670a6b4a369f7a0571d64fbeb0bc6c51d84bee864ad966" Dec 11 14:20:59 crc kubenswrapper[5050]: I1211 14:20:59.458231 5050 scope.go:117] "RemoveContainer" containerID="30374d78273b4aaa023299510a7387647eb5bce0a739d0b145b6bf7b0b014208" Dec 11 14:20:59 crc kubenswrapper[5050]: E1211 14:20:59.458897 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"30374d78273b4aaa023299510a7387647eb5bce0a739d0b145b6bf7b0b014208\": container with ID starting with 30374d78273b4aaa023299510a7387647eb5bce0a739d0b145b6bf7b0b014208 not found: ID does not exist" containerID="30374d78273b4aaa023299510a7387647eb5bce0a739d0b145b6bf7b0b014208" Dec 11 14:20:59 crc kubenswrapper[5050]: I1211 14:20:59.458967 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"30374d78273b4aaa023299510a7387647eb5bce0a739d0b145b6bf7b0b014208"} err="failed to get container status \"30374d78273b4aaa023299510a7387647eb5bce0a739d0b145b6bf7b0b014208\": rpc error: code = NotFound desc = could not find container \"30374d78273b4aaa023299510a7387647eb5bce0a739d0b145b6bf7b0b014208\": container with ID starting with 30374d78273b4aaa023299510a7387647eb5bce0a739d0b145b6bf7b0b014208 not found: ID does not exist" Dec 11 14:20:59 crc kubenswrapper[5050]: I1211 14:20:59.459005 5050 scope.go:117] "RemoveContainer" containerID="77212be2d593450213e8410a1e632f5ea0a70231958ebade68046be57f2a6918" Dec 11 14:20:59 crc kubenswrapper[5050]: E1211 14:20:59.459572 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"77212be2d593450213e8410a1e632f5ea0a70231958ebade68046be57f2a6918\": container with ID starting with 77212be2d593450213e8410a1e632f5ea0a70231958ebade68046be57f2a6918 not found: ID does not exist" containerID="77212be2d593450213e8410a1e632f5ea0a70231958ebade68046be57f2a6918" Dec 11 14:20:59 crc kubenswrapper[5050]: I1211 14:20:59.459625 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"77212be2d593450213e8410a1e632f5ea0a70231958ebade68046be57f2a6918"} err="failed to get container status \"77212be2d593450213e8410a1e632f5ea0a70231958ebade68046be57f2a6918\": rpc error: code = NotFound desc = could not find container \"77212be2d593450213e8410a1e632f5ea0a70231958ebade68046be57f2a6918\": container with ID starting with 77212be2d593450213e8410a1e632f5ea0a70231958ebade68046be57f2a6918 not found: ID does not exist" Dec 11 14:20:59 crc kubenswrapper[5050]: I1211 14:20:59.459656 5050 scope.go:117] "RemoveContainer" containerID="44cbb2aa491114f176670a6b4a369f7a0571d64fbeb0bc6c51d84bee864ad966" Dec 11 14:20:59 crc kubenswrapper[5050]: E1211 14:20:59.460036 5050 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"44cbb2aa491114f176670a6b4a369f7a0571d64fbeb0bc6c51d84bee864ad966\": container with ID starting with 44cbb2aa491114f176670a6b4a369f7a0571d64fbeb0bc6c51d84bee864ad966 not found: ID does not exist" containerID="44cbb2aa491114f176670a6b4a369f7a0571d64fbeb0bc6c51d84bee864ad966" Dec 11 14:20:59 crc kubenswrapper[5050]: I1211 14:20:59.460064 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"44cbb2aa491114f176670a6b4a369f7a0571d64fbeb0bc6c51d84bee864ad966"} err="failed to get container status \"44cbb2aa491114f176670a6b4a369f7a0571d64fbeb0bc6c51d84bee864ad966\": rpc error: code = NotFound desc = could not find container \"44cbb2aa491114f176670a6b4a369f7a0571d64fbeb0bc6c51d84bee864ad966\": container with ID starting with 44cbb2aa491114f176670a6b4a369f7a0571d64fbeb0bc6c51d84bee864ad966 not found: ID does not exist" Dec 11 14:20:59 crc kubenswrapper[5050]: I1211 14:20:59.558545 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="55349e7c-e3ae-496c-8a29-686e17df74af" path="/var/lib/kubelet/pods/55349e7c-e3ae-496c-8a29-686e17df74af/volumes" Dec 11 14:21:02 crc kubenswrapper[5050]: I1211 14:21:02.546086 5050 scope.go:117] "RemoveContainer" containerID="a6b9413520fd75e49a892cc5dba8a8fc1b927a604949a4d2b85854c65912d95e" Dec 11 14:21:02 crc kubenswrapper[5050]: E1211 14:21:02.546859 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 14:21:16 crc kubenswrapper[5050]: I1211 14:21:16.546248 5050 scope.go:117] "RemoveContainer" containerID="a6b9413520fd75e49a892cc5dba8a8fc1b927a604949a4d2b85854c65912d95e" Dec 11 14:21:16 crc kubenswrapper[5050]: E1211 14:21:16.547192 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 14:21:29 crc kubenswrapper[5050]: I1211 14:21:29.551689 5050 scope.go:117] "RemoveContainer" containerID="a6b9413520fd75e49a892cc5dba8a8fc1b927a604949a4d2b85854c65912d95e" Dec 11 14:21:29 crc kubenswrapper[5050]: E1211 14:21:29.553105 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 14:21:42 crc kubenswrapper[5050]: I1211 14:21:42.545805 5050 scope.go:117] "RemoveContainer" containerID="a6b9413520fd75e49a892cc5dba8a8fc1b927a604949a4d2b85854c65912d95e" Dec 11 14:21:42 crc kubenswrapper[5050]: I1211 14:21:42.784083 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" event={"ID":"7e849b2e-7cd7-4e49-acd2-deab139c699a","Type":"ContainerStarted","Data":"ef76d01a8e4a52680360d68b90caf4e1a76a0862e277fd6a1a982900a1430822"} Dec 11 14:22:38 crc kubenswrapper[5050]: I1211 14:22:38.217110 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-nlz9w"] Dec 11 14:22:38 crc kubenswrapper[5050]: E1211 14:22:38.218302 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55349e7c-e3ae-496c-8a29-686e17df74af" containerName="registry-server" Dec 11 14:22:38 crc kubenswrapper[5050]: I1211 14:22:38.218320 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="55349e7c-e3ae-496c-8a29-686e17df74af" containerName="registry-server" Dec 11 14:22:38 crc kubenswrapper[5050]: E1211 14:22:38.218337 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47449e7f-c2a9-4e7c-a359-a16d2cab16eb" containerName="extract-utilities" Dec 11 14:22:38 crc kubenswrapper[5050]: I1211 14:22:38.218345 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="47449e7f-c2a9-4e7c-a359-a16d2cab16eb" containerName="extract-utilities" Dec 11 14:22:38 crc kubenswrapper[5050]: E1211 14:22:38.218361 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47449e7f-c2a9-4e7c-a359-a16d2cab16eb" containerName="registry-server" Dec 11 14:22:38 crc kubenswrapper[5050]: I1211 14:22:38.218369 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="47449e7f-c2a9-4e7c-a359-a16d2cab16eb" containerName="registry-server" Dec 11 14:22:38 crc kubenswrapper[5050]: E1211 14:22:38.218391 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55349e7c-e3ae-496c-8a29-686e17df74af" containerName="extract-content" Dec 11 14:22:38 crc kubenswrapper[5050]: I1211 14:22:38.218420 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="55349e7c-e3ae-496c-8a29-686e17df74af" containerName="extract-content" Dec 11 14:22:38 crc kubenswrapper[5050]: E1211 14:22:38.218442 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55349e7c-e3ae-496c-8a29-686e17df74af" containerName="extract-utilities" Dec 11 14:22:38 crc kubenswrapper[5050]: I1211 14:22:38.218449 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="55349e7c-e3ae-496c-8a29-686e17df74af" containerName="extract-utilities" Dec 11 14:22:38 crc kubenswrapper[5050]: E1211 14:22:38.218462 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47449e7f-c2a9-4e7c-a359-a16d2cab16eb" containerName="extract-content" Dec 11 14:22:38 crc kubenswrapper[5050]: I1211 14:22:38.218468 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="47449e7f-c2a9-4e7c-a359-a16d2cab16eb" containerName="extract-content" Dec 11 14:22:38 crc kubenswrapper[5050]: I1211 14:22:38.218849 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="55349e7c-e3ae-496c-8a29-686e17df74af" containerName="registry-server" Dec 11 14:22:38 crc kubenswrapper[5050]: I1211 14:22:38.218869 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="47449e7f-c2a9-4e7c-a359-a16d2cab16eb" containerName="registry-server" Dec 11 14:22:38 crc kubenswrapper[5050]: I1211 14:22:38.220512 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-nlz9w" Dec 11 14:22:38 crc kubenswrapper[5050]: I1211 14:22:38.229223 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-nlz9w"] Dec 11 14:22:38 crc kubenswrapper[5050]: I1211 14:22:38.350974 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a566c35f-a562-4cb0-bae9-4f82171b5770-utilities\") pod \"certified-operators-nlz9w\" (UID: \"a566c35f-a562-4cb0-bae9-4f82171b5770\") " pod="openshift-marketplace/certified-operators-nlz9w" Dec 11 14:22:38 crc kubenswrapper[5050]: I1211 14:22:38.351072 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a566c35f-a562-4cb0-bae9-4f82171b5770-catalog-content\") pod \"certified-operators-nlz9w\" (UID: \"a566c35f-a562-4cb0-bae9-4f82171b5770\") " pod="openshift-marketplace/certified-operators-nlz9w" Dec 11 14:22:38 crc kubenswrapper[5050]: I1211 14:22:38.351132 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-js242\" (UniqueName: \"kubernetes.io/projected/a566c35f-a562-4cb0-bae9-4f82171b5770-kube-api-access-js242\") pod \"certified-operators-nlz9w\" (UID: \"a566c35f-a562-4cb0-bae9-4f82171b5770\") " pod="openshift-marketplace/certified-operators-nlz9w" Dec 11 14:22:38 crc kubenswrapper[5050]: I1211 14:22:38.452830 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a566c35f-a562-4cb0-bae9-4f82171b5770-utilities\") pod \"certified-operators-nlz9w\" (UID: \"a566c35f-a562-4cb0-bae9-4f82171b5770\") " pod="openshift-marketplace/certified-operators-nlz9w" Dec 11 14:22:38 crc kubenswrapper[5050]: I1211 14:22:38.452205 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a566c35f-a562-4cb0-bae9-4f82171b5770-utilities\") pod \"certified-operators-nlz9w\" (UID: \"a566c35f-a562-4cb0-bae9-4f82171b5770\") " pod="openshift-marketplace/certified-operators-nlz9w" Dec 11 14:22:38 crc kubenswrapper[5050]: I1211 14:22:38.453329 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a566c35f-a562-4cb0-bae9-4f82171b5770-catalog-content\") pod \"certified-operators-nlz9w\" (UID: \"a566c35f-a562-4cb0-bae9-4f82171b5770\") " pod="openshift-marketplace/certified-operators-nlz9w" Dec 11 14:22:38 crc kubenswrapper[5050]: I1211 14:22:38.453375 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a566c35f-a562-4cb0-bae9-4f82171b5770-catalog-content\") pod \"certified-operators-nlz9w\" (UID: \"a566c35f-a562-4cb0-bae9-4f82171b5770\") " pod="openshift-marketplace/certified-operators-nlz9w" Dec 11 14:22:38 crc kubenswrapper[5050]: I1211 14:22:38.453452 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-js242\" (UniqueName: \"kubernetes.io/projected/a566c35f-a562-4cb0-bae9-4f82171b5770-kube-api-access-js242\") pod \"certified-operators-nlz9w\" (UID: \"a566c35f-a562-4cb0-bae9-4f82171b5770\") " pod="openshift-marketplace/certified-operators-nlz9w" Dec 11 14:22:38 crc kubenswrapper[5050]: I1211 14:22:38.482688 5050 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-js242\" (UniqueName: \"kubernetes.io/projected/a566c35f-a562-4cb0-bae9-4f82171b5770-kube-api-access-js242\") pod \"certified-operators-nlz9w\" (UID: \"a566c35f-a562-4cb0-bae9-4f82171b5770\") " pod="openshift-marketplace/certified-operators-nlz9w" Dec 11 14:22:38 crc kubenswrapper[5050]: I1211 14:22:38.546035 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nlz9w" Dec 11 14:22:38 crc kubenswrapper[5050]: I1211 14:22:38.819887 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-nlz9w"] Dec 11 14:22:39 crc kubenswrapper[5050]: I1211 14:22:39.300842 5050 generic.go:334] "Generic (PLEG): container finished" podID="a566c35f-a562-4cb0-bae9-4f82171b5770" containerID="6ed24476da58fe36d95e962c2b2ff4e1e032e0c8abcb0cd7fe6021df764593e2" exitCode=0 Dec 11 14:22:39 crc kubenswrapper[5050]: I1211 14:22:39.300903 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nlz9w" event={"ID":"a566c35f-a562-4cb0-bae9-4f82171b5770","Type":"ContainerDied","Data":"6ed24476da58fe36d95e962c2b2ff4e1e032e0c8abcb0cd7fe6021df764593e2"} Dec 11 14:22:39 crc kubenswrapper[5050]: I1211 14:22:39.300937 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nlz9w" event={"ID":"a566c35f-a562-4cb0-bae9-4f82171b5770","Type":"ContainerStarted","Data":"cfde3e4ae94a4a0bda09ec1580fe00f19be38d439fd61a7fa4108c50b0d5c5b4"} Dec 11 14:22:41 crc kubenswrapper[5050]: I1211 14:22:41.323307 5050 generic.go:334] "Generic (PLEG): container finished" podID="a566c35f-a562-4cb0-bae9-4f82171b5770" containerID="5d7f7f1a1008e3b0a92a62a77a0cabb98723814a3e07a9d9bbadaa2389e53136" exitCode=0 Dec 11 14:22:41 crc kubenswrapper[5050]: I1211 14:22:41.323364 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nlz9w" event={"ID":"a566c35f-a562-4cb0-bae9-4f82171b5770","Type":"ContainerDied","Data":"5d7f7f1a1008e3b0a92a62a77a0cabb98723814a3e07a9d9bbadaa2389e53136"} Dec 11 14:22:42 crc kubenswrapper[5050]: I1211 14:22:42.335741 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nlz9w" event={"ID":"a566c35f-a562-4cb0-bae9-4f82171b5770","Type":"ContainerStarted","Data":"99819a7f5a74302e3f52d1b0a5bcf26892797ceb36800f93ba80d56edcc5baef"} Dec 11 14:22:42 crc kubenswrapper[5050]: I1211 14:22:42.356613 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-nlz9w" podStartSLOduration=1.643788225 podStartE2EDuration="4.35658828s" podCreationTimestamp="2025-12-11 14:22:38 +0000 UTC" firstStartedPulling="2025-12-11 14:22:39.303086021 +0000 UTC m=+2050.146808607" lastFinishedPulling="2025-12-11 14:22:42.015886056 +0000 UTC m=+2052.859608662" observedRunningTime="2025-12-11 14:22:42.353296702 +0000 UTC m=+2053.197019288" watchObservedRunningTime="2025-12-11 14:22:42.35658828 +0000 UTC m=+2053.200310856" Dec 11 14:22:48 crc kubenswrapper[5050]: I1211 14:22:48.546676 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-nlz9w" Dec 11 14:22:48 crc kubenswrapper[5050]: I1211 14:22:48.547362 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-nlz9w" Dec 11 14:22:48 crc kubenswrapper[5050]: I1211 14:22:48.595039 5050 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-nlz9w" Dec 11 14:22:49 crc kubenswrapper[5050]: I1211 14:22:49.444770 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-nlz9w" Dec 11 14:22:49 crc kubenswrapper[5050]: I1211 14:22:49.498502 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-nlz9w"] Dec 11 14:22:51 crc kubenswrapper[5050]: I1211 14:22:51.416278 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-nlz9w" podUID="a566c35f-a562-4cb0-bae9-4f82171b5770" containerName="registry-server" containerID="cri-o://99819a7f5a74302e3f52d1b0a5bcf26892797ceb36800f93ba80d56edcc5baef" gracePeriod=2 Dec 11 14:22:51 crc kubenswrapper[5050]: I1211 14:22:51.843632 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nlz9w" Dec 11 14:22:51 crc kubenswrapper[5050]: I1211 14:22:51.954865 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a566c35f-a562-4cb0-bae9-4f82171b5770-utilities\") pod \"a566c35f-a562-4cb0-bae9-4f82171b5770\" (UID: \"a566c35f-a562-4cb0-bae9-4f82171b5770\") " Dec 11 14:22:51 crc kubenswrapper[5050]: I1211 14:22:51.954925 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-js242\" (UniqueName: \"kubernetes.io/projected/a566c35f-a562-4cb0-bae9-4f82171b5770-kube-api-access-js242\") pod \"a566c35f-a562-4cb0-bae9-4f82171b5770\" (UID: \"a566c35f-a562-4cb0-bae9-4f82171b5770\") " Dec 11 14:22:51 crc kubenswrapper[5050]: I1211 14:22:51.955092 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a566c35f-a562-4cb0-bae9-4f82171b5770-catalog-content\") pod \"a566c35f-a562-4cb0-bae9-4f82171b5770\" (UID: \"a566c35f-a562-4cb0-bae9-4f82171b5770\") " Dec 11 14:22:51 crc kubenswrapper[5050]: I1211 14:22:51.956243 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a566c35f-a562-4cb0-bae9-4f82171b5770-utilities" (OuterVolumeSpecName: "utilities") pod "a566c35f-a562-4cb0-bae9-4f82171b5770" (UID: "a566c35f-a562-4cb0-bae9-4f82171b5770"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 14:22:51 crc kubenswrapper[5050]: I1211 14:22:51.962636 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a566c35f-a562-4cb0-bae9-4f82171b5770-kube-api-access-js242" (OuterVolumeSpecName: "kube-api-access-js242") pod "a566c35f-a562-4cb0-bae9-4f82171b5770" (UID: "a566c35f-a562-4cb0-bae9-4f82171b5770"). InnerVolumeSpecName "kube-api-access-js242". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 14:22:52 crc kubenswrapper[5050]: I1211 14:22:52.007558 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a566c35f-a562-4cb0-bae9-4f82171b5770-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a566c35f-a562-4cb0-bae9-4f82171b5770" (UID: "a566c35f-a562-4cb0-bae9-4f82171b5770"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 14:22:52 crc kubenswrapper[5050]: I1211 14:22:52.057718 5050 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a566c35f-a562-4cb0-bae9-4f82171b5770-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 14:22:52 crc kubenswrapper[5050]: I1211 14:22:52.058226 5050 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a566c35f-a562-4cb0-bae9-4f82171b5770-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 14:22:52 crc kubenswrapper[5050]: I1211 14:22:52.058240 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-js242\" (UniqueName: \"kubernetes.io/projected/a566c35f-a562-4cb0-bae9-4f82171b5770-kube-api-access-js242\") on node \"crc\" DevicePath \"\"" Dec 11 14:22:52 crc kubenswrapper[5050]: I1211 14:22:52.428051 5050 generic.go:334] "Generic (PLEG): container finished" podID="a566c35f-a562-4cb0-bae9-4f82171b5770" containerID="99819a7f5a74302e3f52d1b0a5bcf26892797ceb36800f93ba80d56edcc5baef" exitCode=0 Dec 11 14:22:52 crc kubenswrapper[5050]: I1211 14:22:52.428119 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nlz9w" event={"ID":"a566c35f-a562-4cb0-bae9-4f82171b5770","Type":"ContainerDied","Data":"99819a7f5a74302e3f52d1b0a5bcf26892797ceb36800f93ba80d56edcc5baef"} Dec 11 14:22:52 crc kubenswrapper[5050]: I1211 14:22:52.428154 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nlz9w" event={"ID":"a566c35f-a562-4cb0-bae9-4f82171b5770","Type":"ContainerDied","Data":"cfde3e4ae94a4a0bda09ec1580fe00f19be38d439fd61a7fa4108c50b0d5c5b4"} Dec 11 14:22:52 crc kubenswrapper[5050]: I1211 14:22:52.428175 5050 scope.go:117] "RemoveContainer" containerID="99819a7f5a74302e3f52d1b0a5bcf26892797ceb36800f93ba80d56edcc5baef" Dec 11 14:22:52 crc kubenswrapper[5050]: I1211 14:22:52.428179 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-nlz9w" Dec 11 14:22:52 crc kubenswrapper[5050]: I1211 14:22:52.460835 5050 scope.go:117] "RemoveContainer" containerID="5d7f7f1a1008e3b0a92a62a77a0cabb98723814a3e07a9d9bbadaa2389e53136" Dec 11 14:22:52 crc kubenswrapper[5050]: I1211 14:22:52.467287 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-nlz9w"] Dec 11 14:22:52 crc kubenswrapper[5050]: I1211 14:22:52.480326 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-nlz9w"] Dec 11 14:22:52 crc kubenswrapper[5050]: I1211 14:22:52.486295 5050 scope.go:117] "RemoveContainer" containerID="6ed24476da58fe36d95e962c2b2ff4e1e032e0c8abcb0cd7fe6021df764593e2" Dec 11 14:22:52 crc kubenswrapper[5050]: I1211 14:22:52.510794 5050 scope.go:117] "RemoveContainer" containerID="99819a7f5a74302e3f52d1b0a5bcf26892797ceb36800f93ba80d56edcc5baef" Dec 11 14:22:52 crc kubenswrapper[5050]: E1211 14:22:52.511444 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"99819a7f5a74302e3f52d1b0a5bcf26892797ceb36800f93ba80d56edcc5baef\": container with ID starting with 99819a7f5a74302e3f52d1b0a5bcf26892797ceb36800f93ba80d56edcc5baef not found: ID does not exist" containerID="99819a7f5a74302e3f52d1b0a5bcf26892797ceb36800f93ba80d56edcc5baef" Dec 11 14:22:52 crc kubenswrapper[5050]: I1211 14:22:52.511523 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"99819a7f5a74302e3f52d1b0a5bcf26892797ceb36800f93ba80d56edcc5baef"} err="failed to get container status \"99819a7f5a74302e3f52d1b0a5bcf26892797ceb36800f93ba80d56edcc5baef\": rpc error: code = NotFound desc = could not find container \"99819a7f5a74302e3f52d1b0a5bcf26892797ceb36800f93ba80d56edcc5baef\": container with ID starting with 99819a7f5a74302e3f52d1b0a5bcf26892797ceb36800f93ba80d56edcc5baef not found: ID does not exist" Dec 11 14:22:52 crc kubenswrapper[5050]: I1211 14:22:52.511565 5050 scope.go:117] "RemoveContainer" containerID="5d7f7f1a1008e3b0a92a62a77a0cabb98723814a3e07a9d9bbadaa2389e53136" Dec 11 14:22:52 crc kubenswrapper[5050]: E1211 14:22:52.512066 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5d7f7f1a1008e3b0a92a62a77a0cabb98723814a3e07a9d9bbadaa2389e53136\": container with ID starting with 5d7f7f1a1008e3b0a92a62a77a0cabb98723814a3e07a9d9bbadaa2389e53136 not found: ID does not exist" containerID="5d7f7f1a1008e3b0a92a62a77a0cabb98723814a3e07a9d9bbadaa2389e53136" Dec 11 14:22:52 crc kubenswrapper[5050]: I1211 14:22:52.512104 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d7f7f1a1008e3b0a92a62a77a0cabb98723814a3e07a9d9bbadaa2389e53136"} err="failed to get container status \"5d7f7f1a1008e3b0a92a62a77a0cabb98723814a3e07a9d9bbadaa2389e53136\": rpc error: code = NotFound desc = could not find container \"5d7f7f1a1008e3b0a92a62a77a0cabb98723814a3e07a9d9bbadaa2389e53136\": container with ID starting with 5d7f7f1a1008e3b0a92a62a77a0cabb98723814a3e07a9d9bbadaa2389e53136 not found: ID does not exist" Dec 11 14:22:52 crc kubenswrapper[5050]: I1211 14:22:52.512131 5050 scope.go:117] "RemoveContainer" containerID="6ed24476da58fe36d95e962c2b2ff4e1e032e0c8abcb0cd7fe6021df764593e2" Dec 11 14:22:52 crc kubenswrapper[5050]: E1211 14:22:52.512844 5050 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"6ed24476da58fe36d95e962c2b2ff4e1e032e0c8abcb0cd7fe6021df764593e2\": container with ID starting with 6ed24476da58fe36d95e962c2b2ff4e1e032e0c8abcb0cd7fe6021df764593e2 not found: ID does not exist" containerID="6ed24476da58fe36d95e962c2b2ff4e1e032e0c8abcb0cd7fe6021df764593e2" Dec 11 14:22:52 crc kubenswrapper[5050]: I1211 14:22:52.512900 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ed24476da58fe36d95e962c2b2ff4e1e032e0c8abcb0cd7fe6021df764593e2"} err="failed to get container status \"6ed24476da58fe36d95e962c2b2ff4e1e032e0c8abcb0cd7fe6021df764593e2\": rpc error: code = NotFound desc = could not find container \"6ed24476da58fe36d95e962c2b2ff4e1e032e0c8abcb0cd7fe6021df764593e2\": container with ID starting with 6ed24476da58fe36d95e962c2b2ff4e1e032e0c8abcb0cd7fe6021df764593e2 not found: ID does not exist" Dec 11 14:22:54 crc kubenswrapper[5050]: I1211 14:22:54.160927 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a566c35f-a562-4cb0-bae9-4f82171b5770" path="/var/lib/kubelet/pods/a566c35f-a562-4cb0-bae9-4f82171b5770/volumes" Dec 11 14:22:54 crc kubenswrapper[5050]: I1211 14:22:54.247564 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-c6np6"] Dec 11 14:22:54 crc kubenswrapper[5050]: E1211 14:22:54.249044 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a566c35f-a562-4cb0-bae9-4f82171b5770" containerName="registry-server" Dec 11 14:22:54 crc kubenswrapper[5050]: I1211 14:22:54.249232 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="a566c35f-a562-4cb0-bae9-4f82171b5770" containerName="registry-server" Dec 11 14:22:54 crc kubenswrapper[5050]: E1211 14:22:54.249504 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a566c35f-a562-4cb0-bae9-4f82171b5770" containerName="extract-content" Dec 11 14:22:54 crc kubenswrapper[5050]: I1211 14:22:54.249661 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="a566c35f-a562-4cb0-bae9-4f82171b5770" containerName="extract-content" Dec 11 14:22:54 crc kubenswrapper[5050]: E1211 14:22:54.249820 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a566c35f-a562-4cb0-bae9-4f82171b5770" containerName="extract-utilities" Dec 11 14:22:54 crc kubenswrapper[5050]: I1211 14:22:54.249968 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="a566c35f-a562-4cb0-bae9-4f82171b5770" containerName="extract-utilities" Dec 11 14:22:54 crc kubenswrapper[5050]: I1211 14:22:54.250433 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="a566c35f-a562-4cb0-bae9-4f82171b5770" containerName="registry-server" Dec 11 14:22:54 crc kubenswrapper[5050]: I1211 14:22:54.252864 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-c6np6" Dec 11 14:22:54 crc kubenswrapper[5050]: I1211 14:22:54.259274 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-c6np6"] Dec 11 14:22:54 crc kubenswrapper[5050]: I1211 14:22:54.400242 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c171605-1017-41dd-9441-52299efe3ec2-catalog-content\") pod \"community-operators-c6np6\" (UID: \"5c171605-1017-41dd-9441-52299efe3ec2\") " pod="openshift-marketplace/community-operators-c6np6" Dec 11 14:22:54 crc kubenswrapper[5050]: I1211 14:22:54.400317 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nrtwc\" (UniqueName: \"kubernetes.io/projected/5c171605-1017-41dd-9441-52299efe3ec2-kube-api-access-nrtwc\") pod \"community-operators-c6np6\" (UID: \"5c171605-1017-41dd-9441-52299efe3ec2\") " pod="openshift-marketplace/community-operators-c6np6" Dec 11 14:22:54 crc kubenswrapper[5050]: I1211 14:22:54.400350 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c171605-1017-41dd-9441-52299efe3ec2-utilities\") pod \"community-operators-c6np6\" (UID: \"5c171605-1017-41dd-9441-52299efe3ec2\") " pod="openshift-marketplace/community-operators-c6np6" Dec 11 14:22:54 crc kubenswrapper[5050]: I1211 14:22:54.501706 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c171605-1017-41dd-9441-52299efe3ec2-catalog-content\") pod \"community-operators-c6np6\" (UID: \"5c171605-1017-41dd-9441-52299efe3ec2\") " pod="openshift-marketplace/community-operators-c6np6" Dec 11 14:22:54 crc kubenswrapper[5050]: I1211 14:22:54.501786 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nrtwc\" (UniqueName: \"kubernetes.io/projected/5c171605-1017-41dd-9441-52299efe3ec2-kube-api-access-nrtwc\") pod \"community-operators-c6np6\" (UID: \"5c171605-1017-41dd-9441-52299efe3ec2\") " pod="openshift-marketplace/community-operators-c6np6" Dec 11 14:22:54 crc kubenswrapper[5050]: I1211 14:22:54.501819 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c171605-1017-41dd-9441-52299efe3ec2-utilities\") pod \"community-operators-c6np6\" (UID: \"5c171605-1017-41dd-9441-52299efe3ec2\") " pod="openshift-marketplace/community-operators-c6np6" Dec 11 14:22:54 crc kubenswrapper[5050]: I1211 14:22:54.502432 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c171605-1017-41dd-9441-52299efe3ec2-utilities\") pod \"community-operators-c6np6\" (UID: \"5c171605-1017-41dd-9441-52299efe3ec2\") " pod="openshift-marketplace/community-operators-c6np6" Dec 11 14:22:54 crc kubenswrapper[5050]: I1211 14:22:54.503138 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c171605-1017-41dd-9441-52299efe3ec2-catalog-content\") pod \"community-operators-c6np6\" (UID: \"5c171605-1017-41dd-9441-52299efe3ec2\") " pod="openshift-marketplace/community-operators-c6np6" Dec 11 14:22:54 crc kubenswrapper[5050]: I1211 14:22:54.543357 5050 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-nrtwc\" (UniqueName: \"kubernetes.io/projected/5c171605-1017-41dd-9441-52299efe3ec2-kube-api-access-nrtwc\") pod \"community-operators-c6np6\" (UID: \"5c171605-1017-41dd-9441-52299efe3ec2\") " pod="openshift-marketplace/community-operators-c6np6" Dec 11 14:22:54 crc kubenswrapper[5050]: I1211 14:22:54.626622 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-c6np6" Dec 11 14:22:55 crc kubenswrapper[5050]: I1211 14:22:55.164058 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-c6np6"] Dec 11 14:22:56 crc kubenswrapper[5050]: I1211 14:22:56.186802 5050 generic.go:334] "Generic (PLEG): container finished" podID="5c171605-1017-41dd-9441-52299efe3ec2" containerID="f21fb7bde6bf0faad0dedf4493d9afe00455e4566f0bd9c9e827fe04917d1187" exitCode=0 Dec 11 14:22:56 crc kubenswrapper[5050]: I1211 14:22:56.186936 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c6np6" event={"ID":"5c171605-1017-41dd-9441-52299efe3ec2","Type":"ContainerDied","Data":"f21fb7bde6bf0faad0dedf4493d9afe00455e4566f0bd9c9e827fe04917d1187"} Dec 11 14:22:56 crc kubenswrapper[5050]: I1211 14:22:56.187290 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c6np6" event={"ID":"5c171605-1017-41dd-9441-52299efe3ec2","Type":"ContainerStarted","Data":"33682b1c5979b1d2f8c04f90fd2df886e800a874eda8b4eb2a7279f79579c2b6"} Dec 11 14:22:57 crc kubenswrapper[5050]: I1211 14:22:57.196994 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c6np6" event={"ID":"5c171605-1017-41dd-9441-52299efe3ec2","Type":"ContainerStarted","Data":"a2741a7b68e1218499c884582f01388d4ab2b1887f60e78884afb610fd9e0dd9"} Dec 11 14:22:58 crc kubenswrapper[5050]: I1211 14:22:58.208698 5050 generic.go:334] "Generic (PLEG): container finished" podID="5c171605-1017-41dd-9441-52299efe3ec2" containerID="a2741a7b68e1218499c884582f01388d4ab2b1887f60e78884afb610fd9e0dd9" exitCode=0 Dec 11 14:22:58 crc kubenswrapper[5050]: I1211 14:22:58.208810 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c6np6" event={"ID":"5c171605-1017-41dd-9441-52299efe3ec2","Type":"ContainerDied","Data":"a2741a7b68e1218499c884582f01388d4ab2b1887f60e78884afb610fd9e0dd9"} Dec 11 14:22:59 crc kubenswrapper[5050]: I1211 14:22:59.220709 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c6np6" event={"ID":"5c171605-1017-41dd-9441-52299efe3ec2","Type":"ContainerStarted","Data":"63a7c27d14d01794cb19acb4b1814a1811ce29fccca88f6861070493411434db"} Dec 11 14:22:59 crc kubenswrapper[5050]: I1211 14:22:59.240784 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-c6np6" podStartSLOduration=2.707801263 podStartE2EDuration="5.240755426s" podCreationTimestamp="2025-12-11 14:22:54 +0000 UTC" firstStartedPulling="2025-12-11 14:22:56.191160282 +0000 UTC m=+2067.034882908" lastFinishedPulling="2025-12-11 14:22:58.724114495 +0000 UTC m=+2069.567837071" observedRunningTime="2025-12-11 14:22:59.24016766 +0000 UTC m=+2070.083890256" watchObservedRunningTime="2025-12-11 14:22:59.240755426 +0000 UTC m=+2070.084478012" Dec 11 14:23:04 crc kubenswrapper[5050]: I1211 14:23:04.627624 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/community-operators-c6np6" Dec 11 14:23:04 crc kubenswrapper[5050]: I1211 14:23:04.628157 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-c6np6" Dec 11 14:23:04 crc kubenswrapper[5050]: I1211 14:23:04.679529 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-c6np6" Dec 11 14:23:05 crc kubenswrapper[5050]: I1211 14:23:05.309111 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-c6np6" Dec 11 14:23:05 crc kubenswrapper[5050]: I1211 14:23:05.375887 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-c6np6"] Dec 11 14:23:07 crc kubenswrapper[5050]: I1211 14:23:07.280215 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-c6np6" podUID="5c171605-1017-41dd-9441-52299efe3ec2" containerName="registry-server" containerID="cri-o://63a7c27d14d01794cb19acb4b1814a1811ce29fccca88f6861070493411434db" gracePeriod=2 Dec 11 14:23:08 crc kubenswrapper[5050]: I1211 14:23:08.292874 5050 generic.go:334] "Generic (PLEG): container finished" podID="5c171605-1017-41dd-9441-52299efe3ec2" containerID="63a7c27d14d01794cb19acb4b1814a1811ce29fccca88f6861070493411434db" exitCode=0 Dec 11 14:23:08 crc kubenswrapper[5050]: I1211 14:23:08.293425 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c6np6" event={"ID":"5c171605-1017-41dd-9441-52299efe3ec2","Type":"ContainerDied","Data":"63a7c27d14d01794cb19acb4b1814a1811ce29fccca88f6861070493411434db"} Dec 11 14:23:08 crc kubenswrapper[5050]: I1211 14:23:08.899134 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-c6np6" Dec 11 14:23:09 crc kubenswrapper[5050]: I1211 14:23:09.067405 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c171605-1017-41dd-9441-52299efe3ec2-catalog-content\") pod \"5c171605-1017-41dd-9441-52299efe3ec2\" (UID: \"5c171605-1017-41dd-9441-52299efe3ec2\") " Dec 11 14:23:09 crc kubenswrapper[5050]: I1211 14:23:09.067806 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nrtwc\" (UniqueName: \"kubernetes.io/projected/5c171605-1017-41dd-9441-52299efe3ec2-kube-api-access-nrtwc\") pod \"5c171605-1017-41dd-9441-52299efe3ec2\" (UID: \"5c171605-1017-41dd-9441-52299efe3ec2\") " Dec 11 14:23:09 crc kubenswrapper[5050]: I1211 14:23:09.067934 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c171605-1017-41dd-9441-52299efe3ec2-utilities\") pod \"5c171605-1017-41dd-9441-52299efe3ec2\" (UID: \"5c171605-1017-41dd-9441-52299efe3ec2\") " Dec 11 14:23:09 crc kubenswrapper[5050]: I1211 14:23:09.068985 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5c171605-1017-41dd-9441-52299efe3ec2-utilities" (OuterVolumeSpecName: "utilities") pod "5c171605-1017-41dd-9441-52299efe3ec2" (UID: "5c171605-1017-41dd-9441-52299efe3ec2"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 14:23:09 crc kubenswrapper[5050]: I1211 14:23:09.132430 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c171605-1017-41dd-9441-52299efe3ec2-kube-api-access-nrtwc" (OuterVolumeSpecName: "kube-api-access-nrtwc") pod "5c171605-1017-41dd-9441-52299efe3ec2" (UID: "5c171605-1017-41dd-9441-52299efe3ec2"). InnerVolumeSpecName "kube-api-access-nrtwc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 14:23:09 crc kubenswrapper[5050]: I1211 14:23:09.142396 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5c171605-1017-41dd-9441-52299efe3ec2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5c171605-1017-41dd-9441-52299efe3ec2" (UID: "5c171605-1017-41dd-9441-52299efe3ec2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 14:23:09 crc kubenswrapper[5050]: I1211 14:23:09.169869 5050 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c171605-1017-41dd-9441-52299efe3ec2-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 14:23:09 crc kubenswrapper[5050]: I1211 14:23:09.169943 5050 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c171605-1017-41dd-9441-52299efe3ec2-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 14:23:09 crc kubenswrapper[5050]: I1211 14:23:09.169959 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nrtwc\" (UniqueName: \"kubernetes.io/projected/5c171605-1017-41dd-9441-52299efe3ec2-kube-api-access-nrtwc\") on node \"crc\" DevicePath \"\"" Dec 11 14:23:09 crc kubenswrapper[5050]: I1211 14:23:09.303331 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c6np6" event={"ID":"5c171605-1017-41dd-9441-52299efe3ec2","Type":"ContainerDied","Data":"33682b1c5979b1d2f8c04f90fd2df886e800a874eda8b4eb2a7279f79579c2b6"} Dec 11 14:23:09 crc kubenswrapper[5050]: I1211 14:23:09.303397 5050 scope.go:117] "RemoveContainer" containerID="63a7c27d14d01794cb19acb4b1814a1811ce29fccca88f6861070493411434db" Dec 11 14:23:09 crc kubenswrapper[5050]: I1211 14:23:09.303566 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-c6np6" Dec 11 14:23:09 crc kubenswrapper[5050]: I1211 14:23:09.328057 5050 scope.go:117] "RemoveContainer" containerID="a2741a7b68e1218499c884582f01388d4ab2b1887f60e78884afb610fd9e0dd9" Dec 11 14:23:09 crc kubenswrapper[5050]: I1211 14:23:09.352137 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-c6np6"] Dec 11 14:23:09 crc kubenswrapper[5050]: I1211 14:23:09.381218 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-c6np6"] Dec 11 14:23:09 crc kubenswrapper[5050]: I1211 14:23:09.383258 5050 scope.go:117] "RemoveContainer" containerID="f21fb7bde6bf0faad0dedf4493d9afe00455e4566f0bd9c9e827fe04917d1187" Dec 11 14:23:09 crc kubenswrapper[5050]: I1211 14:23:09.562922 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c171605-1017-41dd-9441-52299efe3ec2" path="/var/lib/kubelet/pods/5c171605-1017-41dd-9441-52299efe3ec2/volumes" Dec 11 14:24:10 crc kubenswrapper[5050]: I1211 14:24:10.797032 5050 patch_prober.go:28] interesting pod/machine-config-daemon-wcb2s container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 14:24:10 crc kubenswrapper[5050]: I1211 14:24:10.797824 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 14:24:40 crc kubenswrapper[5050]: I1211 14:24:40.796660 5050 patch_prober.go:28] interesting pod/machine-config-daemon-wcb2s container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 14:24:40 crc kubenswrapper[5050]: I1211 14:24:40.797393 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 14:25:10 crc kubenswrapper[5050]: I1211 14:25:10.796812 5050 patch_prober.go:28] interesting pod/machine-config-daemon-wcb2s container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 14:25:10 crc kubenswrapper[5050]: I1211 14:25:10.797462 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 14:25:10 crc kubenswrapper[5050]: I1211 14:25:10.797551 5050 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" Dec 11 14:25:10 crc kubenswrapper[5050]: I1211 14:25:10.798521 5050 
kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ef76d01a8e4a52680360d68b90caf4e1a76a0862e277fd6a1a982900a1430822"} pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 11 14:25:10 crc kubenswrapper[5050]: I1211 14:25:10.798588 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" containerName="machine-config-daemon" containerID="cri-o://ef76d01a8e4a52680360d68b90caf4e1a76a0862e277fd6a1a982900a1430822" gracePeriod=600 Dec 11 14:25:11 crc kubenswrapper[5050]: I1211 14:25:11.438947 5050 generic.go:334] "Generic (PLEG): container finished" podID="7e849b2e-7cd7-4e49-acd2-deab139c699a" containerID="ef76d01a8e4a52680360d68b90caf4e1a76a0862e277fd6a1a982900a1430822" exitCode=0 Dec 11 14:25:11 crc kubenswrapper[5050]: I1211 14:25:11.439003 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" event={"ID":"7e849b2e-7cd7-4e49-acd2-deab139c699a","Type":"ContainerDied","Data":"ef76d01a8e4a52680360d68b90caf4e1a76a0862e277fd6a1a982900a1430822"} Dec 11 14:25:11 crc kubenswrapper[5050]: I1211 14:25:11.439070 5050 scope.go:117] "RemoveContainer" containerID="a6b9413520fd75e49a892cc5dba8a8fc1b927a604949a4d2b85854c65912d95e" Dec 11 14:25:12 crc kubenswrapper[5050]: I1211 14:25:12.450329 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" event={"ID":"7e849b2e-7cd7-4e49-acd2-deab139c699a","Type":"ContainerStarted","Data":"ac70e3a833eefb531f588e66b0c19516ee5cbd6aca4607b44eb1072f1ed802ce"} Dec 11 14:27:40 crc kubenswrapper[5050]: I1211 14:27:40.796601 5050 patch_prober.go:28] interesting pod/machine-config-daemon-wcb2s container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 14:27:40 crc kubenswrapper[5050]: I1211 14:27:40.797521 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 14:28:10 crc kubenswrapper[5050]: I1211 14:28:10.797136 5050 patch_prober.go:28] interesting pod/machine-config-daemon-wcb2s container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 14:28:10 crc kubenswrapper[5050]: I1211 14:28:10.797807 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 14:28:40 crc kubenswrapper[5050]: I1211 14:28:40.796137 5050 patch_prober.go:28] interesting pod/machine-config-daemon-wcb2s container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 14:28:40 crc kubenswrapper[5050]: I1211 14:28:40.797172 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 14:28:40 crc kubenswrapper[5050]: I1211 14:28:40.797310 5050 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" Dec 11 14:28:40 crc kubenswrapper[5050]: I1211 14:28:40.798245 5050 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ac70e3a833eefb531f588e66b0c19516ee5cbd6aca4607b44eb1072f1ed802ce"} pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 11 14:28:40 crc kubenswrapper[5050]: I1211 14:28:40.798322 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" containerName="machine-config-daemon" containerID="cri-o://ac70e3a833eefb531f588e66b0c19516ee5cbd6aca4607b44eb1072f1ed802ce" gracePeriod=600 Dec 11 14:28:40 crc kubenswrapper[5050]: E1211 14:28:40.920466 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 14:28:41 crc kubenswrapper[5050]: I1211 14:28:41.257668 5050 generic.go:334] "Generic (PLEG): container finished" podID="7e849b2e-7cd7-4e49-acd2-deab139c699a" containerID="ac70e3a833eefb531f588e66b0c19516ee5cbd6aca4607b44eb1072f1ed802ce" exitCode=0 Dec 11 14:28:41 crc kubenswrapper[5050]: I1211 14:28:41.257740 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" event={"ID":"7e849b2e-7cd7-4e49-acd2-deab139c699a","Type":"ContainerDied","Data":"ac70e3a833eefb531f588e66b0c19516ee5cbd6aca4607b44eb1072f1ed802ce"} Dec 11 14:28:41 crc kubenswrapper[5050]: I1211 14:28:41.257817 5050 scope.go:117] "RemoveContainer" containerID="ef76d01a8e4a52680360d68b90caf4e1a76a0862e277fd6a1a982900a1430822" Dec 11 14:28:41 crc kubenswrapper[5050]: I1211 14:28:41.258514 5050 scope.go:117] "RemoveContainer" containerID="ac70e3a833eefb531f588e66b0c19516ee5cbd6aca4607b44eb1072f1ed802ce" Dec 11 14:28:41 crc kubenswrapper[5050]: E1211 14:28:41.258772 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" 
Dec 11 14:28:55 crc kubenswrapper[5050]: I1211 14:28:55.546929 5050 scope.go:117] "RemoveContainer" containerID="ac70e3a833eefb531f588e66b0c19516ee5cbd6aca4607b44eb1072f1ed802ce" Dec 11 14:28:55 crc kubenswrapper[5050]: E1211 14:28:55.547977 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 14:29:07 crc kubenswrapper[5050]: I1211 14:29:07.546558 5050 scope.go:117] "RemoveContainer" containerID="ac70e3a833eefb531f588e66b0c19516ee5cbd6aca4607b44eb1072f1ed802ce" Dec 11 14:29:07 crc kubenswrapper[5050]: E1211 14:29:07.547506 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 14:29:20 crc kubenswrapper[5050]: I1211 14:29:20.545995 5050 scope.go:117] "RemoveContainer" containerID="ac70e3a833eefb531f588e66b0c19516ee5cbd6aca4607b44eb1072f1ed802ce" Dec 11 14:29:20 crc kubenswrapper[5050]: E1211 14:29:20.547116 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 14:29:34 crc kubenswrapper[5050]: I1211 14:29:34.546907 5050 scope.go:117] "RemoveContainer" containerID="ac70e3a833eefb531f588e66b0c19516ee5cbd6aca4607b44eb1072f1ed802ce" Dec 11 14:29:34 crc kubenswrapper[5050]: E1211 14:29:34.547916 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 14:29:48 crc kubenswrapper[5050]: I1211 14:29:48.545579 5050 scope.go:117] "RemoveContainer" containerID="ac70e3a833eefb531f588e66b0c19516ee5cbd6aca4607b44eb1072f1ed802ce" Dec 11 14:29:48 crc kubenswrapper[5050]: E1211 14:29:48.546594 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 14:30:00 crc kubenswrapper[5050]: I1211 14:30:00.159159 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29424390-rbjjw"] Dec 
11 14:30:00 crc kubenswrapper[5050]: E1211 14:30:00.162674 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c171605-1017-41dd-9441-52299efe3ec2" containerName="registry-server" Dec 11 14:30:00 crc kubenswrapper[5050]: I1211 14:30:00.162824 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c171605-1017-41dd-9441-52299efe3ec2" containerName="registry-server" Dec 11 14:30:00 crc kubenswrapper[5050]: E1211 14:30:00.162949 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c171605-1017-41dd-9441-52299efe3ec2" containerName="extract-content" Dec 11 14:30:00 crc kubenswrapper[5050]: I1211 14:30:00.163103 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c171605-1017-41dd-9441-52299efe3ec2" containerName="extract-content" Dec 11 14:30:00 crc kubenswrapper[5050]: E1211 14:30:00.163288 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c171605-1017-41dd-9441-52299efe3ec2" containerName="extract-utilities" Dec 11 14:30:00 crc kubenswrapper[5050]: I1211 14:30:00.163364 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c171605-1017-41dd-9441-52299efe3ec2" containerName="extract-utilities" Dec 11 14:30:00 crc kubenswrapper[5050]: I1211 14:30:00.163801 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c171605-1017-41dd-9441-52299efe3ec2" containerName="registry-server" Dec 11 14:30:00 crc kubenswrapper[5050]: I1211 14:30:00.164524 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29424390-rbjjw" Dec 11 14:30:00 crc kubenswrapper[5050]: I1211 14:30:00.166605 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29424390-rbjjw"] Dec 11 14:30:00 crc kubenswrapper[5050]: I1211 14:30:00.167911 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 11 14:30:00 crc kubenswrapper[5050]: I1211 14:30:00.168546 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 11 14:30:00 crc kubenswrapper[5050]: I1211 14:30:00.207452 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2f53ebb7-1953-41d9-a350-67ee00ac6559-secret-volume\") pod \"collect-profiles-29424390-rbjjw\" (UID: \"2f53ebb7-1953-41d9-a350-67ee00ac6559\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424390-rbjjw" Dec 11 14:30:00 crc kubenswrapper[5050]: I1211 14:30:00.207526 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-thhtq\" (UniqueName: \"kubernetes.io/projected/2f53ebb7-1953-41d9-a350-67ee00ac6559-kube-api-access-thhtq\") pod \"collect-profiles-29424390-rbjjw\" (UID: \"2f53ebb7-1953-41d9-a350-67ee00ac6559\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424390-rbjjw" Dec 11 14:30:00 crc kubenswrapper[5050]: I1211 14:30:00.207596 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2f53ebb7-1953-41d9-a350-67ee00ac6559-config-volume\") pod \"collect-profiles-29424390-rbjjw\" (UID: \"2f53ebb7-1953-41d9-a350-67ee00ac6559\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424390-rbjjw" Dec 11 14:30:00 crc 
kubenswrapper[5050]: I1211 14:30:00.309117 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2f53ebb7-1953-41d9-a350-67ee00ac6559-config-volume\") pod \"collect-profiles-29424390-rbjjw\" (UID: \"2f53ebb7-1953-41d9-a350-67ee00ac6559\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424390-rbjjw" Dec 11 14:30:00 crc kubenswrapper[5050]: I1211 14:30:00.309572 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2f53ebb7-1953-41d9-a350-67ee00ac6559-secret-volume\") pod \"collect-profiles-29424390-rbjjw\" (UID: \"2f53ebb7-1953-41d9-a350-67ee00ac6559\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424390-rbjjw" Dec 11 14:30:00 crc kubenswrapper[5050]: I1211 14:30:00.309743 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-thhtq\" (UniqueName: \"kubernetes.io/projected/2f53ebb7-1953-41d9-a350-67ee00ac6559-kube-api-access-thhtq\") pod \"collect-profiles-29424390-rbjjw\" (UID: \"2f53ebb7-1953-41d9-a350-67ee00ac6559\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424390-rbjjw" Dec 11 14:30:00 crc kubenswrapper[5050]: I1211 14:30:00.310305 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2f53ebb7-1953-41d9-a350-67ee00ac6559-config-volume\") pod \"collect-profiles-29424390-rbjjw\" (UID: \"2f53ebb7-1953-41d9-a350-67ee00ac6559\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424390-rbjjw" Dec 11 14:30:00 crc kubenswrapper[5050]: I1211 14:30:00.320564 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2f53ebb7-1953-41d9-a350-67ee00ac6559-secret-volume\") pod \"collect-profiles-29424390-rbjjw\" (UID: \"2f53ebb7-1953-41d9-a350-67ee00ac6559\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424390-rbjjw" Dec 11 14:30:00 crc kubenswrapper[5050]: I1211 14:30:00.330768 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-thhtq\" (UniqueName: \"kubernetes.io/projected/2f53ebb7-1953-41d9-a350-67ee00ac6559-kube-api-access-thhtq\") pod \"collect-profiles-29424390-rbjjw\" (UID: \"2f53ebb7-1953-41d9-a350-67ee00ac6559\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424390-rbjjw" Dec 11 14:30:00 crc kubenswrapper[5050]: I1211 14:30:00.492336 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29424390-rbjjw" Dec 11 14:30:00 crc kubenswrapper[5050]: I1211 14:30:00.545695 5050 scope.go:117] "RemoveContainer" containerID="ac70e3a833eefb531f588e66b0c19516ee5cbd6aca4607b44eb1072f1ed802ce" Dec 11 14:30:00 crc kubenswrapper[5050]: E1211 14:30:00.546100 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 14:30:00 crc kubenswrapper[5050]: I1211 14:30:00.870827 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29424390-rbjjw"] Dec 11 14:30:00 crc kubenswrapper[5050]: I1211 14:30:00.930536 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29424390-rbjjw" event={"ID":"2f53ebb7-1953-41d9-a350-67ee00ac6559","Type":"ContainerStarted","Data":"53b73171b0a265883407aefdcf5e4a98a27f97ca024f5a1b15f705ac2ee5173c"} Dec 11 14:30:01 crc kubenswrapper[5050]: I1211 14:30:01.940762 5050 generic.go:334] "Generic (PLEG): container finished" podID="2f53ebb7-1953-41d9-a350-67ee00ac6559" containerID="422a3e52c9c362f015f715c21d04d80d41d46ca46883be3250ef8da67c8a01e6" exitCode=0 Dec 11 14:30:01 crc kubenswrapper[5050]: I1211 14:30:01.940873 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29424390-rbjjw" event={"ID":"2f53ebb7-1953-41d9-a350-67ee00ac6559","Type":"ContainerDied","Data":"422a3e52c9c362f015f715c21d04d80d41d46ca46883be3250ef8da67c8a01e6"} Dec 11 14:30:03 crc kubenswrapper[5050]: I1211 14:30:03.243310 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29424390-rbjjw" Dec 11 14:30:03 crc kubenswrapper[5050]: I1211 14:30:03.362872 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2f53ebb7-1953-41d9-a350-67ee00ac6559-secret-volume\") pod \"2f53ebb7-1953-41d9-a350-67ee00ac6559\" (UID: \"2f53ebb7-1953-41d9-a350-67ee00ac6559\") " Dec 11 14:30:03 crc kubenswrapper[5050]: I1211 14:30:03.362987 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2f53ebb7-1953-41d9-a350-67ee00ac6559-config-volume\") pod \"2f53ebb7-1953-41d9-a350-67ee00ac6559\" (UID: \"2f53ebb7-1953-41d9-a350-67ee00ac6559\") " Dec 11 14:30:03 crc kubenswrapper[5050]: I1211 14:30:03.363163 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-thhtq\" (UniqueName: \"kubernetes.io/projected/2f53ebb7-1953-41d9-a350-67ee00ac6559-kube-api-access-thhtq\") pod \"2f53ebb7-1953-41d9-a350-67ee00ac6559\" (UID: \"2f53ebb7-1953-41d9-a350-67ee00ac6559\") " Dec 11 14:30:03 crc kubenswrapper[5050]: I1211 14:30:03.363717 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2f53ebb7-1953-41d9-a350-67ee00ac6559-config-volume" (OuterVolumeSpecName: "config-volume") pod "2f53ebb7-1953-41d9-a350-67ee00ac6559" (UID: "2f53ebb7-1953-41d9-a350-67ee00ac6559"). 
InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 14:30:03 crc kubenswrapper[5050]: I1211 14:30:03.370274 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f53ebb7-1953-41d9-a350-67ee00ac6559-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "2f53ebb7-1953-41d9-a350-67ee00ac6559" (UID: "2f53ebb7-1953-41d9-a350-67ee00ac6559"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:30:03 crc kubenswrapper[5050]: I1211 14:30:03.371547 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f53ebb7-1953-41d9-a350-67ee00ac6559-kube-api-access-thhtq" (OuterVolumeSpecName: "kube-api-access-thhtq") pod "2f53ebb7-1953-41d9-a350-67ee00ac6559" (UID: "2f53ebb7-1953-41d9-a350-67ee00ac6559"). InnerVolumeSpecName "kube-api-access-thhtq". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 14:30:03 crc kubenswrapper[5050]: I1211 14:30:03.464758 5050 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2f53ebb7-1953-41d9-a350-67ee00ac6559-config-volume\") on node \"crc\" DevicePath \"\"" Dec 11 14:30:03 crc kubenswrapper[5050]: I1211 14:30:03.464810 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-thhtq\" (UniqueName: \"kubernetes.io/projected/2f53ebb7-1953-41d9-a350-67ee00ac6559-kube-api-access-thhtq\") on node \"crc\" DevicePath \"\"" Dec 11 14:30:03 crc kubenswrapper[5050]: I1211 14:30:03.464822 5050 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2f53ebb7-1953-41d9-a350-67ee00ac6559-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 11 14:30:03 crc kubenswrapper[5050]: I1211 14:30:03.960946 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29424390-rbjjw" event={"ID":"2f53ebb7-1953-41d9-a350-67ee00ac6559","Type":"ContainerDied","Data":"53b73171b0a265883407aefdcf5e4a98a27f97ca024f5a1b15f705ac2ee5173c"} Dec 11 14:30:03 crc kubenswrapper[5050]: I1211 14:30:03.961046 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="53b73171b0a265883407aefdcf5e4a98a27f97ca024f5a1b15f705ac2ee5173c" Dec 11 14:30:03 crc kubenswrapper[5050]: I1211 14:30:03.961084 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29424390-rbjjw" Dec 11 14:30:04 crc kubenswrapper[5050]: I1211 14:30:04.324429 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29424345-gxsbc"] Dec 11 14:30:04 crc kubenswrapper[5050]: I1211 14:30:04.332035 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29424345-gxsbc"] Dec 11 14:30:05 crc kubenswrapper[5050]: I1211 14:30:05.556123 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="78f80616-d1e0-4152-a7fb-99a512670f27" path="/var/lib/kubelet/pods/78f80616-d1e0-4152-a7fb-99a512670f27/volumes" Dec 11 14:30:12 crc kubenswrapper[5050]: I1211 14:30:12.546270 5050 scope.go:117] "RemoveContainer" containerID="ac70e3a833eefb531f588e66b0c19516ee5cbd6aca4607b44eb1072f1ed802ce" Dec 11 14:30:12 crc kubenswrapper[5050]: E1211 14:30:12.546979 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 14:30:23 crc kubenswrapper[5050]: I1211 14:30:23.546072 5050 scope.go:117] "RemoveContainer" containerID="ac70e3a833eefb531f588e66b0c19516ee5cbd6aca4607b44eb1072f1ed802ce" Dec 11 14:30:23 crc kubenswrapper[5050]: E1211 14:30:23.547048 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 14:30:38 crc kubenswrapper[5050]: I1211 14:30:38.546865 5050 scope.go:117] "RemoveContainer" containerID="ac70e3a833eefb531f588e66b0c19516ee5cbd6aca4607b44eb1072f1ed802ce" Dec 11 14:30:38 crc kubenswrapper[5050]: E1211 14:30:38.547995 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 14:30:41 crc kubenswrapper[5050]: I1211 14:30:41.860460 5050 scope.go:117] "RemoveContainer" containerID="a3e97dcb11dcb12023d7aac6301414eada55c698af12daef0dd5afda535de932" Dec 11 14:30:49 crc kubenswrapper[5050]: I1211 14:30:49.554345 5050 scope.go:117] "RemoveContainer" containerID="ac70e3a833eefb531f588e66b0c19516ee5cbd6aca4607b44eb1072f1ed802ce" Dec 11 14:30:49 crc kubenswrapper[5050]: E1211 14:30:49.554905 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 14:31:01 crc kubenswrapper[5050]: I1211 14:31:01.546379 5050 scope.go:117] "RemoveContainer" containerID="ac70e3a833eefb531f588e66b0c19516ee5cbd6aca4607b44eb1072f1ed802ce" Dec 11 14:31:01 crc kubenswrapper[5050]: E1211 14:31:01.547441 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 14:31:06 crc kubenswrapper[5050]: I1211 14:31:06.289936 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-mzw26"] Dec 11 14:31:06 crc kubenswrapper[5050]: E1211 14:31:06.290779 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f53ebb7-1953-41d9-a350-67ee00ac6559" containerName="collect-profiles" Dec 11 14:31:06 crc kubenswrapper[5050]: I1211 14:31:06.290799 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f53ebb7-1953-41d9-a350-67ee00ac6559" containerName="collect-profiles" Dec 11 14:31:06 crc kubenswrapper[5050]: I1211 14:31:06.291056 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f53ebb7-1953-41d9-a350-67ee00ac6559" containerName="collect-profiles" Dec 11 14:31:06 crc kubenswrapper[5050]: I1211 14:31:06.292498 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mzw26" Dec 11 14:31:06 crc kubenswrapper[5050]: I1211 14:31:06.305921 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mzw26"] Dec 11 14:31:06 crc kubenswrapper[5050]: I1211 14:31:06.320241 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tg494\" (UniqueName: \"kubernetes.io/projected/03e6de62-f7c2-4629-bc21-264bf66be1ed-kube-api-access-tg494\") pod \"redhat-operators-mzw26\" (UID: \"03e6de62-f7c2-4629-bc21-264bf66be1ed\") " pod="openshift-marketplace/redhat-operators-mzw26" Dec 11 14:31:06 crc kubenswrapper[5050]: I1211 14:31:06.320455 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03e6de62-f7c2-4629-bc21-264bf66be1ed-catalog-content\") pod \"redhat-operators-mzw26\" (UID: \"03e6de62-f7c2-4629-bc21-264bf66be1ed\") " pod="openshift-marketplace/redhat-operators-mzw26" Dec 11 14:31:06 crc kubenswrapper[5050]: I1211 14:31:06.320553 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03e6de62-f7c2-4629-bc21-264bf66be1ed-utilities\") pod \"redhat-operators-mzw26\" (UID: \"03e6de62-f7c2-4629-bc21-264bf66be1ed\") " pod="openshift-marketplace/redhat-operators-mzw26" Dec 11 14:31:06 crc kubenswrapper[5050]: I1211 14:31:06.421734 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03e6de62-f7c2-4629-bc21-264bf66be1ed-catalog-content\") pod \"redhat-operators-mzw26\" (UID: \"03e6de62-f7c2-4629-bc21-264bf66be1ed\") " pod="openshift-marketplace/redhat-operators-mzw26" Dec 11 14:31:06 
crc kubenswrapper[5050]: I1211 14:31:06.421826 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03e6de62-f7c2-4629-bc21-264bf66be1ed-utilities\") pod \"redhat-operators-mzw26\" (UID: \"03e6de62-f7c2-4629-bc21-264bf66be1ed\") " pod="openshift-marketplace/redhat-operators-mzw26" Dec 11 14:31:06 crc kubenswrapper[5050]: I1211 14:31:06.421889 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tg494\" (UniqueName: \"kubernetes.io/projected/03e6de62-f7c2-4629-bc21-264bf66be1ed-kube-api-access-tg494\") pod \"redhat-operators-mzw26\" (UID: \"03e6de62-f7c2-4629-bc21-264bf66be1ed\") " pod="openshift-marketplace/redhat-operators-mzw26" Dec 11 14:31:06 crc kubenswrapper[5050]: I1211 14:31:06.422979 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03e6de62-f7c2-4629-bc21-264bf66be1ed-catalog-content\") pod \"redhat-operators-mzw26\" (UID: \"03e6de62-f7c2-4629-bc21-264bf66be1ed\") " pod="openshift-marketplace/redhat-operators-mzw26" Dec 11 14:31:06 crc kubenswrapper[5050]: I1211 14:31:06.423307 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03e6de62-f7c2-4629-bc21-264bf66be1ed-utilities\") pod \"redhat-operators-mzw26\" (UID: \"03e6de62-f7c2-4629-bc21-264bf66be1ed\") " pod="openshift-marketplace/redhat-operators-mzw26" Dec 11 14:31:06 crc kubenswrapper[5050]: I1211 14:31:06.446604 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tg494\" (UniqueName: \"kubernetes.io/projected/03e6de62-f7c2-4629-bc21-264bf66be1ed-kube-api-access-tg494\") pod \"redhat-operators-mzw26\" (UID: \"03e6de62-f7c2-4629-bc21-264bf66be1ed\") " pod="openshift-marketplace/redhat-operators-mzw26" Dec 11 14:31:06 crc kubenswrapper[5050]: I1211 14:31:06.620360 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-mzw26" Dec 11 14:31:06 crc kubenswrapper[5050]: I1211 14:31:06.880841 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mzw26"] Dec 11 14:31:07 crc kubenswrapper[5050]: I1211 14:31:07.515976 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mzw26" event={"ID":"03e6de62-f7c2-4629-bc21-264bf66be1ed","Type":"ContainerStarted","Data":"1ea01b62eb8a17f836c490949fcb228e03c82d4e96f43d6e06bb6bdd62009c61"} Dec 11 14:31:08 crc kubenswrapper[5050]: I1211 14:31:08.532351 5050 generic.go:334] "Generic (PLEG): container finished" podID="03e6de62-f7c2-4629-bc21-264bf66be1ed" containerID="904d0bb2403df78b49fcad9adc9106c110699f2f4072661b5e78b651369bdc1a" exitCode=0 Dec 11 14:31:08 crc kubenswrapper[5050]: I1211 14:31:08.532420 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mzw26" event={"ID":"03e6de62-f7c2-4629-bc21-264bf66be1ed","Type":"ContainerDied","Data":"904d0bb2403df78b49fcad9adc9106c110699f2f4072661b5e78b651369bdc1a"} Dec 11 14:31:08 crc kubenswrapper[5050]: I1211 14:31:08.536688 5050 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 11 14:31:12 crc kubenswrapper[5050]: I1211 14:31:12.562511 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mzw26" event={"ID":"03e6de62-f7c2-4629-bc21-264bf66be1ed","Type":"ContainerStarted","Data":"87904af4b8eb564301f639e71e20b43748956092a73be0dd72725df1166dc405"} Dec 11 14:31:13 crc kubenswrapper[5050]: I1211 14:31:13.546194 5050 scope.go:117] "RemoveContainer" containerID="ac70e3a833eefb531f588e66b0c19516ee5cbd6aca4607b44eb1072f1ed802ce" Dec 11 14:31:13 crc kubenswrapper[5050]: E1211 14:31:13.546753 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 14:31:13 crc kubenswrapper[5050]: I1211 14:31:13.573679 5050 generic.go:334] "Generic (PLEG): container finished" podID="03e6de62-f7c2-4629-bc21-264bf66be1ed" containerID="87904af4b8eb564301f639e71e20b43748956092a73be0dd72725df1166dc405" exitCode=0 Dec 11 14:31:13 crc kubenswrapper[5050]: I1211 14:31:13.573760 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mzw26" event={"ID":"03e6de62-f7c2-4629-bc21-264bf66be1ed","Type":"ContainerDied","Data":"87904af4b8eb564301f639e71e20b43748956092a73be0dd72725df1166dc405"} Dec 11 14:31:15 crc kubenswrapper[5050]: I1211 14:31:15.593656 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mzw26" event={"ID":"03e6de62-f7c2-4629-bc21-264bf66be1ed","Type":"ContainerStarted","Data":"01233380d4eeadb76e9674509397e04b1f8fb49e5d7847e887d0cff6b267e770"} Dec 11 14:31:15 crc kubenswrapper[5050]: I1211 14:31:15.623909 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-mzw26" podStartSLOduration=3.495917931 podStartE2EDuration="9.623885569s" podCreationTimestamp="2025-12-11 14:31:06 +0000 UTC" firstStartedPulling="2025-12-11 14:31:08.535815591 
+0000 UTC m=+2559.379538177" lastFinishedPulling="2025-12-11 14:31:14.663783229 +0000 UTC m=+2565.507505815" observedRunningTime="2025-12-11 14:31:15.620403584 +0000 UTC m=+2566.464126170" watchObservedRunningTime="2025-12-11 14:31:15.623885569 +0000 UTC m=+2566.467608155" Dec 11 14:31:16 crc kubenswrapper[5050]: I1211 14:31:16.620788 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-mzw26" Dec 11 14:31:16 crc kubenswrapper[5050]: I1211 14:31:16.621207 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-mzw26" Dec 11 14:31:17 crc kubenswrapper[5050]: I1211 14:31:17.671347 5050 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-mzw26" podUID="03e6de62-f7c2-4629-bc21-264bf66be1ed" containerName="registry-server" probeResult="failure" output=< Dec 11 14:31:17 crc kubenswrapper[5050]: timeout: failed to connect service ":50051" within 1s Dec 11 14:31:17 crc kubenswrapper[5050]: > Dec 11 14:31:24 crc kubenswrapper[5050]: I1211 14:31:24.547377 5050 scope.go:117] "RemoveContainer" containerID="ac70e3a833eefb531f588e66b0c19516ee5cbd6aca4607b44eb1072f1ed802ce" Dec 11 14:31:24 crc kubenswrapper[5050]: E1211 14:31:24.548311 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 14:31:26 crc kubenswrapper[5050]: I1211 14:31:26.688393 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-mzw26" Dec 11 14:31:26 crc kubenswrapper[5050]: I1211 14:31:26.754310 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-mzw26" Dec 11 14:31:26 crc kubenswrapper[5050]: I1211 14:31:26.929662 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mzw26"] Dec 11 14:31:28 crc kubenswrapper[5050]: I1211 14:31:28.687118 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-mzw26" podUID="03e6de62-f7c2-4629-bc21-264bf66be1ed" containerName="registry-server" containerID="cri-o://01233380d4eeadb76e9674509397e04b1f8fb49e5d7847e887d0cff6b267e770" gracePeriod=2 Dec 11 14:31:29 crc kubenswrapper[5050]: I1211 14:31:29.696997 5050 generic.go:334] "Generic (PLEG): container finished" podID="03e6de62-f7c2-4629-bc21-264bf66be1ed" containerID="01233380d4eeadb76e9674509397e04b1f8fb49e5d7847e887d0cff6b267e770" exitCode=0 Dec 11 14:31:29 crc kubenswrapper[5050]: I1211 14:31:29.697041 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mzw26" event={"ID":"03e6de62-f7c2-4629-bc21-264bf66be1ed","Type":"ContainerDied","Data":"01233380d4eeadb76e9674509397e04b1f8fb49e5d7847e887d0cff6b267e770"} Dec 11 14:31:30 crc kubenswrapper[5050]: I1211 14:31:30.264420 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-mzw26" Dec 11 14:31:30 crc kubenswrapper[5050]: I1211 14:31:30.292399 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tg494\" (UniqueName: \"kubernetes.io/projected/03e6de62-f7c2-4629-bc21-264bf66be1ed-kube-api-access-tg494\") pod \"03e6de62-f7c2-4629-bc21-264bf66be1ed\" (UID: \"03e6de62-f7c2-4629-bc21-264bf66be1ed\") " Dec 11 14:31:30 crc kubenswrapper[5050]: I1211 14:31:30.292500 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03e6de62-f7c2-4629-bc21-264bf66be1ed-catalog-content\") pod \"03e6de62-f7c2-4629-bc21-264bf66be1ed\" (UID: \"03e6de62-f7c2-4629-bc21-264bf66be1ed\") " Dec 11 14:31:30 crc kubenswrapper[5050]: I1211 14:31:30.292593 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03e6de62-f7c2-4629-bc21-264bf66be1ed-utilities\") pod \"03e6de62-f7c2-4629-bc21-264bf66be1ed\" (UID: \"03e6de62-f7c2-4629-bc21-264bf66be1ed\") " Dec 11 14:31:30 crc kubenswrapper[5050]: I1211 14:31:30.293654 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/03e6de62-f7c2-4629-bc21-264bf66be1ed-utilities" (OuterVolumeSpecName: "utilities") pod "03e6de62-f7c2-4629-bc21-264bf66be1ed" (UID: "03e6de62-f7c2-4629-bc21-264bf66be1ed"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 14:31:30 crc kubenswrapper[5050]: I1211 14:31:30.312895 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03e6de62-f7c2-4629-bc21-264bf66be1ed-kube-api-access-tg494" (OuterVolumeSpecName: "kube-api-access-tg494") pod "03e6de62-f7c2-4629-bc21-264bf66be1ed" (UID: "03e6de62-f7c2-4629-bc21-264bf66be1ed"). InnerVolumeSpecName "kube-api-access-tg494". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 14:31:30 crc kubenswrapper[5050]: I1211 14:31:30.394115 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tg494\" (UniqueName: \"kubernetes.io/projected/03e6de62-f7c2-4629-bc21-264bf66be1ed-kube-api-access-tg494\") on node \"crc\" DevicePath \"\"" Dec 11 14:31:30 crc kubenswrapper[5050]: I1211 14:31:30.394162 5050 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/03e6de62-f7c2-4629-bc21-264bf66be1ed-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 14:31:30 crc kubenswrapper[5050]: I1211 14:31:30.407689 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/03e6de62-f7c2-4629-bc21-264bf66be1ed-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "03e6de62-f7c2-4629-bc21-264bf66be1ed" (UID: "03e6de62-f7c2-4629-bc21-264bf66be1ed"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 14:31:30 crc kubenswrapper[5050]: I1211 14:31:30.495844 5050 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/03e6de62-f7c2-4629-bc21-264bf66be1ed-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 14:31:30 crc kubenswrapper[5050]: I1211 14:31:30.708733 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mzw26" event={"ID":"03e6de62-f7c2-4629-bc21-264bf66be1ed","Type":"ContainerDied","Data":"1ea01b62eb8a17f836c490949fcb228e03c82d4e96f43d6e06bb6bdd62009c61"} Dec 11 14:31:30 crc kubenswrapper[5050]: I1211 14:31:30.708798 5050 scope.go:117] "RemoveContainer" containerID="01233380d4eeadb76e9674509397e04b1f8fb49e5d7847e887d0cff6b267e770" Dec 11 14:31:30 crc kubenswrapper[5050]: I1211 14:31:30.708974 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mzw26" Dec 11 14:31:30 crc kubenswrapper[5050]: I1211 14:31:30.750539 5050 scope.go:117] "RemoveContainer" containerID="87904af4b8eb564301f639e71e20b43748956092a73be0dd72725df1166dc405" Dec 11 14:31:30 crc kubenswrapper[5050]: I1211 14:31:30.760149 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mzw26"] Dec 11 14:31:30 crc kubenswrapper[5050]: I1211 14:31:30.766148 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-mzw26"] Dec 11 14:31:30 crc kubenswrapper[5050]: I1211 14:31:30.784061 5050 scope.go:117] "RemoveContainer" containerID="904d0bb2403df78b49fcad9adc9106c110699f2f4072661b5e78b651369bdc1a" Dec 11 14:31:31 crc kubenswrapper[5050]: I1211 14:31:31.554380 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="03e6de62-f7c2-4629-bc21-264bf66be1ed" path="/var/lib/kubelet/pods/03e6de62-f7c2-4629-bc21-264bf66be1ed/volumes" Dec 11 14:31:38 crc kubenswrapper[5050]: I1211 14:31:38.545976 5050 scope.go:117] "RemoveContainer" containerID="ac70e3a833eefb531f588e66b0c19516ee5cbd6aca4607b44eb1072f1ed802ce" Dec 11 14:31:38 crc kubenswrapper[5050]: E1211 14:31:38.546863 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 14:31:44 crc kubenswrapper[5050]: I1211 14:31:44.190077 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-mq4m9"] Dec 11 14:31:44 crc kubenswrapper[5050]: E1211 14:31:44.191037 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03e6de62-f7c2-4629-bc21-264bf66be1ed" containerName="registry-server" Dec 11 14:31:44 crc kubenswrapper[5050]: I1211 14:31:44.191056 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="03e6de62-f7c2-4629-bc21-264bf66be1ed" containerName="registry-server" Dec 11 14:31:44 crc kubenswrapper[5050]: E1211 14:31:44.191075 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03e6de62-f7c2-4629-bc21-264bf66be1ed" containerName="extract-utilities" Dec 11 14:31:44 crc kubenswrapper[5050]: I1211 14:31:44.191083 5050 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="03e6de62-f7c2-4629-bc21-264bf66be1ed" containerName="extract-utilities" Dec 11 14:31:44 crc kubenswrapper[5050]: E1211 14:31:44.191093 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03e6de62-f7c2-4629-bc21-264bf66be1ed" containerName="extract-content" Dec 11 14:31:44 crc kubenswrapper[5050]: I1211 14:31:44.191102 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="03e6de62-f7c2-4629-bc21-264bf66be1ed" containerName="extract-content" Dec 11 14:31:44 crc kubenswrapper[5050]: I1211 14:31:44.191282 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="03e6de62-f7c2-4629-bc21-264bf66be1ed" containerName="registry-server" Dec 11 14:31:44 crc kubenswrapper[5050]: I1211 14:31:44.192631 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mq4m9" Dec 11 14:31:44 crc kubenswrapper[5050]: I1211 14:31:44.202072 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a9a6d18-b1de-4b0a-843e-fcb66ad0ebf5-catalog-content\") pod \"redhat-marketplace-mq4m9\" (UID: \"5a9a6d18-b1de-4b0a-843e-fcb66ad0ebf5\") " pod="openshift-marketplace/redhat-marketplace-mq4m9" Dec 11 14:31:44 crc kubenswrapper[5050]: I1211 14:31:44.202133 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6f7m\" (UniqueName: \"kubernetes.io/projected/5a9a6d18-b1de-4b0a-843e-fcb66ad0ebf5-kube-api-access-n6f7m\") pod \"redhat-marketplace-mq4m9\" (UID: \"5a9a6d18-b1de-4b0a-843e-fcb66ad0ebf5\") " pod="openshift-marketplace/redhat-marketplace-mq4m9" Dec 11 14:31:44 crc kubenswrapper[5050]: I1211 14:31:44.202247 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a9a6d18-b1de-4b0a-843e-fcb66ad0ebf5-utilities\") pod \"redhat-marketplace-mq4m9\" (UID: \"5a9a6d18-b1de-4b0a-843e-fcb66ad0ebf5\") " pod="openshift-marketplace/redhat-marketplace-mq4m9" Dec 11 14:31:44 crc kubenswrapper[5050]: I1211 14:31:44.205330 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mq4m9"] Dec 11 14:31:44 crc kubenswrapper[5050]: I1211 14:31:44.303033 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a9a6d18-b1de-4b0a-843e-fcb66ad0ebf5-catalog-content\") pod \"redhat-marketplace-mq4m9\" (UID: \"5a9a6d18-b1de-4b0a-843e-fcb66ad0ebf5\") " pod="openshift-marketplace/redhat-marketplace-mq4m9" Dec 11 14:31:44 crc kubenswrapper[5050]: I1211 14:31:44.303389 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n6f7m\" (UniqueName: \"kubernetes.io/projected/5a9a6d18-b1de-4b0a-843e-fcb66ad0ebf5-kube-api-access-n6f7m\") pod \"redhat-marketplace-mq4m9\" (UID: \"5a9a6d18-b1de-4b0a-843e-fcb66ad0ebf5\") " pod="openshift-marketplace/redhat-marketplace-mq4m9" Dec 11 14:31:44 crc kubenswrapper[5050]: I1211 14:31:44.303537 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a9a6d18-b1de-4b0a-843e-fcb66ad0ebf5-utilities\") pod \"redhat-marketplace-mq4m9\" (UID: \"5a9a6d18-b1de-4b0a-843e-fcb66ad0ebf5\") " pod="openshift-marketplace/redhat-marketplace-mq4m9" Dec 11 14:31:44 crc kubenswrapper[5050]: I1211 14:31:44.303662 5050 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a9a6d18-b1de-4b0a-843e-fcb66ad0ebf5-catalog-content\") pod \"redhat-marketplace-mq4m9\" (UID: \"5a9a6d18-b1de-4b0a-843e-fcb66ad0ebf5\") " pod="openshift-marketplace/redhat-marketplace-mq4m9" Dec 11 14:31:44 crc kubenswrapper[5050]: I1211 14:31:44.304026 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a9a6d18-b1de-4b0a-843e-fcb66ad0ebf5-utilities\") pod \"redhat-marketplace-mq4m9\" (UID: \"5a9a6d18-b1de-4b0a-843e-fcb66ad0ebf5\") " pod="openshift-marketplace/redhat-marketplace-mq4m9" Dec 11 14:31:44 crc kubenswrapper[5050]: I1211 14:31:44.329694 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n6f7m\" (UniqueName: \"kubernetes.io/projected/5a9a6d18-b1de-4b0a-843e-fcb66ad0ebf5-kube-api-access-n6f7m\") pod \"redhat-marketplace-mq4m9\" (UID: \"5a9a6d18-b1de-4b0a-843e-fcb66ad0ebf5\") " pod="openshift-marketplace/redhat-marketplace-mq4m9" Dec 11 14:31:44 crc kubenswrapper[5050]: I1211 14:31:44.514237 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mq4m9" Dec 11 14:31:44 crc kubenswrapper[5050]: I1211 14:31:44.773674 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mq4m9"] Dec 11 14:31:44 crc kubenswrapper[5050]: I1211 14:31:44.819653 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mq4m9" event={"ID":"5a9a6d18-b1de-4b0a-843e-fcb66ad0ebf5","Type":"ContainerStarted","Data":"8e3578c67e72b4ad6ea62512c8d41341e2ac41f9c3a73d1c7b9f066370b43823"} Dec 11 14:31:47 crc kubenswrapper[5050]: I1211 14:31:47.858291 5050 generic.go:334] "Generic (PLEG): container finished" podID="5a9a6d18-b1de-4b0a-843e-fcb66ad0ebf5" containerID="0f9116bee4fd89533c9b572d761c9284171a299a9a920acfe670fdfa6b0bc61a" exitCode=0 Dec 11 14:31:47 crc kubenswrapper[5050]: I1211 14:31:47.858351 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mq4m9" event={"ID":"5a9a6d18-b1de-4b0a-843e-fcb66ad0ebf5","Type":"ContainerDied","Data":"0f9116bee4fd89533c9b572d761c9284171a299a9a920acfe670fdfa6b0bc61a"} Dec 11 14:31:52 crc kubenswrapper[5050]: I1211 14:31:52.546574 5050 scope.go:117] "RemoveContainer" containerID="ac70e3a833eefb531f588e66b0c19516ee5cbd6aca4607b44eb1072f1ed802ce" Dec 11 14:31:52 crc kubenswrapper[5050]: E1211 14:31:52.547272 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 14:31:52 crc kubenswrapper[5050]: I1211 14:31:52.899913 5050 generic.go:334] "Generic (PLEG): container finished" podID="5a9a6d18-b1de-4b0a-843e-fcb66ad0ebf5" containerID="06bc578eb05e7c2888ecaf5ddac4af6a42bcfe1569ef54dfd93375724c681173" exitCode=0 Dec 11 14:31:52 crc kubenswrapper[5050]: I1211 14:31:52.899973 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mq4m9" 
event={"ID":"5a9a6d18-b1de-4b0a-843e-fcb66ad0ebf5","Type":"ContainerDied","Data":"06bc578eb05e7c2888ecaf5ddac4af6a42bcfe1569ef54dfd93375724c681173"} Dec 11 14:31:54 crc kubenswrapper[5050]: I1211 14:31:54.919136 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mq4m9" event={"ID":"5a9a6d18-b1de-4b0a-843e-fcb66ad0ebf5","Type":"ContainerStarted","Data":"1dd9f45088235cecf398c609dd81172d06d61b2e7c598f71fec813478ac08d28"} Dec 11 14:31:54 crc kubenswrapper[5050]: I1211 14:31:54.950488 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-mq4m9" podStartSLOduration=4.683719454 podStartE2EDuration="10.950466411s" podCreationTimestamp="2025-12-11 14:31:44 +0000 UTC" firstStartedPulling="2025-12-11 14:31:47.859797942 +0000 UTC m=+2598.703520528" lastFinishedPulling="2025-12-11 14:31:54.126544909 +0000 UTC m=+2604.970267485" observedRunningTime="2025-12-11 14:31:54.948870488 +0000 UTC m=+2605.792593094" watchObservedRunningTime="2025-12-11 14:31:54.950466411 +0000 UTC m=+2605.794189007" Dec 11 14:32:04 crc kubenswrapper[5050]: I1211 14:32:04.514902 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-mq4m9" Dec 11 14:32:04 crc kubenswrapper[5050]: I1211 14:32:04.515707 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-mq4m9" Dec 11 14:32:04 crc kubenswrapper[5050]: I1211 14:32:04.555423 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-mq4m9" Dec 11 14:32:05 crc kubenswrapper[5050]: I1211 14:32:05.046747 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-mq4m9" Dec 11 14:32:05 crc kubenswrapper[5050]: I1211 14:32:05.106487 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mq4m9"] Dec 11 14:32:05 crc kubenswrapper[5050]: I1211 14:32:05.547004 5050 scope.go:117] "RemoveContainer" containerID="ac70e3a833eefb531f588e66b0c19516ee5cbd6aca4607b44eb1072f1ed802ce" Dec 11 14:32:05 crc kubenswrapper[5050]: E1211 14:32:05.547306 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 14:32:07 crc kubenswrapper[5050]: I1211 14:32:07.018876 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-mq4m9" podUID="5a9a6d18-b1de-4b0a-843e-fcb66ad0ebf5" containerName="registry-server" containerID="cri-o://1dd9f45088235cecf398c609dd81172d06d61b2e7c598f71fec813478ac08d28" gracePeriod=2 Dec 11 14:32:08 crc kubenswrapper[5050]: I1211 14:32:08.616578 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mq4m9" Dec 11 14:32:08 crc kubenswrapper[5050]: I1211 14:32:08.647302 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n6f7m\" (UniqueName: \"kubernetes.io/projected/5a9a6d18-b1de-4b0a-843e-fcb66ad0ebf5-kube-api-access-n6f7m\") pod \"5a9a6d18-b1de-4b0a-843e-fcb66ad0ebf5\" (UID: \"5a9a6d18-b1de-4b0a-843e-fcb66ad0ebf5\") " Dec 11 14:32:08 crc kubenswrapper[5050]: I1211 14:32:08.647517 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a9a6d18-b1de-4b0a-843e-fcb66ad0ebf5-catalog-content\") pod \"5a9a6d18-b1de-4b0a-843e-fcb66ad0ebf5\" (UID: \"5a9a6d18-b1de-4b0a-843e-fcb66ad0ebf5\") " Dec 11 14:32:08 crc kubenswrapper[5050]: I1211 14:32:08.647556 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a9a6d18-b1de-4b0a-843e-fcb66ad0ebf5-utilities\") pod \"5a9a6d18-b1de-4b0a-843e-fcb66ad0ebf5\" (UID: \"5a9a6d18-b1de-4b0a-843e-fcb66ad0ebf5\") " Dec 11 14:32:08 crc kubenswrapper[5050]: I1211 14:32:08.650619 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5a9a6d18-b1de-4b0a-843e-fcb66ad0ebf5-utilities" (OuterVolumeSpecName: "utilities") pod "5a9a6d18-b1de-4b0a-843e-fcb66ad0ebf5" (UID: "5a9a6d18-b1de-4b0a-843e-fcb66ad0ebf5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 14:32:08 crc kubenswrapper[5050]: I1211 14:32:08.654437 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a9a6d18-b1de-4b0a-843e-fcb66ad0ebf5-kube-api-access-n6f7m" (OuterVolumeSpecName: "kube-api-access-n6f7m") pod "5a9a6d18-b1de-4b0a-843e-fcb66ad0ebf5" (UID: "5a9a6d18-b1de-4b0a-843e-fcb66ad0ebf5"). InnerVolumeSpecName "kube-api-access-n6f7m". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 14:32:08 crc kubenswrapper[5050]: I1211 14:32:08.673989 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5a9a6d18-b1de-4b0a-843e-fcb66ad0ebf5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5a9a6d18-b1de-4b0a-843e-fcb66ad0ebf5" (UID: "5a9a6d18-b1de-4b0a-843e-fcb66ad0ebf5"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 14:32:08 crc kubenswrapper[5050]: I1211 14:32:08.749571 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n6f7m\" (UniqueName: \"kubernetes.io/projected/5a9a6d18-b1de-4b0a-843e-fcb66ad0ebf5-kube-api-access-n6f7m\") on node \"crc\" DevicePath \"\"" Dec 11 14:32:08 crc kubenswrapper[5050]: I1211 14:32:08.749636 5050 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a9a6d18-b1de-4b0a-843e-fcb66ad0ebf5-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 14:32:08 crc kubenswrapper[5050]: I1211 14:32:08.749645 5050 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a9a6d18-b1de-4b0a-843e-fcb66ad0ebf5-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 14:32:09 crc kubenswrapper[5050]: I1211 14:32:09.039720 5050 generic.go:334] "Generic (PLEG): container finished" podID="5a9a6d18-b1de-4b0a-843e-fcb66ad0ebf5" containerID="1dd9f45088235cecf398c609dd81172d06d61b2e7c598f71fec813478ac08d28" exitCode=0 Dec 11 14:32:09 crc kubenswrapper[5050]: I1211 14:32:09.039766 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mq4m9" event={"ID":"5a9a6d18-b1de-4b0a-843e-fcb66ad0ebf5","Type":"ContainerDied","Data":"1dd9f45088235cecf398c609dd81172d06d61b2e7c598f71fec813478ac08d28"} Dec 11 14:32:09 crc kubenswrapper[5050]: I1211 14:32:09.039799 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mq4m9" event={"ID":"5a9a6d18-b1de-4b0a-843e-fcb66ad0ebf5","Type":"ContainerDied","Data":"8e3578c67e72b4ad6ea62512c8d41341e2ac41f9c3a73d1c7b9f066370b43823"} Dec 11 14:32:09 crc kubenswrapper[5050]: I1211 14:32:09.039817 5050 scope.go:117] "RemoveContainer" containerID="1dd9f45088235cecf398c609dd81172d06d61b2e7c598f71fec813478ac08d28" Dec 11 14:32:09 crc kubenswrapper[5050]: I1211 14:32:09.039817 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mq4m9" Dec 11 14:32:09 crc kubenswrapper[5050]: I1211 14:32:09.063043 5050 scope.go:117] "RemoveContainer" containerID="06bc578eb05e7c2888ecaf5ddac4af6a42bcfe1569ef54dfd93375724c681173" Dec 11 14:32:09 crc kubenswrapper[5050]: I1211 14:32:09.082912 5050 scope.go:117] "RemoveContainer" containerID="0f9116bee4fd89533c9b572d761c9284171a299a9a920acfe670fdfa6b0bc61a" Dec 11 14:32:09 crc kubenswrapper[5050]: I1211 14:32:09.094794 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mq4m9"] Dec 11 14:32:09 crc kubenswrapper[5050]: I1211 14:32:09.106986 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-mq4m9"] Dec 11 14:32:09 crc kubenswrapper[5050]: I1211 14:32:09.116177 5050 scope.go:117] "RemoveContainer" containerID="1dd9f45088235cecf398c609dd81172d06d61b2e7c598f71fec813478ac08d28" Dec 11 14:32:09 crc kubenswrapper[5050]: E1211 14:32:09.116687 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1dd9f45088235cecf398c609dd81172d06d61b2e7c598f71fec813478ac08d28\": container with ID starting with 1dd9f45088235cecf398c609dd81172d06d61b2e7c598f71fec813478ac08d28 not found: ID does not exist" containerID="1dd9f45088235cecf398c609dd81172d06d61b2e7c598f71fec813478ac08d28" Dec 11 14:32:09 crc kubenswrapper[5050]: I1211 14:32:09.116738 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1dd9f45088235cecf398c609dd81172d06d61b2e7c598f71fec813478ac08d28"} err="failed to get container status \"1dd9f45088235cecf398c609dd81172d06d61b2e7c598f71fec813478ac08d28\": rpc error: code = NotFound desc = could not find container \"1dd9f45088235cecf398c609dd81172d06d61b2e7c598f71fec813478ac08d28\": container with ID starting with 1dd9f45088235cecf398c609dd81172d06d61b2e7c598f71fec813478ac08d28 not found: ID does not exist" Dec 11 14:32:09 crc kubenswrapper[5050]: I1211 14:32:09.116767 5050 scope.go:117] "RemoveContainer" containerID="06bc578eb05e7c2888ecaf5ddac4af6a42bcfe1569ef54dfd93375724c681173" Dec 11 14:32:09 crc kubenswrapper[5050]: E1211 14:32:09.117224 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"06bc578eb05e7c2888ecaf5ddac4af6a42bcfe1569ef54dfd93375724c681173\": container with ID starting with 06bc578eb05e7c2888ecaf5ddac4af6a42bcfe1569ef54dfd93375724c681173 not found: ID does not exist" containerID="06bc578eb05e7c2888ecaf5ddac4af6a42bcfe1569ef54dfd93375724c681173" Dec 11 14:32:09 crc kubenswrapper[5050]: I1211 14:32:09.117305 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06bc578eb05e7c2888ecaf5ddac4af6a42bcfe1569ef54dfd93375724c681173"} err="failed to get container status \"06bc578eb05e7c2888ecaf5ddac4af6a42bcfe1569ef54dfd93375724c681173\": rpc error: code = NotFound desc = could not find container \"06bc578eb05e7c2888ecaf5ddac4af6a42bcfe1569ef54dfd93375724c681173\": container with ID starting with 06bc578eb05e7c2888ecaf5ddac4af6a42bcfe1569ef54dfd93375724c681173 not found: ID does not exist" Dec 11 14:32:09 crc kubenswrapper[5050]: I1211 14:32:09.117352 5050 scope.go:117] "RemoveContainer" containerID="0f9116bee4fd89533c9b572d761c9284171a299a9a920acfe670fdfa6b0bc61a" Dec 11 14:32:09 crc kubenswrapper[5050]: E1211 14:32:09.117921 5050 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"0f9116bee4fd89533c9b572d761c9284171a299a9a920acfe670fdfa6b0bc61a\": container with ID starting with 0f9116bee4fd89533c9b572d761c9284171a299a9a920acfe670fdfa6b0bc61a not found: ID does not exist" containerID="0f9116bee4fd89533c9b572d761c9284171a299a9a920acfe670fdfa6b0bc61a" Dec 11 14:32:09 crc kubenswrapper[5050]: I1211 14:32:09.117963 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f9116bee4fd89533c9b572d761c9284171a299a9a920acfe670fdfa6b0bc61a"} err="failed to get container status \"0f9116bee4fd89533c9b572d761c9284171a299a9a920acfe670fdfa6b0bc61a\": rpc error: code = NotFound desc = could not find container \"0f9116bee4fd89533c9b572d761c9284171a299a9a920acfe670fdfa6b0bc61a\": container with ID starting with 0f9116bee4fd89533c9b572d761c9284171a299a9a920acfe670fdfa6b0bc61a not found: ID does not exist" Dec 11 14:32:09 crc kubenswrapper[5050]: I1211 14:32:09.559763 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a9a6d18-b1de-4b0a-843e-fcb66ad0ebf5" path="/var/lib/kubelet/pods/5a9a6d18-b1de-4b0a-843e-fcb66ad0ebf5/volumes" Dec 11 14:32:19 crc kubenswrapper[5050]: I1211 14:32:19.549838 5050 scope.go:117] "RemoveContainer" containerID="ac70e3a833eefb531f588e66b0c19516ee5cbd6aca4607b44eb1072f1ed802ce" Dec 11 14:32:19 crc kubenswrapper[5050]: E1211 14:32:19.550902 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 14:32:30 crc kubenswrapper[5050]: I1211 14:32:30.546131 5050 scope.go:117] "RemoveContainer" containerID="ac70e3a833eefb531f588e66b0c19516ee5cbd6aca4607b44eb1072f1ed802ce" Dec 11 14:32:30 crc kubenswrapper[5050]: E1211 14:32:30.547107 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 14:32:41 crc kubenswrapper[5050]: I1211 14:32:41.545827 5050 scope.go:117] "RemoveContainer" containerID="ac70e3a833eefb531f588e66b0c19516ee5cbd6aca4607b44eb1072f1ed802ce" Dec 11 14:32:41 crc kubenswrapper[5050]: E1211 14:32:41.546668 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 14:32:53 crc kubenswrapper[5050]: I1211 14:32:53.596591 5050 scope.go:117] "RemoveContainer" containerID="ac70e3a833eefb531f588e66b0c19516ee5cbd6aca4607b44eb1072f1ed802ce" Dec 11 14:32:53 crc kubenswrapper[5050]: E1211 14:32:53.597269 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 14:33:06 crc kubenswrapper[5050]: I1211 14:33:06.547434 5050 scope.go:117] "RemoveContainer" containerID="ac70e3a833eefb531f588e66b0c19516ee5cbd6aca4607b44eb1072f1ed802ce" Dec 11 14:33:06 crc kubenswrapper[5050]: E1211 14:33:06.548638 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 14:33:21 crc kubenswrapper[5050]: I1211 14:33:21.546818 5050 scope.go:117] "RemoveContainer" containerID="ac70e3a833eefb531f588e66b0c19516ee5cbd6aca4607b44eb1072f1ed802ce" Dec 11 14:33:21 crc kubenswrapper[5050]: E1211 14:33:21.547670 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 14:33:32 crc kubenswrapper[5050]: I1211 14:33:32.546978 5050 scope.go:117] "RemoveContainer" containerID="ac70e3a833eefb531f588e66b0c19516ee5cbd6aca4607b44eb1072f1ed802ce" Dec 11 14:33:32 crc kubenswrapper[5050]: E1211 14:33:32.547677 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 14:33:46 crc kubenswrapper[5050]: I1211 14:33:46.547578 5050 scope.go:117] "RemoveContainer" containerID="ac70e3a833eefb531f588e66b0c19516ee5cbd6aca4607b44eb1072f1ed802ce" Dec 11 14:33:46 crc kubenswrapper[5050]: I1211 14:33:46.927162 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" event={"ID":"7e849b2e-7cd7-4e49-acd2-deab139c699a","Type":"ContainerStarted","Data":"bf9d54912535db3a24a217c8d8c8f4eabd07c3860b7fb720e565aa94ceabf2d1"} Dec 11 14:34:04 crc kubenswrapper[5050]: I1211 14:34:04.353714 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-wrs2b"] Dec 11 14:34:04 crc kubenswrapper[5050]: E1211 14:34:04.354715 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a9a6d18-b1de-4b0a-843e-fcb66ad0ebf5" containerName="registry-server" Dec 11 14:34:04 crc kubenswrapper[5050]: I1211 14:34:04.354731 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a9a6d18-b1de-4b0a-843e-fcb66ad0ebf5" containerName="registry-server" Dec 11 14:34:04 crc kubenswrapper[5050]: E1211 14:34:04.354766 5050 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="5a9a6d18-b1de-4b0a-843e-fcb66ad0ebf5" containerName="extract-utilities" Dec 11 14:34:04 crc kubenswrapper[5050]: I1211 14:34:04.354776 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a9a6d18-b1de-4b0a-843e-fcb66ad0ebf5" containerName="extract-utilities" Dec 11 14:34:04 crc kubenswrapper[5050]: E1211 14:34:04.354801 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a9a6d18-b1de-4b0a-843e-fcb66ad0ebf5" containerName="extract-content" Dec 11 14:34:04 crc kubenswrapper[5050]: I1211 14:34:04.354811 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a9a6d18-b1de-4b0a-843e-fcb66ad0ebf5" containerName="extract-content" Dec 11 14:34:04 crc kubenswrapper[5050]: I1211 14:34:04.354981 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a9a6d18-b1de-4b0a-843e-fcb66ad0ebf5" containerName="registry-server" Dec 11 14:34:04 crc kubenswrapper[5050]: I1211 14:34:04.356302 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wrs2b" Dec 11 14:34:04 crc kubenswrapper[5050]: I1211 14:34:04.364945 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wrs2b"] Dec 11 14:34:04 crc kubenswrapper[5050]: I1211 14:34:04.505662 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0d671ba-679e-40f7-b831-a793351be311-catalog-content\") pod \"certified-operators-wrs2b\" (UID: \"a0d671ba-679e-40f7-b831-a793351be311\") " pod="openshift-marketplace/certified-operators-wrs2b" Dec 11 14:34:04 crc kubenswrapper[5050]: I1211 14:34:04.505836 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nn69x\" (UniqueName: \"kubernetes.io/projected/a0d671ba-679e-40f7-b831-a793351be311-kube-api-access-nn69x\") pod \"certified-operators-wrs2b\" (UID: \"a0d671ba-679e-40f7-b831-a793351be311\") " pod="openshift-marketplace/certified-operators-wrs2b" Dec 11 14:34:04 crc kubenswrapper[5050]: I1211 14:34:04.505890 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0d671ba-679e-40f7-b831-a793351be311-utilities\") pod \"certified-operators-wrs2b\" (UID: \"a0d671ba-679e-40f7-b831-a793351be311\") " pod="openshift-marketplace/certified-operators-wrs2b" Dec 11 14:34:04 crc kubenswrapper[5050]: I1211 14:34:04.607423 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nn69x\" (UniqueName: \"kubernetes.io/projected/a0d671ba-679e-40f7-b831-a793351be311-kube-api-access-nn69x\") pod \"certified-operators-wrs2b\" (UID: \"a0d671ba-679e-40f7-b831-a793351be311\") " pod="openshift-marketplace/certified-operators-wrs2b" Dec 11 14:34:04 crc kubenswrapper[5050]: I1211 14:34:04.607481 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0d671ba-679e-40f7-b831-a793351be311-utilities\") pod \"certified-operators-wrs2b\" (UID: \"a0d671ba-679e-40f7-b831-a793351be311\") " pod="openshift-marketplace/certified-operators-wrs2b" Dec 11 14:34:04 crc kubenswrapper[5050]: I1211 14:34:04.607558 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0d671ba-679e-40f7-b831-a793351be311-catalog-content\") 
pod \"certified-operators-wrs2b\" (UID: \"a0d671ba-679e-40f7-b831-a793351be311\") " pod="openshift-marketplace/certified-operators-wrs2b" Dec 11 14:34:04 crc kubenswrapper[5050]: I1211 14:34:04.607965 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0d671ba-679e-40f7-b831-a793351be311-utilities\") pod \"certified-operators-wrs2b\" (UID: \"a0d671ba-679e-40f7-b831-a793351be311\") " pod="openshift-marketplace/certified-operators-wrs2b" Dec 11 14:34:04 crc kubenswrapper[5050]: I1211 14:34:04.608176 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0d671ba-679e-40f7-b831-a793351be311-catalog-content\") pod \"certified-operators-wrs2b\" (UID: \"a0d671ba-679e-40f7-b831-a793351be311\") " pod="openshift-marketplace/certified-operators-wrs2b" Dec 11 14:34:04 crc kubenswrapper[5050]: I1211 14:34:04.632099 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nn69x\" (UniqueName: \"kubernetes.io/projected/a0d671ba-679e-40f7-b831-a793351be311-kube-api-access-nn69x\") pod \"certified-operators-wrs2b\" (UID: \"a0d671ba-679e-40f7-b831-a793351be311\") " pod="openshift-marketplace/certified-operators-wrs2b" Dec 11 14:34:04 crc kubenswrapper[5050]: I1211 14:34:04.724342 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wrs2b" Dec 11 14:34:04 crc kubenswrapper[5050]: I1211 14:34:04.970858 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wrs2b"] Dec 11 14:34:05 crc kubenswrapper[5050]: I1211 14:34:05.262377 5050 generic.go:334] "Generic (PLEG): container finished" podID="a0d671ba-679e-40f7-b831-a793351be311" containerID="5870c8ce1cf6a06d43ef807ac1fc26417b68f2b6040676cd5e19279f15761f85" exitCode=0 Dec 11 14:34:05 crc kubenswrapper[5050]: I1211 14:34:05.262434 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wrs2b" event={"ID":"a0d671ba-679e-40f7-b831-a793351be311","Type":"ContainerDied","Data":"5870c8ce1cf6a06d43ef807ac1fc26417b68f2b6040676cd5e19279f15761f85"} Dec 11 14:34:05 crc kubenswrapper[5050]: I1211 14:34:05.262695 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wrs2b" event={"ID":"a0d671ba-679e-40f7-b831-a793351be311","Type":"ContainerStarted","Data":"f63f57e67feee59a8ef0e4b027911a485bfc3e9fa66ad8f3a3f033273dbaccae"} Dec 11 14:34:06 crc kubenswrapper[5050]: I1211 14:34:06.274773 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wrs2b" event={"ID":"a0d671ba-679e-40f7-b831-a793351be311","Type":"ContainerStarted","Data":"cede52ed8a6f9060c1cf3b7980094045a93c6926852234de1b1b3e1f852f9fbe"} Dec 11 14:34:07 crc kubenswrapper[5050]: I1211 14:34:07.290801 5050 generic.go:334] "Generic (PLEG): container finished" podID="a0d671ba-679e-40f7-b831-a793351be311" containerID="cede52ed8a6f9060c1cf3b7980094045a93c6926852234de1b1b3e1f852f9fbe" exitCode=0 Dec 11 14:34:07 crc kubenswrapper[5050]: I1211 14:34:07.290869 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wrs2b" event={"ID":"a0d671ba-679e-40f7-b831-a793351be311","Type":"ContainerDied","Data":"cede52ed8a6f9060c1cf3b7980094045a93c6926852234de1b1b3e1f852f9fbe"} Dec 11 14:34:08 crc kubenswrapper[5050]: I1211 14:34:08.301619 5050 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wrs2b" event={"ID":"a0d671ba-679e-40f7-b831-a793351be311","Type":"ContainerStarted","Data":"6edbf123c59a5402b168285ecb246f7315b0ae22917bb3a96ad31769190b9fbf"} Dec 11 14:34:08 crc kubenswrapper[5050]: I1211 14:34:08.320028 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-wrs2b" podStartSLOduration=1.8264461829999998 podStartE2EDuration="4.31998391s" podCreationTimestamp="2025-12-11 14:34:04 +0000 UTC" firstStartedPulling="2025-12-11 14:34:05.263691803 +0000 UTC m=+2736.107414389" lastFinishedPulling="2025-12-11 14:34:07.75722952 +0000 UTC m=+2738.600952116" observedRunningTime="2025-12-11 14:34:08.319732283 +0000 UTC m=+2739.163454909" watchObservedRunningTime="2025-12-11 14:34:08.31998391 +0000 UTC m=+2739.163706496" Dec 11 14:34:14 crc kubenswrapper[5050]: I1211 14:34:14.724988 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-wrs2b" Dec 11 14:34:14 crc kubenswrapper[5050]: I1211 14:34:14.726102 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-wrs2b" Dec 11 14:34:14 crc kubenswrapper[5050]: I1211 14:34:14.787701 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-wrs2b" Dec 11 14:34:15 crc kubenswrapper[5050]: I1211 14:34:15.415084 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-wrs2b" Dec 11 14:34:15 crc kubenswrapper[5050]: I1211 14:34:15.464483 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wrs2b"] Dec 11 14:34:17 crc kubenswrapper[5050]: I1211 14:34:17.377210 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-wrs2b" podUID="a0d671ba-679e-40f7-b831-a793351be311" containerName="registry-server" containerID="cri-o://6edbf123c59a5402b168285ecb246f7315b0ae22917bb3a96ad31769190b9fbf" gracePeriod=2 Dec 11 14:34:18 crc kubenswrapper[5050]: I1211 14:34:18.909737 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wrs2b" Dec 11 14:34:18 crc kubenswrapper[5050]: I1211 14:34:18.919917 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nn69x\" (UniqueName: \"kubernetes.io/projected/a0d671ba-679e-40f7-b831-a793351be311-kube-api-access-nn69x\") pod \"a0d671ba-679e-40f7-b831-a793351be311\" (UID: \"a0d671ba-679e-40f7-b831-a793351be311\") " Dec 11 14:34:18 crc kubenswrapper[5050]: I1211 14:34:18.919967 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0d671ba-679e-40f7-b831-a793351be311-utilities\") pod \"a0d671ba-679e-40f7-b831-a793351be311\" (UID: \"a0d671ba-679e-40f7-b831-a793351be311\") " Dec 11 14:34:18 crc kubenswrapper[5050]: I1211 14:34:18.920835 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a0d671ba-679e-40f7-b831-a793351be311-utilities" (OuterVolumeSpecName: "utilities") pod "a0d671ba-679e-40f7-b831-a793351be311" (UID: "a0d671ba-679e-40f7-b831-a793351be311"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 14:34:18 crc kubenswrapper[5050]: I1211 14:34:18.946991 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0d671ba-679e-40f7-b831-a793351be311-kube-api-access-nn69x" (OuterVolumeSpecName: "kube-api-access-nn69x") pod "a0d671ba-679e-40f7-b831-a793351be311" (UID: "a0d671ba-679e-40f7-b831-a793351be311"). InnerVolumeSpecName "kube-api-access-nn69x". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 14:34:19 crc kubenswrapper[5050]: I1211 14:34:19.020870 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0d671ba-679e-40f7-b831-a793351be311-catalog-content\") pod \"a0d671ba-679e-40f7-b831-a793351be311\" (UID: \"a0d671ba-679e-40f7-b831-a793351be311\") " Dec 11 14:34:19 crc kubenswrapper[5050]: I1211 14:34:19.021334 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nn69x\" (UniqueName: \"kubernetes.io/projected/a0d671ba-679e-40f7-b831-a793351be311-kube-api-access-nn69x\") on node \"crc\" DevicePath \"\"" Dec 11 14:34:19 crc kubenswrapper[5050]: I1211 14:34:19.021358 5050 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0d671ba-679e-40f7-b831-a793351be311-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 14:34:19 crc kubenswrapper[5050]: I1211 14:34:19.076279 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a0d671ba-679e-40f7-b831-a793351be311-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a0d671ba-679e-40f7-b831-a793351be311" (UID: "a0d671ba-679e-40f7-b831-a793351be311"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 14:34:19 crc kubenswrapper[5050]: I1211 14:34:19.122355 5050 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0d671ba-679e-40f7-b831-a793351be311-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 14:34:19 crc kubenswrapper[5050]: I1211 14:34:19.393134 5050 generic.go:334] "Generic (PLEG): container finished" podID="a0d671ba-679e-40f7-b831-a793351be311" containerID="6edbf123c59a5402b168285ecb246f7315b0ae22917bb3a96ad31769190b9fbf" exitCode=0 Dec 11 14:34:19 crc kubenswrapper[5050]: I1211 14:34:19.393186 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wrs2b" event={"ID":"a0d671ba-679e-40f7-b831-a793351be311","Type":"ContainerDied","Data":"6edbf123c59a5402b168285ecb246f7315b0ae22917bb3a96ad31769190b9fbf"} Dec 11 14:34:19 crc kubenswrapper[5050]: I1211 14:34:19.393221 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wrs2b" event={"ID":"a0d671ba-679e-40f7-b831-a793351be311","Type":"ContainerDied","Data":"f63f57e67feee59a8ef0e4b027911a485bfc3e9fa66ad8f3a3f033273dbaccae"} Dec 11 14:34:19 crc kubenswrapper[5050]: I1211 14:34:19.393257 5050 scope.go:117] "RemoveContainer" containerID="6edbf123c59a5402b168285ecb246f7315b0ae22917bb3a96ad31769190b9fbf" Dec 11 14:34:19 crc kubenswrapper[5050]: I1211 14:34:19.393187 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-wrs2b" Dec 11 14:34:19 crc kubenswrapper[5050]: I1211 14:34:19.428288 5050 scope.go:117] "RemoveContainer" containerID="cede52ed8a6f9060c1cf3b7980094045a93c6926852234de1b1b3e1f852f9fbe" Dec 11 14:34:19 crc kubenswrapper[5050]: I1211 14:34:19.430273 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wrs2b"] Dec 11 14:34:19 crc kubenswrapper[5050]: I1211 14:34:19.438652 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-wrs2b"] Dec 11 14:34:19 crc kubenswrapper[5050]: I1211 14:34:19.454579 5050 scope.go:117] "RemoveContainer" containerID="5870c8ce1cf6a06d43ef807ac1fc26417b68f2b6040676cd5e19279f15761f85" Dec 11 14:34:19 crc kubenswrapper[5050]: I1211 14:34:19.473312 5050 scope.go:117] "RemoveContainer" containerID="6edbf123c59a5402b168285ecb246f7315b0ae22917bb3a96ad31769190b9fbf" Dec 11 14:34:19 crc kubenswrapper[5050]: E1211 14:34:19.474132 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6edbf123c59a5402b168285ecb246f7315b0ae22917bb3a96ad31769190b9fbf\": container with ID starting with 6edbf123c59a5402b168285ecb246f7315b0ae22917bb3a96ad31769190b9fbf not found: ID does not exist" containerID="6edbf123c59a5402b168285ecb246f7315b0ae22917bb3a96ad31769190b9fbf" Dec 11 14:34:19 crc kubenswrapper[5050]: I1211 14:34:19.474172 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6edbf123c59a5402b168285ecb246f7315b0ae22917bb3a96ad31769190b9fbf"} err="failed to get container status \"6edbf123c59a5402b168285ecb246f7315b0ae22917bb3a96ad31769190b9fbf\": rpc error: code = NotFound desc = could not find container \"6edbf123c59a5402b168285ecb246f7315b0ae22917bb3a96ad31769190b9fbf\": container with ID starting with 6edbf123c59a5402b168285ecb246f7315b0ae22917bb3a96ad31769190b9fbf not found: ID does not exist" Dec 11 14:34:19 crc kubenswrapper[5050]: I1211 14:34:19.474199 5050 scope.go:117] "RemoveContainer" containerID="cede52ed8a6f9060c1cf3b7980094045a93c6926852234de1b1b3e1f852f9fbe" Dec 11 14:34:19 crc kubenswrapper[5050]: E1211 14:34:19.474723 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cede52ed8a6f9060c1cf3b7980094045a93c6926852234de1b1b3e1f852f9fbe\": container with ID starting with cede52ed8a6f9060c1cf3b7980094045a93c6926852234de1b1b3e1f852f9fbe not found: ID does not exist" containerID="cede52ed8a6f9060c1cf3b7980094045a93c6926852234de1b1b3e1f852f9fbe" Dec 11 14:34:19 crc kubenswrapper[5050]: I1211 14:34:19.474797 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cede52ed8a6f9060c1cf3b7980094045a93c6926852234de1b1b3e1f852f9fbe"} err="failed to get container status \"cede52ed8a6f9060c1cf3b7980094045a93c6926852234de1b1b3e1f852f9fbe\": rpc error: code = NotFound desc = could not find container \"cede52ed8a6f9060c1cf3b7980094045a93c6926852234de1b1b3e1f852f9fbe\": container with ID starting with cede52ed8a6f9060c1cf3b7980094045a93c6926852234de1b1b3e1f852f9fbe not found: ID does not exist" Dec 11 14:34:19 crc kubenswrapper[5050]: I1211 14:34:19.474831 5050 scope.go:117] "RemoveContainer" containerID="5870c8ce1cf6a06d43ef807ac1fc26417b68f2b6040676cd5e19279f15761f85" Dec 11 14:34:19 crc kubenswrapper[5050]: E1211 14:34:19.475227 5050 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"5870c8ce1cf6a06d43ef807ac1fc26417b68f2b6040676cd5e19279f15761f85\": container with ID starting with 5870c8ce1cf6a06d43ef807ac1fc26417b68f2b6040676cd5e19279f15761f85 not found: ID does not exist" containerID="5870c8ce1cf6a06d43ef807ac1fc26417b68f2b6040676cd5e19279f15761f85" Dec 11 14:34:19 crc kubenswrapper[5050]: I1211 14:34:19.475321 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5870c8ce1cf6a06d43ef807ac1fc26417b68f2b6040676cd5e19279f15761f85"} err="failed to get container status \"5870c8ce1cf6a06d43ef807ac1fc26417b68f2b6040676cd5e19279f15761f85\": rpc error: code = NotFound desc = could not find container \"5870c8ce1cf6a06d43ef807ac1fc26417b68f2b6040676cd5e19279f15761f85\": container with ID starting with 5870c8ce1cf6a06d43ef807ac1fc26417b68f2b6040676cd5e19279f15761f85 not found: ID does not exist" Dec 11 14:34:19 crc kubenswrapper[5050]: I1211 14:34:19.563235 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0d671ba-679e-40f7-b831-a793351be311" path="/var/lib/kubelet/pods/a0d671ba-679e-40f7-b831-a793351be311/volumes" Dec 11 14:34:49 crc kubenswrapper[5050]: I1211 14:34:49.424302 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-4pz8k"] Dec 11 14:34:49 crc kubenswrapper[5050]: E1211 14:34:49.425042 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0d671ba-679e-40f7-b831-a793351be311" containerName="extract-utilities" Dec 11 14:34:49 crc kubenswrapper[5050]: I1211 14:34:49.425054 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0d671ba-679e-40f7-b831-a793351be311" containerName="extract-utilities" Dec 11 14:34:49 crc kubenswrapper[5050]: E1211 14:34:49.425072 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0d671ba-679e-40f7-b831-a793351be311" containerName="extract-content" Dec 11 14:34:49 crc kubenswrapper[5050]: I1211 14:34:49.425078 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0d671ba-679e-40f7-b831-a793351be311" containerName="extract-content" Dec 11 14:34:49 crc kubenswrapper[5050]: E1211 14:34:49.425088 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0d671ba-679e-40f7-b831-a793351be311" containerName="registry-server" Dec 11 14:34:49 crc kubenswrapper[5050]: I1211 14:34:49.425094 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0d671ba-679e-40f7-b831-a793351be311" containerName="registry-server" Dec 11 14:34:49 crc kubenswrapper[5050]: I1211 14:34:49.425241 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0d671ba-679e-40f7-b831-a793351be311" containerName="registry-server" Dec 11 14:34:49 crc kubenswrapper[5050]: I1211 14:34:49.426177 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-4pz8k" Dec 11 14:34:49 crc kubenswrapper[5050]: I1211 14:34:49.447548 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4pz8k"] Dec 11 14:34:49 crc kubenswrapper[5050]: I1211 14:34:49.458087 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6853210d-43c3-4ac0-9972-6cf11a92b956-catalog-content\") pod \"community-operators-4pz8k\" (UID: \"6853210d-43c3-4ac0-9972-6cf11a92b956\") " pod="openshift-marketplace/community-operators-4pz8k" Dec 11 14:34:49 crc kubenswrapper[5050]: I1211 14:34:49.458207 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6853210d-43c3-4ac0-9972-6cf11a92b956-utilities\") pod \"community-operators-4pz8k\" (UID: \"6853210d-43c3-4ac0-9972-6cf11a92b956\") " pod="openshift-marketplace/community-operators-4pz8k" Dec 11 14:34:49 crc kubenswrapper[5050]: I1211 14:34:49.458244 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cq6f9\" (UniqueName: \"kubernetes.io/projected/6853210d-43c3-4ac0-9972-6cf11a92b956-kube-api-access-cq6f9\") pod \"community-operators-4pz8k\" (UID: \"6853210d-43c3-4ac0-9972-6cf11a92b956\") " pod="openshift-marketplace/community-operators-4pz8k" Dec 11 14:34:49 crc kubenswrapper[5050]: I1211 14:34:49.559653 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6853210d-43c3-4ac0-9972-6cf11a92b956-catalog-content\") pod \"community-operators-4pz8k\" (UID: \"6853210d-43c3-4ac0-9972-6cf11a92b956\") " pod="openshift-marketplace/community-operators-4pz8k" Dec 11 14:34:49 crc kubenswrapper[5050]: I1211 14:34:49.559723 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6853210d-43c3-4ac0-9972-6cf11a92b956-utilities\") pod \"community-operators-4pz8k\" (UID: \"6853210d-43c3-4ac0-9972-6cf11a92b956\") " pod="openshift-marketplace/community-operators-4pz8k" Dec 11 14:34:49 crc kubenswrapper[5050]: I1211 14:34:49.559748 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cq6f9\" (UniqueName: \"kubernetes.io/projected/6853210d-43c3-4ac0-9972-6cf11a92b956-kube-api-access-cq6f9\") pod \"community-operators-4pz8k\" (UID: \"6853210d-43c3-4ac0-9972-6cf11a92b956\") " pod="openshift-marketplace/community-operators-4pz8k" Dec 11 14:34:49 crc kubenswrapper[5050]: I1211 14:34:49.560437 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6853210d-43c3-4ac0-9972-6cf11a92b956-catalog-content\") pod \"community-operators-4pz8k\" (UID: \"6853210d-43c3-4ac0-9972-6cf11a92b956\") " pod="openshift-marketplace/community-operators-4pz8k" Dec 11 14:34:49 crc kubenswrapper[5050]: I1211 14:34:49.560578 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6853210d-43c3-4ac0-9972-6cf11a92b956-utilities\") pod \"community-operators-4pz8k\" (UID: \"6853210d-43c3-4ac0-9972-6cf11a92b956\") " pod="openshift-marketplace/community-operators-4pz8k" Dec 11 14:34:49 crc kubenswrapper[5050]: I1211 14:34:49.587971 5050 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-cq6f9\" (UniqueName: \"kubernetes.io/projected/6853210d-43c3-4ac0-9972-6cf11a92b956-kube-api-access-cq6f9\") pod \"community-operators-4pz8k\" (UID: \"6853210d-43c3-4ac0-9972-6cf11a92b956\") " pod="openshift-marketplace/community-operators-4pz8k" Dec 11 14:34:49 crc kubenswrapper[5050]: I1211 14:34:49.753933 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4pz8k" Dec 11 14:34:50 crc kubenswrapper[5050]: I1211 14:34:50.058520 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4pz8k"] Dec 11 14:34:50 crc kubenswrapper[5050]: I1211 14:34:50.643751 5050 generic.go:334] "Generic (PLEG): container finished" podID="6853210d-43c3-4ac0-9972-6cf11a92b956" containerID="e777a67b9ed6a12b491a22e6f107a6d452fc3489b04aacc8cbd4d8c950c160bf" exitCode=0 Dec 11 14:34:50 crc kubenswrapper[5050]: I1211 14:34:50.643839 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4pz8k" event={"ID":"6853210d-43c3-4ac0-9972-6cf11a92b956","Type":"ContainerDied","Data":"e777a67b9ed6a12b491a22e6f107a6d452fc3489b04aacc8cbd4d8c950c160bf"} Dec 11 14:34:50 crc kubenswrapper[5050]: I1211 14:34:50.644031 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4pz8k" event={"ID":"6853210d-43c3-4ac0-9972-6cf11a92b956","Type":"ContainerStarted","Data":"e6ae6c58e77d2246d7b9f0f7c9426d3ccd8157f7242ee45931b8981d8d9cdee7"} Dec 11 14:34:51 crc kubenswrapper[5050]: I1211 14:34:51.652100 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4pz8k" event={"ID":"6853210d-43c3-4ac0-9972-6cf11a92b956","Type":"ContainerStarted","Data":"c8a17a9221c7b5e6aeabde9812a48cc342741da0e0af14d9acc8bc03869fa425"} Dec 11 14:34:52 crc kubenswrapper[5050]: I1211 14:34:52.659222 5050 generic.go:334] "Generic (PLEG): container finished" podID="6853210d-43c3-4ac0-9972-6cf11a92b956" containerID="c8a17a9221c7b5e6aeabde9812a48cc342741da0e0af14d9acc8bc03869fa425" exitCode=0 Dec 11 14:34:52 crc kubenswrapper[5050]: I1211 14:34:52.659263 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4pz8k" event={"ID":"6853210d-43c3-4ac0-9972-6cf11a92b956","Type":"ContainerDied","Data":"c8a17a9221c7b5e6aeabde9812a48cc342741da0e0af14d9acc8bc03869fa425"} Dec 11 14:34:53 crc kubenswrapper[5050]: I1211 14:34:53.667590 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4pz8k" event={"ID":"6853210d-43c3-4ac0-9972-6cf11a92b956","Type":"ContainerStarted","Data":"2f7b1cc868968d2c37917ada91707badee955e4575b1bbec8d0499faf20a52c7"} Dec 11 14:34:53 crc kubenswrapper[5050]: I1211 14:34:53.686940 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-4pz8k" podStartSLOduration=2.261246382 podStartE2EDuration="4.68692192s" podCreationTimestamp="2025-12-11 14:34:49 +0000 UTC" firstStartedPulling="2025-12-11 14:34:50.645322572 +0000 UTC m=+2781.489045188" lastFinishedPulling="2025-12-11 14:34:53.07099814 +0000 UTC m=+2783.914720726" observedRunningTime="2025-12-11 14:34:53.682992074 +0000 UTC m=+2784.526714670" watchObservedRunningTime="2025-12-11 14:34:53.68692192 +0000 UTC m=+2784.530644506" Dec 11 14:34:59 crc kubenswrapper[5050]: I1211 14:34:59.754853 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/community-operators-4pz8k" Dec 11 14:34:59 crc kubenswrapper[5050]: I1211 14:34:59.755424 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-4pz8k" Dec 11 14:34:59 crc kubenswrapper[5050]: I1211 14:34:59.808739 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-4pz8k" Dec 11 14:35:00 crc kubenswrapper[5050]: I1211 14:35:00.753987 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-4pz8k" Dec 11 14:35:00 crc kubenswrapper[5050]: I1211 14:35:00.794612 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4pz8k"] Dec 11 14:35:02 crc kubenswrapper[5050]: I1211 14:35:02.730351 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-4pz8k" podUID="6853210d-43c3-4ac0-9972-6cf11a92b956" containerName="registry-server" containerID="cri-o://2f7b1cc868968d2c37917ada91707badee955e4575b1bbec8d0499faf20a52c7" gracePeriod=2 Dec 11 14:35:03 crc kubenswrapper[5050]: I1211 14:35:03.741777 5050 generic.go:334] "Generic (PLEG): container finished" podID="6853210d-43c3-4ac0-9972-6cf11a92b956" containerID="2f7b1cc868968d2c37917ada91707badee955e4575b1bbec8d0499faf20a52c7" exitCode=0 Dec 11 14:35:03 crc kubenswrapper[5050]: I1211 14:35:03.741839 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4pz8k" event={"ID":"6853210d-43c3-4ac0-9972-6cf11a92b956","Type":"ContainerDied","Data":"2f7b1cc868968d2c37917ada91707badee955e4575b1bbec8d0499faf20a52c7"} Dec 11 14:35:04 crc kubenswrapper[5050]: I1211 14:35:04.272696 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4pz8k" Dec 11 14:35:04 crc kubenswrapper[5050]: I1211 14:35:04.382287 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cq6f9\" (UniqueName: \"kubernetes.io/projected/6853210d-43c3-4ac0-9972-6cf11a92b956-kube-api-access-cq6f9\") pod \"6853210d-43c3-4ac0-9972-6cf11a92b956\" (UID: \"6853210d-43c3-4ac0-9972-6cf11a92b956\") " Dec 11 14:35:04 crc kubenswrapper[5050]: I1211 14:35:04.382489 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6853210d-43c3-4ac0-9972-6cf11a92b956-catalog-content\") pod \"6853210d-43c3-4ac0-9972-6cf11a92b956\" (UID: \"6853210d-43c3-4ac0-9972-6cf11a92b956\") " Dec 11 14:35:04 crc kubenswrapper[5050]: I1211 14:35:04.382524 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6853210d-43c3-4ac0-9972-6cf11a92b956-utilities\") pod \"6853210d-43c3-4ac0-9972-6cf11a92b956\" (UID: \"6853210d-43c3-4ac0-9972-6cf11a92b956\") " Dec 11 14:35:04 crc kubenswrapper[5050]: I1211 14:35:04.383708 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6853210d-43c3-4ac0-9972-6cf11a92b956-utilities" (OuterVolumeSpecName: "utilities") pod "6853210d-43c3-4ac0-9972-6cf11a92b956" (UID: "6853210d-43c3-4ac0-9972-6cf11a92b956"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 14:35:04 crc kubenswrapper[5050]: I1211 14:35:04.396282 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6853210d-43c3-4ac0-9972-6cf11a92b956-kube-api-access-cq6f9" (OuterVolumeSpecName: "kube-api-access-cq6f9") pod "6853210d-43c3-4ac0-9972-6cf11a92b956" (UID: "6853210d-43c3-4ac0-9972-6cf11a92b956"). InnerVolumeSpecName "kube-api-access-cq6f9". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 14:35:04 crc kubenswrapper[5050]: I1211 14:35:04.447326 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6853210d-43c3-4ac0-9972-6cf11a92b956-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6853210d-43c3-4ac0-9972-6cf11a92b956" (UID: "6853210d-43c3-4ac0-9972-6cf11a92b956"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 14:35:04 crc kubenswrapper[5050]: I1211 14:35:04.484048 5050 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6853210d-43c3-4ac0-9972-6cf11a92b956-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 14:35:04 crc kubenswrapper[5050]: I1211 14:35:04.484089 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cq6f9\" (UniqueName: \"kubernetes.io/projected/6853210d-43c3-4ac0-9972-6cf11a92b956-kube-api-access-cq6f9\") on node \"crc\" DevicePath \"\"" Dec 11 14:35:04 crc kubenswrapper[5050]: I1211 14:35:04.484127 5050 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6853210d-43c3-4ac0-9972-6cf11a92b956-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 14:35:04 crc kubenswrapper[5050]: I1211 14:35:04.753549 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4pz8k" event={"ID":"6853210d-43c3-4ac0-9972-6cf11a92b956","Type":"ContainerDied","Data":"e6ae6c58e77d2246d7b9f0f7c9426d3ccd8157f7242ee45931b8981d8d9cdee7"} Dec 11 14:35:04 crc kubenswrapper[5050]: I1211 14:35:04.753610 5050 scope.go:117] "RemoveContainer" containerID="2f7b1cc868968d2c37917ada91707badee955e4575b1bbec8d0499faf20a52c7" Dec 11 14:35:04 crc kubenswrapper[5050]: I1211 14:35:04.753616 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-4pz8k" Dec 11 14:35:04 crc kubenswrapper[5050]: I1211 14:35:04.783179 5050 scope.go:117] "RemoveContainer" containerID="c8a17a9221c7b5e6aeabde9812a48cc342741da0e0af14d9acc8bc03869fa425" Dec 11 14:35:04 crc kubenswrapper[5050]: I1211 14:35:04.795188 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4pz8k"] Dec 11 14:35:04 crc kubenswrapper[5050]: I1211 14:35:04.808687 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-4pz8k"] Dec 11 14:35:04 crc kubenswrapper[5050]: I1211 14:35:04.844733 5050 scope.go:117] "RemoveContainer" containerID="e777a67b9ed6a12b491a22e6f107a6d452fc3489b04aacc8cbd4d8c950c160bf" Dec 11 14:35:05 crc kubenswrapper[5050]: I1211 14:35:05.557206 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6853210d-43c3-4ac0-9972-6cf11a92b956" path="/var/lib/kubelet/pods/6853210d-43c3-4ac0-9972-6cf11a92b956/volumes" Dec 11 14:36:10 crc kubenswrapper[5050]: I1211 14:36:10.796639 5050 patch_prober.go:28] interesting pod/machine-config-daemon-wcb2s container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 14:36:10 crc kubenswrapper[5050]: I1211 14:36:10.798155 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 14:36:40 crc kubenswrapper[5050]: I1211 14:36:40.796914 5050 patch_prober.go:28] interesting pod/machine-config-daemon-wcb2s container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 14:36:40 crc kubenswrapper[5050]: I1211 14:36:40.797349 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 14:37:10 crc kubenswrapper[5050]: I1211 14:37:10.796770 5050 patch_prober.go:28] interesting pod/machine-config-daemon-wcb2s container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 14:37:10 crc kubenswrapper[5050]: I1211 14:37:10.798348 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 14:37:10 crc kubenswrapper[5050]: I1211 14:37:10.798494 5050 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" Dec 11 14:37:10 crc kubenswrapper[5050]: I1211 14:37:10.800192 5050 
kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"bf9d54912535db3a24a217c8d8c8f4eabd07c3860b7fb720e565aa94ceabf2d1"} pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 11 14:37:10 crc kubenswrapper[5050]: I1211 14:37:10.800318 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" containerName="machine-config-daemon" containerID="cri-o://bf9d54912535db3a24a217c8d8c8f4eabd07c3860b7fb720e565aa94ceabf2d1" gracePeriod=600 Dec 11 14:37:11 crc kubenswrapper[5050]: I1211 14:37:11.735579 5050 generic.go:334] "Generic (PLEG): container finished" podID="7e849b2e-7cd7-4e49-acd2-deab139c699a" containerID="bf9d54912535db3a24a217c8d8c8f4eabd07c3860b7fb720e565aa94ceabf2d1" exitCode=0 Dec 11 14:37:11 crc kubenswrapper[5050]: I1211 14:37:11.735647 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" event={"ID":"7e849b2e-7cd7-4e49-acd2-deab139c699a","Type":"ContainerDied","Data":"bf9d54912535db3a24a217c8d8c8f4eabd07c3860b7fb720e565aa94ceabf2d1"} Dec 11 14:37:11 crc kubenswrapper[5050]: I1211 14:37:11.736933 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" event={"ID":"7e849b2e-7cd7-4e49-acd2-deab139c699a","Type":"ContainerStarted","Data":"f99d22e27bac8e0550651090d627b2404c105d7886051c4dcf85cf0c6169d292"} Dec 11 14:37:11 crc kubenswrapper[5050]: I1211 14:37:11.737002 5050 scope.go:117] "RemoveContainer" containerID="ac70e3a833eefb531f588e66b0c19516ee5cbd6aca4607b44eb1072f1ed802ce" Dec 11 14:39:40 crc kubenswrapper[5050]: I1211 14:39:40.796253 5050 patch_prober.go:28] interesting pod/machine-config-daemon-wcb2s container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 14:39:40 crc kubenswrapper[5050]: I1211 14:39:40.796783 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 14:40:10 crc kubenswrapper[5050]: I1211 14:40:10.837216 5050 patch_prober.go:28] interesting pod/machine-config-daemon-wcb2s container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 14:40:10 crc kubenswrapper[5050]: I1211 14:40:10.838925 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 14:40:40 crc kubenswrapper[5050]: I1211 14:40:40.796557 5050 patch_prober.go:28] interesting pod/machine-config-daemon-wcb2s container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 14:40:40 crc kubenswrapper[5050]: I1211 14:40:40.797615 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 14:40:40 crc kubenswrapper[5050]: I1211 14:40:40.797709 5050 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" Dec 11 14:40:40 crc kubenswrapper[5050]: I1211 14:40:40.798830 5050 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f99d22e27bac8e0550651090d627b2404c105d7886051c4dcf85cf0c6169d292"} pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 11 14:40:40 crc kubenswrapper[5050]: I1211 14:40:40.798986 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" containerName="machine-config-daemon" containerID="cri-o://f99d22e27bac8e0550651090d627b2404c105d7886051c4dcf85cf0c6169d292" gracePeriod=600 Dec 11 14:40:40 crc kubenswrapper[5050]: E1211 14:40:40.943624 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 14:40:41 crc kubenswrapper[5050]: I1211 14:40:41.722609 5050 generic.go:334] "Generic (PLEG): container finished" podID="7e849b2e-7cd7-4e49-acd2-deab139c699a" containerID="f99d22e27bac8e0550651090d627b2404c105d7886051c4dcf85cf0c6169d292" exitCode=0 Dec 11 14:40:41 crc kubenswrapper[5050]: I1211 14:40:41.722670 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" event={"ID":"7e849b2e-7cd7-4e49-acd2-deab139c699a","Type":"ContainerDied","Data":"f99d22e27bac8e0550651090d627b2404c105d7886051c4dcf85cf0c6169d292"} Dec 11 14:40:41 crc kubenswrapper[5050]: I1211 14:40:41.722708 5050 scope.go:117] "RemoveContainer" containerID="bf9d54912535db3a24a217c8d8c8f4eabd07c3860b7fb720e565aa94ceabf2d1" Dec 11 14:40:41 crc kubenswrapper[5050]: I1211 14:40:41.723193 5050 scope.go:117] "RemoveContainer" containerID="f99d22e27bac8e0550651090d627b2404c105d7886051c4dcf85cf0c6169d292" Dec 11 14:40:41 crc kubenswrapper[5050]: E1211 14:40:41.723409 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" 
Dec 11 14:40:54 crc kubenswrapper[5050]: I1211 14:40:54.546703 5050 scope.go:117] "RemoveContainer" containerID="f99d22e27bac8e0550651090d627b2404c105d7886051c4dcf85cf0c6169d292" Dec 11 14:40:54 crc kubenswrapper[5050]: E1211 14:40:54.547601 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 14:41:08 crc kubenswrapper[5050]: I1211 14:41:08.545736 5050 scope.go:117] "RemoveContainer" containerID="f99d22e27bac8e0550651090d627b2404c105d7886051c4dcf85cf0c6169d292" Dec 11 14:41:08 crc kubenswrapper[5050]: E1211 14:41:08.547737 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 14:41:19 crc kubenswrapper[5050]: I1211 14:41:19.554622 5050 scope.go:117] "RemoveContainer" containerID="f99d22e27bac8e0550651090d627b2404c105d7886051c4dcf85cf0c6169d292" Dec 11 14:41:19 crc kubenswrapper[5050]: E1211 14:41:19.556331 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 14:41:30 crc kubenswrapper[5050]: I1211 14:41:30.546506 5050 scope.go:117] "RemoveContainer" containerID="f99d22e27bac8e0550651090d627b2404c105d7886051c4dcf85cf0c6169d292" Dec 11 14:41:30 crc kubenswrapper[5050]: E1211 14:41:30.547118 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 14:41:45 crc kubenswrapper[5050]: I1211 14:41:45.546748 5050 scope.go:117] "RemoveContainer" containerID="f99d22e27bac8e0550651090d627b2404c105d7886051c4dcf85cf0c6169d292" Dec 11 14:41:45 crc kubenswrapper[5050]: E1211 14:41:45.547592 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 14:41:56 crc kubenswrapper[5050]: I1211 14:41:56.548181 5050 scope.go:117] "RemoveContainer" containerID="f99d22e27bac8e0550651090d627b2404c105d7886051c4dcf85cf0c6169d292" Dec 11 14:41:56 
crc kubenswrapper[5050]: E1211 14:41:56.548926 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 14:42:08 crc kubenswrapper[5050]: I1211 14:42:08.219364 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-2tvnw"] Dec 11 14:42:08 crc kubenswrapper[5050]: E1211 14:42:08.220478 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6853210d-43c3-4ac0-9972-6cf11a92b956" containerName="registry-server" Dec 11 14:42:08 crc kubenswrapper[5050]: I1211 14:42:08.220495 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="6853210d-43c3-4ac0-9972-6cf11a92b956" containerName="registry-server" Dec 11 14:42:08 crc kubenswrapper[5050]: E1211 14:42:08.220527 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6853210d-43c3-4ac0-9972-6cf11a92b956" containerName="extract-utilities" Dec 11 14:42:08 crc kubenswrapper[5050]: I1211 14:42:08.220535 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="6853210d-43c3-4ac0-9972-6cf11a92b956" containerName="extract-utilities" Dec 11 14:42:08 crc kubenswrapper[5050]: E1211 14:42:08.220547 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6853210d-43c3-4ac0-9972-6cf11a92b956" containerName="extract-content" Dec 11 14:42:08 crc kubenswrapper[5050]: I1211 14:42:08.220556 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="6853210d-43c3-4ac0-9972-6cf11a92b956" containerName="extract-content" Dec 11 14:42:08 crc kubenswrapper[5050]: I1211 14:42:08.220735 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="6853210d-43c3-4ac0-9972-6cf11a92b956" containerName="registry-server" Dec 11 14:42:08 crc kubenswrapper[5050]: I1211 14:42:08.225099 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2tvnw" Dec 11 14:42:08 crc kubenswrapper[5050]: I1211 14:42:08.242199 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2tvnw"] Dec 11 14:42:08 crc kubenswrapper[5050]: I1211 14:42:08.314514 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6349c907-cffd-4bd7-aae6-a304e22dd2cd-utilities\") pod \"redhat-operators-2tvnw\" (UID: \"6349c907-cffd-4bd7-aae6-a304e22dd2cd\") " pod="openshift-marketplace/redhat-operators-2tvnw" Dec 11 14:42:08 crc kubenswrapper[5050]: I1211 14:42:08.314910 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tx6pm\" (UniqueName: \"kubernetes.io/projected/6349c907-cffd-4bd7-aae6-a304e22dd2cd-kube-api-access-tx6pm\") pod \"redhat-operators-2tvnw\" (UID: \"6349c907-cffd-4bd7-aae6-a304e22dd2cd\") " pod="openshift-marketplace/redhat-operators-2tvnw" Dec 11 14:42:08 crc kubenswrapper[5050]: I1211 14:42:08.315296 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6349c907-cffd-4bd7-aae6-a304e22dd2cd-catalog-content\") pod \"redhat-operators-2tvnw\" (UID: \"6349c907-cffd-4bd7-aae6-a304e22dd2cd\") " pod="openshift-marketplace/redhat-operators-2tvnw" Dec 11 14:42:08 crc kubenswrapper[5050]: I1211 14:42:08.416396 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6349c907-cffd-4bd7-aae6-a304e22dd2cd-catalog-content\") pod \"redhat-operators-2tvnw\" (UID: \"6349c907-cffd-4bd7-aae6-a304e22dd2cd\") " pod="openshift-marketplace/redhat-operators-2tvnw" Dec 11 14:42:08 crc kubenswrapper[5050]: I1211 14:42:08.416485 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6349c907-cffd-4bd7-aae6-a304e22dd2cd-utilities\") pod \"redhat-operators-2tvnw\" (UID: \"6349c907-cffd-4bd7-aae6-a304e22dd2cd\") " pod="openshift-marketplace/redhat-operators-2tvnw" Dec 11 14:42:08 crc kubenswrapper[5050]: I1211 14:42:08.416540 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tx6pm\" (UniqueName: \"kubernetes.io/projected/6349c907-cffd-4bd7-aae6-a304e22dd2cd-kube-api-access-tx6pm\") pod \"redhat-operators-2tvnw\" (UID: \"6349c907-cffd-4bd7-aae6-a304e22dd2cd\") " pod="openshift-marketplace/redhat-operators-2tvnw" Dec 11 14:42:08 crc kubenswrapper[5050]: I1211 14:42:08.417304 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6349c907-cffd-4bd7-aae6-a304e22dd2cd-catalog-content\") pod \"redhat-operators-2tvnw\" (UID: \"6349c907-cffd-4bd7-aae6-a304e22dd2cd\") " pod="openshift-marketplace/redhat-operators-2tvnw" Dec 11 14:42:08 crc kubenswrapper[5050]: I1211 14:42:08.417525 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6349c907-cffd-4bd7-aae6-a304e22dd2cd-utilities\") pod \"redhat-operators-2tvnw\" (UID: \"6349c907-cffd-4bd7-aae6-a304e22dd2cd\") " pod="openshift-marketplace/redhat-operators-2tvnw" Dec 11 14:42:08 crc kubenswrapper[5050]: I1211 14:42:08.442327 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-tx6pm\" (UniqueName: \"kubernetes.io/projected/6349c907-cffd-4bd7-aae6-a304e22dd2cd-kube-api-access-tx6pm\") pod \"redhat-operators-2tvnw\" (UID: \"6349c907-cffd-4bd7-aae6-a304e22dd2cd\") " pod="openshift-marketplace/redhat-operators-2tvnw" Dec 11 14:42:08 crc kubenswrapper[5050]: I1211 14:42:08.547569 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2tvnw" Dec 11 14:42:08 crc kubenswrapper[5050]: I1211 14:42:08.800632 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2tvnw"] Dec 11 14:42:09 crc kubenswrapper[5050]: I1211 14:42:09.448879 5050 generic.go:334] "Generic (PLEG): container finished" podID="6349c907-cffd-4bd7-aae6-a304e22dd2cd" containerID="05140e27d335968379af1eb13640ec6301cdc986bf6fac9dd44ae90bf60af06f" exitCode=0 Dec 11 14:42:09 crc kubenswrapper[5050]: I1211 14:42:09.448985 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2tvnw" event={"ID":"6349c907-cffd-4bd7-aae6-a304e22dd2cd","Type":"ContainerDied","Data":"05140e27d335968379af1eb13640ec6301cdc986bf6fac9dd44ae90bf60af06f"} Dec 11 14:42:09 crc kubenswrapper[5050]: I1211 14:42:09.449238 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2tvnw" event={"ID":"6349c907-cffd-4bd7-aae6-a304e22dd2cd","Type":"ContainerStarted","Data":"5f8aa140a3c1b4bf8ce0fded63b5e9ae4446810ccdc5ba0abfc5aeac2efb47b8"} Dec 11 14:42:09 crc kubenswrapper[5050]: I1211 14:42:09.451274 5050 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 11 14:42:10 crc kubenswrapper[5050]: I1211 14:42:10.546615 5050 scope.go:117] "RemoveContainer" containerID="f99d22e27bac8e0550651090d627b2404c105d7886051c4dcf85cf0c6169d292" Dec 11 14:42:10 crc kubenswrapper[5050]: E1211 14:42:10.547083 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 14:42:11 crc kubenswrapper[5050]: I1211 14:42:11.462805 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2tvnw" event={"ID":"6349c907-cffd-4bd7-aae6-a304e22dd2cd","Type":"ContainerStarted","Data":"52bb50079e7844d0cef7c8185c3b9dfd81c1c6e7409b890eb2cd9c6220f99517"} Dec 11 14:42:12 crc kubenswrapper[5050]: I1211 14:42:12.471686 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2tvnw" event={"ID":"6349c907-cffd-4bd7-aae6-a304e22dd2cd","Type":"ContainerDied","Data":"52bb50079e7844d0cef7c8185c3b9dfd81c1c6e7409b890eb2cd9c6220f99517"} Dec 11 14:42:12 crc kubenswrapper[5050]: I1211 14:42:12.471460 5050 generic.go:334] "Generic (PLEG): container finished" podID="6349c907-cffd-4bd7-aae6-a304e22dd2cd" containerID="52bb50079e7844d0cef7c8185c3b9dfd81c1c6e7409b890eb2cd9c6220f99517" exitCode=0 Dec 11 14:42:14 crc kubenswrapper[5050]: I1211 14:42:14.490599 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2tvnw" 
event={"ID":"6349c907-cffd-4bd7-aae6-a304e22dd2cd","Type":"ContainerStarted","Data":"d9ddd37381ef3bdd0b06c16cfecc04e286b12f1ac202b17097808ad29cc707a2"} Dec 11 14:42:14 crc kubenswrapper[5050]: I1211 14:42:14.510824 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-2tvnw" podStartSLOduration=1.956811595 podStartE2EDuration="6.510807599s" podCreationTimestamp="2025-12-11 14:42:08 +0000 UTC" firstStartedPulling="2025-12-11 14:42:09.450998545 +0000 UTC m=+3220.294721131" lastFinishedPulling="2025-12-11 14:42:14.004994549 +0000 UTC m=+3224.848717135" observedRunningTime="2025-12-11 14:42:14.506003319 +0000 UTC m=+3225.349725905" watchObservedRunningTime="2025-12-11 14:42:14.510807599 +0000 UTC m=+3225.354530185" Dec 11 14:42:18 crc kubenswrapper[5050]: I1211 14:42:18.547928 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-2tvnw" Dec 11 14:42:18 crc kubenswrapper[5050]: I1211 14:42:18.548293 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-2tvnw" Dec 11 14:42:19 crc kubenswrapper[5050]: I1211 14:42:19.593229 5050 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-2tvnw" podUID="6349c907-cffd-4bd7-aae6-a304e22dd2cd" containerName="registry-server" probeResult="failure" output=< Dec 11 14:42:19 crc kubenswrapper[5050]: timeout: failed to connect service ":50051" within 1s Dec 11 14:42:19 crc kubenswrapper[5050]: > Dec 11 14:42:24 crc kubenswrapper[5050]: I1211 14:42:24.545680 5050 scope.go:117] "RemoveContainer" containerID="f99d22e27bac8e0550651090d627b2404c105d7886051c4dcf85cf0c6169d292" Dec 11 14:42:24 crc kubenswrapper[5050]: E1211 14:42:24.546179 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 14:42:28 crc kubenswrapper[5050]: I1211 14:42:28.599714 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-2tvnw" Dec 11 14:42:28 crc kubenswrapper[5050]: I1211 14:42:28.646449 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-2tvnw" Dec 11 14:42:28 crc kubenswrapper[5050]: I1211 14:42:28.837702 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2tvnw"] Dec 11 14:42:30 crc kubenswrapper[5050]: I1211 14:42:30.604434 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-2tvnw" podUID="6349c907-cffd-4bd7-aae6-a304e22dd2cd" containerName="registry-server" containerID="cri-o://d9ddd37381ef3bdd0b06c16cfecc04e286b12f1ac202b17097808ad29cc707a2" gracePeriod=2 Dec 11 14:42:31 crc kubenswrapper[5050]: I1211 14:42:31.596802 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2tvnw" Dec 11 14:42:31 crc kubenswrapper[5050]: I1211 14:42:31.627183 5050 generic.go:334] "Generic (PLEG): container finished" podID="6349c907-cffd-4bd7-aae6-a304e22dd2cd" containerID="d9ddd37381ef3bdd0b06c16cfecc04e286b12f1ac202b17097808ad29cc707a2" exitCode=0 Dec 11 14:42:31 crc kubenswrapper[5050]: I1211 14:42:31.627237 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2tvnw" event={"ID":"6349c907-cffd-4bd7-aae6-a304e22dd2cd","Type":"ContainerDied","Data":"d9ddd37381ef3bdd0b06c16cfecc04e286b12f1ac202b17097808ad29cc707a2"} Dec 11 14:42:31 crc kubenswrapper[5050]: I1211 14:42:31.627274 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2tvnw" event={"ID":"6349c907-cffd-4bd7-aae6-a304e22dd2cd","Type":"ContainerDied","Data":"5f8aa140a3c1b4bf8ce0fded63b5e9ae4446810ccdc5ba0abfc5aeac2efb47b8"} Dec 11 14:42:31 crc kubenswrapper[5050]: I1211 14:42:31.627294 5050 scope.go:117] "RemoveContainer" containerID="d9ddd37381ef3bdd0b06c16cfecc04e286b12f1ac202b17097808ad29cc707a2" Dec 11 14:42:31 crc kubenswrapper[5050]: I1211 14:42:31.627340 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2tvnw" Dec 11 14:42:31 crc kubenswrapper[5050]: I1211 14:42:31.662186 5050 scope.go:117] "RemoveContainer" containerID="52bb50079e7844d0cef7c8185c3b9dfd81c1c6e7409b890eb2cd9c6220f99517" Dec 11 14:42:31 crc kubenswrapper[5050]: I1211 14:42:31.666002 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6349c907-cffd-4bd7-aae6-a304e22dd2cd-catalog-content\") pod \"6349c907-cffd-4bd7-aae6-a304e22dd2cd\" (UID: \"6349c907-cffd-4bd7-aae6-a304e22dd2cd\") " Dec 11 14:42:31 crc kubenswrapper[5050]: I1211 14:42:31.666100 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tx6pm\" (UniqueName: \"kubernetes.io/projected/6349c907-cffd-4bd7-aae6-a304e22dd2cd-kube-api-access-tx6pm\") pod \"6349c907-cffd-4bd7-aae6-a304e22dd2cd\" (UID: \"6349c907-cffd-4bd7-aae6-a304e22dd2cd\") " Dec 11 14:42:31 crc kubenswrapper[5050]: I1211 14:42:31.666432 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6349c907-cffd-4bd7-aae6-a304e22dd2cd-utilities\") pod \"6349c907-cffd-4bd7-aae6-a304e22dd2cd\" (UID: \"6349c907-cffd-4bd7-aae6-a304e22dd2cd\") " Dec 11 14:42:31 crc kubenswrapper[5050]: I1211 14:42:31.667899 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6349c907-cffd-4bd7-aae6-a304e22dd2cd-utilities" (OuterVolumeSpecName: "utilities") pod "6349c907-cffd-4bd7-aae6-a304e22dd2cd" (UID: "6349c907-cffd-4bd7-aae6-a304e22dd2cd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 14:42:31 crc kubenswrapper[5050]: I1211 14:42:31.678094 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6349c907-cffd-4bd7-aae6-a304e22dd2cd-kube-api-access-tx6pm" (OuterVolumeSpecName: "kube-api-access-tx6pm") pod "6349c907-cffd-4bd7-aae6-a304e22dd2cd" (UID: "6349c907-cffd-4bd7-aae6-a304e22dd2cd"). InnerVolumeSpecName "kube-api-access-tx6pm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 14:42:31 crc kubenswrapper[5050]: I1211 14:42:31.700039 5050 scope.go:117] "RemoveContainer" containerID="05140e27d335968379af1eb13640ec6301cdc986bf6fac9dd44ae90bf60af06f" Dec 11 14:42:31 crc kubenswrapper[5050]: I1211 14:42:31.727716 5050 scope.go:117] "RemoveContainer" containerID="d9ddd37381ef3bdd0b06c16cfecc04e286b12f1ac202b17097808ad29cc707a2" Dec 11 14:42:31 crc kubenswrapper[5050]: E1211 14:42:31.728363 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d9ddd37381ef3bdd0b06c16cfecc04e286b12f1ac202b17097808ad29cc707a2\": container with ID starting with d9ddd37381ef3bdd0b06c16cfecc04e286b12f1ac202b17097808ad29cc707a2 not found: ID does not exist" containerID="d9ddd37381ef3bdd0b06c16cfecc04e286b12f1ac202b17097808ad29cc707a2" Dec 11 14:42:31 crc kubenswrapper[5050]: I1211 14:42:31.728416 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d9ddd37381ef3bdd0b06c16cfecc04e286b12f1ac202b17097808ad29cc707a2"} err="failed to get container status \"d9ddd37381ef3bdd0b06c16cfecc04e286b12f1ac202b17097808ad29cc707a2\": rpc error: code = NotFound desc = could not find container \"d9ddd37381ef3bdd0b06c16cfecc04e286b12f1ac202b17097808ad29cc707a2\": container with ID starting with d9ddd37381ef3bdd0b06c16cfecc04e286b12f1ac202b17097808ad29cc707a2 not found: ID does not exist" Dec 11 14:42:31 crc kubenswrapper[5050]: I1211 14:42:31.728451 5050 scope.go:117] "RemoveContainer" containerID="52bb50079e7844d0cef7c8185c3b9dfd81c1c6e7409b890eb2cd9c6220f99517" Dec 11 14:42:31 crc kubenswrapper[5050]: E1211 14:42:31.728781 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"52bb50079e7844d0cef7c8185c3b9dfd81c1c6e7409b890eb2cd9c6220f99517\": container with ID starting with 52bb50079e7844d0cef7c8185c3b9dfd81c1c6e7409b890eb2cd9c6220f99517 not found: ID does not exist" containerID="52bb50079e7844d0cef7c8185c3b9dfd81c1c6e7409b890eb2cd9c6220f99517" Dec 11 14:42:31 crc kubenswrapper[5050]: I1211 14:42:31.728822 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"52bb50079e7844d0cef7c8185c3b9dfd81c1c6e7409b890eb2cd9c6220f99517"} err="failed to get container status \"52bb50079e7844d0cef7c8185c3b9dfd81c1c6e7409b890eb2cd9c6220f99517\": rpc error: code = NotFound desc = could not find container \"52bb50079e7844d0cef7c8185c3b9dfd81c1c6e7409b890eb2cd9c6220f99517\": container with ID starting with 52bb50079e7844d0cef7c8185c3b9dfd81c1c6e7409b890eb2cd9c6220f99517 not found: ID does not exist" Dec 11 14:42:31 crc kubenswrapper[5050]: I1211 14:42:31.728845 5050 scope.go:117] "RemoveContainer" containerID="05140e27d335968379af1eb13640ec6301cdc986bf6fac9dd44ae90bf60af06f" Dec 11 14:42:31 crc kubenswrapper[5050]: E1211 14:42:31.729258 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"05140e27d335968379af1eb13640ec6301cdc986bf6fac9dd44ae90bf60af06f\": container with ID starting with 05140e27d335968379af1eb13640ec6301cdc986bf6fac9dd44ae90bf60af06f not found: ID does not exist" containerID="05140e27d335968379af1eb13640ec6301cdc986bf6fac9dd44ae90bf60af06f" Dec 11 14:42:31 crc kubenswrapper[5050]: I1211 14:42:31.729284 5050 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"05140e27d335968379af1eb13640ec6301cdc986bf6fac9dd44ae90bf60af06f"} err="failed to get container status \"05140e27d335968379af1eb13640ec6301cdc986bf6fac9dd44ae90bf60af06f\": rpc error: code = NotFound desc = could not find container \"05140e27d335968379af1eb13640ec6301cdc986bf6fac9dd44ae90bf60af06f\": container with ID starting with 05140e27d335968379af1eb13640ec6301cdc986bf6fac9dd44ae90bf60af06f not found: ID does not exist" Dec 11 14:42:31 crc kubenswrapper[5050]: I1211 14:42:31.768743 5050 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6349c907-cffd-4bd7-aae6-a304e22dd2cd-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 14:42:31 crc kubenswrapper[5050]: I1211 14:42:31.768800 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tx6pm\" (UniqueName: \"kubernetes.io/projected/6349c907-cffd-4bd7-aae6-a304e22dd2cd-kube-api-access-tx6pm\") on node \"crc\" DevicePath \"\"" Dec 11 14:42:31 crc kubenswrapper[5050]: I1211 14:42:31.806702 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6349c907-cffd-4bd7-aae6-a304e22dd2cd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6349c907-cffd-4bd7-aae6-a304e22dd2cd" (UID: "6349c907-cffd-4bd7-aae6-a304e22dd2cd"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 14:42:31 crc kubenswrapper[5050]: I1211 14:42:31.869933 5050 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6349c907-cffd-4bd7-aae6-a304e22dd2cd-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 14:42:31 crc kubenswrapper[5050]: I1211 14:42:31.962464 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2tvnw"] Dec 11 14:42:31 crc kubenswrapper[5050]: I1211 14:42:31.968666 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-2tvnw"] Dec 11 14:42:33 crc kubenswrapper[5050]: I1211 14:42:33.556320 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6349c907-cffd-4bd7-aae6-a304e22dd2cd" path="/var/lib/kubelet/pods/6349c907-cffd-4bd7-aae6-a304e22dd2cd/volumes" Dec 11 14:42:37 crc kubenswrapper[5050]: I1211 14:42:37.546522 5050 scope.go:117] "RemoveContainer" containerID="f99d22e27bac8e0550651090d627b2404c105d7886051c4dcf85cf0c6169d292" Dec 11 14:42:37 crc kubenswrapper[5050]: E1211 14:42:37.547823 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 14:42:50 crc kubenswrapper[5050]: I1211 14:42:50.546352 5050 scope.go:117] "RemoveContainer" containerID="f99d22e27bac8e0550651090d627b2404c105d7886051c4dcf85cf0c6169d292" Dec 11 14:42:50 crc kubenswrapper[5050]: E1211 14:42:50.547089 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 14:43:01 crc kubenswrapper[5050]: I1211 14:43:01.546604 5050 scope.go:117] "RemoveContainer" containerID="f99d22e27bac8e0550651090d627b2404c105d7886051c4dcf85cf0c6169d292" Dec 11 14:43:01 crc kubenswrapper[5050]: E1211 14:43:01.547483 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 14:43:04 crc kubenswrapper[5050]: I1211 14:43:04.227597 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-rj6nv"] Dec 11 14:43:04 crc kubenswrapper[5050]: E1211 14:43:04.228122 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6349c907-cffd-4bd7-aae6-a304e22dd2cd" containerName="extract-content" Dec 11 14:43:04 crc kubenswrapper[5050]: I1211 14:43:04.228135 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="6349c907-cffd-4bd7-aae6-a304e22dd2cd" containerName="extract-content" Dec 11 14:43:04 crc kubenswrapper[5050]: E1211 14:43:04.228166 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6349c907-cffd-4bd7-aae6-a304e22dd2cd" containerName="extract-utilities" Dec 11 14:43:04 crc kubenswrapper[5050]: I1211 14:43:04.228172 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="6349c907-cffd-4bd7-aae6-a304e22dd2cd" containerName="extract-utilities" Dec 11 14:43:04 crc kubenswrapper[5050]: E1211 14:43:04.228184 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6349c907-cffd-4bd7-aae6-a304e22dd2cd" containerName="registry-server" Dec 11 14:43:04 crc kubenswrapper[5050]: I1211 14:43:04.228190 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="6349c907-cffd-4bd7-aae6-a304e22dd2cd" containerName="registry-server" Dec 11 14:43:04 crc kubenswrapper[5050]: I1211 14:43:04.228311 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="6349c907-cffd-4bd7-aae6-a304e22dd2cd" containerName="registry-server" Dec 11 14:43:04 crc kubenswrapper[5050]: I1211 14:43:04.229268 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rj6nv" Dec 11 14:43:04 crc kubenswrapper[5050]: I1211 14:43:04.247548 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rj6nv"] Dec 11 14:43:04 crc kubenswrapper[5050]: I1211 14:43:04.282803 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rkkkv\" (UniqueName: \"kubernetes.io/projected/27c9aeb8-21e0-465c-b6f5-31eef7f1ec53-kube-api-access-rkkkv\") pod \"redhat-marketplace-rj6nv\" (UID: \"27c9aeb8-21e0-465c-b6f5-31eef7f1ec53\") " pod="openshift-marketplace/redhat-marketplace-rj6nv" Dec 11 14:43:04 crc kubenswrapper[5050]: I1211 14:43:04.283227 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/27c9aeb8-21e0-465c-b6f5-31eef7f1ec53-utilities\") pod \"redhat-marketplace-rj6nv\" (UID: \"27c9aeb8-21e0-465c-b6f5-31eef7f1ec53\") " pod="openshift-marketplace/redhat-marketplace-rj6nv" Dec 11 14:43:04 crc kubenswrapper[5050]: I1211 14:43:04.283260 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/27c9aeb8-21e0-465c-b6f5-31eef7f1ec53-catalog-content\") pod \"redhat-marketplace-rj6nv\" (UID: \"27c9aeb8-21e0-465c-b6f5-31eef7f1ec53\") " pod="openshift-marketplace/redhat-marketplace-rj6nv" Dec 11 14:43:04 crc kubenswrapper[5050]: I1211 14:43:04.384523 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rkkkv\" (UniqueName: \"kubernetes.io/projected/27c9aeb8-21e0-465c-b6f5-31eef7f1ec53-kube-api-access-rkkkv\") pod \"redhat-marketplace-rj6nv\" (UID: \"27c9aeb8-21e0-465c-b6f5-31eef7f1ec53\") " pod="openshift-marketplace/redhat-marketplace-rj6nv" Dec 11 14:43:04 crc kubenswrapper[5050]: I1211 14:43:04.384677 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/27c9aeb8-21e0-465c-b6f5-31eef7f1ec53-utilities\") pod \"redhat-marketplace-rj6nv\" (UID: \"27c9aeb8-21e0-465c-b6f5-31eef7f1ec53\") " pod="openshift-marketplace/redhat-marketplace-rj6nv" Dec 11 14:43:04 crc kubenswrapper[5050]: I1211 14:43:04.384703 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/27c9aeb8-21e0-465c-b6f5-31eef7f1ec53-catalog-content\") pod \"redhat-marketplace-rj6nv\" (UID: \"27c9aeb8-21e0-465c-b6f5-31eef7f1ec53\") " pod="openshift-marketplace/redhat-marketplace-rj6nv" Dec 11 14:43:04 crc kubenswrapper[5050]: I1211 14:43:04.385544 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/27c9aeb8-21e0-465c-b6f5-31eef7f1ec53-utilities\") pod \"redhat-marketplace-rj6nv\" (UID: \"27c9aeb8-21e0-465c-b6f5-31eef7f1ec53\") " pod="openshift-marketplace/redhat-marketplace-rj6nv" Dec 11 14:43:04 crc kubenswrapper[5050]: I1211 14:43:04.385599 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/27c9aeb8-21e0-465c-b6f5-31eef7f1ec53-catalog-content\") pod \"redhat-marketplace-rj6nv\" (UID: \"27c9aeb8-21e0-465c-b6f5-31eef7f1ec53\") " pod="openshift-marketplace/redhat-marketplace-rj6nv" Dec 11 14:43:04 crc kubenswrapper[5050]: I1211 14:43:04.406860 5050 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-rkkkv\" (UniqueName: \"kubernetes.io/projected/27c9aeb8-21e0-465c-b6f5-31eef7f1ec53-kube-api-access-rkkkv\") pod \"redhat-marketplace-rj6nv\" (UID: \"27c9aeb8-21e0-465c-b6f5-31eef7f1ec53\") " pod="openshift-marketplace/redhat-marketplace-rj6nv" Dec 11 14:43:04 crc kubenswrapper[5050]: I1211 14:43:04.564915 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rj6nv" Dec 11 14:43:04 crc kubenswrapper[5050]: I1211 14:43:04.831625 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rj6nv"] Dec 11 14:43:04 crc kubenswrapper[5050]: I1211 14:43:04.916006 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rj6nv" event={"ID":"27c9aeb8-21e0-465c-b6f5-31eef7f1ec53","Type":"ContainerStarted","Data":"3f13cc38cb1b3582e5908103aa159df60234ee8de5fd117dc5c3848acfeed45f"} Dec 11 14:43:05 crc kubenswrapper[5050]: I1211 14:43:05.925788 5050 generic.go:334] "Generic (PLEG): container finished" podID="27c9aeb8-21e0-465c-b6f5-31eef7f1ec53" containerID="6cd6fc80afbcd88a70f0c284fa6e7bff8867371cde9947e7c6d89d5befaecf17" exitCode=0 Dec 11 14:43:05 crc kubenswrapper[5050]: I1211 14:43:05.925878 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rj6nv" event={"ID":"27c9aeb8-21e0-465c-b6f5-31eef7f1ec53","Type":"ContainerDied","Data":"6cd6fc80afbcd88a70f0c284fa6e7bff8867371cde9947e7c6d89d5befaecf17"} Dec 11 14:43:07 crc kubenswrapper[5050]: I1211 14:43:07.945867 5050 generic.go:334] "Generic (PLEG): container finished" podID="27c9aeb8-21e0-465c-b6f5-31eef7f1ec53" containerID="7340c5ba1cbf75b77f23fbe1b016640d17a813b9f0e80c10eeaba5e42777d8fc" exitCode=0 Dec 11 14:43:07 crc kubenswrapper[5050]: I1211 14:43:07.945978 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rj6nv" event={"ID":"27c9aeb8-21e0-465c-b6f5-31eef7f1ec53","Type":"ContainerDied","Data":"7340c5ba1cbf75b77f23fbe1b016640d17a813b9f0e80c10eeaba5e42777d8fc"} Dec 11 14:43:14 crc kubenswrapper[5050]: I1211 14:43:14.015869 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rj6nv" event={"ID":"27c9aeb8-21e0-465c-b6f5-31eef7f1ec53","Type":"ContainerStarted","Data":"591d9ecaca9ab8b0dabc60e700d4357e467f29603218fdd0fae41e59df261eca"} Dec 11 14:43:14 crc kubenswrapper[5050]: I1211 14:43:14.042301 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-rj6nv" podStartSLOduration=2.996224637 podStartE2EDuration="10.042277303s" podCreationTimestamp="2025-12-11 14:43:04 +0000 UTC" firstStartedPulling="2025-12-11 14:43:05.928315003 +0000 UTC m=+3276.772037589" lastFinishedPulling="2025-12-11 14:43:12.974367669 +0000 UTC m=+3283.818090255" observedRunningTime="2025-12-11 14:43:14.034894955 +0000 UTC m=+3284.878617541" watchObservedRunningTime="2025-12-11 14:43:14.042277303 +0000 UTC m=+3284.885999889" Dec 11 14:43:14 crc kubenswrapper[5050]: I1211 14:43:14.565910 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-rj6nv" Dec 11 14:43:14 crc kubenswrapper[5050]: I1211 14:43:14.565980 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-rj6nv" Dec 11 14:43:14 crc kubenswrapper[5050]: I1211 14:43:14.608596 5050 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-rj6nv" Dec 11 14:43:15 crc kubenswrapper[5050]: I1211 14:43:15.546419 5050 scope.go:117] "RemoveContainer" containerID="f99d22e27bac8e0550651090d627b2404c105d7886051c4dcf85cf0c6169d292" Dec 11 14:43:15 crc kubenswrapper[5050]: E1211 14:43:15.546852 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 14:43:24 crc kubenswrapper[5050]: I1211 14:43:24.619599 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-rj6nv" Dec 11 14:43:24 crc kubenswrapper[5050]: I1211 14:43:24.688176 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rj6nv"] Dec 11 14:43:25 crc kubenswrapper[5050]: I1211 14:43:25.099222 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-rj6nv" podUID="27c9aeb8-21e0-465c-b6f5-31eef7f1ec53" containerName="registry-server" containerID="cri-o://591d9ecaca9ab8b0dabc60e700d4357e467f29603218fdd0fae41e59df261eca" gracePeriod=2 Dec 11 14:43:25 crc kubenswrapper[5050]: I1211 14:43:25.485826 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rj6nv" Dec 11 14:43:25 crc kubenswrapper[5050]: I1211 14:43:25.540204 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/27c9aeb8-21e0-465c-b6f5-31eef7f1ec53-utilities\") pod \"27c9aeb8-21e0-465c-b6f5-31eef7f1ec53\" (UID: \"27c9aeb8-21e0-465c-b6f5-31eef7f1ec53\") " Dec 11 14:43:25 crc kubenswrapper[5050]: I1211 14:43:25.540385 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rkkkv\" (UniqueName: \"kubernetes.io/projected/27c9aeb8-21e0-465c-b6f5-31eef7f1ec53-kube-api-access-rkkkv\") pod \"27c9aeb8-21e0-465c-b6f5-31eef7f1ec53\" (UID: \"27c9aeb8-21e0-465c-b6f5-31eef7f1ec53\") " Dec 11 14:43:25 crc kubenswrapper[5050]: I1211 14:43:25.540435 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/27c9aeb8-21e0-465c-b6f5-31eef7f1ec53-catalog-content\") pod \"27c9aeb8-21e0-465c-b6f5-31eef7f1ec53\" (UID: \"27c9aeb8-21e0-465c-b6f5-31eef7f1ec53\") " Dec 11 14:43:25 crc kubenswrapper[5050]: I1211 14:43:25.541170 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/27c9aeb8-21e0-465c-b6f5-31eef7f1ec53-utilities" (OuterVolumeSpecName: "utilities") pod "27c9aeb8-21e0-465c-b6f5-31eef7f1ec53" (UID: "27c9aeb8-21e0-465c-b6f5-31eef7f1ec53"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 14:43:25 crc kubenswrapper[5050]: I1211 14:43:25.545996 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27c9aeb8-21e0-465c-b6f5-31eef7f1ec53-kube-api-access-rkkkv" (OuterVolumeSpecName: "kube-api-access-rkkkv") pod "27c9aeb8-21e0-465c-b6f5-31eef7f1ec53" (UID: "27c9aeb8-21e0-465c-b6f5-31eef7f1ec53"). 
InnerVolumeSpecName "kube-api-access-rkkkv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 14:43:25 crc kubenswrapper[5050]: I1211 14:43:25.573374 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/27c9aeb8-21e0-465c-b6f5-31eef7f1ec53-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "27c9aeb8-21e0-465c-b6f5-31eef7f1ec53" (UID: "27c9aeb8-21e0-465c-b6f5-31eef7f1ec53"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 14:43:25 crc kubenswrapper[5050]: I1211 14:43:25.642080 5050 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/27c9aeb8-21e0-465c-b6f5-31eef7f1ec53-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 14:43:25 crc kubenswrapper[5050]: I1211 14:43:25.642125 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rkkkv\" (UniqueName: \"kubernetes.io/projected/27c9aeb8-21e0-465c-b6f5-31eef7f1ec53-kube-api-access-rkkkv\") on node \"crc\" DevicePath \"\"" Dec 11 14:43:25 crc kubenswrapper[5050]: I1211 14:43:25.642136 5050 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/27c9aeb8-21e0-465c-b6f5-31eef7f1ec53-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 14:43:26 crc kubenswrapper[5050]: I1211 14:43:26.108758 5050 generic.go:334] "Generic (PLEG): container finished" podID="27c9aeb8-21e0-465c-b6f5-31eef7f1ec53" containerID="591d9ecaca9ab8b0dabc60e700d4357e467f29603218fdd0fae41e59df261eca" exitCode=0 Dec 11 14:43:26 crc kubenswrapper[5050]: I1211 14:43:26.108826 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rj6nv" event={"ID":"27c9aeb8-21e0-465c-b6f5-31eef7f1ec53","Type":"ContainerDied","Data":"591d9ecaca9ab8b0dabc60e700d4357e467f29603218fdd0fae41e59df261eca"} Dec 11 14:43:26 crc kubenswrapper[5050]: I1211 14:43:26.108833 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rj6nv" Dec 11 14:43:26 crc kubenswrapper[5050]: I1211 14:43:26.108865 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rj6nv" event={"ID":"27c9aeb8-21e0-465c-b6f5-31eef7f1ec53","Type":"ContainerDied","Data":"3f13cc38cb1b3582e5908103aa159df60234ee8de5fd117dc5c3848acfeed45f"} Dec 11 14:43:26 crc kubenswrapper[5050]: I1211 14:43:26.108891 5050 scope.go:117] "RemoveContainer" containerID="591d9ecaca9ab8b0dabc60e700d4357e467f29603218fdd0fae41e59df261eca" Dec 11 14:43:26 crc kubenswrapper[5050]: I1211 14:43:26.142719 5050 scope.go:117] "RemoveContainer" containerID="7340c5ba1cbf75b77f23fbe1b016640d17a813b9f0e80c10eeaba5e42777d8fc" Dec 11 14:43:26 crc kubenswrapper[5050]: I1211 14:43:26.146094 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rj6nv"] Dec 11 14:43:26 crc kubenswrapper[5050]: I1211 14:43:26.155100 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-rj6nv"] Dec 11 14:43:26 crc kubenswrapper[5050]: I1211 14:43:26.172347 5050 scope.go:117] "RemoveContainer" containerID="6cd6fc80afbcd88a70f0c284fa6e7bff8867371cde9947e7c6d89d5befaecf17" Dec 11 14:43:26 crc kubenswrapper[5050]: I1211 14:43:26.196651 5050 scope.go:117] "RemoveContainer" containerID="591d9ecaca9ab8b0dabc60e700d4357e467f29603218fdd0fae41e59df261eca" Dec 11 14:43:26 crc kubenswrapper[5050]: E1211 14:43:26.197697 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"591d9ecaca9ab8b0dabc60e700d4357e467f29603218fdd0fae41e59df261eca\": container with ID starting with 591d9ecaca9ab8b0dabc60e700d4357e467f29603218fdd0fae41e59df261eca not found: ID does not exist" containerID="591d9ecaca9ab8b0dabc60e700d4357e467f29603218fdd0fae41e59df261eca" Dec 11 14:43:26 crc kubenswrapper[5050]: I1211 14:43:26.197797 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"591d9ecaca9ab8b0dabc60e700d4357e467f29603218fdd0fae41e59df261eca"} err="failed to get container status \"591d9ecaca9ab8b0dabc60e700d4357e467f29603218fdd0fae41e59df261eca\": rpc error: code = NotFound desc = could not find container \"591d9ecaca9ab8b0dabc60e700d4357e467f29603218fdd0fae41e59df261eca\": container with ID starting with 591d9ecaca9ab8b0dabc60e700d4357e467f29603218fdd0fae41e59df261eca not found: ID does not exist" Dec 11 14:43:26 crc kubenswrapper[5050]: I1211 14:43:26.197855 5050 scope.go:117] "RemoveContainer" containerID="7340c5ba1cbf75b77f23fbe1b016640d17a813b9f0e80c10eeaba5e42777d8fc" Dec 11 14:43:26 crc kubenswrapper[5050]: E1211 14:43:26.198423 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7340c5ba1cbf75b77f23fbe1b016640d17a813b9f0e80c10eeaba5e42777d8fc\": container with ID starting with 7340c5ba1cbf75b77f23fbe1b016640d17a813b9f0e80c10eeaba5e42777d8fc not found: ID does not exist" containerID="7340c5ba1cbf75b77f23fbe1b016640d17a813b9f0e80c10eeaba5e42777d8fc" Dec 11 14:43:26 crc kubenswrapper[5050]: I1211 14:43:26.198489 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7340c5ba1cbf75b77f23fbe1b016640d17a813b9f0e80c10eeaba5e42777d8fc"} err="failed to get container status \"7340c5ba1cbf75b77f23fbe1b016640d17a813b9f0e80c10eeaba5e42777d8fc\": rpc error: code = NotFound desc = could not find 
container \"7340c5ba1cbf75b77f23fbe1b016640d17a813b9f0e80c10eeaba5e42777d8fc\": container with ID starting with 7340c5ba1cbf75b77f23fbe1b016640d17a813b9f0e80c10eeaba5e42777d8fc not found: ID does not exist" Dec 11 14:43:26 crc kubenswrapper[5050]: I1211 14:43:26.198548 5050 scope.go:117] "RemoveContainer" containerID="6cd6fc80afbcd88a70f0c284fa6e7bff8867371cde9947e7c6d89d5befaecf17" Dec 11 14:43:26 crc kubenswrapper[5050]: E1211 14:43:26.199316 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6cd6fc80afbcd88a70f0c284fa6e7bff8867371cde9947e7c6d89d5befaecf17\": container with ID starting with 6cd6fc80afbcd88a70f0c284fa6e7bff8867371cde9947e7c6d89d5befaecf17 not found: ID does not exist" containerID="6cd6fc80afbcd88a70f0c284fa6e7bff8867371cde9947e7c6d89d5befaecf17" Dec 11 14:43:26 crc kubenswrapper[5050]: I1211 14:43:26.199374 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6cd6fc80afbcd88a70f0c284fa6e7bff8867371cde9947e7c6d89d5befaecf17"} err="failed to get container status \"6cd6fc80afbcd88a70f0c284fa6e7bff8867371cde9947e7c6d89d5befaecf17\": rpc error: code = NotFound desc = could not find container \"6cd6fc80afbcd88a70f0c284fa6e7bff8867371cde9947e7c6d89d5befaecf17\": container with ID starting with 6cd6fc80afbcd88a70f0c284fa6e7bff8867371cde9947e7c6d89d5befaecf17 not found: ID does not exist" Dec 11 14:43:27 crc kubenswrapper[5050]: I1211 14:43:27.555343 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="27c9aeb8-21e0-465c-b6f5-31eef7f1ec53" path="/var/lib/kubelet/pods/27c9aeb8-21e0-465c-b6f5-31eef7f1ec53/volumes" Dec 11 14:43:30 crc kubenswrapper[5050]: I1211 14:43:30.547336 5050 scope.go:117] "RemoveContainer" containerID="f99d22e27bac8e0550651090d627b2404c105d7886051c4dcf85cf0c6169d292" Dec 11 14:43:30 crc kubenswrapper[5050]: E1211 14:43:30.547796 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 14:43:45 crc kubenswrapper[5050]: I1211 14:43:45.547719 5050 scope.go:117] "RemoveContainer" containerID="f99d22e27bac8e0550651090d627b2404c105d7886051c4dcf85cf0c6169d292" Dec 11 14:43:45 crc kubenswrapper[5050]: E1211 14:43:45.548463 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 14:43:57 crc kubenswrapper[5050]: I1211 14:43:57.547690 5050 scope.go:117] "RemoveContainer" containerID="f99d22e27bac8e0550651090d627b2404c105d7886051c4dcf85cf0c6169d292" Dec 11 14:43:57 crc kubenswrapper[5050]: E1211 14:43:57.548419 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 14:44:08 crc kubenswrapper[5050]: I1211 14:44:08.546206 5050 scope.go:117] "RemoveContainer" containerID="f99d22e27bac8e0550651090d627b2404c105d7886051c4dcf85cf0c6169d292" Dec 11 14:44:08 crc kubenswrapper[5050]: E1211 14:44:08.547131 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 14:44:12 crc kubenswrapper[5050]: I1211 14:44:12.928102 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-cdt9j"] Dec 11 14:44:12 crc kubenswrapper[5050]: E1211 14:44:12.929705 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27c9aeb8-21e0-465c-b6f5-31eef7f1ec53" containerName="registry-server" Dec 11 14:44:12 crc kubenswrapper[5050]: I1211 14:44:12.929799 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="27c9aeb8-21e0-465c-b6f5-31eef7f1ec53" containerName="registry-server" Dec 11 14:44:12 crc kubenswrapper[5050]: E1211 14:44:12.929879 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27c9aeb8-21e0-465c-b6f5-31eef7f1ec53" containerName="extract-content" Dec 11 14:44:12 crc kubenswrapper[5050]: I1211 14:44:12.929982 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="27c9aeb8-21e0-465c-b6f5-31eef7f1ec53" containerName="extract-content" Dec 11 14:44:12 crc kubenswrapper[5050]: E1211 14:44:12.930093 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27c9aeb8-21e0-465c-b6f5-31eef7f1ec53" containerName="extract-utilities" Dec 11 14:44:12 crc kubenswrapper[5050]: I1211 14:44:12.930178 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="27c9aeb8-21e0-465c-b6f5-31eef7f1ec53" containerName="extract-utilities" Dec 11 14:44:12 crc kubenswrapper[5050]: I1211 14:44:12.930441 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="27c9aeb8-21e0-465c-b6f5-31eef7f1ec53" containerName="registry-server" Dec 11 14:44:12 crc kubenswrapper[5050]: I1211 14:44:12.931586 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-cdt9j" Dec 11 14:44:12 crc kubenswrapper[5050]: I1211 14:44:12.941032 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cdt9j"] Dec 11 14:44:13 crc kubenswrapper[5050]: I1211 14:44:13.060190 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2xfb\" (UniqueName: \"kubernetes.io/projected/e918afd1-ed7d-4f70-9e71-1ab0862e3ea4-kube-api-access-p2xfb\") pod \"certified-operators-cdt9j\" (UID: \"e918afd1-ed7d-4f70-9e71-1ab0862e3ea4\") " pod="openshift-marketplace/certified-operators-cdt9j" Dec 11 14:44:13 crc kubenswrapper[5050]: I1211 14:44:13.060236 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e918afd1-ed7d-4f70-9e71-1ab0862e3ea4-utilities\") pod \"certified-operators-cdt9j\" (UID: \"e918afd1-ed7d-4f70-9e71-1ab0862e3ea4\") " pod="openshift-marketplace/certified-operators-cdt9j" Dec 11 14:44:13 crc kubenswrapper[5050]: I1211 14:44:13.060701 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e918afd1-ed7d-4f70-9e71-1ab0862e3ea4-catalog-content\") pod \"certified-operators-cdt9j\" (UID: \"e918afd1-ed7d-4f70-9e71-1ab0862e3ea4\") " pod="openshift-marketplace/certified-operators-cdt9j" Dec 11 14:44:13 crc kubenswrapper[5050]: I1211 14:44:13.162106 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e918afd1-ed7d-4f70-9e71-1ab0862e3ea4-catalog-content\") pod \"certified-operators-cdt9j\" (UID: \"e918afd1-ed7d-4f70-9e71-1ab0862e3ea4\") " pod="openshift-marketplace/certified-operators-cdt9j" Dec 11 14:44:13 crc kubenswrapper[5050]: I1211 14:44:13.162176 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p2xfb\" (UniqueName: \"kubernetes.io/projected/e918afd1-ed7d-4f70-9e71-1ab0862e3ea4-kube-api-access-p2xfb\") pod \"certified-operators-cdt9j\" (UID: \"e918afd1-ed7d-4f70-9e71-1ab0862e3ea4\") " pod="openshift-marketplace/certified-operators-cdt9j" Dec 11 14:44:13 crc kubenswrapper[5050]: I1211 14:44:13.162202 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e918afd1-ed7d-4f70-9e71-1ab0862e3ea4-utilities\") pod \"certified-operators-cdt9j\" (UID: \"e918afd1-ed7d-4f70-9e71-1ab0862e3ea4\") " pod="openshift-marketplace/certified-operators-cdt9j" Dec 11 14:44:13 crc kubenswrapper[5050]: I1211 14:44:13.162713 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e918afd1-ed7d-4f70-9e71-1ab0862e3ea4-catalog-content\") pod \"certified-operators-cdt9j\" (UID: \"e918afd1-ed7d-4f70-9e71-1ab0862e3ea4\") " pod="openshift-marketplace/certified-operators-cdt9j" Dec 11 14:44:13 crc kubenswrapper[5050]: I1211 14:44:13.162749 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e918afd1-ed7d-4f70-9e71-1ab0862e3ea4-utilities\") pod \"certified-operators-cdt9j\" (UID: \"e918afd1-ed7d-4f70-9e71-1ab0862e3ea4\") " pod="openshift-marketplace/certified-operators-cdt9j" Dec 11 14:44:13 crc kubenswrapper[5050]: I1211 14:44:13.182264 5050 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-p2xfb\" (UniqueName: \"kubernetes.io/projected/e918afd1-ed7d-4f70-9e71-1ab0862e3ea4-kube-api-access-p2xfb\") pod \"certified-operators-cdt9j\" (UID: \"e918afd1-ed7d-4f70-9e71-1ab0862e3ea4\") " pod="openshift-marketplace/certified-operators-cdt9j" Dec 11 14:44:13 crc kubenswrapper[5050]: I1211 14:44:13.297484 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cdt9j" Dec 11 14:44:13 crc kubenswrapper[5050]: I1211 14:44:13.592763 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cdt9j"] Dec 11 14:44:14 crc kubenswrapper[5050]: I1211 14:44:14.477386 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cdt9j" event={"ID":"e918afd1-ed7d-4f70-9e71-1ab0862e3ea4","Type":"ContainerStarted","Data":"d3dc814afba0e94a8a099e905bec9f0c1f205ab47f89e82fbadc6a0f14308623"} Dec 11 14:44:17 crc kubenswrapper[5050]: I1211 14:44:17.499709 5050 generic.go:334] "Generic (PLEG): container finished" podID="e918afd1-ed7d-4f70-9e71-1ab0862e3ea4" containerID="1b5941e48719fc75c4b8aad0557b716573d3e39d8f9a30b85bd73d8ed9f2a60a" exitCode=0 Dec 11 14:44:17 crc kubenswrapper[5050]: I1211 14:44:17.499864 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cdt9j" event={"ID":"e918afd1-ed7d-4f70-9e71-1ab0862e3ea4","Type":"ContainerDied","Data":"1b5941e48719fc75c4b8aad0557b716573d3e39d8f9a30b85bd73d8ed9f2a60a"} Dec 11 14:44:20 crc kubenswrapper[5050]: I1211 14:44:20.538054 5050 generic.go:334] "Generic (PLEG): container finished" podID="e918afd1-ed7d-4f70-9e71-1ab0862e3ea4" containerID="c42090303c51ee159f635907967ac788334a8542ff7bfb812cc661233e650737" exitCode=0 Dec 11 14:44:20 crc kubenswrapper[5050]: I1211 14:44:20.538123 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cdt9j" event={"ID":"e918afd1-ed7d-4f70-9e71-1ab0862e3ea4","Type":"ContainerDied","Data":"c42090303c51ee159f635907967ac788334a8542ff7bfb812cc661233e650737"} Dec 11 14:44:20 crc kubenswrapper[5050]: I1211 14:44:20.545904 5050 scope.go:117] "RemoveContainer" containerID="f99d22e27bac8e0550651090d627b2404c105d7886051c4dcf85cf0c6169d292" Dec 11 14:44:20 crc kubenswrapper[5050]: E1211 14:44:20.546176 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 14:44:22 crc kubenswrapper[5050]: I1211 14:44:22.555288 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cdt9j" event={"ID":"e918afd1-ed7d-4f70-9e71-1ab0862e3ea4","Type":"ContainerStarted","Data":"ba9349c48fc4b05ae91ee46a8583e05226d7c0a486e7c46a328de0528e5d293f"} Dec 11 14:44:22 crc kubenswrapper[5050]: I1211 14:44:22.573560 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-cdt9j" podStartSLOduration=6.249069919 podStartE2EDuration="10.573541066s" podCreationTimestamp="2025-12-11 14:44:12 +0000 UTC" firstStartedPulling="2025-12-11 14:44:17.501517545 +0000 UTC m=+3348.345240131" 
lastFinishedPulling="2025-12-11 14:44:21.825988682 +0000 UTC m=+3352.669711278" observedRunningTime="2025-12-11 14:44:22.571543573 +0000 UTC m=+3353.415266169" watchObservedRunningTime="2025-12-11 14:44:22.573541066 +0000 UTC m=+3353.417263652" Dec 11 14:44:23 crc kubenswrapper[5050]: I1211 14:44:23.298477 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-cdt9j" Dec 11 14:44:23 crc kubenswrapper[5050]: I1211 14:44:23.298523 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-cdt9j" Dec 11 14:44:24 crc kubenswrapper[5050]: I1211 14:44:24.338486 5050 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-cdt9j" podUID="e918afd1-ed7d-4f70-9e71-1ab0862e3ea4" containerName="registry-server" probeResult="failure" output=< Dec 11 14:44:24 crc kubenswrapper[5050]: timeout: failed to connect service ":50051" within 1s Dec 11 14:44:24 crc kubenswrapper[5050]: > Dec 11 14:44:33 crc kubenswrapper[5050]: I1211 14:44:33.366232 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-cdt9j" Dec 11 14:44:33 crc kubenswrapper[5050]: I1211 14:44:33.452988 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-cdt9j" Dec 11 14:44:33 crc kubenswrapper[5050]: I1211 14:44:33.546808 5050 scope.go:117] "RemoveContainer" containerID="f99d22e27bac8e0550651090d627b2404c105d7886051c4dcf85cf0c6169d292" Dec 11 14:44:33 crc kubenswrapper[5050]: E1211 14:44:33.547163 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 14:44:33 crc kubenswrapper[5050]: I1211 14:44:33.602301 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cdt9j"] Dec 11 14:44:34 crc kubenswrapper[5050]: I1211 14:44:34.630201 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-cdt9j" podUID="e918afd1-ed7d-4f70-9e71-1ab0862e3ea4" containerName="registry-server" containerID="cri-o://ba9349c48fc4b05ae91ee46a8583e05226d7c0a486e7c46a328de0528e5d293f" gracePeriod=2 Dec 11 14:44:35 crc kubenswrapper[5050]: I1211 14:44:35.054394 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-cdt9j" Dec 11 14:44:35 crc kubenswrapper[5050]: I1211 14:44:35.180218 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e918afd1-ed7d-4f70-9e71-1ab0862e3ea4-utilities\") pod \"e918afd1-ed7d-4f70-9e71-1ab0862e3ea4\" (UID: \"e918afd1-ed7d-4f70-9e71-1ab0862e3ea4\") " Dec 11 14:44:35 crc kubenswrapper[5050]: I1211 14:44:35.180280 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e918afd1-ed7d-4f70-9e71-1ab0862e3ea4-catalog-content\") pod \"e918afd1-ed7d-4f70-9e71-1ab0862e3ea4\" (UID: \"e918afd1-ed7d-4f70-9e71-1ab0862e3ea4\") " Dec 11 14:44:35 crc kubenswrapper[5050]: I1211 14:44:35.180317 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p2xfb\" (UniqueName: \"kubernetes.io/projected/e918afd1-ed7d-4f70-9e71-1ab0862e3ea4-kube-api-access-p2xfb\") pod \"e918afd1-ed7d-4f70-9e71-1ab0862e3ea4\" (UID: \"e918afd1-ed7d-4f70-9e71-1ab0862e3ea4\") " Dec 11 14:44:35 crc kubenswrapper[5050]: I1211 14:44:35.181373 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e918afd1-ed7d-4f70-9e71-1ab0862e3ea4-utilities" (OuterVolumeSpecName: "utilities") pod "e918afd1-ed7d-4f70-9e71-1ab0862e3ea4" (UID: "e918afd1-ed7d-4f70-9e71-1ab0862e3ea4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 14:44:35 crc kubenswrapper[5050]: I1211 14:44:35.186204 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e918afd1-ed7d-4f70-9e71-1ab0862e3ea4-kube-api-access-p2xfb" (OuterVolumeSpecName: "kube-api-access-p2xfb") pod "e918afd1-ed7d-4f70-9e71-1ab0862e3ea4" (UID: "e918afd1-ed7d-4f70-9e71-1ab0862e3ea4"). InnerVolumeSpecName "kube-api-access-p2xfb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 14:44:35 crc kubenswrapper[5050]: I1211 14:44:35.236619 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e918afd1-ed7d-4f70-9e71-1ab0862e3ea4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e918afd1-ed7d-4f70-9e71-1ab0862e3ea4" (UID: "e918afd1-ed7d-4f70-9e71-1ab0862e3ea4"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 14:44:35 crc kubenswrapper[5050]: I1211 14:44:35.281424 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p2xfb\" (UniqueName: \"kubernetes.io/projected/e918afd1-ed7d-4f70-9e71-1ab0862e3ea4-kube-api-access-p2xfb\") on node \"crc\" DevicePath \"\"" Dec 11 14:44:35 crc kubenswrapper[5050]: I1211 14:44:35.281462 5050 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e918afd1-ed7d-4f70-9e71-1ab0862e3ea4-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 14:44:35 crc kubenswrapper[5050]: I1211 14:44:35.281471 5050 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e918afd1-ed7d-4f70-9e71-1ab0862e3ea4-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 14:44:35 crc kubenswrapper[5050]: I1211 14:44:35.645285 5050 generic.go:334] "Generic (PLEG): container finished" podID="e918afd1-ed7d-4f70-9e71-1ab0862e3ea4" containerID="ba9349c48fc4b05ae91ee46a8583e05226d7c0a486e7c46a328de0528e5d293f" exitCode=0 Dec 11 14:44:35 crc kubenswrapper[5050]: I1211 14:44:35.645329 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cdt9j" event={"ID":"e918afd1-ed7d-4f70-9e71-1ab0862e3ea4","Type":"ContainerDied","Data":"ba9349c48fc4b05ae91ee46a8583e05226d7c0a486e7c46a328de0528e5d293f"} Dec 11 14:44:35 crc kubenswrapper[5050]: I1211 14:44:35.645364 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cdt9j" Dec 11 14:44:35 crc kubenswrapper[5050]: I1211 14:44:35.645377 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cdt9j" event={"ID":"e918afd1-ed7d-4f70-9e71-1ab0862e3ea4","Type":"ContainerDied","Data":"d3dc814afba0e94a8a099e905bec9f0c1f205ab47f89e82fbadc6a0f14308623"} Dec 11 14:44:35 crc kubenswrapper[5050]: I1211 14:44:35.645408 5050 scope.go:117] "RemoveContainer" containerID="ba9349c48fc4b05ae91ee46a8583e05226d7c0a486e7c46a328de0528e5d293f" Dec 11 14:44:35 crc kubenswrapper[5050]: I1211 14:44:35.676543 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cdt9j"] Dec 11 14:44:35 crc kubenswrapper[5050]: I1211 14:44:35.677394 5050 scope.go:117] "RemoveContainer" containerID="c42090303c51ee159f635907967ac788334a8542ff7bfb812cc661233e650737" Dec 11 14:44:35 crc kubenswrapper[5050]: I1211 14:44:35.682747 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-cdt9j"] Dec 11 14:44:35 crc kubenswrapper[5050]: I1211 14:44:35.706031 5050 scope.go:117] "RemoveContainer" containerID="1b5941e48719fc75c4b8aad0557b716573d3e39d8f9a30b85bd73d8ed9f2a60a" Dec 11 14:44:35 crc kubenswrapper[5050]: I1211 14:44:35.722404 5050 scope.go:117] "RemoveContainer" containerID="ba9349c48fc4b05ae91ee46a8583e05226d7c0a486e7c46a328de0528e5d293f" Dec 11 14:44:35 crc kubenswrapper[5050]: E1211 14:44:35.723165 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba9349c48fc4b05ae91ee46a8583e05226d7c0a486e7c46a328de0528e5d293f\": container with ID starting with ba9349c48fc4b05ae91ee46a8583e05226d7c0a486e7c46a328de0528e5d293f not found: ID does not exist" containerID="ba9349c48fc4b05ae91ee46a8583e05226d7c0a486e7c46a328de0528e5d293f" Dec 11 14:44:35 crc kubenswrapper[5050]: I1211 14:44:35.723209 
5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba9349c48fc4b05ae91ee46a8583e05226d7c0a486e7c46a328de0528e5d293f"} err="failed to get container status \"ba9349c48fc4b05ae91ee46a8583e05226d7c0a486e7c46a328de0528e5d293f\": rpc error: code = NotFound desc = could not find container \"ba9349c48fc4b05ae91ee46a8583e05226d7c0a486e7c46a328de0528e5d293f\": container with ID starting with ba9349c48fc4b05ae91ee46a8583e05226d7c0a486e7c46a328de0528e5d293f not found: ID does not exist" Dec 11 14:44:35 crc kubenswrapper[5050]: I1211 14:44:35.723240 5050 scope.go:117] "RemoveContainer" containerID="c42090303c51ee159f635907967ac788334a8542ff7bfb812cc661233e650737" Dec 11 14:44:35 crc kubenswrapper[5050]: E1211 14:44:35.724266 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c42090303c51ee159f635907967ac788334a8542ff7bfb812cc661233e650737\": container with ID starting with c42090303c51ee159f635907967ac788334a8542ff7bfb812cc661233e650737 not found: ID does not exist" containerID="c42090303c51ee159f635907967ac788334a8542ff7bfb812cc661233e650737" Dec 11 14:44:35 crc kubenswrapper[5050]: I1211 14:44:35.724297 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c42090303c51ee159f635907967ac788334a8542ff7bfb812cc661233e650737"} err="failed to get container status \"c42090303c51ee159f635907967ac788334a8542ff7bfb812cc661233e650737\": rpc error: code = NotFound desc = could not find container \"c42090303c51ee159f635907967ac788334a8542ff7bfb812cc661233e650737\": container with ID starting with c42090303c51ee159f635907967ac788334a8542ff7bfb812cc661233e650737 not found: ID does not exist" Dec 11 14:44:35 crc kubenswrapper[5050]: I1211 14:44:35.724316 5050 scope.go:117] "RemoveContainer" containerID="1b5941e48719fc75c4b8aad0557b716573d3e39d8f9a30b85bd73d8ed9f2a60a" Dec 11 14:44:35 crc kubenswrapper[5050]: E1211 14:44:35.724762 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1b5941e48719fc75c4b8aad0557b716573d3e39d8f9a30b85bd73d8ed9f2a60a\": container with ID starting with 1b5941e48719fc75c4b8aad0557b716573d3e39d8f9a30b85bd73d8ed9f2a60a not found: ID does not exist" containerID="1b5941e48719fc75c4b8aad0557b716573d3e39d8f9a30b85bd73d8ed9f2a60a" Dec 11 14:44:35 crc kubenswrapper[5050]: I1211 14:44:35.724793 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b5941e48719fc75c4b8aad0557b716573d3e39d8f9a30b85bd73d8ed9f2a60a"} err="failed to get container status \"1b5941e48719fc75c4b8aad0557b716573d3e39d8f9a30b85bd73d8ed9f2a60a\": rpc error: code = NotFound desc = could not find container \"1b5941e48719fc75c4b8aad0557b716573d3e39d8f9a30b85bd73d8ed9f2a60a\": container with ID starting with 1b5941e48719fc75c4b8aad0557b716573d3e39d8f9a30b85bd73d8ed9f2a60a not found: ID does not exist" Dec 11 14:44:37 crc kubenswrapper[5050]: I1211 14:44:37.554676 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e918afd1-ed7d-4f70-9e71-1ab0862e3ea4" path="/var/lib/kubelet/pods/e918afd1-ed7d-4f70-9e71-1ab0862e3ea4/volumes" Dec 11 14:44:46 crc kubenswrapper[5050]: I1211 14:44:46.546158 5050 scope.go:117] "RemoveContainer" containerID="f99d22e27bac8e0550651090d627b2404c105d7886051c4dcf85cf0c6169d292" Dec 11 14:44:46 crc kubenswrapper[5050]: E1211 14:44:46.546721 5050 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 14:45:00 crc kubenswrapper[5050]: I1211 14:45:00.138217 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29424405-j6477"] Dec 11 14:45:00 crc kubenswrapper[5050]: E1211 14:45:00.139389 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e918afd1-ed7d-4f70-9e71-1ab0862e3ea4" containerName="extract-utilities" Dec 11 14:45:00 crc kubenswrapper[5050]: I1211 14:45:00.139410 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="e918afd1-ed7d-4f70-9e71-1ab0862e3ea4" containerName="extract-utilities" Dec 11 14:45:00 crc kubenswrapper[5050]: E1211 14:45:00.139458 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e918afd1-ed7d-4f70-9e71-1ab0862e3ea4" containerName="extract-content" Dec 11 14:45:00 crc kubenswrapper[5050]: I1211 14:45:00.139467 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="e918afd1-ed7d-4f70-9e71-1ab0862e3ea4" containerName="extract-content" Dec 11 14:45:00 crc kubenswrapper[5050]: E1211 14:45:00.139481 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e918afd1-ed7d-4f70-9e71-1ab0862e3ea4" containerName="registry-server" Dec 11 14:45:00 crc kubenswrapper[5050]: I1211 14:45:00.139490 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="e918afd1-ed7d-4f70-9e71-1ab0862e3ea4" containerName="registry-server" Dec 11 14:45:00 crc kubenswrapper[5050]: I1211 14:45:00.139667 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="e918afd1-ed7d-4f70-9e71-1ab0862e3ea4" containerName="registry-server" Dec 11 14:45:00 crc kubenswrapper[5050]: I1211 14:45:00.140286 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29424405-j6477" Dec 11 14:45:00 crc kubenswrapper[5050]: I1211 14:45:00.145953 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 11 14:45:00 crc kubenswrapper[5050]: I1211 14:45:00.146262 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 11 14:45:00 crc kubenswrapper[5050]: I1211 14:45:00.147267 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29424405-j6477"] Dec 11 14:45:00 crc kubenswrapper[5050]: I1211 14:45:00.178285 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ebc932e0-34cb-4293-a192-a5ec57f96e9c-secret-volume\") pod \"collect-profiles-29424405-j6477\" (UID: \"ebc932e0-34cb-4293-a192-a5ec57f96e9c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424405-j6477" Dec 11 14:45:00 crc kubenswrapper[5050]: I1211 14:45:00.178328 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5hmg\" (UniqueName: \"kubernetes.io/projected/ebc932e0-34cb-4293-a192-a5ec57f96e9c-kube-api-access-r5hmg\") pod \"collect-profiles-29424405-j6477\" (UID: \"ebc932e0-34cb-4293-a192-a5ec57f96e9c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424405-j6477" Dec 11 14:45:00 crc kubenswrapper[5050]: I1211 14:45:00.178354 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ebc932e0-34cb-4293-a192-a5ec57f96e9c-config-volume\") pod \"collect-profiles-29424405-j6477\" (UID: \"ebc932e0-34cb-4293-a192-a5ec57f96e9c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424405-j6477" Dec 11 14:45:00 crc kubenswrapper[5050]: I1211 14:45:00.279536 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ebc932e0-34cb-4293-a192-a5ec57f96e9c-secret-volume\") pod \"collect-profiles-29424405-j6477\" (UID: \"ebc932e0-34cb-4293-a192-a5ec57f96e9c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424405-j6477" Dec 11 14:45:00 crc kubenswrapper[5050]: I1211 14:45:00.279602 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r5hmg\" (UniqueName: \"kubernetes.io/projected/ebc932e0-34cb-4293-a192-a5ec57f96e9c-kube-api-access-r5hmg\") pod \"collect-profiles-29424405-j6477\" (UID: \"ebc932e0-34cb-4293-a192-a5ec57f96e9c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424405-j6477" Dec 11 14:45:00 crc kubenswrapper[5050]: I1211 14:45:00.279628 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ebc932e0-34cb-4293-a192-a5ec57f96e9c-config-volume\") pod \"collect-profiles-29424405-j6477\" (UID: \"ebc932e0-34cb-4293-a192-a5ec57f96e9c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424405-j6477" Dec 11 14:45:00 crc kubenswrapper[5050]: I1211 14:45:00.280553 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ebc932e0-34cb-4293-a192-a5ec57f96e9c-config-volume\") pod 
\"collect-profiles-29424405-j6477\" (UID: \"ebc932e0-34cb-4293-a192-a5ec57f96e9c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424405-j6477" Dec 11 14:45:00 crc kubenswrapper[5050]: I1211 14:45:00.285365 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ebc932e0-34cb-4293-a192-a5ec57f96e9c-secret-volume\") pod \"collect-profiles-29424405-j6477\" (UID: \"ebc932e0-34cb-4293-a192-a5ec57f96e9c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424405-j6477" Dec 11 14:45:00 crc kubenswrapper[5050]: I1211 14:45:00.297921 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r5hmg\" (UniqueName: \"kubernetes.io/projected/ebc932e0-34cb-4293-a192-a5ec57f96e9c-kube-api-access-r5hmg\") pod \"collect-profiles-29424405-j6477\" (UID: \"ebc932e0-34cb-4293-a192-a5ec57f96e9c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424405-j6477" Dec 11 14:45:00 crc kubenswrapper[5050]: I1211 14:45:00.464779 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29424405-j6477" Dec 11 14:45:00 crc kubenswrapper[5050]: I1211 14:45:00.678215 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29424405-j6477"] Dec 11 14:45:00 crc kubenswrapper[5050]: I1211 14:45:00.854572 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29424405-j6477" event={"ID":"ebc932e0-34cb-4293-a192-a5ec57f96e9c","Type":"ContainerStarted","Data":"6f513f7e42b3202a769c72258e9b5f5e8924f39b92d4e32178f47bb5e70b020b"} Dec 11 14:45:01 crc kubenswrapper[5050]: I1211 14:45:01.545689 5050 scope.go:117] "RemoveContainer" containerID="f99d22e27bac8e0550651090d627b2404c105d7886051c4dcf85cf0c6169d292" Dec 11 14:45:01 crc kubenswrapper[5050]: E1211 14:45:01.547294 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 14:45:01 crc kubenswrapper[5050]: I1211 14:45:01.862432 5050 generic.go:334] "Generic (PLEG): container finished" podID="ebc932e0-34cb-4293-a192-a5ec57f96e9c" containerID="8f5a5a163ac44f0f51572872b8b0a4ede464c6b2c468e853bfc87060f5a44b9e" exitCode=0 Dec 11 14:45:01 crc kubenswrapper[5050]: I1211 14:45:01.862485 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29424405-j6477" event={"ID":"ebc932e0-34cb-4293-a192-a5ec57f96e9c","Type":"ContainerDied","Data":"8f5a5a163ac44f0f51572872b8b0a4ede464c6b2c468e853bfc87060f5a44b9e"} Dec 11 14:45:03 crc kubenswrapper[5050]: I1211 14:45:03.130526 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29424405-j6477" Dec 11 14:45:03 crc kubenswrapper[5050]: I1211 14:45:03.223231 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r5hmg\" (UniqueName: \"kubernetes.io/projected/ebc932e0-34cb-4293-a192-a5ec57f96e9c-kube-api-access-r5hmg\") pod \"ebc932e0-34cb-4293-a192-a5ec57f96e9c\" (UID: \"ebc932e0-34cb-4293-a192-a5ec57f96e9c\") " Dec 11 14:45:03 crc kubenswrapper[5050]: I1211 14:45:03.223310 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ebc932e0-34cb-4293-a192-a5ec57f96e9c-secret-volume\") pod \"ebc932e0-34cb-4293-a192-a5ec57f96e9c\" (UID: \"ebc932e0-34cb-4293-a192-a5ec57f96e9c\") " Dec 11 14:45:03 crc kubenswrapper[5050]: I1211 14:45:03.223328 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ebc932e0-34cb-4293-a192-a5ec57f96e9c-config-volume\") pod \"ebc932e0-34cb-4293-a192-a5ec57f96e9c\" (UID: \"ebc932e0-34cb-4293-a192-a5ec57f96e9c\") " Dec 11 14:45:03 crc kubenswrapper[5050]: I1211 14:45:03.223928 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ebc932e0-34cb-4293-a192-a5ec57f96e9c-config-volume" (OuterVolumeSpecName: "config-volume") pod "ebc932e0-34cb-4293-a192-a5ec57f96e9c" (UID: "ebc932e0-34cb-4293-a192-a5ec57f96e9c"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 14:45:03 crc kubenswrapper[5050]: I1211 14:45:03.228901 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ebc932e0-34cb-4293-a192-a5ec57f96e9c-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "ebc932e0-34cb-4293-a192-a5ec57f96e9c" (UID: "ebc932e0-34cb-4293-a192-a5ec57f96e9c"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 14:45:03 crc kubenswrapper[5050]: I1211 14:45:03.229591 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ebc932e0-34cb-4293-a192-a5ec57f96e9c-kube-api-access-r5hmg" (OuterVolumeSpecName: "kube-api-access-r5hmg") pod "ebc932e0-34cb-4293-a192-a5ec57f96e9c" (UID: "ebc932e0-34cb-4293-a192-a5ec57f96e9c"). InnerVolumeSpecName "kube-api-access-r5hmg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 14:45:03 crc kubenswrapper[5050]: I1211 14:45:03.324311 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r5hmg\" (UniqueName: \"kubernetes.io/projected/ebc932e0-34cb-4293-a192-a5ec57f96e9c-kube-api-access-r5hmg\") on node \"crc\" DevicePath \"\"" Dec 11 14:45:03 crc kubenswrapper[5050]: I1211 14:45:03.324352 5050 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ebc932e0-34cb-4293-a192-a5ec57f96e9c-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 11 14:45:03 crc kubenswrapper[5050]: I1211 14:45:03.324365 5050 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ebc932e0-34cb-4293-a192-a5ec57f96e9c-config-volume\") on node \"crc\" DevicePath \"\"" Dec 11 14:45:03 crc kubenswrapper[5050]: I1211 14:45:03.880331 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29424405-j6477" event={"ID":"ebc932e0-34cb-4293-a192-a5ec57f96e9c","Type":"ContainerDied","Data":"6f513f7e42b3202a769c72258e9b5f5e8924f39b92d4e32178f47bb5e70b020b"} Dec 11 14:45:03 crc kubenswrapper[5050]: I1211 14:45:03.880375 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29424405-j6477" Dec 11 14:45:03 crc kubenswrapper[5050]: I1211 14:45:03.880385 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6f513f7e42b3202a769c72258e9b5f5e8924f39b92d4e32178f47bb5e70b020b" Dec 11 14:45:04 crc kubenswrapper[5050]: I1211 14:45:04.199429 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29424360-8hl85"] Dec 11 14:45:04 crc kubenswrapper[5050]: I1211 14:45:04.206512 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29424360-8hl85"] Dec 11 14:45:05 crc kubenswrapper[5050]: I1211 14:45:05.560141 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e93fea-aeee-42f1-8cc5-204a7365d883" path="/var/lib/kubelet/pods/e7e93fea-aeee-42f1-8cc5-204a7365d883/volumes" Dec 11 14:45:12 crc kubenswrapper[5050]: I1211 14:45:12.546748 5050 scope.go:117] "RemoveContainer" containerID="f99d22e27bac8e0550651090d627b2404c105d7886051c4dcf85cf0c6169d292" Dec 11 14:45:12 crc kubenswrapper[5050]: E1211 14:45:12.547540 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 14:45:26 crc kubenswrapper[5050]: I1211 14:45:26.546433 5050 scope.go:117] "RemoveContainer" containerID="f99d22e27bac8e0550651090d627b2404c105d7886051c4dcf85cf0c6169d292" Dec 11 14:45:26 crc kubenswrapper[5050]: E1211 14:45:26.547343 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 14:45:40 crc kubenswrapper[5050]: I1211 14:45:40.546593 5050 scope.go:117] "RemoveContainer" containerID="f99d22e27bac8e0550651090d627b2404c105d7886051c4dcf85cf0c6169d292" Dec 11 14:45:40 crc kubenswrapper[5050]: E1211 14:45:40.547599 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 14:45:44 crc kubenswrapper[5050]: I1211 14:45:44.114627 5050 scope.go:117] "RemoveContainer" containerID="778610a6f99edc88e656495bd5e22549a8285191df977232d12b751af22a5717" Dec 11 14:45:52 crc kubenswrapper[5050]: I1211 14:45:52.723267 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-n7crp" podUID="85683dfb-37fb-4301-8c7a-fbb7453b303d" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.75:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 14:45:52 crc kubenswrapper[5050]: I1211 14:45:52.723581 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-n7crp" podUID="85683dfb-37fb-4301-8c7a-fbb7453b303d" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.75:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 14:45:52 crc kubenswrapper[5050]: I1211 14:45:52.723780 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/designate-operator-controller-manager-697fb699cf-sqjhh" podUID="0aa7657b-dbca-4b2b-ac62-7000681a918a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.76:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 14:45:52 crc kubenswrapper[5050]: I1211 14:45:52.723810 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/designate-operator-controller-manager-697fb699cf-sqjhh" podUID="0aa7657b-dbca-4b2b-ac62-7000681a918a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.76:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 14:45:53 crc kubenswrapper[5050]: I1211 14:45:53.545631 5050 scope.go:117] "RemoveContainer" containerID="f99d22e27bac8e0550651090d627b2404c105d7886051c4dcf85cf0c6169d292" Dec 11 14:45:54 crc kubenswrapper[5050]: I1211 14:45:54.262745 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" event={"ID":"7e849b2e-7cd7-4e49-acd2-deab139c699a","Type":"ContainerStarted","Data":"03b9612a20533c6904ec86e9cfe67fe1ca83ed181da522a7b829c61efaf15d01"} Dec 11 14:46:08 crc kubenswrapper[5050]: I1211 14:46:08.160897 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-2rlwp"] Dec 11 14:46:08 crc kubenswrapper[5050]: E1211 14:46:08.161720 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebc932e0-34cb-4293-a192-a5ec57f96e9c" containerName="collect-profiles" Dec 11 14:46:08 crc 
kubenswrapper[5050]: I1211 14:46:08.161734 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebc932e0-34cb-4293-a192-a5ec57f96e9c" containerName="collect-profiles" Dec 11 14:46:08 crc kubenswrapper[5050]: I1211 14:46:08.161881 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="ebc932e0-34cb-4293-a192-a5ec57f96e9c" containerName="collect-profiles" Dec 11 14:46:08 crc kubenswrapper[5050]: I1211 14:46:08.162951 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2rlwp" Dec 11 14:46:08 crc kubenswrapper[5050]: I1211 14:46:08.168985 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2rlwp"] Dec 11 14:46:08 crc kubenswrapper[5050]: I1211 14:46:08.259751 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0dc9994d-6277-4318-8b42-0d6b00969551-catalog-content\") pod \"community-operators-2rlwp\" (UID: \"0dc9994d-6277-4318-8b42-0d6b00969551\") " pod="openshift-marketplace/community-operators-2rlwp" Dec 11 14:46:08 crc kubenswrapper[5050]: I1211 14:46:08.259803 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0dc9994d-6277-4318-8b42-0d6b00969551-utilities\") pod \"community-operators-2rlwp\" (UID: \"0dc9994d-6277-4318-8b42-0d6b00969551\") " pod="openshift-marketplace/community-operators-2rlwp" Dec 11 14:46:08 crc kubenswrapper[5050]: I1211 14:46:08.259940 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k74c9\" (UniqueName: \"kubernetes.io/projected/0dc9994d-6277-4318-8b42-0d6b00969551-kube-api-access-k74c9\") pod \"community-operators-2rlwp\" (UID: \"0dc9994d-6277-4318-8b42-0d6b00969551\") " pod="openshift-marketplace/community-operators-2rlwp" Dec 11 14:46:08 crc kubenswrapper[5050]: I1211 14:46:08.361725 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k74c9\" (UniqueName: \"kubernetes.io/projected/0dc9994d-6277-4318-8b42-0d6b00969551-kube-api-access-k74c9\") pod \"community-operators-2rlwp\" (UID: \"0dc9994d-6277-4318-8b42-0d6b00969551\") " pod="openshift-marketplace/community-operators-2rlwp" Dec 11 14:46:08 crc kubenswrapper[5050]: I1211 14:46:08.361809 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0dc9994d-6277-4318-8b42-0d6b00969551-catalog-content\") pod \"community-operators-2rlwp\" (UID: \"0dc9994d-6277-4318-8b42-0d6b00969551\") " pod="openshift-marketplace/community-operators-2rlwp" Dec 11 14:46:08 crc kubenswrapper[5050]: I1211 14:46:08.361842 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0dc9994d-6277-4318-8b42-0d6b00969551-utilities\") pod \"community-operators-2rlwp\" (UID: \"0dc9994d-6277-4318-8b42-0d6b00969551\") " pod="openshift-marketplace/community-operators-2rlwp" Dec 11 14:46:08 crc kubenswrapper[5050]: I1211 14:46:08.362311 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0dc9994d-6277-4318-8b42-0d6b00969551-catalog-content\") pod \"community-operators-2rlwp\" (UID: \"0dc9994d-6277-4318-8b42-0d6b00969551\") " 
pod="openshift-marketplace/community-operators-2rlwp" Dec 11 14:46:08 crc kubenswrapper[5050]: I1211 14:46:08.362496 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0dc9994d-6277-4318-8b42-0d6b00969551-utilities\") pod \"community-operators-2rlwp\" (UID: \"0dc9994d-6277-4318-8b42-0d6b00969551\") " pod="openshift-marketplace/community-operators-2rlwp" Dec 11 14:46:08 crc kubenswrapper[5050]: I1211 14:46:08.386157 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k74c9\" (UniqueName: \"kubernetes.io/projected/0dc9994d-6277-4318-8b42-0d6b00969551-kube-api-access-k74c9\") pod \"community-operators-2rlwp\" (UID: \"0dc9994d-6277-4318-8b42-0d6b00969551\") " pod="openshift-marketplace/community-operators-2rlwp" Dec 11 14:46:08 crc kubenswrapper[5050]: I1211 14:46:08.479291 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2rlwp" Dec 11 14:46:08 crc kubenswrapper[5050]: I1211 14:46:08.758214 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2rlwp"] Dec 11 14:46:09 crc kubenswrapper[5050]: I1211 14:46:09.374145 5050 generic.go:334] "Generic (PLEG): container finished" podID="0dc9994d-6277-4318-8b42-0d6b00969551" containerID="cc41c5b5082674e6ddb8787306de1312e4d9a019fc8ace15b0414a27bd00e5fa" exitCode=0 Dec 11 14:46:09 crc kubenswrapper[5050]: I1211 14:46:09.374202 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2rlwp" event={"ID":"0dc9994d-6277-4318-8b42-0d6b00969551","Type":"ContainerDied","Data":"cc41c5b5082674e6ddb8787306de1312e4d9a019fc8ace15b0414a27bd00e5fa"} Dec 11 14:46:09 crc kubenswrapper[5050]: I1211 14:46:09.374235 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2rlwp" event={"ID":"0dc9994d-6277-4318-8b42-0d6b00969551","Type":"ContainerStarted","Data":"89aed0651ca8d46dbdcc7303406df24246c2cd45610190bd1cc96a96ef474d91"} Dec 11 14:46:10 crc kubenswrapper[5050]: I1211 14:46:10.385229 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2rlwp" event={"ID":"0dc9994d-6277-4318-8b42-0d6b00969551","Type":"ContainerStarted","Data":"99b929bf9431f7867452252620fc2bd1bc54ea0016fabf89d7c7272a0480afa9"} Dec 11 14:46:11 crc kubenswrapper[5050]: I1211 14:46:11.394269 5050 generic.go:334] "Generic (PLEG): container finished" podID="0dc9994d-6277-4318-8b42-0d6b00969551" containerID="99b929bf9431f7867452252620fc2bd1bc54ea0016fabf89d7c7272a0480afa9" exitCode=0 Dec 11 14:46:11 crc kubenswrapper[5050]: I1211 14:46:11.394361 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2rlwp" event={"ID":"0dc9994d-6277-4318-8b42-0d6b00969551","Type":"ContainerDied","Data":"99b929bf9431f7867452252620fc2bd1bc54ea0016fabf89d7c7272a0480afa9"} Dec 11 14:46:12 crc kubenswrapper[5050]: I1211 14:46:12.405086 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2rlwp" event={"ID":"0dc9994d-6277-4318-8b42-0d6b00969551","Type":"ContainerStarted","Data":"1d26539d7630885cee516bdce16fd40229d71e6efb33ba1e3f203eeb28276c26"} Dec 11 14:46:12 crc kubenswrapper[5050]: I1211 14:46:12.437549 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-2rlwp" podStartSLOduration=1.954338118 
podStartE2EDuration="4.437504222s" podCreationTimestamp="2025-12-11 14:46:08 +0000 UTC" firstStartedPulling="2025-12-11 14:46:09.376407768 +0000 UTC m=+3460.220130354" lastFinishedPulling="2025-12-11 14:46:11.859573882 +0000 UTC m=+3462.703296458" observedRunningTime="2025-12-11 14:46:12.422370295 +0000 UTC m=+3463.266092921" watchObservedRunningTime="2025-12-11 14:46:12.437504222 +0000 UTC m=+3463.281226858" Dec 11 14:46:18 crc kubenswrapper[5050]: I1211 14:46:18.479915 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-2rlwp" Dec 11 14:46:18 crc kubenswrapper[5050]: I1211 14:46:18.480392 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-2rlwp" Dec 11 14:46:18 crc kubenswrapper[5050]: I1211 14:46:18.530786 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-2rlwp" Dec 11 14:46:19 crc kubenswrapper[5050]: I1211 14:46:19.501440 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-2rlwp" Dec 11 14:46:19 crc kubenswrapper[5050]: I1211 14:46:19.557263 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2rlwp"] Dec 11 14:46:21 crc kubenswrapper[5050]: I1211 14:46:21.473251 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-2rlwp" podUID="0dc9994d-6277-4318-8b42-0d6b00969551" containerName="registry-server" containerID="cri-o://1d26539d7630885cee516bdce16fd40229d71e6efb33ba1e3f203eeb28276c26" gracePeriod=2 Dec 11 14:46:25 crc kubenswrapper[5050]: I1211 14:46:25.510584 5050 generic.go:334] "Generic (PLEG): container finished" podID="0dc9994d-6277-4318-8b42-0d6b00969551" containerID="1d26539d7630885cee516bdce16fd40229d71e6efb33ba1e3f203eeb28276c26" exitCode=0 Dec 11 14:46:25 crc kubenswrapper[5050]: I1211 14:46:25.510695 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2rlwp" event={"ID":"0dc9994d-6277-4318-8b42-0d6b00969551","Type":"ContainerDied","Data":"1d26539d7630885cee516bdce16fd40229d71e6efb33ba1e3f203eeb28276c26"} Dec 11 14:46:26 crc kubenswrapper[5050]: I1211 14:46:26.075661 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2rlwp" Dec 11 14:46:26 crc kubenswrapper[5050]: I1211 14:46:26.262310 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k74c9\" (UniqueName: \"kubernetes.io/projected/0dc9994d-6277-4318-8b42-0d6b00969551-kube-api-access-k74c9\") pod \"0dc9994d-6277-4318-8b42-0d6b00969551\" (UID: \"0dc9994d-6277-4318-8b42-0d6b00969551\") " Dec 11 14:46:26 crc kubenswrapper[5050]: I1211 14:46:26.262676 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0dc9994d-6277-4318-8b42-0d6b00969551-catalog-content\") pod \"0dc9994d-6277-4318-8b42-0d6b00969551\" (UID: \"0dc9994d-6277-4318-8b42-0d6b00969551\") " Dec 11 14:46:26 crc kubenswrapper[5050]: I1211 14:46:26.262881 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0dc9994d-6277-4318-8b42-0d6b00969551-utilities\") pod \"0dc9994d-6277-4318-8b42-0d6b00969551\" (UID: \"0dc9994d-6277-4318-8b42-0d6b00969551\") " Dec 11 14:46:26 crc kubenswrapper[5050]: I1211 14:46:26.263575 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0dc9994d-6277-4318-8b42-0d6b00969551-utilities" (OuterVolumeSpecName: "utilities") pod "0dc9994d-6277-4318-8b42-0d6b00969551" (UID: "0dc9994d-6277-4318-8b42-0d6b00969551"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 14:46:26 crc kubenswrapper[5050]: I1211 14:46:26.268168 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0dc9994d-6277-4318-8b42-0d6b00969551-kube-api-access-k74c9" (OuterVolumeSpecName: "kube-api-access-k74c9") pod "0dc9994d-6277-4318-8b42-0d6b00969551" (UID: "0dc9994d-6277-4318-8b42-0d6b00969551"). InnerVolumeSpecName "kube-api-access-k74c9". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 14:46:26 crc kubenswrapper[5050]: I1211 14:46:26.325756 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0dc9994d-6277-4318-8b42-0d6b00969551-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0dc9994d-6277-4318-8b42-0d6b00969551" (UID: "0dc9994d-6277-4318-8b42-0d6b00969551"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 14:46:26 crc kubenswrapper[5050]: I1211 14:46:26.364345 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k74c9\" (UniqueName: \"kubernetes.io/projected/0dc9994d-6277-4318-8b42-0d6b00969551-kube-api-access-k74c9\") on node \"crc\" DevicePath \"\"" Dec 11 14:46:26 crc kubenswrapper[5050]: I1211 14:46:26.364382 5050 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0dc9994d-6277-4318-8b42-0d6b00969551-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 14:46:26 crc kubenswrapper[5050]: I1211 14:46:26.364392 5050 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0dc9994d-6277-4318-8b42-0d6b00969551-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 14:46:26 crc kubenswrapper[5050]: I1211 14:46:26.523359 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2rlwp" event={"ID":"0dc9994d-6277-4318-8b42-0d6b00969551","Type":"ContainerDied","Data":"89aed0651ca8d46dbdcc7303406df24246c2cd45610190bd1cc96a96ef474d91"} Dec 11 14:46:26 crc kubenswrapper[5050]: I1211 14:46:26.523408 5050 scope.go:117] "RemoveContainer" containerID="1d26539d7630885cee516bdce16fd40229d71e6efb33ba1e3f203eeb28276c26" Dec 11 14:46:26 crc kubenswrapper[5050]: I1211 14:46:26.523471 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2rlwp" Dec 11 14:46:26 crc kubenswrapper[5050]: I1211 14:46:26.542456 5050 scope.go:117] "RemoveContainer" containerID="99b929bf9431f7867452252620fc2bd1bc54ea0016fabf89d7c7272a0480afa9" Dec 11 14:46:26 crc kubenswrapper[5050]: I1211 14:46:26.588574 5050 scope.go:117] "RemoveContainer" containerID="cc41c5b5082674e6ddb8787306de1312e4d9a019fc8ace15b0414a27bd00e5fa" Dec 11 14:46:26 crc kubenswrapper[5050]: I1211 14:46:26.592859 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2rlwp"] Dec 11 14:46:26 crc kubenswrapper[5050]: I1211 14:46:26.600365 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-2rlwp"] Dec 11 14:46:27 crc kubenswrapper[5050]: I1211 14:46:27.562477 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0dc9994d-6277-4318-8b42-0d6b00969551" path="/var/lib/kubelet/pods/0dc9994d-6277-4318-8b42-0d6b00969551/volumes" Dec 11 14:48:10 crc kubenswrapper[5050]: I1211 14:48:10.797000 5050 patch_prober.go:28] interesting pod/machine-config-daemon-wcb2s container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 14:48:10 crc kubenswrapper[5050]: I1211 14:48:10.797688 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 14:48:40 crc kubenswrapper[5050]: I1211 14:48:40.797129 5050 patch_prober.go:28] interesting pod/machine-config-daemon-wcb2s container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": 
dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 14:48:40 crc kubenswrapper[5050]: I1211 14:48:40.797887 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 14:49:10 crc kubenswrapper[5050]: I1211 14:49:10.796332 5050 patch_prober.go:28] interesting pod/machine-config-daemon-wcb2s container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 14:49:10 crc kubenswrapper[5050]: I1211 14:49:10.796828 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 14:49:10 crc kubenswrapper[5050]: I1211 14:49:10.796870 5050 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" Dec 11 14:49:10 crc kubenswrapper[5050]: I1211 14:49:10.797484 5050 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"03b9612a20533c6904ec86e9cfe67fe1ca83ed181da522a7b829c61efaf15d01"} pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 11 14:49:10 crc kubenswrapper[5050]: I1211 14:49:10.797537 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" containerName="machine-config-daemon" containerID="cri-o://03b9612a20533c6904ec86e9cfe67fe1ca83ed181da522a7b829c61efaf15d01" gracePeriod=600 Dec 11 14:49:11 crc kubenswrapper[5050]: I1211 14:49:11.787780 5050 generic.go:334] "Generic (PLEG): container finished" podID="7e849b2e-7cd7-4e49-acd2-deab139c699a" containerID="03b9612a20533c6904ec86e9cfe67fe1ca83ed181da522a7b829c61efaf15d01" exitCode=0 Dec 11 14:49:11 crc kubenswrapper[5050]: I1211 14:49:11.787870 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" event={"ID":"7e849b2e-7cd7-4e49-acd2-deab139c699a","Type":"ContainerDied","Data":"03b9612a20533c6904ec86e9cfe67fe1ca83ed181da522a7b829c61efaf15d01"} Dec 11 14:49:11 crc kubenswrapper[5050]: I1211 14:49:11.788504 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" event={"ID":"7e849b2e-7cd7-4e49-acd2-deab139c699a","Type":"ContainerStarted","Data":"592a05928c3cabea3545c1e1092cb9126fb731cd50a3674aaab04017d500886d"} Dec 11 14:49:11 crc kubenswrapper[5050]: I1211 14:49:11.788525 5050 scope.go:117] "RemoveContainer" containerID="f99d22e27bac8e0550651090d627b2404c105d7886051c4dcf85cf0c6169d292" Dec 11 14:51:40 crc kubenswrapper[5050]: I1211 14:51:40.796883 5050 patch_prober.go:28] interesting pod/machine-config-daemon-wcb2s container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 14:51:40 crc kubenswrapper[5050]: I1211 14:51:40.797484 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 14:52:10 crc kubenswrapper[5050]: I1211 14:52:10.796762 5050 patch_prober.go:28] interesting pod/machine-config-daemon-wcb2s container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 14:52:10 crc kubenswrapper[5050]: I1211 14:52:10.797319 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 14:52:37 crc kubenswrapper[5050]: I1211 14:52:37.098985 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-rtrd8"] Dec 11 14:52:37 crc kubenswrapper[5050]: E1211 14:52:37.099959 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0dc9994d-6277-4318-8b42-0d6b00969551" containerName="registry-server" Dec 11 14:52:37 crc kubenswrapper[5050]: I1211 14:52:37.099977 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="0dc9994d-6277-4318-8b42-0d6b00969551" containerName="registry-server" Dec 11 14:52:37 crc kubenswrapper[5050]: E1211 14:52:37.099995 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0dc9994d-6277-4318-8b42-0d6b00969551" containerName="extract-content" Dec 11 14:52:37 crc kubenswrapper[5050]: I1211 14:52:37.100034 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="0dc9994d-6277-4318-8b42-0d6b00969551" containerName="extract-content" Dec 11 14:52:37 crc kubenswrapper[5050]: E1211 14:52:37.100055 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0dc9994d-6277-4318-8b42-0d6b00969551" containerName="extract-utilities" Dec 11 14:52:37 crc kubenswrapper[5050]: I1211 14:52:37.100065 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="0dc9994d-6277-4318-8b42-0d6b00969551" containerName="extract-utilities" Dec 11 14:52:37 crc kubenswrapper[5050]: I1211 14:52:37.100249 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="0dc9994d-6277-4318-8b42-0d6b00969551" containerName="registry-server" Dec 11 14:52:37 crc kubenswrapper[5050]: I1211 14:52:37.101598 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-rtrd8" Dec 11 14:52:37 crc kubenswrapper[5050]: I1211 14:52:37.112697 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rtrd8"] Dec 11 14:52:37 crc kubenswrapper[5050]: I1211 14:52:37.218372 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/00a39a14-1cf7-42df-bd52-a5ea96862a4d-catalog-content\") pod \"redhat-operators-rtrd8\" (UID: \"00a39a14-1cf7-42df-bd52-a5ea96862a4d\") " pod="openshift-marketplace/redhat-operators-rtrd8" Dec 11 14:52:37 crc kubenswrapper[5050]: I1211 14:52:37.218492 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l5fvt\" (UniqueName: \"kubernetes.io/projected/00a39a14-1cf7-42df-bd52-a5ea96862a4d-kube-api-access-l5fvt\") pod \"redhat-operators-rtrd8\" (UID: \"00a39a14-1cf7-42df-bd52-a5ea96862a4d\") " pod="openshift-marketplace/redhat-operators-rtrd8" Dec 11 14:52:37 crc kubenswrapper[5050]: I1211 14:52:37.218517 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/00a39a14-1cf7-42df-bd52-a5ea96862a4d-utilities\") pod \"redhat-operators-rtrd8\" (UID: \"00a39a14-1cf7-42df-bd52-a5ea96862a4d\") " pod="openshift-marketplace/redhat-operators-rtrd8" Dec 11 14:52:37 crc kubenswrapper[5050]: I1211 14:52:37.319517 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/00a39a14-1cf7-42df-bd52-a5ea96862a4d-catalog-content\") pod \"redhat-operators-rtrd8\" (UID: \"00a39a14-1cf7-42df-bd52-a5ea96862a4d\") " pod="openshift-marketplace/redhat-operators-rtrd8" Dec 11 14:52:37 crc kubenswrapper[5050]: I1211 14:52:37.319645 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l5fvt\" (UniqueName: \"kubernetes.io/projected/00a39a14-1cf7-42df-bd52-a5ea96862a4d-kube-api-access-l5fvt\") pod \"redhat-operators-rtrd8\" (UID: \"00a39a14-1cf7-42df-bd52-a5ea96862a4d\") " pod="openshift-marketplace/redhat-operators-rtrd8" Dec 11 14:52:37 crc kubenswrapper[5050]: I1211 14:52:37.319676 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/00a39a14-1cf7-42df-bd52-a5ea96862a4d-utilities\") pod \"redhat-operators-rtrd8\" (UID: \"00a39a14-1cf7-42df-bd52-a5ea96862a4d\") " pod="openshift-marketplace/redhat-operators-rtrd8" Dec 11 14:52:37 crc kubenswrapper[5050]: I1211 14:52:37.320247 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/00a39a14-1cf7-42df-bd52-a5ea96862a4d-utilities\") pod \"redhat-operators-rtrd8\" (UID: \"00a39a14-1cf7-42df-bd52-a5ea96862a4d\") " pod="openshift-marketplace/redhat-operators-rtrd8" Dec 11 14:52:37 crc kubenswrapper[5050]: I1211 14:52:37.320529 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/00a39a14-1cf7-42df-bd52-a5ea96862a4d-catalog-content\") pod \"redhat-operators-rtrd8\" (UID: \"00a39a14-1cf7-42df-bd52-a5ea96862a4d\") " pod="openshift-marketplace/redhat-operators-rtrd8" Dec 11 14:52:37 crc kubenswrapper[5050]: I1211 14:52:37.345235 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-l5fvt\" (UniqueName: \"kubernetes.io/projected/00a39a14-1cf7-42df-bd52-a5ea96862a4d-kube-api-access-l5fvt\") pod \"redhat-operators-rtrd8\" (UID: \"00a39a14-1cf7-42df-bd52-a5ea96862a4d\") " pod="openshift-marketplace/redhat-operators-rtrd8" Dec 11 14:52:37 crc kubenswrapper[5050]: I1211 14:52:37.452767 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rtrd8" Dec 11 14:52:37 crc kubenswrapper[5050]: I1211 14:52:37.881291 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rtrd8"] Dec 11 14:52:38 crc kubenswrapper[5050]: I1211 14:52:38.358439 5050 generic.go:334] "Generic (PLEG): container finished" podID="00a39a14-1cf7-42df-bd52-a5ea96862a4d" containerID="0877500d6934da460b1dd2f08b806dab1330d962c56c3be15a2183aedb365e90" exitCode=0 Dec 11 14:52:38 crc kubenswrapper[5050]: I1211 14:52:38.358491 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rtrd8" event={"ID":"00a39a14-1cf7-42df-bd52-a5ea96862a4d","Type":"ContainerDied","Data":"0877500d6934da460b1dd2f08b806dab1330d962c56c3be15a2183aedb365e90"} Dec 11 14:52:38 crc kubenswrapper[5050]: I1211 14:52:38.358523 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rtrd8" event={"ID":"00a39a14-1cf7-42df-bd52-a5ea96862a4d","Type":"ContainerStarted","Data":"cb6a8a2714c01b82f074d6f15ef05786c81a398216f71ea88501b0039904d0a8"} Dec 11 14:52:38 crc kubenswrapper[5050]: I1211 14:52:38.360115 5050 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 11 14:52:39 crc kubenswrapper[5050]: I1211 14:52:39.365753 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rtrd8" event={"ID":"00a39a14-1cf7-42df-bd52-a5ea96862a4d","Type":"ContainerStarted","Data":"aeb959151787476e6dc2741022bdc0ea1e1137f9c30a464090a80e306b1e9228"} Dec 11 14:52:40 crc kubenswrapper[5050]: I1211 14:52:40.376524 5050 generic.go:334] "Generic (PLEG): container finished" podID="00a39a14-1cf7-42df-bd52-a5ea96862a4d" containerID="aeb959151787476e6dc2741022bdc0ea1e1137f9c30a464090a80e306b1e9228" exitCode=0 Dec 11 14:52:40 crc kubenswrapper[5050]: I1211 14:52:40.376579 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rtrd8" event={"ID":"00a39a14-1cf7-42df-bd52-a5ea96862a4d","Type":"ContainerDied","Data":"aeb959151787476e6dc2741022bdc0ea1e1137f9c30a464090a80e306b1e9228"} Dec 11 14:52:40 crc kubenswrapper[5050]: I1211 14:52:40.797039 5050 patch_prober.go:28] interesting pod/machine-config-daemon-wcb2s container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 14:52:40 crc kubenswrapper[5050]: I1211 14:52:40.797418 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 14:52:40 crc kubenswrapper[5050]: I1211 14:52:40.797465 5050 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" Dec 11 14:52:40 crc 
kubenswrapper[5050]: I1211 14:52:40.798134 5050 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"592a05928c3cabea3545c1e1092cb9126fb731cd50a3674aaab04017d500886d"} pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 11 14:52:40 crc kubenswrapper[5050]: I1211 14:52:40.798198 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" containerName="machine-config-daemon" containerID="cri-o://592a05928c3cabea3545c1e1092cb9126fb731cd50a3674aaab04017d500886d" gracePeriod=600 Dec 11 14:52:40 crc kubenswrapper[5050]: E1211 14:52:40.918933 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 14:52:41 crc kubenswrapper[5050]: I1211 14:52:41.387703 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rtrd8" event={"ID":"00a39a14-1cf7-42df-bd52-a5ea96862a4d","Type":"ContainerStarted","Data":"917d316a1168eff7f044951dd1a7c813563777791025b9aeede5b0b63385067c"} Dec 11 14:52:41 crc kubenswrapper[5050]: I1211 14:52:41.390889 5050 generic.go:334] "Generic (PLEG): container finished" podID="7e849b2e-7cd7-4e49-acd2-deab139c699a" containerID="592a05928c3cabea3545c1e1092cb9126fb731cd50a3674aaab04017d500886d" exitCode=0 Dec 11 14:52:41 crc kubenswrapper[5050]: I1211 14:52:41.390966 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" event={"ID":"7e849b2e-7cd7-4e49-acd2-deab139c699a","Type":"ContainerDied","Data":"592a05928c3cabea3545c1e1092cb9126fb731cd50a3674aaab04017d500886d"} Dec 11 14:52:41 crc kubenswrapper[5050]: I1211 14:52:41.391047 5050 scope.go:117] "RemoveContainer" containerID="03b9612a20533c6904ec86e9cfe67fe1ca83ed181da522a7b829c61efaf15d01" Dec 11 14:52:41 crc kubenswrapper[5050]: I1211 14:52:41.391957 5050 scope.go:117] "RemoveContainer" containerID="592a05928c3cabea3545c1e1092cb9126fb731cd50a3674aaab04017d500886d" Dec 11 14:52:41 crc kubenswrapper[5050]: E1211 14:52:41.392413 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 14:52:41 crc kubenswrapper[5050]: I1211 14:52:41.437001 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-rtrd8" podStartSLOduration=1.847030578 podStartE2EDuration="4.436967655s" podCreationTimestamp="2025-12-11 14:52:37 +0000 UTC" firstStartedPulling="2025-12-11 14:52:38.359781278 +0000 UTC m=+3849.203503864" lastFinishedPulling="2025-12-11 14:52:40.949718355 +0000 UTC m=+3851.793440941" 
observedRunningTime="2025-12-11 14:52:41.419586628 +0000 UTC m=+3852.263309234" watchObservedRunningTime="2025-12-11 14:52:41.436967655 +0000 UTC m=+3852.280690261" Dec 11 14:52:47 crc kubenswrapper[5050]: I1211 14:52:47.453601 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-rtrd8" Dec 11 14:52:47 crc kubenswrapper[5050]: I1211 14:52:47.453902 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-rtrd8" Dec 11 14:52:47 crc kubenswrapper[5050]: I1211 14:52:47.497179 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-rtrd8" Dec 11 14:52:48 crc kubenswrapper[5050]: I1211 14:52:48.484303 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-rtrd8" Dec 11 14:52:48 crc kubenswrapper[5050]: I1211 14:52:48.526105 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rtrd8"] Dec 11 14:52:50 crc kubenswrapper[5050]: I1211 14:52:50.600454 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-rtrd8" podUID="00a39a14-1cf7-42df-bd52-a5ea96862a4d" containerName="registry-server" containerID="cri-o://917d316a1168eff7f044951dd1a7c813563777791025b9aeede5b0b63385067c" gracePeriod=2 Dec 11 14:52:53 crc kubenswrapper[5050]: I1211 14:52:53.546246 5050 scope.go:117] "RemoveContainer" containerID="592a05928c3cabea3545c1e1092cb9126fb731cd50a3674aaab04017d500886d" Dec 11 14:52:53 crc kubenswrapper[5050]: E1211 14:52:53.546838 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 14:52:53 crc kubenswrapper[5050]: I1211 14:52:53.623887 5050 generic.go:334] "Generic (PLEG): container finished" podID="00a39a14-1cf7-42df-bd52-a5ea96862a4d" containerID="917d316a1168eff7f044951dd1a7c813563777791025b9aeede5b0b63385067c" exitCode=0 Dec 11 14:52:53 crc kubenswrapper[5050]: I1211 14:52:53.623990 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rtrd8" event={"ID":"00a39a14-1cf7-42df-bd52-a5ea96862a4d","Type":"ContainerDied","Data":"917d316a1168eff7f044951dd1a7c813563777791025b9aeede5b0b63385067c"} Dec 11 14:52:53 crc kubenswrapper[5050]: I1211 14:52:53.853246 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-rtrd8" Dec 11 14:52:53 crc kubenswrapper[5050]: I1211 14:52:53.978788 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/00a39a14-1cf7-42df-bd52-a5ea96862a4d-utilities\") pod \"00a39a14-1cf7-42df-bd52-a5ea96862a4d\" (UID: \"00a39a14-1cf7-42df-bd52-a5ea96862a4d\") " Dec 11 14:52:53 crc kubenswrapper[5050]: I1211 14:52:53.979127 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l5fvt\" (UniqueName: \"kubernetes.io/projected/00a39a14-1cf7-42df-bd52-a5ea96862a4d-kube-api-access-l5fvt\") pod \"00a39a14-1cf7-42df-bd52-a5ea96862a4d\" (UID: \"00a39a14-1cf7-42df-bd52-a5ea96862a4d\") " Dec 11 14:52:53 crc kubenswrapper[5050]: I1211 14:52:53.979416 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/00a39a14-1cf7-42df-bd52-a5ea96862a4d-catalog-content\") pod \"00a39a14-1cf7-42df-bd52-a5ea96862a4d\" (UID: \"00a39a14-1cf7-42df-bd52-a5ea96862a4d\") " Dec 11 14:52:53 crc kubenswrapper[5050]: I1211 14:52:53.980307 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/00a39a14-1cf7-42df-bd52-a5ea96862a4d-utilities" (OuterVolumeSpecName: "utilities") pod "00a39a14-1cf7-42df-bd52-a5ea96862a4d" (UID: "00a39a14-1cf7-42df-bd52-a5ea96862a4d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 14:52:53 crc kubenswrapper[5050]: I1211 14:52:53.987317 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00a39a14-1cf7-42df-bd52-a5ea96862a4d-kube-api-access-l5fvt" (OuterVolumeSpecName: "kube-api-access-l5fvt") pod "00a39a14-1cf7-42df-bd52-a5ea96862a4d" (UID: "00a39a14-1cf7-42df-bd52-a5ea96862a4d"). InnerVolumeSpecName "kube-api-access-l5fvt". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 14:52:54 crc kubenswrapper[5050]: I1211 14:52:54.081084 5050 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/00a39a14-1cf7-42df-bd52-a5ea96862a4d-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 14:52:54 crc kubenswrapper[5050]: I1211 14:52:54.081162 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l5fvt\" (UniqueName: \"kubernetes.io/projected/00a39a14-1cf7-42df-bd52-a5ea96862a4d-kube-api-access-l5fvt\") on node \"crc\" DevicePath \"\"" Dec 11 14:52:54 crc kubenswrapper[5050]: I1211 14:52:54.106343 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/00a39a14-1cf7-42df-bd52-a5ea96862a4d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "00a39a14-1cf7-42df-bd52-a5ea96862a4d" (UID: "00a39a14-1cf7-42df-bd52-a5ea96862a4d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 14:52:54 crc kubenswrapper[5050]: I1211 14:52:54.182641 5050 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/00a39a14-1cf7-42df-bd52-a5ea96862a4d-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 14:52:54 crc kubenswrapper[5050]: I1211 14:52:54.636308 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rtrd8" event={"ID":"00a39a14-1cf7-42df-bd52-a5ea96862a4d","Type":"ContainerDied","Data":"cb6a8a2714c01b82f074d6f15ef05786c81a398216f71ea88501b0039904d0a8"} Dec 11 14:52:54 crc kubenswrapper[5050]: I1211 14:52:54.636399 5050 scope.go:117] "RemoveContainer" containerID="917d316a1168eff7f044951dd1a7c813563777791025b9aeede5b0b63385067c" Dec 11 14:52:54 crc kubenswrapper[5050]: I1211 14:52:54.636409 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rtrd8" Dec 11 14:52:54 crc kubenswrapper[5050]: I1211 14:52:54.678177 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rtrd8"] Dec 11 14:52:54 crc kubenswrapper[5050]: I1211 14:52:54.678683 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-rtrd8"] Dec 11 14:52:54 crc kubenswrapper[5050]: I1211 14:52:54.683599 5050 scope.go:117] "RemoveContainer" containerID="aeb959151787476e6dc2741022bdc0ea1e1137f9c30a464090a80e306b1e9228" Dec 11 14:52:54 crc kubenswrapper[5050]: I1211 14:52:54.709322 5050 scope.go:117] "RemoveContainer" containerID="0877500d6934da460b1dd2f08b806dab1330d962c56c3be15a2183aedb365e90" Dec 11 14:52:55 crc kubenswrapper[5050]: I1211 14:52:55.555774 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="00a39a14-1cf7-42df-bd52-a5ea96862a4d" path="/var/lib/kubelet/pods/00a39a14-1cf7-42df-bd52-a5ea96862a4d/volumes" Dec 11 14:53:07 crc kubenswrapper[5050]: I1211 14:53:07.547157 5050 scope.go:117] "RemoveContainer" containerID="592a05928c3cabea3545c1e1092cb9126fb731cd50a3674aaab04017d500886d" Dec 11 14:53:07 crc kubenswrapper[5050]: E1211 14:53:07.548443 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 14:53:19 crc kubenswrapper[5050]: I1211 14:53:19.555296 5050 scope.go:117] "RemoveContainer" containerID="592a05928c3cabea3545c1e1092cb9126fb731cd50a3674aaab04017d500886d" Dec 11 14:53:19 crc kubenswrapper[5050]: E1211 14:53:19.556299 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 14:53:31 crc kubenswrapper[5050]: I1211 14:53:31.546042 5050 scope.go:117] "RemoveContainer" containerID="592a05928c3cabea3545c1e1092cb9126fb731cd50a3674aaab04017d500886d" Dec 11 14:53:31 crc kubenswrapper[5050]: E1211 14:53:31.547084 
5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 14:53:43 crc kubenswrapper[5050]: I1211 14:53:43.546199 5050 scope.go:117] "RemoveContainer" containerID="592a05928c3cabea3545c1e1092cb9126fb731cd50a3674aaab04017d500886d" Dec 11 14:53:43 crc kubenswrapper[5050]: E1211 14:53:43.547439 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 14:53:52 crc kubenswrapper[5050]: I1211 14:53:52.270390 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-s7k87"] Dec 11 14:53:52 crc kubenswrapper[5050]: E1211 14:53:52.271299 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00a39a14-1cf7-42df-bd52-a5ea96862a4d" containerName="registry-server" Dec 11 14:53:52 crc kubenswrapper[5050]: I1211 14:53:52.271313 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="00a39a14-1cf7-42df-bd52-a5ea96862a4d" containerName="registry-server" Dec 11 14:53:52 crc kubenswrapper[5050]: E1211 14:53:52.271325 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00a39a14-1cf7-42df-bd52-a5ea96862a4d" containerName="extract-utilities" Dec 11 14:53:52 crc kubenswrapper[5050]: I1211 14:53:52.271331 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="00a39a14-1cf7-42df-bd52-a5ea96862a4d" containerName="extract-utilities" Dec 11 14:53:52 crc kubenswrapper[5050]: E1211 14:53:52.271344 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00a39a14-1cf7-42df-bd52-a5ea96862a4d" containerName="extract-content" Dec 11 14:53:52 crc kubenswrapper[5050]: I1211 14:53:52.271352 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="00a39a14-1cf7-42df-bd52-a5ea96862a4d" containerName="extract-content" Dec 11 14:53:52 crc kubenswrapper[5050]: I1211 14:53:52.271480 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="00a39a14-1cf7-42df-bd52-a5ea96862a4d" containerName="registry-server" Dec 11 14:53:52 crc kubenswrapper[5050]: I1211 14:53:52.272916 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s7k87" Dec 11 14:53:52 crc kubenswrapper[5050]: I1211 14:53:52.324880 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b0066ac-c475-4694-a0eb-ba9dd5de4113-catalog-content\") pod \"redhat-marketplace-s7k87\" (UID: \"7b0066ac-c475-4694-a0eb-ba9dd5de4113\") " pod="openshift-marketplace/redhat-marketplace-s7k87" Dec 11 14:53:52 crc kubenswrapper[5050]: I1211 14:53:52.324927 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b0066ac-c475-4694-a0eb-ba9dd5de4113-utilities\") pod \"redhat-marketplace-s7k87\" (UID: \"7b0066ac-c475-4694-a0eb-ba9dd5de4113\") " pod="openshift-marketplace/redhat-marketplace-s7k87" Dec 11 14:53:52 crc kubenswrapper[5050]: I1211 14:53:52.325046 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cnxk9\" (UniqueName: \"kubernetes.io/projected/7b0066ac-c475-4694-a0eb-ba9dd5de4113-kube-api-access-cnxk9\") pod \"redhat-marketplace-s7k87\" (UID: \"7b0066ac-c475-4694-a0eb-ba9dd5de4113\") " pod="openshift-marketplace/redhat-marketplace-s7k87" Dec 11 14:53:52 crc kubenswrapper[5050]: I1211 14:53:52.329722 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-s7k87"] Dec 11 14:53:52 crc kubenswrapper[5050]: I1211 14:53:52.426596 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b0066ac-c475-4694-a0eb-ba9dd5de4113-catalog-content\") pod \"redhat-marketplace-s7k87\" (UID: \"7b0066ac-c475-4694-a0eb-ba9dd5de4113\") " pod="openshift-marketplace/redhat-marketplace-s7k87" Dec 11 14:53:52 crc kubenswrapper[5050]: I1211 14:53:52.426643 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b0066ac-c475-4694-a0eb-ba9dd5de4113-utilities\") pod \"redhat-marketplace-s7k87\" (UID: \"7b0066ac-c475-4694-a0eb-ba9dd5de4113\") " pod="openshift-marketplace/redhat-marketplace-s7k87" Dec 11 14:53:52 crc kubenswrapper[5050]: I1211 14:53:52.426709 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cnxk9\" (UniqueName: \"kubernetes.io/projected/7b0066ac-c475-4694-a0eb-ba9dd5de4113-kube-api-access-cnxk9\") pod \"redhat-marketplace-s7k87\" (UID: \"7b0066ac-c475-4694-a0eb-ba9dd5de4113\") " pod="openshift-marketplace/redhat-marketplace-s7k87" Dec 11 14:53:52 crc kubenswrapper[5050]: I1211 14:53:52.427621 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b0066ac-c475-4694-a0eb-ba9dd5de4113-catalog-content\") pod \"redhat-marketplace-s7k87\" (UID: \"7b0066ac-c475-4694-a0eb-ba9dd5de4113\") " pod="openshift-marketplace/redhat-marketplace-s7k87" Dec 11 14:53:52 crc kubenswrapper[5050]: I1211 14:53:52.428224 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b0066ac-c475-4694-a0eb-ba9dd5de4113-utilities\") pod \"redhat-marketplace-s7k87\" (UID: \"7b0066ac-c475-4694-a0eb-ba9dd5de4113\") " pod="openshift-marketplace/redhat-marketplace-s7k87" Dec 11 14:53:52 crc kubenswrapper[5050]: I1211 14:53:52.446134 5050 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-cnxk9\" (UniqueName: \"kubernetes.io/projected/7b0066ac-c475-4694-a0eb-ba9dd5de4113-kube-api-access-cnxk9\") pod \"redhat-marketplace-s7k87\" (UID: \"7b0066ac-c475-4694-a0eb-ba9dd5de4113\") " pod="openshift-marketplace/redhat-marketplace-s7k87" Dec 11 14:53:52 crc kubenswrapper[5050]: I1211 14:53:52.642251 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s7k87" Dec 11 14:53:52 crc kubenswrapper[5050]: I1211 14:53:52.992699 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-s7k87"] Dec 11 14:53:53 crc kubenswrapper[5050]: I1211 14:53:53.226245 5050 generic.go:334] "Generic (PLEG): container finished" podID="7b0066ac-c475-4694-a0eb-ba9dd5de4113" containerID="b031beda16a8470ea2c8212231361666c6aac94915bbc165a3c109d557b4d2cf" exitCode=0 Dec 11 14:53:53 crc kubenswrapper[5050]: I1211 14:53:53.226325 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s7k87" event={"ID":"7b0066ac-c475-4694-a0eb-ba9dd5de4113","Type":"ContainerDied","Data":"b031beda16a8470ea2c8212231361666c6aac94915bbc165a3c109d557b4d2cf"} Dec 11 14:53:53 crc kubenswrapper[5050]: I1211 14:53:53.226946 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s7k87" event={"ID":"7b0066ac-c475-4694-a0eb-ba9dd5de4113","Type":"ContainerStarted","Data":"872abd8b8f516a4d61fded4287c6d67c411e496c856263f061486dc7995acaea"} Dec 11 14:53:54 crc kubenswrapper[5050]: I1211 14:53:54.545694 5050 scope.go:117] "RemoveContainer" containerID="592a05928c3cabea3545c1e1092cb9126fb731cd50a3674aaab04017d500886d" Dec 11 14:53:54 crc kubenswrapper[5050]: E1211 14:53:54.546376 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 14:53:55 crc kubenswrapper[5050]: I1211 14:53:55.246831 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s7k87" event={"ID":"7b0066ac-c475-4694-a0eb-ba9dd5de4113","Type":"ContainerDied","Data":"c03fb525606de44be53152962618a2b91a1ef183d384ca5f23b377a1ae52c428"} Dec 11 14:53:55 crc kubenswrapper[5050]: I1211 14:53:55.246657 5050 generic.go:334] "Generic (PLEG): container finished" podID="7b0066ac-c475-4694-a0eb-ba9dd5de4113" containerID="c03fb525606de44be53152962618a2b91a1ef183d384ca5f23b377a1ae52c428" exitCode=0 Dec 11 14:53:57 crc kubenswrapper[5050]: I1211 14:53:57.261706 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s7k87" event={"ID":"7b0066ac-c475-4694-a0eb-ba9dd5de4113","Type":"ContainerStarted","Data":"74374927d8a9b302c710fca7fafb2f57a840ada07ba8855202f09d644ffa9cf1"} Dec 11 14:53:57 crc kubenswrapper[5050]: I1211 14:53:57.281966 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-s7k87" podStartSLOduration=2.300915508 podStartE2EDuration="5.2819497s" podCreationTimestamp="2025-12-11 14:53:52 +0000 UTC" firstStartedPulling="2025-12-11 14:53:53.228558265 +0000 UTC m=+3924.072280891" lastFinishedPulling="2025-12-11 
14:53:56.209592457 +0000 UTC m=+3927.053315083" observedRunningTime="2025-12-11 14:53:57.277962352 +0000 UTC m=+3928.121684948" watchObservedRunningTime="2025-12-11 14:53:57.2819497 +0000 UTC m=+3928.125672286" Dec 11 14:54:02 crc kubenswrapper[5050]: I1211 14:54:02.642614 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-s7k87" Dec 11 14:54:02 crc kubenswrapper[5050]: I1211 14:54:02.643613 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-s7k87" Dec 11 14:54:02 crc kubenswrapper[5050]: I1211 14:54:02.735541 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-s7k87" Dec 11 14:54:03 crc kubenswrapper[5050]: I1211 14:54:03.350294 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-s7k87" Dec 11 14:54:03 crc kubenswrapper[5050]: I1211 14:54:03.414543 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-s7k87"] Dec 11 14:54:05 crc kubenswrapper[5050]: I1211 14:54:05.328262 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-s7k87" podUID="7b0066ac-c475-4694-a0eb-ba9dd5de4113" containerName="registry-server" containerID="cri-o://74374927d8a9b302c710fca7fafb2f57a840ada07ba8855202f09d644ffa9cf1" gracePeriod=2 Dec 11 14:54:06 crc kubenswrapper[5050]: I1211 14:54:06.247482 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s7k87" Dec 11 14:54:06 crc kubenswrapper[5050]: I1211 14:54:06.336664 5050 generic.go:334] "Generic (PLEG): container finished" podID="7b0066ac-c475-4694-a0eb-ba9dd5de4113" containerID="74374927d8a9b302c710fca7fafb2f57a840ada07ba8855202f09d644ffa9cf1" exitCode=0 Dec 11 14:54:06 crc kubenswrapper[5050]: I1211 14:54:06.336720 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s7k87" Dec 11 14:54:06 crc kubenswrapper[5050]: I1211 14:54:06.336721 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s7k87" event={"ID":"7b0066ac-c475-4694-a0eb-ba9dd5de4113","Type":"ContainerDied","Data":"74374927d8a9b302c710fca7fafb2f57a840ada07ba8855202f09d644ffa9cf1"} Dec 11 14:54:06 crc kubenswrapper[5050]: I1211 14:54:06.336863 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s7k87" event={"ID":"7b0066ac-c475-4694-a0eb-ba9dd5de4113","Type":"ContainerDied","Data":"872abd8b8f516a4d61fded4287c6d67c411e496c856263f061486dc7995acaea"} Dec 11 14:54:06 crc kubenswrapper[5050]: I1211 14:54:06.336885 5050 scope.go:117] "RemoveContainer" containerID="74374927d8a9b302c710fca7fafb2f57a840ada07ba8855202f09d644ffa9cf1" Dec 11 14:54:06 crc kubenswrapper[5050]: I1211 14:54:06.354023 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b0066ac-c475-4694-a0eb-ba9dd5de4113-utilities\") pod \"7b0066ac-c475-4694-a0eb-ba9dd5de4113\" (UID: \"7b0066ac-c475-4694-a0eb-ba9dd5de4113\") " Dec 11 14:54:06 crc kubenswrapper[5050]: I1211 14:54:06.354074 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b0066ac-c475-4694-a0eb-ba9dd5de4113-catalog-content\") pod \"7b0066ac-c475-4694-a0eb-ba9dd5de4113\" (UID: \"7b0066ac-c475-4694-a0eb-ba9dd5de4113\") " Dec 11 14:54:06 crc kubenswrapper[5050]: I1211 14:54:06.354165 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cnxk9\" (UniqueName: \"kubernetes.io/projected/7b0066ac-c475-4694-a0eb-ba9dd5de4113-kube-api-access-cnxk9\") pod \"7b0066ac-c475-4694-a0eb-ba9dd5de4113\" (UID: \"7b0066ac-c475-4694-a0eb-ba9dd5de4113\") " Dec 11 14:54:06 crc kubenswrapper[5050]: I1211 14:54:06.355471 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7b0066ac-c475-4694-a0eb-ba9dd5de4113-utilities" (OuterVolumeSpecName: "utilities") pod "7b0066ac-c475-4694-a0eb-ba9dd5de4113" (UID: "7b0066ac-c475-4694-a0eb-ba9dd5de4113"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 14:54:06 crc kubenswrapper[5050]: I1211 14:54:06.362151 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b0066ac-c475-4694-a0eb-ba9dd5de4113-kube-api-access-cnxk9" (OuterVolumeSpecName: "kube-api-access-cnxk9") pod "7b0066ac-c475-4694-a0eb-ba9dd5de4113" (UID: "7b0066ac-c475-4694-a0eb-ba9dd5de4113"). InnerVolumeSpecName "kube-api-access-cnxk9". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 14:54:06 crc kubenswrapper[5050]: I1211 14:54:06.363191 5050 scope.go:117] "RemoveContainer" containerID="c03fb525606de44be53152962618a2b91a1ef183d384ca5f23b377a1ae52c428" Dec 11 14:54:06 crc kubenswrapper[5050]: I1211 14:54:06.377907 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7b0066ac-c475-4694-a0eb-ba9dd5de4113-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7b0066ac-c475-4694-a0eb-ba9dd5de4113" (UID: "7b0066ac-c475-4694-a0eb-ba9dd5de4113"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 14:54:06 crc kubenswrapper[5050]: I1211 14:54:06.395285 5050 scope.go:117] "RemoveContainer" containerID="b031beda16a8470ea2c8212231361666c6aac94915bbc165a3c109d557b4d2cf" Dec 11 14:54:06 crc kubenswrapper[5050]: I1211 14:54:06.414308 5050 scope.go:117] "RemoveContainer" containerID="74374927d8a9b302c710fca7fafb2f57a840ada07ba8855202f09d644ffa9cf1" Dec 11 14:54:06 crc kubenswrapper[5050]: E1211 14:54:06.414636 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"74374927d8a9b302c710fca7fafb2f57a840ada07ba8855202f09d644ffa9cf1\": container with ID starting with 74374927d8a9b302c710fca7fafb2f57a840ada07ba8855202f09d644ffa9cf1 not found: ID does not exist" containerID="74374927d8a9b302c710fca7fafb2f57a840ada07ba8855202f09d644ffa9cf1" Dec 11 14:54:06 crc kubenswrapper[5050]: I1211 14:54:06.414667 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"74374927d8a9b302c710fca7fafb2f57a840ada07ba8855202f09d644ffa9cf1"} err="failed to get container status \"74374927d8a9b302c710fca7fafb2f57a840ada07ba8855202f09d644ffa9cf1\": rpc error: code = NotFound desc = could not find container \"74374927d8a9b302c710fca7fafb2f57a840ada07ba8855202f09d644ffa9cf1\": container with ID starting with 74374927d8a9b302c710fca7fafb2f57a840ada07ba8855202f09d644ffa9cf1 not found: ID does not exist" Dec 11 14:54:06 crc kubenswrapper[5050]: I1211 14:54:06.414691 5050 scope.go:117] "RemoveContainer" containerID="c03fb525606de44be53152962618a2b91a1ef183d384ca5f23b377a1ae52c428" Dec 11 14:54:06 crc kubenswrapper[5050]: E1211 14:54:06.414964 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c03fb525606de44be53152962618a2b91a1ef183d384ca5f23b377a1ae52c428\": container with ID starting with c03fb525606de44be53152962618a2b91a1ef183d384ca5f23b377a1ae52c428 not found: ID does not exist" containerID="c03fb525606de44be53152962618a2b91a1ef183d384ca5f23b377a1ae52c428" Dec 11 14:54:06 crc kubenswrapper[5050]: I1211 14:54:06.415001 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c03fb525606de44be53152962618a2b91a1ef183d384ca5f23b377a1ae52c428"} err="failed to get container status \"c03fb525606de44be53152962618a2b91a1ef183d384ca5f23b377a1ae52c428\": rpc error: code = NotFound desc = could not find container \"c03fb525606de44be53152962618a2b91a1ef183d384ca5f23b377a1ae52c428\": container with ID starting with c03fb525606de44be53152962618a2b91a1ef183d384ca5f23b377a1ae52c428 not found: ID does not exist" Dec 11 14:54:06 crc kubenswrapper[5050]: I1211 14:54:06.415055 5050 scope.go:117] "RemoveContainer" containerID="b031beda16a8470ea2c8212231361666c6aac94915bbc165a3c109d557b4d2cf" Dec 11 14:54:06 crc kubenswrapper[5050]: E1211 14:54:06.415316 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b031beda16a8470ea2c8212231361666c6aac94915bbc165a3c109d557b4d2cf\": container with ID starting with b031beda16a8470ea2c8212231361666c6aac94915bbc165a3c109d557b4d2cf not found: ID does not exist" containerID="b031beda16a8470ea2c8212231361666c6aac94915bbc165a3c109d557b4d2cf" Dec 11 14:54:06 crc kubenswrapper[5050]: I1211 14:54:06.415334 5050 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"b031beda16a8470ea2c8212231361666c6aac94915bbc165a3c109d557b4d2cf"} err="failed to get container status \"b031beda16a8470ea2c8212231361666c6aac94915bbc165a3c109d557b4d2cf\": rpc error: code = NotFound desc = could not find container \"b031beda16a8470ea2c8212231361666c6aac94915bbc165a3c109d557b4d2cf\": container with ID starting with b031beda16a8470ea2c8212231361666c6aac94915bbc165a3c109d557b4d2cf not found: ID does not exist" Dec 11 14:54:06 crc kubenswrapper[5050]: I1211 14:54:06.455869 5050 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b0066ac-c475-4694-a0eb-ba9dd5de4113-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 14:54:06 crc kubenswrapper[5050]: I1211 14:54:06.455916 5050 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b0066ac-c475-4694-a0eb-ba9dd5de4113-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 14:54:06 crc kubenswrapper[5050]: I1211 14:54:06.455936 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cnxk9\" (UniqueName: \"kubernetes.io/projected/7b0066ac-c475-4694-a0eb-ba9dd5de4113-kube-api-access-cnxk9\") on node \"crc\" DevicePath \"\"" Dec 11 14:54:06 crc kubenswrapper[5050]: I1211 14:54:06.687280 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-s7k87"] Dec 11 14:54:06 crc kubenswrapper[5050]: I1211 14:54:06.693056 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-s7k87"] Dec 11 14:54:07 crc kubenswrapper[5050]: I1211 14:54:07.557172 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b0066ac-c475-4694-a0eb-ba9dd5de4113" path="/var/lib/kubelet/pods/7b0066ac-c475-4694-a0eb-ba9dd5de4113/volumes" Dec 11 14:54:08 crc kubenswrapper[5050]: I1211 14:54:08.545930 5050 scope.go:117] "RemoveContainer" containerID="592a05928c3cabea3545c1e1092cb9126fb731cd50a3674aaab04017d500886d" Dec 11 14:54:08 crc kubenswrapper[5050]: E1211 14:54:08.547177 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 14:54:22 crc kubenswrapper[5050]: I1211 14:54:22.546004 5050 scope.go:117] "RemoveContainer" containerID="592a05928c3cabea3545c1e1092cb9126fb731cd50a3674aaab04017d500886d" Dec 11 14:54:22 crc kubenswrapper[5050]: E1211 14:54:22.548917 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 14:54:33 crc kubenswrapper[5050]: I1211 14:54:33.545816 5050 scope.go:117] "RemoveContainer" containerID="592a05928c3cabea3545c1e1092cb9126fb731cd50a3674aaab04017d500886d" Dec 11 14:54:33 crc kubenswrapper[5050]: E1211 14:54:33.546611 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 14:54:47 crc kubenswrapper[5050]: I1211 14:54:47.546285 5050 scope.go:117] "RemoveContainer" containerID="592a05928c3cabea3545c1e1092cb9126fb731cd50a3674aaab04017d500886d" Dec 11 14:54:47 crc kubenswrapper[5050]: E1211 14:54:47.547384 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 14:55:00 crc kubenswrapper[5050]: I1211 14:55:00.547144 5050 scope.go:117] "RemoveContainer" containerID="592a05928c3cabea3545c1e1092cb9126fb731cd50a3674aaab04017d500886d" Dec 11 14:55:00 crc kubenswrapper[5050]: E1211 14:55:00.548001 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 14:55:13 crc kubenswrapper[5050]: I1211 14:55:13.546616 5050 scope.go:117] "RemoveContainer" containerID="592a05928c3cabea3545c1e1092cb9126fb731cd50a3674aaab04017d500886d" Dec 11 14:55:13 crc kubenswrapper[5050]: E1211 14:55:13.547449 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 14:55:15 crc kubenswrapper[5050]: I1211 14:55:15.882178 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-rwxcx"] Dec 11 14:55:15 crc kubenswrapper[5050]: E1211 14:55:15.882724 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b0066ac-c475-4694-a0eb-ba9dd5de4113" containerName="registry-server" Dec 11 14:55:15 crc kubenswrapper[5050]: I1211 14:55:15.882747 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b0066ac-c475-4694-a0eb-ba9dd5de4113" containerName="registry-server" Dec 11 14:55:15 crc kubenswrapper[5050]: E1211 14:55:15.882789 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b0066ac-c475-4694-a0eb-ba9dd5de4113" containerName="extract-utilities" Dec 11 14:55:15 crc kubenswrapper[5050]: I1211 14:55:15.882802 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b0066ac-c475-4694-a0eb-ba9dd5de4113" containerName="extract-utilities" Dec 11 14:55:15 crc kubenswrapper[5050]: E1211 14:55:15.882835 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b0066ac-c475-4694-a0eb-ba9dd5de4113" containerName="extract-content" Dec 11 14:55:15 crc 
kubenswrapper[5050]: I1211 14:55:15.882849 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b0066ac-c475-4694-a0eb-ba9dd5de4113" containerName="extract-content" Dec 11 14:55:15 crc kubenswrapper[5050]: I1211 14:55:15.883223 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b0066ac-c475-4694-a0eb-ba9dd5de4113" containerName="registry-server" Dec 11 14:55:15 crc kubenswrapper[5050]: I1211 14:55:15.885107 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rwxcx" Dec 11 14:55:15 crc kubenswrapper[5050]: I1211 14:55:15.890847 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rwxcx"] Dec 11 14:55:15 crc kubenswrapper[5050]: I1211 14:55:15.989357 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5j2m\" (UniqueName: \"kubernetes.io/projected/103a09a8-5a53-4e16-95e4-93a9e5ca7d11-kube-api-access-b5j2m\") pod \"certified-operators-rwxcx\" (UID: \"103a09a8-5a53-4e16-95e4-93a9e5ca7d11\") " pod="openshift-marketplace/certified-operators-rwxcx" Dec 11 14:55:15 crc kubenswrapper[5050]: I1211 14:55:15.991163 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/103a09a8-5a53-4e16-95e4-93a9e5ca7d11-catalog-content\") pod \"certified-operators-rwxcx\" (UID: \"103a09a8-5a53-4e16-95e4-93a9e5ca7d11\") " pod="openshift-marketplace/certified-operators-rwxcx" Dec 11 14:55:15 crc kubenswrapper[5050]: I1211 14:55:15.991296 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/103a09a8-5a53-4e16-95e4-93a9e5ca7d11-utilities\") pod \"certified-operators-rwxcx\" (UID: \"103a09a8-5a53-4e16-95e4-93a9e5ca7d11\") " pod="openshift-marketplace/certified-operators-rwxcx" Dec 11 14:55:16 crc kubenswrapper[5050]: I1211 14:55:16.092640 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/103a09a8-5a53-4e16-95e4-93a9e5ca7d11-catalog-content\") pod \"certified-operators-rwxcx\" (UID: \"103a09a8-5a53-4e16-95e4-93a9e5ca7d11\") " pod="openshift-marketplace/certified-operators-rwxcx" Dec 11 14:55:16 crc kubenswrapper[5050]: I1211 14:55:16.092711 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/103a09a8-5a53-4e16-95e4-93a9e5ca7d11-utilities\") pod \"certified-operators-rwxcx\" (UID: \"103a09a8-5a53-4e16-95e4-93a9e5ca7d11\") " pod="openshift-marketplace/certified-operators-rwxcx" Dec 11 14:55:16 crc kubenswrapper[5050]: I1211 14:55:16.092790 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b5j2m\" (UniqueName: \"kubernetes.io/projected/103a09a8-5a53-4e16-95e4-93a9e5ca7d11-kube-api-access-b5j2m\") pod \"certified-operators-rwxcx\" (UID: \"103a09a8-5a53-4e16-95e4-93a9e5ca7d11\") " pod="openshift-marketplace/certified-operators-rwxcx" Dec 11 14:55:16 crc kubenswrapper[5050]: I1211 14:55:16.093502 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/103a09a8-5a53-4e16-95e4-93a9e5ca7d11-catalog-content\") pod \"certified-operators-rwxcx\" (UID: \"103a09a8-5a53-4e16-95e4-93a9e5ca7d11\") " 
pod="openshift-marketplace/certified-operators-rwxcx" Dec 11 14:55:16 crc kubenswrapper[5050]: I1211 14:55:16.093578 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/103a09a8-5a53-4e16-95e4-93a9e5ca7d11-utilities\") pod \"certified-operators-rwxcx\" (UID: \"103a09a8-5a53-4e16-95e4-93a9e5ca7d11\") " pod="openshift-marketplace/certified-operators-rwxcx" Dec 11 14:55:16 crc kubenswrapper[5050]: I1211 14:55:16.112276 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b5j2m\" (UniqueName: \"kubernetes.io/projected/103a09a8-5a53-4e16-95e4-93a9e5ca7d11-kube-api-access-b5j2m\") pod \"certified-operators-rwxcx\" (UID: \"103a09a8-5a53-4e16-95e4-93a9e5ca7d11\") " pod="openshift-marketplace/certified-operators-rwxcx" Dec 11 14:55:16 crc kubenswrapper[5050]: I1211 14:55:16.215588 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rwxcx" Dec 11 14:55:16 crc kubenswrapper[5050]: I1211 14:55:16.531186 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rwxcx"] Dec 11 14:55:16 crc kubenswrapper[5050]: I1211 14:55:16.855268 5050 generic.go:334] "Generic (PLEG): container finished" podID="103a09a8-5a53-4e16-95e4-93a9e5ca7d11" containerID="175c1502f1ca795039e9ef167bc7bc5758d3975156da536d1661d9c9464dbee2" exitCode=0 Dec 11 14:55:16 crc kubenswrapper[5050]: I1211 14:55:16.855319 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rwxcx" event={"ID":"103a09a8-5a53-4e16-95e4-93a9e5ca7d11","Type":"ContainerDied","Data":"175c1502f1ca795039e9ef167bc7bc5758d3975156da536d1661d9c9464dbee2"} Dec 11 14:55:16 crc kubenswrapper[5050]: I1211 14:55:16.855356 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rwxcx" event={"ID":"103a09a8-5a53-4e16-95e4-93a9e5ca7d11","Type":"ContainerStarted","Data":"7b42a6d187307b4c68a9e8121365d39da599e889070ea50a77fd221d7d524a6f"} Dec 11 14:55:18 crc kubenswrapper[5050]: I1211 14:55:18.875067 5050 generic.go:334] "Generic (PLEG): container finished" podID="103a09a8-5a53-4e16-95e4-93a9e5ca7d11" containerID="eacc4091310b130bd78490687421cc253496c4047efbe78487c3f96b51f7890f" exitCode=0 Dec 11 14:55:18 crc kubenswrapper[5050]: I1211 14:55:18.875127 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rwxcx" event={"ID":"103a09a8-5a53-4e16-95e4-93a9e5ca7d11","Type":"ContainerDied","Data":"eacc4091310b130bd78490687421cc253496c4047efbe78487c3f96b51f7890f"} Dec 11 14:55:19 crc kubenswrapper[5050]: I1211 14:55:19.883708 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rwxcx" event={"ID":"103a09a8-5a53-4e16-95e4-93a9e5ca7d11","Type":"ContainerStarted","Data":"9dc377d550225bb9520fa7f658694c60f50ee5315865a4443d027fece1678e60"} Dec 11 14:55:19 crc kubenswrapper[5050]: I1211 14:55:19.901492 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-rwxcx" podStartSLOduration=2.449167831 podStartE2EDuration="4.901475095s" podCreationTimestamp="2025-12-11 14:55:15 +0000 UTC" firstStartedPulling="2025-12-11 14:55:16.857700646 +0000 UTC m=+4007.701423232" lastFinishedPulling="2025-12-11 14:55:19.31000792 +0000 UTC m=+4010.153730496" observedRunningTime="2025-12-11 14:55:19.898385581 +0000 UTC m=+4010.742108167" 
watchObservedRunningTime="2025-12-11 14:55:19.901475095 +0000 UTC m=+4010.745197681" Dec 11 14:55:24 crc kubenswrapper[5050]: I1211 14:55:24.546474 5050 scope.go:117] "RemoveContainer" containerID="592a05928c3cabea3545c1e1092cb9126fb731cd50a3674aaab04017d500886d" Dec 11 14:55:24 crc kubenswrapper[5050]: E1211 14:55:24.547035 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 14:55:26 crc kubenswrapper[5050]: I1211 14:55:26.216666 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-rwxcx" Dec 11 14:55:26 crc kubenswrapper[5050]: I1211 14:55:26.216943 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-rwxcx" Dec 11 14:55:26 crc kubenswrapper[5050]: I1211 14:55:26.288831 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-rwxcx" Dec 11 14:55:26 crc kubenswrapper[5050]: I1211 14:55:26.995724 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-rwxcx" Dec 11 14:55:27 crc kubenswrapper[5050]: I1211 14:55:27.056462 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rwxcx"] Dec 11 14:55:28 crc kubenswrapper[5050]: I1211 14:55:28.947932 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-rwxcx" podUID="103a09a8-5a53-4e16-95e4-93a9e5ca7d11" containerName="registry-server" containerID="cri-o://9dc377d550225bb9520fa7f658694c60f50ee5315865a4443d027fece1678e60" gracePeriod=2 Dec 11 14:55:30 crc kubenswrapper[5050]: I1211 14:55:30.454705 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rwxcx" Dec 11 14:55:30 crc kubenswrapper[5050]: I1211 14:55:30.608561 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/103a09a8-5a53-4e16-95e4-93a9e5ca7d11-utilities\") pod \"103a09a8-5a53-4e16-95e4-93a9e5ca7d11\" (UID: \"103a09a8-5a53-4e16-95e4-93a9e5ca7d11\") " Dec 11 14:55:30 crc kubenswrapper[5050]: I1211 14:55:30.608826 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b5j2m\" (UniqueName: \"kubernetes.io/projected/103a09a8-5a53-4e16-95e4-93a9e5ca7d11-kube-api-access-b5j2m\") pod \"103a09a8-5a53-4e16-95e4-93a9e5ca7d11\" (UID: \"103a09a8-5a53-4e16-95e4-93a9e5ca7d11\") " Dec 11 14:55:30 crc kubenswrapper[5050]: I1211 14:55:30.608903 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/103a09a8-5a53-4e16-95e4-93a9e5ca7d11-catalog-content\") pod \"103a09a8-5a53-4e16-95e4-93a9e5ca7d11\" (UID: \"103a09a8-5a53-4e16-95e4-93a9e5ca7d11\") " Dec 11 14:55:30 crc kubenswrapper[5050]: I1211 14:55:30.609467 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/103a09a8-5a53-4e16-95e4-93a9e5ca7d11-utilities" (OuterVolumeSpecName: "utilities") pod "103a09a8-5a53-4e16-95e4-93a9e5ca7d11" (UID: "103a09a8-5a53-4e16-95e4-93a9e5ca7d11"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 14:55:30 crc kubenswrapper[5050]: I1211 14:55:30.616841 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/103a09a8-5a53-4e16-95e4-93a9e5ca7d11-kube-api-access-b5j2m" (OuterVolumeSpecName: "kube-api-access-b5j2m") pod "103a09a8-5a53-4e16-95e4-93a9e5ca7d11" (UID: "103a09a8-5a53-4e16-95e4-93a9e5ca7d11"). InnerVolumeSpecName "kube-api-access-b5j2m". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 14:55:30 crc kubenswrapper[5050]: I1211 14:55:30.671198 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/103a09a8-5a53-4e16-95e4-93a9e5ca7d11-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "103a09a8-5a53-4e16-95e4-93a9e5ca7d11" (UID: "103a09a8-5a53-4e16-95e4-93a9e5ca7d11"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 14:55:30 crc kubenswrapper[5050]: I1211 14:55:30.710647 5050 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/103a09a8-5a53-4e16-95e4-93a9e5ca7d11-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 14:55:30 crc kubenswrapper[5050]: I1211 14:55:30.710677 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b5j2m\" (UniqueName: \"kubernetes.io/projected/103a09a8-5a53-4e16-95e4-93a9e5ca7d11-kube-api-access-b5j2m\") on node \"crc\" DevicePath \"\"" Dec 11 14:55:30 crc kubenswrapper[5050]: I1211 14:55:30.710688 5050 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/103a09a8-5a53-4e16-95e4-93a9e5ca7d11-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 14:55:30 crc kubenswrapper[5050]: I1211 14:55:30.965922 5050 generic.go:334] "Generic (PLEG): container finished" podID="103a09a8-5a53-4e16-95e4-93a9e5ca7d11" containerID="9dc377d550225bb9520fa7f658694c60f50ee5315865a4443d027fece1678e60" exitCode=0 Dec 11 14:55:30 crc kubenswrapper[5050]: I1211 14:55:30.965979 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rwxcx" event={"ID":"103a09a8-5a53-4e16-95e4-93a9e5ca7d11","Type":"ContainerDied","Data":"9dc377d550225bb9520fa7f658694c60f50ee5315865a4443d027fece1678e60"} Dec 11 14:55:30 crc kubenswrapper[5050]: I1211 14:55:30.966041 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rwxcx" event={"ID":"103a09a8-5a53-4e16-95e4-93a9e5ca7d11","Type":"ContainerDied","Data":"7b42a6d187307b4c68a9e8121365d39da599e889070ea50a77fd221d7d524a6f"} Dec 11 14:55:30 crc kubenswrapper[5050]: I1211 14:55:30.966062 5050 scope.go:117] "RemoveContainer" containerID="9dc377d550225bb9520fa7f658694c60f50ee5315865a4443d027fece1678e60" Dec 11 14:55:30 crc kubenswrapper[5050]: I1211 14:55:30.966102 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rwxcx" Dec 11 14:55:30 crc kubenswrapper[5050]: I1211 14:55:30.987912 5050 scope.go:117] "RemoveContainer" containerID="eacc4091310b130bd78490687421cc253496c4047efbe78487c3f96b51f7890f" Dec 11 14:55:31 crc kubenswrapper[5050]: I1211 14:55:31.009425 5050 scope.go:117] "RemoveContainer" containerID="175c1502f1ca795039e9ef167bc7bc5758d3975156da536d1661d9c9464dbee2" Dec 11 14:55:31 crc kubenswrapper[5050]: I1211 14:55:31.041846 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rwxcx"] Dec 11 14:55:31 crc kubenswrapper[5050]: I1211 14:55:31.053052 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-rwxcx"] Dec 11 14:55:31 crc kubenswrapper[5050]: I1211 14:55:31.059300 5050 scope.go:117] "RemoveContainer" containerID="9dc377d550225bb9520fa7f658694c60f50ee5315865a4443d027fece1678e60" Dec 11 14:55:31 crc kubenswrapper[5050]: E1211 14:55:31.059899 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9dc377d550225bb9520fa7f658694c60f50ee5315865a4443d027fece1678e60\": container with ID starting with 9dc377d550225bb9520fa7f658694c60f50ee5315865a4443d027fece1678e60 not found: ID does not exist" containerID="9dc377d550225bb9520fa7f658694c60f50ee5315865a4443d027fece1678e60" Dec 11 14:55:31 crc kubenswrapper[5050]: I1211 14:55:31.059979 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9dc377d550225bb9520fa7f658694c60f50ee5315865a4443d027fece1678e60"} err="failed to get container status \"9dc377d550225bb9520fa7f658694c60f50ee5315865a4443d027fece1678e60\": rpc error: code = NotFound desc = could not find container \"9dc377d550225bb9520fa7f658694c60f50ee5315865a4443d027fece1678e60\": container with ID starting with 9dc377d550225bb9520fa7f658694c60f50ee5315865a4443d027fece1678e60 not found: ID does not exist" Dec 11 14:55:31 crc kubenswrapper[5050]: I1211 14:55:31.060031 5050 scope.go:117] "RemoveContainer" containerID="eacc4091310b130bd78490687421cc253496c4047efbe78487c3f96b51f7890f" Dec 11 14:55:31 crc kubenswrapper[5050]: E1211 14:55:31.061150 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eacc4091310b130bd78490687421cc253496c4047efbe78487c3f96b51f7890f\": container with ID starting with eacc4091310b130bd78490687421cc253496c4047efbe78487c3f96b51f7890f not found: ID does not exist" containerID="eacc4091310b130bd78490687421cc253496c4047efbe78487c3f96b51f7890f" Dec 11 14:55:31 crc kubenswrapper[5050]: I1211 14:55:31.061189 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eacc4091310b130bd78490687421cc253496c4047efbe78487c3f96b51f7890f"} err="failed to get container status \"eacc4091310b130bd78490687421cc253496c4047efbe78487c3f96b51f7890f\": rpc error: code = NotFound desc = could not find container \"eacc4091310b130bd78490687421cc253496c4047efbe78487c3f96b51f7890f\": container with ID starting with eacc4091310b130bd78490687421cc253496c4047efbe78487c3f96b51f7890f not found: ID does not exist" Dec 11 14:55:31 crc kubenswrapper[5050]: I1211 14:55:31.061253 5050 scope.go:117] "RemoveContainer" containerID="175c1502f1ca795039e9ef167bc7bc5758d3975156da536d1661d9c9464dbee2" Dec 11 14:55:31 crc kubenswrapper[5050]: E1211 14:55:31.061532 5050 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"175c1502f1ca795039e9ef167bc7bc5758d3975156da536d1661d9c9464dbee2\": container with ID starting with 175c1502f1ca795039e9ef167bc7bc5758d3975156da536d1661d9c9464dbee2 not found: ID does not exist" containerID="175c1502f1ca795039e9ef167bc7bc5758d3975156da536d1661d9c9464dbee2" Dec 11 14:55:31 crc kubenswrapper[5050]: I1211 14:55:31.061560 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"175c1502f1ca795039e9ef167bc7bc5758d3975156da536d1661d9c9464dbee2"} err="failed to get container status \"175c1502f1ca795039e9ef167bc7bc5758d3975156da536d1661d9c9464dbee2\": rpc error: code = NotFound desc = could not find container \"175c1502f1ca795039e9ef167bc7bc5758d3975156da536d1661d9c9464dbee2\": container with ID starting with 175c1502f1ca795039e9ef167bc7bc5758d3975156da536d1661d9c9464dbee2 not found: ID does not exist" Dec 11 14:55:31 crc kubenswrapper[5050]: I1211 14:55:31.557475 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="103a09a8-5a53-4e16-95e4-93a9e5ca7d11" path="/var/lib/kubelet/pods/103a09a8-5a53-4e16-95e4-93a9e5ca7d11/volumes" Dec 11 14:55:36 crc kubenswrapper[5050]: I1211 14:55:36.546573 5050 scope.go:117] "RemoveContainer" containerID="592a05928c3cabea3545c1e1092cb9126fb731cd50a3674aaab04017d500886d" Dec 11 14:55:36 crc kubenswrapper[5050]: E1211 14:55:36.547795 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 14:55:50 crc kubenswrapper[5050]: I1211 14:55:50.546530 5050 scope.go:117] "RemoveContainer" containerID="592a05928c3cabea3545c1e1092cb9126fb731cd50a3674aaab04017d500886d" Dec 11 14:55:50 crc kubenswrapper[5050]: E1211 14:55:50.547352 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 14:56:01 crc kubenswrapper[5050]: I1211 14:56:01.546449 5050 scope.go:117] "RemoveContainer" containerID="592a05928c3cabea3545c1e1092cb9126fb731cd50a3674aaab04017d500886d" Dec 11 14:56:01 crc kubenswrapper[5050]: E1211 14:56:01.547193 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 14:56:12 crc kubenswrapper[5050]: I1211 14:56:12.546410 5050 scope.go:117] "RemoveContainer" containerID="592a05928c3cabea3545c1e1092cb9126fb731cd50a3674aaab04017d500886d" Dec 11 14:56:12 crc kubenswrapper[5050]: E1211 14:56:12.547038 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 14:56:26 crc kubenswrapper[5050]: I1211 14:56:26.546545 5050 scope.go:117] "RemoveContainer" containerID="592a05928c3cabea3545c1e1092cb9126fb731cd50a3674aaab04017d500886d" Dec 11 14:56:26 crc kubenswrapper[5050]: E1211 14:56:26.547381 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 14:56:41 crc kubenswrapper[5050]: I1211 14:56:41.546565 5050 scope.go:117] "RemoveContainer" containerID="592a05928c3cabea3545c1e1092cb9126fb731cd50a3674aaab04017d500886d" Dec 11 14:56:41 crc kubenswrapper[5050]: E1211 14:56:41.549609 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 14:56:52 crc kubenswrapper[5050]: I1211 14:56:52.546870 5050 scope.go:117] "RemoveContainer" containerID="592a05928c3cabea3545c1e1092cb9126fb731cd50a3674aaab04017d500886d" Dec 11 14:56:52 crc kubenswrapper[5050]: E1211 14:56:52.548003 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 14:57:06 crc kubenswrapper[5050]: I1211 14:57:06.546616 5050 scope.go:117] "RemoveContainer" containerID="592a05928c3cabea3545c1e1092cb9126fb731cd50a3674aaab04017d500886d" Dec 11 14:57:06 crc kubenswrapper[5050]: E1211 14:57:06.547639 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 14:57:21 crc kubenswrapper[5050]: I1211 14:57:21.549642 5050 scope.go:117] "RemoveContainer" containerID="592a05928c3cabea3545c1e1092cb9126fb731cd50a3674aaab04017d500886d" Dec 11 14:57:21 crc kubenswrapper[5050]: E1211 14:57:21.550352 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 14:57:35 crc kubenswrapper[5050]: I1211 14:57:35.546591 5050 scope.go:117] "RemoveContainer" containerID="592a05928c3cabea3545c1e1092cb9126fb731cd50a3674aaab04017d500886d" Dec 11 14:57:35 crc kubenswrapper[5050]: E1211 14:57:35.547572 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 14:57:50 crc kubenswrapper[5050]: I1211 14:57:50.546097 5050 scope.go:117] "RemoveContainer" containerID="592a05928c3cabea3545c1e1092cb9126fb731cd50a3674aaab04017d500886d" Dec 11 14:57:51 crc kubenswrapper[5050]: I1211 14:57:51.039906 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" event={"ID":"7e849b2e-7cd7-4e49-acd2-deab139c699a","Type":"ContainerStarted","Data":"6ff8571bb4559fd374475c94385275b8db9d1ca3a36ff5857122f05b5cf16e65"} Dec 11 15:00:00 crc kubenswrapper[5050]: I1211 15:00:00.192072 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29424420-lmctp"] Dec 11 15:00:00 crc kubenswrapper[5050]: E1211 15:00:00.193069 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="103a09a8-5a53-4e16-95e4-93a9e5ca7d11" containerName="extract-utilities" Dec 11 15:00:00 crc kubenswrapper[5050]: I1211 15:00:00.193085 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="103a09a8-5a53-4e16-95e4-93a9e5ca7d11" containerName="extract-utilities" Dec 11 15:00:00 crc kubenswrapper[5050]: E1211 15:00:00.193111 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="103a09a8-5a53-4e16-95e4-93a9e5ca7d11" containerName="extract-content" Dec 11 15:00:00 crc kubenswrapper[5050]: I1211 15:00:00.193122 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="103a09a8-5a53-4e16-95e4-93a9e5ca7d11" containerName="extract-content" Dec 11 15:00:00 crc kubenswrapper[5050]: E1211 15:00:00.193134 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="103a09a8-5a53-4e16-95e4-93a9e5ca7d11" containerName="registry-server" Dec 11 15:00:00 crc kubenswrapper[5050]: I1211 15:00:00.193142 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="103a09a8-5a53-4e16-95e4-93a9e5ca7d11" containerName="registry-server" Dec 11 15:00:00 crc kubenswrapper[5050]: I1211 15:00:00.193316 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="103a09a8-5a53-4e16-95e4-93a9e5ca7d11" containerName="registry-server" Dec 11 15:00:00 crc kubenswrapper[5050]: I1211 15:00:00.193861 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29424420-lmctp" Dec 11 15:00:00 crc kubenswrapper[5050]: I1211 15:00:00.197682 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 11 15:00:00 crc kubenswrapper[5050]: I1211 15:00:00.197925 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 11 15:00:00 crc kubenswrapper[5050]: I1211 15:00:00.204410 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29424420-lmctp"] Dec 11 15:00:00 crc kubenswrapper[5050]: I1211 15:00:00.314030 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tb7wg\" (UniqueName: \"kubernetes.io/projected/4e7cfb00-52ec-46c0-af09-c9f8a67d69f7-kube-api-access-tb7wg\") pod \"collect-profiles-29424420-lmctp\" (UID: \"4e7cfb00-52ec-46c0-af09-c9f8a67d69f7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424420-lmctp" Dec 11 15:00:00 crc kubenswrapper[5050]: I1211 15:00:00.314921 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4e7cfb00-52ec-46c0-af09-c9f8a67d69f7-secret-volume\") pod \"collect-profiles-29424420-lmctp\" (UID: \"4e7cfb00-52ec-46c0-af09-c9f8a67d69f7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424420-lmctp" Dec 11 15:00:00 crc kubenswrapper[5050]: I1211 15:00:00.315609 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4e7cfb00-52ec-46c0-af09-c9f8a67d69f7-config-volume\") pod \"collect-profiles-29424420-lmctp\" (UID: \"4e7cfb00-52ec-46c0-af09-c9f8a67d69f7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424420-lmctp" Dec 11 15:00:00 crc kubenswrapper[5050]: I1211 15:00:00.417797 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4e7cfb00-52ec-46c0-af09-c9f8a67d69f7-config-volume\") pod \"collect-profiles-29424420-lmctp\" (UID: \"4e7cfb00-52ec-46c0-af09-c9f8a67d69f7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424420-lmctp" Dec 11 15:00:00 crc kubenswrapper[5050]: I1211 15:00:00.418830 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4e7cfb00-52ec-46c0-af09-c9f8a67d69f7-config-volume\") pod \"collect-profiles-29424420-lmctp\" (UID: \"4e7cfb00-52ec-46c0-af09-c9f8a67d69f7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424420-lmctp" Dec 11 15:00:00 crc kubenswrapper[5050]: I1211 15:00:00.420358 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tb7wg\" (UniqueName: \"kubernetes.io/projected/4e7cfb00-52ec-46c0-af09-c9f8a67d69f7-kube-api-access-tb7wg\") pod \"collect-profiles-29424420-lmctp\" (UID: \"4e7cfb00-52ec-46c0-af09-c9f8a67d69f7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424420-lmctp" Dec 11 15:00:00 crc kubenswrapper[5050]: I1211 15:00:00.420574 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4e7cfb00-52ec-46c0-af09-c9f8a67d69f7-secret-volume\") pod 
\"collect-profiles-29424420-lmctp\" (UID: \"4e7cfb00-52ec-46c0-af09-c9f8a67d69f7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424420-lmctp" Dec 11 15:00:00 crc kubenswrapper[5050]: I1211 15:00:00.430625 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4e7cfb00-52ec-46c0-af09-c9f8a67d69f7-secret-volume\") pod \"collect-profiles-29424420-lmctp\" (UID: \"4e7cfb00-52ec-46c0-af09-c9f8a67d69f7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424420-lmctp" Dec 11 15:00:00 crc kubenswrapper[5050]: I1211 15:00:00.439707 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tb7wg\" (UniqueName: \"kubernetes.io/projected/4e7cfb00-52ec-46c0-af09-c9f8a67d69f7-kube-api-access-tb7wg\") pod \"collect-profiles-29424420-lmctp\" (UID: \"4e7cfb00-52ec-46c0-af09-c9f8a67d69f7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424420-lmctp" Dec 11 15:00:00 crc kubenswrapper[5050]: I1211 15:00:00.524566 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29424420-lmctp" Dec 11 15:00:00 crc kubenswrapper[5050]: I1211 15:00:00.761755 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29424420-lmctp"] Dec 11 15:00:01 crc kubenswrapper[5050]: I1211 15:00:01.055078 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29424420-lmctp" event={"ID":"4e7cfb00-52ec-46c0-af09-c9f8a67d69f7","Type":"ContainerStarted","Data":"3f39034e0cc12ca4d052108fa9c64b15358a201f93093de596d7110520e9635a"} Dec 11 15:00:01 crc kubenswrapper[5050]: I1211 15:00:01.055155 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29424420-lmctp" event={"ID":"4e7cfb00-52ec-46c0-af09-c9f8a67d69f7","Type":"ContainerStarted","Data":"5ca25d6bb8ab19e5dbb763078766ad4b1171c51d37bdf124d6169a30a7f392ee"} Dec 11 15:00:01 crc kubenswrapper[5050]: I1211 15:00:01.082500 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29424420-lmctp" podStartSLOduration=1.082476249 podStartE2EDuration="1.082476249s" podCreationTimestamp="2025-12-11 15:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 15:00:01.07530314 +0000 UTC m=+4291.919025736" watchObservedRunningTime="2025-12-11 15:00:01.082476249 +0000 UTC m=+4291.926198855" Dec 11 15:00:02 crc kubenswrapper[5050]: I1211 15:00:02.063281 5050 generic.go:334] "Generic (PLEG): container finished" podID="4e7cfb00-52ec-46c0-af09-c9f8a67d69f7" containerID="3f39034e0cc12ca4d052108fa9c64b15358a201f93093de596d7110520e9635a" exitCode=0 Dec 11 15:00:02 crc kubenswrapper[5050]: I1211 15:00:02.063397 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29424420-lmctp" event={"ID":"4e7cfb00-52ec-46c0-af09-c9f8a67d69f7","Type":"ContainerDied","Data":"3f39034e0cc12ca4d052108fa9c64b15358a201f93093de596d7110520e9635a"} Dec 11 15:00:03 crc kubenswrapper[5050]: I1211 15:00:03.330969 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29424420-lmctp" Dec 11 15:00:03 crc kubenswrapper[5050]: I1211 15:00:03.460204 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4e7cfb00-52ec-46c0-af09-c9f8a67d69f7-secret-volume\") pod \"4e7cfb00-52ec-46c0-af09-c9f8a67d69f7\" (UID: \"4e7cfb00-52ec-46c0-af09-c9f8a67d69f7\") " Dec 11 15:00:03 crc kubenswrapper[5050]: I1211 15:00:03.460297 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4e7cfb00-52ec-46c0-af09-c9f8a67d69f7-config-volume\") pod \"4e7cfb00-52ec-46c0-af09-c9f8a67d69f7\" (UID: \"4e7cfb00-52ec-46c0-af09-c9f8a67d69f7\") " Dec 11 15:00:03 crc kubenswrapper[5050]: I1211 15:00:03.460372 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tb7wg\" (UniqueName: \"kubernetes.io/projected/4e7cfb00-52ec-46c0-af09-c9f8a67d69f7-kube-api-access-tb7wg\") pod \"4e7cfb00-52ec-46c0-af09-c9f8a67d69f7\" (UID: \"4e7cfb00-52ec-46c0-af09-c9f8a67d69f7\") " Dec 11 15:00:03 crc kubenswrapper[5050]: I1211 15:00:03.461341 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4e7cfb00-52ec-46c0-af09-c9f8a67d69f7-config-volume" (OuterVolumeSpecName: "config-volume") pod "4e7cfb00-52ec-46c0-af09-c9f8a67d69f7" (UID: "4e7cfb00-52ec-46c0-af09-c9f8a67d69f7"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 15:00:03 crc kubenswrapper[5050]: I1211 15:00:03.465980 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e7cfb00-52ec-46c0-af09-c9f8a67d69f7-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "4e7cfb00-52ec-46c0-af09-c9f8a67d69f7" (UID: "4e7cfb00-52ec-46c0-af09-c9f8a67d69f7"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 15:00:03 crc kubenswrapper[5050]: I1211 15:00:03.466154 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e7cfb00-52ec-46c0-af09-c9f8a67d69f7-kube-api-access-tb7wg" (OuterVolumeSpecName: "kube-api-access-tb7wg") pod "4e7cfb00-52ec-46c0-af09-c9f8a67d69f7" (UID: "4e7cfb00-52ec-46c0-af09-c9f8a67d69f7"). InnerVolumeSpecName "kube-api-access-tb7wg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 15:00:03 crc kubenswrapper[5050]: I1211 15:00:03.562943 5050 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4e7cfb00-52ec-46c0-af09-c9f8a67d69f7-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 11 15:00:03 crc kubenswrapper[5050]: I1211 15:00:03.563723 5050 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4e7cfb00-52ec-46c0-af09-c9f8a67d69f7-config-volume\") on node \"crc\" DevicePath \"\"" Dec 11 15:00:03 crc kubenswrapper[5050]: I1211 15:00:03.563778 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tb7wg\" (UniqueName: \"kubernetes.io/projected/4e7cfb00-52ec-46c0-af09-c9f8a67d69f7-kube-api-access-tb7wg\") on node \"crc\" DevicePath \"\"" Dec 11 15:00:04 crc kubenswrapper[5050]: I1211 15:00:04.079968 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29424420-lmctp" event={"ID":"4e7cfb00-52ec-46c0-af09-c9f8a67d69f7","Type":"ContainerDied","Data":"5ca25d6bb8ab19e5dbb763078766ad4b1171c51d37bdf124d6169a30a7f392ee"} Dec 11 15:00:04 crc kubenswrapper[5050]: I1211 15:00:04.080042 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5ca25d6bb8ab19e5dbb763078766ad4b1171c51d37bdf124d6169a30a7f392ee" Dec 11 15:00:04 crc kubenswrapper[5050]: I1211 15:00:04.080328 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29424420-lmctp" Dec 11 15:00:04 crc kubenswrapper[5050]: I1211 15:00:04.417998 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29424375-w2t68"] Dec 11 15:00:04 crc kubenswrapper[5050]: I1211 15:00:04.423745 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29424375-w2t68"] Dec 11 15:00:05 crc kubenswrapper[5050]: I1211 15:00:05.560814 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7574660d-3967-453b-8cd4-6aa447aff652" path="/var/lib/kubelet/pods/7574660d-3967-453b-8cd4-6aa447aff652/volumes" Dec 11 15:00:10 crc kubenswrapper[5050]: I1211 15:00:10.797172 5050 patch_prober.go:28] interesting pod/machine-config-daemon-wcb2s container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 15:00:10 crc kubenswrapper[5050]: I1211 15:00:10.798214 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 15:00:40 crc kubenswrapper[5050]: I1211 15:00:40.796219 5050 patch_prober.go:28] interesting pod/machine-config-daemon-wcb2s container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 15:00:40 crc kubenswrapper[5050]: I1211 15:00:40.796741 5050 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 15:00:44 crc kubenswrapper[5050]: I1211 15:00:44.417112 5050 scope.go:117] "RemoveContainer" containerID="f69005345c1398c48053c76b41d03dd74e0ffaac52e6c09e7dc98a7000961900" Dec 11 15:01:10 crc kubenswrapper[5050]: I1211 15:01:10.797164 5050 patch_prober.go:28] interesting pod/machine-config-daemon-wcb2s container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 15:01:10 crc kubenswrapper[5050]: I1211 15:01:10.798165 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 15:01:10 crc kubenswrapper[5050]: I1211 15:01:10.798269 5050 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" Dec 11 15:01:10 crc kubenswrapper[5050]: I1211 15:01:10.799369 5050 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6ff8571bb4559fd374475c94385275b8db9d1ca3a36ff5857122f05b5cf16e65"} pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 11 15:01:10 crc kubenswrapper[5050]: I1211 15:01:10.799431 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" containerName="machine-config-daemon" containerID="cri-o://6ff8571bb4559fd374475c94385275b8db9d1ca3a36ff5857122f05b5cf16e65" gracePeriod=600 Dec 11 15:01:11 crc kubenswrapper[5050]: I1211 15:01:11.641387 5050 generic.go:334] "Generic (PLEG): container finished" podID="7e849b2e-7cd7-4e49-acd2-deab139c699a" containerID="6ff8571bb4559fd374475c94385275b8db9d1ca3a36ff5857122f05b5cf16e65" exitCode=0 Dec 11 15:01:11 crc kubenswrapper[5050]: I1211 15:01:11.641516 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" event={"ID":"7e849b2e-7cd7-4e49-acd2-deab139c699a","Type":"ContainerDied","Data":"6ff8571bb4559fd374475c94385275b8db9d1ca3a36ff5857122f05b5cf16e65"} Dec 11 15:01:11 crc kubenswrapper[5050]: I1211 15:01:11.642421 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" event={"ID":"7e849b2e-7cd7-4e49-acd2-deab139c699a","Type":"ContainerStarted","Data":"b585bea20b0bb52e58642ee9f3e33b5b182a66a544d35daafc0202a53b35a810"} Dec 11 15:01:11 crc kubenswrapper[5050]: I1211 15:01:11.642453 5050 scope.go:117] "RemoveContainer" containerID="592a05928c3cabea3545c1e1092cb9126fb731cd50a3674aaab04017d500886d" Dec 11 15:01:20 crc kubenswrapper[5050]: I1211 15:01:20.366906 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-dn9rq"] Dec 11 15:01:20 crc kubenswrapper[5050]: 
E1211 15:01:20.367901 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e7cfb00-52ec-46c0-af09-c9f8a67d69f7" containerName="collect-profiles" Dec 11 15:01:20 crc kubenswrapper[5050]: I1211 15:01:20.367918 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e7cfb00-52ec-46c0-af09-c9f8a67d69f7" containerName="collect-profiles" Dec 11 15:01:20 crc kubenswrapper[5050]: I1211 15:01:20.368135 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e7cfb00-52ec-46c0-af09-c9f8a67d69f7" containerName="collect-profiles" Dec 11 15:01:20 crc kubenswrapper[5050]: I1211 15:01:20.369398 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dn9rq" Dec 11 15:01:20 crc kubenswrapper[5050]: I1211 15:01:20.375934 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dn9rq"] Dec 11 15:01:20 crc kubenswrapper[5050]: I1211 15:01:20.423109 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pl8vc\" (UniqueName: \"kubernetes.io/projected/e0a93f42-8f95-4e25-880f-f4cdc85825e1-kube-api-access-pl8vc\") pod \"community-operators-dn9rq\" (UID: \"e0a93f42-8f95-4e25-880f-f4cdc85825e1\") " pod="openshift-marketplace/community-operators-dn9rq" Dec 11 15:01:20 crc kubenswrapper[5050]: I1211 15:01:20.423467 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e0a93f42-8f95-4e25-880f-f4cdc85825e1-catalog-content\") pod \"community-operators-dn9rq\" (UID: \"e0a93f42-8f95-4e25-880f-f4cdc85825e1\") " pod="openshift-marketplace/community-operators-dn9rq" Dec 11 15:01:20 crc kubenswrapper[5050]: I1211 15:01:20.423511 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e0a93f42-8f95-4e25-880f-f4cdc85825e1-utilities\") pod \"community-operators-dn9rq\" (UID: \"e0a93f42-8f95-4e25-880f-f4cdc85825e1\") " pod="openshift-marketplace/community-operators-dn9rq" Dec 11 15:01:20 crc kubenswrapper[5050]: I1211 15:01:20.524536 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e0a93f42-8f95-4e25-880f-f4cdc85825e1-utilities\") pod \"community-operators-dn9rq\" (UID: \"e0a93f42-8f95-4e25-880f-f4cdc85825e1\") " pod="openshift-marketplace/community-operators-dn9rq" Dec 11 15:01:20 crc kubenswrapper[5050]: I1211 15:01:20.524898 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pl8vc\" (UniqueName: \"kubernetes.io/projected/e0a93f42-8f95-4e25-880f-f4cdc85825e1-kube-api-access-pl8vc\") pod \"community-operators-dn9rq\" (UID: \"e0a93f42-8f95-4e25-880f-f4cdc85825e1\") " pod="openshift-marketplace/community-operators-dn9rq" Dec 11 15:01:20 crc kubenswrapper[5050]: I1211 15:01:20.525132 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e0a93f42-8f95-4e25-880f-f4cdc85825e1-catalog-content\") pod \"community-operators-dn9rq\" (UID: \"e0a93f42-8f95-4e25-880f-f4cdc85825e1\") " pod="openshift-marketplace/community-operators-dn9rq" Dec 11 15:01:20 crc kubenswrapper[5050]: I1211 15:01:20.525638 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/e0a93f42-8f95-4e25-880f-f4cdc85825e1-utilities\") pod \"community-operators-dn9rq\" (UID: \"e0a93f42-8f95-4e25-880f-f4cdc85825e1\") " pod="openshift-marketplace/community-operators-dn9rq" Dec 11 15:01:20 crc kubenswrapper[5050]: I1211 15:01:20.525778 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e0a93f42-8f95-4e25-880f-f4cdc85825e1-catalog-content\") pod \"community-operators-dn9rq\" (UID: \"e0a93f42-8f95-4e25-880f-f4cdc85825e1\") " pod="openshift-marketplace/community-operators-dn9rq" Dec 11 15:01:20 crc kubenswrapper[5050]: I1211 15:01:20.545041 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pl8vc\" (UniqueName: \"kubernetes.io/projected/e0a93f42-8f95-4e25-880f-f4cdc85825e1-kube-api-access-pl8vc\") pod \"community-operators-dn9rq\" (UID: \"e0a93f42-8f95-4e25-880f-f4cdc85825e1\") " pod="openshift-marketplace/community-operators-dn9rq" Dec 11 15:01:20 crc kubenswrapper[5050]: I1211 15:01:20.696168 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dn9rq" Dec 11 15:01:21 crc kubenswrapper[5050]: I1211 15:01:21.259240 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dn9rq"] Dec 11 15:01:21 crc kubenswrapper[5050]: W1211 15:01:21.266899 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode0a93f42_8f95_4e25_880f_f4cdc85825e1.slice/crio-405792e8d46393643590c8f3cf2366c9c6ec25302b07fd44df01e76bad72da8b WatchSource:0}: Error finding container 405792e8d46393643590c8f3cf2366c9c6ec25302b07fd44df01e76bad72da8b: Status 404 returned error can't find the container with id 405792e8d46393643590c8f3cf2366c9c6ec25302b07fd44df01e76bad72da8b Dec 11 15:01:21 crc kubenswrapper[5050]: I1211 15:01:21.741573 5050 generic.go:334] "Generic (PLEG): container finished" podID="e0a93f42-8f95-4e25-880f-f4cdc85825e1" containerID="1b01d9af47427c51c775d8a39bb9821a32e72f2f13bab8299c5fff4793633da5" exitCode=0 Dec 11 15:01:21 crc kubenswrapper[5050]: I1211 15:01:21.741679 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dn9rq" event={"ID":"e0a93f42-8f95-4e25-880f-f4cdc85825e1","Type":"ContainerDied","Data":"1b01d9af47427c51c775d8a39bb9821a32e72f2f13bab8299c5fff4793633da5"} Dec 11 15:01:21 crc kubenswrapper[5050]: I1211 15:01:21.742071 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dn9rq" event={"ID":"e0a93f42-8f95-4e25-880f-f4cdc85825e1","Type":"ContainerStarted","Data":"405792e8d46393643590c8f3cf2366c9c6ec25302b07fd44df01e76bad72da8b"} Dec 11 15:01:21 crc kubenswrapper[5050]: I1211 15:01:21.743783 5050 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 11 15:01:22 crc kubenswrapper[5050]: I1211 15:01:22.752049 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dn9rq" event={"ID":"e0a93f42-8f95-4e25-880f-f4cdc85825e1","Type":"ContainerStarted","Data":"a2c081057b538314fe2920fa68647ec8dc74bba980712db343294eaca5704095"} Dec 11 15:01:23 crc kubenswrapper[5050]: I1211 15:01:23.767393 5050 generic.go:334] "Generic (PLEG): container finished" podID="e0a93f42-8f95-4e25-880f-f4cdc85825e1" containerID="a2c081057b538314fe2920fa68647ec8dc74bba980712db343294eaca5704095" exitCode=0 
Dec 11 15:01:23 crc kubenswrapper[5050]: I1211 15:01:23.767433 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dn9rq" event={"ID":"e0a93f42-8f95-4e25-880f-f4cdc85825e1","Type":"ContainerDied","Data":"a2c081057b538314fe2920fa68647ec8dc74bba980712db343294eaca5704095"} Dec 11 15:01:25 crc kubenswrapper[5050]: I1211 15:01:25.786086 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dn9rq" event={"ID":"e0a93f42-8f95-4e25-880f-f4cdc85825e1","Type":"ContainerStarted","Data":"c766a535105d9a9675665f2a72b44b2314486955a1cbc404a94de4b79f52dead"} Dec 11 15:01:25 crc kubenswrapper[5050]: I1211 15:01:25.812752 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-dn9rq" podStartSLOduration=3.321985109 podStartE2EDuration="5.812726952s" podCreationTimestamp="2025-12-11 15:01:20 +0000 UTC" firstStartedPulling="2025-12-11 15:01:21.74351862 +0000 UTC m=+4372.587241206" lastFinishedPulling="2025-12-11 15:01:24.234260463 +0000 UTC m=+4375.077983049" observedRunningTime="2025-12-11 15:01:25.807698119 +0000 UTC m=+4376.651420715" watchObservedRunningTime="2025-12-11 15:01:25.812726952 +0000 UTC m=+4376.656449548" Dec 11 15:01:30 crc kubenswrapper[5050]: I1211 15:01:30.697031 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-dn9rq" Dec 11 15:01:30 crc kubenswrapper[5050]: I1211 15:01:30.697787 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-dn9rq" Dec 11 15:01:30 crc kubenswrapper[5050]: I1211 15:01:30.755416 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-dn9rq" Dec 11 15:01:30 crc kubenswrapper[5050]: I1211 15:01:30.877389 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-dn9rq" Dec 11 15:01:31 crc kubenswrapper[5050]: I1211 15:01:31.963594 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dn9rq"] Dec 11 15:01:32 crc kubenswrapper[5050]: I1211 15:01:32.847969 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-dn9rq" podUID="e0a93f42-8f95-4e25-880f-f4cdc85825e1" containerName="registry-server" containerID="cri-o://c766a535105d9a9675665f2a72b44b2314486955a1cbc404a94de4b79f52dead" gracePeriod=2 Dec 11 15:01:33 crc kubenswrapper[5050]: I1211 15:01:33.350977 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-dn9rq" Dec 11 15:01:33 crc kubenswrapper[5050]: I1211 15:01:33.450178 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pl8vc\" (UniqueName: \"kubernetes.io/projected/e0a93f42-8f95-4e25-880f-f4cdc85825e1-kube-api-access-pl8vc\") pod \"e0a93f42-8f95-4e25-880f-f4cdc85825e1\" (UID: \"e0a93f42-8f95-4e25-880f-f4cdc85825e1\") " Dec 11 15:01:33 crc kubenswrapper[5050]: I1211 15:01:33.450305 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e0a93f42-8f95-4e25-880f-f4cdc85825e1-catalog-content\") pod \"e0a93f42-8f95-4e25-880f-f4cdc85825e1\" (UID: \"e0a93f42-8f95-4e25-880f-f4cdc85825e1\") " Dec 11 15:01:33 crc kubenswrapper[5050]: I1211 15:01:33.450398 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e0a93f42-8f95-4e25-880f-f4cdc85825e1-utilities\") pod \"e0a93f42-8f95-4e25-880f-f4cdc85825e1\" (UID: \"e0a93f42-8f95-4e25-880f-f4cdc85825e1\") " Dec 11 15:01:33 crc kubenswrapper[5050]: I1211 15:01:33.451747 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e0a93f42-8f95-4e25-880f-f4cdc85825e1-utilities" (OuterVolumeSpecName: "utilities") pod "e0a93f42-8f95-4e25-880f-f4cdc85825e1" (UID: "e0a93f42-8f95-4e25-880f-f4cdc85825e1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 15:01:33 crc kubenswrapper[5050]: I1211 15:01:33.457375 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0a93f42-8f95-4e25-880f-f4cdc85825e1-kube-api-access-pl8vc" (OuterVolumeSpecName: "kube-api-access-pl8vc") pod "e0a93f42-8f95-4e25-880f-f4cdc85825e1" (UID: "e0a93f42-8f95-4e25-880f-f4cdc85825e1"). InnerVolumeSpecName "kube-api-access-pl8vc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 15:01:33 crc kubenswrapper[5050]: I1211 15:01:33.520436 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e0a93f42-8f95-4e25-880f-f4cdc85825e1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e0a93f42-8f95-4e25-880f-f4cdc85825e1" (UID: "e0a93f42-8f95-4e25-880f-f4cdc85825e1"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 15:01:33 crc kubenswrapper[5050]: I1211 15:01:33.552955 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pl8vc\" (UniqueName: \"kubernetes.io/projected/e0a93f42-8f95-4e25-880f-f4cdc85825e1-kube-api-access-pl8vc\") on node \"crc\" DevicePath \"\"" Dec 11 15:01:33 crc kubenswrapper[5050]: I1211 15:01:33.553072 5050 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e0a93f42-8f95-4e25-880f-f4cdc85825e1-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 15:01:33 crc kubenswrapper[5050]: I1211 15:01:33.553086 5050 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e0a93f42-8f95-4e25-880f-f4cdc85825e1-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 15:01:33 crc kubenswrapper[5050]: I1211 15:01:33.861874 5050 generic.go:334] "Generic (PLEG): container finished" podID="e0a93f42-8f95-4e25-880f-f4cdc85825e1" containerID="c766a535105d9a9675665f2a72b44b2314486955a1cbc404a94de4b79f52dead" exitCode=0 Dec 11 15:01:33 crc kubenswrapper[5050]: I1211 15:01:33.861971 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dn9rq" event={"ID":"e0a93f42-8f95-4e25-880f-f4cdc85825e1","Type":"ContainerDied","Data":"c766a535105d9a9675665f2a72b44b2314486955a1cbc404a94de4b79f52dead"} Dec 11 15:01:33 crc kubenswrapper[5050]: I1211 15:01:33.862079 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dn9rq" event={"ID":"e0a93f42-8f95-4e25-880f-f4cdc85825e1","Type":"ContainerDied","Data":"405792e8d46393643590c8f3cf2366c9c6ec25302b07fd44df01e76bad72da8b"} Dec 11 15:01:33 crc kubenswrapper[5050]: I1211 15:01:33.862093 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-dn9rq" Dec 11 15:01:33 crc kubenswrapper[5050]: I1211 15:01:33.862120 5050 scope.go:117] "RemoveContainer" containerID="c766a535105d9a9675665f2a72b44b2314486955a1cbc404a94de4b79f52dead" Dec 11 15:01:33 crc kubenswrapper[5050]: I1211 15:01:33.904384 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dn9rq"] Dec 11 15:01:33 crc kubenswrapper[5050]: I1211 15:01:33.911909 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-dn9rq"] Dec 11 15:01:33 crc kubenswrapper[5050]: I1211 15:01:33.915433 5050 scope.go:117] "RemoveContainer" containerID="a2c081057b538314fe2920fa68647ec8dc74bba980712db343294eaca5704095" Dec 11 15:01:33 crc kubenswrapper[5050]: I1211 15:01:33.946773 5050 scope.go:117] "RemoveContainer" containerID="1b01d9af47427c51c775d8a39bb9821a32e72f2f13bab8299c5fff4793633da5" Dec 11 15:01:34 crc kubenswrapper[5050]: I1211 15:01:33.999611 5050 scope.go:117] "RemoveContainer" containerID="c766a535105d9a9675665f2a72b44b2314486955a1cbc404a94de4b79f52dead" Dec 11 15:01:34 crc kubenswrapper[5050]: E1211 15:01:34.000434 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c766a535105d9a9675665f2a72b44b2314486955a1cbc404a94de4b79f52dead\": container with ID starting with c766a535105d9a9675665f2a72b44b2314486955a1cbc404a94de4b79f52dead not found: ID does not exist" containerID="c766a535105d9a9675665f2a72b44b2314486955a1cbc404a94de4b79f52dead" Dec 11 15:01:34 crc kubenswrapper[5050]: I1211 15:01:34.000500 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c766a535105d9a9675665f2a72b44b2314486955a1cbc404a94de4b79f52dead"} err="failed to get container status \"c766a535105d9a9675665f2a72b44b2314486955a1cbc404a94de4b79f52dead\": rpc error: code = NotFound desc = could not find container \"c766a535105d9a9675665f2a72b44b2314486955a1cbc404a94de4b79f52dead\": container with ID starting with c766a535105d9a9675665f2a72b44b2314486955a1cbc404a94de4b79f52dead not found: ID does not exist" Dec 11 15:01:34 crc kubenswrapper[5050]: I1211 15:01:34.000542 5050 scope.go:117] "RemoveContainer" containerID="a2c081057b538314fe2920fa68647ec8dc74bba980712db343294eaca5704095" Dec 11 15:01:34 crc kubenswrapper[5050]: E1211 15:01:34.001492 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a2c081057b538314fe2920fa68647ec8dc74bba980712db343294eaca5704095\": container with ID starting with a2c081057b538314fe2920fa68647ec8dc74bba980712db343294eaca5704095 not found: ID does not exist" containerID="a2c081057b538314fe2920fa68647ec8dc74bba980712db343294eaca5704095" Dec 11 15:01:34 crc kubenswrapper[5050]: I1211 15:01:34.001743 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a2c081057b538314fe2920fa68647ec8dc74bba980712db343294eaca5704095"} err="failed to get container status \"a2c081057b538314fe2920fa68647ec8dc74bba980712db343294eaca5704095\": rpc error: code = NotFound desc = could not find container \"a2c081057b538314fe2920fa68647ec8dc74bba980712db343294eaca5704095\": container with ID starting with a2c081057b538314fe2920fa68647ec8dc74bba980712db343294eaca5704095 not found: ID does not exist" Dec 11 15:01:34 crc kubenswrapper[5050]: I1211 15:01:34.001767 5050 scope.go:117] "RemoveContainer" 
containerID="1b01d9af47427c51c775d8a39bb9821a32e72f2f13bab8299c5fff4793633da5" Dec 11 15:01:34 crc kubenswrapper[5050]: E1211 15:01:34.002488 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1b01d9af47427c51c775d8a39bb9821a32e72f2f13bab8299c5fff4793633da5\": container with ID starting with 1b01d9af47427c51c775d8a39bb9821a32e72f2f13bab8299c5fff4793633da5 not found: ID does not exist" containerID="1b01d9af47427c51c775d8a39bb9821a32e72f2f13bab8299c5fff4793633da5" Dec 11 15:01:34 crc kubenswrapper[5050]: I1211 15:01:34.002526 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b01d9af47427c51c775d8a39bb9821a32e72f2f13bab8299c5fff4793633da5"} err="failed to get container status \"1b01d9af47427c51c775d8a39bb9821a32e72f2f13bab8299c5fff4793633da5\": rpc error: code = NotFound desc = could not find container \"1b01d9af47427c51c775d8a39bb9821a32e72f2f13bab8299c5fff4793633da5\": container with ID starting with 1b01d9af47427c51c775d8a39bb9821a32e72f2f13bab8299c5fff4793633da5 not found: ID does not exist" Dec 11 15:01:35 crc kubenswrapper[5050]: I1211 15:01:35.555007 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e0a93f42-8f95-4e25-880f-f4cdc85825e1" path="/var/lib/kubelet/pods/e0a93f42-8f95-4e25-880f-f4cdc85825e1/volumes" Dec 11 15:03:12 crc kubenswrapper[5050]: I1211 15:03:12.445506 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["crc-storage/crc-storage-crc-2hvjg"] Dec 11 15:03:12 crc kubenswrapper[5050]: I1211 15:03:12.455259 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["crc-storage/crc-storage-crc-2hvjg"] Dec 11 15:03:12 crc kubenswrapper[5050]: I1211 15:03:12.586109 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["crc-storage/crc-storage-crc-l6cbz"] Dec 11 15:03:12 crc kubenswrapper[5050]: E1211 15:03:12.586494 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0a93f42-8f95-4e25-880f-f4cdc85825e1" containerName="extract-utilities" Dec 11 15:03:12 crc kubenswrapper[5050]: I1211 15:03:12.586514 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0a93f42-8f95-4e25-880f-f4cdc85825e1" containerName="extract-utilities" Dec 11 15:03:12 crc kubenswrapper[5050]: E1211 15:03:12.586555 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0a93f42-8f95-4e25-880f-f4cdc85825e1" containerName="registry-server" Dec 11 15:03:12 crc kubenswrapper[5050]: I1211 15:03:12.586563 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0a93f42-8f95-4e25-880f-f4cdc85825e1" containerName="registry-server" Dec 11 15:03:12 crc kubenswrapper[5050]: E1211 15:03:12.586577 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0a93f42-8f95-4e25-880f-f4cdc85825e1" containerName="extract-content" Dec 11 15:03:12 crc kubenswrapper[5050]: I1211 15:03:12.586587 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0a93f42-8f95-4e25-880f-f4cdc85825e1" containerName="extract-content" Dec 11 15:03:12 crc kubenswrapper[5050]: I1211 15:03:12.586750 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0a93f42-8f95-4e25-880f-f4cdc85825e1" containerName="registry-server" Dec 11 15:03:12 crc kubenswrapper[5050]: I1211 15:03:12.587391 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-l6cbz" Dec 11 15:03:12 crc kubenswrapper[5050]: I1211 15:03:12.591288 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"crc-storage" Dec 11 15:03:12 crc kubenswrapper[5050]: I1211 15:03:12.598376 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"openshift-service-ca.crt" Dec 11 15:03:12 crc kubenswrapper[5050]: I1211 15:03:12.598749 5050 reflector.go:368] Caches populated for *v1.Secret from object-"crc-storage"/"crc-storage-dockercfg-5d95f" Dec 11 15:03:12 crc kubenswrapper[5050]: I1211 15:03:12.602403 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"kube-root-ca.crt" Dec 11 15:03:12 crc kubenswrapper[5050]: I1211 15:03:12.604880 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-l6cbz"] Dec 11 15:03:12 crc kubenswrapper[5050]: I1211 15:03:12.696360 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/40ba7c06-f5b2-423f-8c53-93c3f57ab91e-node-mnt\") pod \"crc-storage-crc-l6cbz\" (UID: \"40ba7c06-f5b2-423f-8c53-93c3f57ab91e\") " pod="crc-storage/crc-storage-crc-l6cbz" Dec 11 15:03:12 crc kubenswrapper[5050]: I1211 15:03:12.696446 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46v9p\" (UniqueName: \"kubernetes.io/projected/40ba7c06-f5b2-423f-8c53-93c3f57ab91e-kube-api-access-46v9p\") pod \"crc-storage-crc-l6cbz\" (UID: \"40ba7c06-f5b2-423f-8c53-93c3f57ab91e\") " pod="crc-storage/crc-storage-crc-l6cbz" Dec 11 15:03:12 crc kubenswrapper[5050]: I1211 15:03:12.696537 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/40ba7c06-f5b2-423f-8c53-93c3f57ab91e-crc-storage\") pod \"crc-storage-crc-l6cbz\" (UID: \"40ba7c06-f5b2-423f-8c53-93c3f57ab91e\") " pod="crc-storage/crc-storage-crc-l6cbz" Dec 11 15:03:12 crc kubenswrapper[5050]: I1211 15:03:12.797660 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/40ba7c06-f5b2-423f-8c53-93c3f57ab91e-crc-storage\") pod \"crc-storage-crc-l6cbz\" (UID: \"40ba7c06-f5b2-423f-8c53-93c3f57ab91e\") " pod="crc-storage/crc-storage-crc-l6cbz" Dec 11 15:03:12 crc kubenswrapper[5050]: I1211 15:03:12.797782 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/40ba7c06-f5b2-423f-8c53-93c3f57ab91e-node-mnt\") pod \"crc-storage-crc-l6cbz\" (UID: \"40ba7c06-f5b2-423f-8c53-93c3f57ab91e\") " pod="crc-storage/crc-storage-crc-l6cbz" Dec 11 15:03:12 crc kubenswrapper[5050]: I1211 15:03:12.797838 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-46v9p\" (UniqueName: \"kubernetes.io/projected/40ba7c06-f5b2-423f-8c53-93c3f57ab91e-kube-api-access-46v9p\") pod \"crc-storage-crc-l6cbz\" (UID: \"40ba7c06-f5b2-423f-8c53-93c3f57ab91e\") " pod="crc-storage/crc-storage-crc-l6cbz" Dec 11 15:03:12 crc kubenswrapper[5050]: I1211 15:03:12.798251 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/40ba7c06-f5b2-423f-8c53-93c3f57ab91e-node-mnt\") pod \"crc-storage-crc-l6cbz\" (UID: \"40ba7c06-f5b2-423f-8c53-93c3f57ab91e\") " 
pod="crc-storage/crc-storage-crc-l6cbz" Dec 11 15:03:12 crc kubenswrapper[5050]: I1211 15:03:12.798514 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/40ba7c06-f5b2-423f-8c53-93c3f57ab91e-crc-storage\") pod \"crc-storage-crc-l6cbz\" (UID: \"40ba7c06-f5b2-423f-8c53-93c3f57ab91e\") " pod="crc-storage/crc-storage-crc-l6cbz" Dec 11 15:03:12 crc kubenswrapper[5050]: I1211 15:03:12.824980 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-46v9p\" (UniqueName: \"kubernetes.io/projected/40ba7c06-f5b2-423f-8c53-93c3f57ab91e-kube-api-access-46v9p\") pod \"crc-storage-crc-l6cbz\" (UID: \"40ba7c06-f5b2-423f-8c53-93c3f57ab91e\") " pod="crc-storage/crc-storage-crc-l6cbz" Dec 11 15:03:12 crc kubenswrapper[5050]: I1211 15:03:12.939433 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-l6cbz" Dec 11 15:03:13 crc kubenswrapper[5050]: I1211 15:03:13.434652 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-l6cbz"] Dec 11 15:03:13 crc kubenswrapper[5050]: I1211 15:03:13.567771 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="805a0aa2-b76d-42f2-8b65-8ffcdd30e32d" path="/var/lib/kubelet/pods/805a0aa2-b76d-42f2-8b65-8ffcdd30e32d/volumes" Dec 11 15:03:13 crc kubenswrapper[5050]: I1211 15:03:13.742036 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-l6cbz" event={"ID":"40ba7c06-f5b2-423f-8c53-93c3f57ab91e","Type":"ContainerStarted","Data":"79ea602a68b0209a490a7fe88d1d0a6614ceefad6e4a64fbb49a395a8cf73e9d"} Dec 11 15:03:15 crc kubenswrapper[5050]: I1211 15:03:15.763176 5050 generic.go:334] "Generic (PLEG): container finished" podID="40ba7c06-f5b2-423f-8c53-93c3f57ab91e" containerID="48daca79bfdb97d1da96c5f43c1776e6c9bfb0a29aebfb5c97419322f8ada43f" exitCode=0 Dec 11 15:03:15 crc kubenswrapper[5050]: I1211 15:03:15.763295 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-l6cbz" event={"ID":"40ba7c06-f5b2-423f-8c53-93c3f57ab91e","Type":"ContainerDied","Data":"48daca79bfdb97d1da96c5f43c1776e6c9bfb0a29aebfb5c97419322f8ada43f"} Dec 11 15:03:17 crc kubenswrapper[5050]: I1211 15:03:17.069652 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-ln8rg"] Dec 11 15:03:17 crc kubenswrapper[5050]: I1211 15:03:17.071674 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ln8rg" Dec 11 15:03:17 crc kubenswrapper[5050]: I1211 15:03:17.087450 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ln8rg"] Dec 11 15:03:17 crc kubenswrapper[5050]: I1211 15:03:17.239286 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-l6cbz" Dec 11 15:03:17 crc kubenswrapper[5050]: I1211 15:03:17.274303 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c15290ff-7be2-41cb-b846-0cae120af188-utilities\") pod \"redhat-operators-ln8rg\" (UID: \"c15290ff-7be2-41cb-b846-0cae120af188\") " pod="openshift-marketplace/redhat-operators-ln8rg" Dec 11 15:03:17 crc kubenswrapper[5050]: I1211 15:03:17.274355 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c15290ff-7be2-41cb-b846-0cae120af188-catalog-content\") pod \"redhat-operators-ln8rg\" (UID: \"c15290ff-7be2-41cb-b846-0cae120af188\") " pod="openshift-marketplace/redhat-operators-ln8rg" Dec 11 15:03:17 crc kubenswrapper[5050]: I1211 15:03:17.274401 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vtjrl\" (UniqueName: \"kubernetes.io/projected/c15290ff-7be2-41cb-b846-0cae120af188-kube-api-access-vtjrl\") pod \"redhat-operators-ln8rg\" (UID: \"c15290ff-7be2-41cb-b846-0cae120af188\") " pod="openshift-marketplace/redhat-operators-ln8rg" Dec 11 15:03:17 crc kubenswrapper[5050]: I1211 15:03:17.375235 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/40ba7c06-f5b2-423f-8c53-93c3f57ab91e-node-mnt\") pod \"40ba7c06-f5b2-423f-8c53-93c3f57ab91e\" (UID: \"40ba7c06-f5b2-423f-8c53-93c3f57ab91e\") " Dec 11 15:03:17 crc kubenswrapper[5050]: I1211 15:03:17.375329 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/40ba7c06-f5b2-423f-8c53-93c3f57ab91e-crc-storage\") pod \"40ba7c06-f5b2-423f-8c53-93c3f57ab91e\" (UID: \"40ba7c06-f5b2-423f-8c53-93c3f57ab91e\") " Dec 11 15:03:17 crc kubenswrapper[5050]: I1211 15:03:17.375345 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/40ba7c06-f5b2-423f-8c53-93c3f57ab91e-node-mnt" (OuterVolumeSpecName: "node-mnt") pod "40ba7c06-f5b2-423f-8c53-93c3f57ab91e" (UID: "40ba7c06-f5b2-423f-8c53-93c3f57ab91e"). InnerVolumeSpecName "node-mnt". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 15:03:17 crc kubenswrapper[5050]: I1211 15:03:17.375438 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-46v9p\" (UniqueName: \"kubernetes.io/projected/40ba7c06-f5b2-423f-8c53-93c3f57ab91e-kube-api-access-46v9p\") pod \"40ba7c06-f5b2-423f-8c53-93c3f57ab91e\" (UID: \"40ba7c06-f5b2-423f-8c53-93c3f57ab91e\") " Dec 11 15:03:17 crc kubenswrapper[5050]: I1211 15:03:17.375745 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c15290ff-7be2-41cb-b846-0cae120af188-utilities\") pod \"redhat-operators-ln8rg\" (UID: \"c15290ff-7be2-41cb-b846-0cae120af188\") " pod="openshift-marketplace/redhat-operators-ln8rg" Dec 11 15:03:17 crc kubenswrapper[5050]: I1211 15:03:17.375792 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c15290ff-7be2-41cb-b846-0cae120af188-catalog-content\") pod \"redhat-operators-ln8rg\" (UID: \"c15290ff-7be2-41cb-b846-0cae120af188\") " pod="openshift-marketplace/redhat-operators-ln8rg" Dec 11 15:03:17 crc kubenswrapper[5050]: I1211 15:03:17.375850 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vtjrl\" (UniqueName: \"kubernetes.io/projected/c15290ff-7be2-41cb-b846-0cae120af188-kube-api-access-vtjrl\") pod \"redhat-operators-ln8rg\" (UID: \"c15290ff-7be2-41cb-b846-0cae120af188\") " pod="openshift-marketplace/redhat-operators-ln8rg" Dec 11 15:03:17 crc kubenswrapper[5050]: I1211 15:03:17.375992 5050 reconciler_common.go:293] "Volume detached for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/40ba7c06-f5b2-423f-8c53-93c3f57ab91e-node-mnt\") on node \"crc\" DevicePath \"\"" Dec 11 15:03:17 crc kubenswrapper[5050]: I1211 15:03:17.376297 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c15290ff-7be2-41cb-b846-0cae120af188-catalog-content\") pod \"redhat-operators-ln8rg\" (UID: \"c15290ff-7be2-41cb-b846-0cae120af188\") " pod="openshift-marketplace/redhat-operators-ln8rg" Dec 11 15:03:17 crc kubenswrapper[5050]: I1211 15:03:17.376613 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c15290ff-7be2-41cb-b846-0cae120af188-utilities\") pod \"redhat-operators-ln8rg\" (UID: \"c15290ff-7be2-41cb-b846-0cae120af188\") " pod="openshift-marketplace/redhat-operators-ln8rg" Dec 11 15:03:17 crc kubenswrapper[5050]: I1211 15:03:17.380321 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40ba7c06-f5b2-423f-8c53-93c3f57ab91e-kube-api-access-46v9p" (OuterVolumeSpecName: "kube-api-access-46v9p") pod "40ba7c06-f5b2-423f-8c53-93c3f57ab91e" (UID: "40ba7c06-f5b2-423f-8c53-93c3f57ab91e"). InnerVolumeSpecName "kube-api-access-46v9p". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 15:03:17 crc kubenswrapper[5050]: I1211 15:03:17.395251 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/40ba7c06-f5b2-423f-8c53-93c3f57ab91e-crc-storage" (OuterVolumeSpecName: "crc-storage") pod "40ba7c06-f5b2-423f-8c53-93c3f57ab91e" (UID: "40ba7c06-f5b2-423f-8c53-93c3f57ab91e"). InnerVolumeSpecName "crc-storage". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 15:03:17 crc kubenswrapper[5050]: I1211 15:03:17.395695 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vtjrl\" (UniqueName: \"kubernetes.io/projected/c15290ff-7be2-41cb-b846-0cae120af188-kube-api-access-vtjrl\") pod \"redhat-operators-ln8rg\" (UID: \"c15290ff-7be2-41cb-b846-0cae120af188\") " pod="openshift-marketplace/redhat-operators-ln8rg" Dec 11 15:03:17 crc kubenswrapper[5050]: I1211 15:03:17.408338 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ln8rg" Dec 11 15:03:17 crc kubenswrapper[5050]: I1211 15:03:17.476834 5050 reconciler_common.go:293] "Volume detached for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/40ba7c06-f5b2-423f-8c53-93c3f57ab91e-crc-storage\") on node \"crc\" DevicePath \"\"" Dec 11 15:03:17 crc kubenswrapper[5050]: I1211 15:03:17.476879 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-46v9p\" (UniqueName: \"kubernetes.io/projected/40ba7c06-f5b2-423f-8c53-93c3f57ab91e-kube-api-access-46v9p\") on node \"crc\" DevicePath \"\"" Dec 11 15:03:17 crc kubenswrapper[5050]: I1211 15:03:17.712364 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ln8rg"] Dec 11 15:03:17 crc kubenswrapper[5050]: I1211 15:03:17.781180 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ln8rg" event={"ID":"c15290ff-7be2-41cb-b846-0cae120af188","Type":"ContainerStarted","Data":"1823b7420de00cc75315baede207bdc0f51531a9b2261c2ef2f6467ada87235a"} Dec 11 15:03:17 crc kubenswrapper[5050]: I1211 15:03:17.783865 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-l6cbz" event={"ID":"40ba7c06-f5b2-423f-8c53-93c3f57ab91e","Type":"ContainerDied","Data":"79ea602a68b0209a490a7fe88d1d0a6614ceefad6e4a64fbb49a395a8cf73e9d"} Dec 11 15:03:17 crc kubenswrapper[5050]: I1211 15:03:17.784086 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="79ea602a68b0209a490a7fe88d1d0a6614ceefad6e4a64fbb49a395a8cf73e9d" Dec 11 15:03:17 crc kubenswrapper[5050]: I1211 15:03:17.783946 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-l6cbz" Dec 11 15:03:18 crc kubenswrapper[5050]: I1211 15:03:18.791004 5050 generic.go:334] "Generic (PLEG): container finished" podID="c15290ff-7be2-41cb-b846-0cae120af188" containerID="a08c22825b8f0cedb3304f7fd99f222d3e2c703f9e32d4c512649b55f7972708" exitCode=0 Dec 11 15:03:18 crc kubenswrapper[5050]: I1211 15:03:18.791280 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ln8rg" event={"ID":"c15290ff-7be2-41cb-b846-0cae120af188","Type":"ContainerDied","Data":"a08c22825b8f0cedb3304f7fd99f222d3e2c703f9e32d4c512649b55f7972708"} Dec 11 15:03:19 crc kubenswrapper[5050]: I1211 15:03:19.834168 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["crc-storage/crc-storage-crc-l6cbz"] Dec 11 15:03:19 crc kubenswrapper[5050]: I1211 15:03:19.843265 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["crc-storage/crc-storage-crc-l6cbz"] Dec 11 15:03:19 crc kubenswrapper[5050]: I1211 15:03:19.986399 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["crc-storage/crc-storage-crc-l9qhm"] Dec 11 15:03:19 crc kubenswrapper[5050]: E1211 15:03:19.986733 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40ba7c06-f5b2-423f-8c53-93c3f57ab91e" containerName="storage" Dec 11 15:03:19 crc kubenswrapper[5050]: I1211 15:03:19.986750 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="40ba7c06-f5b2-423f-8c53-93c3f57ab91e" containerName="storage" Dec 11 15:03:19 crc kubenswrapper[5050]: I1211 15:03:19.986900 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="40ba7c06-f5b2-423f-8c53-93c3f57ab91e" containerName="storage" Dec 11 15:03:19 crc kubenswrapper[5050]: I1211 15:03:19.987471 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-l9qhm" Dec 11 15:03:19 crc kubenswrapper[5050]: I1211 15:03:19.990152 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"crc-storage" Dec 11 15:03:19 crc kubenswrapper[5050]: I1211 15:03:19.990622 5050 reflector.go:368] Caches populated for *v1.Secret from object-"crc-storage"/"crc-storage-dockercfg-5d95f" Dec 11 15:03:19 crc kubenswrapper[5050]: I1211 15:03:19.990652 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"openshift-service-ca.crt" Dec 11 15:03:19 crc kubenswrapper[5050]: I1211 15:03:19.995389 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"kube-root-ca.crt" Dec 11 15:03:20 crc kubenswrapper[5050]: I1211 15:03:20.001588 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-l9qhm"] Dec 11 15:03:20 crc kubenswrapper[5050]: I1211 15:03:20.020306 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/38160dfe-d25f-477c-a779-9a6a5e921e05-crc-storage\") pod \"crc-storage-crc-l9qhm\" (UID: \"38160dfe-d25f-477c-a779-9a6a5e921e05\") " pod="crc-storage/crc-storage-crc-l9qhm" Dec 11 15:03:20 crc kubenswrapper[5050]: I1211 15:03:20.020349 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/38160dfe-d25f-477c-a779-9a6a5e921e05-node-mnt\") pod \"crc-storage-crc-l9qhm\" (UID: \"38160dfe-d25f-477c-a779-9a6a5e921e05\") " pod="crc-storage/crc-storage-crc-l9qhm" Dec 11 15:03:20 crc kubenswrapper[5050]: I1211 15:03:20.020386 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s7tzx\" (UniqueName: \"kubernetes.io/projected/38160dfe-d25f-477c-a779-9a6a5e921e05-kube-api-access-s7tzx\") pod \"crc-storage-crc-l9qhm\" (UID: \"38160dfe-d25f-477c-a779-9a6a5e921e05\") " pod="crc-storage/crc-storage-crc-l9qhm" Dec 11 15:03:20 crc kubenswrapper[5050]: I1211 15:03:20.121627 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s7tzx\" (UniqueName: \"kubernetes.io/projected/38160dfe-d25f-477c-a779-9a6a5e921e05-kube-api-access-s7tzx\") pod \"crc-storage-crc-l9qhm\" (UID: \"38160dfe-d25f-477c-a779-9a6a5e921e05\") " pod="crc-storage/crc-storage-crc-l9qhm" Dec 11 15:03:20 crc kubenswrapper[5050]: I1211 15:03:20.122086 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/38160dfe-d25f-477c-a779-9a6a5e921e05-crc-storage\") pod \"crc-storage-crc-l9qhm\" (UID: \"38160dfe-d25f-477c-a779-9a6a5e921e05\") " pod="crc-storage/crc-storage-crc-l9qhm" Dec 11 15:03:20 crc kubenswrapper[5050]: I1211 15:03:20.122307 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/38160dfe-d25f-477c-a779-9a6a5e921e05-node-mnt\") pod \"crc-storage-crc-l9qhm\" (UID: \"38160dfe-d25f-477c-a779-9a6a5e921e05\") " pod="crc-storage/crc-storage-crc-l9qhm" Dec 11 15:03:20 crc kubenswrapper[5050]: I1211 15:03:20.122673 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/38160dfe-d25f-477c-a779-9a6a5e921e05-node-mnt\") pod \"crc-storage-crc-l9qhm\" (UID: \"38160dfe-d25f-477c-a779-9a6a5e921e05\") " 
pod="crc-storage/crc-storage-crc-l9qhm" Dec 11 15:03:20 crc kubenswrapper[5050]: I1211 15:03:20.122788 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/38160dfe-d25f-477c-a779-9a6a5e921e05-crc-storage\") pod \"crc-storage-crc-l9qhm\" (UID: \"38160dfe-d25f-477c-a779-9a6a5e921e05\") " pod="crc-storage/crc-storage-crc-l9qhm" Dec 11 15:03:20 crc kubenswrapper[5050]: I1211 15:03:20.144380 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s7tzx\" (UniqueName: \"kubernetes.io/projected/38160dfe-d25f-477c-a779-9a6a5e921e05-kube-api-access-s7tzx\") pod \"crc-storage-crc-l9qhm\" (UID: \"38160dfe-d25f-477c-a779-9a6a5e921e05\") " pod="crc-storage/crc-storage-crc-l9qhm" Dec 11 15:03:20 crc kubenswrapper[5050]: I1211 15:03:20.383614 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-l9qhm" Dec 11 15:03:20 crc kubenswrapper[5050]: I1211 15:03:20.625579 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-l9qhm"] Dec 11 15:03:20 crc kubenswrapper[5050]: W1211 15:03:20.633505 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod38160dfe_d25f_477c_a779_9a6a5e921e05.slice/crio-25f4a39053e87d7dab847033b3c7b4e768023e355933f330cb4e67f44a54a6fe WatchSource:0}: Error finding container 25f4a39053e87d7dab847033b3c7b4e768023e355933f330cb4e67f44a54a6fe: Status 404 returned error can't find the container with id 25f4a39053e87d7dab847033b3c7b4e768023e355933f330cb4e67f44a54a6fe Dec 11 15:03:20 crc kubenswrapper[5050]: I1211 15:03:20.805319 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-l9qhm" event={"ID":"38160dfe-d25f-477c-a779-9a6a5e921e05","Type":"ContainerStarted","Data":"25f4a39053e87d7dab847033b3c7b4e768023e355933f330cb4e67f44a54a6fe"} Dec 11 15:03:20 crc kubenswrapper[5050]: I1211 15:03:20.807145 5050 generic.go:334] "Generic (PLEG): container finished" podID="c15290ff-7be2-41cb-b846-0cae120af188" containerID="8ad337e12f1ac32dda54b5229407afc81dc1b3091c28e66dd1f4ec65c726ee9e" exitCode=0 Dec 11 15:03:20 crc kubenswrapper[5050]: I1211 15:03:20.807183 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ln8rg" event={"ID":"c15290ff-7be2-41cb-b846-0cae120af188","Type":"ContainerDied","Data":"8ad337e12f1ac32dda54b5229407afc81dc1b3091c28e66dd1f4ec65c726ee9e"} Dec 11 15:03:21 crc kubenswrapper[5050]: I1211 15:03:21.558730 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="40ba7c06-f5b2-423f-8c53-93c3f57ab91e" path="/var/lib/kubelet/pods/40ba7c06-f5b2-423f-8c53-93c3f57ab91e/volumes" Dec 11 15:03:21 crc kubenswrapper[5050]: I1211 15:03:21.815419 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ln8rg" event={"ID":"c15290ff-7be2-41cb-b846-0cae120af188","Type":"ContainerStarted","Data":"88f50f637c372fee29e8eaeaa1ea00431fa497c43ce119f60849e831412fc07e"} Dec 11 15:03:21 crc kubenswrapper[5050]: I1211 15:03:21.817368 5050 generic.go:334] "Generic (PLEG): container finished" podID="38160dfe-d25f-477c-a779-9a6a5e921e05" containerID="24b8ff319ef6f98e0b02dc0708e05479388ea63fb20b78ab94162120812a61a2" exitCode=0 Dec 11 15:03:21 crc kubenswrapper[5050]: I1211 15:03:21.817428 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-l9qhm" 
event={"ID":"38160dfe-d25f-477c-a779-9a6a5e921e05","Type":"ContainerDied","Data":"24b8ff319ef6f98e0b02dc0708e05479388ea63fb20b78ab94162120812a61a2"} Dec 11 15:03:21 crc kubenswrapper[5050]: I1211 15:03:21.836037 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-ln8rg" podStartSLOduration=2.204855842 podStartE2EDuration="4.836001536s" podCreationTimestamp="2025-12-11 15:03:17 +0000 UTC" firstStartedPulling="2025-12-11 15:03:18.793119782 +0000 UTC m=+4489.636842358" lastFinishedPulling="2025-12-11 15:03:21.424265456 +0000 UTC m=+4492.267988052" observedRunningTime="2025-12-11 15:03:21.831067295 +0000 UTC m=+4492.674789901" watchObservedRunningTime="2025-12-11 15:03:21.836001536 +0000 UTC m=+4492.679724132" Dec 11 15:03:23 crc kubenswrapper[5050]: I1211 15:03:23.155663 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-l9qhm" Dec 11 15:03:23 crc kubenswrapper[5050]: I1211 15:03:23.184743 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/38160dfe-d25f-477c-a779-9a6a5e921e05-crc-storage\") pod \"38160dfe-d25f-477c-a779-9a6a5e921e05\" (UID: \"38160dfe-d25f-477c-a779-9a6a5e921e05\") " Dec 11 15:03:23 crc kubenswrapper[5050]: I1211 15:03:23.184880 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s7tzx\" (UniqueName: \"kubernetes.io/projected/38160dfe-d25f-477c-a779-9a6a5e921e05-kube-api-access-s7tzx\") pod \"38160dfe-d25f-477c-a779-9a6a5e921e05\" (UID: \"38160dfe-d25f-477c-a779-9a6a5e921e05\") " Dec 11 15:03:23 crc kubenswrapper[5050]: I1211 15:03:23.185020 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/38160dfe-d25f-477c-a779-9a6a5e921e05-node-mnt\") pod \"38160dfe-d25f-477c-a779-9a6a5e921e05\" (UID: \"38160dfe-d25f-477c-a779-9a6a5e921e05\") " Dec 11 15:03:23 crc kubenswrapper[5050]: I1211 15:03:23.185105 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/38160dfe-d25f-477c-a779-9a6a5e921e05-node-mnt" (OuterVolumeSpecName: "node-mnt") pod "38160dfe-d25f-477c-a779-9a6a5e921e05" (UID: "38160dfe-d25f-477c-a779-9a6a5e921e05"). InnerVolumeSpecName "node-mnt". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 15:03:23 crc kubenswrapper[5050]: I1211 15:03:23.185423 5050 reconciler_common.go:293] "Volume detached for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/38160dfe-d25f-477c-a779-9a6a5e921e05-node-mnt\") on node \"crc\" DevicePath \"\"" Dec 11 15:03:23 crc kubenswrapper[5050]: I1211 15:03:23.193299 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38160dfe-d25f-477c-a779-9a6a5e921e05-kube-api-access-s7tzx" (OuterVolumeSpecName: "kube-api-access-s7tzx") pod "38160dfe-d25f-477c-a779-9a6a5e921e05" (UID: "38160dfe-d25f-477c-a779-9a6a5e921e05"). InnerVolumeSpecName "kube-api-access-s7tzx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 15:03:23 crc kubenswrapper[5050]: I1211 15:03:23.207560 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/38160dfe-d25f-477c-a779-9a6a5e921e05-crc-storage" (OuterVolumeSpecName: "crc-storage") pod "38160dfe-d25f-477c-a779-9a6a5e921e05" (UID: "38160dfe-d25f-477c-a779-9a6a5e921e05"). InnerVolumeSpecName "crc-storage". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 15:03:23 crc kubenswrapper[5050]: I1211 15:03:23.285828 5050 reconciler_common.go:293] "Volume detached for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/38160dfe-d25f-477c-a779-9a6a5e921e05-crc-storage\") on node \"crc\" DevicePath \"\"" Dec 11 15:03:23 crc kubenswrapper[5050]: I1211 15:03:23.285866 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s7tzx\" (UniqueName: \"kubernetes.io/projected/38160dfe-d25f-477c-a779-9a6a5e921e05-kube-api-access-s7tzx\") on node \"crc\" DevicePath \"\"" Dec 11 15:03:23 crc kubenswrapper[5050]: I1211 15:03:23.832072 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-l9qhm" event={"ID":"38160dfe-d25f-477c-a779-9a6a5e921e05","Type":"ContainerDied","Data":"25f4a39053e87d7dab847033b3c7b4e768023e355933f330cb4e67f44a54a6fe"} Dec 11 15:03:23 crc kubenswrapper[5050]: I1211 15:03:23.832580 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="25f4a39053e87d7dab847033b3c7b4e768023e355933f330cb4e67f44a54a6fe" Dec 11 15:03:23 crc kubenswrapper[5050]: I1211 15:03:23.832191 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-l9qhm" Dec 11 15:03:27 crc kubenswrapper[5050]: I1211 15:03:27.408820 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-ln8rg" Dec 11 15:03:27 crc kubenswrapper[5050]: I1211 15:03:27.409525 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-ln8rg" Dec 11 15:03:28 crc kubenswrapper[5050]: I1211 15:03:28.479075 5050 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-ln8rg" podUID="c15290ff-7be2-41cb-b846-0cae120af188" containerName="registry-server" probeResult="failure" output=< Dec 11 15:03:28 crc kubenswrapper[5050]: timeout: failed to connect service ":50051" within 1s Dec 11 15:03:28 crc kubenswrapper[5050]: > Dec 11 15:03:37 crc kubenswrapper[5050]: I1211 15:03:37.743832 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-ln8rg" Dec 11 15:03:37 crc kubenswrapper[5050]: I1211 15:03:37.807066 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-ln8rg" Dec 11 15:03:37 crc kubenswrapper[5050]: I1211 15:03:37.989585 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ln8rg"] Dec 11 15:03:38 crc kubenswrapper[5050]: I1211 15:03:38.960171 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-ln8rg" podUID="c15290ff-7be2-41cb-b846-0cae120af188" containerName="registry-server" containerID="cri-o://88f50f637c372fee29e8eaeaa1ea00431fa497c43ce119f60849e831412fc07e" gracePeriod=2 Dec 11 15:03:40 crc kubenswrapper[5050]: I1211 15:03:40.796457 5050 patch_prober.go:28] interesting pod/machine-config-daemon-wcb2s container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 15:03:40 crc kubenswrapper[5050]: I1211 15:03:40.796846 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" 
podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 15:03:41 crc kubenswrapper[5050]: I1211 15:03:41.501110 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ln8rg" Dec 11 15:03:41 crc kubenswrapper[5050]: I1211 15:03:41.679103 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c15290ff-7be2-41cb-b846-0cae120af188-utilities\") pod \"c15290ff-7be2-41cb-b846-0cae120af188\" (UID: \"c15290ff-7be2-41cb-b846-0cae120af188\") " Dec 11 15:03:41 crc kubenswrapper[5050]: I1211 15:03:41.679151 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vtjrl\" (UniqueName: \"kubernetes.io/projected/c15290ff-7be2-41cb-b846-0cae120af188-kube-api-access-vtjrl\") pod \"c15290ff-7be2-41cb-b846-0cae120af188\" (UID: \"c15290ff-7be2-41cb-b846-0cae120af188\") " Dec 11 15:03:41 crc kubenswrapper[5050]: I1211 15:03:41.679253 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c15290ff-7be2-41cb-b846-0cae120af188-catalog-content\") pod \"c15290ff-7be2-41cb-b846-0cae120af188\" (UID: \"c15290ff-7be2-41cb-b846-0cae120af188\") " Dec 11 15:03:41 crc kubenswrapper[5050]: I1211 15:03:41.680085 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c15290ff-7be2-41cb-b846-0cae120af188-utilities" (OuterVolumeSpecName: "utilities") pod "c15290ff-7be2-41cb-b846-0cae120af188" (UID: "c15290ff-7be2-41cb-b846-0cae120af188"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 15:03:41 crc kubenswrapper[5050]: I1211 15:03:41.684470 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c15290ff-7be2-41cb-b846-0cae120af188-kube-api-access-vtjrl" (OuterVolumeSpecName: "kube-api-access-vtjrl") pod "c15290ff-7be2-41cb-b846-0cae120af188" (UID: "c15290ff-7be2-41cb-b846-0cae120af188"). InnerVolumeSpecName "kube-api-access-vtjrl". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 15:03:41 crc kubenswrapper[5050]: I1211 15:03:41.780741 5050 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c15290ff-7be2-41cb-b846-0cae120af188-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 15:03:41 crc kubenswrapper[5050]: I1211 15:03:41.780782 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vtjrl\" (UniqueName: \"kubernetes.io/projected/c15290ff-7be2-41cb-b846-0cae120af188-kube-api-access-vtjrl\") on node \"crc\" DevicePath \"\"" Dec 11 15:03:41 crc kubenswrapper[5050]: I1211 15:03:41.833792 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c15290ff-7be2-41cb-b846-0cae120af188-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c15290ff-7be2-41cb-b846-0cae120af188" (UID: "c15290ff-7be2-41cb-b846-0cae120af188"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 15:03:41 crc kubenswrapper[5050]: I1211 15:03:41.882244 5050 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c15290ff-7be2-41cb-b846-0cae120af188-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 15:03:41 crc kubenswrapper[5050]: I1211 15:03:41.985605 5050 generic.go:334] "Generic (PLEG): container finished" podID="c15290ff-7be2-41cb-b846-0cae120af188" containerID="88f50f637c372fee29e8eaeaa1ea00431fa497c43ce119f60849e831412fc07e" exitCode=0 Dec 11 15:03:41 crc kubenswrapper[5050]: I1211 15:03:41.985677 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ln8rg" Dec 11 15:03:41 crc kubenswrapper[5050]: I1211 15:03:41.985678 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ln8rg" event={"ID":"c15290ff-7be2-41cb-b846-0cae120af188","Type":"ContainerDied","Data":"88f50f637c372fee29e8eaeaa1ea00431fa497c43ce119f60849e831412fc07e"} Dec 11 15:03:41 crc kubenswrapper[5050]: I1211 15:03:41.985776 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ln8rg" event={"ID":"c15290ff-7be2-41cb-b846-0cae120af188","Type":"ContainerDied","Data":"1823b7420de00cc75315baede207bdc0f51531a9b2261c2ef2f6467ada87235a"} Dec 11 15:03:41 crc kubenswrapper[5050]: I1211 15:03:41.985803 5050 scope.go:117] "RemoveContainer" containerID="88f50f637c372fee29e8eaeaa1ea00431fa497c43ce119f60849e831412fc07e" Dec 11 15:03:42 crc kubenswrapper[5050]: I1211 15:03:42.016553 5050 scope.go:117] "RemoveContainer" containerID="8ad337e12f1ac32dda54b5229407afc81dc1b3091c28e66dd1f4ec65c726ee9e" Dec 11 15:03:42 crc kubenswrapper[5050]: I1211 15:03:42.029929 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ln8rg"] Dec 11 15:03:42 crc kubenswrapper[5050]: I1211 15:03:42.036520 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-ln8rg"] Dec 11 15:03:42 crc kubenswrapper[5050]: I1211 15:03:42.052901 5050 scope.go:117] "RemoveContainer" containerID="a08c22825b8f0cedb3304f7fd99f222d3e2c703f9e32d4c512649b55f7972708" Dec 11 15:03:42 crc kubenswrapper[5050]: I1211 15:03:42.076890 5050 scope.go:117] "RemoveContainer" containerID="88f50f637c372fee29e8eaeaa1ea00431fa497c43ce119f60849e831412fc07e" Dec 11 15:03:42 crc kubenswrapper[5050]: E1211 15:03:42.077419 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"88f50f637c372fee29e8eaeaa1ea00431fa497c43ce119f60849e831412fc07e\": container with ID starting with 88f50f637c372fee29e8eaeaa1ea00431fa497c43ce119f60849e831412fc07e not found: ID does not exist" containerID="88f50f637c372fee29e8eaeaa1ea00431fa497c43ce119f60849e831412fc07e" Dec 11 15:03:42 crc kubenswrapper[5050]: I1211 15:03:42.077493 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88f50f637c372fee29e8eaeaa1ea00431fa497c43ce119f60849e831412fc07e"} err="failed to get container status \"88f50f637c372fee29e8eaeaa1ea00431fa497c43ce119f60849e831412fc07e\": rpc error: code = NotFound desc = could not find container \"88f50f637c372fee29e8eaeaa1ea00431fa497c43ce119f60849e831412fc07e\": container with ID starting with 88f50f637c372fee29e8eaeaa1ea00431fa497c43ce119f60849e831412fc07e not found: ID does not exist" Dec 11 15:03:42 crc 
kubenswrapper[5050]: I1211 15:03:42.077543 5050 scope.go:117] "RemoveContainer" containerID="8ad337e12f1ac32dda54b5229407afc81dc1b3091c28e66dd1f4ec65c726ee9e" Dec 11 15:03:42 crc kubenswrapper[5050]: E1211 15:03:42.078006 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8ad337e12f1ac32dda54b5229407afc81dc1b3091c28e66dd1f4ec65c726ee9e\": container with ID starting with 8ad337e12f1ac32dda54b5229407afc81dc1b3091c28e66dd1f4ec65c726ee9e not found: ID does not exist" containerID="8ad337e12f1ac32dda54b5229407afc81dc1b3091c28e66dd1f4ec65c726ee9e" Dec 11 15:03:42 crc kubenswrapper[5050]: I1211 15:03:42.078054 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ad337e12f1ac32dda54b5229407afc81dc1b3091c28e66dd1f4ec65c726ee9e"} err="failed to get container status \"8ad337e12f1ac32dda54b5229407afc81dc1b3091c28e66dd1f4ec65c726ee9e\": rpc error: code = NotFound desc = could not find container \"8ad337e12f1ac32dda54b5229407afc81dc1b3091c28e66dd1f4ec65c726ee9e\": container with ID starting with 8ad337e12f1ac32dda54b5229407afc81dc1b3091c28e66dd1f4ec65c726ee9e not found: ID does not exist" Dec 11 15:03:42 crc kubenswrapper[5050]: I1211 15:03:42.078073 5050 scope.go:117] "RemoveContainer" containerID="a08c22825b8f0cedb3304f7fd99f222d3e2c703f9e32d4c512649b55f7972708" Dec 11 15:03:42 crc kubenswrapper[5050]: E1211 15:03:42.078718 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a08c22825b8f0cedb3304f7fd99f222d3e2c703f9e32d4c512649b55f7972708\": container with ID starting with a08c22825b8f0cedb3304f7fd99f222d3e2c703f9e32d4c512649b55f7972708 not found: ID does not exist" containerID="a08c22825b8f0cedb3304f7fd99f222d3e2c703f9e32d4c512649b55f7972708" Dec 11 15:03:42 crc kubenswrapper[5050]: I1211 15:03:42.078756 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a08c22825b8f0cedb3304f7fd99f222d3e2c703f9e32d4c512649b55f7972708"} err="failed to get container status \"a08c22825b8f0cedb3304f7fd99f222d3e2c703f9e32d4c512649b55f7972708\": rpc error: code = NotFound desc = could not find container \"a08c22825b8f0cedb3304f7fd99f222d3e2c703f9e32d4c512649b55f7972708\": container with ID starting with a08c22825b8f0cedb3304f7fd99f222d3e2c703f9e32d4c512649b55f7972708 not found: ID does not exist" Dec 11 15:03:43 crc kubenswrapper[5050]: I1211 15:03:43.568257 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c15290ff-7be2-41cb-b846-0cae120af188" path="/var/lib/kubelet/pods/c15290ff-7be2-41cb-b846-0cae120af188/volumes" Dec 11 15:03:44 crc kubenswrapper[5050]: I1211 15:03:44.513439 5050 scope.go:117] "RemoveContainer" containerID="d216b65a928c99a67fed82e452c22653fec2716e6d9bfbd46f24c3dfa2efb0dd" Dec 11 15:04:10 crc kubenswrapper[5050]: I1211 15:04:10.796584 5050 patch_prober.go:28] interesting pod/machine-config-daemon-wcb2s container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 15:04:10 crc kubenswrapper[5050]: I1211 15:04:10.797478 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 15:04:40 crc kubenswrapper[5050]: I1211 15:04:40.797407 5050 patch_prober.go:28] interesting pod/machine-config-daemon-wcb2s container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 15:04:40 crc kubenswrapper[5050]: I1211 15:04:40.797950 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 15:04:40 crc kubenswrapper[5050]: I1211 15:04:40.798005 5050 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" Dec 11 15:04:40 crc kubenswrapper[5050]: I1211 15:04:40.798536 5050 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b585bea20b0bb52e58642ee9f3e33b5b182a66a544d35daafc0202a53b35a810"} pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 11 15:04:40 crc kubenswrapper[5050]: I1211 15:04:40.798652 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" containerName="machine-config-daemon" containerID="cri-o://b585bea20b0bb52e58642ee9f3e33b5b182a66a544d35daafc0202a53b35a810" gracePeriod=600 Dec 11 15:04:40 crc kubenswrapper[5050]: E1211 15:04:40.921335 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 15:04:41 crc kubenswrapper[5050]: I1211 15:04:41.531814 5050 generic.go:334] "Generic (PLEG): container finished" podID="7e849b2e-7cd7-4e49-acd2-deab139c699a" containerID="b585bea20b0bb52e58642ee9f3e33b5b182a66a544d35daafc0202a53b35a810" exitCode=0 Dec 11 15:04:41 crc kubenswrapper[5050]: I1211 15:04:41.531911 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" event={"ID":"7e849b2e-7cd7-4e49-acd2-deab139c699a","Type":"ContainerDied","Data":"b585bea20b0bb52e58642ee9f3e33b5b182a66a544d35daafc0202a53b35a810"} Dec 11 15:04:41 crc kubenswrapper[5050]: I1211 15:04:41.532624 5050 scope.go:117] "RemoveContainer" containerID="6ff8571bb4559fd374475c94385275b8db9d1ca3a36ff5857122f05b5cf16e65" Dec 11 15:04:41 crc kubenswrapper[5050]: I1211 15:04:41.533528 5050 scope.go:117] "RemoveContainer" containerID="b585bea20b0bb52e58642ee9f3e33b5b182a66a544d35daafc0202a53b35a810" Dec 11 15:04:41 crc kubenswrapper[5050]: E1211 15:04:41.534061 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 15:04:48 crc kubenswrapper[5050]: I1211 15:04:48.533591 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-hg6tz"] Dec 11 15:04:48 crc kubenswrapper[5050]: E1211 15:04:48.535075 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38160dfe-d25f-477c-a779-9a6a5e921e05" containerName="storage" Dec 11 15:04:48 crc kubenswrapper[5050]: I1211 15:04:48.535102 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="38160dfe-d25f-477c-a779-9a6a5e921e05" containerName="storage" Dec 11 15:04:48 crc kubenswrapper[5050]: E1211 15:04:48.535136 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c15290ff-7be2-41cb-b846-0cae120af188" containerName="extract-content" Dec 11 15:04:48 crc kubenswrapper[5050]: I1211 15:04:48.535150 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="c15290ff-7be2-41cb-b846-0cae120af188" containerName="extract-content" Dec 11 15:04:48 crc kubenswrapper[5050]: E1211 15:04:48.535188 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c15290ff-7be2-41cb-b846-0cae120af188" containerName="extract-utilities" Dec 11 15:04:48 crc kubenswrapper[5050]: I1211 15:04:48.535203 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="c15290ff-7be2-41cb-b846-0cae120af188" containerName="extract-utilities" Dec 11 15:04:48 crc kubenswrapper[5050]: E1211 15:04:48.535235 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c15290ff-7be2-41cb-b846-0cae120af188" containerName="registry-server" Dec 11 15:04:48 crc kubenswrapper[5050]: I1211 15:04:48.535248 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="c15290ff-7be2-41cb-b846-0cae120af188" containerName="registry-server" Dec 11 15:04:48 crc kubenswrapper[5050]: I1211 15:04:48.535486 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="c15290ff-7be2-41cb-b846-0cae120af188" containerName="registry-server" Dec 11 15:04:48 crc kubenswrapper[5050]: I1211 15:04:48.535519 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="38160dfe-d25f-477c-a779-9a6a5e921e05" containerName="storage" Dec 11 15:04:48 crc kubenswrapper[5050]: I1211 15:04:48.537965 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hg6tz" Dec 11 15:04:48 crc kubenswrapper[5050]: I1211 15:04:48.556244 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hg6tz"] Dec 11 15:04:48 crc kubenswrapper[5050]: I1211 15:04:48.698397 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f824bb5-ed65-4412-837c-48f7dd65f498-utilities\") pod \"redhat-marketplace-hg6tz\" (UID: \"1f824bb5-ed65-4412-837c-48f7dd65f498\") " pod="openshift-marketplace/redhat-marketplace-hg6tz" Dec 11 15:04:48 crc kubenswrapper[5050]: I1211 15:04:48.698461 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f824bb5-ed65-4412-837c-48f7dd65f498-catalog-content\") pod \"redhat-marketplace-hg6tz\" (UID: \"1f824bb5-ed65-4412-837c-48f7dd65f498\") " pod="openshift-marketplace/redhat-marketplace-hg6tz" Dec 11 15:04:48 crc kubenswrapper[5050]: I1211 15:04:48.698531 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2vn5\" (UniqueName: \"kubernetes.io/projected/1f824bb5-ed65-4412-837c-48f7dd65f498-kube-api-access-n2vn5\") pod \"redhat-marketplace-hg6tz\" (UID: \"1f824bb5-ed65-4412-837c-48f7dd65f498\") " pod="openshift-marketplace/redhat-marketplace-hg6tz" Dec 11 15:04:48 crc kubenswrapper[5050]: I1211 15:04:48.800665 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f824bb5-ed65-4412-837c-48f7dd65f498-utilities\") pod \"redhat-marketplace-hg6tz\" (UID: \"1f824bb5-ed65-4412-837c-48f7dd65f498\") " pod="openshift-marketplace/redhat-marketplace-hg6tz" Dec 11 15:04:48 crc kubenswrapper[5050]: I1211 15:04:48.800789 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f824bb5-ed65-4412-837c-48f7dd65f498-catalog-content\") pod \"redhat-marketplace-hg6tz\" (UID: \"1f824bb5-ed65-4412-837c-48f7dd65f498\") " pod="openshift-marketplace/redhat-marketplace-hg6tz" Dec 11 15:04:48 crc kubenswrapper[5050]: I1211 15:04:48.800874 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n2vn5\" (UniqueName: \"kubernetes.io/projected/1f824bb5-ed65-4412-837c-48f7dd65f498-kube-api-access-n2vn5\") pod \"redhat-marketplace-hg6tz\" (UID: \"1f824bb5-ed65-4412-837c-48f7dd65f498\") " pod="openshift-marketplace/redhat-marketplace-hg6tz" Dec 11 15:04:48 crc kubenswrapper[5050]: I1211 15:04:48.801675 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f824bb5-ed65-4412-837c-48f7dd65f498-catalog-content\") pod \"redhat-marketplace-hg6tz\" (UID: \"1f824bb5-ed65-4412-837c-48f7dd65f498\") " pod="openshift-marketplace/redhat-marketplace-hg6tz" Dec 11 15:04:48 crc kubenswrapper[5050]: I1211 15:04:48.801863 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f824bb5-ed65-4412-837c-48f7dd65f498-utilities\") pod \"redhat-marketplace-hg6tz\" (UID: \"1f824bb5-ed65-4412-837c-48f7dd65f498\") " pod="openshift-marketplace/redhat-marketplace-hg6tz" Dec 11 15:04:48 crc kubenswrapper[5050]: I1211 15:04:48.829142 5050 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-n2vn5\" (UniqueName: \"kubernetes.io/projected/1f824bb5-ed65-4412-837c-48f7dd65f498-kube-api-access-n2vn5\") pod \"redhat-marketplace-hg6tz\" (UID: \"1f824bb5-ed65-4412-837c-48f7dd65f498\") " pod="openshift-marketplace/redhat-marketplace-hg6tz" Dec 11 15:04:48 crc kubenswrapper[5050]: I1211 15:04:48.878510 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hg6tz" Dec 11 15:04:49 crc kubenswrapper[5050]: I1211 15:04:49.166483 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hg6tz"] Dec 11 15:04:49 crc kubenswrapper[5050]: I1211 15:04:49.612870 5050 generic.go:334] "Generic (PLEG): container finished" podID="1f824bb5-ed65-4412-837c-48f7dd65f498" containerID="0f15142a456c53bb896d67bc68ef41cd37edd67afaa3fc47d504f774d42faa8e" exitCode=0 Dec 11 15:04:49 crc kubenswrapper[5050]: I1211 15:04:49.612951 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hg6tz" event={"ID":"1f824bb5-ed65-4412-837c-48f7dd65f498","Type":"ContainerDied","Data":"0f15142a456c53bb896d67bc68ef41cd37edd67afaa3fc47d504f774d42faa8e"} Dec 11 15:04:49 crc kubenswrapper[5050]: I1211 15:04:49.612998 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hg6tz" event={"ID":"1f824bb5-ed65-4412-837c-48f7dd65f498","Type":"ContainerStarted","Data":"31f1e651a41008cd9ab851b6a4bdc050a80558179f487a189b6281d00f6caf82"} Dec 11 15:04:51 crc kubenswrapper[5050]: I1211 15:04:51.636893 5050 generic.go:334] "Generic (PLEG): container finished" podID="1f824bb5-ed65-4412-837c-48f7dd65f498" containerID="fce9c942e63fc1d8a2798092211d09cd95469b00c53c48a9b8a5755c74cdbcee" exitCode=0 Dec 11 15:04:51 crc kubenswrapper[5050]: I1211 15:04:51.637139 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hg6tz" event={"ID":"1f824bb5-ed65-4412-837c-48f7dd65f498","Type":"ContainerDied","Data":"fce9c942e63fc1d8a2798092211d09cd95469b00c53c48a9b8a5755c74cdbcee"} Dec 11 15:04:53 crc kubenswrapper[5050]: I1211 15:04:53.661925 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hg6tz" event={"ID":"1f824bb5-ed65-4412-837c-48f7dd65f498","Type":"ContainerStarted","Data":"f3723c67929e1d86f4968fcab377274b5230dc2b9d3ff353c5d23001d33d09f7"} Dec 11 15:04:53 crc kubenswrapper[5050]: I1211 15:04:53.695190 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-hg6tz" podStartSLOduration=2.837850775 podStartE2EDuration="5.695171253s" podCreationTimestamp="2025-12-11 15:04:48 +0000 UTC" firstStartedPulling="2025-12-11 15:04:49.616817211 +0000 UTC m=+4580.460539797" lastFinishedPulling="2025-12-11 15:04:52.474137699 +0000 UTC m=+4583.317860275" observedRunningTime="2025-12-11 15:04:53.689689686 +0000 UTC m=+4584.533412272" watchObservedRunningTime="2025-12-11 15:04:53.695171253 +0000 UTC m=+4584.538893839" Dec 11 15:04:56 crc kubenswrapper[5050]: I1211 15:04:56.546421 5050 scope.go:117] "RemoveContainer" containerID="b585bea20b0bb52e58642ee9f3e33b5b182a66a544d35daafc0202a53b35a810" Dec 11 15:04:56 crc kubenswrapper[5050]: E1211 15:04:56.547578 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 15:04:58 crc kubenswrapper[5050]: I1211 15:04:58.879573 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-hg6tz" Dec 11 15:04:58 crc kubenswrapper[5050]: I1211 15:04:58.881490 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-hg6tz" Dec 11 15:04:58 crc kubenswrapper[5050]: I1211 15:04:58.943217 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-hg6tz" Dec 11 15:05:00 crc kubenswrapper[5050]: I1211 15:05:00.211648 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-hg6tz" Dec 11 15:05:00 crc kubenswrapper[5050]: I1211 15:05:00.289077 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hg6tz"] Dec 11 15:05:01 crc kubenswrapper[5050]: I1211 15:05:01.741610 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-hg6tz" podUID="1f824bb5-ed65-4412-837c-48f7dd65f498" containerName="registry-server" containerID="cri-o://f3723c67929e1d86f4968fcab377274b5230dc2b9d3ff353c5d23001d33d09f7" gracePeriod=2 Dec 11 15:05:02 crc kubenswrapper[5050]: I1211 15:05:02.737665 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hg6tz" Dec 11 15:05:02 crc kubenswrapper[5050]: I1211 15:05:02.751712 5050 generic.go:334] "Generic (PLEG): container finished" podID="1f824bb5-ed65-4412-837c-48f7dd65f498" containerID="f3723c67929e1d86f4968fcab377274b5230dc2b9d3ff353c5d23001d33d09f7" exitCode=0 Dec 11 15:05:02 crc kubenswrapper[5050]: I1211 15:05:02.751764 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hg6tz" event={"ID":"1f824bb5-ed65-4412-837c-48f7dd65f498","Type":"ContainerDied","Data":"f3723c67929e1d86f4968fcab377274b5230dc2b9d3ff353c5d23001d33d09f7"} Dec 11 15:05:02 crc kubenswrapper[5050]: I1211 15:05:02.751808 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hg6tz" Dec 11 15:05:02 crc kubenswrapper[5050]: I1211 15:05:02.751831 5050 scope.go:117] "RemoveContainer" containerID="f3723c67929e1d86f4968fcab377274b5230dc2b9d3ff353c5d23001d33d09f7" Dec 11 15:05:02 crc kubenswrapper[5050]: I1211 15:05:02.751819 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hg6tz" event={"ID":"1f824bb5-ed65-4412-837c-48f7dd65f498","Type":"ContainerDied","Data":"31f1e651a41008cd9ab851b6a4bdc050a80558179f487a189b6281d00f6caf82"} Dec 11 15:05:02 crc kubenswrapper[5050]: I1211 15:05:02.792966 5050 scope.go:117] "RemoveContainer" containerID="fce9c942e63fc1d8a2798092211d09cd95469b00c53c48a9b8a5755c74cdbcee" Dec 11 15:05:02 crc kubenswrapper[5050]: I1211 15:05:02.815794 5050 scope.go:117] "RemoveContainer" containerID="0f15142a456c53bb896d67bc68ef41cd37edd67afaa3fc47d504f774d42faa8e" Dec 11 15:05:02 crc kubenswrapper[5050]: I1211 15:05:02.839162 5050 scope.go:117] "RemoveContainer" containerID="f3723c67929e1d86f4968fcab377274b5230dc2b9d3ff353c5d23001d33d09f7" Dec 11 15:05:02 crc kubenswrapper[5050]: E1211 15:05:02.841535 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f3723c67929e1d86f4968fcab377274b5230dc2b9d3ff353c5d23001d33d09f7\": container with ID starting with f3723c67929e1d86f4968fcab377274b5230dc2b9d3ff353c5d23001d33d09f7 not found: ID does not exist" containerID="f3723c67929e1d86f4968fcab377274b5230dc2b9d3ff353c5d23001d33d09f7" Dec 11 15:05:02 crc kubenswrapper[5050]: I1211 15:05:02.841576 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f3723c67929e1d86f4968fcab377274b5230dc2b9d3ff353c5d23001d33d09f7"} err="failed to get container status \"f3723c67929e1d86f4968fcab377274b5230dc2b9d3ff353c5d23001d33d09f7\": rpc error: code = NotFound desc = could not find container \"f3723c67929e1d86f4968fcab377274b5230dc2b9d3ff353c5d23001d33d09f7\": container with ID starting with f3723c67929e1d86f4968fcab377274b5230dc2b9d3ff353c5d23001d33d09f7 not found: ID does not exist" Dec 11 15:05:02 crc kubenswrapper[5050]: I1211 15:05:02.841597 5050 scope.go:117] "RemoveContainer" containerID="fce9c942e63fc1d8a2798092211d09cd95469b00c53c48a9b8a5755c74cdbcee" Dec 11 15:05:02 crc kubenswrapper[5050]: E1211 15:05:02.842145 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fce9c942e63fc1d8a2798092211d09cd95469b00c53c48a9b8a5755c74cdbcee\": container with ID starting with fce9c942e63fc1d8a2798092211d09cd95469b00c53c48a9b8a5755c74cdbcee not found: ID does not exist" containerID="fce9c942e63fc1d8a2798092211d09cd95469b00c53c48a9b8a5755c74cdbcee" Dec 11 15:05:02 crc kubenswrapper[5050]: I1211 15:05:02.842233 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fce9c942e63fc1d8a2798092211d09cd95469b00c53c48a9b8a5755c74cdbcee"} err="failed to get container status \"fce9c942e63fc1d8a2798092211d09cd95469b00c53c48a9b8a5755c74cdbcee\": rpc error: code = NotFound desc = could not find container \"fce9c942e63fc1d8a2798092211d09cd95469b00c53c48a9b8a5755c74cdbcee\": container with ID starting with fce9c942e63fc1d8a2798092211d09cd95469b00c53c48a9b8a5755c74cdbcee not found: ID does not exist" Dec 11 15:05:02 crc kubenswrapper[5050]: I1211 15:05:02.842287 5050 scope.go:117] "RemoveContainer" 
containerID="0f15142a456c53bb896d67bc68ef41cd37edd67afaa3fc47d504f774d42faa8e" Dec 11 15:05:02 crc kubenswrapper[5050]: E1211 15:05:02.842660 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0f15142a456c53bb896d67bc68ef41cd37edd67afaa3fc47d504f774d42faa8e\": container with ID starting with 0f15142a456c53bb896d67bc68ef41cd37edd67afaa3fc47d504f774d42faa8e not found: ID does not exist" containerID="0f15142a456c53bb896d67bc68ef41cd37edd67afaa3fc47d504f774d42faa8e" Dec 11 15:05:02 crc kubenswrapper[5050]: I1211 15:05:02.842692 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f15142a456c53bb896d67bc68ef41cd37edd67afaa3fc47d504f774d42faa8e"} err="failed to get container status \"0f15142a456c53bb896d67bc68ef41cd37edd67afaa3fc47d504f774d42faa8e\": rpc error: code = NotFound desc = could not find container \"0f15142a456c53bb896d67bc68ef41cd37edd67afaa3fc47d504f774d42faa8e\": container with ID starting with 0f15142a456c53bb896d67bc68ef41cd37edd67afaa3fc47d504f774d42faa8e not found: ID does not exist" Dec 11 15:05:02 crc kubenswrapper[5050]: I1211 15:05:02.852356 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f824bb5-ed65-4412-837c-48f7dd65f498-catalog-content\") pod \"1f824bb5-ed65-4412-837c-48f7dd65f498\" (UID: \"1f824bb5-ed65-4412-837c-48f7dd65f498\") " Dec 11 15:05:02 crc kubenswrapper[5050]: I1211 15:05:02.852446 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f824bb5-ed65-4412-837c-48f7dd65f498-utilities\") pod \"1f824bb5-ed65-4412-837c-48f7dd65f498\" (UID: \"1f824bb5-ed65-4412-837c-48f7dd65f498\") " Dec 11 15:05:02 crc kubenswrapper[5050]: I1211 15:05:02.852608 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n2vn5\" (UniqueName: \"kubernetes.io/projected/1f824bb5-ed65-4412-837c-48f7dd65f498-kube-api-access-n2vn5\") pod \"1f824bb5-ed65-4412-837c-48f7dd65f498\" (UID: \"1f824bb5-ed65-4412-837c-48f7dd65f498\") " Dec 11 15:05:02 crc kubenswrapper[5050]: I1211 15:05:02.854244 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f824bb5-ed65-4412-837c-48f7dd65f498-utilities" (OuterVolumeSpecName: "utilities") pod "1f824bb5-ed65-4412-837c-48f7dd65f498" (UID: "1f824bb5-ed65-4412-837c-48f7dd65f498"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 15:05:02 crc kubenswrapper[5050]: I1211 15:05:02.861751 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f824bb5-ed65-4412-837c-48f7dd65f498-kube-api-access-n2vn5" (OuterVolumeSpecName: "kube-api-access-n2vn5") pod "1f824bb5-ed65-4412-837c-48f7dd65f498" (UID: "1f824bb5-ed65-4412-837c-48f7dd65f498"). InnerVolumeSpecName "kube-api-access-n2vn5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 15:05:02 crc kubenswrapper[5050]: I1211 15:05:02.886032 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f824bb5-ed65-4412-837c-48f7dd65f498-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1f824bb5-ed65-4412-837c-48f7dd65f498" (UID: "1f824bb5-ed65-4412-837c-48f7dd65f498"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 15:05:02 crc kubenswrapper[5050]: I1211 15:05:02.954673 5050 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f824bb5-ed65-4412-837c-48f7dd65f498-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 15:05:02 crc kubenswrapper[5050]: I1211 15:05:02.954757 5050 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f824bb5-ed65-4412-837c-48f7dd65f498-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 15:05:02 crc kubenswrapper[5050]: I1211 15:05:02.954781 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n2vn5\" (UniqueName: \"kubernetes.io/projected/1f824bb5-ed65-4412-837c-48f7dd65f498-kube-api-access-n2vn5\") on node \"crc\" DevicePath \"\"" Dec 11 15:05:03 crc kubenswrapper[5050]: I1211 15:05:03.096245 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hg6tz"] Dec 11 15:05:03 crc kubenswrapper[5050]: I1211 15:05:03.113814 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-hg6tz"] Dec 11 15:05:03 crc kubenswrapper[5050]: I1211 15:05:03.562474 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f824bb5-ed65-4412-837c-48f7dd65f498" path="/var/lib/kubelet/pods/1f824bb5-ed65-4412-837c-48f7dd65f498/volumes" Dec 11 15:05:09 crc kubenswrapper[5050]: I1211 15:05:09.555312 5050 scope.go:117] "RemoveContainer" containerID="b585bea20b0bb52e58642ee9f3e33b5b182a66a544d35daafc0202a53b35a810" Dec 11 15:05:09 crc kubenswrapper[5050]: E1211 15:05:09.556966 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 15:05:20 crc kubenswrapper[5050]: I1211 15:05:20.546375 5050 scope.go:117] "RemoveContainer" containerID="b585bea20b0bb52e58642ee9f3e33b5b182a66a544d35daafc0202a53b35a810" Dec 11 15:05:20 crc kubenswrapper[5050]: E1211 15:05:20.547663 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 15:05:31 crc kubenswrapper[5050]: I1211 15:05:31.546293 5050 scope.go:117] "RemoveContainer" containerID="b585bea20b0bb52e58642ee9f3e33b5b182a66a544d35daafc0202a53b35a810" Dec 11 15:05:31 crc kubenswrapper[5050]: E1211 15:05:31.546992 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 15:05:45 crc kubenswrapper[5050]: I1211 15:05:45.546233 5050 
scope.go:117] "RemoveContainer" containerID="b585bea20b0bb52e58642ee9f3e33b5b182a66a544d35daafc0202a53b35a810" Dec 11 15:05:45 crc kubenswrapper[5050]: E1211 15:05:45.547094 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 15:05:55 crc kubenswrapper[5050]: I1211 15:05:55.047258 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-9jcqm"] Dec 11 15:05:55 crc kubenswrapper[5050]: E1211 15:05:55.048385 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f824bb5-ed65-4412-837c-48f7dd65f498" containerName="extract-utilities" Dec 11 15:05:55 crc kubenswrapper[5050]: I1211 15:05:55.048407 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f824bb5-ed65-4412-837c-48f7dd65f498" containerName="extract-utilities" Dec 11 15:05:55 crc kubenswrapper[5050]: E1211 15:05:55.048431 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f824bb5-ed65-4412-837c-48f7dd65f498" containerName="extract-content" Dec 11 15:05:55 crc kubenswrapper[5050]: I1211 15:05:55.048439 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f824bb5-ed65-4412-837c-48f7dd65f498" containerName="extract-content" Dec 11 15:05:55 crc kubenswrapper[5050]: E1211 15:05:55.048456 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f824bb5-ed65-4412-837c-48f7dd65f498" containerName="registry-server" Dec 11 15:05:55 crc kubenswrapper[5050]: I1211 15:05:55.048465 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f824bb5-ed65-4412-837c-48f7dd65f498" containerName="registry-server" Dec 11 15:05:55 crc kubenswrapper[5050]: I1211 15:05:55.048656 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f824bb5-ed65-4412-837c-48f7dd65f498" containerName="registry-server" Dec 11 15:05:55 crc kubenswrapper[5050]: I1211 15:05:55.050094 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9jcqm" Dec 11 15:05:55 crc kubenswrapper[5050]: I1211 15:05:55.064909 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9jcqm"] Dec 11 15:05:55 crc kubenswrapper[5050]: I1211 15:05:55.165839 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3da555d5-000a-44f4-b964-72e3ae4ba495-catalog-content\") pod \"certified-operators-9jcqm\" (UID: \"3da555d5-000a-44f4-b964-72e3ae4ba495\") " pod="openshift-marketplace/certified-operators-9jcqm" Dec 11 15:05:55 crc kubenswrapper[5050]: I1211 15:05:55.166521 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hw58v\" (UniqueName: \"kubernetes.io/projected/3da555d5-000a-44f4-b964-72e3ae4ba495-kube-api-access-hw58v\") pod \"certified-operators-9jcqm\" (UID: \"3da555d5-000a-44f4-b964-72e3ae4ba495\") " pod="openshift-marketplace/certified-operators-9jcqm" Dec 11 15:05:55 crc kubenswrapper[5050]: I1211 15:05:55.166791 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3da555d5-000a-44f4-b964-72e3ae4ba495-utilities\") pod \"certified-operators-9jcqm\" (UID: \"3da555d5-000a-44f4-b964-72e3ae4ba495\") " pod="openshift-marketplace/certified-operators-9jcqm" Dec 11 15:05:55 crc kubenswrapper[5050]: I1211 15:05:55.269171 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3da555d5-000a-44f4-b964-72e3ae4ba495-utilities\") pod \"certified-operators-9jcqm\" (UID: \"3da555d5-000a-44f4-b964-72e3ae4ba495\") " pod="openshift-marketplace/certified-operators-9jcqm" Dec 11 15:05:55 crc kubenswrapper[5050]: I1211 15:05:55.269325 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3da555d5-000a-44f4-b964-72e3ae4ba495-catalog-content\") pod \"certified-operators-9jcqm\" (UID: \"3da555d5-000a-44f4-b964-72e3ae4ba495\") " pod="openshift-marketplace/certified-operators-9jcqm" Dec 11 15:05:55 crc kubenswrapper[5050]: I1211 15:05:55.269381 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hw58v\" (UniqueName: \"kubernetes.io/projected/3da555d5-000a-44f4-b964-72e3ae4ba495-kube-api-access-hw58v\") pod \"certified-operators-9jcqm\" (UID: \"3da555d5-000a-44f4-b964-72e3ae4ba495\") " pod="openshift-marketplace/certified-operators-9jcqm" Dec 11 15:05:55 crc kubenswrapper[5050]: I1211 15:05:55.270078 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3da555d5-000a-44f4-b964-72e3ae4ba495-utilities\") pod \"certified-operators-9jcqm\" (UID: \"3da555d5-000a-44f4-b964-72e3ae4ba495\") " pod="openshift-marketplace/certified-operators-9jcqm" Dec 11 15:05:55 crc kubenswrapper[5050]: I1211 15:05:55.270533 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3da555d5-000a-44f4-b964-72e3ae4ba495-catalog-content\") pod \"certified-operators-9jcqm\" (UID: \"3da555d5-000a-44f4-b964-72e3ae4ba495\") " pod="openshift-marketplace/certified-operators-9jcqm" Dec 11 15:05:55 crc kubenswrapper[5050]: I1211 15:05:55.290377 5050 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-hw58v\" (UniqueName: \"kubernetes.io/projected/3da555d5-000a-44f4-b964-72e3ae4ba495-kube-api-access-hw58v\") pod \"certified-operators-9jcqm\" (UID: \"3da555d5-000a-44f4-b964-72e3ae4ba495\") " pod="openshift-marketplace/certified-operators-9jcqm" Dec 11 15:05:55 crc kubenswrapper[5050]: I1211 15:05:55.389127 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9jcqm" Dec 11 15:05:55 crc kubenswrapper[5050]: I1211 15:05:55.872396 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9jcqm"] Dec 11 15:05:56 crc kubenswrapper[5050]: I1211 15:05:56.239680 5050 generic.go:334] "Generic (PLEG): container finished" podID="3da555d5-000a-44f4-b964-72e3ae4ba495" containerID="cbf9641c37a340f9d68cb3acf287d317aa7d84067ac77b35baf73159b2e69652" exitCode=0 Dec 11 15:05:56 crc kubenswrapper[5050]: I1211 15:05:56.239876 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9jcqm" event={"ID":"3da555d5-000a-44f4-b964-72e3ae4ba495","Type":"ContainerDied","Data":"cbf9641c37a340f9d68cb3acf287d317aa7d84067ac77b35baf73159b2e69652"} Dec 11 15:05:56 crc kubenswrapper[5050]: I1211 15:05:56.240114 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9jcqm" event={"ID":"3da555d5-000a-44f4-b964-72e3ae4ba495","Type":"ContainerStarted","Data":"f1452ec6d9034c62eecd949331f1f50f4de1266fcef1ec95ec7e7e7ba68fdc8e"} Dec 11 15:05:56 crc kubenswrapper[5050]: I1211 15:05:56.546998 5050 scope.go:117] "RemoveContainer" containerID="b585bea20b0bb52e58642ee9f3e33b5b182a66a544d35daafc0202a53b35a810" Dec 11 15:05:56 crc kubenswrapper[5050]: E1211 15:05:56.547389 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 15:05:57 crc kubenswrapper[5050]: I1211 15:05:57.261647 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9jcqm" event={"ID":"3da555d5-000a-44f4-b964-72e3ae4ba495","Type":"ContainerStarted","Data":"1fe7e737642a6435a93f0f7103677b444cdd5778c333197823f99d3c1b9f5aba"} Dec 11 15:05:58 crc kubenswrapper[5050]: I1211 15:05:58.270635 5050 generic.go:334] "Generic (PLEG): container finished" podID="3da555d5-000a-44f4-b964-72e3ae4ba495" containerID="1fe7e737642a6435a93f0f7103677b444cdd5778c333197823f99d3c1b9f5aba" exitCode=0 Dec 11 15:05:58 crc kubenswrapper[5050]: I1211 15:05:58.270793 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9jcqm" event={"ID":"3da555d5-000a-44f4-b964-72e3ae4ba495","Type":"ContainerDied","Data":"1fe7e737642a6435a93f0f7103677b444cdd5778c333197823f99d3c1b9f5aba"} Dec 11 15:05:59 crc kubenswrapper[5050]: I1211 15:05:59.287482 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9jcqm" event={"ID":"3da555d5-000a-44f4-b964-72e3ae4ba495","Type":"ContainerStarted","Data":"339dc0fc4a2f22c6ce987f99dfeb30a9313e1b2065a50836e3d292632eb1f250"} Dec 11 15:05:59 crc kubenswrapper[5050]: I1211 15:05:59.316649 5050 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-9jcqm" podStartSLOduration=1.740480309 podStartE2EDuration="4.316626728s" podCreationTimestamp="2025-12-11 15:05:55 +0000 UTC" firstStartedPulling="2025-12-11 15:05:56.241909873 +0000 UTC m=+4647.085632459" lastFinishedPulling="2025-12-11 15:05:58.818056292 +0000 UTC m=+4649.661778878" observedRunningTime="2025-12-11 15:05:59.312788115 +0000 UTC m=+4650.156510751" watchObservedRunningTime="2025-12-11 15:05:59.316626728 +0000 UTC m=+4650.160349314" Dec 11 15:06:05 crc kubenswrapper[5050]: I1211 15:06:05.390217 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-9jcqm" Dec 11 15:06:05 crc kubenswrapper[5050]: I1211 15:06:05.390811 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-9jcqm" Dec 11 15:06:05 crc kubenswrapper[5050]: I1211 15:06:05.442347 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-9jcqm" Dec 11 15:06:06 crc kubenswrapper[5050]: I1211 15:06:06.403415 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-9jcqm" Dec 11 15:06:06 crc kubenswrapper[5050]: I1211 15:06:06.477466 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9jcqm"] Dec 11 15:06:07 crc kubenswrapper[5050]: I1211 15:06:07.547027 5050 scope.go:117] "RemoveContainer" containerID="b585bea20b0bb52e58642ee9f3e33b5b182a66a544d35daafc0202a53b35a810" Dec 11 15:06:07 crc kubenswrapper[5050]: E1211 15:06:07.547390 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 15:06:08 crc kubenswrapper[5050]: I1211 15:06:08.369870 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-9jcqm" podUID="3da555d5-000a-44f4-b964-72e3ae4ba495" containerName="registry-server" containerID="cri-o://339dc0fc4a2f22c6ce987f99dfeb30a9313e1b2065a50836e3d292632eb1f250" gracePeriod=2 Dec 11 15:06:09 crc kubenswrapper[5050]: I1211 15:06:09.319854 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9jcqm" Dec 11 15:06:09 crc kubenswrapper[5050]: I1211 15:06:09.379462 5050 generic.go:334] "Generic (PLEG): container finished" podID="3da555d5-000a-44f4-b964-72e3ae4ba495" containerID="339dc0fc4a2f22c6ce987f99dfeb30a9313e1b2065a50836e3d292632eb1f250" exitCode=0 Dec 11 15:06:09 crc kubenswrapper[5050]: I1211 15:06:09.379785 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9jcqm" event={"ID":"3da555d5-000a-44f4-b964-72e3ae4ba495","Type":"ContainerDied","Data":"339dc0fc4a2f22c6ce987f99dfeb30a9313e1b2065a50836e3d292632eb1f250"} Dec 11 15:06:09 crc kubenswrapper[5050]: I1211 15:06:09.379885 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9jcqm" event={"ID":"3da555d5-000a-44f4-b964-72e3ae4ba495","Type":"ContainerDied","Data":"f1452ec6d9034c62eecd949331f1f50f4de1266fcef1ec95ec7e7e7ba68fdc8e"} Dec 11 15:06:09 crc kubenswrapper[5050]: I1211 15:06:09.379983 5050 scope.go:117] "RemoveContainer" containerID="339dc0fc4a2f22c6ce987f99dfeb30a9313e1b2065a50836e3d292632eb1f250" Dec 11 15:06:09 crc kubenswrapper[5050]: I1211 15:06:09.380202 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9jcqm" Dec 11 15:06:09 crc kubenswrapper[5050]: I1211 15:06:09.416450 5050 scope.go:117] "RemoveContainer" containerID="1fe7e737642a6435a93f0f7103677b444cdd5778c333197823f99d3c1b9f5aba" Dec 11 15:06:09 crc kubenswrapper[5050]: I1211 15:06:09.446698 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3da555d5-000a-44f4-b964-72e3ae4ba495-utilities\") pod \"3da555d5-000a-44f4-b964-72e3ae4ba495\" (UID: \"3da555d5-000a-44f4-b964-72e3ae4ba495\") " Dec 11 15:06:09 crc kubenswrapper[5050]: I1211 15:06:09.446899 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hw58v\" (UniqueName: \"kubernetes.io/projected/3da555d5-000a-44f4-b964-72e3ae4ba495-kube-api-access-hw58v\") pod \"3da555d5-000a-44f4-b964-72e3ae4ba495\" (UID: \"3da555d5-000a-44f4-b964-72e3ae4ba495\") " Dec 11 15:06:09 crc kubenswrapper[5050]: I1211 15:06:09.447184 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3da555d5-000a-44f4-b964-72e3ae4ba495-catalog-content\") pod \"3da555d5-000a-44f4-b964-72e3ae4ba495\" (UID: \"3da555d5-000a-44f4-b964-72e3ae4ba495\") " Dec 11 15:06:09 crc kubenswrapper[5050]: I1211 15:06:09.449896 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3da555d5-000a-44f4-b964-72e3ae4ba495-utilities" (OuterVolumeSpecName: "utilities") pod "3da555d5-000a-44f4-b964-72e3ae4ba495" (UID: "3da555d5-000a-44f4-b964-72e3ae4ba495"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 15:06:09 crc kubenswrapper[5050]: I1211 15:06:09.450792 5050 scope.go:117] "RemoveContainer" containerID="cbf9641c37a340f9d68cb3acf287d317aa7d84067ac77b35baf73159b2e69652" Dec 11 15:06:09 crc kubenswrapper[5050]: I1211 15:06:09.459842 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3da555d5-000a-44f4-b964-72e3ae4ba495-kube-api-access-hw58v" (OuterVolumeSpecName: "kube-api-access-hw58v") pod "3da555d5-000a-44f4-b964-72e3ae4ba495" (UID: "3da555d5-000a-44f4-b964-72e3ae4ba495"). InnerVolumeSpecName "kube-api-access-hw58v". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 15:06:09 crc kubenswrapper[5050]: I1211 15:06:09.512048 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3da555d5-000a-44f4-b964-72e3ae4ba495-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3da555d5-000a-44f4-b964-72e3ae4ba495" (UID: "3da555d5-000a-44f4-b964-72e3ae4ba495"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 15:06:09 crc kubenswrapper[5050]: I1211 15:06:09.515608 5050 scope.go:117] "RemoveContainer" containerID="339dc0fc4a2f22c6ce987f99dfeb30a9313e1b2065a50836e3d292632eb1f250" Dec 11 15:06:09 crc kubenswrapper[5050]: E1211 15:06:09.516425 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"339dc0fc4a2f22c6ce987f99dfeb30a9313e1b2065a50836e3d292632eb1f250\": container with ID starting with 339dc0fc4a2f22c6ce987f99dfeb30a9313e1b2065a50836e3d292632eb1f250 not found: ID does not exist" containerID="339dc0fc4a2f22c6ce987f99dfeb30a9313e1b2065a50836e3d292632eb1f250" Dec 11 15:06:09 crc kubenswrapper[5050]: I1211 15:06:09.516541 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"339dc0fc4a2f22c6ce987f99dfeb30a9313e1b2065a50836e3d292632eb1f250"} err="failed to get container status \"339dc0fc4a2f22c6ce987f99dfeb30a9313e1b2065a50836e3d292632eb1f250\": rpc error: code = NotFound desc = could not find container \"339dc0fc4a2f22c6ce987f99dfeb30a9313e1b2065a50836e3d292632eb1f250\": container with ID starting with 339dc0fc4a2f22c6ce987f99dfeb30a9313e1b2065a50836e3d292632eb1f250 not found: ID does not exist" Dec 11 15:06:09 crc kubenswrapper[5050]: I1211 15:06:09.516671 5050 scope.go:117] "RemoveContainer" containerID="1fe7e737642a6435a93f0f7103677b444cdd5778c333197823f99d3c1b9f5aba" Dec 11 15:06:09 crc kubenswrapper[5050]: E1211 15:06:09.520400 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1fe7e737642a6435a93f0f7103677b444cdd5778c333197823f99d3c1b9f5aba\": container with ID starting with 1fe7e737642a6435a93f0f7103677b444cdd5778c333197823f99d3c1b9f5aba not found: ID does not exist" containerID="1fe7e737642a6435a93f0f7103677b444cdd5778c333197823f99d3c1b9f5aba" Dec 11 15:06:09 crc kubenswrapper[5050]: I1211 15:06:09.520560 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1fe7e737642a6435a93f0f7103677b444cdd5778c333197823f99d3c1b9f5aba"} err="failed to get container status \"1fe7e737642a6435a93f0f7103677b444cdd5778c333197823f99d3c1b9f5aba\": rpc error: code = NotFound desc = could not find container \"1fe7e737642a6435a93f0f7103677b444cdd5778c333197823f99d3c1b9f5aba\": container with ID starting with 
1fe7e737642a6435a93f0f7103677b444cdd5778c333197823f99d3c1b9f5aba not found: ID does not exist" Dec 11 15:06:09 crc kubenswrapper[5050]: I1211 15:06:09.521021 5050 scope.go:117] "RemoveContainer" containerID="cbf9641c37a340f9d68cb3acf287d317aa7d84067ac77b35baf73159b2e69652" Dec 11 15:06:09 crc kubenswrapper[5050]: E1211 15:06:09.524346 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cbf9641c37a340f9d68cb3acf287d317aa7d84067ac77b35baf73159b2e69652\": container with ID starting with cbf9641c37a340f9d68cb3acf287d317aa7d84067ac77b35baf73159b2e69652 not found: ID does not exist" containerID="cbf9641c37a340f9d68cb3acf287d317aa7d84067ac77b35baf73159b2e69652" Dec 11 15:06:09 crc kubenswrapper[5050]: I1211 15:06:09.524488 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cbf9641c37a340f9d68cb3acf287d317aa7d84067ac77b35baf73159b2e69652"} err="failed to get container status \"cbf9641c37a340f9d68cb3acf287d317aa7d84067ac77b35baf73159b2e69652\": rpc error: code = NotFound desc = could not find container \"cbf9641c37a340f9d68cb3acf287d317aa7d84067ac77b35baf73159b2e69652\": container with ID starting with cbf9641c37a340f9d68cb3acf287d317aa7d84067ac77b35baf73159b2e69652 not found: ID does not exist" Dec 11 15:06:09 crc kubenswrapper[5050]: I1211 15:06:09.549395 5050 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3da555d5-000a-44f4-b964-72e3ae4ba495-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 15:06:09 crc kubenswrapper[5050]: I1211 15:06:09.549434 5050 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3da555d5-000a-44f4-b964-72e3ae4ba495-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 15:06:09 crc kubenswrapper[5050]: I1211 15:06:09.549447 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hw58v\" (UniqueName: \"kubernetes.io/projected/3da555d5-000a-44f4-b964-72e3ae4ba495-kube-api-access-hw58v\") on node \"crc\" DevicePath \"\"" Dec 11 15:06:09 crc kubenswrapper[5050]: I1211 15:06:09.700594 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9jcqm"] Dec 11 15:06:09 crc kubenswrapper[5050]: I1211 15:06:09.709345 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-9jcqm"] Dec 11 15:06:11 crc kubenswrapper[5050]: I1211 15:06:11.564490 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3da555d5-000a-44f4-b964-72e3ae4ba495" path="/var/lib/kubelet/pods/3da555d5-000a-44f4-b964-72e3ae4ba495/volumes" Dec 11 15:06:21 crc kubenswrapper[5050]: I1211 15:06:21.547342 5050 scope.go:117] "RemoveContainer" containerID="b585bea20b0bb52e58642ee9f3e33b5b182a66a544d35daafc0202a53b35a810" Dec 11 15:06:21 crc kubenswrapper[5050]: E1211 15:06:21.548658 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 15:06:32 crc kubenswrapper[5050]: I1211 15:06:32.545934 5050 scope.go:117] "RemoveContainer" 
containerID="b585bea20b0bb52e58642ee9f3e33b5b182a66a544d35daafc0202a53b35a810" Dec 11 15:06:32 crc kubenswrapper[5050]: E1211 15:06:32.546938 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 15:06:44 crc kubenswrapper[5050]: I1211 15:06:44.546902 5050 scope.go:117] "RemoveContainer" containerID="b585bea20b0bb52e58642ee9f3e33b5b182a66a544d35daafc0202a53b35a810" Dec 11 15:06:44 crc kubenswrapper[5050]: E1211 15:06:44.548275 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 15:06:48 crc kubenswrapper[5050]: I1211 15:06:48.123316 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-95587bc99-5mqnk"] Dec 11 15:06:48 crc kubenswrapper[5050]: E1211 15:06:48.123853 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3da555d5-000a-44f4-b964-72e3ae4ba495" containerName="registry-server" Dec 11 15:06:48 crc kubenswrapper[5050]: I1211 15:06:48.123866 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="3da555d5-000a-44f4-b964-72e3ae4ba495" containerName="registry-server" Dec 11 15:06:48 crc kubenswrapper[5050]: E1211 15:06:48.123879 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3da555d5-000a-44f4-b964-72e3ae4ba495" containerName="extract-utilities" Dec 11 15:06:48 crc kubenswrapper[5050]: I1211 15:06:48.123885 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="3da555d5-000a-44f4-b964-72e3ae4ba495" containerName="extract-utilities" Dec 11 15:06:48 crc kubenswrapper[5050]: E1211 15:06:48.123907 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3da555d5-000a-44f4-b964-72e3ae4ba495" containerName="extract-content" Dec 11 15:06:48 crc kubenswrapper[5050]: I1211 15:06:48.123913 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="3da555d5-000a-44f4-b964-72e3ae4ba495" containerName="extract-content" Dec 11 15:06:48 crc kubenswrapper[5050]: I1211 15:06:48.124072 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="3da555d5-000a-44f4-b964-72e3ae4ba495" containerName="registry-server" Dec 11 15:06:48 crc kubenswrapper[5050]: I1211 15:06:48.124784 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-95587bc99-5mqnk" Dec 11 15:06:48 crc kubenswrapper[5050]: I1211 15:06:48.127351 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Dec 11 15:06:48 crc kubenswrapper[5050]: I1211 15:06:48.128218 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Dec 11 15:06:48 crc kubenswrapper[5050]: I1211 15:06:48.128883 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-kpmgv" Dec 11 15:06:48 crc kubenswrapper[5050]: I1211 15:06:48.130280 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Dec 11 15:06:48 crc kubenswrapper[5050]: I1211 15:06:48.137864 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Dec 11 15:06:48 crc kubenswrapper[5050]: I1211 15:06:48.148258 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8c8f2bd2-9e37-4146-a633-fa2e990a6f90-dns-svc\") pod \"dnsmasq-dns-95587bc99-5mqnk\" (UID: \"8c8f2bd2-9e37-4146-a633-fa2e990a6f90\") " pod="openstack/dnsmasq-dns-95587bc99-5mqnk" Dec 11 15:06:48 crc kubenswrapper[5050]: I1211 15:06:48.148378 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m5tts\" (UniqueName: \"kubernetes.io/projected/8c8f2bd2-9e37-4146-a633-fa2e990a6f90-kube-api-access-m5tts\") pod \"dnsmasq-dns-95587bc99-5mqnk\" (UID: \"8c8f2bd2-9e37-4146-a633-fa2e990a6f90\") " pod="openstack/dnsmasq-dns-95587bc99-5mqnk" Dec 11 15:06:48 crc kubenswrapper[5050]: I1211 15:06:48.148549 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c8f2bd2-9e37-4146-a633-fa2e990a6f90-config\") pod \"dnsmasq-dns-95587bc99-5mqnk\" (UID: \"8c8f2bd2-9e37-4146-a633-fa2e990a6f90\") " pod="openstack/dnsmasq-dns-95587bc99-5mqnk" Dec 11 15:06:48 crc kubenswrapper[5050]: I1211 15:06:48.163550 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-95587bc99-5mqnk"] Dec 11 15:06:48 crc kubenswrapper[5050]: I1211 15:06:48.257117 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8c8f2bd2-9e37-4146-a633-fa2e990a6f90-dns-svc\") pod \"dnsmasq-dns-95587bc99-5mqnk\" (UID: \"8c8f2bd2-9e37-4146-a633-fa2e990a6f90\") " pod="openstack/dnsmasq-dns-95587bc99-5mqnk" Dec 11 15:06:48 crc kubenswrapper[5050]: I1211 15:06:48.257230 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m5tts\" (UniqueName: \"kubernetes.io/projected/8c8f2bd2-9e37-4146-a633-fa2e990a6f90-kube-api-access-m5tts\") pod \"dnsmasq-dns-95587bc99-5mqnk\" (UID: \"8c8f2bd2-9e37-4146-a633-fa2e990a6f90\") " pod="openstack/dnsmasq-dns-95587bc99-5mqnk" Dec 11 15:06:48 crc kubenswrapper[5050]: I1211 15:06:48.257260 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c8f2bd2-9e37-4146-a633-fa2e990a6f90-config\") pod \"dnsmasq-dns-95587bc99-5mqnk\" (UID: \"8c8f2bd2-9e37-4146-a633-fa2e990a6f90\") " pod="openstack/dnsmasq-dns-95587bc99-5mqnk" Dec 11 15:06:48 crc kubenswrapper[5050]: I1211 15:06:48.258405 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/8c8f2bd2-9e37-4146-a633-fa2e990a6f90-dns-svc\") pod \"dnsmasq-dns-95587bc99-5mqnk\" (UID: \"8c8f2bd2-9e37-4146-a633-fa2e990a6f90\") " pod="openstack/dnsmasq-dns-95587bc99-5mqnk" Dec 11 15:06:48 crc kubenswrapper[5050]: I1211 15:06:48.258422 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c8f2bd2-9e37-4146-a633-fa2e990a6f90-config\") pod \"dnsmasq-dns-95587bc99-5mqnk\" (UID: \"8c8f2bd2-9e37-4146-a633-fa2e990a6f90\") " pod="openstack/dnsmasq-dns-95587bc99-5mqnk" Dec 11 15:06:48 crc kubenswrapper[5050]: I1211 15:06:48.286122 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m5tts\" (UniqueName: \"kubernetes.io/projected/8c8f2bd2-9e37-4146-a633-fa2e990a6f90-kube-api-access-m5tts\") pod \"dnsmasq-dns-95587bc99-5mqnk\" (UID: \"8c8f2bd2-9e37-4146-a633-fa2e990a6f90\") " pod="openstack/dnsmasq-dns-95587bc99-5mqnk" Dec 11 15:06:48 crc kubenswrapper[5050]: I1211 15:06:48.410851 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5d79f765b5-mr6dw"] Dec 11 15:06:48 crc kubenswrapper[5050]: I1211 15:06:48.412777 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d79f765b5-mr6dw" Dec 11 15:06:48 crc kubenswrapper[5050]: I1211 15:06:48.438462 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5d79f765b5-mr6dw"] Dec 11 15:06:48 crc kubenswrapper[5050]: I1211 15:06:48.442573 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-95587bc99-5mqnk" Dec 11 15:06:48 crc kubenswrapper[5050]: I1211 15:06:48.459911 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgltz\" (UniqueName: \"kubernetes.io/projected/786d8f03-bb4c-47b7-a94a-b86c9bc07c6d-kube-api-access-xgltz\") pod \"dnsmasq-dns-5d79f765b5-mr6dw\" (UID: \"786d8f03-bb4c-47b7-a94a-b86c9bc07c6d\") " pod="openstack/dnsmasq-dns-5d79f765b5-mr6dw" Dec 11 15:06:48 crc kubenswrapper[5050]: I1211 15:06:48.460024 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/786d8f03-bb4c-47b7-a94a-b86c9bc07c6d-config\") pod \"dnsmasq-dns-5d79f765b5-mr6dw\" (UID: \"786d8f03-bb4c-47b7-a94a-b86c9bc07c6d\") " pod="openstack/dnsmasq-dns-5d79f765b5-mr6dw" Dec 11 15:06:48 crc kubenswrapper[5050]: I1211 15:06:48.460221 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/786d8f03-bb4c-47b7-a94a-b86c9bc07c6d-dns-svc\") pod \"dnsmasq-dns-5d79f765b5-mr6dw\" (UID: \"786d8f03-bb4c-47b7-a94a-b86c9bc07c6d\") " pod="openstack/dnsmasq-dns-5d79f765b5-mr6dw" Dec 11 15:06:48 crc kubenswrapper[5050]: I1211 15:06:48.561424 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/786d8f03-bb4c-47b7-a94a-b86c9bc07c6d-dns-svc\") pod \"dnsmasq-dns-5d79f765b5-mr6dw\" (UID: \"786d8f03-bb4c-47b7-a94a-b86c9bc07c6d\") " pod="openstack/dnsmasq-dns-5d79f765b5-mr6dw" Dec 11 15:06:48 crc kubenswrapper[5050]: I1211 15:06:48.561518 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xgltz\" (UniqueName: \"kubernetes.io/projected/786d8f03-bb4c-47b7-a94a-b86c9bc07c6d-kube-api-access-xgltz\") pod \"dnsmasq-dns-5d79f765b5-mr6dw\" (UID: 
\"786d8f03-bb4c-47b7-a94a-b86c9bc07c6d\") " pod="openstack/dnsmasq-dns-5d79f765b5-mr6dw" Dec 11 15:06:48 crc kubenswrapper[5050]: I1211 15:06:48.561571 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/786d8f03-bb4c-47b7-a94a-b86c9bc07c6d-config\") pod \"dnsmasq-dns-5d79f765b5-mr6dw\" (UID: \"786d8f03-bb4c-47b7-a94a-b86c9bc07c6d\") " pod="openstack/dnsmasq-dns-5d79f765b5-mr6dw" Dec 11 15:06:48 crc kubenswrapper[5050]: I1211 15:06:48.562536 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/786d8f03-bb4c-47b7-a94a-b86c9bc07c6d-config\") pod \"dnsmasq-dns-5d79f765b5-mr6dw\" (UID: \"786d8f03-bb4c-47b7-a94a-b86c9bc07c6d\") " pod="openstack/dnsmasq-dns-5d79f765b5-mr6dw" Dec 11 15:06:48 crc kubenswrapper[5050]: I1211 15:06:48.562545 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/786d8f03-bb4c-47b7-a94a-b86c9bc07c6d-dns-svc\") pod \"dnsmasq-dns-5d79f765b5-mr6dw\" (UID: \"786d8f03-bb4c-47b7-a94a-b86c9bc07c6d\") " pod="openstack/dnsmasq-dns-5d79f765b5-mr6dw" Dec 11 15:06:48 crc kubenswrapper[5050]: I1211 15:06:48.789098 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xgltz\" (UniqueName: \"kubernetes.io/projected/786d8f03-bb4c-47b7-a94a-b86c9bc07c6d-kube-api-access-xgltz\") pod \"dnsmasq-dns-5d79f765b5-mr6dw\" (UID: \"786d8f03-bb4c-47b7-a94a-b86c9bc07c6d\") " pod="openstack/dnsmasq-dns-5d79f765b5-mr6dw" Dec 11 15:06:48 crc kubenswrapper[5050]: I1211 15:06:48.866539 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-95587bc99-5mqnk"] Dec 11 15:06:49 crc kubenswrapper[5050]: I1211 15:06:49.027685 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d79f765b5-mr6dw" Dec 11 15:06:49 crc kubenswrapper[5050]: I1211 15:06:49.276987 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Dec 11 15:06:49 crc kubenswrapper[5050]: I1211 15:06:49.279670 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 11 15:06:49 crc kubenswrapper[5050]: I1211 15:06:49.282414 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Dec 11 15:06:49 crc kubenswrapper[5050]: I1211 15:06:49.286713 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Dec 11 15:06:49 crc kubenswrapper[5050]: I1211 15:06:49.287073 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Dec 11 15:06:49 crc kubenswrapper[5050]: I1211 15:06:49.287426 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Dec 11 15:06:49 crc kubenswrapper[5050]: I1211 15:06:49.289462 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-zd6qh" Dec 11 15:06:49 crc kubenswrapper[5050]: I1211 15:06:49.292239 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 11 15:06:49 crc kubenswrapper[5050]: I1211 15:06:49.398869 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-llnqh\" (UniqueName: \"kubernetes.io/projected/609782a3-064e-4127-8ea6-080e428bea44-kube-api-access-llnqh\") pod \"rabbitmq-server-0\" (UID: \"609782a3-064e-4127-8ea6-080e428bea44\") " pod="openstack/rabbitmq-server-0" Dec 11 15:06:49 crc kubenswrapper[5050]: I1211 15:06:49.398923 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/609782a3-064e-4127-8ea6-080e428bea44-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"609782a3-064e-4127-8ea6-080e428bea44\") " pod="openstack/rabbitmq-server-0" Dec 11 15:06:49 crc kubenswrapper[5050]: I1211 15:06:49.398949 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/609782a3-064e-4127-8ea6-080e428bea44-server-conf\") pod \"rabbitmq-server-0\" (UID: \"609782a3-064e-4127-8ea6-080e428bea44\") " pod="openstack/rabbitmq-server-0" Dec 11 15:06:49 crc kubenswrapper[5050]: I1211 15:06:49.398970 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/609782a3-064e-4127-8ea6-080e428bea44-pod-info\") pod \"rabbitmq-server-0\" (UID: \"609782a3-064e-4127-8ea6-080e428bea44\") " pod="openstack/rabbitmq-server-0" Dec 11 15:06:49 crc kubenswrapper[5050]: I1211 15:06:49.399146 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-10e17a41-a3d8-4545-b0df-9368e3a79b07\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-10e17a41-a3d8-4545-b0df-9368e3a79b07\") pod \"rabbitmq-server-0\" (UID: \"609782a3-064e-4127-8ea6-080e428bea44\") " pod="openstack/rabbitmq-server-0" Dec 11 15:06:49 crc kubenswrapper[5050]: I1211 15:06:49.399218 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/609782a3-064e-4127-8ea6-080e428bea44-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"609782a3-064e-4127-8ea6-080e428bea44\") " pod="openstack/rabbitmq-server-0" Dec 11 15:06:49 crc kubenswrapper[5050]: I1211 15:06:49.399385 5050 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/609782a3-064e-4127-8ea6-080e428bea44-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"609782a3-064e-4127-8ea6-080e428bea44\") " pod="openstack/rabbitmq-server-0" Dec 11 15:06:49 crc kubenswrapper[5050]: I1211 15:06:49.399458 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/609782a3-064e-4127-8ea6-080e428bea44-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"609782a3-064e-4127-8ea6-080e428bea44\") " pod="openstack/rabbitmq-server-0" Dec 11 15:06:49 crc kubenswrapper[5050]: I1211 15:06:49.399519 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/609782a3-064e-4127-8ea6-080e428bea44-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"609782a3-064e-4127-8ea6-080e428bea44\") " pod="openstack/rabbitmq-server-0" Dec 11 15:06:49 crc kubenswrapper[5050]: I1211 15:06:49.500872 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-llnqh\" (UniqueName: \"kubernetes.io/projected/609782a3-064e-4127-8ea6-080e428bea44-kube-api-access-llnqh\") pod \"rabbitmq-server-0\" (UID: \"609782a3-064e-4127-8ea6-080e428bea44\") " pod="openstack/rabbitmq-server-0" Dec 11 15:06:49 crc kubenswrapper[5050]: I1211 15:06:49.500924 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/609782a3-064e-4127-8ea6-080e428bea44-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"609782a3-064e-4127-8ea6-080e428bea44\") " pod="openstack/rabbitmq-server-0" Dec 11 15:06:49 crc kubenswrapper[5050]: I1211 15:06:49.500947 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/609782a3-064e-4127-8ea6-080e428bea44-server-conf\") pod \"rabbitmq-server-0\" (UID: \"609782a3-064e-4127-8ea6-080e428bea44\") " pod="openstack/rabbitmq-server-0" Dec 11 15:06:49 crc kubenswrapper[5050]: I1211 15:06:49.500964 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/609782a3-064e-4127-8ea6-080e428bea44-pod-info\") pod \"rabbitmq-server-0\" (UID: \"609782a3-064e-4127-8ea6-080e428bea44\") " pod="openstack/rabbitmq-server-0" Dec 11 15:06:49 crc kubenswrapper[5050]: I1211 15:06:49.501004 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-10e17a41-a3d8-4545-b0df-9368e3a79b07\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-10e17a41-a3d8-4545-b0df-9368e3a79b07\") pod \"rabbitmq-server-0\" (UID: \"609782a3-064e-4127-8ea6-080e428bea44\") " pod="openstack/rabbitmq-server-0" Dec 11 15:06:49 crc kubenswrapper[5050]: I1211 15:06:49.501047 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/609782a3-064e-4127-8ea6-080e428bea44-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"609782a3-064e-4127-8ea6-080e428bea44\") " pod="openstack/rabbitmq-server-0" Dec 11 15:06:49 crc kubenswrapper[5050]: I1211 15:06:49.501097 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/609782a3-064e-4127-8ea6-080e428bea44-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"609782a3-064e-4127-8ea6-080e428bea44\") " pod="openstack/rabbitmq-server-0" Dec 11 15:06:49 crc kubenswrapper[5050]: I1211 15:06:49.501147 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/609782a3-064e-4127-8ea6-080e428bea44-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"609782a3-064e-4127-8ea6-080e428bea44\") " pod="openstack/rabbitmq-server-0" Dec 11 15:06:49 crc kubenswrapper[5050]: I1211 15:06:49.501177 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/609782a3-064e-4127-8ea6-080e428bea44-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"609782a3-064e-4127-8ea6-080e428bea44\") " pod="openstack/rabbitmq-server-0" Dec 11 15:06:49 crc kubenswrapper[5050]: I1211 15:06:49.502151 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/609782a3-064e-4127-8ea6-080e428bea44-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"609782a3-064e-4127-8ea6-080e428bea44\") " pod="openstack/rabbitmq-server-0" Dec 11 15:06:49 crc kubenswrapper[5050]: I1211 15:06:49.502237 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/609782a3-064e-4127-8ea6-080e428bea44-server-conf\") pod \"rabbitmq-server-0\" (UID: \"609782a3-064e-4127-8ea6-080e428bea44\") " pod="openstack/rabbitmq-server-0" Dec 11 15:06:49 crc kubenswrapper[5050]: I1211 15:06:49.502480 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/609782a3-064e-4127-8ea6-080e428bea44-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"609782a3-064e-4127-8ea6-080e428bea44\") " pod="openstack/rabbitmq-server-0" Dec 11 15:06:49 crc kubenswrapper[5050]: I1211 15:06:49.502940 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/609782a3-064e-4127-8ea6-080e428bea44-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"609782a3-064e-4127-8ea6-080e428bea44\") " pod="openstack/rabbitmq-server-0" Dec 11 15:06:49 crc kubenswrapper[5050]: I1211 15:06:49.506383 5050 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Dec 11 15:06:49 crc kubenswrapper[5050]: I1211 15:06:49.506416 5050 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-10e17a41-a3d8-4545-b0df-9368e3a79b07\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-10e17a41-a3d8-4545-b0df-9368e3a79b07\") pod \"rabbitmq-server-0\" (UID: \"609782a3-064e-4127-8ea6-080e428bea44\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/e8b3cb282516c479b71e40677cbee6e03decb409970b73313cde2336ff2ca689/globalmount\"" pod="openstack/rabbitmq-server-0" Dec 11 15:06:49 crc kubenswrapper[5050]: I1211 15:06:49.508247 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/609782a3-064e-4127-8ea6-080e428bea44-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"609782a3-064e-4127-8ea6-080e428bea44\") " pod="openstack/rabbitmq-server-0" Dec 11 15:06:49 crc kubenswrapper[5050]: I1211 15:06:49.508471 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/609782a3-064e-4127-8ea6-080e428bea44-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"609782a3-064e-4127-8ea6-080e428bea44\") " pod="openstack/rabbitmq-server-0" Dec 11 15:06:49 crc kubenswrapper[5050]: I1211 15:06:49.514907 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/609782a3-064e-4127-8ea6-080e428bea44-pod-info\") pod \"rabbitmq-server-0\" (UID: \"609782a3-064e-4127-8ea6-080e428bea44\") " pod="openstack/rabbitmq-server-0" Dec 11 15:06:49 crc kubenswrapper[5050]: I1211 15:06:49.526670 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-llnqh\" (UniqueName: \"kubernetes.io/projected/609782a3-064e-4127-8ea6-080e428bea44-kube-api-access-llnqh\") pod \"rabbitmq-server-0\" (UID: \"609782a3-064e-4127-8ea6-080e428bea44\") " pod="openstack/rabbitmq-server-0" Dec 11 15:06:49 crc kubenswrapper[5050]: I1211 15:06:49.570052 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 11 15:06:49 crc kubenswrapper[5050]: I1211 15:06:49.571414 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 11 15:06:49 crc kubenswrapper[5050]: I1211 15:06:49.575386 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-bvxnm" Dec 11 15:06:49 crc kubenswrapper[5050]: I1211 15:06:49.575567 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Dec 11 15:06:49 crc kubenswrapper[5050]: I1211 15:06:49.575710 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Dec 11 15:06:49 crc kubenswrapper[5050]: I1211 15:06:49.575817 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Dec 11 15:06:49 crc kubenswrapper[5050]: I1211 15:06:49.575969 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Dec 11 15:06:49 crc kubenswrapper[5050]: I1211 15:06:49.708082 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/63be72e0-3588-4234-94d0-80c42d74aff6-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"63be72e0-3588-4234-94d0-80c42d74aff6\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 15:06:49 crc kubenswrapper[5050]: I1211 15:06:49.708169 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/63be72e0-3588-4234-94d0-80c42d74aff6-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"63be72e0-3588-4234-94d0-80c42d74aff6\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 15:06:49 crc kubenswrapper[5050]: I1211 15:06:49.708189 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/63be72e0-3588-4234-94d0-80c42d74aff6-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"63be72e0-3588-4234-94d0-80c42d74aff6\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 15:06:49 crc kubenswrapper[5050]: I1211 15:06:49.708216 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/63be72e0-3588-4234-94d0-80c42d74aff6-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"63be72e0-3588-4234-94d0-80c42d74aff6\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 15:06:49 crc kubenswrapper[5050]: I1211 15:06:49.708231 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/63be72e0-3588-4234-94d0-80c42d74aff6-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"63be72e0-3588-4234-94d0-80c42d74aff6\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 15:06:49 crc kubenswrapper[5050]: I1211 15:06:49.708261 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/63be72e0-3588-4234-94d0-80c42d74aff6-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"63be72e0-3588-4234-94d0-80c42d74aff6\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 15:06:49 crc kubenswrapper[5050]: I1211 15:06:49.708300 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: 
\"kubernetes.io/configmap/63be72e0-3588-4234-94d0-80c42d74aff6-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"63be72e0-3588-4234-94d0-80c42d74aff6\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 15:06:49 crc kubenswrapper[5050]: I1211 15:06:49.708336 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qqgcc\" (UniqueName: \"kubernetes.io/projected/63be72e0-3588-4234-94d0-80c42d74aff6-kube-api-access-qqgcc\") pod \"rabbitmq-cell1-server-0\" (UID: \"63be72e0-3588-4234-94d0-80c42d74aff6\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 15:06:49 crc kubenswrapper[5050]: I1211 15:06:49.708354 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-22bf139d-2dbc-4fae-b0f0-8d7e33ad3a27\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-22bf139d-2dbc-4fae-b0f0-8d7e33ad3a27\") pod \"rabbitmq-cell1-server-0\" (UID: \"63be72e0-3588-4234-94d0-80c42d74aff6\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 15:06:49 crc kubenswrapper[5050]: I1211 15:06:49.810113 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/63be72e0-3588-4234-94d0-80c42d74aff6-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"63be72e0-3588-4234-94d0-80c42d74aff6\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 15:06:49 crc kubenswrapper[5050]: I1211 15:06:49.810220 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/63be72e0-3588-4234-94d0-80c42d74aff6-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"63be72e0-3588-4234-94d0-80c42d74aff6\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 15:06:49 crc kubenswrapper[5050]: I1211 15:06:49.810245 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/63be72e0-3588-4234-94d0-80c42d74aff6-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"63be72e0-3588-4234-94d0-80c42d74aff6\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 15:06:49 crc kubenswrapper[5050]: I1211 15:06:49.810278 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/63be72e0-3588-4234-94d0-80c42d74aff6-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"63be72e0-3588-4234-94d0-80c42d74aff6\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 15:06:49 crc kubenswrapper[5050]: I1211 15:06:49.810299 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/63be72e0-3588-4234-94d0-80c42d74aff6-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"63be72e0-3588-4234-94d0-80c42d74aff6\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 15:06:49 crc kubenswrapper[5050]: I1211 15:06:49.810353 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/63be72e0-3588-4234-94d0-80c42d74aff6-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"63be72e0-3588-4234-94d0-80c42d74aff6\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 15:06:49 crc kubenswrapper[5050]: I1211 15:06:49.810417 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: 
\"kubernetes.io/configmap/63be72e0-3588-4234-94d0-80c42d74aff6-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"63be72e0-3588-4234-94d0-80c42d74aff6\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 15:06:49 crc kubenswrapper[5050]: I1211 15:06:49.810494 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qqgcc\" (UniqueName: \"kubernetes.io/projected/63be72e0-3588-4234-94d0-80c42d74aff6-kube-api-access-qqgcc\") pod \"rabbitmq-cell1-server-0\" (UID: \"63be72e0-3588-4234-94d0-80c42d74aff6\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 15:06:49 crc kubenswrapper[5050]: I1211 15:06:49.810526 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-22bf139d-2dbc-4fae-b0f0-8d7e33ad3a27\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-22bf139d-2dbc-4fae-b0f0-8d7e33ad3a27\") pod \"rabbitmq-cell1-server-0\" (UID: \"63be72e0-3588-4234-94d0-80c42d74aff6\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 15:06:49 crc kubenswrapper[5050]: I1211 15:06:49.811698 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/63be72e0-3588-4234-94d0-80c42d74aff6-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"63be72e0-3588-4234-94d0-80c42d74aff6\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 15:06:49 crc kubenswrapper[5050]: I1211 15:06:49.817199 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/63be72e0-3588-4234-94d0-80c42d74aff6-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"63be72e0-3588-4234-94d0-80c42d74aff6\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 15:06:49 crc kubenswrapper[5050]: I1211 15:06:49.817472 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/63be72e0-3588-4234-94d0-80c42d74aff6-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"63be72e0-3588-4234-94d0-80c42d74aff6\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 15:06:49 crc kubenswrapper[5050]: I1211 15:06:49.820912 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/63be72e0-3588-4234-94d0-80c42d74aff6-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"63be72e0-3588-4234-94d0-80c42d74aff6\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 15:06:49 crc kubenswrapper[5050]: I1211 15:06:49.826597 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/63be72e0-3588-4234-94d0-80c42d74aff6-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"63be72e0-3588-4234-94d0-80c42d74aff6\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 15:06:49 crc kubenswrapper[5050]: I1211 15:06:49.827037 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/63be72e0-3588-4234-94d0-80c42d74aff6-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"63be72e0-3588-4234-94d0-80c42d74aff6\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 15:06:49 crc kubenswrapper[5050]: I1211 15:06:49.828092 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/63be72e0-3588-4234-94d0-80c42d74aff6-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"63be72e0-3588-4234-94d0-80c42d74aff6\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 15:06:49 crc kubenswrapper[5050]: I1211 15:06:49.857974 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qqgcc\" (UniqueName: \"kubernetes.io/projected/63be72e0-3588-4234-94d0-80c42d74aff6-kube-api-access-qqgcc\") pod \"rabbitmq-cell1-server-0\" (UID: \"63be72e0-3588-4234-94d0-80c42d74aff6\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 15:06:49 crc kubenswrapper[5050]: I1211 15:06:49.882199 5050 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Dec 11 15:06:49 crc kubenswrapper[5050]: I1211 15:06:49.883868 5050 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-22bf139d-2dbc-4fae-b0f0-8d7e33ad3a27\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-22bf139d-2dbc-4fae-b0f0-8d7e33ad3a27\") pod \"rabbitmq-cell1-server-0\" (UID: \"63be72e0-3588-4234-94d0-80c42d74aff6\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/4fd167e89a34b4de95c9058341d491e0e53431ae52baf09da68564c9e515f4bf/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Dec 11 15:06:49 crc kubenswrapper[5050]: I1211 15:06:49.885544 5050 generic.go:334] "Generic (PLEG): container finished" podID="8c8f2bd2-9e37-4146-a633-fa2e990a6f90" containerID="6a243224ce900f234ad231a3666c32c5fe50719cb972371fa564e85261c0ea29" exitCode=0 Dec 11 15:06:49 crc kubenswrapper[5050]: I1211 15:06:49.885579 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-95587bc99-5mqnk" event={"ID":"8c8f2bd2-9e37-4146-a633-fa2e990a6f90","Type":"ContainerDied","Data":"6a243224ce900f234ad231a3666c32c5fe50719cb972371fa564e85261c0ea29"} Dec 11 15:06:49 crc kubenswrapper[5050]: I1211 15:06:49.885603 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-95587bc99-5mqnk" event={"ID":"8c8f2bd2-9e37-4146-a633-fa2e990a6f90","Type":"ContainerStarted","Data":"3e3dd5abadbafea7e5e6dbdb3bc513bb57a5e6f595e57a999b01dbd5c31cfc6e"} Dec 11 15:06:49 crc kubenswrapper[5050]: I1211 15:06:49.916190 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 11 15:06:49 crc kubenswrapper[5050]: I1211 15:06:49.950392 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5d79f765b5-mr6dw"] Dec 11 15:06:49 crc kubenswrapper[5050]: I1211 15:06:49.950968 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-10e17a41-a3d8-4545-b0df-9368e3a79b07\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-10e17a41-a3d8-4545-b0df-9368e3a79b07\") pod \"rabbitmq-server-0\" (UID: \"609782a3-064e-4127-8ea6-080e428bea44\") " pod="openstack/rabbitmq-server-0" Dec 11 15:06:49 crc kubenswrapper[5050]: I1211 15:06:49.967725 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-22bf139d-2dbc-4fae-b0f0-8d7e33ad3a27\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-22bf139d-2dbc-4fae-b0f0-8d7e33ad3a27\") pod \"rabbitmq-cell1-server-0\" (UID: \"63be72e0-3588-4234-94d0-80c42d74aff6\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 15:06:50 crc kubenswrapper[5050]: E1211 15:06:50.086536 5050 log.go:32] "CreateContainer in sandbox from runtime service failed" err=< Dec 11 15:06:50 crc kubenswrapper[5050]: rpc error: code = Unknown desc = container create failed: mount 
`/var/lib/kubelet/pods/8c8f2bd2-9e37-4146-a633-fa2e990a6f90/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Dec 11 15:06:50 crc kubenswrapper[5050]: > podSandboxID="3e3dd5abadbafea7e5e6dbdb3bc513bb57a5e6f595e57a999b01dbd5c31cfc6e" Dec 11 15:06:50 crc kubenswrapper[5050]: E1211 15:06:50.086793 5050 kuberuntime_manager.go:1274] "Unhandled Error" err=< Dec 11 15:06:50 crc kubenswrapper[5050]: container &Container{Name:dnsmasq-dns,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:ea0bf67f1aa5d95a9a07b9c8692c293470f1311792c55d3d57f1f92e56689c33,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n8chc6h5bh56fh546hb7hc8h67h5bchffh577h697h5b5h5bdh59bhf6hf4h558hb5h578h595h5cchfbh644h59ch7fh654h547h587h5cbh5d5h8fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m5tts,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-95587bc99-5mqnk_openstack(8c8f2bd2-9e37-4146-a633-fa2e990a6f90): CreateContainerError: container create failed: mount `/var/lib/kubelet/pods/8c8f2bd2-9e37-4146-a633-fa2e990a6f90/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Dec 11 15:06:50 crc kubenswrapper[5050]: > logger="UnhandledError" Dec 11 15:06:50 crc kubenswrapper[5050]: E1211 15:06:50.087978 5050 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"dnsmasq-dns\" with CreateContainerError: \"container create failed: mount `/var/lib/kubelet/pods/8c8f2bd2-9e37-4146-a633-fa2e990a6f90/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory\\n\"" pod="openstack/dnsmasq-dns-95587bc99-5mqnk" podUID="8c8f2bd2-9e37-4146-a633-fa2e990a6f90" Dec 11 15:06:50 crc kubenswrapper[5050]: I1211 15:06:50.215452 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 11 15:06:50 crc kubenswrapper[5050]: I1211 15:06:50.240516 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 11 15:06:50 crc kubenswrapper[5050]: W1211 15:06:50.541581 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod63be72e0_3588_4234_94d0_80c42d74aff6.slice/crio-d8727f4042c1c95cd8c86c4817dbd6784487eb7f5bb8f2858a787e2fc1f2fc4d WatchSource:0}: Error finding container d8727f4042c1c95cd8c86c4817dbd6784487eb7f5bb8f2858a787e2fc1f2fc4d: Status 404 returned error can't find the container with id d8727f4042c1c95cd8c86c4817dbd6784487eb7f5bb8f2858a787e2fc1f2fc4d Dec 11 15:06:50 crc kubenswrapper[5050]: I1211 15:06:50.542831 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 11 15:06:50 crc kubenswrapper[5050]: I1211 15:06:50.637844 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Dec 11 15:06:50 crc kubenswrapper[5050]: I1211 15:06:50.643582 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Dec 11 15:06:50 crc kubenswrapper[5050]: I1211 15:06:50.649813 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Dec 11 15:06:50 crc kubenswrapper[5050]: I1211 15:06:50.650164 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Dec 11 15:06:50 crc kubenswrapper[5050]: I1211 15:06:50.650467 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Dec 11 15:06:50 crc kubenswrapper[5050]: I1211 15:06:50.651805 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-5gcmv" Dec 11 15:06:50 crc kubenswrapper[5050]: I1211 15:06:50.671531 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Dec 11 15:06:50 crc kubenswrapper[5050]: I1211 15:06:50.719337 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Dec 11 15:06:50 crc kubenswrapper[5050]: I1211 15:06:50.749946 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/14ad1594-090d-4024-a999-9ffe77ce58d8-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"14ad1594-090d-4024-a999-9ffe77ce58d8\") " pod="openstack/openstack-galera-0" Dec 11 15:06:50 crc kubenswrapper[5050]: I1211 15:06:50.750027 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-f505f33b-fb07-41ec-878f-f7928150621b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f505f33b-fb07-41ec-878f-f7928150621b\") pod \"openstack-galera-0\" (UID: \"14ad1594-090d-4024-a999-9ffe77ce58d8\") " 
pod="openstack/openstack-galera-0" Dec 11 15:06:50 crc kubenswrapper[5050]: I1211 15:06:50.750064 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14ad1594-090d-4024-a999-9ffe77ce58d8-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"14ad1594-090d-4024-a999-9ffe77ce58d8\") " pod="openstack/openstack-galera-0" Dec 11 15:06:50 crc kubenswrapper[5050]: I1211 15:06:50.750197 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/14ad1594-090d-4024-a999-9ffe77ce58d8-config-data-default\") pod \"openstack-galera-0\" (UID: \"14ad1594-090d-4024-a999-9ffe77ce58d8\") " pod="openstack/openstack-galera-0" Dec 11 15:06:50 crc kubenswrapper[5050]: I1211 15:06:50.750319 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/14ad1594-090d-4024-a999-9ffe77ce58d8-operator-scripts\") pod \"openstack-galera-0\" (UID: \"14ad1594-090d-4024-a999-9ffe77ce58d8\") " pod="openstack/openstack-galera-0" Dec 11 15:06:50 crc kubenswrapper[5050]: I1211 15:06:50.750467 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/14ad1594-090d-4024-a999-9ffe77ce58d8-kolla-config\") pod \"openstack-galera-0\" (UID: \"14ad1594-090d-4024-a999-9ffe77ce58d8\") " pod="openstack/openstack-galera-0" Dec 11 15:06:50 crc kubenswrapper[5050]: I1211 15:06:50.750508 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brc8q\" (UniqueName: \"kubernetes.io/projected/14ad1594-090d-4024-a999-9ffe77ce58d8-kube-api-access-brc8q\") pod \"openstack-galera-0\" (UID: \"14ad1594-090d-4024-a999-9ffe77ce58d8\") " pod="openstack/openstack-galera-0" Dec 11 15:06:50 crc kubenswrapper[5050]: I1211 15:06:50.750541 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/14ad1594-090d-4024-a999-9ffe77ce58d8-config-data-generated\") pod \"openstack-galera-0\" (UID: \"14ad1594-090d-4024-a999-9ffe77ce58d8\") " pod="openstack/openstack-galera-0" Dec 11 15:06:50 crc kubenswrapper[5050]: I1211 15:06:50.775636 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 11 15:06:50 crc kubenswrapper[5050]: I1211 15:06:50.855465 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-f505f33b-fb07-41ec-878f-f7928150621b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f505f33b-fb07-41ec-878f-f7928150621b\") pod \"openstack-galera-0\" (UID: \"14ad1594-090d-4024-a999-9ffe77ce58d8\") " pod="openstack/openstack-galera-0" Dec 11 15:06:50 crc kubenswrapper[5050]: I1211 15:06:50.855538 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14ad1594-090d-4024-a999-9ffe77ce58d8-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"14ad1594-090d-4024-a999-9ffe77ce58d8\") " pod="openstack/openstack-galera-0" Dec 11 15:06:50 crc kubenswrapper[5050]: I1211 15:06:50.855566 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: 
\"kubernetes.io/configmap/14ad1594-090d-4024-a999-9ffe77ce58d8-config-data-default\") pod \"openstack-galera-0\" (UID: \"14ad1594-090d-4024-a999-9ffe77ce58d8\") " pod="openstack/openstack-galera-0" Dec 11 15:06:50 crc kubenswrapper[5050]: I1211 15:06:50.855594 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/14ad1594-090d-4024-a999-9ffe77ce58d8-operator-scripts\") pod \"openstack-galera-0\" (UID: \"14ad1594-090d-4024-a999-9ffe77ce58d8\") " pod="openstack/openstack-galera-0" Dec 11 15:06:50 crc kubenswrapper[5050]: I1211 15:06:50.855639 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/14ad1594-090d-4024-a999-9ffe77ce58d8-kolla-config\") pod \"openstack-galera-0\" (UID: \"14ad1594-090d-4024-a999-9ffe77ce58d8\") " pod="openstack/openstack-galera-0" Dec 11 15:06:50 crc kubenswrapper[5050]: I1211 15:06:50.855661 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-brc8q\" (UniqueName: \"kubernetes.io/projected/14ad1594-090d-4024-a999-9ffe77ce58d8-kube-api-access-brc8q\") pod \"openstack-galera-0\" (UID: \"14ad1594-090d-4024-a999-9ffe77ce58d8\") " pod="openstack/openstack-galera-0" Dec 11 15:06:50 crc kubenswrapper[5050]: I1211 15:06:50.855683 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/14ad1594-090d-4024-a999-9ffe77ce58d8-config-data-generated\") pod \"openstack-galera-0\" (UID: \"14ad1594-090d-4024-a999-9ffe77ce58d8\") " pod="openstack/openstack-galera-0" Dec 11 15:06:50 crc kubenswrapper[5050]: I1211 15:06:50.855731 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/14ad1594-090d-4024-a999-9ffe77ce58d8-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"14ad1594-090d-4024-a999-9ffe77ce58d8\") " pod="openstack/openstack-galera-0" Dec 11 15:06:50 crc kubenswrapper[5050]: I1211 15:06:50.857363 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/14ad1594-090d-4024-a999-9ffe77ce58d8-kolla-config\") pod \"openstack-galera-0\" (UID: \"14ad1594-090d-4024-a999-9ffe77ce58d8\") " pod="openstack/openstack-galera-0" Dec 11 15:06:50 crc kubenswrapper[5050]: I1211 15:06:50.857517 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/14ad1594-090d-4024-a999-9ffe77ce58d8-operator-scripts\") pod \"openstack-galera-0\" (UID: \"14ad1594-090d-4024-a999-9ffe77ce58d8\") " pod="openstack/openstack-galera-0" Dec 11 15:06:50 crc kubenswrapper[5050]: I1211 15:06:50.858161 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/14ad1594-090d-4024-a999-9ffe77ce58d8-config-data-default\") pod \"openstack-galera-0\" (UID: \"14ad1594-090d-4024-a999-9ffe77ce58d8\") " pod="openstack/openstack-galera-0" Dec 11 15:06:50 crc kubenswrapper[5050]: I1211 15:06:50.859624 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/14ad1594-090d-4024-a999-9ffe77ce58d8-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"14ad1594-090d-4024-a999-9ffe77ce58d8\") " pod="openstack/openstack-galera-0" Dec 11 15:06:50 crc 
kubenswrapper[5050]: I1211 15:06:50.860544 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/14ad1594-090d-4024-a999-9ffe77ce58d8-config-data-generated\") pod \"openstack-galera-0\" (UID: \"14ad1594-090d-4024-a999-9ffe77ce58d8\") " pod="openstack/openstack-galera-0" Dec 11 15:06:50 crc kubenswrapper[5050]: I1211 15:06:50.861139 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14ad1594-090d-4024-a999-9ffe77ce58d8-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"14ad1594-090d-4024-a999-9ffe77ce58d8\") " pod="openstack/openstack-galera-0" Dec 11 15:06:50 crc kubenswrapper[5050]: I1211 15:06:50.868055 5050 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Dec 11 15:06:50 crc kubenswrapper[5050]: I1211 15:06:50.868090 5050 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-f505f33b-fb07-41ec-878f-f7928150621b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f505f33b-fb07-41ec-878f-f7928150621b\") pod \"openstack-galera-0\" (UID: \"14ad1594-090d-4024-a999-9ffe77ce58d8\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/03f741efe1de37b54d3e57495e740936f09668342677d906bd869fe8b100fd99/globalmount\"" pod="openstack/openstack-galera-0" Dec 11 15:06:50 crc kubenswrapper[5050]: I1211 15:06:50.896461 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-brc8q\" (UniqueName: \"kubernetes.io/projected/14ad1594-090d-4024-a999-9ffe77ce58d8-kube-api-access-brc8q\") pod \"openstack-galera-0\" (UID: \"14ad1594-090d-4024-a999-9ffe77ce58d8\") " pod="openstack/openstack-galera-0" Dec 11 15:06:50 crc kubenswrapper[5050]: I1211 15:06:50.902232 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"63be72e0-3588-4234-94d0-80c42d74aff6","Type":"ContainerStarted","Data":"d8727f4042c1c95cd8c86c4817dbd6784487eb7f5bb8f2858a787e2fc1f2fc4d"} Dec 11 15:06:50 crc kubenswrapper[5050]: I1211 15:06:50.904708 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"609782a3-064e-4127-8ea6-080e428bea44","Type":"ContainerStarted","Data":"ef74ef1eee9834b492f3d92def8f0f8f9af78fcb33f8b5ac072621b47e638fd3"} Dec 11 15:06:50 crc kubenswrapper[5050]: I1211 15:06:50.909335 5050 generic.go:334] "Generic (PLEG): container finished" podID="786d8f03-bb4c-47b7-a94a-b86c9bc07c6d" containerID="86332dd77e7fbe45f6dc8f8badf46d8e67b3e5e49ae5675d8958b530134e1997" exitCode=0 Dec 11 15:06:50 crc kubenswrapper[5050]: I1211 15:06:50.909440 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d79f765b5-mr6dw" event={"ID":"786d8f03-bb4c-47b7-a94a-b86c9bc07c6d","Type":"ContainerDied","Data":"86332dd77e7fbe45f6dc8f8badf46d8e67b3e5e49ae5675d8958b530134e1997"} Dec 11 15:06:50 crc kubenswrapper[5050]: I1211 15:06:50.909491 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d79f765b5-mr6dw" event={"ID":"786d8f03-bb4c-47b7-a94a-b86c9bc07c6d","Type":"ContainerStarted","Data":"8f7801357a868d096ff8f134560ab3d8255dc4b66d1cfcc58510f03ecffab581"} Dec 11 15:06:50 crc kubenswrapper[5050]: I1211 15:06:50.928957 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Dec 11 15:06:50 crc kubenswrapper[5050]: 
I1211 15:06:50.949726 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Dec 11 15:06:50 crc kubenswrapper[5050]: I1211 15:06:50.949849 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Dec 11 15:06:50 crc kubenswrapper[5050]: I1211 15:06:50.956461 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Dec 11 15:06:50 crc kubenswrapper[5050]: I1211 15:06:50.956614 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-kl4q7" Dec 11 15:06:50 crc kubenswrapper[5050]: I1211 15:06:50.982274 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-f505f33b-fb07-41ec-878f-f7928150621b\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f505f33b-fb07-41ec-878f-f7928150621b\") pod \"openstack-galera-0\" (UID: \"14ad1594-090d-4024-a999-9ffe77ce58d8\") " pod="openstack/openstack-galera-0" Dec 11 15:06:51 crc kubenswrapper[5050]: I1211 15:06:51.018293 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Dec 11 15:06:51 crc kubenswrapper[5050]: I1211 15:06:51.059209 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/4dfc7ab9-cde7-4f8e-bdc0-4f0851b87095-kolla-config\") pod \"memcached-0\" (UID: \"4dfc7ab9-cde7-4f8e-bdc0-4f0851b87095\") " pod="openstack/memcached-0" Dec 11 15:06:51 crc kubenswrapper[5050]: I1211 15:06:51.059364 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4dfc7ab9-cde7-4f8e-bdc0-4f0851b87095-config-data\") pod \"memcached-0\" (UID: \"4dfc7ab9-cde7-4f8e-bdc0-4f0851b87095\") " pod="openstack/memcached-0" Dec 11 15:06:51 crc kubenswrapper[5050]: I1211 15:06:51.059405 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvqzd\" (UniqueName: \"kubernetes.io/projected/4dfc7ab9-cde7-4f8e-bdc0-4f0851b87095-kube-api-access-hvqzd\") pod \"memcached-0\" (UID: \"4dfc7ab9-cde7-4f8e-bdc0-4f0851b87095\") " pod="openstack/memcached-0" Dec 11 15:06:51 crc kubenswrapper[5050]: I1211 15:06:51.166956 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4dfc7ab9-cde7-4f8e-bdc0-4f0851b87095-config-data\") pod \"memcached-0\" (UID: \"4dfc7ab9-cde7-4f8e-bdc0-4f0851b87095\") " pod="openstack/memcached-0" Dec 11 15:06:51 crc kubenswrapper[5050]: I1211 15:06:51.167398 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hvqzd\" (UniqueName: \"kubernetes.io/projected/4dfc7ab9-cde7-4f8e-bdc0-4f0851b87095-kube-api-access-hvqzd\") pod \"memcached-0\" (UID: \"4dfc7ab9-cde7-4f8e-bdc0-4f0851b87095\") " pod="openstack/memcached-0" Dec 11 15:06:51 crc kubenswrapper[5050]: I1211 15:06:51.167449 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/4dfc7ab9-cde7-4f8e-bdc0-4f0851b87095-kolla-config\") pod \"memcached-0\" (UID: \"4dfc7ab9-cde7-4f8e-bdc0-4f0851b87095\") " pod="openstack/memcached-0" Dec 11 15:06:51 crc kubenswrapper[5050]: I1211 15:06:51.168089 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" 
(UniqueName: \"kubernetes.io/configmap/4dfc7ab9-cde7-4f8e-bdc0-4f0851b87095-config-data\") pod \"memcached-0\" (UID: \"4dfc7ab9-cde7-4f8e-bdc0-4f0851b87095\") " pod="openstack/memcached-0" Dec 11 15:06:51 crc kubenswrapper[5050]: I1211 15:06:51.168300 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/4dfc7ab9-cde7-4f8e-bdc0-4f0851b87095-kolla-config\") pod \"memcached-0\" (UID: \"4dfc7ab9-cde7-4f8e-bdc0-4f0851b87095\") " pod="openstack/memcached-0" Dec 11 15:06:51 crc kubenswrapper[5050]: I1211 15:06:51.194546 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hvqzd\" (UniqueName: \"kubernetes.io/projected/4dfc7ab9-cde7-4f8e-bdc0-4f0851b87095-kube-api-access-hvqzd\") pod \"memcached-0\" (UID: \"4dfc7ab9-cde7-4f8e-bdc0-4f0851b87095\") " pod="openstack/memcached-0" Dec 11 15:06:51 crc kubenswrapper[5050]: I1211 15:06:51.281333 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Dec 11 15:06:51 crc kubenswrapper[5050]: I1211 15:06:51.305272 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Dec 11 15:06:51 crc kubenswrapper[5050]: W1211 15:06:51.473541 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod14ad1594_090d_4024_a999_9ffe77ce58d8.slice/crio-68e5d658090ca3abedab46a87ac664ba369e8442eda59f830595d81bdf23581f WatchSource:0}: Error finding container 68e5d658090ca3abedab46a87ac664ba369e8442eda59f830595d81bdf23581f: Status 404 returned error can't find the container with id 68e5d658090ca3abedab46a87ac664ba369e8442eda59f830595d81bdf23581f Dec 11 15:06:51 crc kubenswrapper[5050]: I1211 15:06:51.924851 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Dec 11 15:06:51 crc kubenswrapper[5050]: I1211 15:06:51.928262 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"14ad1594-090d-4024-a999-9ffe77ce58d8","Type":"ContainerStarted","Data":"68e5d658090ca3abedab46a87ac664ba369e8442eda59f830595d81bdf23581f"} Dec 11 15:06:51 crc kubenswrapper[5050]: I1211 15:06:51.934393 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d79f765b5-mr6dw" event={"ID":"786d8f03-bb4c-47b7-a94a-b86c9bc07c6d","Type":"ContainerStarted","Data":"094234d197fae2ac8e7fdc4a50ba306b4ebee05f718803e46ae3580eaf17f016"} Dec 11 15:06:51 crc kubenswrapper[5050]: I1211 15:06:51.934552 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5d79f765b5-mr6dw" Dec 11 15:06:51 crc kubenswrapper[5050]: I1211 15:06:51.942433 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-95587bc99-5mqnk" event={"ID":"8c8f2bd2-9e37-4146-a633-fa2e990a6f90","Type":"ContainerStarted","Data":"097293ae6df2d257902fa08d8f2ac799ae401990ee6d882750428e99dba458ee"} Dec 11 15:06:51 crc kubenswrapper[5050]: I1211 15:06:51.942730 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-95587bc99-5mqnk" Dec 11 15:06:51 crc kubenswrapper[5050]: I1211 15:06:51.971129 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5d79f765b5-mr6dw" podStartSLOduration=3.9711088439999997 podStartE2EDuration="3.971108844s" podCreationTimestamp="2025-12-11 15:06:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 15:06:51.968651558 +0000 UTC m=+4702.812374144" watchObservedRunningTime="2025-12-11 15:06:51.971108844 +0000 UTC m=+4702.814831430" Dec 11 15:06:52 crc kubenswrapper[5050]: I1211 15:06:52.003215 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-95587bc99-5mqnk" podStartSLOduration=4.003192066 podStartE2EDuration="4.003192066s" podCreationTimestamp="2025-12-11 15:06:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 15:06:51.996989189 +0000 UTC m=+4702.840711785" watchObservedRunningTime="2025-12-11 15:06:52.003192066 +0000 UTC m=+4702.846914652" Dec 11 15:06:52 crc kubenswrapper[5050]: I1211 15:06:52.043879 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Dec 11 15:06:52 crc kubenswrapper[5050]: I1211 15:06:52.045691 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Dec 11 15:06:52 crc kubenswrapper[5050]: I1211 15:06:52.053292 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Dec 11 15:06:52 crc kubenswrapper[5050]: I1211 15:06:52.053401 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-vwxp8" Dec 11 15:06:52 crc kubenswrapper[5050]: I1211 15:06:52.053293 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Dec 11 15:06:52 crc kubenswrapper[5050]: I1211 15:06:52.053548 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Dec 11 15:06:52 crc kubenswrapper[5050]: I1211 15:06:52.059720 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Dec 11 15:06:52 crc kubenswrapper[5050]: I1211 15:06:52.185904 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/531337b1-3bd0-448d-a561-0b19b40214a6-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"531337b1-3bd0-448d-a561-0b19b40214a6\") " pod="openstack/openstack-cell1-galera-0" Dec 11 15:06:52 crc kubenswrapper[5050]: I1211 15:06:52.186488 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/531337b1-3bd0-448d-a561-0b19b40214a6-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"531337b1-3bd0-448d-a561-0b19b40214a6\") " pod="openstack/openstack-cell1-galera-0" Dec 11 15:06:52 crc kubenswrapper[5050]: I1211 15:06:52.186540 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/531337b1-3bd0-448d-a561-0b19b40214a6-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"531337b1-3bd0-448d-a561-0b19b40214a6\") " pod="openstack/openstack-cell1-galera-0" Dec 11 15:06:52 crc kubenswrapper[5050]: I1211 15:06:52.186642 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/531337b1-3bd0-448d-a561-0b19b40214a6-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: 
\"531337b1-3bd0-448d-a561-0b19b40214a6\") " pod="openstack/openstack-cell1-galera-0" Dec 11 15:06:52 crc kubenswrapper[5050]: I1211 15:06:52.186676 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/531337b1-3bd0-448d-a561-0b19b40214a6-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"531337b1-3bd0-448d-a561-0b19b40214a6\") " pod="openstack/openstack-cell1-galera-0" Dec 11 15:06:52 crc kubenswrapper[5050]: I1211 15:06:52.186734 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-b5a5fc14-85c0-4444-a899-4a5e946b4062\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b5a5fc14-85c0-4444-a899-4a5e946b4062\") pod \"openstack-cell1-galera-0\" (UID: \"531337b1-3bd0-448d-a561-0b19b40214a6\") " pod="openstack/openstack-cell1-galera-0" Dec 11 15:06:52 crc kubenswrapper[5050]: I1211 15:06:52.186768 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/531337b1-3bd0-448d-a561-0b19b40214a6-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"531337b1-3bd0-448d-a561-0b19b40214a6\") " pod="openstack/openstack-cell1-galera-0" Dec 11 15:06:52 crc kubenswrapper[5050]: I1211 15:06:52.186818 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7rn8\" (UniqueName: \"kubernetes.io/projected/531337b1-3bd0-448d-a561-0b19b40214a6-kube-api-access-n7rn8\") pod \"openstack-cell1-galera-0\" (UID: \"531337b1-3bd0-448d-a561-0b19b40214a6\") " pod="openstack/openstack-cell1-galera-0" Dec 11 15:06:52 crc kubenswrapper[5050]: I1211 15:06:52.288178 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/531337b1-3bd0-448d-a561-0b19b40214a6-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"531337b1-3bd0-448d-a561-0b19b40214a6\") " pod="openstack/openstack-cell1-galera-0" Dec 11 15:06:52 crc kubenswrapper[5050]: I1211 15:06:52.288229 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-b5a5fc14-85c0-4444-a899-4a5e946b4062\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b5a5fc14-85c0-4444-a899-4a5e946b4062\") pod \"openstack-cell1-galera-0\" (UID: \"531337b1-3bd0-448d-a561-0b19b40214a6\") " pod="openstack/openstack-cell1-galera-0" Dec 11 15:06:52 crc kubenswrapper[5050]: I1211 15:06:52.288274 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/531337b1-3bd0-448d-a561-0b19b40214a6-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"531337b1-3bd0-448d-a561-0b19b40214a6\") " pod="openstack/openstack-cell1-galera-0" Dec 11 15:06:52 crc kubenswrapper[5050]: I1211 15:06:52.288316 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n7rn8\" (UniqueName: \"kubernetes.io/projected/531337b1-3bd0-448d-a561-0b19b40214a6-kube-api-access-n7rn8\") pod \"openstack-cell1-galera-0\" (UID: \"531337b1-3bd0-448d-a561-0b19b40214a6\") " pod="openstack/openstack-cell1-galera-0" Dec 11 15:06:52 crc kubenswrapper[5050]: I1211 15:06:52.288370 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/531337b1-3bd0-448d-a561-0b19b40214a6-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"531337b1-3bd0-448d-a561-0b19b40214a6\") " pod="openstack/openstack-cell1-galera-0" Dec 11 15:06:52 crc kubenswrapper[5050]: I1211 15:06:52.288435 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/531337b1-3bd0-448d-a561-0b19b40214a6-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"531337b1-3bd0-448d-a561-0b19b40214a6\") " pod="openstack/openstack-cell1-galera-0" Dec 11 15:06:52 crc kubenswrapper[5050]: I1211 15:06:52.288456 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/531337b1-3bd0-448d-a561-0b19b40214a6-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"531337b1-3bd0-448d-a561-0b19b40214a6\") " pod="openstack/openstack-cell1-galera-0" Dec 11 15:06:52 crc kubenswrapper[5050]: I1211 15:06:52.288511 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/531337b1-3bd0-448d-a561-0b19b40214a6-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"531337b1-3bd0-448d-a561-0b19b40214a6\") " pod="openstack/openstack-cell1-galera-0" Dec 11 15:06:52 crc kubenswrapper[5050]: I1211 15:06:52.289665 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/531337b1-3bd0-448d-a561-0b19b40214a6-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"531337b1-3bd0-448d-a561-0b19b40214a6\") " pod="openstack/openstack-cell1-galera-0" Dec 11 15:06:52 crc kubenswrapper[5050]: I1211 15:06:52.289946 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/531337b1-3bd0-448d-a561-0b19b40214a6-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"531337b1-3bd0-448d-a561-0b19b40214a6\") " pod="openstack/openstack-cell1-galera-0" Dec 11 15:06:52 crc kubenswrapper[5050]: I1211 15:06:52.290391 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/531337b1-3bd0-448d-a561-0b19b40214a6-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"531337b1-3bd0-448d-a561-0b19b40214a6\") " pod="openstack/openstack-cell1-galera-0" Dec 11 15:06:52 crc kubenswrapper[5050]: I1211 15:06:52.290988 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/531337b1-3bd0-448d-a561-0b19b40214a6-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"531337b1-3bd0-448d-a561-0b19b40214a6\") " pod="openstack/openstack-cell1-galera-0" Dec 11 15:06:52 crc kubenswrapper[5050]: I1211 15:06:52.292812 5050 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Dec 11 15:06:52 crc kubenswrapper[5050]: I1211 15:06:52.292842 5050 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-b5a5fc14-85c0-4444-a899-4a5e946b4062\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b5a5fc14-85c0-4444-a899-4a5e946b4062\") pod \"openstack-cell1-galera-0\" (UID: \"531337b1-3bd0-448d-a561-0b19b40214a6\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/6530921fff9390bc1020442267df86b0ed3cab55fbbea3ad5748b5115b0f82c0/globalmount\"" pod="openstack/openstack-cell1-galera-0" Dec 11 15:06:52 crc kubenswrapper[5050]: I1211 15:06:52.294842 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/531337b1-3bd0-448d-a561-0b19b40214a6-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"531337b1-3bd0-448d-a561-0b19b40214a6\") " pod="openstack/openstack-cell1-galera-0" Dec 11 15:06:52 crc kubenswrapper[5050]: I1211 15:06:52.301088 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/531337b1-3bd0-448d-a561-0b19b40214a6-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"531337b1-3bd0-448d-a561-0b19b40214a6\") " pod="openstack/openstack-cell1-galera-0" Dec 11 15:06:52 crc kubenswrapper[5050]: I1211 15:06:52.309765 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n7rn8\" (UniqueName: \"kubernetes.io/projected/531337b1-3bd0-448d-a561-0b19b40214a6-kube-api-access-n7rn8\") pod \"openstack-cell1-galera-0\" (UID: \"531337b1-3bd0-448d-a561-0b19b40214a6\") " pod="openstack/openstack-cell1-galera-0" Dec 11 15:06:52 crc kubenswrapper[5050]: I1211 15:06:52.333410 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-b5a5fc14-85c0-4444-a899-4a5e946b4062\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b5a5fc14-85c0-4444-a899-4a5e946b4062\") pod \"openstack-cell1-galera-0\" (UID: \"531337b1-3bd0-448d-a561-0b19b40214a6\") " pod="openstack/openstack-cell1-galera-0" Dec 11 15:06:52 crc kubenswrapper[5050]: I1211 15:06:52.365642 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Dec 11 15:06:52 crc kubenswrapper[5050]: I1211 15:06:52.607405 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Dec 11 15:06:52 crc kubenswrapper[5050]: W1211 15:06:52.615277 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod531337b1_3bd0_448d_a561_0b19b40214a6.slice/crio-722454ca8720d94d0f48ffd19b699f73734f8b1630ded00248fc58e8652d713c WatchSource:0}: Error finding container 722454ca8720d94d0f48ffd19b699f73734f8b1630ded00248fc58e8652d713c: Status 404 returned error can't find the container with id 722454ca8720d94d0f48ffd19b699f73734f8b1630ded00248fc58e8652d713c Dec 11 15:06:52 crc kubenswrapper[5050]: I1211 15:06:52.951649 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"14ad1594-090d-4024-a999-9ffe77ce58d8","Type":"ContainerStarted","Data":"9d271524a9400fae00d23132b76b4408f1d61eb343960f15cfb23f4065953029"} Dec 11 15:06:52 crc kubenswrapper[5050]: I1211 15:06:52.954282 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"63be72e0-3588-4234-94d0-80c42d74aff6","Type":"ContainerStarted","Data":"aeafc8c5e7c585bd94c8da4dff5b0790ec5875c06a752cafff40b2f26096bfb4"} Dec 11 15:06:52 crc kubenswrapper[5050]: I1211 15:06:52.956606 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"609782a3-064e-4127-8ea6-080e428bea44","Type":"ContainerStarted","Data":"360b96ea491a2086bdb37ea3509eccd0761e2c6a7998f159dbfb22917007fecd"} Dec 11 15:06:52 crc kubenswrapper[5050]: I1211 15:06:52.958513 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"4dfc7ab9-cde7-4f8e-bdc0-4f0851b87095","Type":"ContainerStarted","Data":"b6c6f2cad18254966c15af5e79649c4335f1144134acca6c8cee1164206f49ec"} Dec 11 15:06:52 crc kubenswrapper[5050]: I1211 15:06:52.958561 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"4dfc7ab9-cde7-4f8e-bdc0-4f0851b87095","Type":"ContainerStarted","Data":"c487f863c7de4d4d6fd7a0c63bdbce4a877734d6031153fa95ebd701db2f47a8"} Dec 11 15:06:52 crc kubenswrapper[5050]: I1211 15:06:52.960172 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"531337b1-3bd0-448d-a561-0b19b40214a6","Type":"ContainerStarted","Data":"ff8f5cbfa8c2205bf2fafd7272534da396c6f33025675c1ccd9548584b0997ba"} Dec 11 15:06:52 crc kubenswrapper[5050]: I1211 15:06:52.960292 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"531337b1-3bd0-448d-a561-0b19b40214a6","Type":"ContainerStarted","Data":"722454ca8720d94d0f48ffd19b699f73734f8b1630ded00248fc58e8652d713c"} Dec 11 15:06:53 crc kubenswrapper[5050]: I1211 15:06:53.035459 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=3.035419741 podStartE2EDuration="3.035419741s" podCreationTimestamp="2025-12-11 15:06:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 15:06:53.023021758 +0000 UTC m=+4703.866744384" watchObservedRunningTime="2025-12-11 15:06:53.035419741 +0000 UTC m=+4703.879142337" Dec 11 15:06:53 crc kubenswrapper[5050]: I1211 15:06:53.968374 5050 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openstack/memcached-0" Dec 11 15:06:55 crc kubenswrapper[5050]: I1211 15:06:55.986554 5050 generic.go:334] "Generic (PLEG): container finished" podID="14ad1594-090d-4024-a999-9ffe77ce58d8" containerID="9d271524a9400fae00d23132b76b4408f1d61eb343960f15cfb23f4065953029" exitCode=0 Dec 11 15:06:55 crc kubenswrapper[5050]: I1211 15:06:55.986649 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"14ad1594-090d-4024-a999-9ffe77ce58d8","Type":"ContainerDied","Data":"9d271524a9400fae00d23132b76b4408f1d61eb343960f15cfb23f4065953029"} Dec 11 15:06:56 crc kubenswrapper[5050]: I1211 15:06:56.546477 5050 scope.go:117] "RemoveContainer" containerID="b585bea20b0bb52e58642ee9f3e33b5b182a66a544d35daafc0202a53b35a810" Dec 11 15:06:56 crc kubenswrapper[5050]: E1211 15:06:56.547590 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 15:06:57 crc kubenswrapper[5050]: I1211 15:06:57.000881 5050 generic.go:334] "Generic (PLEG): container finished" podID="531337b1-3bd0-448d-a561-0b19b40214a6" containerID="ff8f5cbfa8c2205bf2fafd7272534da396c6f33025675c1ccd9548584b0997ba" exitCode=0 Dec 11 15:06:57 crc kubenswrapper[5050]: I1211 15:06:57.000974 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"531337b1-3bd0-448d-a561-0b19b40214a6","Type":"ContainerDied","Data":"ff8f5cbfa8c2205bf2fafd7272534da396c6f33025675c1ccd9548584b0997ba"} Dec 11 15:06:57 crc kubenswrapper[5050]: I1211 15:06:57.004623 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"14ad1594-090d-4024-a999-9ffe77ce58d8","Type":"ContainerStarted","Data":"11ec72e9f06368997b90f40f70878174683db5a652ebb2eb2c432e34abde950e"} Dec 11 15:06:57 crc kubenswrapper[5050]: I1211 15:06:57.064925 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=8.06489691 podStartE2EDuration="8.06489691s" podCreationTimestamp="2025-12-11 15:06:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 15:06:57.057273685 +0000 UTC m=+4707.900996291" watchObservedRunningTime="2025-12-11 15:06:57.06489691 +0000 UTC m=+4707.908619506" Dec 11 15:06:58 crc kubenswrapper[5050]: I1211 15:06:58.019069 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"531337b1-3bd0-448d-a561-0b19b40214a6","Type":"ContainerStarted","Data":"b3c09ac448995d08d77f55d4841bcd031d0b417debdcd8dc1095c80925715dcb"} Dec 11 15:06:58 crc kubenswrapper[5050]: I1211 15:06:58.053415 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=7.053382749 podStartE2EDuration="7.053382749s" podCreationTimestamp="2025-12-11 15:06:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 15:06:58.046237128 +0000 UTC m=+4708.889959754" watchObservedRunningTime="2025-12-11 
15:06:58.053382749 +0000 UTC m=+4708.897105375" Dec 11 15:06:58 crc kubenswrapper[5050]: I1211 15:06:58.445544 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-95587bc99-5mqnk" Dec 11 15:06:59 crc kubenswrapper[5050]: I1211 15:06:59.029495 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5d79f765b5-mr6dw" Dec 11 15:06:59 crc kubenswrapper[5050]: I1211 15:06:59.087882 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-95587bc99-5mqnk"] Dec 11 15:06:59 crc kubenswrapper[5050]: I1211 15:06:59.088118 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-95587bc99-5mqnk" podUID="8c8f2bd2-9e37-4146-a633-fa2e990a6f90" containerName="dnsmasq-dns" containerID="cri-o://097293ae6df2d257902fa08d8f2ac799ae401990ee6d882750428e99dba458ee" gracePeriod=10 Dec 11 15:06:59 crc kubenswrapper[5050]: I1211 15:06:59.872624 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-95587bc99-5mqnk" Dec 11 15:06:59 crc kubenswrapper[5050]: I1211 15:06:59.951023 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8c8f2bd2-9e37-4146-a633-fa2e990a6f90-dns-svc\") pod \"8c8f2bd2-9e37-4146-a633-fa2e990a6f90\" (UID: \"8c8f2bd2-9e37-4146-a633-fa2e990a6f90\") " Dec 11 15:06:59 crc kubenswrapper[5050]: I1211 15:06:59.951107 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c8f2bd2-9e37-4146-a633-fa2e990a6f90-config\") pod \"8c8f2bd2-9e37-4146-a633-fa2e990a6f90\" (UID: \"8c8f2bd2-9e37-4146-a633-fa2e990a6f90\") " Dec 11 15:06:59 crc kubenswrapper[5050]: I1211 15:06:59.951148 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m5tts\" (UniqueName: \"kubernetes.io/projected/8c8f2bd2-9e37-4146-a633-fa2e990a6f90-kube-api-access-m5tts\") pod \"8c8f2bd2-9e37-4146-a633-fa2e990a6f90\" (UID: \"8c8f2bd2-9e37-4146-a633-fa2e990a6f90\") " Dec 11 15:06:59 crc kubenswrapper[5050]: I1211 15:06:59.958780 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c8f2bd2-9e37-4146-a633-fa2e990a6f90-kube-api-access-m5tts" (OuterVolumeSpecName: "kube-api-access-m5tts") pod "8c8f2bd2-9e37-4146-a633-fa2e990a6f90" (UID: "8c8f2bd2-9e37-4146-a633-fa2e990a6f90"). InnerVolumeSpecName "kube-api-access-m5tts". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 15:06:59 crc kubenswrapper[5050]: I1211 15:06:59.988679 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c8f2bd2-9e37-4146-a633-fa2e990a6f90-config" (OuterVolumeSpecName: "config") pod "8c8f2bd2-9e37-4146-a633-fa2e990a6f90" (UID: "8c8f2bd2-9e37-4146-a633-fa2e990a6f90"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 15:06:59 crc kubenswrapper[5050]: I1211 15:06:59.988886 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c8f2bd2-9e37-4146-a633-fa2e990a6f90-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8c8f2bd2-9e37-4146-a633-fa2e990a6f90" (UID: "8c8f2bd2-9e37-4146-a633-fa2e990a6f90"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 15:07:00 crc kubenswrapper[5050]: I1211 15:07:00.038251 5050 generic.go:334] "Generic (PLEG): container finished" podID="8c8f2bd2-9e37-4146-a633-fa2e990a6f90" containerID="097293ae6df2d257902fa08d8f2ac799ae401990ee6d882750428e99dba458ee" exitCode=0 Dec 11 15:07:00 crc kubenswrapper[5050]: I1211 15:07:00.038298 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-95587bc99-5mqnk" event={"ID":"8c8f2bd2-9e37-4146-a633-fa2e990a6f90","Type":"ContainerDied","Data":"097293ae6df2d257902fa08d8f2ac799ae401990ee6d882750428e99dba458ee"} Dec 11 15:07:00 crc kubenswrapper[5050]: I1211 15:07:00.038327 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-95587bc99-5mqnk" event={"ID":"8c8f2bd2-9e37-4146-a633-fa2e990a6f90","Type":"ContainerDied","Data":"3e3dd5abadbafea7e5e6dbdb3bc513bb57a5e6f595e57a999b01dbd5c31cfc6e"} Dec 11 15:07:00 crc kubenswrapper[5050]: I1211 15:07:00.038324 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-95587bc99-5mqnk" Dec 11 15:07:00 crc kubenswrapper[5050]: I1211 15:07:00.038361 5050 scope.go:117] "RemoveContainer" containerID="097293ae6df2d257902fa08d8f2ac799ae401990ee6d882750428e99dba458ee" Dec 11 15:07:00 crc kubenswrapper[5050]: I1211 15:07:00.053306 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m5tts\" (UniqueName: \"kubernetes.io/projected/8c8f2bd2-9e37-4146-a633-fa2e990a6f90-kube-api-access-m5tts\") on node \"crc\" DevicePath \"\"" Dec 11 15:07:00 crc kubenswrapper[5050]: I1211 15:07:00.053334 5050 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8c8f2bd2-9e37-4146-a633-fa2e990a6f90-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 11 15:07:00 crc kubenswrapper[5050]: I1211 15:07:00.053343 5050 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c8f2bd2-9e37-4146-a633-fa2e990a6f90-config\") on node \"crc\" DevicePath \"\"" Dec 11 15:07:00 crc kubenswrapper[5050]: I1211 15:07:00.061691 5050 scope.go:117] "RemoveContainer" containerID="6a243224ce900f234ad231a3666c32c5fe50719cb972371fa564e85261c0ea29" Dec 11 15:07:00 crc kubenswrapper[5050]: I1211 15:07:00.069341 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-95587bc99-5mqnk"] Dec 11 15:07:00 crc kubenswrapper[5050]: I1211 15:07:00.075444 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-95587bc99-5mqnk"] Dec 11 15:07:00 crc kubenswrapper[5050]: I1211 15:07:00.094513 5050 scope.go:117] "RemoveContainer" containerID="097293ae6df2d257902fa08d8f2ac799ae401990ee6d882750428e99dba458ee" Dec 11 15:07:00 crc kubenswrapper[5050]: E1211 15:07:00.094975 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"097293ae6df2d257902fa08d8f2ac799ae401990ee6d882750428e99dba458ee\": container with ID starting with 097293ae6df2d257902fa08d8f2ac799ae401990ee6d882750428e99dba458ee not found: ID does not exist" containerID="097293ae6df2d257902fa08d8f2ac799ae401990ee6d882750428e99dba458ee" Dec 11 15:07:00 crc kubenswrapper[5050]: I1211 15:07:00.095007 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"097293ae6df2d257902fa08d8f2ac799ae401990ee6d882750428e99dba458ee"} err="failed to get container status 
\"097293ae6df2d257902fa08d8f2ac799ae401990ee6d882750428e99dba458ee\": rpc error: code = NotFound desc = could not find container \"097293ae6df2d257902fa08d8f2ac799ae401990ee6d882750428e99dba458ee\": container with ID starting with 097293ae6df2d257902fa08d8f2ac799ae401990ee6d882750428e99dba458ee not found: ID does not exist" Dec 11 15:07:00 crc kubenswrapper[5050]: I1211 15:07:00.095052 5050 scope.go:117] "RemoveContainer" containerID="6a243224ce900f234ad231a3666c32c5fe50719cb972371fa564e85261c0ea29" Dec 11 15:07:00 crc kubenswrapper[5050]: E1211 15:07:00.095522 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6a243224ce900f234ad231a3666c32c5fe50719cb972371fa564e85261c0ea29\": container with ID starting with 6a243224ce900f234ad231a3666c32c5fe50719cb972371fa564e85261c0ea29 not found: ID does not exist" containerID="6a243224ce900f234ad231a3666c32c5fe50719cb972371fa564e85261c0ea29" Dec 11 15:07:00 crc kubenswrapper[5050]: I1211 15:07:00.095563 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a243224ce900f234ad231a3666c32c5fe50719cb972371fa564e85261c0ea29"} err="failed to get container status \"6a243224ce900f234ad231a3666c32c5fe50719cb972371fa564e85261c0ea29\": rpc error: code = NotFound desc = could not find container \"6a243224ce900f234ad231a3666c32c5fe50719cb972371fa564e85261c0ea29\": container with ID starting with 6a243224ce900f234ad231a3666c32c5fe50719cb972371fa564e85261c0ea29 not found: ID does not exist" Dec 11 15:07:01 crc kubenswrapper[5050]: I1211 15:07:01.019209 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Dec 11 15:07:01 crc kubenswrapper[5050]: I1211 15:07:01.020272 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Dec 11 15:07:01 crc kubenswrapper[5050]: I1211 15:07:01.283300 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Dec 11 15:07:01 crc kubenswrapper[5050]: E1211 15:07:01.393964 5050 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.147:40120->38.102.83.147:43539: write tcp 38.102.83.147:40120->38.102.83.147:43539: write: broken pipe Dec 11 15:07:01 crc kubenswrapper[5050]: I1211 15:07:01.557395 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c8f2bd2-9e37-4146-a633-fa2e990a6f90" path="/var/lib/kubelet/pods/8c8f2bd2-9e37-4146-a633-fa2e990a6f90/volumes" Dec 11 15:07:02 crc kubenswrapper[5050]: I1211 15:07:02.366813 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Dec 11 15:07:02 crc kubenswrapper[5050]: I1211 15:07:02.367070 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Dec 11 15:07:03 crc kubenswrapper[5050]: I1211 15:07:03.521217 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Dec 11 15:07:03 crc kubenswrapper[5050]: I1211 15:07:03.611709 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Dec 11 15:07:04 crc kubenswrapper[5050]: I1211 15:07:04.651780 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Dec 11 15:07:04 crc kubenswrapper[5050]: I1211 15:07:04.729537 5050 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Dec 11 15:07:09 crc kubenswrapper[5050]: I1211 15:07:09.550839 5050 scope.go:117] "RemoveContainer" containerID="b585bea20b0bb52e58642ee9f3e33b5b182a66a544d35daafc0202a53b35a810" Dec 11 15:07:09 crc kubenswrapper[5050]: E1211 15:07:09.551486 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 15:07:23 crc kubenswrapper[5050]: I1211 15:07:23.547606 5050 scope.go:117] "RemoveContainer" containerID="b585bea20b0bb52e58642ee9f3e33b5b182a66a544d35daafc0202a53b35a810" Dec 11 15:07:23 crc kubenswrapper[5050]: E1211 15:07:23.549370 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 15:07:24 crc kubenswrapper[5050]: I1211 15:07:24.297683 5050 generic.go:334] "Generic (PLEG): container finished" podID="63be72e0-3588-4234-94d0-80c42d74aff6" containerID="aeafc8c5e7c585bd94c8da4dff5b0790ec5875c06a752cafff40b2f26096bfb4" exitCode=0 Dec 11 15:07:24 crc kubenswrapper[5050]: I1211 15:07:24.297777 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"63be72e0-3588-4234-94d0-80c42d74aff6","Type":"ContainerDied","Data":"aeafc8c5e7c585bd94c8da4dff5b0790ec5875c06a752cafff40b2f26096bfb4"} Dec 11 15:07:25 crc kubenswrapper[5050]: I1211 15:07:25.310671 5050 generic.go:334] "Generic (PLEG): container finished" podID="609782a3-064e-4127-8ea6-080e428bea44" containerID="360b96ea491a2086bdb37ea3509eccd0761e2c6a7998f159dbfb22917007fecd" exitCode=0 Dec 11 15:07:25 crc kubenswrapper[5050]: I1211 15:07:25.310744 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"609782a3-064e-4127-8ea6-080e428bea44","Type":"ContainerDied","Data":"360b96ea491a2086bdb37ea3509eccd0761e2c6a7998f159dbfb22917007fecd"} Dec 11 15:07:25 crc kubenswrapper[5050]: I1211 15:07:25.314819 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"63be72e0-3588-4234-94d0-80c42d74aff6","Type":"ContainerStarted","Data":"bbf93b7abe451a02f03049f7c316488cbbff79119cdb784a50af637581ed63d5"} Dec 11 15:07:25 crc kubenswrapper[5050]: I1211 15:07:25.315161 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Dec 11 15:07:25 crc kubenswrapper[5050]: I1211 15:07:25.410980 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=37.410912829 podStartE2EDuration="37.410912829s" podCreationTimestamp="2025-12-11 15:06:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 15:07:25.403628794 +0000 UTC m=+4736.247351380" watchObservedRunningTime="2025-12-11 
15:07:25.410912829 +0000 UTC m=+4736.254635415" Dec 11 15:07:26 crc kubenswrapper[5050]: I1211 15:07:26.327850 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"609782a3-064e-4127-8ea6-080e428bea44","Type":"ContainerStarted","Data":"bb69c60a4ef7542878a4375ec49350949af8cc72d6005530a6d554d03c6bc135"} Dec 11 15:07:26 crc kubenswrapper[5050]: I1211 15:07:26.328820 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Dec 11 15:07:38 crc kubenswrapper[5050]: I1211 15:07:38.546225 5050 scope.go:117] "RemoveContainer" containerID="b585bea20b0bb52e58642ee9f3e33b5b182a66a544d35daafc0202a53b35a810" Dec 11 15:07:38 crc kubenswrapper[5050]: E1211 15:07:38.547741 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 15:07:40 crc kubenswrapper[5050]: I1211 15:07:40.218220 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Dec 11 15:07:40 crc kubenswrapper[5050]: I1211 15:07:40.248346 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Dec 11 15:07:40 crc kubenswrapper[5050]: I1211 15:07:40.252033 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=52.251994312 podStartE2EDuration="52.251994312s" podCreationTimestamp="2025-12-11 15:06:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 15:07:26.366070604 +0000 UTC m=+4737.209793190" watchObservedRunningTime="2025-12-11 15:07:40.251994312 +0000 UTC m=+4751.095716898" Dec 11 15:07:45 crc kubenswrapper[5050]: I1211 15:07:45.166185 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-699964fbc-7gqfj"] Dec 11 15:07:45 crc kubenswrapper[5050]: E1211 15:07:45.167096 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c8f2bd2-9e37-4146-a633-fa2e990a6f90" containerName="init" Dec 11 15:07:45 crc kubenswrapper[5050]: I1211 15:07:45.167113 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c8f2bd2-9e37-4146-a633-fa2e990a6f90" containerName="init" Dec 11 15:07:45 crc kubenswrapper[5050]: E1211 15:07:45.167131 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c8f2bd2-9e37-4146-a633-fa2e990a6f90" containerName="dnsmasq-dns" Dec 11 15:07:45 crc kubenswrapper[5050]: I1211 15:07:45.167139 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c8f2bd2-9e37-4146-a633-fa2e990a6f90" containerName="dnsmasq-dns" Dec 11 15:07:45 crc kubenswrapper[5050]: I1211 15:07:45.167343 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c8f2bd2-9e37-4146-a633-fa2e990a6f90" containerName="dnsmasq-dns" Dec 11 15:07:45 crc kubenswrapper[5050]: I1211 15:07:45.168406 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-699964fbc-7gqfj" Dec 11 15:07:45 crc kubenswrapper[5050]: I1211 15:07:45.186369 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-699964fbc-7gqfj"] Dec 11 15:07:45 crc kubenswrapper[5050]: I1211 15:07:45.271074 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/51ef9026-602b-4f6a-98f6-9d0f065f6c45-dns-svc\") pod \"dnsmasq-dns-699964fbc-7gqfj\" (UID: \"51ef9026-602b-4f6a-98f6-9d0f065f6c45\") " pod="openstack/dnsmasq-dns-699964fbc-7gqfj" Dec 11 15:07:45 crc kubenswrapper[5050]: I1211 15:07:45.271159 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/51ef9026-602b-4f6a-98f6-9d0f065f6c45-config\") pod \"dnsmasq-dns-699964fbc-7gqfj\" (UID: \"51ef9026-602b-4f6a-98f6-9d0f065f6c45\") " pod="openstack/dnsmasq-dns-699964fbc-7gqfj" Dec 11 15:07:45 crc kubenswrapper[5050]: I1211 15:07:45.271629 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rnqhs\" (UniqueName: \"kubernetes.io/projected/51ef9026-602b-4f6a-98f6-9d0f065f6c45-kube-api-access-rnqhs\") pod \"dnsmasq-dns-699964fbc-7gqfj\" (UID: \"51ef9026-602b-4f6a-98f6-9d0f065f6c45\") " pod="openstack/dnsmasq-dns-699964fbc-7gqfj" Dec 11 15:07:45 crc kubenswrapper[5050]: I1211 15:07:45.372992 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rnqhs\" (UniqueName: \"kubernetes.io/projected/51ef9026-602b-4f6a-98f6-9d0f065f6c45-kube-api-access-rnqhs\") pod \"dnsmasq-dns-699964fbc-7gqfj\" (UID: \"51ef9026-602b-4f6a-98f6-9d0f065f6c45\") " pod="openstack/dnsmasq-dns-699964fbc-7gqfj" Dec 11 15:07:45 crc kubenswrapper[5050]: I1211 15:07:45.373115 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/51ef9026-602b-4f6a-98f6-9d0f065f6c45-dns-svc\") pod \"dnsmasq-dns-699964fbc-7gqfj\" (UID: \"51ef9026-602b-4f6a-98f6-9d0f065f6c45\") " pod="openstack/dnsmasq-dns-699964fbc-7gqfj" Dec 11 15:07:45 crc kubenswrapper[5050]: I1211 15:07:45.373147 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/51ef9026-602b-4f6a-98f6-9d0f065f6c45-config\") pod \"dnsmasq-dns-699964fbc-7gqfj\" (UID: \"51ef9026-602b-4f6a-98f6-9d0f065f6c45\") " pod="openstack/dnsmasq-dns-699964fbc-7gqfj" Dec 11 15:07:45 crc kubenswrapper[5050]: I1211 15:07:45.374395 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/51ef9026-602b-4f6a-98f6-9d0f065f6c45-config\") pod \"dnsmasq-dns-699964fbc-7gqfj\" (UID: \"51ef9026-602b-4f6a-98f6-9d0f065f6c45\") " pod="openstack/dnsmasq-dns-699964fbc-7gqfj" Dec 11 15:07:45 crc kubenswrapper[5050]: I1211 15:07:45.374694 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/51ef9026-602b-4f6a-98f6-9d0f065f6c45-dns-svc\") pod \"dnsmasq-dns-699964fbc-7gqfj\" (UID: \"51ef9026-602b-4f6a-98f6-9d0f065f6c45\") " pod="openstack/dnsmasq-dns-699964fbc-7gqfj" Dec 11 15:07:45 crc kubenswrapper[5050]: I1211 15:07:45.397970 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rnqhs\" (UniqueName: 
\"kubernetes.io/projected/51ef9026-602b-4f6a-98f6-9d0f065f6c45-kube-api-access-rnqhs\") pod \"dnsmasq-dns-699964fbc-7gqfj\" (UID: \"51ef9026-602b-4f6a-98f6-9d0f065f6c45\") " pod="openstack/dnsmasq-dns-699964fbc-7gqfj" Dec 11 15:07:45 crc kubenswrapper[5050]: I1211 15:07:45.504766 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-699964fbc-7gqfj" Dec 11 15:07:45 crc kubenswrapper[5050]: I1211 15:07:45.768674 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-699964fbc-7gqfj"] Dec 11 15:07:45 crc kubenswrapper[5050]: I1211 15:07:45.858624 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 11 15:07:46 crc kubenswrapper[5050]: I1211 15:07:46.537141 5050 generic.go:334] "Generic (PLEG): container finished" podID="51ef9026-602b-4f6a-98f6-9d0f065f6c45" containerID="8a5cc8d6f6aa7909f8ad1edf31dfcd8a02c5a220eb28689be009ff022bb36e2b" exitCode=0 Dec 11 15:07:46 crc kubenswrapper[5050]: I1211 15:07:46.537334 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-699964fbc-7gqfj" event={"ID":"51ef9026-602b-4f6a-98f6-9d0f065f6c45","Type":"ContainerDied","Data":"8a5cc8d6f6aa7909f8ad1edf31dfcd8a02c5a220eb28689be009ff022bb36e2b"} Dec 11 15:07:46 crc kubenswrapper[5050]: I1211 15:07:46.537501 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-699964fbc-7gqfj" event={"ID":"51ef9026-602b-4f6a-98f6-9d0f065f6c45","Type":"ContainerStarted","Data":"c05021bb10c66ca0c86c55d9afe24be6db3a9a8b8b1746a12ef19c4ed5f947a5"} Dec 11 15:07:46 crc kubenswrapper[5050]: I1211 15:07:46.643892 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 11 15:07:47 crc kubenswrapper[5050]: I1211 15:07:47.558612 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-699964fbc-7gqfj" event={"ID":"51ef9026-602b-4f6a-98f6-9d0f065f6c45","Type":"ContainerStarted","Data":"287588aa8c74548b8956ec69ebc3861fe0b814b83f650cf50df66dc37e1364e0"} Dec 11 15:07:47 crc kubenswrapper[5050]: I1211 15:07:47.558671 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-699964fbc-7gqfj" Dec 11 15:07:47 crc kubenswrapper[5050]: I1211 15:07:47.573487 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-699964fbc-7gqfj" podStartSLOduration=2.57346818 podStartE2EDuration="2.57346818s" podCreationTimestamp="2025-12-11 15:07:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 15:07:47.570510521 +0000 UTC m=+4758.414233117" watchObservedRunningTime="2025-12-11 15:07:47.57346818 +0000 UTC m=+4758.417190766" Dec 11 15:07:47 crc kubenswrapper[5050]: I1211 15:07:47.966189 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="609782a3-064e-4127-8ea6-080e428bea44" containerName="rabbitmq" containerID="cri-o://bb69c60a4ef7542878a4375ec49350949af8cc72d6005530a6d554d03c6bc135" gracePeriod=604798 Dec 11 15:07:48 crc kubenswrapper[5050]: I1211 15:07:48.700430 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="63be72e0-3588-4234-94d0-80c42d74aff6" containerName="rabbitmq" containerID="cri-o://bbf93b7abe451a02f03049f7c316488cbbff79119cdb784a50af637581ed63d5" gracePeriod=604798 Dec 11 15:07:49 crc 
kubenswrapper[5050]: I1211 15:07:49.553664 5050 scope.go:117] "RemoveContainer" containerID="b585bea20b0bb52e58642ee9f3e33b5b182a66a544d35daafc0202a53b35a810" Dec 11 15:07:49 crc kubenswrapper[5050]: E1211 15:07:49.554142 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 15:07:50 crc kubenswrapper[5050]: I1211 15:07:50.216344 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="63be72e0-3588-4234-94d0-80c42d74aff6" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.241:5672: connect: connection refused" Dec 11 15:07:50 crc kubenswrapper[5050]: I1211 15:07:50.241483 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="609782a3-064e-4127-8ea6-080e428bea44" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.240:5672: connect: connection refused" Dec 11 15:07:54 crc kubenswrapper[5050]: I1211 15:07:54.577086 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 11 15:07:54 crc kubenswrapper[5050]: I1211 15:07:54.626497 5050 generic.go:334] "Generic (PLEG): container finished" podID="609782a3-064e-4127-8ea6-080e428bea44" containerID="bb69c60a4ef7542878a4375ec49350949af8cc72d6005530a6d554d03c6bc135" exitCode=0 Dec 11 15:07:54 crc kubenswrapper[5050]: I1211 15:07:54.626573 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"609782a3-064e-4127-8ea6-080e428bea44","Type":"ContainerDied","Data":"bb69c60a4ef7542878a4375ec49350949af8cc72d6005530a6d554d03c6bc135"} Dec 11 15:07:54 crc kubenswrapper[5050]: I1211 15:07:54.626617 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"609782a3-064e-4127-8ea6-080e428bea44","Type":"ContainerDied","Data":"ef74ef1eee9834b492f3d92def8f0f8f9af78fcb33f8b5ac072621b47e638fd3"} Dec 11 15:07:54 crc kubenswrapper[5050]: I1211 15:07:54.626643 5050 scope.go:117] "RemoveContainer" containerID="bb69c60a4ef7542878a4375ec49350949af8cc72d6005530a6d554d03c6bc135" Dec 11 15:07:54 crc kubenswrapper[5050]: I1211 15:07:54.626837 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 11 15:07:54 crc kubenswrapper[5050]: I1211 15:07:54.656102 5050 scope.go:117] "RemoveContainer" containerID="360b96ea491a2086bdb37ea3509eccd0761e2c6a7998f159dbfb22917007fecd" Dec 11 15:07:54 crc kubenswrapper[5050]: I1211 15:07:54.657339 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-llnqh\" (UniqueName: \"kubernetes.io/projected/609782a3-064e-4127-8ea6-080e428bea44-kube-api-access-llnqh\") pod \"609782a3-064e-4127-8ea6-080e428bea44\" (UID: \"609782a3-064e-4127-8ea6-080e428bea44\") " Dec 11 15:07:54 crc kubenswrapper[5050]: I1211 15:07:54.657435 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/609782a3-064e-4127-8ea6-080e428bea44-rabbitmq-plugins\") pod \"609782a3-064e-4127-8ea6-080e428bea44\" (UID: \"609782a3-064e-4127-8ea6-080e428bea44\") " Dec 11 15:07:54 crc kubenswrapper[5050]: I1211 15:07:54.657460 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/609782a3-064e-4127-8ea6-080e428bea44-pod-info\") pod \"609782a3-064e-4127-8ea6-080e428bea44\" (UID: \"609782a3-064e-4127-8ea6-080e428bea44\") " Dec 11 15:07:54 crc kubenswrapper[5050]: I1211 15:07:54.657521 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/609782a3-064e-4127-8ea6-080e428bea44-rabbitmq-erlang-cookie\") pod \"609782a3-064e-4127-8ea6-080e428bea44\" (UID: \"609782a3-064e-4127-8ea6-080e428bea44\") " Dec 11 15:07:54 crc kubenswrapper[5050]: I1211 15:07:54.657554 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/609782a3-064e-4127-8ea6-080e428bea44-erlang-cookie-secret\") pod \"609782a3-064e-4127-8ea6-080e428bea44\" (UID: \"609782a3-064e-4127-8ea6-080e428bea44\") " Dec 11 15:07:54 crc kubenswrapper[5050]: I1211 15:07:54.657580 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/609782a3-064e-4127-8ea6-080e428bea44-plugins-conf\") pod \"609782a3-064e-4127-8ea6-080e428bea44\" (UID: \"609782a3-064e-4127-8ea6-080e428bea44\") " Dec 11 15:07:54 crc kubenswrapper[5050]: I1211 15:07:54.657612 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/609782a3-064e-4127-8ea6-080e428bea44-server-conf\") pod \"609782a3-064e-4127-8ea6-080e428bea44\" (UID: \"609782a3-064e-4127-8ea6-080e428bea44\") " Dec 11 15:07:54 crc kubenswrapper[5050]: I1211 15:07:54.657760 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-10e17a41-a3d8-4545-b0df-9368e3a79b07\") pod \"609782a3-064e-4127-8ea6-080e428bea44\" (UID: \"609782a3-064e-4127-8ea6-080e428bea44\") " Dec 11 15:07:54 crc kubenswrapper[5050]: I1211 15:07:54.657979 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/609782a3-064e-4127-8ea6-080e428bea44-rabbitmq-confd\") pod \"609782a3-064e-4127-8ea6-080e428bea44\" (UID: \"609782a3-064e-4127-8ea6-080e428bea44\") " Dec 11 15:07:54 crc kubenswrapper[5050]: I1211 15:07:54.658167 5050 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/609782a3-064e-4127-8ea6-080e428bea44-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "609782a3-064e-4127-8ea6-080e428bea44" (UID: "609782a3-064e-4127-8ea6-080e428bea44"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 15:07:54 crc kubenswrapper[5050]: I1211 15:07:54.658749 5050 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/609782a3-064e-4127-8ea6-080e428bea44-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Dec 11 15:07:54 crc kubenswrapper[5050]: I1211 15:07:54.658979 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/609782a3-064e-4127-8ea6-080e428bea44-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "609782a3-064e-4127-8ea6-080e428bea44" (UID: "609782a3-064e-4127-8ea6-080e428bea44"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 15:07:54 crc kubenswrapper[5050]: I1211 15:07:54.660140 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/609782a3-064e-4127-8ea6-080e428bea44-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "609782a3-064e-4127-8ea6-080e428bea44" (UID: "609782a3-064e-4127-8ea6-080e428bea44"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 15:07:54 crc kubenswrapper[5050]: I1211 15:07:54.664760 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/609782a3-064e-4127-8ea6-080e428bea44-kube-api-access-llnqh" (OuterVolumeSpecName: "kube-api-access-llnqh") pod "609782a3-064e-4127-8ea6-080e428bea44" (UID: "609782a3-064e-4127-8ea6-080e428bea44"). InnerVolumeSpecName "kube-api-access-llnqh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 15:07:54 crc kubenswrapper[5050]: I1211 15:07:54.667983 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/609782a3-064e-4127-8ea6-080e428bea44-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "609782a3-064e-4127-8ea6-080e428bea44" (UID: "609782a3-064e-4127-8ea6-080e428bea44"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 15:07:54 crc kubenswrapper[5050]: I1211 15:07:54.668169 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/609782a3-064e-4127-8ea6-080e428bea44-pod-info" (OuterVolumeSpecName: "pod-info") pod "609782a3-064e-4127-8ea6-080e428bea44" (UID: "609782a3-064e-4127-8ea6-080e428bea44"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Dec 11 15:07:54 crc kubenswrapper[5050]: I1211 15:07:54.671777 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-10e17a41-a3d8-4545-b0df-9368e3a79b07" (OuterVolumeSpecName: "persistence") pod "609782a3-064e-4127-8ea6-080e428bea44" (UID: "609782a3-064e-4127-8ea6-080e428bea44"). InnerVolumeSpecName "pvc-10e17a41-a3d8-4545-b0df-9368e3a79b07". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Dec 11 15:07:54 crc kubenswrapper[5050]: I1211 15:07:54.680551 5050 scope.go:117] "RemoveContainer" containerID="bb69c60a4ef7542878a4375ec49350949af8cc72d6005530a6d554d03c6bc135" Dec 11 15:07:54 crc kubenswrapper[5050]: E1211 15:07:54.681083 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bb69c60a4ef7542878a4375ec49350949af8cc72d6005530a6d554d03c6bc135\": container with ID starting with bb69c60a4ef7542878a4375ec49350949af8cc72d6005530a6d554d03c6bc135 not found: ID does not exist" containerID="bb69c60a4ef7542878a4375ec49350949af8cc72d6005530a6d554d03c6bc135" Dec 11 15:07:54 crc kubenswrapper[5050]: I1211 15:07:54.681135 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb69c60a4ef7542878a4375ec49350949af8cc72d6005530a6d554d03c6bc135"} err="failed to get container status \"bb69c60a4ef7542878a4375ec49350949af8cc72d6005530a6d554d03c6bc135\": rpc error: code = NotFound desc = could not find container \"bb69c60a4ef7542878a4375ec49350949af8cc72d6005530a6d554d03c6bc135\": container with ID starting with bb69c60a4ef7542878a4375ec49350949af8cc72d6005530a6d554d03c6bc135 not found: ID does not exist" Dec 11 15:07:54 crc kubenswrapper[5050]: I1211 15:07:54.681167 5050 scope.go:117] "RemoveContainer" containerID="360b96ea491a2086bdb37ea3509eccd0761e2c6a7998f159dbfb22917007fecd" Dec 11 15:07:54 crc kubenswrapper[5050]: E1211 15:07:54.681484 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"360b96ea491a2086bdb37ea3509eccd0761e2c6a7998f159dbfb22917007fecd\": container with ID starting with 360b96ea491a2086bdb37ea3509eccd0761e2c6a7998f159dbfb22917007fecd not found: ID does not exist" containerID="360b96ea491a2086bdb37ea3509eccd0761e2c6a7998f159dbfb22917007fecd" Dec 11 15:07:54 crc kubenswrapper[5050]: I1211 15:07:54.681520 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"360b96ea491a2086bdb37ea3509eccd0761e2c6a7998f159dbfb22917007fecd"} err="failed to get container status \"360b96ea491a2086bdb37ea3509eccd0761e2c6a7998f159dbfb22917007fecd\": rpc error: code = NotFound desc = could not find container \"360b96ea491a2086bdb37ea3509eccd0761e2c6a7998f159dbfb22917007fecd\": container with ID starting with 360b96ea491a2086bdb37ea3509eccd0761e2c6a7998f159dbfb22917007fecd not found: ID does not exist" Dec 11 15:07:54 crc kubenswrapper[5050]: I1211 15:07:54.709622 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/609782a3-064e-4127-8ea6-080e428bea44-server-conf" (OuterVolumeSpecName: "server-conf") pod "609782a3-064e-4127-8ea6-080e428bea44" (UID: "609782a3-064e-4127-8ea6-080e428bea44"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 15:07:54 crc kubenswrapper[5050]: I1211 15:07:54.751137 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/609782a3-064e-4127-8ea6-080e428bea44-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "609782a3-064e-4127-8ea6-080e428bea44" (UID: "609782a3-064e-4127-8ea6-080e428bea44"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 15:07:54 crc kubenswrapper[5050]: I1211 15:07:54.760261 5050 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/609782a3-064e-4127-8ea6-080e428bea44-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Dec 11 15:07:54 crc kubenswrapper[5050]: I1211 15:07:54.760289 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-llnqh\" (UniqueName: \"kubernetes.io/projected/609782a3-064e-4127-8ea6-080e428bea44-kube-api-access-llnqh\") on node \"crc\" DevicePath \"\"" Dec 11 15:07:54 crc kubenswrapper[5050]: I1211 15:07:54.760300 5050 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/609782a3-064e-4127-8ea6-080e428bea44-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Dec 11 15:07:54 crc kubenswrapper[5050]: I1211 15:07:54.760310 5050 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/609782a3-064e-4127-8ea6-080e428bea44-pod-info\") on node \"crc\" DevicePath \"\"" Dec 11 15:07:54 crc kubenswrapper[5050]: I1211 15:07:54.760319 5050 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/609782a3-064e-4127-8ea6-080e428bea44-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Dec 11 15:07:54 crc kubenswrapper[5050]: I1211 15:07:54.760327 5050 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/609782a3-064e-4127-8ea6-080e428bea44-plugins-conf\") on node \"crc\" DevicePath \"\"" Dec 11 15:07:54 crc kubenswrapper[5050]: I1211 15:07:54.760336 5050 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/609782a3-064e-4127-8ea6-080e428bea44-server-conf\") on node \"crc\" DevicePath \"\"" Dec 11 15:07:54 crc kubenswrapper[5050]: I1211 15:07:54.760382 5050 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-10e17a41-a3d8-4545-b0df-9368e3a79b07\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-10e17a41-a3d8-4545-b0df-9368e3a79b07\") on node \"crc\" " Dec 11 15:07:54 crc kubenswrapper[5050]: I1211 15:07:54.776591 5050 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Dec 11 15:07:54 crc kubenswrapper[5050]: I1211 15:07:54.776747 5050 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-10e17a41-a3d8-4545-b0df-9368e3a79b07" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-10e17a41-a3d8-4545-b0df-9368e3a79b07") on node "crc" Dec 11 15:07:54 crc kubenswrapper[5050]: I1211 15:07:54.862514 5050 reconciler_common.go:293] "Volume detached for volume \"pvc-10e17a41-a3d8-4545-b0df-9368e3a79b07\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-10e17a41-a3d8-4545-b0df-9368e3a79b07\") on node \"crc\" DevicePath \"\"" Dec 11 15:07:55 crc kubenswrapper[5050]: I1211 15:07:55.006024 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 11 15:07:55 crc kubenswrapper[5050]: I1211 15:07:55.014321 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 11 15:07:55 crc kubenswrapper[5050]: I1211 15:07:55.034677 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Dec 11 15:07:55 crc kubenswrapper[5050]: E1211 15:07:55.035180 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="609782a3-064e-4127-8ea6-080e428bea44" containerName="rabbitmq" Dec 11 15:07:55 crc kubenswrapper[5050]: I1211 15:07:55.035200 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="609782a3-064e-4127-8ea6-080e428bea44" containerName="rabbitmq" Dec 11 15:07:55 crc kubenswrapper[5050]: E1211 15:07:55.035228 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="609782a3-064e-4127-8ea6-080e428bea44" containerName="setup-container" Dec 11 15:07:55 crc kubenswrapper[5050]: I1211 15:07:55.035257 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="609782a3-064e-4127-8ea6-080e428bea44" containerName="setup-container" Dec 11 15:07:55 crc kubenswrapper[5050]: I1211 15:07:55.035418 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="609782a3-064e-4127-8ea6-080e428bea44" containerName="rabbitmq" Dec 11 15:07:55 crc kubenswrapper[5050]: I1211 15:07:55.040407 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 11 15:07:55 crc kubenswrapper[5050]: I1211 15:07:55.043424 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-zd6qh" Dec 11 15:07:55 crc kubenswrapper[5050]: I1211 15:07:55.043808 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Dec 11 15:07:55 crc kubenswrapper[5050]: I1211 15:07:55.045043 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Dec 11 15:07:55 crc kubenswrapper[5050]: I1211 15:07:55.045707 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Dec 11 15:07:55 crc kubenswrapper[5050]: I1211 15:07:55.045973 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Dec 11 15:07:55 crc kubenswrapper[5050]: I1211 15:07:55.059788 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 11 15:07:55 crc kubenswrapper[5050]: I1211 15:07:55.065976 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f3f3a394-a9a0-4a5a-b8e9-f980a18d54fc-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"f3f3a394-a9a0-4a5a-b8e9-f980a18d54fc\") " pod="openstack/rabbitmq-server-0" Dec 11 15:07:55 crc kubenswrapper[5050]: I1211 15:07:55.066112 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f3f3a394-a9a0-4a5a-b8e9-f980a18d54fc-server-conf\") pod \"rabbitmq-server-0\" (UID: \"f3f3a394-a9a0-4a5a-b8e9-f980a18d54fc\") " pod="openstack/rabbitmq-server-0" Dec 11 15:07:55 crc kubenswrapper[5050]: I1211 15:07:55.066158 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f3f3a394-a9a0-4a5a-b8e9-f980a18d54fc-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"f3f3a394-a9a0-4a5a-b8e9-f980a18d54fc\") " pod="openstack/rabbitmq-server-0" Dec 11 15:07:55 crc kubenswrapper[5050]: I1211 15:07:55.066190 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f3f3a394-a9a0-4a5a-b8e9-f980a18d54fc-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"f3f3a394-a9a0-4a5a-b8e9-f980a18d54fc\") " pod="openstack/rabbitmq-server-0" Dec 11 15:07:55 crc kubenswrapper[5050]: I1211 15:07:55.066237 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xtkb2\" (UniqueName: \"kubernetes.io/projected/f3f3a394-a9a0-4a5a-b8e9-f980a18d54fc-kube-api-access-xtkb2\") pod \"rabbitmq-server-0\" (UID: \"f3f3a394-a9a0-4a5a-b8e9-f980a18d54fc\") " pod="openstack/rabbitmq-server-0" Dec 11 15:07:55 crc kubenswrapper[5050]: I1211 15:07:55.066285 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-10e17a41-a3d8-4545-b0df-9368e3a79b07\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-10e17a41-a3d8-4545-b0df-9368e3a79b07\") pod \"rabbitmq-server-0\" (UID: \"f3f3a394-a9a0-4a5a-b8e9-f980a18d54fc\") " pod="openstack/rabbitmq-server-0" Dec 11 15:07:55 crc kubenswrapper[5050]: I1211 15:07:55.066313 5050 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f3f3a394-a9a0-4a5a-b8e9-f980a18d54fc-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"f3f3a394-a9a0-4a5a-b8e9-f980a18d54fc\") " pod="openstack/rabbitmq-server-0" Dec 11 15:07:55 crc kubenswrapper[5050]: I1211 15:07:55.066340 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f3f3a394-a9a0-4a5a-b8e9-f980a18d54fc-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"f3f3a394-a9a0-4a5a-b8e9-f980a18d54fc\") " pod="openstack/rabbitmq-server-0" Dec 11 15:07:55 crc kubenswrapper[5050]: I1211 15:07:55.066366 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f3f3a394-a9a0-4a5a-b8e9-f980a18d54fc-pod-info\") pod \"rabbitmq-server-0\" (UID: \"f3f3a394-a9a0-4a5a-b8e9-f980a18d54fc\") " pod="openstack/rabbitmq-server-0" Dec 11 15:07:55 crc kubenswrapper[5050]: I1211 15:07:55.168082 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f3f3a394-a9a0-4a5a-b8e9-f980a18d54fc-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"f3f3a394-a9a0-4a5a-b8e9-f980a18d54fc\") " pod="openstack/rabbitmq-server-0" Dec 11 15:07:55 crc kubenswrapper[5050]: I1211 15:07:55.168512 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f3f3a394-a9a0-4a5a-b8e9-f980a18d54fc-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"f3f3a394-a9a0-4a5a-b8e9-f980a18d54fc\") " pod="openstack/rabbitmq-server-0" Dec 11 15:07:55 crc kubenswrapper[5050]: I1211 15:07:55.168556 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xtkb2\" (UniqueName: \"kubernetes.io/projected/f3f3a394-a9a0-4a5a-b8e9-f980a18d54fc-kube-api-access-xtkb2\") pod \"rabbitmq-server-0\" (UID: \"f3f3a394-a9a0-4a5a-b8e9-f980a18d54fc\") " pod="openstack/rabbitmq-server-0" Dec 11 15:07:55 crc kubenswrapper[5050]: I1211 15:07:55.168594 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-10e17a41-a3d8-4545-b0df-9368e3a79b07\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-10e17a41-a3d8-4545-b0df-9368e3a79b07\") pod \"rabbitmq-server-0\" (UID: \"f3f3a394-a9a0-4a5a-b8e9-f980a18d54fc\") " pod="openstack/rabbitmq-server-0" Dec 11 15:07:55 crc kubenswrapper[5050]: I1211 15:07:55.168618 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f3f3a394-a9a0-4a5a-b8e9-f980a18d54fc-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"f3f3a394-a9a0-4a5a-b8e9-f980a18d54fc\") " pod="openstack/rabbitmq-server-0" Dec 11 15:07:55 crc kubenswrapper[5050]: I1211 15:07:55.168643 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f3f3a394-a9a0-4a5a-b8e9-f980a18d54fc-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"f3f3a394-a9a0-4a5a-b8e9-f980a18d54fc\") " pod="openstack/rabbitmq-server-0" Dec 11 15:07:55 crc kubenswrapper[5050]: I1211 15:07:55.168667 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: 
\"kubernetes.io/downward-api/f3f3a394-a9a0-4a5a-b8e9-f980a18d54fc-pod-info\") pod \"rabbitmq-server-0\" (UID: \"f3f3a394-a9a0-4a5a-b8e9-f980a18d54fc\") " pod="openstack/rabbitmq-server-0" Dec 11 15:07:55 crc kubenswrapper[5050]: I1211 15:07:55.168711 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f3f3a394-a9a0-4a5a-b8e9-f980a18d54fc-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"f3f3a394-a9a0-4a5a-b8e9-f980a18d54fc\") " pod="openstack/rabbitmq-server-0" Dec 11 15:07:55 crc kubenswrapper[5050]: I1211 15:07:55.168749 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f3f3a394-a9a0-4a5a-b8e9-f980a18d54fc-server-conf\") pod \"rabbitmq-server-0\" (UID: \"f3f3a394-a9a0-4a5a-b8e9-f980a18d54fc\") " pod="openstack/rabbitmq-server-0" Dec 11 15:07:55 crc kubenswrapper[5050]: I1211 15:07:55.169433 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f3f3a394-a9a0-4a5a-b8e9-f980a18d54fc-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"f3f3a394-a9a0-4a5a-b8e9-f980a18d54fc\") " pod="openstack/rabbitmq-server-0" Dec 11 15:07:55 crc kubenswrapper[5050]: I1211 15:07:55.169481 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f3f3a394-a9a0-4a5a-b8e9-f980a18d54fc-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"f3f3a394-a9a0-4a5a-b8e9-f980a18d54fc\") " pod="openstack/rabbitmq-server-0" Dec 11 15:07:55 crc kubenswrapper[5050]: I1211 15:07:55.169894 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f3f3a394-a9a0-4a5a-b8e9-f980a18d54fc-server-conf\") pod \"rabbitmq-server-0\" (UID: \"f3f3a394-a9a0-4a5a-b8e9-f980a18d54fc\") " pod="openstack/rabbitmq-server-0" Dec 11 15:07:55 crc kubenswrapper[5050]: I1211 15:07:55.170587 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f3f3a394-a9a0-4a5a-b8e9-f980a18d54fc-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"f3f3a394-a9a0-4a5a-b8e9-f980a18d54fc\") " pod="openstack/rabbitmq-server-0" Dec 11 15:07:55 crc kubenswrapper[5050]: I1211 15:07:55.172042 5050 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Dec 11 15:07:55 crc kubenswrapper[5050]: I1211 15:07:55.172080 5050 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-10e17a41-a3d8-4545-b0df-9368e3a79b07\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-10e17a41-a3d8-4545-b0df-9368e3a79b07\") pod \"rabbitmq-server-0\" (UID: \"f3f3a394-a9a0-4a5a-b8e9-f980a18d54fc\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/e8b3cb282516c479b71e40677cbee6e03decb409970b73313cde2336ff2ca689/globalmount\"" pod="openstack/rabbitmq-server-0" Dec 11 15:07:55 crc kubenswrapper[5050]: I1211 15:07:55.177037 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f3f3a394-a9a0-4a5a-b8e9-f980a18d54fc-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"f3f3a394-a9a0-4a5a-b8e9-f980a18d54fc\") " pod="openstack/rabbitmq-server-0" Dec 11 15:07:55 crc kubenswrapper[5050]: I1211 15:07:55.177259 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f3f3a394-a9a0-4a5a-b8e9-f980a18d54fc-pod-info\") pod \"rabbitmq-server-0\" (UID: \"f3f3a394-a9a0-4a5a-b8e9-f980a18d54fc\") " pod="openstack/rabbitmq-server-0" Dec 11 15:07:55 crc kubenswrapper[5050]: I1211 15:07:55.177340 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f3f3a394-a9a0-4a5a-b8e9-f980a18d54fc-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"f3f3a394-a9a0-4a5a-b8e9-f980a18d54fc\") " pod="openstack/rabbitmq-server-0" Dec 11 15:07:55 crc kubenswrapper[5050]: I1211 15:07:55.195120 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xtkb2\" (UniqueName: \"kubernetes.io/projected/f3f3a394-a9a0-4a5a-b8e9-f980a18d54fc-kube-api-access-xtkb2\") pod \"rabbitmq-server-0\" (UID: \"f3f3a394-a9a0-4a5a-b8e9-f980a18d54fc\") " pod="openstack/rabbitmq-server-0" Dec 11 15:07:55 crc kubenswrapper[5050]: I1211 15:07:55.218272 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-10e17a41-a3d8-4545-b0df-9368e3a79b07\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-10e17a41-a3d8-4545-b0df-9368e3a79b07\") pod \"rabbitmq-server-0\" (UID: \"f3f3a394-a9a0-4a5a-b8e9-f980a18d54fc\") " pod="openstack/rabbitmq-server-0" Dec 11 15:07:55 crc kubenswrapper[5050]: I1211 15:07:55.354732 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 11 15:07:55 crc kubenswrapper[5050]: I1211 15:07:55.368250 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 11 15:07:55 crc kubenswrapper[5050]: I1211 15:07:55.371116 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/63be72e0-3588-4234-94d0-80c42d74aff6-plugins-conf\") pod \"63be72e0-3588-4234-94d0-80c42d74aff6\" (UID: \"63be72e0-3588-4234-94d0-80c42d74aff6\") " Dec 11 15:07:55 crc kubenswrapper[5050]: I1211 15:07:55.371280 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/63be72e0-3588-4234-94d0-80c42d74aff6-erlang-cookie-secret\") pod \"63be72e0-3588-4234-94d0-80c42d74aff6\" (UID: \"63be72e0-3588-4234-94d0-80c42d74aff6\") " Dec 11 15:07:55 crc kubenswrapper[5050]: I1211 15:07:55.371389 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-22bf139d-2dbc-4fae-b0f0-8d7e33ad3a27\") pod \"63be72e0-3588-4234-94d0-80c42d74aff6\" (UID: \"63be72e0-3588-4234-94d0-80c42d74aff6\") " Dec 11 15:07:55 crc kubenswrapper[5050]: I1211 15:07:55.371422 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qqgcc\" (UniqueName: \"kubernetes.io/projected/63be72e0-3588-4234-94d0-80c42d74aff6-kube-api-access-qqgcc\") pod \"63be72e0-3588-4234-94d0-80c42d74aff6\" (UID: \"63be72e0-3588-4234-94d0-80c42d74aff6\") " Dec 11 15:07:55 crc kubenswrapper[5050]: I1211 15:07:55.371447 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/63be72e0-3588-4234-94d0-80c42d74aff6-rabbitmq-plugins\") pod \"63be72e0-3588-4234-94d0-80c42d74aff6\" (UID: \"63be72e0-3588-4234-94d0-80c42d74aff6\") " Dec 11 15:07:55 crc kubenswrapper[5050]: I1211 15:07:55.371470 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/63be72e0-3588-4234-94d0-80c42d74aff6-rabbitmq-erlang-cookie\") pod \"63be72e0-3588-4234-94d0-80c42d74aff6\" (UID: \"63be72e0-3588-4234-94d0-80c42d74aff6\") " Dec 11 15:07:55 crc kubenswrapper[5050]: I1211 15:07:55.371497 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/63be72e0-3588-4234-94d0-80c42d74aff6-server-conf\") pod \"63be72e0-3588-4234-94d0-80c42d74aff6\" (UID: \"63be72e0-3588-4234-94d0-80c42d74aff6\") " Dec 11 15:07:55 crc kubenswrapper[5050]: I1211 15:07:55.371524 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/63be72e0-3588-4234-94d0-80c42d74aff6-rabbitmq-confd\") pod \"63be72e0-3588-4234-94d0-80c42d74aff6\" (UID: \"63be72e0-3588-4234-94d0-80c42d74aff6\") " Dec 11 15:07:55 crc kubenswrapper[5050]: I1211 15:07:55.371543 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/63be72e0-3588-4234-94d0-80c42d74aff6-pod-info\") pod \"63be72e0-3588-4234-94d0-80c42d74aff6\" (UID: \"63be72e0-3588-4234-94d0-80c42d74aff6\") " Dec 11 15:07:55 crc kubenswrapper[5050]: I1211 15:07:55.372413 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/63be72e0-3588-4234-94d0-80c42d74aff6-rabbitmq-erlang-cookie" (OuterVolumeSpecName: 
"rabbitmq-erlang-cookie") pod "63be72e0-3588-4234-94d0-80c42d74aff6" (UID: "63be72e0-3588-4234-94d0-80c42d74aff6"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 15:07:55 crc kubenswrapper[5050]: I1211 15:07:55.372695 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/63be72e0-3588-4234-94d0-80c42d74aff6-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "63be72e0-3588-4234-94d0-80c42d74aff6" (UID: "63be72e0-3588-4234-94d0-80c42d74aff6"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 15:07:55 crc kubenswrapper[5050]: I1211 15:07:55.372865 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/63be72e0-3588-4234-94d0-80c42d74aff6-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "63be72e0-3588-4234-94d0-80c42d74aff6" (UID: "63be72e0-3588-4234-94d0-80c42d74aff6"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 15:07:55 crc kubenswrapper[5050]: I1211 15:07:55.379028 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63be72e0-3588-4234-94d0-80c42d74aff6-kube-api-access-qqgcc" (OuterVolumeSpecName: "kube-api-access-qqgcc") pod "63be72e0-3588-4234-94d0-80c42d74aff6" (UID: "63be72e0-3588-4234-94d0-80c42d74aff6"). InnerVolumeSpecName "kube-api-access-qqgcc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 15:07:55 crc kubenswrapper[5050]: I1211 15:07:55.389353 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63be72e0-3588-4234-94d0-80c42d74aff6-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "63be72e0-3588-4234-94d0-80c42d74aff6" (UID: "63be72e0-3588-4234-94d0-80c42d74aff6"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 15:07:55 crc kubenswrapper[5050]: I1211 15:07:55.394239 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/63be72e0-3588-4234-94d0-80c42d74aff6-pod-info" (OuterVolumeSpecName: "pod-info") pod "63be72e0-3588-4234-94d0-80c42d74aff6" (UID: "63be72e0-3588-4234-94d0-80c42d74aff6"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Dec 11 15:07:55 crc kubenswrapper[5050]: I1211 15:07:55.410632 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-22bf139d-2dbc-4fae-b0f0-8d7e33ad3a27" (OuterVolumeSpecName: "persistence") pod "63be72e0-3588-4234-94d0-80c42d74aff6" (UID: "63be72e0-3588-4234-94d0-80c42d74aff6"). InnerVolumeSpecName "pvc-22bf139d-2dbc-4fae-b0f0-8d7e33ad3a27". PluginName "kubernetes.io/csi", VolumeGidValue "" Dec 11 15:07:55 crc kubenswrapper[5050]: I1211 15:07:55.416238 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/63be72e0-3588-4234-94d0-80c42d74aff6-server-conf" (OuterVolumeSpecName: "server-conf") pod "63be72e0-3588-4234-94d0-80c42d74aff6" (UID: "63be72e0-3588-4234-94d0-80c42d74aff6"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 15:07:55 crc kubenswrapper[5050]: I1211 15:07:55.472564 5050 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/63be72e0-3588-4234-94d0-80c42d74aff6-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Dec 11 15:07:55 crc kubenswrapper[5050]: I1211 15:07:55.472637 5050 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/63be72e0-3588-4234-94d0-80c42d74aff6-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Dec 11 15:07:55 crc kubenswrapper[5050]: I1211 15:07:55.472650 5050 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/63be72e0-3588-4234-94d0-80c42d74aff6-server-conf\") on node \"crc\" DevicePath \"\"" Dec 11 15:07:55 crc kubenswrapper[5050]: I1211 15:07:55.472659 5050 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/63be72e0-3588-4234-94d0-80c42d74aff6-pod-info\") on node \"crc\" DevicePath \"\"" Dec 11 15:07:55 crc kubenswrapper[5050]: I1211 15:07:55.472668 5050 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/63be72e0-3588-4234-94d0-80c42d74aff6-plugins-conf\") on node \"crc\" DevicePath \"\"" Dec 11 15:07:55 crc kubenswrapper[5050]: I1211 15:07:55.472677 5050 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/63be72e0-3588-4234-94d0-80c42d74aff6-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Dec 11 15:07:55 crc kubenswrapper[5050]: I1211 15:07:55.472728 5050 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-22bf139d-2dbc-4fae-b0f0-8d7e33ad3a27\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-22bf139d-2dbc-4fae-b0f0-8d7e33ad3a27\") on node \"crc\" " Dec 11 15:07:55 crc kubenswrapper[5050]: I1211 15:07:55.472739 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qqgcc\" (UniqueName: \"kubernetes.io/projected/63be72e0-3588-4234-94d0-80c42d74aff6-kube-api-access-qqgcc\") on node \"crc\" DevicePath \"\"" Dec 11 15:07:55 crc kubenswrapper[5050]: I1211 15:07:55.501700 5050 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Dec 11 15:07:55 crc kubenswrapper[5050]: I1211 15:07:55.501961 5050 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-22bf139d-2dbc-4fae-b0f0-8d7e33ad3a27" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-22bf139d-2dbc-4fae-b0f0-8d7e33ad3a27") on node "crc" Dec 11 15:07:55 crc kubenswrapper[5050]: I1211 15:07:55.506148 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-699964fbc-7gqfj" Dec 11 15:07:55 crc kubenswrapper[5050]: I1211 15:07:55.520506 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63be72e0-3588-4234-94d0-80c42d74aff6-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "63be72e0-3588-4234-94d0-80c42d74aff6" (UID: "63be72e0-3588-4234-94d0-80c42d74aff6"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 15:07:55 crc kubenswrapper[5050]: I1211 15:07:55.571085 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="609782a3-064e-4127-8ea6-080e428bea44" path="/var/lib/kubelet/pods/609782a3-064e-4127-8ea6-080e428bea44/volumes" Dec 11 15:07:55 crc kubenswrapper[5050]: I1211 15:07:55.573629 5050 reconciler_common.go:293] "Volume detached for volume \"pvc-22bf139d-2dbc-4fae-b0f0-8d7e33ad3a27\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-22bf139d-2dbc-4fae-b0f0-8d7e33ad3a27\") on node \"crc\" DevicePath \"\"" Dec 11 15:07:55 crc kubenswrapper[5050]: I1211 15:07:55.573668 5050 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/63be72e0-3588-4234-94d0-80c42d74aff6-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Dec 11 15:07:55 crc kubenswrapper[5050]: I1211 15:07:55.575731 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5d79f765b5-mr6dw"] Dec 11 15:07:55 crc kubenswrapper[5050]: I1211 15:07:55.575982 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5d79f765b5-mr6dw" podUID="786d8f03-bb4c-47b7-a94a-b86c9bc07c6d" containerName="dnsmasq-dns" containerID="cri-o://094234d197fae2ac8e7fdc4a50ba306b4ebee05f718803e46ae3580eaf17f016" gracePeriod=10 Dec 11 15:07:55 crc kubenswrapper[5050]: I1211 15:07:55.643197 5050 generic.go:334] "Generic (PLEG): container finished" podID="63be72e0-3588-4234-94d0-80c42d74aff6" containerID="bbf93b7abe451a02f03049f7c316488cbbff79119cdb784a50af637581ed63d5" exitCode=0 Dec 11 15:07:55 crc kubenswrapper[5050]: I1211 15:07:55.643251 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 11 15:07:55 crc kubenswrapper[5050]: I1211 15:07:55.643287 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"63be72e0-3588-4234-94d0-80c42d74aff6","Type":"ContainerDied","Data":"bbf93b7abe451a02f03049f7c316488cbbff79119cdb784a50af637581ed63d5"} Dec 11 15:07:55 crc kubenswrapper[5050]: I1211 15:07:55.643346 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"63be72e0-3588-4234-94d0-80c42d74aff6","Type":"ContainerDied","Data":"d8727f4042c1c95cd8c86c4817dbd6784487eb7f5bb8f2858a787e2fc1f2fc4d"} Dec 11 15:07:55 crc kubenswrapper[5050]: I1211 15:07:55.643367 5050 scope.go:117] "RemoveContainer" containerID="bbf93b7abe451a02f03049f7c316488cbbff79119cdb784a50af637581ed63d5" Dec 11 15:07:55 crc kubenswrapper[5050]: I1211 15:07:55.691547 5050 scope.go:117] "RemoveContainer" containerID="aeafc8c5e7c585bd94c8da4dff5b0790ec5875c06a752cafff40b2f26096bfb4" Dec 11 15:07:55 crc kubenswrapper[5050]: I1211 15:07:55.711298 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 11 15:07:55 crc kubenswrapper[5050]: I1211 15:07:55.729498 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 11 15:07:55 crc kubenswrapper[5050]: I1211 15:07:55.744165 5050 scope.go:117] "RemoveContainer" containerID="bbf93b7abe451a02f03049f7c316488cbbff79119cdb784a50af637581ed63d5" Dec 11 15:07:55 crc kubenswrapper[5050]: E1211 15:07:55.744706 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bbf93b7abe451a02f03049f7c316488cbbff79119cdb784a50af637581ed63d5\": container with ID starting with bbf93b7abe451a02f03049f7c316488cbbff79119cdb784a50af637581ed63d5 not found: ID does not exist" containerID="bbf93b7abe451a02f03049f7c316488cbbff79119cdb784a50af637581ed63d5" Dec 11 15:07:55 crc kubenswrapper[5050]: I1211 15:07:55.744762 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bbf93b7abe451a02f03049f7c316488cbbff79119cdb784a50af637581ed63d5"} err="failed to get container status \"bbf93b7abe451a02f03049f7c316488cbbff79119cdb784a50af637581ed63d5\": rpc error: code = NotFound desc = could not find container \"bbf93b7abe451a02f03049f7c316488cbbff79119cdb784a50af637581ed63d5\": container with ID starting with bbf93b7abe451a02f03049f7c316488cbbff79119cdb784a50af637581ed63d5 not found: ID does not exist" Dec 11 15:07:55 crc kubenswrapper[5050]: I1211 15:07:55.744798 5050 scope.go:117] "RemoveContainer" containerID="aeafc8c5e7c585bd94c8da4dff5b0790ec5875c06a752cafff40b2f26096bfb4" Dec 11 15:07:55 crc kubenswrapper[5050]: E1211 15:07:55.746328 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aeafc8c5e7c585bd94c8da4dff5b0790ec5875c06a752cafff40b2f26096bfb4\": container with ID starting with aeafc8c5e7c585bd94c8da4dff5b0790ec5875c06a752cafff40b2f26096bfb4 not found: ID does not exist" containerID="aeafc8c5e7c585bd94c8da4dff5b0790ec5875c06a752cafff40b2f26096bfb4" Dec 11 15:07:55 crc kubenswrapper[5050]: I1211 15:07:55.746376 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aeafc8c5e7c585bd94c8da4dff5b0790ec5875c06a752cafff40b2f26096bfb4"} err="failed to get container status 
\"aeafc8c5e7c585bd94c8da4dff5b0790ec5875c06a752cafff40b2f26096bfb4\": rpc error: code = NotFound desc = could not find container \"aeafc8c5e7c585bd94c8da4dff5b0790ec5875c06a752cafff40b2f26096bfb4\": container with ID starting with aeafc8c5e7c585bd94c8da4dff5b0790ec5875c06a752cafff40b2f26096bfb4 not found: ID does not exist" Dec 11 15:07:55 crc kubenswrapper[5050]: I1211 15:07:55.749788 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 11 15:07:55 crc kubenswrapper[5050]: E1211 15:07:55.750276 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63be72e0-3588-4234-94d0-80c42d74aff6" containerName="setup-container" Dec 11 15:07:55 crc kubenswrapper[5050]: I1211 15:07:55.750297 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="63be72e0-3588-4234-94d0-80c42d74aff6" containerName="setup-container" Dec 11 15:07:55 crc kubenswrapper[5050]: E1211 15:07:55.750342 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63be72e0-3588-4234-94d0-80c42d74aff6" containerName="rabbitmq" Dec 11 15:07:55 crc kubenswrapper[5050]: I1211 15:07:55.750349 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="63be72e0-3588-4234-94d0-80c42d74aff6" containerName="rabbitmq" Dec 11 15:07:55 crc kubenswrapper[5050]: I1211 15:07:55.750514 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="63be72e0-3588-4234-94d0-80c42d74aff6" containerName="rabbitmq" Dec 11 15:07:55 crc kubenswrapper[5050]: I1211 15:07:55.751525 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 11 15:07:55 crc kubenswrapper[5050]: I1211 15:07:55.756934 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Dec 11 15:07:55 crc kubenswrapper[5050]: I1211 15:07:55.757293 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-bvxnm" Dec 11 15:07:55 crc kubenswrapper[5050]: I1211 15:07:55.756956 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Dec 11 15:07:55 crc kubenswrapper[5050]: I1211 15:07:55.757087 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Dec 11 15:07:55 crc kubenswrapper[5050]: I1211 15:07:55.757161 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Dec 11 15:07:55 crc kubenswrapper[5050]: I1211 15:07:55.761558 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 11 15:07:55 crc kubenswrapper[5050]: I1211 15:07:55.878476 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-22bf139d-2dbc-4fae-b0f0-8d7e33ad3a27\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-22bf139d-2dbc-4fae-b0f0-8d7e33ad3a27\") pod \"rabbitmq-cell1-server-0\" (UID: \"729e0c28-cb57-4539-b523-b8ae848a62c8\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 15:07:55 crc kubenswrapper[5050]: I1211 15:07:55.878546 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/729e0c28-cb57-4539-b523-b8ae848a62c8-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"729e0c28-cb57-4539-b523-b8ae848a62c8\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 15:07:55 crc 
kubenswrapper[5050]: I1211 15:07:55.878585 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/729e0c28-cb57-4539-b523-b8ae848a62c8-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"729e0c28-cb57-4539-b523-b8ae848a62c8\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 15:07:55 crc kubenswrapper[5050]: I1211 15:07:55.878623 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/729e0c28-cb57-4539-b523-b8ae848a62c8-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"729e0c28-cb57-4539-b523-b8ae848a62c8\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 15:07:55 crc kubenswrapper[5050]: I1211 15:07:55.878674 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/729e0c28-cb57-4539-b523-b8ae848a62c8-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"729e0c28-cb57-4539-b523-b8ae848a62c8\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 15:07:55 crc kubenswrapper[5050]: I1211 15:07:55.878695 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/729e0c28-cb57-4539-b523-b8ae848a62c8-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"729e0c28-cb57-4539-b523-b8ae848a62c8\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 15:07:55 crc kubenswrapper[5050]: I1211 15:07:55.878720 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/729e0c28-cb57-4539-b523-b8ae848a62c8-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"729e0c28-cb57-4539-b523-b8ae848a62c8\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 15:07:55 crc kubenswrapper[5050]: I1211 15:07:55.878751 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w7c2k\" (UniqueName: \"kubernetes.io/projected/729e0c28-cb57-4539-b523-b8ae848a62c8-kube-api-access-w7c2k\") pod \"rabbitmq-cell1-server-0\" (UID: \"729e0c28-cb57-4539-b523-b8ae848a62c8\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 15:07:55 crc kubenswrapper[5050]: I1211 15:07:55.878774 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/729e0c28-cb57-4539-b523-b8ae848a62c8-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"729e0c28-cb57-4539-b523-b8ae848a62c8\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 15:07:55 crc kubenswrapper[5050]: I1211 15:07:55.938819 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 11 15:07:55 crc kubenswrapper[5050]: I1211 15:07:55.980268 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/729e0c28-cb57-4539-b523-b8ae848a62c8-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"729e0c28-cb57-4539-b523-b8ae848a62c8\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 15:07:55 crc kubenswrapper[5050]: I1211 15:07:55.980342 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: 
\"kubernetes.io/downward-api/729e0c28-cb57-4539-b523-b8ae848a62c8-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"729e0c28-cb57-4539-b523-b8ae848a62c8\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 15:07:55 crc kubenswrapper[5050]: I1211 15:07:55.980412 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/729e0c28-cb57-4539-b523-b8ae848a62c8-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"729e0c28-cb57-4539-b523-b8ae848a62c8\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 15:07:55 crc kubenswrapper[5050]: I1211 15:07:55.980434 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/729e0c28-cb57-4539-b523-b8ae848a62c8-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"729e0c28-cb57-4539-b523-b8ae848a62c8\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 15:07:55 crc kubenswrapper[5050]: I1211 15:07:55.980461 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/729e0c28-cb57-4539-b523-b8ae848a62c8-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"729e0c28-cb57-4539-b523-b8ae848a62c8\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 15:07:55 crc kubenswrapper[5050]: I1211 15:07:55.980506 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w7c2k\" (UniqueName: \"kubernetes.io/projected/729e0c28-cb57-4539-b523-b8ae848a62c8-kube-api-access-w7c2k\") pod \"rabbitmq-cell1-server-0\" (UID: \"729e0c28-cb57-4539-b523-b8ae848a62c8\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 15:07:55 crc kubenswrapper[5050]: I1211 15:07:55.980532 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/729e0c28-cb57-4539-b523-b8ae848a62c8-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"729e0c28-cb57-4539-b523-b8ae848a62c8\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 15:07:55 crc kubenswrapper[5050]: I1211 15:07:55.980580 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-22bf139d-2dbc-4fae-b0f0-8d7e33ad3a27\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-22bf139d-2dbc-4fae-b0f0-8d7e33ad3a27\") pod \"rabbitmq-cell1-server-0\" (UID: \"729e0c28-cb57-4539-b523-b8ae848a62c8\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 15:07:55 crc kubenswrapper[5050]: I1211 15:07:55.980610 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/729e0c28-cb57-4539-b523-b8ae848a62c8-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"729e0c28-cb57-4539-b523-b8ae848a62c8\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 15:07:55 crc kubenswrapper[5050]: I1211 15:07:55.981274 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/729e0c28-cb57-4539-b523-b8ae848a62c8-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"729e0c28-cb57-4539-b523-b8ae848a62c8\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 15:07:55 crc kubenswrapper[5050]: I1211 15:07:55.981300 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/729e0c28-cb57-4539-b523-b8ae848a62c8-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"729e0c28-cb57-4539-b523-b8ae848a62c8\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 15:07:55 crc kubenswrapper[5050]: I1211 15:07:55.983198 5050 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Dec 11 15:07:55 crc kubenswrapper[5050]: I1211 15:07:55.983241 5050 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-22bf139d-2dbc-4fae-b0f0-8d7e33ad3a27\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-22bf139d-2dbc-4fae-b0f0-8d7e33ad3a27\") pod \"rabbitmq-cell1-server-0\" (UID: \"729e0c28-cb57-4539-b523-b8ae848a62c8\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/4fd167e89a34b4de95c9058341d491e0e53431ae52baf09da68564c9e515f4bf/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Dec 11 15:07:55 crc kubenswrapper[5050]: I1211 15:07:55.985643 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/729e0c28-cb57-4539-b523-b8ae848a62c8-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"729e0c28-cb57-4539-b523-b8ae848a62c8\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 15:07:55 crc kubenswrapper[5050]: I1211 15:07:55.987987 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/729e0c28-cb57-4539-b523-b8ae848a62c8-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"729e0c28-cb57-4539-b523-b8ae848a62c8\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 15:07:55 crc kubenswrapper[5050]: I1211 15:07:55.988124 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/729e0c28-cb57-4539-b523-b8ae848a62c8-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"729e0c28-cb57-4539-b523-b8ae848a62c8\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 15:07:55 crc kubenswrapper[5050]: I1211 15:07:55.991179 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/729e0c28-cb57-4539-b523-b8ae848a62c8-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"729e0c28-cb57-4539-b523-b8ae848a62c8\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 15:07:55 crc kubenswrapper[5050]: I1211 15:07:55.995834 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/729e0c28-cb57-4539-b523-b8ae848a62c8-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"729e0c28-cb57-4539-b523-b8ae848a62c8\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 15:07:55 crc kubenswrapper[5050]: I1211 15:07:55.998719 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w7c2k\" (UniqueName: \"kubernetes.io/projected/729e0c28-cb57-4539-b523-b8ae848a62c8-kube-api-access-w7c2k\") pod \"rabbitmq-cell1-server-0\" (UID: \"729e0c28-cb57-4539-b523-b8ae848a62c8\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 15:07:55 crc kubenswrapper[5050]: I1211 15:07:55.999594 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5d79f765b5-mr6dw" Dec 11 15:07:56 crc kubenswrapper[5050]: I1211 15:07:56.021138 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-22bf139d-2dbc-4fae-b0f0-8d7e33ad3a27\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-22bf139d-2dbc-4fae-b0f0-8d7e33ad3a27\") pod \"rabbitmq-cell1-server-0\" (UID: \"729e0c28-cb57-4539-b523-b8ae848a62c8\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 15:07:56 crc kubenswrapper[5050]: I1211 15:07:56.082930 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 11 15:07:56 crc kubenswrapper[5050]: I1211 15:07:56.184323 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/786d8f03-bb4c-47b7-a94a-b86c9bc07c6d-config\") pod \"786d8f03-bb4c-47b7-a94a-b86c9bc07c6d\" (UID: \"786d8f03-bb4c-47b7-a94a-b86c9bc07c6d\") " Dec 11 15:07:56 crc kubenswrapper[5050]: I1211 15:07:56.184425 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xgltz\" (UniqueName: \"kubernetes.io/projected/786d8f03-bb4c-47b7-a94a-b86c9bc07c6d-kube-api-access-xgltz\") pod \"786d8f03-bb4c-47b7-a94a-b86c9bc07c6d\" (UID: \"786d8f03-bb4c-47b7-a94a-b86c9bc07c6d\") " Dec 11 15:07:56 crc kubenswrapper[5050]: I1211 15:07:56.184521 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/786d8f03-bb4c-47b7-a94a-b86c9bc07c6d-dns-svc\") pod \"786d8f03-bb4c-47b7-a94a-b86c9bc07c6d\" (UID: \"786d8f03-bb4c-47b7-a94a-b86c9bc07c6d\") " Dec 11 15:07:56 crc kubenswrapper[5050]: I1211 15:07:56.190629 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/786d8f03-bb4c-47b7-a94a-b86c9bc07c6d-kube-api-access-xgltz" (OuterVolumeSpecName: "kube-api-access-xgltz") pod "786d8f03-bb4c-47b7-a94a-b86c9bc07c6d" (UID: "786d8f03-bb4c-47b7-a94a-b86c9bc07c6d"). InnerVolumeSpecName "kube-api-access-xgltz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 15:07:56 crc kubenswrapper[5050]: I1211 15:07:56.217377 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/786d8f03-bb4c-47b7-a94a-b86c9bc07c6d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "786d8f03-bb4c-47b7-a94a-b86c9bc07c6d" (UID: "786d8f03-bb4c-47b7-a94a-b86c9bc07c6d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 15:07:56 crc kubenswrapper[5050]: I1211 15:07:56.239356 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/786d8f03-bb4c-47b7-a94a-b86c9bc07c6d-config" (OuterVolumeSpecName: "config") pod "786d8f03-bb4c-47b7-a94a-b86c9bc07c6d" (UID: "786d8f03-bb4c-47b7-a94a-b86c9bc07c6d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 15:07:56 crc kubenswrapper[5050]: I1211 15:07:56.286572 5050 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/786d8f03-bb4c-47b7-a94a-b86c9bc07c6d-config\") on node \"crc\" DevicePath \"\"" Dec 11 15:07:56 crc kubenswrapper[5050]: I1211 15:07:56.286982 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xgltz\" (UniqueName: \"kubernetes.io/projected/786d8f03-bb4c-47b7-a94a-b86c9bc07c6d-kube-api-access-xgltz\") on node \"crc\" DevicePath \"\"" Dec 11 15:07:56 crc kubenswrapper[5050]: I1211 15:07:56.287000 5050 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/786d8f03-bb4c-47b7-a94a-b86c9bc07c6d-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 11 15:07:56 crc kubenswrapper[5050]: I1211 15:07:56.324021 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 11 15:07:56 crc kubenswrapper[5050]: W1211 15:07:56.328853 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod729e0c28_cb57_4539_b523_b8ae848a62c8.slice/crio-9e6480fbbead604c619a38b4d11a5feeb488782c7f45991ecf2c6679b32cf42c WatchSource:0}: Error finding container 9e6480fbbead604c619a38b4d11a5feeb488782c7f45991ecf2c6679b32cf42c: Status 404 returned error can't find the container with id 9e6480fbbead604c619a38b4d11a5feeb488782c7f45991ecf2c6679b32cf42c Dec 11 15:07:56 crc kubenswrapper[5050]: I1211 15:07:56.656320 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"729e0c28-cb57-4539-b523-b8ae848a62c8","Type":"ContainerStarted","Data":"9e6480fbbead604c619a38b4d11a5feeb488782c7f45991ecf2c6679b32cf42c"} Dec 11 15:07:56 crc kubenswrapper[5050]: I1211 15:07:56.658367 5050 generic.go:334] "Generic (PLEG): container finished" podID="786d8f03-bb4c-47b7-a94a-b86c9bc07c6d" containerID="094234d197fae2ac8e7fdc4a50ba306b4ebee05f718803e46ae3580eaf17f016" exitCode=0 Dec 11 15:07:56 crc kubenswrapper[5050]: I1211 15:07:56.658447 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d79f765b5-mr6dw" event={"ID":"786d8f03-bb4c-47b7-a94a-b86c9bc07c6d","Type":"ContainerDied","Data":"094234d197fae2ac8e7fdc4a50ba306b4ebee05f718803e46ae3580eaf17f016"} Dec 11 15:07:56 crc kubenswrapper[5050]: I1211 15:07:56.658472 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d79f765b5-mr6dw" event={"ID":"786d8f03-bb4c-47b7-a94a-b86c9bc07c6d","Type":"ContainerDied","Data":"8f7801357a868d096ff8f134560ab3d8255dc4b66d1cfcc58510f03ecffab581"} Dec 11 15:07:56 crc kubenswrapper[5050]: I1211 15:07:56.658497 5050 scope.go:117] "RemoveContainer" containerID="094234d197fae2ac8e7fdc4a50ba306b4ebee05f718803e46ae3580eaf17f016" Dec 11 15:07:56 crc kubenswrapper[5050]: I1211 15:07:56.658540 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5d79f765b5-mr6dw" Dec 11 15:07:56 crc kubenswrapper[5050]: I1211 15:07:56.664155 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f3f3a394-a9a0-4a5a-b8e9-f980a18d54fc","Type":"ContainerStarted","Data":"fce06d880bdfeee46a25747760acf06fa1f92c1eeef4d7c60c611726861d1079"} Dec 11 15:07:56 crc kubenswrapper[5050]: I1211 15:07:56.685060 5050 scope.go:117] "RemoveContainer" containerID="86332dd77e7fbe45f6dc8f8badf46d8e67b3e5e49ae5675d8958b530134e1997" Dec 11 15:07:56 crc kubenswrapper[5050]: I1211 15:07:56.706279 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5d79f765b5-mr6dw"] Dec 11 15:07:56 crc kubenswrapper[5050]: I1211 15:07:56.711262 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5d79f765b5-mr6dw"] Dec 11 15:07:56 crc kubenswrapper[5050]: I1211 15:07:56.736590 5050 scope.go:117] "RemoveContainer" containerID="094234d197fae2ac8e7fdc4a50ba306b4ebee05f718803e46ae3580eaf17f016" Dec 11 15:07:56 crc kubenswrapper[5050]: E1211 15:07:56.737136 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"094234d197fae2ac8e7fdc4a50ba306b4ebee05f718803e46ae3580eaf17f016\": container with ID starting with 094234d197fae2ac8e7fdc4a50ba306b4ebee05f718803e46ae3580eaf17f016 not found: ID does not exist" containerID="094234d197fae2ac8e7fdc4a50ba306b4ebee05f718803e46ae3580eaf17f016" Dec 11 15:07:56 crc kubenswrapper[5050]: I1211 15:07:56.737170 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"094234d197fae2ac8e7fdc4a50ba306b4ebee05f718803e46ae3580eaf17f016"} err="failed to get container status \"094234d197fae2ac8e7fdc4a50ba306b4ebee05f718803e46ae3580eaf17f016\": rpc error: code = NotFound desc = could not find container \"094234d197fae2ac8e7fdc4a50ba306b4ebee05f718803e46ae3580eaf17f016\": container with ID starting with 094234d197fae2ac8e7fdc4a50ba306b4ebee05f718803e46ae3580eaf17f016 not found: ID does not exist" Dec 11 15:07:56 crc kubenswrapper[5050]: I1211 15:07:56.737195 5050 scope.go:117] "RemoveContainer" containerID="86332dd77e7fbe45f6dc8f8badf46d8e67b3e5e49ae5675d8958b530134e1997" Dec 11 15:07:56 crc kubenswrapper[5050]: E1211 15:07:56.737560 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"86332dd77e7fbe45f6dc8f8badf46d8e67b3e5e49ae5675d8958b530134e1997\": container with ID starting with 86332dd77e7fbe45f6dc8f8badf46d8e67b3e5e49ae5675d8958b530134e1997 not found: ID does not exist" containerID="86332dd77e7fbe45f6dc8f8badf46d8e67b3e5e49ae5675d8958b530134e1997" Dec 11 15:07:56 crc kubenswrapper[5050]: I1211 15:07:56.737600 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"86332dd77e7fbe45f6dc8f8badf46d8e67b3e5e49ae5675d8958b530134e1997"} err="failed to get container status \"86332dd77e7fbe45f6dc8f8badf46d8e67b3e5e49ae5675d8958b530134e1997\": rpc error: code = NotFound desc = could not find container \"86332dd77e7fbe45f6dc8f8badf46d8e67b3e5e49ae5675d8958b530134e1997\": container with ID starting with 86332dd77e7fbe45f6dc8f8badf46d8e67b3e5e49ae5675d8958b530134e1997 not found: ID does not exist" Dec 11 15:07:57 crc kubenswrapper[5050]: I1211 15:07:57.565078 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="63be72e0-3588-4234-94d0-80c42d74aff6" 
path="/var/lib/kubelet/pods/63be72e0-3588-4234-94d0-80c42d74aff6/volumes" Dec 11 15:07:57 crc kubenswrapper[5050]: I1211 15:07:57.568130 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="786d8f03-bb4c-47b7-a94a-b86c9bc07c6d" path="/var/lib/kubelet/pods/786d8f03-bb4c-47b7-a94a-b86c9bc07c6d/volumes" Dec 11 15:07:57 crc kubenswrapper[5050]: I1211 15:07:57.679286 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f3f3a394-a9a0-4a5a-b8e9-f980a18d54fc","Type":"ContainerStarted","Data":"24940c7cd8174b110536fe5403caed6e694873b1051ca0179db70b156ef7255a"} Dec 11 15:07:58 crc kubenswrapper[5050]: I1211 15:07:58.697597 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"729e0c28-cb57-4539-b523-b8ae848a62c8","Type":"ContainerStarted","Data":"fea2709743c7e6f05784ed653edb58795d95b0152732d2dce5e75f4eaf635ad8"} Dec 11 15:08:02 crc kubenswrapper[5050]: I1211 15:08:02.546283 5050 scope.go:117] "RemoveContainer" containerID="b585bea20b0bb52e58642ee9f3e33b5b182a66a544d35daafc0202a53b35a810" Dec 11 15:08:02 crc kubenswrapper[5050]: E1211 15:08:02.546703 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 15:08:17 crc kubenswrapper[5050]: I1211 15:08:17.546836 5050 scope.go:117] "RemoveContainer" containerID="b585bea20b0bb52e58642ee9f3e33b5b182a66a544d35daafc0202a53b35a810" Dec 11 15:08:17 crc kubenswrapper[5050]: E1211 15:08:17.547773 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 15:08:28 crc kubenswrapper[5050]: I1211 15:08:28.546697 5050 scope.go:117] "RemoveContainer" containerID="b585bea20b0bb52e58642ee9f3e33b5b182a66a544d35daafc0202a53b35a810" Dec 11 15:08:28 crc kubenswrapper[5050]: E1211 15:08:28.547584 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 15:08:31 crc kubenswrapper[5050]: I1211 15:08:31.031845 5050 generic.go:334] "Generic (PLEG): container finished" podID="729e0c28-cb57-4539-b523-b8ae848a62c8" containerID="fea2709743c7e6f05784ed653edb58795d95b0152732d2dce5e75f4eaf635ad8" exitCode=0 Dec 11 15:08:31 crc kubenswrapper[5050]: I1211 15:08:31.031932 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"729e0c28-cb57-4539-b523-b8ae848a62c8","Type":"ContainerDied","Data":"fea2709743c7e6f05784ed653edb58795d95b0152732d2dce5e75f4eaf635ad8"} Dec 11 15:08:31 crc 
kubenswrapper[5050]: I1211 15:08:31.034098 5050 generic.go:334] "Generic (PLEG): container finished" podID="f3f3a394-a9a0-4a5a-b8e9-f980a18d54fc" containerID="24940c7cd8174b110536fe5403caed6e694873b1051ca0179db70b156ef7255a" exitCode=0 Dec 11 15:08:31 crc kubenswrapper[5050]: I1211 15:08:31.034134 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f3f3a394-a9a0-4a5a-b8e9-f980a18d54fc","Type":"ContainerDied","Data":"24940c7cd8174b110536fe5403caed6e694873b1051ca0179db70b156ef7255a"} Dec 11 15:08:32 crc kubenswrapper[5050]: I1211 15:08:32.045966 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f3f3a394-a9a0-4a5a-b8e9-f980a18d54fc","Type":"ContainerStarted","Data":"02b6ac1db2a931375cf3759c98faeff2083f15f8d4a19dfd4a23fd670cfcd78c"} Dec 11 15:08:32 crc kubenswrapper[5050]: I1211 15:08:32.046907 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Dec 11 15:08:32 crc kubenswrapper[5050]: I1211 15:08:32.049220 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"729e0c28-cb57-4539-b523-b8ae848a62c8","Type":"ContainerStarted","Data":"23da1849cc9392a5c95162d12757100ee09f9ecede7fb29d825bc7ca8dd80ed2"} Dec 11 15:08:32 crc kubenswrapper[5050]: I1211 15:08:32.049445 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Dec 11 15:08:32 crc kubenswrapper[5050]: I1211 15:08:32.078672 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=37.078633525 podStartE2EDuration="37.078633525s" podCreationTimestamp="2025-12-11 15:07:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 15:08:32.076308302 +0000 UTC m=+4802.920030908" watchObservedRunningTime="2025-12-11 15:08:32.078633525 +0000 UTC m=+4802.922356151" Dec 11 15:08:40 crc kubenswrapper[5050]: I1211 15:08:40.547163 5050 scope.go:117] "RemoveContainer" containerID="b585bea20b0bb52e58642ee9f3e33b5b182a66a544d35daafc0202a53b35a810" Dec 11 15:08:40 crc kubenswrapper[5050]: E1211 15:08:40.548298 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 15:08:45 crc kubenswrapper[5050]: I1211 15:08:45.373624 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Dec 11 15:08:45 crc kubenswrapper[5050]: I1211 15:08:45.407349 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=50.407323714 podStartE2EDuration="50.407323714s" podCreationTimestamp="2025-12-11 15:07:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 15:08:32.109781661 +0000 UTC m=+4802.953504247" watchObservedRunningTime="2025-12-11 15:08:45.407323714 +0000 UTC m=+4816.251046300" Dec 11 15:08:46 crc kubenswrapper[5050]: I1211 15:08:46.087292 5050 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Dec 11 15:08:51 crc kubenswrapper[5050]: I1211 15:08:51.545875 5050 scope.go:117] "RemoveContainer" containerID="b585bea20b0bb52e58642ee9f3e33b5b182a66a544d35daafc0202a53b35a810" Dec 11 15:08:51 crc kubenswrapper[5050]: E1211 15:08:51.546696 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 15:08:52 crc kubenswrapper[5050]: I1211 15:08:52.715926 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client-1-default"] Dec 11 15:08:52 crc kubenswrapper[5050]: E1211 15:08:52.716696 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="786d8f03-bb4c-47b7-a94a-b86c9bc07c6d" containerName="init" Dec 11 15:08:52 crc kubenswrapper[5050]: I1211 15:08:52.716709 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="786d8f03-bb4c-47b7-a94a-b86c9bc07c6d" containerName="init" Dec 11 15:08:52 crc kubenswrapper[5050]: E1211 15:08:52.716733 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="786d8f03-bb4c-47b7-a94a-b86c9bc07c6d" containerName="dnsmasq-dns" Dec 11 15:08:52 crc kubenswrapper[5050]: I1211 15:08:52.716740 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="786d8f03-bb4c-47b7-a94a-b86c9bc07c6d" containerName="dnsmasq-dns" Dec 11 15:08:52 crc kubenswrapper[5050]: I1211 15:08:52.716923 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="786d8f03-bb4c-47b7-a94a-b86c9bc07c6d" containerName="dnsmasq-dns" Dec 11 15:08:52 crc kubenswrapper[5050]: I1211 15:08:52.717480 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-1-default" Dec 11 15:08:52 crc kubenswrapper[5050]: I1211 15:08:52.719800 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-tmtdn" Dec 11 15:08:52 crc kubenswrapper[5050]: I1211 15:08:52.727076 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-1-default"] Dec 11 15:08:52 crc kubenswrapper[5050]: I1211 15:08:52.881701 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zgcr4\" (UniqueName: \"kubernetes.io/projected/a65dc9ec-4aab-4adc-a6ee-0d9f5c57e174-kube-api-access-zgcr4\") pod \"mariadb-client-1-default\" (UID: \"a65dc9ec-4aab-4adc-a6ee-0d9f5c57e174\") " pod="openstack/mariadb-client-1-default" Dec 11 15:08:52 crc kubenswrapper[5050]: I1211 15:08:52.983501 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zgcr4\" (UniqueName: \"kubernetes.io/projected/a65dc9ec-4aab-4adc-a6ee-0d9f5c57e174-kube-api-access-zgcr4\") pod \"mariadb-client-1-default\" (UID: \"a65dc9ec-4aab-4adc-a6ee-0d9f5c57e174\") " pod="openstack/mariadb-client-1-default" Dec 11 15:08:53 crc kubenswrapper[5050]: I1211 15:08:53.004995 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zgcr4\" (UniqueName: \"kubernetes.io/projected/a65dc9ec-4aab-4adc-a6ee-0d9f5c57e174-kube-api-access-zgcr4\") pod \"mariadb-client-1-default\" (UID: \"a65dc9ec-4aab-4adc-a6ee-0d9f5c57e174\") " pod="openstack/mariadb-client-1-default" Dec 11 15:08:53 crc kubenswrapper[5050]: I1211 15:08:53.044685 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-1-default" Dec 11 15:08:53 crc kubenswrapper[5050]: I1211 15:08:53.332686 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-1-default"] Dec 11 15:08:53 crc kubenswrapper[5050]: I1211 15:08:53.356853 5050 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 11 15:08:54 crc kubenswrapper[5050]: I1211 15:08:54.265068 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-1-default" event={"ID":"a65dc9ec-4aab-4adc-a6ee-0d9f5c57e174","Type":"ContainerStarted","Data":"220ff0eab15278d080c00b3a1b3ccffd5cf19aaeffdadaf43d3ed357d4c5b2aa"} Dec 11 15:08:55 crc kubenswrapper[5050]: I1211 15:08:55.273372 5050 generic.go:334] "Generic (PLEG): container finished" podID="a65dc9ec-4aab-4adc-a6ee-0d9f5c57e174" containerID="eb5058c059b7bfd37fdf9c68c8cf7bf5613de65886882788fdb0b7d3558a5f68" exitCode=0 Dec 11 15:08:55 crc kubenswrapper[5050]: I1211 15:08:55.273456 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-1-default" event={"ID":"a65dc9ec-4aab-4adc-a6ee-0d9f5c57e174","Type":"ContainerDied","Data":"eb5058c059b7bfd37fdf9c68c8cf7bf5613de65886882788fdb0b7d3558a5f68"} Dec 11 15:08:56 crc kubenswrapper[5050]: I1211 15:08:56.695732 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-1-default" Dec 11 15:08:56 crc kubenswrapper[5050]: I1211 15:08:56.723556 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client-1-default_a65dc9ec-4aab-4adc-a6ee-0d9f5c57e174/mariadb-client-1-default/0.log" Dec 11 15:08:56 crc kubenswrapper[5050]: I1211 15:08:56.751669 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client-1-default"] Dec 11 15:08:56 crc kubenswrapper[5050]: I1211 15:08:56.756927 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client-1-default"] Dec 11 15:08:56 crc kubenswrapper[5050]: I1211 15:08:56.864585 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgcr4\" (UniqueName: \"kubernetes.io/projected/a65dc9ec-4aab-4adc-a6ee-0d9f5c57e174-kube-api-access-zgcr4\") pod \"a65dc9ec-4aab-4adc-a6ee-0d9f5c57e174\" (UID: \"a65dc9ec-4aab-4adc-a6ee-0d9f5c57e174\") " Dec 11 15:08:56 crc kubenswrapper[5050]: I1211 15:08:56.871852 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a65dc9ec-4aab-4adc-a6ee-0d9f5c57e174-kube-api-access-zgcr4" (OuterVolumeSpecName: "kube-api-access-zgcr4") pod "a65dc9ec-4aab-4adc-a6ee-0d9f5c57e174" (UID: "a65dc9ec-4aab-4adc-a6ee-0d9f5c57e174"). InnerVolumeSpecName "kube-api-access-zgcr4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 15:08:56 crc kubenswrapper[5050]: I1211 15:08:56.967482 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgcr4\" (UniqueName: \"kubernetes.io/projected/a65dc9ec-4aab-4adc-a6ee-0d9f5c57e174-kube-api-access-zgcr4\") on node \"crc\" DevicePath \"\"" Dec 11 15:08:57 crc kubenswrapper[5050]: I1211 15:08:57.141562 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client-2-default"] Dec 11 15:08:57 crc kubenswrapper[5050]: E1211 15:08:57.142273 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a65dc9ec-4aab-4adc-a6ee-0d9f5c57e174" containerName="mariadb-client-1-default" Dec 11 15:08:57 crc kubenswrapper[5050]: I1211 15:08:57.142329 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="a65dc9ec-4aab-4adc-a6ee-0d9f5c57e174" containerName="mariadb-client-1-default" Dec 11 15:08:57 crc kubenswrapper[5050]: I1211 15:08:57.142776 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="a65dc9ec-4aab-4adc-a6ee-0d9f5c57e174" containerName="mariadb-client-1-default" Dec 11 15:08:57 crc kubenswrapper[5050]: I1211 15:08:57.143928 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-2-default" Dec 11 15:08:57 crc kubenswrapper[5050]: I1211 15:08:57.151668 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-2-default"] Dec 11 15:08:57 crc kubenswrapper[5050]: I1211 15:08:57.272569 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdnn4\" (UniqueName: \"kubernetes.io/projected/b27f2381-8ddb-441a-9cfa-af9c52f453ce-kube-api-access-fdnn4\") pod \"mariadb-client-2-default\" (UID: \"b27f2381-8ddb-441a-9cfa-af9c52f453ce\") " pod="openstack/mariadb-client-2-default" Dec 11 15:08:57 crc kubenswrapper[5050]: I1211 15:08:57.294371 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="220ff0eab15278d080c00b3a1b3ccffd5cf19aaeffdadaf43d3ed357d4c5b2aa" Dec 11 15:08:57 crc kubenswrapper[5050]: I1211 15:08:57.294428 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-1-default" Dec 11 15:08:57 crc kubenswrapper[5050]: I1211 15:08:57.373980 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fdnn4\" (UniqueName: \"kubernetes.io/projected/b27f2381-8ddb-441a-9cfa-af9c52f453ce-kube-api-access-fdnn4\") pod \"mariadb-client-2-default\" (UID: \"b27f2381-8ddb-441a-9cfa-af9c52f453ce\") " pod="openstack/mariadb-client-2-default" Dec 11 15:08:57 crc kubenswrapper[5050]: I1211 15:08:57.393921 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fdnn4\" (UniqueName: \"kubernetes.io/projected/b27f2381-8ddb-441a-9cfa-af9c52f453ce-kube-api-access-fdnn4\") pod \"mariadb-client-2-default\" (UID: \"b27f2381-8ddb-441a-9cfa-af9c52f453ce\") " pod="openstack/mariadb-client-2-default" Dec 11 15:08:57 crc kubenswrapper[5050]: I1211 15:08:57.468553 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-2-default" Dec 11 15:08:57 crc kubenswrapper[5050]: I1211 15:08:57.567435 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a65dc9ec-4aab-4adc-a6ee-0d9f5c57e174" path="/var/lib/kubelet/pods/a65dc9ec-4aab-4adc-a6ee-0d9f5c57e174/volumes" Dec 11 15:08:57 crc kubenswrapper[5050]: I1211 15:08:57.763920 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-2-default"] Dec 11 15:08:57 crc kubenswrapper[5050]: W1211 15:08:57.769157 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb27f2381_8ddb_441a_9cfa_af9c52f453ce.slice/crio-debd25ffb1ea01526d52ecdf2b986d508016f4fdfa97bcc2f6529b95f5e7b3c7 WatchSource:0}: Error finding container debd25ffb1ea01526d52ecdf2b986d508016f4fdfa97bcc2f6529b95f5e7b3c7: Status 404 returned error can't find the container with id debd25ffb1ea01526d52ecdf2b986d508016f4fdfa97bcc2f6529b95f5e7b3c7 Dec 11 15:08:58 crc kubenswrapper[5050]: I1211 15:08:58.318871 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-2-default" event={"ID":"b27f2381-8ddb-441a-9cfa-af9c52f453ce","Type":"ContainerStarted","Data":"d8acce0af52a04fb86554f82cd8dc3d51d4d65484f120a90bf5a82d9097095ef"} Dec 11 15:08:58 crc kubenswrapper[5050]: I1211 15:08:58.319383 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-2-default" event={"ID":"b27f2381-8ddb-441a-9cfa-af9c52f453ce","Type":"ContainerStarted","Data":"debd25ffb1ea01526d52ecdf2b986d508016f4fdfa97bcc2f6529b95f5e7b3c7"} Dec 11 15:08:58 crc kubenswrapper[5050]: I1211 15:08:58.342927 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mariadb-client-2-default" podStartSLOduration=1.342900546 podStartE2EDuration="1.342900546s" podCreationTimestamp="2025-12-11 15:08:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 15:08:58.338713854 +0000 UTC m=+4829.182436440" watchObservedRunningTime="2025-12-11 15:08:58.342900546 +0000 UTC m=+4829.186623142" Dec 11 15:08:59 crc kubenswrapper[5050]: I1211 15:08:59.327353 5050 generic.go:334] "Generic (PLEG): container finished" podID="b27f2381-8ddb-441a-9cfa-af9c52f453ce" containerID="d8acce0af52a04fb86554f82cd8dc3d51d4d65484f120a90bf5a82d9097095ef" exitCode=1 Dec 11 15:08:59 crc kubenswrapper[5050]: I1211 15:08:59.327405 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-2-default" event={"ID":"b27f2381-8ddb-441a-9cfa-af9c52f453ce","Type":"ContainerDied","Data":"d8acce0af52a04fb86554f82cd8dc3d51d4d65484f120a90bf5a82d9097095ef"} Dec 11 15:09:01 crc kubenswrapper[5050]: I1211 15:09:00.704996 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-2-default" Dec 11 15:09:01 crc kubenswrapper[5050]: I1211 15:09:00.757107 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client-2-default"] Dec 11 15:09:01 crc kubenswrapper[5050]: I1211 15:09:00.762962 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client-2-default"] Dec 11 15:09:01 crc kubenswrapper[5050]: I1211 15:09:00.837288 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fdnn4\" (UniqueName: \"kubernetes.io/projected/b27f2381-8ddb-441a-9cfa-af9c52f453ce-kube-api-access-fdnn4\") pod \"b27f2381-8ddb-441a-9cfa-af9c52f453ce\" (UID: \"b27f2381-8ddb-441a-9cfa-af9c52f453ce\") " Dec 11 15:09:01 crc kubenswrapper[5050]: I1211 15:09:00.871307 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b27f2381-8ddb-441a-9cfa-af9c52f453ce-kube-api-access-fdnn4" (OuterVolumeSpecName: "kube-api-access-fdnn4") pod "b27f2381-8ddb-441a-9cfa-af9c52f453ce" (UID: "b27f2381-8ddb-441a-9cfa-af9c52f453ce"). InnerVolumeSpecName "kube-api-access-fdnn4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 15:09:01 crc kubenswrapper[5050]: I1211 15:09:00.939176 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fdnn4\" (UniqueName: \"kubernetes.io/projected/b27f2381-8ddb-441a-9cfa-af9c52f453ce-kube-api-access-fdnn4\") on node \"crc\" DevicePath \"\"" Dec 11 15:09:01 crc kubenswrapper[5050]: I1211 15:09:01.155688 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client-1"] Dec 11 15:09:01 crc kubenswrapper[5050]: E1211 15:09:01.156363 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b27f2381-8ddb-441a-9cfa-af9c52f453ce" containerName="mariadb-client-2-default" Dec 11 15:09:01 crc kubenswrapper[5050]: I1211 15:09:01.156377 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="b27f2381-8ddb-441a-9cfa-af9c52f453ce" containerName="mariadb-client-2-default" Dec 11 15:09:01 crc kubenswrapper[5050]: I1211 15:09:01.156559 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="b27f2381-8ddb-441a-9cfa-af9c52f453ce" containerName="mariadb-client-2-default" Dec 11 15:09:01 crc kubenswrapper[5050]: I1211 15:09:01.157223 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-1" Dec 11 15:09:01 crc kubenswrapper[5050]: I1211 15:09:01.176524 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-1"] Dec 11 15:09:01 crc kubenswrapper[5050]: I1211 15:09:01.245666 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5l9f\" (UniqueName: \"kubernetes.io/projected/81ff74e5-c860-43de-aa7d-1b3eba024555-kube-api-access-w5l9f\") pod \"mariadb-client-1\" (UID: \"81ff74e5-c860-43de-aa7d-1b3eba024555\") " pod="openstack/mariadb-client-1" Dec 11 15:09:01 crc kubenswrapper[5050]: I1211 15:09:01.344848 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="debd25ffb1ea01526d52ecdf2b986d508016f4fdfa97bcc2f6529b95f5e7b3c7" Dec 11 15:09:01 crc kubenswrapper[5050]: I1211 15:09:01.344962 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-2-default" Dec 11 15:09:01 crc kubenswrapper[5050]: I1211 15:09:01.346569 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w5l9f\" (UniqueName: \"kubernetes.io/projected/81ff74e5-c860-43de-aa7d-1b3eba024555-kube-api-access-w5l9f\") pod \"mariadb-client-1\" (UID: \"81ff74e5-c860-43de-aa7d-1b3eba024555\") " pod="openstack/mariadb-client-1" Dec 11 15:09:01 crc kubenswrapper[5050]: I1211 15:09:01.369605 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w5l9f\" (UniqueName: \"kubernetes.io/projected/81ff74e5-c860-43de-aa7d-1b3eba024555-kube-api-access-w5l9f\") pod \"mariadb-client-1\" (UID: \"81ff74e5-c860-43de-aa7d-1b3eba024555\") " pod="openstack/mariadb-client-1" Dec 11 15:09:01 crc kubenswrapper[5050]: I1211 15:09:01.484997 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-1" Dec 11 15:09:01 crc kubenswrapper[5050]: I1211 15:09:01.563475 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b27f2381-8ddb-441a-9cfa-af9c52f453ce" path="/var/lib/kubelet/pods/b27f2381-8ddb-441a-9cfa-af9c52f453ce/volumes" Dec 11 15:09:01 crc kubenswrapper[5050]: I1211 15:09:01.830927 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-1"] Dec 11 15:09:02 crc kubenswrapper[5050]: W1211 15:09:02.274671 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod81ff74e5_c860_43de_aa7d_1b3eba024555.slice/crio-cd4b3c77bc13db78368ff9a4029d59cafa708a13690756362b1d0b924782b267 WatchSource:0}: Error finding container cd4b3c77bc13db78368ff9a4029d59cafa708a13690756362b1d0b924782b267: Status 404 returned error can't find the container with id cd4b3c77bc13db78368ff9a4029d59cafa708a13690756362b1d0b924782b267 Dec 11 15:09:02 crc kubenswrapper[5050]: I1211 15:09:02.353646 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-1" event={"ID":"81ff74e5-c860-43de-aa7d-1b3eba024555","Type":"ContainerStarted","Data":"cd4b3c77bc13db78368ff9a4029d59cafa708a13690756362b1d0b924782b267"} Dec 11 15:09:03 crc kubenswrapper[5050]: I1211 15:09:03.377766 5050 generic.go:334] "Generic (PLEG): container finished" podID="81ff74e5-c860-43de-aa7d-1b3eba024555" containerID="230f8db9b2d80b148f55ccff672575c069cd97ebc38e2a24c3dd90fbf687bb80" exitCode=0 Dec 11 15:09:03 crc kubenswrapper[5050]: I1211 15:09:03.377832 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-1" event={"ID":"81ff74e5-c860-43de-aa7d-1b3eba024555","Type":"ContainerDied","Data":"230f8db9b2d80b148f55ccff672575c069cd97ebc38e2a24c3dd90fbf687bb80"} Dec 11 15:09:04 crc kubenswrapper[5050]: I1211 15:09:04.546656 5050 scope.go:117] "RemoveContainer" containerID="b585bea20b0bb52e58642ee9f3e33b5b182a66a544d35daafc0202a53b35a810" Dec 11 15:09:04 crc kubenswrapper[5050]: E1211 15:09:04.547199 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 15:09:04 crc kubenswrapper[5050]: I1211 15:09:04.809087 5050 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-1" Dec 11 15:09:04 crc kubenswrapper[5050]: I1211 15:09:04.828905 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client-1_81ff74e5-c860-43de-aa7d-1b3eba024555/mariadb-client-1/0.log" Dec 11 15:09:04 crc kubenswrapper[5050]: I1211 15:09:04.865812 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client-1"] Dec 11 15:09:04 crc kubenswrapper[5050]: I1211 15:09:04.874790 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client-1"] Dec 11 15:09:04 crc kubenswrapper[5050]: I1211 15:09:04.901957 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w5l9f\" (UniqueName: \"kubernetes.io/projected/81ff74e5-c860-43de-aa7d-1b3eba024555-kube-api-access-w5l9f\") pod \"81ff74e5-c860-43de-aa7d-1b3eba024555\" (UID: \"81ff74e5-c860-43de-aa7d-1b3eba024555\") " Dec 11 15:09:04 crc kubenswrapper[5050]: I1211 15:09:04.911527 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81ff74e5-c860-43de-aa7d-1b3eba024555-kube-api-access-w5l9f" (OuterVolumeSpecName: "kube-api-access-w5l9f") pod "81ff74e5-c860-43de-aa7d-1b3eba024555" (UID: "81ff74e5-c860-43de-aa7d-1b3eba024555"). InnerVolumeSpecName "kube-api-access-w5l9f". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 15:09:05 crc kubenswrapper[5050]: I1211 15:09:05.006592 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w5l9f\" (UniqueName: \"kubernetes.io/projected/81ff74e5-c860-43de-aa7d-1b3eba024555-kube-api-access-w5l9f\") on node \"crc\" DevicePath \"\"" Dec 11 15:09:05 crc kubenswrapper[5050]: I1211 15:09:05.273907 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client-4-default"] Dec 11 15:09:05 crc kubenswrapper[5050]: E1211 15:09:05.274624 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81ff74e5-c860-43de-aa7d-1b3eba024555" containerName="mariadb-client-1" Dec 11 15:09:05 crc kubenswrapper[5050]: I1211 15:09:05.274643 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="81ff74e5-c860-43de-aa7d-1b3eba024555" containerName="mariadb-client-1" Dec 11 15:09:05 crc kubenswrapper[5050]: I1211 15:09:05.274785 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="81ff74e5-c860-43de-aa7d-1b3eba024555" containerName="mariadb-client-1" Dec 11 15:09:05 crc kubenswrapper[5050]: I1211 15:09:05.275409 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-4-default" Dec 11 15:09:05 crc kubenswrapper[5050]: I1211 15:09:05.284779 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-4-default"] Dec 11 15:09:05 crc kubenswrapper[5050]: I1211 15:09:05.400458 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cd4b3c77bc13db78368ff9a4029d59cafa708a13690756362b1d0b924782b267" Dec 11 15:09:05 crc kubenswrapper[5050]: I1211 15:09:05.400509 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-1" Dec 11 15:09:05 crc kubenswrapper[5050]: I1211 15:09:05.414737 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7pdmm\" (UniqueName: \"kubernetes.io/projected/f61c256d-502e-4470-8e62-0ff9a25d3cff-kube-api-access-7pdmm\") pod \"mariadb-client-4-default\" (UID: \"f61c256d-502e-4470-8e62-0ff9a25d3cff\") " pod="openstack/mariadb-client-4-default" Dec 11 15:09:05 crc kubenswrapper[5050]: I1211 15:09:05.516819 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7pdmm\" (UniqueName: \"kubernetes.io/projected/f61c256d-502e-4470-8e62-0ff9a25d3cff-kube-api-access-7pdmm\") pod \"mariadb-client-4-default\" (UID: \"f61c256d-502e-4470-8e62-0ff9a25d3cff\") " pod="openstack/mariadb-client-4-default" Dec 11 15:09:05 crc kubenswrapper[5050]: I1211 15:09:05.552659 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7pdmm\" (UniqueName: \"kubernetes.io/projected/f61c256d-502e-4470-8e62-0ff9a25d3cff-kube-api-access-7pdmm\") pod \"mariadb-client-4-default\" (UID: \"f61c256d-502e-4470-8e62-0ff9a25d3cff\") " pod="openstack/mariadb-client-4-default" Dec 11 15:09:05 crc kubenswrapper[5050]: I1211 15:09:05.559370 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81ff74e5-c860-43de-aa7d-1b3eba024555" path="/var/lib/kubelet/pods/81ff74e5-c860-43de-aa7d-1b3eba024555/volumes" Dec 11 15:09:05 crc kubenswrapper[5050]: I1211 15:09:05.610595 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-4-default" Dec 11 15:09:06 crc kubenswrapper[5050]: I1211 15:09:06.005972 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-4-default"] Dec 11 15:09:06 crc kubenswrapper[5050]: I1211 15:09:06.409519 5050 generic.go:334] "Generic (PLEG): container finished" podID="f61c256d-502e-4470-8e62-0ff9a25d3cff" containerID="daaaabe96903ca5d2e52e3ec5a014d4bd2558efbcebf041c6f9821392c4c4833" exitCode=0 Dec 11 15:09:06 crc kubenswrapper[5050]: I1211 15:09:06.409585 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-4-default" event={"ID":"f61c256d-502e-4470-8e62-0ff9a25d3cff","Type":"ContainerDied","Data":"daaaabe96903ca5d2e52e3ec5a014d4bd2558efbcebf041c6f9821392c4c4833"} Dec 11 15:09:06 crc kubenswrapper[5050]: I1211 15:09:06.410356 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-4-default" event={"ID":"f61c256d-502e-4470-8e62-0ff9a25d3cff","Type":"ContainerStarted","Data":"53dfa6896ee8e1c24db313dee72083735625c14393ec3c1424765842c75bd8cc"} Dec 11 15:09:07 crc kubenswrapper[5050]: I1211 15:09:07.822733 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-4-default" Dec 11 15:09:07 crc kubenswrapper[5050]: I1211 15:09:07.842058 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client-4-default_f61c256d-502e-4470-8e62-0ff9a25d3cff/mariadb-client-4-default/0.log" Dec 11 15:09:07 crc kubenswrapper[5050]: I1211 15:09:07.871548 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client-4-default"] Dec 11 15:09:07 crc kubenswrapper[5050]: I1211 15:09:07.877935 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client-4-default"] Dec 11 15:09:07 crc kubenswrapper[5050]: I1211 15:09:07.966236 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7pdmm\" (UniqueName: \"kubernetes.io/projected/f61c256d-502e-4470-8e62-0ff9a25d3cff-kube-api-access-7pdmm\") pod \"f61c256d-502e-4470-8e62-0ff9a25d3cff\" (UID: \"f61c256d-502e-4470-8e62-0ff9a25d3cff\") " Dec 11 15:09:07 crc kubenswrapper[5050]: I1211 15:09:07.972221 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f61c256d-502e-4470-8e62-0ff9a25d3cff-kube-api-access-7pdmm" (OuterVolumeSpecName: "kube-api-access-7pdmm") pod "f61c256d-502e-4470-8e62-0ff9a25d3cff" (UID: "f61c256d-502e-4470-8e62-0ff9a25d3cff"). InnerVolumeSpecName "kube-api-access-7pdmm". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 15:09:08 crc kubenswrapper[5050]: I1211 15:09:08.071509 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7pdmm\" (UniqueName: \"kubernetes.io/projected/f61c256d-502e-4470-8e62-0ff9a25d3cff-kube-api-access-7pdmm\") on node \"crc\" DevicePath \"\"" Dec 11 15:09:08 crc kubenswrapper[5050]: I1211 15:09:08.431398 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="53dfa6896ee8e1c24db313dee72083735625c14393ec3c1424765842c75bd8cc" Dec 11 15:09:08 crc kubenswrapper[5050]: I1211 15:09:08.431457 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-4-default" Dec 11 15:09:09 crc kubenswrapper[5050]: I1211 15:09:09.561625 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f61c256d-502e-4470-8e62-0ff9a25d3cff" path="/var/lib/kubelet/pods/f61c256d-502e-4470-8e62-0ff9a25d3cff/volumes" Dec 11 15:09:12 crc kubenswrapper[5050]: I1211 15:09:12.222840 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client-5-default"] Dec 11 15:09:12 crc kubenswrapper[5050]: E1211 15:09:12.223890 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f61c256d-502e-4470-8e62-0ff9a25d3cff" containerName="mariadb-client-4-default" Dec 11 15:09:12 crc kubenswrapper[5050]: I1211 15:09:12.223904 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="f61c256d-502e-4470-8e62-0ff9a25d3cff" containerName="mariadb-client-4-default" Dec 11 15:09:12 crc kubenswrapper[5050]: I1211 15:09:12.224794 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="f61c256d-502e-4470-8e62-0ff9a25d3cff" containerName="mariadb-client-4-default" Dec 11 15:09:12 crc kubenswrapper[5050]: I1211 15:09:12.225497 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-5-default" Dec 11 15:09:12 crc kubenswrapper[5050]: I1211 15:09:12.227100 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-tmtdn" Dec 11 15:09:12 crc kubenswrapper[5050]: I1211 15:09:12.230872 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-5-default"] Dec 11 15:09:12 crc kubenswrapper[5050]: I1211 15:09:12.360612 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrpnc\" (UniqueName: \"kubernetes.io/projected/e9488862-529a-465d-8649-27bda4678402-kube-api-access-vrpnc\") pod \"mariadb-client-5-default\" (UID: \"e9488862-529a-465d-8649-27bda4678402\") " pod="openstack/mariadb-client-5-default" Dec 11 15:09:12 crc kubenswrapper[5050]: I1211 15:09:12.463302 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vrpnc\" (UniqueName: \"kubernetes.io/projected/e9488862-529a-465d-8649-27bda4678402-kube-api-access-vrpnc\") pod \"mariadb-client-5-default\" (UID: \"e9488862-529a-465d-8649-27bda4678402\") " pod="openstack/mariadb-client-5-default" Dec 11 15:09:12 crc kubenswrapper[5050]: I1211 15:09:12.971772 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vrpnc\" (UniqueName: \"kubernetes.io/projected/e9488862-529a-465d-8649-27bda4678402-kube-api-access-vrpnc\") pod \"mariadb-client-5-default\" (UID: \"e9488862-529a-465d-8649-27bda4678402\") " pod="openstack/mariadb-client-5-default" Dec 11 15:09:13 crc kubenswrapper[5050]: I1211 15:09:13.144478 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-5-default" Dec 11 15:09:13 crc kubenswrapper[5050]: I1211 15:09:13.454301 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-5-default"] Dec 11 15:09:13 crc kubenswrapper[5050]: W1211 15:09:13.455967 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode9488862_529a_465d_8649_27bda4678402.slice/crio-da41625c9689d6d58b443c2e63cdcbec839077b2ec411e9330ab4284c2e22048 WatchSource:0}: Error finding container da41625c9689d6d58b443c2e63cdcbec839077b2ec411e9330ab4284c2e22048: Status 404 returned error can't find the container with id da41625c9689d6d58b443c2e63cdcbec839077b2ec411e9330ab4284c2e22048 Dec 11 15:09:13 crc kubenswrapper[5050]: I1211 15:09:13.485852 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-5-default" event={"ID":"e9488862-529a-465d-8649-27bda4678402","Type":"ContainerStarted","Data":"da41625c9689d6d58b443c2e63cdcbec839077b2ec411e9330ab4284c2e22048"} Dec 11 15:09:14 crc kubenswrapper[5050]: I1211 15:09:14.493631 5050 generic.go:334] "Generic (PLEG): container finished" podID="e9488862-529a-465d-8649-27bda4678402" containerID="146c4c22e9bf52ba1df3f025e911c4d0d7af333fad692ba7bfa0ba97256e8caa" exitCode=0 Dec 11 15:09:14 crc kubenswrapper[5050]: I1211 15:09:14.493681 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-5-default" event={"ID":"e9488862-529a-465d-8649-27bda4678402","Type":"ContainerDied","Data":"146c4c22e9bf52ba1df3f025e911c4d0d7af333fad692ba7bfa0ba97256e8caa"} Dec 11 15:09:15 crc kubenswrapper[5050]: I1211 15:09:15.966251 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-5-default" Dec 11 15:09:15 crc kubenswrapper[5050]: I1211 15:09:15.989985 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client-5-default_e9488862-529a-465d-8649-27bda4678402/mariadb-client-5-default/0.log" Dec 11 15:09:16 crc kubenswrapper[5050]: I1211 15:09:16.034194 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client-5-default"] Dec 11 15:09:16 crc kubenswrapper[5050]: I1211 15:09:16.043547 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client-5-default"] Dec 11 15:09:16 crc kubenswrapper[5050]: I1211 15:09:16.128260 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vrpnc\" (UniqueName: \"kubernetes.io/projected/e9488862-529a-465d-8649-27bda4678402-kube-api-access-vrpnc\") pod \"e9488862-529a-465d-8649-27bda4678402\" (UID: \"e9488862-529a-465d-8649-27bda4678402\") " Dec 11 15:09:16 crc kubenswrapper[5050]: I1211 15:09:16.138342 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9488862-529a-465d-8649-27bda4678402-kube-api-access-vrpnc" (OuterVolumeSpecName: "kube-api-access-vrpnc") pod "e9488862-529a-465d-8649-27bda4678402" (UID: "e9488862-529a-465d-8649-27bda4678402"). InnerVolumeSpecName "kube-api-access-vrpnc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 15:09:16 crc kubenswrapper[5050]: I1211 15:09:16.177403 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client-6-default"] Dec 11 15:09:16 crc kubenswrapper[5050]: E1211 15:09:16.178356 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9488862-529a-465d-8649-27bda4678402" containerName="mariadb-client-5-default" Dec 11 15:09:16 crc kubenswrapper[5050]: I1211 15:09:16.178404 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9488862-529a-465d-8649-27bda4678402" containerName="mariadb-client-5-default" Dec 11 15:09:16 crc kubenswrapper[5050]: I1211 15:09:16.178788 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9488862-529a-465d-8649-27bda4678402" containerName="mariadb-client-5-default" Dec 11 15:09:16 crc kubenswrapper[5050]: I1211 15:09:16.180139 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-6-default" Dec 11 15:09:16 crc kubenswrapper[5050]: I1211 15:09:16.187787 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-6-default"] Dec 11 15:09:16 crc kubenswrapper[5050]: I1211 15:09:16.230443 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vrpnc\" (UniqueName: \"kubernetes.io/projected/e9488862-529a-465d-8649-27bda4678402-kube-api-access-vrpnc\") on node \"crc\" DevicePath \"\"" Dec 11 15:09:16 crc kubenswrapper[5050]: I1211 15:09:16.331976 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qtdl9\" (UniqueName: \"kubernetes.io/projected/a762550a-0293-41c5-8dbd-8272251471c5-kube-api-access-qtdl9\") pod \"mariadb-client-6-default\" (UID: \"a762550a-0293-41c5-8dbd-8272251471c5\") " pod="openstack/mariadb-client-6-default" Dec 11 15:09:16 crc kubenswrapper[5050]: I1211 15:09:16.433761 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qtdl9\" (UniqueName: \"kubernetes.io/projected/a762550a-0293-41c5-8dbd-8272251471c5-kube-api-access-qtdl9\") pod \"mariadb-client-6-default\" (UID: \"a762550a-0293-41c5-8dbd-8272251471c5\") " pod="openstack/mariadb-client-6-default" Dec 11 15:09:16 crc kubenswrapper[5050]: I1211 15:09:16.455968 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qtdl9\" (UniqueName: \"kubernetes.io/projected/a762550a-0293-41c5-8dbd-8272251471c5-kube-api-access-qtdl9\") pod \"mariadb-client-6-default\" (UID: \"a762550a-0293-41c5-8dbd-8272251471c5\") " pod="openstack/mariadb-client-6-default" Dec 11 15:09:16 crc kubenswrapper[5050]: I1211 15:09:16.503471 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-6-default" Dec 11 15:09:16 crc kubenswrapper[5050]: I1211 15:09:16.518074 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="da41625c9689d6d58b443c2e63cdcbec839077b2ec411e9330ab4284c2e22048" Dec 11 15:09:16 crc kubenswrapper[5050]: I1211 15:09:16.518115 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-5-default" Dec 11 15:09:16 crc kubenswrapper[5050]: I1211 15:09:16.878968 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-6-default"] Dec 11 15:09:16 crc kubenswrapper[5050]: W1211 15:09:16.886409 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda762550a_0293_41c5_8dbd_8272251471c5.slice/crio-0d1df49bf7d99b512e22e070c792d22f28ee506448b931c65a4c93dd1edfcf78 WatchSource:0}: Error finding container 0d1df49bf7d99b512e22e070c792d22f28ee506448b931c65a4c93dd1edfcf78: Status 404 returned error can't find the container with id 0d1df49bf7d99b512e22e070c792d22f28ee506448b931c65a4c93dd1edfcf78 Dec 11 15:09:17 crc kubenswrapper[5050]: I1211 15:09:17.528687 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-6-default" event={"ID":"a762550a-0293-41c5-8dbd-8272251471c5","Type":"ContainerStarted","Data":"26be1ce2ff8de8d8a09aece0d3f92738d24a04d8a2e059d8c2ef7767f78a5c5c"} Dec 11 15:09:17 crc kubenswrapper[5050]: I1211 15:09:17.529185 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-6-default" event={"ID":"a762550a-0293-41c5-8dbd-8272251471c5","Type":"ContainerStarted","Data":"0d1df49bf7d99b512e22e070c792d22f28ee506448b931c65a4c93dd1edfcf78"} Dec 11 15:09:17 crc kubenswrapper[5050]: I1211 15:09:17.566077 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mariadb-client-6-default" podStartSLOduration=1.566050239 podStartE2EDuration="1.566050239s" podCreationTimestamp="2025-12-11 15:09:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 15:09:17.553656227 +0000 UTC m=+4848.397378853" watchObservedRunningTime="2025-12-11 15:09:17.566050239 +0000 UTC m=+4848.409772875" Dec 11 15:09:17 crc kubenswrapper[5050]: I1211 15:09:17.568529 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e9488862-529a-465d-8649-27bda4678402" path="/var/lib/kubelet/pods/e9488862-529a-465d-8649-27bda4678402/volumes" Dec 11 15:09:17 crc kubenswrapper[5050]: I1211 15:09:17.625077 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client-6-default_a762550a-0293-41c5-8dbd-8272251471c5/mariadb-client-6-default/0.log" Dec 11 15:09:18 crc kubenswrapper[5050]: I1211 15:09:18.540558 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-6-default" event={"ID":"a762550a-0293-41c5-8dbd-8272251471c5","Type":"ContainerDied","Data":"26be1ce2ff8de8d8a09aece0d3f92738d24a04d8a2e059d8c2ef7767f78a5c5c"} Dec 11 15:09:18 crc kubenswrapper[5050]: I1211 15:09:18.541366 5050 generic.go:334] "Generic (PLEG): container finished" podID="a762550a-0293-41c5-8dbd-8272251471c5" containerID="26be1ce2ff8de8d8a09aece0d3f92738d24a04d8a2e059d8c2ef7767f78a5c5c" exitCode=1 Dec 11 15:09:18 crc kubenswrapper[5050]: I1211 15:09:18.546953 5050 scope.go:117] "RemoveContainer" containerID="b585bea20b0bb52e58642ee9f3e33b5b182a66a544d35daafc0202a53b35a810" Dec 11 15:09:18 crc kubenswrapper[5050]: E1211 15:09:18.547993 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 15:09:20 crc kubenswrapper[5050]: I1211 15:09:20.002131 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-6-default" Dec 11 15:09:20 crc kubenswrapper[5050]: I1211 15:09:20.043540 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client-6-default"] Dec 11 15:09:20 crc kubenswrapper[5050]: I1211 15:09:20.054789 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client-6-default"] Dec 11 15:09:20 crc kubenswrapper[5050]: I1211 15:09:20.110612 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qtdl9\" (UniqueName: \"kubernetes.io/projected/a762550a-0293-41c5-8dbd-8272251471c5-kube-api-access-qtdl9\") pod \"a762550a-0293-41c5-8dbd-8272251471c5\" (UID: \"a762550a-0293-41c5-8dbd-8272251471c5\") " Dec 11 15:09:20 crc kubenswrapper[5050]: I1211 15:09:20.116712 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a762550a-0293-41c5-8dbd-8272251471c5-kube-api-access-qtdl9" (OuterVolumeSpecName: "kube-api-access-qtdl9") pod "a762550a-0293-41c5-8dbd-8272251471c5" (UID: "a762550a-0293-41c5-8dbd-8272251471c5"). InnerVolumeSpecName "kube-api-access-qtdl9". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 15:09:20 crc kubenswrapper[5050]: I1211 15:09:20.213139 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qtdl9\" (UniqueName: \"kubernetes.io/projected/a762550a-0293-41c5-8dbd-8272251471c5-kube-api-access-qtdl9\") on node \"crc\" DevicePath \"\"" Dec 11 15:09:20 crc kubenswrapper[5050]: I1211 15:09:20.220231 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client-7-default"] Dec 11 15:09:20 crc kubenswrapper[5050]: E1211 15:09:20.220630 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a762550a-0293-41c5-8dbd-8272251471c5" containerName="mariadb-client-6-default" Dec 11 15:09:20 crc kubenswrapper[5050]: I1211 15:09:20.220655 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="a762550a-0293-41c5-8dbd-8272251471c5" containerName="mariadb-client-6-default" Dec 11 15:09:20 crc kubenswrapper[5050]: I1211 15:09:20.221220 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="a762550a-0293-41c5-8dbd-8272251471c5" containerName="mariadb-client-6-default" Dec 11 15:09:20 crc kubenswrapper[5050]: I1211 15:09:20.225821 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-7-default" Dec 11 15:09:20 crc kubenswrapper[5050]: I1211 15:09:20.230097 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-7-default"] Dec 11 15:09:20 crc kubenswrapper[5050]: I1211 15:09:20.315963 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6c74\" (UniqueName: \"kubernetes.io/projected/25abc985-ae4c-48e9-bfcb-d53eec8d02c0-kube-api-access-d6c74\") pod \"mariadb-client-7-default\" (UID: \"25abc985-ae4c-48e9-bfcb-d53eec8d02c0\") " pod="openstack/mariadb-client-7-default" Dec 11 15:09:20 crc kubenswrapper[5050]: I1211 15:09:20.418376 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d6c74\" (UniqueName: \"kubernetes.io/projected/25abc985-ae4c-48e9-bfcb-d53eec8d02c0-kube-api-access-d6c74\") pod \"mariadb-client-7-default\" (UID: \"25abc985-ae4c-48e9-bfcb-d53eec8d02c0\") " pod="openstack/mariadb-client-7-default" Dec 11 15:09:20 crc kubenswrapper[5050]: I1211 15:09:20.455160 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d6c74\" (UniqueName: \"kubernetes.io/projected/25abc985-ae4c-48e9-bfcb-d53eec8d02c0-kube-api-access-d6c74\") pod \"mariadb-client-7-default\" (UID: \"25abc985-ae4c-48e9-bfcb-d53eec8d02c0\") " pod="openstack/mariadb-client-7-default" Dec 11 15:09:20 crc kubenswrapper[5050]: I1211 15:09:20.548553 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-7-default" Dec 11 15:09:20 crc kubenswrapper[5050]: I1211 15:09:20.582318 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0d1df49bf7d99b512e22e070c792d22f28ee506448b931c65a4c93dd1edfcf78" Dec 11 15:09:20 crc kubenswrapper[5050]: I1211 15:09:20.582440 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-6-default" Dec 11 15:09:20 crc kubenswrapper[5050]: I1211 15:09:20.877628 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-7-default"] Dec 11 15:09:20 crc kubenswrapper[5050]: W1211 15:09:20.880936 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod25abc985_ae4c_48e9_bfcb_d53eec8d02c0.slice/crio-c4b62a5ac6c219aceb09b6c3ea7de79ebf2068c7abe45b9004cf49d09c7a5474 WatchSource:0}: Error finding container c4b62a5ac6c219aceb09b6c3ea7de79ebf2068c7abe45b9004cf49d09c7a5474: Status 404 returned error can't find the container with id c4b62a5ac6c219aceb09b6c3ea7de79ebf2068c7abe45b9004cf49d09c7a5474 Dec 11 15:09:21 crc kubenswrapper[5050]: I1211 15:09:21.557787 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a762550a-0293-41c5-8dbd-8272251471c5" path="/var/lib/kubelet/pods/a762550a-0293-41c5-8dbd-8272251471c5/volumes" Dec 11 15:09:21 crc kubenswrapper[5050]: I1211 15:09:21.592735 5050 generic.go:334] "Generic (PLEG): container finished" podID="25abc985-ae4c-48e9-bfcb-d53eec8d02c0" containerID="2cc9cfc2fe068df43629d98d04316b16d784c71d095adcb2dc6a73314fd95c03" exitCode=0 Dec 11 15:09:21 crc kubenswrapper[5050]: I1211 15:09:21.592788 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-7-default" event={"ID":"25abc985-ae4c-48e9-bfcb-d53eec8d02c0","Type":"ContainerDied","Data":"2cc9cfc2fe068df43629d98d04316b16d784c71d095adcb2dc6a73314fd95c03"} Dec 11 15:09:21 crc kubenswrapper[5050]: I1211 15:09:21.592888 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-7-default" event={"ID":"25abc985-ae4c-48e9-bfcb-d53eec8d02c0","Type":"ContainerStarted","Data":"c4b62a5ac6c219aceb09b6c3ea7de79ebf2068c7abe45b9004cf49d09c7a5474"} Dec 11 15:09:22 crc kubenswrapper[5050]: I1211 15:09:22.992338 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-7-default" Dec 11 15:09:23 crc kubenswrapper[5050]: I1211 15:09:23.012319 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client-7-default_25abc985-ae4c-48e9-bfcb-d53eec8d02c0/mariadb-client-7-default/0.log" Dec 11 15:09:23 crc kubenswrapper[5050]: I1211 15:09:23.046756 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client-7-default"] Dec 11 15:09:23 crc kubenswrapper[5050]: I1211 15:09:23.056508 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client-7-default"] Dec 11 15:09:23 crc kubenswrapper[5050]: I1211 15:09:23.062289 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6c74\" (UniqueName: \"kubernetes.io/projected/25abc985-ae4c-48e9-bfcb-d53eec8d02c0-kube-api-access-d6c74\") pod \"25abc985-ae4c-48e9-bfcb-d53eec8d02c0\" (UID: \"25abc985-ae4c-48e9-bfcb-d53eec8d02c0\") " Dec 11 15:09:23 crc kubenswrapper[5050]: I1211 15:09:23.092654 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25abc985-ae4c-48e9-bfcb-d53eec8d02c0-kube-api-access-d6c74" (OuterVolumeSpecName: "kube-api-access-d6c74") pod "25abc985-ae4c-48e9-bfcb-d53eec8d02c0" (UID: "25abc985-ae4c-48e9-bfcb-d53eec8d02c0"). InnerVolumeSpecName "kube-api-access-d6c74". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 15:09:23 crc kubenswrapper[5050]: I1211 15:09:23.165546 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6c74\" (UniqueName: \"kubernetes.io/projected/25abc985-ae4c-48e9-bfcb-d53eec8d02c0-kube-api-access-d6c74\") on node \"crc\" DevicePath \"\"" Dec 11 15:09:23 crc kubenswrapper[5050]: I1211 15:09:23.193635 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client-2"] Dec 11 15:09:23 crc kubenswrapper[5050]: E1211 15:09:23.194106 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25abc985-ae4c-48e9-bfcb-d53eec8d02c0" containerName="mariadb-client-7-default" Dec 11 15:09:23 crc kubenswrapper[5050]: I1211 15:09:23.194127 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="25abc985-ae4c-48e9-bfcb-d53eec8d02c0" containerName="mariadb-client-7-default" Dec 11 15:09:23 crc kubenswrapper[5050]: I1211 15:09:23.194325 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="25abc985-ae4c-48e9-bfcb-d53eec8d02c0" containerName="mariadb-client-7-default" Dec 11 15:09:23 crc kubenswrapper[5050]: I1211 15:09:23.194925 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-2" Dec 11 15:09:23 crc kubenswrapper[5050]: I1211 15:09:23.201129 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-2"] Dec 11 15:09:23 crc kubenswrapper[5050]: I1211 15:09:23.267573 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdndt\" (UniqueName: \"kubernetes.io/projected/e4ff7cbf-4ce9-46eb-ae37-b69d73523cd0-kube-api-access-gdndt\") pod \"mariadb-client-2\" (UID: \"e4ff7cbf-4ce9-46eb-ae37-b69d73523cd0\") " pod="openstack/mariadb-client-2" Dec 11 15:09:23 crc kubenswrapper[5050]: I1211 15:09:23.370211 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gdndt\" (UniqueName: \"kubernetes.io/projected/e4ff7cbf-4ce9-46eb-ae37-b69d73523cd0-kube-api-access-gdndt\") pod \"mariadb-client-2\" (UID: \"e4ff7cbf-4ce9-46eb-ae37-b69d73523cd0\") " pod="openstack/mariadb-client-2" Dec 11 15:09:23 crc kubenswrapper[5050]: I1211 15:09:23.387209 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gdndt\" (UniqueName: \"kubernetes.io/projected/e4ff7cbf-4ce9-46eb-ae37-b69d73523cd0-kube-api-access-gdndt\") pod \"mariadb-client-2\" (UID: \"e4ff7cbf-4ce9-46eb-ae37-b69d73523cd0\") " pod="openstack/mariadb-client-2" Dec 11 15:09:23 crc kubenswrapper[5050]: I1211 15:09:23.510531 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client-2" Dec 11 15:09:23 crc kubenswrapper[5050]: I1211 15:09:23.566679 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25abc985-ae4c-48e9-bfcb-d53eec8d02c0" path="/var/lib/kubelet/pods/25abc985-ae4c-48e9-bfcb-d53eec8d02c0/volumes" Dec 11 15:09:23 crc kubenswrapper[5050]: I1211 15:09:23.668569 5050 scope.go:117] "RemoveContainer" containerID="2cc9cfc2fe068df43629d98d04316b16d784c71d095adcb2dc6a73314fd95c03" Dec 11 15:09:23 crc kubenswrapper[5050]: I1211 15:09:23.668719 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-7-default" Dec 11 15:09:23 crc kubenswrapper[5050]: I1211 15:09:23.988748 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client-2"] Dec 11 15:09:23 crc kubenswrapper[5050]: W1211 15:09:23.992452 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode4ff7cbf_4ce9_46eb_ae37_b69d73523cd0.slice/crio-b06f00eddb051a1fdfce66bf8ee62de4a3125429d154718d8b064988ad603e05 WatchSource:0}: Error finding container b06f00eddb051a1fdfce66bf8ee62de4a3125429d154718d8b064988ad603e05: Status 404 returned error can't find the container with id b06f00eddb051a1fdfce66bf8ee62de4a3125429d154718d8b064988ad603e05 Dec 11 15:09:24 crc kubenswrapper[5050]: E1211 15:09:24.406330 5050 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode4ff7cbf_4ce9_46eb_ae37_b69d73523cd0.slice/crio-conmon-e32109e1280e08cf79385c75aa3a9dbf78a80968778d474e207aee266583ac15.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode4ff7cbf_4ce9_46eb_ae37_b69d73523cd0.slice/crio-e32109e1280e08cf79385c75aa3a9dbf78a80968778d474e207aee266583ac15.scope\": RecentStats: unable to find data in memory cache]" Dec 11 15:09:24 crc kubenswrapper[5050]: I1211 15:09:24.677793 5050 generic.go:334] "Generic (PLEG): container finished" podID="e4ff7cbf-4ce9-46eb-ae37-b69d73523cd0" containerID="e32109e1280e08cf79385c75aa3a9dbf78a80968778d474e207aee266583ac15" exitCode=0 Dec 11 15:09:24 crc kubenswrapper[5050]: I1211 15:09:24.677846 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-2" event={"ID":"e4ff7cbf-4ce9-46eb-ae37-b69d73523cd0","Type":"ContainerDied","Data":"e32109e1280e08cf79385c75aa3a9dbf78a80968778d474e207aee266583ac15"} Dec 11 15:09:24 crc kubenswrapper[5050]: I1211 15:09:24.677869 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client-2" event={"ID":"e4ff7cbf-4ce9-46eb-ae37-b69d73523cd0","Type":"ContainerStarted","Data":"b06f00eddb051a1fdfce66bf8ee62de4a3125429d154718d8b064988ad603e05"} Dec 11 15:09:26 crc kubenswrapper[5050]: I1211 15:09:26.355523 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-2" Dec 11 15:09:26 crc kubenswrapper[5050]: I1211 15:09:26.376948 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client-2_e4ff7cbf-4ce9-46eb-ae37-b69d73523cd0/mariadb-client-2/0.log" Dec 11 15:09:26 crc kubenswrapper[5050]: I1211 15:09:26.409521 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client-2"] Dec 11 15:09:26 crc kubenswrapper[5050]: I1211 15:09:26.415923 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client-2"] Dec 11 15:09:26 crc kubenswrapper[5050]: I1211 15:09:26.425791 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gdndt\" (UniqueName: \"kubernetes.io/projected/e4ff7cbf-4ce9-46eb-ae37-b69d73523cd0-kube-api-access-gdndt\") pod \"e4ff7cbf-4ce9-46eb-ae37-b69d73523cd0\" (UID: \"e4ff7cbf-4ce9-46eb-ae37-b69d73523cd0\") " Dec 11 15:09:26 crc kubenswrapper[5050]: I1211 15:09:26.431296 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4ff7cbf-4ce9-46eb-ae37-b69d73523cd0-kube-api-access-gdndt" (OuterVolumeSpecName: "kube-api-access-gdndt") pod "e4ff7cbf-4ce9-46eb-ae37-b69d73523cd0" (UID: "e4ff7cbf-4ce9-46eb-ae37-b69d73523cd0"). InnerVolumeSpecName "kube-api-access-gdndt". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 15:09:26 crc kubenswrapper[5050]: I1211 15:09:26.528364 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gdndt\" (UniqueName: \"kubernetes.io/projected/e4ff7cbf-4ce9-46eb-ae37-b69d73523cd0-kube-api-access-gdndt\") on node \"crc\" DevicePath \"\"" Dec 11 15:09:26 crc kubenswrapper[5050]: I1211 15:09:26.694102 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b06f00eddb051a1fdfce66bf8ee62de4a3125429d154718d8b064988ad603e05" Dec 11 15:09:26 crc kubenswrapper[5050]: I1211 15:09:26.694151 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client-2" Dec 11 15:09:27 crc kubenswrapper[5050]: I1211 15:09:27.554393 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e4ff7cbf-4ce9-46eb-ae37-b69d73523cd0" path="/var/lib/kubelet/pods/e4ff7cbf-4ce9-46eb-ae37-b69d73523cd0/volumes" Dec 11 15:09:29 crc kubenswrapper[5050]: I1211 15:09:29.551646 5050 scope.go:117] "RemoveContainer" containerID="b585bea20b0bb52e58642ee9f3e33b5b182a66a544d35daafc0202a53b35a810" Dec 11 15:09:29 crc kubenswrapper[5050]: E1211 15:09:29.551903 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 15:09:42 crc kubenswrapper[5050]: I1211 15:09:42.545777 5050 scope.go:117] "RemoveContainer" containerID="b585bea20b0bb52e58642ee9f3e33b5b182a66a544d35daafc0202a53b35a810" Dec 11 15:09:42 crc kubenswrapper[5050]: I1211 15:09:42.830142 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" event={"ID":"7e849b2e-7cd7-4e49-acd2-deab139c699a","Type":"ContainerStarted","Data":"54afae94cdfb9cc257fb467d0ac4a5e3143dbdadf4785f8ed56cea4ab889612b"} Dec 11 15:09:44 crc kubenswrapper[5050]: I1211 15:09:44.801564 5050 scope.go:117] "RemoveContainer" containerID="48daca79bfdb97d1da96c5f43c1776e6c9bfb0a29aebfb5c97419322f8ada43f" Dec 11 15:12:10 crc kubenswrapper[5050]: I1211 15:12:10.797129 5050 patch_prober.go:28] interesting pod/machine-config-daemon-wcb2s container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 15:12:10 crc kubenswrapper[5050]: I1211 15:12:10.797714 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 15:12:40 crc kubenswrapper[5050]: I1211 15:12:40.834529 5050 patch_prober.go:28] interesting pod/machine-config-daemon-wcb2s container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 15:12:40 crc kubenswrapper[5050]: I1211 15:12:40.834936 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 15:13:10 crc kubenswrapper[5050]: I1211 15:13:10.796774 5050 patch_prober.go:28] interesting pod/machine-config-daemon-wcb2s container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 15:13:10 crc 
kubenswrapper[5050]: I1211 15:13:10.797346 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 15:13:10 crc kubenswrapper[5050]: I1211 15:13:10.797395 5050 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" Dec 11 15:13:10 crc kubenswrapper[5050]: I1211 15:13:10.798382 5050 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"54afae94cdfb9cc257fb467d0ac4a5e3143dbdadf4785f8ed56cea4ab889612b"} pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 11 15:13:10 crc kubenswrapper[5050]: I1211 15:13:10.798466 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" containerName="machine-config-daemon" containerID="cri-o://54afae94cdfb9cc257fb467d0ac4a5e3143dbdadf4785f8ed56cea4ab889612b" gracePeriod=600 Dec 11 15:13:11 crc kubenswrapper[5050]: I1211 15:13:11.688676 5050 generic.go:334] "Generic (PLEG): container finished" podID="7e849b2e-7cd7-4e49-acd2-deab139c699a" containerID="54afae94cdfb9cc257fb467d0ac4a5e3143dbdadf4785f8ed56cea4ab889612b" exitCode=0 Dec 11 15:13:11 crc kubenswrapper[5050]: I1211 15:13:11.688776 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" event={"ID":"7e849b2e-7cd7-4e49-acd2-deab139c699a","Type":"ContainerDied","Data":"54afae94cdfb9cc257fb467d0ac4a5e3143dbdadf4785f8ed56cea4ab889612b"} Dec 11 15:13:11 crc kubenswrapper[5050]: I1211 15:13:11.689462 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" event={"ID":"7e849b2e-7cd7-4e49-acd2-deab139c699a","Type":"ContainerStarted","Data":"5e751c2ff817a404aae81889734d5a9bd57af497d8e885add288368bb5b0040e"} Dec 11 15:13:11 crc kubenswrapper[5050]: I1211 15:13:11.689501 5050 scope.go:117] "RemoveContainer" containerID="b585bea20b0bb52e58642ee9f3e33b5b182a66a544d35daafc0202a53b35a810" Dec 11 15:14:37 crc kubenswrapper[5050]: I1211 15:14:37.428728 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-zhs5p"] Dec 11 15:14:37 crc kubenswrapper[5050]: E1211 15:14:37.430226 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4ff7cbf-4ce9-46eb-ae37-b69d73523cd0" containerName="mariadb-client-2" Dec 11 15:14:37 crc kubenswrapper[5050]: I1211 15:14:37.430251 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4ff7cbf-4ce9-46eb-ae37-b69d73523cd0" containerName="mariadb-client-2" Dec 11 15:14:37 crc kubenswrapper[5050]: I1211 15:14:37.430513 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4ff7cbf-4ce9-46eb-ae37-b69d73523cd0" containerName="mariadb-client-2" Dec 11 15:14:37 crc kubenswrapper[5050]: I1211 15:14:37.432660 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zhs5p" Dec 11 15:14:37 crc kubenswrapper[5050]: I1211 15:14:37.440729 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zhs5p"] Dec 11 15:14:37 crc kubenswrapper[5050]: I1211 15:14:37.605845 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8674697-9a4e-460b-aa5d-34501c96b784-catalog-content\") pod \"redhat-operators-zhs5p\" (UID: \"a8674697-9a4e-460b-aa5d-34501c96b784\") " pod="openshift-marketplace/redhat-operators-zhs5p" Dec 11 15:14:37 crc kubenswrapper[5050]: I1211 15:14:37.605990 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8674697-9a4e-460b-aa5d-34501c96b784-utilities\") pod \"redhat-operators-zhs5p\" (UID: \"a8674697-9a4e-460b-aa5d-34501c96b784\") " pod="openshift-marketplace/redhat-operators-zhs5p" Dec 11 15:14:37 crc kubenswrapper[5050]: I1211 15:14:37.606054 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n48bk\" (UniqueName: \"kubernetes.io/projected/a8674697-9a4e-460b-aa5d-34501c96b784-kube-api-access-n48bk\") pod \"redhat-operators-zhs5p\" (UID: \"a8674697-9a4e-460b-aa5d-34501c96b784\") " pod="openshift-marketplace/redhat-operators-zhs5p" Dec 11 15:14:37 crc kubenswrapper[5050]: I1211 15:14:37.707322 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8674697-9a4e-460b-aa5d-34501c96b784-catalog-content\") pod \"redhat-operators-zhs5p\" (UID: \"a8674697-9a4e-460b-aa5d-34501c96b784\") " pod="openshift-marketplace/redhat-operators-zhs5p" Dec 11 15:14:37 crc kubenswrapper[5050]: I1211 15:14:37.707401 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8674697-9a4e-460b-aa5d-34501c96b784-utilities\") pod \"redhat-operators-zhs5p\" (UID: \"a8674697-9a4e-460b-aa5d-34501c96b784\") " pod="openshift-marketplace/redhat-operators-zhs5p" Dec 11 15:14:37 crc kubenswrapper[5050]: I1211 15:14:37.707426 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n48bk\" (UniqueName: \"kubernetes.io/projected/a8674697-9a4e-460b-aa5d-34501c96b784-kube-api-access-n48bk\") pod \"redhat-operators-zhs5p\" (UID: \"a8674697-9a4e-460b-aa5d-34501c96b784\") " pod="openshift-marketplace/redhat-operators-zhs5p" Dec 11 15:14:37 crc kubenswrapper[5050]: I1211 15:14:37.707899 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8674697-9a4e-460b-aa5d-34501c96b784-catalog-content\") pod \"redhat-operators-zhs5p\" (UID: \"a8674697-9a4e-460b-aa5d-34501c96b784\") " pod="openshift-marketplace/redhat-operators-zhs5p" Dec 11 15:14:37 crc kubenswrapper[5050]: I1211 15:14:37.708089 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8674697-9a4e-460b-aa5d-34501c96b784-utilities\") pod \"redhat-operators-zhs5p\" (UID: \"a8674697-9a4e-460b-aa5d-34501c96b784\") " pod="openshift-marketplace/redhat-operators-zhs5p" Dec 11 15:14:37 crc kubenswrapper[5050]: I1211 15:14:37.727736 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-n48bk\" (UniqueName: \"kubernetes.io/projected/a8674697-9a4e-460b-aa5d-34501c96b784-kube-api-access-n48bk\") pod \"redhat-operators-zhs5p\" (UID: \"a8674697-9a4e-460b-aa5d-34501c96b784\") " pod="openshift-marketplace/redhat-operators-zhs5p" Dec 11 15:14:37 crc kubenswrapper[5050]: I1211 15:14:37.782716 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zhs5p" Dec 11 15:14:38 crc kubenswrapper[5050]: I1211 15:14:38.046072 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zhs5p"] Dec 11 15:14:38 crc kubenswrapper[5050]: I1211 15:14:38.542620 5050 generic.go:334] "Generic (PLEG): container finished" podID="a8674697-9a4e-460b-aa5d-34501c96b784" containerID="ab5950ea71c0a5fe04a67347b1a69514360f602448b74acef0953a8bdbf8ce0b" exitCode=0 Dec 11 15:14:38 crc kubenswrapper[5050]: I1211 15:14:38.542715 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zhs5p" event={"ID":"a8674697-9a4e-460b-aa5d-34501c96b784","Type":"ContainerDied","Data":"ab5950ea71c0a5fe04a67347b1a69514360f602448b74acef0953a8bdbf8ce0b"} Dec 11 15:14:38 crc kubenswrapper[5050]: I1211 15:14:38.542920 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zhs5p" event={"ID":"a8674697-9a4e-460b-aa5d-34501c96b784","Type":"ContainerStarted","Data":"0a3c4225d638f978fad46f0d5da9db6bed61d4835870d2d3129909e869027c97"} Dec 11 15:14:38 crc kubenswrapper[5050]: I1211 15:14:38.546462 5050 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 11 15:14:40 crc kubenswrapper[5050]: I1211 15:14:40.561846 5050 generic.go:334] "Generic (PLEG): container finished" podID="a8674697-9a4e-460b-aa5d-34501c96b784" containerID="4a83622b289ae2b7db3b6535bbf38a15844da535025e7ecc799ca16910815e7e" exitCode=0 Dec 11 15:14:40 crc kubenswrapper[5050]: I1211 15:14:40.561949 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zhs5p" event={"ID":"a8674697-9a4e-460b-aa5d-34501c96b784","Type":"ContainerDied","Data":"4a83622b289ae2b7db3b6535bbf38a15844da535025e7ecc799ca16910815e7e"} Dec 11 15:14:41 crc kubenswrapper[5050]: I1211 15:14:41.067077 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-copy-data"] Dec 11 15:14:41 crc kubenswrapper[5050]: I1211 15:14:41.068435 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-copy-data" Dec 11 15:14:41 crc kubenswrapper[5050]: I1211 15:14:41.075462 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-tmtdn" Dec 11 15:14:41 crc kubenswrapper[5050]: I1211 15:14:41.108082 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-copy-data"] Dec 11 15:14:41 crc kubenswrapper[5050]: I1211 15:14:41.266521 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-ed21fc46-d45d-498f-8eda-ff4c2217985a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ed21fc46-d45d-498f-8eda-ff4c2217985a\") pod \"mariadb-copy-data\" (UID: \"c7f1d12f-9eb6-4436-84dd-0c831345f9b7\") " pod="openstack/mariadb-copy-data" Dec 11 15:14:41 crc kubenswrapper[5050]: I1211 15:14:41.266613 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4s2n\" (UniqueName: \"kubernetes.io/projected/c7f1d12f-9eb6-4436-84dd-0c831345f9b7-kube-api-access-c4s2n\") pod \"mariadb-copy-data\" (UID: \"c7f1d12f-9eb6-4436-84dd-0c831345f9b7\") " pod="openstack/mariadb-copy-data" Dec 11 15:14:41 crc kubenswrapper[5050]: I1211 15:14:41.367765 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c4s2n\" (UniqueName: \"kubernetes.io/projected/c7f1d12f-9eb6-4436-84dd-0c831345f9b7-kube-api-access-c4s2n\") pod \"mariadb-copy-data\" (UID: \"c7f1d12f-9eb6-4436-84dd-0c831345f9b7\") " pod="openstack/mariadb-copy-data" Dec 11 15:14:41 crc kubenswrapper[5050]: I1211 15:14:41.367897 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-ed21fc46-d45d-498f-8eda-ff4c2217985a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ed21fc46-d45d-498f-8eda-ff4c2217985a\") pod \"mariadb-copy-data\" (UID: \"c7f1d12f-9eb6-4436-84dd-0c831345f9b7\") " pod="openstack/mariadb-copy-data" Dec 11 15:14:41 crc kubenswrapper[5050]: I1211 15:14:41.371706 5050 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Dec 11 15:14:41 crc kubenswrapper[5050]: I1211 15:14:41.371952 5050 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-ed21fc46-d45d-498f-8eda-ff4c2217985a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ed21fc46-d45d-498f-8eda-ff4c2217985a\") pod \"mariadb-copy-data\" (UID: \"c7f1d12f-9eb6-4436-84dd-0c831345f9b7\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/a1073e28a694bcb9ad386f516ed85f20a9a04b17edbd651a9493637ef8949383/globalmount\"" pod="openstack/mariadb-copy-data" Dec 11 15:14:41 crc kubenswrapper[5050]: I1211 15:14:41.396135 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c4s2n\" (UniqueName: \"kubernetes.io/projected/c7f1d12f-9eb6-4436-84dd-0c831345f9b7-kube-api-access-c4s2n\") pod \"mariadb-copy-data\" (UID: \"c7f1d12f-9eb6-4436-84dd-0c831345f9b7\") " pod="openstack/mariadb-copy-data" Dec 11 15:14:41 crc kubenswrapper[5050]: I1211 15:14:41.414160 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-ed21fc46-d45d-498f-8eda-ff4c2217985a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ed21fc46-d45d-498f-8eda-ff4c2217985a\") pod \"mariadb-copy-data\" (UID: \"c7f1d12f-9eb6-4436-84dd-0c831345f9b7\") " pod="openstack/mariadb-copy-data" Dec 11 15:14:41 crc kubenswrapper[5050]: I1211 15:14:41.571972 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zhs5p" event={"ID":"a8674697-9a4e-460b-aa5d-34501c96b784","Type":"ContainerStarted","Data":"1364579dec627c8c6494e0c429ac38e7706ccfcba5d3e4bc2e71e35fac3053ed"} Dec 11 15:14:41 crc kubenswrapper[5050]: I1211 15:14:41.595039 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-zhs5p" podStartSLOduration=1.821385926 podStartE2EDuration="4.594997286s" podCreationTimestamp="2025-12-11 15:14:37 +0000 UTC" firstStartedPulling="2025-12-11 15:14:38.546140746 +0000 UTC m=+5169.389863332" lastFinishedPulling="2025-12-11 15:14:41.319752116 +0000 UTC m=+5172.163474692" observedRunningTime="2025-12-11 15:14:41.588535393 +0000 UTC m=+5172.432257979" watchObservedRunningTime="2025-12-11 15:14:41.594997286 +0000 UTC m=+5172.438719872" Dec 11 15:14:41 crc kubenswrapper[5050]: I1211 15:14:41.700371 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-copy-data" Dec 11 15:14:42 crc kubenswrapper[5050]: I1211 15:14:42.188164 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-copy-data"] Dec 11 15:14:42 crc kubenswrapper[5050]: W1211 15:14:42.195381 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc7f1d12f_9eb6_4436_84dd_0c831345f9b7.slice/crio-1c0f36c41ea593577af73fbb7e2d970168c1fcde82e15662534f2ad9de6ac68a WatchSource:0}: Error finding container 1c0f36c41ea593577af73fbb7e2d970168c1fcde82e15662534f2ad9de6ac68a: Status 404 returned error can't find the container with id 1c0f36c41ea593577af73fbb7e2d970168c1fcde82e15662534f2ad9de6ac68a Dec 11 15:14:42 crc kubenswrapper[5050]: I1211 15:14:42.596195 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-copy-data" event={"ID":"c7f1d12f-9eb6-4436-84dd-0c831345f9b7","Type":"ContainerStarted","Data":"453ff1e8315d1476c94cda73b087a34bc4ba5f3dc89653701c91ec52d032709d"} Dec 11 15:14:42 crc kubenswrapper[5050]: I1211 15:14:42.596605 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-copy-data" event={"ID":"c7f1d12f-9eb6-4436-84dd-0c831345f9b7","Type":"ContainerStarted","Data":"1c0f36c41ea593577af73fbb7e2d970168c1fcde82e15662534f2ad9de6ac68a"} Dec 11 15:14:42 crc kubenswrapper[5050]: I1211 15:14:42.618617 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mariadb-copy-data" podStartSLOduration=2.618599929 podStartE2EDuration="2.618599929s" podCreationTimestamp="2025-12-11 15:14:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 15:14:42.615122095 +0000 UTC m=+5173.458844681" watchObservedRunningTime="2025-12-11 15:14:42.618599929 +0000 UTC m=+5173.462322515" Dec 11 15:14:45 crc kubenswrapper[5050]: I1211 15:14:45.511313 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client"] Dec 11 15:14:45 crc kubenswrapper[5050]: I1211 15:14:45.512592 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Dec 11 15:14:45 crc kubenswrapper[5050]: I1211 15:14:45.526766 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Dec 11 15:14:45 crc kubenswrapper[5050]: I1211 15:14:45.641735 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qlmlq\" (UniqueName: \"kubernetes.io/projected/3f9b7efe-242a-4015-8e95-b3f0f04c2318-kube-api-access-qlmlq\") pod \"mariadb-client\" (UID: \"3f9b7efe-242a-4015-8e95-b3f0f04c2318\") " pod="openstack/mariadb-client" Dec 11 15:14:45 crc kubenswrapper[5050]: I1211 15:14:45.743078 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qlmlq\" (UniqueName: \"kubernetes.io/projected/3f9b7efe-242a-4015-8e95-b3f0f04c2318-kube-api-access-qlmlq\") pod \"mariadb-client\" (UID: \"3f9b7efe-242a-4015-8e95-b3f0f04c2318\") " pod="openstack/mariadb-client" Dec 11 15:14:45 crc kubenswrapper[5050]: I1211 15:14:45.760734 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qlmlq\" (UniqueName: \"kubernetes.io/projected/3f9b7efe-242a-4015-8e95-b3f0f04c2318-kube-api-access-qlmlq\") pod \"mariadb-client\" (UID: \"3f9b7efe-242a-4015-8e95-b3f0f04c2318\") " pod="openstack/mariadb-client" Dec 11 15:14:45 crc kubenswrapper[5050]: I1211 15:14:45.832527 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Dec 11 15:14:46 crc kubenswrapper[5050]: I1211 15:14:46.315620 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Dec 11 15:14:46 crc kubenswrapper[5050]: I1211 15:14:46.637632 5050 generic.go:334] "Generic (PLEG): container finished" podID="3f9b7efe-242a-4015-8e95-b3f0f04c2318" containerID="f55b3c9fdbfff71d47bbf096e3b38f61f282575fdfc0df338dab5054db2f39bf" exitCode=0 Dec 11 15:14:46 crc kubenswrapper[5050]: I1211 15:14:46.637692 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"3f9b7efe-242a-4015-8e95-b3f0f04c2318","Type":"ContainerDied","Data":"f55b3c9fdbfff71d47bbf096e3b38f61f282575fdfc0df338dab5054db2f39bf"} Dec 11 15:14:46 crc kubenswrapper[5050]: I1211 15:14:46.637734 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"3f9b7efe-242a-4015-8e95-b3f0f04c2318","Type":"ContainerStarted","Data":"0630c75e31ea3d17f1aa19ae4a256bcaff0aa9bf75e67c3788c7e72a362466cb"} Dec 11 15:14:47 crc kubenswrapper[5050]: I1211 15:14:47.783224 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-zhs5p" Dec 11 15:14:47 crc kubenswrapper[5050]: I1211 15:14:47.784396 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-zhs5p" Dec 11 15:14:47 crc kubenswrapper[5050]: I1211 15:14:47.835568 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-zhs5p" Dec 11 15:14:47 crc kubenswrapper[5050]: I1211 15:14:47.960891 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Dec 11 15:14:47 crc kubenswrapper[5050]: I1211 15:14:47.986619 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client_3f9b7efe-242a-4015-8e95-b3f0f04c2318/mariadb-client/0.log" Dec 11 15:14:48 crc kubenswrapper[5050]: I1211 15:14:48.014588 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client"] Dec 11 15:14:48 crc kubenswrapper[5050]: I1211 15:14:48.020291 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client"] Dec 11 15:14:48 crc kubenswrapper[5050]: I1211 15:14:48.081973 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qlmlq\" (UniqueName: \"kubernetes.io/projected/3f9b7efe-242a-4015-8e95-b3f0f04c2318-kube-api-access-qlmlq\") pod \"3f9b7efe-242a-4015-8e95-b3f0f04c2318\" (UID: \"3f9b7efe-242a-4015-8e95-b3f0f04c2318\") " Dec 11 15:14:48 crc kubenswrapper[5050]: I1211 15:14:48.088149 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f9b7efe-242a-4015-8e95-b3f0f04c2318-kube-api-access-qlmlq" (OuterVolumeSpecName: "kube-api-access-qlmlq") pod "3f9b7efe-242a-4015-8e95-b3f0f04c2318" (UID: "3f9b7efe-242a-4015-8e95-b3f0f04c2318"). InnerVolumeSpecName "kube-api-access-qlmlq". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 15:14:48 crc kubenswrapper[5050]: I1211 15:14:48.160419 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client"] Dec 11 15:14:48 crc kubenswrapper[5050]: E1211 15:14:48.160772 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f9b7efe-242a-4015-8e95-b3f0f04c2318" containerName="mariadb-client" Dec 11 15:14:48 crc kubenswrapper[5050]: I1211 15:14:48.160790 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f9b7efe-242a-4015-8e95-b3f0f04c2318" containerName="mariadb-client" Dec 11 15:14:48 crc kubenswrapper[5050]: I1211 15:14:48.160950 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f9b7efe-242a-4015-8e95-b3f0f04c2318" containerName="mariadb-client" Dec 11 15:14:48 crc kubenswrapper[5050]: I1211 15:14:48.161511 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Dec 11 15:14:48 crc kubenswrapper[5050]: I1211 15:14:48.183508 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qlmlq\" (UniqueName: \"kubernetes.io/projected/3f9b7efe-242a-4015-8e95-b3f0f04c2318-kube-api-access-qlmlq\") on node \"crc\" DevicePath \"\"" Dec 11 15:14:48 crc kubenswrapper[5050]: I1211 15:14:48.186110 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Dec 11 15:14:48 crc kubenswrapper[5050]: I1211 15:14:48.284797 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8tq9\" (UniqueName: \"kubernetes.io/projected/831fa39e-6e92-4e5b-8883-419719805ac6-kube-api-access-b8tq9\") pod \"mariadb-client\" (UID: \"831fa39e-6e92-4e5b-8883-419719805ac6\") " pod="openstack/mariadb-client" Dec 11 15:14:48 crc kubenswrapper[5050]: I1211 15:14:48.386061 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b8tq9\" (UniqueName: \"kubernetes.io/projected/831fa39e-6e92-4e5b-8883-419719805ac6-kube-api-access-b8tq9\") pod \"mariadb-client\" (UID: \"831fa39e-6e92-4e5b-8883-419719805ac6\") " pod="openstack/mariadb-client" Dec 11 15:14:48 crc kubenswrapper[5050]: I1211 15:14:48.407254 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b8tq9\" (UniqueName: \"kubernetes.io/projected/831fa39e-6e92-4e5b-8883-419719805ac6-kube-api-access-b8tq9\") pod \"mariadb-client\" (UID: \"831fa39e-6e92-4e5b-8883-419719805ac6\") " pod="openstack/mariadb-client" Dec 11 15:14:48 crc kubenswrapper[5050]: I1211 15:14:48.482959 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Dec 11 15:14:48 crc kubenswrapper[5050]: I1211 15:14:48.653614 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0630c75e31ea3d17f1aa19ae4a256bcaff0aa9bf75e67c3788c7e72a362466cb" Dec 11 15:14:48 crc kubenswrapper[5050]: I1211 15:14:48.653649 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Dec 11 15:14:48 crc kubenswrapper[5050]: I1211 15:14:48.675817 5050 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/mariadb-client" oldPodUID="3f9b7efe-242a-4015-8e95-b3f0f04c2318" podUID="831fa39e-6e92-4e5b-8883-419719805ac6" Dec 11 15:14:48 crc kubenswrapper[5050]: I1211 15:14:48.719855 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-zhs5p" Dec 11 15:14:48 crc kubenswrapper[5050]: I1211 15:14:48.757610 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Dec 11 15:14:48 crc kubenswrapper[5050]: I1211 15:14:48.774929 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zhs5p"] Dec 11 15:14:49 crc kubenswrapper[5050]: I1211 15:14:49.562291 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f9b7efe-242a-4015-8e95-b3f0f04c2318" path="/var/lib/kubelet/pods/3f9b7efe-242a-4015-8e95-b3f0f04c2318/volumes" Dec 11 15:14:49 crc kubenswrapper[5050]: I1211 15:14:49.667724 5050 generic.go:334] "Generic (PLEG): container finished" podID="831fa39e-6e92-4e5b-8883-419719805ac6" containerID="826c2bed67dcb027345b0302f22b6d1cac2e2d8b053834c0b5ee4d1e312d3035" exitCode=0 Dec 11 15:14:49 crc kubenswrapper[5050]: I1211 15:14:49.667824 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"831fa39e-6e92-4e5b-8883-419719805ac6","Type":"ContainerDied","Data":"826c2bed67dcb027345b0302f22b6d1cac2e2d8b053834c0b5ee4d1e312d3035"} Dec 11 15:14:49 crc kubenswrapper[5050]: I1211 15:14:49.667883 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"831fa39e-6e92-4e5b-8883-419719805ac6","Type":"ContainerStarted","Data":"03e074a92c09ab01e9defc82094c7493ecd590c1d2711de9c518dd0a6b2f013d"} Dec 11 15:14:50 crc kubenswrapper[5050]: I1211 15:14:50.679925 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-zhs5p" podUID="a8674697-9a4e-460b-aa5d-34501c96b784" containerName="registry-server" containerID="cri-o://1364579dec627c8c6494e0c429ac38e7706ccfcba5d3e4bc2e71e35fac3053ed" gracePeriod=2 Dec 11 15:14:51 crc kubenswrapper[5050]: I1211 15:14:51.213300 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Dec 11 15:14:51 crc kubenswrapper[5050]: I1211 15:14:51.233907 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client_831fa39e-6e92-4e5b-8883-419719805ac6/mariadb-client/0.log" Dec 11 15:14:51 crc kubenswrapper[5050]: I1211 15:14:51.272835 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client"] Dec 11 15:14:51 crc kubenswrapper[5050]: I1211 15:14:51.282814 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client"] Dec 11 15:14:51 crc kubenswrapper[5050]: I1211 15:14:51.335529 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b8tq9\" (UniqueName: \"kubernetes.io/projected/831fa39e-6e92-4e5b-8883-419719805ac6-kube-api-access-b8tq9\") pod \"831fa39e-6e92-4e5b-8883-419719805ac6\" (UID: \"831fa39e-6e92-4e5b-8883-419719805ac6\") " Dec 11 15:14:51 crc kubenswrapper[5050]: I1211 15:14:51.341292 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/831fa39e-6e92-4e5b-8883-419719805ac6-kube-api-access-b8tq9" (OuterVolumeSpecName: "kube-api-access-b8tq9") pod "831fa39e-6e92-4e5b-8883-419719805ac6" (UID: "831fa39e-6e92-4e5b-8883-419719805ac6"). InnerVolumeSpecName "kube-api-access-b8tq9". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 15:14:51 crc kubenswrapper[5050]: I1211 15:14:51.437222 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b8tq9\" (UniqueName: \"kubernetes.io/projected/831fa39e-6e92-4e5b-8883-419719805ac6-kube-api-access-b8tq9\") on node \"crc\" DevicePath \"\"" Dec 11 15:14:51 crc kubenswrapper[5050]: I1211 15:14:51.559468 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="831fa39e-6e92-4e5b-8883-419719805ac6" path="/var/lib/kubelet/pods/831fa39e-6e92-4e5b-8883-419719805ac6/volumes" Dec 11 15:14:51 crc kubenswrapper[5050]: I1211 15:14:51.688871 5050 generic.go:334] "Generic (PLEG): container finished" podID="a8674697-9a4e-460b-aa5d-34501c96b784" containerID="1364579dec627c8c6494e0c429ac38e7706ccfcba5d3e4bc2e71e35fac3053ed" exitCode=0 Dec 11 15:14:51 crc kubenswrapper[5050]: I1211 15:14:51.688951 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zhs5p" event={"ID":"a8674697-9a4e-460b-aa5d-34501c96b784","Type":"ContainerDied","Data":"1364579dec627c8c6494e0c429ac38e7706ccfcba5d3e4bc2e71e35fac3053ed"} Dec 11 15:14:51 crc kubenswrapper[5050]: I1211 15:14:51.690276 5050 scope.go:117] "RemoveContainer" containerID="826c2bed67dcb027345b0302f22b6d1cac2e2d8b053834c0b5ee4d1e312d3035" Dec 11 15:14:51 crc kubenswrapper[5050]: I1211 15:14:51.690330 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Dec 11 15:14:52 crc kubenswrapper[5050]: I1211 15:14:52.219302 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zhs5p" Dec 11 15:14:52 crc kubenswrapper[5050]: I1211 15:14:52.255648 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8674697-9a4e-460b-aa5d-34501c96b784-catalog-content\") pod \"a8674697-9a4e-460b-aa5d-34501c96b784\" (UID: \"a8674697-9a4e-460b-aa5d-34501c96b784\") " Dec 11 15:14:52 crc kubenswrapper[5050]: I1211 15:14:52.255731 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8674697-9a4e-460b-aa5d-34501c96b784-utilities\") pod \"a8674697-9a4e-460b-aa5d-34501c96b784\" (UID: \"a8674697-9a4e-460b-aa5d-34501c96b784\") " Dec 11 15:14:52 crc kubenswrapper[5050]: I1211 15:14:52.255817 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n48bk\" (UniqueName: \"kubernetes.io/projected/a8674697-9a4e-460b-aa5d-34501c96b784-kube-api-access-n48bk\") pod \"a8674697-9a4e-460b-aa5d-34501c96b784\" (UID: \"a8674697-9a4e-460b-aa5d-34501c96b784\") " Dec 11 15:14:52 crc kubenswrapper[5050]: I1211 15:14:52.258369 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a8674697-9a4e-460b-aa5d-34501c96b784-utilities" (OuterVolumeSpecName: "utilities") pod "a8674697-9a4e-460b-aa5d-34501c96b784" (UID: "a8674697-9a4e-460b-aa5d-34501c96b784"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 15:14:52 crc kubenswrapper[5050]: I1211 15:14:52.267461 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8674697-9a4e-460b-aa5d-34501c96b784-kube-api-access-n48bk" (OuterVolumeSpecName: "kube-api-access-n48bk") pod "a8674697-9a4e-460b-aa5d-34501c96b784" (UID: "a8674697-9a4e-460b-aa5d-34501c96b784"). InnerVolumeSpecName "kube-api-access-n48bk". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 15:14:52 crc kubenswrapper[5050]: I1211 15:14:52.357396 5050 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8674697-9a4e-460b-aa5d-34501c96b784-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 15:14:52 crc kubenswrapper[5050]: I1211 15:14:52.357432 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n48bk\" (UniqueName: \"kubernetes.io/projected/a8674697-9a4e-460b-aa5d-34501c96b784-kube-api-access-n48bk\") on node \"crc\" DevicePath \"\"" Dec 11 15:14:52 crc kubenswrapper[5050]: I1211 15:14:52.400719 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a8674697-9a4e-460b-aa5d-34501c96b784-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a8674697-9a4e-460b-aa5d-34501c96b784" (UID: "a8674697-9a4e-460b-aa5d-34501c96b784"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 15:14:52 crc kubenswrapper[5050]: I1211 15:14:52.458626 5050 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8674697-9a4e-460b-aa5d-34501c96b784-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 15:14:52 crc kubenswrapper[5050]: I1211 15:14:52.699135 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zhs5p" event={"ID":"a8674697-9a4e-460b-aa5d-34501c96b784","Type":"ContainerDied","Data":"0a3c4225d638f978fad46f0d5da9db6bed61d4835870d2d3129909e869027c97"} Dec 11 15:14:52 crc kubenswrapper[5050]: I1211 15:14:52.699163 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zhs5p" Dec 11 15:14:52 crc kubenswrapper[5050]: I1211 15:14:52.699205 5050 scope.go:117] "RemoveContainer" containerID="1364579dec627c8c6494e0c429ac38e7706ccfcba5d3e4bc2e71e35fac3053ed" Dec 11 15:14:52 crc kubenswrapper[5050]: I1211 15:14:52.732787 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zhs5p"] Dec 11 15:14:52 crc kubenswrapper[5050]: I1211 15:14:52.736690 5050 scope.go:117] "RemoveContainer" containerID="4a83622b289ae2b7db3b6535bbf38a15844da535025e7ecc799ca16910815e7e" Dec 11 15:14:52 crc kubenswrapper[5050]: I1211 15:14:52.743155 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-zhs5p"] Dec 11 15:14:52 crc kubenswrapper[5050]: I1211 15:14:52.761763 5050 scope.go:117] "RemoveContainer" containerID="ab5950ea71c0a5fe04a67347b1a69514360f602448b74acef0953a8bdbf8ce0b" Dec 11 15:14:53 crc kubenswrapper[5050]: I1211 15:14:53.558078 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a8674697-9a4e-460b-aa5d-34501c96b784" path="/var/lib/kubelet/pods/a8674697-9a4e-460b-aa5d-34501c96b784/volumes" Dec 11 15:15:00 crc kubenswrapper[5050]: I1211 15:15:00.148961 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29424435-sqnfm"] Dec 11 15:15:00 crc kubenswrapper[5050]: E1211 15:15:00.149895 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="831fa39e-6e92-4e5b-8883-419719805ac6" containerName="mariadb-client" Dec 11 15:15:00 crc kubenswrapper[5050]: I1211 15:15:00.149911 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="831fa39e-6e92-4e5b-8883-419719805ac6" containerName="mariadb-client" Dec 11 15:15:00 crc kubenswrapper[5050]: E1211 15:15:00.149925 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8674697-9a4e-460b-aa5d-34501c96b784" containerName="extract-content" Dec 11 15:15:00 crc kubenswrapper[5050]: I1211 15:15:00.149934 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8674697-9a4e-460b-aa5d-34501c96b784" containerName="extract-content" Dec 11 15:15:00 crc kubenswrapper[5050]: E1211 15:15:00.149947 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8674697-9a4e-460b-aa5d-34501c96b784" containerName="registry-server" Dec 11 15:15:00 crc kubenswrapper[5050]: I1211 15:15:00.149955 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8674697-9a4e-460b-aa5d-34501c96b784" containerName="registry-server" Dec 11 15:15:00 crc kubenswrapper[5050]: E1211 15:15:00.149976 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8674697-9a4e-460b-aa5d-34501c96b784" containerName="extract-utilities" 
Dec 11 15:15:00 crc kubenswrapper[5050]: I1211 15:15:00.149984 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8674697-9a4e-460b-aa5d-34501c96b784" containerName="extract-utilities" Dec 11 15:15:00 crc kubenswrapper[5050]: I1211 15:15:00.150247 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8674697-9a4e-460b-aa5d-34501c96b784" containerName="registry-server" Dec 11 15:15:00 crc kubenswrapper[5050]: I1211 15:15:00.150261 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="831fa39e-6e92-4e5b-8883-419719805ac6" containerName="mariadb-client" Dec 11 15:15:00 crc kubenswrapper[5050]: I1211 15:15:00.150921 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29424435-sqnfm" Dec 11 15:15:00 crc kubenswrapper[5050]: I1211 15:15:00.160781 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29424435-sqnfm"] Dec 11 15:15:00 crc kubenswrapper[5050]: I1211 15:15:00.162094 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 11 15:15:00 crc kubenswrapper[5050]: I1211 15:15:00.163005 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 11 15:15:00 crc kubenswrapper[5050]: I1211 15:15:00.184139 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7f8ab5e0-9bbf-46ed-a20c-7d0f66466035-config-volume\") pod \"collect-profiles-29424435-sqnfm\" (UID: \"7f8ab5e0-9bbf-46ed-a20c-7d0f66466035\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424435-sqnfm" Dec 11 15:15:00 crc kubenswrapper[5050]: I1211 15:15:00.184205 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7f8ab5e0-9bbf-46ed-a20c-7d0f66466035-secret-volume\") pod \"collect-profiles-29424435-sqnfm\" (UID: \"7f8ab5e0-9bbf-46ed-a20c-7d0f66466035\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424435-sqnfm" Dec 11 15:15:00 crc kubenswrapper[5050]: I1211 15:15:00.184405 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87d9q\" (UniqueName: \"kubernetes.io/projected/7f8ab5e0-9bbf-46ed-a20c-7d0f66466035-kube-api-access-87d9q\") pod \"collect-profiles-29424435-sqnfm\" (UID: \"7f8ab5e0-9bbf-46ed-a20c-7d0f66466035\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424435-sqnfm" Dec 11 15:15:00 crc kubenswrapper[5050]: I1211 15:15:00.285557 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-87d9q\" (UniqueName: \"kubernetes.io/projected/7f8ab5e0-9bbf-46ed-a20c-7d0f66466035-kube-api-access-87d9q\") pod \"collect-profiles-29424435-sqnfm\" (UID: \"7f8ab5e0-9bbf-46ed-a20c-7d0f66466035\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424435-sqnfm" Dec 11 15:15:00 crc kubenswrapper[5050]: I1211 15:15:00.285614 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7f8ab5e0-9bbf-46ed-a20c-7d0f66466035-config-volume\") pod \"collect-profiles-29424435-sqnfm\" (UID: \"7f8ab5e0-9bbf-46ed-a20c-7d0f66466035\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29424435-sqnfm" Dec 11 15:15:00 crc kubenswrapper[5050]: I1211 15:15:00.285642 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7f8ab5e0-9bbf-46ed-a20c-7d0f66466035-secret-volume\") pod \"collect-profiles-29424435-sqnfm\" (UID: \"7f8ab5e0-9bbf-46ed-a20c-7d0f66466035\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424435-sqnfm" Dec 11 15:15:00 crc kubenswrapper[5050]: I1211 15:15:00.286906 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7f8ab5e0-9bbf-46ed-a20c-7d0f66466035-config-volume\") pod \"collect-profiles-29424435-sqnfm\" (UID: \"7f8ab5e0-9bbf-46ed-a20c-7d0f66466035\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424435-sqnfm" Dec 11 15:15:00 crc kubenswrapper[5050]: I1211 15:15:00.294698 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7f8ab5e0-9bbf-46ed-a20c-7d0f66466035-secret-volume\") pod \"collect-profiles-29424435-sqnfm\" (UID: \"7f8ab5e0-9bbf-46ed-a20c-7d0f66466035\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424435-sqnfm" Dec 11 15:15:00 crc kubenswrapper[5050]: I1211 15:15:00.301915 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-87d9q\" (UniqueName: \"kubernetes.io/projected/7f8ab5e0-9bbf-46ed-a20c-7d0f66466035-kube-api-access-87d9q\") pod \"collect-profiles-29424435-sqnfm\" (UID: \"7f8ab5e0-9bbf-46ed-a20c-7d0f66466035\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424435-sqnfm" Dec 11 15:15:00 crc kubenswrapper[5050]: I1211 15:15:00.487366 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29424435-sqnfm" Dec 11 15:15:00 crc kubenswrapper[5050]: I1211 15:15:00.696691 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29424435-sqnfm"] Dec 11 15:15:00 crc kubenswrapper[5050]: W1211 15:15:00.706182 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7f8ab5e0_9bbf_46ed_a20c_7d0f66466035.slice/crio-02e116a85cf052a2ebc48bf9b767beaa3c4440dd91b7ea30392edb7ac705526b WatchSource:0}: Error finding container 02e116a85cf052a2ebc48bf9b767beaa3c4440dd91b7ea30392edb7ac705526b: Status 404 returned error can't find the container with id 02e116a85cf052a2ebc48bf9b767beaa3c4440dd91b7ea30392edb7ac705526b Dec 11 15:15:00 crc kubenswrapper[5050]: I1211 15:15:00.760320 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29424435-sqnfm" event={"ID":"7f8ab5e0-9bbf-46ed-a20c-7d0f66466035","Type":"ContainerStarted","Data":"02e116a85cf052a2ebc48bf9b767beaa3c4440dd91b7ea30392edb7ac705526b"} Dec 11 15:15:01 crc kubenswrapper[5050]: I1211 15:15:01.770364 5050 generic.go:334] "Generic (PLEG): container finished" podID="7f8ab5e0-9bbf-46ed-a20c-7d0f66466035" containerID="8d5b84cb053ecbcb4b7f87a27d8786c4e6b207f0a52800139b0ca31a23fdca72" exitCode=0 Dec 11 15:15:01 crc kubenswrapper[5050]: I1211 15:15:01.770429 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29424435-sqnfm" event={"ID":"7f8ab5e0-9bbf-46ed-a20c-7d0f66466035","Type":"ContainerDied","Data":"8d5b84cb053ecbcb4b7f87a27d8786c4e6b207f0a52800139b0ca31a23fdca72"} Dec 11 15:15:03 crc kubenswrapper[5050]: I1211 15:15:03.058298 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29424435-sqnfm" Dec 11 15:15:03 crc kubenswrapper[5050]: I1211 15:15:03.131822 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-87d9q\" (UniqueName: \"kubernetes.io/projected/7f8ab5e0-9bbf-46ed-a20c-7d0f66466035-kube-api-access-87d9q\") pod \"7f8ab5e0-9bbf-46ed-a20c-7d0f66466035\" (UID: \"7f8ab5e0-9bbf-46ed-a20c-7d0f66466035\") " Dec 11 15:15:03 crc kubenswrapper[5050]: I1211 15:15:03.131884 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7f8ab5e0-9bbf-46ed-a20c-7d0f66466035-config-volume\") pod \"7f8ab5e0-9bbf-46ed-a20c-7d0f66466035\" (UID: \"7f8ab5e0-9bbf-46ed-a20c-7d0f66466035\") " Dec 11 15:15:03 crc kubenswrapper[5050]: I1211 15:15:03.131942 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7f8ab5e0-9bbf-46ed-a20c-7d0f66466035-secret-volume\") pod \"7f8ab5e0-9bbf-46ed-a20c-7d0f66466035\" (UID: \"7f8ab5e0-9bbf-46ed-a20c-7d0f66466035\") " Dec 11 15:15:03 crc kubenswrapper[5050]: I1211 15:15:03.132691 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f8ab5e0-9bbf-46ed-a20c-7d0f66466035-config-volume" (OuterVolumeSpecName: "config-volume") pod "7f8ab5e0-9bbf-46ed-a20c-7d0f66466035" (UID: "7f8ab5e0-9bbf-46ed-a20c-7d0f66466035"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 15:15:03 crc kubenswrapper[5050]: I1211 15:15:03.136717 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f8ab5e0-9bbf-46ed-a20c-7d0f66466035-kube-api-access-87d9q" (OuterVolumeSpecName: "kube-api-access-87d9q") pod "7f8ab5e0-9bbf-46ed-a20c-7d0f66466035" (UID: "7f8ab5e0-9bbf-46ed-a20c-7d0f66466035"). InnerVolumeSpecName "kube-api-access-87d9q". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 15:15:03 crc kubenswrapper[5050]: I1211 15:15:03.136864 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f8ab5e0-9bbf-46ed-a20c-7d0f66466035-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "7f8ab5e0-9bbf-46ed-a20c-7d0f66466035" (UID: "7f8ab5e0-9bbf-46ed-a20c-7d0f66466035"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 15:15:03 crc kubenswrapper[5050]: I1211 15:15:03.234596 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-87d9q\" (UniqueName: \"kubernetes.io/projected/7f8ab5e0-9bbf-46ed-a20c-7d0f66466035-kube-api-access-87d9q\") on node \"crc\" DevicePath \"\"" Dec 11 15:15:03 crc kubenswrapper[5050]: I1211 15:15:03.234653 5050 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7f8ab5e0-9bbf-46ed-a20c-7d0f66466035-config-volume\") on node \"crc\" DevicePath \"\"" Dec 11 15:15:03 crc kubenswrapper[5050]: I1211 15:15:03.234666 5050 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7f8ab5e0-9bbf-46ed-a20c-7d0f66466035-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 11 15:15:03 crc kubenswrapper[5050]: I1211 15:15:03.786144 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29424435-sqnfm" event={"ID":"7f8ab5e0-9bbf-46ed-a20c-7d0f66466035","Type":"ContainerDied","Data":"02e116a85cf052a2ebc48bf9b767beaa3c4440dd91b7ea30392edb7ac705526b"} Dec 11 15:15:03 crc kubenswrapper[5050]: I1211 15:15:03.786411 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="02e116a85cf052a2ebc48bf9b767beaa3c4440dd91b7ea30392edb7ac705526b" Dec 11 15:15:03 crc kubenswrapper[5050]: I1211 15:15:03.786235 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29424435-sqnfm" Dec 11 15:15:04 crc kubenswrapper[5050]: I1211 15:15:04.141290 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29424390-rbjjw"] Dec 11 15:15:04 crc kubenswrapper[5050]: I1211 15:15:04.148293 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29424390-rbjjw"] Dec 11 15:15:05 crc kubenswrapper[5050]: I1211 15:15:05.558886 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2f53ebb7-1953-41d9-a350-67ee00ac6559" path="/var/lib/kubelet/pods/2f53ebb7-1953-41d9-a350-67ee00ac6559/volumes" Dec 11 15:15:17 crc kubenswrapper[5050]: E1211 15:15:17.306823 5050 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 38.102.83.147:48810->38.102.83.147:43539: read tcp 38.102.83.147:48810->38.102.83.147:43539: read: connection reset by peer Dec 11 15:15:20 crc kubenswrapper[5050]: I1211 15:15:20.112909 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Dec 11 15:15:20 crc kubenswrapper[5050]: E1211 15:15:20.114198 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f8ab5e0-9bbf-46ed-a20c-7d0f66466035" containerName="collect-profiles" Dec 11 15:15:20 crc kubenswrapper[5050]: I1211 15:15:20.114239 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f8ab5e0-9bbf-46ed-a20c-7d0f66466035" containerName="collect-profiles" Dec 11 15:15:20 crc kubenswrapper[5050]: I1211 15:15:20.114706 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f8ab5e0-9bbf-46ed-a20c-7d0f66466035" containerName="collect-profiles" Dec 11 15:15:20 crc kubenswrapper[5050]: I1211 15:15:20.116611 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Dec 11 15:15:20 crc kubenswrapper[5050]: I1211 15:15:20.118886 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-t4t27" Dec 11 15:15:20 crc kubenswrapper[5050]: I1211 15:15:20.118889 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Dec 11 15:15:20 crc kubenswrapper[5050]: I1211 15:15:20.124165 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Dec 11 15:15:20 crc kubenswrapper[5050]: I1211 15:15:20.128779 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-1"] Dec 11 15:15:20 crc kubenswrapper[5050]: I1211 15:15:20.131348 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-1" Dec 11 15:15:20 crc kubenswrapper[5050]: I1211 15:15:20.144308 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-2"] Dec 11 15:15:20 crc kubenswrapper[5050]: I1211 15:15:20.146811 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-2" Dec 11 15:15:20 crc kubenswrapper[5050]: I1211 15:15:20.151954 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Dec 11 15:15:20 crc kubenswrapper[5050]: I1211 15:15:20.158859 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-2"] Dec 11 15:15:20 crc kubenswrapper[5050]: I1211 15:15:20.165524 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-1"] Dec 11 15:15:20 crc kubenswrapper[5050]: I1211 15:15:20.214396 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfab5143-8fd9-4772-8804-96c45edbe169-combined-ca-bundle\") pod \"ovsdbserver-nb-2\" (UID: \"cfab5143-8fd9-4772-8804-96c45edbe169\") " pod="openstack/ovsdbserver-nb-2" Dec 11 15:15:20 crc kubenswrapper[5050]: I1211 15:15:20.214441 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2fb89\" (UniqueName: \"kubernetes.io/projected/cfab5143-8fd9-4772-8804-96c45edbe169-kube-api-access-2fb89\") pod \"ovsdbserver-nb-2\" (UID: \"cfab5143-8fd9-4772-8804-96c45edbe169\") " pod="openstack/ovsdbserver-nb-2" Dec 11 15:15:20 crc kubenswrapper[5050]: I1211 15:15:20.214520 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/cfab5143-8fd9-4772-8804-96c45edbe169-ovsdb-rundir\") pod \"ovsdbserver-nb-2\" (UID: \"cfab5143-8fd9-4772-8804-96c45edbe169\") " pod="openstack/ovsdbserver-nb-2" Dec 11 15:15:20 crc kubenswrapper[5050]: I1211 15:15:20.214627 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-4b1ded48-8c8a-4417-b1f0-bf57c8ec15f2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4b1ded48-8c8a-4417-b1f0-bf57c8ec15f2\") pod \"ovsdbserver-nb-2\" (UID: \"cfab5143-8fd9-4772-8804-96c45edbe169\") " pod="openstack/ovsdbserver-nb-2" Dec 11 15:15:20 crc kubenswrapper[5050]: I1211 15:15:20.214666 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cfab5143-8fd9-4772-8804-96c45edbe169-config\") pod \"ovsdbserver-nb-2\" (UID: \"cfab5143-8fd9-4772-8804-96c45edbe169\") " pod="openstack/ovsdbserver-nb-2" Dec 11 15:15:20 crc kubenswrapper[5050]: I1211 15:15:20.214726 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cfab5143-8fd9-4772-8804-96c45edbe169-scripts\") pod \"ovsdbserver-nb-2\" (UID: \"cfab5143-8fd9-4772-8804-96c45edbe169\") " pod="openstack/ovsdbserver-nb-2" Dec 11 15:15:20 crc kubenswrapper[5050]: I1211 15:15:20.314339 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Dec 11 15:15:20 crc kubenswrapper[5050]: I1211 15:15:20.315625 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Dec 11 15:15:20 crc kubenswrapper[5050]: I1211 15:15:20.316168 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/de54385e-2f40-49ff-a722-598787e12fae-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"de54385e-2f40-49ff-a722-598787e12fae\") " pod="openstack/ovsdbserver-nb-0" Dec 11 15:15:20 crc kubenswrapper[5050]: I1211 15:15:20.316257 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/de54385e-2f40-49ff-a722-598787e12fae-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"de54385e-2f40-49ff-a722-598787e12fae\") " pod="openstack/ovsdbserver-nb-0" Dec 11 15:15:20 crc kubenswrapper[5050]: I1211 15:15:20.316314 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de54385e-2f40-49ff-a722-598787e12fae-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"de54385e-2f40-49ff-a722-598787e12fae\") " pod="openstack/ovsdbserver-nb-0" Dec 11 15:15:20 crc kubenswrapper[5050]: I1211 15:15:20.316355 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-c28f2ad9-f9ea-43f5-968c-114f8ad42141\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c28f2ad9-f9ea-43f5-968c-114f8ad42141\") pod \"ovsdbserver-nb-0\" (UID: \"de54385e-2f40-49ff-a722-598787e12fae\") " pod="openstack/ovsdbserver-nb-0" Dec 11 15:15:20 crc kubenswrapper[5050]: I1211 15:15:20.316427 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-ef13899e-f0c9-41e9-aac9-37f01ee4c9e5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ef13899e-f0c9-41e9-aac9-37f01ee4c9e5\") pod \"ovsdbserver-nb-1\" (UID: \"01914887-980b-49ab-ad4a-30fe3b76a8b8\") " pod="openstack/ovsdbserver-nb-1" Dec 11 15:15:20 crc kubenswrapper[5050]: I1211 15:15:20.316471 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfab5143-8fd9-4772-8804-96c45edbe169-combined-ca-bundle\") pod \"ovsdbserver-nb-2\" (UID: \"cfab5143-8fd9-4772-8804-96c45edbe169\") " pod="openstack/ovsdbserver-nb-2" Dec 11 15:15:20 crc kubenswrapper[5050]: I1211 15:15:20.316528 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5pddz\" (UniqueName: \"kubernetes.io/projected/de54385e-2f40-49ff-a722-598787e12fae-kube-api-access-5pddz\") pod \"ovsdbserver-nb-0\" (UID: \"de54385e-2f40-49ff-a722-598787e12fae\") " pod="openstack/ovsdbserver-nb-0" Dec 11 15:15:20 crc kubenswrapper[5050]: I1211 15:15:20.316588 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2fb89\" (UniqueName: \"kubernetes.io/projected/cfab5143-8fd9-4772-8804-96c45edbe169-kube-api-access-2fb89\") pod \"ovsdbserver-nb-2\" (UID: \"cfab5143-8fd9-4772-8804-96c45edbe169\") " pod="openstack/ovsdbserver-nb-2" Dec 11 15:15:20 crc kubenswrapper[5050]: I1211 15:15:20.316627 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01914887-980b-49ab-ad4a-30fe3b76a8b8-config\") pod \"ovsdbserver-nb-1\" (UID: \"01914887-980b-49ab-ad4a-30fe3b76a8b8\") " 
pod="openstack/ovsdbserver-nb-1" Dec 11 15:15:20 crc kubenswrapper[5050]: I1211 15:15:20.316691 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/cfab5143-8fd9-4772-8804-96c45edbe169-ovsdb-rundir\") pod \"ovsdbserver-nb-2\" (UID: \"cfab5143-8fd9-4772-8804-96c45edbe169\") " pod="openstack/ovsdbserver-nb-2" Dec 11 15:15:20 crc kubenswrapper[5050]: I1211 15:15:20.316777 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/01914887-980b-49ab-ad4a-30fe3b76a8b8-scripts\") pod \"ovsdbserver-nb-1\" (UID: \"01914887-980b-49ab-ad4a-30fe3b76a8b8\") " pod="openstack/ovsdbserver-nb-1" Dec 11 15:15:20 crc kubenswrapper[5050]: I1211 15:15:20.316826 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-4b1ded48-8c8a-4417-b1f0-bf57c8ec15f2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4b1ded48-8c8a-4417-b1f0-bf57c8ec15f2\") pod \"ovsdbserver-nb-2\" (UID: \"cfab5143-8fd9-4772-8804-96c45edbe169\") " pod="openstack/ovsdbserver-nb-2" Dec 11 15:15:20 crc kubenswrapper[5050]: I1211 15:15:20.316863 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9d2m\" (UniqueName: \"kubernetes.io/projected/01914887-980b-49ab-ad4a-30fe3b76a8b8-kube-api-access-p9d2m\") pod \"ovsdbserver-nb-1\" (UID: \"01914887-980b-49ab-ad4a-30fe3b76a8b8\") " pod="openstack/ovsdbserver-nb-1" Dec 11 15:15:20 crc kubenswrapper[5050]: I1211 15:15:20.316909 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cfab5143-8fd9-4772-8804-96c45edbe169-config\") pod \"ovsdbserver-nb-2\" (UID: \"cfab5143-8fd9-4772-8804-96c45edbe169\") " pod="openstack/ovsdbserver-nb-2" Dec 11 15:15:20 crc kubenswrapper[5050]: I1211 15:15:20.316955 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de54385e-2f40-49ff-a722-598787e12fae-config\") pod \"ovsdbserver-nb-0\" (UID: \"de54385e-2f40-49ff-a722-598787e12fae\") " pod="openstack/ovsdbserver-nb-0" Dec 11 15:15:20 crc kubenswrapper[5050]: I1211 15:15:20.317053 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/01914887-980b-49ab-ad4a-30fe3b76a8b8-ovsdb-rundir\") pod \"ovsdbserver-nb-1\" (UID: \"01914887-980b-49ab-ad4a-30fe3b76a8b8\") " pod="openstack/ovsdbserver-nb-1" Dec 11 15:15:20 crc kubenswrapper[5050]: I1211 15:15:20.317104 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01914887-980b-49ab-ad4a-30fe3b76a8b8-combined-ca-bundle\") pod \"ovsdbserver-nb-1\" (UID: \"01914887-980b-49ab-ad4a-30fe3b76a8b8\") " pod="openstack/ovsdbserver-nb-1" Dec 11 15:15:20 crc kubenswrapper[5050]: I1211 15:15:20.317142 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cfab5143-8fd9-4772-8804-96c45edbe169-scripts\") pod \"ovsdbserver-nb-2\" (UID: \"cfab5143-8fd9-4772-8804-96c45edbe169\") " pod="openstack/ovsdbserver-nb-2" Dec 11 15:15:20 crc kubenswrapper[5050]: I1211 15:15:20.318348 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/cfab5143-8fd9-4772-8804-96c45edbe169-ovsdb-rundir\") pod \"ovsdbserver-nb-2\" (UID: \"cfab5143-8fd9-4772-8804-96c45edbe169\") " pod="openstack/ovsdbserver-nb-2" Dec 11 15:15:20 crc kubenswrapper[5050]: I1211 15:15:20.318482 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cfab5143-8fd9-4772-8804-96c45edbe169-config\") pod \"ovsdbserver-nb-2\" (UID: \"cfab5143-8fd9-4772-8804-96c45edbe169\") " pod="openstack/ovsdbserver-nb-2" Dec 11 15:15:20 crc kubenswrapper[5050]: I1211 15:15:20.319848 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cfab5143-8fd9-4772-8804-96c45edbe169-scripts\") pod \"ovsdbserver-nb-2\" (UID: \"cfab5143-8fd9-4772-8804-96c45edbe169\") " pod="openstack/ovsdbserver-nb-2" Dec 11 15:15:20 crc kubenswrapper[5050]: I1211 15:15:20.328897 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Dec 11 15:15:20 crc kubenswrapper[5050]: I1211 15:15:20.329996 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-whqpr" Dec 11 15:15:20 crc kubenswrapper[5050]: I1211 15:15:20.330450 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Dec 11 15:15:20 crc kubenswrapper[5050]: I1211 15:15:20.332148 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfab5143-8fd9-4772-8804-96c45edbe169-combined-ca-bundle\") pod \"ovsdbserver-nb-2\" (UID: \"cfab5143-8fd9-4772-8804-96c45edbe169\") " pod="openstack/ovsdbserver-nb-2" Dec 11 15:15:20 crc kubenswrapper[5050]: I1211 15:15:20.332190 5050 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Dec 11 15:15:20 crc kubenswrapper[5050]: I1211 15:15:20.332240 5050 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-4b1ded48-8c8a-4417-b1f0-bf57c8ec15f2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4b1ded48-8c8a-4417-b1f0-bf57c8ec15f2\") pod \"ovsdbserver-nb-2\" (UID: \"cfab5143-8fd9-4772-8804-96c45edbe169\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/c4db75646537eaca7d8eaa722046fc084ac7655b81e9943c8d4c98b5ee3037af/globalmount\"" pod="openstack/ovsdbserver-nb-2" Dec 11 15:15:20 crc kubenswrapper[5050]: I1211 15:15:20.365146 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Dec 11 15:15:20 crc kubenswrapper[5050]: I1211 15:15:20.372914 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2fb89\" (UniqueName: \"kubernetes.io/projected/cfab5143-8fd9-4772-8804-96c45edbe169-kube-api-access-2fb89\") pod \"ovsdbserver-nb-2\" (UID: \"cfab5143-8fd9-4772-8804-96c45edbe169\") " pod="openstack/ovsdbserver-nb-2" Dec 11 15:15:20 crc kubenswrapper[5050]: I1211 15:15:20.380173 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-1"] Dec 11 15:15:20 crc kubenswrapper[5050]: I1211 15:15:20.381798 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-1" Dec 11 15:15:20 crc kubenswrapper[5050]: I1211 15:15:20.390092 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-4b1ded48-8c8a-4417-b1f0-bf57c8ec15f2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4b1ded48-8c8a-4417-b1f0-bf57c8ec15f2\") pod \"ovsdbserver-nb-2\" (UID: \"cfab5143-8fd9-4772-8804-96c45edbe169\") " pod="openstack/ovsdbserver-nb-2" Dec 11 15:15:20 crc kubenswrapper[5050]: I1211 15:15:20.397853 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-1"] Dec 11 15:15:20 crc kubenswrapper[5050]: I1211 15:15:20.406297 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-2"] Dec 11 15:15:20 crc kubenswrapper[5050]: I1211 15:15:20.407635 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-2" Dec 11 15:15:20 crc kubenswrapper[5050]: I1211 15:15:20.413086 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-2"] Dec 11 15:15:20 crc kubenswrapper[5050]: I1211 15:15:20.418806 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/656a6cd1-5588-4a14-a6f2-8252ad2894cb-config\") pod \"ovsdbserver-sb-0\" (UID: \"656a6cd1-5588-4a14-a6f2-8252ad2894cb\") " pod="openstack/ovsdbserver-sb-0" Dec 11 15:15:20 crc kubenswrapper[5050]: I1211 15:15:20.418860 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de54385e-2f40-49ff-a722-598787e12fae-config\") pod \"ovsdbserver-nb-0\" (UID: \"de54385e-2f40-49ff-a722-598787e12fae\") " pod="openstack/ovsdbserver-nb-0" Dec 11 15:15:20 crc kubenswrapper[5050]: I1211 15:15:20.418927 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-e12f0186-bde8-4f99-80b5-a6733495c8e1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e12f0186-bde8-4f99-80b5-a6733495c8e1\") pod \"ovsdbserver-sb-0\" (UID: \"656a6cd1-5588-4a14-a6f2-8252ad2894cb\") " pod="openstack/ovsdbserver-sb-0" Dec 11 15:15:20 crc kubenswrapper[5050]: I1211 15:15:20.418966 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/656a6cd1-5588-4a14-a6f2-8252ad2894cb-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"656a6cd1-5588-4a14-a6f2-8252ad2894cb\") " pod="openstack/ovsdbserver-sb-0" Dec 11 15:15:20 crc kubenswrapper[5050]: I1211 15:15:20.419246 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/01914887-980b-49ab-ad4a-30fe3b76a8b8-ovsdb-rundir\") pod \"ovsdbserver-nb-1\" (UID: \"01914887-980b-49ab-ad4a-30fe3b76a8b8\") " pod="openstack/ovsdbserver-nb-1" Dec 11 15:15:20 crc kubenswrapper[5050]: I1211 15:15:20.419672 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de54385e-2f40-49ff-a722-598787e12fae-config\") pod \"ovsdbserver-nb-0\" (UID: \"de54385e-2f40-49ff-a722-598787e12fae\") " pod="openstack/ovsdbserver-nb-0" Dec 11 15:15:20 crc kubenswrapper[5050]: I1211 15:15:20.419941 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/01914887-980b-49ab-ad4a-30fe3b76a8b8-ovsdb-rundir\") pod 
\"ovsdbserver-nb-1\" (UID: \"01914887-980b-49ab-ad4a-30fe3b76a8b8\") " pod="openstack/ovsdbserver-nb-1" Dec 11 15:15:20 crc kubenswrapper[5050]: I1211 15:15:20.419986 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01914887-980b-49ab-ad4a-30fe3b76a8b8-combined-ca-bundle\") pod \"ovsdbserver-nb-1\" (UID: \"01914887-980b-49ab-ad4a-30fe3b76a8b8\") " pod="openstack/ovsdbserver-nb-1" Dec 11 15:15:20 crc kubenswrapper[5050]: I1211 15:15:20.420040 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/de54385e-2f40-49ff-a722-598787e12fae-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"de54385e-2f40-49ff-a722-598787e12fae\") " pod="openstack/ovsdbserver-nb-0" Dec 11 15:15:20 crc kubenswrapper[5050]: I1211 15:15:20.420063 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/de54385e-2f40-49ff-a722-598787e12fae-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"de54385e-2f40-49ff-a722-598787e12fae\") " pod="openstack/ovsdbserver-nb-0" Dec 11 15:15:20 crc kubenswrapper[5050]: I1211 15:15:20.423593 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/de54385e-2f40-49ff-a722-598787e12fae-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"de54385e-2f40-49ff-a722-598787e12fae\") " pod="openstack/ovsdbserver-nb-0" Dec 11 15:15:20 crc kubenswrapper[5050]: I1211 15:15:20.424074 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/de54385e-2f40-49ff-a722-598787e12fae-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"de54385e-2f40-49ff-a722-598787e12fae\") " pod="openstack/ovsdbserver-nb-0" Dec 11 15:15:20 crc kubenswrapper[5050]: I1211 15:15:20.424135 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de54385e-2f40-49ff-a722-598787e12fae-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"de54385e-2f40-49ff-a722-598787e12fae\") " pod="openstack/ovsdbserver-nb-0" Dec 11 15:15:20 crc kubenswrapper[5050]: I1211 15:15:20.424162 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-c28f2ad9-f9ea-43f5-968c-114f8ad42141\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c28f2ad9-f9ea-43f5-968c-114f8ad42141\") pod \"ovsdbserver-nb-0\" (UID: \"de54385e-2f40-49ff-a722-598787e12fae\") " pod="openstack/ovsdbserver-nb-0" Dec 11 15:15:20 crc kubenswrapper[5050]: I1211 15:15:20.424190 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/656a6cd1-5588-4a14-a6f2-8252ad2894cb-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"656a6cd1-5588-4a14-a6f2-8252ad2894cb\") " pod="openstack/ovsdbserver-sb-0" Dec 11 15:15:20 crc kubenswrapper[5050]: I1211 15:15:20.424245 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-ef13899e-f0c9-41e9-aac9-37f01ee4c9e5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ef13899e-f0c9-41e9-aac9-37f01ee4c9e5\") pod \"ovsdbserver-nb-1\" (UID: \"01914887-980b-49ab-ad4a-30fe3b76a8b8\") " pod="openstack/ovsdbserver-nb-1" Dec 11 15:15:20 crc kubenswrapper[5050]: I1211 15:15:20.424266 5050 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-5pddz\" (UniqueName: \"kubernetes.io/projected/de54385e-2f40-49ff-a722-598787e12fae-kube-api-access-5pddz\") pod \"ovsdbserver-nb-0\" (UID: \"de54385e-2f40-49ff-a722-598787e12fae\") " pod="openstack/ovsdbserver-nb-0" Dec 11 15:15:20 crc kubenswrapper[5050]: I1211 15:15:20.424288 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01914887-980b-49ab-ad4a-30fe3b76a8b8-config\") pod \"ovsdbserver-nb-1\" (UID: \"01914887-980b-49ab-ad4a-30fe3b76a8b8\") " pod="openstack/ovsdbserver-nb-1" Dec 11 15:15:20 crc kubenswrapper[5050]: I1211 15:15:20.424358 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lf798\" (UniqueName: \"kubernetes.io/projected/656a6cd1-5588-4a14-a6f2-8252ad2894cb-kube-api-access-lf798\") pod \"ovsdbserver-sb-0\" (UID: \"656a6cd1-5588-4a14-a6f2-8252ad2894cb\") " pod="openstack/ovsdbserver-sb-0" Dec 11 15:15:20 crc kubenswrapper[5050]: I1211 15:15:20.424395 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/01914887-980b-49ab-ad4a-30fe3b76a8b8-scripts\") pod \"ovsdbserver-nb-1\" (UID: \"01914887-980b-49ab-ad4a-30fe3b76a8b8\") " pod="openstack/ovsdbserver-nb-1" Dec 11 15:15:20 crc kubenswrapper[5050]: I1211 15:15:20.424416 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/656a6cd1-5588-4a14-a6f2-8252ad2894cb-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"656a6cd1-5588-4a14-a6f2-8252ad2894cb\") " pod="openstack/ovsdbserver-sb-0" Dec 11 15:15:20 crc kubenswrapper[5050]: I1211 15:15:20.424437 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p9d2m\" (UniqueName: \"kubernetes.io/projected/01914887-980b-49ab-ad4a-30fe3b76a8b8-kube-api-access-p9d2m\") pod \"ovsdbserver-nb-1\" (UID: \"01914887-980b-49ab-ad4a-30fe3b76a8b8\") " pod="openstack/ovsdbserver-nb-1" Dec 11 15:15:20 crc kubenswrapper[5050]: I1211 15:15:20.425369 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01914887-980b-49ab-ad4a-30fe3b76a8b8-config\") pod \"ovsdbserver-nb-1\" (UID: \"01914887-980b-49ab-ad4a-30fe3b76a8b8\") " pod="openstack/ovsdbserver-nb-1" Dec 11 15:15:20 crc kubenswrapper[5050]: I1211 15:15:20.425708 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01914887-980b-49ab-ad4a-30fe3b76a8b8-combined-ca-bundle\") pod \"ovsdbserver-nb-1\" (UID: \"01914887-980b-49ab-ad4a-30fe3b76a8b8\") " pod="openstack/ovsdbserver-nb-1" Dec 11 15:15:20 crc kubenswrapper[5050]: I1211 15:15:20.427347 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/01914887-980b-49ab-ad4a-30fe3b76a8b8-scripts\") pod \"ovsdbserver-nb-1\" (UID: \"01914887-980b-49ab-ad4a-30fe3b76a8b8\") " pod="openstack/ovsdbserver-nb-1" Dec 11 15:15:20 crc kubenswrapper[5050]: I1211 15:15:20.430166 5050 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Dec 11 15:15:20 crc kubenswrapper[5050]: I1211 15:15:20.430192 5050 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-ef13899e-f0c9-41e9-aac9-37f01ee4c9e5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ef13899e-f0c9-41e9-aac9-37f01ee4c9e5\") pod \"ovsdbserver-nb-1\" (UID: \"01914887-980b-49ab-ad4a-30fe3b76a8b8\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/771cd00cff99f1b25d9c2c2c87cedd89cd323f012897cb336bf1dc7df90de4c2/globalmount\"" pod="openstack/ovsdbserver-nb-1" Dec 11 15:15:20 crc kubenswrapper[5050]: I1211 15:15:20.430436 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de54385e-2f40-49ff-a722-598787e12fae-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"de54385e-2f40-49ff-a722-598787e12fae\") " pod="openstack/ovsdbserver-nb-0" Dec 11 15:15:20 crc kubenswrapper[5050]: I1211 15:15:20.431001 5050 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Dec 11 15:15:20 crc kubenswrapper[5050]: I1211 15:15:20.431042 5050 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-c28f2ad9-f9ea-43f5-968c-114f8ad42141\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c28f2ad9-f9ea-43f5-968c-114f8ad42141\") pod \"ovsdbserver-nb-0\" (UID: \"de54385e-2f40-49ff-a722-598787e12fae\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/d522bc3e5a9fc3bcc0ad0d34554cd4d39aa17b73ddc7b223bd4abc373d6f95e5/globalmount\"" pod="openstack/ovsdbserver-nb-0" Dec 11 15:15:20 crc kubenswrapper[5050]: I1211 15:15:20.439660 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5pddz\" (UniqueName: \"kubernetes.io/projected/de54385e-2f40-49ff-a722-598787e12fae-kube-api-access-5pddz\") pod \"ovsdbserver-nb-0\" (UID: \"de54385e-2f40-49ff-a722-598787e12fae\") " pod="openstack/ovsdbserver-nb-0" Dec 11 15:15:20 crc kubenswrapper[5050]: I1211 15:15:20.442468 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p9d2m\" (UniqueName: \"kubernetes.io/projected/01914887-980b-49ab-ad4a-30fe3b76a8b8-kube-api-access-p9d2m\") pod \"ovsdbserver-nb-1\" (UID: \"01914887-980b-49ab-ad4a-30fe3b76a8b8\") " pod="openstack/ovsdbserver-nb-1" Dec 11 15:15:20 crc kubenswrapper[5050]: I1211 15:15:20.461281 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-c28f2ad9-f9ea-43f5-968c-114f8ad42141\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-c28f2ad9-f9ea-43f5-968c-114f8ad42141\") pod \"ovsdbserver-nb-0\" (UID: \"de54385e-2f40-49ff-a722-598787e12fae\") " pod="openstack/ovsdbserver-nb-0" Dec 11 15:15:20 crc kubenswrapper[5050]: I1211 15:15:20.468270 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-ef13899e-f0c9-41e9-aac9-37f01ee4c9e5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ef13899e-f0c9-41e9-aac9-37f01ee4c9e5\") pod \"ovsdbserver-nb-1\" (UID: \"01914887-980b-49ab-ad4a-30fe3b76a8b8\") " pod="openstack/ovsdbserver-nb-1" Dec 11 15:15:20 crc kubenswrapper[5050]: I1211 15:15:20.477826 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-2" Dec 11 15:15:20 crc kubenswrapper[5050]: I1211 15:15:20.525977 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lf798\" (UniqueName: \"kubernetes.io/projected/656a6cd1-5588-4a14-a6f2-8252ad2894cb-kube-api-access-lf798\") pod \"ovsdbserver-sb-0\" (UID: \"656a6cd1-5588-4a14-a6f2-8252ad2894cb\") " pod="openstack/ovsdbserver-sb-0" Dec 11 15:15:20 crc kubenswrapper[5050]: I1211 15:15:20.526078 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a48ff80-6e64-4fed-bca4-85204bb3749e-combined-ca-bundle\") pod \"ovsdbserver-sb-1\" (UID: \"2a48ff80-6e64-4fed-bca4-85204bb3749e\") " pod="openstack/ovsdbserver-sb-1" Dec 11 15:15:20 crc kubenswrapper[5050]: I1211 15:15:20.526296 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/656a6cd1-5588-4a14-a6f2-8252ad2894cb-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"656a6cd1-5588-4a14-a6f2-8252ad2894cb\") " pod="openstack/ovsdbserver-sb-0" Dec 11 15:15:20 crc kubenswrapper[5050]: I1211 15:15:20.526335 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/443aa8a8-83bf-4d4a-84ea-baebc02bcf8a-scripts\") pod \"ovsdbserver-sb-2\" (UID: \"443aa8a8-83bf-4d4a-84ea-baebc02bcf8a\") " pod="openstack/ovsdbserver-sb-2" Dec 11 15:15:20 crc kubenswrapper[5050]: I1211 15:15:20.526383 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a48ff80-6e64-4fed-bca4-85204bb3749e-config\") pod \"ovsdbserver-sb-1\" (UID: \"2a48ff80-6e64-4fed-bca4-85204bb3749e\") " pod="openstack/ovsdbserver-sb-1" Dec 11 15:15:20 crc kubenswrapper[5050]: I1211 15:15:20.526401 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/656a6cd1-5588-4a14-a6f2-8252ad2894cb-config\") pod \"ovsdbserver-sb-0\" (UID: \"656a6cd1-5588-4a14-a6f2-8252ad2894cb\") " pod="openstack/ovsdbserver-sb-0" Dec 11 15:15:20 crc kubenswrapper[5050]: I1211 15:15:20.526432 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-e12f0186-bde8-4f99-80b5-a6733495c8e1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e12f0186-bde8-4f99-80b5-a6733495c8e1\") pod \"ovsdbserver-sb-0\" (UID: \"656a6cd1-5588-4a14-a6f2-8252ad2894cb\") " pod="openstack/ovsdbserver-sb-0" Dec 11 15:15:20 crc kubenswrapper[5050]: I1211 15:15:20.526450 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-f5d8ea54-721f-42ab-a785-1c0c685d35cd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d8ea54-721f-42ab-a785-1c0c685d35cd\") pod \"ovsdbserver-sb-1\" (UID: \"2a48ff80-6e64-4fed-bca4-85204bb3749e\") " pod="openstack/ovsdbserver-sb-1" Dec 11 15:15:20 crc kubenswrapper[5050]: I1211 15:15:20.526472 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p26dx\" (UniqueName: \"kubernetes.io/projected/2a48ff80-6e64-4fed-bca4-85204bb3749e-kube-api-access-p26dx\") pod \"ovsdbserver-sb-1\" (UID: \"2a48ff80-6e64-4fed-bca4-85204bb3749e\") " pod="openstack/ovsdbserver-sb-1" Dec 11 15:15:20 crc kubenswrapper[5050]: 
I1211 15:15:20.526492 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/656a6cd1-5588-4a14-a6f2-8252ad2894cb-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"656a6cd1-5588-4a14-a6f2-8252ad2894cb\") " pod="openstack/ovsdbserver-sb-0" Dec 11 15:15:20 crc kubenswrapper[5050]: I1211 15:15:20.526657 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/443aa8a8-83bf-4d4a-84ea-baebc02bcf8a-config\") pod \"ovsdbserver-sb-2\" (UID: \"443aa8a8-83bf-4d4a-84ea-baebc02bcf8a\") " pod="openstack/ovsdbserver-sb-2" Dec 11 15:15:20 crc kubenswrapper[5050]: I1211 15:15:20.526754 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2a48ff80-6e64-4fed-bca4-85204bb3749e-scripts\") pod \"ovsdbserver-sb-1\" (UID: \"2a48ff80-6e64-4fed-bca4-85204bb3749e\") " pod="openstack/ovsdbserver-sb-1" Dec 11 15:15:20 crc kubenswrapper[5050]: I1211 15:15:20.526780 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/2a48ff80-6e64-4fed-bca4-85204bb3749e-ovsdb-rundir\") pod \"ovsdbserver-sb-1\" (UID: \"2a48ff80-6e64-4fed-bca4-85204bb3749e\") " pod="openstack/ovsdbserver-sb-1" Dec 11 15:15:20 crc kubenswrapper[5050]: I1211 15:15:20.526794 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/443aa8a8-83bf-4d4a-84ea-baebc02bcf8a-ovsdb-rundir\") pod \"ovsdbserver-sb-2\" (UID: \"443aa8a8-83bf-4d4a-84ea-baebc02bcf8a\") " pod="openstack/ovsdbserver-sb-2" Dec 11 15:15:20 crc kubenswrapper[5050]: I1211 15:15:20.526818 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/656a6cd1-5588-4a14-a6f2-8252ad2894cb-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"656a6cd1-5588-4a14-a6f2-8252ad2894cb\") " pod="openstack/ovsdbserver-sb-0" Dec 11 15:15:20 crc kubenswrapper[5050]: I1211 15:15:20.526851 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdg9d\" (UniqueName: \"kubernetes.io/projected/443aa8a8-83bf-4d4a-84ea-baebc02bcf8a-kube-api-access-qdg9d\") pod \"ovsdbserver-sb-2\" (UID: \"443aa8a8-83bf-4d4a-84ea-baebc02bcf8a\") " pod="openstack/ovsdbserver-sb-2" Dec 11 15:15:20 crc kubenswrapper[5050]: I1211 15:15:20.526882 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-02d241b1-4e2a-4380-8962-22435cfd5332\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-02d241b1-4e2a-4380-8962-22435cfd5332\") pod \"ovsdbserver-sb-2\" (UID: \"443aa8a8-83bf-4d4a-84ea-baebc02bcf8a\") " pod="openstack/ovsdbserver-sb-2" Dec 11 15:15:20 crc kubenswrapper[5050]: I1211 15:15:20.526906 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/443aa8a8-83bf-4d4a-84ea-baebc02bcf8a-combined-ca-bundle\") pod \"ovsdbserver-sb-2\" (UID: \"443aa8a8-83bf-4d4a-84ea-baebc02bcf8a\") " pod="openstack/ovsdbserver-sb-2" Dec 11 15:15:20 crc kubenswrapper[5050]: I1211 15:15:20.527917 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: 
\"kubernetes.io/empty-dir/656a6cd1-5588-4a14-a6f2-8252ad2894cb-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"656a6cd1-5588-4a14-a6f2-8252ad2894cb\") " pod="openstack/ovsdbserver-sb-0" Dec 11 15:15:20 crc kubenswrapper[5050]: I1211 15:15:20.528126 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/656a6cd1-5588-4a14-a6f2-8252ad2894cb-config\") pod \"ovsdbserver-sb-0\" (UID: \"656a6cd1-5588-4a14-a6f2-8252ad2894cb\") " pod="openstack/ovsdbserver-sb-0" Dec 11 15:15:20 crc kubenswrapper[5050]: I1211 15:15:20.528138 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/656a6cd1-5588-4a14-a6f2-8252ad2894cb-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"656a6cd1-5588-4a14-a6f2-8252ad2894cb\") " pod="openstack/ovsdbserver-sb-0" Dec 11 15:15:20 crc kubenswrapper[5050]: I1211 15:15:20.530333 5050 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Dec 11 15:15:20 crc kubenswrapper[5050]: I1211 15:15:20.530365 5050 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-e12f0186-bde8-4f99-80b5-a6733495c8e1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e12f0186-bde8-4f99-80b5-a6733495c8e1\") pod \"ovsdbserver-sb-0\" (UID: \"656a6cd1-5588-4a14-a6f2-8252ad2894cb\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/079cd63cf4368a8d7bca283eb2d6365cc7f746b8a95063552ff68884c15a0cb5/globalmount\"" pod="openstack/ovsdbserver-sb-0" Dec 11 15:15:20 crc kubenswrapper[5050]: I1211 15:15:20.534952 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/656a6cd1-5588-4a14-a6f2-8252ad2894cb-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"656a6cd1-5588-4a14-a6f2-8252ad2894cb\") " pod="openstack/ovsdbserver-sb-0" Dec 11 15:15:20 crc kubenswrapper[5050]: I1211 15:15:20.549518 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lf798\" (UniqueName: \"kubernetes.io/projected/656a6cd1-5588-4a14-a6f2-8252ad2894cb-kube-api-access-lf798\") pod \"ovsdbserver-sb-0\" (UID: \"656a6cd1-5588-4a14-a6f2-8252ad2894cb\") " pod="openstack/ovsdbserver-sb-0" Dec 11 15:15:20 crc kubenswrapper[5050]: I1211 15:15:20.561367 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-e12f0186-bde8-4f99-80b5-a6733495c8e1\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e12f0186-bde8-4f99-80b5-a6733495c8e1\") pod \"ovsdbserver-sb-0\" (UID: \"656a6cd1-5588-4a14-a6f2-8252ad2894cb\") " pod="openstack/ovsdbserver-sb-0" Dec 11 15:15:20 crc kubenswrapper[5050]: I1211 15:15:20.630083 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qdg9d\" (UniqueName: \"kubernetes.io/projected/443aa8a8-83bf-4d4a-84ea-baebc02bcf8a-kube-api-access-qdg9d\") pod \"ovsdbserver-sb-2\" (UID: \"443aa8a8-83bf-4d4a-84ea-baebc02bcf8a\") " pod="openstack/ovsdbserver-sb-2" Dec 11 15:15:20 crc kubenswrapper[5050]: I1211 15:15:20.630271 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-02d241b1-4e2a-4380-8962-22435cfd5332\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-02d241b1-4e2a-4380-8962-22435cfd5332\") pod \"ovsdbserver-sb-2\" (UID: \"443aa8a8-83bf-4d4a-84ea-baebc02bcf8a\") " 
pod="openstack/ovsdbserver-sb-2" Dec 11 15:15:20 crc kubenswrapper[5050]: I1211 15:15:20.630302 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/443aa8a8-83bf-4d4a-84ea-baebc02bcf8a-combined-ca-bundle\") pod \"ovsdbserver-sb-2\" (UID: \"443aa8a8-83bf-4d4a-84ea-baebc02bcf8a\") " pod="openstack/ovsdbserver-sb-2" Dec 11 15:15:20 crc kubenswrapper[5050]: I1211 15:15:20.630447 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a48ff80-6e64-4fed-bca4-85204bb3749e-combined-ca-bundle\") pod \"ovsdbserver-sb-1\" (UID: \"2a48ff80-6e64-4fed-bca4-85204bb3749e\") " pod="openstack/ovsdbserver-sb-1" Dec 11 15:15:20 crc kubenswrapper[5050]: I1211 15:15:20.630490 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/443aa8a8-83bf-4d4a-84ea-baebc02bcf8a-scripts\") pod \"ovsdbserver-sb-2\" (UID: \"443aa8a8-83bf-4d4a-84ea-baebc02bcf8a\") " pod="openstack/ovsdbserver-sb-2" Dec 11 15:15:20 crc kubenswrapper[5050]: I1211 15:15:20.630533 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a48ff80-6e64-4fed-bca4-85204bb3749e-config\") pod \"ovsdbserver-sb-1\" (UID: \"2a48ff80-6e64-4fed-bca4-85204bb3749e\") " pod="openstack/ovsdbserver-sb-1" Dec 11 15:15:20 crc kubenswrapper[5050]: I1211 15:15:20.630601 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-f5d8ea54-721f-42ab-a785-1c0c685d35cd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d8ea54-721f-42ab-a785-1c0c685d35cd\") pod \"ovsdbserver-sb-1\" (UID: \"2a48ff80-6e64-4fed-bca4-85204bb3749e\") " pod="openstack/ovsdbserver-sb-1" Dec 11 15:15:20 crc kubenswrapper[5050]: I1211 15:15:20.630636 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p26dx\" (UniqueName: \"kubernetes.io/projected/2a48ff80-6e64-4fed-bca4-85204bb3749e-kube-api-access-p26dx\") pod \"ovsdbserver-sb-1\" (UID: \"2a48ff80-6e64-4fed-bca4-85204bb3749e\") " pod="openstack/ovsdbserver-sb-1" Dec 11 15:15:20 crc kubenswrapper[5050]: I1211 15:15:20.630782 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/443aa8a8-83bf-4d4a-84ea-baebc02bcf8a-config\") pod \"ovsdbserver-sb-2\" (UID: \"443aa8a8-83bf-4d4a-84ea-baebc02bcf8a\") " pod="openstack/ovsdbserver-sb-2" Dec 11 15:15:20 crc kubenswrapper[5050]: I1211 15:15:20.630873 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2a48ff80-6e64-4fed-bca4-85204bb3749e-scripts\") pod \"ovsdbserver-sb-1\" (UID: \"2a48ff80-6e64-4fed-bca4-85204bb3749e\") " pod="openstack/ovsdbserver-sb-1" Dec 11 15:15:20 crc kubenswrapper[5050]: I1211 15:15:20.630899 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/2a48ff80-6e64-4fed-bca4-85204bb3749e-ovsdb-rundir\") pod \"ovsdbserver-sb-1\" (UID: \"2a48ff80-6e64-4fed-bca4-85204bb3749e\") " pod="openstack/ovsdbserver-sb-1" Dec 11 15:15:20 crc kubenswrapper[5050]: I1211 15:15:20.630951 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: 
\"kubernetes.io/empty-dir/443aa8a8-83bf-4d4a-84ea-baebc02bcf8a-ovsdb-rundir\") pod \"ovsdbserver-sb-2\" (UID: \"443aa8a8-83bf-4d4a-84ea-baebc02bcf8a\") " pod="openstack/ovsdbserver-sb-2" Dec 11 15:15:20 crc kubenswrapper[5050]: I1211 15:15:20.631711 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/443aa8a8-83bf-4d4a-84ea-baebc02bcf8a-ovsdb-rundir\") pod \"ovsdbserver-sb-2\" (UID: \"443aa8a8-83bf-4d4a-84ea-baebc02bcf8a\") " pod="openstack/ovsdbserver-sb-2" Dec 11 15:15:20 crc kubenswrapper[5050]: I1211 15:15:20.632127 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/2a48ff80-6e64-4fed-bca4-85204bb3749e-ovsdb-rundir\") pod \"ovsdbserver-sb-1\" (UID: \"2a48ff80-6e64-4fed-bca4-85204bb3749e\") " pod="openstack/ovsdbserver-sb-1" Dec 11 15:15:20 crc kubenswrapper[5050]: I1211 15:15:20.632226 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a48ff80-6e64-4fed-bca4-85204bb3749e-config\") pod \"ovsdbserver-sb-1\" (UID: \"2a48ff80-6e64-4fed-bca4-85204bb3749e\") " pod="openstack/ovsdbserver-sb-1" Dec 11 15:15:20 crc kubenswrapper[5050]: I1211 15:15:20.632395 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/443aa8a8-83bf-4d4a-84ea-baebc02bcf8a-scripts\") pod \"ovsdbserver-sb-2\" (UID: \"443aa8a8-83bf-4d4a-84ea-baebc02bcf8a\") " pod="openstack/ovsdbserver-sb-2" Dec 11 15:15:20 crc kubenswrapper[5050]: I1211 15:15:20.632931 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/443aa8a8-83bf-4d4a-84ea-baebc02bcf8a-config\") pod \"ovsdbserver-sb-2\" (UID: \"443aa8a8-83bf-4d4a-84ea-baebc02bcf8a\") " pod="openstack/ovsdbserver-sb-2" Dec 11 15:15:20 crc kubenswrapper[5050]: I1211 15:15:20.633365 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2a48ff80-6e64-4fed-bca4-85204bb3749e-scripts\") pod \"ovsdbserver-sb-1\" (UID: \"2a48ff80-6e64-4fed-bca4-85204bb3749e\") " pod="openstack/ovsdbserver-sb-1" Dec 11 15:15:20 crc kubenswrapper[5050]: I1211 15:15:20.639587 5050 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Dec 11 15:15:20 crc kubenswrapper[5050]: I1211 15:15:20.639631 5050 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-02d241b1-4e2a-4380-8962-22435cfd5332\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-02d241b1-4e2a-4380-8962-22435cfd5332\") pod \"ovsdbserver-sb-2\" (UID: \"443aa8a8-83bf-4d4a-84ea-baebc02bcf8a\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b8a830ac6c9579f619223baf1a3b4c88f7c9e1dbb54635060de8c7ff1b1d33e8/globalmount\"" pod="openstack/ovsdbserver-sb-2" Dec 11 15:15:20 crc kubenswrapper[5050]: I1211 15:15:20.639738 5050 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Dec 11 15:15:20 crc kubenswrapper[5050]: I1211 15:15:20.639799 5050 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-f5d8ea54-721f-42ab-a785-1c0c685d35cd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d8ea54-721f-42ab-a785-1c0c685d35cd\") pod \"ovsdbserver-sb-1\" (UID: \"2a48ff80-6e64-4fed-bca4-85204bb3749e\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1c3e543320abd99a6308694a3b0a902910e456171816232813cbad856494b2b0/globalmount\"" pod="openstack/ovsdbserver-sb-1" Dec 11 15:15:20 crc kubenswrapper[5050]: I1211 15:15:20.642377 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a48ff80-6e64-4fed-bca4-85204bb3749e-combined-ca-bundle\") pod \"ovsdbserver-sb-1\" (UID: \"2a48ff80-6e64-4fed-bca4-85204bb3749e\") " pod="openstack/ovsdbserver-sb-1" Dec 11 15:15:20 crc kubenswrapper[5050]: I1211 15:15:20.642973 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/443aa8a8-83bf-4d4a-84ea-baebc02bcf8a-combined-ca-bundle\") pod \"ovsdbserver-sb-2\" (UID: \"443aa8a8-83bf-4d4a-84ea-baebc02bcf8a\") " pod="openstack/ovsdbserver-sb-2" Dec 11 15:15:20 crc kubenswrapper[5050]: I1211 15:15:20.652121 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p26dx\" (UniqueName: \"kubernetes.io/projected/2a48ff80-6e64-4fed-bca4-85204bb3749e-kube-api-access-p26dx\") pod \"ovsdbserver-sb-1\" (UID: \"2a48ff80-6e64-4fed-bca4-85204bb3749e\") " pod="openstack/ovsdbserver-sb-1" Dec 11 15:15:20 crc kubenswrapper[5050]: I1211 15:15:20.654143 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qdg9d\" (UniqueName: \"kubernetes.io/projected/443aa8a8-83bf-4d4a-84ea-baebc02bcf8a-kube-api-access-qdg9d\") pod \"ovsdbserver-sb-2\" (UID: \"443aa8a8-83bf-4d4a-84ea-baebc02bcf8a\") " pod="openstack/ovsdbserver-sb-2" Dec 11 15:15:20 crc kubenswrapper[5050]: I1211 15:15:20.676572 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-02d241b1-4e2a-4380-8962-22435cfd5332\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-02d241b1-4e2a-4380-8962-22435cfd5332\") pod \"ovsdbserver-sb-2\" (UID: \"443aa8a8-83bf-4d4a-84ea-baebc02bcf8a\") " pod="openstack/ovsdbserver-sb-2" Dec 11 15:15:20 crc kubenswrapper[5050]: I1211 15:15:20.685069 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-f5d8ea54-721f-42ab-a785-1c0c685d35cd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f5d8ea54-721f-42ab-a785-1c0c685d35cd\") pod \"ovsdbserver-sb-1\" (UID: \"2a48ff80-6e64-4fed-bca4-85204bb3749e\") " pod="openstack/ovsdbserver-sb-1" Dec 11 15:15:20 crc kubenswrapper[5050]: I1211 15:15:20.712495 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Dec 11 15:15:20 crc kubenswrapper[5050]: I1211 15:15:20.724126 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-1" Dec 11 15:15:20 crc kubenswrapper[5050]: I1211 15:15:20.757683 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Dec 11 15:15:20 crc kubenswrapper[5050]: I1211 15:15:20.762644 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-2"] Dec 11 15:15:20 crc kubenswrapper[5050]: I1211 15:15:20.767687 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-1" Dec 11 15:15:20 crc kubenswrapper[5050]: I1211 15:15:20.800503 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-2" Dec 11 15:15:20 crc kubenswrapper[5050]: I1211 15:15:20.951023 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-2" event={"ID":"cfab5143-8fd9-4772-8804-96c45edbe169","Type":"ContainerStarted","Data":"17696295ada170940e72fb3cebc8e5eccc05fcd2089a8e626ab8ea9b36d4385f"} Dec 11 15:15:21 crc kubenswrapper[5050]: I1211 15:15:21.070288 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Dec 11 15:15:21 crc kubenswrapper[5050]: I1211 15:15:21.176290 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-1"] Dec 11 15:15:21 crc kubenswrapper[5050]: W1211 15:15:21.188098 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2a48ff80_6e64_4fed_bca4_85204bb3749e.slice/crio-4ae5943acc086bcbc062cd2a885a0a763dec9f8980b8237230ef805328de02df WatchSource:0}: Error finding container 4ae5943acc086bcbc062cd2a885a0a763dec9f8980b8237230ef805328de02df: Status 404 returned error can't find the container with id 4ae5943acc086bcbc062cd2a885a0a763dec9f8980b8237230ef805328de02df Dec 11 15:15:21 crc kubenswrapper[5050]: I1211 15:15:21.348625 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Dec 11 15:15:21 crc kubenswrapper[5050]: I1211 15:15:21.459538 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-1"] Dec 11 15:15:21 crc kubenswrapper[5050]: W1211 15:15:21.469218 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod01914887_980b_49ab_ad4a_30fe3b76a8b8.slice/crio-72d5744eef773678756e3c37615d296f51796ef3e4a9d3b2b21aa651aecb8007 WatchSource:0}: Error finding container 72d5744eef773678756e3c37615d296f51796ef3e4a9d3b2b21aa651aecb8007: Status 404 returned error can't find the container with id 72d5744eef773678756e3c37615d296f51796ef3e4a9d3b2b21aa651aecb8007 Dec 11 15:15:21 crc kubenswrapper[5050]: I1211 15:15:21.960620 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-1" event={"ID":"2a48ff80-6e64-4fed-bca4-85204bb3749e","Type":"ContainerStarted","Data":"4cb35a1458a2244a72c8d495dd6ae12192936a5a36ae185ab5b462b722787f98"} Dec 11 15:15:21 crc kubenswrapper[5050]: I1211 15:15:21.960685 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-1" event={"ID":"2a48ff80-6e64-4fed-bca4-85204bb3749e","Type":"ContainerStarted","Data":"43ae8588116bbac5966dcb6cf3f3d79d9013c9e5c51ed9e1bc03160d25c3b0dc"} Dec 11 15:15:21 crc kubenswrapper[5050]: I1211 15:15:21.960708 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-1" event={"ID":"2a48ff80-6e64-4fed-bca4-85204bb3749e","Type":"ContainerStarted","Data":"4ae5943acc086bcbc062cd2a885a0a763dec9f8980b8237230ef805328de02df"} Dec 11 15:15:21 crc kubenswrapper[5050]: I1211 15:15:21.962398 5050 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/ovsdbserver-nb-1" event={"ID":"01914887-980b-49ab-ad4a-30fe3b76a8b8","Type":"ContainerStarted","Data":"4285a255b9fd3034be5bd60c728c073998b841b7679b1e3c07848fb3cd0f063c"} Dec 11 15:15:21 crc kubenswrapper[5050]: I1211 15:15:21.962446 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-1" event={"ID":"01914887-980b-49ab-ad4a-30fe3b76a8b8","Type":"ContainerStarted","Data":"c478202cd7025b0f89f7c94ba65e45efd58deaefba065c71acfe28827f74f139"} Dec 11 15:15:21 crc kubenswrapper[5050]: I1211 15:15:21.962457 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-1" event={"ID":"01914887-980b-49ab-ad4a-30fe3b76a8b8","Type":"ContainerStarted","Data":"72d5744eef773678756e3c37615d296f51796ef3e4a9d3b2b21aa651aecb8007"} Dec 11 15:15:21 crc kubenswrapper[5050]: I1211 15:15:21.965746 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"656a6cd1-5588-4a14-a6f2-8252ad2894cb","Type":"ContainerStarted","Data":"5c3956b8bf842facc5d264a28430996ae04e8d874460c9ac1e8dae2928c9f2db"} Dec 11 15:15:21 crc kubenswrapper[5050]: I1211 15:15:21.965795 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"656a6cd1-5588-4a14-a6f2-8252ad2894cb","Type":"ContainerStarted","Data":"f5edb556be5f9a9993ae13ddcc7ebc4c76b321793c16fc2eb9c379c7ab2202d6"} Dec 11 15:15:21 crc kubenswrapper[5050]: I1211 15:15:21.965819 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"656a6cd1-5588-4a14-a6f2-8252ad2894cb","Type":"ContainerStarted","Data":"1b7f3c85f751b87412e9c8cda8c9edb7f90d25c20439d643d79175d81e69e893"} Dec 11 15:15:21 crc kubenswrapper[5050]: I1211 15:15:21.967988 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"de54385e-2f40-49ff-a722-598787e12fae","Type":"ContainerStarted","Data":"a4949c778f78ff72b260820aed9fe819edf5c3220b4429d70e36975d4ccf8894"} Dec 11 15:15:21 crc kubenswrapper[5050]: I1211 15:15:21.968038 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"de54385e-2f40-49ff-a722-598787e12fae","Type":"ContainerStarted","Data":"795e49c2322867a85190bde40c3359e282105a53b29312e71b531547f694ffec"} Dec 11 15:15:21 crc kubenswrapper[5050]: I1211 15:15:21.968050 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"de54385e-2f40-49ff-a722-598787e12fae","Type":"ContainerStarted","Data":"1c8ca1a6efef47c3afc03122222ca626498fab39e3d949ba9d1928bede65f378"} Dec 11 15:15:21 crc kubenswrapper[5050]: I1211 15:15:21.970590 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-2" event={"ID":"cfab5143-8fd9-4772-8804-96c45edbe169","Type":"ContainerStarted","Data":"d0550e27ee6bde303355af0fbcd9ea8af1d64bd280d72aa0e40493eab81c4d4b"} Dec 11 15:15:21 crc kubenswrapper[5050]: I1211 15:15:21.970613 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-2" event={"ID":"cfab5143-8fd9-4772-8804-96c45edbe169","Type":"ContainerStarted","Data":"44a01d25ec1ea57fae395ccf65b5d25e1dafd677cc7c0e7134413b74d2c0b945"} Dec 11 15:15:22 crc kubenswrapper[5050]: I1211 15:15:22.029883 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=3.029860957 podStartE2EDuration="3.029860957s" podCreationTimestamp="2025-12-11 15:15:19 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 15:15:22.028682966 +0000 UTC m=+5212.872405562" watchObservedRunningTime="2025-12-11 15:15:22.029860957 +0000 UTC m=+5212.873583543" Dec 11 15:15:22 crc kubenswrapper[5050]: I1211 15:15:22.034553 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-1" podStartSLOduration=3.034537113 podStartE2EDuration="3.034537113s" podCreationTimestamp="2025-12-11 15:15:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 15:15:22.007387544 +0000 UTC m=+5212.851110130" watchObservedRunningTime="2025-12-11 15:15:22.034537113 +0000 UTC m=+5212.878259699" Dec 11 15:15:22 crc kubenswrapper[5050]: I1211 15:15:22.056073 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-2" podStartSLOduration=3.05605009 podStartE2EDuration="3.05605009s" podCreationTimestamp="2025-12-11 15:15:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 15:15:22.049863584 +0000 UTC m=+5212.893586180" watchObservedRunningTime="2025-12-11 15:15:22.05605009 +0000 UTC m=+5212.899772676" Dec 11 15:15:22 crc kubenswrapper[5050]: I1211 15:15:22.070716 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=3.070694134 podStartE2EDuration="3.070694134s" podCreationTimestamp="2025-12-11 15:15:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 15:15:22.063949603 +0000 UTC m=+5212.907672239" watchObservedRunningTime="2025-12-11 15:15:22.070694134 +0000 UTC m=+5212.914416710" Dec 11 15:15:22 crc kubenswrapper[5050]: I1211 15:15:22.088135 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-1" podStartSLOduration=3.088111311 podStartE2EDuration="3.088111311s" podCreationTimestamp="2025-12-11 15:15:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 15:15:22.084748371 +0000 UTC m=+5212.928470947" watchObservedRunningTime="2025-12-11 15:15:22.088111311 +0000 UTC m=+5212.931833917" Dec 11 15:15:22 crc kubenswrapper[5050]: I1211 15:15:22.403197 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-2"] Dec 11 15:15:22 crc kubenswrapper[5050]: W1211 15:15:22.410736 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod443aa8a8_83bf_4d4a_84ea_baebc02bcf8a.slice/crio-e61de623b185065742934a70c96ca3998c892c2bcea27fc2341f81007a512632 WatchSource:0}: Error finding container e61de623b185065742934a70c96ca3998c892c2bcea27fc2341f81007a512632: Status 404 returned error can't find the container with id e61de623b185065742934a70c96ca3998c892c2bcea27fc2341f81007a512632 Dec 11 15:15:22 crc kubenswrapper[5050]: I1211 15:15:22.980833 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-2" event={"ID":"443aa8a8-83bf-4d4a-84ea-baebc02bcf8a","Type":"ContainerStarted","Data":"f32e35cb9029ee603a51008e37813d2862ea932eee5fdac85b8290f4cc2c157f"} Dec 11 15:15:22 crc kubenswrapper[5050]: I1211 
15:15:22.981215 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-2" event={"ID":"443aa8a8-83bf-4d4a-84ea-baebc02bcf8a","Type":"ContainerStarted","Data":"84a7e6fb76b7b959a87d72b5b74e631c057fdc0b15d124765c7cce4efcafdcc9"} Dec 11 15:15:22 crc kubenswrapper[5050]: I1211 15:15:22.981234 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-2" event={"ID":"443aa8a8-83bf-4d4a-84ea-baebc02bcf8a","Type":"ContainerStarted","Data":"e61de623b185065742934a70c96ca3998c892c2bcea27fc2341f81007a512632"} Dec 11 15:15:23 crc kubenswrapper[5050]: I1211 15:15:23.012828 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-2" podStartSLOduration=4.01281253 podStartE2EDuration="4.01281253s" podCreationTimestamp="2025-12-11 15:15:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 15:15:23.005768541 +0000 UTC m=+5213.849491117" watchObservedRunningTime="2025-12-11 15:15:23.01281253 +0000 UTC m=+5213.856535116" Dec 11 15:15:23 crc kubenswrapper[5050]: I1211 15:15:23.478243 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-2" Dec 11 15:15:23 crc kubenswrapper[5050]: I1211 15:15:23.530382 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-2" Dec 11 15:15:23 crc kubenswrapper[5050]: I1211 15:15:23.712927 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Dec 11 15:15:23 crc kubenswrapper[5050]: I1211 15:15:23.724785 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-1" Dec 11 15:15:23 crc kubenswrapper[5050]: I1211 15:15:23.758824 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Dec 11 15:15:23 crc kubenswrapper[5050]: I1211 15:15:23.768421 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-1" Dec 11 15:15:23 crc kubenswrapper[5050]: I1211 15:15:23.801495 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-2" Dec 11 15:15:23 crc kubenswrapper[5050]: I1211 15:15:23.993886 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-2" Dec 11 15:15:25 crc kubenswrapper[5050]: I1211 15:15:25.078238 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-2" Dec 11 15:15:25 crc kubenswrapper[5050]: I1211 15:15:25.385459 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6976f77f65-bjq7x"] Dec 11 15:15:25 crc kubenswrapper[5050]: I1211 15:15:25.387001 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6976f77f65-bjq7x" Dec 11 15:15:25 crc kubenswrapper[5050]: I1211 15:15:25.389839 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Dec 11 15:15:25 crc kubenswrapper[5050]: I1211 15:15:25.404038 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6976f77f65-bjq7x"] Dec 11 15:15:25 crc kubenswrapper[5050]: I1211 15:15:25.529773 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/042124c0-72e3-47f9-af98-1b83ea808890-config\") pod \"dnsmasq-dns-6976f77f65-bjq7x\" (UID: \"042124c0-72e3-47f9-af98-1b83ea808890\") " pod="openstack/dnsmasq-dns-6976f77f65-bjq7x" Dec 11 15:15:25 crc kubenswrapper[5050]: I1211 15:15:25.529887 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/042124c0-72e3-47f9-af98-1b83ea808890-dns-svc\") pod \"dnsmasq-dns-6976f77f65-bjq7x\" (UID: \"042124c0-72e3-47f9-af98-1b83ea808890\") " pod="openstack/dnsmasq-dns-6976f77f65-bjq7x" Dec 11 15:15:25 crc kubenswrapper[5050]: I1211 15:15:25.529931 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/042124c0-72e3-47f9-af98-1b83ea808890-ovsdbserver-nb\") pod \"dnsmasq-dns-6976f77f65-bjq7x\" (UID: \"042124c0-72e3-47f9-af98-1b83ea808890\") " pod="openstack/dnsmasq-dns-6976f77f65-bjq7x" Dec 11 15:15:25 crc kubenswrapper[5050]: I1211 15:15:25.529956 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w99mj\" (UniqueName: \"kubernetes.io/projected/042124c0-72e3-47f9-af98-1b83ea808890-kube-api-access-w99mj\") pod \"dnsmasq-dns-6976f77f65-bjq7x\" (UID: \"042124c0-72e3-47f9-af98-1b83ea808890\") " pod="openstack/dnsmasq-dns-6976f77f65-bjq7x" Dec 11 15:15:25 crc kubenswrapper[5050]: I1211 15:15:25.631487 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/042124c0-72e3-47f9-af98-1b83ea808890-config\") pod \"dnsmasq-dns-6976f77f65-bjq7x\" (UID: \"042124c0-72e3-47f9-af98-1b83ea808890\") " pod="openstack/dnsmasq-dns-6976f77f65-bjq7x" Dec 11 15:15:25 crc kubenswrapper[5050]: I1211 15:15:25.631578 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/042124c0-72e3-47f9-af98-1b83ea808890-dns-svc\") pod \"dnsmasq-dns-6976f77f65-bjq7x\" (UID: \"042124c0-72e3-47f9-af98-1b83ea808890\") " pod="openstack/dnsmasq-dns-6976f77f65-bjq7x" Dec 11 15:15:25 crc kubenswrapper[5050]: I1211 15:15:25.631610 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/042124c0-72e3-47f9-af98-1b83ea808890-ovsdbserver-nb\") pod \"dnsmasq-dns-6976f77f65-bjq7x\" (UID: \"042124c0-72e3-47f9-af98-1b83ea808890\") " pod="openstack/dnsmasq-dns-6976f77f65-bjq7x" Dec 11 15:15:25 crc kubenswrapper[5050]: I1211 15:15:25.631627 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w99mj\" (UniqueName: \"kubernetes.io/projected/042124c0-72e3-47f9-af98-1b83ea808890-kube-api-access-w99mj\") pod \"dnsmasq-dns-6976f77f65-bjq7x\" (UID: \"042124c0-72e3-47f9-af98-1b83ea808890\") " pod="openstack/dnsmasq-dns-6976f77f65-bjq7x" 
Dec 11 15:15:25 crc kubenswrapper[5050]: I1211 15:15:25.632617 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/042124c0-72e3-47f9-af98-1b83ea808890-config\") pod \"dnsmasq-dns-6976f77f65-bjq7x\" (UID: \"042124c0-72e3-47f9-af98-1b83ea808890\") " pod="openstack/dnsmasq-dns-6976f77f65-bjq7x" Dec 11 15:15:25 crc kubenswrapper[5050]: I1211 15:15:25.632623 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/042124c0-72e3-47f9-af98-1b83ea808890-dns-svc\") pod \"dnsmasq-dns-6976f77f65-bjq7x\" (UID: \"042124c0-72e3-47f9-af98-1b83ea808890\") " pod="openstack/dnsmasq-dns-6976f77f65-bjq7x" Dec 11 15:15:25 crc kubenswrapper[5050]: I1211 15:15:25.633327 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/042124c0-72e3-47f9-af98-1b83ea808890-ovsdbserver-nb\") pod \"dnsmasq-dns-6976f77f65-bjq7x\" (UID: \"042124c0-72e3-47f9-af98-1b83ea808890\") " pod="openstack/dnsmasq-dns-6976f77f65-bjq7x" Dec 11 15:15:25 crc kubenswrapper[5050]: I1211 15:15:25.649278 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w99mj\" (UniqueName: \"kubernetes.io/projected/042124c0-72e3-47f9-af98-1b83ea808890-kube-api-access-w99mj\") pod \"dnsmasq-dns-6976f77f65-bjq7x\" (UID: \"042124c0-72e3-47f9-af98-1b83ea808890\") " pod="openstack/dnsmasq-dns-6976f77f65-bjq7x" Dec 11 15:15:25 crc kubenswrapper[5050]: I1211 15:15:25.704928 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6976f77f65-bjq7x" Dec 11 15:15:25 crc kubenswrapper[5050]: I1211 15:15:25.713494 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Dec 11 15:15:25 crc kubenswrapper[5050]: I1211 15:15:25.724550 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-1" Dec 11 15:15:25 crc kubenswrapper[5050]: I1211 15:15:25.758939 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Dec 11 15:15:25 crc kubenswrapper[5050]: I1211 15:15:25.768893 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-1" Dec 11 15:15:25 crc kubenswrapper[5050]: I1211 15:15:25.801526 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-2" Dec 11 15:15:25 crc kubenswrapper[5050]: I1211 15:15:25.920908 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6976f77f65-bjq7x"] Dec 11 15:15:25 crc kubenswrapper[5050]: W1211 15:15:25.930815 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod042124c0_72e3_47f9_af98_1b83ea808890.slice/crio-e373187f1c727c84be9fdf1dc7597c6ee7e7d1021c48554ad0a8ba8b8753560b WatchSource:0}: Error finding container e373187f1c727c84be9fdf1dc7597c6ee7e7d1021c48554ad0a8ba8b8753560b: Status 404 returned error can't find the container with id e373187f1c727c84be9fdf1dc7597c6ee7e7d1021c48554ad0a8ba8b8753560b Dec 11 15:15:26 crc kubenswrapper[5050]: I1211 15:15:26.014939 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6976f77f65-bjq7x" event={"ID":"042124c0-72e3-47f9-af98-1b83ea808890","Type":"ContainerStarted","Data":"e373187f1c727c84be9fdf1dc7597c6ee7e7d1021c48554ad0a8ba8b8753560b"} Dec 11 15:15:26 
crc kubenswrapper[5050]: I1211 15:15:26.770124 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Dec 11 15:15:26 crc kubenswrapper[5050]: I1211 15:15:26.805948 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-1" Dec 11 15:15:26 crc kubenswrapper[5050]: I1211 15:15:26.832649 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Dec 11 15:15:26 crc kubenswrapper[5050]: I1211 15:15:26.835698 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-1" Dec 11 15:15:26 crc kubenswrapper[5050]: I1211 15:15:26.837038 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Dec 11 15:15:26 crc kubenswrapper[5050]: I1211 15:15:26.882097 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-2" Dec 11 15:15:26 crc kubenswrapper[5050]: I1211 15:15:26.894833 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Dec 11 15:15:26 crc kubenswrapper[5050]: I1211 15:15:26.896844 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-1" Dec 11 15:15:26 crc kubenswrapper[5050]: I1211 15:15:26.906828 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-1" Dec 11 15:15:27 crc kubenswrapper[5050]: I1211 15:15:27.049735 5050 generic.go:334] "Generic (PLEG): container finished" podID="042124c0-72e3-47f9-af98-1b83ea808890" containerID="c71108081becd7d4ed9db84e96bc8551299c6b15077e3d9a1138dd3c9d901520" exitCode=0 Dec 11 15:15:27 crc kubenswrapper[5050]: I1211 15:15:27.051230 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6976f77f65-bjq7x" event={"ID":"042124c0-72e3-47f9-af98-1b83ea808890","Type":"ContainerDied","Data":"c71108081becd7d4ed9db84e96bc8551299c6b15077e3d9a1138dd3c9d901520"} Dec 11 15:15:27 crc kubenswrapper[5050]: I1211 15:15:27.142303 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-2" Dec 11 15:15:27 crc kubenswrapper[5050]: I1211 15:15:27.207101 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6976f77f65-bjq7x"] Dec 11 15:15:27 crc kubenswrapper[5050]: I1211 15:15:27.247495 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6f8fb65bfc-krgh4"] Dec 11 15:15:27 crc kubenswrapper[5050]: I1211 15:15:27.248885 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6f8fb65bfc-krgh4" Dec 11 15:15:27 crc kubenswrapper[5050]: I1211 15:15:27.251625 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Dec 11 15:15:27 crc kubenswrapper[5050]: I1211 15:15:27.267482 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6f8fb65bfc-krgh4"] Dec 11 15:15:27 crc kubenswrapper[5050]: I1211 15:15:27.369069 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d6910859-c36e-4687-a1a5-abed4bbb8e30-ovsdbserver-nb\") pod \"dnsmasq-dns-6f8fb65bfc-krgh4\" (UID: \"d6910859-c36e-4687-a1a5-abed4bbb8e30\") " pod="openstack/dnsmasq-dns-6f8fb65bfc-krgh4" Dec 11 15:15:27 crc kubenswrapper[5050]: I1211 15:15:27.370130 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d6910859-c36e-4687-a1a5-abed4bbb8e30-ovsdbserver-sb\") pod \"dnsmasq-dns-6f8fb65bfc-krgh4\" (UID: \"d6910859-c36e-4687-a1a5-abed4bbb8e30\") " pod="openstack/dnsmasq-dns-6f8fb65bfc-krgh4" Dec 11 15:15:27 crc kubenswrapper[5050]: I1211 15:15:27.370214 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d6910859-c36e-4687-a1a5-abed4bbb8e30-dns-svc\") pod \"dnsmasq-dns-6f8fb65bfc-krgh4\" (UID: \"d6910859-c36e-4687-a1a5-abed4bbb8e30\") " pod="openstack/dnsmasq-dns-6f8fb65bfc-krgh4" Dec 11 15:15:27 crc kubenswrapper[5050]: I1211 15:15:27.370367 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jlktv\" (UniqueName: \"kubernetes.io/projected/d6910859-c36e-4687-a1a5-abed4bbb8e30-kube-api-access-jlktv\") pod \"dnsmasq-dns-6f8fb65bfc-krgh4\" (UID: \"d6910859-c36e-4687-a1a5-abed4bbb8e30\") " pod="openstack/dnsmasq-dns-6f8fb65bfc-krgh4" Dec 11 15:15:27 crc kubenswrapper[5050]: I1211 15:15:27.370448 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d6910859-c36e-4687-a1a5-abed4bbb8e30-config\") pod \"dnsmasq-dns-6f8fb65bfc-krgh4\" (UID: \"d6910859-c36e-4687-a1a5-abed4bbb8e30\") " pod="openstack/dnsmasq-dns-6f8fb65bfc-krgh4" Dec 11 15:15:27 crc kubenswrapper[5050]: I1211 15:15:27.472239 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d6910859-c36e-4687-a1a5-abed4bbb8e30-ovsdbserver-sb\") pod \"dnsmasq-dns-6f8fb65bfc-krgh4\" (UID: \"d6910859-c36e-4687-a1a5-abed4bbb8e30\") " pod="openstack/dnsmasq-dns-6f8fb65bfc-krgh4" Dec 11 15:15:27 crc kubenswrapper[5050]: I1211 15:15:27.472301 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d6910859-c36e-4687-a1a5-abed4bbb8e30-dns-svc\") pod \"dnsmasq-dns-6f8fb65bfc-krgh4\" (UID: \"d6910859-c36e-4687-a1a5-abed4bbb8e30\") " pod="openstack/dnsmasq-dns-6f8fb65bfc-krgh4" Dec 11 15:15:27 crc kubenswrapper[5050]: I1211 15:15:27.472365 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jlktv\" (UniqueName: \"kubernetes.io/projected/d6910859-c36e-4687-a1a5-abed4bbb8e30-kube-api-access-jlktv\") pod \"dnsmasq-dns-6f8fb65bfc-krgh4\" (UID: \"d6910859-c36e-4687-a1a5-abed4bbb8e30\") " 
pod="openstack/dnsmasq-dns-6f8fb65bfc-krgh4" Dec 11 15:15:27 crc kubenswrapper[5050]: I1211 15:15:27.472400 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d6910859-c36e-4687-a1a5-abed4bbb8e30-config\") pod \"dnsmasq-dns-6f8fb65bfc-krgh4\" (UID: \"d6910859-c36e-4687-a1a5-abed4bbb8e30\") " pod="openstack/dnsmasq-dns-6f8fb65bfc-krgh4" Dec 11 15:15:27 crc kubenswrapper[5050]: I1211 15:15:27.472437 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d6910859-c36e-4687-a1a5-abed4bbb8e30-ovsdbserver-nb\") pod \"dnsmasq-dns-6f8fb65bfc-krgh4\" (UID: \"d6910859-c36e-4687-a1a5-abed4bbb8e30\") " pod="openstack/dnsmasq-dns-6f8fb65bfc-krgh4" Dec 11 15:15:27 crc kubenswrapper[5050]: I1211 15:15:27.473403 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d6910859-c36e-4687-a1a5-abed4bbb8e30-ovsdbserver-nb\") pod \"dnsmasq-dns-6f8fb65bfc-krgh4\" (UID: \"d6910859-c36e-4687-a1a5-abed4bbb8e30\") " pod="openstack/dnsmasq-dns-6f8fb65bfc-krgh4" Dec 11 15:15:27 crc kubenswrapper[5050]: I1211 15:15:27.473507 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d6910859-c36e-4687-a1a5-abed4bbb8e30-ovsdbserver-sb\") pod \"dnsmasq-dns-6f8fb65bfc-krgh4\" (UID: \"d6910859-c36e-4687-a1a5-abed4bbb8e30\") " pod="openstack/dnsmasq-dns-6f8fb65bfc-krgh4" Dec 11 15:15:27 crc kubenswrapper[5050]: I1211 15:15:27.473989 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d6910859-c36e-4687-a1a5-abed4bbb8e30-config\") pod \"dnsmasq-dns-6f8fb65bfc-krgh4\" (UID: \"d6910859-c36e-4687-a1a5-abed4bbb8e30\") " pod="openstack/dnsmasq-dns-6f8fb65bfc-krgh4" Dec 11 15:15:27 crc kubenswrapper[5050]: I1211 15:15:27.474144 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d6910859-c36e-4687-a1a5-abed4bbb8e30-dns-svc\") pod \"dnsmasq-dns-6f8fb65bfc-krgh4\" (UID: \"d6910859-c36e-4687-a1a5-abed4bbb8e30\") " pod="openstack/dnsmasq-dns-6f8fb65bfc-krgh4" Dec 11 15:15:27 crc kubenswrapper[5050]: I1211 15:15:27.493490 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jlktv\" (UniqueName: \"kubernetes.io/projected/d6910859-c36e-4687-a1a5-abed4bbb8e30-kube-api-access-jlktv\") pod \"dnsmasq-dns-6f8fb65bfc-krgh4\" (UID: \"d6910859-c36e-4687-a1a5-abed4bbb8e30\") " pod="openstack/dnsmasq-dns-6f8fb65bfc-krgh4" Dec 11 15:15:27 crc kubenswrapper[5050]: I1211 15:15:27.595918 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6f8fb65bfc-krgh4" Dec 11 15:15:27 crc kubenswrapper[5050]: I1211 15:15:27.828497 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6f8fb65bfc-krgh4"] Dec 11 15:15:27 crc kubenswrapper[5050]: W1211 15:15:27.838433 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd6910859_c36e_4687_a1a5_abed4bbb8e30.slice/crio-ec2cf7d9e9da68dcbba2c9be6c7366d393d6d365a6d42a19dfc345a8c803604b WatchSource:0}: Error finding container ec2cf7d9e9da68dcbba2c9be6c7366d393d6d365a6d42a19dfc345a8c803604b: Status 404 returned error can't find the container with id ec2cf7d9e9da68dcbba2c9be6c7366d393d6d365a6d42a19dfc345a8c803604b Dec 11 15:15:28 crc kubenswrapper[5050]: I1211 15:15:28.059310 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f8fb65bfc-krgh4" event={"ID":"d6910859-c36e-4687-a1a5-abed4bbb8e30","Type":"ContainerStarted","Data":"c3b6e227780c90f2819eaa56c59c7c01720c86ef55d8beb8ab6445a002609a2a"} Dec 11 15:15:28 crc kubenswrapper[5050]: I1211 15:15:28.059373 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f8fb65bfc-krgh4" event={"ID":"d6910859-c36e-4687-a1a5-abed4bbb8e30","Type":"ContainerStarted","Data":"ec2cf7d9e9da68dcbba2c9be6c7366d393d6d365a6d42a19dfc345a8c803604b"} Dec 11 15:15:28 crc kubenswrapper[5050]: I1211 15:15:28.063236 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6976f77f65-bjq7x" event={"ID":"042124c0-72e3-47f9-af98-1b83ea808890","Type":"ContainerStarted","Data":"f8d30d8800107178a734e2b8a529ca6e70435b1f28517db85825d4ca2d320d4f"} Dec 11 15:15:28 crc kubenswrapper[5050]: I1211 15:15:28.063356 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6976f77f65-bjq7x" podUID="042124c0-72e3-47f9-af98-1b83ea808890" containerName="dnsmasq-dns" containerID="cri-o://f8d30d8800107178a734e2b8a529ca6e70435b1f28517db85825d4ca2d320d4f" gracePeriod=10 Dec 11 15:15:28 crc kubenswrapper[5050]: I1211 15:15:28.063475 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6976f77f65-bjq7x" Dec 11 15:15:28 crc kubenswrapper[5050]: I1211 15:15:28.119288 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6976f77f65-bjq7x" podStartSLOduration=3.119266795 podStartE2EDuration="3.119266795s" podCreationTimestamp="2025-12-11 15:15:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 15:15:28.114658911 +0000 UTC m=+5218.958381497" watchObservedRunningTime="2025-12-11 15:15:28.119266795 +0000 UTC m=+5218.962989381" Dec 11 15:15:28 crc kubenswrapper[5050]: I1211 15:15:28.492825 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6976f77f65-bjq7x" Dec 11 15:15:28 crc kubenswrapper[5050]: I1211 15:15:28.604489 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/042124c0-72e3-47f9-af98-1b83ea808890-dns-svc\") pod \"042124c0-72e3-47f9-af98-1b83ea808890\" (UID: \"042124c0-72e3-47f9-af98-1b83ea808890\") " Dec 11 15:15:28 crc kubenswrapper[5050]: I1211 15:15:28.604612 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/042124c0-72e3-47f9-af98-1b83ea808890-config\") pod \"042124c0-72e3-47f9-af98-1b83ea808890\" (UID: \"042124c0-72e3-47f9-af98-1b83ea808890\") " Dec 11 15:15:28 crc kubenswrapper[5050]: I1211 15:15:28.604675 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w99mj\" (UniqueName: \"kubernetes.io/projected/042124c0-72e3-47f9-af98-1b83ea808890-kube-api-access-w99mj\") pod \"042124c0-72e3-47f9-af98-1b83ea808890\" (UID: \"042124c0-72e3-47f9-af98-1b83ea808890\") " Dec 11 15:15:28 crc kubenswrapper[5050]: I1211 15:15:28.604742 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/042124c0-72e3-47f9-af98-1b83ea808890-ovsdbserver-nb\") pod \"042124c0-72e3-47f9-af98-1b83ea808890\" (UID: \"042124c0-72e3-47f9-af98-1b83ea808890\") " Dec 11 15:15:28 crc kubenswrapper[5050]: I1211 15:15:28.614083 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/042124c0-72e3-47f9-af98-1b83ea808890-kube-api-access-w99mj" (OuterVolumeSpecName: "kube-api-access-w99mj") pod "042124c0-72e3-47f9-af98-1b83ea808890" (UID: "042124c0-72e3-47f9-af98-1b83ea808890"). InnerVolumeSpecName "kube-api-access-w99mj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 15:15:28 crc kubenswrapper[5050]: I1211 15:15:28.647605 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/042124c0-72e3-47f9-af98-1b83ea808890-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "042124c0-72e3-47f9-af98-1b83ea808890" (UID: "042124c0-72e3-47f9-af98-1b83ea808890"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 15:15:28 crc kubenswrapper[5050]: I1211 15:15:28.661778 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/042124c0-72e3-47f9-af98-1b83ea808890-config" (OuterVolumeSpecName: "config") pod "042124c0-72e3-47f9-af98-1b83ea808890" (UID: "042124c0-72e3-47f9-af98-1b83ea808890"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 15:15:28 crc kubenswrapper[5050]: I1211 15:15:28.662721 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/042124c0-72e3-47f9-af98-1b83ea808890-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "042124c0-72e3-47f9-af98-1b83ea808890" (UID: "042124c0-72e3-47f9-af98-1b83ea808890"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 15:15:28 crc kubenswrapper[5050]: I1211 15:15:28.707072 5050 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/042124c0-72e3-47f9-af98-1b83ea808890-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 11 15:15:28 crc kubenswrapper[5050]: I1211 15:15:28.707114 5050 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/042124c0-72e3-47f9-af98-1b83ea808890-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 11 15:15:28 crc kubenswrapper[5050]: I1211 15:15:28.707124 5050 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/042124c0-72e3-47f9-af98-1b83ea808890-config\") on node \"crc\" DevicePath \"\"" Dec 11 15:15:28 crc kubenswrapper[5050]: I1211 15:15:28.707135 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w99mj\" (UniqueName: \"kubernetes.io/projected/042124c0-72e3-47f9-af98-1b83ea808890-kube-api-access-w99mj\") on node \"crc\" DevicePath \"\"" Dec 11 15:15:29 crc kubenswrapper[5050]: I1211 15:15:29.079345 5050 generic.go:334] "Generic (PLEG): container finished" podID="d6910859-c36e-4687-a1a5-abed4bbb8e30" containerID="c3b6e227780c90f2819eaa56c59c7c01720c86ef55d8beb8ab6445a002609a2a" exitCode=0 Dec 11 15:15:29 crc kubenswrapper[5050]: I1211 15:15:29.079437 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f8fb65bfc-krgh4" event={"ID":"d6910859-c36e-4687-a1a5-abed4bbb8e30","Type":"ContainerDied","Data":"c3b6e227780c90f2819eaa56c59c7c01720c86ef55d8beb8ab6445a002609a2a"} Dec 11 15:15:29 crc kubenswrapper[5050]: I1211 15:15:29.082626 5050 generic.go:334] "Generic (PLEG): container finished" podID="042124c0-72e3-47f9-af98-1b83ea808890" containerID="f8d30d8800107178a734e2b8a529ca6e70435b1f28517db85825d4ca2d320d4f" exitCode=0 Dec 11 15:15:29 crc kubenswrapper[5050]: I1211 15:15:29.082714 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6976f77f65-bjq7x" event={"ID":"042124c0-72e3-47f9-af98-1b83ea808890","Type":"ContainerDied","Data":"f8d30d8800107178a734e2b8a529ca6e70435b1f28517db85825d4ca2d320d4f"} Dec 11 15:15:29 crc kubenswrapper[5050]: I1211 15:15:29.082766 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6976f77f65-bjq7x" event={"ID":"042124c0-72e3-47f9-af98-1b83ea808890","Type":"ContainerDied","Data":"e373187f1c727c84be9fdf1dc7597c6ee7e7d1021c48554ad0a8ba8b8753560b"} Dec 11 15:15:29 crc kubenswrapper[5050]: I1211 15:15:29.082769 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6976f77f65-bjq7x" Dec 11 15:15:29 crc kubenswrapper[5050]: I1211 15:15:29.082800 5050 scope.go:117] "RemoveContainer" containerID="f8d30d8800107178a734e2b8a529ca6e70435b1f28517db85825d4ca2d320d4f" Dec 11 15:15:29 crc kubenswrapper[5050]: I1211 15:15:29.123747 5050 scope.go:117] "RemoveContainer" containerID="c71108081becd7d4ed9db84e96bc8551299c6b15077e3d9a1138dd3c9d901520" Dec 11 15:15:29 crc kubenswrapper[5050]: I1211 15:15:29.214394 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6976f77f65-bjq7x"] Dec 11 15:15:29 crc kubenswrapper[5050]: I1211 15:15:29.220182 5050 scope.go:117] "RemoveContainer" containerID="f8d30d8800107178a734e2b8a529ca6e70435b1f28517db85825d4ca2d320d4f" Dec 11 15:15:29 crc kubenswrapper[5050]: E1211 15:15:29.220988 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f8d30d8800107178a734e2b8a529ca6e70435b1f28517db85825d4ca2d320d4f\": container with ID starting with f8d30d8800107178a734e2b8a529ca6e70435b1f28517db85825d4ca2d320d4f not found: ID does not exist" containerID="f8d30d8800107178a734e2b8a529ca6e70435b1f28517db85825d4ca2d320d4f" Dec 11 15:15:29 crc kubenswrapper[5050]: I1211 15:15:29.221088 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f8d30d8800107178a734e2b8a529ca6e70435b1f28517db85825d4ca2d320d4f"} err="failed to get container status \"f8d30d8800107178a734e2b8a529ca6e70435b1f28517db85825d4ca2d320d4f\": rpc error: code = NotFound desc = could not find container \"f8d30d8800107178a734e2b8a529ca6e70435b1f28517db85825d4ca2d320d4f\": container with ID starting with f8d30d8800107178a734e2b8a529ca6e70435b1f28517db85825d4ca2d320d4f not found: ID does not exist" Dec 11 15:15:29 crc kubenswrapper[5050]: I1211 15:15:29.221131 5050 scope.go:117] "RemoveContainer" containerID="c71108081becd7d4ed9db84e96bc8551299c6b15077e3d9a1138dd3c9d901520" Dec 11 15:15:29 crc kubenswrapper[5050]: E1211 15:15:29.221569 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c71108081becd7d4ed9db84e96bc8551299c6b15077e3d9a1138dd3c9d901520\": container with ID starting with c71108081becd7d4ed9db84e96bc8551299c6b15077e3d9a1138dd3c9d901520 not found: ID does not exist" containerID="c71108081becd7d4ed9db84e96bc8551299c6b15077e3d9a1138dd3c9d901520" Dec 11 15:15:29 crc kubenswrapper[5050]: I1211 15:15:29.221692 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c71108081becd7d4ed9db84e96bc8551299c6b15077e3d9a1138dd3c9d901520"} err="failed to get container status \"c71108081becd7d4ed9db84e96bc8551299c6b15077e3d9a1138dd3c9d901520\": rpc error: code = NotFound desc = could not find container \"c71108081becd7d4ed9db84e96bc8551299c6b15077e3d9a1138dd3c9d901520\": container with ID starting with c71108081becd7d4ed9db84e96bc8551299c6b15077e3d9a1138dd3c9d901520 not found: ID does not exist" Dec 11 15:15:29 crc kubenswrapper[5050]: I1211 15:15:29.225865 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6976f77f65-bjq7x"] Dec 11 15:15:29 crc kubenswrapper[5050]: I1211 15:15:29.562677 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="042124c0-72e3-47f9-af98-1b83ea808890" path="/var/lib/kubelet/pods/042124c0-72e3-47f9-af98-1b83ea808890/volumes" Dec 11 15:15:30 crc kubenswrapper[5050]: I1211 15:15:30.098987 5050 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f8fb65bfc-krgh4" event={"ID":"d6910859-c36e-4687-a1a5-abed4bbb8e30","Type":"ContainerStarted","Data":"ba443ba3ffa6d894702955de12decddbceac8e6ce866323177a1c202d8b03b34"} Dec 11 15:15:30 crc kubenswrapper[5050]: I1211 15:15:30.099395 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6f8fb65bfc-krgh4" Dec 11 15:15:30 crc kubenswrapper[5050]: I1211 15:15:30.137834 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6f8fb65bfc-krgh4" podStartSLOduration=3.13778115 podStartE2EDuration="3.13778115s" podCreationTimestamp="2025-12-11 15:15:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 15:15:30.123810665 +0000 UTC m=+5220.967533291" watchObservedRunningTime="2025-12-11 15:15:30.13778115 +0000 UTC m=+5220.981503776" Dec 11 15:15:30 crc kubenswrapper[5050]: I1211 15:15:30.366697 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-copy-data"] Dec 11 15:15:30 crc kubenswrapper[5050]: E1211 15:15:30.367110 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="042124c0-72e3-47f9-af98-1b83ea808890" containerName="init" Dec 11 15:15:30 crc kubenswrapper[5050]: I1211 15:15:30.367130 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="042124c0-72e3-47f9-af98-1b83ea808890" containerName="init" Dec 11 15:15:30 crc kubenswrapper[5050]: E1211 15:15:30.367177 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="042124c0-72e3-47f9-af98-1b83ea808890" containerName="dnsmasq-dns" Dec 11 15:15:30 crc kubenswrapper[5050]: I1211 15:15:30.367186 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="042124c0-72e3-47f9-af98-1b83ea808890" containerName="dnsmasq-dns" Dec 11 15:15:30 crc kubenswrapper[5050]: I1211 15:15:30.367442 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="042124c0-72e3-47f9-af98-1b83ea808890" containerName="dnsmasq-dns" Dec 11 15:15:30 crc kubenswrapper[5050]: I1211 15:15:30.368557 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-copy-data" Dec 11 15:15:30 crc kubenswrapper[5050]: I1211 15:15:30.371235 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovn-data-cert" Dec 11 15:15:30 crc kubenswrapper[5050]: I1211 15:15:30.386199 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-copy-data"] Dec 11 15:15:30 crc kubenswrapper[5050]: I1211 15:15:30.448876 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxphw\" (UniqueName: \"kubernetes.io/projected/a60f04c1-1364-481d-ab88-5cf3cba81d1c-kube-api-access-vxphw\") pod \"ovn-copy-data\" (UID: \"a60f04c1-1364-481d-ab88-5cf3cba81d1c\") " pod="openstack/ovn-copy-data" Dec 11 15:15:30 crc kubenswrapper[5050]: I1211 15:15:30.449005 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/a60f04c1-1364-481d-ab88-5cf3cba81d1c-ovn-data-cert\") pod \"ovn-copy-data\" (UID: \"a60f04c1-1364-481d-ab88-5cf3cba81d1c\") " pod="openstack/ovn-copy-data" Dec 11 15:15:30 crc kubenswrapper[5050]: I1211 15:15:30.449448 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-80bd6988-1a21-45ba-99ac-7dfd72e82322\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-80bd6988-1a21-45ba-99ac-7dfd72e82322\") pod \"ovn-copy-data\" (UID: \"a60f04c1-1364-481d-ab88-5cf3cba81d1c\") " pod="openstack/ovn-copy-data" Dec 11 15:15:30 crc kubenswrapper[5050]: I1211 15:15:30.551744 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-80bd6988-1a21-45ba-99ac-7dfd72e82322\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-80bd6988-1a21-45ba-99ac-7dfd72e82322\") pod \"ovn-copy-data\" (UID: \"a60f04c1-1364-481d-ab88-5cf3cba81d1c\") " pod="openstack/ovn-copy-data" Dec 11 15:15:30 crc kubenswrapper[5050]: I1211 15:15:30.551896 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vxphw\" (UniqueName: \"kubernetes.io/projected/a60f04c1-1364-481d-ab88-5cf3cba81d1c-kube-api-access-vxphw\") pod \"ovn-copy-data\" (UID: \"a60f04c1-1364-481d-ab88-5cf3cba81d1c\") " pod="openstack/ovn-copy-data" Dec 11 15:15:30 crc kubenswrapper[5050]: I1211 15:15:30.551982 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/a60f04c1-1364-481d-ab88-5cf3cba81d1c-ovn-data-cert\") pod \"ovn-copy-data\" (UID: \"a60f04c1-1364-481d-ab88-5cf3cba81d1c\") " pod="openstack/ovn-copy-data" Dec 11 15:15:30 crc kubenswrapper[5050]: I1211 15:15:30.554159 5050 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Dec 11 15:15:30 crc kubenswrapper[5050]: I1211 15:15:30.554212 5050 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-80bd6988-1a21-45ba-99ac-7dfd72e82322\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-80bd6988-1a21-45ba-99ac-7dfd72e82322\") pod \"ovn-copy-data\" (UID: \"a60f04c1-1364-481d-ab88-5cf3cba81d1c\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/7610280f00224035b568941feb8180e91dbcb807a38d4fcf22df53cb55ccd733/globalmount\"" pod="openstack/ovn-copy-data" Dec 11 15:15:30 crc kubenswrapper[5050]: I1211 15:15:30.560225 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/a60f04c1-1364-481d-ab88-5cf3cba81d1c-ovn-data-cert\") pod \"ovn-copy-data\" (UID: \"a60f04c1-1364-481d-ab88-5cf3cba81d1c\") " pod="openstack/ovn-copy-data" Dec 11 15:15:30 crc kubenswrapper[5050]: I1211 15:15:30.586619 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vxphw\" (UniqueName: \"kubernetes.io/projected/a60f04c1-1364-481d-ab88-5cf3cba81d1c-kube-api-access-vxphw\") pod \"ovn-copy-data\" (UID: \"a60f04c1-1364-481d-ab88-5cf3cba81d1c\") " pod="openstack/ovn-copy-data" Dec 11 15:15:30 crc kubenswrapper[5050]: I1211 15:15:30.609259 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-80bd6988-1a21-45ba-99ac-7dfd72e82322\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-80bd6988-1a21-45ba-99ac-7dfd72e82322\") pod \"ovn-copy-data\" (UID: \"a60f04c1-1364-481d-ab88-5cf3cba81d1c\") " pod="openstack/ovn-copy-data" Dec 11 15:15:30 crc kubenswrapper[5050]: I1211 15:15:30.705198 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-copy-data" Dec 11 15:15:31 crc kubenswrapper[5050]: I1211 15:15:31.028892 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-copy-data"] Dec 11 15:15:31 crc kubenswrapper[5050]: W1211 15:15:31.030139 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda60f04c1_1364_481d_ab88_5cf3cba81d1c.slice/crio-8559a4a9541e394c16449808c7e5efcc4f4157d4f458088ebc67a190477bdf5d WatchSource:0}: Error finding container 8559a4a9541e394c16449808c7e5efcc4f4157d4f458088ebc67a190477bdf5d: Status 404 returned error can't find the container with id 8559a4a9541e394c16449808c7e5efcc4f4157d4f458088ebc67a190477bdf5d Dec 11 15:15:31 crc kubenswrapper[5050]: I1211 15:15:31.112953 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-copy-data" event={"ID":"a60f04c1-1364-481d-ab88-5cf3cba81d1c","Type":"ContainerStarted","Data":"8559a4a9541e394c16449808c7e5efcc4f4157d4f458088ebc67a190477bdf5d"} Dec 11 15:15:32 crc kubenswrapper[5050]: I1211 15:15:32.123255 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-copy-data" event={"ID":"a60f04c1-1364-481d-ab88-5cf3cba81d1c","Type":"ContainerStarted","Data":"c8c634891cbc3f8113219956ea7500a95c495ce4f02383b255504e6a536d9469"} Dec 11 15:15:32 crc kubenswrapper[5050]: I1211 15:15:32.162451 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-copy-data" podStartSLOduration=2.476064994 podStartE2EDuration="3.162415062s" podCreationTimestamp="2025-12-11 15:15:29 +0000 UTC" firstStartedPulling="2025-12-11 15:15:31.037075317 +0000 UTC m=+5221.880797913" lastFinishedPulling="2025-12-11 15:15:31.723425355 +0000 UTC m=+5222.567147981" observedRunningTime="2025-12-11 15:15:32.147906392 +0000 UTC m=+5222.991628978" watchObservedRunningTime="2025-12-11 15:15:32.162415062 +0000 UTC m=+5223.006137658" Dec 11 15:15:37 crc kubenswrapper[5050]: I1211 15:15:37.598144 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6f8fb65bfc-krgh4" Dec 11 15:15:37 crc kubenswrapper[5050]: I1211 15:15:37.684493 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-699964fbc-7gqfj"] Dec 11 15:15:37 crc kubenswrapper[5050]: I1211 15:15:37.684758 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-699964fbc-7gqfj" podUID="51ef9026-602b-4f6a-98f6-9d0f065f6c45" containerName="dnsmasq-dns" containerID="cri-o://287588aa8c74548b8956ec69ebc3861fe0b814b83f650cf50df66dc37e1364e0" gracePeriod=10 Dec 11 15:15:38 crc kubenswrapper[5050]: I1211 15:15:38.196162 5050 generic.go:334] "Generic (PLEG): container finished" podID="51ef9026-602b-4f6a-98f6-9d0f065f6c45" containerID="287588aa8c74548b8956ec69ebc3861fe0b814b83f650cf50df66dc37e1364e0" exitCode=0 Dec 11 15:15:38 crc kubenswrapper[5050]: I1211 15:15:38.196422 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-699964fbc-7gqfj" event={"ID":"51ef9026-602b-4f6a-98f6-9d0f065f6c45","Type":"ContainerDied","Data":"287588aa8c74548b8956ec69ebc3861fe0b814b83f650cf50df66dc37e1364e0"} Dec 11 15:15:38 crc kubenswrapper[5050]: I1211 15:15:38.196444 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-699964fbc-7gqfj" event={"ID":"51ef9026-602b-4f6a-98f6-9d0f065f6c45","Type":"ContainerDied","Data":"c05021bb10c66ca0c86c55d9afe24be6db3a9a8b8b1746a12ef19c4ed5f947a5"} Dec 11 15:15:38 
crc kubenswrapper[5050]: I1211 15:15:38.196454 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c05021bb10c66ca0c86c55d9afe24be6db3a9a8b8b1746a12ef19c4ed5f947a5" Dec 11 15:15:38 crc kubenswrapper[5050]: I1211 15:15:38.196553 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-699964fbc-7gqfj" Dec 11 15:15:38 crc kubenswrapper[5050]: I1211 15:15:38.307905 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/51ef9026-602b-4f6a-98f6-9d0f065f6c45-dns-svc\") pod \"51ef9026-602b-4f6a-98f6-9d0f065f6c45\" (UID: \"51ef9026-602b-4f6a-98f6-9d0f065f6c45\") " Dec 11 15:15:38 crc kubenswrapper[5050]: I1211 15:15:38.307950 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/51ef9026-602b-4f6a-98f6-9d0f065f6c45-config\") pod \"51ef9026-602b-4f6a-98f6-9d0f065f6c45\" (UID: \"51ef9026-602b-4f6a-98f6-9d0f065f6c45\") " Dec 11 15:15:38 crc kubenswrapper[5050]: I1211 15:15:38.308193 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnqhs\" (UniqueName: \"kubernetes.io/projected/51ef9026-602b-4f6a-98f6-9d0f065f6c45-kube-api-access-rnqhs\") pod \"51ef9026-602b-4f6a-98f6-9d0f065f6c45\" (UID: \"51ef9026-602b-4f6a-98f6-9d0f065f6c45\") " Dec 11 15:15:38 crc kubenswrapper[5050]: I1211 15:15:38.313415 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51ef9026-602b-4f6a-98f6-9d0f065f6c45-kube-api-access-rnqhs" (OuterVolumeSpecName: "kube-api-access-rnqhs") pod "51ef9026-602b-4f6a-98f6-9d0f065f6c45" (UID: "51ef9026-602b-4f6a-98f6-9d0f065f6c45"). InnerVolumeSpecName "kube-api-access-rnqhs". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 15:15:38 crc kubenswrapper[5050]: I1211 15:15:38.344876 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/51ef9026-602b-4f6a-98f6-9d0f065f6c45-config" (OuterVolumeSpecName: "config") pod "51ef9026-602b-4f6a-98f6-9d0f065f6c45" (UID: "51ef9026-602b-4f6a-98f6-9d0f065f6c45"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 15:15:38 crc kubenswrapper[5050]: I1211 15:15:38.354485 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/51ef9026-602b-4f6a-98f6-9d0f065f6c45-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "51ef9026-602b-4f6a-98f6-9d0f065f6c45" (UID: "51ef9026-602b-4f6a-98f6-9d0f065f6c45"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 15:15:38 crc kubenswrapper[5050]: I1211 15:15:38.411006 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnqhs\" (UniqueName: \"kubernetes.io/projected/51ef9026-602b-4f6a-98f6-9d0f065f6c45-kube-api-access-rnqhs\") on node \"crc\" DevicePath \"\"" Dec 11 15:15:38 crc kubenswrapper[5050]: I1211 15:15:38.411054 5050 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/51ef9026-602b-4f6a-98f6-9d0f065f6c45-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 11 15:15:38 crc kubenswrapper[5050]: I1211 15:15:38.411066 5050 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/51ef9026-602b-4f6a-98f6-9d0f065f6c45-config\") on node \"crc\" DevicePath \"\"" Dec 11 15:15:38 crc kubenswrapper[5050]: I1211 15:15:38.852925 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Dec 11 15:15:38 crc kubenswrapper[5050]: E1211 15:15:38.853467 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51ef9026-602b-4f6a-98f6-9d0f065f6c45" containerName="init" Dec 11 15:15:38 crc kubenswrapper[5050]: I1211 15:15:38.853486 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="51ef9026-602b-4f6a-98f6-9d0f065f6c45" containerName="init" Dec 11 15:15:38 crc kubenswrapper[5050]: E1211 15:15:38.853524 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51ef9026-602b-4f6a-98f6-9d0f065f6c45" containerName="dnsmasq-dns" Dec 11 15:15:38 crc kubenswrapper[5050]: I1211 15:15:38.853533 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="51ef9026-602b-4f6a-98f6-9d0f065f6c45" containerName="dnsmasq-dns" Dec 11 15:15:38 crc kubenswrapper[5050]: I1211 15:15:38.853741 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="51ef9026-602b-4f6a-98f6-9d0f065f6c45" containerName="dnsmasq-dns" Dec 11 15:15:38 crc kubenswrapper[5050]: I1211 15:15:38.854962 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Dec 11 15:15:38 crc kubenswrapper[5050]: I1211 15:15:38.857293 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-55k4b" Dec 11 15:15:38 crc kubenswrapper[5050]: I1211 15:15:38.857964 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Dec 11 15:15:38 crc kubenswrapper[5050]: I1211 15:15:38.858431 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Dec 11 15:15:38 crc kubenswrapper[5050]: I1211 15:15:38.861514 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Dec 11 15:15:38 crc kubenswrapper[5050]: I1211 15:15:38.920826 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/654ba650-97ba-422e-931f-c97a03d7ff9c-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"654ba650-97ba-422e-931f-c97a03d7ff9c\") " pod="openstack/ovn-northd-0" Dec 11 15:15:38 crc kubenswrapper[5050]: I1211 15:15:38.920907 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/654ba650-97ba-422e-931f-c97a03d7ff9c-scripts\") pod \"ovn-northd-0\" (UID: \"654ba650-97ba-422e-931f-c97a03d7ff9c\") " pod="openstack/ovn-northd-0" Dec 11 15:15:39 crc kubenswrapper[5050]: I1211 15:15:39.022967 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/654ba650-97ba-422e-931f-c97a03d7ff9c-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"654ba650-97ba-422e-931f-c97a03d7ff9c\") " pod="openstack/ovn-northd-0" Dec 11 15:15:39 crc kubenswrapper[5050]: I1211 15:15:39.023263 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwjb9\" (UniqueName: \"kubernetes.io/projected/654ba650-97ba-422e-931f-c97a03d7ff9c-kube-api-access-bwjb9\") pod \"ovn-northd-0\" (UID: \"654ba650-97ba-422e-931f-c97a03d7ff9c\") " pod="openstack/ovn-northd-0" Dec 11 15:15:39 crc kubenswrapper[5050]: I1211 15:15:39.023395 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/654ba650-97ba-422e-931f-c97a03d7ff9c-scripts\") pod \"ovn-northd-0\" (UID: \"654ba650-97ba-422e-931f-c97a03d7ff9c\") " pod="openstack/ovn-northd-0" Dec 11 15:15:39 crc kubenswrapper[5050]: I1211 15:15:39.023480 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/654ba650-97ba-422e-931f-c97a03d7ff9c-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"654ba650-97ba-422e-931f-c97a03d7ff9c\") " pod="openstack/ovn-northd-0" Dec 11 15:15:39 crc kubenswrapper[5050]: I1211 15:15:39.023577 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/654ba650-97ba-422e-931f-c97a03d7ff9c-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"654ba650-97ba-422e-931f-c97a03d7ff9c\") " pod="openstack/ovn-northd-0" Dec 11 15:15:39 crc kubenswrapper[5050]: I1211 15:15:39.023585 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/654ba650-97ba-422e-931f-c97a03d7ff9c-config\") pod \"ovn-northd-0\" (UID: 
\"654ba650-97ba-422e-931f-c97a03d7ff9c\") " pod="openstack/ovn-northd-0" Dec 11 15:15:39 crc kubenswrapper[5050]: I1211 15:15:39.024144 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/654ba650-97ba-422e-931f-c97a03d7ff9c-scripts\") pod \"ovn-northd-0\" (UID: \"654ba650-97ba-422e-931f-c97a03d7ff9c\") " pod="openstack/ovn-northd-0" Dec 11 15:15:39 crc kubenswrapper[5050]: I1211 15:15:39.124895 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/654ba650-97ba-422e-931f-c97a03d7ff9c-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"654ba650-97ba-422e-931f-c97a03d7ff9c\") " pod="openstack/ovn-northd-0" Dec 11 15:15:39 crc kubenswrapper[5050]: I1211 15:15:39.125065 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/654ba650-97ba-422e-931f-c97a03d7ff9c-config\") pod \"ovn-northd-0\" (UID: \"654ba650-97ba-422e-931f-c97a03d7ff9c\") " pod="openstack/ovn-northd-0" Dec 11 15:15:39 crc kubenswrapper[5050]: I1211 15:15:39.125188 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bwjb9\" (UniqueName: \"kubernetes.io/projected/654ba650-97ba-422e-931f-c97a03d7ff9c-kube-api-access-bwjb9\") pod \"ovn-northd-0\" (UID: \"654ba650-97ba-422e-931f-c97a03d7ff9c\") " pod="openstack/ovn-northd-0" Dec 11 15:15:39 crc kubenswrapper[5050]: I1211 15:15:39.125941 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/654ba650-97ba-422e-931f-c97a03d7ff9c-config\") pod \"ovn-northd-0\" (UID: \"654ba650-97ba-422e-931f-c97a03d7ff9c\") " pod="openstack/ovn-northd-0" Dec 11 15:15:39 crc kubenswrapper[5050]: I1211 15:15:39.130510 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/654ba650-97ba-422e-931f-c97a03d7ff9c-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"654ba650-97ba-422e-931f-c97a03d7ff9c\") " pod="openstack/ovn-northd-0" Dec 11 15:15:39 crc kubenswrapper[5050]: I1211 15:15:39.143129 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bwjb9\" (UniqueName: \"kubernetes.io/projected/654ba650-97ba-422e-931f-c97a03d7ff9c-kube-api-access-bwjb9\") pod \"ovn-northd-0\" (UID: \"654ba650-97ba-422e-931f-c97a03d7ff9c\") " pod="openstack/ovn-northd-0" Dec 11 15:15:39 crc kubenswrapper[5050]: I1211 15:15:39.174650 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Dec 11 15:15:39 crc kubenswrapper[5050]: I1211 15:15:39.203685 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-699964fbc-7gqfj" Dec 11 15:15:39 crc kubenswrapper[5050]: I1211 15:15:39.296793 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-699964fbc-7gqfj"] Dec 11 15:15:39 crc kubenswrapper[5050]: I1211 15:15:39.304613 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-699964fbc-7gqfj"] Dec 11 15:15:39 crc kubenswrapper[5050]: I1211 15:15:39.560968 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="51ef9026-602b-4f6a-98f6-9d0f065f6c45" path="/var/lib/kubelet/pods/51ef9026-602b-4f6a-98f6-9d0f065f6c45/volumes" Dec 11 15:15:39 crc kubenswrapper[5050]: I1211 15:15:39.624977 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Dec 11 15:15:39 crc kubenswrapper[5050]: W1211 15:15:39.640232 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod654ba650_97ba_422e_931f_c97a03d7ff9c.slice/crio-8761485bb19a667408d1c9e919b5c3f7ff141f036f9fc8922a0bf0a9b06485a5 WatchSource:0}: Error finding container 8761485bb19a667408d1c9e919b5c3f7ff141f036f9fc8922a0bf0a9b06485a5: Status 404 returned error can't find the container with id 8761485bb19a667408d1c9e919b5c3f7ff141f036f9fc8922a0bf0a9b06485a5 Dec 11 15:15:40 crc kubenswrapper[5050]: I1211 15:15:40.215626 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"654ba650-97ba-422e-931f-c97a03d7ff9c","Type":"ContainerStarted","Data":"921ef0f371daef6e1532bd5ec32b0709d5dd3a81072bf492a3505626e72a7c76"} Dec 11 15:15:40 crc kubenswrapper[5050]: I1211 15:15:40.215696 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"654ba650-97ba-422e-931f-c97a03d7ff9c","Type":"ContainerStarted","Data":"bc90a779d1cfcea35573cd6819ae6ea5fc51d195927f343cea1566a23abd22c9"} Dec 11 15:15:40 crc kubenswrapper[5050]: I1211 15:15:40.215710 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"654ba650-97ba-422e-931f-c97a03d7ff9c","Type":"ContainerStarted","Data":"8761485bb19a667408d1c9e919b5c3f7ff141f036f9fc8922a0bf0a9b06485a5"} Dec 11 15:15:40 crc kubenswrapper[5050]: I1211 15:15:40.215841 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Dec 11 15:15:40 crc kubenswrapper[5050]: I1211 15:15:40.232847 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.232822467 podStartE2EDuration="2.232822467s" podCreationTimestamp="2025-12-11 15:15:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 15:15:40.232746725 +0000 UTC m=+5231.076469301" watchObservedRunningTime="2025-12-11 15:15:40.232822467 +0000 UTC m=+5231.076545053" Dec 11 15:15:40 crc kubenswrapper[5050]: I1211 15:15:40.796911 5050 patch_prober.go:28] interesting pod/machine-config-daemon-wcb2s container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 15:15:40 crc kubenswrapper[5050]: I1211 15:15:40.797634 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 15:15:44 crc kubenswrapper[5050]: I1211 15:15:44.492626 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-6lrzt"] Dec 11 15:15:44 crc kubenswrapper[5050]: I1211 15:15:44.494325 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-6lrzt" Dec 11 15:15:44 crc kubenswrapper[5050]: I1211 15:15:44.506898 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-2f73-account-create-update-q7tdz"] Dec 11 15:15:44 crc kubenswrapper[5050]: I1211 15:15:44.512168 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-2f73-account-create-update-q7tdz" Dec 11 15:15:44 crc kubenswrapper[5050]: I1211 15:15:44.521114 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Dec 11 15:15:44 crc kubenswrapper[5050]: I1211 15:15:44.528824 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-6lrzt"] Dec 11 15:15:44 crc kubenswrapper[5050]: I1211 15:15:44.536356 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5bnzg\" (UniqueName: \"kubernetes.io/projected/8d7528a4-821a-4b77-8dc9-91b73ead942f-kube-api-access-5bnzg\") pod \"keystone-db-create-6lrzt\" (UID: \"8d7528a4-821a-4b77-8dc9-91b73ead942f\") " pod="openstack/keystone-db-create-6lrzt" Dec 11 15:15:44 crc kubenswrapper[5050]: I1211 15:15:44.536482 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8d7528a4-821a-4b77-8dc9-91b73ead942f-operator-scripts\") pod \"keystone-db-create-6lrzt\" (UID: \"8d7528a4-821a-4b77-8dc9-91b73ead942f\") " pod="openstack/keystone-db-create-6lrzt" Dec 11 15:15:44 crc kubenswrapper[5050]: I1211 15:15:44.536530 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e0df6fee-7c04-4607-9472-294071bcb806-operator-scripts\") pod \"keystone-2f73-account-create-update-q7tdz\" (UID: \"e0df6fee-7c04-4607-9472-294071bcb806\") " pod="openstack/keystone-2f73-account-create-update-q7tdz" Dec 11 15:15:44 crc kubenswrapper[5050]: I1211 15:15:44.536553 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rnxnx\" (UniqueName: \"kubernetes.io/projected/e0df6fee-7c04-4607-9472-294071bcb806-kube-api-access-rnxnx\") pod \"keystone-2f73-account-create-update-q7tdz\" (UID: \"e0df6fee-7c04-4607-9472-294071bcb806\") " pod="openstack/keystone-2f73-account-create-update-q7tdz" Dec 11 15:15:44 crc kubenswrapper[5050]: I1211 15:15:44.540665 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-2f73-account-create-update-q7tdz"] Dec 11 15:15:44 crc kubenswrapper[5050]: I1211 15:15:44.637552 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e0df6fee-7c04-4607-9472-294071bcb806-operator-scripts\") pod \"keystone-2f73-account-create-update-q7tdz\" (UID: \"e0df6fee-7c04-4607-9472-294071bcb806\") " pod="openstack/keystone-2f73-account-create-update-q7tdz" Dec 11 15:15:44 crc kubenswrapper[5050]: I1211 15:15:44.637841 
5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rnxnx\" (UniqueName: \"kubernetes.io/projected/e0df6fee-7c04-4607-9472-294071bcb806-kube-api-access-rnxnx\") pod \"keystone-2f73-account-create-update-q7tdz\" (UID: \"e0df6fee-7c04-4607-9472-294071bcb806\") " pod="openstack/keystone-2f73-account-create-update-q7tdz" Dec 11 15:15:44 crc kubenswrapper[5050]: I1211 15:15:44.637952 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5bnzg\" (UniqueName: \"kubernetes.io/projected/8d7528a4-821a-4b77-8dc9-91b73ead942f-kube-api-access-5bnzg\") pod \"keystone-db-create-6lrzt\" (UID: \"8d7528a4-821a-4b77-8dc9-91b73ead942f\") " pod="openstack/keystone-db-create-6lrzt" Dec 11 15:15:44 crc kubenswrapper[5050]: I1211 15:15:44.638120 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8d7528a4-821a-4b77-8dc9-91b73ead942f-operator-scripts\") pod \"keystone-db-create-6lrzt\" (UID: \"8d7528a4-821a-4b77-8dc9-91b73ead942f\") " pod="openstack/keystone-db-create-6lrzt" Dec 11 15:15:44 crc kubenswrapper[5050]: I1211 15:15:44.639513 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8d7528a4-821a-4b77-8dc9-91b73ead942f-operator-scripts\") pod \"keystone-db-create-6lrzt\" (UID: \"8d7528a4-821a-4b77-8dc9-91b73ead942f\") " pod="openstack/keystone-db-create-6lrzt" Dec 11 15:15:44 crc kubenswrapper[5050]: I1211 15:15:44.639629 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e0df6fee-7c04-4607-9472-294071bcb806-operator-scripts\") pod \"keystone-2f73-account-create-update-q7tdz\" (UID: \"e0df6fee-7c04-4607-9472-294071bcb806\") " pod="openstack/keystone-2f73-account-create-update-q7tdz" Dec 11 15:15:44 crc kubenswrapper[5050]: I1211 15:15:44.666704 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5bnzg\" (UniqueName: \"kubernetes.io/projected/8d7528a4-821a-4b77-8dc9-91b73ead942f-kube-api-access-5bnzg\") pod \"keystone-db-create-6lrzt\" (UID: \"8d7528a4-821a-4b77-8dc9-91b73ead942f\") " pod="openstack/keystone-db-create-6lrzt" Dec 11 15:15:44 crc kubenswrapper[5050]: I1211 15:15:44.667782 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rnxnx\" (UniqueName: \"kubernetes.io/projected/e0df6fee-7c04-4607-9472-294071bcb806-kube-api-access-rnxnx\") pod \"keystone-2f73-account-create-update-q7tdz\" (UID: \"e0df6fee-7c04-4607-9472-294071bcb806\") " pod="openstack/keystone-2f73-account-create-update-q7tdz" Dec 11 15:15:44 crc kubenswrapper[5050]: I1211 15:15:44.822351 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-6lrzt" Dec 11 15:15:44 crc kubenswrapper[5050]: I1211 15:15:44.836047 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-2f73-account-create-update-q7tdz" Dec 11 15:15:45 crc kubenswrapper[5050]: I1211 15:15:45.004149 5050 scope.go:117] "RemoveContainer" containerID="8a5cc8d6f6aa7909f8ad1edf31dfcd8a02c5a220eb28689be009ff022bb36e2b" Dec 11 15:15:45 crc kubenswrapper[5050]: I1211 15:15:45.057722 5050 scope.go:117] "RemoveContainer" containerID="e32109e1280e08cf79385c75aa3a9dbf78a80968778d474e207aee266583ac15" Dec 11 15:15:45 crc kubenswrapper[5050]: I1211 15:15:45.079586 5050 scope.go:117] "RemoveContainer" containerID="146c4c22e9bf52ba1df3f025e911c4d0d7af333fad692ba7bfa0ba97256e8caa" Dec 11 15:15:45 crc kubenswrapper[5050]: I1211 15:15:45.104072 5050 scope.go:117] "RemoveContainer" containerID="daaaabe96903ca5d2e52e3ec5a014d4bd2558efbcebf041c6f9821392c4c4833" Dec 11 15:15:45 crc kubenswrapper[5050]: I1211 15:15:45.127934 5050 scope.go:117] "RemoveContainer" containerID="287588aa8c74548b8956ec69ebc3861fe0b814b83f650cf50df66dc37e1364e0" Dec 11 15:15:45 crc kubenswrapper[5050]: I1211 15:15:45.157603 5050 scope.go:117] "RemoveContainer" containerID="422a3e52c9c362f015f715c21d04d80d41d46ca46883be3250ef8da67c8a01e6" Dec 11 15:15:45 crc kubenswrapper[5050]: I1211 15:15:45.176555 5050 scope.go:117] "RemoveContainer" containerID="230f8db9b2d80b148f55ccff672575c069cd97ebc38e2a24c3dd90fbf687bb80" Dec 11 15:15:45 crc kubenswrapper[5050]: I1211 15:15:45.198834 5050 scope.go:117] "RemoveContainer" containerID="26be1ce2ff8de8d8a09aece0d3f92738d24a04d8a2e059d8c2ef7767f78a5c5c" Dec 11 15:15:45 crc kubenswrapper[5050]: I1211 15:15:45.216162 5050 scope.go:117] "RemoveContainer" containerID="d8acce0af52a04fb86554f82cd8dc3d51d4d65484f120a90bf5a82d9097095ef" Dec 11 15:15:45 crc kubenswrapper[5050]: I1211 15:15:45.235816 5050 scope.go:117] "RemoveContainer" containerID="eb5058c059b7bfd37fdf9c68c8cf7bf5613de65886882788fdb0b7d3558a5f68" Dec 11 15:15:45 crc kubenswrapper[5050]: I1211 15:15:45.329250 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-6lrzt"] Dec 11 15:15:45 crc kubenswrapper[5050]: I1211 15:15:45.388136 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-2f73-account-create-update-q7tdz"] Dec 11 15:15:46 crc kubenswrapper[5050]: I1211 15:15:46.312335 5050 generic.go:334] "Generic (PLEG): container finished" podID="e0df6fee-7c04-4607-9472-294071bcb806" containerID="dad52fb14ae49173970fec9a1474f102876cd319baf5b98e1a2e258db6195b1c" exitCode=0 Dec 11 15:15:46 crc kubenswrapper[5050]: I1211 15:15:46.312430 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-2f73-account-create-update-q7tdz" event={"ID":"e0df6fee-7c04-4607-9472-294071bcb806","Type":"ContainerDied","Data":"dad52fb14ae49173970fec9a1474f102876cd319baf5b98e1a2e258db6195b1c"} Dec 11 15:15:46 crc kubenswrapper[5050]: I1211 15:15:46.312741 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-2f73-account-create-update-q7tdz" event={"ID":"e0df6fee-7c04-4607-9472-294071bcb806","Type":"ContainerStarted","Data":"c1bb02f4df7bb8de478ade73e84aea1a32ef302a0d79dbc62a369712a08b1718"} Dec 11 15:15:46 crc kubenswrapper[5050]: I1211 15:15:46.314441 5050 generic.go:334] "Generic (PLEG): container finished" podID="8d7528a4-821a-4b77-8dc9-91b73ead942f" containerID="d05b3caa18e07de84e75cefb136db179f84b31d2cc199887e819c2c980ee5dd1" exitCode=0 Dec 11 15:15:46 crc kubenswrapper[5050]: I1211 15:15:46.314483 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-6lrzt" 
event={"ID":"8d7528a4-821a-4b77-8dc9-91b73ead942f","Type":"ContainerDied","Data":"d05b3caa18e07de84e75cefb136db179f84b31d2cc199887e819c2c980ee5dd1"} Dec 11 15:15:46 crc kubenswrapper[5050]: I1211 15:15:46.314527 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-6lrzt" event={"ID":"8d7528a4-821a-4b77-8dc9-91b73ead942f","Type":"ContainerStarted","Data":"b54f373d4efa82f6f37e520a2fdfecaef7edecef3078a39bb7bda9e125e22eab"} Dec 11 15:15:47 crc kubenswrapper[5050]: I1211 15:15:47.754295 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-2f73-account-create-update-q7tdz" Dec 11 15:15:47 crc kubenswrapper[5050]: I1211 15:15:47.760166 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-6lrzt" Dec 11 15:15:47 crc kubenswrapper[5050]: I1211 15:15:47.893670 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e0df6fee-7c04-4607-9472-294071bcb806-operator-scripts\") pod \"e0df6fee-7c04-4607-9472-294071bcb806\" (UID: \"e0df6fee-7c04-4607-9472-294071bcb806\") " Dec 11 15:15:47 crc kubenswrapper[5050]: I1211 15:15:47.893870 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnxnx\" (UniqueName: \"kubernetes.io/projected/e0df6fee-7c04-4607-9472-294071bcb806-kube-api-access-rnxnx\") pod \"e0df6fee-7c04-4607-9472-294071bcb806\" (UID: \"e0df6fee-7c04-4607-9472-294071bcb806\") " Dec 11 15:15:47 crc kubenswrapper[5050]: I1211 15:15:47.893942 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5bnzg\" (UniqueName: \"kubernetes.io/projected/8d7528a4-821a-4b77-8dc9-91b73ead942f-kube-api-access-5bnzg\") pod \"8d7528a4-821a-4b77-8dc9-91b73ead942f\" (UID: \"8d7528a4-821a-4b77-8dc9-91b73ead942f\") " Dec 11 15:15:47 crc kubenswrapper[5050]: I1211 15:15:47.893981 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8d7528a4-821a-4b77-8dc9-91b73ead942f-operator-scripts\") pod \"8d7528a4-821a-4b77-8dc9-91b73ead942f\" (UID: \"8d7528a4-821a-4b77-8dc9-91b73ead942f\") " Dec 11 15:15:47 crc kubenswrapper[5050]: I1211 15:15:47.894472 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e0df6fee-7c04-4607-9472-294071bcb806-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e0df6fee-7c04-4607-9472-294071bcb806" (UID: "e0df6fee-7c04-4607-9472-294071bcb806"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 15:15:47 crc kubenswrapper[5050]: I1211 15:15:47.894481 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8d7528a4-821a-4b77-8dc9-91b73ead942f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8d7528a4-821a-4b77-8dc9-91b73ead942f" (UID: "8d7528a4-821a-4b77-8dc9-91b73ead942f"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 15:15:47 crc kubenswrapper[5050]: I1211 15:15:47.894924 5050 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e0df6fee-7c04-4607-9472-294071bcb806-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 15:15:47 crc kubenswrapper[5050]: I1211 15:15:47.894948 5050 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8d7528a4-821a-4b77-8dc9-91b73ead942f-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 15:15:47 crc kubenswrapper[5050]: I1211 15:15:47.899259 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0df6fee-7c04-4607-9472-294071bcb806-kube-api-access-rnxnx" (OuterVolumeSpecName: "kube-api-access-rnxnx") pod "e0df6fee-7c04-4607-9472-294071bcb806" (UID: "e0df6fee-7c04-4607-9472-294071bcb806"). InnerVolumeSpecName "kube-api-access-rnxnx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 15:15:47 crc kubenswrapper[5050]: I1211 15:15:47.900332 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d7528a4-821a-4b77-8dc9-91b73ead942f-kube-api-access-5bnzg" (OuterVolumeSpecName: "kube-api-access-5bnzg") pod "8d7528a4-821a-4b77-8dc9-91b73ead942f" (UID: "8d7528a4-821a-4b77-8dc9-91b73ead942f"). InnerVolumeSpecName "kube-api-access-5bnzg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 15:15:47 crc kubenswrapper[5050]: I1211 15:15:47.996149 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnxnx\" (UniqueName: \"kubernetes.io/projected/e0df6fee-7c04-4607-9472-294071bcb806-kube-api-access-rnxnx\") on node \"crc\" DevicePath \"\"" Dec 11 15:15:47 crc kubenswrapper[5050]: I1211 15:15:47.996178 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5bnzg\" (UniqueName: \"kubernetes.io/projected/8d7528a4-821a-4b77-8dc9-91b73ead942f-kube-api-access-5bnzg\") on node \"crc\" DevicePath \"\"" Dec 11 15:15:48 crc kubenswrapper[5050]: I1211 15:15:48.332245 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-2f73-account-create-update-q7tdz" event={"ID":"e0df6fee-7c04-4607-9472-294071bcb806","Type":"ContainerDied","Data":"c1bb02f4df7bb8de478ade73e84aea1a32ef302a0d79dbc62a369712a08b1718"} Dec 11 15:15:48 crc kubenswrapper[5050]: I1211 15:15:48.332283 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c1bb02f4df7bb8de478ade73e84aea1a32ef302a0d79dbc62a369712a08b1718" Dec 11 15:15:48 crc kubenswrapper[5050]: I1211 15:15:48.332338 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-2f73-account-create-update-q7tdz" Dec 11 15:15:48 crc kubenswrapper[5050]: I1211 15:15:48.335123 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-6lrzt" event={"ID":"8d7528a4-821a-4b77-8dc9-91b73ead942f","Type":"ContainerDied","Data":"b54f373d4efa82f6f37e520a2fdfecaef7edecef3078a39bb7bda9e125e22eab"} Dec 11 15:15:48 crc kubenswrapper[5050]: I1211 15:15:48.335171 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b54f373d4efa82f6f37e520a2fdfecaef7edecef3078a39bb7bda9e125e22eab" Dec 11 15:15:48 crc kubenswrapper[5050]: I1211 15:15:48.335186 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-6lrzt" Dec 11 15:15:49 crc kubenswrapper[5050]: I1211 15:15:49.269250 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Dec 11 15:15:50 crc kubenswrapper[5050]: I1211 15:15:50.052759 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-nm5r6"] Dec 11 15:15:50 crc kubenswrapper[5050]: E1211 15:15:50.053130 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d7528a4-821a-4b77-8dc9-91b73ead942f" containerName="mariadb-database-create" Dec 11 15:15:50 crc kubenswrapper[5050]: I1211 15:15:50.053147 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d7528a4-821a-4b77-8dc9-91b73ead942f" containerName="mariadb-database-create" Dec 11 15:15:50 crc kubenswrapper[5050]: E1211 15:15:50.053164 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0df6fee-7c04-4607-9472-294071bcb806" containerName="mariadb-account-create-update" Dec 11 15:15:50 crc kubenswrapper[5050]: I1211 15:15:50.053171 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0df6fee-7c04-4607-9472-294071bcb806" containerName="mariadb-account-create-update" Dec 11 15:15:50 crc kubenswrapper[5050]: I1211 15:15:50.053309 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d7528a4-821a-4b77-8dc9-91b73ead942f" containerName="mariadb-database-create" Dec 11 15:15:50 crc kubenswrapper[5050]: I1211 15:15:50.053328 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0df6fee-7c04-4607-9472-294071bcb806" containerName="mariadb-account-create-update" Dec 11 15:15:50 crc kubenswrapper[5050]: I1211 15:15:50.053892 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-nm5r6" Dec 11 15:15:50 crc kubenswrapper[5050]: I1211 15:15:50.056542 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Dec 11 15:15:50 crc kubenswrapper[5050]: I1211 15:15:50.057354 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Dec 11 15:15:50 crc kubenswrapper[5050]: I1211 15:15:50.058138 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-jrbb7" Dec 11 15:15:50 crc kubenswrapper[5050]: I1211 15:15:50.059197 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Dec 11 15:15:50 crc kubenswrapper[5050]: I1211 15:15:50.088580 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-nm5r6"] Dec 11 15:15:50 crc kubenswrapper[5050]: I1211 15:15:50.132791 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d056d58-517c-49db-bb61-1a0394fdd271-combined-ca-bundle\") pod \"keystone-db-sync-nm5r6\" (UID: \"8d056d58-517c-49db-bb61-1a0394fdd271\") " pod="openstack/keystone-db-sync-nm5r6" Dec 11 15:15:50 crc kubenswrapper[5050]: I1211 15:15:50.132849 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggjpq\" (UniqueName: \"kubernetes.io/projected/8d056d58-517c-49db-bb61-1a0394fdd271-kube-api-access-ggjpq\") pod \"keystone-db-sync-nm5r6\" (UID: \"8d056d58-517c-49db-bb61-1a0394fdd271\") " pod="openstack/keystone-db-sync-nm5r6" Dec 11 15:15:50 crc kubenswrapper[5050]: I1211 15:15:50.132879 5050 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d056d58-517c-49db-bb61-1a0394fdd271-config-data\") pod \"keystone-db-sync-nm5r6\" (UID: \"8d056d58-517c-49db-bb61-1a0394fdd271\") " pod="openstack/keystone-db-sync-nm5r6" Dec 11 15:15:50 crc kubenswrapper[5050]: I1211 15:15:50.234175 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d056d58-517c-49db-bb61-1a0394fdd271-combined-ca-bundle\") pod \"keystone-db-sync-nm5r6\" (UID: \"8d056d58-517c-49db-bb61-1a0394fdd271\") " pod="openstack/keystone-db-sync-nm5r6" Dec 11 15:15:50 crc kubenswrapper[5050]: I1211 15:15:50.234572 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ggjpq\" (UniqueName: \"kubernetes.io/projected/8d056d58-517c-49db-bb61-1a0394fdd271-kube-api-access-ggjpq\") pod \"keystone-db-sync-nm5r6\" (UID: \"8d056d58-517c-49db-bb61-1a0394fdd271\") " pod="openstack/keystone-db-sync-nm5r6" Dec 11 15:15:50 crc kubenswrapper[5050]: I1211 15:15:50.234605 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d056d58-517c-49db-bb61-1a0394fdd271-config-data\") pod \"keystone-db-sync-nm5r6\" (UID: \"8d056d58-517c-49db-bb61-1a0394fdd271\") " pod="openstack/keystone-db-sync-nm5r6" Dec 11 15:15:50 crc kubenswrapper[5050]: I1211 15:15:50.239902 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d056d58-517c-49db-bb61-1a0394fdd271-config-data\") pod \"keystone-db-sync-nm5r6\" (UID: \"8d056d58-517c-49db-bb61-1a0394fdd271\") " pod="openstack/keystone-db-sync-nm5r6" Dec 11 15:15:50 crc kubenswrapper[5050]: I1211 15:15:50.240319 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d056d58-517c-49db-bb61-1a0394fdd271-combined-ca-bundle\") pod \"keystone-db-sync-nm5r6\" (UID: \"8d056d58-517c-49db-bb61-1a0394fdd271\") " pod="openstack/keystone-db-sync-nm5r6" Dec 11 15:15:50 crc kubenswrapper[5050]: I1211 15:15:50.254900 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ggjpq\" (UniqueName: \"kubernetes.io/projected/8d056d58-517c-49db-bb61-1a0394fdd271-kube-api-access-ggjpq\") pod \"keystone-db-sync-nm5r6\" (UID: \"8d056d58-517c-49db-bb61-1a0394fdd271\") " pod="openstack/keystone-db-sync-nm5r6" Dec 11 15:15:50 crc kubenswrapper[5050]: I1211 15:15:50.373892 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-nm5r6" Dec 11 15:15:50 crc kubenswrapper[5050]: I1211 15:15:50.816727 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-nm5r6"] Dec 11 15:15:51 crc kubenswrapper[5050]: I1211 15:15:51.375714 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-nm5r6" event={"ID":"8d056d58-517c-49db-bb61-1a0394fdd271","Type":"ContainerStarted","Data":"bd0e53fec676ed986a2d1d9e01bc075c850dac71cef43be6f141386032947922"} Dec 11 15:15:51 crc kubenswrapper[5050]: I1211 15:15:51.375761 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-nm5r6" event={"ID":"8d056d58-517c-49db-bb61-1a0394fdd271","Type":"ContainerStarted","Data":"353cc26ea07d9d57654fb28664f893f5d15b6142da1065aab67684d49ae0b6c8"} Dec 11 15:15:51 crc kubenswrapper[5050]: I1211 15:15:51.407062 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-nm5r6" podStartSLOduration=1.407038189 podStartE2EDuration="1.407038189s" podCreationTimestamp="2025-12-11 15:15:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 15:15:51.397111382 +0000 UTC m=+5242.240833968" watchObservedRunningTime="2025-12-11 15:15:51.407038189 +0000 UTC m=+5242.250760815" Dec 11 15:15:52 crc kubenswrapper[5050]: E1211 15:15:52.848048 5050 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8d056d58_517c_49db_bb61_1a0394fdd271.slice/crio-conmon-bd0e53fec676ed986a2d1d9e01bc075c850dac71cef43be6f141386032947922.scope\": RecentStats: unable to find data in memory cache]" Dec 11 15:15:53 crc kubenswrapper[5050]: I1211 15:15:53.392832 5050 generic.go:334] "Generic (PLEG): container finished" podID="8d056d58-517c-49db-bb61-1a0394fdd271" containerID="bd0e53fec676ed986a2d1d9e01bc075c850dac71cef43be6f141386032947922" exitCode=0 Dec 11 15:15:53 crc kubenswrapper[5050]: I1211 15:15:53.392875 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-nm5r6" event={"ID":"8d056d58-517c-49db-bb61-1a0394fdd271","Type":"ContainerDied","Data":"bd0e53fec676ed986a2d1d9e01bc075c850dac71cef43be6f141386032947922"} Dec 11 15:15:54 crc kubenswrapper[5050]: I1211 15:15:54.809256 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-nm5r6" Dec 11 15:15:54 crc kubenswrapper[5050]: I1211 15:15:54.849626 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ggjpq\" (UniqueName: \"kubernetes.io/projected/8d056d58-517c-49db-bb61-1a0394fdd271-kube-api-access-ggjpq\") pod \"8d056d58-517c-49db-bb61-1a0394fdd271\" (UID: \"8d056d58-517c-49db-bb61-1a0394fdd271\") " Dec 11 15:15:54 crc kubenswrapper[5050]: I1211 15:15:54.849730 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d056d58-517c-49db-bb61-1a0394fdd271-config-data\") pod \"8d056d58-517c-49db-bb61-1a0394fdd271\" (UID: \"8d056d58-517c-49db-bb61-1a0394fdd271\") " Dec 11 15:15:54 crc kubenswrapper[5050]: I1211 15:15:54.849846 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d056d58-517c-49db-bb61-1a0394fdd271-combined-ca-bundle\") pod \"8d056d58-517c-49db-bb61-1a0394fdd271\" (UID: \"8d056d58-517c-49db-bb61-1a0394fdd271\") " Dec 11 15:15:54 crc kubenswrapper[5050]: I1211 15:15:54.857334 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d056d58-517c-49db-bb61-1a0394fdd271-kube-api-access-ggjpq" (OuterVolumeSpecName: "kube-api-access-ggjpq") pod "8d056d58-517c-49db-bb61-1a0394fdd271" (UID: "8d056d58-517c-49db-bb61-1a0394fdd271"). InnerVolumeSpecName "kube-api-access-ggjpq". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 15:15:54 crc kubenswrapper[5050]: I1211 15:15:54.875824 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d056d58-517c-49db-bb61-1a0394fdd271-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8d056d58-517c-49db-bb61-1a0394fdd271" (UID: "8d056d58-517c-49db-bb61-1a0394fdd271"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 15:15:54 crc kubenswrapper[5050]: I1211 15:15:54.901513 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d056d58-517c-49db-bb61-1a0394fdd271-config-data" (OuterVolumeSpecName: "config-data") pod "8d056d58-517c-49db-bb61-1a0394fdd271" (UID: "8d056d58-517c-49db-bb61-1a0394fdd271"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 15:15:54 crc kubenswrapper[5050]: I1211 15:15:54.951450 5050 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d056d58-517c-49db-bb61-1a0394fdd271-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 15:15:54 crc kubenswrapper[5050]: I1211 15:15:54.951485 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ggjpq\" (UniqueName: \"kubernetes.io/projected/8d056d58-517c-49db-bb61-1a0394fdd271-kube-api-access-ggjpq\") on node \"crc\" DevicePath \"\"" Dec 11 15:15:54 crc kubenswrapper[5050]: I1211 15:15:54.951497 5050 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d056d58-517c-49db-bb61-1a0394fdd271-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 15:15:55 crc kubenswrapper[5050]: I1211 15:15:55.412037 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-nm5r6" event={"ID":"8d056d58-517c-49db-bb61-1a0394fdd271","Type":"ContainerDied","Data":"353cc26ea07d9d57654fb28664f893f5d15b6142da1065aab67684d49ae0b6c8"} Dec 11 15:15:55 crc kubenswrapper[5050]: I1211 15:15:55.412082 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="353cc26ea07d9d57654fb28664f893f5d15b6142da1065aab67684d49ae0b6c8" Dec 11 15:15:55 crc kubenswrapper[5050]: I1211 15:15:55.412148 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-nm5r6" Dec 11 15:15:55 crc kubenswrapper[5050]: I1211 15:15:55.638801 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7b8d99c58c-44jtb"] Dec 11 15:15:55 crc kubenswrapper[5050]: E1211 15:15:55.639205 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d056d58-517c-49db-bb61-1a0394fdd271" containerName="keystone-db-sync" Dec 11 15:15:55 crc kubenswrapper[5050]: I1211 15:15:55.639221 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d056d58-517c-49db-bb61-1a0394fdd271" containerName="keystone-db-sync" Dec 11 15:15:55 crc kubenswrapper[5050]: I1211 15:15:55.639385 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d056d58-517c-49db-bb61-1a0394fdd271" containerName="keystone-db-sync" Dec 11 15:15:55 crc kubenswrapper[5050]: I1211 15:15:55.640227 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7b8d99c58c-44jtb" Dec 11 15:15:55 crc kubenswrapper[5050]: I1211 15:15:55.693250 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-h9vfz"] Dec 11 15:15:55 crc kubenswrapper[5050]: I1211 15:15:55.694358 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-h9vfz" Dec 11 15:15:55 crc kubenswrapper[5050]: I1211 15:15:55.697029 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Dec 11 15:15:55 crc kubenswrapper[5050]: I1211 15:15:55.697378 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Dec 11 15:15:55 crc kubenswrapper[5050]: I1211 15:15:55.697591 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Dec 11 15:15:55 crc kubenswrapper[5050]: I1211 15:15:55.697813 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Dec 11 15:15:55 crc kubenswrapper[5050]: I1211 15:15:55.700605 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-jrbb7" Dec 11 15:15:55 crc kubenswrapper[5050]: I1211 15:15:55.722464 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7b8d99c58c-44jtb"] Dec 11 15:15:55 crc kubenswrapper[5050]: I1211 15:15:55.733564 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-h9vfz"] Dec 11 15:15:55 crc kubenswrapper[5050]: I1211 15:15:55.774644 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9847fc6d-b9e1-4fbe-9a61-502c243377a7-dns-svc\") pod \"dnsmasq-dns-7b8d99c58c-44jtb\" (UID: \"9847fc6d-b9e1-4fbe-9a61-502c243377a7\") " pod="openstack/dnsmasq-dns-7b8d99c58c-44jtb" Dec 11 15:15:55 crc kubenswrapper[5050]: I1211 15:15:55.774693 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9847fc6d-b9e1-4fbe-9a61-502c243377a7-config\") pod \"dnsmasq-dns-7b8d99c58c-44jtb\" (UID: \"9847fc6d-b9e1-4fbe-9a61-502c243377a7\") " pod="openstack/dnsmasq-dns-7b8d99c58c-44jtb" Dec 11 15:15:55 crc kubenswrapper[5050]: I1211 15:15:55.774737 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ktswf\" (UniqueName: \"kubernetes.io/projected/9847fc6d-b9e1-4fbe-9a61-502c243377a7-kube-api-access-ktswf\") pod \"dnsmasq-dns-7b8d99c58c-44jtb\" (UID: \"9847fc6d-b9e1-4fbe-9a61-502c243377a7\") " pod="openstack/dnsmasq-dns-7b8d99c58c-44jtb" Dec 11 15:15:55 crc kubenswrapper[5050]: I1211 15:15:55.774858 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9847fc6d-b9e1-4fbe-9a61-502c243377a7-ovsdbserver-nb\") pod \"dnsmasq-dns-7b8d99c58c-44jtb\" (UID: \"9847fc6d-b9e1-4fbe-9a61-502c243377a7\") " pod="openstack/dnsmasq-dns-7b8d99c58c-44jtb" Dec 11 15:15:55 crc kubenswrapper[5050]: I1211 15:15:55.774881 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9847fc6d-b9e1-4fbe-9a61-502c243377a7-ovsdbserver-sb\") pod \"dnsmasq-dns-7b8d99c58c-44jtb\" (UID: \"9847fc6d-b9e1-4fbe-9a61-502c243377a7\") " pod="openstack/dnsmasq-dns-7b8d99c58c-44jtb" Dec 11 15:15:55 crc kubenswrapper[5050]: I1211 15:15:55.889977 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ktswf\" (UniqueName: \"kubernetes.io/projected/9847fc6d-b9e1-4fbe-9a61-502c243377a7-kube-api-access-ktswf\") pod \"dnsmasq-dns-7b8d99c58c-44jtb\" (UID: 
\"9847fc6d-b9e1-4fbe-9a61-502c243377a7\") " pod="openstack/dnsmasq-dns-7b8d99c58c-44jtb" Dec 11 15:15:55 crc kubenswrapper[5050]: I1211 15:15:55.890293 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/ac06f2c6-b325-4f89-b1bb-e1e641f9e340-credential-keys\") pod \"keystone-bootstrap-h9vfz\" (UID: \"ac06f2c6-b325-4f89-b1bb-e1e641f9e340\") " pod="openstack/keystone-bootstrap-h9vfz" Dec 11 15:15:55 crc kubenswrapper[5050]: I1211 15:15:55.890328 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ac06f2c6-b325-4f89-b1bb-e1e641f9e340-scripts\") pod \"keystone-bootstrap-h9vfz\" (UID: \"ac06f2c6-b325-4f89-b1bb-e1e641f9e340\") " pod="openstack/keystone-bootstrap-h9vfz" Dec 11 15:15:55 crc kubenswrapper[5050]: I1211 15:15:55.890350 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-278qw\" (UniqueName: \"kubernetes.io/projected/ac06f2c6-b325-4f89-b1bb-e1e641f9e340-kube-api-access-278qw\") pod \"keystone-bootstrap-h9vfz\" (UID: \"ac06f2c6-b325-4f89-b1bb-e1e641f9e340\") " pod="openstack/keystone-bootstrap-h9vfz" Dec 11 15:15:55 crc kubenswrapper[5050]: I1211 15:15:55.890407 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ac06f2c6-b325-4f89-b1bb-e1e641f9e340-fernet-keys\") pod \"keystone-bootstrap-h9vfz\" (UID: \"ac06f2c6-b325-4f89-b1bb-e1e641f9e340\") " pod="openstack/keystone-bootstrap-h9vfz" Dec 11 15:15:55 crc kubenswrapper[5050]: I1211 15:15:55.890426 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9847fc6d-b9e1-4fbe-9a61-502c243377a7-ovsdbserver-nb\") pod \"dnsmasq-dns-7b8d99c58c-44jtb\" (UID: \"9847fc6d-b9e1-4fbe-9a61-502c243377a7\") " pod="openstack/dnsmasq-dns-7b8d99c58c-44jtb" Dec 11 15:15:55 crc kubenswrapper[5050]: I1211 15:15:55.890449 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9847fc6d-b9e1-4fbe-9a61-502c243377a7-ovsdbserver-sb\") pod \"dnsmasq-dns-7b8d99c58c-44jtb\" (UID: \"9847fc6d-b9e1-4fbe-9a61-502c243377a7\") " pod="openstack/dnsmasq-dns-7b8d99c58c-44jtb" Dec 11 15:15:55 crc kubenswrapper[5050]: I1211 15:15:55.890469 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac06f2c6-b325-4f89-b1bb-e1e641f9e340-config-data\") pod \"keystone-bootstrap-h9vfz\" (UID: \"ac06f2c6-b325-4f89-b1bb-e1e641f9e340\") " pod="openstack/keystone-bootstrap-h9vfz" Dec 11 15:15:55 crc kubenswrapper[5050]: I1211 15:15:55.890486 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac06f2c6-b325-4f89-b1bb-e1e641f9e340-combined-ca-bundle\") pod \"keystone-bootstrap-h9vfz\" (UID: \"ac06f2c6-b325-4f89-b1bb-e1e641f9e340\") " pod="openstack/keystone-bootstrap-h9vfz" Dec 11 15:15:55 crc kubenswrapper[5050]: I1211 15:15:55.890512 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9847fc6d-b9e1-4fbe-9a61-502c243377a7-dns-svc\") pod \"dnsmasq-dns-7b8d99c58c-44jtb\" (UID: 
\"9847fc6d-b9e1-4fbe-9a61-502c243377a7\") " pod="openstack/dnsmasq-dns-7b8d99c58c-44jtb" Dec 11 15:15:55 crc kubenswrapper[5050]: I1211 15:15:55.890534 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9847fc6d-b9e1-4fbe-9a61-502c243377a7-config\") pod \"dnsmasq-dns-7b8d99c58c-44jtb\" (UID: \"9847fc6d-b9e1-4fbe-9a61-502c243377a7\") " pod="openstack/dnsmasq-dns-7b8d99c58c-44jtb" Dec 11 15:15:55 crc kubenswrapper[5050]: I1211 15:15:55.891498 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9847fc6d-b9e1-4fbe-9a61-502c243377a7-ovsdbserver-sb\") pod \"dnsmasq-dns-7b8d99c58c-44jtb\" (UID: \"9847fc6d-b9e1-4fbe-9a61-502c243377a7\") " pod="openstack/dnsmasq-dns-7b8d99c58c-44jtb" Dec 11 15:15:55 crc kubenswrapper[5050]: I1211 15:15:55.891572 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9847fc6d-b9e1-4fbe-9a61-502c243377a7-ovsdbserver-nb\") pod \"dnsmasq-dns-7b8d99c58c-44jtb\" (UID: \"9847fc6d-b9e1-4fbe-9a61-502c243377a7\") " pod="openstack/dnsmasq-dns-7b8d99c58c-44jtb" Dec 11 15:15:55 crc kubenswrapper[5050]: I1211 15:15:55.891742 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9847fc6d-b9e1-4fbe-9a61-502c243377a7-config\") pod \"dnsmasq-dns-7b8d99c58c-44jtb\" (UID: \"9847fc6d-b9e1-4fbe-9a61-502c243377a7\") " pod="openstack/dnsmasq-dns-7b8d99c58c-44jtb" Dec 11 15:15:55 crc kubenswrapper[5050]: I1211 15:15:55.892037 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9847fc6d-b9e1-4fbe-9a61-502c243377a7-dns-svc\") pod \"dnsmasq-dns-7b8d99c58c-44jtb\" (UID: \"9847fc6d-b9e1-4fbe-9a61-502c243377a7\") " pod="openstack/dnsmasq-dns-7b8d99c58c-44jtb" Dec 11 15:15:55 crc kubenswrapper[5050]: I1211 15:15:55.923885 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ktswf\" (UniqueName: \"kubernetes.io/projected/9847fc6d-b9e1-4fbe-9a61-502c243377a7-kube-api-access-ktswf\") pod \"dnsmasq-dns-7b8d99c58c-44jtb\" (UID: \"9847fc6d-b9e1-4fbe-9a61-502c243377a7\") " pod="openstack/dnsmasq-dns-7b8d99c58c-44jtb" Dec 11 15:15:55 crc kubenswrapper[5050]: I1211 15:15:55.963690 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7b8d99c58c-44jtb" Dec 11 15:15:55 crc kubenswrapper[5050]: I1211 15:15:55.997901 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/ac06f2c6-b325-4f89-b1bb-e1e641f9e340-credential-keys\") pod \"keystone-bootstrap-h9vfz\" (UID: \"ac06f2c6-b325-4f89-b1bb-e1e641f9e340\") " pod="openstack/keystone-bootstrap-h9vfz" Dec 11 15:15:55 crc kubenswrapper[5050]: I1211 15:15:55.997960 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ac06f2c6-b325-4f89-b1bb-e1e641f9e340-scripts\") pod \"keystone-bootstrap-h9vfz\" (UID: \"ac06f2c6-b325-4f89-b1bb-e1e641f9e340\") " pod="openstack/keystone-bootstrap-h9vfz" Dec 11 15:15:55 crc kubenswrapper[5050]: I1211 15:15:55.997986 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-278qw\" (UniqueName: \"kubernetes.io/projected/ac06f2c6-b325-4f89-b1bb-e1e641f9e340-kube-api-access-278qw\") pod \"keystone-bootstrap-h9vfz\" (UID: \"ac06f2c6-b325-4f89-b1bb-e1e641f9e340\") " pod="openstack/keystone-bootstrap-h9vfz" Dec 11 15:15:55 crc kubenswrapper[5050]: I1211 15:15:55.998551 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ac06f2c6-b325-4f89-b1bb-e1e641f9e340-fernet-keys\") pod \"keystone-bootstrap-h9vfz\" (UID: \"ac06f2c6-b325-4f89-b1bb-e1e641f9e340\") " pod="openstack/keystone-bootstrap-h9vfz" Dec 11 15:15:55 crc kubenswrapper[5050]: I1211 15:15:55.998597 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac06f2c6-b325-4f89-b1bb-e1e641f9e340-config-data\") pod \"keystone-bootstrap-h9vfz\" (UID: \"ac06f2c6-b325-4f89-b1bb-e1e641f9e340\") " pod="openstack/keystone-bootstrap-h9vfz" Dec 11 15:15:55 crc kubenswrapper[5050]: I1211 15:15:55.998613 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac06f2c6-b325-4f89-b1bb-e1e641f9e340-combined-ca-bundle\") pod \"keystone-bootstrap-h9vfz\" (UID: \"ac06f2c6-b325-4f89-b1bb-e1e641f9e340\") " pod="openstack/keystone-bootstrap-h9vfz" Dec 11 15:15:56 crc kubenswrapper[5050]: I1211 15:15:56.001348 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ac06f2c6-b325-4f89-b1bb-e1e641f9e340-fernet-keys\") pod \"keystone-bootstrap-h9vfz\" (UID: \"ac06f2c6-b325-4f89-b1bb-e1e641f9e340\") " pod="openstack/keystone-bootstrap-h9vfz" Dec 11 15:15:56 crc kubenswrapper[5050]: I1211 15:15:56.002596 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac06f2c6-b325-4f89-b1bb-e1e641f9e340-config-data\") pod \"keystone-bootstrap-h9vfz\" (UID: \"ac06f2c6-b325-4f89-b1bb-e1e641f9e340\") " pod="openstack/keystone-bootstrap-h9vfz" Dec 11 15:15:56 crc kubenswrapper[5050]: I1211 15:15:56.005251 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ac06f2c6-b325-4f89-b1bb-e1e641f9e340-scripts\") pod \"keystone-bootstrap-h9vfz\" (UID: \"ac06f2c6-b325-4f89-b1bb-e1e641f9e340\") " pod="openstack/keystone-bootstrap-h9vfz" Dec 11 15:15:56 crc kubenswrapper[5050]: I1211 15:15:56.005460 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/ac06f2c6-b325-4f89-b1bb-e1e641f9e340-credential-keys\") pod \"keystone-bootstrap-h9vfz\" (UID: \"ac06f2c6-b325-4f89-b1bb-e1e641f9e340\") " pod="openstack/keystone-bootstrap-h9vfz" Dec 11 15:15:56 crc kubenswrapper[5050]: I1211 15:15:56.005872 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac06f2c6-b325-4f89-b1bb-e1e641f9e340-combined-ca-bundle\") pod \"keystone-bootstrap-h9vfz\" (UID: \"ac06f2c6-b325-4f89-b1bb-e1e641f9e340\") " pod="openstack/keystone-bootstrap-h9vfz" Dec 11 15:15:56 crc kubenswrapper[5050]: I1211 15:15:56.040634 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-278qw\" (UniqueName: \"kubernetes.io/projected/ac06f2c6-b325-4f89-b1bb-e1e641f9e340-kube-api-access-278qw\") pod \"keystone-bootstrap-h9vfz\" (UID: \"ac06f2c6-b325-4f89-b1bb-e1e641f9e340\") " pod="openstack/keystone-bootstrap-h9vfz" Dec 11 15:15:56 crc kubenswrapper[5050]: I1211 15:15:56.076304 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-h9vfz" Dec 11 15:15:56 crc kubenswrapper[5050]: I1211 15:15:56.481194 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7b8d99c58c-44jtb"] Dec 11 15:15:56 crc kubenswrapper[5050]: W1211 15:15:56.486297 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9847fc6d_b9e1_4fbe_9a61_502c243377a7.slice/crio-fc183c026fb960af6f3d928cbe9f95f7f06afffde46c9b41aeebc9af9184302e WatchSource:0}: Error finding container fc183c026fb960af6f3d928cbe9f95f7f06afffde46c9b41aeebc9af9184302e: Status 404 returned error can't find the container with id fc183c026fb960af6f3d928cbe9f95f7f06afffde46c9b41aeebc9af9184302e Dec 11 15:15:56 crc kubenswrapper[5050]: I1211 15:15:56.552161 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-h9vfz"] Dec 11 15:15:56 crc kubenswrapper[5050]: W1211 15:15:56.553314 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podac06f2c6_b325_4f89_b1bb_e1e641f9e340.slice/crio-ce63807cc8d13632d0a710e68236aabfe68c8e1e77c78bf373c6a720552d74c0 WatchSource:0}: Error finding container ce63807cc8d13632d0a710e68236aabfe68c8e1e77c78bf373c6a720552d74c0: Status 404 returned error can't find the container with id ce63807cc8d13632d0a710e68236aabfe68c8e1e77c78bf373c6a720552d74c0 Dec 11 15:15:57 crc kubenswrapper[5050]: I1211 15:15:57.433800 5050 generic.go:334] "Generic (PLEG): container finished" podID="9847fc6d-b9e1-4fbe-9a61-502c243377a7" containerID="5d34aa36c5a771a44f89db955b8ef686ad86bed16b04d4702e2db2cd19fef2eb" exitCode=0 Dec 11 15:15:57 crc kubenswrapper[5050]: I1211 15:15:57.433866 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b8d99c58c-44jtb" event={"ID":"9847fc6d-b9e1-4fbe-9a61-502c243377a7","Type":"ContainerDied","Data":"5d34aa36c5a771a44f89db955b8ef686ad86bed16b04d4702e2db2cd19fef2eb"} Dec 11 15:15:57 crc kubenswrapper[5050]: I1211 15:15:57.433895 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b8d99c58c-44jtb" event={"ID":"9847fc6d-b9e1-4fbe-9a61-502c243377a7","Type":"ContainerStarted","Data":"fc183c026fb960af6f3d928cbe9f95f7f06afffde46c9b41aeebc9af9184302e"} Dec 11 15:15:57 crc kubenswrapper[5050]: I1211 15:15:57.441242 5050 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-h9vfz" event={"ID":"ac06f2c6-b325-4f89-b1bb-e1e641f9e340","Type":"ContainerStarted","Data":"9bd6f9acf447f4e9a54e3747076bd9c7bb0faf106c245f17cea47181eb6df076"} Dec 11 15:15:57 crc kubenswrapper[5050]: I1211 15:15:57.441599 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-h9vfz" event={"ID":"ac06f2c6-b325-4f89-b1bb-e1e641f9e340","Type":"ContainerStarted","Data":"ce63807cc8d13632d0a710e68236aabfe68c8e1e77c78bf373c6a720552d74c0"} Dec 11 15:15:57 crc kubenswrapper[5050]: I1211 15:15:57.482539 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-h9vfz" podStartSLOduration=2.482512812 podStartE2EDuration="2.482512812s" podCreationTimestamp="2025-12-11 15:15:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 15:15:57.477549509 +0000 UTC m=+5248.321272135" watchObservedRunningTime="2025-12-11 15:15:57.482512812 +0000 UTC m=+5248.326235428" Dec 11 15:15:58 crc kubenswrapper[5050]: I1211 15:15:58.451984 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b8d99c58c-44jtb" event={"ID":"9847fc6d-b9e1-4fbe-9a61-502c243377a7","Type":"ContainerStarted","Data":"2f2d0c3fb867e6c786d0de990fb6f92fd802fb9da1518f9b030460e71cf3e8fd"} Dec 11 15:15:58 crc kubenswrapper[5050]: I1211 15:15:58.482137 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7b8d99c58c-44jtb" podStartSLOduration=3.48211635 podStartE2EDuration="3.48211635s" podCreationTimestamp="2025-12-11 15:15:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 15:15:58.473455808 +0000 UTC m=+5249.317178394" watchObservedRunningTime="2025-12-11 15:15:58.48211635 +0000 UTC m=+5249.325838936" Dec 11 15:15:59 crc kubenswrapper[5050]: I1211 15:15:59.462117 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7b8d99c58c-44jtb" Dec 11 15:16:00 crc kubenswrapper[5050]: I1211 15:16:00.480050 5050 generic.go:334] "Generic (PLEG): container finished" podID="ac06f2c6-b325-4f89-b1bb-e1e641f9e340" containerID="9bd6f9acf447f4e9a54e3747076bd9c7bb0faf106c245f17cea47181eb6df076" exitCode=0 Dec 11 15:16:00 crc kubenswrapper[5050]: I1211 15:16:00.480064 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-h9vfz" event={"ID":"ac06f2c6-b325-4f89-b1bb-e1e641f9e340","Type":"ContainerDied","Data":"9bd6f9acf447f4e9a54e3747076bd9c7bb0faf106c245f17cea47181eb6df076"} Dec 11 15:16:01 crc kubenswrapper[5050]: I1211 15:16:01.404913 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-wl6kb"] Dec 11 15:16:01 crc kubenswrapper[5050]: I1211 15:16:01.411399 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wl6kb" Dec 11 15:16:01 crc kubenswrapper[5050]: I1211 15:16:01.419998 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wl6kb"] Dec 11 15:16:01 crc kubenswrapper[5050]: I1211 15:16:01.607401 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swkj9\" (UniqueName: \"kubernetes.io/projected/f03ba7a8-bb79-438a-a359-5b6e74d8f8db-kube-api-access-swkj9\") pod \"redhat-marketplace-wl6kb\" (UID: \"f03ba7a8-bb79-438a-a359-5b6e74d8f8db\") " pod="openshift-marketplace/redhat-marketplace-wl6kb" Dec 11 15:16:01 crc kubenswrapper[5050]: I1211 15:16:01.607493 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f03ba7a8-bb79-438a-a359-5b6e74d8f8db-utilities\") pod \"redhat-marketplace-wl6kb\" (UID: \"f03ba7a8-bb79-438a-a359-5b6e74d8f8db\") " pod="openshift-marketplace/redhat-marketplace-wl6kb" Dec 11 15:16:01 crc kubenswrapper[5050]: I1211 15:16:01.607543 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f03ba7a8-bb79-438a-a359-5b6e74d8f8db-catalog-content\") pod \"redhat-marketplace-wl6kb\" (UID: \"f03ba7a8-bb79-438a-a359-5b6e74d8f8db\") " pod="openshift-marketplace/redhat-marketplace-wl6kb" Dec 11 15:16:01 crc kubenswrapper[5050]: I1211 15:16:01.710383 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-swkj9\" (UniqueName: \"kubernetes.io/projected/f03ba7a8-bb79-438a-a359-5b6e74d8f8db-kube-api-access-swkj9\") pod \"redhat-marketplace-wl6kb\" (UID: \"f03ba7a8-bb79-438a-a359-5b6e74d8f8db\") " pod="openshift-marketplace/redhat-marketplace-wl6kb" Dec 11 15:16:01 crc kubenswrapper[5050]: I1211 15:16:01.710504 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f03ba7a8-bb79-438a-a359-5b6e74d8f8db-utilities\") pod \"redhat-marketplace-wl6kb\" (UID: \"f03ba7a8-bb79-438a-a359-5b6e74d8f8db\") " pod="openshift-marketplace/redhat-marketplace-wl6kb" Dec 11 15:16:01 crc kubenswrapper[5050]: I1211 15:16:01.710575 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f03ba7a8-bb79-438a-a359-5b6e74d8f8db-catalog-content\") pod \"redhat-marketplace-wl6kb\" (UID: \"f03ba7a8-bb79-438a-a359-5b6e74d8f8db\") " pod="openshift-marketplace/redhat-marketplace-wl6kb" Dec 11 15:16:01 crc kubenswrapper[5050]: I1211 15:16:01.711883 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f03ba7a8-bb79-438a-a359-5b6e74d8f8db-catalog-content\") pod \"redhat-marketplace-wl6kb\" (UID: \"f03ba7a8-bb79-438a-a359-5b6e74d8f8db\") " pod="openshift-marketplace/redhat-marketplace-wl6kb" Dec 11 15:16:01 crc kubenswrapper[5050]: I1211 15:16:01.712223 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f03ba7a8-bb79-438a-a359-5b6e74d8f8db-utilities\") pod \"redhat-marketplace-wl6kb\" (UID: \"f03ba7a8-bb79-438a-a359-5b6e74d8f8db\") " pod="openshift-marketplace/redhat-marketplace-wl6kb" Dec 11 15:16:01 crc kubenswrapper[5050]: I1211 15:16:01.744668 5050 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-swkj9\" (UniqueName: \"kubernetes.io/projected/f03ba7a8-bb79-438a-a359-5b6e74d8f8db-kube-api-access-swkj9\") pod \"redhat-marketplace-wl6kb\" (UID: \"f03ba7a8-bb79-438a-a359-5b6e74d8f8db\") " pod="openshift-marketplace/redhat-marketplace-wl6kb" Dec 11 15:16:01 crc kubenswrapper[5050]: I1211 15:16:01.777226 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wl6kb" Dec 11 15:16:01 crc kubenswrapper[5050]: I1211 15:16:01.888338 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-h9vfz" Dec 11 15:16:02 crc kubenswrapper[5050]: I1211 15:16:02.014773 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ac06f2c6-b325-4f89-b1bb-e1e641f9e340-fernet-keys\") pod \"ac06f2c6-b325-4f89-b1bb-e1e641f9e340\" (UID: \"ac06f2c6-b325-4f89-b1bb-e1e641f9e340\") " Dec 11 15:16:02 crc kubenswrapper[5050]: I1211 15:16:02.014873 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac06f2c6-b325-4f89-b1bb-e1e641f9e340-config-data\") pod \"ac06f2c6-b325-4f89-b1bb-e1e641f9e340\" (UID: \"ac06f2c6-b325-4f89-b1bb-e1e641f9e340\") " Dec 11 15:16:02 crc kubenswrapper[5050]: I1211 15:16:02.014918 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ac06f2c6-b325-4f89-b1bb-e1e641f9e340-scripts\") pod \"ac06f2c6-b325-4f89-b1bb-e1e641f9e340\" (UID: \"ac06f2c6-b325-4f89-b1bb-e1e641f9e340\") " Dec 11 15:16:02 crc kubenswrapper[5050]: I1211 15:16:02.014992 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-278qw\" (UniqueName: \"kubernetes.io/projected/ac06f2c6-b325-4f89-b1bb-e1e641f9e340-kube-api-access-278qw\") pod \"ac06f2c6-b325-4f89-b1bb-e1e641f9e340\" (UID: \"ac06f2c6-b325-4f89-b1bb-e1e641f9e340\") " Dec 11 15:16:02 crc kubenswrapper[5050]: I1211 15:16:02.015049 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac06f2c6-b325-4f89-b1bb-e1e641f9e340-combined-ca-bundle\") pod \"ac06f2c6-b325-4f89-b1bb-e1e641f9e340\" (UID: \"ac06f2c6-b325-4f89-b1bb-e1e641f9e340\") " Dec 11 15:16:02 crc kubenswrapper[5050]: I1211 15:16:02.015091 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/ac06f2c6-b325-4f89-b1bb-e1e641f9e340-credential-keys\") pod \"ac06f2c6-b325-4f89-b1bb-e1e641f9e340\" (UID: \"ac06f2c6-b325-4f89-b1bb-e1e641f9e340\") " Dec 11 15:16:02 crc kubenswrapper[5050]: I1211 15:16:02.022129 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac06f2c6-b325-4f89-b1bb-e1e641f9e340-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "ac06f2c6-b325-4f89-b1bb-e1e641f9e340" (UID: "ac06f2c6-b325-4f89-b1bb-e1e641f9e340"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 15:16:02 crc kubenswrapper[5050]: I1211 15:16:02.022166 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac06f2c6-b325-4f89-b1bb-e1e641f9e340-kube-api-access-278qw" (OuterVolumeSpecName: "kube-api-access-278qw") pod "ac06f2c6-b325-4f89-b1bb-e1e641f9e340" (UID: "ac06f2c6-b325-4f89-b1bb-e1e641f9e340"). InnerVolumeSpecName "kube-api-access-278qw". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 15:16:02 crc kubenswrapper[5050]: I1211 15:16:02.022322 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac06f2c6-b325-4f89-b1bb-e1e641f9e340-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "ac06f2c6-b325-4f89-b1bb-e1e641f9e340" (UID: "ac06f2c6-b325-4f89-b1bb-e1e641f9e340"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 15:16:02 crc kubenswrapper[5050]: I1211 15:16:02.034437 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac06f2c6-b325-4f89-b1bb-e1e641f9e340-scripts" (OuterVolumeSpecName: "scripts") pod "ac06f2c6-b325-4f89-b1bb-e1e641f9e340" (UID: "ac06f2c6-b325-4f89-b1bb-e1e641f9e340"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 15:16:02 crc kubenswrapper[5050]: I1211 15:16:02.044158 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac06f2c6-b325-4f89-b1bb-e1e641f9e340-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ac06f2c6-b325-4f89-b1bb-e1e641f9e340" (UID: "ac06f2c6-b325-4f89-b1bb-e1e641f9e340"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 15:16:02 crc kubenswrapper[5050]: I1211 15:16:02.073900 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac06f2c6-b325-4f89-b1bb-e1e641f9e340-config-data" (OuterVolumeSpecName: "config-data") pod "ac06f2c6-b325-4f89-b1bb-e1e641f9e340" (UID: "ac06f2c6-b325-4f89-b1bb-e1e641f9e340"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 15:16:02 crc kubenswrapper[5050]: I1211 15:16:02.117070 5050 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ac06f2c6-b325-4f89-b1bb-e1e641f9e340-fernet-keys\") on node \"crc\" DevicePath \"\"" Dec 11 15:16:02 crc kubenswrapper[5050]: I1211 15:16:02.117103 5050 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac06f2c6-b325-4f89-b1bb-e1e641f9e340-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 15:16:02 crc kubenswrapper[5050]: I1211 15:16:02.117114 5050 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ac06f2c6-b325-4f89-b1bb-e1e641f9e340-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 15:16:02 crc kubenswrapper[5050]: I1211 15:16:02.117127 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-278qw\" (UniqueName: \"kubernetes.io/projected/ac06f2c6-b325-4f89-b1bb-e1e641f9e340-kube-api-access-278qw\") on node \"crc\" DevicePath \"\"" Dec 11 15:16:02 crc kubenswrapper[5050]: I1211 15:16:02.117141 5050 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac06f2c6-b325-4f89-b1bb-e1e641f9e340-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 15:16:02 crc kubenswrapper[5050]: I1211 15:16:02.117154 5050 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/ac06f2c6-b325-4f89-b1bb-e1e641f9e340-credential-keys\") on node \"crc\" DevicePath \"\"" Dec 11 15:16:02 crc kubenswrapper[5050]: I1211 15:16:02.235036 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wl6kb"] Dec 11 15:16:02 crc kubenswrapper[5050]: W1211 15:16:02.247502 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf03ba7a8_bb79_438a_a359_5b6e74d8f8db.slice/crio-b7dbca5e7484f782a77112f1b78f78a874d896af5cf127ff2e192089233e8b3e WatchSource:0}: Error finding container b7dbca5e7484f782a77112f1b78f78a874d896af5cf127ff2e192089233e8b3e: Status 404 returned error can't find the container with id b7dbca5e7484f782a77112f1b78f78a874d896af5cf127ff2e192089233e8b3e Dec 11 15:16:02 crc kubenswrapper[5050]: I1211 15:16:02.504466 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-h9vfz" event={"ID":"ac06f2c6-b325-4f89-b1bb-e1e641f9e340","Type":"ContainerDied","Data":"ce63807cc8d13632d0a710e68236aabfe68c8e1e77c78bf373c6a720552d74c0"} Dec 11 15:16:02 crc kubenswrapper[5050]: I1211 15:16:02.504989 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ce63807cc8d13632d0a710e68236aabfe68c8e1e77c78bf373c6a720552d74c0" Dec 11 15:16:02 crc kubenswrapper[5050]: I1211 15:16:02.505172 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-h9vfz" Dec 11 15:16:02 crc kubenswrapper[5050]: I1211 15:16:02.509427 5050 generic.go:334] "Generic (PLEG): container finished" podID="f03ba7a8-bb79-438a-a359-5b6e74d8f8db" containerID="5090ed60c2c19a120200602b360bdd401a1a4e4b4723991c2f7abf415bb8dc79" exitCode=0 Dec 11 15:16:02 crc kubenswrapper[5050]: I1211 15:16:02.509485 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wl6kb" event={"ID":"f03ba7a8-bb79-438a-a359-5b6e74d8f8db","Type":"ContainerDied","Data":"5090ed60c2c19a120200602b360bdd401a1a4e4b4723991c2f7abf415bb8dc79"} Dec 11 15:16:02 crc kubenswrapper[5050]: I1211 15:16:02.509520 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wl6kb" event={"ID":"f03ba7a8-bb79-438a-a359-5b6e74d8f8db","Type":"ContainerStarted","Data":"b7dbca5e7484f782a77112f1b78f78a874d896af5cf127ff2e192089233e8b3e"} Dec 11 15:16:02 crc kubenswrapper[5050]: I1211 15:16:02.599918 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-h9vfz"] Dec 11 15:16:02 crc kubenswrapper[5050]: I1211 15:16:02.624399 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-h9vfz"] Dec 11 15:16:02 crc kubenswrapper[5050]: I1211 15:16:02.669609 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-q4zxd"] Dec 11 15:16:02 crc kubenswrapper[5050]: E1211 15:16:02.669962 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac06f2c6-b325-4f89-b1bb-e1e641f9e340" containerName="keystone-bootstrap" Dec 11 15:16:02 crc kubenswrapper[5050]: I1211 15:16:02.669981 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac06f2c6-b325-4f89-b1bb-e1e641f9e340" containerName="keystone-bootstrap" Dec 11 15:16:02 crc kubenswrapper[5050]: I1211 15:16:02.670201 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac06f2c6-b325-4f89-b1bb-e1e641f9e340" containerName="keystone-bootstrap" Dec 11 15:16:02 crc kubenswrapper[5050]: I1211 15:16:02.670782 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-q4zxd" Dec 11 15:16:02 crc kubenswrapper[5050]: I1211 15:16:02.674422 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Dec 11 15:16:02 crc kubenswrapper[5050]: I1211 15:16:02.676731 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Dec 11 15:16:02 crc kubenswrapper[5050]: I1211 15:16:02.676799 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Dec 11 15:16:02 crc kubenswrapper[5050]: I1211 15:16:02.676794 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-jrbb7" Dec 11 15:16:02 crc kubenswrapper[5050]: I1211 15:16:02.676995 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Dec 11 15:16:02 crc kubenswrapper[5050]: I1211 15:16:02.685229 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-q4zxd"] Dec 11 15:16:02 crc kubenswrapper[5050]: I1211 15:16:02.834716 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/947107ea-e024-4476-944e-6c3662bc6557-combined-ca-bundle\") pod \"keystone-bootstrap-q4zxd\" (UID: \"947107ea-e024-4476-944e-6c3662bc6557\") " pod="openstack/keystone-bootstrap-q4zxd" Dec 11 15:16:02 crc kubenswrapper[5050]: I1211 15:16:02.835120 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/947107ea-e024-4476-944e-6c3662bc6557-config-data\") pod \"keystone-bootstrap-q4zxd\" (UID: \"947107ea-e024-4476-944e-6c3662bc6557\") " pod="openstack/keystone-bootstrap-q4zxd" Dec 11 15:16:02 crc kubenswrapper[5050]: I1211 15:16:02.835431 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cd4vb\" (UniqueName: \"kubernetes.io/projected/947107ea-e024-4476-944e-6c3662bc6557-kube-api-access-cd4vb\") pod \"keystone-bootstrap-q4zxd\" (UID: \"947107ea-e024-4476-944e-6c3662bc6557\") " pod="openstack/keystone-bootstrap-q4zxd" Dec 11 15:16:02 crc kubenswrapper[5050]: I1211 15:16:02.835522 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/947107ea-e024-4476-944e-6c3662bc6557-scripts\") pod \"keystone-bootstrap-q4zxd\" (UID: \"947107ea-e024-4476-944e-6c3662bc6557\") " pod="openstack/keystone-bootstrap-q4zxd" Dec 11 15:16:02 crc kubenswrapper[5050]: I1211 15:16:02.835809 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/947107ea-e024-4476-944e-6c3662bc6557-credential-keys\") pod \"keystone-bootstrap-q4zxd\" (UID: \"947107ea-e024-4476-944e-6c3662bc6557\") " pod="openstack/keystone-bootstrap-q4zxd" Dec 11 15:16:02 crc kubenswrapper[5050]: I1211 15:16:02.835863 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/947107ea-e024-4476-944e-6c3662bc6557-fernet-keys\") pod \"keystone-bootstrap-q4zxd\" (UID: \"947107ea-e024-4476-944e-6c3662bc6557\") " pod="openstack/keystone-bootstrap-q4zxd" Dec 11 15:16:02 crc kubenswrapper[5050]: I1211 15:16:02.937933 5050 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/947107ea-e024-4476-944e-6c3662bc6557-fernet-keys\") pod \"keystone-bootstrap-q4zxd\" (UID: \"947107ea-e024-4476-944e-6c3662bc6557\") " pod="openstack/keystone-bootstrap-q4zxd" Dec 11 15:16:02 crc kubenswrapper[5050]: I1211 15:16:02.937969 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/947107ea-e024-4476-944e-6c3662bc6557-credential-keys\") pod \"keystone-bootstrap-q4zxd\" (UID: \"947107ea-e024-4476-944e-6c3662bc6557\") " pod="openstack/keystone-bootstrap-q4zxd" Dec 11 15:16:02 crc kubenswrapper[5050]: I1211 15:16:02.937996 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/947107ea-e024-4476-944e-6c3662bc6557-combined-ca-bundle\") pod \"keystone-bootstrap-q4zxd\" (UID: \"947107ea-e024-4476-944e-6c3662bc6557\") " pod="openstack/keystone-bootstrap-q4zxd" Dec 11 15:16:02 crc kubenswrapper[5050]: I1211 15:16:02.938072 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/947107ea-e024-4476-944e-6c3662bc6557-config-data\") pod \"keystone-bootstrap-q4zxd\" (UID: \"947107ea-e024-4476-944e-6c3662bc6557\") " pod="openstack/keystone-bootstrap-q4zxd" Dec 11 15:16:02 crc kubenswrapper[5050]: I1211 15:16:02.938165 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cd4vb\" (UniqueName: \"kubernetes.io/projected/947107ea-e024-4476-944e-6c3662bc6557-kube-api-access-cd4vb\") pod \"keystone-bootstrap-q4zxd\" (UID: \"947107ea-e024-4476-944e-6c3662bc6557\") " pod="openstack/keystone-bootstrap-q4zxd" Dec 11 15:16:02 crc kubenswrapper[5050]: I1211 15:16:02.938909 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/947107ea-e024-4476-944e-6c3662bc6557-scripts\") pod \"keystone-bootstrap-q4zxd\" (UID: \"947107ea-e024-4476-944e-6c3662bc6557\") " pod="openstack/keystone-bootstrap-q4zxd" Dec 11 15:16:02 crc kubenswrapper[5050]: I1211 15:16:02.943831 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/947107ea-e024-4476-944e-6c3662bc6557-scripts\") pod \"keystone-bootstrap-q4zxd\" (UID: \"947107ea-e024-4476-944e-6c3662bc6557\") " pod="openstack/keystone-bootstrap-q4zxd" Dec 11 15:16:02 crc kubenswrapper[5050]: I1211 15:16:02.944075 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/947107ea-e024-4476-944e-6c3662bc6557-credential-keys\") pod \"keystone-bootstrap-q4zxd\" (UID: \"947107ea-e024-4476-944e-6c3662bc6557\") " pod="openstack/keystone-bootstrap-q4zxd" Dec 11 15:16:02 crc kubenswrapper[5050]: I1211 15:16:02.944262 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/947107ea-e024-4476-944e-6c3662bc6557-combined-ca-bundle\") pod \"keystone-bootstrap-q4zxd\" (UID: \"947107ea-e024-4476-944e-6c3662bc6557\") " pod="openstack/keystone-bootstrap-q4zxd" Dec 11 15:16:02 crc kubenswrapper[5050]: I1211 15:16:02.944492 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/947107ea-e024-4476-944e-6c3662bc6557-config-data\") pod \"keystone-bootstrap-q4zxd\" (UID: 
\"947107ea-e024-4476-944e-6c3662bc6557\") " pod="openstack/keystone-bootstrap-q4zxd" Dec 11 15:16:02 crc kubenswrapper[5050]: I1211 15:16:02.948430 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/947107ea-e024-4476-944e-6c3662bc6557-fernet-keys\") pod \"keystone-bootstrap-q4zxd\" (UID: \"947107ea-e024-4476-944e-6c3662bc6557\") " pod="openstack/keystone-bootstrap-q4zxd" Dec 11 15:16:02 crc kubenswrapper[5050]: I1211 15:16:02.956753 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cd4vb\" (UniqueName: \"kubernetes.io/projected/947107ea-e024-4476-944e-6c3662bc6557-kube-api-access-cd4vb\") pod \"keystone-bootstrap-q4zxd\" (UID: \"947107ea-e024-4476-944e-6c3662bc6557\") " pod="openstack/keystone-bootstrap-q4zxd" Dec 11 15:16:02 crc kubenswrapper[5050]: I1211 15:16:02.998467 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-q4zxd" Dec 11 15:16:03 crc kubenswrapper[5050]: I1211 15:16:03.453993 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-q4zxd"] Dec 11 15:16:03 crc kubenswrapper[5050]: I1211 15:16:03.556138 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac06f2c6-b325-4f89-b1bb-e1e641f9e340" path="/var/lib/kubelet/pods/ac06f2c6-b325-4f89-b1bb-e1e641f9e340/volumes" Dec 11 15:16:03 crc kubenswrapper[5050]: I1211 15:16:03.556713 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-q4zxd" event={"ID":"947107ea-e024-4476-944e-6c3662bc6557","Type":"ContainerStarted","Data":"b3da3b262d510580c1894d9b0e0c6e5c7fbbcd75bde11b96c010457666585a94"} Dec 11 15:16:04 crc kubenswrapper[5050]: I1211 15:16:04.559454 5050 generic.go:334] "Generic (PLEG): container finished" podID="f03ba7a8-bb79-438a-a359-5b6e74d8f8db" containerID="95574c85143e722a797380e59847ccda75e0bb777797ab163fdc0c195ef2e351" exitCode=0 Dec 11 15:16:04 crc kubenswrapper[5050]: I1211 15:16:04.559640 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wl6kb" event={"ID":"f03ba7a8-bb79-438a-a359-5b6e74d8f8db","Type":"ContainerDied","Data":"95574c85143e722a797380e59847ccda75e0bb777797ab163fdc0c195ef2e351"} Dec 11 15:16:04 crc kubenswrapper[5050]: I1211 15:16:04.563167 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-q4zxd" event={"ID":"947107ea-e024-4476-944e-6c3662bc6557","Type":"ContainerStarted","Data":"f011bec977616880d8a08188f06bc011ef7fbffa37afa2194d46c72b23506ff5"} Dec 11 15:16:04 crc kubenswrapper[5050]: I1211 15:16:04.622594 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-q4zxd" podStartSLOduration=2.62256949 podStartE2EDuration="2.62256949s" podCreationTimestamp="2025-12-11 15:16:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 15:16:04.613105866 +0000 UTC m=+5255.456828472" watchObservedRunningTime="2025-12-11 15:16:04.62256949 +0000 UTC m=+5255.466292096" Dec 11 15:16:05 crc kubenswrapper[5050]: I1211 15:16:05.574604 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wl6kb" event={"ID":"f03ba7a8-bb79-438a-a359-5b6e74d8f8db","Type":"ContainerStarted","Data":"ed1d3f23f9b6ef4474802d88ea28fe8162e3959bdf1383a3d3fd1fa36174d763"} Dec 11 15:16:05 crc kubenswrapper[5050]: 
I1211 15:16:05.607096 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-wl6kb" podStartSLOduration=2.135953815 podStartE2EDuration="4.607074713s" podCreationTimestamp="2025-12-11 15:16:01 +0000 UTC" firstStartedPulling="2025-12-11 15:16:02.511685313 +0000 UTC m=+5253.355407909" lastFinishedPulling="2025-12-11 15:16:04.982806221 +0000 UTC m=+5255.826528807" observedRunningTime="2025-12-11 15:16:05.597520916 +0000 UTC m=+5256.441243512" watchObservedRunningTime="2025-12-11 15:16:05.607074713 +0000 UTC m=+5256.450797289" Dec 11 15:16:05 crc kubenswrapper[5050]: I1211 15:16:05.966162 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7b8d99c58c-44jtb" Dec 11 15:16:06 crc kubenswrapper[5050]: I1211 15:16:06.050635 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6f8fb65bfc-krgh4"] Dec 11 15:16:06 crc kubenswrapper[5050]: I1211 15:16:06.051330 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6f8fb65bfc-krgh4" podUID="d6910859-c36e-4687-a1a5-abed4bbb8e30" containerName="dnsmasq-dns" containerID="cri-o://ba443ba3ffa6d894702955de12decddbceac8e6ce866323177a1c202d8b03b34" gracePeriod=10 Dec 11 15:16:06 crc kubenswrapper[5050]: I1211 15:16:06.528285 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6f8fb65bfc-krgh4" Dec 11 15:16:06 crc kubenswrapper[5050]: I1211 15:16:06.584435 5050 generic.go:334] "Generic (PLEG): container finished" podID="947107ea-e024-4476-944e-6c3662bc6557" containerID="f011bec977616880d8a08188f06bc011ef7fbffa37afa2194d46c72b23506ff5" exitCode=0 Dec 11 15:16:06 crc kubenswrapper[5050]: I1211 15:16:06.584484 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-q4zxd" event={"ID":"947107ea-e024-4476-944e-6c3662bc6557","Type":"ContainerDied","Data":"f011bec977616880d8a08188f06bc011ef7fbffa37afa2194d46c72b23506ff5"} Dec 11 15:16:06 crc kubenswrapper[5050]: I1211 15:16:06.587288 5050 generic.go:334] "Generic (PLEG): container finished" podID="d6910859-c36e-4687-a1a5-abed4bbb8e30" containerID="ba443ba3ffa6d894702955de12decddbceac8e6ce866323177a1c202d8b03b34" exitCode=0 Dec 11 15:16:06 crc kubenswrapper[5050]: I1211 15:16:06.587337 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6f8fb65bfc-krgh4" Dec 11 15:16:06 crc kubenswrapper[5050]: I1211 15:16:06.587354 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f8fb65bfc-krgh4" event={"ID":"d6910859-c36e-4687-a1a5-abed4bbb8e30","Type":"ContainerDied","Data":"ba443ba3ffa6d894702955de12decddbceac8e6ce866323177a1c202d8b03b34"} Dec 11 15:16:06 crc kubenswrapper[5050]: I1211 15:16:06.587382 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f8fb65bfc-krgh4" event={"ID":"d6910859-c36e-4687-a1a5-abed4bbb8e30","Type":"ContainerDied","Data":"ec2cf7d9e9da68dcbba2c9be6c7366d393d6d365a6d42a19dfc345a8c803604b"} Dec 11 15:16:06 crc kubenswrapper[5050]: I1211 15:16:06.587421 5050 scope.go:117] "RemoveContainer" containerID="ba443ba3ffa6d894702955de12decddbceac8e6ce866323177a1c202d8b03b34" Dec 11 15:16:06 crc kubenswrapper[5050]: I1211 15:16:06.602211 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d6910859-c36e-4687-a1a5-abed4bbb8e30-dns-svc\") pod \"d6910859-c36e-4687-a1a5-abed4bbb8e30\" (UID: \"d6910859-c36e-4687-a1a5-abed4bbb8e30\") " Dec 11 15:16:06 crc kubenswrapper[5050]: I1211 15:16:06.602337 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jlktv\" (UniqueName: \"kubernetes.io/projected/d6910859-c36e-4687-a1a5-abed4bbb8e30-kube-api-access-jlktv\") pod \"d6910859-c36e-4687-a1a5-abed4bbb8e30\" (UID: \"d6910859-c36e-4687-a1a5-abed4bbb8e30\") " Dec 11 15:16:06 crc kubenswrapper[5050]: I1211 15:16:06.602396 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d6910859-c36e-4687-a1a5-abed4bbb8e30-ovsdbserver-sb\") pod \"d6910859-c36e-4687-a1a5-abed4bbb8e30\" (UID: \"d6910859-c36e-4687-a1a5-abed4bbb8e30\") " Dec 11 15:16:06 crc kubenswrapper[5050]: I1211 15:16:06.602440 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d6910859-c36e-4687-a1a5-abed4bbb8e30-ovsdbserver-nb\") pod \"d6910859-c36e-4687-a1a5-abed4bbb8e30\" (UID: \"d6910859-c36e-4687-a1a5-abed4bbb8e30\") " Dec 11 15:16:06 crc kubenswrapper[5050]: I1211 15:16:06.602546 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d6910859-c36e-4687-a1a5-abed4bbb8e30-config\") pod \"d6910859-c36e-4687-a1a5-abed4bbb8e30\" (UID: \"d6910859-c36e-4687-a1a5-abed4bbb8e30\") " Dec 11 15:16:06 crc kubenswrapper[5050]: I1211 15:16:06.618349 5050 scope.go:117] "RemoveContainer" containerID="c3b6e227780c90f2819eaa56c59c7c01720c86ef55d8beb8ab6445a002609a2a" Dec 11 15:16:06 crc kubenswrapper[5050]: I1211 15:16:06.631272 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d6910859-c36e-4687-a1a5-abed4bbb8e30-kube-api-access-jlktv" (OuterVolumeSpecName: "kube-api-access-jlktv") pod "d6910859-c36e-4687-a1a5-abed4bbb8e30" (UID: "d6910859-c36e-4687-a1a5-abed4bbb8e30"). InnerVolumeSpecName "kube-api-access-jlktv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 15:16:06 crc kubenswrapper[5050]: I1211 15:16:06.677192 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d6910859-c36e-4687-a1a5-abed4bbb8e30-config" (OuterVolumeSpecName: "config") pod "d6910859-c36e-4687-a1a5-abed4bbb8e30" (UID: "d6910859-c36e-4687-a1a5-abed4bbb8e30"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 15:16:06 crc kubenswrapper[5050]: I1211 15:16:06.692142 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d6910859-c36e-4687-a1a5-abed4bbb8e30-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d6910859-c36e-4687-a1a5-abed4bbb8e30" (UID: "d6910859-c36e-4687-a1a5-abed4bbb8e30"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 15:16:06 crc kubenswrapper[5050]: I1211 15:16:06.701728 5050 scope.go:117] "RemoveContainer" containerID="ba443ba3ffa6d894702955de12decddbceac8e6ce866323177a1c202d8b03b34" Dec 11 15:16:06 crc kubenswrapper[5050]: E1211 15:16:06.702263 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba443ba3ffa6d894702955de12decddbceac8e6ce866323177a1c202d8b03b34\": container with ID starting with ba443ba3ffa6d894702955de12decddbceac8e6ce866323177a1c202d8b03b34 not found: ID does not exist" containerID="ba443ba3ffa6d894702955de12decddbceac8e6ce866323177a1c202d8b03b34" Dec 11 15:16:06 crc kubenswrapper[5050]: I1211 15:16:06.702295 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba443ba3ffa6d894702955de12decddbceac8e6ce866323177a1c202d8b03b34"} err="failed to get container status \"ba443ba3ffa6d894702955de12decddbceac8e6ce866323177a1c202d8b03b34\": rpc error: code = NotFound desc = could not find container \"ba443ba3ffa6d894702955de12decddbceac8e6ce866323177a1c202d8b03b34\": container with ID starting with ba443ba3ffa6d894702955de12decddbceac8e6ce866323177a1c202d8b03b34 not found: ID does not exist" Dec 11 15:16:06 crc kubenswrapper[5050]: I1211 15:16:06.702316 5050 scope.go:117] "RemoveContainer" containerID="c3b6e227780c90f2819eaa56c59c7c01720c86ef55d8beb8ab6445a002609a2a" Dec 11 15:16:06 crc kubenswrapper[5050]: E1211 15:16:06.702938 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c3b6e227780c90f2819eaa56c59c7c01720c86ef55d8beb8ab6445a002609a2a\": container with ID starting with c3b6e227780c90f2819eaa56c59c7c01720c86ef55d8beb8ab6445a002609a2a not found: ID does not exist" containerID="c3b6e227780c90f2819eaa56c59c7c01720c86ef55d8beb8ab6445a002609a2a" Dec 11 15:16:06 crc kubenswrapper[5050]: I1211 15:16:06.702963 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c3b6e227780c90f2819eaa56c59c7c01720c86ef55d8beb8ab6445a002609a2a"} err="failed to get container status \"c3b6e227780c90f2819eaa56c59c7c01720c86ef55d8beb8ab6445a002609a2a\": rpc error: code = NotFound desc = could not find container \"c3b6e227780c90f2819eaa56c59c7c01720c86ef55d8beb8ab6445a002609a2a\": container with ID starting with c3b6e227780c90f2819eaa56c59c7c01720c86ef55d8beb8ab6445a002609a2a not found: ID does not exist" Dec 11 15:16:06 crc kubenswrapper[5050]: I1211 15:16:06.703025 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/d6910859-c36e-4687-a1a5-abed4bbb8e30-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d6910859-c36e-4687-a1a5-abed4bbb8e30" (UID: "d6910859-c36e-4687-a1a5-abed4bbb8e30"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 15:16:06 crc kubenswrapper[5050]: I1211 15:16:06.704789 5050 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d6910859-c36e-4687-a1a5-abed4bbb8e30-config\") on node \"crc\" DevicePath \"\"" Dec 11 15:16:06 crc kubenswrapper[5050]: I1211 15:16:06.704819 5050 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d6910859-c36e-4687-a1a5-abed4bbb8e30-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 11 15:16:06 crc kubenswrapper[5050]: I1211 15:16:06.704833 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jlktv\" (UniqueName: \"kubernetes.io/projected/d6910859-c36e-4687-a1a5-abed4bbb8e30-kube-api-access-jlktv\") on node \"crc\" DevicePath \"\"" Dec 11 15:16:06 crc kubenswrapper[5050]: I1211 15:16:06.704846 5050 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d6910859-c36e-4687-a1a5-abed4bbb8e30-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 11 15:16:06 crc kubenswrapper[5050]: I1211 15:16:06.707620 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d6910859-c36e-4687-a1a5-abed4bbb8e30-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d6910859-c36e-4687-a1a5-abed4bbb8e30" (UID: "d6910859-c36e-4687-a1a5-abed4bbb8e30"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 15:16:06 crc kubenswrapper[5050]: I1211 15:16:06.807328 5050 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d6910859-c36e-4687-a1a5-abed4bbb8e30-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 11 15:16:06 crc kubenswrapper[5050]: I1211 15:16:06.931246 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6f8fb65bfc-krgh4"] Dec 11 15:16:06 crc kubenswrapper[5050]: I1211 15:16:06.939652 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6f8fb65bfc-krgh4"] Dec 11 15:16:07 crc kubenswrapper[5050]: I1211 15:16:07.556922 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d6910859-c36e-4687-a1a5-abed4bbb8e30" path="/var/lib/kubelet/pods/d6910859-c36e-4687-a1a5-abed4bbb8e30/volumes" Dec 11 15:16:07 crc kubenswrapper[5050]: I1211 15:16:07.928460 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-q4zxd" Dec 11 15:16:08 crc kubenswrapper[5050]: I1211 15:16:08.031143 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/947107ea-e024-4476-944e-6c3662bc6557-combined-ca-bundle\") pod \"947107ea-e024-4476-944e-6c3662bc6557\" (UID: \"947107ea-e024-4476-944e-6c3662bc6557\") " Dec 11 15:16:08 crc kubenswrapper[5050]: I1211 15:16:08.031296 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/947107ea-e024-4476-944e-6c3662bc6557-scripts\") pod \"947107ea-e024-4476-944e-6c3662bc6557\" (UID: \"947107ea-e024-4476-944e-6c3662bc6557\") " Dec 11 15:16:08 crc kubenswrapper[5050]: I1211 15:16:08.031343 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cd4vb\" (UniqueName: \"kubernetes.io/projected/947107ea-e024-4476-944e-6c3662bc6557-kube-api-access-cd4vb\") pod \"947107ea-e024-4476-944e-6c3662bc6557\" (UID: \"947107ea-e024-4476-944e-6c3662bc6557\") " Dec 11 15:16:08 crc kubenswrapper[5050]: I1211 15:16:08.031405 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/947107ea-e024-4476-944e-6c3662bc6557-fernet-keys\") pod \"947107ea-e024-4476-944e-6c3662bc6557\" (UID: \"947107ea-e024-4476-944e-6c3662bc6557\") " Dec 11 15:16:08 crc kubenswrapper[5050]: I1211 15:16:08.031469 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/947107ea-e024-4476-944e-6c3662bc6557-config-data\") pod \"947107ea-e024-4476-944e-6c3662bc6557\" (UID: \"947107ea-e024-4476-944e-6c3662bc6557\") " Dec 11 15:16:08 crc kubenswrapper[5050]: I1211 15:16:08.031559 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/947107ea-e024-4476-944e-6c3662bc6557-credential-keys\") pod \"947107ea-e024-4476-944e-6c3662bc6557\" (UID: \"947107ea-e024-4476-944e-6c3662bc6557\") " Dec 11 15:16:08 crc kubenswrapper[5050]: I1211 15:16:08.037303 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/947107ea-e024-4476-944e-6c3662bc6557-scripts" (OuterVolumeSpecName: "scripts") pod "947107ea-e024-4476-944e-6c3662bc6557" (UID: "947107ea-e024-4476-944e-6c3662bc6557"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 15:16:08 crc kubenswrapper[5050]: I1211 15:16:08.037313 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/947107ea-e024-4476-944e-6c3662bc6557-kube-api-access-cd4vb" (OuterVolumeSpecName: "kube-api-access-cd4vb") pod "947107ea-e024-4476-944e-6c3662bc6557" (UID: "947107ea-e024-4476-944e-6c3662bc6557"). InnerVolumeSpecName "kube-api-access-cd4vb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 15:16:08 crc kubenswrapper[5050]: I1211 15:16:08.038875 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/947107ea-e024-4476-944e-6c3662bc6557-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "947107ea-e024-4476-944e-6c3662bc6557" (UID: "947107ea-e024-4476-944e-6c3662bc6557"). InnerVolumeSpecName "credential-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 15:16:08 crc kubenswrapper[5050]: I1211 15:16:08.041195 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/947107ea-e024-4476-944e-6c3662bc6557-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "947107ea-e024-4476-944e-6c3662bc6557" (UID: "947107ea-e024-4476-944e-6c3662bc6557"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 15:16:08 crc kubenswrapper[5050]: I1211 15:16:08.075387 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/947107ea-e024-4476-944e-6c3662bc6557-config-data" (OuterVolumeSpecName: "config-data") pod "947107ea-e024-4476-944e-6c3662bc6557" (UID: "947107ea-e024-4476-944e-6c3662bc6557"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 15:16:08 crc kubenswrapper[5050]: I1211 15:16:08.078109 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/947107ea-e024-4476-944e-6c3662bc6557-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "947107ea-e024-4476-944e-6c3662bc6557" (UID: "947107ea-e024-4476-944e-6c3662bc6557"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 15:16:08 crc kubenswrapper[5050]: I1211 15:16:08.134937 5050 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/947107ea-e024-4476-944e-6c3662bc6557-fernet-keys\") on node \"crc\" DevicePath \"\"" Dec 11 15:16:08 crc kubenswrapper[5050]: I1211 15:16:08.135066 5050 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/947107ea-e024-4476-944e-6c3662bc6557-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 15:16:08 crc kubenswrapper[5050]: I1211 15:16:08.135081 5050 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/947107ea-e024-4476-944e-6c3662bc6557-credential-keys\") on node \"crc\" DevicePath \"\"" Dec 11 15:16:08 crc kubenswrapper[5050]: I1211 15:16:08.135101 5050 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/947107ea-e024-4476-944e-6c3662bc6557-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 15:16:08 crc kubenswrapper[5050]: I1211 15:16:08.135118 5050 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/947107ea-e024-4476-944e-6c3662bc6557-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 15:16:08 crc kubenswrapper[5050]: I1211 15:16:08.135130 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cd4vb\" (UniqueName: \"kubernetes.io/projected/947107ea-e024-4476-944e-6c3662bc6557-kube-api-access-cd4vb\") on node \"crc\" DevicePath \"\"" Dec 11 15:16:08 crc kubenswrapper[5050]: I1211 15:16:08.618094 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-q4zxd" Dec 11 15:16:08 crc kubenswrapper[5050]: I1211 15:16:08.618091 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-q4zxd" event={"ID":"947107ea-e024-4476-944e-6c3662bc6557","Type":"ContainerDied","Data":"b3da3b262d510580c1894d9b0e0c6e5c7fbbcd75bde11b96c010457666585a94"} Dec 11 15:16:08 crc kubenswrapper[5050]: I1211 15:16:08.618733 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b3da3b262d510580c1894d9b0e0c6e5c7fbbcd75bde11b96c010457666585a94" Dec 11 15:16:09 crc kubenswrapper[5050]: I1211 15:16:09.058746 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-5f746bd6cb-j62jf"] Dec 11 15:16:09 crc kubenswrapper[5050]: E1211 15:16:09.059124 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6910859-c36e-4687-a1a5-abed4bbb8e30" containerName="init" Dec 11 15:16:09 crc kubenswrapper[5050]: I1211 15:16:09.059142 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6910859-c36e-4687-a1a5-abed4bbb8e30" containerName="init" Dec 11 15:16:09 crc kubenswrapper[5050]: E1211 15:16:09.059165 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="947107ea-e024-4476-944e-6c3662bc6557" containerName="keystone-bootstrap" Dec 11 15:16:09 crc kubenswrapper[5050]: I1211 15:16:09.059173 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="947107ea-e024-4476-944e-6c3662bc6557" containerName="keystone-bootstrap" Dec 11 15:16:09 crc kubenswrapper[5050]: E1211 15:16:09.059206 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6910859-c36e-4687-a1a5-abed4bbb8e30" containerName="dnsmasq-dns" Dec 11 15:16:09 crc kubenswrapper[5050]: I1211 15:16:09.059216 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6910859-c36e-4687-a1a5-abed4bbb8e30" containerName="dnsmasq-dns" Dec 11 15:16:09 crc kubenswrapper[5050]: I1211 15:16:09.059401 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6910859-c36e-4687-a1a5-abed4bbb8e30" containerName="dnsmasq-dns" Dec 11 15:16:09 crc kubenswrapper[5050]: I1211 15:16:09.059421 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="947107ea-e024-4476-944e-6c3662bc6557" containerName="keystone-bootstrap" Dec 11 15:16:09 crc kubenswrapper[5050]: I1211 15:16:09.060137 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-5f746bd6cb-j62jf" Dec 11 15:16:09 crc kubenswrapper[5050]: I1211 15:16:09.061858 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Dec 11 15:16:09 crc kubenswrapper[5050]: I1211 15:16:09.067906 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-jrbb7" Dec 11 15:16:09 crc kubenswrapper[5050]: I1211 15:16:09.068238 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Dec 11 15:16:09 crc kubenswrapper[5050]: I1211 15:16:09.068348 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Dec 11 15:16:09 crc kubenswrapper[5050]: I1211 15:16:09.083146 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-5f746bd6cb-j62jf"] Dec 11 15:16:09 crc kubenswrapper[5050]: I1211 15:16:09.155687 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c02b76d-5daf-4868-b128-73f03ca4e24c-config-data\") pod \"keystone-5f746bd6cb-j62jf\" (UID: \"0c02b76d-5daf-4868-b128-73f03ca4e24c\") " pod="openstack/keystone-5f746bd6cb-j62jf" Dec 11 15:16:09 crc kubenswrapper[5050]: I1211 15:16:09.155771 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0c02b76d-5daf-4868-b128-73f03ca4e24c-scripts\") pod \"keystone-5f746bd6cb-j62jf\" (UID: \"0c02b76d-5daf-4868-b128-73f03ca4e24c\") " pod="openstack/keystone-5f746bd6cb-j62jf" Dec 11 15:16:09 crc kubenswrapper[5050]: I1211 15:16:09.155820 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0c02b76d-5daf-4868-b128-73f03ca4e24c-credential-keys\") pod \"keystone-5f746bd6cb-j62jf\" (UID: \"0c02b76d-5daf-4868-b128-73f03ca4e24c\") " pod="openstack/keystone-5f746bd6cb-j62jf" Dec 11 15:16:09 crc kubenswrapper[5050]: I1211 15:16:09.156132 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c02b76d-5daf-4868-b128-73f03ca4e24c-combined-ca-bundle\") pod \"keystone-5f746bd6cb-j62jf\" (UID: \"0c02b76d-5daf-4868-b128-73f03ca4e24c\") " pod="openstack/keystone-5f746bd6cb-j62jf" Dec 11 15:16:09 crc kubenswrapper[5050]: I1211 15:16:09.156196 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0c02b76d-5daf-4868-b128-73f03ca4e24c-fernet-keys\") pod \"keystone-5f746bd6cb-j62jf\" (UID: \"0c02b76d-5daf-4868-b128-73f03ca4e24c\") " pod="openstack/keystone-5f746bd6cb-j62jf" Dec 11 15:16:09 crc kubenswrapper[5050]: I1211 15:16:09.156422 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p68pw\" (UniqueName: \"kubernetes.io/projected/0c02b76d-5daf-4868-b128-73f03ca4e24c-kube-api-access-p68pw\") pod \"keystone-5f746bd6cb-j62jf\" (UID: \"0c02b76d-5daf-4868-b128-73f03ca4e24c\") " pod="openstack/keystone-5f746bd6cb-j62jf" Dec 11 15:16:09 crc kubenswrapper[5050]: I1211 15:16:09.258146 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0c02b76d-5daf-4868-b128-73f03ca4e24c-scripts\") pod \"keystone-5f746bd6cb-j62jf\" 
(UID: \"0c02b76d-5daf-4868-b128-73f03ca4e24c\") " pod="openstack/keystone-5f746bd6cb-j62jf" Dec 11 15:16:09 crc kubenswrapper[5050]: I1211 15:16:09.258211 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0c02b76d-5daf-4868-b128-73f03ca4e24c-credential-keys\") pod \"keystone-5f746bd6cb-j62jf\" (UID: \"0c02b76d-5daf-4868-b128-73f03ca4e24c\") " pod="openstack/keystone-5f746bd6cb-j62jf" Dec 11 15:16:09 crc kubenswrapper[5050]: I1211 15:16:09.258285 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c02b76d-5daf-4868-b128-73f03ca4e24c-combined-ca-bundle\") pod \"keystone-5f746bd6cb-j62jf\" (UID: \"0c02b76d-5daf-4868-b128-73f03ca4e24c\") " pod="openstack/keystone-5f746bd6cb-j62jf" Dec 11 15:16:09 crc kubenswrapper[5050]: I1211 15:16:09.258319 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0c02b76d-5daf-4868-b128-73f03ca4e24c-fernet-keys\") pod \"keystone-5f746bd6cb-j62jf\" (UID: \"0c02b76d-5daf-4868-b128-73f03ca4e24c\") " pod="openstack/keystone-5f746bd6cb-j62jf" Dec 11 15:16:09 crc kubenswrapper[5050]: I1211 15:16:09.258355 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p68pw\" (UniqueName: \"kubernetes.io/projected/0c02b76d-5daf-4868-b128-73f03ca4e24c-kube-api-access-p68pw\") pod \"keystone-5f746bd6cb-j62jf\" (UID: \"0c02b76d-5daf-4868-b128-73f03ca4e24c\") " pod="openstack/keystone-5f746bd6cb-j62jf" Dec 11 15:16:09 crc kubenswrapper[5050]: I1211 15:16:09.258421 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c02b76d-5daf-4868-b128-73f03ca4e24c-config-data\") pod \"keystone-5f746bd6cb-j62jf\" (UID: \"0c02b76d-5daf-4868-b128-73f03ca4e24c\") " pod="openstack/keystone-5f746bd6cb-j62jf" Dec 11 15:16:09 crc kubenswrapper[5050]: I1211 15:16:09.263174 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/0c02b76d-5daf-4868-b128-73f03ca4e24c-credential-keys\") pod \"keystone-5f746bd6cb-j62jf\" (UID: \"0c02b76d-5daf-4868-b128-73f03ca4e24c\") " pod="openstack/keystone-5f746bd6cb-j62jf" Dec 11 15:16:09 crc kubenswrapper[5050]: I1211 15:16:09.264339 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0c02b76d-5daf-4868-b128-73f03ca4e24c-scripts\") pod \"keystone-5f746bd6cb-j62jf\" (UID: \"0c02b76d-5daf-4868-b128-73f03ca4e24c\") " pod="openstack/keystone-5f746bd6cb-j62jf" Dec 11 15:16:09 crc kubenswrapper[5050]: I1211 15:16:09.265023 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c02b76d-5daf-4868-b128-73f03ca4e24c-config-data\") pod \"keystone-5f746bd6cb-j62jf\" (UID: \"0c02b76d-5daf-4868-b128-73f03ca4e24c\") " pod="openstack/keystone-5f746bd6cb-j62jf" Dec 11 15:16:09 crc kubenswrapper[5050]: I1211 15:16:09.266250 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/0c02b76d-5daf-4868-b128-73f03ca4e24c-fernet-keys\") pod \"keystone-5f746bd6cb-j62jf\" (UID: \"0c02b76d-5daf-4868-b128-73f03ca4e24c\") " pod="openstack/keystone-5f746bd6cb-j62jf" Dec 11 15:16:09 crc kubenswrapper[5050]: I1211 15:16:09.273600 5050 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c02b76d-5daf-4868-b128-73f03ca4e24c-combined-ca-bundle\") pod \"keystone-5f746bd6cb-j62jf\" (UID: \"0c02b76d-5daf-4868-b128-73f03ca4e24c\") " pod="openstack/keystone-5f746bd6cb-j62jf" Dec 11 15:16:09 crc kubenswrapper[5050]: I1211 15:16:09.275188 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p68pw\" (UniqueName: \"kubernetes.io/projected/0c02b76d-5daf-4868-b128-73f03ca4e24c-kube-api-access-p68pw\") pod \"keystone-5f746bd6cb-j62jf\" (UID: \"0c02b76d-5daf-4868-b128-73f03ca4e24c\") " pod="openstack/keystone-5f746bd6cb-j62jf" Dec 11 15:16:09 crc kubenswrapper[5050]: I1211 15:16:09.378857 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-5f746bd6cb-j62jf" Dec 11 15:16:09 crc kubenswrapper[5050]: I1211 15:16:09.762237 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-5f746bd6cb-j62jf"] Dec 11 15:16:10 crc kubenswrapper[5050]: I1211 15:16:10.635297 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-5f746bd6cb-j62jf" event={"ID":"0c02b76d-5daf-4868-b128-73f03ca4e24c","Type":"ContainerStarted","Data":"0da1f9c2f2e10117da32b117b04dcbf013bea9b667cea895855748fd4dc16dc2"} Dec 11 15:16:10 crc kubenswrapper[5050]: I1211 15:16:10.636237 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-5f746bd6cb-j62jf" Dec 11 15:16:10 crc kubenswrapper[5050]: I1211 15:16:10.636262 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-5f746bd6cb-j62jf" event={"ID":"0c02b76d-5daf-4868-b128-73f03ca4e24c","Type":"ContainerStarted","Data":"0ebad6ce02853ca2a6d62705fb1018cf7286e800fae1024e474b767528ebda37"} Dec 11 15:16:10 crc kubenswrapper[5050]: I1211 15:16:10.660683 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-5f746bd6cb-j62jf" podStartSLOduration=1.660659319 podStartE2EDuration="1.660659319s" podCreationTimestamp="2025-12-11 15:16:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 15:16:10.653352153 +0000 UTC m=+5261.497074739" watchObservedRunningTime="2025-12-11 15:16:10.660659319 +0000 UTC m=+5261.504381895" Dec 11 15:16:10 crc kubenswrapper[5050]: I1211 15:16:10.796389 5050 patch_prober.go:28] interesting pod/machine-config-daemon-wcb2s container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 15:16:10 crc kubenswrapper[5050]: I1211 15:16:10.796450 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 15:16:11 crc kubenswrapper[5050]: I1211 15:16:11.777637 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-wl6kb" Dec 11 15:16:11 crc kubenswrapper[5050]: I1211 15:16:11.777740 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-wl6kb" Dec 11 15:16:11 crc 
kubenswrapper[5050]: I1211 15:16:11.859412 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-wl6kb" Dec 11 15:16:12 crc kubenswrapper[5050]: I1211 15:16:12.697883 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-wl6kb" Dec 11 15:16:12 crc kubenswrapper[5050]: I1211 15:16:12.759234 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wl6kb"] Dec 11 15:16:14 crc kubenswrapper[5050]: I1211 15:16:14.677556 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-wl6kb" podUID="f03ba7a8-bb79-438a-a359-5b6e74d8f8db" containerName="registry-server" containerID="cri-o://ed1d3f23f9b6ef4474802d88ea28fe8162e3959bdf1383a3d3fd1fa36174d763" gracePeriod=2 Dec 11 15:16:15 crc kubenswrapper[5050]: I1211 15:16:15.689662 5050 generic.go:334] "Generic (PLEG): container finished" podID="f03ba7a8-bb79-438a-a359-5b6e74d8f8db" containerID="ed1d3f23f9b6ef4474802d88ea28fe8162e3959bdf1383a3d3fd1fa36174d763" exitCode=0 Dec 11 15:16:15 crc kubenswrapper[5050]: I1211 15:16:15.689705 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wl6kb" event={"ID":"f03ba7a8-bb79-438a-a359-5b6e74d8f8db","Type":"ContainerDied","Data":"ed1d3f23f9b6ef4474802d88ea28fe8162e3959bdf1383a3d3fd1fa36174d763"} Dec 11 15:16:15 crc kubenswrapper[5050]: I1211 15:16:15.690269 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wl6kb" event={"ID":"f03ba7a8-bb79-438a-a359-5b6e74d8f8db","Type":"ContainerDied","Data":"b7dbca5e7484f782a77112f1b78f78a874d896af5cf127ff2e192089233e8b3e"} Dec 11 15:16:15 crc kubenswrapper[5050]: I1211 15:16:15.690299 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b7dbca5e7484f782a77112f1b78f78a874d896af5cf127ff2e192089233e8b3e" Dec 11 15:16:15 crc kubenswrapper[5050]: I1211 15:16:15.705792 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wl6kb" Dec 11 15:16:15 crc kubenswrapper[5050]: I1211 15:16:15.798108 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f03ba7a8-bb79-438a-a359-5b6e74d8f8db-catalog-content\") pod \"f03ba7a8-bb79-438a-a359-5b6e74d8f8db\" (UID: \"f03ba7a8-bb79-438a-a359-5b6e74d8f8db\") " Dec 11 15:16:15 crc kubenswrapper[5050]: I1211 15:16:15.798250 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-swkj9\" (UniqueName: \"kubernetes.io/projected/f03ba7a8-bb79-438a-a359-5b6e74d8f8db-kube-api-access-swkj9\") pod \"f03ba7a8-bb79-438a-a359-5b6e74d8f8db\" (UID: \"f03ba7a8-bb79-438a-a359-5b6e74d8f8db\") " Dec 11 15:16:15 crc kubenswrapper[5050]: I1211 15:16:15.798314 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f03ba7a8-bb79-438a-a359-5b6e74d8f8db-utilities\") pod \"f03ba7a8-bb79-438a-a359-5b6e74d8f8db\" (UID: \"f03ba7a8-bb79-438a-a359-5b6e74d8f8db\") " Dec 11 15:16:15 crc kubenswrapper[5050]: I1211 15:16:15.799867 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f03ba7a8-bb79-438a-a359-5b6e74d8f8db-utilities" (OuterVolumeSpecName: "utilities") pod "f03ba7a8-bb79-438a-a359-5b6e74d8f8db" (UID: "f03ba7a8-bb79-438a-a359-5b6e74d8f8db"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 15:16:15 crc kubenswrapper[5050]: I1211 15:16:15.804391 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f03ba7a8-bb79-438a-a359-5b6e74d8f8db-kube-api-access-swkj9" (OuterVolumeSpecName: "kube-api-access-swkj9") pod "f03ba7a8-bb79-438a-a359-5b6e74d8f8db" (UID: "f03ba7a8-bb79-438a-a359-5b6e74d8f8db"). InnerVolumeSpecName "kube-api-access-swkj9". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 15:16:15 crc kubenswrapper[5050]: I1211 15:16:15.825995 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f03ba7a8-bb79-438a-a359-5b6e74d8f8db-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f03ba7a8-bb79-438a-a359-5b6e74d8f8db" (UID: "f03ba7a8-bb79-438a-a359-5b6e74d8f8db"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 15:16:15 crc kubenswrapper[5050]: I1211 15:16:15.900699 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-swkj9\" (UniqueName: \"kubernetes.io/projected/f03ba7a8-bb79-438a-a359-5b6e74d8f8db-kube-api-access-swkj9\") on node \"crc\" DevicePath \"\"" Dec 11 15:16:15 crc kubenswrapper[5050]: I1211 15:16:15.900759 5050 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f03ba7a8-bb79-438a-a359-5b6e74d8f8db-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 15:16:15 crc kubenswrapper[5050]: I1211 15:16:15.900774 5050 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f03ba7a8-bb79-438a-a359-5b6e74d8f8db-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 15:16:16 crc kubenswrapper[5050]: I1211 15:16:16.701243 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wl6kb" Dec 11 15:16:16 crc kubenswrapper[5050]: I1211 15:16:16.755429 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wl6kb"] Dec 11 15:16:16 crc kubenswrapper[5050]: I1211 15:16:16.765526 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-wl6kb"] Dec 11 15:16:17 crc kubenswrapper[5050]: I1211 15:16:17.564684 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f03ba7a8-bb79-438a-a359-5b6e74d8f8db" path="/var/lib/kubelet/pods/f03ba7a8-bb79-438a-a359-5b6e74d8f8db/volumes" Dec 11 15:16:40 crc kubenswrapper[5050]: I1211 15:16:40.801171 5050 patch_prober.go:28] interesting pod/machine-config-daemon-wcb2s container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 15:16:40 crc kubenswrapper[5050]: I1211 15:16:40.802336 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 15:16:40 crc kubenswrapper[5050]: I1211 15:16:40.802492 5050 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" Dec 11 15:16:40 crc kubenswrapper[5050]: I1211 15:16:40.803274 5050 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5e751c2ff817a404aae81889734d5a9bd57af497d8e885add288368bb5b0040e"} pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 11 15:16:40 crc kubenswrapper[5050]: I1211 15:16:40.803336 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" containerName="machine-config-daemon" containerID="cri-o://5e751c2ff817a404aae81889734d5a9bd57af497d8e885add288368bb5b0040e" gracePeriod=600 Dec 11 15:16:40 crc kubenswrapper[5050]: I1211 15:16:40.808810 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-5f746bd6cb-j62jf" Dec 11 15:16:40 crc kubenswrapper[5050]: E1211 15:16:40.950553 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 15:16:40 crc kubenswrapper[5050]: I1211 15:16:40.969356 5050 generic.go:334] "Generic (PLEG): container finished" podID="7e849b2e-7cd7-4e49-acd2-deab139c699a" containerID="5e751c2ff817a404aae81889734d5a9bd57af497d8e885add288368bb5b0040e" exitCode=0 Dec 11 15:16:40 crc kubenswrapper[5050]: I1211 15:16:40.969423 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" event={"ID":"7e849b2e-7cd7-4e49-acd2-deab139c699a","Type":"ContainerDied","Data":"5e751c2ff817a404aae81889734d5a9bd57af497d8e885add288368bb5b0040e"} Dec 11 15:16:40 crc kubenswrapper[5050]: I1211 15:16:40.969538 5050 scope.go:117] "RemoveContainer" containerID="54afae94cdfb9cc257fb467d0ac4a5e3143dbdadf4785f8ed56cea4ab889612b" Dec 11 15:16:40 crc kubenswrapper[5050]: I1211 15:16:40.970032 5050 scope.go:117] "RemoveContainer" containerID="5e751c2ff817a404aae81889734d5a9bd57af497d8e885add288368bb5b0040e" Dec 11 15:16:40 crc kubenswrapper[5050]: E1211 15:16:40.970340 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 15:16:44 crc kubenswrapper[5050]: I1211 15:16:44.426077 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Dec 11 15:16:44 crc kubenswrapper[5050]: E1211 15:16:44.427357 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f03ba7a8-bb79-438a-a359-5b6e74d8f8db" containerName="extract-content" Dec 11 15:16:44 crc kubenswrapper[5050]: I1211 15:16:44.427377 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="f03ba7a8-bb79-438a-a359-5b6e74d8f8db" containerName="extract-content" Dec 11 15:16:44 crc kubenswrapper[5050]: E1211 15:16:44.427412 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f03ba7a8-bb79-438a-a359-5b6e74d8f8db" containerName="registry-server" Dec 11 15:16:44 crc kubenswrapper[5050]: I1211 15:16:44.427420 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="f03ba7a8-bb79-438a-a359-5b6e74d8f8db" containerName="registry-server" Dec 11 15:16:44 crc kubenswrapper[5050]: E1211 15:16:44.427446 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f03ba7a8-bb79-438a-a359-5b6e74d8f8db" containerName="extract-utilities" Dec 11 15:16:44 crc kubenswrapper[5050]: I1211 15:16:44.427456 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="f03ba7a8-bb79-438a-a359-5b6e74d8f8db" containerName="extract-utilities" Dec 11 15:16:44 crc kubenswrapper[5050]: I1211 15:16:44.427675 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="f03ba7a8-bb79-438a-a359-5b6e74d8f8db" containerName="registry-server" Dec 11 15:16:44 crc kubenswrapper[5050]: I1211 15:16:44.428455 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Dec 11 15:16:44 crc kubenswrapper[5050]: I1211 15:16:44.433440 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Dec 11 15:16:44 crc kubenswrapper[5050]: I1211 15:16:44.435201 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-c88pf" Dec 11 15:16:44 crc kubenswrapper[5050]: I1211 15:16:44.436491 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Dec 11 15:16:44 crc kubenswrapper[5050]: I1211 15:16:44.442187 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Dec 11 15:16:44 crc kubenswrapper[5050]: I1211 15:16:44.462146 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Dec 11 15:16:44 crc kubenswrapper[5050]: E1211 15:16:44.463462 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-5q8q5 openstack-config openstack-config-secret], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/openstackclient" podUID="73b060cb-f74f-41e7-95cc-6ee6b05877c8" Dec 11 15:16:44 crc kubenswrapper[5050]: I1211 15:16:44.465613 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Dec 11 15:16:44 crc kubenswrapper[5050]: I1211 15:16:44.467686 5050 status_manager.go:875] "Failed to update status for pod" pod="openstack/openstackclient" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"73b060cb-f74f-41e7-95cc-6ee6b05877c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T15:16:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T15:16:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T15:16:44Z\\\",\\\"message\\\":\\\"containers with unready status: [openstackclient]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T15:16:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[openstackclient]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/podified-antelope-centos9/openstack-openstackclient@sha256:2b4f8494513a3af102066fec5868ab167ac8664aceb2f0c639d7a0b60260a944\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"openstackclient\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/home/cloud-admin/.config/openstack/clouds.yaml\\\",\\\"name\\\":\\\"openstack-config\\\"},{\\\"mountPath\\\":\\\"/home/cloud-admin/.config/openstack/secure.yaml\\\",\\\"name\\\":\\\"openstack-config-secret\\\"},{\\\"mountPath\\\":\\\"/home/cloud-admin/cloudrc\\\",\\\"name\\\":\\\"openstack-config-secret\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5q8q5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T15:16:44Z\\\"}}\" for pod \"openstack\"/\"openstackclient\": pods \"openstackclient\" not found" Dec 11 15:16:44 crc kubenswrapper[5050]: I1211 15:16:44.541908 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/73b060cb-f74f-41e7-95cc-6ee6b05877c8-openstack-config\") pod \"openstackclient\" (UID: \"73b060cb-f74f-41e7-95cc-6ee6b05877c8\") " pod="openstack/openstackclient" Dec 11 15:16:44 crc kubenswrapper[5050]: I1211 15:16:44.542111 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5q8q5\" (UniqueName: \"kubernetes.io/projected/73b060cb-f74f-41e7-95cc-6ee6b05877c8-kube-api-access-5q8q5\") pod \"openstackclient\" (UID: \"73b060cb-f74f-41e7-95cc-6ee6b05877c8\") " pod="openstack/openstackclient" Dec 11 15:16:44 crc kubenswrapper[5050]: I1211 15:16:44.542164 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/73b060cb-f74f-41e7-95cc-6ee6b05877c8-openstack-config-secret\") pod \"openstackclient\" (UID: \"73b060cb-f74f-41e7-95cc-6ee6b05877c8\") " pod="openstack/openstackclient" Dec 11 15:16:44 crc kubenswrapper[5050]: I1211 15:16:44.606213 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Dec 11 15:16:44 crc kubenswrapper[5050]: I1211 15:16:44.607457 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Dec 11 15:16:44 crc kubenswrapper[5050]: I1211 15:16:44.625761 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Dec 11 15:16:44 crc kubenswrapper[5050]: I1211 15:16:44.644039 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/73b060cb-f74f-41e7-95cc-6ee6b05877c8-openstack-config-secret\") pod \"openstackclient\" (UID: \"73b060cb-f74f-41e7-95cc-6ee6b05877c8\") " pod="openstack/openstackclient" Dec 11 15:16:44 crc kubenswrapper[5050]: I1211 15:16:44.644311 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/73b060cb-f74f-41e7-95cc-6ee6b05877c8-openstack-config\") pod \"openstackclient\" (UID: \"73b060cb-f74f-41e7-95cc-6ee6b05877c8\") " pod="openstack/openstackclient" Dec 11 15:16:44 crc kubenswrapper[5050]: I1211 15:16:44.644499 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5q8q5\" (UniqueName: \"kubernetes.io/projected/73b060cb-f74f-41e7-95cc-6ee6b05877c8-kube-api-access-5q8q5\") pod \"openstackclient\" (UID: \"73b060cb-f74f-41e7-95cc-6ee6b05877c8\") " pod="openstack/openstackclient" Dec 11 15:16:44 crc kubenswrapper[5050]: I1211 15:16:44.649039 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/73b060cb-f74f-41e7-95cc-6ee6b05877c8-openstack-config\") pod \"openstackclient\" (UID: \"73b060cb-f74f-41e7-95cc-6ee6b05877c8\") " pod="openstack/openstackclient" Dec 11 15:16:44 crc kubenswrapper[5050]: I1211 15:16:44.652966 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/73b060cb-f74f-41e7-95cc-6ee6b05877c8-openstack-config-secret\") pod \"openstackclient\" (UID: \"73b060cb-f74f-41e7-95cc-6ee6b05877c8\") " pod="openstack/openstackclient" Dec 11 15:16:44 crc kubenswrapper[5050]: E1211 15:16:44.658084 5050 projected.go:194] Error preparing data for projected volume kube-api-access-5q8q5 for pod openstack/openstackclient: failed to fetch token: serviceaccounts "openstackclient-openstackclient" is forbidden: the UID in the bound object reference (73b060cb-f74f-41e7-95cc-6ee6b05877c8) does not match the UID in record. The object might have been deleted and then recreated Dec 11 15:16:44 crc kubenswrapper[5050]: E1211 15:16:44.658166 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/73b060cb-f74f-41e7-95cc-6ee6b05877c8-kube-api-access-5q8q5 podName:73b060cb-f74f-41e7-95cc-6ee6b05877c8 nodeName:}" failed. No retries permitted until 2025-12-11 15:16:45.158139356 +0000 UTC m=+5296.001861962 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-5q8q5" (UniqueName: "kubernetes.io/projected/73b060cb-f74f-41e7-95cc-6ee6b05877c8-kube-api-access-5q8q5") pod "openstackclient" (UID: "73b060cb-f74f-41e7-95cc-6ee6b05877c8") : failed to fetch token: serviceaccounts "openstackclient-openstackclient" is forbidden: the UID in the bound object reference (73b060cb-f74f-41e7-95cc-6ee6b05877c8) does not match the UID in record. 
The object might have been deleted and then recreated Dec 11 15:16:44 crc kubenswrapper[5050]: I1211 15:16:44.667105 5050 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="73b060cb-f74f-41e7-95cc-6ee6b05877c8" podUID="9dab0594-84c1-48fa-b0f9-a010ae461c08" Dec 11 15:16:44 crc kubenswrapper[5050]: I1211 15:16:44.746334 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/9dab0594-84c1-48fa-b0f9-a010ae461c08-openstack-config\") pod \"openstackclient\" (UID: \"9dab0594-84c1-48fa-b0f9-a010ae461c08\") " pod="openstack/openstackclient" Dec 11 15:16:44 crc kubenswrapper[5050]: I1211 15:16:44.746688 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/9dab0594-84c1-48fa-b0f9-a010ae461c08-openstack-config-secret\") pod \"openstackclient\" (UID: \"9dab0594-84c1-48fa-b0f9-a010ae461c08\") " pod="openstack/openstackclient" Dec 11 15:16:44 crc kubenswrapper[5050]: I1211 15:16:44.746758 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75lr8\" (UniqueName: \"kubernetes.io/projected/9dab0594-84c1-48fa-b0f9-a010ae461c08-kube-api-access-75lr8\") pod \"openstackclient\" (UID: \"9dab0594-84c1-48fa-b0f9-a010ae461c08\") " pod="openstack/openstackclient" Dec 11 15:16:44 crc kubenswrapper[5050]: I1211 15:16:44.848734 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/9dab0594-84c1-48fa-b0f9-a010ae461c08-openstack-config\") pod \"openstackclient\" (UID: \"9dab0594-84c1-48fa-b0f9-a010ae461c08\") " pod="openstack/openstackclient" Dec 11 15:16:44 crc kubenswrapper[5050]: I1211 15:16:44.848888 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/9dab0594-84c1-48fa-b0f9-a010ae461c08-openstack-config-secret\") pod \"openstackclient\" (UID: \"9dab0594-84c1-48fa-b0f9-a010ae461c08\") " pod="openstack/openstackclient" Dec 11 15:16:44 crc kubenswrapper[5050]: I1211 15:16:44.848922 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-75lr8\" (UniqueName: \"kubernetes.io/projected/9dab0594-84c1-48fa-b0f9-a010ae461c08-kube-api-access-75lr8\") pod \"openstackclient\" (UID: \"9dab0594-84c1-48fa-b0f9-a010ae461c08\") " pod="openstack/openstackclient" Dec 11 15:16:44 crc kubenswrapper[5050]: I1211 15:16:44.850313 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/9dab0594-84c1-48fa-b0f9-a010ae461c08-openstack-config\") pod \"openstackclient\" (UID: \"9dab0594-84c1-48fa-b0f9-a010ae461c08\") " pod="openstack/openstackclient" Dec 11 15:16:44 crc kubenswrapper[5050]: I1211 15:16:44.855669 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/9dab0594-84c1-48fa-b0f9-a010ae461c08-openstack-config-secret\") pod \"openstackclient\" (UID: \"9dab0594-84c1-48fa-b0f9-a010ae461c08\") " pod="openstack/openstackclient" Dec 11 15:16:44 crc kubenswrapper[5050]: I1211 15:16:44.864217 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-75lr8\" (UniqueName: 
\"kubernetes.io/projected/9dab0594-84c1-48fa-b0f9-a010ae461c08-kube-api-access-75lr8\") pod \"openstackclient\" (UID: \"9dab0594-84c1-48fa-b0f9-a010ae461c08\") " pod="openstack/openstackclient" Dec 11 15:16:44 crc kubenswrapper[5050]: I1211 15:16:44.927314 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Dec 11 15:16:45 crc kubenswrapper[5050]: I1211 15:16:45.006551 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Dec 11 15:16:45 crc kubenswrapper[5050]: I1211 15:16:45.011289 5050 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="73b060cb-f74f-41e7-95cc-6ee6b05877c8" podUID="9dab0594-84c1-48fa-b0f9-a010ae461c08" Dec 11 15:16:45 crc kubenswrapper[5050]: I1211 15:16:45.058572 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Dec 11 15:16:45 crc kubenswrapper[5050]: I1211 15:16:45.062337 5050 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="73b060cb-f74f-41e7-95cc-6ee6b05877c8" podUID="9dab0594-84c1-48fa-b0f9-a010ae461c08" Dec 11 15:16:45 crc kubenswrapper[5050]: I1211 15:16:45.155304 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/73b060cb-f74f-41e7-95cc-6ee6b05877c8-openstack-config\") pod \"73b060cb-f74f-41e7-95cc-6ee6b05877c8\" (UID: \"73b060cb-f74f-41e7-95cc-6ee6b05877c8\") " Dec 11 15:16:45 crc kubenswrapper[5050]: I1211 15:16:45.155442 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/73b060cb-f74f-41e7-95cc-6ee6b05877c8-openstack-config-secret\") pod \"73b060cb-f74f-41e7-95cc-6ee6b05877c8\" (UID: \"73b060cb-f74f-41e7-95cc-6ee6b05877c8\") " Dec 11 15:16:45 crc kubenswrapper[5050]: I1211 15:16:45.155823 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/73b060cb-f74f-41e7-95cc-6ee6b05877c8-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "73b060cb-f74f-41e7-95cc-6ee6b05877c8" (UID: "73b060cb-f74f-41e7-95cc-6ee6b05877c8"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 15:16:45 crc kubenswrapper[5050]: I1211 15:16:45.156471 5050 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/73b060cb-f74f-41e7-95cc-6ee6b05877c8-openstack-config\") on node \"crc\" DevicePath \"\"" Dec 11 15:16:45 crc kubenswrapper[5050]: I1211 15:16:45.156495 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5q8q5\" (UniqueName: \"kubernetes.io/projected/73b060cb-f74f-41e7-95cc-6ee6b05877c8-kube-api-access-5q8q5\") on node \"crc\" DevicePath \"\"" Dec 11 15:16:45 crc kubenswrapper[5050]: I1211 15:16:45.161659 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73b060cb-f74f-41e7-95cc-6ee6b05877c8-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "73b060cb-f74f-41e7-95cc-6ee6b05877c8" (UID: "73b060cb-f74f-41e7-95cc-6ee6b05877c8"). InnerVolumeSpecName "openstack-config-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 15:16:45 crc kubenswrapper[5050]: I1211 15:16:45.257722 5050 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/73b060cb-f74f-41e7-95cc-6ee6b05877c8-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Dec 11 15:16:45 crc kubenswrapper[5050]: I1211 15:16:45.307789 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Dec 11 15:16:45 crc kubenswrapper[5050]: I1211 15:16:45.567510 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="73b060cb-f74f-41e7-95cc-6ee6b05877c8" path="/var/lib/kubelet/pods/73b060cb-f74f-41e7-95cc-6ee6b05877c8/volumes" Dec 11 15:16:46 crc kubenswrapper[5050]: I1211 15:16:46.019686 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Dec 11 15:16:46 crc kubenswrapper[5050]: I1211 15:16:46.021153 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"9dab0594-84c1-48fa-b0f9-a010ae461c08","Type":"ContainerStarted","Data":"c4217da09ab7b280b39f45e31215e32e283a8504c6d9bb8d8c904940e92c9f87"} Dec 11 15:16:46 crc kubenswrapper[5050]: I1211 15:16:46.021185 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"9dab0594-84c1-48fa-b0f9-a010ae461c08","Type":"ContainerStarted","Data":"df5eab6d3a9b81f555062ac42f225ae4005d6181468747ef4e1276049cdfb884"} Dec 11 15:16:46 crc kubenswrapper[5050]: I1211 15:16:46.054547 5050 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="73b060cb-f74f-41e7-95cc-6ee6b05877c8" podUID="9dab0594-84c1-48fa-b0f9-a010ae461c08" Dec 11 15:16:49 crc kubenswrapper[5050]: I1211 15:16:49.085080 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=5.085001905 podStartE2EDuration="5.085001905s" podCreationTimestamp="2025-12-11 15:16:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 15:16:46.049332869 +0000 UTC m=+5296.893055455" watchObservedRunningTime="2025-12-11 15:16:49.085001905 +0000 UTC m=+5299.928724531" Dec 11 15:16:49 crc kubenswrapper[5050]: I1211 15:16:49.090398 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-7c54p"] Dec 11 15:16:49 crc kubenswrapper[5050]: I1211 15:16:49.094211 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7c54p" Dec 11 15:16:49 crc kubenswrapper[5050]: I1211 15:16:49.117999 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7c54p"] Dec 11 15:16:49 crc kubenswrapper[5050]: I1211 15:16:49.232005 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/514df745-2f59-4d2e-b2b3-9d59abbf6cd2-catalog-content\") pod \"certified-operators-7c54p\" (UID: \"514df745-2f59-4d2e-b2b3-9d59abbf6cd2\") " pod="openshift-marketplace/certified-operators-7c54p" Dec 11 15:16:49 crc kubenswrapper[5050]: I1211 15:16:49.232143 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4jw4\" (UniqueName: \"kubernetes.io/projected/514df745-2f59-4d2e-b2b3-9d59abbf6cd2-kube-api-access-z4jw4\") pod \"certified-operators-7c54p\" (UID: \"514df745-2f59-4d2e-b2b3-9d59abbf6cd2\") " pod="openshift-marketplace/certified-operators-7c54p" Dec 11 15:16:49 crc kubenswrapper[5050]: I1211 15:16:49.232491 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/514df745-2f59-4d2e-b2b3-9d59abbf6cd2-utilities\") pod \"certified-operators-7c54p\" (UID: \"514df745-2f59-4d2e-b2b3-9d59abbf6cd2\") " pod="openshift-marketplace/certified-operators-7c54p" Dec 11 15:16:49 crc kubenswrapper[5050]: I1211 15:16:49.334366 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/514df745-2f59-4d2e-b2b3-9d59abbf6cd2-utilities\") pod \"certified-operators-7c54p\" (UID: \"514df745-2f59-4d2e-b2b3-9d59abbf6cd2\") " pod="openshift-marketplace/certified-operators-7c54p" Dec 11 15:16:49 crc kubenswrapper[5050]: I1211 15:16:49.334422 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/514df745-2f59-4d2e-b2b3-9d59abbf6cd2-catalog-content\") pod \"certified-operators-7c54p\" (UID: \"514df745-2f59-4d2e-b2b3-9d59abbf6cd2\") " pod="openshift-marketplace/certified-operators-7c54p" Dec 11 15:16:49 crc kubenswrapper[5050]: I1211 15:16:49.334449 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z4jw4\" (UniqueName: \"kubernetes.io/projected/514df745-2f59-4d2e-b2b3-9d59abbf6cd2-kube-api-access-z4jw4\") pod \"certified-operators-7c54p\" (UID: \"514df745-2f59-4d2e-b2b3-9d59abbf6cd2\") " pod="openshift-marketplace/certified-operators-7c54p" Dec 11 15:16:49 crc kubenswrapper[5050]: I1211 15:16:49.335085 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/514df745-2f59-4d2e-b2b3-9d59abbf6cd2-utilities\") pod \"certified-operators-7c54p\" (UID: \"514df745-2f59-4d2e-b2b3-9d59abbf6cd2\") " pod="openshift-marketplace/certified-operators-7c54p" Dec 11 15:16:49 crc kubenswrapper[5050]: I1211 15:16:49.335195 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/514df745-2f59-4d2e-b2b3-9d59abbf6cd2-catalog-content\") pod \"certified-operators-7c54p\" (UID: \"514df745-2f59-4d2e-b2b3-9d59abbf6cd2\") " pod="openshift-marketplace/certified-operators-7c54p" Dec 11 15:16:49 crc kubenswrapper[5050]: I1211 15:16:49.355992 5050 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-z4jw4\" (UniqueName: \"kubernetes.io/projected/514df745-2f59-4d2e-b2b3-9d59abbf6cd2-kube-api-access-z4jw4\") pod \"certified-operators-7c54p\" (UID: \"514df745-2f59-4d2e-b2b3-9d59abbf6cd2\") " pod="openshift-marketplace/certified-operators-7c54p" Dec 11 15:16:49 crc kubenswrapper[5050]: I1211 15:16:49.428881 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7c54p" Dec 11 15:16:49 crc kubenswrapper[5050]: I1211 15:16:49.733613 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7c54p"] Dec 11 15:16:50 crc kubenswrapper[5050]: I1211 15:16:50.067874 5050 generic.go:334] "Generic (PLEG): container finished" podID="514df745-2f59-4d2e-b2b3-9d59abbf6cd2" containerID="42ccb70a77d5d5ab1253adf0391f03f108a77ee1c66cdc67101f136155cf949b" exitCode=0 Dec 11 15:16:50 crc kubenswrapper[5050]: I1211 15:16:50.067986 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7c54p" event={"ID":"514df745-2f59-4d2e-b2b3-9d59abbf6cd2","Type":"ContainerDied","Data":"42ccb70a77d5d5ab1253adf0391f03f108a77ee1c66cdc67101f136155cf949b"} Dec 11 15:16:50 crc kubenswrapper[5050]: I1211 15:16:50.068303 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7c54p" event={"ID":"514df745-2f59-4d2e-b2b3-9d59abbf6cd2","Type":"ContainerStarted","Data":"876b1765876dc8bc224ac757923e4f054483702aa91f73575aacb3121fbc683f"} Dec 11 15:16:51 crc kubenswrapper[5050]: I1211 15:16:51.547371 5050 scope.go:117] "RemoveContainer" containerID="5e751c2ff817a404aae81889734d5a9bd57af497d8e885add288368bb5b0040e" Dec 11 15:16:51 crc kubenswrapper[5050]: E1211 15:16:51.548396 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 15:16:52 crc kubenswrapper[5050]: I1211 15:16:52.089392 5050 generic.go:334] "Generic (PLEG): container finished" podID="514df745-2f59-4d2e-b2b3-9d59abbf6cd2" containerID="ee2aa0b0e95a12724bb40f367e7d8b97f0bf2de750f8298bfc68175923e332f1" exitCode=0 Dec 11 15:16:52 crc kubenswrapper[5050]: I1211 15:16:52.089451 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7c54p" event={"ID":"514df745-2f59-4d2e-b2b3-9d59abbf6cd2","Type":"ContainerDied","Data":"ee2aa0b0e95a12724bb40f367e7d8b97f0bf2de750f8298bfc68175923e332f1"} Dec 11 15:16:53 crc kubenswrapper[5050]: I1211 15:16:53.100953 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7c54p" event={"ID":"514df745-2f59-4d2e-b2b3-9d59abbf6cd2","Type":"ContainerStarted","Data":"b6ea5c2153416843b73e847d93e70b92a0895c742c2747a17941824de5780bb0"} Dec 11 15:16:53 crc kubenswrapper[5050]: I1211 15:16:53.129820 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-7c54p" podStartSLOduration=1.6904443310000001 podStartE2EDuration="4.129783405s" podCreationTimestamp="2025-12-11 15:16:49 +0000 UTC" firstStartedPulling="2025-12-11 15:16:50.070235578 +0000 UTC m=+5300.913958174" 
lastFinishedPulling="2025-12-11 15:16:52.509574662 +0000 UTC m=+5303.353297248" observedRunningTime="2025-12-11 15:16:53.120894466 +0000 UTC m=+5303.964617062" watchObservedRunningTime="2025-12-11 15:16:53.129783405 +0000 UTC m=+5303.973506031" Dec 11 15:16:59 crc kubenswrapper[5050]: I1211 15:16:59.430299 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-7c54p" Dec 11 15:16:59 crc kubenswrapper[5050]: I1211 15:16:59.430819 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-7c54p" Dec 11 15:16:59 crc kubenswrapper[5050]: I1211 15:16:59.559092 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-7c54p" Dec 11 15:17:00 crc kubenswrapper[5050]: I1211 15:17:00.220379 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-7c54p" Dec 11 15:17:00 crc kubenswrapper[5050]: I1211 15:17:00.278066 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7c54p"] Dec 11 15:17:02 crc kubenswrapper[5050]: I1211 15:17:02.184852 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-7c54p" podUID="514df745-2f59-4d2e-b2b3-9d59abbf6cd2" containerName="registry-server" containerID="cri-o://b6ea5c2153416843b73e847d93e70b92a0895c742c2747a17941824de5780bb0" gracePeriod=2 Dec 11 15:17:03 crc kubenswrapper[5050]: I1211 15:17:03.131673 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7c54p" Dec 11 15:17:03 crc kubenswrapper[5050]: I1211 15:17:03.179205 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/514df745-2f59-4d2e-b2b3-9d59abbf6cd2-catalog-content\") pod \"514df745-2f59-4d2e-b2b3-9d59abbf6cd2\" (UID: \"514df745-2f59-4d2e-b2b3-9d59abbf6cd2\") " Dec 11 15:17:03 crc kubenswrapper[5050]: I1211 15:17:03.179272 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z4jw4\" (UniqueName: \"kubernetes.io/projected/514df745-2f59-4d2e-b2b3-9d59abbf6cd2-kube-api-access-z4jw4\") pod \"514df745-2f59-4d2e-b2b3-9d59abbf6cd2\" (UID: \"514df745-2f59-4d2e-b2b3-9d59abbf6cd2\") " Dec 11 15:17:03 crc kubenswrapper[5050]: I1211 15:17:03.179410 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/514df745-2f59-4d2e-b2b3-9d59abbf6cd2-utilities\") pod \"514df745-2f59-4d2e-b2b3-9d59abbf6cd2\" (UID: \"514df745-2f59-4d2e-b2b3-9d59abbf6cd2\") " Dec 11 15:17:03 crc kubenswrapper[5050]: I1211 15:17:03.180620 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/514df745-2f59-4d2e-b2b3-9d59abbf6cd2-utilities" (OuterVolumeSpecName: "utilities") pod "514df745-2f59-4d2e-b2b3-9d59abbf6cd2" (UID: "514df745-2f59-4d2e-b2b3-9d59abbf6cd2"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 15:17:03 crc kubenswrapper[5050]: I1211 15:17:03.188571 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/514df745-2f59-4d2e-b2b3-9d59abbf6cd2-kube-api-access-z4jw4" (OuterVolumeSpecName: "kube-api-access-z4jw4") pod "514df745-2f59-4d2e-b2b3-9d59abbf6cd2" (UID: "514df745-2f59-4d2e-b2b3-9d59abbf6cd2"). InnerVolumeSpecName "kube-api-access-z4jw4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 15:17:03 crc kubenswrapper[5050]: I1211 15:17:03.194439 5050 generic.go:334] "Generic (PLEG): container finished" podID="514df745-2f59-4d2e-b2b3-9d59abbf6cd2" containerID="b6ea5c2153416843b73e847d93e70b92a0895c742c2747a17941824de5780bb0" exitCode=0 Dec 11 15:17:03 crc kubenswrapper[5050]: I1211 15:17:03.194483 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7c54p" event={"ID":"514df745-2f59-4d2e-b2b3-9d59abbf6cd2","Type":"ContainerDied","Data":"b6ea5c2153416843b73e847d93e70b92a0895c742c2747a17941824de5780bb0"} Dec 11 15:17:03 crc kubenswrapper[5050]: I1211 15:17:03.194513 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7c54p" event={"ID":"514df745-2f59-4d2e-b2b3-9d59abbf6cd2","Type":"ContainerDied","Data":"876b1765876dc8bc224ac757923e4f054483702aa91f73575aacb3121fbc683f"} Dec 11 15:17:03 crc kubenswrapper[5050]: I1211 15:17:03.194533 5050 scope.go:117] "RemoveContainer" containerID="b6ea5c2153416843b73e847d93e70b92a0895c742c2747a17941824de5780bb0" Dec 11 15:17:03 crc kubenswrapper[5050]: I1211 15:17:03.194682 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7c54p" Dec 11 15:17:03 crc kubenswrapper[5050]: I1211 15:17:03.229197 5050 scope.go:117] "RemoveContainer" containerID="ee2aa0b0e95a12724bb40f367e7d8b97f0bf2de750f8298bfc68175923e332f1" Dec 11 15:17:03 crc kubenswrapper[5050]: I1211 15:17:03.240989 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/514df745-2f59-4d2e-b2b3-9d59abbf6cd2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "514df745-2f59-4d2e-b2b3-9d59abbf6cd2" (UID: "514df745-2f59-4d2e-b2b3-9d59abbf6cd2"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 15:17:03 crc kubenswrapper[5050]: I1211 15:17:03.248875 5050 scope.go:117] "RemoveContainer" containerID="42ccb70a77d5d5ab1253adf0391f03f108a77ee1c66cdc67101f136155cf949b" Dec 11 15:17:03 crc kubenswrapper[5050]: I1211 15:17:03.279336 5050 scope.go:117] "RemoveContainer" containerID="b6ea5c2153416843b73e847d93e70b92a0895c742c2747a17941824de5780bb0" Dec 11 15:17:03 crc kubenswrapper[5050]: E1211 15:17:03.279696 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b6ea5c2153416843b73e847d93e70b92a0895c742c2747a17941824de5780bb0\": container with ID starting with b6ea5c2153416843b73e847d93e70b92a0895c742c2747a17941824de5780bb0 not found: ID does not exist" containerID="b6ea5c2153416843b73e847d93e70b92a0895c742c2747a17941824de5780bb0" Dec 11 15:17:03 crc kubenswrapper[5050]: I1211 15:17:03.279727 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b6ea5c2153416843b73e847d93e70b92a0895c742c2747a17941824de5780bb0"} err="failed to get container status \"b6ea5c2153416843b73e847d93e70b92a0895c742c2747a17941824de5780bb0\": rpc error: code = NotFound desc = could not find container \"b6ea5c2153416843b73e847d93e70b92a0895c742c2747a17941824de5780bb0\": container with ID starting with b6ea5c2153416843b73e847d93e70b92a0895c742c2747a17941824de5780bb0 not found: ID does not exist" Dec 11 15:17:03 crc kubenswrapper[5050]: I1211 15:17:03.279747 5050 scope.go:117] "RemoveContainer" containerID="ee2aa0b0e95a12724bb40f367e7d8b97f0bf2de750f8298bfc68175923e332f1" Dec 11 15:17:03 crc kubenswrapper[5050]: E1211 15:17:03.279932 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ee2aa0b0e95a12724bb40f367e7d8b97f0bf2de750f8298bfc68175923e332f1\": container with ID starting with ee2aa0b0e95a12724bb40f367e7d8b97f0bf2de750f8298bfc68175923e332f1 not found: ID does not exist" containerID="ee2aa0b0e95a12724bb40f367e7d8b97f0bf2de750f8298bfc68175923e332f1" Dec 11 15:17:03 crc kubenswrapper[5050]: I1211 15:17:03.279953 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee2aa0b0e95a12724bb40f367e7d8b97f0bf2de750f8298bfc68175923e332f1"} err="failed to get container status \"ee2aa0b0e95a12724bb40f367e7d8b97f0bf2de750f8298bfc68175923e332f1\": rpc error: code = NotFound desc = could not find container \"ee2aa0b0e95a12724bb40f367e7d8b97f0bf2de750f8298bfc68175923e332f1\": container with ID starting with ee2aa0b0e95a12724bb40f367e7d8b97f0bf2de750f8298bfc68175923e332f1 not found: ID does not exist" Dec 11 15:17:03 crc kubenswrapper[5050]: I1211 15:17:03.279966 5050 scope.go:117] "RemoveContainer" containerID="42ccb70a77d5d5ab1253adf0391f03f108a77ee1c66cdc67101f136155cf949b" Dec 11 15:17:03 crc kubenswrapper[5050]: E1211 15:17:03.280134 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"42ccb70a77d5d5ab1253adf0391f03f108a77ee1c66cdc67101f136155cf949b\": container with ID starting with 42ccb70a77d5d5ab1253adf0391f03f108a77ee1c66cdc67101f136155cf949b not found: ID does not exist" containerID="42ccb70a77d5d5ab1253adf0391f03f108a77ee1c66cdc67101f136155cf949b" Dec 11 15:17:03 crc kubenswrapper[5050]: I1211 15:17:03.280170 5050 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"42ccb70a77d5d5ab1253adf0391f03f108a77ee1c66cdc67101f136155cf949b"} err="failed to get container status \"42ccb70a77d5d5ab1253adf0391f03f108a77ee1c66cdc67101f136155cf949b\": rpc error: code = NotFound desc = could not find container \"42ccb70a77d5d5ab1253adf0391f03f108a77ee1c66cdc67101f136155cf949b\": container with ID starting with 42ccb70a77d5d5ab1253adf0391f03f108a77ee1c66cdc67101f136155cf949b not found: ID does not exist" Dec 11 15:17:03 crc kubenswrapper[5050]: I1211 15:17:03.280860 5050 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/514df745-2f59-4d2e-b2b3-9d59abbf6cd2-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 15:17:03 crc kubenswrapper[5050]: I1211 15:17:03.280885 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z4jw4\" (UniqueName: \"kubernetes.io/projected/514df745-2f59-4d2e-b2b3-9d59abbf6cd2-kube-api-access-z4jw4\") on node \"crc\" DevicePath \"\"" Dec 11 15:17:03 crc kubenswrapper[5050]: I1211 15:17:03.280894 5050 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/514df745-2f59-4d2e-b2b3-9d59abbf6cd2-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 15:17:03 crc kubenswrapper[5050]: I1211 15:17:03.536897 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7c54p"] Dec 11 15:17:03 crc kubenswrapper[5050]: I1211 15:17:03.542557 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-7c54p"] Dec 11 15:17:03 crc kubenswrapper[5050]: I1211 15:17:03.556287 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="514df745-2f59-4d2e-b2b3-9d59abbf6cd2" path="/var/lib/kubelet/pods/514df745-2f59-4d2e-b2b3-9d59abbf6cd2/volumes" Dec 11 15:17:04 crc kubenswrapper[5050]: I1211 15:17:04.546985 5050 scope.go:117] "RemoveContainer" containerID="5e751c2ff817a404aae81889734d5a9bd57af497d8e885add288368bb5b0040e" Dec 11 15:17:04 crc kubenswrapper[5050]: E1211 15:17:04.547643 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 15:17:17 crc kubenswrapper[5050]: I1211 15:17:17.545389 5050 scope.go:117] "RemoveContainer" containerID="5e751c2ff817a404aae81889734d5a9bd57af497d8e885add288368bb5b0040e" Dec 11 15:17:17 crc kubenswrapper[5050]: E1211 15:17:17.546177 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 15:17:29 crc kubenswrapper[5050]: I1211 15:17:29.817385 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-h5qq5"] Dec 11 15:17:29 crc kubenswrapper[5050]: E1211 15:17:29.818877 5050 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="514df745-2f59-4d2e-b2b3-9d59abbf6cd2" containerName="extract-content" Dec 11 15:17:29 crc kubenswrapper[5050]: I1211 15:17:29.818896 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="514df745-2f59-4d2e-b2b3-9d59abbf6cd2" containerName="extract-content" Dec 11 15:17:29 crc kubenswrapper[5050]: E1211 15:17:29.818930 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="514df745-2f59-4d2e-b2b3-9d59abbf6cd2" containerName="extract-utilities" Dec 11 15:17:29 crc kubenswrapper[5050]: I1211 15:17:29.818940 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="514df745-2f59-4d2e-b2b3-9d59abbf6cd2" containerName="extract-utilities" Dec 11 15:17:29 crc kubenswrapper[5050]: E1211 15:17:29.818954 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="514df745-2f59-4d2e-b2b3-9d59abbf6cd2" containerName="registry-server" Dec 11 15:17:29 crc kubenswrapper[5050]: I1211 15:17:29.818962 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="514df745-2f59-4d2e-b2b3-9d59abbf6cd2" containerName="registry-server" Dec 11 15:17:29 crc kubenswrapper[5050]: I1211 15:17:29.819180 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="514df745-2f59-4d2e-b2b3-9d59abbf6cd2" containerName="registry-server" Dec 11 15:17:29 crc kubenswrapper[5050]: I1211 15:17:29.821370 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-h5qq5" Dec 11 15:17:29 crc kubenswrapper[5050]: I1211 15:17:29.831565 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-h5qq5"] Dec 11 15:17:29 crc kubenswrapper[5050]: I1211 15:17:29.982720 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74dccb59-5f03-40ce-a1dd-f8947ce6cb5f-utilities\") pod \"community-operators-h5qq5\" (UID: \"74dccb59-5f03-40ce-a1dd-f8947ce6cb5f\") " pod="openshift-marketplace/community-operators-h5qq5" Dec 11 15:17:29 crc kubenswrapper[5050]: I1211 15:17:29.982909 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b2sv4\" (UniqueName: \"kubernetes.io/projected/74dccb59-5f03-40ce-a1dd-f8947ce6cb5f-kube-api-access-b2sv4\") pod \"community-operators-h5qq5\" (UID: \"74dccb59-5f03-40ce-a1dd-f8947ce6cb5f\") " pod="openshift-marketplace/community-operators-h5qq5" Dec 11 15:17:29 crc kubenswrapper[5050]: I1211 15:17:29.982961 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74dccb59-5f03-40ce-a1dd-f8947ce6cb5f-catalog-content\") pod \"community-operators-h5qq5\" (UID: \"74dccb59-5f03-40ce-a1dd-f8947ce6cb5f\") " pod="openshift-marketplace/community-operators-h5qq5" Dec 11 15:17:30 crc kubenswrapper[5050]: I1211 15:17:30.084063 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b2sv4\" (UniqueName: \"kubernetes.io/projected/74dccb59-5f03-40ce-a1dd-f8947ce6cb5f-kube-api-access-b2sv4\") pod \"community-operators-h5qq5\" (UID: \"74dccb59-5f03-40ce-a1dd-f8947ce6cb5f\") " pod="openshift-marketplace/community-operators-h5qq5" Dec 11 15:17:30 crc kubenswrapper[5050]: I1211 15:17:30.084133 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74dccb59-5f03-40ce-a1dd-f8947ce6cb5f-catalog-content\") 
pod \"community-operators-h5qq5\" (UID: \"74dccb59-5f03-40ce-a1dd-f8947ce6cb5f\") " pod="openshift-marketplace/community-operators-h5qq5" Dec 11 15:17:30 crc kubenswrapper[5050]: I1211 15:17:30.084196 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74dccb59-5f03-40ce-a1dd-f8947ce6cb5f-utilities\") pod \"community-operators-h5qq5\" (UID: \"74dccb59-5f03-40ce-a1dd-f8947ce6cb5f\") " pod="openshift-marketplace/community-operators-h5qq5" Dec 11 15:17:30 crc kubenswrapper[5050]: I1211 15:17:30.084950 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74dccb59-5f03-40ce-a1dd-f8947ce6cb5f-utilities\") pod \"community-operators-h5qq5\" (UID: \"74dccb59-5f03-40ce-a1dd-f8947ce6cb5f\") " pod="openshift-marketplace/community-operators-h5qq5" Dec 11 15:17:30 crc kubenswrapper[5050]: I1211 15:17:30.085050 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74dccb59-5f03-40ce-a1dd-f8947ce6cb5f-catalog-content\") pod \"community-operators-h5qq5\" (UID: \"74dccb59-5f03-40ce-a1dd-f8947ce6cb5f\") " pod="openshift-marketplace/community-operators-h5qq5" Dec 11 15:17:30 crc kubenswrapper[5050]: I1211 15:17:30.119238 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b2sv4\" (UniqueName: \"kubernetes.io/projected/74dccb59-5f03-40ce-a1dd-f8947ce6cb5f-kube-api-access-b2sv4\") pod \"community-operators-h5qq5\" (UID: \"74dccb59-5f03-40ce-a1dd-f8947ce6cb5f\") " pod="openshift-marketplace/community-operators-h5qq5" Dec 11 15:17:30 crc kubenswrapper[5050]: I1211 15:17:30.162788 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-h5qq5" Dec 11 15:17:30 crc kubenswrapper[5050]: I1211 15:17:30.559234 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-h5qq5"] Dec 11 15:17:31 crc kubenswrapper[5050]: I1211 15:17:31.493074 5050 generic.go:334] "Generic (PLEG): container finished" podID="74dccb59-5f03-40ce-a1dd-f8947ce6cb5f" containerID="308651620c08c5cef6358e4c1e9efe61f4433892e3d4e69458d0af3eac2765e7" exitCode=0 Dec 11 15:17:31 crc kubenswrapper[5050]: I1211 15:17:31.493217 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h5qq5" event={"ID":"74dccb59-5f03-40ce-a1dd-f8947ce6cb5f","Type":"ContainerDied","Data":"308651620c08c5cef6358e4c1e9efe61f4433892e3d4e69458d0af3eac2765e7"} Dec 11 15:17:31 crc kubenswrapper[5050]: I1211 15:17:31.493556 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h5qq5" event={"ID":"74dccb59-5f03-40ce-a1dd-f8947ce6cb5f","Type":"ContainerStarted","Data":"6fbb9d3ceb9bb3d89f84e53f3075bd354e66bc883832cac00e1222858f7da31a"} Dec 11 15:17:31 crc kubenswrapper[5050]: I1211 15:17:31.548619 5050 scope.go:117] "RemoveContainer" containerID="5e751c2ff817a404aae81889734d5a9bd57af497d8e885add288368bb5b0040e" Dec 11 15:17:31 crc kubenswrapper[5050]: E1211 15:17:31.551238 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 15:17:32 crc kubenswrapper[5050]: I1211 15:17:32.505084 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h5qq5" event={"ID":"74dccb59-5f03-40ce-a1dd-f8947ce6cb5f","Type":"ContainerStarted","Data":"96d944fa8a3c4e9813200513e078a5dfcbf448f65b470a2306ab889fd951ba02"} Dec 11 15:17:33 crc kubenswrapper[5050]: I1211 15:17:33.523913 5050 generic.go:334] "Generic (PLEG): container finished" podID="74dccb59-5f03-40ce-a1dd-f8947ce6cb5f" containerID="96d944fa8a3c4e9813200513e078a5dfcbf448f65b470a2306ab889fd951ba02" exitCode=0 Dec 11 15:17:33 crc kubenswrapper[5050]: I1211 15:17:33.524056 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h5qq5" event={"ID":"74dccb59-5f03-40ce-a1dd-f8947ce6cb5f","Type":"ContainerDied","Data":"96d944fa8a3c4e9813200513e078a5dfcbf448f65b470a2306ab889fd951ba02"} Dec 11 15:17:34 crc kubenswrapper[5050]: I1211 15:17:34.541716 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h5qq5" event={"ID":"74dccb59-5f03-40ce-a1dd-f8947ce6cb5f","Type":"ContainerStarted","Data":"01c2f9c43d77cea318edb45c27479a149a098b7e771ced12c0c4ec17238ce706"} Dec 11 15:17:34 crc kubenswrapper[5050]: I1211 15:17:34.576289 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-h5qq5" podStartSLOduration=3.109927178 podStartE2EDuration="5.576252258s" podCreationTimestamp="2025-12-11 15:17:29 +0000 UTC" firstStartedPulling="2025-12-11 15:17:31.495883212 +0000 UTC m=+5342.339605828" lastFinishedPulling="2025-12-11 15:17:33.962208312 +0000 UTC m=+5344.805930908" observedRunningTime="2025-12-11 
15:17:34.57150596 +0000 UTC m=+5345.415228586" watchObservedRunningTime="2025-12-11 15:17:34.576252258 +0000 UTC m=+5345.419974874" Dec 11 15:17:40 crc kubenswrapper[5050]: I1211 15:17:40.163666 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-h5qq5" Dec 11 15:17:40 crc kubenswrapper[5050]: I1211 15:17:40.164211 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-h5qq5" Dec 11 15:17:40 crc kubenswrapper[5050]: I1211 15:17:40.255488 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-h5qq5" Dec 11 15:17:40 crc kubenswrapper[5050]: I1211 15:17:40.628800 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-h5qq5" Dec 11 15:17:40 crc kubenswrapper[5050]: I1211 15:17:40.670674 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-h5qq5"] Dec 11 15:17:42 crc kubenswrapper[5050]: I1211 15:17:42.610138 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-h5qq5" podUID="74dccb59-5f03-40ce-a1dd-f8947ce6cb5f" containerName="registry-server" containerID="cri-o://01c2f9c43d77cea318edb45c27479a149a098b7e771ced12c0c4ec17238ce706" gracePeriod=2 Dec 11 15:17:43 crc kubenswrapper[5050]: I1211 15:17:43.044631 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-h5qq5" Dec 11 15:17:43 crc kubenswrapper[5050]: I1211 15:17:43.067591 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b2sv4\" (UniqueName: \"kubernetes.io/projected/74dccb59-5f03-40ce-a1dd-f8947ce6cb5f-kube-api-access-b2sv4\") pod \"74dccb59-5f03-40ce-a1dd-f8947ce6cb5f\" (UID: \"74dccb59-5f03-40ce-a1dd-f8947ce6cb5f\") " Dec 11 15:17:43 crc kubenswrapper[5050]: I1211 15:17:43.067678 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74dccb59-5f03-40ce-a1dd-f8947ce6cb5f-utilities\") pod \"74dccb59-5f03-40ce-a1dd-f8947ce6cb5f\" (UID: \"74dccb59-5f03-40ce-a1dd-f8947ce6cb5f\") " Dec 11 15:17:43 crc kubenswrapper[5050]: I1211 15:17:43.067701 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74dccb59-5f03-40ce-a1dd-f8947ce6cb5f-catalog-content\") pod \"74dccb59-5f03-40ce-a1dd-f8947ce6cb5f\" (UID: \"74dccb59-5f03-40ce-a1dd-f8947ce6cb5f\") " Dec 11 15:17:43 crc kubenswrapper[5050]: I1211 15:17:43.074589 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/74dccb59-5f03-40ce-a1dd-f8947ce6cb5f-utilities" (OuterVolumeSpecName: "utilities") pod "74dccb59-5f03-40ce-a1dd-f8947ce6cb5f" (UID: "74dccb59-5f03-40ce-a1dd-f8947ce6cb5f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 15:17:43 crc kubenswrapper[5050]: I1211 15:17:43.083995 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74dccb59-5f03-40ce-a1dd-f8947ce6cb5f-kube-api-access-b2sv4" (OuterVolumeSpecName: "kube-api-access-b2sv4") pod "74dccb59-5f03-40ce-a1dd-f8947ce6cb5f" (UID: "74dccb59-5f03-40ce-a1dd-f8947ce6cb5f"). InnerVolumeSpecName "kube-api-access-b2sv4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 15:17:43 crc kubenswrapper[5050]: I1211 15:17:43.143624 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/74dccb59-5f03-40ce-a1dd-f8947ce6cb5f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "74dccb59-5f03-40ce-a1dd-f8947ce6cb5f" (UID: "74dccb59-5f03-40ce-a1dd-f8947ce6cb5f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 15:17:43 crc kubenswrapper[5050]: I1211 15:17:43.169418 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b2sv4\" (UniqueName: \"kubernetes.io/projected/74dccb59-5f03-40ce-a1dd-f8947ce6cb5f-kube-api-access-b2sv4\") on node \"crc\" DevicePath \"\"" Dec 11 15:17:43 crc kubenswrapper[5050]: I1211 15:17:43.169444 5050 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/74dccb59-5f03-40ce-a1dd-f8947ce6cb5f-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 15:17:43 crc kubenswrapper[5050]: I1211 15:17:43.169453 5050 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/74dccb59-5f03-40ce-a1dd-f8947ce6cb5f-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 15:17:43 crc kubenswrapper[5050]: I1211 15:17:43.623659 5050 generic.go:334] "Generic (PLEG): container finished" podID="74dccb59-5f03-40ce-a1dd-f8947ce6cb5f" containerID="01c2f9c43d77cea318edb45c27479a149a098b7e771ced12c0c4ec17238ce706" exitCode=0 Dec 11 15:17:43 crc kubenswrapper[5050]: I1211 15:17:43.623754 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-h5qq5" Dec 11 15:17:43 crc kubenswrapper[5050]: I1211 15:17:43.623743 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h5qq5" event={"ID":"74dccb59-5f03-40ce-a1dd-f8947ce6cb5f","Type":"ContainerDied","Data":"01c2f9c43d77cea318edb45c27479a149a098b7e771ced12c0c4ec17238ce706"} Dec 11 15:17:43 crc kubenswrapper[5050]: I1211 15:17:43.624940 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h5qq5" event={"ID":"74dccb59-5f03-40ce-a1dd-f8947ce6cb5f","Type":"ContainerDied","Data":"6fbb9d3ceb9bb3d89f84e53f3075bd354e66bc883832cac00e1222858f7da31a"} Dec 11 15:17:43 crc kubenswrapper[5050]: I1211 15:17:43.624977 5050 scope.go:117] "RemoveContainer" containerID="01c2f9c43d77cea318edb45c27479a149a098b7e771ced12c0c4ec17238ce706" Dec 11 15:17:43 crc kubenswrapper[5050]: I1211 15:17:43.650560 5050 scope.go:117] "RemoveContainer" containerID="96d944fa8a3c4e9813200513e078a5dfcbf448f65b470a2306ab889fd951ba02" Dec 11 15:17:43 crc kubenswrapper[5050]: I1211 15:17:43.652445 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-h5qq5"] Dec 11 15:17:43 crc kubenswrapper[5050]: I1211 15:17:43.681799 5050 scope.go:117] "RemoveContainer" containerID="308651620c08c5cef6358e4c1e9efe61f4433892e3d4e69458d0af3eac2765e7" Dec 11 15:17:43 crc kubenswrapper[5050]: I1211 15:17:43.682302 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-h5qq5"] Dec 11 15:17:43 crc kubenswrapper[5050]: I1211 15:17:43.722044 5050 scope.go:117] "RemoveContainer" containerID="01c2f9c43d77cea318edb45c27479a149a098b7e771ced12c0c4ec17238ce706" Dec 11 15:17:43 crc kubenswrapper[5050]: E1211 15:17:43.722512 5050 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"01c2f9c43d77cea318edb45c27479a149a098b7e771ced12c0c4ec17238ce706\": container with ID starting with 01c2f9c43d77cea318edb45c27479a149a098b7e771ced12c0c4ec17238ce706 not found: ID does not exist" containerID="01c2f9c43d77cea318edb45c27479a149a098b7e771ced12c0c4ec17238ce706" Dec 11 15:17:43 crc kubenswrapper[5050]: I1211 15:17:43.722625 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01c2f9c43d77cea318edb45c27479a149a098b7e771ced12c0c4ec17238ce706"} err="failed to get container status \"01c2f9c43d77cea318edb45c27479a149a098b7e771ced12c0c4ec17238ce706\": rpc error: code = NotFound desc = could not find container \"01c2f9c43d77cea318edb45c27479a149a098b7e771ced12c0c4ec17238ce706\": container with ID starting with 01c2f9c43d77cea318edb45c27479a149a098b7e771ced12c0c4ec17238ce706 not found: ID does not exist" Dec 11 15:17:43 crc kubenswrapper[5050]: I1211 15:17:43.722717 5050 scope.go:117] "RemoveContainer" containerID="96d944fa8a3c4e9813200513e078a5dfcbf448f65b470a2306ab889fd951ba02" Dec 11 15:17:43 crc kubenswrapper[5050]: E1211 15:17:43.723180 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"96d944fa8a3c4e9813200513e078a5dfcbf448f65b470a2306ab889fd951ba02\": container with ID starting with 96d944fa8a3c4e9813200513e078a5dfcbf448f65b470a2306ab889fd951ba02 not found: ID does not exist" containerID="96d944fa8a3c4e9813200513e078a5dfcbf448f65b470a2306ab889fd951ba02" Dec 11 15:17:43 crc kubenswrapper[5050]: I1211 15:17:43.723275 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96d944fa8a3c4e9813200513e078a5dfcbf448f65b470a2306ab889fd951ba02"} err="failed to get container status \"96d944fa8a3c4e9813200513e078a5dfcbf448f65b470a2306ab889fd951ba02\": rpc error: code = NotFound desc = could not find container \"96d944fa8a3c4e9813200513e078a5dfcbf448f65b470a2306ab889fd951ba02\": container with ID starting with 96d944fa8a3c4e9813200513e078a5dfcbf448f65b470a2306ab889fd951ba02 not found: ID does not exist" Dec 11 15:17:43 crc kubenswrapper[5050]: I1211 15:17:43.723362 5050 scope.go:117] "RemoveContainer" containerID="308651620c08c5cef6358e4c1e9efe61f4433892e3d4e69458d0af3eac2765e7" Dec 11 15:17:43 crc kubenswrapper[5050]: E1211 15:17:43.723673 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"308651620c08c5cef6358e4c1e9efe61f4433892e3d4e69458d0af3eac2765e7\": container with ID starting with 308651620c08c5cef6358e4c1e9efe61f4433892e3d4e69458d0af3eac2765e7 not found: ID does not exist" containerID="308651620c08c5cef6358e4c1e9efe61f4433892e3d4e69458d0af3eac2765e7" Dec 11 15:17:43 crc kubenswrapper[5050]: I1211 15:17:43.723759 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"308651620c08c5cef6358e4c1e9efe61f4433892e3d4e69458d0af3eac2765e7"} err="failed to get container status \"308651620c08c5cef6358e4c1e9efe61f4433892e3d4e69458d0af3eac2765e7\": rpc error: code = NotFound desc = could not find container \"308651620c08c5cef6358e4c1e9efe61f4433892e3d4e69458d0af3eac2765e7\": container with ID starting with 308651620c08c5cef6358e4c1e9efe61f4433892e3d4e69458d0af3eac2765e7 not found: ID does not exist" Dec 11 15:17:44 crc kubenswrapper[5050]: I1211 15:17:44.546514 5050 scope.go:117] "RemoveContainer" 
containerID="5e751c2ff817a404aae81889734d5a9bd57af497d8e885add288368bb5b0040e" Dec 11 15:17:44 crc kubenswrapper[5050]: E1211 15:17:44.547318 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 15:17:45 crc kubenswrapper[5050]: I1211 15:17:45.563494 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="74dccb59-5f03-40ce-a1dd-f8947ce6cb5f" path="/var/lib/kubelet/pods/74dccb59-5f03-40ce-a1dd-f8947ce6cb5f/volumes" Dec 11 15:17:58 crc kubenswrapper[5050]: I1211 15:17:58.546076 5050 scope.go:117] "RemoveContainer" containerID="5e751c2ff817a404aae81889734d5a9bd57af497d8e885add288368bb5b0040e" Dec 11 15:17:58 crc kubenswrapper[5050]: E1211 15:17:58.546932 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 15:18:10 crc kubenswrapper[5050]: I1211 15:18:10.546067 5050 scope.go:117] "RemoveContainer" containerID="5e751c2ff817a404aae81889734d5a9bd57af497d8e885add288368bb5b0040e" Dec 11 15:18:10 crc kubenswrapper[5050]: E1211 15:18:10.546836 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 15:18:23 crc kubenswrapper[5050]: I1211 15:18:23.686154 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-2h48r"] Dec 11 15:18:23 crc kubenswrapper[5050]: E1211 15:18:23.687303 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74dccb59-5f03-40ce-a1dd-f8947ce6cb5f" containerName="registry-server" Dec 11 15:18:23 crc kubenswrapper[5050]: I1211 15:18:23.687317 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="74dccb59-5f03-40ce-a1dd-f8947ce6cb5f" containerName="registry-server" Dec 11 15:18:23 crc kubenswrapper[5050]: E1211 15:18:23.687326 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74dccb59-5f03-40ce-a1dd-f8947ce6cb5f" containerName="extract-utilities" Dec 11 15:18:23 crc kubenswrapper[5050]: I1211 15:18:23.687332 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="74dccb59-5f03-40ce-a1dd-f8947ce6cb5f" containerName="extract-utilities" Dec 11 15:18:23 crc kubenswrapper[5050]: E1211 15:18:23.687359 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74dccb59-5f03-40ce-a1dd-f8947ce6cb5f" containerName="extract-content" Dec 11 15:18:23 crc kubenswrapper[5050]: I1211 15:18:23.687366 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="74dccb59-5f03-40ce-a1dd-f8947ce6cb5f" containerName="extract-content" Dec 11 15:18:23 crc 
kubenswrapper[5050]: I1211 15:18:23.687559 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="74dccb59-5f03-40ce-a1dd-f8947ce6cb5f" containerName="registry-server" Dec 11 15:18:23 crc kubenswrapper[5050]: I1211 15:18:23.688324 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-2h48r" Dec 11 15:18:23 crc kubenswrapper[5050]: I1211 15:18:23.696357 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-053d-account-create-update-rpgfv"] Dec 11 15:18:23 crc kubenswrapper[5050]: I1211 15:18:23.697896 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-053d-account-create-update-rpgfv" Dec 11 15:18:23 crc kubenswrapper[5050]: I1211 15:18:23.702404 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Dec 11 15:18:23 crc kubenswrapper[5050]: I1211 15:18:23.704196 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-2h48r"] Dec 11 15:18:23 crc kubenswrapper[5050]: I1211 15:18:23.752064 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-053d-account-create-update-rpgfv"] Dec 11 15:18:23 crc kubenswrapper[5050]: I1211 15:18:23.791577 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/603ef10e-2ec0-4d47-8be0-3cc91679ecd7-operator-scripts\") pod \"barbican-db-create-2h48r\" (UID: \"603ef10e-2ec0-4d47-8be0-3cc91679ecd7\") " pod="openstack/barbican-db-create-2h48r" Dec 11 15:18:23 crc kubenswrapper[5050]: I1211 15:18:23.792213 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7dvls\" (UniqueName: \"kubernetes.io/projected/603ef10e-2ec0-4d47-8be0-3cc91679ecd7-kube-api-access-7dvls\") pod \"barbican-db-create-2h48r\" (UID: \"603ef10e-2ec0-4d47-8be0-3cc91679ecd7\") " pod="openstack/barbican-db-create-2h48r" Dec 11 15:18:23 crc kubenswrapper[5050]: I1211 15:18:23.893406 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/603ef10e-2ec0-4d47-8be0-3cc91679ecd7-operator-scripts\") pod \"barbican-db-create-2h48r\" (UID: \"603ef10e-2ec0-4d47-8be0-3cc91679ecd7\") " pod="openstack/barbican-db-create-2h48r" Dec 11 15:18:23 crc kubenswrapper[5050]: I1211 15:18:23.893517 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7dvls\" (UniqueName: \"kubernetes.io/projected/603ef10e-2ec0-4d47-8be0-3cc91679ecd7-kube-api-access-7dvls\") pod \"barbican-db-create-2h48r\" (UID: \"603ef10e-2ec0-4d47-8be0-3cc91679ecd7\") " pod="openstack/barbican-db-create-2h48r" Dec 11 15:18:23 crc kubenswrapper[5050]: I1211 15:18:23.893585 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dgjs6\" (UniqueName: \"kubernetes.io/projected/bf783186-0e66-4bf7-bbbf-0cbd6f432736-kube-api-access-dgjs6\") pod \"barbican-053d-account-create-update-rpgfv\" (UID: \"bf783186-0e66-4bf7-bbbf-0cbd6f432736\") " pod="openstack/barbican-053d-account-create-update-rpgfv" Dec 11 15:18:23 crc kubenswrapper[5050]: I1211 15:18:23.893665 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/bf783186-0e66-4bf7-bbbf-0cbd6f432736-operator-scripts\") pod \"barbican-053d-account-create-update-rpgfv\" (UID: \"bf783186-0e66-4bf7-bbbf-0cbd6f432736\") " pod="openstack/barbican-053d-account-create-update-rpgfv" Dec 11 15:18:23 crc kubenswrapper[5050]: I1211 15:18:23.894539 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/603ef10e-2ec0-4d47-8be0-3cc91679ecd7-operator-scripts\") pod \"barbican-db-create-2h48r\" (UID: \"603ef10e-2ec0-4d47-8be0-3cc91679ecd7\") " pod="openstack/barbican-db-create-2h48r" Dec 11 15:18:23 crc kubenswrapper[5050]: I1211 15:18:23.918037 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7dvls\" (UniqueName: \"kubernetes.io/projected/603ef10e-2ec0-4d47-8be0-3cc91679ecd7-kube-api-access-7dvls\") pod \"barbican-db-create-2h48r\" (UID: \"603ef10e-2ec0-4d47-8be0-3cc91679ecd7\") " pod="openstack/barbican-db-create-2h48r" Dec 11 15:18:23 crc kubenswrapper[5050]: I1211 15:18:23.995703 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bf783186-0e66-4bf7-bbbf-0cbd6f432736-operator-scripts\") pod \"barbican-053d-account-create-update-rpgfv\" (UID: \"bf783186-0e66-4bf7-bbbf-0cbd6f432736\") " pod="openstack/barbican-053d-account-create-update-rpgfv" Dec 11 15:18:23 crc kubenswrapper[5050]: I1211 15:18:23.995874 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dgjs6\" (UniqueName: \"kubernetes.io/projected/bf783186-0e66-4bf7-bbbf-0cbd6f432736-kube-api-access-dgjs6\") pod \"barbican-053d-account-create-update-rpgfv\" (UID: \"bf783186-0e66-4bf7-bbbf-0cbd6f432736\") " pod="openstack/barbican-053d-account-create-update-rpgfv" Dec 11 15:18:23 crc kubenswrapper[5050]: I1211 15:18:23.997245 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bf783186-0e66-4bf7-bbbf-0cbd6f432736-operator-scripts\") pod \"barbican-053d-account-create-update-rpgfv\" (UID: \"bf783186-0e66-4bf7-bbbf-0cbd6f432736\") " pod="openstack/barbican-053d-account-create-update-rpgfv" Dec 11 15:18:24 crc kubenswrapper[5050]: I1211 15:18:24.013315 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dgjs6\" (UniqueName: \"kubernetes.io/projected/bf783186-0e66-4bf7-bbbf-0cbd6f432736-kube-api-access-dgjs6\") pod \"barbican-053d-account-create-update-rpgfv\" (UID: \"bf783186-0e66-4bf7-bbbf-0cbd6f432736\") " pod="openstack/barbican-053d-account-create-update-rpgfv" Dec 11 15:18:24 crc kubenswrapper[5050]: I1211 15:18:24.013650 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-2h48r" Dec 11 15:18:24 crc kubenswrapper[5050]: I1211 15:18:24.046098 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-053d-account-create-update-rpgfv" Dec 11 15:18:24 crc kubenswrapper[5050]: I1211 15:18:24.299970 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-2h48r"] Dec 11 15:18:24 crc kubenswrapper[5050]: I1211 15:18:24.545951 5050 scope.go:117] "RemoveContainer" containerID="5e751c2ff817a404aae81889734d5a9bd57af497d8e885add288368bb5b0040e" Dec 11 15:18:24 crc kubenswrapper[5050]: E1211 15:18:24.546348 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 15:18:24 crc kubenswrapper[5050]: I1211 15:18:24.554971 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-053d-account-create-update-rpgfv"] Dec 11 15:18:24 crc kubenswrapper[5050]: W1211 15:18:24.558648 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbf783186_0e66_4bf7_bbbf_0cbd6f432736.slice/crio-ce29660f45bff80f63689269c3691b0ccafe448c45be71287c595f9ca5c67885 WatchSource:0}: Error finding container ce29660f45bff80f63689269c3691b0ccafe448c45be71287c595f9ca5c67885: Status 404 returned error can't find the container with id ce29660f45bff80f63689269c3691b0ccafe448c45be71287c595f9ca5c67885 Dec 11 15:18:25 crc kubenswrapper[5050]: I1211 15:18:25.220473 5050 generic.go:334] "Generic (PLEG): container finished" podID="bf783186-0e66-4bf7-bbbf-0cbd6f432736" containerID="4651c66ce862378022d60a0fe8f3fc0e0447fc5a295f69811107ab8b831a9cd4" exitCode=0 Dec 11 15:18:25 crc kubenswrapper[5050]: I1211 15:18:25.220565 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-053d-account-create-update-rpgfv" event={"ID":"bf783186-0e66-4bf7-bbbf-0cbd6f432736","Type":"ContainerDied","Data":"4651c66ce862378022d60a0fe8f3fc0e0447fc5a295f69811107ab8b831a9cd4"} Dec 11 15:18:25 crc kubenswrapper[5050]: I1211 15:18:25.220951 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-053d-account-create-update-rpgfv" event={"ID":"bf783186-0e66-4bf7-bbbf-0cbd6f432736","Type":"ContainerStarted","Data":"ce29660f45bff80f63689269c3691b0ccafe448c45be71287c595f9ca5c67885"} Dec 11 15:18:25 crc kubenswrapper[5050]: I1211 15:18:25.223870 5050 generic.go:334] "Generic (PLEG): container finished" podID="603ef10e-2ec0-4d47-8be0-3cc91679ecd7" containerID="ec40b8ff0d4d8946d6868b9474d48f69c789b8edcc09389bc0734b745a313f61" exitCode=0 Dec 11 15:18:25 crc kubenswrapper[5050]: I1211 15:18:25.223949 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-2h48r" event={"ID":"603ef10e-2ec0-4d47-8be0-3cc91679ecd7","Type":"ContainerDied","Data":"ec40b8ff0d4d8946d6868b9474d48f69c789b8edcc09389bc0734b745a313f61"} Dec 11 15:18:25 crc kubenswrapper[5050]: I1211 15:18:25.223992 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-2h48r" event={"ID":"603ef10e-2ec0-4d47-8be0-3cc91679ecd7","Type":"ContainerStarted","Data":"eb0b61d909d4dd7315a4db1ffc683c1b2ba553ba9206a8a6659f3627853c13ad"} Dec 11 15:18:26 crc kubenswrapper[5050]: I1211 15:18:26.637908 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-053d-account-create-update-rpgfv" Dec 11 15:18:26 crc kubenswrapper[5050]: I1211 15:18:26.646946 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-2h48r" Dec 11 15:18:26 crc kubenswrapper[5050]: I1211 15:18:26.749917 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7dvls\" (UniqueName: \"kubernetes.io/projected/603ef10e-2ec0-4d47-8be0-3cc91679ecd7-kube-api-access-7dvls\") pod \"603ef10e-2ec0-4d47-8be0-3cc91679ecd7\" (UID: \"603ef10e-2ec0-4d47-8be0-3cc91679ecd7\") " Dec 11 15:18:26 crc kubenswrapper[5050]: I1211 15:18:26.750400 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/603ef10e-2ec0-4d47-8be0-3cc91679ecd7-operator-scripts\") pod \"603ef10e-2ec0-4d47-8be0-3cc91679ecd7\" (UID: \"603ef10e-2ec0-4d47-8be0-3cc91679ecd7\") " Dec 11 15:18:26 crc kubenswrapper[5050]: I1211 15:18:26.750439 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dgjs6\" (UniqueName: \"kubernetes.io/projected/bf783186-0e66-4bf7-bbbf-0cbd6f432736-kube-api-access-dgjs6\") pod \"bf783186-0e66-4bf7-bbbf-0cbd6f432736\" (UID: \"bf783186-0e66-4bf7-bbbf-0cbd6f432736\") " Dec 11 15:18:26 crc kubenswrapper[5050]: I1211 15:18:26.750496 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bf783186-0e66-4bf7-bbbf-0cbd6f432736-operator-scripts\") pod \"bf783186-0e66-4bf7-bbbf-0cbd6f432736\" (UID: \"bf783186-0e66-4bf7-bbbf-0cbd6f432736\") " Dec 11 15:18:26 crc kubenswrapper[5050]: I1211 15:18:26.751454 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/603ef10e-2ec0-4d47-8be0-3cc91679ecd7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "603ef10e-2ec0-4d47-8be0-3cc91679ecd7" (UID: "603ef10e-2ec0-4d47-8be0-3cc91679ecd7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 15:18:26 crc kubenswrapper[5050]: I1211 15:18:26.751497 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf783186-0e66-4bf7-bbbf-0cbd6f432736-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bf783186-0e66-4bf7-bbbf-0cbd6f432736" (UID: "bf783186-0e66-4bf7-bbbf-0cbd6f432736"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 15:18:26 crc kubenswrapper[5050]: I1211 15:18:26.752141 5050 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/603ef10e-2ec0-4d47-8be0-3cc91679ecd7-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 15:18:26 crc kubenswrapper[5050]: I1211 15:18:26.752170 5050 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bf783186-0e66-4bf7-bbbf-0cbd6f432736-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 15:18:26 crc kubenswrapper[5050]: I1211 15:18:26.758754 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/603ef10e-2ec0-4d47-8be0-3cc91679ecd7-kube-api-access-7dvls" (OuterVolumeSpecName: "kube-api-access-7dvls") pod "603ef10e-2ec0-4d47-8be0-3cc91679ecd7" (UID: "603ef10e-2ec0-4d47-8be0-3cc91679ecd7"). 
InnerVolumeSpecName "kube-api-access-7dvls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 15:18:26 crc kubenswrapper[5050]: I1211 15:18:26.759966 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf783186-0e66-4bf7-bbbf-0cbd6f432736-kube-api-access-dgjs6" (OuterVolumeSpecName: "kube-api-access-dgjs6") pod "bf783186-0e66-4bf7-bbbf-0cbd6f432736" (UID: "bf783186-0e66-4bf7-bbbf-0cbd6f432736"). InnerVolumeSpecName "kube-api-access-dgjs6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 15:18:26 crc kubenswrapper[5050]: I1211 15:18:26.854463 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7dvls\" (UniqueName: \"kubernetes.io/projected/603ef10e-2ec0-4d47-8be0-3cc91679ecd7-kube-api-access-7dvls\") on node \"crc\" DevicePath \"\"" Dec 11 15:18:26 crc kubenswrapper[5050]: I1211 15:18:26.854522 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dgjs6\" (UniqueName: \"kubernetes.io/projected/bf783186-0e66-4bf7-bbbf-0cbd6f432736-kube-api-access-dgjs6\") on node \"crc\" DevicePath \"\"" Dec 11 15:18:27 crc kubenswrapper[5050]: I1211 15:18:27.243574 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-053d-account-create-update-rpgfv" event={"ID":"bf783186-0e66-4bf7-bbbf-0cbd6f432736","Type":"ContainerDied","Data":"ce29660f45bff80f63689269c3691b0ccafe448c45be71287c595f9ca5c67885"} Dec 11 15:18:27 crc kubenswrapper[5050]: I1211 15:18:27.243628 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ce29660f45bff80f63689269c3691b0ccafe448c45be71287c595f9ca5c67885" Dec 11 15:18:27 crc kubenswrapper[5050]: I1211 15:18:27.243704 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-053d-account-create-update-rpgfv" Dec 11 15:18:27 crc kubenswrapper[5050]: I1211 15:18:27.245110 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-2h48r" event={"ID":"603ef10e-2ec0-4d47-8be0-3cc91679ecd7","Type":"ContainerDied","Data":"eb0b61d909d4dd7315a4db1ffc683c1b2ba553ba9206a8a6659f3627853c13ad"} Dec 11 15:18:27 crc kubenswrapper[5050]: I1211 15:18:27.245135 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eb0b61d909d4dd7315a4db1ffc683c1b2ba553ba9206a8a6659f3627853c13ad" Dec 11 15:18:27 crc kubenswrapper[5050]: I1211 15:18:27.245177 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-2h48r" Dec 11 15:18:29 crc kubenswrapper[5050]: I1211 15:18:29.047059 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-ljxnd"] Dec 11 15:18:29 crc kubenswrapper[5050]: E1211 15:18:29.047659 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="603ef10e-2ec0-4d47-8be0-3cc91679ecd7" containerName="mariadb-database-create" Dec 11 15:18:29 crc kubenswrapper[5050]: I1211 15:18:29.047671 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="603ef10e-2ec0-4d47-8be0-3cc91679ecd7" containerName="mariadb-database-create" Dec 11 15:18:29 crc kubenswrapper[5050]: E1211 15:18:29.047682 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf783186-0e66-4bf7-bbbf-0cbd6f432736" containerName="mariadb-account-create-update" Dec 11 15:18:29 crc kubenswrapper[5050]: I1211 15:18:29.047688 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf783186-0e66-4bf7-bbbf-0cbd6f432736" containerName="mariadb-account-create-update" Dec 11 15:18:29 crc kubenswrapper[5050]: I1211 15:18:29.047826 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf783186-0e66-4bf7-bbbf-0cbd6f432736" containerName="mariadb-account-create-update" Dec 11 15:18:29 crc kubenswrapper[5050]: I1211 15:18:29.047842 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="603ef10e-2ec0-4d47-8be0-3cc91679ecd7" containerName="mariadb-database-create" Dec 11 15:18:29 crc kubenswrapper[5050]: I1211 15:18:29.048360 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-ljxnd" Dec 11 15:18:29 crc kubenswrapper[5050]: I1211 15:18:29.050227 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-fxl2b" Dec 11 15:18:29 crc kubenswrapper[5050]: I1211 15:18:29.051795 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Dec 11 15:18:29 crc kubenswrapper[5050]: I1211 15:18:29.057418 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-ljxnd"] Dec 11 15:18:29 crc kubenswrapper[5050]: I1211 15:18:29.196824 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ceabef0-4c99-4a43-8920-7aba1337fbc9-combined-ca-bundle\") pod \"barbican-db-sync-ljxnd\" (UID: \"6ceabef0-4c99-4a43-8920-7aba1337fbc9\") " pod="openstack/barbican-db-sync-ljxnd" Dec 11 15:18:29 crc kubenswrapper[5050]: I1211 15:18:29.196904 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ggcl\" (UniqueName: \"kubernetes.io/projected/6ceabef0-4c99-4a43-8920-7aba1337fbc9-kube-api-access-4ggcl\") pod \"barbican-db-sync-ljxnd\" (UID: \"6ceabef0-4c99-4a43-8920-7aba1337fbc9\") " pod="openstack/barbican-db-sync-ljxnd" Dec 11 15:18:29 crc kubenswrapper[5050]: I1211 15:18:29.197169 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/6ceabef0-4c99-4a43-8920-7aba1337fbc9-db-sync-config-data\") pod \"barbican-db-sync-ljxnd\" (UID: \"6ceabef0-4c99-4a43-8920-7aba1337fbc9\") " pod="openstack/barbican-db-sync-ljxnd" Dec 11 15:18:29 crc kubenswrapper[5050]: I1211 15:18:29.299207 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/6ceabef0-4c99-4a43-8920-7aba1337fbc9-combined-ca-bundle\") pod \"barbican-db-sync-ljxnd\" (UID: \"6ceabef0-4c99-4a43-8920-7aba1337fbc9\") " pod="openstack/barbican-db-sync-ljxnd" Dec 11 15:18:29 crc kubenswrapper[5050]: I1211 15:18:29.299284 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4ggcl\" (UniqueName: \"kubernetes.io/projected/6ceabef0-4c99-4a43-8920-7aba1337fbc9-kube-api-access-4ggcl\") pod \"barbican-db-sync-ljxnd\" (UID: \"6ceabef0-4c99-4a43-8920-7aba1337fbc9\") " pod="openstack/barbican-db-sync-ljxnd" Dec 11 15:18:29 crc kubenswrapper[5050]: I1211 15:18:29.299343 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/6ceabef0-4c99-4a43-8920-7aba1337fbc9-db-sync-config-data\") pod \"barbican-db-sync-ljxnd\" (UID: \"6ceabef0-4c99-4a43-8920-7aba1337fbc9\") " pod="openstack/barbican-db-sync-ljxnd" Dec 11 15:18:29 crc kubenswrapper[5050]: I1211 15:18:29.307085 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ceabef0-4c99-4a43-8920-7aba1337fbc9-combined-ca-bundle\") pod \"barbican-db-sync-ljxnd\" (UID: \"6ceabef0-4c99-4a43-8920-7aba1337fbc9\") " pod="openstack/barbican-db-sync-ljxnd" Dec 11 15:18:29 crc kubenswrapper[5050]: I1211 15:18:29.307405 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/6ceabef0-4c99-4a43-8920-7aba1337fbc9-db-sync-config-data\") pod \"barbican-db-sync-ljxnd\" (UID: \"6ceabef0-4c99-4a43-8920-7aba1337fbc9\") " pod="openstack/barbican-db-sync-ljxnd" Dec 11 15:18:29 crc kubenswrapper[5050]: I1211 15:18:29.325491 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4ggcl\" (UniqueName: \"kubernetes.io/projected/6ceabef0-4c99-4a43-8920-7aba1337fbc9-kube-api-access-4ggcl\") pod \"barbican-db-sync-ljxnd\" (UID: \"6ceabef0-4c99-4a43-8920-7aba1337fbc9\") " pod="openstack/barbican-db-sync-ljxnd" Dec 11 15:18:29 crc kubenswrapper[5050]: I1211 15:18:29.362877 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-ljxnd" Dec 11 15:18:29 crc kubenswrapper[5050]: I1211 15:18:29.837984 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-ljxnd"] Dec 11 15:18:30 crc kubenswrapper[5050]: I1211 15:18:30.274104 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-ljxnd" event={"ID":"6ceabef0-4c99-4a43-8920-7aba1337fbc9","Type":"ContainerStarted","Data":"d925b2e10fffcd83c9dca4bbccffba7cc18fcbc56b6c47081746d9e5501db6d7"} Dec 11 15:18:30 crc kubenswrapper[5050]: I1211 15:18:30.274414 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-ljxnd" event={"ID":"6ceabef0-4c99-4a43-8920-7aba1337fbc9","Type":"ContainerStarted","Data":"8e10d38321c42c8215c9aab79364cce314c323fddec48352ac63fd2198e7e1ce"} Dec 11 15:18:30 crc kubenswrapper[5050]: I1211 15:18:30.292944 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-ljxnd" podStartSLOduration=1.29292567 podStartE2EDuration="1.29292567s" podCreationTimestamp="2025-12-11 15:18:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 15:18:30.288959153 +0000 UTC m=+5401.132681739" watchObservedRunningTime="2025-12-11 15:18:30.29292567 +0000 UTC m=+5401.136648256" Dec 11 15:18:31 crc kubenswrapper[5050]: I1211 15:18:31.289833 5050 generic.go:334] "Generic (PLEG): container finished" podID="6ceabef0-4c99-4a43-8920-7aba1337fbc9" containerID="d925b2e10fffcd83c9dca4bbccffba7cc18fcbc56b6c47081746d9e5501db6d7" exitCode=0 Dec 11 15:18:31 crc kubenswrapper[5050]: I1211 15:18:31.289952 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-ljxnd" event={"ID":"6ceabef0-4c99-4a43-8920-7aba1337fbc9","Type":"ContainerDied","Data":"d925b2e10fffcd83c9dca4bbccffba7cc18fcbc56b6c47081746d9e5501db6d7"} Dec 11 15:18:32 crc kubenswrapper[5050]: I1211 15:18:32.622961 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-ljxnd" Dec 11 15:18:32 crc kubenswrapper[5050]: I1211 15:18:32.769353 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/6ceabef0-4c99-4a43-8920-7aba1337fbc9-db-sync-config-data\") pod \"6ceabef0-4c99-4a43-8920-7aba1337fbc9\" (UID: \"6ceabef0-4c99-4a43-8920-7aba1337fbc9\") " Dec 11 15:18:32 crc kubenswrapper[5050]: I1211 15:18:32.769643 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4ggcl\" (UniqueName: \"kubernetes.io/projected/6ceabef0-4c99-4a43-8920-7aba1337fbc9-kube-api-access-4ggcl\") pod \"6ceabef0-4c99-4a43-8920-7aba1337fbc9\" (UID: \"6ceabef0-4c99-4a43-8920-7aba1337fbc9\") " Dec 11 15:18:32 crc kubenswrapper[5050]: I1211 15:18:32.769739 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ceabef0-4c99-4a43-8920-7aba1337fbc9-combined-ca-bundle\") pod \"6ceabef0-4c99-4a43-8920-7aba1337fbc9\" (UID: \"6ceabef0-4c99-4a43-8920-7aba1337fbc9\") " Dec 11 15:18:32 crc kubenswrapper[5050]: I1211 15:18:32.778599 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ceabef0-4c99-4a43-8920-7aba1337fbc9-kube-api-access-4ggcl" (OuterVolumeSpecName: "kube-api-access-4ggcl") pod "6ceabef0-4c99-4a43-8920-7aba1337fbc9" (UID: "6ceabef0-4c99-4a43-8920-7aba1337fbc9"). InnerVolumeSpecName "kube-api-access-4ggcl". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 15:18:32 crc kubenswrapper[5050]: I1211 15:18:32.783266 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ceabef0-4c99-4a43-8920-7aba1337fbc9-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "6ceabef0-4c99-4a43-8920-7aba1337fbc9" (UID: "6ceabef0-4c99-4a43-8920-7aba1337fbc9"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 15:18:32 crc kubenswrapper[5050]: I1211 15:18:32.813363 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ceabef0-4c99-4a43-8920-7aba1337fbc9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6ceabef0-4c99-4a43-8920-7aba1337fbc9" (UID: "6ceabef0-4c99-4a43-8920-7aba1337fbc9"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 15:18:32 crc kubenswrapper[5050]: I1211 15:18:32.873438 5050 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/6ceabef0-4c99-4a43-8920-7aba1337fbc9-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 15:18:32 crc kubenswrapper[5050]: I1211 15:18:32.873477 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4ggcl\" (UniqueName: \"kubernetes.io/projected/6ceabef0-4c99-4a43-8920-7aba1337fbc9-kube-api-access-4ggcl\") on node \"crc\" DevicePath \"\"" Dec 11 15:18:32 crc kubenswrapper[5050]: I1211 15:18:32.873489 5050 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ceabef0-4c99-4a43-8920-7aba1337fbc9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 15:18:33 crc kubenswrapper[5050]: I1211 15:18:33.311418 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-ljxnd" event={"ID":"6ceabef0-4c99-4a43-8920-7aba1337fbc9","Type":"ContainerDied","Data":"8e10d38321c42c8215c9aab79364cce314c323fddec48352ac63fd2198e7e1ce"} Dec 11 15:18:33 crc kubenswrapper[5050]: I1211 15:18:33.311467 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8e10d38321c42c8215c9aab79364cce314c323fddec48352ac63fd2198e7e1ce" Dec 11 15:18:33 crc kubenswrapper[5050]: I1211 15:18:33.311527 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-ljxnd" Dec 11 15:18:33 crc kubenswrapper[5050]: I1211 15:18:33.612080 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-797c89446b-kwd44"] Dec 11 15:18:33 crc kubenswrapper[5050]: E1211 15:18:33.612788 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ceabef0-4c99-4a43-8920-7aba1337fbc9" containerName="barbican-db-sync" Dec 11 15:18:33 crc kubenswrapper[5050]: I1211 15:18:33.612810 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ceabef0-4c99-4a43-8920-7aba1337fbc9" containerName="barbican-db-sync" Dec 11 15:18:33 crc kubenswrapper[5050]: I1211 15:18:33.612965 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ceabef0-4c99-4a43-8920-7aba1337fbc9" containerName="barbican-db-sync" Dec 11 15:18:33 crc kubenswrapper[5050]: I1211 15:18:33.613835 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-797c89446b-kwd44" Dec 11 15:18:33 crc kubenswrapper[5050]: I1211 15:18:33.619191 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-fxl2b" Dec 11 15:18:33 crc kubenswrapper[5050]: I1211 15:18:33.619365 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Dec 11 15:18:33 crc kubenswrapper[5050]: I1211 15:18:33.619482 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Dec 11 15:18:33 crc kubenswrapper[5050]: I1211 15:18:33.625877 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-bc78c584f-xrmj5"] Dec 11 15:18:33 crc kubenswrapper[5050]: I1211 15:18:33.627321 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-bc78c584f-xrmj5" Dec 11 15:18:33 crc kubenswrapper[5050]: I1211 15:18:33.632793 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Dec 11 15:18:33 crc kubenswrapper[5050]: I1211 15:18:33.638136 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-bc78c584f-xrmj5"] Dec 11 15:18:33 crc kubenswrapper[5050]: I1211 15:18:33.657601 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-797c89446b-kwd44"] Dec 11 15:18:33 crc kubenswrapper[5050]: I1211 15:18:33.674576 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6d9d5c6bc-xd8rc"] Dec 11 15:18:33 crc kubenswrapper[5050]: I1211 15:18:33.675959 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d9d5c6bc-xd8rc" Dec 11 15:18:33 crc kubenswrapper[5050]: I1211 15:18:33.708846 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6d9d5c6bc-xd8rc"] Dec 11 15:18:33 crc kubenswrapper[5050]: I1211 15:18:33.730593 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/480b2bf3-fe1b-4d8d-a416-94f806ed262d-config-data\") pod \"barbican-keystone-listener-797c89446b-kwd44\" (UID: \"480b2bf3-fe1b-4d8d-a416-94f806ed262d\") " pod="openstack/barbican-keystone-listener-797c89446b-kwd44" Dec 11 15:18:33 crc kubenswrapper[5050]: I1211 15:18:33.730702 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d0b3b56a-de8a-42dc-80a0-758b31bb4c1a-dns-svc\") pod \"dnsmasq-dns-6d9d5c6bc-xd8rc\" (UID: \"d0b3b56a-de8a-42dc-80a0-758b31bb4c1a\") " pod="openstack/dnsmasq-dns-6d9d5c6bc-xd8rc" Dec 11 15:18:33 crc kubenswrapper[5050]: I1211 15:18:33.730756 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c7265b35-dcf6-4c88-994d-4643288e311b-logs\") pod \"barbican-worker-bc78c584f-xrmj5\" (UID: \"c7265b35-dcf6-4c88-994d-4643288e311b\") " pod="openstack/barbican-worker-bc78c584f-xrmj5" Dec 11 15:18:33 crc kubenswrapper[5050]: I1211 15:18:33.730778 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c7265b35-dcf6-4c88-994d-4643288e311b-config-data-custom\") pod \"barbican-worker-bc78c584f-xrmj5\" (UID: \"c7265b35-dcf6-4c88-994d-4643288e311b\") " pod="openstack/barbican-worker-bc78c584f-xrmj5" Dec 11 15:18:33 crc kubenswrapper[5050]: I1211 15:18:33.730821 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7265b35-dcf6-4c88-994d-4643288e311b-config-data\") pod \"barbican-worker-bc78c584f-xrmj5\" (UID: \"c7265b35-dcf6-4c88-994d-4643288e311b\") " pod="openstack/barbican-worker-bc78c584f-xrmj5" Dec 11 15:18:33 crc kubenswrapper[5050]: I1211 15:18:33.730850 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzp96\" (UniqueName: \"kubernetes.io/projected/d0b3b56a-de8a-42dc-80a0-758b31bb4c1a-kube-api-access-lzp96\") pod \"dnsmasq-dns-6d9d5c6bc-xd8rc\" (UID: \"d0b3b56a-de8a-42dc-80a0-758b31bb4c1a\") " 
pod="openstack/dnsmasq-dns-6d9d5c6bc-xd8rc" Dec 11 15:18:33 crc kubenswrapper[5050]: I1211 15:18:33.730902 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fx6x2\" (UniqueName: \"kubernetes.io/projected/480b2bf3-fe1b-4d8d-a416-94f806ed262d-kube-api-access-fx6x2\") pod \"barbican-keystone-listener-797c89446b-kwd44\" (UID: \"480b2bf3-fe1b-4d8d-a416-94f806ed262d\") " pod="openstack/barbican-keystone-listener-797c89446b-kwd44" Dec 11 15:18:33 crc kubenswrapper[5050]: I1211 15:18:33.730931 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7mhd\" (UniqueName: \"kubernetes.io/projected/c7265b35-dcf6-4c88-994d-4643288e311b-kube-api-access-f7mhd\") pod \"barbican-worker-bc78c584f-xrmj5\" (UID: \"c7265b35-dcf6-4c88-994d-4643288e311b\") " pod="openstack/barbican-worker-bc78c584f-xrmj5" Dec 11 15:18:33 crc kubenswrapper[5050]: I1211 15:18:33.730972 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d0b3b56a-de8a-42dc-80a0-758b31bb4c1a-ovsdbserver-nb\") pod \"dnsmasq-dns-6d9d5c6bc-xd8rc\" (UID: \"d0b3b56a-de8a-42dc-80a0-758b31bb4c1a\") " pod="openstack/dnsmasq-dns-6d9d5c6bc-xd8rc" Dec 11 15:18:33 crc kubenswrapper[5050]: I1211 15:18:33.731049 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d0b3b56a-de8a-42dc-80a0-758b31bb4c1a-config\") pod \"dnsmasq-dns-6d9d5c6bc-xd8rc\" (UID: \"d0b3b56a-de8a-42dc-80a0-758b31bb4c1a\") " pod="openstack/dnsmasq-dns-6d9d5c6bc-xd8rc" Dec 11 15:18:33 crc kubenswrapper[5050]: I1211 15:18:33.731078 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/480b2bf3-fe1b-4d8d-a416-94f806ed262d-combined-ca-bundle\") pod \"barbican-keystone-listener-797c89446b-kwd44\" (UID: \"480b2bf3-fe1b-4d8d-a416-94f806ed262d\") " pod="openstack/barbican-keystone-listener-797c89446b-kwd44" Dec 11 15:18:33 crc kubenswrapper[5050]: I1211 15:18:33.731118 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7265b35-dcf6-4c88-994d-4643288e311b-combined-ca-bundle\") pod \"barbican-worker-bc78c584f-xrmj5\" (UID: \"c7265b35-dcf6-4c88-994d-4643288e311b\") " pod="openstack/barbican-worker-bc78c584f-xrmj5" Dec 11 15:18:33 crc kubenswrapper[5050]: I1211 15:18:33.731175 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/480b2bf3-fe1b-4d8d-a416-94f806ed262d-config-data-custom\") pod \"barbican-keystone-listener-797c89446b-kwd44\" (UID: \"480b2bf3-fe1b-4d8d-a416-94f806ed262d\") " pod="openstack/barbican-keystone-listener-797c89446b-kwd44" Dec 11 15:18:33 crc kubenswrapper[5050]: I1211 15:18:33.731222 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d0b3b56a-de8a-42dc-80a0-758b31bb4c1a-ovsdbserver-sb\") pod \"dnsmasq-dns-6d9d5c6bc-xd8rc\" (UID: \"d0b3b56a-de8a-42dc-80a0-758b31bb4c1a\") " pod="openstack/dnsmasq-dns-6d9d5c6bc-xd8rc" Dec 11 15:18:33 crc kubenswrapper[5050]: I1211 15:18:33.731252 5050 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/480b2bf3-fe1b-4d8d-a416-94f806ed262d-logs\") pod \"barbican-keystone-listener-797c89446b-kwd44\" (UID: \"480b2bf3-fe1b-4d8d-a416-94f806ed262d\") " pod="openstack/barbican-keystone-listener-797c89446b-kwd44" Dec 11 15:18:33 crc kubenswrapper[5050]: I1211 15:18:33.760780 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-86b9fb989b-rv8jv"] Dec 11 15:18:33 crc kubenswrapper[5050]: I1211 15:18:33.762195 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-86b9fb989b-rv8jv" Dec 11 15:18:33 crc kubenswrapper[5050]: I1211 15:18:33.769951 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Dec 11 15:18:33 crc kubenswrapper[5050]: I1211 15:18:33.784033 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-86b9fb989b-rv8jv"] Dec 11 15:18:33 crc kubenswrapper[5050]: I1211 15:18:33.833309 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d0b3b56a-de8a-42dc-80a0-758b31bb4c1a-config\") pod \"dnsmasq-dns-6d9d5c6bc-xd8rc\" (UID: \"d0b3b56a-de8a-42dc-80a0-758b31bb4c1a\") " pod="openstack/dnsmasq-dns-6d9d5c6bc-xd8rc" Dec 11 15:18:33 crc kubenswrapper[5050]: I1211 15:18:33.833371 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/480b2bf3-fe1b-4d8d-a416-94f806ed262d-combined-ca-bundle\") pod \"barbican-keystone-listener-797c89446b-kwd44\" (UID: \"480b2bf3-fe1b-4d8d-a416-94f806ed262d\") " pod="openstack/barbican-keystone-listener-797c89446b-kwd44" Dec 11 15:18:33 crc kubenswrapper[5050]: I1211 15:18:33.833401 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7265b35-dcf6-4c88-994d-4643288e311b-combined-ca-bundle\") pod \"barbican-worker-bc78c584f-xrmj5\" (UID: \"c7265b35-dcf6-4c88-994d-4643288e311b\") " pod="openstack/barbican-worker-bc78c584f-xrmj5" Dec 11 15:18:33 crc kubenswrapper[5050]: I1211 15:18:33.833449 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/480b2bf3-fe1b-4d8d-a416-94f806ed262d-config-data-custom\") pod \"barbican-keystone-listener-797c89446b-kwd44\" (UID: \"480b2bf3-fe1b-4d8d-a416-94f806ed262d\") " pod="openstack/barbican-keystone-listener-797c89446b-kwd44" Dec 11 15:18:33 crc kubenswrapper[5050]: I1211 15:18:33.833487 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cdafab22-9046-4864-8915-b66fb3b2e529-logs\") pod \"barbican-api-86b9fb989b-rv8jv\" (UID: \"cdafab22-9046-4864-8915-b66fb3b2e529\") " pod="openstack/barbican-api-86b9fb989b-rv8jv" Dec 11 15:18:33 crc kubenswrapper[5050]: I1211 15:18:33.833509 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d0b3b56a-de8a-42dc-80a0-758b31bb4c1a-ovsdbserver-sb\") pod \"dnsmasq-dns-6d9d5c6bc-xd8rc\" (UID: \"d0b3b56a-de8a-42dc-80a0-758b31bb4c1a\") " pod="openstack/dnsmasq-dns-6d9d5c6bc-xd8rc" Dec 11 15:18:33 crc kubenswrapper[5050]: I1211 15:18:33.833526 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/480b2bf3-fe1b-4d8d-a416-94f806ed262d-logs\") pod \"barbican-keystone-listener-797c89446b-kwd44\" (UID: \"480b2bf3-fe1b-4d8d-a416-94f806ed262d\") " pod="openstack/barbican-keystone-listener-797c89446b-kwd44" Dec 11 15:18:33 crc kubenswrapper[5050]: I1211 15:18:33.833554 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/480b2bf3-fe1b-4d8d-a416-94f806ed262d-config-data\") pod \"barbican-keystone-listener-797c89446b-kwd44\" (UID: \"480b2bf3-fe1b-4d8d-a416-94f806ed262d\") " pod="openstack/barbican-keystone-listener-797c89446b-kwd44" Dec 11 15:18:33 crc kubenswrapper[5050]: I1211 15:18:33.833576 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d0b3b56a-de8a-42dc-80a0-758b31bb4c1a-dns-svc\") pod \"dnsmasq-dns-6d9d5c6bc-xd8rc\" (UID: \"d0b3b56a-de8a-42dc-80a0-758b31bb4c1a\") " pod="openstack/dnsmasq-dns-6d9d5c6bc-xd8rc" Dec 11 15:18:33 crc kubenswrapper[5050]: I1211 15:18:33.833600 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cdafab22-9046-4864-8915-b66fb3b2e529-config-data\") pod \"barbican-api-86b9fb989b-rv8jv\" (UID: \"cdafab22-9046-4864-8915-b66fb3b2e529\") " pod="openstack/barbican-api-86b9fb989b-rv8jv" Dec 11 15:18:33 crc kubenswrapper[5050]: I1211 15:18:33.833620 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c7265b35-dcf6-4c88-994d-4643288e311b-logs\") pod \"barbican-worker-bc78c584f-xrmj5\" (UID: \"c7265b35-dcf6-4c88-994d-4643288e311b\") " pod="openstack/barbican-worker-bc78c584f-xrmj5" Dec 11 15:18:33 crc kubenswrapper[5050]: I1211 15:18:33.833640 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c7265b35-dcf6-4c88-994d-4643288e311b-config-data-custom\") pod \"barbican-worker-bc78c584f-xrmj5\" (UID: \"c7265b35-dcf6-4c88-994d-4643288e311b\") " pod="openstack/barbican-worker-bc78c584f-xrmj5" Dec 11 15:18:33 crc kubenswrapper[5050]: I1211 15:18:33.833661 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cdafab22-9046-4864-8915-b66fb3b2e529-combined-ca-bundle\") pod \"barbican-api-86b9fb989b-rv8jv\" (UID: \"cdafab22-9046-4864-8915-b66fb3b2e529\") " pod="openstack/barbican-api-86b9fb989b-rv8jv" Dec 11 15:18:33 crc kubenswrapper[5050]: I1211 15:18:33.833684 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7265b35-dcf6-4c88-994d-4643288e311b-config-data\") pod \"barbican-worker-bc78c584f-xrmj5\" (UID: \"c7265b35-dcf6-4c88-994d-4643288e311b\") " pod="openstack/barbican-worker-bc78c584f-xrmj5" Dec 11 15:18:33 crc kubenswrapper[5050]: I1211 15:18:33.833701 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6jvv\" (UniqueName: \"kubernetes.io/projected/cdafab22-9046-4864-8915-b66fb3b2e529-kube-api-access-z6jvv\") pod \"barbican-api-86b9fb989b-rv8jv\" (UID: \"cdafab22-9046-4864-8915-b66fb3b2e529\") " pod="openstack/barbican-api-86b9fb989b-rv8jv" Dec 11 15:18:33 crc kubenswrapper[5050]: I1211 15:18:33.833743 5050 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-lzp96\" (UniqueName: \"kubernetes.io/projected/d0b3b56a-de8a-42dc-80a0-758b31bb4c1a-kube-api-access-lzp96\") pod \"dnsmasq-dns-6d9d5c6bc-xd8rc\" (UID: \"d0b3b56a-de8a-42dc-80a0-758b31bb4c1a\") " pod="openstack/dnsmasq-dns-6d9d5c6bc-xd8rc" Dec 11 15:18:33 crc kubenswrapper[5050]: I1211 15:18:33.833765 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fx6x2\" (UniqueName: \"kubernetes.io/projected/480b2bf3-fe1b-4d8d-a416-94f806ed262d-kube-api-access-fx6x2\") pod \"barbican-keystone-listener-797c89446b-kwd44\" (UID: \"480b2bf3-fe1b-4d8d-a416-94f806ed262d\") " pod="openstack/barbican-keystone-listener-797c89446b-kwd44" Dec 11 15:18:33 crc kubenswrapper[5050]: I1211 15:18:33.833785 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f7mhd\" (UniqueName: \"kubernetes.io/projected/c7265b35-dcf6-4c88-994d-4643288e311b-kube-api-access-f7mhd\") pod \"barbican-worker-bc78c584f-xrmj5\" (UID: \"c7265b35-dcf6-4c88-994d-4643288e311b\") " pod="openstack/barbican-worker-bc78c584f-xrmj5" Dec 11 15:18:33 crc kubenswrapper[5050]: I1211 15:18:33.833815 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d0b3b56a-de8a-42dc-80a0-758b31bb4c1a-ovsdbserver-nb\") pod \"dnsmasq-dns-6d9d5c6bc-xd8rc\" (UID: \"d0b3b56a-de8a-42dc-80a0-758b31bb4c1a\") " pod="openstack/dnsmasq-dns-6d9d5c6bc-xd8rc" Dec 11 15:18:33 crc kubenswrapper[5050]: I1211 15:18:33.833833 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cdafab22-9046-4864-8915-b66fb3b2e529-config-data-custom\") pod \"barbican-api-86b9fb989b-rv8jv\" (UID: \"cdafab22-9046-4864-8915-b66fb3b2e529\") " pod="openstack/barbican-api-86b9fb989b-rv8jv" Dec 11 15:18:33 crc kubenswrapper[5050]: I1211 15:18:33.842609 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/480b2bf3-fe1b-4d8d-a416-94f806ed262d-logs\") pod \"barbican-keystone-listener-797c89446b-kwd44\" (UID: \"480b2bf3-fe1b-4d8d-a416-94f806ed262d\") " pod="openstack/barbican-keystone-listener-797c89446b-kwd44" Dec 11 15:18:33 crc kubenswrapper[5050]: I1211 15:18:33.844651 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d0b3b56a-de8a-42dc-80a0-758b31bb4c1a-dns-svc\") pod \"dnsmasq-dns-6d9d5c6bc-xd8rc\" (UID: \"d0b3b56a-de8a-42dc-80a0-758b31bb4c1a\") " pod="openstack/dnsmasq-dns-6d9d5c6bc-xd8rc" Dec 11 15:18:33 crc kubenswrapper[5050]: I1211 15:18:33.844796 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d0b3b56a-de8a-42dc-80a0-758b31bb4c1a-ovsdbserver-nb\") pod \"dnsmasq-dns-6d9d5c6bc-xd8rc\" (UID: \"d0b3b56a-de8a-42dc-80a0-758b31bb4c1a\") " pod="openstack/dnsmasq-dns-6d9d5c6bc-xd8rc" Dec 11 15:18:33 crc kubenswrapper[5050]: I1211 15:18:33.845575 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/480b2bf3-fe1b-4d8d-a416-94f806ed262d-combined-ca-bundle\") pod \"barbican-keystone-listener-797c89446b-kwd44\" (UID: \"480b2bf3-fe1b-4d8d-a416-94f806ed262d\") " pod="openstack/barbican-keystone-listener-797c89446b-kwd44" Dec 11 15:18:33 crc kubenswrapper[5050]: I1211 15:18:33.848112 5050 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/480b2bf3-fe1b-4d8d-a416-94f806ed262d-config-data\") pod \"barbican-keystone-listener-797c89446b-kwd44\" (UID: \"480b2bf3-fe1b-4d8d-a416-94f806ed262d\") " pod="openstack/barbican-keystone-listener-797c89446b-kwd44" Dec 11 15:18:33 crc kubenswrapper[5050]: I1211 15:18:33.850715 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d0b3b56a-de8a-42dc-80a0-758b31bb4c1a-ovsdbserver-sb\") pod \"dnsmasq-dns-6d9d5c6bc-xd8rc\" (UID: \"d0b3b56a-de8a-42dc-80a0-758b31bb4c1a\") " pod="openstack/dnsmasq-dns-6d9d5c6bc-xd8rc" Dec 11 15:18:33 crc kubenswrapper[5050]: I1211 15:18:33.850803 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c7265b35-dcf6-4c88-994d-4643288e311b-logs\") pod \"barbican-worker-bc78c584f-xrmj5\" (UID: \"c7265b35-dcf6-4c88-994d-4643288e311b\") " pod="openstack/barbican-worker-bc78c584f-xrmj5" Dec 11 15:18:33 crc kubenswrapper[5050]: I1211 15:18:33.855319 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fx6x2\" (UniqueName: \"kubernetes.io/projected/480b2bf3-fe1b-4d8d-a416-94f806ed262d-kube-api-access-fx6x2\") pod \"barbican-keystone-listener-797c89446b-kwd44\" (UID: \"480b2bf3-fe1b-4d8d-a416-94f806ed262d\") " pod="openstack/barbican-keystone-listener-797c89446b-kwd44" Dec 11 15:18:33 crc kubenswrapper[5050]: I1211 15:18:33.857864 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d0b3b56a-de8a-42dc-80a0-758b31bb4c1a-config\") pod \"dnsmasq-dns-6d9d5c6bc-xd8rc\" (UID: \"d0b3b56a-de8a-42dc-80a0-758b31bb4c1a\") " pod="openstack/dnsmasq-dns-6d9d5c6bc-xd8rc" Dec 11 15:18:33 crc kubenswrapper[5050]: I1211 15:18:33.859737 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lzp96\" (UniqueName: \"kubernetes.io/projected/d0b3b56a-de8a-42dc-80a0-758b31bb4c1a-kube-api-access-lzp96\") pod \"dnsmasq-dns-6d9d5c6bc-xd8rc\" (UID: \"d0b3b56a-de8a-42dc-80a0-758b31bb4c1a\") " pod="openstack/dnsmasq-dns-6d9d5c6bc-xd8rc" Dec 11 15:18:33 crc kubenswrapper[5050]: I1211 15:18:33.861041 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f7mhd\" (UniqueName: \"kubernetes.io/projected/c7265b35-dcf6-4c88-994d-4643288e311b-kube-api-access-f7mhd\") pod \"barbican-worker-bc78c584f-xrmj5\" (UID: \"c7265b35-dcf6-4c88-994d-4643288e311b\") " pod="openstack/barbican-worker-bc78c584f-xrmj5" Dec 11 15:18:33 crc kubenswrapper[5050]: I1211 15:18:33.861222 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c7265b35-dcf6-4c88-994d-4643288e311b-config-data-custom\") pod \"barbican-worker-bc78c584f-xrmj5\" (UID: \"c7265b35-dcf6-4c88-994d-4643288e311b\") " pod="openstack/barbican-worker-bc78c584f-xrmj5" Dec 11 15:18:33 crc kubenswrapper[5050]: I1211 15:18:33.861449 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7265b35-dcf6-4c88-994d-4643288e311b-config-data\") pod \"barbican-worker-bc78c584f-xrmj5\" (UID: \"c7265b35-dcf6-4c88-994d-4643288e311b\") " pod="openstack/barbican-worker-bc78c584f-xrmj5" Dec 11 15:18:33 crc kubenswrapper[5050]: I1211 15:18:33.867197 5050 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/480b2bf3-fe1b-4d8d-a416-94f806ed262d-config-data-custom\") pod \"barbican-keystone-listener-797c89446b-kwd44\" (UID: \"480b2bf3-fe1b-4d8d-a416-94f806ed262d\") " pod="openstack/barbican-keystone-listener-797c89446b-kwd44" Dec 11 15:18:33 crc kubenswrapper[5050]: I1211 15:18:33.884049 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7265b35-dcf6-4c88-994d-4643288e311b-combined-ca-bundle\") pod \"barbican-worker-bc78c584f-xrmj5\" (UID: \"c7265b35-dcf6-4c88-994d-4643288e311b\") " pod="openstack/barbican-worker-bc78c584f-xrmj5" Dec 11 15:18:33 crc kubenswrapper[5050]: I1211 15:18:33.934053 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-797c89446b-kwd44" Dec 11 15:18:33 crc kubenswrapper[5050]: I1211 15:18:33.935227 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cdafab22-9046-4864-8915-b66fb3b2e529-config-data-custom\") pod \"barbican-api-86b9fb989b-rv8jv\" (UID: \"cdafab22-9046-4864-8915-b66fb3b2e529\") " pod="openstack/barbican-api-86b9fb989b-rv8jv" Dec 11 15:18:33 crc kubenswrapper[5050]: I1211 15:18:33.935307 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cdafab22-9046-4864-8915-b66fb3b2e529-logs\") pod \"barbican-api-86b9fb989b-rv8jv\" (UID: \"cdafab22-9046-4864-8915-b66fb3b2e529\") " pod="openstack/barbican-api-86b9fb989b-rv8jv" Dec 11 15:18:33 crc kubenswrapper[5050]: I1211 15:18:33.935350 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cdafab22-9046-4864-8915-b66fb3b2e529-config-data\") pod \"barbican-api-86b9fb989b-rv8jv\" (UID: \"cdafab22-9046-4864-8915-b66fb3b2e529\") " pod="openstack/barbican-api-86b9fb989b-rv8jv" Dec 11 15:18:33 crc kubenswrapper[5050]: I1211 15:18:33.935371 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cdafab22-9046-4864-8915-b66fb3b2e529-combined-ca-bundle\") pod \"barbican-api-86b9fb989b-rv8jv\" (UID: \"cdafab22-9046-4864-8915-b66fb3b2e529\") " pod="openstack/barbican-api-86b9fb989b-rv8jv" Dec 11 15:18:33 crc kubenswrapper[5050]: I1211 15:18:33.935393 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z6jvv\" (UniqueName: \"kubernetes.io/projected/cdafab22-9046-4864-8915-b66fb3b2e529-kube-api-access-z6jvv\") pod \"barbican-api-86b9fb989b-rv8jv\" (UID: \"cdafab22-9046-4864-8915-b66fb3b2e529\") " pod="openstack/barbican-api-86b9fb989b-rv8jv" Dec 11 15:18:33 crc kubenswrapper[5050]: I1211 15:18:33.936294 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cdafab22-9046-4864-8915-b66fb3b2e529-logs\") pod \"barbican-api-86b9fb989b-rv8jv\" (UID: \"cdafab22-9046-4864-8915-b66fb3b2e529\") " pod="openstack/barbican-api-86b9fb989b-rv8jv" Dec 11 15:18:33 crc kubenswrapper[5050]: I1211 15:18:33.938493 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cdafab22-9046-4864-8915-b66fb3b2e529-config-data-custom\") pod \"barbican-api-86b9fb989b-rv8jv\" (UID: \"cdafab22-9046-4864-8915-b66fb3b2e529\") " 
pod="openstack/barbican-api-86b9fb989b-rv8jv" Dec 11 15:18:33 crc kubenswrapper[5050]: I1211 15:18:33.938780 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cdafab22-9046-4864-8915-b66fb3b2e529-combined-ca-bundle\") pod \"barbican-api-86b9fb989b-rv8jv\" (UID: \"cdafab22-9046-4864-8915-b66fb3b2e529\") " pod="openstack/barbican-api-86b9fb989b-rv8jv" Dec 11 15:18:33 crc kubenswrapper[5050]: I1211 15:18:33.951593 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cdafab22-9046-4864-8915-b66fb3b2e529-config-data\") pod \"barbican-api-86b9fb989b-rv8jv\" (UID: \"cdafab22-9046-4864-8915-b66fb3b2e529\") " pod="openstack/barbican-api-86b9fb989b-rv8jv" Dec 11 15:18:33 crc kubenswrapper[5050]: I1211 15:18:33.958192 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-bc78c584f-xrmj5" Dec 11 15:18:33 crc kubenswrapper[5050]: I1211 15:18:33.960577 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z6jvv\" (UniqueName: \"kubernetes.io/projected/cdafab22-9046-4864-8915-b66fb3b2e529-kube-api-access-z6jvv\") pod \"barbican-api-86b9fb989b-rv8jv\" (UID: \"cdafab22-9046-4864-8915-b66fb3b2e529\") " pod="openstack/barbican-api-86b9fb989b-rv8jv" Dec 11 15:18:33 crc kubenswrapper[5050]: I1211 15:18:33.997901 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d9d5c6bc-xd8rc" Dec 11 15:18:34 crc kubenswrapper[5050]: I1211 15:18:34.093271 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-86b9fb989b-rv8jv" Dec 11 15:18:34 crc kubenswrapper[5050]: I1211 15:18:34.308466 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-bc78c584f-xrmj5"] Dec 11 15:18:34 crc kubenswrapper[5050]: I1211 15:18:34.396335 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-797c89446b-kwd44"] Dec 11 15:18:34 crc kubenswrapper[5050]: I1211 15:18:34.522827 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6d9d5c6bc-xd8rc"] Dec 11 15:18:34 crc kubenswrapper[5050]: I1211 15:18:34.703725 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-86b9fb989b-rv8jv"] Dec 11 15:18:34 crc kubenswrapper[5050]: W1211 15:18:34.722714 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcdafab22_9046_4864_8915_b66fb3b2e529.slice/crio-cc503f68379185c3ea3c50d4e717a5349baf04333e8f15ff67666d8dddd29b9c WatchSource:0}: Error finding container cc503f68379185c3ea3c50d4e717a5349baf04333e8f15ff67666d8dddd29b9c: Status 404 returned error can't find the container with id cc503f68379185c3ea3c50d4e717a5349baf04333e8f15ff67666d8dddd29b9c Dec 11 15:18:35 crc kubenswrapper[5050]: I1211 15:18:35.343579 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-bc78c584f-xrmj5" event={"ID":"c7265b35-dcf6-4c88-994d-4643288e311b","Type":"ContainerStarted","Data":"51c91dd7468dd3000b22058d312ac70835885a7cc36fb709efdc3542d74e681e"} Dec 11 15:18:35 crc kubenswrapper[5050]: I1211 15:18:35.343983 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-bc78c584f-xrmj5" 
event={"ID":"c7265b35-dcf6-4c88-994d-4643288e311b","Type":"ContainerStarted","Data":"4f675baf5cf8481b2cce8fe2729ffc749d00bb69671f63df95ca885e89817b61"} Dec 11 15:18:35 crc kubenswrapper[5050]: I1211 15:18:35.344000 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-bc78c584f-xrmj5" event={"ID":"c7265b35-dcf6-4c88-994d-4643288e311b","Type":"ContainerStarted","Data":"d3a25751f1859f387f6326d59dab6b3e13ef0efaf8a50ed7808ff9acde9203d6"} Dec 11 15:18:35 crc kubenswrapper[5050]: I1211 15:18:35.358974 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-86b9fb989b-rv8jv" event={"ID":"cdafab22-9046-4864-8915-b66fb3b2e529","Type":"ContainerStarted","Data":"e8572678bb7589bc473736d8397a485429598613d83072b06c5016640267a8a1"} Dec 11 15:18:35 crc kubenswrapper[5050]: I1211 15:18:35.359043 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-86b9fb989b-rv8jv" event={"ID":"cdafab22-9046-4864-8915-b66fb3b2e529","Type":"ContainerStarted","Data":"37d86b8d86158595aa19745012f3eb3fbc534dc0ef691b68fbc41c2cf83c5ad4"} Dec 11 15:18:35 crc kubenswrapper[5050]: I1211 15:18:35.359057 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-86b9fb989b-rv8jv" event={"ID":"cdafab22-9046-4864-8915-b66fb3b2e529","Type":"ContainerStarted","Data":"cc503f68379185c3ea3c50d4e717a5349baf04333e8f15ff67666d8dddd29b9c"} Dec 11 15:18:35 crc kubenswrapper[5050]: I1211 15:18:35.360600 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-86b9fb989b-rv8jv" Dec 11 15:18:35 crc kubenswrapper[5050]: I1211 15:18:35.360646 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-86b9fb989b-rv8jv" Dec 11 15:18:35 crc kubenswrapper[5050]: I1211 15:18:35.371479 5050 generic.go:334] "Generic (PLEG): container finished" podID="d0b3b56a-de8a-42dc-80a0-758b31bb4c1a" containerID="f40a12d709ef29585091a6c86e877fdf647a9c3f6d43d18d1c91b9be70195e74" exitCode=0 Dec 11 15:18:35 crc kubenswrapper[5050]: I1211 15:18:35.371724 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d9d5c6bc-xd8rc" event={"ID":"d0b3b56a-de8a-42dc-80a0-758b31bb4c1a","Type":"ContainerDied","Data":"f40a12d709ef29585091a6c86e877fdf647a9c3f6d43d18d1c91b9be70195e74"} Dec 11 15:18:35 crc kubenswrapper[5050]: I1211 15:18:35.371778 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d9d5c6bc-xd8rc" event={"ID":"d0b3b56a-de8a-42dc-80a0-758b31bb4c1a","Type":"ContainerStarted","Data":"be24731390de12e43cc36ecef1f1e58269ac880aaee84b7c8e3450131a9a7fd6"} Dec 11 15:18:35 crc kubenswrapper[5050]: I1211 15:18:35.374200 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-bc78c584f-xrmj5" podStartSLOduration=2.374185778 podStartE2EDuration="2.374185778s" podCreationTimestamp="2025-12-11 15:18:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 15:18:35.369081551 +0000 UTC m=+5406.212804147" watchObservedRunningTime="2025-12-11 15:18:35.374185778 +0000 UTC m=+5406.217908364" Dec 11 15:18:35 crc kubenswrapper[5050]: I1211 15:18:35.375414 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-797c89446b-kwd44" 
event={"ID":"480b2bf3-fe1b-4d8d-a416-94f806ed262d","Type":"ContainerStarted","Data":"606ab824794de8fddbb4ba842bc80849c9c7bde20fdafa7bab3d2da76bb9071b"} Dec 11 15:18:35 crc kubenswrapper[5050]: I1211 15:18:35.375466 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-797c89446b-kwd44" event={"ID":"480b2bf3-fe1b-4d8d-a416-94f806ed262d","Type":"ContainerStarted","Data":"875cbee5add5ca71123b632a6af8ca143a2980cab819514e9f43d218645e57f7"} Dec 11 15:18:35 crc kubenswrapper[5050]: I1211 15:18:35.375482 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-797c89446b-kwd44" event={"ID":"480b2bf3-fe1b-4d8d-a416-94f806ed262d","Type":"ContainerStarted","Data":"ffcc88d3fd18ac2eadfb5103bd25babb1939fc33f82a14d288053768f74145fa"} Dec 11 15:18:35 crc kubenswrapper[5050]: I1211 15:18:35.411753 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-86b9fb989b-rv8jv" podStartSLOduration=2.411734467 podStartE2EDuration="2.411734467s" podCreationTimestamp="2025-12-11 15:18:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 15:18:35.406105146 +0000 UTC m=+5406.249827732" watchObservedRunningTime="2025-12-11 15:18:35.411734467 +0000 UTC m=+5406.255457053" Dec 11 15:18:35 crc kubenswrapper[5050]: I1211 15:18:35.454003 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-797c89446b-kwd44" podStartSLOduration=2.453977241 podStartE2EDuration="2.453977241s" podCreationTimestamp="2025-12-11 15:18:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 15:18:35.444611539 +0000 UTC m=+5406.288334125" watchObservedRunningTime="2025-12-11 15:18:35.453977241 +0000 UTC m=+5406.297699827" Dec 11 15:18:36 crc kubenswrapper[5050]: I1211 15:18:36.401262 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d9d5c6bc-xd8rc" event={"ID":"d0b3b56a-de8a-42dc-80a0-758b31bb4c1a","Type":"ContainerStarted","Data":"2fecee04939a8bbc1c0882005945f113c0816a3b86833f66803dce3e80894c33"} Dec 11 15:18:36 crc kubenswrapper[5050]: I1211 15:18:36.401975 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6d9d5c6bc-xd8rc" Dec 11 15:18:36 crc kubenswrapper[5050]: I1211 15:18:36.424098 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6d9d5c6bc-xd8rc" podStartSLOduration=3.424075678 podStartE2EDuration="3.424075678s" podCreationTimestamp="2025-12-11 15:18:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 15:18:36.419939157 +0000 UTC m=+5407.263661743" watchObservedRunningTime="2025-12-11 15:18:36.424075678 +0000 UTC m=+5407.267798264" Dec 11 15:18:39 crc kubenswrapper[5050]: I1211 15:18:39.552594 5050 scope.go:117] "RemoveContainer" containerID="5e751c2ff817a404aae81889734d5a9bd57af497d8e885add288368bb5b0040e" Dec 11 15:18:39 crc kubenswrapper[5050]: E1211 15:18:39.554926 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 15:18:40 crc kubenswrapper[5050]: I1211 15:18:40.720438 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-86b9fb989b-rv8jv" Dec 11 15:18:42 crc kubenswrapper[5050]: I1211 15:18:42.085316 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-86b9fb989b-rv8jv" Dec 11 15:18:44 crc kubenswrapper[5050]: I1211 15:18:44.000263 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6d9d5c6bc-xd8rc" Dec 11 15:18:44 crc kubenswrapper[5050]: I1211 15:18:44.059269 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7b8d99c58c-44jtb"] Dec 11 15:18:44 crc kubenswrapper[5050]: I1211 15:18:44.059508 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7b8d99c58c-44jtb" podUID="9847fc6d-b9e1-4fbe-9a61-502c243377a7" containerName="dnsmasq-dns" containerID="cri-o://2f2d0c3fb867e6c786d0de990fb6f92fd802fb9da1518f9b030460e71cf3e8fd" gracePeriod=10 Dec 11 15:18:44 crc kubenswrapper[5050]: I1211 15:18:44.476653 5050 generic.go:334] "Generic (PLEG): container finished" podID="9847fc6d-b9e1-4fbe-9a61-502c243377a7" containerID="2f2d0c3fb867e6c786d0de990fb6f92fd802fb9da1518f9b030460e71cf3e8fd" exitCode=0 Dec 11 15:18:44 crc kubenswrapper[5050]: I1211 15:18:44.476691 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b8d99c58c-44jtb" event={"ID":"9847fc6d-b9e1-4fbe-9a61-502c243377a7","Type":"ContainerDied","Data":"2f2d0c3fb867e6c786d0de990fb6f92fd802fb9da1518f9b030460e71cf3e8fd"} Dec 11 15:18:44 crc kubenswrapper[5050]: I1211 15:18:44.652083 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7b8d99c58c-44jtb" Dec 11 15:18:44 crc kubenswrapper[5050]: I1211 15:18:44.746962 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9847fc6d-b9e1-4fbe-9a61-502c243377a7-ovsdbserver-nb\") pod \"9847fc6d-b9e1-4fbe-9a61-502c243377a7\" (UID: \"9847fc6d-b9e1-4fbe-9a61-502c243377a7\") " Dec 11 15:18:44 crc kubenswrapper[5050]: I1211 15:18:44.747036 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9847fc6d-b9e1-4fbe-9a61-502c243377a7-ovsdbserver-sb\") pod \"9847fc6d-b9e1-4fbe-9a61-502c243377a7\" (UID: \"9847fc6d-b9e1-4fbe-9a61-502c243377a7\") " Dec 11 15:18:44 crc kubenswrapper[5050]: I1211 15:18:44.747104 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9847fc6d-b9e1-4fbe-9a61-502c243377a7-config\") pod \"9847fc6d-b9e1-4fbe-9a61-502c243377a7\" (UID: \"9847fc6d-b9e1-4fbe-9a61-502c243377a7\") " Dec 11 15:18:44 crc kubenswrapper[5050]: I1211 15:18:44.747190 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ktswf\" (UniqueName: \"kubernetes.io/projected/9847fc6d-b9e1-4fbe-9a61-502c243377a7-kube-api-access-ktswf\") pod \"9847fc6d-b9e1-4fbe-9a61-502c243377a7\" (UID: \"9847fc6d-b9e1-4fbe-9a61-502c243377a7\") " Dec 11 15:18:44 crc kubenswrapper[5050]: I1211 15:18:44.747270 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9847fc6d-b9e1-4fbe-9a61-502c243377a7-dns-svc\") pod \"9847fc6d-b9e1-4fbe-9a61-502c243377a7\" (UID: \"9847fc6d-b9e1-4fbe-9a61-502c243377a7\") " Dec 11 15:18:44 crc kubenswrapper[5050]: I1211 15:18:44.766826 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9847fc6d-b9e1-4fbe-9a61-502c243377a7-kube-api-access-ktswf" (OuterVolumeSpecName: "kube-api-access-ktswf") pod "9847fc6d-b9e1-4fbe-9a61-502c243377a7" (UID: "9847fc6d-b9e1-4fbe-9a61-502c243377a7"). InnerVolumeSpecName "kube-api-access-ktswf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 15:18:44 crc kubenswrapper[5050]: I1211 15:18:44.794667 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9847fc6d-b9e1-4fbe-9a61-502c243377a7-config" (OuterVolumeSpecName: "config") pod "9847fc6d-b9e1-4fbe-9a61-502c243377a7" (UID: "9847fc6d-b9e1-4fbe-9a61-502c243377a7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 15:18:44 crc kubenswrapper[5050]: I1211 15:18:44.794813 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9847fc6d-b9e1-4fbe-9a61-502c243377a7-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "9847fc6d-b9e1-4fbe-9a61-502c243377a7" (UID: "9847fc6d-b9e1-4fbe-9a61-502c243377a7"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 15:18:44 crc kubenswrapper[5050]: I1211 15:18:44.796074 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9847fc6d-b9e1-4fbe-9a61-502c243377a7-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9847fc6d-b9e1-4fbe-9a61-502c243377a7" (UID: "9847fc6d-b9e1-4fbe-9a61-502c243377a7"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 15:18:44 crc kubenswrapper[5050]: I1211 15:18:44.804497 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9847fc6d-b9e1-4fbe-9a61-502c243377a7-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "9847fc6d-b9e1-4fbe-9a61-502c243377a7" (UID: "9847fc6d-b9e1-4fbe-9a61-502c243377a7"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 15:18:44 crc kubenswrapper[5050]: I1211 15:18:44.848724 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ktswf\" (UniqueName: \"kubernetes.io/projected/9847fc6d-b9e1-4fbe-9a61-502c243377a7-kube-api-access-ktswf\") on node \"crc\" DevicePath \"\"" Dec 11 15:18:44 crc kubenswrapper[5050]: I1211 15:18:44.848759 5050 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9847fc6d-b9e1-4fbe-9a61-502c243377a7-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 11 15:18:44 crc kubenswrapper[5050]: I1211 15:18:44.848770 5050 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9847fc6d-b9e1-4fbe-9a61-502c243377a7-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 11 15:18:44 crc kubenswrapper[5050]: I1211 15:18:44.848778 5050 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9847fc6d-b9e1-4fbe-9a61-502c243377a7-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 11 15:18:44 crc kubenswrapper[5050]: I1211 15:18:44.848788 5050 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9847fc6d-b9e1-4fbe-9a61-502c243377a7-config\") on node \"crc\" DevicePath \"\"" Dec 11 15:18:45 crc kubenswrapper[5050]: I1211 15:18:45.490967 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b8d99c58c-44jtb" event={"ID":"9847fc6d-b9e1-4fbe-9a61-502c243377a7","Type":"ContainerDied","Data":"fc183c026fb960af6f3d928cbe9f95f7f06afffde46c9b41aeebc9af9184302e"} Dec 11 15:18:45 crc kubenswrapper[5050]: I1211 15:18:45.491071 5050 scope.go:117] "RemoveContainer" containerID="2f2d0c3fb867e6c786d0de990fb6f92fd802fb9da1518f9b030460e71cf3e8fd" Dec 11 15:18:45 crc kubenswrapper[5050]: I1211 15:18:45.491073 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7b8d99c58c-44jtb" Dec 11 15:18:45 crc kubenswrapper[5050]: I1211 15:18:45.525645 5050 scope.go:117] "RemoveContainer" containerID="5d34aa36c5a771a44f89db955b8ef686ad86bed16b04d4702e2db2cd19fef2eb" Dec 11 15:18:45 crc kubenswrapper[5050]: I1211 15:18:45.564949 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7b8d99c58c-44jtb"] Dec 11 15:18:45 crc kubenswrapper[5050]: I1211 15:18:45.572806 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7b8d99c58c-44jtb"] Dec 11 15:18:47 crc kubenswrapper[5050]: I1211 15:18:47.555575 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9847fc6d-b9e1-4fbe-9a61-502c243377a7" path="/var/lib/kubelet/pods/9847fc6d-b9e1-4fbe-9a61-502c243377a7/volumes" Dec 11 15:18:54 crc kubenswrapper[5050]: I1211 15:18:54.546342 5050 scope.go:117] "RemoveContainer" containerID="5e751c2ff817a404aae81889734d5a9bd57af497d8e885add288368bb5b0040e" Dec 11 15:18:54 crc kubenswrapper[5050]: E1211 15:18:54.547556 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 15:18:54 crc kubenswrapper[5050]: I1211 15:18:54.937748 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-6kwz5"] Dec 11 15:18:54 crc kubenswrapper[5050]: E1211 15:18:54.938090 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9847fc6d-b9e1-4fbe-9a61-502c243377a7" containerName="dnsmasq-dns" Dec 11 15:18:54 crc kubenswrapper[5050]: I1211 15:18:54.938105 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="9847fc6d-b9e1-4fbe-9a61-502c243377a7" containerName="dnsmasq-dns" Dec 11 15:18:54 crc kubenswrapper[5050]: E1211 15:18:54.938132 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9847fc6d-b9e1-4fbe-9a61-502c243377a7" containerName="init" Dec 11 15:18:54 crc kubenswrapper[5050]: I1211 15:18:54.938139 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="9847fc6d-b9e1-4fbe-9a61-502c243377a7" containerName="init" Dec 11 15:18:54 crc kubenswrapper[5050]: I1211 15:18:54.938301 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="9847fc6d-b9e1-4fbe-9a61-502c243377a7" containerName="dnsmasq-dns" Dec 11 15:18:54 crc kubenswrapper[5050]: I1211 15:18:54.938867 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-6kwz5" Dec 11 15:18:54 crc kubenswrapper[5050]: I1211 15:18:54.945969 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-3395-account-create-update-bft8b"] Dec 11 15:18:54 crc kubenswrapper[5050]: I1211 15:18:54.947322 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-3395-account-create-update-bft8b" Dec 11 15:18:54 crc kubenswrapper[5050]: I1211 15:18:54.953024 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-6kwz5"] Dec 11 15:18:54 crc kubenswrapper[5050]: I1211 15:18:54.956603 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Dec 11 15:18:54 crc kubenswrapper[5050]: I1211 15:18:54.985163 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-3395-account-create-update-bft8b"] Dec 11 15:18:55 crc kubenswrapper[5050]: I1211 15:18:55.026133 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hpnfd\" (UniqueName: \"kubernetes.io/projected/e6afafbe-4972-4ecd-a7f9-102e6dc01e06-kube-api-access-hpnfd\") pod \"neutron-db-create-6kwz5\" (UID: \"e6afafbe-4972-4ecd-a7f9-102e6dc01e06\") " pod="openstack/neutron-db-create-6kwz5" Dec 11 15:18:55 crc kubenswrapper[5050]: I1211 15:18:55.026193 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/13f1df50-df33-400c-8ee9-0d04a733c5c2-operator-scripts\") pod \"neutron-3395-account-create-update-bft8b\" (UID: \"13f1df50-df33-400c-8ee9-0d04a733c5c2\") " pod="openstack/neutron-3395-account-create-update-bft8b" Dec 11 15:18:55 crc kubenswrapper[5050]: I1211 15:18:55.026268 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e6afafbe-4972-4ecd-a7f9-102e6dc01e06-operator-scripts\") pod \"neutron-db-create-6kwz5\" (UID: \"e6afafbe-4972-4ecd-a7f9-102e6dc01e06\") " pod="openstack/neutron-db-create-6kwz5" Dec 11 15:18:55 crc kubenswrapper[5050]: I1211 15:18:55.026415 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7jh8\" (UniqueName: \"kubernetes.io/projected/13f1df50-df33-400c-8ee9-0d04a733c5c2-kube-api-access-k7jh8\") pod \"neutron-3395-account-create-update-bft8b\" (UID: \"13f1df50-df33-400c-8ee9-0d04a733c5c2\") " pod="openstack/neutron-3395-account-create-update-bft8b" Dec 11 15:18:55 crc kubenswrapper[5050]: I1211 15:18:55.127317 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k7jh8\" (UniqueName: \"kubernetes.io/projected/13f1df50-df33-400c-8ee9-0d04a733c5c2-kube-api-access-k7jh8\") pod \"neutron-3395-account-create-update-bft8b\" (UID: \"13f1df50-df33-400c-8ee9-0d04a733c5c2\") " pod="openstack/neutron-3395-account-create-update-bft8b" Dec 11 15:18:55 crc kubenswrapper[5050]: I1211 15:18:55.127591 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hpnfd\" (UniqueName: \"kubernetes.io/projected/e6afafbe-4972-4ecd-a7f9-102e6dc01e06-kube-api-access-hpnfd\") pod \"neutron-db-create-6kwz5\" (UID: \"e6afafbe-4972-4ecd-a7f9-102e6dc01e06\") " pod="openstack/neutron-db-create-6kwz5" Dec 11 15:18:55 crc kubenswrapper[5050]: I1211 15:18:55.127614 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/13f1df50-df33-400c-8ee9-0d04a733c5c2-operator-scripts\") pod \"neutron-3395-account-create-update-bft8b\" (UID: \"13f1df50-df33-400c-8ee9-0d04a733c5c2\") " pod="openstack/neutron-3395-account-create-update-bft8b" Dec 11 15:18:55 crc kubenswrapper[5050]: 
I1211 15:18:55.127658 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e6afafbe-4972-4ecd-a7f9-102e6dc01e06-operator-scripts\") pod \"neutron-db-create-6kwz5\" (UID: \"e6afafbe-4972-4ecd-a7f9-102e6dc01e06\") " pod="openstack/neutron-db-create-6kwz5" Dec 11 15:18:55 crc kubenswrapper[5050]: I1211 15:18:55.128371 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/13f1df50-df33-400c-8ee9-0d04a733c5c2-operator-scripts\") pod \"neutron-3395-account-create-update-bft8b\" (UID: \"13f1df50-df33-400c-8ee9-0d04a733c5c2\") " pod="openstack/neutron-3395-account-create-update-bft8b" Dec 11 15:18:55 crc kubenswrapper[5050]: I1211 15:18:55.128402 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e6afafbe-4972-4ecd-a7f9-102e6dc01e06-operator-scripts\") pod \"neutron-db-create-6kwz5\" (UID: \"e6afafbe-4972-4ecd-a7f9-102e6dc01e06\") " pod="openstack/neutron-db-create-6kwz5" Dec 11 15:18:55 crc kubenswrapper[5050]: I1211 15:18:55.147674 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hpnfd\" (UniqueName: \"kubernetes.io/projected/e6afafbe-4972-4ecd-a7f9-102e6dc01e06-kube-api-access-hpnfd\") pod \"neutron-db-create-6kwz5\" (UID: \"e6afafbe-4972-4ecd-a7f9-102e6dc01e06\") " pod="openstack/neutron-db-create-6kwz5" Dec 11 15:18:55 crc kubenswrapper[5050]: I1211 15:18:55.148707 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k7jh8\" (UniqueName: \"kubernetes.io/projected/13f1df50-df33-400c-8ee9-0d04a733c5c2-kube-api-access-k7jh8\") pod \"neutron-3395-account-create-update-bft8b\" (UID: \"13f1df50-df33-400c-8ee9-0d04a733c5c2\") " pod="openstack/neutron-3395-account-create-update-bft8b" Dec 11 15:18:55 crc kubenswrapper[5050]: I1211 15:18:55.258471 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-6kwz5" Dec 11 15:18:55 crc kubenswrapper[5050]: I1211 15:18:55.282695 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-3395-account-create-update-bft8b" Dec 11 15:18:55 crc kubenswrapper[5050]: I1211 15:18:55.652276 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-3395-account-create-update-bft8b"] Dec 11 15:18:55 crc kubenswrapper[5050]: I1211 15:18:55.734563 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-6kwz5"] Dec 11 15:18:55 crc kubenswrapper[5050]: I1211 15:18:55.750927 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-3395-account-create-update-bft8b" event={"ID":"13f1df50-df33-400c-8ee9-0d04a733c5c2","Type":"ContainerStarted","Data":"f449983887c441113f2b9570462f58234224cd19467e271a6b17b5f1d0b35b84"} Dec 11 15:18:56 crc kubenswrapper[5050]: I1211 15:18:56.765729 5050 generic.go:334] "Generic (PLEG): container finished" podID="13f1df50-df33-400c-8ee9-0d04a733c5c2" containerID="4778bfc392f3f88f224ea74000c59e86b83c8d50962b3649f80cb6462b5b6e1f" exitCode=0 Dec 11 15:18:56 crc kubenswrapper[5050]: I1211 15:18:56.765814 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-3395-account-create-update-bft8b" event={"ID":"13f1df50-df33-400c-8ee9-0d04a733c5c2","Type":"ContainerDied","Data":"4778bfc392f3f88f224ea74000c59e86b83c8d50962b3649f80cb6462b5b6e1f"} Dec 11 15:18:56 crc kubenswrapper[5050]: I1211 15:18:56.770620 5050 generic.go:334] "Generic (PLEG): container finished" podID="e6afafbe-4972-4ecd-a7f9-102e6dc01e06" containerID="5ffcd502ee6cb1a15ef61afcfc038cede45fce813cf7d92a6730ecc8be1238ea" exitCode=0 Dec 11 15:18:56 crc kubenswrapper[5050]: I1211 15:18:56.770671 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-6kwz5" event={"ID":"e6afafbe-4972-4ecd-a7f9-102e6dc01e06","Type":"ContainerDied","Data":"5ffcd502ee6cb1a15ef61afcfc038cede45fce813cf7d92a6730ecc8be1238ea"} Dec 11 15:18:56 crc kubenswrapper[5050]: I1211 15:18:56.770703 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-6kwz5" event={"ID":"e6afafbe-4972-4ecd-a7f9-102e6dc01e06","Type":"ContainerStarted","Data":"611abdf4e4b895599cfb070b5902484cf47fd949f779fb04b57a6a67920b1d85"} Dec 11 15:18:58 crc kubenswrapper[5050]: I1211 15:18:58.274941 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-6kwz5" Dec 11 15:18:58 crc kubenswrapper[5050]: I1211 15:18:58.282475 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-3395-account-create-update-bft8b" Dec 11 15:18:58 crc kubenswrapper[5050]: I1211 15:18:58.381478 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k7jh8\" (UniqueName: \"kubernetes.io/projected/13f1df50-df33-400c-8ee9-0d04a733c5c2-kube-api-access-k7jh8\") pod \"13f1df50-df33-400c-8ee9-0d04a733c5c2\" (UID: \"13f1df50-df33-400c-8ee9-0d04a733c5c2\") " Dec 11 15:18:58 crc kubenswrapper[5050]: I1211 15:18:58.381594 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e6afafbe-4972-4ecd-a7f9-102e6dc01e06-operator-scripts\") pod \"e6afafbe-4972-4ecd-a7f9-102e6dc01e06\" (UID: \"e6afafbe-4972-4ecd-a7f9-102e6dc01e06\") " Dec 11 15:18:58 crc kubenswrapper[5050]: I1211 15:18:58.381628 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/13f1df50-df33-400c-8ee9-0d04a733c5c2-operator-scripts\") pod \"13f1df50-df33-400c-8ee9-0d04a733c5c2\" (UID: \"13f1df50-df33-400c-8ee9-0d04a733c5c2\") " Dec 11 15:18:58 crc kubenswrapper[5050]: I1211 15:18:58.381688 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hpnfd\" (UniqueName: \"kubernetes.io/projected/e6afafbe-4972-4ecd-a7f9-102e6dc01e06-kube-api-access-hpnfd\") pod \"e6afafbe-4972-4ecd-a7f9-102e6dc01e06\" (UID: \"e6afafbe-4972-4ecd-a7f9-102e6dc01e06\") " Dec 11 15:18:58 crc kubenswrapper[5050]: I1211 15:18:58.382087 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/13f1df50-df33-400c-8ee9-0d04a733c5c2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "13f1df50-df33-400c-8ee9-0d04a733c5c2" (UID: "13f1df50-df33-400c-8ee9-0d04a733c5c2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 15:18:58 crc kubenswrapper[5050]: I1211 15:18:58.382085 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e6afafbe-4972-4ecd-a7f9-102e6dc01e06-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e6afafbe-4972-4ecd-a7f9-102e6dc01e06" (UID: "e6afafbe-4972-4ecd-a7f9-102e6dc01e06"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 15:18:58 crc kubenswrapper[5050]: I1211 15:18:58.388627 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6afafbe-4972-4ecd-a7f9-102e6dc01e06-kube-api-access-hpnfd" (OuterVolumeSpecName: "kube-api-access-hpnfd") pod "e6afafbe-4972-4ecd-a7f9-102e6dc01e06" (UID: "e6afafbe-4972-4ecd-a7f9-102e6dc01e06"). InnerVolumeSpecName "kube-api-access-hpnfd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 15:18:58 crc kubenswrapper[5050]: I1211 15:18:58.389206 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13f1df50-df33-400c-8ee9-0d04a733c5c2-kube-api-access-k7jh8" (OuterVolumeSpecName: "kube-api-access-k7jh8") pod "13f1df50-df33-400c-8ee9-0d04a733c5c2" (UID: "13f1df50-df33-400c-8ee9-0d04a733c5c2"). InnerVolumeSpecName "kube-api-access-k7jh8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 15:18:58 crc kubenswrapper[5050]: I1211 15:18:58.483628 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k7jh8\" (UniqueName: \"kubernetes.io/projected/13f1df50-df33-400c-8ee9-0d04a733c5c2-kube-api-access-k7jh8\") on node \"crc\" DevicePath \"\"" Dec 11 15:18:58 crc kubenswrapper[5050]: I1211 15:18:58.483669 5050 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e6afafbe-4972-4ecd-a7f9-102e6dc01e06-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 15:18:58 crc kubenswrapper[5050]: I1211 15:18:58.483684 5050 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/13f1df50-df33-400c-8ee9-0d04a733c5c2-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 15:18:58 crc kubenswrapper[5050]: I1211 15:18:58.483695 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hpnfd\" (UniqueName: \"kubernetes.io/projected/e6afafbe-4972-4ecd-a7f9-102e6dc01e06-kube-api-access-hpnfd\") on node \"crc\" DevicePath \"\"" Dec 11 15:18:58 crc kubenswrapper[5050]: I1211 15:18:58.787741 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-3395-account-create-update-bft8b" event={"ID":"13f1df50-df33-400c-8ee9-0d04a733c5c2","Type":"ContainerDied","Data":"f449983887c441113f2b9570462f58234224cd19467e271a6b17b5f1d0b35b84"} Dec 11 15:18:58 crc kubenswrapper[5050]: I1211 15:18:58.787796 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f449983887c441113f2b9570462f58234224cd19467e271a6b17b5f1d0b35b84" Dec 11 15:18:58 crc kubenswrapper[5050]: I1211 15:18:58.787808 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-3395-account-create-update-bft8b" Dec 11 15:18:58 crc kubenswrapper[5050]: I1211 15:18:58.789606 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-6kwz5" event={"ID":"e6afafbe-4972-4ecd-a7f9-102e6dc01e06","Type":"ContainerDied","Data":"611abdf4e4b895599cfb070b5902484cf47fd949f779fb04b57a6a67920b1d85"} Dec 11 15:18:58 crc kubenswrapper[5050]: I1211 15:18:58.789645 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="611abdf4e4b895599cfb070b5902484cf47fd949f779fb04b57a6a67920b1d85" Dec 11 15:18:58 crc kubenswrapper[5050]: I1211 15:18:58.789651 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-6kwz5" Dec 11 15:19:00 crc kubenswrapper[5050]: I1211 15:19:00.259567 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-2z5q4"] Dec 11 15:19:00 crc kubenswrapper[5050]: E1211 15:19:00.260229 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6afafbe-4972-4ecd-a7f9-102e6dc01e06" containerName="mariadb-database-create" Dec 11 15:19:00 crc kubenswrapper[5050]: I1211 15:19:00.260245 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6afafbe-4972-4ecd-a7f9-102e6dc01e06" containerName="mariadb-database-create" Dec 11 15:19:00 crc kubenswrapper[5050]: E1211 15:19:00.260272 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13f1df50-df33-400c-8ee9-0d04a733c5c2" containerName="mariadb-account-create-update" Dec 11 15:19:00 crc kubenswrapper[5050]: I1211 15:19:00.260281 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="13f1df50-df33-400c-8ee9-0d04a733c5c2" containerName="mariadb-account-create-update" Dec 11 15:19:00 crc kubenswrapper[5050]: I1211 15:19:00.260519 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="13f1df50-df33-400c-8ee9-0d04a733c5c2" containerName="mariadb-account-create-update" Dec 11 15:19:00 crc kubenswrapper[5050]: I1211 15:19:00.260532 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6afafbe-4972-4ecd-a7f9-102e6dc01e06" containerName="mariadb-database-create" Dec 11 15:19:00 crc kubenswrapper[5050]: I1211 15:19:00.261171 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-2z5q4" Dec 11 15:19:00 crc kubenswrapper[5050]: I1211 15:19:00.263368 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Dec 11 15:19:00 crc kubenswrapper[5050]: I1211 15:19:00.263566 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-ctkbt" Dec 11 15:19:00 crc kubenswrapper[5050]: I1211 15:19:00.264101 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Dec 11 15:19:00 crc kubenswrapper[5050]: I1211 15:19:00.279542 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-2z5q4"] Dec 11 15:19:00 crc kubenswrapper[5050]: I1211 15:19:00.312994 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0c23522-39f3-4930-8afd-d56611078533-combined-ca-bundle\") pod \"neutron-db-sync-2z5q4\" (UID: \"b0c23522-39f3-4930-8afd-d56611078533\") " pod="openstack/neutron-db-sync-2z5q4" Dec 11 15:19:00 crc kubenswrapper[5050]: I1211 15:19:00.313144 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b0c23522-39f3-4930-8afd-d56611078533-config\") pod \"neutron-db-sync-2z5q4\" (UID: \"b0c23522-39f3-4930-8afd-d56611078533\") " pod="openstack/neutron-db-sync-2z5q4" Dec 11 15:19:00 crc kubenswrapper[5050]: I1211 15:19:00.313224 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrfr5\" (UniqueName: \"kubernetes.io/projected/b0c23522-39f3-4930-8afd-d56611078533-kube-api-access-jrfr5\") pod \"neutron-db-sync-2z5q4\" (UID: \"b0c23522-39f3-4930-8afd-d56611078533\") " pod="openstack/neutron-db-sync-2z5q4" Dec 11 15:19:00 crc kubenswrapper[5050]: I1211 
15:19:00.414911 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jrfr5\" (UniqueName: \"kubernetes.io/projected/b0c23522-39f3-4930-8afd-d56611078533-kube-api-access-jrfr5\") pod \"neutron-db-sync-2z5q4\" (UID: \"b0c23522-39f3-4930-8afd-d56611078533\") " pod="openstack/neutron-db-sync-2z5q4" Dec 11 15:19:00 crc kubenswrapper[5050]: I1211 15:19:00.414988 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0c23522-39f3-4930-8afd-d56611078533-combined-ca-bundle\") pod \"neutron-db-sync-2z5q4\" (UID: \"b0c23522-39f3-4930-8afd-d56611078533\") " pod="openstack/neutron-db-sync-2z5q4" Dec 11 15:19:00 crc kubenswrapper[5050]: I1211 15:19:00.415141 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b0c23522-39f3-4930-8afd-d56611078533-config\") pod \"neutron-db-sync-2z5q4\" (UID: \"b0c23522-39f3-4930-8afd-d56611078533\") " pod="openstack/neutron-db-sync-2z5q4" Dec 11 15:19:00 crc kubenswrapper[5050]: I1211 15:19:00.418967 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0c23522-39f3-4930-8afd-d56611078533-combined-ca-bundle\") pod \"neutron-db-sync-2z5q4\" (UID: \"b0c23522-39f3-4930-8afd-d56611078533\") " pod="openstack/neutron-db-sync-2z5q4" Dec 11 15:19:00 crc kubenswrapper[5050]: I1211 15:19:00.419369 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/b0c23522-39f3-4930-8afd-d56611078533-config\") pod \"neutron-db-sync-2z5q4\" (UID: \"b0c23522-39f3-4930-8afd-d56611078533\") " pod="openstack/neutron-db-sync-2z5q4" Dec 11 15:19:00 crc kubenswrapper[5050]: I1211 15:19:00.430980 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jrfr5\" (UniqueName: \"kubernetes.io/projected/b0c23522-39f3-4930-8afd-d56611078533-kube-api-access-jrfr5\") pod \"neutron-db-sync-2z5q4\" (UID: \"b0c23522-39f3-4930-8afd-d56611078533\") " pod="openstack/neutron-db-sync-2z5q4" Dec 11 15:19:00 crc kubenswrapper[5050]: I1211 15:19:00.585740 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-2z5q4" Dec 11 15:19:01 crc kubenswrapper[5050]: I1211 15:19:01.009190 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-2z5q4"] Dec 11 15:19:01 crc kubenswrapper[5050]: I1211 15:19:01.822994 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-2z5q4" event={"ID":"b0c23522-39f3-4930-8afd-d56611078533","Type":"ContainerStarted","Data":"5854ceb703a1c14a45377a756c4bfe757bd2470825586e48ec4d5ca3dcc1e542"} Dec 11 15:19:01 crc kubenswrapper[5050]: I1211 15:19:01.823424 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-2z5q4" event={"ID":"b0c23522-39f3-4930-8afd-d56611078533","Type":"ContainerStarted","Data":"c6efb4e9f538a614713131cbf05c48e55333279eb5212156c08cde8627ca8da0"} Dec 11 15:19:01 crc kubenswrapper[5050]: I1211 15:19:01.841174 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-2z5q4" podStartSLOduration=1.841149741 podStartE2EDuration="1.841149741s" podCreationTimestamp="2025-12-11 15:19:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 15:19:01.83960384 +0000 UTC m=+5432.683326466" watchObservedRunningTime="2025-12-11 15:19:01.841149741 +0000 UTC m=+5432.684872368" Dec 11 15:19:04 crc kubenswrapper[5050]: I1211 15:19:04.846822 5050 generic.go:334] "Generic (PLEG): container finished" podID="b0c23522-39f3-4930-8afd-d56611078533" containerID="5854ceb703a1c14a45377a756c4bfe757bd2470825586e48ec4d5ca3dcc1e542" exitCode=0 Dec 11 15:19:04 crc kubenswrapper[5050]: I1211 15:19:04.846938 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-2z5q4" event={"ID":"b0c23522-39f3-4930-8afd-d56611078533","Type":"ContainerDied","Data":"5854ceb703a1c14a45377a756c4bfe757bd2470825586e48ec4d5ca3dcc1e542"} Dec 11 15:19:05 crc kubenswrapper[5050]: I1211 15:19:05.545673 5050 scope.go:117] "RemoveContainer" containerID="5e751c2ff817a404aae81889734d5a9bd57af497d8e885add288368bb5b0040e" Dec 11 15:19:05 crc kubenswrapper[5050]: E1211 15:19:05.546245 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 15:19:06 crc kubenswrapper[5050]: I1211 15:19:06.206397 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-2z5q4" Dec 11 15:19:06 crc kubenswrapper[5050]: I1211 15:19:06.312915 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jrfr5\" (UniqueName: \"kubernetes.io/projected/b0c23522-39f3-4930-8afd-d56611078533-kube-api-access-jrfr5\") pod \"b0c23522-39f3-4930-8afd-d56611078533\" (UID: \"b0c23522-39f3-4930-8afd-d56611078533\") " Dec 11 15:19:06 crc kubenswrapper[5050]: I1211 15:19:06.313530 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b0c23522-39f3-4930-8afd-d56611078533-config\") pod \"b0c23522-39f3-4930-8afd-d56611078533\" (UID: \"b0c23522-39f3-4930-8afd-d56611078533\") " Dec 11 15:19:06 crc kubenswrapper[5050]: I1211 15:19:06.313596 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0c23522-39f3-4930-8afd-d56611078533-combined-ca-bundle\") pod \"b0c23522-39f3-4930-8afd-d56611078533\" (UID: \"b0c23522-39f3-4930-8afd-d56611078533\") " Dec 11 15:19:06 crc kubenswrapper[5050]: I1211 15:19:06.319295 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0c23522-39f3-4930-8afd-d56611078533-kube-api-access-jrfr5" (OuterVolumeSpecName: "kube-api-access-jrfr5") pod "b0c23522-39f3-4930-8afd-d56611078533" (UID: "b0c23522-39f3-4930-8afd-d56611078533"). InnerVolumeSpecName "kube-api-access-jrfr5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 15:19:06 crc kubenswrapper[5050]: I1211 15:19:06.338705 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0c23522-39f3-4930-8afd-d56611078533-config" (OuterVolumeSpecName: "config") pod "b0c23522-39f3-4930-8afd-d56611078533" (UID: "b0c23522-39f3-4930-8afd-d56611078533"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 15:19:06 crc kubenswrapper[5050]: I1211 15:19:06.356452 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0c23522-39f3-4930-8afd-d56611078533-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b0c23522-39f3-4930-8afd-d56611078533" (UID: "b0c23522-39f3-4930-8afd-d56611078533"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 15:19:06 crc kubenswrapper[5050]: I1211 15:19:06.416319 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jrfr5\" (UniqueName: \"kubernetes.io/projected/b0c23522-39f3-4930-8afd-d56611078533-kube-api-access-jrfr5\") on node \"crc\" DevicePath \"\"" Dec 11 15:19:06 crc kubenswrapper[5050]: I1211 15:19:06.416373 5050 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/b0c23522-39f3-4930-8afd-d56611078533-config\") on node \"crc\" DevicePath \"\"" Dec 11 15:19:06 crc kubenswrapper[5050]: I1211 15:19:06.416404 5050 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0c23522-39f3-4930-8afd-d56611078533-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 15:19:06 crc kubenswrapper[5050]: I1211 15:19:06.865557 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-2z5q4" event={"ID":"b0c23522-39f3-4930-8afd-d56611078533","Type":"ContainerDied","Data":"c6efb4e9f538a614713131cbf05c48e55333279eb5212156c08cde8627ca8da0"} Dec 11 15:19:06 crc kubenswrapper[5050]: I1211 15:19:06.865594 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c6efb4e9f538a614713131cbf05c48e55333279eb5212156c08cde8627ca8da0" Dec 11 15:19:06 crc kubenswrapper[5050]: I1211 15:19:06.865646 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-2z5q4" Dec 11 15:19:07 crc kubenswrapper[5050]: I1211 15:19:07.051652 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-79754c57d9-knr44"] Dec 11 15:19:07 crc kubenswrapper[5050]: E1211 15:19:07.053007 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0c23522-39f3-4930-8afd-d56611078533" containerName="neutron-db-sync" Dec 11 15:19:07 crc kubenswrapper[5050]: I1211 15:19:07.053047 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0c23522-39f3-4930-8afd-d56611078533" containerName="neutron-db-sync" Dec 11 15:19:07 crc kubenswrapper[5050]: I1211 15:19:07.053283 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0c23522-39f3-4930-8afd-d56611078533" containerName="neutron-db-sync" Dec 11 15:19:07 crc kubenswrapper[5050]: I1211 15:19:07.054424 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-79754c57d9-knr44" Dec 11 15:19:07 crc kubenswrapper[5050]: I1211 15:19:07.070176 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-79754c57d9-knr44"] Dec 11 15:19:07 crc kubenswrapper[5050]: I1211 15:19:07.129720 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c8d953cd-0732-41cc-8005-18aa2145cb8c-dns-svc\") pod \"dnsmasq-dns-79754c57d9-knr44\" (UID: \"c8d953cd-0732-41cc-8005-18aa2145cb8c\") " pod="openstack/dnsmasq-dns-79754c57d9-knr44" Dec 11 15:19:07 crc kubenswrapper[5050]: I1211 15:19:07.129801 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qr87\" (UniqueName: \"kubernetes.io/projected/c8d953cd-0732-41cc-8005-18aa2145cb8c-kube-api-access-2qr87\") pod \"dnsmasq-dns-79754c57d9-knr44\" (UID: \"c8d953cd-0732-41cc-8005-18aa2145cb8c\") " pod="openstack/dnsmasq-dns-79754c57d9-knr44" Dec 11 15:19:07 crc kubenswrapper[5050]: I1211 15:19:07.129893 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c8d953cd-0732-41cc-8005-18aa2145cb8c-config\") pod \"dnsmasq-dns-79754c57d9-knr44\" (UID: \"c8d953cd-0732-41cc-8005-18aa2145cb8c\") " pod="openstack/dnsmasq-dns-79754c57d9-knr44" Dec 11 15:19:07 crc kubenswrapper[5050]: I1211 15:19:07.129921 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c8d953cd-0732-41cc-8005-18aa2145cb8c-ovsdbserver-sb\") pod \"dnsmasq-dns-79754c57d9-knr44\" (UID: \"c8d953cd-0732-41cc-8005-18aa2145cb8c\") " pod="openstack/dnsmasq-dns-79754c57d9-knr44" Dec 11 15:19:07 crc kubenswrapper[5050]: I1211 15:19:07.129976 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c8d953cd-0732-41cc-8005-18aa2145cb8c-ovsdbserver-nb\") pod \"dnsmasq-dns-79754c57d9-knr44\" (UID: \"c8d953cd-0732-41cc-8005-18aa2145cb8c\") " pod="openstack/dnsmasq-dns-79754c57d9-knr44" Dec 11 15:19:07 crc kubenswrapper[5050]: I1211 15:19:07.172627 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-857bd6c55c-xnk9l"] Dec 11 15:19:07 crc kubenswrapper[5050]: I1211 15:19:07.174993 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-857bd6c55c-xnk9l" Dec 11 15:19:07 crc kubenswrapper[5050]: I1211 15:19:07.195562 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Dec 11 15:19:07 crc kubenswrapper[5050]: I1211 15:19:07.195692 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Dec 11 15:19:07 crc kubenswrapper[5050]: I1211 15:19:07.195841 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-ctkbt" Dec 11 15:19:07 crc kubenswrapper[5050]: I1211 15:19:07.214057 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-857bd6c55c-xnk9l"] Dec 11 15:19:07 crc kubenswrapper[5050]: I1211 15:19:07.231875 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e3e22310-61f8-4656-a58b-00164cda77a6-config\") pod \"neutron-857bd6c55c-xnk9l\" (UID: \"e3e22310-61f8-4656-a58b-00164cda77a6\") " pod="openstack/neutron-857bd6c55c-xnk9l" Dec 11 15:19:07 crc kubenswrapper[5050]: I1211 15:19:07.231922 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3e22310-61f8-4656-a58b-00164cda77a6-combined-ca-bundle\") pod \"neutron-857bd6c55c-xnk9l\" (UID: \"e3e22310-61f8-4656-a58b-00164cda77a6\") " pod="openstack/neutron-857bd6c55c-xnk9l" Dec 11 15:19:07 crc kubenswrapper[5050]: I1211 15:19:07.231947 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c8d953cd-0732-41cc-8005-18aa2145cb8c-config\") pod \"dnsmasq-dns-79754c57d9-knr44\" (UID: \"c8d953cd-0732-41cc-8005-18aa2145cb8c\") " pod="openstack/dnsmasq-dns-79754c57d9-knr44" Dec 11 15:19:07 crc kubenswrapper[5050]: I1211 15:19:07.231968 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c8d953cd-0732-41cc-8005-18aa2145cb8c-ovsdbserver-sb\") pod \"dnsmasq-dns-79754c57d9-knr44\" (UID: \"c8d953cd-0732-41cc-8005-18aa2145cb8c\") " pod="openstack/dnsmasq-dns-79754c57d9-knr44" Dec 11 15:19:07 crc kubenswrapper[5050]: I1211 15:19:07.232028 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4t67f\" (UniqueName: \"kubernetes.io/projected/e3e22310-61f8-4656-a58b-00164cda77a6-kube-api-access-4t67f\") pod \"neutron-857bd6c55c-xnk9l\" (UID: \"e3e22310-61f8-4656-a58b-00164cda77a6\") " pod="openstack/neutron-857bd6c55c-xnk9l" Dec 11 15:19:07 crc kubenswrapper[5050]: I1211 15:19:07.232104 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c8d953cd-0732-41cc-8005-18aa2145cb8c-ovsdbserver-nb\") pod \"dnsmasq-dns-79754c57d9-knr44\" (UID: \"c8d953cd-0732-41cc-8005-18aa2145cb8c\") " pod="openstack/dnsmasq-dns-79754c57d9-knr44" Dec 11 15:19:07 crc kubenswrapper[5050]: I1211 15:19:07.232152 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/e3e22310-61f8-4656-a58b-00164cda77a6-httpd-config\") pod \"neutron-857bd6c55c-xnk9l\" (UID: \"e3e22310-61f8-4656-a58b-00164cda77a6\") " pod="openstack/neutron-857bd6c55c-xnk9l" Dec 11 15:19:07 crc kubenswrapper[5050]: I1211 15:19:07.232170 5050 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c8d953cd-0732-41cc-8005-18aa2145cb8c-dns-svc\") pod \"dnsmasq-dns-79754c57d9-knr44\" (UID: \"c8d953cd-0732-41cc-8005-18aa2145cb8c\") " pod="openstack/dnsmasq-dns-79754c57d9-knr44" Dec 11 15:19:07 crc kubenswrapper[5050]: I1211 15:19:07.232203 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2qr87\" (UniqueName: \"kubernetes.io/projected/c8d953cd-0732-41cc-8005-18aa2145cb8c-kube-api-access-2qr87\") pod \"dnsmasq-dns-79754c57d9-knr44\" (UID: \"c8d953cd-0732-41cc-8005-18aa2145cb8c\") " pod="openstack/dnsmasq-dns-79754c57d9-knr44" Dec 11 15:19:07 crc kubenswrapper[5050]: I1211 15:19:07.232839 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c8d953cd-0732-41cc-8005-18aa2145cb8c-config\") pod \"dnsmasq-dns-79754c57d9-knr44\" (UID: \"c8d953cd-0732-41cc-8005-18aa2145cb8c\") " pod="openstack/dnsmasq-dns-79754c57d9-knr44" Dec 11 15:19:07 crc kubenswrapper[5050]: I1211 15:19:07.232968 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c8d953cd-0732-41cc-8005-18aa2145cb8c-ovsdbserver-nb\") pod \"dnsmasq-dns-79754c57d9-knr44\" (UID: \"c8d953cd-0732-41cc-8005-18aa2145cb8c\") " pod="openstack/dnsmasq-dns-79754c57d9-knr44" Dec 11 15:19:07 crc kubenswrapper[5050]: I1211 15:19:07.233371 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c8d953cd-0732-41cc-8005-18aa2145cb8c-ovsdbserver-sb\") pod \"dnsmasq-dns-79754c57d9-knr44\" (UID: \"c8d953cd-0732-41cc-8005-18aa2145cb8c\") " pod="openstack/dnsmasq-dns-79754c57d9-knr44" Dec 11 15:19:07 crc kubenswrapper[5050]: I1211 15:19:07.233482 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c8d953cd-0732-41cc-8005-18aa2145cb8c-dns-svc\") pod \"dnsmasq-dns-79754c57d9-knr44\" (UID: \"c8d953cd-0732-41cc-8005-18aa2145cb8c\") " pod="openstack/dnsmasq-dns-79754c57d9-knr44" Dec 11 15:19:07 crc kubenswrapper[5050]: I1211 15:19:07.279165 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2qr87\" (UniqueName: \"kubernetes.io/projected/c8d953cd-0732-41cc-8005-18aa2145cb8c-kube-api-access-2qr87\") pod \"dnsmasq-dns-79754c57d9-knr44\" (UID: \"c8d953cd-0732-41cc-8005-18aa2145cb8c\") " pod="openstack/dnsmasq-dns-79754c57d9-knr44" Dec 11 15:19:07 crc kubenswrapper[5050]: I1211 15:19:07.333371 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/e3e22310-61f8-4656-a58b-00164cda77a6-httpd-config\") pod \"neutron-857bd6c55c-xnk9l\" (UID: \"e3e22310-61f8-4656-a58b-00164cda77a6\") " pod="openstack/neutron-857bd6c55c-xnk9l" Dec 11 15:19:07 crc kubenswrapper[5050]: I1211 15:19:07.333742 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e3e22310-61f8-4656-a58b-00164cda77a6-config\") pod \"neutron-857bd6c55c-xnk9l\" (UID: \"e3e22310-61f8-4656-a58b-00164cda77a6\") " pod="openstack/neutron-857bd6c55c-xnk9l" Dec 11 15:19:07 crc kubenswrapper[5050]: I1211 15:19:07.333774 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/e3e22310-61f8-4656-a58b-00164cda77a6-combined-ca-bundle\") pod \"neutron-857bd6c55c-xnk9l\" (UID: \"e3e22310-61f8-4656-a58b-00164cda77a6\") " pod="openstack/neutron-857bd6c55c-xnk9l" Dec 11 15:19:07 crc kubenswrapper[5050]: I1211 15:19:07.333811 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4t67f\" (UniqueName: \"kubernetes.io/projected/e3e22310-61f8-4656-a58b-00164cda77a6-kube-api-access-4t67f\") pod \"neutron-857bd6c55c-xnk9l\" (UID: \"e3e22310-61f8-4656-a58b-00164cda77a6\") " pod="openstack/neutron-857bd6c55c-xnk9l" Dec 11 15:19:07 crc kubenswrapper[5050]: I1211 15:19:07.346762 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/e3e22310-61f8-4656-a58b-00164cda77a6-httpd-config\") pod \"neutron-857bd6c55c-xnk9l\" (UID: \"e3e22310-61f8-4656-a58b-00164cda77a6\") " pod="openstack/neutron-857bd6c55c-xnk9l" Dec 11 15:19:07 crc kubenswrapper[5050]: I1211 15:19:07.357878 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3e22310-61f8-4656-a58b-00164cda77a6-combined-ca-bundle\") pod \"neutron-857bd6c55c-xnk9l\" (UID: \"e3e22310-61f8-4656-a58b-00164cda77a6\") " pod="openstack/neutron-857bd6c55c-xnk9l" Dec 11 15:19:07 crc kubenswrapper[5050]: I1211 15:19:07.357926 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/e3e22310-61f8-4656-a58b-00164cda77a6-config\") pod \"neutron-857bd6c55c-xnk9l\" (UID: \"e3e22310-61f8-4656-a58b-00164cda77a6\") " pod="openstack/neutron-857bd6c55c-xnk9l" Dec 11 15:19:07 crc kubenswrapper[5050]: I1211 15:19:07.369366 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4t67f\" (UniqueName: \"kubernetes.io/projected/e3e22310-61f8-4656-a58b-00164cda77a6-kube-api-access-4t67f\") pod \"neutron-857bd6c55c-xnk9l\" (UID: \"e3e22310-61f8-4656-a58b-00164cda77a6\") " pod="openstack/neutron-857bd6c55c-xnk9l" Dec 11 15:19:07 crc kubenswrapper[5050]: I1211 15:19:07.390287 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79754c57d9-knr44" Dec 11 15:19:07 crc kubenswrapper[5050]: I1211 15:19:07.528123 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-857bd6c55c-xnk9l" Dec 11 15:19:07 crc kubenswrapper[5050]: I1211 15:19:07.863851 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-857bd6c55c-xnk9l"] Dec 11 15:19:07 crc kubenswrapper[5050]: I1211 15:19:07.876866 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-857bd6c55c-xnk9l" event={"ID":"e3e22310-61f8-4656-a58b-00164cda77a6","Type":"ContainerStarted","Data":"ea0cc6269dd551504484103175806f01ed32222d26f54000e1a7fdc316acf24a"} Dec 11 15:19:07 crc kubenswrapper[5050]: I1211 15:19:07.889892 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-79754c57d9-knr44"] Dec 11 15:19:08 crc kubenswrapper[5050]: I1211 15:19:08.886136 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-857bd6c55c-xnk9l" event={"ID":"e3e22310-61f8-4656-a58b-00164cda77a6","Type":"ContainerStarted","Data":"758e123bbe0143a0d04d5596659b6fdb192aff7c1db85d43ffdad8ad2586256c"} Dec 11 15:19:08 crc kubenswrapper[5050]: I1211 15:19:08.886439 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-857bd6c55c-xnk9l" event={"ID":"e3e22310-61f8-4656-a58b-00164cda77a6","Type":"ContainerStarted","Data":"fe64d1a9528ee7af2e845aa747e1d37948b0d79543b87281e685a98f7c2d1aa4"} Dec 11 15:19:08 crc kubenswrapper[5050]: I1211 15:19:08.887578 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-857bd6c55c-xnk9l" Dec 11 15:19:08 crc kubenswrapper[5050]: I1211 15:19:08.891074 5050 generic.go:334] "Generic (PLEG): container finished" podID="c8d953cd-0732-41cc-8005-18aa2145cb8c" containerID="4129d7734129500ad151c362161ef43eddbe1207c26a158e0e19296e82353712" exitCode=0 Dec 11 15:19:08 crc kubenswrapper[5050]: I1211 15:19:08.891156 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79754c57d9-knr44" event={"ID":"c8d953cd-0732-41cc-8005-18aa2145cb8c","Type":"ContainerDied","Data":"4129d7734129500ad151c362161ef43eddbe1207c26a158e0e19296e82353712"} Dec 11 15:19:08 crc kubenswrapper[5050]: I1211 15:19:08.891181 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79754c57d9-knr44" event={"ID":"c8d953cd-0732-41cc-8005-18aa2145cb8c","Type":"ContainerStarted","Data":"118f424ee8e38b61f3c34d29d253902d3347edd5b8a80091586ab4d496dedf2a"} Dec 11 15:19:08 crc kubenswrapper[5050]: I1211 15:19:08.934642 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-857bd6c55c-xnk9l" podStartSLOduration=1.934622418 podStartE2EDuration="1.934622418s" podCreationTimestamp="2025-12-11 15:19:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 15:19:08.917985681 +0000 UTC m=+5439.761708267" watchObservedRunningTime="2025-12-11 15:19:08.934622418 +0000 UTC m=+5439.778345004" Dec 11 15:19:09 crc kubenswrapper[5050]: I1211 15:19:09.900396 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79754c57d9-knr44" event={"ID":"c8d953cd-0732-41cc-8005-18aa2145cb8c","Type":"ContainerStarted","Data":"f8fb7f30263f3f868127bc338e25fba3a81a7a4d294e605ce240261839fdb05f"} Dec 11 15:19:09 crc kubenswrapper[5050]: I1211 15:19:09.931134 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-79754c57d9-knr44" podStartSLOduration=2.931114913 podStartE2EDuration="2.931114913s" podCreationTimestamp="2025-12-11 15:19:07 
+0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 15:19:09.925215845 +0000 UTC m=+5440.768938451" watchObservedRunningTime="2025-12-11 15:19:09.931114913 +0000 UTC m=+5440.774837499" Dec 11 15:19:10 crc kubenswrapper[5050]: I1211 15:19:10.908714 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-79754c57d9-knr44" Dec 11 15:19:17 crc kubenswrapper[5050]: I1211 15:19:17.393870 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-79754c57d9-knr44" Dec 11 15:19:17 crc kubenswrapper[5050]: I1211 15:19:17.467329 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6d9d5c6bc-xd8rc"] Dec 11 15:19:17 crc kubenswrapper[5050]: I1211 15:19:17.467558 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6d9d5c6bc-xd8rc" podUID="d0b3b56a-de8a-42dc-80a0-758b31bb4c1a" containerName="dnsmasq-dns" containerID="cri-o://2fecee04939a8bbc1c0882005945f113c0816a3b86833f66803dce3e80894c33" gracePeriod=10 Dec 11 15:19:17 crc kubenswrapper[5050]: I1211 15:19:17.546167 5050 scope.go:117] "RemoveContainer" containerID="5e751c2ff817a404aae81889734d5a9bd57af497d8e885add288368bb5b0040e" Dec 11 15:19:17 crc kubenswrapper[5050]: E1211 15:19:17.546362 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 15:19:17 crc kubenswrapper[5050]: I1211 15:19:17.971452 5050 generic.go:334] "Generic (PLEG): container finished" podID="d0b3b56a-de8a-42dc-80a0-758b31bb4c1a" containerID="2fecee04939a8bbc1c0882005945f113c0816a3b86833f66803dce3e80894c33" exitCode=0 Dec 11 15:19:17 crc kubenswrapper[5050]: I1211 15:19:17.971524 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d9d5c6bc-xd8rc" event={"ID":"d0b3b56a-de8a-42dc-80a0-758b31bb4c1a","Type":"ContainerDied","Data":"2fecee04939a8bbc1c0882005945f113c0816a3b86833f66803dce3e80894c33"} Dec 11 15:19:17 crc kubenswrapper[5050]: I1211 15:19:17.971806 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d9d5c6bc-xd8rc" event={"ID":"d0b3b56a-de8a-42dc-80a0-758b31bb4c1a","Type":"ContainerDied","Data":"be24731390de12e43cc36ecef1f1e58269ac880aaee84b7c8e3450131a9a7fd6"} Dec 11 15:19:17 crc kubenswrapper[5050]: I1211 15:19:17.971822 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="be24731390de12e43cc36ecef1f1e58269ac880aaee84b7c8e3450131a9a7fd6" Dec 11 15:19:18 crc kubenswrapper[5050]: I1211 15:19:18.055248 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6d9d5c6bc-xd8rc" Dec 11 15:19:18 crc kubenswrapper[5050]: I1211 15:19:18.222667 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d0b3b56a-de8a-42dc-80a0-758b31bb4c1a-dns-svc\") pod \"d0b3b56a-de8a-42dc-80a0-758b31bb4c1a\" (UID: \"d0b3b56a-de8a-42dc-80a0-758b31bb4c1a\") " Dec 11 15:19:18 crc kubenswrapper[5050]: I1211 15:19:18.222733 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzp96\" (UniqueName: \"kubernetes.io/projected/d0b3b56a-de8a-42dc-80a0-758b31bb4c1a-kube-api-access-lzp96\") pod \"d0b3b56a-de8a-42dc-80a0-758b31bb4c1a\" (UID: \"d0b3b56a-de8a-42dc-80a0-758b31bb4c1a\") " Dec 11 15:19:18 crc kubenswrapper[5050]: I1211 15:19:18.222780 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d0b3b56a-de8a-42dc-80a0-758b31bb4c1a-ovsdbserver-nb\") pod \"d0b3b56a-de8a-42dc-80a0-758b31bb4c1a\" (UID: \"d0b3b56a-de8a-42dc-80a0-758b31bb4c1a\") " Dec 11 15:19:18 crc kubenswrapper[5050]: I1211 15:19:18.222799 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d0b3b56a-de8a-42dc-80a0-758b31bb4c1a-ovsdbserver-sb\") pod \"d0b3b56a-de8a-42dc-80a0-758b31bb4c1a\" (UID: \"d0b3b56a-de8a-42dc-80a0-758b31bb4c1a\") " Dec 11 15:19:18 crc kubenswrapper[5050]: I1211 15:19:18.222845 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d0b3b56a-de8a-42dc-80a0-758b31bb4c1a-config\") pod \"d0b3b56a-de8a-42dc-80a0-758b31bb4c1a\" (UID: \"d0b3b56a-de8a-42dc-80a0-758b31bb4c1a\") " Dec 11 15:19:18 crc kubenswrapper[5050]: I1211 15:19:18.230226 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d0b3b56a-de8a-42dc-80a0-758b31bb4c1a-kube-api-access-lzp96" (OuterVolumeSpecName: "kube-api-access-lzp96") pod "d0b3b56a-de8a-42dc-80a0-758b31bb4c1a" (UID: "d0b3b56a-de8a-42dc-80a0-758b31bb4c1a"). InnerVolumeSpecName "kube-api-access-lzp96". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 15:19:18 crc kubenswrapper[5050]: I1211 15:19:18.262300 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d0b3b56a-de8a-42dc-80a0-758b31bb4c1a-config" (OuterVolumeSpecName: "config") pod "d0b3b56a-de8a-42dc-80a0-758b31bb4c1a" (UID: "d0b3b56a-de8a-42dc-80a0-758b31bb4c1a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 15:19:18 crc kubenswrapper[5050]: I1211 15:19:18.262542 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d0b3b56a-de8a-42dc-80a0-758b31bb4c1a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d0b3b56a-de8a-42dc-80a0-758b31bb4c1a" (UID: "d0b3b56a-de8a-42dc-80a0-758b31bb4c1a"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 15:19:18 crc kubenswrapper[5050]: I1211 15:19:18.267931 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d0b3b56a-de8a-42dc-80a0-758b31bb4c1a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d0b3b56a-de8a-42dc-80a0-758b31bb4c1a" (UID: "d0b3b56a-de8a-42dc-80a0-758b31bb4c1a"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 15:19:18 crc kubenswrapper[5050]: I1211 15:19:18.271416 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d0b3b56a-de8a-42dc-80a0-758b31bb4c1a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d0b3b56a-de8a-42dc-80a0-758b31bb4c1a" (UID: "d0b3b56a-de8a-42dc-80a0-758b31bb4c1a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 15:19:18 crc kubenswrapper[5050]: I1211 15:19:18.325054 5050 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d0b3b56a-de8a-42dc-80a0-758b31bb4c1a-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 11 15:19:18 crc kubenswrapper[5050]: I1211 15:19:18.325111 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzp96\" (UniqueName: \"kubernetes.io/projected/d0b3b56a-de8a-42dc-80a0-758b31bb4c1a-kube-api-access-lzp96\") on node \"crc\" DevicePath \"\"" Dec 11 15:19:18 crc kubenswrapper[5050]: I1211 15:19:18.325134 5050 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d0b3b56a-de8a-42dc-80a0-758b31bb4c1a-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 11 15:19:18 crc kubenswrapper[5050]: I1211 15:19:18.325153 5050 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d0b3b56a-de8a-42dc-80a0-758b31bb4c1a-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 11 15:19:18 crc kubenswrapper[5050]: I1211 15:19:18.325168 5050 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d0b3b56a-de8a-42dc-80a0-758b31bb4c1a-config\") on node \"crc\" DevicePath \"\"" Dec 11 15:19:18 crc kubenswrapper[5050]: I1211 15:19:18.991396 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6d9d5c6bc-xd8rc" Dec 11 15:19:19 crc kubenswrapper[5050]: I1211 15:19:19.030500 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6d9d5c6bc-xd8rc"] Dec 11 15:19:19 crc kubenswrapper[5050]: I1211 15:19:19.040174 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6d9d5c6bc-xd8rc"] Dec 11 15:19:19 crc kubenswrapper[5050]: I1211 15:19:19.558946 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d0b3b56a-de8a-42dc-80a0-758b31bb4c1a" path="/var/lib/kubelet/pods/d0b3b56a-de8a-42dc-80a0-758b31bb4c1a/volumes" Dec 11 15:19:29 crc kubenswrapper[5050]: I1211 15:19:29.557295 5050 scope.go:117] "RemoveContainer" containerID="5e751c2ff817a404aae81889734d5a9bd57af497d8e885add288368bb5b0040e" Dec 11 15:19:29 crc kubenswrapper[5050]: E1211 15:19:29.558236 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 15:19:37 crc kubenswrapper[5050]: I1211 15:19:37.542513 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-857bd6c55c-xnk9l" Dec 11 15:19:44 crc kubenswrapper[5050]: I1211 15:19:44.546602 5050 scope.go:117] "RemoveContainer" containerID="5e751c2ff817a404aae81889734d5a9bd57af497d8e885add288368bb5b0040e" Dec 11 15:19:44 crc kubenswrapper[5050]: E1211 15:19:44.548291 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 15:19:45 crc kubenswrapper[5050]: I1211 15:19:45.015692 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-4smbc"] Dec 11 15:19:45 crc kubenswrapper[5050]: E1211 15:19:45.016141 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0b3b56a-de8a-42dc-80a0-758b31bb4c1a" containerName="init" Dec 11 15:19:45 crc kubenswrapper[5050]: I1211 15:19:45.016163 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0b3b56a-de8a-42dc-80a0-758b31bb4c1a" containerName="init" Dec 11 15:19:45 crc kubenswrapper[5050]: E1211 15:19:45.016197 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0b3b56a-de8a-42dc-80a0-758b31bb4c1a" containerName="dnsmasq-dns" Dec 11 15:19:45 crc kubenswrapper[5050]: I1211 15:19:45.016206 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0b3b56a-de8a-42dc-80a0-758b31bb4c1a" containerName="dnsmasq-dns" Dec 11 15:19:45 crc kubenswrapper[5050]: I1211 15:19:45.016414 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0b3b56a-de8a-42dc-80a0-758b31bb4c1a" containerName="dnsmasq-dns" Dec 11 15:19:45 crc kubenswrapper[5050]: I1211 15:19:45.017146 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-4smbc" Dec 11 15:19:45 crc kubenswrapper[5050]: I1211 15:19:45.049879 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-4smbc"] Dec 11 15:19:45 crc kubenswrapper[5050]: I1211 15:19:45.121802 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-46d9-account-create-update-spksj"] Dec 11 15:19:45 crc kubenswrapper[5050]: I1211 15:19:45.123287 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-46d9-account-create-update-spksj" Dec 11 15:19:45 crc kubenswrapper[5050]: I1211 15:19:45.125543 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Dec 11 15:19:45 crc kubenswrapper[5050]: I1211 15:19:45.129424 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-46d9-account-create-update-spksj"] Dec 11 15:19:45 crc kubenswrapper[5050]: I1211 15:19:45.147886 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nd6pv\" (UniqueName: \"kubernetes.io/projected/14a8814e-5b3e-4cfd-8646-40bd756bdec8-kube-api-access-nd6pv\") pod \"glance-db-create-4smbc\" (UID: \"14a8814e-5b3e-4cfd-8646-40bd756bdec8\") " pod="openstack/glance-db-create-4smbc" Dec 11 15:19:45 crc kubenswrapper[5050]: I1211 15:19:45.147994 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/14a8814e-5b3e-4cfd-8646-40bd756bdec8-operator-scripts\") pod \"glance-db-create-4smbc\" (UID: \"14a8814e-5b3e-4cfd-8646-40bd756bdec8\") " pod="openstack/glance-db-create-4smbc" Dec 11 15:19:45 crc kubenswrapper[5050]: I1211 15:19:45.249452 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6qm4\" (UniqueName: \"kubernetes.io/projected/557216c3-9af4-4436-b52d-5d77e1562f8d-kube-api-access-v6qm4\") pod \"glance-46d9-account-create-update-spksj\" (UID: \"557216c3-9af4-4436-b52d-5d77e1562f8d\") " pod="openstack/glance-46d9-account-create-update-spksj" Dec 11 15:19:45 crc kubenswrapper[5050]: I1211 15:19:45.249543 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nd6pv\" (UniqueName: \"kubernetes.io/projected/14a8814e-5b3e-4cfd-8646-40bd756bdec8-kube-api-access-nd6pv\") pod \"glance-db-create-4smbc\" (UID: \"14a8814e-5b3e-4cfd-8646-40bd756bdec8\") " pod="openstack/glance-db-create-4smbc" Dec 11 15:19:45 crc kubenswrapper[5050]: I1211 15:19:45.249614 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/557216c3-9af4-4436-b52d-5d77e1562f8d-operator-scripts\") pod \"glance-46d9-account-create-update-spksj\" (UID: \"557216c3-9af4-4436-b52d-5d77e1562f8d\") " pod="openstack/glance-46d9-account-create-update-spksj" Dec 11 15:19:45 crc kubenswrapper[5050]: I1211 15:19:45.249647 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/14a8814e-5b3e-4cfd-8646-40bd756bdec8-operator-scripts\") pod \"glance-db-create-4smbc\" (UID: \"14a8814e-5b3e-4cfd-8646-40bd756bdec8\") " pod="openstack/glance-db-create-4smbc" Dec 11 15:19:45 crc kubenswrapper[5050]: I1211 15:19:45.250557 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/14a8814e-5b3e-4cfd-8646-40bd756bdec8-operator-scripts\") pod \"glance-db-create-4smbc\" (UID: \"14a8814e-5b3e-4cfd-8646-40bd756bdec8\") " pod="openstack/glance-db-create-4smbc" Dec 11 15:19:45 crc kubenswrapper[5050]: I1211 15:19:45.280667 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nd6pv\" (UniqueName: \"kubernetes.io/projected/14a8814e-5b3e-4cfd-8646-40bd756bdec8-kube-api-access-nd6pv\") pod \"glance-db-create-4smbc\" (UID: \"14a8814e-5b3e-4cfd-8646-40bd756bdec8\") " pod="openstack/glance-db-create-4smbc" Dec 11 15:19:45 crc kubenswrapper[5050]: I1211 15:19:45.351181 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/557216c3-9af4-4436-b52d-5d77e1562f8d-operator-scripts\") pod \"glance-46d9-account-create-update-spksj\" (UID: \"557216c3-9af4-4436-b52d-5d77e1562f8d\") " pod="openstack/glance-46d9-account-create-update-spksj" Dec 11 15:19:45 crc kubenswrapper[5050]: I1211 15:19:45.351287 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v6qm4\" (UniqueName: \"kubernetes.io/projected/557216c3-9af4-4436-b52d-5d77e1562f8d-kube-api-access-v6qm4\") pod \"glance-46d9-account-create-update-spksj\" (UID: \"557216c3-9af4-4436-b52d-5d77e1562f8d\") " pod="openstack/glance-46d9-account-create-update-spksj" Dec 11 15:19:45 crc kubenswrapper[5050]: I1211 15:19:45.352188 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/557216c3-9af4-4436-b52d-5d77e1562f8d-operator-scripts\") pod \"glance-46d9-account-create-update-spksj\" (UID: \"557216c3-9af4-4436-b52d-5d77e1562f8d\") " pod="openstack/glance-46d9-account-create-update-spksj" Dec 11 15:19:45 crc kubenswrapper[5050]: I1211 15:19:45.354597 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-4smbc" Dec 11 15:19:45 crc kubenswrapper[5050]: I1211 15:19:45.367325 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v6qm4\" (UniqueName: \"kubernetes.io/projected/557216c3-9af4-4436-b52d-5d77e1562f8d-kube-api-access-v6qm4\") pod \"glance-46d9-account-create-update-spksj\" (UID: \"557216c3-9af4-4436-b52d-5d77e1562f8d\") " pod="openstack/glance-46d9-account-create-update-spksj" Dec 11 15:19:45 crc kubenswrapper[5050]: I1211 15:19:45.440314 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-46d9-account-create-update-spksj" Dec 11 15:19:45 crc kubenswrapper[5050]: I1211 15:19:45.696326 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-46d9-account-create-update-spksj"] Dec 11 15:19:45 crc kubenswrapper[5050]: I1211 15:19:45.805099 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-4smbc"] Dec 11 15:19:45 crc kubenswrapper[5050]: W1211 15:19:45.811222 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod14a8814e_5b3e_4cfd_8646_40bd756bdec8.slice/crio-32a439cc9cf26df37b572c5d8451df488c89702df00561969c8462ea9e21a288 WatchSource:0}: Error finding container 32a439cc9cf26df37b572c5d8451df488c89702df00561969c8462ea9e21a288: Status 404 returned error can't find the container with id 32a439cc9cf26df37b572c5d8451df488c89702df00561969c8462ea9e21a288 Dec 11 15:19:46 crc kubenswrapper[5050]: I1211 15:19:46.291803 5050 generic.go:334] "Generic (PLEG): container finished" podID="14a8814e-5b3e-4cfd-8646-40bd756bdec8" containerID="0813a32ca5633075f04d57d7481a616bfb3b1d79ea23b31d7a44c9ad1e2eaa70" exitCode=0 Dec 11 15:19:46 crc kubenswrapper[5050]: I1211 15:19:46.291897 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-4smbc" event={"ID":"14a8814e-5b3e-4cfd-8646-40bd756bdec8","Type":"ContainerDied","Data":"0813a32ca5633075f04d57d7481a616bfb3b1d79ea23b31d7a44c9ad1e2eaa70"} Dec 11 15:19:46 crc kubenswrapper[5050]: I1211 15:19:46.292088 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-4smbc" event={"ID":"14a8814e-5b3e-4cfd-8646-40bd756bdec8","Type":"ContainerStarted","Data":"32a439cc9cf26df37b572c5d8451df488c89702df00561969c8462ea9e21a288"} Dec 11 15:19:46 crc kubenswrapper[5050]: I1211 15:19:46.294037 5050 generic.go:334] "Generic (PLEG): container finished" podID="557216c3-9af4-4436-b52d-5d77e1562f8d" containerID="50f8d2a0e4aea4a31f9c39bf2d8856658bd5a89d1549f2181eaf4daf44aefe00" exitCode=0 Dec 11 15:19:46 crc kubenswrapper[5050]: I1211 15:19:46.294074 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-46d9-account-create-update-spksj" event={"ID":"557216c3-9af4-4436-b52d-5d77e1562f8d","Type":"ContainerDied","Data":"50f8d2a0e4aea4a31f9c39bf2d8856658bd5a89d1549f2181eaf4daf44aefe00"} Dec 11 15:19:46 crc kubenswrapper[5050]: I1211 15:19:46.294112 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-46d9-account-create-update-spksj" event={"ID":"557216c3-9af4-4436-b52d-5d77e1562f8d","Type":"ContainerStarted","Data":"ed18b014bf62f4dd50e80ba3a17734c482fe3c1015115c09d820ccc16fe624f9"} Dec 11 15:19:47 crc kubenswrapper[5050]: I1211 15:19:47.717407 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-4smbc" Dec 11 15:19:47 crc kubenswrapper[5050]: I1211 15:19:47.723703 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-46d9-account-create-update-spksj" Dec 11 15:19:47 crc kubenswrapper[5050]: I1211 15:19:47.823121 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/14a8814e-5b3e-4cfd-8646-40bd756bdec8-operator-scripts\") pod \"14a8814e-5b3e-4cfd-8646-40bd756bdec8\" (UID: \"14a8814e-5b3e-4cfd-8646-40bd756bdec8\") " Dec 11 15:19:47 crc kubenswrapper[5050]: I1211 15:19:47.823167 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/557216c3-9af4-4436-b52d-5d77e1562f8d-operator-scripts\") pod \"557216c3-9af4-4436-b52d-5d77e1562f8d\" (UID: \"557216c3-9af4-4436-b52d-5d77e1562f8d\") " Dec 11 15:19:47 crc kubenswrapper[5050]: I1211 15:19:47.823449 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nd6pv\" (UniqueName: \"kubernetes.io/projected/14a8814e-5b3e-4cfd-8646-40bd756bdec8-kube-api-access-nd6pv\") pod \"14a8814e-5b3e-4cfd-8646-40bd756bdec8\" (UID: \"14a8814e-5b3e-4cfd-8646-40bd756bdec8\") " Dec 11 15:19:47 crc kubenswrapper[5050]: I1211 15:19:47.823490 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v6qm4\" (UniqueName: \"kubernetes.io/projected/557216c3-9af4-4436-b52d-5d77e1562f8d-kube-api-access-v6qm4\") pod \"557216c3-9af4-4436-b52d-5d77e1562f8d\" (UID: \"557216c3-9af4-4436-b52d-5d77e1562f8d\") " Dec 11 15:19:47 crc kubenswrapper[5050]: I1211 15:19:47.824082 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/557216c3-9af4-4436-b52d-5d77e1562f8d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "557216c3-9af4-4436-b52d-5d77e1562f8d" (UID: "557216c3-9af4-4436-b52d-5d77e1562f8d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 15:19:47 crc kubenswrapper[5050]: I1211 15:19:47.824507 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/14a8814e-5b3e-4cfd-8646-40bd756bdec8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "14a8814e-5b3e-4cfd-8646-40bd756bdec8" (UID: "14a8814e-5b3e-4cfd-8646-40bd756bdec8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 15:19:47 crc kubenswrapper[5050]: I1211 15:19:47.828459 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/557216c3-9af4-4436-b52d-5d77e1562f8d-kube-api-access-v6qm4" (OuterVolumeSpecName: "kube-api-access-v6qm4") pod "557216c3-9af4-4436-b52d-5d77e1562f8d" (UID: "557216c3-9af4-4436-b52d-5d77e1562f8d"). InnerVolumeSpecName "kube-api-access-v6qm4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 15:19:47 crc kubenswrapper[5050]: I1211 15:19:47.828895 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14a8814e-5b3e-4cfd-8646-40bd756bdec8-kube-api-access-nd6pv" (OuterVolumeSpecName: "kube-api-access-nd6pv") pod "14a8814e-5b3e-4cfd-8646-40bd756bdec8" (UID: "14a8814e-5b3e-4cfd-8646-40bd756bdec8"). InnerVolumeSpecName "kube-api-access-nd6pv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 15:19:47 crc kubenswrapper[5050]: I1211 15:19:47.925572 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nd6pv\" (UniqueName: \"kubernetes.io/projected/14a8814e-5b3e-4cfd-8646-40bd756bdec8-kube-api-access-nd6pv\") on node \"crc\" DevicePath \"\"" Dec 11 15:19:47 crc kubenswrapper[5050]: I1211 15:19:47.925603 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v6qm4\" (UniqueName: \"kubernetes.io/projected/557216c3-9af4-4436-b52d-5d77e1562f8d-kube-api-access-v6qm4\") on node \"crc\" DevicePath \"\"" Dec 11 15:19:47 crc kubenswrapper[5050]: I1211 15:19:47.925614 5050 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/14a8814e-5b3e-4cfd-8646-40bd756bdec8-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 15:19:47 crc kubenswrapper[5050]: I1211 15:19:47.925625 5050 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/557216c3-9af4-4436-b52d-5d77e1562f8d-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 15:19:48 crc kubenswrapper[5050]: I1211 15:19:48.314345 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-46d9-account-create-update-spksj" event={"ID":"557216c3-9af4-4436-b52d-5d77e1562f8d","Type":"ContainerDied","Data":"ed18b014bf62f4dd50e80ba3a17734c482fe3c1015115c09d820ccc16fe624f9"} Dec 11 15:19:48 crc kubenswrapper[5050]: I1211 15:19:48.314401 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ed18b014bf62f4dd50e80ba3a17734c482fe3c1015115c09d820ccc16fe624f9" Dec 11 15:19:48 crc kubenswrapper[5050]: I1211 15:19:48.314369 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-46d9-account-create-update-spksj" Dec 11 15:19:48 crc kubenswrapper[5050]: I1211 15:19:48.316491 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-4smbc" event={"ID":"14a8814e-5b3e-4cfd-8646-40bd756bdec8","Type":"ContainerDied","Data":"32a439cc9cf26df37b572c5d8451df488c89702df00561969c8462ea9e21a288"} Dec 11 15:19:48 crc kubenswrapper[5050]: I1211 15:19:48.316526 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="32a439cc9cf26df37b572c5d8451df488c89702df00561969c8462ea9e21a288" Dec 11 15:19:48 crc kubenswrapper[5050]: I1211 15:19:48.316604 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-4smbc" Dec 11 15:19:50 crc kubenswrapper[5050]: I1211 15:19:50.346972 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-8p965"] Dec 11 15:19:50 crc kubenswrapper[5050]: E1211 15:19:50.347775 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14a8814e-5b3e-4cfd-8646-40bd756bdec8" containerName="mariadb-database-create" Dec 11 15:19:50 crc kubenswrapper[5050]: I1211 15:19:50.347797 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="14a8814e-5b3e-4cfd-8646-40bd756bdec8" containerName="mariadb-database-create" Dec 11 15:19:50 crc kubenswrapper[5050]: E1211 15:19:50.347845 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="557216c3-9af4-4436-b52d-5d77e1562f8d" containerName="mariadb-account-create-update" Dec 11 15:19:50 crc kubenswrapper[5050]: I1211 15:19:50.347858 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="557216c3-9af4-4436-b52d-5d77e1562f8d" containerName="mariadb-account-create-update" Dec 11 15:19:50 crc kubenswrapper[5050]: I1211 15:19:50.348200 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="14a8814e-5b3e-4cfd-8646-40bd756bdec8" containerName="mariadb-database-create" Dec 11 15:19:50 crc kubenswrapper[5050]: I1211 15:19:50.348254 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="557216c3-9af4-4436-b52d-5d77e1562f8d" containerName="mariadb-account-create-update" Dec 11 15:19:50 crc kubenswrapper[5050]: I1211 15:19:50.349660 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-8p965" Dec 11 15:19:50 crc kubenswrapper[5050]: I1211 15:19:50.354701 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Dec 11 15:19:50 crc kubenswrapper[5050]: I1211 15:19:50.354730 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-ndgnr" Dec 11 15:19:50 crc kubenswrapper[5050]: I1211 15:19:50.375215 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-8p965"] Dec 11 15:19:50 crc kubenswrapper[5050]: I1211 15:19:50.473748 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/866d0d68-e4c3-4087-8158-4bc958909d2d-combined-ca-bundle\") pod \"glance-db-sync-8p965\" (UID: \"866d0d68-e4c3-4087-8158-4bc958909d2d\") " pod="openstack/glance-db-sync-8p965" Dec 11 15:19:50 crc kubenswrapper[5050]: I1211 15:19:50.474046 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/866d0d68-e4c3-4087-8158-4bc958909d2d-config-data\") pod \"glance-db-sync-8p965\" (UID: \"866d0d68-e4c3-4087-8158-4bc958909d2d\") " pod="openstack/glance-db-sync-8p965" Dec 11 15:19:50 crc kubenswrapper[5050]: I1211 15:19:50.474204 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hsv67\" (UniqueName: \"kubernetes.io/projected/866d0d68-e4c3-4087-8158-4bc958909d2d-kube-api-access-hsv67\") pod \"glance-db-sync-8p965\" (UID: \"866d0d68-e4c3-4087-8158-4bc958909d2d\") " pod="openstack/glance-db-sync-8p965" Dec 11 15:19:50 crc kubenswrapper[5050]: I1211 15:19:50.474305 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: 
\"kubernetes.io/secret/866d0d68-e4c3-4087-8158-4bc958909d2d-db-sync-config-data\") pod \"glance-db-sync-8p965\" (UID: \"866d0d68-e4c3-4087-8158-4bc958909d2d\") " pod="openstack/glance-db-sync-8p965" Dec 11 15:19:50 crc kubenswrapper[5050]: I1211 15:19:50.575990 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hsv67\" (UniqueName: \"kubernetes.io/projected/866d0d68-e4c3-4087-8158-4bc958909d2d-kube-api-access-hsv67\") pod \"glance-db-sync-8p965\" (UID: \"866d0d68-e4c3-4087-8158-4bc958909d2d\") " pod="openstack/glance-db-sync-8p965" Dec 11 15:19:50 crc kubenswrapper[5050]: I1211 15:19:50.576622 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/866d0d68-e4c3-4087-8158-4bc958909d2d-db-sync-config-data\") pod \"glance-db-sync-8p965\" (UID: \"866d0d68-e4c3-4087-8158-4bc958909d2d\") " pod="openstack/glance-db-sync-8p965" Dec 11 15:19:50 crc kubenswrapper[5050]: I1211 15:19:50.577356 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/866d0d68-e4c3-4087-8158-4bc958909d2d-combined-ca-bundle\") pod \"glance-db-sync-8p965\" (UID: \"866d0d68-e4c3-4087-8158-4bc958909d2d\") " pod="openstack/glance-db-sync-8p965" Dec 11 15:19:50 crc kubenswrapper[5050]: I1211 15:19:50.577486 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/866d0d68-e4c3-4087-8158-4bc958909d2d-config-data\") pod \"glance-db-sync-8p965\" (UID: \"866d0d68-e4c3-4087-8158-4bc958909d2d\") " pod="openstack/glance-db-sync-8p965" Dec 11 15:19:50 crc kubenswrapper[5050]: I1211 15:19:50.581371 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/866d0d68-e4c3-4087-8158-4bc958909d2d-db-sync-config-data\") pod \"glance-db-sync-8p965\" (UID: \"866d0d68-e4c3-4087-8158-4bc958909d2d\") " pod="openstack/glance-db-sync-8p965" Dec 11 15:19:50 crc kubenswrapper[5050]: I1211 15:19:50.581859 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/866d0d68-e4c3-4087-8158-4bc958909d2d-config-data\") pod \"glance-db-sync-8p965\" (UID: \"866d0d68-e4c3-4087-8158-4bc958909d2d\") " pod="openstack/glance-db-sync-8p965" Dec 11 15:19:50 crc kubenswrapper[5050]: I1211 15:19:50.600573 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hsv67\" (UniqueName: \"kubernetes.io/projected/866d0d68-e4c3-4087-8158-4bc958909d2d-kube-api-access-hsv67\") pod \"glance-db-sync-8p965\" (UID: \"866d0d68-e4c3-4087-8158-4bc958909d2d\") " pod="openstack/glance-db-sync-8p965" Dec 11 15:19:50 crc kubenswrapper[5050]: I1211 15:19:50.600611 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/866d0d68-e4c3-4087-8158-4bc958909d2d-combined-ca-bundle\") pod \"glance-db-sync-8p965\" (UID: \"866d0d68-e4c3-4087-8158-4bc958909d2d\") " pod="openstack/glance-db-sync-8p965" Dec 11 15:19:50 crc kubenswrapper[5050]: I1211 15:19:50.672176 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-8p965" Dec 11 15:19:51 crc kubenswrapper[5050]: I1211 15:19:51.032444 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-8p965"] Dec 11 15:19:51 crc kubenswrapper[5050]: W1211 15:19:51.040102 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod866d0d68_e4c3_4087_8158_4bc958909d2d.slice/crio-d082ed9346f8eee9665ee6741151cc7eca89430dfe4c4b0363ff0cc6a41a5d64 WatchSource:0}: Error finding container d082ed9346f8eee9665ee6741151cc7eca89430dfe4c4b0363ff0cc6a41a5d64: Status 404 returned error can't find the container with id d082ed9346f8eee9665ee6741151cc7eca89430dfe4c4b0363ff0cc6a41a5d64 Dec 11 15:19:51 crc kubenswrapper[5050]: I1211 15:19:51.345841 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-8p965" event={"ID":"866d0d68-e4c3-4087-8158-4bc958909d2d","Type":"ContainerStarted","Data":"d082ed9346f8eee9665ee6741151cc7eca89430dfe4c4b0363ff0cc6a41a5d64"} Dec 11 15:19:52 crc kubenswrapper[5050]: I1211 15:19:52.354642 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-8p965" event={"ID":"866d0d68-e4c3-4087-8158-4bc958909d2d","Type":"ContainerStarted","Data":"ecb21c1d38076d022ead6ee9fc040f45ddec40fa36304511acb01c43faf73ad9"} Dec 11 15:19:52 crc kubenswrapper[5050]: I1211 15:19:52.375467 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-8p965" podStartSLOduration=2.375447881 podStartE2EDuration="2.375447881s" podCreationTimestamp="2025-12-11 15:19:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 15:19:52.374383542 +0000 UTC m=+5483.218106128" watchObservedRunningTime="2025-12-11 15:19:52.375447881 +0000 UTC m=+5483.219170467" Dec 11 15:19:55 crc kubenswrapper[5050]: I1211 15:19:55.381200 5050 generic.go:334] "Generic (PLEG): container finished" podID="866d0d68-e4c3-4087-8158-4bc958909d2d" containerID="ecb21c1d38076d022ead6ee9fc040f45ddec40fa36304511acb01c43faf73ad9" exitCode=0 Dec 11 15:19:55 crc kubenswrapper[5050]: I1211 15:19:55.381348 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-8p965" event={"ID":"866d0d68-e4c3-4087-8158-4bc958909d2d","Type":"ContainerDied","Data":"ecb21c1d38076d022ead6ee9fc040f45ddec40fa36304511acb01c43faf73ad9"} Dec 11 15:19:56 crc kubenswrapper[5050]: I1211 15:19:56.768209 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-8p965" Dec 11 15:19:56 crc kubenswrapper[5050]: I1211 15:19:56.877225 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hsv67\" (UniqueName: \"kubernetes.io/projected/866d0d68-e4c3-4087-8158-4bc958909d2d-kube-api-access-hsv67\") pod \"866d0d68-e4c3-4087-8158-4bc958909d2d\" (UID: \"866d0d68-e4c3-4087-8158-4bc958909d2d\") " Dec 11 15:19:56 crc kubenswrapper[5050]: I1211 15:19:56.877312 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/866d0d68-e4c3-4087-8158-4bc958909d2d-db-sync-config-data\") pod \"866d0d68-e4c3-4087-8158-4bc958909d2d\" (UID: \"866d0d68-e4c3-4087-8158-4bc958909d2d\") " Dec 11 15:19:56 crc kubenswrapper[5050]: I1211 15:19:56.877419 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/866d0d68-e4c3-4087-8158-4bc958909d2d-config-data\") pod \"866d0d68-e4c3-4087-8158-4bc958909d2d\" (UID: \"866d0d68-e4c3-4087-8158-4bc958909d2d\") " Dec 11 15:19:56 crc kubenswrapper[5050]: I1211 15:19:56.877449 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/866d0d68-e4c3-4087-8158-4bc958909d2d-combined-ca-bundle\") pod \"866d0d68-e4c3-4087-8158-4bc958909d2d\" (UID: \"866d0d68-e4c3-4087-8158-4bc958909d2d\") " Dec 11 15:19:56 crc kubenswrapper[5050]: I1211 15:19:56.882702 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/866d0d68-e4c3-4087-8158-4bc958909d2d-kube-api-access-hsv67" (OuterVolumeSpecName: "kube-api-access-hsv67") pod "866d0d68-e4c3-4087-8158-4bc958909d2d" (UID: "866d0d68-e4c3-4087-8158-4bc958909d2d"). InnerVolumeSpecName "kube-api-access-hsv67". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 15:19:56 crc kubenswrapper[5050]: I1211 15:19:56.883078 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/866d0d68-e4c3-4087-8158-4bc958909d2d-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "866d0d68-e4c3-4087-8158-4bc958909d2d" (UID: "866d0d68-e4c3-4087-8158-4bc958909d2d"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 15:19:56 crc kubenswrapper[5050]: I1211 15:19:56.900446 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/866d0d68-e4c3-4087-8158-4bc958909d2d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "866d0d68-e4c3-4087-8158-4bc958909d2d" (UID: "866d0d68-e4c3-4087-8158-4bc958909d2d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 15:19:56 crc kubenswrapper[5050]: I1211 15:19:56.923809 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/866d0d68-e4c3-4087-8158-4bc958909d2d-config-data" (OuterVolumeSpecName: "config-data") pod "866d0d68-e4c3-4087-8158-4bc958909d2d" (UID: "866d0d68-e4c3-4087-8158-4bc958909d2d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 15:19:56 crc kubenswrapper[5050]: I1211 15:19:56.980066 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hsv67\" (UniqueName: \"kubernetes.io/projected/866d0d68-e4c3-4087-8158-4bc958909d2d-kube-api-access-hsv67\") on node \"crc\" DevicePath \"\"" Dec 11 15:19:56 crc kubenswrapper[5050]: I1211 15:19:56.980100 5050 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/866d0d68-e4c3-4087-8158-4bc958909d2d-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 15:19:56 crc kubenswrapper[5050]: I1211 15:19:56.980111 5050 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/866d0d68-e4c3-4087-8158-4bc958909d2d-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 15:19:56 crc kubenswrapper[5050]: I1211 15:19:56.980121 5050 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/866d0d68-e4c3-4087-8158-4bc958909d2d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 15:19:57 crc kubenswrapper[5050]: I1211 15:19:57.401403 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-8p965" event={"ID":"866d0d68-e4c3-4087-8158-4bc958909d2d","Type":"ContainerDied","Data":"d082ed9346f8eee9665ee6741151cc7eca89430dfe4c4b0363ff0cc6a41a5d64"} Dec 11 15:19:57 crc kubenswrapper[5050]: I1211 15:19:57.401741 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d082ed9346f8eee9665ee6741151cc7eca89430dfe4c4b0363ff0cc6a41a5d64" Dec 11 15:19:57 crc kubenswrapper[5050]: I1211 15:19:57.401458 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-8p965" Dec 11 15:19:57 crc kubenswrapper[5050]: I1211 15:19:57.753467 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6d84855d79-nmbkp"] Dec 11 15:19:57 crc kubenswrapper[5050]: E1211 15:19:57.753916 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="866d0d68-e4c3-4087-8158-4bc958909d2d" containerName="glance-db-sync" Dec 11 15:19:57 crc kubenswrapper[5050]: I1211 15:19:57.753939 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="866d0d68-e4c3-4087-8158-4bc958909d2d" containerName="glance-db-sync" Dec 11 15:19:57 crc kubenswrapper[5050]: I1211 15:19:57.754161 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="866d0d68-e4c3-4087-8158-4bc958909d2d" containerName="glance-db-sync" Dec 11 15:19:57 crc kubenswrapper[5050]: I1211 15:19:57.755272 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d84855d79-nmbkp" Dec 11 15:19:57 crc kubenswrapper[5050]: I1211 15:19:57.761713 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Dec 11 15:19:57 crc kubenswrapper[5050]: I1211 15:19:57.765504 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 11 15:19:57 crc kubenswrapper[5050]: I1211 15:19:57.771340 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Dec 11 15:19:57 crc kubenswrapper[5050]: I1211 15:19:57.771382 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Dec 11 15:19:57 crc kubenswrapper[5050]: I1211 15:19:57.771767 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-ndgnr" Dec 11 15:19:57 crc kubenswrapper[5050]: I1211 15:19:57.771995 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Dec 11 15:19:57 crc kubenswrapper[5050]: I1211 15:19:57.779057 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 11 15:19:57 crc kubenswrapper[5050]: I1211 15:19:57.785961 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6d84855d79-nmbkp"] Dec 11 15:19:57 crc kubenswrapper[5050]: I1211 15:19:57.804984 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/3f4baad8-b6bc-4555-a9ff-df050f578f9e-ceph\") pod \"glance-default-external-api-0\" (UID: \"3f4baad8-b6bc-4555-a9ff-df050f578f9e\") " pod="openstack/glance-default-external-api-0" Dec 11 15:19:57 crc kubenswrapper[5050]: I1211 15:19:57.805133 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3f4baad8-b6bc-4555-a9ff-df050f578f9e-logs\") pod \"glance-default-external-api-0\" (UID: \"3f4baad8-b6bc-4555-a9ff-df050f578f9e\") " pod="openstack/glance-default-external-api-0" Dec 11 15:19:57 crc kubenswrapper[5050]: I1211 15:19:57.805196 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/36cd4aea-0d4c-4dab-b57e-1065b6e2183d-config\") pod \"dnsmasq-dns-6d84855d79-nmbkp\" (UID: \"36cd4aea-0d4c-4dab-b57e-1065b6e2183d\") " pod="openstack/dnsmasq-dns-6d84855d79-nmbkp" Dec 11 15:19:57 crc kubenswrapper[5050]: I1211 15:19:57.805237 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/36cd4aea-0d4c-4dab-b57e-1065b6e2183d-ovsdbserver-nb\") pod \"dnsmasq-dns-6d84855d79-nmbkp\" (UID: \"36cd4aea-0d4c-4dab-b57e-1065b6e2183d\") " pod="openstack/dnsmasq-dns-6d84855d79-nmbkp" Dec 11 15:19:57 crc kubenswrapper[5050]: I1211 15:19:57.805285 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/36cd4aea-0d4c-4dab-b57e-1065b6e2183d-dns-svc\") pod \"dnsmasq-dns-6d84855d79-nmbkp\" (UID: \"36cd4aea-0d4c-4dab-b57e-1065b6e2183d\") " pod="openstack/dnsmasq-dns-6d84855d79-nmbkp" Dec 11 15:19:57 crc kubenswrapper[5050]: I1211 15:19:57.805315 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sd66f\" (UniqueName: \"kubernetes.io/projected/36cd4aea-0d4c-4dab-b57e-1065b6e2183d-kube-api-access-sd66f\") pod \"dnsmasq-dns-6d84855d79-nmbkp\" (UID: \"36cd4aea-0d4c-4dab-b57e-1065b6e2183d\") " pod="openstack/dnsmasq-dns-6d84855d79-nmbkp" Dec 11 15:19:57 crc kubenswrapper[5050]: I1211 15:19:57.805367 5050 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f4baad8-b6bc-4555-a9ff-df050f578f9e-scripts\") pod \"glance-default-external-api-0\" (UID: \"3f4baad8-b6bc-4555-a9ff-df050f578f9e\") " pod="openstack/glance-default-external-api-0" Dec 11 15:19:57 crc kubenswrapper[5050]: I1211 15:19:57.805408 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzkjw\" (UniqueName: \"kubernetes.io/projected/3f4baad8-b6bc-4555-a9ff-df050f578f9e-kube-api-access-wzkjw\") pod \"glance-default-external-api-0\" (UID: \"3f4baad8-b6bc-4555-a9ff-df050f578f9e\") " pod="openstack/glance-default-external-api-0" Dec 11 15:19:57 crc kubenswrapper[5050]: I1211 15:19:57.805438 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f4baad8-b6bc-4555-a9ff-df050f578f9e-config-data\") pod \"glance-default-external-api-0\" (UID: \"3f4baad8-b6bc-4555-a9ff-df050f578f9e\") " pod="openstack/glance-default-external-api-0" Dec 11 15:19:57 crc kubenswrapper[5050]: I1211 15:19:57.805459 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f4baad8-b6bc-4555-a9ff-df050f578f9e-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"3f4baad8-b6bc-4555-a9ff-df050f578f9e\") " pod="openstack/glance-default-external-api-0" Dec 11 15:19:57 crc kubenswrapper[5050]: I1211 15:19:57.805480 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3f4baad8-b6bc-4555-a9ff-df050f578f9e-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"3f4baad8-b6bc-4555-a9ff-df050f578f9e\") " pod="openstack/glance-default-external-api-0" Dec 11 15:19:57 crc kubenswrapper[5050]: I1211 15:19:57.805510 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/36cd4aea-0d4c-4dab-b57e-1065b6e2183d-ovsdbserver-sb\") pod \"dnsmasq-dns-6d84855d79-nmbkp\" (UID: \"36cd4aea-0d4c-4dab-b57e-1065b6e2183d\") " pod="openstack/dnsmasq-dns-6d84855d79-nmbkp" Dec 11 15:19:57 crc kubenswrapper[5050]: I1211 15:19:57.846447 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 11 15:19:57 crc kubenswrapper[5050]: I1211 15:19:57.848383 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 11 15:19:57 crc kubenswrapper[5050]: I1211 15:19:57.858371 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Dec 11 15:19:57 crc kubenswrapper[5050]: I1211 15:19:57.865214 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 11 15:19:57 crc kubenswrapper[5050]: I1211 15:19:57.907546 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4d0fac14-e4a9-47d6-b2d6-140d205e6772-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"4d0fac14-e4a9-47d6-b2d6-140d205e6772\") " pod="openstack/glance-default-internal-api-0" Dec 11 15:19:57 crc kubenswrapper[5050]: I1211 15:19:57.907602 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prwz9\" (UniqueName: \"kubernetes.io/projected/4d0fac14-e4a9-47d6-b2d6-140d205e6772-kube-api-access-prwz9\") pod \"glance-default-internal-api-0\" (UID: \"4d0fac14-e4a9-47d6-b2d6-140d205e6772\") " pod="openstack/glance-default-internal-api-0" Dec 11 15:19:57 crc kubenswrapper[5050]: I1211 15:19:57.907648 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/36cd4aea-0d4c-4dab-b57e-1065b6e2183d-dns-svc\") pod \"dnsmasq-dns-6d84855d79-nmbkp\" (UID: \"36cd4aea-0d4c-4dab-b57e-1065b6e2183d\") " pod="openstack/dnsmasq-dns-6d84855d79-nmbkp" Dec 11 15:19:57 crc kubenswrapper[5050]: I1211 15:19:57.907673 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sd66f\" (UniqueName: \"kubernetes.io/projected/36cd4aea-0d4c-4dab-b57e-1065b6e2183d-kube-api-access-sd66f\") pod \"dnsmasq-dns-6d84855d79-nmbkp\" (UID: \"36cd4aea-0d4c-4dab-b57e-1065b6e2183d\") " pod="openstack/dnsmasq-dns-6d84855d79-nmbkp" Dec 11 15:19:57 crc kubenswrapper[5050]: I1211 15:19:57.907713 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f4baad8-b6bc-4555-a9ff-df050f578f9e-scripts\") pod \"glance-default-external-api-0\" (UID: \"3f4baad8-b6bc-4555-a9ff-df050f578f9e\") " pod="openstack/glance-default-external-api-0" Dec 11 15:19:57 crc kubenswrapper[5050]: I1211 15:19:57.907757 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d0fac14-e4a9-47d6-b2d6-140d205e6772-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"4d0fac14-e4a9-47d6-b2d6-140d205e6772\") " pod="openstack/glance-default-internal-api-0" Dec 11 15:19:57 crc kubenswrapper[5050]: I1211 15:19:57.907774 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d0fac14-e4a9-47d6-b2d6-140d205e6772-config-data\") pod \"glance-default-internal-api-0\" (UID: \"4d0fac14-e4a9-47d6-b2d6-140d205e6772\") " pod="openstack/glance-default-internal-api-0" Dec 11 15:19:57 crc kubenswrapper[5050]: I1211 15:19:57.907800 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wzkjw\" (UniqueName: \"kubernetes.io/projected/3f4baad8-b6bc-4555-a9ff-df050f578f9e-kube-api-access-wzkjw\") pod \"glance-default-external-api-0\" (UID: 
\"3f4baad8-b6bc-4555-a9ff-df050f578f9e\") " pod="openstack/glance-default-external-api-0" Dec 11 15:19:57 crc kubenswrapper[5050]: I1211 15:19:57.907822 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f4baad8-b6bc-4555-a9ff-df050f578f9e-config-data\") pod \"glance-default-external-api-0\" (UID: \"3f4baad8-b6bc-4555-a9ff-df050f578f9e\") " pod="openstack/glance-default-external-api-0" Dec 11 15:19:57 crc kubenswrapper[5050]: I1211 15:19:57.907841 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3f4baad8-b6bc-4555-a9ff-df050f578f9e-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"3f4baad8-b6bc-4555-a9ff-df050f578f9e\") " pod="openstack/glance-default-external-api-0" Dec 11 15:19:57 crc kubenswrapper[5050]: I1211 15:19:57.907858 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f4baad8-b6bc-4555-a9ff-df050f578f9e-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"3f4baad8-b6bc-4555-a9ff-df050f578f9e\") " pod="openstack/glance-default-external-api-0" Dec 11 15:19:57 crc kubenswrapper[5050]: I1211 15:19:57.907886 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/36cd4aea-0d4c-4dab-b57e-1065b6e2183d-ovsdbserver-sb\") pod \"dnsmasq-dns-6d84855d79-nmbkp\" (UID: \"36cd4aea-0d4c-4dab-b57e-1065b6e2183d\") " pod="openstack/dnsmasq-dns-6d84855d79-nmbkp" Dec 11 15:19:57 crc kubenswrapper[5050]: I1211 15:19:57.907910 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/3f4baad8-b6bc-4555-a9ff-df050f578f9e-ceph\") pod \"glance-default-external-api-0\" (UID: \"3f4baad8-b6bc-4555-a9ff-df050f578f9e\") " pod="openstack/glance-default-external-api-0" Dec 11 15:19:57 crc kubenswrapper[5050]: I1211 15:19:57.907927 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3f4baad8-b6bc-4555-a9ff-df050f578f9e-logs\") pod \"glance-default-external-api-0\" (UID: \"3f4baad8-b6bc-4555-a9ff-df050f578f9e\") " pod="openstack/glance-default-external-api-0" Dec 11 15:19:57 crc kubenswrapper[5050]: I1211 15:19:57.907945 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4d0fac14-e4a9-47d6-b2d6-140d205e6772-scripts\") pod \"glance-default-internal-api-0\" (UID: \"4d0fac14-e4a9-47d6-b2d6-140d205e6772\") " pod="openstack/glance-default-internal-api-0" Dec 11 15:19:57 crc kubenswrapper[5050]: I1211 15:19:57.907970 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/4d0fac14-e4a9-47d6-b2d6-140d205e6772-ceph\") pod \"glance-default-internal-api-0\" (UID: \"4d0fac14-e4a9-47d6-b2d6-140d205e6772\") " pod="openstack/glance-default-internal-api-0" Dec 11 15:19:57 crc kubenswrapper[5050]: I1211 15:19:57.907994 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/36cd4aea-0d4c-4dab-b57e-1065b6e2183d-config\") pod \"dnsmasq-dns-6d84855d79-nmbkp\" (UID: \"36cd4aea-0d4c-4dab-b57e-1065b6e2183d\") " pod="openstack/dnsmasq-dns-6d84855d79-nmbkp" Dec 11 15:19:57 
crc kubenswrapper[5050]: I1211 15:19:57.908044 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4d0fac14-e4a9-47d6-b2d6-140d205e6772-logs\") pod \"glance-default-internal-api-0\" (UID: \"4d0fac14-e4a9-47d6-b2d6-140d205e6772\") " pod="openstack/glance-default-internal-api-0" Dec 11 15:19:57 crc kubenswrapper[5050]: I1211 15:19:57.908071 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/36cd4aea-0d4c-4dab-b57e-1065b6e2183d-ovsdbserver-nb\") pod \"dnsmasq-dns-6d84855d79-nmbkp\" (UID: \"36cd4aea-0d4c-4dab-b57e-1065b6e2183d\") " pod="openstack/dnsmasq-dns-6d84855d79-nmbkp" Dec 11 15:19:57 crc kubenswrapper[5050]: I1211 15:19:57.909616 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3f4baad8-b6bc-4555-a9ff-df050f578f9e-logs\") pod \"glance-default-external-api-0\" (UID: \"3f4baad8-b6bc-4555-a9ff-df050f578f9e\") " pod="openstack/glance-default-external-api-0" Dec 11 15:19:57 crc kubenswrapper[5050]: I1211 15:19:57.909657 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/36cd4aea-0d4c-4dab-b57e-1065b6e2183d-ovsdbserver-nb\") pod \"dnsmasq-dns-6d84855d79-nmbkp\" (UID: \"36cd4aea-0d4c-4dab-b57e-1065b6e2183d\") " pod="openstack/dnsmasq-dns-6d84855d79-nmbkp" Dec 11 15:19:57 crc kubenswrapper[5050]: I1211 15:19:57.910058 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/36cd4aea-0d4c-4dab-b57e-1065b6e2183d-dns-svc\") pod \"dnsmasq-dns-6d84855d79-nmbkp\" (UID: \"36cd4aea-0d4c-4dab-b57e-1065b6e2183d\") " pod="openstack/dnsmasq-dns-6d84855d79-nmbkp" Dec 11 15:19:57 crc kubenswrapper[5050]: I1211 15:19:57.910079 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3f4baad8-b6bc-4555-a9ff-df050f578f9e-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"3f4baad8-b6bc-4555-a9ff-df050f578f9e\") " pod="openstack/glance-default-external-api-0" Dec 11 15:19:57 crc kubenswrapper[5050]: I1211 15:19:57.910180 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/36cd4aea-0d4c-4dab-b57e-1065b6e2183d-config\") pod \"dnsmasq-dns-6d84855d79-nmbkp\" (UID: \"36cd4aea-0d4c-4dab-b57e-1065b6e2183d\") " pod="openstack/dnsmasq-dns-6d84855d79-nmbkp" Dec 11 15:19:57 crc kubenswrapper[5050]: I1211 15:19:57.913599 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f4baad8-b6bc-4555-a9ff-df050f578f9e-scripts\") pod \"glance-default-external-api-0\" (UID: \"3f4baad8-b6bc-4555-a9ff-df050f578f9e\") " pod="openstack/glance-default-external-api-0" Dec 11 15:19:57 crc kubenswrapper[5050]: I1211 15:19:57.913750 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f4baad8-b6bc-4555-a9ff-df050f578f9e-config-data\") pod \"glance-default-external-api-0\" (UID: \"3f4baad8-b6bc-4555-a9ff-df050f578f9e\") " pod="openstack/glance-default-external-api-0" Dec 11 15:19:57 crc kubenswrapper[5050]: I1211 15:19:57.913840 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: 
\"kubernetes.io/projected/3f4baad8-b6bc-4555-a9ff-df050f578f9e-ceph\") pod \"glance-default-external-api-0\" (UID: \"3f4baad8-b6bc-4555-a9ff-df050f578f9e\") " pod="openstack/glance-default-external-api-0" Dec 11 15:19:57 crc kubenswrapper[5050]: I1211 15:19:57.922388 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/36cd4aea-0d4c-4dab-b57e-1065b6e2183d-ovsdbserver-sb\") pod \"dnsmasq-dns-6d84855d79-nmbkp\" (UID: \"36cd4aea-0d4c-4dab-b57e-1065b6e2183d\") " pod="openstack/dnsmasq-dns-6d84855d79-nmbkp" Dec 11 15:19:57 crc kubenswrapper[5050]: I1211 15:19:57.928180 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wzkjw\" (UniqueName: \"kubernetes.io/projected/3f4baad8-b6bc-4555-a9ff-df050f578f9e-kube-api-access-wzkjw\") pod \"glance-default-external-api-0\" (UID: \"3f4baad8-b6bc-4555-a9ff-df050f578f9e\") " pod="openstack/glance-default-external-api-0" Dec 11 15:19:57 crc kubenswrapper[5050]: I1211 15:19:57.934171 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sd66f\" (UniqueName: \"kubernetes.io/projected/36cd4aea-0d4c-4dab-b57e-1065b6e2183d-kube-api-access-sd66f\") pod \"dnsmasq-dns-6d84855d79-nmbkp\" (UID: \"36cd4aea-0d4c-4dab-b57e-1065b6e2183d\") " pod="openstack/dnsmasq-dns-6d84855d79-nmbkp" Dec 11 15:19:57 crc kubenswrapper[5050]: I1211 15:19:57.943848 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f4baad8-b6bc-4555-a9ff-df050f578f9e-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"3f4baad8-b6bc-4555-a9ff-df050f578f9e\") " pod="openstack/glance-default-external-api-0" Dec 11 15:19:58 crc kubenswrapper[5050]: I1211 15:19:58.009500 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4d0fac14-e4a9-47d6-b2d6-140d205e6772-logs\") pod \"glance-default-internal-api-0\" (UID: \"4d0fac14-e4a9-47d6-b2d6-140d205e6772\") " pod="openstack/glance-default-internal-api-0" Dec 11 15:19:58 crc kubenswrapper[5050]: I1211 15:19:58.009564 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4d0fac14-e4a9-47d6-b2d6-140d205e6772-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"4d0fac14-e4a9-47d6-b2d6-140d205e6772\") " pod="openstack/glance-default-internal-api-0" Dec 11 15:19:58 crc kubenswrapper[5050]: I1211 15:19:58.009590 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-prwz9\" (UniqueName: \"kubernetes.io/projected/4d0fac14-e4a9-47d6-b2d6-140d205e6772-kube-api-access-prwz9\") pod \"glance-default-internal-api-0\" (UID: \"4d0fac14-e4a9-47d6-b2d6-140d205e6772\") " pod="openstack/glance-default-internal-api-0" Dec 11 15:19:58 crc kubenswrapper[5050]: I1211 15:19:58.009667 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d0fac14-e4a9-47d6-b2d6-140d205e6772-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"4d0fac14-e4a9-47d6-b2d6-140d205e6772\") " pod="openstack/glance-default-internal-api-0" Dec 11 15:19:58 crc kubenswrapper[5050]: I1211 15:19:58.009683 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/4d0fac14-e4a9-47d6-b2d6-140d205e6772-config-data\") pod \"glance-default-internal-api-0\" (UID: \"4d0fac14-e4a9-47d6-b2d6-140d205e6772\") " pod="openstack/glance-default-internal-api-0" Dec 11 15:19:58 crc kubenswrapper[5050]: I1211 15:19:58.009728 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4d0fac14-e4a9-47d6-b2d6-140d205e6772-scripts\") pod \"glance-default-internal-api-0\" (UID: \"4d0fac14-e4a9-47d6-b2d6-140d205e6772\") " pod="openstack/glance-default-internal-api-0" Dec 11 15:19:58 crc kubenswrapper[5050]: I1211 15:19:58.009757 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/4d0fac14-e4a9-47d6-b2d6-140d205e6772-ceph\") pod \"glance-default-internal-api-0\" (UID: \"4d0fac14-e4a9-47d6-b2d6-140d205e6772\") " pod="openstack/glance-default-internal-api-0" Dec 11 15:19:58 crc kubenswrapper[5050]: I1211 15:19:58.010204 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4d0fac14-e4a9-47d6-b2d6-140d205e6772-logs\") pod \"glance-default-internal-api-0\" (UID: \"4d0fac14-e4a9-47d6-b2d6-140d205e6772\") " pod="openstack/glance-default-internal-api-0" Dec 11 15:19:58 crc kubenswrapper[5050]: I1211 15:19:58.010663 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4d0fac14-e4a9-47d6-b2d6-140d205e6772-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"4d0fac14-e4a9-47d6-b2d6-140d205e6772\") " pod="openstack/glance-default-internal-api-0" Dec 11 15:19:58 crc kubenswrapper[5050]: I1211 15:19:58.012837 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/4d0fac14-e4a9-47d6-b2d6-140d205e6772-ceph\") pod \"glance-default-internal-api-0\" (UID: \"4d0fac14-e4a9-47d6-b2d6-140d205e6772\") " pod="openstack/glance-default-internal-api-0" Dec 11 15:19:58 crc kubenswrapper[5050]: I1211 15:19:58.014325 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4d0fac14-e4a9-47d6-b2d6-140d205e6772-scripts\") pod \"glance-default-internal-api-0\" (UID: \"4d0fac14-e4a9-47d6-b2d6-140d205e6772\") " pod="openstack/glance-default-internal-api-0" Dec 11 15:19:58 crc kubenswrapper[5050]: I1211 15:19:58.014765 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d0fac14-e4a9-47d6-b2d6-140d205e6772-config-data\") pod \"glance-default-internal-api-0\" (UID: \"4d0fac14-e4a9-47d6-b2d6-140d205e6772\") " pod="openstack/glance-default-internal-api-0" Dec 11 15:19:58 crc kubenswrapper[5050]: I1211 15:19:58.016726 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d0fac14-e4a9-47d6-b2d6-140d205e6772-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"4d0fac14-e4a9-47d6-b2d6-140d205e6772\") " pod="openstack/glance-default-internal-api-0" Dec 11 15:19:58 crc kubenswrapper[5050]: I1211 15:19:58.028022 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-prwz9\" (UniqueName: \"kubernetes.io/projected/4d0fac14-e4a9-47d6-b2d6-140d205e6772-kube-api-access-prwz9\") pod \"glance-default-internal-api-0\" (UID: \"4d0fac14-e4a9-47d6-b2d6-140d205e6772\") " 
pod="openstack/glance-default-internal-api-0" Dec 11 15:19:58 crc kubenswrapper[5050]: I1211 15:19:58.076731 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d84855d79-nmbkp" Dec 11 15:19:58 crc kubenswrapper[5050]: I1211 15:19:58.089824 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 11 15:19:58 crc kubenswrapper[5050]: I1211 15:19:58.176513 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 11 15:19:58 crc kubenswrapper[5050]: I1211 15:19:58.545955 5050 scope.go:117] "RemoveContainer" containerID="5e751c2ff817a404aae81889734d5a9bd57af497d8e885add288368bb5b0040e" Dec 11 15:19:58 crc kubenswrapper[5050]: E1211 15:19:58.546454 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 15:19:58 crc kubenswrapper[5050]: I1211 15:19:58.655776 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6d84855d79-nmbkp"] Dec 11 15:19:58 crc kubenswrapper[5050]: I1211 15:19:58.692609 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 11 15:19:58 crc kubenswrapper[5050]: I1211 15:19:58.752459 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 11 15:19:58 crc kubenswrapper[5050]: I1211 15:19:58.809315 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 11 15:19:59 crc kubenswrapper[5050]: I1211 15:19:59.439395 5050 generic.go:334] "Generic (PLEG): container finished" podID="36cd4aea-0d4c-4dab-b57e-1065b6e2183d" containerID="014fafebc886bcfeb9757e835e674d7c42df2d2d7b8dc599b83a22c8696b2a13" exitCode=0 Dec 11 15:19:59 crc kubenswrapper[5050]: I1211 15:19:59.439806 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d84855d79-nmbkp" event={"ID":"36cd4aea-0d4c-4dab-b57e-1065b6e2183d","Type":"ContainerDied","Data":"014fafebc886bcfeb9757e835e674d7c42df2d2d7b8dc599b83a22c8696b2a13"} Dec 11 15:19:59 crc kubenswrapper[5050]: I1211 15:19:59.439842 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d84855d79-nmbkp" event={"ID":"36cd4aea-0d4c-4dab-b57e-1065b6e2183d","Type":"ContainerStarted","Data":"889922c4d2b66239c8f0afdaac43ed23a09b2d45c1971fe7014566fba0dbe9dd"} Dec 11 15:19:59 crc kubenswrapper[5050]: I1211 15:19:59.449583 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4d0fac14-e4a9-47d6-b2d6-140d205e6772","Type":"ContainerStarted","Data":"b0641c5969bc404030d40d2356d0ec097e8a314b4ac66cde84afb6b854bfcadc"} Dec 11 15:19:59 crc kubenswrapper[5050]: I1211 15:19:59.449641 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4d0fac14-e4a9-47d6-b2d6-140d205e6772","Type":"ContainerStarted","Data":"f9f728f94266ff3b1eb6965df4cef9e455f9160556c283e3706ce461f883e067"} Dec 11 15:19:59 crc kubenswrapper[5050]: I1211 15:19:59.452408 5050 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/glance-default-external-api-0" event={"ID":"3f4baad8-b6bc-4555-a9ff-df050f578f9e","Type":"ContainerStarted","Data":"435b0342b253ce4fc2bb2461922a5dca9375166aa07a9391011651de1dbe9360"} Dec 11 15:19:59 crc kubenswrapper[5050]: I1211 15:19:59.452440 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"3f4baad8-b6bc-4555-a9ff-df050f578f9e","Type":"ContainerStarted","Data":"481ea879ba7660d6acfaec4cca32258bcb81110d7f69c5ef493964793160f22a"} Dec 11 15:20:00 crc kubenswrapper[5050]: I1211 15:20:00.463030 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d84855d79-nmbkp" event={"ID":"36cd4aea-0d4c-4dab-b57e-1065b6e2183d","Type":"ContainerStarted","Data":"49189878d10c48084d29fcaf1240fa5c90c358535d6be71b5208d6f365de6efe"} Dec 11 15:20:00 crc kubenswrapper[5050]: I1211 15:20:00.463408 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6d84855d79-nmbkp" Dec 11 15:20:00 crc kubenswrapper[5050]: I1211 15:20:00.465160 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4d0fac14-e4a9-47d6-b2d6-140d205e6772","Type":"ContainerStarted","Data":"4ca88aeb1f1a38456cf8153c34723ecba2931c37057e2b7716a5bb78b6d4005f"} Dec 11 15:20:00 crc kubenswrapper[5050]: I1211 15:20:00.471127 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"3f4baad8-b6bc-4555-a9ff-df050f578f9e","Type":"ContainerStarted","Data":"5b2503f643923c740a665ea47f3e3ef3817643d64773cf4e31d74945baced8ec"} Dec 11 15:20:00 crc kubenswrapper[5050]: I1211 15:20:00.471237 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="3f4baad8-b6bc-4555-a9ff-df050f578f9e" containerName="glance-log" containerID="cri-o://435b0342b253ce4fc2bb2461922a5dca9375166aa07a9391011651de1dbe9360" gracePeriod=30 Dec 11 15:20:00 crc kubenswrapper[5050]: I1211 15:20:00.471441 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="3f4baad8-b6bc-4555-a9ff-df050f578f9e" containerName="glance-httpd" containerID="cri-o://5b2503f643923c740a665ea47f3e3ef3817643d64773cf4e31d74945baced8ec" gracePeriod=30 Dec 11 15:20:00 crc kubenswrapper[5050]: I1211 15:20:00.487286 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6d84855d79-nmbkp" podStartSLOduration=3.487264579 podStartE2EDuration="3.487264579s" podCreationTimestamp="2025-12-11 15:19:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 15:20:00.48360234 +0000 UTC m=+5491.327324926" watchObservedRunningTime="2025-12-11 15:20:00.487264579 +0000 UTC m=+5491.330987165" Dec 11 15:20:00 crc kubenswrapper[5050]: I1211 15:20:00.511590 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.511571041 podStartE2EDuration="3.511571041s" podCreationTimestamp="2025-12-11 15:19:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 15:20:00.507267406 +0000 UTC m=+5491.350989992" watchObservedRunningTime="2025-12-11 15:20:00.511571041 +0000 UTC m=+5491.355293627" Dec 11 15:20:00 crc kubenswrapper[5050]: I1211 
15:20:00.540822 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.540798716 podStartE2EDuration="3.540798716s" podCreationTimestamp="2025-12-11 15:19:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 15:20:00.537027795 +0000 UTC m=+5491.380750381" watchObservedRunningTime="2025-12-11 15:20:00.540798716 +0000 UTC m=+5491.384521302" Dec 11 15:20:00 crc kubenswrapper[5050]: I1211 15:20:00.624654 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 11 15:20:01 crc kubenswrapper[5050]: I1211 15:20:01.079121 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 11 15:20:01 crc kubenswrapper[5050]: I1211 15:20:01.271194 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f4baad8-b6bc-4555-a9ff-df050f578f9e-scripts\") pod \"3f4baad8-b6bc-4555-a9ff-df050f578f9e\" (UID: \"3f4baad8-b6bc-4555-a9ff-df050f578f9e\") " Dec 11 15:20:01 crc kubenswrapper[5050]: I1211 15:20:01.271285 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3f4baad8-b6bc-4555-a9ff-df050f578f9e-logs\") pod \"3f4baad8-b6bc-4555-a9ff-df050f578f9e\" (UID: \"3f4baad8-b6bc-4555-a9ff-df050f578f9e\") " Dec 11 15:20:01 crc kubenswrapper[5050]: I1211 15:20:01.271334 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3f4baad8-b6bc-4555-a9ff-df050f578f9e-httpd-run\") pod \"3f4baad8-b6bc-4555-a9ff-df050f578f9e\" (UID: \"3f4baad8-b6bc-4555-a9ff-df050f578f9e\") " Dec 11 15:20:01 crc kubenswrapper[5050]: I1211 15:20:01.271350 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/3f4baad8-b6bc-4555-a9ff-df050f578f9e-ceph\") pod \"3f4baad8-b6bc-4555-a9ff-df050f578f9e\" (UID: \"3f4baad8-b6bc-4555-a9ff-df050f578f9e\") " Dec 11 15:20:01 crc kubenswrapper[5050]: I1211 15:20:01.271443 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wzkjw\" (UniqueName: \"kubernetes.io/projected/3f4baad8-b6bc-4555-a9ff-df050f578f9e-kube-api-access-wzkjw\") pod \"3f4baad8-b6bc-4555-a9ff-df050f578f9e\" (UID: \"3f4baad8-b6bc-4555-a9ff-df050f578f9e\") " Dec 11 15:20:01 crc kubenswrapper[5050]: I1211 15:20:01.272240 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f4baad8-b6bc-4555-a9ff-df050f578f9e-combined-ca-bundle\") pod \"3f4baad8-b6bc-4555-a9ff-df050f578f9e\" (UID: \"3f4baad8-b6bc-4555-a9ff-df050f578f9e\") " Dec 11 15:20:01 crc kubenswrapper[5050]: I1211 15:20:01.271837 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f4baad8-b6bc-4555-a9ff-df050f578f9e-logs" (OuterVolumeSpecName: "logs") pod "3f4baad8-b6bc-4555-a9ff-df050f578f9e" (UID: "3f4baad8-b6bc-4555-a9ff-df050f578f9e"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 15:20:01 crc kubenswrapper[5050]: I1211 15:20:01.272269 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f4baad8-b6bc-4555-a9ff-df050f578f9e-config-data\") pod \"3f4baad8-b6bc-4555-a9ff-df050f578f9e\" (UID: \"3f4baad8-b6bc-4555-a9ff-df050f578f9e\") " Dec 11 15:20:01 crc kubenswrapper[5050]: I1211 15:20:01.271905 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f4baad8-b6bc-4555-a9ff-df050f578f9e-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "3f4baad8-b6bc-4555-a9ff-df050f578f9e" (UID: "3f4baad8-b6bc-4555-a9ff-df050f578f9e"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 15:20:01 crc kubenswrapper[5050]: I1211 15:20:01.272519 5050 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3f4baad8-b6bc-4555-a9ff-df050f578f9e-logs\") on node \"crc\" DevicePath \"\"" Dec 11 15:20:01 crc kubenswrapper[5050]: I1211 15:20:01.272531 5050 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3f4baad8-b6bc-4555-a9ff-df050f578f9e-httpd-run\") on node \"crc\" DevicePath \"\"" Dec 11 15:20:01 crc kubenswrapper[5050]: I1211 15:20:01.277344 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f4baad8-b6bc-4555-a9ff-df050f578f9e-scripts" (OuterVolumeSpecName: "scripts") pod "3f4baad8-b6bc-4555-a9ff-df050f578f9e" (UID: "3f4baad8-b6bc-4555-a9ff-df050f578f9e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 15:20:01 crc kubenswrapper[5050]: I1211 15:20:01.281852 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f4baad8-b6bc-4555-a9ff-df050f578f9e-kube-api-access-wzkjw" (OuterVolumeSpecName: "kube-api-access-wzkjw") pod "3f4baad8-b6bc-4555-a9ff-df050f578f9e" (UID: "3f4baad8-b6bc-4555-a9ff-df050f578f9e"). InnerVolumeSpecName "kube-api-access-wzkjw". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 15:20:01 crc kubenswrapper[5050]: I1211 15:20:01.286231 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f4baad8-b6bc-4555-a9ff-df050f578f9e-ceph" (OuterVolumeSpecName: "ceph") pod "3f4baad8-b6bc-4555-a9ff-df050f578f9e" (UID: "3f4baad8-b6bc-4555-a9ff-df050f578f9e"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 15:20:01 crc kubenswrapper[5050]: I1211 15:20:01.313697 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f4baad8-b6bc-4555-a9ff-df050f578f9e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3f4baad8-b6bc-4555-a9ff-df050f578f9e" (UID: "3f4baad8-b6bc-4555-a9ff-df050f578f9e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 15:20:01 crc kubenswrapper[5050]: I1211 15:20:01.334538 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f4baad8-b6bc-4555-a9ff-df050f578f9e-config-data" (OuterVolumeSpecName: "config-data") pod "3f4baad8-b6bc-4555-a9ff-df050f578f9e" (UID: "3f4baad8-b6bc-4555-a9ff-df050f578f9e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 15:20:01 crc kubenswrapper[5050]: I1211 15:20:01.375409 5050 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f4baad8-b6bc-4555-a9ff-df050f578f9e-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 15:20:01 crc kubenswrapper[5050]: I1211 15:20:01.375449 5050 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/3f4baad8-b6bc-4555-a9ff-df050f578f9e-ceph\") on node \"crc\" DevicePath \"\"" Dec 11 15:20:01 crc kubenswrapper[5050]: I1211 15:20:01.375465 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wzkjw\" (UniqueName: \"kubernetes.io/projected/3f4baad8-b6bc-4555-a9ff-df050f578f9e-kube-api-access-wzkjw\") on node \"crc\" DevicePath \"\"" Dec 11 15:20:01 crc kubenswrapper[5050]: I1211 15:20:01.375483 5050 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f4baad8-b6bc-4555-a9ff-df050f578f9e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 15:20:01 crc kubenswrapper[5050]: I1211 15:20:01.375496 5050 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f4baad8-b6bc-4555-a9ff-df050f578f9e-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 15:20:01 crc kubenswrapper[5050]: I1211 15:20:01.484339 5050 generic.go:334] "Generic (PLEG): container finished" podID="3f4baad8-b6bc-4555-a9ff-df050f578f9e" containerID="5b2503f643923c740a665ea47f3e3ef3817643d64773cf4e31d74945baced8ec" exitCode=0 Dec 11 15:20:01 crc kubenswrapper[5050]: I1211 15:20:01.485091 5050 generic.go:334] "Generic (PLEG): container finished" podID="3f4baad8-b6bc-4555-a9ff-df050f578f9e" containerID="435b0342b253ce4fc2bb2461922a5dca9375166aa07a9391011651de1dbe9360" exitCode=143 Dec 11 15:20:01 crc kubenswrapper[5050]: I1211 15:20:01.484418 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"3f4baad8-b6bc-4555-a9ff-df050f578f9e","Type":"ContainerDied","Data":"5b2503f643923c740a665ea47f3e3ef3817643d64773cf4e31d74945baced8ec"} Dec 11 15:20:01 crc kubenswrapper[5050]: I1211 15:20:01.484397 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 11 15:20:01 crc kubenswrapper[5050]: I1211 15:20:01.485174 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"3f4baad8-b6bc-4555-a9ff-df050f578f9e","Type":"ContainerDied","Data":"435b0342b253ce4fc2bb2461922a5dca9375166aa07a9391011651de1dbe9360"} Dec 11 15:20:01 crc kubenswrapper[5050]: I1211 15:20:01.485194 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"3f4baad8-b6bc-4555-a9ff-df050f578f9e","Type":"ContainerDied","Data":"481ea879ba7660d6acfaec4cca32258bcb81110d7f69c5ef493964793160f22a"} Dec 11 15:20:01 crc kubenswrapper[5050]: I1211 15:20:01.485217 5050 scope.go:117] "RemoveContainer" containerID="5b2503f643923c740a665ea47f3e3ef3817643d64773cf4e31d74945baced8ec" Dec 11 15:20:01 crc kubenswrapper[5050]: I1211 15:20:01.535699 5050 scope.go:117] "RemoveContainer" containerID="435b0342b253ce4fc2bb2461922a5dca9375166aa07a9391011651de1dbe9360" Dec 11 15:20:01 crc kubenswrapper[5050]: I1211 15:20:01.574276 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 11 15:20:01 crc kubenswrapper[5050]: I1211 15:20:01.583028 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 11 15:20:01 crc kubenswrapper[5050]: I1211 15:20:01.589802 5050 scope.go:117] "RemoveContainer" containerID="5b2503f643923c740a665ea47f3e3ef3817643d64773cf4e31d74945baced8ec" Dec 11 15:20:01 crc kubenswrapper[5050]: E1211 15:20:01.593193 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5b2503f643923c740a665ea47f3e3ef3817643d64773cf4e31d74945baced8ec\": container with ID starting with 5b2503f643923c740a665ea47f3e3ef3817643d64773cf4e31d74945baced8ec not found: ID does not exist" containerID="5b2503f643923c740a665ea47f3e3ef3817643d64773cf4e31d74945baced8ec" Dec 11 15:20:01 crc kubenswrapper[5050]: I1211 15:20:01.593249 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b2503f643923c740a665ea47f3e3ef3817643d64773cf4e31d74945baced8ec"} err="failed to get container status \"5b2503f643923c740a665ea47f3e3ef3817643d64773cf4e31d74945baced8ec\": rpc error: code = NotFound desc = could not find container \"5b2503f643923c740a665ea47f3e3ef3817643d64773cf4e31d74945baced8ec\": container with ID starting with 5b2503f643923c740a665ea47f3e3ef3817643d64773cf4e31d74945baced8ec not found: ID does not exist" Dec 11 15:20:01 crc kubenswrapper[5050]: I1211 15:20:01.593296 5050 scope.go:117] "RemoveContainer" containerID="435b0342b253ce4fc2bb2461922a5dca9375166aa07a9391011651de1dbe9360" Dec 11 15:20:01 crc kubenswrapper[5050]: E1211 15:20:01.593832 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"435b0342b253ce4fc2bb2461922a5dca9375166aa07a9391011651de1dbe9360\": container with ID starting with 435b0342b253ce4fc2bb2461922a5dca9375166aa07a9391011651de1dbe9360 not found: ID does not exist" containerID="435b0342b253ce4fc2bb2461922a5dca9375166aa07a9391011651de1dbe9360" Dec 11 15:20:01 crc kubenswrapper[5050]: I1211 15:20:01.593861 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"435b0342b253ce4fc2bb2461922a5dca9375166aa07a9391011651de1dbe9360"} err="failed to get container status 
\"435b0342b253ce4fc2bb2461922a5dca9375166aa07a9391011651de1dbe9360\": rpc error: code = NotFound desc = could not find container \"435b0342b253ce4fc2bb2461922a5dca9375166aa07a9391011651de1dbe9360\": container with ID starting with 435b0342b253ce4fc2bb2461922a5dca9375166aa07a9391011651de1dbe9360 not found: ID does not exist" Dec 11 15:20:01 crc kubenswrapper[5050]: I1211 15:20:01.593880 5050 scope.go:117] "RemoveContainer" containerID="5b2503f643923c740a665ea47f3e3ef3817643d64773cf4e31d74945baced8ec" Dec 11 15:20:01 crc kubenswrapper[5050]: I1211 15:20:01.594234 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b2503f643923c740a665ea47f3e3ef3817643d64773cf4e31d74945baced8ec"} err="failed to get container status \"5b2503f643923c740a665ea47f3e3ef3817643d64773cf4e31d74945baced8ec\": rpc error: code = NotFound desc = could not find container \"5b2503f643923c740a665ea47f3e3ef3817643d64773cf4e31d74945baced8ec\": container with ID starting with 5b2503f643923c740a665ea47f3e3ef3817643d64773cf4e31d74945baced8ec not found: ID does not exist" Dec 11 15:20:01 crc kubenswrapper[5050]: I1211 15:20:01.594264 5050 scope.go:117] "RemoveContainer" containerID="435b0342b253ce4fc2bb2461922a5dca9375166aa07a9391011651de1dbe9360" Dec 11 15:20:01 crc kubenswrapper[5050]: I1211 15:20:01.594569 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"435b0342b253ce4fc2bb2461922a5dca9375166aa07a9391011651de1dbe9360"} err="failed to get container status \"435b0342b253ce4fc2bb2461922a5dca9375166aa07a9391011651de1dbe9360\": rpc error: code = NotFound desc = could not find container \"435b0342b253ce4fc2bb2461922a5dca9375166aa07a9391011651de1dbe9360\": container with ID starting with 435b0342b253ce4fc2bb2461922a5dca9375166aa07a9391011651de1dbe9360 not found: ID does not exist" Dec 11 15:20:01 crc kubenswrapper[5050]: I1211 15:20:01.597130 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Dec 11 15:20:01 crc kubenswrapper[5050]: E1211 15:20:01.597880 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f4baad8-b6bc-4555-a9ff-df050f578f9e" containerName="glance-log" Dec 11 15:20:01 crc kubenswrapper[5050]: I1211 15:20:01.597911 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f4baad8-b6bc-4555-a9ff-df050f578f9e" containerName="glance-log" Dec 11 15:20:01 crc kubenswrapper[5050]: E1211 15:20:01.597924 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f4baad8-b6bc-4555-a9ff-df050f578f9e" containerName="glance-httpd" Dec 11 15:20:01 crc kubenswrapper[5050]: I1211 15:20:01.597933 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f4baad8-b6bc-4555-a9ff-df050f578f9e" containerName="glance-httpd" Dec 11 15:20:01 crc kubenswrapper[5050]: I1211 15:20:01.598175 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f4baad8-b6bc-4555-a9ff-df050f578f9e" containerName="glance-log" Dec 11 15:20:01 crc kubenswrapper[5050]: I1211 15:20:01.598208 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f4baad8-b6bc-4555-a9ff-df050f578f9e" containerName="glance-httpd" Dec 11 15:20:01 crc kubenswrapper[5050]: I1211 15:20:01.599513 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 11 15:20:01 crc kubenswrapper[5050]: I1211 15:20:01.604346 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 11 15:20:01 crc kubenswrapper[5050]: I1211 15:20:01.608491 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Dec 11 15:20:01 crc kubenswrapper[5050]: I1211 15:20:01.787497 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29nrn\" (UniqueName: \"kubernetes.io/projected/56c61de9-b025-4705-8311-bade624f6e13-kube-api-access-29nrn\") pod \"glance-default-external-api-0\" (UID: \"56c61de9-b025-4705-8311-bade624f6e13\") " pod="openstack/glance-default-external-api-0" Dec 11 15:20:01 crc kubenswrapper[5050]: I1211 15:20:01.787564 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/56c61de9-b025-4705-8311-bade624f6e13-logs\") pod \"glance-default-external-api-0\" (UID: \"56c61de9-b025-4705-8311-bade624f6e13\") " pod="openstack/glance-default-external-api-0" Dec 11 15:20:01 crc kubenswrapper[5050]: I1211 15:20:01.787614 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/56c61de9-b025-4705-8311-bade624f6e13-ceph\") pod \"glance-default-external-api-0\" (UID: \"56c61de9-b025-4705-8311-bade624f6e13\") " pod="openstack/glance-default-external-api-0" Dec 11 15:20:01 crc kubenswrapper[5050]: I1211 15:20:01.787671 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/56c61de9-b025-4705-8311-bade624f6e13-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"56c61de9-b025-4705-8311-bade624f6e13\") " pod="openstack/glance-default-external-api-0" Dec 11 15:20:01 crc kubenswrapper[5050]: I1211 15:20:01.787735 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56c61de9-b025-4705-8311-bade624f6e13-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"56c61de9-b025-4705-8311-bade624f6e13\") " pod="openstack/glance-default-external-api-0" Dec 11 15:20:01 crc kubenswrapper[5050]: I1211 15:20:01.787770 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56c61de9-b025-4705-8311-bade624f6e13-config-data\") pod \"glance-default-external-api-0\" (UID: \"56c61de9-b025-4705-8311-bade624f6e13\") " pod="openstack/glance-default-external-api-0" Dec 11 15:20:01 crc kubenswrapper[5050]: I1211 15:20:01.787794 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/56c61de9-b025-4705-8311-bade624f6e13-scripts\") pod \"glance-default-external-api-0\" (UID: \"56c61de9-b025-4705-8311-bade624f6e13\") " pod="openstack/glance-default-external-api-0" Dec 11 15:20:01 crc kubenswrapper[5050]: I1211 15:20:01.889603 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56c61de9-b025-4705-8311-bade624f6e13-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: 
\"56c61de9-b025-4705-8311-bade624f6e13\") " pod="openstack/glance-default-external-api-0" Dec 11 15:20:01 crc kubenswrapper[5050]: I1211 15:20:01.889964 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56c61de9-b025-4705-8311-bade624f6e13-config-data\") pod \"glance-default-external-api-0\" (UID: \"56c61de9-b025-4705-8311-bade624f6e13\") " pod="openstack/glance-default-external-api-0" Dec 11 15:20:01 crc kubenswrapper[5050]: I1211 15:20:01.890650 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/56c61de9-b025-4705-8311-bade624f6e13-scripts\") pod \"glance-default-external-api-0\" (UID: \"56c61de9-b025-4705-8311-bade624f6e13\") " pod="openstack/glance-default-external-api-0" Dec 11 15:20:01 crc kubenswrapper[5050]: I1211 15:20:01.890867 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-29nrn\" (UniqueName: \"kubernetes.io/projected/56c61de9-b025-4705-8311-bade624f6e13-kube-api-access-29nrn\") pod \"glance-default-external-api-0\" (UID: \"56c61de9-b025-4705-8311-bade624f6e13\") " pod="openstack/glance-default-external-api-0" Dec 11 15:20:01 crc kubenswrapper[5050]: I1211 15:20:01.891023 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/56c61de9-b025-4705-8311-bade624f6e13-logs\") pod \"glance-default-external-api-0\" (UID: \"56c61de9-b025-4705-8311-bade624f6e13\") " pod="openstack/glance-default-external-api-0" Dec 11 15:20:01 crc kubenswrapper[5050]: I1211 15:20:01.891217 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/56c61de9-b025-4705-8311-bade624f6e13-ceph\") pod \"glance-default-external-api-0\" (UID: \"56c61de9-b025-4705-8311-bade624f6e13\") " pod="openstack/glance-default-external-api-0" Dec 11 15:20:01 crc kubenswrapper[5050]: I1211 15:20:01.891352 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/56c61de9-b025-4705-8311-bade624f6e13-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"56c61de9-b025-4705-8311-bade624f6e13\") " pod="openstack/glance-default-external-api-0" Dec 11 15:20:01 crc kubenswrapper[5050]: I1211 15:20:01.891947 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/56c61de9-b025-4705-8311-bade624f6e13-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"56c61de9-b025-4705-8311-bade624f6e13\") " pod="openstack/glance-default-external-api-0" Dec 11 15:20:01 crc kubenswrapper[5050]: I1211 15:20:01.892811 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/56c61de9-b025-4705-8311-bade624f6e13-logs\") pod \"glance-default-external-api-0\" (UID: \"56c61de9-b025-4705-8311-bade624f6e13\") " pod="openstack/glance-default-external-api-0" Dec 11 15:20:01 crc kubenswrapper[5050]: I1211 15:20:01.895099 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56c61de9-b025-4705-8311-bade624f6e13-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"56c61de9-b025-4705-8311-bade624f6e13\") " pod="openstack/glance-default-external-api-0" Dec 11 15:20:01 crc kubenswrapper[5050]: I1211 
15:20:01.895506 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56c61de9-b025-4705-8311-bade624f6e13-config-data\") pod \"glance-default-external-api-0\" (UID: \"56c61de9-b025-4705-8311-bade624f6e13\") " pod="openstack/glance-default-external-api-0" Dec 11 15:20:01 crc kubenswrapper[5050]: I1211 15:20:01.896138 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/56c61de9-b025-4705-8311-bade624f6e13-ceph\") pod \"glance-default-external-api-0\" (UID: \"56c61de9-b025-4705-8311-bade624f6e13\") " pod="openstack/glance-default-external-api-0" Dec 11 15:20:01 crc kubenswrapper[5050]: I1211 15:20:01.896628 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/56c61de9-b025-4705-8311-bade624f6e13-scripts\") pod \"glance-default-external-api-0\" (UID: \"56c61de9-b025-4705-8311-bade624f6e13\") " pod="openstack/glance-default-external-api-0" Dec 11 15:20:01 crc kubenswrapper[5050]: I1211 15:20:01.909853 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-29nrn\" (UniqueName: \"kubernetes.io/projected/56c61de9-b025-4705-8311-bade624f6e13-kube-api-access-29nrn\") pod \"glance-default-external-api-0\" (UID: \"56c61de9-b025-4705-8311-bade624f6e13\") " pod="openstack/glance-default-external-api-0" Dec 11 15:20:01 crc kubenswrapper[5050]: I1211 15:20:01.918979 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 11 15:20:02 crc kubenswrapper[5050]: I1211 15:20:02.495605 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="4d0fac14-e4a9-47d6-b2d6-140d205e6772" containerName="glance-log" containerID="cri-o://b0641c5969bc404030d40d2356d0ec097e8a314b4ac66cde84afb6b854bfcadc" gracePeriod=30 Dec 11 15:20:02 crc kubenswrapper[5050]: I1211 15:20:02.495658 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="4d0fac14-e4a9-47d6-b2d6-140d205e6772" containerName="glance-httpd" containerID="cri-o://4ca88aeb1f1a38456cf8153c34723ecba2931c37057e2b7716a5bb78b6d4005f" gracePeriod=30 Dec 11 15:20:02 crc kubenswrapper[5050]: I1211 15:20:02.514777 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 11 15:20:02 crc kubenswrapper[5050]: W1211 15:20:02.525388 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod56c61de9_b025_4705_8311_bade624f6e13.slice/crio-7b259b6daf4ab16b8cc98718fb4284dc2718dea8af03e0a5869e1df4703accdc WatchSource:0}: Error finding container 7b259b6daf4ab16b8cc98718fb4284dc2718dea8af03e0a5869e1df4703accdc: Status 404 returned error can't find the container with id 7b259b6daf4ab16b8cc98718fb4284dc2718dea8af03e0a5869e1df4703accdc Dec 11 15:20:03 crc kubenswrapper[5050]: I1211 15:20:03.208807 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 11 15:20:03 crc kubenswrapper[5050]: I1211 15:20:03.315967 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/4d0fac14-e4a9-47d6-b2d6-140d205e6772-ceph\") pod \"4d0fac14-e4a9-47d6-b2d6-140d205e6772\" (UID: \"4d0fac14-e4a9-47d6-b2d6-140d205e6772\") " Dec 11 15:20:03 crc kubenswrapper[5050]: I1211 15:20:03.316458 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d0fac14-e4a9-47d6-b2d6-140d205e6772-combined-ca-bundle\") pod \"4d0fac14-e4a9-47d6-b2d6-140d205e6772\" (UID: \"4d0fac14-e4a9-47d6-b2d6-140d205e6772\") " Dec 11 15:20:03 crc kubenswrapper[5050]: I1211 15:20:03.316536 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-prwz9\" (UniqueName: \"kubernetes.io/projected/4d0fac14-e4a9-47d6-b2d6-140d205e6772-kube-api-access-prwz9\") pod \"4d0fac14-e4a9-47d6-b2d6-140d205e6772\" (UID: \"4d0fac14-e4a9-47d6-b2d6-140d205e6772\") " Dec 11 15:20:03 crc kubenswrapper[5050]: I1211 15:20:03.316559 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4d0fac14-e4a9-47d6-b2d6-140d205e6772-scripts\") pod \"4d0fac14-e4a9-47d6-b2d6-140d205e6772\" (UID: \"4d0fac14-e4a9-47d6-b2d6-140d205e6772\") " Dec 11 15:20:03 crc kubenswrapper[5050]: I1211 15:20:03.316597 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4d0fac14-e4a9-47d6-b2d6-140d205e6772-logs\") pod \"4d0fac14-e4a9-47d6-b2d6-140d205e6772\" (UID: \"4d0fac14-e4a9-47d6-b2d6-140d205e6772\") " Dec 11 15:20:03 crc kubenswrapper[5050]: I1211 15:20:03.316629 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4d0fac14-e4a9-47d6-b2d6-140d205e6772-httpd-run\") pod \"4d0fac14-e4a9-47d6-b2d6-140d205e6772\" (UID: \"4d0fac14-e4a9-47d6-b2d6-140d205e6772\") " Dec 11 15:20:03 crc kubenswrapper[5050]: I1211 15:20:03.316645 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d0fac14-e4a9-47d6-b2d6-140d205e6772-config-data\") pod \"4d0fac14-e4a9-47d6-b2d6-140d205e6772\" (UID: \"4d0fac14-e4a9-47d6-b2d6-140d205e6772\") " Dec 11 15:20:03 crc kubenswrapper[5050]: I1211 15:20:03.318556 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4d0fac14-e4a9-47d6-b2d6-140d205e6772-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "4d0fac14-e4a9-47d6-b2d6-140d205e6772" (UID: "4d0fac14-e4a9-47d6-b2d6-140d205e6772"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 15:20:03 crc kubenswrapper[5050]: I1211 15:20:03.318637 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4d0fac14-e4a9-47d6-b2d6-140d205e6772-logs" (OuterVolumeSpecName: "logs") pod "4d0fac14-e4a9-47d6-b2d6-140d205e6772" (UID: "4d0fac14-e4a9-47d6-b2d6-140d205e6772"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 15:20:03 crc kubenswrapper[5050]: I1211 15:20:03.322062 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d0fac14-e4a9-47d6-b2d6-140d205e6772-ceph" (OuterVolumeSpecName: "ceph") pod "4d0fac14-e4a9-47d6-b2d6-140d205e6772" (UID: "4d0fac14-e4a9-47d6-b2d6-140d205e6772"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 15:20:03 crc kubenswrapper[5050]: I1211 15:20:03.323793 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d0fac14-e4a9-47d6-b2d6-140d205e6772-kube-api-access-prwz9" (OuterVolumeSpecName: "kube-api-access-prwz9") pod "4d0fac14-e4a9-47d6-b2d6-140d205e6772" (UID: "4d0fac14-e4a9-47d6-b2d6-140d205e6772"). InnerVolumeSpecName "kube-api-access-prwz9". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 15:20:03 crc kubenswrapper[5050]: I1211 15:20:03.325213 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d0fac14-e4a9-47d6-b2d6-140d205e6772-scripts" (OuterVolumeSpecName: "scripts") pod "4d0fac14-e4a9-47d6-b2d6-140d205e6772" (UID: "4d0fac14-e4a9-47d6-b2d6-140d205e6772"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 15:20:03 crc kubenswrapper[5050]: I1211 15:20:03.353497 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d0fac14-e4a9-47d6-b2d6-140d205e6772-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4d0fac14-e4a9-47d6-b2d6-140d205e6772" (UID: "4d0fac14-e4a9-47d6-b2d6-140d205e6772"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 15:20:03 crc kubenswrapper[5050]: I1211 15:20:03.361129 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d0fac14-e4a9-47d6-b2d6-140d205e6772-config-data" (OuterVolumeSpecName: "config-data") pod "4d0fac14-e4a9-47d6-b2d6-140d205e6772" (UID: "4d0fac14-e4a9-47d6-b2d6-140d205e6772"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 15:20:03 crc kubenswrapper[5050]: I1211 15:20:03.419135 5050 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/4d0fac14-e4a9-47d6-b2d6-140d205e6772-ceph\") on node \"crc\" DevicePath \"\"" Dec 11 15:20:03 crc kubenswrapper[5050]: I1211 15:20:03.419182 5050 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d0fac14-e4a9-47d6-b2d6-140d205e6772-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 15:20:03 crc kubenswrapper[5050]: I1211 15:20:03.419198 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-prwz9\" (UniqueName: \"kubernetes.io/projected/4d0fac14-e4a9-47d6-b2d6-140d205e6772-kube-api-access-prwz9\") on node \"crc\" DevicePath \"\"" Dec 11 15:20:03 crc kubenswrapper[5050]: I1211 15:20:03.419209 5050 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4d0fac14-e4a9-47d6-b2d6-140d205e6772-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 15:20:03 crc kubenswrapper[5050]: I1211 15:20:03.419218 5050 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4d0fac14-e4a9-47d6-b2d6-140d205e6772-logs\") on node \"crc\" DevicePath \"\"" Dec 11 15:20:03 crc kubenswrapper[5050]: I1211 15:20:03.419226 5050 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4d0fac14-e4a9-47d6-b2d6-140d205e6772-httpd-run\") on node \"crc\" DevicePath \"\"" Dec 11 15:20:03 crc kubenswrapper[5050]: I1211 15:20:03.419234 5050 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d0fac14-e4a9-47d6-b2d6-140d205e6772-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 15:20:03 crc kubenswrapper[5050]: I1211 15:20:03.533848 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"56c61de9-b025-4705-8311-bade624f6e13","Type":"ContainerStarted","Data":"8176f6c1b0091367c95acae545668a0ba33f785d033543b0e3a3d92feb1aeef3"} Dec 11 15:20:03 crc kubenswrapper[5050]: I1211 15:20:03.534128 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"56c61de9-b025-4705-8311-bade624f6e13","Type":"ContainerStarted","Data":"7b259b6daf4ab16b8cc98718fb4284dc2718dea8af03e0a5869e1df4703accdc"} Dec 11 15:20:03 crc kubenswrapper[5050]: I1211 15:20:03.536161 5050 generic.go:334] "Generic (PLEG): container finished" podID="4d0fac14-e4a9-47d6-b2d6-140d205e6772" containerID="4ca88aeb1f1a38456cf8153c34723ecba2931c37057e2b7716a5bb78b6d4005f" exitCode=0 Dec 11 15:20:03 crc kubenswrapper[5050]: I1211 15:20:03.536196 5050 generic.go:334] "Generic (PLEG): container finished" podID="4d0fac14-e4a9-47d6-b2d6-140d205e6772" containerID="b0641c5969bc404030d40d2356d0ec097e8a314b4ac66cde84afb6b854bfcadc" exitCode=143 Dec 11 15:20:03 crc kubenswrapper[5050]: I1211 15:20:03.536189 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4d0fac14-e4a9-47d6-b2d6-140d205e6772","Type":"ContainerDied","Data":"4ca88aeb1f1a38456cf8153c34723ecba2931c37057e2b7716a5bb78b6d4005f"} Dec 11 15:20:03 crc kubenswrapper[5050]: I1211 15:20:03.536235 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"4d0fac14-e4a9-47d6-b2d6-140d205e6772","Type":"ContainerDied","Data":"b0641c5969bc404030d40d2356d0ec097e8a314b4ac66cde84afb6b854bfcadc"} Dec 11 15:20:03 crc kubenswrapper[5050]: I1211 15:20:03.536251 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4d0fac14-e4a9-47d6-b2d6-140d205e6772","Type":"ContainerDied","Data":"f9f728f94266ff3b1eb6965df4cef9e455f9160556c283e3706ce461f883e067"} Dec 11 15:20:03 crc kubenswrapper[5050]: I1211 15:20:03.536270 5050 scope.go:117] "RemoveContainer" containerID="4ca88aeb1f1a38456cf8153c34723ecba2931c37057e2b7716a5bb78b6d4005f" Dec 11 15:20:03 crc kubenswrapper[5050]: I1211 15:20:03.536281 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 11 15:20:03 crc kubenswrapper[5050]: I1211 15:20:03.554939 5050 scope.go:117] "RemoveContainer" containerID="b0641c5969bc404030d40d2356d0ec097e8a314b4ac66cde84afb6b854bfcadc" Dec 11 15:20:03 crc kubenswrapper[5050]: I1211 15:20:03.559672 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f4baad8-b6bc-4555-a9ff-df050f578f9e" path="/var/lib/kubelet/pods/3f4baad8-b6bc-4555-a9ff-df050f578f9e/volumes" Dec 11 15:20:03 crc kubenswrapper[5050]: I1211 15:20:03.574465 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 11 15:20:03 crc kubenswrapper[5050]: I1211 15:20:03.588096 5050 scope.go:117] "RemoveContainer" containerID="4ca88aeb1f1a38456cf8153c34723ecba2931c37057e2b7716a5bb78b6d4005f" Dec 11 15:20:03 crc kubenswrapper[5050]: E1211 15:20:03.588680 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4ca88aeb1f1a38456cf8153c34723ecba2931c37057e2b7716a5bb78b6d4005f\": container with ID starting with 4ca88aeb1f1a38456cf8153c34723ecba2931c37057e2b7716a5bb78b6d4005f not found: ID does not exist" containerID="4ca88aeb1f1a38456cf8153c34723ecba2931c37057e2b7716a5bb78b6d4005f" Dec 11 15:20:03 crc kubenswrapper[5050]: I1211 15:20:03.588717 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ca88aeb1f1a38456cf8153c34723ecba2931c37057e2b7716a5bb78b6d4005f"} err="failed to get container status \"4ca88aeb1f1a38456cf8153c34723ecba2931c37057e2b7716a5bb78b6d4005f\": rpc error: code = NotFound desc = could not find container \"4ca88aeb1f1a38456cf8153c34723ecba2931c37057e2b7716a5bb78b6d4005f\": container with ID starting with 4ca88aeb1f1a38456cf8153c34723ecba2931c37057e2b7716a5bb78b6d4005f not found: ID does not exist" Dec 11 15:20:03 crc kubenswrapper[5050]: I1211 15:20:03.588741 5050 scope.go:117] "RemoveContainer" containerID="b0641c5969bc404030d40d2356d0ec097e8a314b4ac66cde84afb6b854bfcadc" Dec 11 15:20:03 crc kubenswrapper[5050]: I1211 15:20:03.591273 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 11 15:20:03 crc kubenswrapper[5050]: E1211 15:20:03.591547 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b0641c5969bc404030d40d2356d0ec097e8a314b4ac66cde84afb6b854bfcadc\": container with ID starting with b0641c5969bc404030d40d2356d0ec097e8a314b4ac66cde84afb6b854bfcadc not found: ID does not exist" containerID="b0641c5969bc404030d40d2356d0ec097e8a314b4ac66cde84afb6b854bfcadc" Dec 11 15:20:03 crc kubenswrapper[5050]: I1211 15:20:03.591593 5050 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b0641c5969bc404030d40d2356d0ec097e8a314b4ac66cde84afb6b854bfcadc"} err="failed to get container status \"b0641c5969bc404030d40d2356d0ec097e8a314b4ac66cde84afb6b854bfcadc\": rpc error: code = NotFound desc = could not find container \"b0641c5969bc404030d40d2356d0ec097e8a314b4ac66cde84afb6b854bfcadc\": container with ID starting with b0641c5969bc404030d40d2356d0ec097e8a314b4ac66cde84afb6b854bfcadc not found: ID does not exist" Dec 11 15:20:03 crc kubenswrapper[5050]: I1211 15:20:03.591625 5050 scope.go:117] "RemoveContainer" containerID="4ca88aeb1f1a38456cf8153c34723ecba2931c37057e2b7716a5bb78b6d4005f" Dec 11 15:20:03 crc kubenswrapper[5050]: I1211 15:20:03.592418 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ca88aeb1f1a38456cf8153c34723ecba2931c37057e2b7716a5bb78b6d4005f"} err="failed to get container status \"4ca88aeb1f1a38456cf8153c34723ecba2931c37057e2b7716a5bb78b6d4005f\": rpc error: code = NotFound desc = could not find container \"4ca88aeb1f1a38456cf8153c34723ecba2931c37057e2b7716a5bb78b6d4005f\": container with ID starting with 4ca88aeb1f1a38456cf8153c34723ecba2931c37057e2b7716a5bb78b6d4005f not found: ID does not exist" Dec 11 15:20:03 crc kubenswrapper[5050]: I1211 15:20:03.592447 5050 scope.go:117] "RemoveContainer" containerID="b0641c5969bc404030d40d2356d0ec097e8a314b4ac66cde84afb6b854bfcadc" Dec 11 15:20:03 crc kubenswrapper[5050]: I1211 15:20:03.601419 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b0641c5969bc404030d40d2356d0ec097e8a314b4ac66cde84afb6b854bfcadc"} err="failed to get container status \"b0641c5969bc404030d40d2356d0ec097e8a314b4ac66cde84afb6b854bfcadc\": rpc error: code = NotFound desc = could not find container \"b0641c5969bc404030d40d2356d0ec097e8a314b4ac66cde84afb6b854bfcadc\": container with ID starting with b0641c5969bc404030d40d2356d0ec097e8a314b4ac66cde84afb6b854bfcadc not found: ID does not exist" Dec 11 15:20:03 crc kubenswrapper[5050]: I1211 15:20:03.601440 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 11 15:20:03 crc kubenswrapper[5050]: E1211 15:20:03.601956 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d0fac14-e4a9-47d6-b2d6-140d205e6772" containerName="glance-httpd" Dec 11 15:20:03 crc kubenswrapper[5050]: I1211 15:20:03.601981 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d0fac14-e4a9-47d6-b2d6-140d205e6772" containerName="glance-httpd" Dec 11 15:20:03 crc kubenswrapper[5050]: E1211 15:20:03.602025 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d0fac14-e4a9-47d6-b2d6-140d205e6772" containerName="glance-log" Dec 11 15:20:03 crc kubenswrapper[5050]: I1211 15:20:03.602035 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d0fac14-e4a9-47d6-b2d6-140d205e6772" containerName="glance-log" Dec 11 15:20:03 crc kubenswrapper[5050]: I1211 15:20:03.602238 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d0fac14-e4a9-47d6-b2d6-140d205e6772" containerName="glance-log" Dec 11 15:20:03 crc kubenswrapper[5050]: I1211 15:20:03.602269 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d0fac14-e4a9-47d6-b2d6-140d205e6772" containerName="glance-httpd" Dec 11 15:20:03 crc kubenswrapper[5050]: I1211 15:20:03.603328 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 11 15:20:03 crc kubenswrapper[5050]: I1211 15:20:03.605818 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Dec 11 15:20:03 crc kubenswrapper[5050]: I1211 15:20:03.612195 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 11 15:20:03 crc kubenswrapper[5050]: I1211 15:20:03.622811 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c444ddc6-1d1e-4d4c-8b33-bda628807710-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"c444ddc6-1d1e-4d4c-8b33-bda628807710\") " pod="openstack/glance-default-internal-api-0" Dec 11 15:20:03 crc kubenswrapper[5050]: I1211 15:20:03.622878 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bdjl2\" (UniqueName: \"kubernetes.io/projected/c444ddc6-1d1e-4d4c-8b33-bda628807710-kube-api-access-bdjl2\") pod \"glance-default-internal-api-0\" (UID: \"c444ddc6-1d1e-4d4c-8b33-bda628807710\") " pod="openstack/glance-default-internal-api-0" Dec 11 15:20:03 crc kubenswrapper[5050]: I1211 15:20:03.622931 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c444ddc6-1d1e-4d4c-8b33-bda628807710-logs\") pod \"glance-default-internal-api-0\" (UID: \"c444ddc6-1d1e-4d4c-8b33-bda628807710\") " pod="openstack/glance-default-internal-api-0" Dec 11 15:20:03 crc kubenswrapper[5050]: I1211 15:20:03.622959 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c444ddc6-1d1e-4d4c-8b33-bda628807710-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"c444ddc6-1d1e-4d4c-8b33-bda628807710\") " pod="openstack/glance-default-internal-api-0" Dec 11 15:20:03 crc kubenswrapper[5050]: I1211 15:20:03.623047 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c444ddc6-1d1e-4d4c-8b33-bda628807710-config-data\") pod \"glance-default-internal-api-0\" (UID: \"c444ddc6-1d1e-4d4c-8b33-bda628807710\") " pod="openstack/glance-default-internal-api-0" Dec 11 15:20:03 crc kubenswrapper[5050]: I1211 15:20:03.623093 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c444ddc6-1d1e-4d4c-8b33-bda628807710-scripts\") pod \"glance-default-internal-api-0\" (UID: \"c444ddc6-1d1e-4d4c-8b33-bda628807710\") " pod="openstack/glance-default-internal-api-0" Dec 11 15:20:03 crc kubenswrapper[5050]: I1211 15:20:03.623141 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/c444ddc6-1d1e-4d4c-8b33-bda628807710-ceph\") pod \"glance-default-internal-api-0\" (UID: \"c444ddc6-1d1e-4d4c-8b33-bda628807710\") " pod="openstack/glance-default-internal-api-0" Dec 11 15:20:03 crc kubenswrapper[5050]: I1211 15:20:03.724311 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c444ddc6-1d1e-4d4c-8b33-bda628807710-scripts\") pod \"glance-default-internal-api-0\" (UID: \"c444ddc6-1d1e-4d4c-8b33-bda628807710\") 
" pod="openstack/glance-default-internal-api-0" Dec 11 15:20:03 crc kubenswrapper[5050]: I1211 15:20:03.724370 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/c444ddc6-1d1e-4d4c-8b33-bda628807710-ceph\") pod \"glance-default-internal-api-0\" (UID: \"c444ddc6-1d1e-4d4c-8b33-bda628807710\") " pod="openstack/glance-default-internal-api-0" Dec 11 15:20:03 crc kubenswrapper[5050]: I1211 15:20:03.724427 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c444ddc6-1d1e-4d4c-8b33-bda628807710-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"c444ddc6-1d1e-4d4c-8b33-bda628807710\") " pod="openstack/glance-default-internal-api-0" Dec 11 15:20:03 crc kubenswrapper[5050]: I1211 15:20:03.724448 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bdjl2\" (UniqueName: \"kubernetes.io/projected/c444ddc6-1d1e-4d4c-8b33-bda628807710-kube-api-access-bdjl2\") pod \"glance-default-internal-api-0\" (UID: \"c444ddc6-1d1e-4d4c-8b33-bda628807710\") " pod="openstack/glance-default-internal-api-0" Dec 11 15:20:03 crc kubenswrapper[5050]: I1211 15:20:03.724472 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c444ddc6-1d1e-4d4c-8b33-bda628807710-logs\") pod \"glance-default-internal-api-0\" (UID: \"c444ddc6-1d1e-4d4c-8b33-bda628807710\") " pod="openstack/glance-default-internal-api-0" Dec 11 15:20:03 crc kubenswrapper[5050]: I1211 15:20:03.724500 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c444ddc6-1d1e-4d4c-8b33-bda628807710-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"c444ddc6-1d1e-4d4c-8b33-bda628807710\") " pod="openstack/glance-default-internal-api-0" Dec 11 15:20:03 crc kubenswrapper[5050]: I1211 15:20:03.724528 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c444ddc6-1d1e-4d4c-8b33-bda628807710-config-data\") pod \"glance-default-internal-api-0\" (UID: \"c444ddc6-1d1e-4d4c-8b33-bda628807710\") " pod="openstack/glance-default-internal-api-0" Dec 11 15:20:03 crc kubenswrapper[5050]: I1211 15:20:03.725055 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c444ddc6-1d1e-4d4c-8b33-bda628807710-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"c444ddc6-1d1e-4d4c-8b33-bda628807710\") " pod="openstack/glance-default-internal-api-0" Dec 11 15:20:03 crc kubenswrapper[5050]: I1211 15:20:03.725202 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c444ddc6-1d1e-4d4c-8b33-bda628807710-logs\") pod \"glance-default-internal-api-0\" (UID: \"c444ddc6-1d1e-4d4c-8b33-bda628807710\") " pod="openstack/glance-default-internal-api-0" Dec 11 15:20:03 crc kubenswrapper[5050]: I1211 15:20:03.729064 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c444ddc6-1d1e-4d4c-8b33-bda628807710-scripts\") pod \"glance-default-internal-api-0\" (UID: \"c444ddc6-1d1e-4d4c-8b33-bda628807710\") " pod="openstack/glance-default-internal-api-0" Dec 11 15:20:03 crc kubenswrapper[5050]: I1211 15:20:03.729116 5050 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/c444ddc6-1d1e-4d4c-8b33-bda628807710-ceph\") pod \"glance-default-internal-api-0\" (UID: \"c444ddc6-1d1e-4d4c-8b33-bda628807710\") " pod="openstack/glance-default-internal-api-0" Dec 11 15:20:03 crc kubenswrapper[5050]: I1211 15:20:03.729261 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c444ddc6-1d1e-4d4c-8b33-bda628807710-config-data\") pod \"glance-default-internal-api-0\" (UID: \"c444ddc6-1d1e-4d4c-8b33-bda628807710\") " pod="openstack/glance-default-internal-api-0" Dec 11 15:20:03 crc kubenswrapper[5050]: I1211 15:20:03.730419 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c444ddc6-1d1e-4d4c-8b33-bda628807710-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"c444ddc6-1d1e-4d4c-8b33-bda628807710\") " pod="openstack/glance-default-internal-api-0" Dec 11 15:20:03 crc kubenswrapper[5050]: I1211 15:20:03.743028 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bdjl2\" (UniqueName: \"kubernetes.io/projected/c444ddc6-1d1e-4d4c-8b33-bda628807710-kube-api-access-bdjl2\") pod \"glance-default-internal-api-0\" (UID: \"c444ddc6-1d1e-4d4c-8b33-bda628807710\") " pod="openstack/glance-default-internal-api-0" Dec 11 15:20:03 crc kubenswrapper[5050]: I1211 15:20:03.918459 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 11 15:20:04 crc kubenswrapper[5050]: I1211 15:20:04.272496 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 11 15:20:04 crc kubenswrapper[5050]: W1211 15:20:04.272883 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc444ddc6_1d1e_4d4c_8b33_bda628807710.slice/crio-75e77d9253de0f5df1c0abd79897278a28495a70e23110dcfbf58f9587dfee83 WatchSource:0}: Error finding container 75e77d9253de0f5df1c0abd79897278a28495a70e23110dcfbf58f9587dfee83: Status 404 returned error can't find the container with id 75e77d9253de0f5df1c0abd79897278a28495a70e23110dcfbf58f9587dfee83 Dec 11 15:20:04 crc kubenswrapper[5050]: I1211 15:20:04.549286 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"56c61de9-b025-4705-8311-bade624f6e13","Type":"ContainerStarted","Data":"f9dd0c437488bdff4d14d42a97042ad703e2262bd29aeb6d8358bded905a5645"} Dec 11 15:20:04 crc kubenswrapper[5050]: I1211 15:20:04.550692 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c444ddc6-1d1e-4d4c-8b33-bda628807710","Type":"ContainerStarted","Data":"75e77d9253de0f5df1c0abd79897278a28495a70e23110dcfbf58f9587dfee83"} Dec 11 15:20:05 crc kubenswrapper[5050]: I1211 15:20:05.567438 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d0fac14-e4a9-47d6-b2d6-140d205e6772" path="/var/lib/kubelet/pods/4d0fac14-e4a9-47d6-b2d6-140d205e6772/volumes" Dec 11 15:20:05 crc kubenswrapper[5050]: I1211 15:20:05.574153 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c444ddc6-1d1e-4d4c-8b33-bda628807710","Type":"ContainerStarted","Data":"8a328144a3a0b9b16561e611c2a2ff017b7a36d90e65e6d28fa4044c99f7daa3"} Dec 11 15:20:05 crc 
kubenswrapper[5050]: I1211 15:20:05.574207 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c444ddc6-1d1e-4d4c-8b33-bda628807710","Type":"ContainerStarted","Data":"a717d621c73eecb2a3dc8285b2583252b255f5a56b6dc361078f2f205a706971"} Dec 11 15:20:05 crc kubenswrapper[5050]: I1211 15:20:05.597960 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=2.597945097 podStartE2EDuration="2.597945097s" podCreationTimestamp="2025-12-11 15:20:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 15:20:05.595603944 +0000 UTC m=+5496.439326530" watchObservedRunningTime="2025-12-11 15:20:05.597945097 +0000 UTC m=+5496.441667683" Dec 11 15:20:05 crc kubenswrapper[5050]: I1211 15:20:05.599208 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.599201511 podStartE2EDuration="4.599201511s" podCreationTimestamp="2025-12-11 15:20:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 15:20:04.575575117 +0000 UTC m=+5495.419297693" watchObservedRunningTime="2025-12-11 15:20:05.599201511 +0000 UTC m=+5496.442924087" Dec 11 15:20:08 crc kubenswrapper[5050]: I1211 15:20:08.079232 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6d84855d79-nmbkp" Dec 11 15:20:08 crc kubenswrapper[5050]: I1211 15:20:08.173732 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-79754c57d9-knr44"] Dec 11 15:20:08 crc kubenswrapper[5050]: I1211 15:20:08.174193 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-79754c57d9-knr44" podUID="c8d953cd-0732-41cc-8005-18aa2145cb8c" containerName="dnsmasq-dns" containerID="cri-o://f8fb7f30263f3f868127bc338e25fba3a81a7a4d294e605ce240261839fdb05f" gracePeriod=10 Dec 11 15:20:08 crc kubenswrapper[5050]: I1211 15:20:08.599959 5050 generic.go:334] "Generic (PLEG): container finished" podID="c8d953cd-0732-41cc-8005-18aa2145cb8c" containerID="f8fb7f30263f3f868127bc338e25fba3a81a7a4d294e605ce240261839fdb05f" exitCode=0 Dec 11 15:20:08 crc kubenswrapper[5050]: I1211 15:20:08.600285 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79754c57d9-knr44" event={"ID":"c8d953cd-0732-41cc-8005-18aa2145cb8c","Type":"ContainerDied","Data":"f8fb7f30263f3f868127bc338e25fba3a81a7a4d294e605ce240261839fdb05f"} Dec 11 15:20:08 crc kubenswrapper[5050]: I1211 15:20:08.600314 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79754c57d9-knr44" event={"ID":"c8d953cd-0732-41cc-8005-18aa2145cb8c","Type":"ContainerDied","Data":"118f424ee8e38b61f3c34d29d253902d3347edd5b8a80091586ab4d496dedf2a"} Dec 11 15:20:08 crc kubenswrapper[5050]: I1211 15:20:08.600324 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="118f424ee8e38b61f3c34d29d253902d3347edd5b8a80091586ab4d496dedf2a" Dec 11 15:20:08 crc kubenswrapper[5050]: I1211 15:20:08.640990 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-79754c57d9-knr44" Dec 11 15:20:08 crc kubenswrapper[5050]: I1211 15:20:08.808886 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2qr87\" (UniqueName: \"kubernetes.io/projected/c8d953cd-0732-41cc-8005-18aa2145cb8c-kube-api-access-2qr87\") pod \"c8d953cd-0732-41cc-8005-18aa2145cb8c\" (UID: \"c8d953cd-0732-41cc-8005-18aa2145cb8c\") " Dec 11 15:20:08 crc kubenswrapper[5050]: I1211 15:20:08.808987 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c8d953cd-0732-41cc-8005-18aa2145cb8c-dns-svc\") pod \"c8d953cd-0732-41cc-8005-18aa2145cb8c\" (UID: \"c8d953cd-0732-41cc-8005-18aa2145cb8c\") " Dec 11 15:20:08 crc kubenswrapper[5050]: I1211 15:20:08.809086 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c8d953cd-0732-41cc-8005-18aa2145cb8c-ovsdbserver-sb\") pod \"c8d953cd-0732-41cc-8005-18aa2145cb8c\" (UID: \"c8d953cd-0732-41cc-8005-18aa2145cb8c\") " Dec 11 15:20:08 crc kubenswrapper[5050]: I1211 15:20:08.809122 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c8d953cd-0732-41cc-8005-18aa2145cb8c-config\") pod \"c8d953cd-0732-41cc-8005-18aa2145cb8c\" (UID: \"c8d953cd-0732-41cc-8005-18aa2145cb8c\") " Dec 11 15:20:08 crc kubenswrapper[5050]: I1211 15:20:08.809145 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c8d953cd-0732-41cc-8005-18aa2145cb8c-ovsdbserver-nb\") pod \"c8d953cd-0732-41cc-8005-18aa2145cb8c\" (UID: \"c8d953cd-0732-41cc-8005-18aa2145cb8c\") " Dec 11 15:20:08 crc kubenswrapper[5050]: I1211 15:20:08.828699 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8d953cd-0732-41cc-8005-18aa2145cb8c-kube-api-access-2qr87" (OuterVolumeSpecName: "kube-api-access-2qr87") pod "c8d953cd-0732-41cc-8005-18aa2145cb8c" (UID: "c8d953cd-0732-41cc-8005-18aa2145cb8c"). InnerVolumeSpecName "kube-api-access-2qr87". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 15:20:08 crc kubenswrapper[5050]: I1211 15:20:08.854058 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c8d953cd-0732-41cc-8005-18aa2145cb8c-config" (OuterVolumeSpecName: "config") pod "c8d953cd-0732-41cc-8005-18aa2145cb8c" (UID: "c8d953cd-0732-41cc-8005-18aa2145cb8c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 15:20:08 crc kubenswrapper[5050]: I1211 15:20:08.854631 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c8d953cd-0732-41cc-8005-18aa2145cb8c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c8d953cd-0732-41cc-8005-18aa2145cb8c" (UID: "c8d953cd-0732-41cc-8005-18aa2145cb8c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 15:20:08 crc kubenswrapper[5050]: I1211 15:20:08.859190 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c8d953cd-0732-41cc-8005-18aa2145cb8c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c8d953cd-0732-41cc-8005-18aa2145cb8c" (UID: "c8d953cd-0732-41cc-8005-18aa2145cb8c"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 15:20:08 crc kubenswrapper[5050]: I1211 15:20:08.863898 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c8d953cd-0732-41cc-8005-18aa2145cb8c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c8d953cd-0732-41cc-8005-18aa2145cb8c" (UID: "c8d953cd-0732-41cc-8005-18aa2145cb8c"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 15:20:08 crc kubenswrapper[5050]: I1211 15:20:08.910870 5050 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c8d953cd-0732-41cc-8005-18aa2145cb8c-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 11 15:20:08 crc kubenswrapper[5050]: I1211 15:20:08.910901 5050 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c8d953cd-0732-41cc-8005-18aa2145cb8c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 11 15:20:08 crc kubenswrapper[5050]: I1211 15:20:08.910913 5050 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c8d953cd-0732-41cc-8005-18aa2145cb8c-config\") on node \"crc\" DevicePath \"\"" Dec 11 15:20:08 crc kubenswrapper[5050]: I1211 15:20:08.910922 5050 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c8d953cd-0732-41cc-8005-18aa2145cb8c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 11 15:20:08 crc kubenswrapper[5050]: I1211 15:20:08.910932 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2qr87\" (UniqueName: \"kubernetes.io/projected/c8d953cd-0732-41cc-8005-18aa2145cb8c-kube-api-access-2qr87\") on node \"crc\" DevicePath \"\"" Dec 11 15:20:09 crc kubenswrapper[5050]: I1211 15:20:09.607163 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-79754c57d9-knr44" Dec 11 15:20:09 crc kubenswrapper[5050]: I1211 15:20:09.632363 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-79754c57d9-knr44"] Dec 11 15:20:09 crc kubenswrapper[5050]: I1211 15:20:09.639402 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-79754c57d9-knr44"] Dec 11 15:20:11 crc kubenswrapper[5050]: I1211 15:20:11.547341 5050 scope.go:117] "RemoveContainer" containerID="5e751c2ff817a404aae81889734d5a9bd57af497d8e885add288368bb5b0040e" Dec 11 15:20:11 crc kubenswrapper[5050]: E1211 15:20:11.548067 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 15:20:11 crc kubenswrapper[5050]: I1211 15:20:11.555687 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c8d953cd-0732-41cc-8005-18aa2145cb8c" path="/var/lib/kubelet/pods/c8d953cd-0732-41cc-8005-18aa2145cb8c/volumes" Dec 11 15:20:11 crc kubenswrapper[5050]: I1211 15:20:11.919665 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Dec 11 15:20:11 crc kubenswrapper[5050]: I1211 15:20:11.919713 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Dec 11 15:20:11 crc kubenswrapper[5050]: I1211 15:20:11.949539 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Dec 11 15:20:11 crc kubenswrapper[5050]: I1211 15:20:11.958761 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Dec 11 15:20:12 crc kubenswrapper[5050]: I1211 15:20:12.632312 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Dec 11 15:20:12 crc kubenswrapper[5050]: I1211 15:20:12.632368 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Dec 11 15:20:13 crc kubenswrapper[5050]: I1211 15:20:13.919481 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Dec 11 15:20:13 crc kubenswrapper[5050]: I1211 15:20:13.919800 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Dec 11 15:20:13 crc kubenswrapper[5050]: I1211 15:20:13.945922 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Dec 11 15:20:13 crc kubenswrapper[5050]: I1211 15:20:13.959584 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Dec 11 15:20:14 crc kubenswrapper[5050]: I1211 15:20:14.578612 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Dec 11 15:20:14 crc kubenswrapper[5050]: I1211 15:20:14.584535 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Dec 11 15:20:14 crc kubenswrapper[5050]: 
I1211 15:20:14.651214 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Dec 11 15:20:14 crc kubenswrapper[5050]: I1211 15:20:14.651425 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Dec 11 15:20:16 crc kubenswrapper[5050]: I1211 15:20:16.664482 5050 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 11 15:20:16 crc kubenswrapper[5050]: I1211 15:20:16.664903 5050 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 11 15:20:16 crc kubenswrapper[5050]: I1211 15:20:16.745476 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Dec 11 15:20:16 crc kubenswrapper[5050]: I1211 15:20:16.749937 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Dec 11 15:20:22 crc kubenswrapper[5050]: I1211 15:20:22.858858 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-l2kb8"] Dec 11 15:20:22 crc kubenswrapper[5050]: E1211 15:20:22.860181 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8d953cd-0732-41cc-8005-18aa2145cb8c" containerName="dnsmasq-dns" Dec 11 15:20:22 crc kubenswrapper[5050]: I1211 15:20:22.860204 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8d953cd-0732-41cc-8005-18aa2145cb8c" containerName="dnsmasq-dns" Dec 11 15:20:22 crc kubenswrapper[5050]: E1211 15:20:22.860228 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8d953cd-0732-41cc-8005-18aa2145cb8c" containerName="init" Dec 11 15:20:22 crc kubenswrapper[5050]: I1211 15:20:22.860236 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8d953cd-0732-41cc-8005-18aa2145cb8c" containerName="init" Dec 11 15:20:22 crc kubenswrapper[5050]: I1211 15:20:22.860517 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8d953cd-0732-41cc-8005-18aa2145cb8c" containerName="dnsmasq-dns" Dec 11 15:20:22 crc kubenswrapper[5050]: I1211 15:20:22.861737 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-l2kb8" Dec 11 15:20:22 crc kubenswrapper[5050]: I1211 15:20:22.875401 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-l2kb8"] Dec 11 15:20:22 crc kubenswrapper[5050]: I1211 15:20:22.917091 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmw9f\" (UniqueName: \"kubernetes.io/projected/c46e6785-6125-4081-9817-5d5bfd9d9731-kube-api-access-pmw9f\") pod \"placement-db-create-l2kb8\" (UID: \"c46e6785-6125-4081-9817-5d5bfd9d9731\") " pod="openstack/placement-db-create-l2kb8" Dec 11 15:20:22 crc kubenswrapper[5050]: I1211 15:20:22.917218 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c46e6785-6125-4081-9817-5d5bfd9d9731-operator-scripts\") pod \"placement-db-create-l2kb8\" (UID: \"c46e6785-6125-4081-9817-5d5bfd9d9731\") " pod="openstack/placement-db-create-l2kb8" Dec 11 15:20:22 crc kubenswrapper[5050]: I1211 15:20:22.954210 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-1e1b-account-create-update-df2bl"] Dec 11 15:20:22 crc kubenswrapper[5050]: I1211 15:20:22.956169 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-1e1b-account-create-update-df2bl" Dec 11 15:20:22 crc kubenswrapper[5050]: I1211 15:20:22.958531 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Dec 11 15:20:22 crc kubenswrapper[5050]: I1211 15:20:22.965456 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-1e1b-account-create-update-df2bl"] Dec 11 15:20:23 crc kubenswrapper[5050]: I1211 15:20:23.019024 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c46e6785-6125-4081-9817-5d5bfd9d9731-operator-scripts\") pod \"placement-db-create-l2kb8\" (UID: \"c46e6785-6125-4081-9817-5d5bfd9d9731\") " pod="openstack/placement-db-create-l2kb8" Dec 11 15:20:23 crc kubenswrapper[5050]: I1211 15:20:23.019228 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pmw9f\" (UniqueName: \"kubernetes.io/projected/c46e6785-6125-4081-9817-5d5bfd9d9731-kube-api-access-pmw9f\") pod \"placement-db-create-l2kb8\" (UID: \"c46e6785-6125-4081-9817-5d5bfd9d9731\") " pod="openstack/placement-db-create-l2kb8" Dec 11 15:20:23 crc kubenswrapper[5050]: I1211 15:20:23.019688 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c46e6785-6125-4081-9817-5d5bfd9d9731-operator-scripts\") pod \"placement-db-create-l2kb8\" (UID: \"c46e6785-6125-4081-9817-5d5bfd9d9731\") " pod="openstack/placement-db-create-l2kb8" Dec 11 15:20:23 crc kubenswrapper[5050]: I1211 15:20:23.046590 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pmw9f\" (UniqueName: \"kubernetes.io/projected/c46e6785-6125-4081-9817-5d5bfd9d9731-kube-api-access-pmw9f\") pod \"placement-db-create-l2kb8\" (UID: \"c46e6785-6125-4081-9817-5d5bfd9d9731\") " pod="openstack/placement-db-create-l2kb8" Dec 11 15:20:23 crc kubenswrapper[5050]: I1211 15:20:23.121990 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c67e3e32-26d1-4e14-8a9b-0ba00d5c4df4-operator-scripts\") pod \"placement-1e1b-account-create-update-df2bl\" (UID: \"c67e3e32-26d1-4e14-8a9b-0ba00d5c4df4\") " pod="openstack/placement-1e1b-account-create-update-df2bl" Dec 11 15:20:23 crc kubenswrapper[5050]: I1211 15:20:23.122646 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2fzwk\" (UniqueName: \"kubernetes.io/projected/c67e3e32-26d1-4e14-8a9b-0ba00d5c4df4-kube-api-access-2fzwk\") pod \"placement-1e1b-account-create-update-df2bl\" (UID: \"c67e3e32-26d1-4e14-8a9b-0ba00d5c4df4\") " pod="openstack/placement-1e1b-account-create-update-df2bl" Dec 11 15:20:23 crc kubenswrapper[5050]: I1211 15:20:23.197178 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-l2kb8" Dec 11 15:20:23 crc kubenswrapper[5050]: I1211 15:20:23.224224 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c67e3e32-26d1-4e14-8a9b-0ba00d5c4df4-operator-scripts\") pod \"placement-1e1b-account-create-update-df2bl\" (UID: \"c67e3e32-26d1-4e14-8a9b-0ba00d5c4df4\") " pod="openstack/placement-1e1b-account-create-update-df2bl" Dec 11 15:20:23 crc kubenswrapper[5050]: I1211 15:20:23.224321 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2fzwk\" (UniqueName: \"kubernetes.io/projected/c67e3e32-26d1-4e14-8a9b-0ba00d5c4df4-kube-api-access-2fzwk\") pod \"placement-1e1b-account-create-update-df2bl\" (UID: \"c67e3e32-26d1-4e14-8a9b-0ba00d5c4df4\") " pod="openstack/placement-1e1b-account-create-update-df2bl" Dec 11 15:20:23 crc kubenswrapper[5050]: I1211 15:20:23.225632 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c67e3e32-26d1-4e14-8a9b-0ba00d5c4df4-operator-scripts\") pod \"placement-1e1b-account-create-update-df2bl\" (UID: \"c67e3e32-26d1-4e14-8a9b-0ba00d5c4df4\") " pod="openstack/placement-1e1b-account-create-update-df2bl" Dec 11 15:20:23 crc kubenswrapper[5050]: I1211 15:20:23.262803 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2fzwk\" (UniqueName: \"kubernetes.io/projected/c67e3e32-26d1-4e14-8a9b-0ba00d5c4df4-kube-api-access-2fzwk\") pod \"placement-1e1b-account-create-update-df2bl\" (UID: \"c67e3e32-26d1-4e14-8a9b-0ba00d5c4df4\") " pod="openstack/placement-1e1b-account-create-update-df2bl" Dec 11 15:20:23 crc kubenswrapper[5050]: I1211 15:20:23.272428 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-1e1b-account-create-update-df2bl" Dec 11 15:20:23 crc kubenswrapper[5050]: I1211 15:20:23.601133 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-l2kb8"] Dec 11 15:20:23 crc kubenswrapper[5050]: I1211 15:20:23.741150 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-l2kb8" event={"ID":"c46e6785-6125-4081-9817-5d5bfd9d9731","Type":"ContainerStarted","Data":"ad989d714149a9f410a18d6300458599e9ae61414a264e97afacdc6ee473ce91"} Dec 11 15:20:23 crc kubenswrapper[5050]: W1211 15:20:23.897977 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc67e3e32_26d1_4e14_8a9b_0ba00d5c4df4.slice/crio-36212e74fcd22252c67500720b57c0b70b5f49b1f7cba570953724689b8cade4 WatchSource:0}: Error finding container 36212e74fcd22252c67500720b57c0b70b5f49b1f7cba570953724689b8cade4: Status 404 returned error can't find the container with id 36212e74fcd22252c67500720b57c0b70b5f49b1f7cba570953724689b8cade4 Dec 11 15:20:23 crc kubenswrapper[5050]: I1211 15:20:23.899159 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-1e1b-account-create-update-df2bl"] Dec 11 15:20:24 crc kubenswrapper[5050]: I1211 15:20:24.546612 5050 scope.go:117] "RemoveContainer" containerID="5e751c2ff817a404aae81889734d5a9bd57af497d8e885add288368bb5b0040e" Dec 11 15:20:24 crc kubenswrapper[5050]: E1211 15:20:24.547091 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 15:20:24 crc kubenswrapper[5050]: I1211 15:20:24.753086 5050 generic.go:334] "Generic (PLEG): container finished" podID="c67e3e32-26d1-4e14-8a9b-0ba00d5c4df4" containerID="883d3a1157268e68ce7d2e5061a1ca26470c59d2d42a2665e63e237577658da8" exitCode=0 Dec 11 15:20:24 crc kubenswrapper[5050]: I1211 15:20:24.753208 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-1e1b-account-create-update-df2bl" event={"ID":"c67e3e32-26d1-4e14-8a9b-0ba00d5c4df4","Type":"ContainerDied","Data":"883d3a1157268e68ce7d2e5061a1ca26470c59d2d42a2665e63e237577658da8"} Dec 11 15:20:24 crc kubenswrapper[5050]: I1211 15:20:24.753272 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-1e1b-account-create-update-df2bl" event={"ID":"c67e3e32-26d1-4e14-8a9b-0ba00d5c4df4","Type":"ContainerStarted","Data":"36212e74fcd22252c67500720b57c0b70b5f49b1f7cba570953724689b8cade4"} Dec 11 15:20:24 crc kubenswrapper[5050]: I1211 15:20:24.756167 5050 generic.go:334] "Generic (PLEG): container finished" podID="c46e6785-6125-4081-9817-5d5bfd9d9731" containerID="96f8515acd5c83fecb09b21a9b2ad08bdcc909ac21984b7a5ff9c25887d4caec" exitCode=0 Dec 11 15:20:24 crc kubenswrapper[5050]: I1211 15:20:24.756206 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-l2kb8" event={"ID":"c46e6785-6125-4081-9817-5d5bfd9d9731","Type":"ContainerDied","Data":"96f8515acd5c83fecb09b21a9b2ad08bdcc909ac21984b7a5ff9c25887d4caec"} Dec 11 15:20:26 crc kubenswrapper[5050]: I1211 15:20:26.157490 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-l2kb8" Dec 11 15:20:26 crc kubenswrapper[5050]: I1211 15:20:26.289991 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-1e1b-account-create-update-df2bl" Dec 11 15:20:26 crc kubenswrapper[5050]: I1211 15:20:26.300797 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c46e6785-6125-4081-9817-5d5bfd9d9731-operator-scripts\") pod \"c46e6785-6125-4081-9817-5d5bfd9d9731\" (UID: \"c46e6785-6125-4081-9817-5d5bfd9d9731\") " Dec 11 15:20:26 crc kubenswrapper[5050]: I1211 15:20:26.300954 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pmw9f\" (UniqueName: \"kubernetes.io/projected/c46e6785-6125-4081-9817-5d5bfd9d9731-kube-api-access-pmw9f\") pod \"c46e6785-6125-4081-9817-5d5bfd9d9731\" (UID: \"c46e6785-6125-4081-9817-5d5bfd9d9731\") " Dec 11 15:20:26 crc kubenswrapper[5050]: I1211 15:20:26.301791 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c46e6785-6125-4081-9817-5d5bfd9d9731-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c46e6785-6125-4081-9817-5d5bfd9d9731" (UID: "c46e6785-6125-4081-9817-5d5bfd9d9731"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 15:20:26 crc kubenswrapper[5050]: I1211 15:20:26.307370 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c46e6785-6125-4081-9817-5d5bfd9d9731-kube-api-access-pmw9f" (OuterVolumeSpecName: "kube-api-access-pmw9f") pod "c46e6785-6125-4081-9817-5d5bfd9d9731" (UID: "c46e6785-6125-4081-9817-5d5bfd9d9731"). InnerVolumeSpecName "kube-api-access-pmw9f". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 15:20:26 crc kubenswrapper[5050]: I1211 15:20:26.402430 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c67e3e32-26d1-4e14-8a9b-0ba00d5c4df4-operator-scripts\") pod \"c67e3e32-26d1-4e14-8a9b-0ba00d5c4df4\" (UID: \"c67e3e32-26d1-4e14-8a9b-0ba00d5c4df4\") " Dec 11 15:20:26 crc kubenswrapper[5050]: I1211 15:20:26.402505 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2fzwk\" (UniqueName: \"kubernetes.io/projected/c67e3e32-26d1-4e14-8a9b-0ba00d5c4df4-kube-api-access-2fzwk\") pod \"c67e3e32-26d1-4e14-8a9b-0ba00d5c4df4\" (UID: \"c67e3e32-26d1-4e14-8a9b-0ba00d5c4df4\") " Dec 11 15:20:26 crc kubenswrapper[5050]: I1211 15:20:26.403003 5050 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c46e6785-6125-4081-9817-5d5bfd9d9731-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 15:20:26 crc kubenswrapper[5050]: I1211 15:20:26.403062 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pmw9f\" (UniqueName: \"kubernetes.io/projected/c46e6785-6125-4081-9817-5d5bfd9d9731-kube-api-access-pmw9f\") on node \"crc\" DevicePath \"\"" Dec 11 15:20:26 crc kubenswrapper[5050]: I1211 15:20:26.403289 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c67e3e32-26d1-4e14-8a9b-0ba00d5c4df4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c67e3e32-26d1-4e14-8a9b-0ba00d5c4df4" (UID: "c67e3e32-26d1-4e14-8a9b-0ba00d5c4df4"). 
InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 15:20:26 crc kubenswrapper[5050]: I1211 15:20:26.406270 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c67e3e32-26d1-4e14-8a9b-0ba00d5c4df4-kube-api-access-2fzwk" (OuterVolumeSpecName: "kube-api-access-2fzwk") pod "c67e3e32-26d1-4e14-8a9b-0ba00d5c4df4" (UID: "c67e3e32-26d1-4e14-8a9b-0ba00d5c4df4"). InnerVolumeSpecName "kube-api-access-2fzwk". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 15:20:26 crc kubenswrapper[5050]: I1211 15:20:26.504792 5050 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c67e3e32-26d1-4e14-8a9b-0ba00d5c4df4-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 15:20:26 crc kubenswrapper[5050]: I1211 15:20:26.504825 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2fzwk\" (UniqueName: \"kubernetes.io/projected/c67e3e32-26d1-4e14-8a9b-0ba00d5c4df4-kube-api-access-2fzwk\") on node \"crc\" DevicePath \"\"" Dec 11 15:20:26 crc kubenswrapper[5050]: I1211 15:20:26.822373 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-1e1b-account-create-update-df2bl" event={"ID":"c67e3e32-26d1-4e14-8a9b-0ba00d5c4df4","Type":"ContainerDied","Data":"36212e74fcd22252c67500720b57c0b70b5f49b1f7cba570953724689b8cade4"} Dec 11 15:20:26 crc kubenswrapper[5050]: I1211 15:20:26.822442 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="36212e74fcd22252c67500720b57c0b70b5f49b1f7cba570953724689b8cade4" Dec 11 15:20:26 crc kubenswrapper[5050]: I1211 15:20:26.822545 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-1e1b-account-create-update-df2bl" Dec 11 15:20:26 crc kubenswrapper[5050]: I1211 15:20:26.845144 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-l2kb8" event={"ID":"c46e6785-6125-4081-9817-5d5bfd9d9731","Type":"ContainerDied","Data":"ad989d714149a9f410a18d6300458599e9ae61414a264e97afacdc6ee473ce91"} Dec 11 15:20:26 crc kubenswrapper[5050]: I1211 15:20:26.845188 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ad989d714149a9f410a18d6300458599e9ae61414a264e97afacdc6ee473ce91" Dec 11 15:20:26 crc kubenswrapper[5050]: I1211 15:20:26.845319 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-l2kb8" Dec 11 15:20:28 crc kubenswrapper[5050]: I1211 15:20:28.417089 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-b49969b8f-w2f8g"] Dec 11 15:20:28 crc kubenswrapper[5050]: E1211 15:20:28.422692 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c46e6785-6125-4081-9817-5d5bfd9d9731" containerName="mariadb-database-create" Dec 11 15:20:28 crc kubenswrapper[5050]: I1211 15:20:28.422719 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="c46e6785-6125-4081-9817-5d5bfd9d9731" containerName="mariadb-database-create" Dec 11 15:20:28 crc kubenswrapper[5050]: E1211 15:20:28.422766 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c67e3e32-26d1-4e14-8a9b-0ba00d5c4df4" containerName="mariadb-account-create-update" Dec 11 15:20:28 crc kubenswrapper[5050]: I1211 15:20:28.422774 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="c67e3e32-26d1-4e14-8a9b-0ba00d5c4df4" containerName="mariadb-account-create-update" Dec 11 15:20:28 crc kubenswrapper[5050]: I1211 15:20:28.423022 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="c46e6785-6125-4081-9817-5d5bfd9d9731" containerName="mariadb-database-create" Dec 11 15:20:28 crc kubenswrapper[5050]: I1211 15:20:28.423058 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="c67e3e32-26d1-4e14-8a9b-0ba00d5c4df4" containerName="mariadb-account-create-update" Dec 11 15:20:28 crc kubenswrapper[5050]: I1211 15:20:28.424368 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b49969b8f-w2f8g" Dec 11 15:20:28 crc kubenswrapper[5050]: I1211 15:20:28.441587 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b49969b8f-w2f8g"] Dec 11 15:20:28 crc kubenswrapper[5050]: I1211 15:20:28.457190 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-656c6"] Dec 11 15:20:28 crc kubenswrapper[5050]: I1211 15:20:28.458586 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-656c6" Dec 11 15:20:28 crc kubenswrapper[5050]: I1211 15:20:28.465771 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Dec 11 15:20:28 crc kubenswrapper[5050]: I1211 15:20:28.465816 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-4zzmp" Dec 11 15:20:28 crc kubenswrapper[5050]: I1211 15:20:28.465921 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Dec 11 15:20:28 crc kubenswrapper[5050]: I1211 15:20:28.516971 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-656c6"] Dec 11 15:20:28 crc kubenswrapper[5050]: I1211 15:20:28.546969 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1e097af-88c0-4cd0-b4bc-92793ae0f1f0-config-data\") pod \"placement-db-sync-656c6\" (UID: \"b1e097af-88c0-4cd0-b4bc-92793ae0f1f0\") " pod="openstack/placement-db-sync-656c6" Dec 11 15:20:28 crc kubenswrapper[5050]: I1211 15:20:28.547359 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9d52ee10-8e8e-457a-92cc-03b93c6bedca-ovsdbserver-sb\") pod \"dnsmasq-dns-b49969b8f-w2f8g\" (UID: \"9d52ee10-8e8e-457a-92cc-03b93c6bedca\") " pod="openstack/dnsmasq-dns-b49969b8f-w2f8g" Dec 11 15:20:28 crc kubenswrapper[5050]: I1211 15:20:28.547454 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d52ee10-8e8e-457a-92cc-03b93c6bedca-config\") pod \"dnsmasq-dns-b49969b8f-w2f8g\" (UID: \"9d52ee10-8e8e-457a-92cc-03b93c6bedca\") " pod="openstack/dnsmasq-dns-b49969b8f-w2f8g" Dec 11 15:20:28 crc kubenswrapper[5050]: I1211 15:20:28.547555 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8q9rt\" (UniqueName: \"kubernetes.io/projected/9d52ee10-8e8e-457a-92cc-03b93c6bedca-kube-api-access-8q9rt\") pod \"dnsmasq-dns-b49969b8f-w2f8g\" (UID: \"9d52ee10-8e8e-457a-92cc-03b93c6bedca\") " pod="openstack/dnsmasq-dns-b49969b8f-w2f8g" Dec 11 15:20:28 crc kubenswrapper[5050]: I1211 15:20:28.547640 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b1e097af-88c0-4cd0-b4bc-92793ae0f1f0-logs\") pod \"placement-db-sync-656c6\" (UID: \"b1e097af-88c0-4cd0-b4bc-92793ae0f1f0\") " pod="openstack/placement-db-sync-656c6" Dec 11 15:20:28 crc kubenswrapper[5050]: I1211 15:20:28.547735 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1e097af-88c0-4cd0-b4bc-92793ae0f1f0-combined-ca-bundle\") pod \"placement-db-sync-656c6\" (UID: \"b1e097af-88c0-4cd0-b4bc-92793ae0f1f0\") " pod="openstack/placement-db-sync-656c6" Dec 11 15:20:28 crc kubenswrapper[5050]: I1211 15:20:28.548120 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9d52ee10-8e8e-457a-92cc-03b93c6bedca-dns-svc\") pod \"dnsmasq-dns-b49969b8f-w2f8g\" (UID: \"9d52ee10-8e8e-457a-92cc-03b93c6bedca\") " pod="openstack/dnsmasq-dns-b49969b8f-w2f8g" Dec 11 15:20:28 crc kubenswrapper[5050]: 
I1211 15:20:28.548285 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9d52ee10-8e8e-457a-92cc-03b93c6bedca-ovsdbserver-nb\") pod \"dnsmasq-dns-b49969b8f-w2f8g\" (UID: \"9d52ee10-8e8e-457a-92cc-03b93c6bedca\") " pod="openstack/dnsmasq-dns-b49969b8f-w2f8g" Dec 11 15:20:28 crc kubenswrapper[5050]: I1211 15:20:28.548389 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b1e097af-88c0-4cd0-b4bc-92793ae0f1f0-scripts\") pod \"placement-db-sync-656c6\" (UID: \"b1e097af-88c0-4cd0-b4bc-92793ae0f1f0\") " pod="openstack/placement-db-sync-656c6" Dec 11 15:20:28 crc kubenswrapper[5050]: I1211 15:20:28.548467 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nkpm9\" (UniqueName: \"kubernetes.io/projected/b1e097af-88c0-4cd0-b4bc-92793ae0f1f0-kube-api-access-nkpm9\") pod \"placement-db-sync-656c6\" (UID: \"b1e097af-88c0-4cd0-b4bc-92793ae0f1f0\") " pod="openstack/placement-db-sync-656c6" Dec 11 15:20:28 crc kubenswrapper[5050]: I1211 15:20:28.649739 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8q9rt\" (UniqueName: \"kubernetes.io/projected/9d52ee10-8e8e-457a-92cc-03b93c6bedca-kube-api-access-8q9rt\") pod \"dnsmasq-dns-b49969b8f-w2f8g\" (UID: \"9d52ee10-8e8e-457a-92cc-03b93c6bedca\") " pod="openstack/dnsmasq-dns-b49969b8f-w2f8g" Dec 11 15:20:28 crc kubenswrapper[5050]: I1211 15:20:28.650077 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b1e097af-88c0-4cd0-b4bc-92793ae0f1f0-logs\") pod \"placement-db-sync-656c6\" (UID: \"b1e097af-88c0-4cd0-b4bc-92793ae0f1f0\") " pod="openstack/placement-db-sync-656c6" Dec 11 15:20:28 crc kubenswrapper[5050]: I1211 15:20:28.650186 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1e097af-88c0-4cd0-b4bc-92793ae0f1f0-combined-ca-bundle\") pod \"placement-db-sync-656c6\" (UID: \"b1e097af-88c0-4cd0-b4bc-92793ae0f1f0\") " pod="openstack/placement-db-sync-656c6" Dec 11 15:20:28 crc kubenswrapper[5050]: I1211 15:20:28.650280 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9d52ee10-8e8e-457a-92cc-03b93c6bedca-dns-svc\") pod \"dnsmasq-dns-b49969b8f-w2f8g\" (UID: \"9d52ee10-8e8e-457a-92cc-03b93c6bedca\") " pod="openstack/dnsmasq-dns-b49969b8f-w2f8g" Dec 11 15:20:28 crc kubenswrapper[5050]: I1211 15:20:28.650362 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9d52ee10-8e8e-457a-92cc-03b93c6bedca-ovsdbserver-nb\") pod \"dnsmasq-dns-b49969b8f-w2f8g\" (UID: \"9d52ee10-8e8e-457a-92cc-03b93c6bedca\") " pod="openstack/dnsmasq-dns-b49969b8f-w2f8g" Dec 11 15:20:28 crc kubenswrapper[5050]: I1211 15:20:28.650466 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b1e097af-88c0-4cd0-b4bc-92793ae0f1f0-scripts\") pod \"placement-db-sync-656c6\" (UID: \"b1e097af-88c0-4cd0-b4bc-92793ae0f1f0\") " pod="openstack/placement-db-sync-656c6" Dec 11 15:20:28 crc kubenswrapper[5050]: I1211 15:20:28.650533 5050 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-nkpm9\" (UniqueName: \"kubernetes.io/projected/b1e097af-88c0-4cd0-b4bc-92793ae0f1f0-kube-api-access-nkpm9\") pod \"placement-db-sync-656c6\" (UID: \"b1e097af-88c0-4cd0-b4bc-92793ae0f1f0\") " pod="openstack/placement-db-sync-656c6" Dec 11 15:20:28 crc kubenswrapper[5050]: I1211 15:20:28.650653 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1e097af-88c0-4cd0-b4bc-92793ae0f1f0-config-data\") pod \"placement-db-sync-656c6\" (UID: \"b1e097af-88c0-4cd0-b4bc-92793ae0f1f0\") " pod="openstack/placement-db-sync-656c6" Dec 11 15:20:28 crc kubenswrapper[5050]: I1211 15:20:28.650743 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9d52ee10-8e8e-457a-92cc-03b93c6bedca-ovsdbserver-sb\") pod \"dnsmasq-dns-b49969b8f-w2f8g\" (UID: \"9d52ee10-8e8e-457a-92cc-03b93c6bedca\") " pod="openstack/dnsmasq-dns-b49969b8f-w2f8g" Dec 11 15:20:28 crc kubenswrapper[5050]: I1211 15:20:28.650846 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d52ee10-8e8e-457a-92cc-03b93c6bedca-config\") pod \"dnsmasq-dns-b49969b8f-w2f8g\" (UID: \"9d52ee10-8e8e-457a-92cc-03b93c6bedca\") " pod="openstack/dnsmasq-dns-b49969b8f-w2f8g" Dec 11 15:20:28 crc kubenswrapper[5050]: I1211 15:20:28.651819 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d52ee10-8e8e-457a-92cc-03b93c6bedca-config\") pod \"dnsmasq-dns-b49969b8f-w2f8g\" (UID: \"9d52ee10-8e8e-457a-92cc-03b93c6bedca\") " pod="openstack/dnsmasq-dns-b49969b8f-w2f8g" Dec 11 15:20:28 crc kubenswrapper[5050]: I1211 15:20:28.652628 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9d52ee10-8e8e-457a-92cc-03b93c6bedca-dns-svc\") pod \"dnsmasq-dns-b49969b8f-w2f8g\" (UID: \"9d52ee10-8e8e-457a-92cc-03b93c6bedca\") " pod="openstack/dnsmasq-dns-b49969b8f-w2f8g" Dec 11 15:20:28 crc kubenswrapper[5050]: I1211 15:20:28.652902 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9d52ee10-8e8e-457a-92cc-03b93c6bedca-ovsdbserver-sb\") pod \"dnsmasq-dns-b49969b8f-w2f8g\" (UID: \"9d52ee10-8e8e-457a-92cc-03b93c6bedca\") " pod="openstack/dnsmasq-dns-b49969b8f-w2f8g" Dec 11 15:20:28 crc kubenswrapper[5050]: I1211 15:20:28.652915 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b1e097af-88c0-4cd0-b4bc-92793ae0f1f0-logs\") pod \"placement-db-sync-656c6\" (UID: \"b1e097af-88c0-4cd0-b4bc-92793ae0f1f0\") " pod="openstack/placement-db-sync-656c6" Dec 11 15:20:28 crc kubenswrapper[5050]: I1211 15:20:28.653110 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9d52ee10-8e8e-457a-92cc-03b93c6bedca-ovsdbserver-nb\") pod \"dnsmasq-dns-b49969b8f-w2f8g\" (UID: \"9d52ee10-8e8e-457a-92cc-03b93c6bedca\") " pod="openstack/dnsmasq-dns-b49969b8f-w2f8g" Dec 11 15:20:28 crc kubenswrapper[5050]: I1211 15:20:28.657842 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1e097af-88c0-4cd0-b4bc-92793ae0f1f0-config-data\") pod \"placement-db-sync-656c6\" (UID: \"b1e097af-88c0-4cd0-b4bc-92793ae0f1f0\") " 
pod="openstack/placement-db-sync-656c6" Dec 11 15:20:28 crc kubenswrapper[5050]: I1211 15:20:28.658327 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1e097af-88c0-4cd0-b4bc-92793ae0f1f0-combined-ca-bundle\") pod \"placement-db-sync-656c6\" (UID: \"b1e097af-88c0-4cd0-b4bc-92793ae0f1f0\") " pod="openstack/placement-db-sync-656c6" Dec 11 15:20:28 crc kubenswrapper[5050]: I1211 15:20:28.663658 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b1e097af-88c0-4cd0-b4bc-92793ae0f1f0-scripts\") pod \"placement-db-sync-656c6\" (UID: \"b1e097af-88c0-4cd0-b4bc-92793ae0f1f0\") " pod="openstack/placement-db-sync-656c6" Dec 11 15:20:28 crc kubenswrapper[5050]: I1211 15:20:28.671051 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8q9rt\" (UniqueName: \"kubernetes.io/projected/9d52ee10-8e8e-457a-92cc-03b93c6bedca-kube-api-access-8q9rt\") pod \"dnsmasq-dns-b49969b8f-w2f8g\" (UID: \"9d52ee10-8e8e-457a-92cc-03b93c6bedca\") " pod="openstack/dnsmasq-dns-b49969b8f-w2f8g" Dec 11 15:20:28 crc kubenswrapper[5050]: I1211 15:20:28.677473 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nkpm9\" (UniqueName: \"kubernetes.io/projected/b1e097af-88c0-4cd0-b4bc-92793ae0f1f0-kube-api-access-nkpm9\") pod \"placement-db-sync-656c6\" (UID: \"b1e097af-88c0-4cd0-b4bc-92793ae0f1f0\") " pod="openstack/placement-db-sync-656c6" Dec 11 15:20:28 crc kubenswrapper[5050]: I1211 15:20:28.748070 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b49969b8f-w2f8g" Dec 11 15:20:28 crc kubenswrapper[5050]: I1211 15:20:28.786568 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-656c6" Dec 11 15:20:29 crc kubenswrapper[5050]: I1211 15:20:29.250691 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b49969b8f-w2f8g"] Dec 11 15:20:29 crc kubenswrapper[5050]: W1211 15:20:29.259516 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9d52ee10_8e8e_457a_92cc_03b93c6bedca.slice/crio-524e8252bdc3ce196fa25139227f27eb145907440d02b5dd676579231494828c WatchSource:0}: Error finding container 524e8252bdc3ce196fa25139227f27eb145907440d02b5dd676579231494828c: Status 404 returned error can't find the container with id 524e8252bdc3ce196fa25139227f27eb145907440d02b5dd676579231494828c Dec 11 15:20:29 crc kubenswrapper[5050]: I1211 15:20:29.325307 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-656c6"] Dec 11 15:20:29 crc kubenswrapper[5050]: W1211 15:20:29.339004 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb1e097af_88c0_4cd0_b4bc_92793ae0f1f0.slice/crio-66946627d6159beacbc5dbd858d80b5a2bcbf64d55fb044df86361939d85a51b WatchSource:0}: Error finding container 66946627d6159beacbc5dbd858d80b5a2bcbf64d55fb044df86361939d85a51b: Status 404 returned error can't find the container with id 66946627d6159beacbc5dbd858d80b5a2bcbf64d55fb044df86361939d85a51b Dec 11 15:20:29 crc kubenswrapper[5050]: I1211 15:20:29.877842 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-656c6" event={"ID":"b1e097af-88c0-4cd0-b4bc-92793ae0f1f0","Type":"ContainerStarted","Data":"d7ffde2eff7975ba1cdc812b8c5f6af2773623225a74498593e92e9e9d01c9d7"} Dec 11 15:20:29 crc kubenswrapper[5050]: I1211 15:20:29.878261 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-656c6" event={"ID":"b1e097af-88c0-4cd0-b4bc-92793ae0f1f0","Type":"ContainerStarted","Data":"66946627d6159beacbc5dbd858d80b5a2bcbf64d55fb044df86361939d85a51b"} Dec 11 15:20:29 crc kubenswrapper[5050]: I1211 15:20:29.881684 5050 generic.go:334] "Generic (PLEG): container finished" podID="9d52ee10-8e8e-457a-92cc-03b93c6bedca" containerID="fec02ffbe8337c9b5bee5297d45dde19f614061cf73022c0b5e8af9b4cb5052d" exitCode=0 Dec 11 15:20:29 crc kubenswrapper[5050]: I1211 15:20:29.881739 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b49969b8f-w2f8g" event={"ID":"9d52ee10-8e8e-457a-92cc-03b93c6bedca","Type":"ContainerDied","Data":"fec02ffbe8337c9b5bee5297d45dde19f614061cf73022c0b5e8af9b4cb5052d"} Dec 11 15:20:29 crc kubenswrapper[5050]: I1211 15:20:29.881768 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b49969b8f-w2f8g" event={"ID":"9d52ee10-8e8e-457a-92cc-03b93c6bedca","Type":"ContainerStarted","Data":"524e8252bdc3ce196fa25139227f27eb145907440d02b5dd676579231494828c"} Dec 11 15:20:29 crc kubenswrapper[5050]: I1211 15:20:29.910163 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-656c6" podStartSLOduration=1.910135115 podStartE2EDuration="1.910135115s" podCreationTimestamp="2025-12-11 15:20:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 15:20:29.897357882 +0000 UTC m=+5520.741080478" watchObservedRunningTime="2025-12-11 15:20:29.910135115 +0000 UTC m=+5520.753857711" Dec 11 15:20:30 crc 
kubenswrapper[5050]: I1211 15:20:30.891296 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b49969b8f-w2f8g" event={"ID":"9d52ee10-8e8e-457a-92cc-03b93c6bedca","Type":"ContainerStarted","Data":"29b876e4d00df1a55af81f5e20c8882b09c2c84707ab01ca16a849ce3d8b63f5"} Dec 11 15:20:30 crc kubenswrapper[5050]: I1211 15:20:30.891730 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-b49969b8f-w2f8g" Dec 11 15:20:30 crc kubenswrapper[5050]: I1211 15:20:30.894276 5050 generic.go:334] "Generic (PLEG): container finished" podID="b1e097af-88c0-4cd0-b4bc-92793ae0f1f0" containerID="d7ffde2eff7975ba1cdc812b8c5f6af2773623225a74498593e92e9e9d01c9d7" exitCode=0 Dec 11 15:20:30 crc kubenswrapper[5050]: I1211 15:20:30.894373 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-656c6" event={"ID":"b1e097af-88c0-4cd0-b4bc-92793ae0f1f0","Type":"ContainerDied","Data":"d7ffde2eff7975ba1cdc812b8c5f6af2773623225a74498593e92e9e9d01c9d7"} Dec 11 15:20:30 crc kubenswrapper[5050]: I1211 15:20:30.918107 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-b49969b8f-w2f8g" podStartSLOduration=2.918088168 podStartE2EDuration="2.918088168s" podCreationTimestamp="2025-12-11 15:20:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 15:20:30.915458588 +0000 UTC m=+5521.759181214" watchObservedRunningTime="2025-12-11 15:20:30.918088168 +0000 UTC m=+5521.761810754" Dec 11 15:20:32 crc kubenswrapper[5050]: I1211 15:20:32.281777 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-656c6" Dec 11 15:20:32 crc kubenswrapper[5050]: I1211 15:20:32.351826 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b1e097af-88c0-4cd0-b4bc-92793ae0f1f0-logs\") pod \"b1e097af-88c0-4cd0-b4bc-92793ae0f1f0\" (UID: \"b1e097af-88c0-4cd0-b4bc-92793ae0f1f0\") " Dec 11 15:20:32 crc kubenswrapper[5050]: I1211 15:20:32.351950 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1e097af-88c0-4cd0-b4bc-92793ae0f1f0-combined-ca-bundle\") pod \"b1e097af-88c0-4cd0-b4bc-92793ae0f1f0\" (UID: \"b1e097af-88c0-4cd0-b4bc-92793ae0f1f0\") " Dec 11 15:20:32 crc kubenswrapper[5050]: I1211 15:20:32.352000 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nkpm9\" (UniqueName: \"kubernetes.io/projected/b1e097af-88c0-4cd0-b4bc-92793ae0f1f0-kube-api-access-nkpm9\") pod \"b1e097af-88c0-4cd0-b4bc-92793ae0f1f0\" (UID: \"b1e097af-88c0-4cd0-b4bc-92793ae0f1f0\") " Dec 11 15:20:32 crc kubenswrapper[5050]: I1211 15:20:32.352079 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1e097af-88c0-4cd0-b4bc-92793ae0f1f0-config-data\") pod \"b1e097af-88c0-4cd0-b4bc-92793ae0f1f0\" (UID: \"b1e097af-88c0-4cd0-b4bc-92793ae0f1f0\") " Dec 11 15:20:32 crc kubenswrapper[5050]: I1211 15:20:32.352119 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b1e097af-88c0-4cd0-b4bc-92793ae0f1f0-scripts\") pod \"b1e097af-88c0-4cd0-b4bc-92793ae0f1f0\" (UID: \"b1e097af-88c0-4cd0-b4bc-92793ae0f1f0\") " Dec 11 15:20:32 crc 
kubenswrapper[5050]: I1211 15:20:32.353315 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b1e097af-88c0-4cd0-b4bc-92793ae0f1f0-logs" (OuterVolumeSpecName: "logs") pod "b1e097af-88c0-4cd0-b4bc-92793ae0f1f0" (UID: "b1e097af-88c0-4cd0-b4bc-92793ae0f1f0"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 15:20:32 crc kubenswrapper[5050]: I1211 15:20:32.358967 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1e097af-88c0-4cd0-b4bc-92793ae0f1f0-scripts" (OuterVolumeSpecName: "scripts") pod "b1e097af-88c0-4cd0-b4bc-92793ae0f1f0" (UID: "b1e097af-88c0-4cd0-b4bc-92793ae0f1f0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 15:20:32 crc kubenswrapper[5050]: I1211 15:20:32.359387 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1e097af-88c0-4cd0-b4bc-92793ae0f1f0-kube-api-access-nkpm9" (OuterVolumeSpecName: "kube-api-access-nkpm9") pod "b1e097af-88c0-4cd0-b4bc-92793ae0f1f0" (UID: "b1e097af-88c0-4cd0-b4bc-92793ae0f1f0"). InnerVolumeSpecName "kube-api-access-nkpm9". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 15:20:32 crc kubenswrapper[5050]: I1211 15:20:32.382797 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1e097af-88c0-4cd0-b4bc-92793ae0f1f0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b1e097af-88c0-4cd0-b4bc-92793ae0f1f0" (UID: "b1e097af-88c0-4cd0-b4bc-92793ae0f1f0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 15:20:32 crc kubenswrapper[5050]: I1211 15:20:32.385927 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1e097af-88c0-4cd0-b4bc-92793ae0f1f0-config-data" (OuterVolumeSpecName: "config-data") pod "b1e097af-88c0-4cd0-b4bc-92793ae0f1f0" (UID: "b1e097af-88c0-4cd0-b4bc-92793ae0f1f0"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 15:20:32 crc kubenswrapper[5050]: I1211 15:20:32.454414 5050 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b1e097af-88c0-4cd0-b4bc-92793ae0f1f0-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 15:20:32 crc kubenswrapper[5050]: I1211 15:20:32.454459 5050 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b1e097af-88c0-4cd0-b4bc-92793ae0f1f0-logs\") on node \"crc\" DevicePath \"\"" Dec 11 15:20:32 crc kubenswrapper[5050]: I1211 15:20:32.454472 5050 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1e097af-88c0-4cd0-b4bc-92793ae0f1f0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 15:20:32 crc kubenswrapper[5050]: I1211 15:20:32.454487 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nkpm9\" (UniqueName: \"kubernetes.io/projected/b1e097af-88c0-4cd0-b4bc-92793ae0f1f0-kube-api-access-nkpm9\") on node \"crc\" DevicePath \"\"" Dec 11 15:20:32 crc kubenswrapper[5050]: I1211 15:20:32.454498 5050 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1e097af-88c0-4cd0-b4bc-92793ae0f1f0-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 15:20:32 crc kubenswrapper[5050]: I1211 15:20:32.680995 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-756f75dd4b-zg586"] Dec 11 15:20:32 crc kubenswrapper[5050]: E1211 15:20:32.681345 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1e097af-88c0-4cd0-b4bc-92793ae0f1f0" containerName="placement-db-sync" Dec 11 15:20:32 crc kubenswrapper[5050]: I1211 15:20:32.681358 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1e097af-88c0-4cd0-b4bc-92793ae0f1f0" containerName="placement-db-sync" Dec 11 15:20:32 crc kubenswrapper[5050]: I1211 15:20:32.681533 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1e097af-88c0-4cd0-b4bc-92793ae0f1f0" containerName="placement-db-sync" Dec 11 15:20:32 crc kubenswrapper[5050]: I1211 15:20:32.682705 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-756f75dd4b-zg586" Dec 11 15:20:32 crc kubenswrapper[5050]: I1211 15:20:32.702827 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-756f75dd4b-zg586"] Dec 11 15:20:32 crc kubenswrapper[5050]: I1211 15:20:32.760127 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/04d9456b-3702-4a01-ad1f-f4a30c9c8d83-logs\") pod \"placement-756f75dd4b-zg586\" (UID: \"04d9456b-3702-4a01-ad1f-f4a30c9c8d83\") " pod="openstack/placement-756f75dd4b-zg586" Dec 11 15:20:32 crc kubenswrapper[5050]: I1211 15:20:32.760244 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/04d9456b-3702-4a01-ad1f-f4a30c9c8d83-scripts\") pod \"placement-756f75dd4b-zg586\" (UID: \"04d9456b-3702-4a01-ad1f-f4a30c9c8d83\") " pod="openstack/placement-756f75dd4b-zg586" Dec 11 15:20:32 crc kubenswrapper[5050]: I1211 15:20:32.760305 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04d9456b-3702-4a01-ad1f-f4a30c9c8d83-config-data\") pod \"placement-756f75dd4b-zg586\" (UID: \"04d9456b-3702-4a01-ad1f-f4a30c9c8d83\") " pod="openstack/placement-756f75dd4b-zg586" Dec 11 15:20:32 crc kubenswrapper[5050]: I1211 15:20:32.760325 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04d9456b-3702-4a01-ad1f-f4a30c9c8d83-combined-ca-bundle\") pod \"placement-756f75dd4b-zg586\" (UID: \"04d9456b-3702-4a01-ad1f-f4a30c9c8d83\") " pod="openstack/placement-756f75dd4b-zg586" Dec 11 15:20:32 crc kubenswrapper[5050]: I1211 15:20:32.760783 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cp4lb\" (UniqueName: \"kubernetes.io/projected/04d9456b-3702-4a01-ad1f-f4a30c9c8d83-kube-api-access-cp4lb\") pod \"placement-756f75dd4b-zg586\" (UID: \"04d9456b-3702-4a01-ad1f-f4a30c9c8d83\") " pod="openstack/placement-756f75dd4b-zg586" Dec 11 15:20:32 crc kubenswrapper[5050]: I1211 15:20:32.864045 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/04d9456b-3702-4a01-ad1f-f4a30c9c8d83-scripts\") pod \"placement-756f75dd4b-zg586\" (UID: \"04d9456b-3702-4a01-ad1f-f4a30c9c8d83\") " pod="openstack/placement-756f75dd4b-zg586" Dec 11 15:20:32 crc kubenswrapper[5050]: I1211 15:20:32.864780 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04d9456b-3702-4a01-ad1f-f4a30c9c8d83-config-data\") pod \"placement-756f75dd4b-zg586\" (UID: \"04d9456b-3702-4a01-ad1f-f4a30c9c8d83\") " pod="openstack/placement-756f75dd4b-zg586" Dec 11 15:20:32 crc kubenswrapper[5050]: I1211 15:20:32.864840 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04d9456b-3702-4a01-ad1f-f4a30c9c8d83-combined-ca-bundle\") pod \"placement-756f75dd4b-zg586\" (UID: \"04d9456b-3702-4a01-ad1f-f4a30c9c8d83\") " pod="openstack/placement-756f75dd4b-zg586" Dec 11 15:20:32 crc kubenswrapper[5050]: I1211 15:20:32.864966 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cp4lb\" (UniqueName: 
\"kubernetes.io/projected/04d9456b-3702-4a01-ad1f-f4a30c9c8d83-kube-api-access-cp4lb\") pod \"placement-756f75dd4b-zg586\" (UID: \"04d9456b-3702-4a01-ad1f-f4a30c9c8d83\") " pod="openstack/placement-756f75dd4b-zg586" Dec 11 15:20:32 crc kubenswrapper[5050]: I1211 15:20:32.865063 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/04d9456b-3702-4a01-ad1f-f4a30c9c8d83-logs\") pod \"placement-756f75dd4b-zg586\" (UID: \"04d9456b-3702-4a01-ad1f-f4a30c9c8d83\") " pod="openstack/placement-756f75dd4b-zg586" Dec 11 15:20:32 crc kubenswrapper[5050]: I1211 15:20:32.865794 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/04d9456b-3702-4a01-ad1f-f4a30c9c8d83-logs\") pod \"placement-756f75dd4b-zg586\" (UID: \"04d9456b-3702-4a01-ad1f-f4a30c9c8d83\") " pod="openstack/placement-756f75dd4b-zg586" Dec 11 15:20:32 crc kubenswrapper[5050]: I1211 15:20:32.869727 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/04d9456b-3702-4a01-ad1f-f4a30c9c8d83-scripts\") pod \"placement-756f75dd4b-zg586\" (UID: \"04d9456b-3702-4a01-ad1f-f4a30c9c8d83\") " pod="openstack/placement-756f75dd4b-zg586" Dec 11 15:20:32 crc kubenswrapper[5050]: I1211 15:20:32.870407 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04d9456b-3702-4a01-ad1f-f4a30c9c8d83-combined-ca-bundle\") pod \"placement-756f75dd4b-zg586\" (UID: \"04d9456b-3702-4a01-ad1f-f4a30c9c8d83\") " pod="openstack/placement-756f75dd4b-zg586" Dec 11 15:20:32 crc kubenswrapper[5050]: I1211 15:20:32.871878 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04d9456b-3702-4a01-ad1f-f4a30c9c8d83-config-data\") pod \"placement-756f75dd4b-zg586\" (UID: \"04d9456b-3702-4a01-ad1f-f4a30c9c8d83\") " pod="openstack/placement-756f75dd4b-zg586" Dec 11 15:20:32 crc kubenswrapper[5050]: I1211 15:20:32.885490 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cp4lb\" (UniqueName: \"kubernetes.io/projected/04d9456b-3702-4a01-ad1f-f4a30c9c8d83-kube-api-access-cp4lb\") pod \"placement-756f75dd4b-zg586\" (UID: \"04d9456b-3702-4a01-ad1f-f4a30c9c8d83\") " pod="openstack/placement-756f75dd4b-zg586" Dec 11 15:20:32 crc kubenswrapper[5050]: I1211 15:20:32.916331 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-656c6" event={"ID":"b1e097af-88c0-4cd0-b4bc-92793ae0f1f0","Type":"ContainerDied","Data":"66946627d6159beacbc5dbd858d80b5a2bcbf64d55fb044df86361939d85a51b"} Dec 11 15:20:32 crc kubenswrapper[5050]: I1211 15:20:32.916375 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="66946627d6159beacbc5dbd858d80b5a2bcbf64d55fb044df86361939d85a51b" Dec 11 15:20:32 crc kubenswrapper[5050]: I1211 15:20:32.916519 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-656c6" Dec 11 15:20:33 crc kubenswrapper[5050]: I1211 15:20:33.001747 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-756f75dd4b-zg586" Dec 11 15:20:33 crc kubenswrapper[5050]: I1211 15:20:33.274797 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-756f75dd4b-zg586"] Dec 11 15:20:33 crc kubenswrapper[5050]: I1211 15:20:33.927962 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-756f75dd4b-zg586" event={"ID":"04d9456b-3702-4a01-ad1f-f4a30c9c8d83","Type":"ContainerStarted","Data":"aca891df85429942cd2963a262183eeec90a421f1eaff16d722f19c84d8b0613"} Dec 11 15:20:33 crc kubenswrapper[5050]: I1211 15:20:33.928378 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-756f75dd4b-zg586" event={"ID":"04d9456b-3702-4a01-ad1f-f4a30c9c8d83","Type":"ContainerStarted","Data":"046b745336b97aa9a5fb6002cc27badf78bde0d383e9b1e7ad60ffbb4d5ecd7e"} Dec 11 15:20:33 crc kubenswrapper[5050]: I1211 15:20:33.928395 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-756f75dd4b-zg586" event={"ID":"04d9456b-3702-4a01-ad1f-f4a30c9c8d83","Type":"ContainerStarted","Data":"9cf229ae2304f28d5d39e814f027ccb26e45e0428c24e49c4e424e8961ea77d5"} Dec 11 15:20:33 crc kubenswrapper[5050]: I1211 15:20:33.928450 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-756f75dd4b-zg586" Dec 11 15:20:33 crc kubenswrapper[5050]: I1211 15:20:33.928477 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-756f75dd4b-zg586" Dec 11 15:20:33 crc kubenswrapper[5050]: I1211 15:20:33.967491 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-756f75dd4b-zg586" podStartSLOduration=1.967452832 podStartE2EDuration="1.967452832s" podCreationTimestamp="2025-12-11 15:20:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 15:20:33.952197532 +0000 UTC m=+5524.795920128" watchObservedRunningTime="2025-12-11 15:20:33.967452832 +0000 UTC m=+5524.811175438" Dec 11 15:20:36 crc kubenswrapper[5050]: I1211 15:20:36.546787 5050 scope.go:117] "RemoveContainer" containerID="5e751c2ff817a404aae81889734d5a9bd57af497d8e885add288368bb5b0040e" Dec 11 15:20:36 crc kubenswrapper[5050]: E1211 15:20:36.547642 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 15:20:38 crc kubenswrapper[5050]: I1211 15:20:38.750865 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-b49969b8f-w2f8g" Dec 11 15:20:38 crc kubenswrapper[5050]: I1211 15:20:38.840956 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6d84855d79-nmbkp"] Dec 11 15:20:38 crc kubenswrapper[5050]: I1211 15:20:38.841406 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6d84855d79-nmbkp" podUID="36cd4aea-0d4c-4dab-b57e-1065b6e2183d" containerName="dnsmasq-dns" containerID="cri-o://49189878d10c48084d29fcaf1240fa5c90c358535d6be71b5208d6f365de6efe" gracePeriod=10 Dec 11 15:20:38 crc kubenswrapper[5050]: I1211 15:20:38.996408 5050 generic.go:334] 
"Generic (PLEG): container finished" podID="36cd4aea-0d4c-4dab-b57e-1065b6e2183d" containerID="49189878d10c48084d29fcaf1240fa5c90c358535d6be71b5208d6f365de6efe" exitCode=0 Dec 11 15:20:38 crc kubenswrapper[5050]: I1211 15:20:38.996453 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d84855d79-nmbkp" event={"ID":"36cd4aea-0d4c-4dab-b57e-1065b6e2183d","Type":"ContainerDied","Data":"49189878d10c48084d29fcaf1240fa5c90c358535d6be71b5208d6f365de6efe"} Dec 11 15:20:39 crc kubenswrapper[5050]: I1211 15:20:39.357948 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d84855d79-nmbkp" Dec 11 15:20:39 crc kubenswrapper[5050]: I1211 15:20:39.413534 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/36cd4aea-0d4c-4dab-b57e-1065b6e2183d-dns-svc\") pod \"36cd4aea-0d4c-4dab-b57e-1065b6e2183d\" (UID: \"36cd4aea-0d4c-4dab-b57e-1065b6e2183d\") " Dec 11 15:20:39 crc kubenswrapper[5050]: I1211 15:20:39.413701 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/36cd4aea-0d4c-4dab-b57e-1065b6e2183d-ovsdbserver-nb\") pod \"36cd4aea-0d4c-4dab-b57e-1065b6e2183d\" (UID: \"36cd4aea-0d4c-4dab-b57e-1065b6e2183d\") " Dec 11 15:20:39 crc kubenswrapper[5050]: I1211 15:20:39.413741 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/36cd4aea-0d4c-4dab-b57e-1065b6e2183d-config\") pod \"36cd4aea-0d4c-4dab-b57e-1065b6e2183d\" (UID: \"36cd4aea-0d4c-4dab-b57e-1065b6e2183d\") " Dec 11 15:20:39 crc kubenswrapper[5050]: I1211 15:20:39.413832 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/36cd4aea-0d4c-4dab-b57e-1065b6e2183d-ovsdbserver-sb\") pod \"36cd4aea-0d4c-4dab-b57e-1065b6e2183d\" (UID: \"36cd4aea-0d4c-4dab-b57e-1065b6e2183d\") " Dec 11 15:20:39 crc kubenswrapper[5050]: I1211 15:20:39.413958 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sd66f\" (UniqueName: \"kubernetes.io/projected/36cd4aea-0d4c-4dab-b57e-1065b6e2183d-kube-api-access-sd66f\") pod \"36cd4aea-0d4c-4dab-b57e-1065b6e2183d\" (UID: \"36cd4aea-0d4c-4dab-b57e-1065b6e2183d\") " Dec 11 15:20:39 crc kubenswrapper[5050]: I1211 15:20:39.446452 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36cd4aea-0d4c-4dab-b57e-1065b6e2183d-kube-api-access-sd66f" (OuterVolumeSpecName: "kube-api-access-sd66f") pod "36cd4aea-0d4c-4dab-b57e-1065b6e2183d" (UID: "36cd4aea-0d4c-4dab-b57e-1065b6e2183d"). InnerVolumeSpecName "kube-api-access-sd66f". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 15:20:39 crc kubenswrapper[5050]: I1211 15:20:39.470677 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/36cd4aea-0d4c-4dab-b57e-1065b6e2183d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "36cd4aea-0d4c-4dab-b57e-1065b6e2183d" (UID: "36cd4aea-0d4c-4dab-b57e-1065b6e2183d"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 15:20:39 crc kubenswrapper[5050]: I1211 15:20:39.486335 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/36cd4aea-0d4c-4dab-b57e-1065b6e2183d-config" (OuterVolumeSpecName: "config") pod "36cd4aea-0d4c-4dab-b57e-1065b6e2183d" (UID: "36cd4aea-0d4c-4dab-b57e-1065b6e2183d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 15:20:39 crc kubenswrapper[5050]: I1211 15:20:39.494504 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/36cd4aea-0d4c-4dab-b57e-1065b6e2183d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "36cd4aea-0d4c-4dab-b57e-1065b6e2183d" (UID: "36cd4aea-0d4c-4dab-b57e-1065b6e2183d"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 15:20:39 crc kubenswrapper[5050]: I1211 15:20:39.495583 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/36cd4aea-0d4c-4dab-b57e-1065b6e2183d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "36cd4aea-0d4c-4dab-b57e-1065b6e2183d" (UID: "36cd4aea-0d4c-4dab-b57e-1065b6e2183d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 15:20:39 crc kubenswrapper[5050]: I1211 15:20:39.518501 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sd66f\" (UniqueName: \"kubernetes.io/projected/36cd4aea-0d4c-4dab-b57e-1065b6e2183d-kube-api-access-sd66f\") on node \"crc\" DevicePath \"\"" Dec 11 15:20:39 crc kubenswrapper[5050]: I1211 15:20:39.518527 5050 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/36cd4aea-0d4c-4dab-b57e-1065b6e2183d-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 11 15:20:39 crc kubenswrapper[5050]: I1211 15:20:39.518538 5050 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/36cd4aea-0d4c-4dab-b57e-1065b6e2183d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 11 15:20:39 crc kubenswrapper[5050]: I1211 15:20:39.518547 5050 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/36cd4aea-0d4c-4dab-b57e-1065b6e2183d-config\") on node \"crc\" DevicePath \"\"" Dec 11 15:20:39 crc kubenswrapper[5050]: I1211 15:20:39.518556 5050 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/36cd4aea-0d4c-4dab-b57e-1065b6e2183d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 11 15:20:40 crc kubenswrapper[5050]: I1211 15:20:40.008436 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d84855d79-nmbkp" event={"ID":"36cd4aea-0d4c-4dab-b57e-1065b6e2183d","Type":"ContainerDied","Data":"889922c4d2b66239c8f0afdaac43ed23a09b2d45c1971fe7014566fba0dbe9dd"} Dec 11 15:20:40 crc kubenswrapper[5050]: I1211 15:20:40.008499 5050 scope.go:117] "RemoveContainer" containerID="49189878d10c48084d29fcaf1240fa5c90c358535d6be71b5208d6f365de6efe" Dec 11 15:20:40 crc kubenswrapper[5050]: I1211 15:20:40.008506 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6d84855d79-nmbkp" Dec 11 15:20:40 crc kubenswrapper[5050]: I1211 15:20:40.042421 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6d84855d79-nmbkp"] Dec 11 15:20:40 crc kubenswrapper[5050]: I1211 15:20:40.042545 5050 scope.go:117] "RemoveContainer" containerID="014fafebc886bcfeb9757e835e674d7c42df2d2d7b8dc599b83a22c8696b2a13" Dec 11 15:20:40 crc kubenswrapper[5050]: I1211 15:20:40.054653 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6d84855d79-nmbkp"] Dec 11 15:20:41 crc kubenswrapper[5050]: I1211 15:20:41.559799 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="36cd4aea-0d4c-4dab-b57e-1065b6e2183d" path="/var/lib/kubelet/pods/36cd4aea-0d4c-4dab-b57e-1065b6e2183d/volumes" Dec 11 15:20:48 crc kubenswrapper[5050]: I1211 15:20:48.546062 5050 scope.go:117] "RemoveContainer" containerID="5e751c2ff817a404aae81889734d5a9bd57af497d8e885add288368bb5b0040e" Dec 11 15:20:48 crc kubenswrapper[5050]: E1211 15:20:48.546847 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 15:21:02 crc kubenswrapper[5050]: I1211 15:21:02.546586 5050 scope.go:117] "RemoveContainer" containerID="5e751c2ff817a404aae81889734d5a9bd57af497d8e885add288368bb5b0040e" Dec 11 15:21:02 crc kubenswrapper[5050]: E1211 15:21:02.547318 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 15:21:03 crc kubenswrapper[5050]: I1211 15:21:03.942941 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-756f75dd4b-zg586" Dec 11 15:21:03 crc kubenswrapper[5050]: I1211 15:21:03.965821 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-756f75dd4b-zg586" Dec 11 15:21:15 crc kubenswrapper[5050]: I1211 15:21:15.546862 5050 scope.go:117] "RemoveContainer" containerID="5e751c2ff817a404aae81889734d5a9bd57af497d8e885add288368bb5b0040e" Dec 11 15:21:15 crc kubenswrapper[5050]: E1211 15:21:15.547630 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 15:21:25 crc kubenswrapper[5050]: I1211 15:21:25.236227 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-gtv8c"] Dec 11 15:21:25 crc kubenswrapper[5050]: E1211 15:21:25.237092 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36cd4aea-0d4c-4dab-b57e-1065b6e2183d" 
containerName="dnsmasq-dns" Dec 11 15:21:25 crc kubenswrapper[5050]: I1211 15:21:25.237106 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="36cd4aea-0d4c-4dab-b57e-1065b6e2183d" containerName="dnsmasq-dns" Dec 11 15:21:25 crc kubenswrapper[5050]: E1211 15:21:25.237135 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36cd4aea-0d4c-4dab-b57e-1065b6e2183d" containerName="init" Dec 11 15:21:25 crc kubenswrapper[5050]: I1211 15:21:25.237140 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="36cd4aea-0d4c-4dab-b57e-1065b6e2183d" containerName="init" Dec 11 15:21:25 crc kubenswrapper[5050]: I1211 15:21:25.237303 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="36cd4aea-0d4c-4dab-b57e-1065b6e2183d" containerName="dnsmasq-dns" Dec 11 15:21:25 crc kubenswrapper[5050]: I1211 15:21:25.237899 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-gtv8c" Dec 11 15:21:25 crc kubenswrapper[5050]: I1211 15:21:25.251440 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-gtv8c"] Dec 11 15:21:25 crc kubenswrapper[5050]: I1211 15:21:25.294124 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h88d9\" (UniqueName: \"kubernetes.io/projected/a80b79e5-138b-4e71-ab5e-aa8805cce0b5-kube-api-access-h88d9\") pod \"nova-api-db-create-gtv8c\" (UID: \"a80b79e5-138b-4e71-ab5e-aa8805cce0b5\") " pod="openstack/nova-api-db-create-gtv8c" Dec 11 15:21:25 crc kubenswrapper[5050]: I1211 15:21:25.294187 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a80b79e5-138b-4e71-ab5e-aa8805cce0b5-operator-scripts\") pod \"nova-api-db-create-gtv8c\" (UID: \"a80b79e5-138b-4e71-ab5e-aa8805cce0b5\") " pod="openstack/nova-api-db-create-gtv8c" Dec 11 15:21:25 crc kubenswrapper[5050]: I1211 15:21:25.331383 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-m7c2g"] Dec 11 15:21:25 crc kubenswrapper[5050]: I1211 15:21:25.333118 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-m7c2g" Dec 11 15:21:25 crc kubenswrapper[5050]: I1211 15:21:25.353038 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-m7c2g"] Dec 11 15:21:25 crc kubenswrapper[5050]: I1211 15:21:25.395858 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zx7p5\" (UniqueName: \"kubernetes.io/projected/465e991b-2d30-4579-aa62-8fc4ab7afe21-kube-api-access-zx7p5\") pod \"nova-cell0-db-create-m7c2g\" (UID: \"465e991b-2d30-4579-aa62-8fc4ab7afe21\") " pod="openstack/nova-cell0-db-create-m7c2g" Dec 11 15:21:25 crc kubenswrapper[5050]: I1211 15:21:25.395955 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/465e991b-2d30-4579-aa62-8fc4ab7afe21-operator-scripts\") pod \"nova-cell0-db-create-m7c2g\" (UID: \"465e991b-2d30-4579-aa62-8fc4ab7afe21\") " pod="openstack/nova-cell0-db-create-m7c2g" Dec 11 15:21:25 crc kubenswrapper[5050]: I1211 15:21:25.396069 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h88d9\" (UniqueName: \"kubernetes.io/projected/a80b79e5-138b-4e71-ab5e-aa8805cce0b5-kube-api-access-h88d9\") pod \"nova-api-db-create-gtv8c\" (UID: \"a80b79e5-138b-4e71-ab5e-aa8805cce0b5\") " pod="openstack/nova-api-db-create-gtv8c" Dec 11 15:21:25 crc kubenswrapper[5050]: I1211 15:21:25.396117 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a80b79e5-138b-4e71-ab5e-aa8805cce0b5-operator-scripts\") pod \"nova-api-db-create-gtv8c\" (UID: \"a80b79e5-138b-4e71-ab5e-aa8805cce0b5\") " pod="openstack/nova-api-db-create-gtv8c" Dec 11 15:21:25 crc kubenswrapper[5050]: I1211 15:21:25.397026 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a80b79e5-138b-4e71-ab5e-aa8805cce0b5-operator-scripts\") pod \"nova-api-db-create-gtv8c\" (UID: \"a80b79e5-138b-4e71-ab5e-aa8805cce0b5\") " pod="openstack/nova-api-db-create-gtv8c" Dec 11 15:21:25 crc kubenswrapper[5050]: I1211 15:21:25.415278 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h88d9\" (UniqueName: \"kubernetes.io/projected/a80b79e5-138b-4e71-ab5e-aa8805cce0b5-kube-api-access-h88d9\") pod \"nova-api-db-create-gtv8c\" (UID: \"a80b79e5-138b-4e71-ab5e-aa8805cce0b5\") " pod="openstack/nova-api-db-create-gtv8c" Dec 11 15:21:25 crc kubenswrapper[5050]: I1211 15:21:25.438602 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-x9fv9"] Dec 11 15:21:25 crc kubenswrapper[5050]: I1211 15:21:25.445693 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-x9fv9" Dec 11 15:21:25 crc kubenswrapper[5050]: I1211 15:21:25.458859 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-x9fv9"] Dec 11 15:21:25 crc kubenswrapper[5050]: I1211 15:21:25.475983 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-3433-account-create-update-wknwz"] Dec 11 15:21:25 crc kubenswrapper[5050]: I1211 15:21:25.477436 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-3433-account-create-update-wknwz" Dec 11 15:21:25 crc kubenswrapper[5050]: I1211 15:21:25.480171 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Dec 11 15:21:25 crc kubenswrapper[5050]: I1211 15:21:25.491636 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-3433-account-create-update-wknwz"] Dec 11 15:21:25 crc kubenswrapper[5050]: I1211 15:21:25.497687 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hklcg\" (UniqueName: \"kubernetes.io/projected/ae46e8c2-edc3-46dc-a160-75c77cb2bafb-kube-api-access-hklcg\") pod \"nova-cell1-db-create-x9fv9\" (UID: \"ae46e8c2-edc3-46dc-a160-75c77cb2bafb\") " pod="openstack/nova-cell1-db-create-x9fv9" Dec 11 15:21:25 crc kubenswrapper[5050]: I1211 15:21:25.497783 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zx7p5\" (UniqueName: \"kubernetes.io/projected/465e991b-2d30-4579-aa62-8fc4ab7afe21-kube-api-access-zx7p5\") pod \"nova-cell0-db-create-m7c2g\" (UID: \"465e991b-2d30-4579-aa62-8fc4ab7afe21\") " pod="openstack/nova-cell0-db-create-m7c2g" Dec 11 15:21:25 crc kubenswrapper[5050]: I1211 15:21:25.497845 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/465e991b-2d30-4579-aa62-8fc4ab7afe21-operator-scripts\") pod \"nova-cell0-db-create-m7c2g\" (UID: \"465e991b-2d30-4579-aa62-8fc4ab7afe21\") " pod="openstack/nova-cell0-db-create-m7c2g" Dec 11 15:21:25 crc kubenswrapper[5050]: I1211 15:21:25.497898 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ae46e8c2-edc3-46dc-a160-75c77cb2bafb-operator-scripts\") pod \"nova-cell1-db-create-x9fv9\" (UID: \"ae46e8c2-edc3-46dc-a160-75c77cb2bafb\") " pod="openstack/nova-cell1-db-create-x9fv9" Dec 11 15:21:25 crc kubenswrapper[5050]: I1211 15:21:25.498648 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/465e991b-2d30-4579-aa62-8fc4ab7afe21-operator-scripts\") pod \"nova-cell0-db-create-m7c2g\" (UID: \"465e991b-2d30-4579-aa62-8fc4ab7afe21\") " pod="openstack/nova-cell0-db-create-m7c2g" Dec 11 15:21:25 crc kubenswrapper[5050]: I1211 15:21:25.520895 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zx7p5\" (UniqueName: \"kubernetes.io/projected/465e991b-2d30-4579-aa62-8fc4ab7afe21-kube-api-access-zx7p5\") pod \"nova-cell0-db-create-m7c2g\" (UID: \"465e991b-2d30-4579-aa62-8fc4ab7afe21\") " pod="openstack/nova-cell0-db-create-m7c2g" Dec 11 15:21:25 crc kubenswrapper[5050]: I1211 15:21:25.564148 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-gtv8c" Dec 11 15:21:25 crc kubenswrapper[5050]: I1211 15:21:25.598900 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hklcg\" (UniqueName: \"kubernetes.io/projected/ae46e8c2-edc3-46dc-a160-75c77cb2bafb-kube-api-access-hklcg\") pod \"nova-cell1-db-create-x9fv9\" (UID: \"ae46e8c2-edc3-46dc-a160-75c77cb2bafb\") " pod="openstack/nova-cell1-db-create-x9fv9" Dec 11 15:21:25 crc kubenswrapper[5050]: I1211 15:21:25.599125 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxhz4\" (UniqueName: \"kubernetes.io/projected/d35e2a89-ca99-46a0-86ce-83d7eac9733e-kube-api-access-jxhz4\") pod \"nova-api-3433-account-create-update-wknwz\" (UID: \"d35e2a89-ca99-46a0-86ce-83d7eac9733e\") " pod="openstack/nova-api-3433-account-create-update-wknwz" Dec 11 15:21:25 crc kubenswrapper[5050]: I1211 15:21:25.599149 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ae46e8c2-edc3-46dc-a160-75c77cb2bafb-operator-scripts\") pod \"nova-cell1-db-create-x9fv9\" (UID: \"ae46e8c2-edc3-46dc-a160-75c77cb2bafb\") " pod="openstack/nova-cell1-db-create-x9fv9" Dec 11 15:21:25 crc kubenswrapper[5050]: I1211 15:21:25.599253 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d35e2a89-ca99-46a0-86ce-83d7eac9733e-operator-scripts\") pod \"nova-api-3433-account-create-update-wknwz\" (UID: \"d35e2a89-ca99-46a0-86ce-83d7eac9733e\") " pod="openstack/nova-api-3433-account-create-update-wknwz" Dec 11 15:21:25 crc kubenswrapper[5050]: I1211 15:21:25.599796 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ae46e8c2-edc3-46dc-a160-75c77cb2bafb-operator-scripts\") pod \"nova-cell1-db-create-x9fv9\" (UID: \"ae46e8c2-edc3-46dc-a160-75c77cb2bafb\") " pod="openstack/nova-cell1-db-create-x9fv9" Dec 11 15:21:25 crc kubenswrapper[5050]: I1211 15:21:25.622683 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hklcg\" (UniqueName: \"kubernetes.io/projected/ae46e8c2-edc3-46dc-a160-75c77cb2bafb-kube-api-access-hklcg\") pod \"nova-cell1-db-create-x9fv9\" (UID: \"ae46e8c2-edc3-46dc-a160-75c77cb2bafb\") " pod="openstack/nova-cell1-db-create-x9fv9" Dec 11 15:21:25 crc kubenswrapper[5050]: I1211 15:21:25.655525 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-9fc1-account-create-update-prg9d"] Dec 11 15:21:25 crc kubenswrapper[5050]: I1211 15:21:25.655599 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-m7c2g" Dec 11 15:21:25 crc kubenswrapper[5050]: I1211 15:21:25.657070 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-9fc1-account-create-update-prg9d" Dec 11 15:21:25 crc kubenswrapper[5050]: I1211 15:21:25.660644 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Dec 11 15:21:25 crc kubenswrapper[5050]: I1211 15:21:25.677183 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-9fc1-account-create-update-prg9d"] Dec 11 15:21:25 crc kubenswrapper[5050]: I1211 15:21:25.713776 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jxhz4\" (UniqueName: \"kubernetes.io/projected/d35e2a89-ca99-46a0-86ce-83d7eac9733e-kube-api-access-jxhz4\") pod \"nova-api-3433-account-create-update-wknwz\" (UID: \"d35e2a89-ca99-46a0-86ce-83d7eac9733e\") " pod="openstack/nova-api-3433-account-create-update-wknwz" Dec 11 15:21:25 crc kubenswrapper[5050]: I1211 15:21:25.713849 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d35e2a89-ca99-46a0-86ce-83d7eac9733e-operator-scripts\") pod \"nova-api-3433-account-create-update-wknwz\" (UID: \"d35e2a89-ca99-46a0-86ce-83d7eac9733e\") " pod="openstack/nova-api-3433-account-create-update-wknwz" Dec 11 15:21:25 crc kubenswrapper[5050]: I1211 15:21:25.715222 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d35e2a89-ca99-46a0-86ce-83d7eac9733e-operator-scripts\") pod \"nova-api-3433-account-create-update-wknwz\" (UID: \"d35e2a89-ca99-46a0-86ce-83d7eac9733e\") " pod="openstack/nova-api-3433-account-create-update-wknwz" Dec 11 15:21:25 crc kubenswrapper[5050]: I1211 15:21:25.737277 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jxhz4\" (UniqueName: \"kubernetes.io/projected/d35e2a89-ca99-46a0-86ce-83d7eac9733e-kube-api-access-jxhz4\") pod \"nova-api-3433-account-create-update-wknwz\" (UID: \"d35e2a89-ca99-46a0-86ce-83d7eac9733e\") " pod="openstack/nova-api-3433-account-create-update-wknwz" Dec 11 15:21:25 crc kubenswrapper[5050]: I1211 15:21:25.794958 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-x9fv9" Dec 11 15:21:25 crc kubenswrapper[5050]: I1211 15:21:25.816299 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/04dc7ae1-2485-4ffe-a853-4ef671794e68-operator-scripts\") pod \"nova-cell0-9fc1-account-create-update-prg9d\" (UID: \"04dc7ae1-2485-4ffe-a853-4ef671794e68\") " pod="openstack/nova-cell0-9fc1-account-create-update-prg9d" Dec 11 15:21:25 crc kubenswrapper[5050]: I1211 15:21:25.816388 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jphxk\" (UniqueName: \"kubernetes.io/projected/04dc7ae1-2485-4ffe-a853-4ef671794e68-kube-api-access-jphxk\") pod \"nova-cell0-9fc1-account-create-update-prg9d\" (UID: \"04dc7ae1-2485-4ffe-a853-4ef671794e68\") " pod="openstack/nova-cell0-9fc1-account-create-update-prg9d" Dec 11 15:21:25 crc kubenswrapper[5050]: I1211 15:21:25.822731 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-3433-account-create-update-wknwz" Dec 11 15:21:25 crc kubenswrapper[5050]: I1211 15:21:25.843639 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-45df-account-create-update-ckxx8"] Dec 11 15:21:25 crc kubenswrapper[5050]: I1211 15:21:25.845234 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-45df-account-create-update-ckxx8" Dec 11 15:21:25 crc kubenswrapper[5050]: I1211 15:21:25.847691 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Dec 11 15:21:25 crc kubenswrapper[5050]: I1211 15:21:25.851564 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-45df-account-create-update-ckxx8"] Dec 11 15:21:25 crc kubenswrapper[5050]: I1211 15:21:25.917521 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5a630a5b-c349-4e2f-876a-0b82485a8221-operator-scripts\") pod \"nova-cell1-45df-account-create-update-ckxx8\" (UID: \"5a630a5b-c349-4e2f-876a-0b82485a8221\") " pod="openstack/nova-cell1-45df-account-create-update-ckxx8" Dec 11 15:21:25 crc kubenswrapper[5050]: I1211 15:21:25.917612 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrkdb\" (UniqueName: \"kubernetes.io/projected/5a630a5b-c349-4e2f-876a-0b82485a8221-kube-api-access-xrkdb\") pod \"nova-cell1-45df-account-create-update-ckxx8\" (UID: \"5a630a5b-c349-4e2f-876a-0b82485a8221\") " pod="openstack/nova-cell1-45df-account-create-update-ckxx8" Dec 11 15:21:25 crc kubenswrapper[5050]: I1211 15:21:25.917643 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jphxk\" (UniqueName: \"kubernetes.io/projected/04dc7ae1-2485-4ffe-a853-4ef671794e68-kube-api-access-jphxk\") pod \"nova-cell0-9fc1-account-create-update-prg9d\" (UID: \"04dc7ae1-2485-4ffe-a853-4ef671794e68\") " pod="openstack/nova-cell0-9fc1-account-create-update-prg9d" Dec 11 15:21:25 crc kubenswrapper[5050]: I1211 15:21:25.917734 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/04dc7ae1-2485-4ffe-a853-4ef671794e68-operator-scripts\") pod \"nova-cell0-9fc1-account-create-update-prg9d\" (UID: \"04dc7ae1-2485-4ffe-a853-4ef671794e68\") " pod="openstack/nova-cell0-9fc1-account-create-update-prg9d" Dec 11 15:21:25 crc kubenswrapper[5050]: I1211 15:21:25.919086 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/04dc7ae1-2485-4ffe-a853-4ef671794e68-operator-scripts\") pod \"nova-cell0-9fc1-account-create-update-prg9d\" (UID: \"04dc7ae1-2485-4ffe-a853-4ef671794e68\") " pod="openstack/nova-cell0-9fc1-account-create-update-prg9d" Dec 11 15:21:25 crc kubenswrapper[5050]: I1211 15:21:25.936939 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jphxk\" (UniqueName: \"kubernetes.io/projected/04dc7ae1-2485-4ffe-a853-4ef671794e68-kube-api-access-jphxk\") pod \"nova-cell0-9fc1-account-create-update-prg9d\" (UID: \"04dc7ae1-2485-4ffe-a853-4ef671794e68\") " pod="openstack/nova-cell0-9fc1-account-create-update-prg9d" Dec 11 15:21:26 crc kubenswrapper[5050]: I1211 15:21:26.019217 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5a630a5b-c349-4e2f-876a-0b82485a8221-operator-scripts\") pod \"nova-cell1-45df-account-create-update-ckxx8\" (UID: \"5a630a5b-c349-4e2f-876a-0b82485a8221\") " pod="openstack/nova-cell1-45df-account-create-update-ckxx8" Dec 11 15:21:26 crc kubenswrapper[5050]: I1211 15:21:26.019281 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xrkdb\" (UniqueName: \"kubernetes.io/projected/5a630a5b-c349-4e2f-876a-0b82485a8221-kube-api-access-xrkdb\") pod \"nova-cell1-45df-account-create-update-ckxx8\" (UID: \"5a630a5b-c349-4e2f-876a-0b82485a8221\") " pod="openstack/nova-cell1-45df-account-create-update-ckxx8" Dec 11 15:21:26 crc kubenswrapper[5050]: I1211 15:21:26.020183 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5a630a5b-c349-4e2f-876a-0b82485a8221-operator-scripts\") pod \"nova-cell1-45df-account-create-update-ckxx8\" (UID: \"5a630a5b-c349-4e2f-876a-0b82485a8221\") " pod="openstack/nova-cell1-45df-account-create-update-ckxx8" Dec 11 15:21:26 crc kubenswrapper[5050]: I1211 15:21:26.035948 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xrkdb\" (UniqueName: \"kubernetes.io/projected/5a630a5b-c349-4e2f-876a-0b82485a8221-kube-api-access-xrkdb\") pod \"nova-cell1-45df-account-create-update-ckxx8\" (UID: \"5a630a5b-c349-4e2f-876a-0b82485a8221\") " pod="openstack/nova-cell1-45df-account-create-update-ckxx8" Dec 11 15:21:26 crc kubenswrapper[5050]: I1211 15:21:26.041577 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-9fc1-account-create-update-prg9d" Dec 11 15:21:26 crc kubenswrapper[5050]: I1211 15:21:26.082099 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-gtv8c"] Dec 11 15:21:26 crc kubenswrapper[5050]: I1211 15:21:26.172410 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-45df-account-create-update-ckxx8" Dec 11 15:21:26 crc kubenswrapper[5050]: I1211 15:21:26.183068 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-x9fv9"] Dec 11 15:21:26 crc kubenswrapper[5050]: I1211 15:21:26.203864 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-m7c2g"] Dec 11 15:21:26 crc kubenswrapper[5050]: I1211 15:21:26.261018 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-3433-account-create-update-wknwz"] Dec 11 15:21:26 crc kubenswrapper[5050]: W1211 15:21:26.324041 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd35e2a89_ca99_46a0_86ce_83d7eac9733e.slice/crio-1a581d6d1f84876519a0950858ffc6b372b8edafa03ab517f443e441615dad56 WatchSource:0}: Error finding container 1a581d6d1f84876519a0950858ffc6b372b8edafa03ab517f443e441615dad56: Status 404 returned error can't find the container with id 1a581d6d1f84876519a0950858ffc6b372b8edafa03ab517f443e441615dad56 Dec 11 15:21:26 crc kubenswrapper[5050]: I1211 15:21:26.465668 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-gtv8c" event={"ID":"a80b79e5-138b-4e71-ab5e-aa8805cce0b5","Type":"ContainerStarted","Data":"e09a0786d1e30296342f9183180cf055bb58bbf45f4ab7cc1d293b2b154af638"} Dec 11 15:21:26 crc kubenswrapper[5050]: I1211 15:21:26.468314 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-x9fv9" event={"ID":"ae46e8c2-edc3-46dc-a160-75c77cb2bafb","Type":"ContainerStarted","Data":"2a8ff004611a96d055d7a592971ea4b020fa1edfc09982d4ca83073bde0f45c7"} Dec 11 15:21:26 crc kubenswrapper[5050]: I1211 15:21:26.470667 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-3433-account-create-update-wknwz" event={"ID":"d35e2a89-ca99-46a0-86ce-83d7eac9733e","Type":"ContainerStarted","Data":"1a581d6d1f84876519a0950858ffc6b372b8edafa03ab517f443e441615dad56"} Dec 11 15:21:26 crc kubenswrapper[5050]: I1211 15:21:26.471841 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-m7c2g" event={"ID":"465e991b-2d30-4579-aa62-8fc4ab7afe21","Type":"ContainerStarted","Data":"2219fca8b2257ca3b4e4898367d3089541fe50a1fea0801dfe707a9db277e43f"} Dec 11 15:21:26 crc kubenswrapper[5050]: I1211 15:21:26.547704 5050 scope.go:117] "RemoveContainer" containerID="5e751c2ff817a404aae81889734d5a9bd57af497d8e885add288368bb5b0040e" Dec 11 15:21:26 crc kubenswrapper[5050]: E1211 15:21:26.547938 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 15:21:27 crc kubenswrapper[5050]: I1211 15:21:26.599775 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-9fc1-account-create-update-prg9d"] Dec 11 15:21:27 crc kubenswrapper[5050]: I1211 15:21:26.814374 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-45df-account-create-update-ckxx8"] Dec 11 15:21:27 crc kubenswrapper[5050]: I1211 15:21:27.481944 5050 generic.go:334] "Generic (PLEG): container finished" 
podID="a80b79e5-138b-4e71-ab5e-aa8805cce0b5" containerID="f7f1b6337ce8ed39b35e0217d22c76a014c4bda1bad0bd9c450cb00397f0876e" exitCode=0 Dec 11 15:21:27 crc kubenswrapper[5050]: I1211 15:21:27.482364 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-gtv8c" event={"ID":"a80b79e5-138b-4e71-ab5e-aa8805cce0b5","Type":"ContainerDied","Data":"f7f1b6337ce8ed39b35e0217d22c76a014c4bda1bad0bd9c450cb00397f0876e"} Dec 11 15:21:27 crc kubenswrapper[5050]: I1211 15:21:27.484083 5050 generic.go:334] "Generic (PLEG): container finished" podID="ae46e8c2-edc3-46dc-a160-75c77cb2bafb" containerID="c02660abc3fc5fb076ff5ad5bf2e511e26185a715de3f3abc635e5d8f4dcacdc" exitCode=0 Dec 11 15:21:27 crc kubenswrapper[5050]: I1211 15:21:27.484182 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-x9fv9" event={"ID":"ae46e8c2-edc3-46dc-a160-75c77cb2bafb","Type":"ContainerDied","Data":"c02660abc3fc5fb076ff5ad5bf2e511e26185a715de3f3abc635e5d8f4dcacdc"} Dec 11 15:21:27 crc kubenswrapper[5050]: I1211 15:21:27.486373 5050 generic.go:334] "Generic (PLEG): container finished" podID="d35e2a89-ca99-46a0-86ce-83d7eac9733e" containerID="c1e7a916f86387d6bb6a96e63dcb5d510f10fc66d19aee95197e783fdfbcaf87" exitCode=0 Dec 11 15:21:27 crc kubenswrapper[5050]: I1211 15:21:27.486420 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-3433-account-create-update-wknwz" event={"ID":"d35e2a89-ca99-46a0-86ce-83d7eac9733e","Type":"ContainerDied","Data":"c1e7a916f86387d6bb6a96e63dcb5d510f10fc66d19aee95197e783fdfbcaf87"} Dec 11 15:21:27 crc kubenswrapper[5050]: I1211 15:21:27.488060 5050 generic.go:334] "Generic (PLEG): container finished" podID="04dc7ae1-2485-4ffe-a853-4ef671794e68" containerID="1adc4ab25db7fa2cbe239da32fd3d06314408717077a26f512cd6812c2d70dc4" exitCode=0 Dec 11 15:21:27 crc kubenswrapper[5050]: I1211 15:21:27.488230 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-9fc1-account-create-update-prg9d" event={"ID":"04dc7ae1-2485-4ffe-a853-4ef671794e68","Type":"ContainerDied","Data":"1adc4ab25db7fa2cbe239da32fd3d06314408717077a26f512cd6812c2d70dc4"} Dec 11 15:21:27 crc kubenswrapper[5050]: I1211 15:21:27.488265 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-9fc1-account-create-update-prg9d" event={"ID":"04dc7ae1-2485-4ffe-a853-4ef671794e68","Type":"ContainerStarted","Data":"99bc5729b50b7100370ffce08f444fb5cd4f5c6a1d91dd3a02b586cce97dd12a"} Dec 11 15:21:27 crc kubenswrapper[5050]: I1211 15:21:27.490735 5050 generic.go:334] "Generic (PLEG): container finished" podID="465e991b-2d30-4579-aa62-8fc4ab7afe21" containerID="069c9a16de9de821816198ef02460338c610d667d1330d2725bf537d9b806e8b" exitCode=0 Dec 11 15:21:27 crc kubenswrapper[5050]: I1211 15:21:27.490927 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-m7c2g" event={"ID":"465e991b-2d30-4579-aa62-8fc4ab7afe21","Type":"ContainerDied","Data":"069c9a16de9de821816198ef02460338c610d667d1330d2725bf537d9b806e8b"} Dec 11 15:21:27 crc kubenswrapper[5050]: I1211 15:21:27.492626 5050 generic.go:334] "Generic (PLEG): container finished" podID="5a630a5b-c349-4e2f-876a-0b82485a8221" containerID="b42d6621c7a82a0d50cf771a1c285c9d99fc393f239c5167d830d58ee4694b91" exitCode=0 Dec 11 15:21:27 crc kubenswrapper[5050]: I1211 15:21:27.492650 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-45df-account-create-update-ckxx8" 
event={"ID":"5a630a5b-c349-4e2f-876a-0b82485a8221","Type":"ContainerDied","Data":"b42d6621c7a82a0d50cf771a1c285c9d99fc393f239c5167d830d58ee4694b91"} Dec 11 15:21:27 crc kubenswrapper[5050]: I1211 15:21:27.492663 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-45df-account-create-update-ckxx8" event={"ID":"5a630a5b-c349-4e2f-876a-0b82485a8221","Type":"ContainerStarted","Data":"a928b2c8a4583771b4e46223adef9e73c3dfe4060ad2f22b56dcc4c23a60dc1e"} Dec 11 15:21:28 crc kubenswrapper[5050]: I1211 15:21:28.895567 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-45df-account-create-update-ckxx8" Dec 11 15:21:29 crc kubenswrapper[5050]: I1211 15:21:29.004423 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xrkdb\" (UniqueName: \"kubernetes.io/projected/5a630a5b-c349-4e2f-876a-0b82485a8221-kube-api-access-xrkdb\") pod \"5a630a5b-c349-4e2f-876a-0b82485a8221\" (UID: \"5a630a5b-c349-4e2f-876a-0b82485a8221\") " Dec 11 15:21:29 crc kubenswrapper[5050]: I1211 15:21:29.004702 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5a630a5b-c349-4e2f-876a-0b82485a8221-operator-scripts\") pod \"5a630a5b-c349-4e2f-876a-0b82485a8221\" (UID: \"5a630a5b-c349-4e2f-876a-0b82485a8221\") " Dec 11 15:21:29 crc kubenswrapper[5050]: I1211 15:21:29.006057 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a630a5b-c349-4e2f-876a-0b82485a8221-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5a630a5b-c349-4e2f-876a-0b82485a8221" (UID: "5a630a5b-c349-4e2f-876a-0b82485a8221"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 15:21:29 crc kubenswrapper[5050]: I1211 15:21:29.010410 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a630a5b-c349-4e2f-876a-0b82485a8221-kube-api-access-xrkdb" (OuterVolumeSpecName: "kube-api-access-xrkdb") pod "5a630a5b-c349-4e2f-876a-0b82485a8221" (UID: "5a630a5b-c349-4e2f-876a-0b82485a8221"). InnerVolumeSpecName "kube-api-access-xrkdb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 15:21:29 crc kubenswrapper[5050]: I1211 15:21:29.054072 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-3433-account-create-update-wknwz" Dec 11 15:21:29 crc kubenswrapper[5050]: I1211 15:21:29.058827 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-gtv8c" Dec 11 15:21:29 crc kubenswrapper[5050]: I1211 15:21:29.065989 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-x9fv9" Dec 11 15:21:29 crc kubenswrapper[5050]: I1211 15:21:29.076456 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-9fc1-account-create-update-prg9d" Dec 11 15:21:29 crc kubenswrapper[5050]: I1211 15:21:29.088282 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-m7c2g" Dec 11 15:21:29 crc kubenswrapper[5050]: I1211 15:21:29.114289 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xrkdb\" (UniqueName: \"kubernetes.io/projected/5a630a5b-c349-4e2f-876a-0b82485a8221-kube-api-access-xrkdb\") on node \"crc\" DevicePath \"\"" Dec 11 15:21:29 crc kubenswrapper[5050]: I1211 15:21:29.114331 5050 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5a630a5b-c349-4e2f-876a-0b82485a8221-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 15:21:29 crc kubenswrapper[5050]: I1211 15:21:29.215546 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a80b79e5-138b-4e71-ab5e-aa8805cce0b5-operator-scripts\") pod \"a80b79e5-138b-4e71-ab5e-aa8805cce0b5\" (UID: \"a80b79e5-138b-4e71-ab5e-aa8805cce0b5\") " Dec 11 15:21:29 crc kubenswrapper[5050]: I1211 15:21:29.215605 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ae46e8c2-edc3-46dc-a160-75c77cb2bafb-operator-scripts\") pod \"ae46e8c2-edc3-46dc-a160-75c77cb2bafb\" (UID: \"ae46e8c2-edc3-46dc-a160-75c77cb2bafb\") " Dec 11 15:21:29 crc kubenswrapper[5050]: I1211 15:21:29.215664 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h88d9\" (UniqueName: \"kubernetes.io/projected/a80b79e5-138b-4e71-ab5e-aa8805cce0b5-kube-api-access-h88d9\") pod \"a80b79e5-138b-4e71-ab5e-aa8805cce0b5\" (UID: \"a80b79e5-138b-4e71-ab5e-aa8805cce0b5\") " Dec 11 15:21:29 crc kubenswrapper[5050]: I1211 15:21:29.215714 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zx7p5\" (UniqueName: \"kubernetes.io/projected/465e991b-2d30-4579-aa62-8fc4ab7afe21-kube-api-access-zx7p5\") pod \"465e991b-2d30-4579-aa62-8fc4ab7afe21\" (UID: \"465e991b-2d30-4579-aa62-8fc4ab7afe21\") " Dec 11 15:21:29 crc kubenswrapper[5050]: I1211 15:21:29.215744 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jxhz4\" (UniqueName: \"kubernetes.io/projected/d35e2a89-ca99-46a0-86ce-83d7eac9733e-kube-api-access-jxhz4\") pod \"d35e2a89-ca99-46a0-86ce-83d7eac9733e\" (UID: \"d35e2a89-ca99-46a0-86ce-83d7eac9733e\") " Dec 11 15:21:29 crc kubenswrapper[5050]: I1211 15:21:29.215783 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hklcg\" (UniqueName: \"kubernetes.io/projected/ae46e8c2-edc3-46dc-a160-75c77cb2bafb-kube-api-access-hklcg\") pod \"ae46e8c2-edc3-46dc-a160-75c77cb2bafb\" (UID: \"ae46e8c2-edc3-46dc-a160-75c77cb2bafb\") " Dec 11 15:21:29 crc kubenswrapper[5050]: I1211 15:21:29.215810 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d35e2a89-ca99-46a0-86ce-83d7eac9733e-operator-scripts\") pod \"d35e2a89-ca99-46a0-86ce-83d7eac9733e\" (UID: \"d35e2a89-ca99-46a0-86ce-83d7eac9733e\") " Dec 11 15:21:29 crc kubenswrapper[5050]: I1211 15:21:29.215848 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/04dc7ae1-2485-4ffe-a853-4ef671794e68-operator-scripts\") pod \"04dc7ae1-2485-4ffe-a853-4ef671794e68\" (UID: \"04dc7ae1-2485-4ffe-a853-4ef671794e68\") " Dec 11 15:21:29 crc 
kubenswrapper[5050]: I1211 15:21:29.215902 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/465e991b-2d30-4579-aa62-8fc4ab7afe21-operator-scripts\") pod \"465e991b-2d30-4579-aa62-8fc4ab7afe21\" (UID: \"465e991b-2d30-4579-aa62-8fc4ab7afe21\") " Dec 11 15:21:29 crc kubenswrapper[5050]: I1211 15:21:29.215948 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jphxk\" (UniqueName: \"kubernetes.io/projected/04dc7ae1-2485-4ffe-a853-4ef671794e68-kube-api-access-jphxk\") pod \"04dc7ae1-2485-4ffe-a853-4ef671794e68\" (UID: \"04dc7ae1-2485-4ffe-a853-4ef671794e68\") " Dec 11 15:21:29 crc kubenswrapper[5050]: I1211 15:21:29.216163 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a80b79e5-138b-4e71-ab5e-aa8805cce0b5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a80b79e5-138b-4e71-ab5e-aa8805cce0b5" (UID: "a80b79e5-138b-4e71-ab5e-aa8805cce0b5"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 15:21:29 crc kubenswrapper[5050]: I1211 15:21:29.216230 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae46e8c2-edc3-46dc-a160-75c77cb2bafb-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ae46e8c2-edc3-46dc-a160-75c77cb2bafb" (UID: "ae46e8c2-edc3-46dc-a160-75c77cb2bafb"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 15:21:29 crc kubenswrapper[5050]: I1211 15:21:29.216571 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d35e2a89-ca99-46a0-86ce-83d7eac9733e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d35e2a89-ca99-46a0-86ce-83d7eac9733e" (UID: "d35e2a89-ca99-46a0-86ce-83d7eac9733e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 15:21:29 crc kubenswrapper[5050]: I1211 15:21:29.216808 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/04dc7ae1-2485-4ffe-a853-4ef671794e68-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "04dc7ae1-2485-4ffe-a853-4ef671794e68" (UID: "04dc7ae1-2485-4ffe-a853-4ef671794e68"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 15:21:29 crc kubenswrapper[5050]: I1211 15:21:29.216897 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/465e991b-2d30-4579-aa62-8fc4ab7afe21-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "465e991b-2d30-4579-aa62-8fc4ab7afe21" (UID: "465e991b-2d30-4579-aa62-8fc4ab7afe21"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 15:21:29 crc kubenswrapper[5050]: I1211 15:21:29.217310 5050 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a80b79e5-138b-4e71-ab5e-aa8805cce0b5-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 15:21:29 crc kubenswrapper[5050]: I1211 15:21:29.217335 5050 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ae46e8c2-edc3-46dc-a160-75c77cb2bafb-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 15:21:29 crc kubenswrapper[5050]: I1211 15:21:29.217346 5050 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d35e2a89-ca99-46a0-86ce-83d7eac9733e-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 15:21:29 crc kubenswrapper[5050]: I1211 15:21:29.217356 5050 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/04dc7ae1-2485-4ffe-a853-4ef671794e68-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 15:21:29 crc kubenswrapper[5050]: I1211 15:21:29.217367 5050 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/465e991b-2d30-4579-aa62-8fc4ab7afe21-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 15:21:29 crc kubenswrapper[5050]: I1211 15:21:29.220210 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d35e2a89-ca99-46a0-86ce-83d7eac9733e-kube-api-access-jxhz4" (OuterVolumeSpecName: "kube-api-access-jxhz4") pod "d35e2a89-ca99-46a0-86ce-83d7eac9733e" (UID: "d35e2a89-ca99-46a0-86ce-83d7eac9733e"). InnerVolumeSpecName "kube-api-access-jxhz4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 15:21:29 crc kubenswrapper[5050]: I1211 15:21:29.220268 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae46e8c2-edc3-46dc-a160-75c77cb2bafb-kube-api-access-hklcg" (OuterVolumeSpecName: "kube-api-access-hklcg") pod "ae46e8c2-edc3-46dc-a160-75c77cb2bafb" (UID: "ae46e8c2-edc3-46dc-a160-75c77cb2bafb"). InnerVolumeSpecName "kube-api-access-hklcg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 15:21:29 crc kubenswrapper[5050]: I1211 15:21:29.220333 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/465e991b-2d30-4579-aa62-8fc4ab7afe21-kube-api-access-zx7p5" (OuterVolumeSpecName: "kube-api-access-zx7p5") pod "465e991b-2d30-4579-aa62-8fc4ab7afe21" (UID: "465e991b-2d30-4579-aa62-8fc4ab7afe21"). InnerVolumeSpecName "kube-api-access-zx7p5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 15:21:29 crc kubenswrapper[5050]: I1211 15:21:29.220387 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a80b79e5-138b-4e71-ab5e-aa8805cce0b5-kube-api-access-h88d9" (OuterVolumeSpecName: "kube-api-access-h88d9") pod "a80b79e5-138b-4e71-ab5e-aa8805cce0b5" (UID: "a80b79e5-138b-4e71-ab5e-aa8805cce0b5"). InnerVolumeSpecName "kube-api-access-h88d9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 15:21:29 crc kubenswrapper[5050]: I1211 15:21:29.220749 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04dc7ae1-2485-4ffe-a853-4ef671794e68-kube-api-access-jphxk" (OuterVolumeSpecName: "kube-api-access-jphxk") pod "04dc7ae1-2485-4ffe-a853-4ef671794e68" (UID: "04dc7ae1-2485-4ffe-a853-4ef671794e68"). InnerVolumeSpecName "kube-api-access-jphxk". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 15:21:29 crc kubenswrapper[5050]: I1211 15:21:29.318928 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zx7p5\" (UniqueName: \"kubernetes.io/projected/465e991b-2d30-4579-aa62-8fc4ab7afe21-kube-api-access-zx7p5\") on node \"crc\" DevicePath \"\"" Dec 11 15:21:29 crc kubenswrapper[5050]: I1211 15:21:29.318985 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jxhz4\" (UniqueName: \"kubernetes.io/projected/d35e2a89-ca99-46a0-86ce-83d7eac9733e-kube-api-access-jxhz4\") on node \"crc\" DevicePath \"\"" Dec 11 15:21:29 crc kubenswrapper[5050]: I1211 15:21:29.319029 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hklcg\" (UniqueName: \"kubernetes.io/projected/ae46e8c2-edc3-46dc-a160-75c77cb2bafb-kube-api-access-hklcg\") on node \"crc\" DevicePath \"\"" Dec 11 15:21:29 crc kubenswrapper[5050]: I1211 15:21:29.319048 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jphxk\" (UniqueName: \"kubernetes.io/projected/04dc7ae1-2485-4ffe-a853-4ef671794e68-kube-api-access-jphxk\") on node \"crc\" DevicePath \"\"" Dec 11 15:21:29 crc kubenswrapper[5050]: I1211 15:21:29.319065 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h88d9\" (UniqueName: \"kubernetes.io/projected/a80b79e5-138b-4e71-ab5e-aa8805cce0b5-kube-api-access-h88d9\") on node \"crc\" DevicePath \"\"" Dec 11 15:21:29 crc kubenswrapper[5050]: I1211 15:21:29.512261 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-x9fv9" event={"ID":"ae46e8c2-edc3-46dc-a160-75c77cb2bafb","Type":"ContainerDied","Data":"2a8ff004611a96d055d7a592971ea4b020fa1edfc09982d4ca83073bde0f45c7"} Dec 11 15:21:29 crc kubenswrapper[5050]: I1211 15:21:29.512301 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2a8ff004611a96d055d7a592971ea4b020fa1edfc09982d4ca83073bde0f45c7" Dec 11 15:21:29 crc kubenswrapper[5050]: I1211 15:21:29.512354 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-x9fv9" Dec 11 15:21:29 crc kubenswrapper[5050]: I1211 15:21:29.514777 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-3433-account-create-update-wknwz" event={"ID":"d35e2a89-ca99-46a0-86ce-83d7eac9733e","Type":"ContainerDied","Data":"1a581d6d1f84876519a0950858ffc6b372b8edafa03ab517f443e441615dad56"} Dec 11 15:21:29 crc kubenswrapper[5050]: I1211 15:21:29.514820 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1a581d6d1f84876519a0950858ffc6b372b8edafa03ab517f443e441615dad56" Dec 11 15:21:29 crc kubenswrapper[5050]: I1211 15:21:29.514867 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-3433-account-create-update-wknwz" Dec 11 15:21:29 crc kubenswrapper[5050]: I1211 15:21:29.516351 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-9fc1-account-create-update-prg9d" Dec 11 15:21:29 crc kubenswrapper[5050]: I1211 15:21:29.516365 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-9fc1-account-create-update-prg9d" event={"ID":"04dc7ae1-2485-4ffe-a853-4ef671794e68","Type":"ContainerDied","Data":"99bc5729b50b7100370ffce08f444fb5cd4f5c6a1d91dd3a02b586cce97dd12a"} Dec 11 15:21:29 crc kubenswrapper[5050]: I1211 15:21:29.516705 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="99bc5729b50b7100370ffce08f444fb5cd4f5c6a1d91dd3a02b586cce97dd12a" Dec 11 15:21:29 crc kubenswrapper[5050]: I1211 15:21:29.517516 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-m7c2g" Dec 11 15:21:29 crc kubenswrapper[5050]: I1211 15:21:29.517544 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-m7c2g" event={"ID":"465e991b-2d30-4579-aa62-8fc4ab7afe21","Type":"ContainerDied","Data":"2219fca8b2257ca3b4e4898367d3089541fe50a1fea0801dfe707a9db277e43f"} Dec 11 15:21:29 crc kubenswrapper[5050]: I1211 15:21:29.517599 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2219fca8b2257ca3b4e4898367d3089541fe50a1fea0801dfe707a9db277e43f" Dec 11 15:21:29 crc kubenswrapper[5050]: I1211 15:21:29.523729 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-45df-account-create-update-ckxx8" event={"ID":"5a630a5b-c349-4e2f-876a-0b82485a8221","Type":"ContainerDied","Data":"a928b2c8a4583771b4e46223adef9e73c3dfe4060ad2f22b56dcc4c23a60dc1e"} Dec 11 15:21:29 crc kubenswrapper[5050]: I1211 15:21:29.523771 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a928b2c8a4583771b4e46223adef9e73c3dfe4060ad2f22b56dcc4c23a60dc1e" Dec 11 15:21:29 crc kubenswrapper[5050]: I1211 15:21:29.523754 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-45df-account-create-update-ckxx8" Dec 11 15:21:29 crc kubenswrapper[5050]: I1211 15:21:29.526525 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-gtv8c" event={"ID":"a80b79e5-138b-4e71-ab5e-aa8805cce0b5","Type":"ContainerDied","Data":"e09a0786d1e30296342f9183180cf055bb58bbf45f4ab7cc1d293b2b154af638"} Dec 11 15:21:29 crc kubenswrapper[5050]: I1211 15:21:29.526639 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e09a0786d1e30296342f9183180cf055bb58bbf45f4ab7cc1d293b2b154af638" Dec 11 15:21:29 crc kubenswrapper[5050]: I1211 15:21:29.526617 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-gtv8c" Dec 11 15:21:30 crc kubenswrapper[5050]: E1211 15:21:30.611693 5050 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda80b79e5_138b_4e71_ab5e_aa8805cce0b5.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5a630a5b_c349_4e2f_876a_0b82485a8221.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd35e2a89_ca99_46a0_86ce_83d7eac9733e.slice/crio-1a581d6d1f84876519a0950858ffc6b372b8edafa03ab517f443e441615dad56\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda80b79e5_138b_4e71_ab5e_aa8805cce0b5.slice/crio-e09a0786d1e30296342f9183180cf055bb58bbf45f4ab7cc1d293b2b154af638\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod04dc7ae1_2485_4ffe_a853_4ef671794e68.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5a630a5b_c349_4e2f_876a_0b82485a8221.slice/crio-a928b2c8a4583771b4e46223adef9e73c3dfe4060ad2f22b56dcc4c23a60dc1e\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd35e2a89_ca99_46a0_86ce_83d7eac9733e.slice\": RecentStats: unable to find data in memory cache]" Dec 11 15:21:30 crc kubenswrapper[5050]: I1211 15:21:30.840232 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-tx9kf"] Dec 11 15:21:30 crc kubenswrapper[5050]: E1211 15:21:30.840689 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="465e991b-2d30-4579-aa62-8fc4ab7afe21" containerName="mariadb-database-create" Dec 11 15:21:30 crc kubenswrapper[5050]: I1211 15:21:30.840708 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="465e991b-2d30-4579-aa62-8fc4ab7afe21" containerName="mariadb-database-create" Dec 11 15:21:30 crc kubenswrapper[5050]: E1211 15:21:30.840731 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04dc7ae1-2485-4ffe-a853-4ef671794e68" containerName="mariadb-account-create-update" Dec 11 15:21:30 crc kubenswrapper[5050]: I1211 15:21:30.840739 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="04dc7ae1-2485-4ffe-a853-4ef671794e68" containerName="mariadb-account-create-update" Dec 11 15:21:30 crc kubenswrapper[5050]: E1211 15:21:30.840750 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae46e8c2-edc3-46dc-a160-75c77cb2bafb" containerName="mariadb-database-create" Dec 11 15:21:30 crc kubenswrapper[5050]: I1211 15:21:30.840757 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae46e8c2-edc3-46dc-a160-75c77cb2bafb" containerName="mariadb-database-create" Dec 11 15:21:30 crc kubenswrapper[5050]: E1211 15:21:30.840772 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a80b79e5-138b-4e71-ab5e-aa8805cce0b5" containerName="mariadb-database-create" Dec 11 15:21:30 crc kubenswrapper[5050]: I1211 15:21:30.840779 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="a80b79e5-138b-4e71-ab5e-aa8805cce0b5" containerName="mariadb-database-create" Dec 11 15:21:30 crc kubenswrapper[5050]: E1211 15:21:30.840790 5050 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="d35e2a89-ca99-46a0-86ce-83d7eac9733e" containerName="mariadb-account-create-update" Dec 11 15:21:30 crc kubenswrapper[5050]: I1211 15:21:30.840799 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="d35e2a89-ca99-46a0-86ce-83d7eac9733e" containerName="mariadb-account-create-update" Dec 11 15:21:30 crc kubenswrapper[5050]: E1211 15:21:30.840823 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a630a5b-c349-4e2f-876a-0b82485a8221" containerName="mariadb-account-create-update" Dec 11 15:21:30 crc kubenswrapper[5050]: I1211 15:21:30.840830 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a630a5b-c349-4e2f-876a-0b82485a8221" containerName="mariadb-account-create-update" Dec 11 15:21:30 crc kubenswrapper[5050]: I1211 15:21:30.841123 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="d35e2a89-ca99-46a0-86ce-83d7eac9733e" containerName="mariadb-account-create-update" Dec 11 15:21:30 crc kubenswrapper[5050]: I1211 15:21:30.841152 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="a80b79e5-138b-4e71-ab5e-aa8805cce0b5" containerName="mariadb-database-create" Dec 11 15:21:30 crc kubenswrapper[5050]: I1211 15:21:30.841162 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="04dc7ae1-2485-4ffe-a853-4ef671794e68" containerName="mariadb-account-create-update" Dec 11 15:21:30 crc kubenswrapper[5050]: I1211 15:21:30.841180 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae46e8c2-edc3-46dc-a160-75c77cb2bafb" containerName="mariadb-database-create" Dec 11 15:21:30 crc kubenswrapper[5050]: I1211 15:21:30.841191 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a630a5b-c349-4e2f-876a-0b82485a8221" containerName="mariadb-account-create-update" Dec 11 15:21:30 crc kubenswrapper[5050]: I1211 15:21:30.841201 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="465e991b-2d30-4579-aa62-8fc4ab7afe21" containerName="mariadb-database-create" Dec 11 15:21:30 crc kubenswrapper[5050]: I1211 15:21:30.841892 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-tx9kf" Dec 11 15:21:30 crc kubenswrapper[5050]: I1211 15:21:30.844083 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-mdjbl" Dec 11 15:21:30 crc kubenswrapper[5050]: I1211 15:21:30.844523 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Dec 11 15:21:30 crc kubenswrapper[5050]: I1211 15:21:30.847566 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Dec 11 15:21:30 crc kubenswrapper[5050]: I1211 15:21:30.851933 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-tx9kf"] Dec 11 15:21:30 crc kubenswrapper[5050]: I1211 15:21:30.944662 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/768c7508-49b4-4465-9cf9-f1388a1ca283-scripts\") pod \"nova-cell0-conductor-db-sync-tx9kf\" (UID: \"768c7508-49b4-4465-9cf9-f1388a1ca283\") " pod="openstack/nova-cell0-conductor-db-sync-tx9kf" Dec 11 15:21:30 crc kubenswrapper[5050]: I1211 15:21:30.944738 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrsrw\" (UniqueName: \"kubernetes.io/projected/768c7508-49b4-4465-9cf9-f1388a1ca283-kube-api-access-zrsrw\") pod \"nova-cell0-conductor-db-sync-tx9kf\" (UID: \"768c7508-49b4-4465-9cf9-f1388a1ca283\") " pod="openstack/nova-cell0-conductor-db-sync-tx9kf" Dec 11 15:21:30 crc kubenswrapper[5050]: I1211 15:21:30.944774 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/768c7508-49b4-4465-9cf9-f1388a1ca283-config-data\") pod \"nova-cell0-conductor-db-sync-tx9kf\" (UID: \"768c7508-49b4-4465-9cf9-f1388a1ca283\") " pod="openstack/nova-cell0-conductor-db-sync-tx9kf" Dec 11 15:21:30 crc kubenswrapper[5050]: I1211 15:21:30.944794 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/768c7508-49b4-4465-9cf9-f1388a1ca283-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-tx9kf\" (UID: \"768c7508-49b4-4465-9cf9-f1388a1ca283\") " pod="openstack/nova-cell0-conductor-db-sync-tx9kf" Dec 11 15:21:31 crc kubenswrapper[5050]: I1211 15:21:31.046577 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zrsrw\" (UniqueName: \"kubernetes.io/projected/768c7508-49b4-4465-9cf9-f1388a1ca283-kube-api-access-zrsrw\") pod \"nova-cell0-conductor-db-sync-tx9kf\" (UID: \"768c7508-49b4-4465-9cf9-f1388a1ca283\") " pod="openstack/nova-cell0-conductor-db-sync-tx9kf" Dec 11 15:21:31 crc kubenswrapper[5050]: I1211 15:21:31.046665 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/768c7508-49b4-4465-9cf9-f1388a1ca283-config-data\") pod \"nova-cell0-conductor-db-sync-tx9kf\" (UID: \"768c7508-49b4-4465-9cf9-f1388a1ca283\") " pod="openstack/nova-cell0-conductor-db-sync-tx9kf" Dec 11 15:21:31 crc kubenswrapper[5050]: I1211 15:21:31.046700 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/768c7508-49b4-4465-9cf9-f1388a1ca283-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-tx9kf\" 
(UID: \"768c7508-49b4-4465-9cf9-f1388a1ca283\") " pod="openstack/nova-cell0-conductor-db-sync-tx9kf" Dec 11 15:21:31 crc kubenswrapper[5050]: I1211 15:21:31.046807 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/768c7508-49b4-4465-9cf9-f1388a1ca283-scripts\") pod \"nova-cell0-conductor-db-sync-tx9kf\" (UID: \"768c7508-49b4-4465-9cf9-f1388a1ca283\") " pod="openstack/nova-cell0-conductor-db-sync-tx9kf" Dec 11 15:21:31 crc kubenswrapper[5050]: I1211 15:21:31.052464 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/768c7508-49b4-4465-9cf9-f1388a1ca283-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-tx9kf\" (UID: \"768c7508-49b4-4465-9cf9-f1388a1ca283\") " pod="openstack/nova-cell0-conductor-db-sync-tx9kf" Dec 11 15:21:31 crc kubenswrapper[5050]: I1211 15:21:31.052532 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/768c7508-49b4-4465-9cf9-f1388a1ca283-config-data\") pod \"nova-cell0-conductor-db-sync-tx9kf\" (UID: \"768c7508-49b4-4465-9cf9-f1388a1ca283\") " pod="openstack/nova-cell0-conductor-db-sync-tx9kf" Dec 11 15:21:31 crc kubenswrapper[5050]: I1211 15:21:31.052881 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/768c7508-49b4-4465-9cf9-f1388a1ca283-scripts\") pod \"nova-cell0-conductor-db-sync-tx9kf\" (UID: \"768c7508-49b4-4465-9cf9-f1388a1ca283\") " pod="openstack/nova-cell0-conductor-db-sync-tx9kf" Dec 11 15:21:31 crc kubenswrapper[5050]: I1211 15:21:31.075393 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zrsrw\" (UniqueName: \"kubernetes.io/projected/768c7508-49b4-4465-9cf9-f1388a1ca283-kube-api-access-zrsrw\") pod \"nova-cell0-conductor-db-sync-tx9kf\" (UID: \"768c7508-49b4-4465-9cf9-f1388a1ca283\") " pod="openstack/nova-cell0-conductor-db-sync-tx9kf" Dec 11 15:21:31 crc kubenswrapper[5050]: I1211 15:21:31.165722 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-tx9kf" Dec 11 15:21:31 crc kubenswrapper[5050]: I1211 15:21:31.580714 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-tx9kf"] Dec 11 15:21:31 crc kubenswrapper[5050]: W1211 15:21:31.586333 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod768c7508_49b4_4465_9cf9_f1388a1ca283.slice/crio-408948081eb2f3815c5623d41a5e1df21f7bdcf67b18fffafe3e039f1025bde4 WatchSource:0}: Error finding container 408948081eb2f3815c5623d41a5e1df21f7bdcf67b18fffafe3e039f1025bde4: Status 404 returned error can't find the container with id 408948081eb2f3815c5623d41a5e1df21f7bdcf67b18fffafe3e039f1025bde4 Dec 11 15:21:32 crc kubenswrapper[5050]: I1211 15:21:32.551347 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-tx9kf" event={"ID":"768c7508-49b4-4465-9cf9-f1388a1ca283","Type":"ContainerStarted","Data":"266d44f0eadb34c4b91b10f9eaec54f29a10ca04027a57ae77f6f1e133cf194e"} Dec 11 15:21:32 crc kubenswrapper[5050]: I1211 15:21:32.551616 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-tx9kf" event={"ID":"768c7508-49b4-4465-9cf9-f1388a1ca283","Type":"ContainerStarted","Data":"408948081eb2f3815c5623d41a5e1df21f7bdcf67b18fffafe3e039f1025bde4"} Dec 11 15:21:32 crc kubenswrapper[5050]: I1211 15:21:32.573096 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-tx9kf" podStartSLOduration=2.573074481 podStartE2EDuration="2.573074481s" podCreationTimestamp="2025-12-11 15:21:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 15:21:32.569836855 +0000 UTC m=+5583.413559441" watchObservedRunningTime="2025-12-11 15:21:32.573074481 +0000 UTC m=+5583.416797087" Dec 11 15:21:37 crc kubenswrapper[5050]: I1211 15:21:37.592242 5050 generic.go:334] "Generic (PLEG): container finished" podID="768c7508-49b4-4465-9cf9-f1388a1ca283" containerID="266d44f0eadb34c4b91b10f9eaec54f29a10ca04027a57ae77f6f1e133cf194e" exitCode=0 Dec 11 15:21:37 crc kubenswrapper[5050]: I1211 15:21:37.592306 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-tx9kf" event={"ID":"768c7508-49b4-4465-9cf9-f1388a1ca283","Type":"ContainerDied","Data":"266d44f0eadb34c4b91b10f9eaec54f29a10ca04027a57ae77f6f1e133cf194e"} Dec 11 15:21:38 crc kubenswrapper[5050]: I1211 15:21:38.546345 5050 scope.go:117] "RemoveContainer" containerID="5e751c2ff817a404aae81889734d5a9bd57af497d8e885add288368bb5b0040e" Dec 11 15:21:38 crc kubenswrapper[5050]: E1211 15:21:38.547039 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 15:21:39 crc kubenswrapper[5050]: I1211 15:21:39.018218 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-tx9kf" Dec 11 15:21:39 crc kubenswrapper[5050]: I1211 15:21:39.092163 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zrsrw\" (UniqueName: \"kubernetes.io/projected/768c7508-49b4-4465-9cf9-f1388a1ca283-kube-api-access-zrsrw\") pod \"768c7508-49b4-4465-9cf9-f1388a1ca283\" (UID: \"768c7508-49b4-4465-9cf9-f1388a1ca283\") " Dec 11 15:21:39 crc kubenswrapper[5050]: I1211 15:21:39.092241 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/768c7508-49b4-4465-9cf9-f1388a1ca283-combined-ca-bundle\") pod \"768c7508-49b4-4465-9cf9-f1388a1ca283\" (UID: \"768c7508-49b4-4465-9cf9-f1388a1ca283\") " Dec 11 15:21:39 crc kubenswrapper[5050]: I1211 15:21:39.092312 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/768c7508-49b4-4465-9cf9-f1388a1ca283-config-data\") pod \"768c7508-49b4-4465-9cf9-f1388a1ca283\" (UID: \"768c7508-49b4-4465-9cf9-f1388a1ca283\") " Dec 11 15:21:39 crc kubenswrapper[5050]: I1211 15:21:39.092366 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/768c7508-49b4-4465-9cf9-f1388a1ca283-scripts\") pod \"768c7508-49b4-4465-9cf9-f1388a1ca283\" (UID: \"768c7508-49b4-4465-9cf9-f1388a1ca283\") " Dec 11 15:21:39 crc kubenswrapper[5050]: I1211 15:21:39.097761 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/768c7508-49b4-4465-9cf9-f1388a1ca283-kube-api-access-zrsrw" (OuterVolumeSpecName: "kube-api-access-zrsrw") pod "768c7508-49b4-4465-9cf9-f1388a1ca283" (UID: "768c7508-49b4-4465-9cf9-f1388a1ca283"). InnerVolumeSpecName "kube-api-access-zrsrw". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 15:21:39 crc kubenswrapper[5050]: I1211 15:21:39.099406 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/768c7508-49b4-4465-9cf9-f1388a1ca283-scripts" (OuterVolumeSpecName: "scripts") pod "768c7508-49b4-4465-9cf9-f1388a1ca283" (UID: "768c7508-49b4-4465-9cf9-f1388a1ca283"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 15:21:39 crc kubenswrapper[5050]: I1211 15:21:39.118997 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/768c7508-49b4-4465-9cf9-f1388a1ca283-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "768c7508-49b4-4465-9cf9-f1388a1ca283" (UID: "768c7508-49b4-4465-9cf9-f1388a1ca283"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 15:21:39 crc kubenswrapper[5050]: I1211 15:21:39.129344 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/768c7508-49b4-4465-9cf9-f1388a1ca283-config-data" (OuterVolumeSpecName: "config-data") pod "768c7508-49b4-4465-9cf9-f1388a1ca283" (UID: "768c7508-49b4-4465-9cf9-f1388a1ca283"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 15:21:39 crc kubenswrapper[5050]: I1211 15:21:39.194201 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zrsrw\" (UniqueName: \"kubernetes.io/projected/768c7508-49b4-4465-9cf9-f1388a1ca283-kube-api-access-zrsrw\") on node \"crc\" DevicePath \"\"" Dec 11 15:21:39 crc kubenswrapper[5050]: I1211 15:21:39.194244 5050 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/768c7508-49b4-4465-9cf9-f1388a1ca283-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 15:21:39 crc kubenswrapper[5050]: I1211 15:21:39.194258 5050 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/768c7508-49b4-4465-9cf9-f1388a1ca283-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 15:21:39 crc kubenswrapper[5050]: I1211 15:21:39.194271 5050 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/768c7508-49b4-4465-9cf9-f1388a1ca283-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 15:21:39 crc kubenswrapper[5050]: I1211 15:21:39.609406 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-tx9kf" event={"ID":"768c7508-49b4-4465-9cf9-f1388a1ca283","Type":"ContainerDied","Data":"408948081eb2f3815c5623d41a5e1df21f7bdcf67b18fffafe3e039f1025bde4"} Dec 11 15:21:39 crc kubenswrapper[5050]: I1211 15:21:39.609448 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="408948081eb2f3815c5623d41a5e1df21f7bdcf67b18fffafe3e039f1025bde4" Dec 11 15:21:39 crc kubenswrapper[5050]: I1211 15:21:39.609508 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-tx9kf" Dec 11 15:21:39 crc kubenswrapper[5050]: I1211 15:21:39.690760 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 11 15:21:39 crc kubenswrapper[5050]: E1211 15:21:39.691135 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="768c7508-49b4-4465-9cf9-f1388a1ca283" containerName="nova-cell0-conductor-db-sync" Dec 11 15:21:39 crc kubenswrapper[5050]: I1211 15:21:39.691151 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="768c7508-49b4-4465-9cf9-f1388a1ca283" containerName="nova-cell0-conductor-db-sync" Dec 11 15:21:39 crc kubenswrapper[5050]: I1211 15:21:39.691346 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="768c7508-49b4-4465-9cf9-f1388a1ca283" containerName="nova-cell0-conductor-db-sync" Dec 11 15:21:39 crc kubenswrapper[5050]: I1211 15:21:39.691919 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Dec 11 15:21:39 crc kubenswrapper[5050]: I1211 15:21:39.696296 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Dec 11 15:21:39 crc kubenswrapper[5050]: I1211 15:21:39.696605 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-mdjbl" Dec 11 15:21:39 crc kubenswrapper[5050]: I1211 15:21:39.709383 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 11 15:21:39 crc kubenswrapper[5050]: I1211 15:21:39.805604 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/063d7c41-b6e3-4e21-9b57-ae16dddec75e-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"063d7c41-b6e3-4e21-9b57-ae16dddec75e\") " pod="openstack/nova-cell0-conductor-0" Dec 11 15:21:39 crc kubenswrapper[5050]: I1211 15:21:39.805685 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9nh7w\" (UniqueName: \"kubernetes.io/projected/063d7c41-b6e3-4e21-9b57-ae16dddec75e-kube-api-access-9nh7w\") pod \"nova-cell0-conductor-0\" (UID: \"063d7c41-b6e3-4e21-9b57-ae16dddec75e\") " pod="openstack/nova-cell0-conductor-0" Dec 11 15:21:39 crc kubenswrapper[5050]: I1211 15:21:39.805787 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/063d7c41-b6e3-4e21-9b57-ae16dddec75e-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"063d7c41-b6e3-4e21-9b57-ae16dddec75e\") " pod="openstack/nova-cell0-conductor-0" Dec 11 15:21:39 crc kubenswrapper[5050]: I1211 15:21:39.907910 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/063d7c41-b6e3-4e21-9b57-ae16dddec75e-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"063d7c41-b6e3-4e21-9b57-ae16dddec75e\") " pod="openstack/nova-cell0-conductor-0" Dec 11 15:21:39 crc kubenswrapper[5050]: I1211 15:21:39.907980 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9nh7w\" (UniqueName: \"kubernetes.io/projected/063d7c41-b6e3-4e21-9b57-ae16dddec75e-kube-api-access-9nh7w\") pod \"nova-cell0-conductor-0\" (UID: \"063d7c41-b6e3-4e21-9b57-ae16dddec75e\") " pod="openstack/nova-cell0-conductor-0" Dec 11 15:21:39 crc kubenswrapper[5050]: I1211 15:21:39.908054 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/063d7c41-b6e3-4e21-9b57-ae16dddec75e-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"063d7c41-b6e3-4e21-9b57-ae16dddec75e\") " pod="openstack/nova-cell0-conductor-0" Dec 11 15:21:39 crc kubenswrapper[5050]: I1211 15:21:39.911516 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/063d7c41-b6e3-4e21-9b57-ae16dddec75e-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"063d7c41-b6e3-4e21-9b57-ae16dddec75e\") " pod="openstack/nova-cell0-conductor-0" Dec 11 15:21:39 crc kubenswrapper[5050]: I1211 15:21:39.913656 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/063d7c41-b6e3-4e21-9b57-ae16dddec75e-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" 
(UID: \"063d7c41-b6e3-4e21-9b57-ae16dddec75e\") " pod="openstack/nova-cell0-conductor-0" Dec 11 15:21:39 crc kubenswrapper[5050]: I1211 15:21:39.924468 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9nh7w\" (UniqueName: \"kubernetes.io/projected/063d7c41-b6e3-4e21-9b57-ae16dddec75e-kube-api-access-9nh7w\") pod \"nova-cell0-conductor-0\" (UID: \"063d7c41-b6e3-4e21-9b57-ae16dddec75e\") " pod="openstack/nova-cell0-conductor-0" Dec 11 15:21:40 crc kubenswrapper[5050]: I1211 15:21:40.013056 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Dec 11 15:21:40 crc kubenswrapper[5050]: I1211 15:21:40.580223 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 11 15:21:40 crc kubenswrapper[5050]: I1211 15:21:40.617032 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"063d7c41-b6e3-4e21-9b57-ae16dddec75e","Type":"ContainerStarted","Data":"b4da6477b27c50c9a7a30fe27f11a9c81875b735aed0b377f1a067c3a98a0b97"} Dec 11 15:21:40 crc kubenswrapper[5050]: E1211 15:21:40.825142 5050 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda80b79e5_138b_4e71_ab5e_aa8805cce0b5.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod04dc7ae1_2485_4ffe_a853_4ef671794e68.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5a630a5b_c349_4e2f_876a_0b82485a8221.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd35e2a89_ca99_46a0_86ce_83d7eac9733e.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda80b79e5_138b_4e71_ab5e_aa8805cce0b5.slice/crio-e09a0786d1e30296342f9183180cf055bb58bbf45f4ab7cc1d293b2b154af638\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd35e2a89_ca99_46a0_86ce_83d7eac9733e.slice/crio-1a581d6d1f84876519a0950858ffc6b372b8edafa03ab517f443e441615dad56\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5a630a5b_c349_4e2f_876a_0b82485a8221.slice/crio-a928b2c8a4583771b4e46223adef9e73c3dfe4060ad2f22b56dcc4c23a60dc1e\": RecentStats: unable to find data in memory cache]" Dec 11 15:21:41 crc kubenswrapper[5050]: I1211 15:21:41.626661 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"063d7c41-b6e3-4e21-9b57-ae16dddec75e","Type":"ContainerStarted","Data":"03b2da0c6e232f7ddc475f0fe3660a9b619575601d4596fb72229cfda6a5e070"} Dec 11 15:21:41 crc kubenswrapper[5050]: I1211 15:21:41.626839 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Dec 11 15:21:41 crc kubenswrapper[5050]: I1211 15:21:41.658753 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.658716265 podStartE2EDuration="2.658716265s" podCreationTimestamp="2025-12-11 15:21:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-12-11 15:21:41.649998712 +0000 UTC m=+5592.493721298" watchObservedRunningTime="2025-12-11 15:21:41.658716265 +0000 UTC m=+5592.502438871" Dec 11 15:21:45 crc kubenswrapper[5050]: I1211 15:21:45.054559 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Dec 11 15:21:45 crc kubenswrapper[5050]: I1211 15:21:45.456579 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-qx6t6"] Dec 11 15:21:45 crc kubenswrapper[5050]: I1211 15:21:45.457930 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-qx6t6" Dec 11 15:21:45 crc kubenswrapper[5050]: I1211 15:21:45.461145 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Dec 11 15:21:45 crc kubenswrapper[5050]: I1211 15:21:45.462742 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Dec 11 15:21:45 crc kubenswrapper[5050]: I1211 15:21:45.470680 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-qx6t6"] Dec 11 15:21:45 crc kubenswrapper[5050]: I1211 15:21:45.525692 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/afe7472c-e7d5-4aef-859d-fc0f5e43e05f-scripts\") pod \"nova-cell0-cell-mapping-qx6t6\" (UID: \"afe7472c-e7d5-4aef-859d-fc0f5e43e05f\") " pod="openstack/nova-cell0-cell-mapping-qx6t6" Dec 11 15:21:45 crc kubenswrapper[5050]: I1211 15:21:45.525764 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/afe7472c-e7d5-4aef-859d-fc0f5e43e05f-config-data\") pod \"nova-cell0-cell-mapping-qx6t6\" (UID: \"afe7472c-e7d5-4aef-859d-fc0f5e43e05f\") " pod="openstack/nova-cell0-cell-mapping-qx6t6" Dec 11 15:21:45 crc kubenswrapper[5050]: I1211 15:21:45.525788 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afe7472c-e7d5-4aef-859d-fc0f5e43e05f-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-qx6t6\" (UID: \"afe7472c-e7d5-4aef-859d-fc0f5e43e05f\") " pod="openstack/nova-cell0-cell-mapping-qx6t6" Dec 11 15:21:45 crc kubenswrapper[5050]: I1211 15:21:45.525844 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6stg\" (UniqueName: \"kubernetes.io/projected/afe7472c-e7d5-4aef-859d-fc0f5e43e05f-kube-api-access-v6stg\") pod \"nova-cell0-cell-mapping-qx6t6\" (UID: \"afe7472c-e7d5-4aef-859d-fc0f5e43e05f\") " pod="openstack/nova-cell0-cell-mapping-qx6t6" Dec 11 15:21:45 crc kubenswrapper[5050]: I1211 15:21:45.626959 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v6stg\" (UniqueName: \"kubernetes.io/projected/afe7472c-e7d5-4aef-859d-fc0f5e43e05f-kube-api-access-v6stg\") pod \"nova-cell0-cell-mapping-qx6t6\" (UID: \"afe7472c-e7d5-4aef-859d-fc0f5e43e05f\") " pod="openstack/nova-cell0-cell-mapping-qx6t6" Dec 11 15:21:45 crc kubenswrapper[5050]: I1211 15:21:45.627177 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/afe7472c-e7d5-4aef-859d-fc0f5e43e05f-scripts\") pod \"nova-cell0-cell-mapping-qx6t6\" (UID: 
\"afe7472c-e7d5-4aef-859d-fc0f5e43e05f\") " pod="openstack/nova-cell0-cell-mapping-qx6t6" Dec 11 15:21:45 crc kubenswrapper[5050]: I1211 15:21:45.627243 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/afe7472c-e7d5-4aef-859d-fc0f5e43e05f-config-data\") pod \"nova-cell0-cell-mapping-qx6t6\" (UID: \"afe7472c-e7d5-4aef-859d-fc0f5e43e05f\") " pod="openstack/nova-cell0-cell-mapping-qx6t6" Dec 11 15:21:45 crc kubenswrapper[5050]: I1211 15:21:45.627266 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afe7472c-e7d5-4aef-859d-fc0f5e43e05f-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-qx6t6\" (UID: \"afe7472c-e7d5-4aef-859d-fc0f5e43e05f\") " pod="openstack/nova-cell0-cell-mapping-qx6t6" Dec 11 15:21:45 crc kubenswrapper[5050]: I1211 15:21:45.636551 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/afe7472c-e7d5-4aef-859d-fc0f5e43e05f-scripts\") pod \"nova-cell0-cell-mapping-qx6t6\" (UID: \"afe7472c-e7d5-4aef-859d-fc0f5e43e05f\") " pod="openstack/nova-cell0-cell-mapping-qx6t6" Dec 11 15:21:45 crc kubenswrapper[5050]: I1211 15:21:45.641771 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afe7472c-e7d5-4aef-859d-fc0f5e43e05f-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-qx6t6\" (UID: \"afe7472c-e7d5-4aef-859d-fc0f5e43e05f\") " pod="openstack/nova-cell0-cell-mapping-qx6t6" Dec 11 15:21:45 crc kubenswrapper[5050]: I1211 15:21:45.651767 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/afe7472c-e7d5-4aef-859d-fc0f5e43e05f-config-data\") pod \"nova-cell0-cell-mapping-qx6t6\" (UID: \"afe7472c-e7d5-4aef-859d-fc0f5e43e05f\") " pod="openstack/nova-cell0-cell-mapping-qx6t6" Dec 11 15:21:45 crc kubenswrapper[5050]: I1211 15:21:45.681123 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Dec 11 15:21:45 crc kubenswrapper[5050]: I1211 15:21:45.683826 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 11 15:21:45 crc kubenswrapper[5050]: I1211 15:21:45.688542 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Dec 11 15:21:45 crc kubenswrapper[5050]: I1211 15:21:45.692614 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v6stg\" (UniqueName: \"kubernetes.io/projected/afe7472c-e7d5-4aef-859d-fc0f5e43e05f-kube-api-access-v6stg\") pod \"nova-cell0-cell-mapping-qx6t6\" (UID: \"afe7472c-e7d5-4aef-859d-fc0f5e43e05f\") " pod="openstack/nova-cell0-cell-mapping-qx6t6" Dec 11 15:21:45 crc kubenswrapper[5050]: I1211 15:21:45.721671 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 11 15:21:45 crc kubenswrapper[5050]: I1211 15:21:45.734693 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Dec 11 15:21:45 crc kubenswrapper[5050]: I1211 15:21:45.736166 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 11 15:21:45 crc kubenswrapper[5050]: I1211 15:21:45.736377 5050 scope.go:117] "RemoveContainer" containerID="f55b3c9fdbfff71d47bbf096e3b38f61f282575fdfc0df338dab5054db2f39bf" Dec 11 15:21:45 crc kubenswrapper[5050]: I1211 15:21:45.775506 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 11 15:21:45 crc kubenswrapper[5050]: I1211 15:21:45.775711 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Dec 11 15:21:45 crc kubenswrapper[5050]: I1211 15:21:45.776281 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-qx6t6" Dec 11 15:21:45 crc kubenswrapper[5050]: I1211 15:21:45.838385 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b8b7a9dc-0609-4549-8abf-caf981152d2a-logs\") pod \"nova-metadata-0\" (UID: \"b8b7a9dc-0609-4549-8abf-caf981152d2a\") " pod="openstack/nova-metadata-0" Dec 11 15:21:45 crc kubenswrapper[5050]: I1211 15:21:45.838463 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a0a6737-e9de-4e43-a75f-796e189251e1-config-data\") pod \"nova-api-0\" (UID: \"4a0a6737-e9de-4e43-a75f-796e189251e1\") " pod="openstack/nova-api-0" Dec 11 15:21:45 crc kubenswrapper[5050]: I1211 15:21:45.838507 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87nh4\" (UniqueName: \"kubernetes.io/projected/b8b7a9dc-0609-4549-8abf-caf981152d2a-kube-api-access-87nh4\") pod \"nova-metadata-0\" (UID: \"b8b7a9dc-0609-4549-8abf-caf981152d2a\") " pod="openstack/nova-metadata-0" Dec 11 15:21:45 crc kubenswrapper[5050]: I1211 15:21:45.838616 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8b7a9dc-0609-4549-8abf-caf981152d2a-config-data\") pod \"nova-metadata-0\" (UID: \"b8b7a9dc-0609-4549-8abf-caf981152d2a\") " pod="openstack/nova-metadata-0" Dec 11 15:21:45 crc kubenswrapper[5050]: I1211 15:21:45.838724 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hkjw\" (UniqueName: \"kubernetes.io/projected/4a0a6737-e9de-4e43-a75f-796e189251e1-kube-api-access-2hkjw\") pod \"nova-api-0\" (UID: \"4a0a6737-e9de-4e43-a75f-796e189251e1\") " pod="openstack/nova-api-0" Dec 11 15:21:45 crc kubenswrapper[5050]: I1211 15:21:45.838762 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8b7a9dc-0609-4549-8abf-caf981152d2a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"b8b7a9dc-0609-4549-8abf-caf981152d2a\") " pod="openstack/nova-metadata-0" Dec 11 15:21:45 crc kubenswrapper[5050]: I1211 15:21:45.838807 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a0a6737-e9de-4e43-a75f-796e189251e1-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"4a0a6737-e9de-4e43-a75f-796e189251e1\") " pod="openstack/nova-api-0" Dec 11 15:21:45 crc kubenswrapper[5050]: I1211 15:21:45.838828 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/4a0a6737-e9de-4e43-a75f-796e189251e1-logs\") pod \"nova-api-0\" (UID: \"4a0a6737-e9de-4e43-a75f-796e189251e1\") " pod="openstack/nova-api-0" Dec 11 15:21:45 crc kubenswrapper[5050]: I1211 15:21:45.844896 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Dec 11 15:21:45 crc kubenswrapper[5050]: I1211 15:21:45.846700 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 11 15:21:45 crc kubenswrapper[5050]: I1211 15:21:45.851001 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Dec 11 15:21:45 crc kubenswrapper[5050]: I1211 15:21:45.870619 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 11 15:21:45 crc kubenswrapper[5050]: I1211 15:21:45.901138 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7dc945cfcf-bfgvl"] Dec 11 15:21:45 crc kubenswrapper[5050]: I1211 15:21:45.902829 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7dc945cfcf-bfgvl" Dec 11 15:21:45 crc kubenswrapper[5050]: I1211 15:21:45.916277 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7dc945cfcf-bfgvl"] Dec 11 15:21:45 crc kubenswrapper[5050]: I1211 15:21:45.923780 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 11 15:21:45 crc kubenswrapper[5050]: I1211 15:21:45.925145 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 11 15:21:45 crc kubenswrapper[5050]: I1211 15:21:45.929143 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Dec 11 15:21:45 crc kubenswrapper[5050]: I1211 15:21:45.938959 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 11 15:21:45 crc kubenswrapper[5050]: I1211 15:21:45.944505 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a0a6737-e9de-4e43-a75f-796e189251e1-config-data\") pod \"nova-api-0\" (UID: \"4a0a6737-e9de-4e43-a75f-796e189251e1\") " pod="openstack/nova-api-0" Dec 11 15:21:45 crc kubenswrapper[5050]: I1211 15:21:45.944553 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bcf288f-ca1a-46f5-b3f3-5136f97465cf-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"1bcf288f-ca1a-46f5-b3f3-5136f97465cf\") " pod="openstack/nova-scheduler-0" Dec 11 15:21:45 crc kubenswrapper[5050]: I1211 15:21:45.944589 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-87nh4\" (UniqueName: \"kubernetes.io/projected/b8b7a9dc-0609-4549-8abf-caf981152d2a-kube-api-access-87nh4\") pod \"nova-metadata-0\" (UID: \"b8b7a9dc-0609-4549-8abf-caf981152d2a\") " pod="openstack/nova-metadata-0" Dec 11 15:21:45 crc kubenswrapper[5050]: I1211 15:21:45.944657 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8b7a9dc-0609-4549-8abf-caf981152d2a-config-data\") pod \"nova-metadata-0\" (UID: \"b8b7a9dc-0609-4549-8abf-caf981152d2a\") " pod="openstack/nova-metadata-0" Dec 11 15:21:45 crc kubenswrapper[5050]: I1211 15:21:45.944785 5050 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2hkjw\" (UniqueName: \"kubernetes.io/projected/4a0a6737-e9de-4e43-a75f-796e189251e1-kube-api-access-2hkjw\") pod \"nova-api-0\" (UID: \"4a0a6737-e9de-4e43-a75f-796e189251e1\") " pod="openstack/nova-api-0" Dec 11 15:21:45 crc kubenswrapper[5050]: I1211 15:21:45.944829 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8b7a9dc-0609-4549-8abf-caf981152d2a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"b8b7a9dc-0609-4549-8abf-caf981152d2a\") " pod="openstack/nova-metadata-0" Dec 11 15:21:45 crc kubenswrapper[5050]: I1211 15:21:45.944846 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mgppl\" (UniqueName: \"kubernetes.io/projected/1bcf288f-ca1a-46f5-b3f3-5136f97465cf-kube-api-access-mgppl\") pod \"nova-scheduler-0\" (UID: \"1bcf288f-ca1a-46f5-b3f3-5136f97465cf\") " pod="openstack/nova-scheduler-0" Dec 11 15:21:45 crc kubenswrapper[5050]: I1211 15:21:45.944891 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a0a6737-e9de-4e43-a75f-796e189251e1-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"4a0a6737-e9de-4e43-a75f-796e189251e1\") " pod="openstack/nova-api-0" Dec 11 15:21:45 crc kubenswrapper[5050]: I1211 15:21:45.944911 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4a0a6737-e9de-4e43-a75f-796e189251e1-logs\") pod \"nova-api-0\" (UID: \"4a0a6737-e9de-4e43-a75f-796e189251e1\") " pod="openstack/nova-api-0" Dec 11 15:21:45 crc kubenswrapper[5050]: I1211 15:21:45.944987 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1bcf288f-ca1a-46f5-b3f3-5136f97465cf-config-data\") pod \"nova-scheduler-0\" (UID: \"1bcf288f-ca1a-46f5-b3f3-5136f97465cf\") " pod="openstack/nova-scheduler-0" Dec 11 15:21:45 crc kubenswrapper[5050]: I1211 15:21:45.945127 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b8b7a9dc-0609-4549-8abf-caf981152d2a-logs\") pod \"nova-metadata-0\" (UID: \"b8b7a9dc-0609-4549-8abf-caf981152d2a\") " pod="openstack/nova-metadata-0" Dec 11 15:21:45 crc kubenswrapper[5050]: I1211 15:21:45.945635 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b8b7a9dc-0609-4549-8abf-caf981152d2a-logs\") pod \"nova-metadata-0\" (UID: \"b8b7a9dc-0609-4549-8abf-caf981152d2a\") " pod="openstack/nova-metadata-0" Dec 11 15:21:45 crc kubenswrapper[5050]: I1211 15:21:45.947347 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4a0a6737-e9de-4e43-a75f-796e189251e1-logs\") pod \"nova-api-0\" (UID: \"4a0a6737-e9de-4e43-a75f-796e189251e1\") " pod="openstack/nova-api-0" Dec 11 15:21:45 crc kubenswrapper[5050]: I1211 15:21:45.950746 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8b7a9dc-0609-4549-8abf-caf981152d2a-config-data\") pod \"nova-metadata-0\" (UID: \"b8b7a9dc-0609-4549-8abf-caf981152d2a\") " pod="openstack/nova-metadata-0" Dec 11 15:21:45 crc kubenswrapper[5050]: I1211 15:21:45.950818 5050 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a0a6737-e9de-4e43-a75f-796e189251e1-config-data\") pod \"nova-api-0\" (UID: \"4a0a6737-e9de-4e43-a75f-796e189251e1\") " pod="openstack/nova-api-0" Dec 11 15:21:45 crc kubenswrapper[5050]: I1211 15:21:45.954104 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a0a6737-e9de-4e43-a75f-796e189251e1-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"4a0a6737-e9de-4e43-a75f-796e189251e1\") " pod="openstack/nova-api-0" Dec 11 15:21:45 crc kubenswrapper[5050]: I1211 15:21:45.964375 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8b7a9dc-0609-4549-8abf-caf981152d2a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"b8b7a9dc-0609-4549-8abf-caf981152d2a\") " pod="openstack/nova-metadata-0" Dec 11 15:21:45 crc kubenswrapper[5050]: I1211 15:21:45.973250 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2hkjw\" (UniqueName: \"kubernetes.io/projected/4a0a6737-e9de-4e43-a75f-796e189251e1-kube-api-access-2hkjw\") pod \"nova-api-0\" (UID: \"4a0a6737-e9de-4e43-a75f-796e189251e1\") " pod="openstack/nova-api-0" Dec 11 15:21:45 crc kubenswrapper[5050]: I1211 15:21:45.975513 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-87nh4\" (UniqueName: \"kubernetes.io/projected/b8b7a9dc-0609-4549-8abf-caf981152d2a-kube-api-access-87nh4\") pod \"nova-metadata-0\" (UID: \"b8b7a9dc-0609-4549-8abf-caf981152d2a\") " pod="openstack/nova-metadata-0" Dec 11 15:21:46 crc kubenswrapper[5050]: I1211 15:21:46.048744 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bcf288f-ca1a-46f5-b3f3-5136f97465cf-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"1bcf288f-ca1a-46f5-b3f3-5136f97465cf\") " pod="openstack/nova-scheduler-0" Dec 11 15:21:46 crc kubenswrapper[5050]: I1211 15:21:46.048829 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/34f44ada-5f76-4b45-89ba-132caf64ae4a-dns-svc\") pod \"dnsmasq-dns-7dc945cfcf-bfgvl\" (UID: \"34f44ada-5f76-4b45-89ba-132caf64ae4a\") " pod="openstack/dnsmasq-dns-7dc945cfcf-bfgvl" Dec 11 15:21:46 crc kubenswrapper[5050]: I1211 15:21:46.048891 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/34f44ada-5f76-4b45-89ba-132caf64ae4a-ovsdbserver-nb\") pod \"dnsmasq-dns-7dc945cfcf-bfgvl\" (UID: \"34f44ada-5f76-4b45-89ba-132caf64ae4a\") " pod="openstack/dnsmasq-dns-7dc945cfcf-bfgvl" Dec 11 15:21:46 crc kubenswrapper[5050]: I1211 15:21:46.049034 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zbs8m\" (UniqueName: \"kubernetes.io/projected/63dac332-2a74-496b-bf44-83acbf69ad11-kube-api-access-zbs8m\") pod \"nova-cell1-novncproxy-0\" (UID: \"63dac332-2a74-496b-bf44-83acbf69ad11\") " pod="openstack/nova-cell1-novncproxy-0" Dec 11 15:21:46 crc kubenswrapper[5050]: I1211 15:21:46.049213 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mgppl\" (UniqueName: 
\"kubernetes.io/projected/1bcf288f-ca1a-46f5-b3f3-5136f97465cf-kube-api-access-mgppl\") pod \"nova-scheduler-0\" (UID: \"1bcf288f-ca1a-46f5-b3f3-5136f97465cf\") " pod="openstack/nova-scheduler-0" Dec 11 15:21:46 crc kubenswrapper[5050]: I1211 15:21:46.049268 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/34f44ada-5f76-4b45-89ba-132caf64ae4a-config\") pod \"dnsmasq-dns-7dc945cfcf-bfgvl\" (UID: \"34f44ada-5f76-4b45-89ba-132caf64ae4a\") " pod="openstack/dnsmasq-dns-7dc945cfcf-bfgvl" Dec 11 15:21:46 crc kubenswrapper[5050]: I1211 15:21:46.049294 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63dac332-2a74-496b-bf44-83acbf69ad11-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"63dac332-2a74-496b-bf44-83acbf69ad11\") " pod="openstack/nova-cell1-novncproxy-0" Dec 11 15:21:46 crc kubenswrapper[5050]: I1211 15:21:46.049322 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63dac332-2a74-496b-bf44-83acbf69ad11-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"63dac332-2a74-496b-bf44-83acbf69ad11\") " pod="openstack/nova-cell1-novncproxy-0" Dec 11 15:21:46 crc kubenswrapper[5050]: I1211 15:21:46.049376 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/34f44ada-5f76-4b45-89ba-132caf64ae4a-ovsdbserver-sb\") pod \"dnsmasq-dns-7dc945cfcf-bfgvl\" (UID: \"34f44ada-5f76-4b45-89ba-132caf64ae4a\") " pod="openstack/dnsmasq-dns-7dc945cfcf-bfgvl" Dec 11 15:21:46 crc kubenswrapper[5050]: I1211 15:21:46.049421 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1bcf288f-ca1a-46f5-b3f3-5136f97465cf-config-data\") pod \"nova-scheduler-0\" (UID: \"1bcf288f-ca1a-46f5-b3f3-5136f97465cf\") " pod="openstack/nova-scheduler-0" Dec 11 15:21:46 crc kubenswrapper[5050]: I1211 15:21:46.049527 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qvnr\" (UniqueName: \"kubernetes.io/projected/34f44ada-5f76-4b45-89ba-132caf64ae4a-kube-api-access-9qvnr\") pod \"dnsmasq-dns-7dc945cfcf-bfgvl\" (UID: \"34f44ada-5f76-4b45-89ba-132caf64ae4a\") " pod="openstack/dnsmasq-dns-7dc945cfcf-bfgvl" Dec 11 15:21:46 crc kubenswrapper[5050]: I1211 15:21:46.052054 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bcf288f-ca1a-46f5-b3f3-5136f97465cf-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"1bcf288f-ca1a-46f5-b3f3-5136f97465cf\") " pod="openstack/nova-scheduler-0" Dec 11 15:21:46 crc kubenswrapper[5050]: I1211 15:21:46.052981 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1bcf288f-ca1a-46f5-b3f3-5136f97465cf-config-data\") pod \"nova-scheduler-0\" (UID: \"1bcf288f-ca1a-46f5-b3f3-5136f97465cf\") " pod="openstack/nova-scheduler-0" Dec 11 15:21:46 crc kubenswrapper[5050]: I1211 15:21:46.068846 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mgppl\" (UniqueName: \"kubernetes.io/projected/1bcf288f-ca1a-46f5-b3f3-5136f97465cf-kube-api-access-mgppl\") pod 
\"nova-scheduler-0\" (UID: \"1bcf288f-ca1a-46f5-b3f3-5136f97465cf\") " pod="openstack/nova-scheduler-0" Dec 11 15:21:46 crc kubenswrapper[5050]: I1211 15:21:46.125810 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 11 15:21:46 crc kubenswrapper[5050]: I1211 15:21:46.142138 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 11 15:21:46 crc kubenswrapper[5050]: I1211 15:21:46.154000 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/34f44ada-5f76-4b45-89ba-132caf64ae4a-dns-svc\") pod \"dnsmasq-dns-7dc945cfcf-bfgvl\" (UID: \"34f44ada-5f76-4b45-89ba-132caf64ae4a\") " pod="openstack/dnsmasq-dns-7dc945cfcf-bfgvl" Dec 11 15:21:46 crc kubenswrapper[5050]: I1211 15:21:46.154070 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/34f44ada-5f76-4b45-89ba-132caf64ae4a-ovsdbserver-nb\") pod \"dnsmasq-dns-7dc945cfcf-bfgvl\" (UID: \"34f44ada-5f76-4b45-89ba-132caf64ae4a\") " pod="openstack/dnsmasq-dns-7dc945cfcf-bfgvl" Dec 11 15:21:46 crc kubenswrapper[5050]: I1211 15:21:46.154153 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zbs8m\" (UniqueName: \"kubernetes.io/projected/63dac332-2a74-496b-bf44-83acbf69ad11-kube-api-access-zbs8m\") pod \"nova-cell1-novncproxy-0\" (UID: \"63dac332-2a74-496b-bf44-83acbf69ad11\") " pod="openstack/nova-cell1-novncproxy-0" Dec 11 15:21:46 crc kubenswrapper[5050]: I1211 15:21:46.154199 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63dac332-2a74-496b-bf44-83acbf69ad11-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"63dac332-2a74-496b-bf44-83acbf69ad11\") " pod="openstack/nova-cell1-novncproxy-0" Dec 11 15:21:46 crc kubenswrapper[5050]: I1211 15:21:46.154223 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/34f44ada-5f76-4b45-89ba-132caf64ae4a-config\") pod \"dnsmasq-dns-7dc945cfcf-bfgvl\" (UID: \"34f44ada-5f76-4b45-89ba-132caf64ae4a\") " pod="openstack/dnsmasq-dns-7dc945cfcf-bfgvl" Dec 11 15:21:46 crc kubenswrapper[5050]: I1211 15:21:46.154248 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63dac332-2a74-496b-bf44-83acbf69ad11-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"63dac332-2a74-496b-bf44-83acbf69ad11\") " pod="openstack/nova-cell1-novncproxy-0" Dec 11 15:21:46 crc kubenswrapper[5050]: I1211 15:21:46.154290 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/34f44ada-5f76-4b45-89ba-132caf64ae4a-ovsdbserver-sb\") pod \"dnsmasq-dns-7dc945cfcf-bfgvl\" (UID: \"34f44ada-5f76-4b45-89ba-132caf64ae4a\") " pod="openstack/dnsmasq-dns-7dc945cfcf-bfgvl" Dec 11 15:21:46 crc kubenswrapper[5050]: I1211 15:21:46.154375 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9qvnr\" (UniqueName: \"kubernetes.io/projected/34f44ada-5f76-4b45-89ba-132caf64ae4a-kube-api-access-9qvnr\") pod \"dnsmasq-dns-7dc945cfcf-bfgvl\" (UID: \"34f44ada-5f76-4b45-89ba-132caf64ae4a\") " pod="openstack/dnsmasq-dns-7dc945cfcf-bfgvl" Dec 11 15:21:46 crc 
kubenswrapper[5050]: I1211 15:21:46.155569 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/34f44ada-5f76-4b45-89ba-132caf64ae4a-dns-svc\") pod \"dnsmasq-dns-7dc945cfcf-bfgvl\" (UID: \"34f44ada-5f76-4b45-89ba-132caf64ae4a\") " pod="openstack/dnsmasq-dns-7dc945cfcf-bfgvl" Dec 11 15:21:46 crc kubenswrapper[5050]: I1211 15:21:46.156217 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/34f44ada-5f76-4b45-89ba-132caf64ae4a-ovsdbserver-sb\") pod \"dnsmasq-dns-7dc945cfcf-bfgvl\" (UID: \"34f44ada-5f76-4b45-89ba-132caf64ae4a\") " pod="openstack/dnsmasq-dns-7dc945cfcf-bfgvl" Dec 11 15:21:46 crc kubenswrapper[5050]: I1211 15:21:46.156437 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/34f44ada-5f76-4b45-89ba-132caf64ae4a-ovsdbserver-nb\") pod \"dnsmasq-dns-7dc945cfcf-bfgvl\" (UID: \"34f44ada-5f76-4b45-89ba-132caf64ae4a\") " pod="openstack/dnsmasq-dns-7dc945cfcf-bfgvl" Dec 11 15:21:46 crc kubenswrapper[5050]: I1211 15:21:46.156560 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/34f44ada-5f76-4b45-89ba-132caf64ae4a-config\") pod \"dnsmasq-dns-7dc945cfcf-bfgvl\" (UID: \"34f44ada-5f76-4b45-89ba-132caf64ae4a\") " pod="openstack/dnsmasq-dns-7dc945cfcf-bfgvl" Dec 11 15:21:46 crc kubenswrapper[5050]: I1211 15:21:46.157422 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-qx6t6"] Dec 11 15:21:46 crc kubenswrapper[5050]: I1211 15:21:46.159157 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63dac332-2a74-496b-bf44-83acbf69ad11-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"63dac332-2a74-496b-bf44-83acbf69ad11\") " pod="openstack/nova-cell1-novncproxy-0" Dec 11 15:21:46 crc kubenswrapper[5050]: I1211 15:21:46.160562 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63dac332-2a74-496b-bf44-83acbf69ad11-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"63dac332-2a74-496b-bf44-83acbf69ad11\") " pod="openstack/nova-cell1-novncproxy-0" Dec 11 15:21:46 crc kubenswrapper[5050]: I1211 15:21:46.171565 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zbs8m\" (UniqueName: \"kubernetes.io/projected/63dac332-2a74-496b-bf44-83acbf69ad11-kube-api-access-zbs8m\") pod \"nova-cell1-novncproxy-0\" (UID: \"63dac332-2a74-496b-bf44-83acbf69ad11\") " pod="openstack/nova-cell1-novncproxy-0" Dec 11 15:21:46 crc kubenswrapper[5050]: I1211 15:21:46.171601 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9qvnr\" (UniqueName: \"kubernetes.io/projected/34f44ada-5f76-4b45-89ba-132caf64ae4a-kube-api-access-9qvnr\") pod \"dnsmasq-dns-7dc945cfcf-bfgvl\" (UID: \"34f44ada-5f76-4b45-89ba-132caf64ae4a\") " pod="openstack/dnsmasq-dns-7dc945cfcf-bfgvl" Dec 11 15:21:46 crc kubenswrapper[5050]: W1211 15:21:46.173097 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podafe7472c_e7d5_4aef_859d_fc0f5e43e05f.slice/crio-e45558aa1d9f1ca00aedb3b72d5699b94dda5c35367914a3b2d709c03b77bbf5 WatchSource:0}: Error finding container 
e45558aa1d9f1ca00aedb3b72d5699b94dda5c35367914a3b2d709c03b77bbf5: Status 404 returned error can't find the container with id e45558aa1d9f1ca00aedb3b72d5699b94dda5c35367914a3b2d709c03b77bbf5 Dec 11 15:21:46 crc kubenswrapper[5050]: I1211 15:21:46.252135 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 11 15:21:46 crc kubenswrapper[5050]: I1211 15:21:46.261479 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7dc945cfcf-bfgvl" Dec 11 15:21:46 crc kubenswrapper[5050]: I1211 15:21:46.275157 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 11 15:21:46 crc kubenswrapper[5050]: I1211 15:21:46.492079 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-qqfkl"] Dec 11 15:21:46 crc kubenswrapper[5050]: I1211 15:21:46.493602 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-qqfkl" Dec 11 15:21:46 crc kubenswrapper[5050]: I1211 15:21:46.502385 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Dec 11 15:21:46 crc kubenswrapper[5050]: I1211 15:21:46.505691 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Dec 11 15:21:46 crc kubenswrapper[5050]: I1211 15:21:46.515155 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-qqfkl"] Dec 11 15:21:46 crc kubenswrapper[5050]: W1211 15:21:46.614444 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb8b7a9dc_0609_4549_8abf_caf981152d2a.slice/crio-9054f70b2c18d629b4d109fcdd8d5df9d2086ee8d54da868edbe508eeb137a90 WatchSource:0}: Error finding container 9054f70b2c18d629b4d109fcdd8d5df9d2086ee8d54da868edbe508eeb137a90: Status 404 returned error can't find the container with id 9054f70b2c18d629b4d109fcdd8d5df9d2086ee8d54da868edbe508eeb137a90 Dec 11 15:21:46 crc kubenswrapper[5050]: I1211 15:21:46.614782 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 11 15:21:46 crc kubenswrapper[5050]: I1211 15:21:46.667622 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c3e50969-a69c-4b66-9788-ca2566127898-scripts\") pod \"nova-cell1-conductor-db-sync-qqfkl\" (UID: \"c3e50969-a69c-4b66-9788-ca2566127898\") " pod="openstack/nova-cell1-conductor-db-sync-qqfkl" Dec 11 15:21:46 crc kubenswrapper[5050]: I1211 15:21:46.667765 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxbwc\" (UniqueName: \"kubernetes.io/projected/c3e50969-a69c-4b66-9788-ca2566127898-kube-api-access-nxbwc\") pod \"nova-cell1-conductor-db-sync-qqfkl\" (UID: \"c3e50969-a69c-4b66-9788-ca2566127898\") " pod="openstack/nova-cell1-conductor-db-sync-qqfkl" Dec 11 15:21:46 crc kubenswrapper[5050]: I1211 15:21:46.667875 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3e50969-a69c-4b66-9788-ca2566127898-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-qqfkl\" (UID: \"c3e50969-a69c-4b66-9788-ca2566127898\") " 
pod="openstack/nova-cell1-conductor-db-sync-qqfkl" Dec 11 15:21:46 crc kubenswrapper[5050]: I1211 15:21:46.667945 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3e50969-a69c-4b66-9788-ca2566127898-config-data\") pod \"nova-cell1-conductor-db-sync-qqfkl\" (UID: \"c3e50969-a69c-4b66-9788-ca2566127898\") " pod="openstack/nova-cell1-conductor-db-sync-qqfkl" Dec 11 15:21:46 crc kubenswrapper[5050]: I1211 15:21:46.709580 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-qx6t6" event={"ID":"afe7472c-e7d5-4aef-859d-fc0f5e43e05f","Type":"ContainerStarted","Data":"c06af4b1d0beb3613d151bbf71668b5e3490434ca6e76f526229fa627f332232"} Dec 11 15:21:46 crc kubenswrapper[5050]: I1211 15:21:46.709627 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-qx6t6" event={"ID":"afe7472c-e7d5-4aef-859d-fc0f5e43e05f","Type":"ContainerStarted","Data":"e45558aa1d9f1ca00aedb3b72d5699b94dda5c35367914a3b2d709c03b77bbf5"} Dec 11 15:21:46 crc kubenswrapper[5050]: I1211 15:21:46.712075 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b8b7a9dc-0609-4549-8abf-caf981152d2a","Type":"ContainerStarted","Data":"9054f70b2c18d629b4d109fcdd8d5df9d2086ee8d54da868edbe508eeb137a90"} Dec 11 15:21:46 crc kubenswrapper[5050]: I1211 15:21:46.738665 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-qx6t6" podStartSLOduration=1.738649878 podStartE2EDuration="1.738649878s" podCreationTimestamp="2025-12-11 15:21:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 15:21:46.730366177 +0000 UTC m=+5597.574088763" watchObservedRunningTime="2025-12-11 15:21:46.738649878 +0000 UTC m=+5597.582372454" Dec 11 15:21:46 crc kubenswrapper[5050]: I1211 15:21:46.753568 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 11 15:21:46 crc kubenswrapper[5050]: I1211 15:21:46.769116 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3e50969-a69c-4b66-9788-ca2566127898-config-data\") pod \"nova-cell1-conductor-db-sync-qqfkl\" (UID: \"c3e50969-a69c-4b66-9788-ca2566127898\") " pod="openstack/nova-cell1-conductor-db-sync-qqfkl" Dec 11 15:21:46 crc kubenswrapper[5050]: I1211 15:21:46.769164 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c3e50969-a69c-4b66-9788-ca2566127898-scripts\") pod \"nova-cell1-conductor-db-sync-qqfkl\" (UID: \"c3e50969-a69c-4b66-9788-ca2566127898\") " pod="openstack/nova-cell1-conductor-db-sync-qqfkl" Dec 11 15:21:46 crc kubenswrapper[5050]: I1211 15:21:46.769240 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nxbwc\" (UniqueName: \"kubernetes.io/projected/c3e50969-a69c-4b66-9788-ca2566127898-kube-api-access-nxbwc\") pod \"nova-cell1-conductor-db-sync-qqfkl\" (UID: \"c3e50969-a69c-4b66-9788-ca2566127898\") " pod="openstack/nova-cell1-conductor-db-sync-qqfkl" Dec 11 15:21:46 crc kubenswrapper[5050]: I1211 15:21:46.769309 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/c3e50969-a69c-4b66-9788-ca2566127898-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-qqfkl\" (UID: \"c3e50969-a69c-4b66-9788-ca2566127898\") " pod="openstack/nova-cell1-conductor-db-sync-qqfkl" Dec 11 15:21:46 crc kubenswrapper[5050]: I1211 15:21:46.775497 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c3e50969-a69c-4b66-9788-ca2566127898-scripts\") pod \"nova-cell1-conductor-db-sync-qqfkl\" (UID: \"c3e50969-a69c-4b66-9788-ca2566127898\") " pod="openstack/nova-cell1-conductor-db-sync-qqfkl" Dec 11 15:21:46 crc kubenswrapper[5050]: I1211 15:21:46.778336 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3e50969-a69c-4b66-9788-ca2566127898-config-data\") pod \"nova-cell1-conductor-db-sync-qqfkl\" (UID: \"c3e50969-a69c-4b66-9788-ca2566127898\") " pod="openstack/nova-cell1-conductor-db-sync-qqfkl" Dec 11 15:21:46 crc kubenswrapper[5050]: I1211 15:21:46.785154 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3e50969-a69c-4b66-9788-ca2566127898-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-qqfkl\" (UID: \"c3e50969-a69c-4b66-9788-ca2566127898\") " pod="openstack/nova-cell1-conductor-db-sync-qqfkl" Dec 11 15:21:46 crc kubenswrapper[5050]: I1211 15:21:46.791255 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nxbwc\" (UniqueName: \"kubernetes.io/projected/c3e50969-a69c-4b66-9788-ca2566127898-kube-api-access-nxbwc\") pod \"nova-cell1-conductor-db-sync-qqfkl\" (UID: \"c3e50969-a69c-4b66-9788-ca2566127898\") " pod="openstack/nova-cell1-conductor-db-sync-qqfkl" Dec 11 15:21:46 crc kubenswrapper[5050]: I1211 15:21:46.820383 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-qqfkl" Dec 11 15:21:46 crc kubenswrapper[5050]: I1211 15:21:46.820856 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 11 15:21:46 crc kubenswrapper[5050]: I1211 15:21:46.835026 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 11 15:21:46 crc kubenswrapper[5050]: I1211 15:21:46.970277 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7dc945cfcf-bfgvl"] Dec 11 15:21:47 crc kubenswrapper[5050]: I1211 15:21:47.199993 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-qqfkl"] Dec 11 15:21:47 crc kubenswrapper[5050]: W1211 15:21:47.200539 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc3e50969_a69c_4b66_9788_ca2566127898.slice/crio-0463161c15bb7134dbfaf93ea63c03439708d7092dd4fcadea98530aeb5b35df WatchSource:0}: Error finding container 0463161c15bb7134dbfaf93ea63c03439708d7092dd4fcadea98530aeb5b35df: Status 404 returned error can't find the container with id 0463161c15bb7134dbfaf93ea63c03439708d7092dd4fcadea98530aeb5b35df Dec 11 15:21:47 crc kubenswrapper[5050]: I1211 15:21:47.735798 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b8b7a9dc-0609-4549-8abf-caf981152d2a","Type":"ContainerStarted","Data":"e81516e776bc20477a74564b8638763dde8907d2b332218cf8b387ae5ddfbd68"} Dec 11 15:21:47 crc kubenswrapper[5050]: I1211 15:21:47.736174 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b8b7a9dc-0609-4549-8abf-caf981152d2a","Type":"ContainerStarted","Data":"06dd38f356c42aa630810481160a1dc00bea4efe7b43c38e79fa6d1439a75a60"} Dec 11 15:21:47 crc kubenswrapper[5050]: I1211 15:21:47.753334 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"1bcf288f-ca1a-46f5-b3f3-5136f97465cf","Type":"ContainerStarted","Data":"30dd4750ef1da079dc2a1d47f7c9a776395d1c483687cb1280344a864334315f"} Dec 11 15:21:47 crc kubenswrapper[5050]: I1211 15:21:47.753390 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"1bcf288f-ca1a-46f5-b3f3-5136f97465cf","Type":"ContainerStarted","Data":"ebe75507e89c57497b4509f47cd4ac2d453c109dbc179eba6a6a4acb8772e0ed"} Dec 11 15:21:47 crc kubenswrapper[5050]: I1211 15:21:47.770326 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.7703130910000002 podStartE2EDuration="2.770313091s" podCreationTimestamp="2025-12-11 15:21:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 15:21:47.767889166 +0000 UTC m=+5598.611611752" watchObservedRunningTime="2025-12-11 15:21:47.770313091 +0000 UTC m=+5598.614035677" Dec 11 15:21:47 crc kubenswrapper[5050]: I1211 15:21:47.772771 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4a0a6737-e9de-4e43-a75f-796e189251e1","Type":"ContainerStarted","Data":"3e56fca1c857c28dee4ece2352525ee6572fed5025ab9a5bfc1f84336cc8bdc3"} Dec 11 15:21:47 crc kubenswrapper[5050]: I1211 15:21:47.772810 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"4a0a6737-e9de-4e43-a75f-796e189251e1","Type":"ContainerStarted","Data":"861fdfa63aa8548e7917ebff813c65ade3b956a8b37928a8d8347db0ff629e59"} Dec 11 15:21:47 crc kubenswrapper[5050]: I1211 15:21:47.772821 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4a0a6737-e9de-4e43-a75f-796e189251e1","Type":"ContainerStarted","Data":"c2cbeb493e7ef7978ef1596116e47ce73f1b9f613439d6d8b2bda59f55bf83f6"} Dec 11 15:21:47 crc kubenswrapper[5050]: I1211 15:21:47.792218 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.792197346 podStartE2EDuration="2.792197346s" podCreationTimestamp="2025-12-11 15:21:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 15:21:47.783208666 +0000 UTC m=+5598.626931262" watchObservedRunningTime="2025-12-11 15:21:47.792197346 +0000 UTC m=+5598.635919932" Dec 11 15:21:47 crc kubenswrapper[5050]: I1211 15:21:47.796315 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-qqfkl" event={"ID":"c3e50969-a69c-4b66-9788-ca2566127898","Type":"ContainerStarted","Data":"bba526dafd51c2c91e6a8bc3cb18cc4b2a68f042ba7fc81d2819805dac2c26a4"} Dec 11 15:21:47 crc kubenswrapper[5050]: I1211 15:21:47.796357 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-qqfkl" event={"ID":"c3e50969-a69c-4b66-9788-ca2566127898","Type":"ContainerStarted","Data":"0463161c15bb7134dbfaf93ea63c03439708d7092dd4fcadea98530aeb5b35df"} Dec 11 15:21:47 crc kubenswrapper[5050]: I1211 15:21:47.803336 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"63dac332-2a74-496b-bf44-83acbf69ad11","Type":"ContainerStarted","Data":"0d66ac326a05f25fc9b2fa6aea5c914ec9c3296dc44ce9f8f0b1caf402c6b92a"} Dec 11 15:21:47 crc kubenswrapper[5050]: I1211 15:21:47.803387 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"63dac332-2a74-496b-bf44-83acbf69ad11","Type":"ContainerStarted","Data":"86aa62c29a161b771d222c202d89c5f9f577a0234deb98460de7ff61dd517db2"} Dec 11 15:21:47 crc kubenswrapper[5050]: I1211 15:21:47.810602 5050 generic.go:334] "Generic (PLEG): container finished" podID="34f44ada-5f76-4b45-89ba-132caf64ae4a" containerID="39a083055865128f8d2d36931caadeb9cfe2196d13a8e2829dd33dd4817eac5c" exitCode=0 Dec 11 15:21:47 crc kubenswrapper[5050]: I1211 15:21:47.811406 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7dc945cfcf-bfgvl" event={"ID":"34f44ada-5f76-4b45-89ba-132caf64ae4a","Type":"ContainerDied","Data":"39a083055865128f8d2d36931caadeb9cfe2196d13a8e2829dd33dd4817eac5c"} Dec 11 15:21:47 crc kubenswrapper[5050]: I1211 15:21:47.811437 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7dc945cfcf-bfgvl" event={"ID":"34f44ada-5f76-4b45-89ba-132caf64ae4a","Type":"ContainerStarted","Data":"fbdf034a429b09cb38734b4c61da34ce78adfb7b966c48aef25c9a4eade2bfff"} Dec 11 15:21:47 crc kubenswrapper[5050]: I1211 15:21:47.812300 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.812284642 podStartE2EDuration="2.812284642s" podCreationTimestamp="2025-12-11 15:21:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-12-11 15:21:47.803685132 +0000 UTC m=+5598.647407718" watchObservedRunningTime="2025-12-11 15:21:47.812284642 +0000 UTC m=+5598.656007228" Dec 11 15:21:47 crc kubenswrapper[5050]: I1211 15:21:47.835762 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.835738788 podStartE2EDuration="2.835738788s" podCreationTimestamp="2025-12-11 15:21:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 15:21:47.829385179 +0000 UTC m=+5598.673107775" watchObservedRunningTime="2025-12-11 15:21:47.835738788 +0000 UTC m=+5598.679461374" Dec 11 15:21:47 crc kubenswrapper[5050]: I1211 15:21:47.856523 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-qqfkl" podStartSLOduration=1.856503223 podStartE2EDuration="1.856503223s" podCreationTimestamp="2025-12-11 15:21:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 15:21:47.853355619 +0000 UTC m=+5598.697078205" watchObservedRunningTime="2025-12-11 15:21:47.856503223 +0000 UTC m=+5598.700225809" Dec 11 15:21:48 crc kubenswrapper[5050]: I1211 15:21:48.823545 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7dc945cfcf-bfgvl" event={"ID":"34f44ada-5f76-4b45-89ba-132caf64ae4a","Type":"ContainerStarted","Data":"1915832d9493f4f7d5d3aa66249038d1213fb2e67070b301c39c161b5c64a9c0"} Dec 11 15:21:48 crc kubenswrapper[5050]: I1211 15:21:48.826795 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7dc945cfcf-bfgvl" Dec 11 15:21:48 crc kubenswrapper[5050]: I1211 15:21:48.841779 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7dc945cfcf-bfgvl" podStartSLOduration=3.841759346 podStartE2EDuration="3.841759346s" podCreationTimestamp="2025-12-11 15:21:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 15:21:48.84153822 +0000 UTC m=+5599.685260806" watchObservedRunningTime="2025-12-11 15:21:48.841759346 +0000 UTC m=+5599.685481932" Dec 11 15:21:50 crc kubenswrapper[5050]: I1211 15:21:50.546772 5050 scope.go:117] "RemoveContainer" containerID="5e751c2ff817a404aae81889734d5a9bd57af497d8e885add288368bb5b0040e" Dec 11 15:21:50 crc kubenswrapper[5050]: I1211 15:21:50.844747 5050 generic.go:334] "Generic (PLEG): container finished" podID="c3e50969-a69c-4b66-9788-ca2566127898" containerID="bba526dafd51c2c91e6a8bc3cb18cc4b2a68f042ba7fc81d2819805dac2c26a4" exitCode=0 Dec 11 15:21:50 crc kubenswrapper[5050]: I1211 15:21:50.844803 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-qqfkl" event={"ID":"c3e50969-a69c-4b66-9788-ca2566127898","Type":"ContainerDied","Data":"bba526dafd51c2c91e6a8bc3cb18cc4b2a68f042ba7fc81d2819805dac2c26a4"} Dec 11 15:21:50 crc kubenswrapper[5050]: I1211 15:21:50.847529 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" event={"ID":"7e849b2e-7cd7-4e49-acd2-deab139c699a","Type":"ContainerStarted","Data":"cd874cd802dc062c08f2ff800368e85c46f12b5b02fbe3836d6e9375c3dc66f8"} Dec 11 15:21:51 crc kubenswrapper[5050]: E1211 15:21:51.072003 5050 
cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod04dc7ae1_2485_4ffe_a853_4ef671794e68.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5a630a5b_c349_4e2f_876a_0b82485a8221.slice/crio-a928b2c8a4583771b4e46223adef9e73c3dfe4060ad2f22b56dcc4c23a60dc1e\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd35e2a89_ca99_46a0_86ce_83d7eac9733e.slice/crio-1a581d6d1f84876519a0950858ffc6b372b8edafa03ab517f443e441615dad56\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda80b79e5_138b_4e71_ab5e_aa8805cce0b5.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd35e2a89_ca99_46a0_86ce_83d7eac9733e.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5a630a5b_c349_4e2f_876a_0b82485a8221.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda80b79e5_138b_4e71_ab5e_aa8805cce0b5.slice/crio-e09a0786d1e30296342f9183180cf055bb58bbf45f4ab7cc1d293b2b154af638\": RecentStats: unable to find data in memory cache]" Dec 11 15:21:51 crc kubenswrapper[5050]: I1211 15:21:51.126524 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 11 15:21:51 crc kubenswrapper[5050]: I1211 15:21:51.127191 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 11 15:21:51 crc kubenswrapper[5050]: I1211 15:21:51.253077 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Dec 11 15:21:51 crc kubenswrapper[5050]: I1211 15:21:51.275697 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Dec 11 15:21:51 crc kubenswrapper[5050]: I1211 15:21:51.856483 5050 generic.go:334] "Generic (PLEG): container finished" podID="afe7472c-e7d5-4aef-859d-fc0f5e43e05f" containerID="c06af4b1d0beb3613d151bbf71668b5e3490434ca6e76f526229fa627f332232" exitCode=0 Dec 11 15:21:51 crc kubenswrapper[5050]: I1211 15:21:51.856570 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-qx6t6" event={"ID":"afe7472c-e7d5-4aef-859d-fc0f5e43e05f","Type":"ContainerDied","Data":"c06af4b1d0beb3613d151bbf71668b5e3490434ca6e76f526229fa627f332232"} Dec 11 15:21:52 crc kubenswrapper[5050]: I1211 15:21:52.184041 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-qqfkl" Dec 11 15:21:52 crc kubenswrapper[5050]: I1211 15:21:52.316489 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3e50969-a69c-4b66-9788-ca2566127898-combined-ca-bundle\") pod \"c3e50969-a69c-4b66-9788-ca2566127898\" (UID: \"c3e50969-a69c-4b66-9788-ca2566127898\") " Dec 11 15:21:52 crc kubenswrapper[5050]: I1211 15:21:52.316659 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3e50969-a69c-4b66-9788-ca2566127898-config-data\") pod \"c3e50969-a69c-4b66-9788-ca2566127898\" (UID: \"c3e50969-a69c-4b66-9788-ca2566127898\") " Dec 11 15:21:52 crc kubenswrapper[5050]: I1211 15:21:52.316779 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nxbwc\" (UniqueName: \"kubernetes.io/projected/c3e50969-a69c-4b66-9788-ca2566127898-kube-api-access-nxbwc\") pod \"c3e50969-a69c-4b66-9788-ca2566127898\" (UID: \"c3e50969-a69c-4b66-9788-ca2566127898\") " Dec 11 15:21:52 crc kubenswrapper[5050]: I1211 15:21:52.316808 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c3e50969-a69c-4b66-9788-ca2566127898-scripts\") pod \"c3e50969-a69c-4b66-9788-ca2566127898\" (UID: \"c3e50969-a69c-4b66-9788-ca2566127898\") " Dec 11 15:21:52 crc kubenswrapper[5050]: I1211 15:21:52.322385 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3e50969-a69c-4b66-9788-ca2566127898-scripts" (OuterVolumeSpecName: "scripts") pod "c3e50969-a69c-4b66-9788-ca2566127898" (UID: "c3e50969-a69c-4b66-9788-ca2566127898"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 15:21:52 crc kubenswrapper[5050]: I1211 15:21:52.322707 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3e50969-a69c-4b66-9788-ca2566127898-kube-api-access-nxbwc" (OuterVolumeSpecName: "kube-api-access-nxbwc") pod "c3e50969-a69c-4b66-9788-ca2566127898" (UID: "c3e50969-a69c-4b66-9788-ca2566127898"). InnerVolumeSpecName "kube-api-access-nxbwc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 15:21:52 crc kubenswrapper[5050]: I1211 15:21:52.344874 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3e50969-a69c-4b66-9788-ca2566127898-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c3e50969-a69c-4b66-9788-ca2566127898" (UID: "c3e50969-a69c-4b66-9788-ca2566127898"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 15:21:52 crc kubenswrapper[5050]: I1211 15:21:52.358793 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3e50969-a69c-4b66-9788-ca2566127898-config-data" (OuterVolumeSpecName: "config-data") pod "c3e50969-a69c-4b66-9788-ca2566127898" (UID: "c3e50969-a69c-4b66-9788-ca2566127898"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 15:21:52 crc kubenswrapper[5050]: I1211 15:21:52.419289 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nxbwc\" (UniqueName: \"kubernetes.io/projected/c3e50969-a69c-4b66-9788-ca2566127898-kube-api-access-nxbwc\") on node \"crc\" DevicePath \"\"" Dec 11 15:21:52 crc kubenswrapper[5050]: I1211 15:21:52.419321 5050 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c3e50969-a69c-4b66-9788-ca2566127898-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 15:21:52 crc kubenswrapper[5050]: I1211 15:21:52.419335 5050 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3e50969-a69c-4b66-9788-ca2566127898-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 15:21:52 crc kubenswrapper[5050]: I1211 15:21:52.419342 5050 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3e50969-a69c-4b66-9788-ca2566127898-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 15:21:52 crc kubenswrapper[5050]: I1211 15:21:52.868040 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-qqfkl" Dec 11 15:21:52 crc kubenswrapper[5050]: I1211 15:21:52.869584 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-qqfkl" event={"ID":"c3e50969-a69c-4b66-9788-ca2566127898","Type":"ContainerDied","Data":"0463161c15bb7134dbfaf93ea63c03439708d7092dd4fcadea98530aeb5b35df"} Dec 11 15:21:52 crc kubenswrapper[5050]: I1211 15:21:52.869694 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0463161c15bb7134dbfaf93ea63c03439708d7092dd4fcadea98530aeb5b35df" Dec 11 15:21:52 crc kubenswrapper[5050]: I1211 15:21:52.930601 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Dec 11 15:21:52 crc kubenswrapper[5050]: E1211 15:21:52.930971 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3e50969-a69c-4b66-9788-ca2566127898" containerName="nova-cell1-conductor-db-sync" Dec 11 15:21:52 crc kubenswrapper[5050]: I1211 15:21:52.930989 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3e50969-a69c-4b66-9788-ca2566127898" containerName="nova-cell1-conductor-db-sync" Dec 11 15:21:52 crc kubenswrapper[5050]: I1211 15:21:52.931230 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3e50969-a69c-4b66-9788-ca2566127898" containerName="nova-cell1-conductor-db-sync" Dec 11 15:21:52 crc kubenswrapper[5050]: I1211 15:21:52.931969 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Dec 11 15:21:52 crc kubenswrapper[5050]: I1211 15:21:52.937168 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Dec 11 15:21:52 crc kubenswrapper[5050]: I1211 15:21:52.949745 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Dec 11 15:21:53 crc kubenswrapper[5050]: I1211 15:21:53.030948 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11adb6b7-1e47-45c3-932b-9f3e248d7621-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"11adb6b7-1e47-45c3-932b-9f3e248d7621\") " pod="openstack/nova-cell1-conductor-0" Dec 11 15:21:53 crc kubenswrapper[5050]: I1211 15:21:53.031057 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xbzxc\" (UniqueName: \"kubernetes.io/projected/11adb6b7-1e47-45c3-932b-9f3e248d7621-kube-api-access-xbzxc\") pod \"nova-cell1-conductor-0\" (UID: \"11adb6b7-1e47-45c3-932b-9f3e248d7621\") " pod="openstack/nova-cell1-conductor-0" Dec 11 15:21:53 crc kubenswrapper[5050]: I1211 15:21:53.031162 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11adb6b7-1e47-45c3-932b-9f3e248d7621-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"11adb6b7-1e47-45c3-932b-9f3e248d7621\") " pod="openstack/nova-cell1-conductor-0" Dec 11 15:21:53 crc kubenswrapper[5050]: I1211 15:21:53.133071 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11adb6b7-1e47-45c3-932b-9f3e248d7621-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"11adb6b7-1e47-45c3-932b-9f3e248d7621\") " pod="openstack/nova-cell1-conductor-0" Dec 11 15:21:53 crc kubenswrapper[5050]: I1211 15:21:53.133169 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xbzxc\" (UniqueName: \"kubernetes.io/projected/11adb6b7-1e47-45c3-932b-9f3e248d7621-kube-api-access-xbzxc\") pod \"nova-cell1-conductor-0\" (UID: \"11adb6b7-1e47-45c3-932b-9f3e248d7621\") " pod="openstack/nova-cell1-conductor-0" Dec 11 15:21:53 crc kubenswrapper[5050]: I1211 15:21:53.133209 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11adb6b7-1e47-45c3-932b-9f3e248d7621-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"11adb6b7-1e47-45c3-932b-9f3e248d7621\") " pod="openstack/nova-cell1-conductor-0" Dec 11 15:21:53 crc kubenswrapper[5050]: I1211 15:21:53.138054 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11adb6b7-1e47-45c3-932b-9f3e248d7621-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"11adb6b7-1e47-45c3-932b-9f3e248d7621\") " pod="openstack/nova-cell1-conductor-0" Dec 11 15:21:53 crc kubenswrapper[5050]: I1211 15:21:53.138225 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11adb6b7-1e47-45c3-932b-9f3e248d7621-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"11adb6b7-1e47-45c3-932b-9f3e248d7621\") " pod="openstack/nova-cell1-conductor-0" Dec 11 15:21:53 crc kubenswrapper[5050]: I1211 15:21:53.151952 5050 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xbzxc\" (UniqueName: \"kubernetes.io/projected/11adb6b7-1e47-45c3-932b-9f3e248d7621-kube-api-access-xbzxc\") pod \"nova-cell1-conductor-0\" (UID: \"11adb6b7-1e47-45c3-932b-9f3e248d7621\") " pod="openstack/nova-cell1-conductor-0" Dec 11 15:21:53 crc kubenswrapper[5050]: I1211 15:21:53.231816 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-qx6t6" Dec 11 15:21:53 crc kubenswrapper[5050]: I1211 15:21:53.254176 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Dec 11 15:21:53 crc kubenswrapper[5050]: I1211 15:21:53.336214 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/afe7472c-e7d5-4aef-859d-fc0f5e43e05f-config-data\") pod \"afe7472c-e7d5-4aef-859d-fc0f5e43e05f\" (UID: \"afe7472c-e7d5-4aef-859d-fc0f5e43e05f\") " Dec 11 15:21:53 crc kubenswrapper[5050]: I1211 15:21:53.336260 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/afe7472c-e7d5-4aef-859d-fc0f5e43e05f-scripts\") pod \"afe7472c-e7d5-4aef-859d-fc0f5e43e05f\" (UID: \"afe7472c-e7d5-4aef-859d-fc0f5e43e05f\") " Dec 11 15:21:53 crc kubenswrapper[5050]: I1211 15:21:53.336376 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v6stg\" (UniqueName: \"kubernetes.io/projected/afe7472c-e7d5-4aef-859d-fc0f5e43e05f-kube-api-access-v6stg\") pod \"afe7472c-e7d5-4aef-859d-fc0f5e43e05f\" (UID: \"afe7472c-e7d5-4aef-859d-fc0f5e43e05f\") " Dec 11 15:21:53 crc kubenswrapper[5050]: I1211 15:21:53.336414 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afe7472c-e7d5-4aef-859d-fc0f5e43e05f-combined-ca-bundle\") pod \"afe7472c-e7d5-4aef-859d-fc0f5e43e05f\" (UID: \"afe7472c-e7d5-4aef-859d-fc0f5e43e05f\") " Dec 11 15:21:53 crc kubenswrapper[5050]: I1211 15:21:53.340179 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/afe7472c-e7d5-4aef-859d-fc0f5e43e05f-kube-api-access-v6stg" (OuterVolumeSpecName: "kube-api-access-v6stg") pod "afe7472c-e7d5-4aef-859d-fc0f5e43e05f" (UID: "afe7472c-e7d5-4aef-859d-fc0f5e43e05f"). InnerVolumeSpecName "kube-api-access-v6stg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 15:21:53 crc kubenswrapper[5050]: I1211 15:21:53.340611 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/afe7472c-e7d5-4aef-859d-fc0f5e43e05f-scripts" (OuterVolumeSpecName: "scripts") pod "afe7472c-e7d5-4aef-859d-fc0f5e43e05f" (UID: "afe7472c-e7d5-4aef-859d-fc0f5e43e05f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 15:21:53 crc kubenswrapper[5050]: I1211 15:21:53.361392 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/afe7472c-e7d5-4aef-859d-fc0f5e43e05f-config-data" (OuterVolumeSpecName: "config-data") pod "afe7472c-e7d5-4aef-859d-fc0f5e43e05f" (UID: "afe7472c-e7d5-4aef-859d-fc0f5e43e05f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 15:21:53 crc kubenswrapper[5050]: I1211 15:21:53.361977 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/afe7472c-e7d5-4aef-859d-fc0f5e43e05f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "afe7472c-e7d5-4aef-859d-fc0f5e43e05f" (UID: "afe7472c-e7d5-4aef-859d-fc0f5e43e05f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 15:21:53 crc kubenswrapper[5050]: I1211 15:21:53.444206 5050 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/afe7472c-e7d5-4aef-859d-fc0f5e43e05f-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 15:21:53 crc kubenswrapper[5050]: I1211 15:21:53.444239 5050 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/afe7472c-e7d5-4aef-859d-fc0f5e43e05f-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 15:21:53 crc kubenswrapper[5050]: I1211 15:21:53.444248 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v6stg\" (UniqueName: \"kubernetes.io/projected/afe7472c-e7d5-4aef-859d-fc0f5e43e05f-kube-api-access-v6stg\") on node \"crc\" DevicePath \"\"" Dec 11 15:21:53 crc kubenswrapper[5050]: I1211 15:21:53.444259 5050 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afe7472c-e7d5-4aef-859d-fc0f5e43e05f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 15:21:53 crc kubenswrapper[5050]: I1211 15:21:53.766682 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Dec 11 15:21:53 crc kubenswrapper[5050]: W1211 15:21:53.768686 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod11adb6b7_1e47_45c3_932b_9f3e248d7621.slice/crio-e827c0ad139bbfa4c2cecdc5ef81e79f003a975f08ddbd66ac3a8db2f53a7bdc WatchSource:0}: Error finding container e827c0ad139bbfa4c2cecdc5ef81e79f003a975f08ddbd66ac3a8db2f53a7bdc: Status 404 returned error can't find the container with id e827c0ad139bbfa4c2cecdc5ef81e79f003a975f08ddbd66ac3a8db2f53a7bdc Dec 11 15:21:53 crc kubenswrapper[5050]: I1211 15:21:53.877517 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"11adb6b7-1e47-45c3-932b-9f3e248d7621","Type":"ContainerStarted","Data":"e827c0ad139bbfa4c2cecdc5ef81e79f003a975f08ddbd66ac3a8db2f53a7bdc"} Dec 11 15:21:53 crc kubenswrapper[5050]: I1211 15:21:53.879306 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-qx6t6" event={"ID":"afe7472c-e7d5-4aef-859d-fc0f5e43e05f","Type":"ContainerDied","Data":"e45558aa1d9f1ca00aedb3b72d5699b94dda5c35367914a3b2d709c03b77bbf5"} Dec 11 15:21:53 crc kubenswrapper[5050]: I1211 15:21:53.879339 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e45558aa1d9f1ca00aedb3b72d5699b94dda5c35367914a3b2d709c03b77bbf5" Dec 11 15:21:53 crc kubenswrapper[5050]: I1211 15:21:53.879368 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-qx6t6" Dec 11 15:21:54 crc kubenswrapper[5050]: I1211 15:21:54.050600 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 11 15:21:54 crc kubenswrapper[5050]: I1211 15:21:54.050868 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="4a0a6737-e9de-4e43-a75f-796e189251e1" containerName="nova-api-log" containerID="cri-o://861fdfa63aa8548e7917ebff813c65ade3b956a8b37928a8d8347db0ff629e59" gracePeriod=30 Dec 11 15:21:54 crc kubenswrapper[5050]: I1211 15:21:54.051001 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="4a0a6737-e9de-4e43-a75f-796e189251e1" containerName="nova-api-api" containerID="cri-o://3e56fca1c857c28dee4ece2352525ee6572fed5025ab9a5bfc1f84336cc8bdc3" gracePeriod=30 Dec 11 15:21:54 crc kubenswrapper[5050]: I1211 15:21:54.059709 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Dec 11 15:21:54 crc kubenswrapper[5050]: I1211 15:21:54.059888 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="1bcf288f-ca1a-46f5-b3f3-5136f97465cf" containerName="nova-scheduler-scheduler" containerID="cri-o://30dd4750ef1da079dc2a1d47f7c9a776395d1c483687cb1280344a864334315f" gracePeriod=30 Dec 11 15:21:54 crc kubenswrapper[5050]: I1211 15:21:54.106897 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 11 15:21:54 crc kubenswrapper[5050]: I1211 15:21:54.107138 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="b8b7a9dc-0609-4549-8abf-caf981152d2a" containerName="nova-metadata-log" containerID="cri-o://06dd38f356c42aa630810481160a1dc00bea4efe7b43c38e79fa6d1439a75a60" gracePeriod=30 Dec 11 15:21:54 crc kubenswrapper[5050]: I1211 15:21:54.107662 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="b8b7a9dc-0609-4549-8abf-caf981152d2a" containerName="nova-metadata-metadata" containerID="cri-o://e81516e776bc20477a74564b8638763dde8907d2b332218cf8b387ae5ddfbd68" gracePeriod=30 Dec 11 15:21:54 crc kubenswrapper[5050]: I1211 15:21:54.660897 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 11 15:21:54 crc kubenswrapper[5050]: I1211 15:21:54.677126 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 11 15:21:54 crc kubenswrapper[5050]: I1211 15:21:54.773546 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2hkjw\" (UniqueName: \"kubernetes.io/projected/4a0a6737-e9de-4e43-a75f-796e189251e1-kube-api-access-2hkjw\") pod \"4a0a6737-e9de-4e43-a75f-796e189251e1\" (UID: \"4a0a6737-e9de-4e43-a75f-796e189251e1\") " Dec 11 15:21:54 crc kubenswrapper[5050]: I1211 15:21:54.773646 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b8b7a9dc-0609-4549-8abf-caf981152d2a-logs\") pod \"b8b7a9dc-0609-4549-8abf-caf981152d2a\" (UID: \"b8b7a9dc-0609-4549-8abf-caf981152d2a\") " Dec 11 15:21:54 crc kubenswrapper[5050]: I1211 15:21:54.773673 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8b7a9dc-0609-4549-8abf-caf981152d2a-config-data\") pod \"b8b7a9dc-0609-4549-8abf-caf981152d2a\" (UID: \"b8b7a9dc-0609-4549-8abf-caf981152d2a\") " Dec 11 15:21:54 crc kubenswrapper[5050]: I1211 15:21:54.773708 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-87nh4\" (UniqueName: \"kubernetes.io/projected/b8b7a9dc-0609-4549-8abf-caf981152d2a-kube-api-access-87nh4\") pod \"b8b7a9dc-0609-4549-8abf-caf981152d2a\" (UID: \"b8b7a9dc-0609-4549-8abf-caf981152d2a\") " Dec 11 15:21:54 crc kubenswrapper[5050]: I1211 15:21:54.773784 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4a0a6737-e9de-4e43-a75f-796e189251e1-logs\") pod \"4a0a6737-e9de-4e43-a75f-796e189251e1\" (UID: \"4a0a6737-e9de-4e43-a75f-796e189251e1\") " Dec 11 15:21:54 crc kubenswrapper[5050]: I1211 15:21:54.773828 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8b7a9dc-0609-4549-8abf-caf981152d2a-combined-ca-bundle\") pod \"b8b7a9dc-0609-4549-8abf-caf981152d2a\" (UID: \"b8b7a9dc-0609-4549-8abf-caf981152d2a\") " Dec 11 15:21:54 crc kubenswrapper[5050]: I1211 15:21:54.773907 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a0a6737-e9de-4e43-a75f-796e189251e1-combined-ca-bundle\") pod \"4a0a6737-e9de-4e43-a75f-796e189251e1\" (UID: \"4a0a6737-e9de-4e43-a75f-796e189251e1\") " Dec 11 15:21:54 crc kubenswrapper[5050]: I1211 15:21:54.773959 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a0a6737-e9de-4e43-a75f-796e189251e1-config-data\") pod \"4a0a6737-e9de-4e43-a75f-796e189251e1\" (UID: \"4a0a6737-e9de-4e43-a75f-796e189251e1\") " Dec 11 15:21:54 crc kubenswrapper[5050]: I1211 15:21:54.775000 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b8b7a9dc-0609-4549-8abf-caf981152d2a-logs" (OuterVolumeSpecName: "logs") pod "b8b7a9dc-0609-4549-8abf-caf981152d2a" (UID: "b8b7a9dc-0609-4549-8abf-caf981152d2a"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 15:21:54 crc kubenswrapper[5050]: I1211 15:21:54.775352 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4a0a6737-e9de-4e43-a75f-796e189251e1-logs" (OuterVolumeSpecName: "logs") pod "4a0a6737-e9de-4e43-a75f-796e189251e1" (UID: "4a0a6737-e9de-4e43-a75f-796e189251e1"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 15:21:54 crc kubenswrapper[5050]: I1211 15:21:54.779749 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a0a6737-e9de-4e43-a75f-796e189251e1-kube-api-access-2hkjw" (OuterVolumeSpecName: "kube-api-access-2hkjw") pod "4a0a6737-e9de-4e43-a75f-796e189251e1" (UID: "4a0a6737-e9de-4e43-a75f-796e189251e1"). InnerVolumeSpecName "kube-api-access-2hkjw". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 15:21:54 crc kubenswrapper[5050]: I1211 15:21:54.780054 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8b7a9dc-0609-4549-8abf-caf981152d2a-kube-api-access-87nh4" (OuterVolumeSpecName: "kube-api-access-87nh4") pod "b8b7a9dc-0609-4549-8abf-caf981152d2a" (UID: "b8b7a9dc-0609-4549-8abf-caf981152d2a"). InnerVolumeSpecName "kube-api-access-87nh4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 15:21:54 crc kubenswrapper[5050]: I1211 15:21:54.802716 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8b7a9dc-0609-4549-8abf-caf981152d2a-config-data" (OuterVolumeSpecName: "config-data") pod "b8b7a9dc-0609-4549-8abf-caf981152d2a" (UID: "b8b7a9dc-0609-4549-8abf-caf981152d2a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 15:21:54 crc kubenswrapper[5050]: I1211 15:21:54.803088 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a0a6737-e9de-4e43-a75f-796e189251e1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4a0a6737-e9de-4e43-a75f-796e189251e1" (UID: "4a0a6737-e9de-4e43-a75f-796e189251e1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 15:21:54 crc kubenswrapper[5050]: I1211 15:21:54.804103 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a0a6737-e9de-4e43-a75f-796e189251e1-config-data" (OuterVolumeSpecName: "config-data") pod "4a0a6737-e9de-4e43-a75f-796e189251e1" (UID: "4a0a6737-e9de-4e43-a75f-796e189251e1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 15:21:54 crc kubenswrapper[5050]: I1211 15:21:54.810978 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8b7a9dc-0609-4549-8abf-caf981152d2a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b8b7a9dc-0609-4549-8abf-caf981152d2a" (UID: "b8b7a9dc-0609-4549-8abf-caf981152d2a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 15:21:54 crc kubenswrapper[5050]: I1211 15:21:54.881552 5050 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b8b7a9dc-0609-4549-8abf-caf981152d2a-logs\") on node \"crc\" DevicePath \"\"" Dec 11 15:21:54 crc kubenswrapper[5050]: I1211 15:21:54.881597 5050 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8b7a9dc-0609-4549-8abf-caf981152d2a-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 15:21:54 crc kubenswrapper[5050]: I1211 15:21:54.881612 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-87nh4\" (UniqueName: \"kubernetes.io/projected/b8b7a9dc-0609-4549-8abf-caf981152d2a-kube-api-access-87nh4\") on node \"crc\" DevicePath \"\"" Dec 11 15:21:54 crc kubenswrapper[5050]: I1211 15:21:54.881628 5050 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4a0a6737-e9de-4e43-a75f-796e189251e1-logs\") on node \"crc\" DevicePath \"\"" Dec 11 15:21:54 crc kubenswrapper[5050]: I1211 15:21:54.881639 5050 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8b7a9dc-0609-4549-8abf-caf981152d2a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 15:21:54 crc kubenswrapper[5050]: I1211 15:21:54.881651 5050 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a0a6737-e9de-4e43-a75f-796e189251e1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 15:21:54 crc kubenswrapper[5050]: I1211 15:21:54.881665 5050 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a0a6737-e9de-4e43-a75f-796e189251e1-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 15:21:54 crc kubenswrapper[5050]: I1211 15:21:54.881678 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2hkjw\" (UniqueName: \"kubernetes.io/projected/4a0a6737-e9de-4e43-a75f-796e189251e1-kube-api-access-2hkjw\") on node \"crc\" DevicePath \"\"" Dec 11 15:21:54 crc kubenswrapper[5050]: I1211 15:21:54.889868 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"11adb6b7-1e47-45c3-932b-9f3e248d7621","Type":"ContainerStarted","Data":"47f1a1038f9fbef7ae0b1178f8e3117cb60df69aa73d0374eaa42c0f61a1c05e"} Dec 11 15:21:54 crc kubenswrapper[5050]: I1211 15:21:54.891499 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Dec 11 15:21:54 crc kubenswrapper[5050]: I1211 15:21:54.893415 5050 generic.go:334] "Generic (PLEG): container finished" podID="b8b7a9dc-0609-4549-8abf-caf981152d2a" containerID="e81516e776bc20477a74564b8638763dde8907d2b332218cf8b387ae5ddfbd68" exitCode=0 Dec 11 15:21:54 crc kubenswrapper[5050]: I1211 15:21:54.893477 5050 generic.go:334] "Generic (PLEG): container finished" podID="b8b7a9dc-0609-4549-8abf-caf981152d2a" containerID="06dd38f356c42aa630810481160a1dc00bea4efe7b43c38e79fa6d1439a75a60" exitCode=143 Dec 11 15:21:54 crc kubenswrapper[5050]: I1211 15:21:54.893528 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b8b7a9dc-0609-4549-8abf-caf981152d2a","Type":"ContainerDied","Data":"e81516e776bc20477a74564b8638763dde8907d2b332218cf8b387ae5ddfbd68"} Dec 11 15:21:54 crc kubenswrapper[5050]: I1211 15:21:54.893585 5050 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b8b7a9dc-0609-4549-8abf-caf981152d2a","Type":"ContainerDied","Data":"06dd38f356c42aa630810481160a1dc00bea4efe7b43c38e79fa6d1439a75a60"} Dec 11 15:21:54 crc kubenswrapper[5050]: I1211 15:21:54.893599 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b8b7a9dc-0609-4549-8abf-caf981152d2a","Type":"ContainerDied","Data":"9054f70b2c18d629b4d109fcdd8d5df9d2086ee8d54da868edbe508eeb137a90"} Dec 11 15:21:54 crc kubenswrapper[5050]: I1211 15:21:54.893618 5050 scope.go:117] "RemoveContainer" containerID="e81516e776bc20477a74564b8638763dde8907d2b332218cf8b387ae5ddfbd68" Dec 11 15:21:54 crc kubenswrapper[5050]: I1211 15:21:54.893766 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 11 15:21:54 crc kubenswrapper[5050]: I1211 15:21:54.896159 5050 generic.go:334] "Generic (PLEG): container finished" podID="4a0a6737-e9de-4e43-a75f-796e189251e1" containerID="3e56fca1c857c28dee4ece2352525ee6572fed5025ab9a5bfc1f84336cc8bdc3" exitCode=0 Dec 11 15:21:54 crc kubenswrapper[5050]: I1211 15:21:54.896173 5050 generic.go:334] "Generic (PLEG): container finished" podID="4a0a6737-e9de-4e43-a75f-796e189251e1" containerID="861fdfa63aa8548e7917ebff813c65ade3b956a8b37928a8d8347db0ff629e59" exitCode=143 Dec 11 15:21:54 crc kubenswrapper[5050]: I1211 15:21:54.896187 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4a0a6737-e9de-4e43-a75f-796e189251e1","Type":"ContainerDied","Data":"3e56fca1c857c28dee4ece2352525ee6572fed5025ab9a5bfc1f84336cc8bdc3"} Dec 11 15:21:54 crc kubenswrapper[5050]: I1211 15:21:54.896201 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4a0a6737-e9de-4e43-a75f-796e189251e1","Type":"ContainerDied","Data":"861fdfa63aa8548e7917ebff813c65ade3b956a8b37928a8d8347db0ff629e59"} Dec 11 15:21:54 crc kubenswrapper[5050]: I1211 15:21:54.896209 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4a0a6737-e9de-4e43-a75f-796e189251e1","Type":"ContainerDied","Data":"c2cbeb493e7ef7978ef1596116e47ce73f1b9f613439d6d8b2bda59f55bf83f6"} Dec 11 15:21:54 crc kubenswrapper[5050]: I1211 15:21:54.896254 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 11 15:21:54 crc kubenswrapper[5050]: I1211 15:21:54.918137 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.918121941 podStartE2EDuration="2.918121941s" podCreationTimestamp="2025-12-11 15:21:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 15:21:54.909204943 +0000 UTC m=+5605.752927539" watchObservedRunningTime="2025-12-11 15:21:54.918121941 +0000 UTC m=+5605.761844527" Dec 11 15:21:54 crc kubenswrapper[5050]: I1211 15:21:54.939273 5050 scope.go:117] "RemoveContainer" containerID="06dd38f356c42aa630810481160a1dc00bea4efe7b43c38e79fa6d1439a75a60" Dec 11 15:21:54 crc kubenswrapper[5050]: I1211 15:21:54.961326 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 11 15:21:54 crc kubenswrapper[5050]: I1211 15:21:54.968060 5050 scope.go:117] "RemoveContainer" containerID="e81516e776bc20477a74564b8638763dde8907d2b332218cf8b387ae5ddfbd68" Dec 11 15:21:54 crc kubenswrapper[5050]: E1211 15:21:54.968529 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e81516e776bc20477a74564b8638763dde8907d2b332218cf8b387ae5ddfbd68\": container with ID starting with e81516e776bc20477a74564b8638763dde8907d2b332218cf8b387ae5ddfbd68 not found: ID does not exist" containerID="e81516e776bc20477a74564b8638763dde8907d2b332218cf8b387ae5ddfbd68" Dec 11 15:21:54 crc kubenswrapper[5050]: I1211 15:21:54.968562 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e81516e776bc20477a74564b8638763dde8907d2b332218cf8b387ae5ddfbd68"} err="failed to get container status \"e81516e776bc20477a74564b8638763dde8907d2b332218cf8b387ae5ddfbd68\": rpc error: code = NotFound desc = could not find container \"e81516e776bc20477a74564b8638763dde8907d2b332218cf8b387ae5ddfbd68\": container with ID starting with e81516e776bc20477a74564b8638763dde8907d2b332218cf8b387ae5ddfbd68 not found: ID does not exist" Dec 11 15:21:54 crc kubenswrapper[5050]: I1211 15:21:54.969124 5050 scope.go:117] "RemoveContainer" containerID="06dd38f356c42aa630810481160a1dc00bea4efe7b43c38e79fa6d1439a75a60" Dec 11 15:21:54 crc kubenswrapper[5050]: E1211 15:21:54.973203 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"06dd38f356c42aa630810481160a1dc00bea4efe7b43c38e79fa6d1439a75a60\": container with ID starting with 06dd38f356c42aa630810481160a1dc00bea4efe7b43c38e79fa6d1439a75a60 not found: ID does not exist" containerID="06dd38f356c42aa630810481160a1dc00bea4efe7b43c38e79fa6d1439a75a60" Dec 11 15:21:54 crc kubenswrapper[5050]: I1211 15:21:54.973258 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06dd38f356c42aa630810481160a1dc00bea4efe7b43c38e79fa6d1439a75a60"} err="failed to get container status \"06dd38f356c42aa630810481160a1dc00bea4efe7b43c38e79fa6d1439a75a60\": rpc error: code = NotFound desc = could not find container \"06dd38f356c42aa630810481160a1dc00bea4efe7b43c38e79fa6d1439a75a60\": container with ID starting with 06dd38f356c42aa630810481160a1dc00bea4efe7b43c38e79fa6d1439a75a60 not found: ID does not exist" Dec 11 15:21:54 crc kubenswrapper[5050]: I1211 15:21:54.973292 5050 scope.go:117] "RemoveContainer" 
containerID="e81516e776bc20477a74564b8638763dde8907d2b332218cf8b387ae5ddfbd68" Dec 11 15:21:54 crc kubenswrapper[5050]: I1211 15:21:54.975267 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e81516e776bc20477a74564b8638763dde8907d2b332218cf8b387ae5ddfbd68"} err="failed to get container status \"e81516e776bc20477a74564b8638763dde8907d2b332218cf8b387ae5ddfbd68\": rpc error: code = NotFound desc = could not find container \"e81516e776bc20477a74564b8638763dde8907d2b332218cf8b387ae5ddfbd68\": container with ID starting with e81516e776bc20477a74564b8638763dde8907d2b332218cf8b387ae5ddfbd68 not found: ID does not exist" Dec 11 15:21:54 crc kubenswrapper[5050]: I1211 15:21:54.975307 5050 scope.go:117] "RemoveContainer" containerID="06dd38f356c42aa630810481160a1dc00bea4efe7b43c38e79fa6d1439a75a60" Dec 11 15:21:54 crc kubenswrapper[5050]: I1211 15:21:54.976208 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06dd38f356c42aa630810481160a1dc00bea4efe7b43c38e79fa6d1439a75a60"} err="failed to get container status \"06dd38f356c42aa630810481160a1dc00bea4efe7b43c38e79fa6d1439a75a60\": rpc error: code = NotFound desc = could not find container \"06dd38f356c42aa630810481160a1dc00bea4efe7b43c38e79fa6d1439a75a60\": container with ID starting with 06dd38f356c42aa630810481160a1dc00bea4efe7b43c38e79fa6d1439a75a60 not found: ID does not exist" Dec 11 15:21:54 crc kubenswrapper[5050]: I1211 15:21:54.976238 5050 scope.go:117] "RemoveContainer" containerID="3e56fca1c857c28dee4ece2352525ee6572fed5025ab9a5bfc1f84336cc8bdc3" Dec 11 15:21:54 crc kubenswrapper[5050]: I1211 15:21:54.981522 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Dec 11 15:21:55 crc kubenswrapper[5050]: I1211 15:21:55.009301 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 11 15:21:55 crc kubenswrapper[5050]: I1211 15:21:55.017666 5050 scope.go:117] "RemoveContainer" containerID="861fdfa63aa8548e7917ebff813c65ade3b956a8b37928a8d8347db0ff629e59" Dec 11 15:21:55 crc kubenswrapper[5050]: I1211 15:21:55.018766 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Dec 11 15:21:55 crc kubenswrapper[5050]: I1211 15:21:55.031668 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Dec 11 15:21:55 crc kubenswrapper[5050]: E1211 15:21:55.033163 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afe7472c-e7d5-4aef-859d-fc0f5e43e05f" containerName="nova-manage" Dec 11 15:21:55 crc kubenswrapper[5050]: I1211 15:21:55.033193 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="afe7472c-e7d5-4aef-859d-fc0f5e43e05f" containerName="nova-manage" Dec 11 15:21:55 crc kubenswrapper[5050]: E1211 15:21:55.033210 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8b7a9dc-0609-4549-8abf-caf981152d2a" containerName="nova-metadata-metadata" Dec 11 15:21:55 crc kubenswrapper[5050]: I1211 15:21:55.033218 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8b7a9dc-0609-4549-8abf-caf981152d2a" containerName="nova-metadata-metadata" Dec 11 15:21:55 crc kubenswrapper[5050]: E1211 15:21:55.033238 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a0a6737-e9de-4e43-a75f-796e189251e1" containerName="nova-api-log" Dec 11 15:21:55 crc kubenswrapper[5050]: I1211 15:21:55.033245 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a0a6737-e9de-4e43-a75f-796e189251e1" 
containerName="nova-api-log" Dec 11 15:21:55 crc kubenswrapper[5050]: E1211 15:21:55.033263 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a0a6737-e9de-4e43-a75f-796e189251e1" containerName="nova-api-api" Dec 11 15:21:55 crc kubenswrapper[5050]: I1211 15:21:55.033269 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a0a6737-e9de-4e43-a75f-796e189251e1" containerName="nova-api-api" Dec 11 15:21:55 crc kubenswrapper[5050]: E1211 15:21:55.033284 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8b7a9dc-0609-4549-8abf-caf981152d2a" containerName="nova-metadata-log" Dec 11 15:21:55 crc kubenswrapper[5050]: I1211 15:21:55.033291 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8b7a9dc-0609-4549-8abf-caf981152d2a" containerName="nova-metadata-log" Dec 11 15:21:55 crc kubenswrapper[5050]: I1211 15:21:55.033502 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a0a6737-e9de-4e43-a75f-796e189251e1" containerName="nova-api-api" Dec 11 15:21:55 crc kubenswrapper[5050]: I1211 15:21:55.033521 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a0a6737-e9de-4e43-a75f-796e189251e1" containerName="nova-api-log" Dec 11 15:21:55 crc kubenswrapper[5050]: I1211 15:21:55.033532 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="afe7472c-e7d5-4aef-859d-fc0f5e43e05f" containerName="nova-manage" Dec 11 15:21:55 crc kubenswrapper[5050]: I1211 15:21:55.033546 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8b7a9dc-0609-4549-8abf-caf981152d2a" containerName="nova-metadata-log" Dec 11 15:21:55 crc kubenswrapper[5050]: I1211 15:21:55.033558 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8b7a9dc-0609-4549-8abf-caf981152d2a" containerName="nova-metadata-metadata" Dec 11 15:21:55 crc kubenswrapper[5050]: I1211 15:21:55.036699 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 11 15:21:55 crc kubenswrapper[5050]: I1211 15:21:55.042312 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 11 15:21:55 crc kubenswrapper[5050]: I1211 15:21:55.043295 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Dec 11 15:21:55 crc kubenswrapper[5050]: I1211 15:21:55.048162 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Dec 11 15:21:55 crc kubenswrapper[5050]: I1211 15:21:55.054939 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 11 15:21:55 crc kubenswrapper[5050]: I1211 15:21:55.057449 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 11 15:21:55 crc kubenswrapper[5050]: I1211 15:21:55.057820 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Dec 11 15:21:55 crc kubenswrapper[5050]: I1211 15:21:55.081596 5050 scope.go:117] "RemoveContainer" containerID="3e56fca1c857c28dee4ece2352525ee6572fed5025ab9a5bfc1f84336cc8bdc3" Dec 11 15:21:55 crc kubenswrapper[5050]: E1211 15:21:55.086345 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3e56fca1c857c28dee4ece2352525ee6572fed5025ab9a5bfc1f84336cc8bdc3\": container with ID starting with 3e56fca1c857c28dee4ece2352525ee6572fed5025ab9a5bfc1f84336cc8bdc3 not found: ID does not exist" containerID="3e56fca1c857c28dee4ece2352525ee6572fed5025ab9a5bfc1f84336cc8bdc3" Dec 11 15:21:55 crc kubenswrapper[5050]: I1211 15:21:55.086395 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3e56fca1c857c28dee4ece2352525ee6572fed5025ab9a5bfc1f84336cc8bdc3"} err="failed to get container status \"3e56fca1c857c28dee4ece2352525ee6572fed5025ab9a5bfc1f84336cc8bdc3\": rpc error: code = NotFound desc = could not find container \"3e56fca1c857c28dee4ece2352525ee6572fed5025ab9a5bfc1f84336cc8bdc3\": container with ID starting with 3e56fca1c857c28dee4ece2352525ee6572fed5025ab9a5bfc1f84336cc8bdc3 not found: ID does not exist" Dec 11 15:21:55 crc kubenswrapper[5050]: I1211 15:21:55.086426 5050 scope.go:117] "RemoveContainer" containerID="861fdfa63aa8548e7917ebff813c65ade3b956a8b37928a8d8347db0ff629e59" Dec 11 15:21:55 crc kubenswrapper[5050]: E1211 15:21:55.086828 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"861fdfa63aa8548e7917ebff813c65ade3b956a8b37928a8d8347db0ff629e59\": container with ID starting with 861fdfa63aa8548e7917ebff813c65ade3b956a8b37928a8d8347db0ff629e59 not found: ID does not exist" containerID="861fdfa63aa8548e7917ebff813c65ade3b956a8b37928a8d8347db0ff629e59" Dec 11 15:21:55 crc kubenswrapper[5050]: I1211 15:21:55.086855 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"861fdfa63aa8548e7917ebff813c65ade3b956a8b37928a8d8347db0ff629e59"} err="failed to get container status \"861fdfa63aa8548e7917ebff813c65ade3b956a8b37928a8d8347db0ff629e59\": rpc error: code = NotFound desc = could not find container \"861fdfa63aa8548e7917ebff813c65ade3b956a8b37928a8d8347db0ff629e59\": container with ID starting with 861fdfa63aa8548e7917ebff813c65ade3b956a8b37928a8d8347db0ff629e59 not found: ID does not exist" Dec 11 15:21:55 crc kubenswrapper[5050]: I1211 15:21:55.086873 5050 scope.go:117] "RemoveContainer" containerID="3e56fca1c857c28dee4ece2352525ee6572fed5025ab9a5bfc1f84336cc8bdc3" Dec 11 15:21:55 crc kubenswrapper[5050]: I1211 15:21:55.087171 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3e56fca1c857c28dee4ece2352525ee6572fed5025ab9a5bfc1f84336cc8bdc3"} err="failed to get container status \"3e56fca1c857c28dee4ece2352525ee6572fed5025ab9a5bfc1f84336cc8bdc3\": rpc error: code = NotFound desc = could not find container \"3e56fca1c857c28dee4ece2352525ee6572fed5025ab9a5bfc1f84336cc8bdc3\": container with ID starting with 
3e56fca1c857c28dee4ece2352525ee6572fed5025ab9a5bfc1f84336cc8bdc3 not found: ID does not exist" Dec 11 15:21:55 crc kubenswrapper[5050]: I1211 15:21:55.087202 5050 scope.go:117] "RemoveContainer" containerID="861fdfa63aa8548e7917ebff813c65ade3b956a8b37928a8d8347db0ff629e59" Dec 11 15:21:55 crc kubenswrapper[5050]: I1211 15:21:55.091276 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"861fdfa63aa8548e7917ebff813c65ade3b956a8b37928a8d8347db0ff629e59"} err="failed to get container status \"861fdfa63aa8548e7917ebff813c65ade3b956a8b37928a8d8347db0ff629e59\": rpc error: code = NotFound desc = could not find container \"861fdfa63aa8548e7917ebff813c65ade3b956a8b37928a8d8347db0ff629e59\": container with ID starting with 861fdfa63aa8548e7917ebff813c65ade3b956a8b37928a8d8347db0ff629e59 not found: ID does not exist" Dec 11 15:21:55 crc kubenswrapper[5050]: I1211 15:21:55.186195 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6aae3cc-13af-4bce-ac0b-d00638cf96e5-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"c6aae3cc-13af-4bce-ac0b-d00638cf96e5\") " pod="openstack/nova-metadata-0" Dec 11 15:21:55 crc kubenswrapper[5050]: I1211 15:21:55.186247 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rlfbf\" (UniqueName: \"kubernetes.io/projected/be35db30-16f9-4271-888b-47d7c51d1d1f-kube-api-access-rlfbf\") pod \"nova-api-0\" (UID: \"be35db30-16f9-4271-888b-47d7c51d1d1f\") " pod="openstack/nova-api-0" Dec 11 15:21:55 crc kubenswrapper[5050]: I1211 15:21:55.186333 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be35db30-16f9-4271-888b-47d7c51d1d1f-config-data\") pod \"nova-api-0\" (UID: \"be35db30-16f9-4271-888b-47d7c51d1d1f\") " pod="openstack/nova-api-0" Dec 11 15:21:55 crc kubenswrapper[5050]: I1211 15:21:55.186356 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c6aae3cc-13af-4bce-ac0b-d00638cf96e5-logs\") pod \"nova-metadata-0\" (UID: \"c6aae3cc-13af-4bce-ac0b-d00638cf96e5\") " pod="openstack/nova-metadata-0" Dec 11 15:21:55 crc kubenswrapper[5050]: I1211 15:21:55.186373 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/be35db30-16f9-4271-888b-47d7c51d1d1f-logs\") pod \"nova-api-0\" (UID: \"be35db30-16f9-4271-888b-47d7c51d1d1f\") " pod="openstack/nova-api-0" Dec 11 15:21:55 crc kubenswrapper[5050]: I1211 15:21:55.186400 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be35db30-16f9-4271-888b-47d7c51d1d1f-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"be35db30-16f9-4271-888b-47d7c51d1d1f\") " pod="openstack/nova-api-0" Dec 11 15:21:55 crc kubenswrapper[5050]: I1211 15:21:55.186423 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6aae3cc-13af-4bce-ac0b-d00638cf96e5-config-data\") pod \"nova-metadata-0\" (UID: \"c6aae3cc-13af-4bce-ac0b-d00638cf96e5\") " pod="openstack/nova-metadata-0" Dec 11 15:21:55 crc kubenswrapper[5050]: I1211 15:21:55.186446 5050 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8sf9l\" (UniqueName: \"kubernetes.io/projected/c6aae3cc-13af-4bce-ac0b-d00638cf96e5-kube-api-access-8sf9l\") pod \"nova-metadata-0\" (UID: \"c6aae3cc-13af-4bce-ac0b-d00638cf96e5\") " pod="openstack/nova-metadata-0" Dec 11 15:21:55 crc kubenswrapper[5050]: I1211 15:21:55.288116 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be35db30-16f9-4271-888b-47d7c51d1d1f-config-data\") pod \"nova-api-0\" (UID: \"be35db30-16f9-4271-888b-47d7c51d1d1f\") " pod="openstack/nova-api-0" Dec 11 15:21:55 crc kubenswrapper[5050]: I1211 15:21:55.288161 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c6aae3cc-13af-4bce-ac0b-d00638cf96e5-logs\") pod \"nova-metadata-0\" (UID: \"c6aae3cc-13af-4bce-ac0b-d00638cf96e5\") " pod="openstack/nova-metadata-0" Dec 11 15:21:55 crc kubenswrapper[5050]: I1211 15:21:55.288182 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/be35db30-16f9-4271-888b-47d7c51d1d1f-logs\") pod \"nova-api-0\" (UID: \"be35db30-16f9-4271-888b-47d7c51d1d1f\") " pod="openstack/nova-api-0" Dec 11 15:21:55 crc kubenswrapper[5050]: I1211 15:21:55.288208 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be35db30-16f9-4271-888b-47d7c51d1d1f-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"be35db30-16f9-4271-888b-47d7c51d1d1f\") " pod="openstack/nova-api-0" Dec 11 15:21:55 crc kubenswrapper[5050]: I1211 15:21:55.288235 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6aae3cc-13af-4bce-ac0b-d00638cf96e5-config-data\") pod \"nova-metadata-0\" (UID: \"c6aae3cc-13af-4bce-ac0b-d00638cf96e5\") " pod="openstack/nova-metadata-0" Dec 11 15:21:55 crc kubenswrapper[5050]: I1211 15:21:55.288271 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8sf9l\" (UniqueName: \"kubernetes.io/projected/c6aae3cc-13af-4bce-ac0b-d00638cf96e5-kube-api-access-8sf9l\") pod \"nova-metadata-0\" (UID: \"c6aae3cc-13af-4bce-ac0b-d00638cf96e5\") " pod="openstack/nova-metadata-0" Dec 11 15:21:55 crc kubenswrapper[5050]: I1211 15:21:55.288303 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6aae3cc-13af-4bce-ac0b-d00638cf96e5-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"c6aae3cc-13af-4bce-ac0b-d00638cf96e5\") " pod="openstack/nova-metadata-0" Dec 11 15:21:55 crc kubenswrapper[5050]: I1211 15:21:55.288327 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rlfbf\" (UniqueName: \"kubernetes.io/projected/be35db30-16f9-4271-888b-47d7c51d1d1f-kube-api-access-rlfbf\") pod \"nova-api-0\" (UID: \"be35db30-16f9-4271-888b-47d7c51d1d1f\") " pod="openstack/nova-api-0" Dec 11 15:21:55 crc kubenswrapper[5050]: I1211 15:21:55.289221 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/be35db30-16f9-4271-888b-47d7c51d1d1f-logs\") pod \"nova-api-0\" (UID: \"be35db30-16f9-4271-888b-47d7c51d1d1f\") " pod="openstack/nova-api-0" Dec 11 15:21:55 crc kubenswrapper[5050]: I1211 
15:21:55.289224 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c6aae3cc-13af-4bce-ac0b-d00638cf96e5-logs\") pod \"nova-metadata-0\" (UID: \"c6aae3cc-13af-4bce-ac0b-d00638cf96e5\") " pod="openstack/nova-metadata-0" Dec 11 15:21:55 crc kubenswrapper[5050]: I1211 15:21:55.294745 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6aae3cc-13af-4bce-ac0b-d00638cf96e5-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"c6aae3cc-13af-4bce-ac0b-d00638cf96e5\") " pod="openstack/nova-metadata-0" Dec 11 15:21:55 crc kubenswrapper[5050]: I1211 15:21:55.294939 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be35db30-16f9-4271-888b-47d7c51d1d1f-config-data\") pod \"nova-api-0\" (UID: \"be35db30-16f9-4271-888b-47d7c51d1d1f\") " pod="openstack/nova-api-0" Dec 11 15:21:55 crc kubenswrapper[5050]: I1211 15:21:55.295790 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be35db30-16f9-4271-888b-47d7c51d1d1f-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"be35db30-16f9-4271-888b-47d7c51d1d1f\") " pod="openstack/nova-api-0" Dec 11 15:21:55 crc kubenswrapper[5050]: I1211 15:21:55.319713 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rlfbf\" (UniqueName: \"kubernetes.io/projected/be35db30-16f9-4271-888b-47d7c51d1d1f-kube-api-access-rlfbf\") pod \"nova-api-0\" (UID: \"be35db30-16f9-4271-888b-47d7c51d1d1f\") " pod="openstack/nova-api-0" Dec 11 15:21:55 crc kubenswrapper[5050]: I1211 15:21:55.323759 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6aae3cc-13af-4bce-ac0b-d00638cf96e5-config-data\") pod \"nova-metadata-0\" (UID: \"c6aae3cc-13af-4bce-ac0b-d00638cf96e5\") " pod="openstack/nova-metadata-0" Dec 11 15:21:55 crc kubenswrapper[5050]: I1211 15:21:55.333188 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8sf9l\" (UniqueName: \"kubernetes.io/projected/c6aae3cc-13af-4bce-ac0b-d00638cf96e5-kube-api-access-8sf9l\") pod \"nova-metadata-0\" (UID: \"c6aae3cc-13af-4bce-ac0b-d00638cf96e5\") " pod="openstack/nova-metadata-0" Dec 11 15:21:55 crc kubenswrapper[5050]: I1211 15:21:55.371512 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 11 15:21:55 crc kubenswrapper[5050]: I1211 15:21:55.383760 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 11 15:21:55 crc kubenswrapper[5050]: I1211 15:21:55.506838 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Dec 11 15:21:55 crc kubenswrapper[5050]: I1211 15:21:55.559519 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4a0a6737-e9de-4e43-a75f-796e189251e1" path="/var/lib/kubelet/pods/4a0a6737-e9de-4e43-a75f-796e189251e1/volumes" Dec 11 15:21:55 crc kubenswrapper[5050]: I1211 15:21:55.560335 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b8b7a9dc-0609-4549-8abf-caf981152d2a" path="/var/lib/kubelet/pods/b8b7a9dc-0609-4549-8abf-caf981152d2a/volumes" Dec 11 15:21:55 crc kubenswrapper[5050]: I1211 15:21:55.594901 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1bcf288f-ca1a-46f5-b3f3-5136f97465cf-config-data\") pod \"1bcf288f-ca1a-46f5-b3f3-5136f97465cf\" (UID: \"1bcf288f-ca1a-46f5-b3f3-5136f97465cf\") " Dec 11 15:21:55 crc kubenswrapper[5050]: I1211 15:21:55.594986 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bcf288f-ca1a-46f5-b3f3-5136f97465cf-combined-ca-bundle\") pod \"1bcf288f-ca1a-46f5-b3f3-5136f97465cf\" (UID: \"1bcf288f-ca1a-46f5-b3f3-5136f97465cf\") " Dec 11 15:21:55 crc kubenswrapper[5050]: I1211 15:21:55.595157 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mgppl\" (UniqueName: \"kubernetes.io/projected/1bcf288f-ca1a-46f5-b3f3-5136f97465cf-kube-api-access-mgppl\") pod \"1bcf288f-ca1a-46f5-b3f3-5136f97465cf\" (UID: \"1bcf288f-ca1a-46f5-b3f3-5136f97465cf\") " Dec 11 15:21:55 crc kubenswrapper[5050]: I1211 15:21:55.602805 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bcf288f-ca1a-46f5-b3f3-5136f97465cf-kube-api-access-mgppl" (OuterVolumeSpecName: "kube-api-access-mgppl") pod "1bcf288f-ca1a-46f5-b3f3-5136f97465cf" (UID: "1bcf288f-ca1a-46f5-b3f3-5136f97465cf"). InnerVolumeSpecName "kube-api-access-mgppl". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 15:21:55 crc kubenswrapper[5050]: I1211 15:21:55.623829 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bcf288f-ca1a-46f5-b3f3-5136f97465cf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1bcf288f-ca1a-46f5-b3f3-5136f97465cf" (UID: "1bcf288f-ca1a-46f5-b3f3-5136f97465cf"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 15:21:55 crc kubenswrapper[5050]: I1211 15:21:55.631485 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bcf288f-ca1a-46f5-b3f3-5136f97465cf-config-data" (OuterVolumeSpecName: "config-data") pod "1bcf288f-ca1a-46f5-b3f3-5136f97465cf" (UID: "1bcf288f-ca1a-46f5-b3f3-5136f97465cf"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 15:21:55 crc kubenswrapper[5050]: I1211 15:21:55.697284 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mgppl\" (UniqueName: \"kubernetes.io/projected/1bcf288f-ca1a-46f5-b3f3-5136f97465cf-kube-api-access-mgppl\") on node \"crc\" DevicePath \"\"" Dec 11 15:21:55 crc kubenswrapper[5050]: I1211 15:21:55.697323 5050 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1bcf288f-ca1a-46f5-b3f3-5136f97465cf-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 15:21:55 crc kubenswrapper[5050]: I1211 15:21:55.697334 5050 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1bcf288f-ca1a-46f5-b3f3-5136f97465cf-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 15:21:55 crc kubenswrapper[5050]: I1211 15:21:55.854328 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 11 15:21:55 crc kubenswrapper[5050]: W1211 15:21:55.857232 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbe35db30_16f9_4271_888b_47d7c51d1d1f.slice/crio-0a2f27de62271732f23c5b56a841b0a7bbb7a3d469c0196fb2e50fe6dec73d98 WatchSource:0}: Error finding container 0a2f27de62271732f23c5b56a841b0a7bbb7a3d469c0196fb2e50fe6dec73d98: Status 404 returned error can't find the container with id 0a2f27de62271732f23c5b56a841b0a7bbb7a3d469c0196fb2e50fe6dec73d98 Dec 11 15:21:55 crc kubenswrapper[5050]: I1211 15:21:55.906345 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"be35db30-16f9-4271-888b-47d7c51d1d1f","Type":"ContainerStarted","Data":"0a2f27de62271732f23c5b56a841b0a7bbb7a3d469c0196fb2e50fe6dec73d98"} Dec 11 15:21:55 crc kubenswrapper[5050]: I1211 15:21:55.909362 5050 generic.go:334] "Generic (PLEG): container finished" podID="1bcf288f-ca1a-46f5-b3f3-5136f97465cf" containerID="30dd4750ef1da079dc2a1d47f7c9a776395d1c483687cb1280344a864334315f" exitCode=0 Dec 11 15:21:55 crc kubenswrapper[5050]: I1211 15:21:55.909465 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Dec 11 15:21:55 crc kubenswrapper[5050]: I1211 15:21:55.910256 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"1bcf288f-ca1a-46f5-b3f3-5136f97465cf","Type":"ContainerDied","Data":"30dd4750ef1da079dc2a1d47f7c9a776395d1c483687cb1280344a864334315f"} Dec 11 15:21:55 crc kubenswrapper[5050]: I1211 15:21:55.910317 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"1bcf288f-ca1a-46f5-b3f3-5136f97465cf","Type":"ContainerDied","Data":"ebe75507e89c57497b4509f47cd4ac2d453c109dbc179eba6a6a4acb8772e0ed"} Dec 11 15:21:55 crc kubenswrapper[5050]: I1211 15:21:55.910335 5050 scope.go:117] "RemoveContainer" containerID="30dd4750ef1da079dc2a1d47f7c9a776395d1c483687cb1280344a864334315f" Dec 11 15:21:55 crc kubenswrapper[5050]: I1211 15:21:55.932557 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 11 15:21:55 crc kubenswrapper[5050]: I1211 15:21:55.940570 5050 scope.go:117] "RemoveContainer" containerID="30dd4750ef1da079dc2a1d47f7c9a776395d1c483687cb1280344a864334315f" Dec 11 15:21:55 crc kubenswrapper[5050]: E1211 15:21:55.940980 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"30dd4750ef1da079dc2a1d47f7c9a776395d1c483687cb1280344a864334315f\": container with ID starting with 30dd4750ef1da079dc2a1d47f7c9a776395d1c483687cb1280344a864334315f not found: ID does not exist" containerID="30dd4750ef1da079dc2a1d47f7c9a776395d1c483687cb1280344a864334315f" Dec 11 15:21:55 crc kubenswrapper[5050]: I1211 15:21:55.941029 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"30dd4750ef1da079dc2a1d47f7c9a776395d1c483687cb1280344a864334315f"} err="failed to get container status \"30dd4750ef1da079dc2a1d47f7c9a776395d1c483687cb1280344a864334315f\": rpc error: code = NotFound desc = could not find container \"30dd4750ef1da079dc2a1d47f7c9a776395d1c483687cb1280344a864334315f\": container with ID starting with 30dd4750ef1da079dc2a1d47f7c9a776395d1c483687cb1280344a864334315f not found: ID does not exist" Dec 11 15:21:55 crc kubenswrapper[5050]: W1211 15:21:55.954589 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc6aae3cc_13af_4bce_ac0b_d00638cf96e5.slice/crio-b4ae26dd1238ef8e06f2705280bc7d731b9237f753d18e84ae09219b148c9a29 WatchSource:0}: Error finding container b4ae26dd1238ef8e06f2705280bc7d731b9237f753d18e84ae09219b148c9a29: Status 404 returned error can't find the container with id b4ae26dd1238ef8e06f2705280bc7d731b9237f753d18e84ae09219b148c9a29 Dec 11 15:21:55 crc kubenswrapper[5050]: I1211 15:21:55.959632 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Dec 11 15:21:55 crc kubenswrapper[5050]: I1211 15:21:55.970870 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Dec 11 15:21:55 crc kubenswrapper[5050]: I1211 15:21:55.981669 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Dec 11 15:21:55 crc kubenswrapper[5050]: E1211 15:21:55.982140 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1bcf288f-ca1a-46f5-b3f3-5136f97465cf" containerName="nova-scheduler-scheduler" Dec 11 15:21:55 crc kubenswrapper[5050]: I1211 15:21:55.982157 5050 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="1bcf288f-ca1a-46f5-b3f3-5136f97465cf" containerName="nova-scheduler-scheduler" Dec 11 15:21:55 crc kubenswrapper[5050]: I1211 15:21:55.982322 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="1bcf288f-ca1a-46f5-b3f3-5136f97465cf" containerName="nova-scheduler-scheduler" Dec 11 15:21:55 crc kubenswrapper[5050]: I1211 15:21:55.982946 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 11 15:21:55 crc kubenswrapper[5050]: I1211 15:21:55.986617 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Dec 11 15:21:55 crc kubenswrapper[5050]: I1211 15:21:55.989566 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 11 15:21:56 crc kubenswrapper[5050]: I1211 15:21:56.106758 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/497b2933-da2f-4dce-9a38-6307ad42c044-config-data\") pod \"nova-scheduler-0\" (UID: \"497b2933-da2f-4dce-9a38-6307ad42c044\") " pod="openstack/nova-scheduler-0" Dec 11 15:21:56 crc kubenswrapper[5050]: I1211 15:21:56.106879 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/497b2933-da2f-4dce-9a38-6307ad42c044-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"497b2933-da2f-4dce-9a38-6307ad42c044\") " pod="openstack/nova-scheduler-0" Dec 11 15:21:56 crc kubenswrapper[5050]: I1211 15:21:56.106936 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l57bn\" (UniqueName: \"kubernetes.io/projected/497b2933-da2f-4dce-9a38-6307ad42c044-kube-api-access-l57bn\") pod \"nova-scheduler-0\" (UID: \"497b2933-da2f-4dce-9a38-6307ad42c044\") " pod="openstack/nova-scheduler-0" Dec 11 15:21:56 crc kubenswrapper[5050]: I1211 15:21:56.208106 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l57bn\" (UniqueName: \"kubernetes.io/projected/497b2933-da2f-4dce-9a38-6307ad42c044-kube-api-access-l57bn\") pod \"nova-scheduler-0\" (UID: \"497b2933-da2f-4dce-9a38-6307ad42c044\") " pod="openstack/nova-scheduler-0" Dec 11 15:21:56 crc kubenswrapper[5050]: I1211 15:21:56.208202 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/497b2933-da2f-4dce-9a38-6307ad42c044-config-data\") pod \"nova-scheduler-0\" (UID: \"497b2933-da2f-4dce-9a38-6307ad42c044\") " pod="openstack/nova-scheduler-0" Dec 11 15:21:56 crc kubenswrapper[5050]: I1211 15:21:56.208333 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/497b2933-da2f-4dce-9a38-6307ad42c044-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"497b2933-da2f-4dce-9a38-6307ad42c044\") " pod="openstack/nova-scheduler-0" Dec 11 15:21:56 crc kubenswrapper[5050]: I1211 15:21:56.213026 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/497b2933-da2f-4dce-9a38-6307ad42c044-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"497b2933-da2f-4dce-9a38-6307ad42c044\") " pod="openstack/nova-scheduler-0" Dec 11 15:21:56 crc kubenswrapper[5050]: I1211 15:21:56.218746 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/497b2933-da2f-4dce-9a38-6307ad42c044-config-data\") pod \"nova-scheduler-0\" (UID: \"497b2933-da2f-4dce-9a38-6307ad42c044\") " pod="openstack/nova-scheduler-0" Dec 11 15:21:56 crc kubenswrapper[5050]: I1211 15:21:56.224903 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l57bn\" (UniqueName: \"kubernetes.io/projected/497b2933-da2f-4dce-9a38-6307ad42c044-kube-api-access-l57bn\") pod \"nova-scheduler-0\" (UID: \"497b2933-da2f-4dce-9a38-6307ad42c044\") " pod="openstack/nova-scheduler-0" Dec 11 15:21:56 crc kubenswrapper[5050]: I1211 15:21:56.263549 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7dc945cfcf-bfgvl" Dec 11 15:21:56 crc kubenswrapper[5050]: I1211 15:21:56.277052 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Dec 11 15:21:56 crc kubenswrapper[5050]: I1211 15:21:56.291002 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Dec 11 15:21:56 crc kubenswrapper[5050]: I1211 15:21:56.324142 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 11 15:21:56 crc kubenswrapper[5050]: I1211 15:21:56.336402 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b49969b8f-w2f8g"] Dec 11 15:21:56 crc kubenswrapper[5050]: I1211 15:21:56.336668 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-b49969b8f-w2f8g" podUID="9d52ee10-8e8e-457a-92cc-03b93c6bedca" containerName="dnsmasq-dns" containerID="cri-o://29b876e4d00df1a55af81f5e20c8882b09c2c84707ab01ca16a849ce3d8b63f5" gracePeriod=10 Dec 11 15:21:56 crc kubenswrapper[5050]: I1211 15:21:56.781056 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b49969b8f-w2f8g" Dec 11 15:21:56 crc kubenswrapper[5050]: I1211 15:21:56.921578 5050 generic.go:334] "Generic (PLEG): container finished" podID="9d52ee10-8e8e-457a-92cc-03b93c6bedca" containerID="29b876e4d00df1a55af81f5e20c8882b09c2c84707ab01ca16a849ce3d8b63f5" exitCode=0 Dec 11 15:21:56 crc kubenswrapper[5050]: I1211 15:21:56.921639 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-b49969b8f-w2f8g" Dec 11 15:21:56 crc kubenswrapper[5050]: I1211 15:21:56.921639 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b49969b8f-w2f8g" event={"ID":"9d52ee10-8e8e-457a-92cc-03b93c6bedca","Type":"ContainerDied","Data":"29b876e4d00df1a55af81f5e20c8882b09c2c84707ab01ca16a849ce3d8b63f5"} Dec 11 15:21:56 crc kubenswrapper[5050]: I1211 15:21:56.921693 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b49969b8f-w2f8g" event={"ID":"9d52ee10-8e8e-457a-92cc-03b93c6bedca","Type":"ContainerDied","Data":"524e8252bdc3ce196fa25139227f27eb145907440d02b5dd676579231494828c"} Dec 11 15:21:56 crc kubenswrapper[5050]: I1211 15:21:56.921716 5050 scope.go:117] "RemoveContainer" containerID="29b876e4d00df1a55af81f5e20c8882b09c2c84707ab01ca16a849ce3d8b63f5" Dec 11 15:21:56 crc kubenswrapper[5050]: I1211 15:21:56.925063 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9d52ee10-8e8e-457a-92cc-03b93c6bedca-dns-svc\") pod \"9d52ee10-8e8e-457a-92cc-03b93c6bedca\" (UID: \"9d52ee10-8e8e-457a-92cc-03b93c6bedca\") " Dec 11 15:21:56 crc kubenswrapper[5050]: I1211 15:21:56.925100 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8q9rt\" (UniqueName: \"kubernetes.io/projected/9d52ee10-8e8e-457a-92cc-03b93c6bedca-kube-api-access-8q9rt\") pod \"9d52ee10-8e8e-457a-92cc-03b93c6bedca\" (UID: \"9d52ee10-8e8e-457a-92cc-03b93c6bedca\") " Dec 11 15:21:56 crc kubenswrapper[5050]: I1211 15:21:56.925158 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9d52ee10-8e8e-457a-92cc-03b93c6bedca-ovsdbserver-sb\") pod \"9d52ee10-8e8e-457a-92cc-03b93c6bedca\" (UID: \"9d52ee10-8e8e-457a-92cc-03b93c6bedca\") " Dec 11 15:21:56 crc kubenswrapper[5050]: I1211 15:21:56.925198 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d52ee10-8e8e-457a-92cc-03b93c6bedca-config\") pod \"9d52ee10-8e8e-457a-92cc-03b93c6bedca\" (UID: \"9d52ee10-8e8e-457a-92cc-03b93c6bedca\") " Dec 11 15:21:56 crc kubenswrapper[5050]: I1211 15:21:56.925252 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9d52ee10-8e8e-457a-92cc-03b93c6bedca-ovsdbserver-nb\") pod \"9d52ee10-8e8e-457a-92cc-03b93c6bedca\" (UID: \"9d52ee10-8e8e-457a-92cc-03b93c6bedca\") " Dec 11 15:21:56 crc kubenswrapper[5050]: I1211 15:21:56.931409 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"be35db30-16f9-4271-888b-47d7c51d1d1f","Type":"ContainerStarted","Data":"df9dc53af5abd15e404a800bf45fd0f9dc8def9d71e7182c7f79e07ae15e88a0"} Dec 11 15:21:56 crc kubenswrapper[5050]: I1211 15:21:56.931456 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"be35db30-16f9-4271-888b-47d7c51d1d1f","Type":"ContainerStarted","Data":"71677816fd695eda83f3319b1edd425d19cf185a9756684bf46608f372809ce9"} Dec 11 15:21:56 crc kubenswrapper[5050]: I1211 15:21:56.933154 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d52ee10-8e8e-457a-92cc-03b93c6bedca-kube-api-access-8q9rt" (OuterVolumeSpecName: "kube-api-access-8q9rt") pod "9d52ee10-8e8e-457a-92cc-03b93c6bedca" (UID: 
"9d52ee10-8e8e-457a-92cc-03b93c6bedca"). InnerVolumeSpecName "kube-api-access-8q9rt". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 15:21:56 crc kubenswrapper[5050]: I1211 15:21:56.935801 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c6aae3cc-13af-4bce-ac0b-d00638cf96e5","Type":"ContainerStarted","Data":"d916e7ef2fae5dce5990693b24c9a73e32b0e8550baa6e7fa7d3d6104b3a27ba"} Dec 11 15:21:56 crc kubenswrapper[5050]: I1211 15:21:56.935859 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c6aae3cc-13af-4bce-ac0b-d00638cf96e5","Type":"ContainerStarted","Data":"f4c00760cb0f4c1f09bfd2dd634fb447463d45e29e047cd2973278a9cc0a88b8"} Dec 11 15:21:56 crc kubenswrapper[5050]: I1211 15:21:56.935873 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c6aae3cc-13af-4bce-ac0b-d00638cf96e5","Type":"ContainerStarted","Data":"b4ae26dd1238ef8e06f2705280bc7d731b9237f753d18e84ae09219b148c9a29"} Dec 11 15:21:56 crc kubenswrapper[5050]: I1211 15:21:56.951499 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Dec 11 15:21:56 crc kubenswrapper[5050]: I1211 15:21:56.952060 5050 scope.go:117] "RemoveContainer" containerID="fec02ffbe8337c9b5bee5297d45dde19f614061cf73022c0b5e8af9b4cb5052d" Dec 11 15:21:56 crc kubenswrapper[5050]: W1211 15:21:56.964142 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod497b2933_da2f_4dce_9a38_6307ad42c044.slice/crio-f27bc068e0d76d608d56a3647df55b3328a29182d7d30b720403f540cd282e27 WatchSource:0}: Error finding container f27bc068e0d76d608d56a3647df55b3328a29182d7d30b720403f540cd282e27: Status 404 returned error can't find the container with id f27bc068e0d76d608d56a3647df55b3328a29182d7d30b720403f540cd282e27 Dec 11 15:21:56 crc kubenswrapper[5050]: I1211 15:21:56.967190 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 11 15:21:56 crc kubenswrapper[5050]: I1211 15:21:56.968849 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.96883492 podStartE2EDuration="2.96883492s" podCreationTimestamp="2025-12-11 15:21:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 15:21:56.959054349 +0000 UTC m=+5607.802776955" watchObservedRunningTime="2025-12-11 15:21:56.96883492 +0000 UTC m=+5607.812557506" Dec 11 15:21:56 crc kubenswrapper[5050]: I1211 15:21:56.981868 5050 scope.go:117] "RemoveContainer" containerID="29b876e4d00df1a55af81f5e20c8882b09c2c84707ab01ca16a849ce3d8b63f5" Dec 11 15:21:56 crc kubenswrapper[5050]: I1211 15:21:56.982268 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d52ee10-8e8e-457a-92cc-03b93c6bedca-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "9d52ee10-8e8e-457a-92cc-03b93c6bedca" (UID: "9d52ee10-8e8e-457a-92cc-03b93c6bedca"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 15:21:56 crc kubenswrapper[5050]: E1211 15:21:56.984735 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"29b876e4d00df1a55af81f5e20c8882b09c2c84707ab01ca16a849ce3d8b63f5\": container with ID starting with 29b876e4d00df1a55af81f5e20c8882b09c2c84707ab01ca16a849ce3d8b63f5 not found: ID does not exist" containerID="29b876e4d00df1a55af81f5e20c8882b09c2c84707ab01ca16a849ce3d8b63f5" Dec 11 15:21:56 crc kubenswrapper[5050]: I1211 15:21:56.984779 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"29b876e4d00df1a55af81f5e20c8882b09c2c84707ab01ca16a849ce3d8b63f5"} err="failed to get container status \"29b876e4d00df1a55af81f5e20c8882b09c2c84707ab01ca16a849ce3d8b63f5\": rpc error: code = NotFound desc = could not find container \"29b876e4d00df1a55af81f5e20c8882b09c2c84707ab01ca16a849ce3d8b63f5\": container with ID starting with 29b876e4d00df1a55af81f5e20c8882b09c2c84707ab01ca16a849ce3d8b63f5 not found: ID does not exist" Dec 11 15:21:56 crc kubenswrapper[5050]: I1211 15:21:56.984803 5050 scope.go:117] "RemoveContainer" containerID="fec02ffbe8337c9b5bee5297d45dde19f614061cf73022c0b5e8af9b4cb5052d" Dec 11 15:21:56 crc kubenswrapper[5050]: E1211 15:21:56.985294 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fec02ffbe8337c9b5bee5297d45dde19f614061cf73022c0b5e8af9b4cb5052d\": container with ID starting with fec02ffbe8337c9b5bee5297d45dde19f614061cf73022c0b5e8af9b4cb5052d not found: ID does not exist" containerID="fec02ffbe8337c9b5bee5297d45dde19f614061cf73022c0b5e8af9b4cb5052d" Dec 11 15:21:56 crc kubenswrapper[5050]: I1211 15:21:56.985330 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fec02ffbe8337c9b5bee5297d45dde19f614061cf73022c0b5e8af9b4cb5052d"} err="failed to get container status \"fec02ffbe8337c9b5bee5297d45dde19f614061cf73022c0b5e8af9b4cb5052d\": rpc error: code = NotFound desc = could not find container \"fec02ffbe8337c9b5bee5297d45dde19f614061cf73022c0b5e8af9b4cb5052d\": container with ID starting with fec02ffbe8337c9b5bee5297d45dde19f614061cf73022c0b5e8af9b4cb5052d not found: ID does not exist" Dec 11 15:21:57 crc kubenswrapper[5050]: I1211 15:21:57.014180 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d52ee10-8e8e-457a-92cc-03b93c6bedca-config" (OuterVolumeSpecName: "config") pod "9d52ee10-8e8e-457a-92cc-03b93c6bedca" (UID: "9d52ee10-8e8e-457a-92cc-03b93c6bedca"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 15:21:57 crc kubenswrapper[5050]: I1211 15:21:57.014669 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d52ee10-8e8e-457a-92cc-03b93c6bedca-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9d52ee10-8e8e-457a-92cc-03b93c6bedca" (UID: "9d52ee10-8e8e-457a-92cc-03b93c6bedca"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 15:21:57 crc kubenswrapper[5050]: I1211 15:21:57.018154 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d52ee10-8e8e-457a-92cc-03b93c6bedca-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "9d52ee10-8e8e-457a-92cc-03b93c6bedca" (UID: "9d52ee10-8e8e-457a-92cc-03b93c6bedca"). 
InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 15:21:57 crc kubenswrapper[5050]: I1211 15:21:57.021243 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.021222569 podStartE2EDuration="3.021222569s" podCreationTimestamp="2025-12-11 15:21:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 15:21:57.001023779 +0000 UTC m=+5607.844746375" watchObservedRunningTime="2025-12-11 15:21:57.021222569 +0000 UTC m=+5607.864945155" Dec 11 15:21:57 crc kubenswrapper[5050]: I1211 15:21:57.026851 5050 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d52ee10-8e8e-457a-92cc-03b93c6bedca-config\") on node \"crc\" DevicePath \"\"" Dec 11 15:21:57 crc kubenswrapper[5050]: I1211 15:21:57.026883 5050 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9d52ee10-8e8e-457a-92cc-03b93c6bedca-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 11 15:21:57 crc kubenswrapper[5050]: I1211 15:21:57.026894 5050 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9d52ee10-8e8e-457a-92cc-03b93c6bedca-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 11 15:21:57 crc kubenswrapper[5050]: I1211 15:21:57.026902 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8q9rt\" (UniqueName: \"kubernetes.io/projected/9d52ee10-8e8e-457a-92cc-03b93c6bedca-kube-api-access-8q9rt\") on node \"crc\" DevicePath \"\"" Dec 11 15:21:57 crc kubenswrapper[5050]: I1211 15:21:57.026911 5050 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9d52ee10-8e8e-457a-92cc-03b93c6bedca-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 11 15:21:57 crc kubenswrapper[5050]: I1211 15:21:57.256433 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b49969b8f-w2f8g"] Dec 11 15:21:57 crc kubenswrapper[5050]: I1211 15:21:57.266198 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-b49969b8f-w2f8g"] Dec 11 15:21:57 crc kubenswrapper[5050]: I1211 15:21:57.557647 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bcf288f-ca1a-46f5-b3f3-5136f97465cf" path="/var/lib/kubelet/pods/1bcf288f-ca1a-46f5-b3f3-5136f97465cf/volumes" Dec 11 15:21:57 crc kubenswrapper[5050]: I1211 15:21:57.558196 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d52ee10-8e8e-457a-92cc-03b93c6bedca" path="/var/lib/kubelet/pods/9d52ee10-8e8e-457a-92cc-03b93c6bedca/volumes" Dec 11 15:21:57 crc kubenswrapper[5050]: I1211 15:21:57.951905 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"497b2933-da2f-4dce-9a38-6307ad42c044","Type":"ContainerStarted","Data":"ffb551f7b57b343e73eda1b90ae83c614f10683bd66dc14fe8c51a84474196a3"} Dec 11 15:21:57 crc kubenswrapper[5050]: I1211 15:21:57.951962 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"497b2933-da2f-4dce-9a38-6307ad42c044","Type":"ContainerStarted","Data":"f27bc068e0d76d608d56a3647df55b3328a29182d7d30b720403f540cd282e27"} Dec 11 15:21:57 crc kubenswrapper[5050]: I1211 15:21:57.969084 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/nova-scheduler-0" podStartSLOduration=2.969065945 podStartE2EDuration="2.969065945s" podCreationTimestamp="2025-12-11 15:21:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 15:21:57.966997169 +0000 UTC m=+5608.810719755" watchObservedRunningTime="2025-12-11 15:21:57.969065945 +0000 UTC m=+5608.812788531" Dec 11 15:21:58 crc kubenswrapper[5050]: I1211 15:21:58.282982 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Dec 11 15:21:58 crc kubenswrapper[5050]: I1211 15:21:58.731415 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-q2m5c"] Dec 11 15:21:58 crc kubenswrapper[5050]: E1211 15:21:58.731784 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d52ee10-8e8e-457a-92cc-03b93c6bedca" containerName="init" Dec 11 15:21:58 crc kubenswrapper[5050]: I1211 15:21:58.731802 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d52ee10-8e8e-457a-92cc-03b93c6bedca" containerName="init" Dec 11 15:21:58 crc kubenswrapper[5050]: E1211 15:21:58.731813 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d52ee10-8e8e-457a-92cc-03b93c6bedca" containerName="dnsmasq-dns" Dec 11 15:21:58 crc kubenswrapper[5050]: I1211 15:21:58.731822 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d52ee10-8e8e-457a-92cc-03b93c6bedca" containerName="dnsmasq-dns" Dec 11 15:21:58 crc kubenswrapper[5050]: I1211 15:21:58.732026 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d52ee10-8e8e-457a-92cc-03b93c6bedca" containerName="dnsmasq-dns" Dec 11 15:21:58 crc kubenswrapper[5050]: I1211 15:21:58.732642 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-q2m5c" Dec 11 15:21:58 crc kubenswrapper[5050]: I1211 15:21:58.743048 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Dec 11 15:21:58 crc kubenswrapper[5050]: I1211 15:21:58.743298 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Dec 11 15:21:58 crc kubenswrapper[5050]: I1211 15:21:58.749082 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-q2m5c"] Dec 11 15:21:58 crc kubenswrapper[5050]: I1211 15:21:58.857495 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50d84a10-51b0-4e8e-a413-727685826a4d-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-q2m5c\" (UID: \"50d84a10-51b0-4e8e-a413-727685826a4d\") " pod="openstack/nova-cell1-cell-mapping-q2m5c" Dec 11 15:21:58 crc kubenswrapper[5050]: I1211 15:21:58.857544 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zbqc\" (UniqueName: \"kubernetes.io/projected/50d84a10-51b0-4e8e-a413-727685826a4d-kube-api-access-7zbqc\") pod \"nova-cell1-cell-mapping-q2m5c\" (UID: \"50d84a10-51b0-4e8e-a413-727685826a4d\") " pod="openstack/nova-cell1-cell-mapping-q2m5c" Dec 11 15:21:58 crc kubenswrapper[5050]: I1211 15:21:58.857600 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50d84a10-51b0-4e8e-a413-727685826a4d-config-data\") pod \"nova-cell1-cell-mapping-q2m5c\" (UID: \"50d84a10-51b0-4e8e-a413-727685826a4d\") " pod="openstack/nova-cell1-cell-mapping-q2m5c" Dec 11 15:21:58 crc kubenswrapper[5050]: I1211 15:21:58.857751 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/50d84a10-51b0-4e8e-a413-727685826a4d-scripts\") pod \"nova-cell1-cell-mapping-q2m5c\" (UID: \"50d84a10-51b0-4e8e-a413-727685826a4d\") " pod="openstack/nova-cell1-cell-mapping-q2m5c" Dec 11 15:21:58 crc kubenswrapper[5050]: I1211 15:21:58.959271 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50d84a10-51b0-4e8e-a413-727685826a4d-config-data\") pod \"nova-cell1-cell-mapping-q2m5c\" (UID: \"50d84a10-51b0-4e8e-a413-727685826a4d\") " pod="openstack/nova-cell1-cell-mapping-q2m5c" Dec 11 15:21:58 crc kubenswrapper[5050]: I1211 15:21:58.959348 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/50d84a10-51b0-4e8e-a413-727685826a4d-scripts\") pod \"nova-cell1-cell-mapping-q2m5c\" (UID: \"50d84a10-51b0-4e8e-a413-727685826a4d\") " pod="openstack/nova-cell1-cell-mapping-q2m5c" Dec 11 15:21:58 crc kubenswrapper[5050]: I1211 15:21:58.959500 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50d84a10-51b0-4e8e-a413-727685826a4d-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-q2m5c\" (UID: \"50d84a10-51b0-4e8e-a413-727685826a4d\") " pod="openstack/nova-cell1-cell-mapping-q2m5c" Dec 11 15:21:58 crc kubenswrapper[5050]: I1211 15:21:58.959540 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7zbqc\" (UniqueName: 
\"kubernetes.io/projected/50d84a10-51b0-4e8e-a413-727685826a4d-kube-api-access-7zbqc\") pod \"nova-cell1-cell-mapping-q2m5c\" (UID: \"50d84a10-51b0-4e8e-a413-727685826a4d\") " pod="openstack/nova-cell1-cell-mapping-q2m5c" Dec 11 15:21:58 crc kubenswrapper[5050]: I1211 15:21:58.964484 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/50d84a10-51b0-4e8e-a413-727685826a4d-scripts\") pod \"nova-cell1-cell-mapping-q2m5c\" (UID: \"50d84a10-51b0-4e8e-a413-727685826a4d\") " pod="openstack/nova-cell1-cell-mapping-q2m5c" Dec 11 15:21:58 crc kubenswrapper[5050]: I1211 15:21:58.965972 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50d84a10-51b0-4e8e-a413-727685826a4d-config-data\") pod \"nova-cell1-cell-mapping-q2m5c\" (UID: \"50d84a10-51b0-4e8e-a413-727685826a4d\") " pod="openstack/nova-cell1-cell-mapping-q2m5c" Dec 11 15:21:58 crc kubenswrapper[5050]: I1211 15:21:58.969588 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50d84a10-51b0-4e8e-a413-727685826a4d-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-q2m5c\" (UID: \"50d84a10-51b0-4e8e-a413-727685826a4d\") " pod="openstack/nova-cell1-cell-mapping-q2m5c" Dec 11 15:21:58 crc kubenswrapper[5050]: I1211 15:21:58.978791 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7zbqc\" (UniqueName: \"kubernetes.io/projected/50d84a10-51b0-4e8e-a413-727685826a4d-kube-api-access-7zbqc\") pod \"nova-cell1-cell-mapping-q2m5c\" (UID: \"50d84a10-51b0-4e8e-a413-727685826a4d\") " pod="openstack/nova-cell1-cell-mapping-q2m5c" Dec 11 15:21:59 crc kubenswrapper[5050]: I1211 15:21:59.055620 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-q2m5c" Dec 11 15:21:59 crc kubenswrapper[5050]: W1211 15:21:59.531632 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod50d84a10_51b0_4e8e_a413_727685826a4d.slice/crio-9502c21ade3fed93ddd6b6ec9116ab1d3ee29749d54a558286e50e73f19fbf7f WatchSource:0}: Error finding container 9502c21ade3fed93ddd6b6ec9116ab1d3ee29749d54a558286e50e73f19fbf7f: Status 404 returned error can't find the container with id 9502c21ade3fed93ddd6b6ec9116ab1d3ee29749d54a558286e50e73f19fbf7f Dec 11 15:21:59 crc kubenswrapper[5050]: I1211 15:21:59.532442 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-q2m5c"] Dec 11 15:21:59 crc kubenswrapper[5050]: I1211 15:21:59.967520 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-q2m5c" event={"ID":"50d84a10-51b0-4e8e-a413-727685826a4d","Type":"ContainerStarted","Data":"d5a02bbceed282d5531ddaefcb14aa55611a9672386803752c6e363d5bcef2ec"} Dec 11 15:21:59 crc kubenswrapper[5050]: I1211 15:21:59.967806 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-q2m5c" event={"ID":"50d84a10-51b0-4e8e-a413-727685826a4d","Type":"ContainerStarted","Data":"9502c21ade3fed93ddd6b6ec9116ab1d3ee29749d54a558286e50e73f19fbf7f"} Dec 11 15:21:59 crc kubenswrapper[5050]: I1211 15:21:59.993006 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-q2m5c" podStartSLOduration=1.992989677 podStartE2EDuration="1.992989677s" podCreationTimestamp="2025-12-11 15:21:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 15:21:59.990797619 +0000 UTC m=+5610.834520205" watchObservedRunningTime="2025-12-11 15:21:59.992989677 +0000 UTC m=+5610.836712263" Dec 11 15:22:00 crc kubenswrapper[5050]: I1211 15:22:00.372132 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 11 15:22:00 crc kubenswrapper[5050]: I1211 15:22:00.372393 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 11 15:22:01 crc kubenswrapper[5050]: E1211 15:22:01.307873 5050 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5a630a5b_c349_4e2f_876a_0b82485a8221.slice/crio-a928b2c8a4583771b4e46223adef9e73c3dfe4060ad2f22b56dcc4c23a60dc1e\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda80b79e5_138b_4e71_ab5e_aa8805cce0b5.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd35e2a89_ca99_46a0_86ce_83d7eac9733e.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda80b79e5_138b_4e71_ab5e_aa8805cce0b5.slice/crio-e09a0786d1e30296342f9183180cf055bb58bbf45f4ab7cc1d293b2b154af638\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd35e2a89_ca99_46a0_86ce_83d7eac9733e.slice/crio-1a581d6d1f84876519a0950858ffc6b372b8edafa03ab517f443e441615dad56\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod04dc7ae1_2485_4ffe_a853_4ef671794e68.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5a630a5b_c349_4e2f_876a_0b82485a8221.slice\": RecentStats: unable to find data in memory cache]" Dec 11 15:22:01 crc kubenswrapper[5050]: I1211 15:22:01.325512 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Dec 11 15:22:05 crc kubenswrapper[5050]: I1211 15:22:05.010268 5050 generic.go:334] "Generic (PLEG): container finished" podID="50d84a10-51b0-4e8e-a413-727685826a4d" containerID="d5a02bbceed282d5531ddaefcb14aa55611a9672386803752c6e363d5bcef2ec" exitCode=0 Dec 11 15:22:05 crc kubenswrapper[5050]: I1211 15:22:05.010590 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-q2m5c" event={"ID":"50d84a10-51b0-4e8e-a413-727685826a4d","Type":"ContainerDied","Data":"d5a02bbceed282d5531ddaefcb14aa55611a9672386803752c6e363d5bcef2ec"} Dec 11 15:22:05 crc kubenswrapper[5050]: I1211 15:22:05.372269 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Dec 11 15:22:05 crc kubenswrapper[5050]: I1211 15:22:05.372336 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Dec 11 15:22:05 crc kubenswrapper[5050]: I1211 15:22:05.384590 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 11 15:22:05 crc kubenswrapper[5050]: I1211 15:22:05.384660 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 11 15:22:06 crc kubenswrapper[5050]: I1211 15:22:06.325171 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Dec 11 15:22:06 crc kubenswrapper[5050]: I1211 15:22:06.356835 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Dec 11 15:22:06 crc kubenswrapper[5050]: I1211 15:22:06.425381 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-q2m5c" Dec 11 15:22:06 crc kubenswrapper[5050]: I1211 15:22:06.528760 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7zbqc\" (UniqueName: \"kubernetes.io/projected/50d84a10-51b0-4e8e-a413-727685826a4d-kube-api-access-7zbqc\") pod \"50d84a10-51b0-4e8e-a413-727685826a4d\" (UID: \"50d84a10-51b0-4e8e-a413-727685826a4d\") " Dec 11 15:22:06 crc kubenswrapper[5050]: I1211 15:22:06.528921 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/50d84a10-51b0-4e8e-a413-727685826a4d-scripts\") pod \"50d84a10-51b0-4e8e-a413-727685826a4d\" (UID: \"50d84a10-51b0-4e8e-a413-727685826a4d\") " Dec 11 15:22:06 crc kubenswrapper[5050]: I1211 15:22:06.528985 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50d84a10-51b0-4e8e-a413-727685826a4d-config-data\") pod \"50d84a10-51b0-4e8e-a413-727685826a4d\" (UID: \"50d84a10-51b0-4e8e-a413-727685826a4d\") " Dec 11 15:22:06 crc kubenswrapper[5050]: I1211 15:22:06.529104 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50d84a10-51b0-4e8e-a413-727685826a4d-combined-ca-bundle\") pod \"50d84a10-51b0-4e8e-a413-727685826a4d\" (UID: \"50d84a10-51b0-4e8e-a413-727685826a4d\") " Dec 11 15:22:06 crc kubenswrapper[5050]: I1211 15:22:06.534948 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50d84a10-51b0-4e8e-a413-727685826a4d-scripts" (OuterVolumeSpecName: "scripts") pod "50d84a10-51b0-4e8e-a413-727685826a4d" (UID: "50d84a10-51b0-4e8e-a413-727685826a4d"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 15:22:06 crc kubenswrapper[5050]: I1211 15:22:06.538265 5050 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="c6aae3cc-13af-4bce-ac0b-d00638cf96e5" containerName="nova-metadata-log" probeResult="failure" output="Get \"http://10.217.1.68:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:22:06 crc kubenswrapper[5050]: I1211 15:22:06.538360 5050 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="be35db30-16f9-4271-888b-47d7c51d1d1f" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.1.69:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:22:06 crc kubenswrapper[5050]: I1211 15:22:06.538336 5050 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="c6aae3cc-13af-4bce-ac0b-d00638cf96e5" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"http://10.217.1.68:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:22:06 crc kubenswrapper[5050]: I1211 15:22:06.538586 5050 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="be35db30-16f9-4271-888b-47d7c51d1d1f" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.1.69:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:22:06 crc kubenswrapper[5050]: I1211 15:22:06.538942 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50d84a10-51b0-4e8e-a413-727685826a4d-kube-api-access-7zbqc" (OuterVolumeSpecName: "kube-api-access-7zbqc") pod "50d84a10-51b0-4e8e-a413-727685826a4d" (UID: "50d84a10-51b0-4e8e-a413-727685826a4d"). InnerVolumeSpecName "kube-api-access-7zbqc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 15:22:06 crc kubenswrapper[5050]: I1211 15:22:06.558038 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50d84a10-51b0-4e8e-a413-727685826a4d-config-data" (OuterVolumeSpecName: "config-data") pod "50d84a10-51b0-4e8e-a413-727685826a4d" (UID: "50d84a10-51b0-4e8e-a413-727685826a4d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 15:22:06 crc kubenswrapper[5050]: I1211 15:22:06.562477 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50d84a10-51b0-4e8e-a413-727685826a4d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "50d84a10-51b0-4e8e-a413-727685826a4d" (UID: "50d84a10-51b0-4e8e-a413-727685826a4d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 15:22:06 crc kubenswrapper[5050]: I1211 15:22:06.636951 5050 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50d84a10-51b0-4e8e-a413-727685826a4d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 15:22:06 crc kubenswrapper[5050]: I1211 15:22:06.636986 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7zbqc\" (UniqueName: \"kubernetes.io/projected/50d84a10-51b0-4e8e-a413-727685826a4d-kube-api-access-7zbqc\") on node \"crc\" DevicePath \"\"" Dec 11 15:22:06 crc kubenswrapper[5050]: I1211 15:22:06.636998 5050 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/50d84a10-51b0-4e8e-a413-727685826a4d-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 15:22:06 crc kubenswrapper[5050]: I1211 15:22:06.637026 5050 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/50d84a10-51b0-4e8e-a413-727685826a4d-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 15:22:07 crc kubenswrapper[5050]: I1211 15:22:07.032330 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-q2m5c" Dec 11 15:22:07 crc kubenswrapper[5050]: I1211 15:22:07.033080 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-q2m5c" event={"ID":"50d84a10-51b0-4e8e-a413-727685826a4d","Type":"ContainerDied","Data":"9502c21ade3fed93ddd6b6ec9116ab1d3ee29749d54a558286e50e73f19fbf7f"} Dec 11 15:22:07 crc kubenswrapper[5050]: I1211 15:22:07.041337 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9502c21ade3fed93ddd6b6ec9116ab1d3ee29749d54a558286e50e73f19fbf7f" Dec 11 15:22:07 crc kubenswrapper[5050]: I1211 15:22:07.080812 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Dec 11 15:22:07 crc kubenswrapper[5050]: I1211 15:22:07.203257 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 11 15:22:07 crc kubenswrapper[5050]: I1211 15:22:07.203476 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="be35db30-16f9-4271-888b-47d7c51d1d1f" containerName="nova-api-log" containerID="cri-o://71677816fd695eda83f3319b1edd425d19cf185a9756684bf46608f372809ce9" gracePeriod=30 Dec 11 15:22:07 crc kubenswrapper[5050]: I1211 15:22:07.203572 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="be35db30-16f9-4271-888b-47d7c51d1d1f" containerName="nova-api-api" containerID="cri-o://df9dc53af5abd15e404a800bf45fd0f9dc8def9d71e7182c7f79e07ae15e88a0" gracePeriod=30 Dec 11 15:22:07 crc kubenswrapper[5050]: I1211 15:22:07.265534 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 11 15:22:07 crc kubenswrapper[5050]: I1211 15:22:07.265772 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="c6aae3cc-13af-4bce-ac0b-d00638cf96e5" containerName="nova-metadata-log" containerID="cri-o://f4c00760cb0f4c1f09bfd2dd634fb447463d45e29e047cd2973278a9cc0a88b8" gracePeriod=30 Dec 11 15:22:07 crc kubenswrapper[5050]: I1211 15:22:07.266239 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" 
podUID="c6aae3cc-13af-4bce-ac0b-d00638cf96e5" containerName="nova-metadata-metadata" containerID="cri-o://d916e7ef2fae5dce5990693b24c9a73e32b0e8550baa6e7fa7d3d6104b3a27ba" gracePeriod=30 Dec 11 15:22:07 crc kubenswrapper[5050]: I1211 15:22:07.567471 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Dec 11 15:22:08 crc kubenswrapper[5050]: I1211 15:22:08.042478 5050 generic.go:334] "Generic (PLEG): container finished" podID="be35db30-16f9-4271-888b-47d7c51d1d1f" containerID="71677816fd695eda83f3319b1edd425d19cf185a9756684bf46608f372809ce9" exitCode=143 Dec 11 15:22:08 crc kubenswrapper[5050]: I1211 15:22:08.042553 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"be35db30-16f9-4271-888b-47d7c51d1d1f","Type":"ContainerDied","Data":"71677816fd695eda83f3319b1edd425d19cf185a9756684bf46608f372809ce9"} Dec 11 15:22:08 crc kubenswrapper[5050]: I1211 15:22:08.044253 5050 generic.go:334] "Generic (PLEG): container finished" podID="c6aae3cc-13af-4bce-ac0b-d00638cf96e5" containerID="f4c00760cb0f4c1f09bfd2dd634fb447463d45e29e047cd2973278a9cc0a88b8" exitCode=143 Dec 11 15:22:08 crc kubenswrapper[5050]: I1211 15:22:08.044903 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c6aae3cc-13af-4bce-ac0b-d00638cf96e5","Type":"ContainerDied","Data":"f4c00760cb0f4c1f09bfd2dd634fb447463d45e29e047cd2973278a9cc0a88b8"} Dec 11 15:22:09 crc kubenswrapper[5050]: I1211 15:22:09.053122 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="497b2933-da2f-4dce-9a38-6307ad42c044" containerName="nova-scheduler-scheduler" containerID="cri-o://ffb551f7b57b343e73eda1b90ae83c614f10683bd66dc14fe8c51a84474196a3" gracePeriod=30 Dec 11 15:22:11 crc kubenswrapper[5050]: I1211 15:22:11.071532 5050 generic.go:334] "Generic (PLEG): container finished" podID="c6aae3cc-13af-4bce-ac0b-d00638cf96e5" containerID="d916e7ef2fae5dce5990693b24c9a73e32b0e8550baa6e7fa7d3d6104b3a27ba" exitCode=0 Dec 11 15:22:11 crc kubenswrapper[5050]: I1211 15:22:11.072053 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c6aae3cc-13af-4bce-ac0b-d00638cf96e5","Type":"ContainerDied","Data":"d916e7ef2fae5dce5990693b24c9a73e32b0e8550baa6e7fa7d3d6104b3a27ba"} Dec 11 15:22:11 crc kubenswrapper[5050]: I1211 15:22:11.152868 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 11 15:22:11 crc kubenswrapper[5050]: I1211 15:22:11.242204 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8sf9l\" (UniqueName: \"kubernetes.io/projected/c6aae3cc-13af-4bce-ac0b-d00638cf96e5-kube-api-access-8sf9l\") pod \"c6aae3cc-13af-4bce-ac0b-d00638cf96e5\" (UID: \"c6aae3cc-13af-4bce-ac0b-d00638cf96e5\") " Dec 11 15:22:11 crc kubenswrapper[5050]: I1211 15:22:11.242361 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6aae3cc-13af-4bce-ac0b-d00638cf96e5-combined-ca-bundle\") pod \"c6aae3cc-13af-4bce-ac0b-d00638cf96e5\" (UID: \"c6aae3cc-13af-4bce-ac0b-d00638cf96e5\") " Dec 11 15:22:11 crc kubenswrapper[5050]: I1211 15:22:11.242398 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c6aae3cc-13af-4bce-ac0b-d00638cf96e5-logs\") pod \"c6aae3cc-13af-4bce-ac0b-d00638cf96e5\" (UID: \"c6aae3cc-13af-4bce-ac0b-d00638cf96e5\") " Dec 11 15:22:11 crc kubenswrapper[5050]: I1211 15:22:11.242419 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6aae3cc-13af-4bce-ac0b-d00638cf96e5-config-data\") pod \"c6aae3cc-13af-4bce-ac0b-d00638cf96e5\" (UID: \"c6aae3cc-13af-4bce-ac0b-d00638cf96e5\") " Dec 11 15:22:11 crc kubenswrapper[5050]: I1211 15:22:11.242854 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c6aae3cc-13af-4bce-ac0b-d00638cf96e5-logs" (OuterVolumeSpecName: "logs") pod "c6aae3cc-13af-4bce-ac0b-d00638cf96e5" (UID: "c6aae3cc-13af-4bce-ac0b-d00638cf96e5"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 15:22:11 crc kubenswrapper[5050]: I1211 15:22:11.247391 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6aae3cc-13af-4bce-ac0b-d00638cf96e5-kube-api-access-8sf9l" (OuterVolumeSpecName: "kube-api-access-8sf9l") pod "c6aae3cc-13af-4bce-ac0b-d00638cf96e5" (UID: "c6aae3cc-13af-4bce-ac0b-d00638cf96e5"). InnerVolumeSpecName "kube-api-access-8sf9l". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 15:22:11 crc kubenswrapper[5050]: I1211 15:22:11.270274 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6aae3cc-13af-4bce-ac0b-d00638cf96e5-config-data" (OuterVolumeSpecName: "config-data") pod "c6aae3cc-13af-4bce-ac0b-d00638cf96e5" (UID: "c6aae3cc-13af-4bce-ac0b-d00638cf96e5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 15:22:11 crc kubenswrapper[5050]: I1211 15:22:11.302398 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6aae3cc-13af-4bce-ac0b-d00638cf96e5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c6aae3cc-13af-4bce-ac0b-d00638cf96e5" (UID: "c6aae3cc-13af-4bce-ac0b-d00638cf96e5"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 15:22:11 crc kubenswrapper[5050]: E1211 15:22:11.327067 5050 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ffb551f7b57b343e73eda1b90ae83c614f10683bd66dc14fe8c51a84474196a3" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 11 15:22:11 crc kubenswrapper[5050]: E1211 15:22:11.328376 5050 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ffb551f7b57b343e73eda1b90ae83c614f10683bd66dc14fe8c51a84474196a3" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 11 15:22:11 crc kubenswrapper[5050]: E1211 15:22:11.329594 5050 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ffb551f7b57b343e73eda1b90ae83c614f10683bd66dc14fe8c51a84474196a3" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 11 15:22:11 crc kubenswrapper[5050]: E1211 15:22:11.329726 5050 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="497b2933-da2f-4dce-9a38-6307ad42c044" containerName="nova-scheduler-scheduler" Dec 11 15:22:11 crc kubenswrapper[5050]: I1211 15:22:11.344030 5050 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6aae3cc-13af-4bce-ac0b-d00638cf96e5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 15:22:11 crc kubenswrapper[5050]: I1211 15:22:11.344203 5050 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c6aae3cc-13af-4bce-ac0b-d00638cf96e5-logs\") on node \"crc\" DevicePath \"\"" Dec 11 15:22:11 crc kubenswrapper[5050]: I1211 15:22:11.344314 5050 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6aae3cc-13af-4bce-ac0b-d00638cf96e5-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 15:22:11 crc kubenswrapper[5050]: I1211 15:22:11.344382 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8sf9l\" (UniqueName: \"kubernetes.io/projected/c6aae3cc-13af-4bce-ac0b-d00638cf96e5-kube-api-access-8sf9l\") on node \"crc\" DevicePath \"\"" Dec 11 15:22:11 crc kubenswrapper[5050]: E1211 15:22:11.532877 5050 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5a630a5b_c349_4e2f_876a_0b82485a8221.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5a630a5b_c349_4e2f_876a_0b82485a8221.slice/crio-a928b2c8a4583771b4e46223adef9e73c3dfe4060ad2f22b56dcc4c23a60dc1e\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda80b79e5_138b_4e71_ab5e_aa8805cce0b5.slice/crio-e09a0786d1e30296342f9183180cf055bb58bbf45f4ab7cc1d293b2b154af638\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda80b79e5_138b_4e71_ab5e_aa8805cce0b5.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod04dc7ae1_2485_4ffe_a853_4ef671794e68.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd35e2a89_ca99_46a0_86ce_83d7eac9733e.slice/crio-1a581d6d1f84876519a0950858ffc6b372b8edafa03ab517f443e441615dad56\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd35e2a89_ca99_46a0_86ce_83d7eac9733e.slice\": RecentStats: unable to find data in memory cache]" Dec 11 15:22:12 crc kubenswrapper[5050]: I1211 15:22:12.067194 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 11 15:22:12 crc kubenswrapper[5050]: I1211 15:22:12.082833 5050 generic.go:334] "Generic (PLEG): container finished" podID="be35db30-16f9-4271-888b-47d7c51d1d1f" containerID="df9dc53af5abd15e404a800bf45fd0f9dc8def9d71e7182c7f79e07ae15e88a0" exitCode=0 Dec 11 15:22:12 crc kubenswrapper[5050]: I1211 15:22:12.082918 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 11 15:22:12 crc kubenswrapper[5050]: I1211 15:22:12.082933 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"be35db30-16f9-4271-888b-47d7c51d1d1f","Type":"ContainerDied","Data":"df9dc53af5abd15e404a800bf45fd0f9dc8def9d71e7182c7f79e07ae15e88a0"} Dec 11 15:22:12 crc kubenswrapper[5050]: I1211 15:22:12.084351 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"be35db30-16f9-4271-888b-47d7c51d1d1f","Type":"ContainerDied","Data":"0a2f27de62271732f23c5b56a841b0a7bbb7a3d469c0196fb2e50fe6dec73d98"} Dec 11 15:22:12 crc kubenswrapper[5050]: I1211 15:22:12.084395 5050 scope.go:117] "RemoveContainer" containerID="df9dc53af5abd15e404a800bf45fd0f9dc8def9d71e7182c7f79e07ae15e88a0" Dec 11 15:22:12 crc kubenswrapper[5050]: I1211 15:22:12.089359 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c6aae3cc-13af-4bce-ac0b-d00638cf96e5","Type":"ContainerDied","Data":"b4ae26dd1238ef8e06f2705280bc7d731b9237f753d18e84ae09219b148c9a29"} Dec 11 15:22:12 crc kubenswrapper[5050]: I1211 15:22:12.089482 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 11 15:22:12 crc kubenswrapper[5050]: I1211 15:22:12.114564 5050 scope.go:117] "RemoveContainer" containerID="71677816fd695eda83f3319b1edd425d19cf185a9756684bf46608f372809ce9" Dec 11 15:22:12 crc kubenswrapper[5050]: I1211 15:22:12.121549 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 11 15:22:12 crc kubenswrapper[5050]: I1211 15:22:12.131607 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Dec 11 15:22:12 crc kubenswrapper[5050]: I1211 15:22:12.147760 5050 scope.go:117] "RemoveContainer" containerID="df9dc53af5abd15e404a800bf45fd0f9dc8def9d71e7182c7f79e07ae15e88a0" Dec 11 15:22:12 crc kubenswrapper[5050]: I1211 15:22:12.149092 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Dec 11 15:22:12 crc kubenswrapper[5050]: E1211 15:22:12.149274 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"df9dc53af5abd15e404a800bf45fd0f9dc8def9d71e7182c7f79e07ae15e88a0\": container with ID starting with df9dc53af5abd15e404a800bf45fd0f9dc8def9d71e7182c7f79e07ae15e88a0 not found: ID does not exist" containerID="df9dc53af5abd15e404a800bf45fd0f9dc8def9d71e7182c7f79e07ae15e88a0" Dec 11 15:22:12 crc kubenswrapper[5050]: I1211 15:22:12.149321 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df9dc53af5abd15e404a800bf45fd0f9dc8def9d71e7182c7f79e07ae15e88a0"} err="failed to get container status \"df9dc53af5abd15e404a800bf45fd0f9dc8def9d71e7182c7f79e07ae15e88a0\": rpc error: code = NotFound desc = could not find container \"df9dc53af5abd15e404a800bf45fd0f9dc8def9d71e7182c7f79e07ae15e88a0\": container with ID starting with df9dc53af5abd15e404a800bf45fd0f9dc8def9d71e7182c7f79e07ae15e88a0 not found: ID does not exist" Dec 11 15:22:12 crc kubenswrapper[5050]: I1211 15:22:12.149390 5050 scope.go:117] "RemoveContainer" containerID="71677816fd695eda83f3319b1edd425d19cf185a9756684bf46608f372809ce9" Dec 11 15:22:12 crc kubenswrapper[5050]: E1211 15:22:12.149618 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6aae3cc-13af-4bce-ac0b-d00638cf96e5" containerName="nova-metadata-log" Dec 11 15:22:12 crc kubenswrapper[5050]: I1211 15:22:12.149638 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6aae3cc-13af-4bce-ac0b-d00638cf96e5" containerName="nova-metadata-log" Dec 11 15:22:12 crc kubenswrapper[5050]: E1211 15:22:12.149660 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be35db30-16f9-4271-888b-47d7c51d1d1f" containerName="nova-api-api" Dec 11 15:22:12 crc kubenswrapper[5050]: I1211 15:22:12.149668 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="be35db30-16f9-4271-888b-47d7c51d1d1f" containerName="nova-api-api" Dec 11 15:22:12 crc kubenswrapper[5050]: E1211 15:22:12.149678 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50d84a10-51b0-4e8e-a413-727685826a4d" containerName="nova-manage" Dec 11 15:22:12 crc kubenswrapper[5050]: I1211 15:22:12.149685 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="50d84a10-51b0-4e8e-a413-727685826a4d" containerName="nova-manage" Dec 11 15:22:12 crc kubenswrapper[5050]: E1211 15:22:12.149728 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6aae3cc-13af-4bce-ac0b-d00638cf96e5" containerName="nova-metadata-metadata" Dec 11 15:22:12 crc kubenswrapper[5050]: I1211 
15:22:12.149736 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6aae3cc-13af-4bce-ac0b-d00638cf96e5" containerName="nova-metadata-metadata" Dec 11 15:22:12 crc kubenswrapper[5050]: E1211 15:22:12.149737 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"71677816fd695eda83f3319b1edd425d19cf185a9756684bf46608f372809ce9\": container with ID starting with 71677816fd695eda83f3319b1edd425d19cf185a9756684bf46608f372809ce9 not found: ID does not exist" containerID="71677816fd695eda83f3319b1edd425d19cf185a9756684bf46608f372809ce9" Dec 11 15:22:12 crc kubenswrapper[5050]: E1211 15:22:12.149753 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be35db30-16f9-4271-888b-47d7c51d1d1f" containerName="nova-api-log" Dec 11 15:22:12 crc kubenswrapper[5050]: I1211 15:22:12.149753 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"71677816fd695eda83f3319b1edd425d19cf185a9756684bf46608f372809ce9"} err="failed to get container status \"71677816fd695eda83f3319b1edd425d19cf185a9756684bf46608f372809ce9\": rpc error: code = NotFound desc = could not find container \"71677816fd695eda83f3319b1edd425d19cf185a9756684bf46608f372809ce9\": container with ID starting with 71677816fd695eda83f3319b1edd425d19cf185a9756684bf46608f372809ce9 not found: ID does not exist" Dec 11 15:22:12 crc kubenswrapper[5050]: I1211 15:22:12.149761 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="be35db30-16f9-4271-888b-47d7c51d1d1f" containerName="nova-api-log" Dec 11 15:22:12 crc kubenswrapper[5050]: I1211 15:22:12.149765 5050 scope.go:117] "RemoveContainer" containerID="d916e7ef2fae5dce5990693b24c9a73e32b0e8550baa6e7fa7d3d6104b3a27ba" Dec 11 15:22:12 crc kubenswrapper[5050]: I1211 15:22:12.149980 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="be35db30-16f9-4271-888b-47d7c51d1d1f" containerName="nova-api-log" Dec 11 15:22:12 crc kubenswrapper[5050]: I1211 15:22:12.150036 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6aae3cc-13af-4bce-ac0b-d00638cf96e5" containerName="nova-metadata-log" Dec 11 15:22:12 crc kubenswrapper[5050]: I1211 15:22:12.150053 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6aae3cc-13af-4bce-ac0b-d00638cf96e5" containerName="nova-metadata-metadata" Dec 11 15:22:12 crc kubenswrapper[5050]: I1211 15:22:12.150067 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="50d84a10-51b0-4e8e-a413-727685826a4d" containerName="nova-manage" Dec 11 15:22:12 crc kubenswrapper[5050]: I1211 15:22:12.150085 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="be35db30-16f9-4271-888b-47d7c51d1d1f" containerName="nova-api-api" Dec 11 15:22:12 crc kubenswrapper[5050]: I1211 15:22:12.151281 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 11 15:22:12 crc kubenswrapper[5050]: I1211 15:22:12.153338 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Dec 11 15:22:12 crc kubenswrapper[5050]: I1211 15:22:12.162623 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 11 15:22:12 crc kubenswrapper[5050]: I1211 15:22:12.164610 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be35db30-16f9-4271-888b-47d7c51d1d1f-combined-ca-bundle\") pod \"be35db30-16f9-4271-888b-47d7c51d1d1f\" (UID: \"be35db30-16f9-4271-888b-47d7c51d1d1f\") " Dec 11 15:22:12 crc kubenswrapper[5050]: I1211 15:22:12.164708 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/be35db30-16f9-4271-888b-47d7c51d1d1f-logs\") pod \"be35db30-16f9-4271-888b-47d7c51d1d1f\" (UID: \"be35db30-16f9-4271-888b-47d7c51d1d1f\") " Dec 11 15:22:12 crc kubenswrapper[5050]: I1211 15:22:12.164752 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be35db30-16f9-4271-888b-47d7c51d1d1f-config-data\") pod \"be35db30-16f9-4271-888b-47d7c51d1d1f\" (UID: \"be35db30-16f9-4271-888b-47d7c51d1d1f\") " Dec 11 15:22:12 crc kubenswrapper[5050]: I1211 15:22:12.164770 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rlfbf\" (UniqueName: \"kubernetes.io/projected/be35db30-16f9-4271-888b-47d7c51d1d1f-kube-api-access-rlfbf\") pod \"be35db30-16f9-4271-888b-47d7c51d1d1f\" (UID: \"be35db30-16f9-4271-888b-47d7c51d1d1f\") " Dec 11 15:22:12 crc kubenswrapper[5050]: I1211 15:22:12.166771 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/be35db30-16f9-4271-888b-47d7c51d1d1f-logs" (OuterVolumeSpecName: "logs") pod "be35db30-16f9-4271-888b-47d7c51d1d1f" (UID: "be35db30-16f9-4271-888b-47d7c51d1d1f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 15:22:12 crc kubenswrapper[5050]: I1211 15:22:12.181788 5050 scope.go:117] "RemoveContainer" containerID="f4c00760cb0f4c1f09bfd2dd634fb447463d45e29e047cd2973278a9cc0a88b8" Dec 11 15:22:12 crc kubenswrapper[5050]: I1211 15:22:12.183109 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be35db30-16f9-4271-888b-47d7c51d1d1f-kube-api-access-rlfbf" (OuterVolumeSpecName: "kube-api-access-rlfbf") pod "be35db30-16f9-4271-888b-47d7c51d1d1f" (UID: "be35db30-16f9-4271-888b-47d7c51d1d1f"). InnerVolumeSpecName "kube-api-access-rlfbf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 15:22:12 crc kubenswrapper[5050]: I1211 15:22:12.192426 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be35db30-16f9-4271-888b-47d7c51d1d1f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "be35db30-16f9-4271-888b-47d7c51d1d1f" (UID: "be35db30-16f9-4271-888b-47d7c51d1d1f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 15:22:12 crc kubenswrapper[5050]: I1211 15:22:12.198061 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be35db30-16f9-4271-888b-47d7c51d1d1f-config-data" (OuterVolumeSpecName: "config-data") pod "be35db30-16f9-4271-888b-47d7c51d1d1f" (UID: "be35db30-16f9-4271-888b-47d7c51d1d1f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 15:22:12 crc kubenswrapper[5050]: I1211 15:22:12.266556 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwrg9\" (UniqueName: \"kubernetes.io/projected/93efaf6f-33d4-4fb3-ac60-f564a9496fdf-kube-api-access-gwrg9\") pod \"nova-metadata-0\" (UID: \"93efaf6f-33d4-4fb3-ac60-f564a9496fdf\") " pod="openstack/nova-metadata-0" Dec 11 15:22:12 crc kubenswrapper[5050]: I1211 15:22:12.266942 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/93efaf6f-33d4-4fb3-ac60-f564a9496fdf-logs\") pod \"nova-metadata-0\" (UID: \"93efaf6f-33d4-4fb3-ac60-f564a9496fdf\") " pod="openstack/nova-metadata-0" Dec 11 15:22:12 crc kubenswrapper[5050]: I1211 15:22:12.267084 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93efaf6f-33d4-4fb3-ac60-f564a9496fdf-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"93efaf6f-33d4-4fb3-ac60-f564a9496fdf\") " pod="openstack/nova-metadata-0" Dec 11 15:22:12 crc kubenswrapper[5050]: I1211 15:22:12.267405 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93efaf6f-33d4-4fb3-ac60-f564a9496fdf-config-data\") pod \"nova-metadata-0\" (UID: \"93efaf6f-33d4-4fb3-ac60-f564a9496fdf\") " pod="openstack/nova-metadata-0" Dec 11 15:22:12 crc kubenswrapper[5050]: I1211 15:22:12.267578 5050 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be35db30-16f9-4271-888b-47d7c51d1d1f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 15:22:12 crc kubenswrapper[5050]: I1211 15:22:12.267689 5050 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/be35db30-16f9-4271-888b-47d7c51d1d1f-logs\") on node \"crc\" DevicePath \"\"" Dec 11 15:22:12 crc kubenswrapper[5050]: I1211 15:22:12.267763 5050 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be35db30-16f9-4271-888b-47d7c51d1d1f-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 15:22:12 crc kubenswrapper[5050]: I1211 15:22:12.267843 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rlfbf\" (UniqueName: \"kubernetes.io/projected/be35db30-16f9-4271-888b-47d7c51d1d1f-kube-api-access-rlfbf\") on node \"crc\" DevicePath \"\"" Dec 11 15:22:12 crc kubenswrapper[5050]: I1211 15:22:12.369735 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gwrg9\" (UniqueName: \"kubernetes.io/projected/93efaf6f-33d4-4fb3-ac60-f564a9496fdf-kube-api-access-gwrg9\") pod \"nova-metadata-0\" (UID: \"93efaf6f-33d4-4fb3-ac60-f564a9496fdf\") " pod="openstack/nova-metadata-0" Dec 11 15:22:12 crc kubenswrapper[5050]: I1211 15:22:12.369799 5050 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/93efaf6f-33d4-4fb3-ac60-f564a9496fdf-logs\") pod \"nova-metadata-0\" (UID: \"93efaf6f-33d4-4fb3-ac60-f564a9496fdf\") " pod="openstack/nova-metadata-0" Dec 11 15:22:12 crc kubenswrapper[5050]: I1211 15:22:12.369864 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93efaf6f-33d4-4fb3-ac60-f564a9496fdf-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"93efaf6f-33d4-4fb3-ac60-f564a9496fdf\") " pod="openstack/nova-metadata-0" Dec 11 15:22:12 crc kubenswrapper[5050]: I1211 15:22:12.369940 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93efaf6f-33d4-4fb3-ac60-f564a9496fdf-config-data\") pod \"nova-metadata-0\" (UID: \"93efaf6f-33d4-4fb3-ac60-f564a9496fdf\") " pod="openstack/nova-metadata-0" Dec 11 15:22:12 crc kubenswrapper[5050]: I1211 15:22:12.370375 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/93efaf6f-33d4-4fb3-ac60-f564a9496fdf-logs\") pod \"nova-metadata-0\" (UID: \"93efaf6f-33d4-4fb3-ac60-f564a9496fdf\") " pod="openstack/nova-metadata-0" Dec 11 15:22:12 crc kubenswrapper[5050]: I1211 15:22:12.378873 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93efaf6f-33d4-4fb3-ac60-f564a9496fdf-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"93efaf6f-33d4-4fb3-ac60-f564a9496fdf\") " pod="openstack/nova-metadata-0" Dec 11 15:22:12 crc kubenswrapper[5050]: I1211 15:22:12.378925 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93efaf6f-33d4-4fb3-ac60-f564a9496fdf-config-data\") pod \"nova-metadata-0\" (UID: \"93efaf6f-33d4-4fb3-ac60-f564a9496fdf\") " pod="openstack/nova-metadata-0" Dec 11 15:22:12 crc kubenswrapper[5050]: I1211 15:22:12.390031 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwrg9\" (UniqueName: \"kubernetes.io/projected/93efaf6f-33d4-4fb3-ac60-f564a9496fdf-kube-api-access-gwrg9\") pod \"nova-metadata-0\" (UID: \"93efaf6f-33d4-4fb3-ac60-f564a9496fdf\") " pod="openstack/nova-metadata-0" Dec 11 15:22:12 crc kubenswrapper[5050]: I1211 15:22:12.419591 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 11 15:22:12 crc kubenswrapper[5050]: I1211 15:22:12.429345 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Dec 11 15:22:12 crc kubenswrapper[5050]: I1211 15:22:12.445244 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Dec 11 15:22:12 crc kubenswrapper[5050]: I1211 15:22:12.447536 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 11 15:22:12 crc kubenswrapper[5050]: I1211 15:22:12.449939 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Dec 11 15:22:12 crc kubenswrapper[5050]: I1211 15:22:12.469979 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 11 15:22:12 crc kubenswrapper[5050]: I1211 15:22:12.470968 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/04bd236f-ca75-4592-ba08-b2a0975485cf-logs\") pod \"nova-api-0\" (UID: \"04bd236f-ca75-4592-ba08-b2a0975485cf\") " pod="openstack/nova-api-0" Dec 11 15:22:12 crc kubenswrapper[5050]: I1211 15:22:12.471237 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hsvmz\" (UniqueName: \"kubernetes.io/projected/04bd236f-ca75-4592-ba08-b2a0975485cf-kube-api-access-hsvmz\") pod \"nova-api-0\" (UID: \"04bd236f-ca75-4592-ba08-b2a0975485cf\") " pod="openstack/nova-api-0" Dec 11 15:22:12 crc kubenswrapper[5050]: I1211 15:22:12.471313 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04bd236f-ca75-4592-ba08-b2a0975485cf-config-data\") pod \"nova-api-0\" (UID: \"04bd236f-ca75-4592-ba08-b2a0975485cf\") " pod="openstack/nova-api-0" Dec 11 15:22:12 crc kubenswrapper[5050]: I1211 15:22:12.471568 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 11 15:22:12 crc kubenswrapper[5050]: I1211 15:22:12.471729 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04bd236f-ca75-4592-ba08-b2a0975485cf-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"04bd236f-ca75-4592-ba08-b2a0975485cf\") " pod="openstack/nova-api-0" Dec 11 15:22:12 crc kubenswrapper[5050]: I1211 15:22:12.573472 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/04bd236f-ca75-4592-ba08-b2a0975485cf-logs\") pod \"nova-api-0\" (UID: \"04bd236f-ca75-4592-ba08-b2a0975485cf\") " pod="openstack/nova-api-0" Dec 11 15:22:12 crc kubenswrapper[5050]: I1211 15:22:12.573553 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hsvmz\" (UniqueName: \"kubernetes.io/projected/04bd236f-ca75-4592-ba08-b2a0975485cf-kube-api-access-hsvmz\") pod \"nova-api-0\" (UID: \"04bd236f-ca75-4592-ba08-b2a0975485cf\") " pod="openstack/nova-api-0" Dec 11 15:22:12 crc kubenswrapper[5050]: I1211 15:22:12.573583 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04bd236f-ca75-4592-ba08-b2a0975485cf-config-data\") pod \"nova-api-0\" (UID: \"04bd236f-ca75-4592-ba08-b2a0975485cf\") " pod="openstack/nova-api-0" Dec 11 15:22:12 crc kubenswrapper[5050]: I1211 15:22:12.573678 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04bd236f-ca75-4592-ba08-b2a0975485cf-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"04bd236f-ca75-4592-ba08-b2a0975485cf\") " pod="openstack/nova-api-0" Dec 11 15:22:12 crc kubenswrapper[5050]: I1211 15:22:12.574285 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"logs\" (UniqueName: \"kubernetes.io/empty-dir/04bd236f-ca75-4592-ba08-b2a0975485cf-logs\") pod \"nova-api-0\" (UID: \"04bd236f-ca75-4592-ba08-b2a0975485cf\") " pod="openstack/nova-api-0" Dec 11 15:22:12 crc kubenswrapper[5050]: I1211 15:22:12.581756 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04bd236f-ca75-4592-ba08-b2a0975485cf-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"04bd236f-ca75-4592-ba08-b2a0975485cf\") " pod="openstack/nova-api-0" Dec 11 15:22:12 crc kubenswrapper[5050]: I1211 15:22:12.582105 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04bd236f-ca75-4592-ba08-b2a0975485cf-config-data\") pod \"nova-api-0\" (UID: \"04bd236f-ca75-4592-ba08-b2a0975485cf\") " pod="openstack/nova-api-0" Dec 11 15:22:12 crc kubenswrapper[5050]: I1211 15:22:12.592486 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hsvmz\" (UniqueName: \"kubernetes.io/projected/04bd236f-ca75-4592-ba08-b2a0975485cf-kube-api-access-hsvmz\") pod \"nova-api-0\" (UID: \"04bd236f-ca75-4592-ba08-b2a0975485cf\") " pod="openstack/nova-api-0" Dec 11 15:22:12 crc kubenswrapper[5050]: I1211 15:22:12.764929 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 11 15:22:13 crc kubenswrapper[5050]: W1211 15:22:13.003190 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod93efaf6f_33d4_4fb3_ac60_f564a9496fdf.slice/crio-a0a02a5d8cf8dd39c12238fbb67886ab4dd31e979f84a5177f8032948f4de935 WatchSource:0}: Error finding container a0a02a5d8cf8dd39c12238fbb67886ab4dd31e979f84a5177f8032948f4de935: Status 404 returned error can't find the container with id a0a02a5d8cf8dd39c12238fbb67886ab4dd31e979f84a5177f8032948f4de935 Dec 11 15:22:13 crc kubenswrapper[5050]: I1211 15:22:13.003812 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 11 15:22:13 crc kubenswrapper[5050]: I1211 15:22:13.103229 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"93efaf6f-33d4-4fb3-ac60-f564a9496fdf","Type":"ContainerStarted","Data":"a0a02a5d8cf8dd39c12238fbb67886ab4dd31e979f84a5177f8032948f4de935"} Dec 11 15:22:13 crc kubenswrapper[5050]: I1211 15:22:13.189536 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 11 15:22:13 crc kubenswrapper[5050]: W1211 15:22:13.192438 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod04bd236f_ca75_4592_ba08_b2a0975485cf.slice/crio-cb4318c61c892f16a485c3a3a4b56b7465f31c00e1d391096f2249a30627c3ee WatchSource:0}: Error finding container cb4318c61c892f16a485c3a3a4b56b7465f31c00e1d391096f2249a30627c3ee: Status 404 returned error can't find the container with id cb4318c61c892f16a485c3a3a4b56b7465f31c00e1d391096f2249a30627c3ee Dec 11 15:22:13 crc kubenswrapper[5050]: I1211 15:22:13.579115 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be35db30-16f9-4271-888b-47d7c51d1d1f" path="/var/lib/kubelet/pods/be35db30-16f9-4271-888b-47d7c51d1d1f/volumes" Dec 11 15:22:13 crc kubenswrapper[5050]: I1211 15:22:13.580591 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c6aae3cc-13af-4bce-ac0b-d00638cf96e5" 
path="/var/lib/kubelet/pods/c6aae3cc-13af-4bce-ac0b-d00638cf96e5/volumes" Dec 11 15:22:13 crc kubenswrapper[5050]: I1211 15:22:13.925240 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 11 15:22:14 crc kubenswrapper[5050]: I1211 15:22:14.012945 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l57bn\" (UniqueName: \"kubernetes.io/projected/497b2933-da2f-4dce-9a38-6307ad42c044-kube-api-access-l57bn\") pod \"497b2933-da2f-4dce-9a38-6307ad42c044\" (UID: \"497b2933-da2f-4dce-9a38-6307ad42c044\") " Dec 11 15:22:14 crc kubenswrapper[5050]: I1211 15:22:14.013085 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/497b2933-da2f-4dce-9a38-6307ad42c044-config-data\") pod \"497b2933-da2f-4dce-9a38-6307ad42c044\" (UID: \"497b2933-da2f-4dce-9a38-6307ad42c044\") " Dec 11 15:22:14 crc kubenswrapper[5050]: I1211 15:22:14.013248 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/497b2933-da2f-4dce-9a38-6307ad42c044-combined-ca-bundle\") pod \"497b2933-da2f-4dce-9a38-6307ad42c044\" (UID: \"497b2933-da2f-4dce-9a38-6307ad42c044\") " Dec 11 15:22:14 crc kubenswrapper[5050]: I1211 15:22:14.024462 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/497b2933-da2f-4dce-9a38-6307ad42c044-kube-api-access-l57bn" (OuterVolumeSpecName: "kube-api-access-l57bn") pod "497b2933-da2f-4dce-9a38-6307ad42c044" (UID: "497b2933-da2f-4dce-9a38-6307ad42c044"). InnerVolumeSpecName "kube-api-access-l57bn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 15:22:14 crc kubenswrapper[5050]: I1211 15:22:14.040933 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/497b2933-da2f-4dce-9a38-6307ad42c044-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "497b2933-da2f-4dce-9a38-6307ad42c044" (UID: "497b2933-da2f-4dce-9a38-6307ad42c044"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 15:22:14 crc kubenswrapper[5050]: I1211 15:22:14.041829 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/497b2933-da2f-4dce-9a38-6307ad42c044-config-data" (OuterVolumeSpecName: "config-data") pod "497b2933-da2f-4dce-9a38-6307ad42c044" (UID: "497b2933-da2f-4dce-9a38-6307ad42c044"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 15:22:14 crc kubenswrapper[5050]: I1211 15:22:14.113861 5050 generic.go:334] "Generic (PLEG): container finished" podID="497b2933-da2f-4dce-9a38-6307ad42c044" containerID="ffb551f7b57b343e73eda1b90ae83c614f10683bd66dc14fe8c51a84474196a3" exitCode=0 Dec 11 15:22:14 crc kubenswrapper[5050]: I1211 15:22:14.113946 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Dec 11 15:22:14 crc kubenswrapper[5050]: I1211 15:22:14.113954 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"497b2933-da2f-4dce-9a38-6307ad42c044","Type":"ContainerDied","Data":"ffb551f7b57b343e73eda1b90ae83c614f10683bd66dc14fe8c51a84474196a3"} Dec 11 15:22:14 crc kubenswrapper[5050]: I1211 15:22:14.114312 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"497b2933-da2f-4dce-9a38-6307ad42c044","Type":"ContainerDied","Data":"f27bc068e0d76d608d56a3647df55b3328a29182d7d30b720403f540cd282e27"} Dec 11 15:22:14 crc kubenswrapper[5050]: I1211 15:22:14.114342 5050 scope.go:117] "RemoveContainer" containerID="ffb551f7b57b343e73eda1b90ae83c614f10683bd66dc14fe8c51a84474196a3" Dec 11 15:22:14 crc kubenswrapper[5050]: I1211 15:22:14.115371 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l57bn\" (UniqueName: \"kubernetes.io/projected/497b2933-da2f-4dce-9a38-6307ad42c044-kube-api-access-l57bn\") on node \"crc\" DevicePath \"\"" Dec 11 15:22:14 crc kubenswrapper[5050]: I1211 15:22:14.115397 5050 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/497b2933-da2f-4dce-9a38-6307ad42c044-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 15:22:14 crc kubenswrapper[5050]: I1211 15:22:14.115407 5050 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/497b2933-da2f-4dce-9a38-6307ad42c044-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 15:22:14 crc kubenswrapper[5050]: I1211 15:22:14.116179 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"93efaf6f-33d4-4fb3-ac60-f564a9496fdf","Type":"ContainerStarted","Data":"f9c2a19739f8baaac559ab31e190fdbf57aa90f2f7cd221cab39b46bb4dfa250"} Dec 11 15:22:14 crc kubenswrapper[5050]: I1211 15:22:14.116213 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"93efaf6f-33d4-4fb3-ac60-f564a9496fdf","Type":"ContainerStarted","Data":"b89e12da37cfa25c10b2219419b5164111657e01581a7ac6ad51cb02c0de55d6"} Dec 11 15:22:14 crc kubenswrapper[5050]: I1211 15:22:14.118214 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"04bd236f-ca75-4592-ba08-b2a0975485cf","Type":"ContainerStarted","Data":"6b074152650d06f0652cae8c5f58c063867201238db092a0eee18e9d607b157a"} Dec 11 15:22:14 crc kubenswrapper[5050]: I1211 15:22:14.118265 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"04bd236f-ca75-4592-ba08-b2a0975485cf","Type":"ContainerStarted","Data":"362345edf73a8955a8db318f46b29fa3a2aded319afa6bb49fa65831f409cfb3"} Dec 11 15:22:14 crc kubenswrapper[5050]: I1211 15:22:14.118281 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"04bd236f-ca75-4592-ba08-b2a0975485cf","Type":"ContainerStarted","Data":"cb4318c61c892f16a485c3a3a4b56b7465f31c00e1d391096f2249a30627c3ee"} Dec 11 15:22:14 crc kubenswrapper[5050]: I1211 15:22:14.139122 5050 scope.go:117] "RemoveContainer" containerID="ffb551f7b57b343e73eda1b90ae83c614f10683bd66dc14fe8c51a84474196a3" Dec 11 15:22:14 crc kubenswrapper[5050]: E1211 15:22:14.140663 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"ffb551f7b57b343e73eda1b90ae83c614f10683bd66dc14fe8c51a84474196a3\": container with ID starting with ffb551f7b57b343e73eda1b90ae83c614f10683bd66dc14fe8c51a84474196a3 not found: ID does not exist" containerID="ffb551f7b57b343e73eda1b90ae83c614f10683bd66dc14fe8c51a84474196a3" Dec 11 15:22:14 crc kubenswrapper[5050]: I1211 15:22:14.140712 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ffb551f7b57b343e73eda1b90ae83c614f10683bd66dc14fe8c51a84474196a3"} err="failed to get container status \"ffb551f7b57b343e73eda1b90ae83c614f10683bd66dc14fe8c51a84474196a3\": rpc error: code = NotFound desc = could not find container \"ffb551f7b57b343e73eda1b90ae83c614f10683bd66dc14fe8c51a84474196a3\": container with ID starting with ffb551f7b57b343e73eda1b90ae83c614f10683bd66dc14fe8c51a84474196a3 not found: ID does not exist" Dec 11 15:22:14 crc kubenswrapper[5050]: I1211 15:22:14.150418 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.150399856 podStartE2EDuration="2.150399856s" podCreationTimestamp="2025-12-11 15:22:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 15:22:14.144599681 +0000 UTC m=+5624.988322277" watchObservedRunningTime="2025-12-11 15:22:14.150399856 +0000 UTC m=+5624.994122442" Dec 11 15:22:14 crc kubenswrapper[5050]: I1211 15:22:14.171395 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.171378766 podStartE2EDuration="2.171378766s" podCreationTimestamp="2025-12-11 15:22:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 15:22:14.163884036 +0000 UTC m=+5625.007606622" watchObservedRunningTime="2025-12-11 15:22:14.171378766 +0000 UTC m=+5625.015101352" Dec 11 15:22:14 crc kubenswrapper[5050]: I1211 15:22:14.186143 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Dec 11 15:22:14 crc kubenswrapper[5050]: I1211 15:22:14.195992 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Dec 11 15:22:14 crc kubenswrapper[5050]: I1211 15:22:14.210164 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Dec 11 15:22:14 crc kubenswrapper[5050]: E1211 15:22:14.210524 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="497b2933-da2f-4dce-9a38-6307ad42c044" containerName="nova-scheduler-scheduler" Dec 11 15:22:14 crc kubenswrapper[5050]: I1211 15:22:14.210543 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="497b2933-da2f-4dce-9a38-6307ad42c044" containerName="nova-scheduler-scheduler" Dec 11 15:22:14 crc kubenswrapper[5050]: I1211 15:22:14.210731 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="497b2933-da2f-4dce-9a38-6307ad42c044" containerName="nova-scheduler-scheduler" Dec 11 15:22:14 crc kubenswrapper[5050]: I1211 15:22:14.220646 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Dec 11 15:22:14 crc kubenswrapper[5050]: I1211 15:22:14.225431 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Dec 11 15:22:14 crc kubenswrapper[5050]: I1211 15:22:14.238847 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 11 15:22:14 crc kubenswrapper[5050]: I1211 15:22:14.319875 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6n6g\" (UniqueName: \"kubernetes.io/projected/b23edbac-1c5a-49ce-9cb3-39c6b3bf689e-kube-api-access-k6n6g\") pod \"nova-scheduler-0\" (UID: \"b23edbac-1c5a-49ce-9cb3-39c6b3bf689e\") " pod="openstack/nova-scheduler-0" Dec 11 15:22:14 crc kubenswrapper[5050]: I1211 15:22:14.319969 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b23edbac-1c5a-49ce-9cb3-39c6b3bf689e-config-data\") pod \"nova-scheduler-0\" (UID: \"b23edbac-1c5a-49ce-9cb3-39c6b3bf689e\") " pod="openstack/nova-scheduler-0" Dec 11 15:22:14 crc kubenswrapper[5050]: I1211 15:22:14.320054 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b23edbac-1c5a-49ce-9cb3-39c6b3bf689e-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"b23edbac-1c5a-49ce-9cb3-39c6b3bf689e\") " pod="openstack/nova-scheduler-0" Dec 11 15:22:14 crc kubenswrapper[5050]: I1211 15:22:14.421139 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k6n6g\" (UniqueName: \"kubernetes.io/projected/b23edbac-1c5a-49ce-9cb3-39c6b3bf689e-kube-api-access-k6n6g\") pod \"nova-scheduler-0\" (UID: \"b23edbac-1c5a-49ce-9cb3-39c6b3bf689e\") " pod="openstack/nova-scheduler-0" Dec 11 15:22:14 crc kubenswrapper[5050]: I1211 15:22:14.421237 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b23edbac-1c5a-49ce-9cb3-39c6b3bf689e-config-data\") pod \"nova-scheduler-0\" (UID: \"b23edbac-1c5a-49ce-9cb3-39c6b3bf689e\") " pod="openstack/nova-scheduler-0" Dec 11 15:22:14 crc kubenswrapper[5050]: I1211 15:22:14.421292 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b23edbac-1c5a-49ce-9cb3-39c6b3bf689e-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"b23edbac-1c5a-49ce-9cb3-39c6b3bf689e\") " pod="openstack/nova-scheduler-0" Dec 11 15:22:14 crc kubenswrapper[5050]: I1211 15:22:14.424964 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b23edbac-1c5a-49ce-9cb3-39c6b3bf689e-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"b23edbac-1c5a-49ce-9cb3-39c6b3bf689e\") " pod="openstack/nova-scheduler-0" Dec 11 15:22:14 crc kubenswrapper[5050]: I1211 15:22:14.426098 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b23edbac-1c5a-49ce-9cb3-39c6b3bf689e-config-data\") pod \"nova-scheduler-0\" (UID: \"b23edbac-1c5a-49ce-9cb3-39c6b3bf689e\") " pod="openstack/nova-scheduler-0" Dec 11 15:22:14 crc kubenswrapper[5050]: I1211 15:22:14.436671 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k6n6g\" (UniqueName: 
\"kubernetes.io/projected/b23edbac-1c5a-49ce-9cb3-39c6b3bf689e-kube-api-access-k6n6g\") pod \"nova-scheduler-0\" (UID: \"b23edbac-1c5a-49ce-9cb3-39c6b3bf689e\") " pod="openstack/nova-scheduler-0" Dec 11 15:22:14 crc kubenswrapper[5050]: I1211 15:22:14.546975 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 11 15:22:14 crc kubenswrapper[5050]: I1211 15:22:14.810318 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 11 15:22:15 crc kubenswrapper[5050]: I1211 15:22:15.133531 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"b23edbac-1c5a-49ce-9cb3-39c6b3bf689e","Type":"ContainerStarted","Data":"380ed4609ec1c05c25da4a2e18dc163423299fa05e358e41ced42424cd8e919a"} Dec 11 15:22:15 crc kubenswrapper[5050]: I1211 15:22:15.133854 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"b23edbac-1c5a-49ce-9cb3-39c6b3bf689e","Type":"ContainerStarted","Data":"f1ba3da98603fd64978ea35e84a3525b084b8a51c799e62f060913ce6675436c"} Dec 11 15:22:15 crc kubenswrapper[5050]: I1211 15:22:15.157717 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=1.157698109 podStartE2EDuration="1.157698109s" podCreationTimestamp="2025-12-11 15:22:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 15:22:15.147600809 +0000 UTC m=+5625.991323405" watchObservedRunningTime="2025-12-11 15:22:15.157698109 +0000 UTC m=+5626.001420705" Dec 11 15:22:15 crc kubenswrapper[5050]: I1211 15:22:15.559300 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="497b2933-da2f-4dce-9a38-6307ad42c044" path="/var/lib/kubelet/pods/497b2933-da2f-4dce-9a38-6307ad42c044/volumes" Dec 11 15:22:17 crc kubenswrapper[5050]: I1211 15:22:17.472190 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 11 15:22:17 crc kubenswrapper[5050]: I1211 15:22:17.472811 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 11 15:22:19 crc kubenswrapper[5050]: I1211 15:22:19.569755 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Dec 11 15:22:21 crc kubenswrapper[5050]: E1211 15:22:21.761948 5050 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda80b79e5_138b_4e71_ab5e_aa8805cce0b5.slice/crio-e09a0786d1e30296342f9183180cf055bb58bbf45f4ab7cc1d293b2b154af638\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5a630a5b_c349_4e2f_876a_0b82485a8221.slice/crio-a928b2c8a4583771b4e46223adef9e73c3dfe4060ad2f22b56dcc4c23a60dc1e\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod04dc7ae1_2485_4ffe_a853_4ef671794e68.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd35e2a89_ca99_46a0_86ce_83d7eac9733e.slice/crio-1a581d6d1f84876519a0950858ffc6b372b8edafa03ab517f443e441615dad56\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda80b79e5_138b_4e71_ab5e_aa8805cce0b5.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5a630a5b_c349_4e2f_876a_0b82485a8221.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd35e2a89_ca99_46a0_86ce_83d7eac9733e.slice\": RecentStats: unable to find data in memory cache]" Dec 11 15:22:22 crc kubenswrapper[5050]: I1211 15:22:22.471734 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Dec 11 15:22:22 crc kubenswrapper[5050]: I1211 15:22:22.472277 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Dec 11 15:22:22 crc kubenswrapper[5050]: I1211 15:22:22.765869 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 11 15:22:22 crc kubenswrapper[5050]: I1211 15:22:22.766602 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 11 15:22:23 crc kubenswrapper[5050]: I1211 15:22:23.553231 5050 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="93efaf6f-33d4-4fb3-ac60-f564a9496fdf" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"http://10.217.1.72:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:22:23 crc kubenswrapper[5050]: I1211 15:22:23.553296 5050 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="93efaf6f-33d4-4fb3-ac60-f564a9496fdf" containerName="nova-metadata-log" probeResult="failure" output="Get \"http://10.217.1.72:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:22:23 crc kubenswrapper[5050]: I1211 15:22:23.848219 5050 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="04bd236f-ca75-4592-ba08-b2a0975485cf" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.1.73:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:22:23 crc kubenswrapper[5050]: I1211 15:22:23.848219 5050 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="04bd236f-ca75-4592-ba08-b2a0975485cf" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.1.73:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:22:24 crc kubenswrapper[5050]: I1211 15:22:24.547357 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Dec 11 15:22:24 crc kubenswrapper[5050]: I1211 15:22:24.578062 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Dec 11 15:22:25 crc kubenswrapper[5050]: I1211 15:22:25.253876 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Dec 11 15:22:32 crc kubenswrapper[5050]: I1211 15:22:32.474576 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Dec 11 15:22:32 crc kubenswrapper[5050]: I1211 15:22:32.475055 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Dec 11 15:22:32 crc kubenswrapper[5050]: I1211 15:22:32.477487 5050 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Dec 11 15:22:32 crc kubenswrapper[5050]: I1211 15:22:32.478164 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Dec 11 15:22:32 crc kubenswrapper[5050]: I1211 15:22:32.768817 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Dec 11 15:22:32 crc kubenswrapper[5050]: I1211 15:22:32.770647 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Dec 11 15:22:32 crc kubenswrapper[5050]: I1211 15:22:32.771800 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Dec 11 15:22:32 crc kubenswrapper[5050]: I1211 15:22:32.777226 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Dec 11 15:22:33 crc kubenswrapper[5050]: I1211 15:22:33.291473 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Dec 11 15:22:33 crc kubenswrapper[5050]: I1211 15:22:33.294818 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Dec 11 15:22:33 crc kubenswrapper[5050]: I1211 15:22:33.459787 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-68d6fbbf59-rwsxw"] Dec 11 15:22:33 crc kubenswrapper[5050]: I1211 15:22:33.461857 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-68d6fbbf59-rwsxw" Dec 11 15:22:33 crc kubenswrapper[5050]: I1211 15:22:33.516615 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-68d6fbbf59-rwsxw"] Dec 11 15:22:33 crc kubenswrapper[5050]: I1211 15:22:33.667816 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7bdaee0-e5cc-41e6-8ebc-f471529a89a8-config\") pod \"dnsmasq-dns-68d6fbbf59-rwsxw\" (UID: \"e7bdaee0-e5cc-41e6-8ebc-f471529a89a8\") " pod="openstack/dnsmasq-dns-68d6fbbf59-rwsxw" Dec 11 15:22:33 crc kubenswrapper[5050]: I1211 15:22:33.667892 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e7bdaee0-e5cc-41e6-8ebc-f471529a89a8-ovsdbserver-sb\") pod \"dnsmasq-dns-68d6fbbf59-rwsxw\" (UID: \"e7bdaee0-e5cc-41e6-8ebc-f471529a89a8\") " pod="openstack/dnsmasq-dns-68d6fbbf59-rwsxw" Dec 11 15:22:33 crc kubenswrapper[5050]: I1211 15:22:33.667931 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e7bdaee0-e5cc-41e6-8ebc-f471529a89a8-dns-svc\") pod \"dnsmasq-dns-68d6fbbf59-rwsxw\" (UID: \"e7bdaee0-e5cc-41e6-8ebc-f471529a89a8\") " pod="openstack/dnsmasq-dns-68d6fbbf59-rwsxw" Dec 11 15:22:33 crc kubenswrapper[5050]: I1211 15:22:33.668213 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7hjb\" (UniqueName: \"kubernetes.io/projected/e7bdaee0-e5cc-41e6-8ebc-f471529a89a8-kube-api-access-g7hjb\") pod \"dnsmasq-dns-68d6fbbf59-rwsxw\" (UID: \"e7bdaee0-e5cc-41e6-8ebc-f471529a89a8\") " pod="openstack/dnsmasq-dns-68d6fbbf59-rwsxw" Dec 11 15:22:33 crc kubenswrapper[5050]: I1211 15:22:33.668370 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" 
(UniqueName: \"kubernetes.io/configmap/e7bdaee0-e5cc-41e6-8ebc-f471529a89a8-ovsdbserver-nb\") pod \"dnsmasq-dns-68d6fbbf59-rwsxw\" (UID: \"e7bdaee0-e5cc-41e6-8ebc-f471529a89a8\") " pod="openstack/dnsmasq-dns-68d6fbbf59-rwsxw" Dec 11 15:22:33 crc kubenswrapper[5050]: I1211 15:22:33.770541 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7bdaee0-e5cc-41e6-8ebc-f471529a89a8-config\") pod \"dnsmasq-dns-68d6fbbf59-rwsxw\" (UID: \"e7bdaee0-e5cc-41e6-8ebc-f471529a89a8\") " pod="openstack/dnsmasq-dns-68d6fbbf59-rwsxw" Dec 11 15:22:33 crc kubenswrapper[5050]: I1211 15:22:33.770627 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e7bdaee0-e5cc-41e6-8ebc-f471529a89a8-ovsdbserver-sb\") pod \"dnsmasq-dns-68d6fbbf59-rwsxw\" (UID: \"e7bdaee0-e5cc-41e6-8ebc-f471529a89a8\") " pod="openstack/dnsmasq-dns-68d6fbbf59-rwsxw" Dec 11 15:22:33 crc kubenswrapper[5050]: I1211 15:22:33.770661 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e7bdaee0-e5cc-41e6-8ebc-f471529a89a8-dns-svc\") pod \"dnsmasq-dns-68d6fbbf59-rwsxw\" (UID: \"e7bdaee0-e5cc-41e6-8ebc-f471529a89a8\") " pod="openstack/dnsmasq-dns-68d6fbbf59-rwsxw" Dec 11 15:22:33 crc kubenswrapper[5050]: I1211 15:22:33.770719 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g7hjb\" (UniqueName: \"kubernetes.io/projected/e7bdaee0-e5cc-41e6-8ebc-f471529a89a8-kube-api-access-g7hjb\") pod \"dnsmasq-dns-68d6fbbf59-rwsxw\" (UID: \"e7bdaee0-e5cc-41e6-8ebc-f471529a89a8\") " pod="openstack/dnsmasq-dns-68d6fbbf59-rwsxw" Dec 11 15:22:33 crc kubenswrapper[5050]: I1211 15:22:33.770764 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e7bdaee0-e5cc-41e6-8ebc-f471529a89a8-ovsdbserver-nb\") pod \"dnsmasq-dns-68d6fbbf59-rwsxw\" (UID: \"e7bdaee0-e5cc-41e6-8ebc-f471529a89a8\") " pod="openstack/dnsmasq-dns-68d6fbbf59-rwsxw" Dec 11 15:22:33 crc kubenswrapper[5050]: I1211 15:22:33.771722 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e7bdaee0-e5cc-41e6-8ebc-f471529a89a8-ovsdbserver-nb\") pod \"dnsmasq-dns-68d6fbbf59-rwsxw\" (UID: \"e7bdaee0-e5cc-41e6-8ebc-f471529a89a8\") " pod="openstack/dnsmasq-dns-68d6fbbf59-rwsxw" Dec 11 15:22:33 crc kubenswrapper[5050]: I1211 15:22:33.771734 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7bdaee0-e5cc-41e6-8ebc-f471529a89a8-config\") pod \"dnsmasq-dns-68d6fbbf59-rwsxw\" (UID: \"e7bdaee0-e5cc-41e6-8ebc-f471529a89a8\") " pod="openstack/dnsmasq-dns-68d6fbbf59-rwsxw" Dec 11 15:22:33 crc kubenswrapper[5050]: I1211 15:22:33.772220 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e7bdaee0-e5cc-41e6-8ebc-f471529a89a8-dns-svc\") pod \"dnsmasq-dns-68d6fbbf59-rwsxw\" (UID: \"e7bdaee0-e5cc-41e6-8ebc-f471529a89a8\") " pod="openstack/dnsmasq-dns-68d6fbbf59-rwsxw" Dec 11 15:22:33 crc kubenswrapper[5050]: I1211 15:22:33.772548 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e7bdaee0-e5cc-41e6-8ebc-f471529a89a8-ovsdbserver-sb\") pod \"dnsmasq-dns-68d6fbbf59-rwsxw\" 
(UID: \"e7bdaee0-e5cc-41e6-8ebc-f471529a89a8\") " pod="openstack/dnsmasq-dns-68d6fbbf59-rwsxw" Dec 11 15:22:33 crc kubenswrapper[5050]: I1211 15:22:33.788487 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g7hjb\" (UniqueName: \"kubernetes.io/projected/e7bdaee0-e5cc-41e6-8ebc-f471529a89a8-kube-api-access-g7hjb\") pod \"dnsmasq-dns-68d6fbbf59-rwsxw\" (UID: \"e7bdaee0-e5cc-41e6-8ebc-f471529a89a8\") " pod="openstack/dnsmasq-dns-68d6fbbf59-rwsxw" Dec 11 15:22:33 crc kubenswrapper[5050]: I1211 15:22:33.811575 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-68d6fbbf59-rwsxw" Dec 11 15:22:34 crc kubenswrapper[5050]: I1211 15:22:34.373129 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-68d6fbbf59-rwsxw"] Dec 11 15:22:35 crc kubenswrapper[5050]: I1211 15:22:35.307288 5050 generic.go:334] "Generic (PLEG): container finished" podID="e7bdaee0-e5cc-41e6-8ebc-f471529a89a8" containerID="3a7c5d8248142decc485d79c6b9769faad19bceb7be82b069818796d04743137" exitCode=0 Dec 11 15:22:35 crc kubenswrapper[5050]: I1211 15:22:35.307331 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68d6fbbf59-rwsxw" event={"ID":"e7bdaee0-e5cc-41e6-8ebc-f471529a89a8","Type":"ContainerDied","Data":"3a7c5d8248142decc485d79c6b9769faad19bceb7be82b069818796d04743137"} Dec 11 15:22:35 crc kubenswrapper[5050]: I1211 15:22:35.307605 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68d6fbbf59-rwsxw" event={"ID":"e7bdaee0-e5cc-41e6-8ebc-f471529a89a8","Type":"ContainerStarted","Data":"db7ed695e6a39ff066038c2564a97defa9911c3b38d42bf3e0a05cec69f7221e"} Dec 11 15:22:36 crc kubenswrapper[5050]: I1211 15:22:36.318468 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68d6fbbf59-rwsxw" event={"ID":"e7bdaee0-e5cc-41e6-8ebc-f471529a89a8","Type":"ContainerStarted","Data":"4daec903833e5a1d9a7b3772cdd54b26b3dbb6ce85c21b707b4f8478a2ed9ef9"} Dec 11 15:22:36 crc kubenswrapper[5050]: I1211 15:22:36.319719 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-68d6fbbf59-rwsxw" Dec 11 15:22:36 crc kubenswrapper[5050]: I1211 15:22:36.337081 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-68d6fbbf59-rwsxw" podStartSLOduration=3.337061055 podStartE2EDuration="3.337061055s" podCreationTimestamp="2025-12-11 15:22:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 15:22:36.336160921 +0000 UTC m=+5647.179883497" watchObservedRunningTime="2025-12-11 15:22:36.337061055 +0000 UTC m=+5647.180783641" Dec 11 15:22:43 crc kubenswrapper[5050]: I1211 15:22:43.813216 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-68d6fbbf59-rwsxw" Dec 11 15:22:43 crc kubenswrapper[5050]: I1211 15:22:43.876060 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7dc945cfcf-bfgvl"] Dec 11 15:22:43 crc kubenswrapper[5050]: I1211 15:22:43.876363 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7dc945cfcf-bfgvl" podUID="34f44ada-5f76-4b45-89ba-132caf64ae4a" containerName="dnsmasq-dns" containerID="cri-o://1915832d9493f4f7d5d3aa66249038d1213fb2e67070b301c39c161b5c64a9c0" gracePeriod=10 Dec 11 15:22:44 crc kubenswrapper[5050]: I1211 15:22:44.384477 
5050 generic.go:334] "Generic (PLEG): container finished" podID="34f44ada-5f76-4b45-89ba-132caf64ae4a" containerID="1915832d9493f4f7d5d3aa66249038d1213fb2e67070b301c39c161b5c64a9c0" exitCode=0 Dec 11 15:22:44 crc kubenswrapper[5050]: I1211 15:22:44.384574 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7dc945cfcf-bfgvl" event={"ID":"34f44ada-5f76-4b45-89ba-132caf64ae4a","Type":"ContainerDied","Data":"1915832d9493f4f7d5d3aa66249038d1213fb2e67070b301c39c161b5c64a9c0"} Dec 11 15:22:44 crc kubenswrapper[5050]: I1211 15:22:44.384795 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7dc945cfcf-bfgvl" event={"ID":"34f44ada-5f76-4b45-89ba-132caf64ae4a","Type":"ContainerDied","Data":"fbdf034a429b09cb38734b4c61da34ce78adfb7b966c48aef25c9a4eade2bfff"} Dec 11 15:22:44 crc kubenswrapper[5050]: I1211 15:22:44.384811 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fbdf034a429b09cb38734b4c61da34ce78adfb7b966c48aef25c9a4eade2bfff" Dec 11 15:22:44 crc kubenswrapper[5050]: I1211 15:22:44.440950 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7dc945cfcf-bfgvl" Dec 11 15:22:44 crc kubenswrapper[5050]: I1211 15:22:44.601598 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9qvnr\" (UniqueName: \"kubernetes.io/projected/34f44ada-5f76-4b45-89ba-132caf64ae4a-kube-api-access-9qvnr\") pod \"34f44ada-5f76-4b45-89ba-132caf64ae4a\" (UID: \"34f44ada-5f76-4b45-89ba-132caf64ae4a\") " Dec 11 15:22:44 crc kubenswrapper[5050]: I1211 15:22:44.602128 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/34f44ada-5f76-4b45-89ba-132caf64ae4a-ovsdbserver-sb\") pod \"34f44ada-5f76-4b45-89ba-132caf64ae4a\" (UID: \"34f44ada-5f76-4b45-89ba-132caf64ae4a\") " Dec 11 15:22:44 crc kubenswrapper[5050]: I1211 15:22:44.602169 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/34f44ada-5f76-4b45-89ba-132caf64ae4a-config\") pod \"34f44ada-5f76-4b45-89ba-132caf64ae4a\" (UID: \"34f44ada-5f76-4b45-89ba-132caf64ae4a\") " Dec 11 15:22:44 crc kubenswrapper[5050]: I1211 15:22:44.602211 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/34f44ada-5f76-4b45-89ba-132caf64ae4a-ovsdbserver-nb\") pod \"34f44ada-5f76-4b45-89ba-132caf64ae4a\" (UID: \"34f44ada-5f76-4b45-89ba-132caf64ae4a\") " Dec 11 15:22:44 crc kubenswrapper[5050]: I1211 15:22:44.602230 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/34f44ada-5f76-4b45-89ba-132caf64ae4a-dns-svc\") pod \"34f44ada-5f76-4b45-89ba-132caf64ae4a\" (UID: \"34f44ada-5f76-4b45-89ba-132caf64ae4a\") " Dec 11 15:22:44 crc kubenswrapper[5050]: I1211 15:22:44.615285 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34f44ada-5f76-4b45-89ba-132caf64ae4a-kube-api-access-9qvnr" (OuterVolumeSpecName: "kube-api-access-9qvnr") pod "34f44ada-5f76-4b45-89ba-132caf64ae4a" (UID: "34f44ada-5f76-4b45-89ba-132caf64ae4a"). InnerVolumeSpecName "kube-api-access-9qvnr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 15:22:44 crc kubenswrapper[5050]: I1211 15:22:44.650513 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/34f44ada-5f76-4b45-89ba-132caf64ae4a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "34f44ada-5f76-4b45-89ba-132caf64ae4a" (UID: "34f44ada-5f76-4b45-89ba-132caf64ae4a"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 15:22:44 crc kubenswrapper[5050]: I1211 15:22:44.650705 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/34f44ada-5f76-4b45-89ba-132caf64ae4a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "34f44ada-5f76-4b45-89ba-132caf64ae4a" (UID: "34f44ada-5f76-4b45-89ba-132caf64ae4a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 15:22:44 crc kubenswrapper[5050]: I1211 15:22:44.657827 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/34f44ada-5f76-4b45-89ba-132caf64ae4a-config" (OuterVolumeSpecName: "config") pod "34f44ada-5f76-4b45-89ba-132caf64ae4a" (UID: "34f44ada-5f76-4b45-89ba-132caf64ae4a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 15:22:44 crc kubenswrapper[5050]: I1211 15:22:44.671193 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/34f44ada-5f76-4b45-89ba-132caf64ae4a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "34f44ada-5f76-4b45-89ba-132caf64ae4a" (UID: "34f44ada-5f76-4b45-89ba-132caf64ae4a"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 15:22:44 crc kubenswrapper[5050]: I1211 15:22:44.704274 5050 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/34f44ada-5f76-4b45-89ba-132caf64ae4a-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 11 15:22:44 crc kubenswrapper[5050]: I1211 15:22:44.704310 5050 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/34f44ada-5f76-4b45-89ba-132caf64ae4a-config\") on node \"crc\" DevicePath \"\"" Dec 11 15:22:44 crc kubenswrapper[5050]: I1211 15:22:44.704320 5050 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/34f44ada-5f76-4b45-89ba-132caf64ae4a-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 11 15:22:44 crc kubenswrapper[5050]: I1211 15:22:44.704328 5050 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/34f44ada-5f76-4b45-89ba-132caf64ae4a-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 11 15:22:44 crc kubenswrapper[5050]: I1211 15:22:44.704347 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9qvnr\" (UniqueName: \"kubernetes.io/projected/34f44ada-5f76-4b45-89ba-132caf64ae4a-kube-api-access-9qvnr\") on node \"crc\" DevicePath \"\"" Dec 11 15:22:45 crc kubenswrapper[5050]: I1211 15:22:45.395950 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7dc945cfcf-bfgvl" Dec 11 15:22:45 crc kubenswrapper[5050]: I1211 15:22:45.430364 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7dc945cfcf-bfgvl"] Dec 11 15:22:45 crc kubenswrapper[5050]: I1211 15:22:45.438183 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7dc945cfcf-bfgvl"] Dec 11 15:22:45 crc kubenswrapper[5050]: I1211 15:22:45.556898 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="34f44ada-5f76-4b45-89ba-132caf64ae4a" path="/var/lib/kubelet/pods/34f44ada-5f76-4b45-89ba-132caf64ae4a/volumes" Dec 11 15:22:45 crc kubenswrapper[5050]: I1211 15:22:45.631159 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-jcgz4"] Dec 11 15:22:45 crc kubenswrapper[5050]: E1211 15:22:45.631680 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34f44ada-5f76-4b45-89ba-132caf64ae4a" containerName="init" Dec 11 15:22:45 crc kubenswrapper[5050]: I1211 15:22:45.631706 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="34f44ada-5f76-4b45-89ba-132caf64ae4a" containerName="init" Dec 11 15:22:45 crc kubenswrapper[5050]: E1211 15:22:45.631746 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34f44ada-5f76-4b45-89ba-132caf64ae4a" containerName="dnsmasq-dns" Dec 11 15:22:45 crc kubenswrapper[5050]: I1211 15:22:45.631756 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="34f44ada-5f76-4b45-89ba-132caf64ae4a" containerName="dnsmasq-dns" Dec 11 15:22:45 crc kubenswrapper[5050]: I1211 15:22:45.632002 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="34f44ada-5f76-4b45-89ba-132caf64ae4a" containerName="dnsmasq-dns" Dec 11 15:22:45 crc kubenswrapper[5050]: I1211 15:22:45.632916 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-jcgz4" Dec 11 15:22:45 crc kubenswrapper[5050]: I1211 15:22:45.643751 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-jcgz4"] Dec 11 15:22:45 crc kubenswrapper[5050]: I1211 15:22:45.734841 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-b147-account-create-update-qst8q"] Dec 11 15:22:45 crc kubenswrapper[5050]: I1211 15:22:45.736726 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-b147-account-create-update-qst8q" Dec 11 15:22:45 crc kubenswrapper[5050]: I1211 15:22:45.740330 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Dec 11 15:22:45 crc kubenswrapper[5050]: I1211 15:22:45.749919 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-b147-account-create-update-qst8q"] Dec 11 15:22:45 crc kubenswrapper[5050]: I1211 15:22:45.823976 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ddb9d2a0-555b-482f-98a9-2aacd8129ead-operator-scripts\") pod \"cinder-db-create-jcgz4\" (UID: \"ddb9d2a0-555b-482f-98a9-2aacd8129ead\") " pod="openstack/cinder-db-create-jcgz4" Dec 11 15:22:45 crc kubenswrapper[5050]: I1211 15:22:45.824131 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8z5m7\" (UniqueName: \"kubernetes.io/projected/ddb9d2a0-555b-482f-98a9-2aacd8129ead-kube-api-access-8z5m7\") pod \"cinder-db-create-jcgz4\" (UID: \"ddb9d2a0-555b-482f-98a9-2aacd8129ead\") " pod="openstack/cinder-db-create-jcgz4" Dec 11 15:22:45 crc kubenswrapper[5050]: I1211 15:22:45.926052 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a860a2e6-279e-4b60-81cb-895bab7f0525-operator-scripts\") pod \"cinder-b147-account-create-update-qst8q\" (UID: \"a860a2e6-279e-4b60-81cb-895bab7f0525\") " pod="openstack/cinder-b147-account-create-update-qst8q" Dec 11 15:22:45 crc kubenswrapper[5050]: I1211 15:22:45.926160 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ddb9d2a0-555b-482f-98a9-2aacd8129ead-operator-scripts\") pod \"cinder-db-create-jcgz4\" (UID: \"ddb9d2a0-555b-482f-98a9-2aacd8129ead\") " pod="openstack/cinder-db-create-jcgz4" Dec 11 15:22:45 crc kubenswrapper[5050]: I1211 15:22:45.926260 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwgtd\" (UniqueName: \"kubernetes.io/projected/a860a2e6-279e-4b60-81cb-895bab7f0525-kube-api-access-jwgtd\") pod \"cinder-b147-account-create-update-qst8q\" (UID: \"a860a2e6-279e-4b60-81cb-895bab7f0525\") " pod="openstack/cinder-b147-account-create-update-qst8q" Dec 11 15:22:45 crc kubenswrapper[5050]: I1211 15:22:45.926308 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8z5m7\" (UniqueName: \"kubernetes.io/projected/ddb9d2a0-555b-482f-98a9-2aacd8129ead-kube-api-access-8z5m7\") pod \"cinder-db-create-jcgz4\" (UID: \"ddb9d2a0-555b-482f-98a9-2aacd8129ead\") " pod="openstack/cinder-db-create-jcgz4" Dec 11 15:22:45 crc kubenswrapper[5050]: I1211 15:22:45.926990 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ddb9d2a0-555b-482f-98a9-2aacd8129ead-operator-scripts\") pod \"cinder-db-create-jcgz4\" (UID: \"ddb9d2a0-555b-482f-98a9-2aacd8129ead\") " pod="openstack/cinder-db-create-jcgz4" Dec 11 15:22:45 crc kubenswrapper[5050]: I1211 15:22:45.943536 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8z5m7\" (UniqueName: \"kubernetes.io/projected/ddb9d2a0-555b-482f-98a9-2aacd8129ead-kube-api-access-8z5m7\") pod \"cinder-db-create-jcgz4\" (UID: 
\"ddb9d2a0-555b-482f-98a9-2aacd8129ead\") " pod="openstack/cinder-db-create-jcgz4" Dec 11 15:22:45 crc kubenswrapper[5050]: I1211 15:22:45.957375 5050 scope.go:117] "RemoveContainer" containerID="95574c85143e722a797380e59847ccda75e0bb777797ab163fdc0c195ef2e351" Dec 11 15:22:45 crc kubenswrapper[5050]: I1211 15:22:45.958122 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-jcgz4" Dec 11 15:22:46 crc kubenswrapper[5050]: I1211 15:22:46.032932 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jwgtd\" (UniqueName: \"kubernetes.io/projected/a860a2e6-279e-4b60-81cb-895bab7f0525-kube-api-access-jwgtd\") pod \"cinder-b147-account-create-update-qst8q\" (UID: \"a860a2e6-279e-4b60-81cb-895bab7f0525\") " pod="openstack/cinder-b147-account-create-update-qst8q" Dec 11 15:22:46 crc kubenswrapper[5050]: I1211 15:22:46.033160 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a860a2e6-279e-4b60-81cb-895bab7f0525-operator-scripts\") pod \"cinder-b147-account-create-update-qst8q\" (UID: \"a860a2e6-279e-4b60-81cb-895bab7f0525\") " pod="openstack/cinder-b147-account-create-update-qst8q" Dec 11 15:22:46 crc kubenswrapper[5050]: I1211 15:22:46.034168 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a860a2e6-279e-4b60-81cb-895bab7f0525-operator-scripts\") pod \"cinder-b147-account-create-update-qst8q\" (UID: \"a860a2e6-279e-4b60-81cb-895bab7f0525\") " pod="openstack/cinder-b147-account-create-update-qst8q" Dec 11 15:22:46 crc kubenswrapper[5050]: I1211 15:22:46.055684 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jwgtd\" (UniqueName: \"kubernetes.io/projected/a860a2e6-279e-4b60-81cb-895bab7f0525-kube-api-access-jwgtd\") pod \"cinder-b147-account-create-update-qst8q\" (UID: \"a860a2e6-279e-4b60-81cb-895bab7f0525\") " pod="openstack/cinder-b147-account-create-update-qst8q" Dec 11 15:22:46 crc kubenswrapper[5050]: I1211 15:22:46.074390 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-b147-account-create-update-qst8q" Dec 11 15:22:46 crc kubenswrapper[5050]: I1211 15:22:46.182587 5050 scope.go:117] "RemoveContainer" containerID="ed1d3f23f9b6ef4474802d88ea28fe8162e3959bdf1383a3d3fd1fa36174d763" Dec 11 15:22:46 crc kubenswrapper[5050]: I1211 15:22:46.216320 5050 scope.go:117] "RemoveContainer" containerID="9bd6f9acf447f4e9a54e3747076bd9c7bb0faf106c245f17cea47181eb6df076" Dec 11 15:22:46 crc kubenswrapper[5050]: I1211 15:22:46.293836 5050 scope.go:117] "RemoveContainer" containerID="5090ed60c2c19a120200602b360bdd401a1a4e4b4723991c2f7abf415bb8dc79" Dec 11 15:22:46 crc kubenswrapper[5050]: I1211 15:22:46.634796 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-jcgz4"] Dec 11 15:22:46 crc kubenswrapper[5050]: W1211 15:22:46.636854 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podddb9d2a0_555b_482f_98a9_2aacd8129ead.slice/crio-f353668f421e498876a467d0fdd96c32a1486ba74e2243ba48e59060f7f64631 WatchSource:0}: Error finding container f353668f421e498876a467d0fdd96c32a1486ba74e2243ba48e59060f7f64631: Status 404 returned error can't find the container with id f353668f421e498876a467d0fdd96c32a1486ba74e2243ba48e59060f7f64631 Dec 11 15:22:46 crc kubenswrapper[5050]: I1211 15:22:46.720762 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-b147-account-create-update-qst8q"] Dec 11 15:22:47 crc kubenswrapper[5050]: I1211 15:22:47.429090 5050 generic.go:334] "Generic (PLEG): container finished" podID="a860a2e6-279e-4b60-81cb-895bab7f0525" containerID="578b239ac0eb1eceac16034f9bd1b64d1b33ce112cb38d916f0219ee5ec91442" exitCode=0 Dec 11 15:22:47 crc kubenswrapper[5050]: I1211 15:22:47.429163 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b147-account-create-update-qst8q" event={"ID":"a860a2e6-279e-4b60-81cb-895bab7f0525","Type":"ContainerDied","Data":"578b239ac0eb1eceac16034f9bd1b64d1b33ce112cb38d916f0219ee5ec91442"} Dec 11 15:22:47 crc kubenswrapper[5050]: I1211 15:22:47.429189 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b147-account-create-update-qst8q" event={"ID":"a860a2e6-279e-4b60-81cb-895bab7f0525","Type":"ContainerStarted","Data":"f78d933d0e527c350fbd10f5dad9b3e5a3828350355d9efef999d3f4125ec95a"} Dec 11 15:22:47 crc kubenswrapper[5050]: I1211 15:22:47.431209 5050 generic.go:334] "Generic (PLEG): container finished" podID="ddb9d2a0-555b-482f-98a9-2aacd8129ead" containerID="19c6a9a722efda4436f056ed0806f41b1bf98af98b3841b6c0ef07ac48fc68f3" exitCode=0 Dec 11 15:22:47 crc kubenswrapper[5050]: I1211 15:22:47.431264 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-jcgz4" event={"ID":"ddb9d2a0-555b-482f-98a9-2aacd8129ead","Type":"ContainerDied","Data":"19c6a9a722efda4436f056ed0806f41b1bf98af98b3841b6c0ef07ac48fc68f3"} Dec 11 15:22:47 crc kubenswrapper[5050]: I1211 15:22:47.431310 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-jcgz4" event={"ID":"ddb9d2a0-555b-482f-98a9-2aacd8129ead","Type":"ContainerStarted","Data":"f353668f421e498876a467d0fdd96c32a1486ba74e2243ba48e59060f7f64631"} Dec 11 15:22:48 crc kubenswrapper[5050]: E1211 15:22:48.615528 5050 kubelet_node_status.go:756] "Failed to set some node status fields" err="failed to validate nodeIP: route ip+net: no such network interface" node="crc" Dec 11 15:22:48 crc kubenswrapper[5050]: I1211 15:22:48.858843 
5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-b147-account-create-update-qst8q" Dec 11 15:22:48 crc kubenswrapper[5050]: I1211 15:22:48.866666 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-jcgz4" Dec 11 15:22:48 crc kubenswrapper[5050]: I1211 15:22:48.989500 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a860a2e6-279e-4b60-81cb-895bab7f0525-operator-scripts\") pod \"a860a2e6-279e-4b60-81cb-895bab7f0525\" (UID: \"a860a2e6-279e-4b60-81cb-895bab7f0525\") " Dec 11 15:22:48 crc kubenswrapper[5050]: I1211 15:22:48.989675 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ddb9d2a0-555b-482f-98a9-2aacd8129ead-operator-scripts\") pod \"ddb9d2a0-555b-482f-98a9-2aacd8129ead\" (UID: \"ddb9d2a0-555b-482f-98a9-2aacd8129ead\") " Dec 11 15:22:48 crc kubenswrapper[5050]: I1211 15:22:48.989776 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jwgtd\" (UniqueName: \"kubernetes.io/projected/a860a2e6-279e-4b60-81cb-895bab7f0525-kube-api-access-jwgtd\") pod \"a860a2e6-279e-4b60-81cb-895bab7f0525\" (UID: \"a860a2e6-279e-4b60-81cb-895bab7f0525\") " Dec 11 15:22:48 crc kubenswrapper[5050]: I1211 15:22:48.989805 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8z5m7\" (UniqueName: \"kubernetes.io/projected/ddb9d2a0-555b-482f-98a9-2aacd8129ead-kube-api-access-8z5m7\") pod \"ddb9d2a0-555b-482f-98a9-2aacd8129ead\" (UID: \"ddb9d2a0-555b-482f-98a9-2aacd8129ead\") " Dec 11 15:22:48 crc kubenswrapper[5050]: I1211 15:22:48.990023 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a860a2e6-279e-4b60-81cb-895bab7f0525-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a860a2e6-279e-4b60-81cb-895bab7f0525" (UID: "a860a2e6-279e-4b60-81cb-895bab7f0525"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 15:22:48 crc kubenswrapper[5050]: I1211 15:22:48.990193 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ddb9d2a0-555b-482f-98a9-2aacd8129ead-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ddb9d2a0-555b-482f-98a9-2aacd8129ead" (UID: "ddb9d2a0-555b-482f-98a9-2aacd8129ead"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 15:22:48 crc kubenswrapper[5050]: I1211 15:22:48.990218 5050 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a860a2e6-279e-4b60-81cb-895bab7f0525-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 15:22:48 crc kubenswrapper[5050]: I1211 15:22:48.995364 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ddb9d2a0-555b-482f-98a9-2aacd8129ead-kube-api-access-8z5m7" (OuterVolumeSpecName: "kube-api-access-8z5m7") pod "ddb9d2a0-555b-482f-98a9-2aacd8129ead" (UID: "ddb9d2a0-555b-482f-98a9-2aacd8129ead"). InnerVolumeSpecName "kube-api-access-8z5m7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 15:22:48 crc kubenswrapper[5050]: I1211 15:22:48.995473 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a860a2e6-279e-4b60-81cb-895bab7f0525-kube-api-access-jwgtd" (OuterVolumeSpecName: "kube-api-access-jwgtd") pod "a860a2e6-279e-4b60-81cb-895bab7f0525" (UID: "a860a2e6-279e-4b60-81cb-895bab7f0525"). InnerVolumeSpecName "kube-api-access-jwgtd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 15:22:49 crc kubenswrapper[5050]: I1211 15:22:49.091932 5050 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ddb9d2a0-555b-482f-98a9-2aacd8129ead-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 15:22:49 crc kubenswrapper[5050]: I1211 15:22:49.091962 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jwgtd\" (UniqueName: \"kubernetes.io/projected/a860a2e6-279e-4b60-81cb-895bab7f0525-kube-api-access-jwgtd\") on node \"crc\" DevicePath \"\"" Dec 11 15:22:49 crc kubenswrapper[5050]: I1211 15:22:49.091974 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8z5m7\" (UniqueName: \"kubernetes.io/projected/ddb9d2a0-555b-482f-98a9-2aacd8129ead-kube-api-access-8z5m7\") on node \"crc\" DevicePath \"\"" Dec 11 15:22:49 crc kubenswrapper[5050]: I1211 15:22:49.448630 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b147-account-create-update-qst8q" event={"ID":"a860a2e6-279e-4b60-81cb-895bab7f0525","Type":"ContainerDied","Data":"f78d933d0e527c350fbd10f5dad9b3e5a3828350355d9efef999d3f4125ec95a"} Dec 11 15:22:49 crc kubenswrapper[5050]: I1211 15:22:49.448688 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f78d933d0e527c350fbd10f5dad9b3e5a3828350355d9efef999d3f4125ec95a" Dec 11 15:22:49 crc kubenswrapper[5050]: I1211 15:22:49.449124 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-b147-account-create-update-qst8q" Dec 11 15:22:49 crc kubenswrapper[5050]: I1211 15:22:49.450732 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-jcgz4" event={"ID":"ddb9d2a0-555b-482f-98a9-2aacd8129ead","Type":"ContainerDied","Data":"f353668f421e498876a467d0fdd96c32a1486ba74e2243ba48e59060f7f64631"} Dec 11 15:22:49 crc kubenswrapper[5050]: I1211 15:22:49.450762 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f353668f421e498876a467d0fdd96c32a1486ba74e2243ba48e59060f7f64631" Dec 11 15:22:49 crc kubenswrapper[5050]: I1211 15:22:49.450818 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-jcgz4" Dec 11 15:22:50 crc kubenswrapper[5050]: I1211 15:22:50.928603 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-nxd6x"] Dec 11 15:22:50 crc kubenswrapper[5050]: E1211 15:22:50.929894 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a860a2e6-279e-4b60-81cb-895bab7f0525" containerName="mariadb-account-create-update" Dec 11 15:22:50 crc kubenswrapper[5050]: I1211 15:22:50.929919 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="a860a2e6-279e-4b60-81cb-895bab7f0525" containerName="mariadb-account-create-update" Dec 11 15:22:50 crc kubenswrapper[5050]: E1211 15:22:50.929989 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ddb9d2a0-555b-482f-98a9-2aacd8129ead" containerName="mariadb-database-create" Dec 11 15:22:50 crc kubenswrapper[5050]: I1211 15:22:50.929999 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="ddb9d2a0-555b-482f-98a9-2aacd8129ead" containerName="mariadb-database-create" Dec 11 15:22:50 crc kubenswrapper[5050]: I1211 15:22:50.930261 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="ddb9d2a0-555b-482f-98a9-2aacd8129ead" containerName="mariadb-database-create" Dec 11 15:22:50 crc kubenswrapper[5050]: I1211 15:22:50.930287 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="a860a2e6-279e-4b60-81cb-895bab7f0525" containerName="mariadb-account-create-update" Dec 11 15:22:50 crc kubenswrapper[5050]: I1211 15:22:50.931377 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-nxd6x" Dec 11 15:22:50 crc kubenswrapper[5050]: I1211 15:22:50.934614 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Dec 11 15:22:50 crc kubenswrapper[5050]: I1211 15:22:50.935187 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Dec 11 15:22:50 crc kubenswrapper[5050]: I1211 15:22:50.936554 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-lvj2r" Dec 11 15:22:50 crc kubenswrapper[5050]: I1211 15:22:50.942471 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-nxd6x"] Dec 11 15:22:51 crc kubenswrapper[5050]: I1211 15:22:51.128920 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vl4z7\" (UniqueName: \"kubernetes.io/projected/4f664216-f326-4ee5-aa8a-167f41efbd65-kube-api-access-vl4z7\") pod \"cinder-db-sync-nxd6x\" (UID: \"4f664216-f326-4ee5-aa8a-167f41efbd65\") " pod="openstack/cinder-db-sync-nxd6x" Dec 11 15:22:51 crc kubenswrapper[5050]: I1211 15:22:51.128970 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4f664216-f326-4ee5-aa8a-167f41efbd65-etc-machine-id\") pod \"cinder-db-sync-nxd6x\" (UID: \"4f664216-f326-4ee5-aa8a-167f41efbd65\") " pod="openstack/cinder-db-sync-nxd6x" Dec 11 15:22:51 crc kubenswrapper[5050]: I1211 15:22:51.129094 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f664216-f326-4ee5-aa8a-167f41efbd65-combined-ca-bundle\") pod \"cinder-db-sync-nxd6x\" (UID: \"4f664216-f326-4ee5-aa8a-167f41efbd65\") " pod="openstack/cinder-db-sync-nxd6x" Dec 11 15:22:51 crc kubenswrapper[5050]: I1211 
15:22:51.129129 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4f664216-f326-4ee5-aa8a-167f41efbd65-db-sync-config-data\") pod \"cinder-db-sync-nxd6x\" (UID: \"4f664216-f326-4ee5-aa8a-167f41efbd65\") " pod="openstack/cinder-db-sync-nxd6x" Dec 11 15:22:51 crc kubenswrapper[5050]: I1211 15:22:51.129189 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4f664216-f326-4ee5-aa8a-167f41efbd65-scripts\") pod \"cinder-db-sync-nxd6x\" (UID: \"4f664216-f326-4ee5-aa8a-167f41efbd65\") " pod="openstack/cinder-db-sync-nxd6x" Dec 11 15:22:51 crc kubenswrapper[5050]: I1211 15:22:51.129210 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f664216-f326-4ee5-aa8a-167f41efbd65-config-data\") pod \"cinder-db-sync-nxd6x\" (UID: \"4f664216-f326-4ee5-aa8a-167f41efbd65\") " pod="openstack/cinder-db-sync-nxd6x" Dec 11 15:22:51 crc kubenswrapper[5050]: I1211 15:22:51.230915 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f664216-f326-4ee5-aa8a-167f41efbd65-combined-ca-bundle\") pod \"cinder-db-sync-nxd6x\" (UID: \"4f664216-f326-4ee5-aa8a-167f41efbd65\") " pod="openstack/cinder-db-sync-nxd6x" Dec 11 15:22:51 crc kubenswrapper[5050]: I1211 15:22:51.230997 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4f664216-f326-4ee5-aa8a-167f41efbd65-db-sync-config-data\") pod \"cinder-db-sync-nxd6x\" (UID: \"4f664216-f326-4ee5-aa8a-167f41efbd65\") " pod="openstack/cinder-db-sync-nxd6x" Dec 11 15:22:51 crc kubenswrapper[5050]: I1211 15:22:51.231057 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4f664216-f326-4ee5-aa8a-167f41efbd65-scripts\") pod \"cinder-db-sync-nxd6x\" (UID: \"4f664216-f326-4ee5-aa8a-167f41efbd65\") " pod="openstack/cinder-db-sync-nxd6x" Dec 11 15:22:51 crc kubenswrapper[5050]: I1211 15:22:51.231091 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f664216-f326-4ee5-aa8a-167f41efbd65-config-data\") pod \"cinder-db-sync-nxd6x\" (UID: \"4f664216-f326-4ee5-aa8a-167f41efbd65\") " pod="openstack/cinder-db-sync-nxd6x" Dec 11 15:22:51 crc kubenswrapper[5050]: I1211 15:22:51.231154 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vl4z7\" (UniqueName: \"kubernetes.io/projected/4f664216-f326-4ee5-aa8a-167f41efbd65-kube-api-access-vl4z7\") pod \"cinder-db-sync-nxd6x\" (UID: \"4f664216-f326-4ee5-aa8a-167f41efbd65\") " pod="openstack/cinder-db-sync-nxd6x" Dec 11 15:22:51 crc kubenswrapper[5050]: I1211 15:22:51.231186 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4f664216-f326-4ee5-aa8a-167f41efbd65-etc-machine-id\") pod \"cinder-db-sync-nxd6x\" (UID: \"4f664216-f326-4ee5-aa8a-167f41efbd65\") " pod="openstack/cinder-db-sync-nxd6x" Dec 11 15:22:51 crc kubenswrapper[5050]: I1211 15:22:51.231310 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/4f664216-f326-4ee5-aa8a-167f41efbd65-etc-machine-id\") pod \"cinder-db-sync-nxd6x\" (UID: \"4f664216-f326-4ee5-aa8a-167f41efbd65\") " pod="openstack/cinder-db-sync-nxd6x" Dec 11 15:22:51 crc kubenswrapper[5050]: I1211 15:22:51.238404 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f664216-f326-4ee5-aa8a-167f41efbd65-config-data\") pod \"cinder-db-sync-nxd6x\" (UID: \"4f664216-f326-4ee5-aa8a-167f41efbd65\") " pod="openstack/cinder-db-sync-nxd6x" Dec 11 15:22:51 crc kubenswrapper[5050]: I1211 15:22:51.238724 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f664216-f326-4ee5-aa8a-167f41efbd65-combined-ca-bundle\") pod \"cinder-db-sync-nxd6x\" (UID: \"4f664216-f326-4ee5-aa8a-167f41efbd65\") " pod="openstack/cinder-db-sync-nxd6x" Dec 11 15:22:51 crc kubenswrapper[5050]: I1211 15:22:51.239816 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4f664216-f326-4ee5-aa8a-167f41efbd65-scripts\") pod \"cinder-db-sync-nxd6x\" (UID: \"4f664216-f326-4ee5-aa8a-167f41efbd65\") " pod="openstack/cinder-db-sync-nxd6x" Dec 11 15:22:51 crc kubenswrapper[5050]: I1211 15:22:51.244127 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4f664216-f326-4ee5-aa8a-167f41efbd65-db-sync-config-data\") pod \"cinder-db-sync-nxd6x\" (UID: \"4f664216-f326-4ee5-aa8a-167f41efbd65\") " pod="openstack/cinder-db-sync-nxd6x" Dec 11 15:22:51 crc kubenswrapper[5050]: I1211 15:22:51.259135 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vl4z7\" (UniqueName: \"kubernetes.io/projected/4f664216-f326-4ee5-aa8a-167f41efbd65-kube-api-access-vl4z7\") pod \"cinder-db-sync-nxd6x\" (UID: \"4f664216-f326-4ee5-aa8a-167f41efbd65\") " pod="openstack/cinder-db-sync-nxd6x" Dec 11 15:22:51 crc kubenswrapper[5050]: I1211 15:22:51.264954 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-nxd6x" Dec 11 15:22:51 crc kubenswrapper[5050]: I1211 15:22:51.713552 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-nxd6x"] Dec 11 15:22:51 crc kubenswrapper[5050]: W1211 15:22:51.722506 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4f664216_f326_4ee5_aa8a_167f41efbd65.slice/crio-7e61696658ff6d616cca8e0d6c4a00b22a1b73b6c2382b6454e2401342944698 WatchSource:0}: Error finding container 7e61696658ff6d616cca8e0d6c4a00b22a1b73b6c2382b6454e2401342944698: Status 404 returned error can't find the container with id 7e61696658ff6d616cca8e0d6c4a00b22a1b73b6c2382b6454e2401342944698 Dec 11 15:22:52 crc kubenswrapper[5050]: I1211 15:22:52.479381 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-nxd6x" event={"ID":"4f664216-f326-4ee5-aa8a-167f41efbd65","Type":"ContainerStarted","Data":"7e044c9a621d31bd2d96de1d63132fefaf26eba1190070e5637504856cb5b072"} Dec 11 15:22:52 crc kubenswrapper[5050]: I1211 15:22:52.479960 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-nxd6x" event={"ID":"4f664216-f326-4ee5-aa8a-167f41efbd65","Type":"ContainerStarted","Data":"7e61696658ff6d616cca8e0d6c4a00b22a1b73b6c2382b6454e2401342944698"} Dec 11 15:22:52 crc kubenswrapper[5050]: I1211 15:22:52.504241 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-nxd6x" podStartSLOduration=2.5042188789999997 podStartE2EDuration="2.504218879s" podCreationTimestamp="2025-12-11 15:22:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 15:22:52.494090798 +0000 UTC m=+5663.337813404" watchObservedRunningTime="2025-12-11 15:22:52.504218879 +0000 UTC m=+5663.347941465" Dec 11 15:22:55 crc kubenswrapper[5050]: I1211 15:22:55.509800 5050 generic.go:334] "Generic (PLEG): container finished" podID="4f664216-f326-4ee5-aa8a-167f41efbd65" containerID="7e044c9a621d31bd2d96de1d63132fefaf26eba1190070e5637504856cb5b072" exitCode=0 Dec 11 15:22:55 crc kubenswrapper[5050]: I1211 15:22:55.509892 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-nxd6x" event={"ID":"4f664216-f326-4ee5-aa8a-167f41efbd65","Type":"ContainerDied","Data":"7e044c9a621d31bd2d96de1d63132fefaf26eba1190070e5637504856cb5b072"} Dec 11 15:22:56 crc kubenswrapper[5050]: I1211 15:22:56.887286 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-nxd6x" Dec 11 15:22:57 crc kubenswrapper[5050]: I1211 15:22:57.045407 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4f664216-f326-4ee5-aa8a-167f41efbd65-scripts\") pod \"4f664216-f326-4ee5-aa8a-167f41efbd65\" (UID: \"4f664216-f326-4ee5-aa8a-167f41efbd65\") " Dec 11 15:22:57 crc kubenswrapper[5050]: I1211 15:22:57.045582 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vl4z7\" (UniqueName: \"kubernetes.io/projected/4f664216-f326-4ee5-aa8a-167f41efbd65-kube-api-access-vl4z7\") pod \"4f664216-f326-4ee5-aa8a-167f41efbd65\" (UID: \"4f664216-f326-4ee5-aa8a-167f41efbd65\") " Dec 11 15:22:57 crc kubenswrapper[5050]: I1211 15:22:57.045627 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4f664216-f326-4ee5-aa8a-167f41efbd65-etc-machine-id\") pod \"4f664216-f326-4ee5-aa8a-167f41efbd65\" (UID: \"4f664216-f326-4ee5-aa8a-167f41efbd65\") " Dec 11 15:22:57 crc kubenswrapper[5050]: I1211 15:22:57.045669 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4f664216-f326-4ee5-aa8a-167f41efbd65-db-sync-config-data\") pod \"4f664216-f326-4ee5-aa8a-167f41efbd65\" (UID: \"4f664216-f326-4ee5-aa8a-167f41efbd65\") " Dec 11 15:22:57 crc kubenswrapper[5050]: I1211 15:22:57.045721 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4f664216-f326-4ee5-aa8a-167f41efbd65-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "4f664216-f326-4ee5-aa8a-167f41efbd65" (UID: "4f664216-f326-4ee5-aa8a-167f41efbd65"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 15:22:57 crc kubenswrapper[5050]: I1211 15:22:57.045740 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f664216-f326-4ee5-aa8a-167f41efbd65-config-data\") pod \"4f664216-f326-4ee5-aa8a-167f41efbd65\" (UID: \"4f664216-f326-4ee5-aa8a-167f41efbd65\") " Dec 11 15:22:57 crc kubenswrapper[5050]: I1211 15:22:57.045817 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f664216-f326-4ee5-aa8a-167f41efbd65-combined-ca-bundle\") pod \"4f664216-f326-4ee5-aa8a-167f41efbd65\" (UID: \"4f664216-f326-4ee5-aa8a-167f41efbd65\") " Dec 11 15:22:57 crc kubenswrapper[5050]: I1211 15:22:57.046358 5050 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4f664216-f326-4ee5-aa8a-167f41efbd65-etc-machine-id\") on node \"crc\" DevicePath \"\"" Dec 11 15:22:57 crc kubenswrapper[5050]: I1211 15:22:57.052319 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f664216-f326-4ee5-aa8a-167f41efbd65-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "4f664216-f326-4ee5-aa8a-167f41efbd65" (UID: "4f664216-f326-4ee5-aa8a-167f41efbd65"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 15:22:57 crc kubenswrapper[5050]: I1211 15:22:57.052401 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f664216-f326-4ee5-aa8a-167f41efbd65-kube-api-access-vl4z7" (OuterVolumeSpecName: "kube-api-access-vl4z7") pod "4f664216-f326-4ee5-aa8a-167f41efbd65" (UID: "4f664216-f326-4ee5-aa8a-167f41efbd65"). InnerVolumeSpecName "kube-api-access-vl4z7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 15:22:57 crc kubenswrapper[5050]: I1211 15:22:57.053160 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f664216-f326-4ee5-aa8a-167f41efbd65-scripts" (OuterVolumeSpecName: "scripts") pod "4f664216-f326-4ee5-aa8a-167f41efbd65" (UID: "4f664216-f326-4ee5-aa8a-167f41efbd65"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 15:22:57 crc kubenswrapper[5050]: I1211 15:22:57.081322 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f664216-f326-4ee5-aa8a-167f41efbd65-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4f664216-f326-4ee5-aa8a-167f41efbd65" (UID: "4f664216-f326-4ee5-aa8a-167f41efbd65"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 15:22:57 crc kubenswrapper[5050]: I1211 15:22:57.096145 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f664216-f326-4ee5-aa8a-167f41efbd65-config-data" (OuterVolumeSpecName: "config-data") pod "4f664216-f326-4ee5-aa8a-167f41efbd65" (UID: "4f664216-f326-4ee5-aa8a-167f41efbd65"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 15:22:57 crc kubenswrapper[5050]: I1211 15:22:57.147812 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vl4z7\" (UniqueName: \"kubernetes.io/projected/4f664216-f326-4ee5-aa8a-167f41efbd65-kube-api-access-vl4z7\") on node \"crc\" DevicePath \"\"" Dec 11 15:22:57 crc kubenswrapper[5050]: I1211 15:22:57.147842 5050 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4f664216-f326-4ee5-aa8a-167f41efbd65-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 15:22:57 crc kubenswrapper[5050]: I1211 15:22:57.147851 5050 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f664216-f326-4ee5-aa8a-167f41efbd65-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 15:22:57 crc kubenswrapper[5050]: I1211 15:22:57.147859 5050 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f664216-f326-4ee5-aa8a-167f41efbd65-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 15:22:57 crc kubenswrapper[5050]: I1211 15:22:57.147868 5050 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4f664216-f326-4ee5-aa8a-167f41efbd65-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 15:22:57 crc kubenswrapper[5050]: I1211 15:22:57.529718 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-nxd6x" event={"ID":"4f664216-f326-4ee5-aa8a-167f41efbd65","Type":"ContainerDied","Data":"7e61696658ff6d616cca8e0d6c4a00b22a1b73b6c2382b6454e2401342944698"} Dec 11 15:22:57 crc kubenswrapper[5050]: I1211 15:22:57.529768 5050 pod_container_deletor.go:80] 
"Container not found in pod's containers" containerID="7e61696658ff6d616cca8e0d6c4a00b22a1b73b6c2382b6454e2401342944698" Dec 11 15:22:57 crc kubenswrapper[5050]: I1211 15:22:57.529836 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-nxd6x" Dec 11 15:22:57 crc kubenswrapper[5050]: I1211 15:22:57.906067 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-b8fc54567-bwk9j"] Dec 11 15:22:57 crc kubenswrapper[5050]: E1211 15:22:57.912491 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f664216-f326-4ee5-aa8a-167f41efbd65" containerName="cinder-db-sync" Dec 11 15:22:57 crc kubenswrapper[5050]: I1211 15:22:57.912680 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f664216-f326-4ee5-aa8a-167f41efbd65" containerName="cinder-db-sync" Dec 11 15:22:57 crc kubenswrapper[5050]: I1211 15:22:57.912958 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f664216-f326-4ee5-aa8a-167f41efbd65" containerName="cinder-db-sync" Dec 11 15:22:57 crc kubenswrapper[5050]: I1211 15:22:57.914268 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fc54567-bwk9j" Dec 11 15:22:57 crc kubenswrapper[5050]: I1211 15:22:57.925749 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fc54567-bwk9j"] Dec 11 15:22:58 crc kubenswrapper[5050]: I1211 15:22:58.062315 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/88c56e95-bf88-47a2-9c36-63f9092746c9-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fc54567-bwk9j\" (UID: \"88c56e95-bf88-47a2-9c36-63f9092746c9\") " pod="openstack/dnsmasq-dns-b8fc54567-bwk9j" Dec 11 15:22:58 crc kubenswrapper[5050]: I1211 15:22:58.062475 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5g5f5\" (UniqueName: \"kubernetes.io/projected/88c56e95-bf88-47a2-9c36-63f9092746c9-kube-api-access-5g5f5\") pod \"dnsmasq-dns-b8fc54567-bwk9j\" (UID: \"88c56e95-bf88-47a2-9c36-63f9092746c9\") " pod="openstack/dnsmasq-dns-b8fc54567-bwk9j" Dec 11 15:22:58 crc kubenswrapper[5050]: I1211 15:22:58.062612 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/88c56e95-bf88-47a2-9c36-63f9092746c9-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fc54567-bwk9j\" (UID: \"88c56e95-bf88-47a2-9c36-63f9092746c9\") " pod="openstack/dnsmasq-dns-b8fc54567-bwk9j" Dec 11 15:22:58 crc kubenswrapper[5050]: I1211 15:22:58.062693 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88c56e95-bf88-47a2-9c36-63f9092746c9-config\") pod \"dnsmasq-dns-b8fc54567-bwk9j\" (UID: \"88c56e95-bf88-47a2-9c36-63f9092746c9\") " pod="openstack/dnsmasq-dns-b8fc54567-bwk9j" Dec 11 15:22:58 crc kubenswrapper[5050]: I1211 15:22:58.062783 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/88c56e95-bf88-47a2-9c36-63f9092746c9-dns-svc\") pod \"dnsmasq-dns-b8fc54567-bwk9j\" (UID: \"88c56e95-bf88-47a2-9c36-63f9092746c9\") " pod="openstack/dnsmasq-dns-b8fc54567-bwk9j" Dec 11 15:22:58 crc kubenswrapper[5050]: I1211 15:22:58.089856 5050 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/cinder-api-0"] Dec 11 15:22:58 crc kubenswrapper[5050]: I1211 15:22:58.092197 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Dec 11 15:22:58 crc kubenswrapper[5050]: I1211 15:22:58.100738 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Dec 11 15:22:58 crc kubenswrapper[5050]: I1211 15:22:58.100818 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-lvj2r" Dec 11 15:22:58 crc kubenswrapper[5050]: I1211 15:22:58.107492 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Dec 11 15:22:58 crc kubenswrapper[5050]: I1211 15:22:58.109418 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Dec 11 15:22:58 crc kubenswrapper[5050]: I1211 15:22:58.127179 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Dec 11 15:22:58 crc kubenswrapper[5050]: I1211 15:22:58.164420 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/88c56e95-bf88-47a2-9c36-63f9092746c9-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fc54567-bwk9j\" (UID: \"88c56e95-bf88-47a2-9c36-63f9092746c9\") " pod="openstack/dnsmasq-dns-b8fc54567-bwk9j" Dec 11 15:22:58 crc kubenswrapper[5050]: I1211 15:22:58.164463 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88c56e95-bf88-47a2-9c36-63f9092746c9-config\") pod \"dnsmasq-dns-b8fc54567-bwk9j\" (UID: \"88c56e95-bf88-47a2-9c36-63f9092746c9\") " pod="openstack/dnsmasq-dns-b8fc54567-bwk9j" Dec 11 15:22:58 crc kubenswrapper[5050]: I1211 15:22:58.164490 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/88c56e95-bf88-47a2-9c36-63f9092746c9-dns-svc\") pod \"dnsmasq-dns-b8fc54567-bwk9j\" (UID: \"88c56e95-bf88-47a2-9c36-63f9092746c9\") " pod="openstack/dnsmasq-dns-b8fc54567-bwk9j" Dec 11 15:22:58 crc kubenswrapper[5050]: I1211 15:22:58.164563 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/88c56e95-bf88-47a2-9c36-63f9092746c9-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fc54567-bwk9j\" (UID: \"88c56e95-bf88-47a2-9c36-63f9092746c9\") " pod="openstack/dnsmasq-dns-b8fc54567-bwk9j" Dec 11 15:22:58 crc kubenswrapper[5050]: I1211 15:22:58.164589 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5g5f5\" (UniqueName: \"kubernetes.io/projected/88c56e95-bf88-47a2-9c36-63f9092746c9-kube-api-access-5g5f5\") pod \"dnsmasq-dns-b8fc54567-bwk9j\" (UID: \"88c56e95-bf88-47a2-9c36-63f9092746c9\") " pod="openstack/dnsmasq-dns-b8fc54567-bwk9j" Dec 11 15:22:58 crc kubenswrapper[5050]: I1211 15:22:58.165665 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/88c56e95-bf88-47a2-9c36-63f9092746c9-dns-svc\") pod \"dnsmasq-dns-b8fc54567-bwk9j\" (UID: \"88c56e95-bf88-47a2-9c36-63f9092746c9\") " pod="openstack/dnsmasq-dns-b8fc54567-bwk9j" Dec 11 15:22:58 crc kubenswrapper[5050]: I1211 15:22:58.165716 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/88c56e95-bf88-47a2-9c36-63f9092746c9-ovsdbserver-sb\") pod 
\"dnsmasq-dns-b8fc54567-bwk9j\" (UID: \"88c56e95-bf88-47a2-9c36-63f9092746c9\") " pod="openstack/dnsmasq-dns-b8fc54567-bwk9j" Dec 11 15:22:58 crc kubenswrapper[5050]: I1211 15:22:58.165874 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88c56e95-bf88-47a2-9c36-63f9092746c9-config\") pod \"dnsmasq-dns-b8fc54567-bwk9j\" (UID: \"88c56e95-bf88-47a2-9c36-63f9092746c9\") " pod="openstack/dnsmasq-dns-b8fc54567-bwk9j" Dec 11 15:22:58 crc kubenswrapper[5050]: I1211 15:22:58.165874 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/88c56e95-bf88-47a2-9c36-63f9092746c9-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fc54567-bwk9j\" (UID: \"88c56e95-bf88-47a2-9c36-63f9092746c9\") " pod="openstack/dnsmasq-dns-b8fc54567-bwk9j" Dec 11 15:22:58 crc kubenswrapper[5050]: I1211 15:22:58.207835 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5g5f5\" (UniqueName: \"kubernetes.io/projected/88c56e95-bf88-47a2-9c36-63f9092746c9-kube-api-access-5g5f5\") pod \"dnsmasq-dns-b8fc54567-bwk9j\" (UID: \"88c56e95-bf88-47a2-9c36-63f9092746c9\") " pod="openstack/dnsmasq-dns-b8fc54567-bwk9j" Dec 11 15:22:58 crc kubenswrapper[5050]: I1211 15:22:58.235454 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fc54567-bwk9j" Dec 11 15:22:58 crc kubenswrapper[5050]: I1211 15:22:58.268099 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e156f879-4d43-4b29-86f2-fefc38253daf-config-data-custom\") pod \"cinder-api-0\" (UID: \"e156f879-4d43-4b29-86f2-fefc38253daf\") " pod="openstack/cinder-api-0" Dec 11 15:22:58 crc kubenswrapper[5050]: I1211 15:22:58.268187 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e156f879-4d43-4b29-86f2-fefc38253daf-config-data\") pod \"cinder-api-0\" (UID: \"e156f879-4d43-4b29-86f2-fefc38253daf\") " pod="openstack/cinder-api-0" Dec 11 15:22:58 crc kubenswrapper[5050]: I1211 15:22:58.268217 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e156f879-4d43-4b29-86f2-fefc38253daf-scripts\") pod \"cinder-api-0\" (UID: \"e156f879-4d43-4b29-86f2-fefc38253daf\") " pod="openstack/cinder-api-0" Dec 11 15:22:58 crc kubenswrapper[5050]: I1211 15:22:58.268276 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e156f879-4d43-4b29-86f2-fefc38253daf-logs\") pod \"cinder-api-0\" (UID: \"e156f879-4d43-4b29-86f2-fefc38253daf\") " pod="openstack/cinder-api-0" Dec 11 15:22:58 crc kubenswrapper[5050]: I1211 15:22:58.268294 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e156f879-4d43-4b29-86f2-fefc38253daf-etc-machine-id\") pod \"cinder-api-0\" (UID: \"e156f879-4d43-4b29-86f2-fefc38253daf\") " pod="openstack/cinder-api-0" Dec 11 15:22:58 crc kubenswrapper[5050]: I1211 15:22:58.268308 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/e156f879-4d43-4b29-86f2-fefc38253daf-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"e156f879-4d43-4b29-86f2-fefc38253daf\") " pod="openstack/cinder-api-0" Dec 11 15:22:58 crc kubenswrapper[5050]: I1211 15:22:58.268350 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4dr5d\" (UniqueName: \"kubernetes.io/projected/e156f879-4d43-4b29-86f2-fefc38253daf-kube-api-access-4dr5d\") pod \"cinder-api-0\" (UID: \"e156f879-4d43-4b29-86f2-fefc38253daf\") " pod="openstack/cinder-api-0" Dec 11 15:22:58 crc kubenswrapper[5050]: I1211 15:22:58.370357 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4dr5d\" (UniqueName: \"kubernetes.io/projected/e156f879-4d43-4b29-86f2-fefc38253daf-kube-api-access-4dr5d\") pod \"cinder-api-0\" (UID: \"e156f879-4d43-4b29-86f2-fefc38253daf\") " pod="openstack/cinder-api-0" Dec 11 15:22:58 crc kubenswrapper[5050]: I1211 15:22:58.370445 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e156f879-4d43-4b29-86f2-fefc38253daf-config-data-custom\") pod \"cinder-api-0\" (UID: \"e156f879-4d43-4b29-86f2-fefc38253daf\") " pod="openstack/cinder-api-0" Dec 11 15:22:58 crc kubenswrapper[5050]: I1211 15:22:58.370485 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e156f879-4d43-4b29-86f2-fefc38253daf-config-data\") pod \"cinder-api-0\" (UID: \"e156f879-4d43-4b29-86f2-fefc38253daf\") " pod="openstack/cinder-api-0" Dec 11 15:22:58 crc kubenswrapper[5050]: I1211 15:22:58.370511 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e156f879-4d43-4b29-86f2-fefc38253daf-scripts\") pod \"cinder-api-0\" (UID: \"e156f879-4d43-4b29-86f2-fefc38253daf\") " pod="openstack/cinder-api-0" Dec 11 15:22:58 crc kubenswrapper[5050]: I1211 15:22:58.370562 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e156f879-4d43-4b29-86f2-fefc38253daf-logs\") pod \"cinder-api-0\" (UID: \"e156f879-4d43-4b29-86f2-fefc38253daf\") " pod="openstack/cinder-api-0" Dec 11 15:22:58 crc kubenswrapper[5050]: I1211 15:22:58.370583 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e156f879-4d43-4b29-86f2-fefc38253daf-etc-machine-id\") pod \"cinder-api-0\" (UID: \"e156f879-4d43-4b29-86f2-fefc38253daf\") " pod="openstack/cinder-api-0" Dec 11 15:22:58 crc kubenswrapper[5050]: I1211 15:22:58.370598 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e156f879-4d43-4b29-86f2-fefc38253daf-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"e156f879-4d43-4b29-86f2-fefc38253daf\") " pod="openstack/cinder-api-0" Dec 11 15:22:58 crc kubenswrapper[5050]: I1211 15:22:58.384406 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e156f879-4d43-4b29-86f2-fefc38253daf-etc-machine-id\") pod \"cinder-api-0\" (UID: \"e156f879-4d43-4b29-86f2-fefc38253daf\") " pod="openstack/cinder-api-0" Dec 11 15:22:58 crc kubenswrapper[5050]: I1211 15:22:58.384817 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/e156f879-4d43-4b29-86f2-fefc38253daf-logs\") pod \"cinder-api-0\" (UID: \"e156f879-4d43-4b29-86f2-fefc38253daf\") " pod="openstack/cinder-api-0" Dec 11 15:22:58 crc kubenswrapper[5050]: I1211 15:22:58.385379 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e156f879-4d43-4b29-86f2-fefc38253daf-config-data-custom\") pod \"cinder-api-0\" (UID: \"e156f879-4d43-4b29-86f2-fefc38253daf\") " pod="openstack/cinder-api-0" Dec 11 15:22:58 crc kubenswrapper[5050]: I1211 15:22:58.386533 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e156f879-4d43-4b29-86f2-fefc38253daf-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"e156f879-4d43-4b29-86f2-fefc38253daf\") " pod="openstack/cinder-api-0" Dec 11 15:22:58 crc kubenswrapper[5050]: I1211 15:22:58.399776 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e156f879-4d43-4b29-86f2-fefc38253daf-scripts\") pod \"cinder-api-0\" (UID: \"e156f879-4d43-4b29-86f2-fefc38253daf\") " pod="openstack/cinder-api-0" Dec 11 15:22:58 crc kubenswrapper[5050]: I1211 15:22:58.403061 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e156f879-4d43-4b29-86f2-fefc38253daf-config-data\") pod \"cinder-api-0\" (UID: \"e156f879-4d43-4b29-86f2-fefc38253daf\") " pod="openstack/cinder-api-0" Dec 11 15:22:58 crc kubenswrapper[5050]: I1211 15:22:58.407438 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4dr5d\" (UniqueName: \"kubernetes.io/projected/e156f879-4d43-4b29-86f2-fefc38253daf-kube-api-access-4dr5d\") pod \"cinder-api-0\" (UID: \"e156f879-4d43-4b29-86f2-fefc38253daf\") " pod="openstack/cinder-api-0" Dec 11 15:22:58 crc kubenswrapper[5050]: I1211 15:22:58.410390 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Dec 11 15:22:58 crc kubenswrapper[5050]: I1211 15:22:58.868571 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fc54567-bwk9j"] Dec 11 15:22:58 crc kubenswrapper[5050]: I1211 15:22:58.978415 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Dec 11 15:22:58 crc kubenswrapper[5050]: W1211 15:22:58.980503 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode156f879_4d43_4b29_86f2_fefc38253daf.slice/crio-9284b456494778f5748668f61529385c3dad91ee797312a650d4cc2f2f4fc327 WatchSource:0}: Error finding container 9284b456494778f5748668f61529385c3dad91ee797312a650d4cc2f2f4fc327: Status 404 returned error can't find the container with id 9284b456494778f5748668f61529385c3dad91ee797312a650d4cc2f2f4fc327 Dec 11 15:22:59 crc kubenswrapper[5050]: I1211 15:22:59.592560 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e156f879-4d43-4b29-86f2-fefc38253daf","Type":"ContainerStarted","Data":"9284b456494778f5748668f61529385c3dad91ee797312a650d4cc2f2f4fc327"} Dec 11 15:22:59 crc kubenswrapper[5050]: I1211 15:22:59.599324 5050 generic.go:334] "Generic (PLEG): container finished" podID="88c56e95-bf88-47a2-9c36-63f9092746c9" containerID="15e413a3a05acf5a23d5ae4d8f76ea61da79e0f5d6ed7933204f75b733537e8a" exitCode=0 Dec 11 15:22:59 crc kubenswrapper[5050]: I1211 15:22:59.599375 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fc54567-bwk9j" event={"ID":"88c56e95-bf88-47a2-9c36-63f9092746c9","Type":"ContainerDied","Data":"15e413a3a05acf5a23d5ae4d8f76ea61da79e0f5d6ed7933204f75b733537e8a"} Dec 11 15:22:59 crc kubenswrapper[5050]: I1211 15:22:59.599406 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fc54567-bwk9j" event={"ID":"88c56e95-bf88-47a2-9c36-63f9092746c9","Type":"ContainerStarted","Data":"70d0cf17706a8e3c8e6e0d9f8a7cb955f69b519e62a30f01baa6ffba3d3cf71f"} Dec 11 15:23:00 crc kubenswrapper[5050]: I1211 15:23:00.611057 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e156f879-4d43-4b29-86f2-fefc38253daf","Type":"ContainerStarted","Data":"1070e5aad879429cfa80ace93bad4612b20bcc52f34a2099db27c5e2210f04da"} Dec 11 15:23:00 crc kubenswrapper[5050]: I1211 15:23:00.611437 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Dec 11 15:23:00 crc kubenswrapper[5050]: I1211 15:23:00.611462 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e156f879-4d43-4b29-86f2-fefc38253daf","Type":"ContainerStarted","Data":"245acd9b72c2fe04bd7d0f8ee5cf7652f8ab12fa9477aae011d327afc8eb1cae"} Dec 11 15:23:00 crc kubenswrapper[5050]: I1211 15:23:00.614861 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fc54567-bwk9j" event={"ID":"88c56e95-bf88-47a2-9c36-63f9092746c9","Type":"ContainerStarted","Data":"336110a6025e6d0b53ff8f886b6d86222090ad7a85dd7af0ba57ecc585ec26a9"} Dec 11 15:23:00 crc kubenswrapper[5050]: I1211 15:23:00.615143 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-b8fc54567-bwk9j" Dec 11 15:23:00 crc kubenswrapper[5050]: I1211 15:23:00.640152 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=2.640132598 
podStartE2EDuration="2.640132598s" podCreationTimestamp="2025-12-11 15:22:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 15:23:00.627399327 +0000 UTC m=+5671.471121913" watchObservedRunningTime="2025-12-11 15:23:00.640132598 +0000 UTC m=+5671.483855184" Dec 11 15:23:00 crc kubenswrapper[5050]: I1211 15:23:00.646898 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-b8fc54567-bwk9j" podStartSLOduration=3.646879218 podStartE2EDuration="3.646879218s" podCreationTimestamp="2025-12-11 15:22:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 15:23:00.643472507 +0000 UTC m=+5671.487195093" watchObservedRunningTime="2025-12-11 15:23:00.646879218 +0000 UTC m=+5671.490601804" Dec 11 15:23:08 crc kubenswrapper[5050]: I1211 15:23:08.238225 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-b8fc54567-bwk9j" Dec 11 15:23:08 crc kubenswrapper[5050]: I1211 15:23:08.312246 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-68d6fbbf59-rwsxw"] Dec 11 15:23:08 crc kubenswrapper[5050]: I1211 15:23:08.312470 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-68d6fbbf59-rwsxw" podUID="e7bdaee0-e5cc-41e6-8ebc-f471529a89a8" containerName="dnsmasq-dns" containerID="cri-o://4daec903833e5a1d9a7b3772cdd54b26b3dbb6ce85c21b707b4f8478a2ed9ef9" gracePeriod=10 Dec 11 15:23:08 crc kubenswrapper[5050]: I1211 15:23:08.710530 5050 generic.go:334] "Generic (PLEG): container finished" podID="e7bdaee0-e5cc-41e6-8ebc-f471529a89a8" containerID="4daec903833e5a1d9a7b3772cdd54b26b3dbb6ce85c21b707b4f8478a2ed9ef9" exitCode=0 Dec 11 15:23:08 crc kubenswrapper[5050]: I1211 15:23:08.710945 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68d6fbbf59-rwsxw" event={"ID":"e7bdaee0-e5cc-41e6-8ebc-f471529a89a8","Type":"ContainerDied","Data":"4daec903833e5a1d9a7b3772cdd54b26b3dbb6ce85c21b707b4f8478a2ed9ef9"} Dec 11 15:23:08 crc kubenswrapper[5050]: I1211 15:23:08.917130 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-68d6fbbf59-rwsxw" Dec 11 15:23:08 crc kubenswrapper[5050]: I1211 15:23:08.982412 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e7bdaee0-e5cc-41e6-8ebc-f471529a89a8-ovsdbserver-nb\") pod \"e7bdaee0-e5cc-41e6-8ebc-f471529a89a8\" (UID: \"e7bdaee0-e5cc-41e6-8ebc-f471529a89a8\") " Dec 11 15:23:08 crc kubenswrapper[5050]: I1211 15:23:08.982739 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e7bdaee0-e5cc-41e6-8ebc-f471529a89a8-ovsdbserver-sb\") pod \"e7bdaee0-e5cc-41e6-8ebc-f471529a89a8\" (UID: \"e7bdaee0-e5cc-41e6-8ebc-f471529a89a8\") " Dec 11 15:23:08 crc kubenswrapper[5050]: I1211 15:23:08.982782 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g7hjb\" (UniqueName: \"kubernetes.io/projected/e7bdaee0-e5cc-41e6-8ebc-f471529a89a8-kube-api-access-g7hjb\") pod \"e7bdaee0-e5cc-41e6-8ebc-f471529a89a8\" (UID: \"e7bdaee0-e5cc-41e6-8ebc-f471529a89a8\") " Dec 11 15:23:08 crc kubenswrapper[5050]: I1211 15:23:08.982846 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e7bdaee0-e5cc-41e6-8ebc-f471529a89a8-dns-svc\") pod \"e7bdaee0-e5cc-41e6-8ebc-f471529a89a8\" (UID: \"e7bdaee0-e5cc-41e6-8ebc-f471529a89a8\") " Dec 11 15:23:08 crc kubenswrapper[5050]: I1211 15:23:08.982900 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7bdaee0-e5cc-41e6-8ebc-f471529a89a8-config\") pod \"e7bdaee0-e5cc-41e6-8ebc-f471529a89a8\" (UID: \"e7bdaee0-e5cc-41e6-8ebc-f471529a89a8\") " Dec 11 15:23:08 crc kubenswrapper[5050]: I1211 15:23:08.992540 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7bdaee0-e5cc-41e6-8ebc-f471529a89a8-kube-api-access-g7hjb" (OuterVolumeSpecName: "kube-api-access-g7hjb") pod "e7bdaee0-e5cc-41e6-8ebc-f471529a89a8" (UID: "e7bdaee0-e5cc-41e6-8ebc-f471529a89a8"). InnerVolumeSpecName "kube-api-access-g7hjb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 15:23:09 crc kubenswrapper[5050]: I1211 15:23:09.043566 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7bdaee0-e5cc-41e6-8ebc-f471529a89a8-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e7bdaee0-e5cc-41e6-8ebc-f471529a89a8" (UID: "e7bdaee0-e5cc-41e6-8ebc-f471529a89a8"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 15:23:09 crc kubenswrapper[5050]: I1211 15:23:09.043573 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7bdaee0-e5cc-41e6-8ebc-f471529a89a8-config" (OuterVolumeSpecName: "config") pod "e7bdaee0-e5cc-41e6-8ebc-f471529a89a8" (UID: "e7bdaee0-e5cc-41e6-8ebc-f471529a89a8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 15:23:09 crc kubenswrapper[5050]: I1211 15:23:09.055491 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7bdaee0-e5cc-41e6-8ebc-f471529a89a8-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e7bdaee0-e5cc-41e6-8ebc-f471529a89a8" (UID: "e7bdaee0-e5cc-41e6-8ebc-f471529a89a8"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 15:23:09 crc kubenswrapper[5050]: I1211 15:23:09.064385 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7bdaee0-e5cc-41e6-8ebc-f471529a89a8-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e7bdaee0-e5cc-41e6-8ebc-f471529a89a8" (UID: "e7bdaee0-e5cc-41e6-8ebc-f471529a89a8"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 15:23:09 crc kubenswrapper[5050]: I1211 15:23:09.085178 5050 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e7bdaee0-e5cc-41e6-8ebc-f471529a89a8-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 11 15:23:09 crc kubenswrapper[5050]: I1211 15:23:09.085210 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g7hjb\" (UniqueName: \"kubernetes.io/projected/e7bdaee0-e5cc-41e6-8ebc-f471529a89a8-kube-api-access-g7hjb\") on node \"crc\" DevicePath \"\"" Dec 11 15:23:09 crc kubenswrapper[5050]: I1211 15:23:09.085221 5050 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e7bdaee0-e5cc-41e6-8ebc-f471529a89a8-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 11 15:23:09 crc kubenswrapper[5050]: I1211 15:23:09.085255 5050 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7bdaee0-e5cc-41e6-8ebc-f471529a89a8-config\") on node \"crc\" DevicePath \"\"" Dec 11 15:23:09 crc kubenswrapper[5050]: I1211 15:23:09.085264 5050 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e7bdaee0-e5cc-41e6-8ebc-f471529a89a8-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 11 15:23:09 crc kubenswrapper[5050]: I1211 15:23:09.719761 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68d6fbbf59-rwsxw" event={"ID":"e7bdaee0-e5cc-41e6-8ebc-f471529a89a8","Type":"ContainerDied","Data":"db7ed695e6a39ff066038c2564a97defa9911c3b38d42bf3e0a05cec69f7221e"} Dec 11 15:23:09 crc kubenswrapper[5050]: I1211 15:23:09.719808 5050 scope.go:117] "RemoveContainer" containerID="4daec903833e5a1d9a7b3772cdd54b26b3dbb6ce85c21b707b4f8478a2ed9ef9" Dec 11 15:23:09 crc kubenswrapper[5050]: I1211 15:23:09.719946 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-68d6fbbf59-rwsxw" Dec 11 15:23:09 crc kubenswrapper[5050]: I1211 15:23:09.739954 5050 scope.go:117] "RemoveContainer" containerID="3a7c5d8248142decc485d79c6b9769faad19bceb7be82b069818796d04743137" Dec 11 15:23:09 crc kubenswrapper[5050]: I1211 15:23:09.743144 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-68d6fbbf59-rwsxw"] Dec 11 15:23:09 crc kubenswrapper[5050]: I1211 15:23:09.753161 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-68d6fbbf59-rwsxw"] Dec 11 15:23:10 crc kubenswrapper[5050]: I1211 15:23:10.252681 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 11 15:23:10 crc kubenswrapper[5050]: I1211 15:23:10.253233 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell0-conductor-0" podUID="063d7c41-b6e3-4e21-9b57-ae16dddec75e" containerName="nova-cell0-conductor-conductor" containerID="cri-o://03b2da0c6e232f7ddc475f0fe3660a9b619575601d4596fb72229cfda6a5e070" gracePeriod=30 Dec 11 15:23:10 crc kubenswrapper[5050]: I1211 15:23:10.265346 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 11 15:23:10 crc kubenswrapper[5050]: I1211 15:23:10.265795 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="04bd236f-ca75-4592-ba08-b2a0975485cf" containerName="nova-api-log" containerID="cri-o://362345edf73a8955a8db318f46b29fa3a2aded319afa6bb49fa65831f409cfb3" gracePeriod=30 Dec 11 15:23:10 crc kubenswrapper[5050]: I1211 15:23:10.265900 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="04bd236f-ca75-4592-ba08-b2a0975485cf" containerName="nova-api-api" containerID="cri-o://6b074152650d06f0652cae8c5f58c063867201238db092a0eee18e9d607b157a" gracePeriod=30 Dec 11 15:23:10 crc kubenswrapper[5050]: I1211 15:23:10.275642 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 11 15:23:10 crc kubenswrapper[5050]: I1211 15:23:10.275833 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="63dac332-2a74-496b-bf44-83acbf69ad11" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://0d66ac326a05f25fc9b2fa6aea5c914ec9c3296dc44ce9f8f0b1caf402c6b92a" gracePeriod=30 Dec 11 15:23:10 crc kubenswrapper[5050]: I1211 15:23:10.287602 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 11 15:23:10 crc kubenswrapper[5050]: I1211 15:23:10.287823 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="93efaf6f-33d4-4fb3-ac60-f564a9496fdf" containerName="nova-metadata-log" containerID="cri-o://b89e12da37cfa25c10b2219419b5164111657e01581a7ac6ad51cb02c0de55d6" gracePeriod=30 Dec 11 15:23:10 crc kubenswrapper[5050]: I1211 15:23:10.288027 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="93efaf6f-33d4-4fb3-ac60-f564a9496fdf" containerName="nova-metadata-metadata" containerID="cri-o://f9c2a19739f8baaac559ab31e190fdbf57aa90f2f7cd221cab39b46bb4dfa250" gracePeriod=30 Dec 11 15:23:10 crc kubenswrapper[5050]: I1211 15:23:10.352637 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Dec 11 15:23:10 crc kubenswrapper[5050]: I1211 15:23:10.353229 5050 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="b23edbac-1c5a-49ce-9cb3-39c6b3bf689e" containerName="nova-scheduler-scheduler" containerID="cri-o://380ed4609ec1c05c25da4a2e18dc163423299fa05e358e41ced42424cd8e919a" gracePeriod=30 Dec 11 15:23:10 crc kubenswrapper[5050]: I1211 15:23:10.651728 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Dec 11 15:23:10 crc kubenswrapper[5050]: I1211 15:23:10.750918 5050 generic.go:334] "Generic (PLEG): container finished" podID="93efaf6f-33d4-4fb3-ac60-f564a9496fdf" containerID="b89e12da37cfa25c10b2219419b5164111657e01581a7ac6ad51cb02c0de55d6" exitCode=143 Dec 11 15:23:10 crc kubenswrapper[5050]: I1211 15:23:10.751204 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"93efaf6f-33d4-4fb3-ac60-f564a9496fdf","Type":"ContainerDied","Data":"b89e12da37cfa25c10b2219419b5164111657e01581a7ac6ad51cb02c0de55d6"} Dec 11 15:23:10 crc kubenswrapper[5050]: I1211 15:23:10.754263 5050 generic.go:334] "Generic (PLEG): container finished" podID="04bd236f-ca75-4592-ba08-b2a0975485cf" containerID="362345edf73a8955a8db318f46b29fa3a2aded319afa6bb49fa65831f409cfb3" exitCode=143 Dec 11 15:23:10 crc kubenswrapper[5050]: I1211 15:23:10.754287 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"04bd236f-ca75-4592-ba08-b2a0975485cf","Type":"ContainerDied","Data":"362345edf73a8955a8db318f46b29fa3a2aded319afa6bb49fa65831f409cfb3"} Dec 11 15:23:11 crc kubenswrapper[5050]: I1211 15:23:11.276256 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-cell1-novncproxy-0" podUID="63dac332-2a74-496b-bf44-83acbf69ad11" containerName="nova-cell1-novncproxy-novncproxy" probeResult="failure" output="Get \"http://10.217.1.65:6080/vnc_lite.html\": dial tcp 10.217.1.65:6080: connect: connection refused" Dec 11 15:23:11 crc kubenswrapper[5050]: I1211 15:23:11.557615 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7bdaee0-e5cc-41e6-8ebc-f471529a89a8" path="/var/lib/kubelet/pods/e7bdaee0-e5cc-41e6-8ebc-f471529a89a8/volumes" Dec 11 15:23:11 crc kubenswrapper[5050]: I1211 15:23:11.589753 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 11 15:23:11 crc kubenswrapper[5050]: I1211 15:23:11.740121 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63dac332-2a74-496b-bf44-83acbf69ad11-combined-ca-bundle\") pod \"63dac332-2a74-496b-bf44-83acbf69ad11\" (UID: \"63dac332-2a74-496b-bf44-83acbf69ad11\") " Dec 11 15:23:11 crc kubenswrapper[5050]: I1211 15:23:11.740240 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63dac332-2a74-496b-bf44-83acbf69ad11-config-data\") pod \"63dac332-2a74-496b-bf44-83acbf69ad11\" (UID: \"63dac332-2a74-496b-bf44-83acbf69ad11\") " Dec 11 15:23:11 crc kubenswrapper[5050]: I1211 15:23:11.740292 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zbs8m\" (UniqueName: \"kubernetes.io/projected/63dac332-2a74-496b-bf44-83acbf69ad11-kube-api-access-zbs8m\") pod \"63dac332-2a74-496b-bf44-83acbf69ad11\" (UID: \"63dac332-2a74-496b-bf44-83acbf69ad11\") " Dec 11 15:23:11 crc kubenswrapper[5050]: I1211 15:23:11.759893 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63dac332-2a74-496b-bf44-83acbf69ad11-kube-api-access-zbs8m" (OuterVolumeSpecName: "kube-api-access-zbs8m") pod "63dac332-2a74-496b-bf44-83acbf69ad11" (UID: "63dac332-2a74-496b-bf44-83acbf69ad11"). InnerVolumeSpecName "kube-api-access-zbs8m". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 15:23:11 crc kubenswrapper[5050]: I1211 15:23:11.771524 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 11 15:23:11 crc kubenswrapper[5050]: I1211 15:23:11.771432 5050 generic.go:334] "Generic (PLEG): container finished" podID="63dac332-2a74-496b-bf44-83acbf69ad11" containerID="0d66ac326a05f25fc9b2fa6aea5c914ec9c3296dc44ce9f8f0b1caf402c6b92a" exitCode=0 Dec 11 15:23:11 crc kubenswrapper[5050]: I1211 15:23:11.771770 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"63dac332-2a74-496b-bf44-83acbf69ad11","Type":"ContainerDied","Data":"0d66ac326a05f25fc9b2fa6aea5c914ec9c3296dc44ce9f8f0b1caf402c6b92a"} Dec 11 15:23:11 crc kubenswrapper[5050]: I1211 15:23:11.771810 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"63dac332-2a74-496b-bf44-83acbf69ad11","Type":"ContainerDied","Data":"86aa62c29a161b771d222c202d89c5f9f577a0234deb98460de7ff61dd517db2"} Dec 11 15:23:11 crc kubenswrapper[5050]: I1211 15:23:11.772374 5050 scope.go:117] "RemoveContainer" containerID="0d66ac326a05f25fc9b2fa6aea5c914ec9c3296dc44ce9f8f0b1caf402c6b92a" Dec 11 15:23:11 crc kubenswrapper[5050]: I1211 15:23:11.780706 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63dac332-2a74-496b-bf44-83acbf69ad11-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "63dac332-2a74-496b-bf44-83acbf69ad11" (UID: "63dac332-2a74-496b-bf44-83acbf69ad11"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 15:23:11 crc kubenswrapper[5050]: I1211 15:23:11.784653 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63dac332-2a74-496b-bf44-83acbf69ad11-config-data" (OuterVolumeSpecName: "config-data") pod "63dac332-2a74-496b-bf44-83acbf69ad11" (UID: "63dac332-2a74-496b-bf44-83acbf69ad11"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 15:23:11 crc kubenswrapper[5050]: I1211 15:23:11.842350 5050 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63dac332-2a74-496b-bf44-83acbf69ad11-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 15:23:11 crc kubenswrapper[5050]: I1211 15:23:11.843415 5050 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63dac332-2a74-496b-bf44-83acbf69ad11-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 15:23:11 crc kubenswrapper[5050]: I1211 15:23:11.843433 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zbs8m\" (UniqueName: \"kubernetes.io/projected/63dac332-2a74-496b-bf44-83acbf69ad11-kube-api-access-zbs8m\") on node \"crc\" DevicePath \"\"" Dec 11 15:23:11 crc kubenswrapper[5050]: I1211 15:23:11.868369 5050 scope.go:117] "RemoveContainer" containerID="0d66ac326a05f25fc9b2fa6aea5c914ec9c3296dc44ce9f8f0b1caf402c6b92a" Dec 11 15:23:11 crc kubenswrapper[5050]: E1211 15:23:11.868948 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0d66ac326a05f25fc9b2fa6aea5c914ec9c3296dc44ce9f8f0b1caf402c6b92a\": container with ID starting with 0d66ac326a05f25fc9b2fa6aea5c914ec9c3296dc44ce9f8f0b1caf402c6b92a not found: ID does not exist" containerID="0d66ac326a05f25fc9b2fa6aea5c914ec9c3296dc44ce9f8f0b1caf402c6b92a" Dec 11 15:23:11 crc kubenswrapper[5050]: I1211 15:23:11.868983 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d66ac326a05f25fc9b2fa6aea5c914ec9c3296dc44ce9f8f0b1caf402c6b92a"} err="failed to get container status \"0d66ac326a05f25fc9b2fa6aea5c914ec9c3296dc44ce9f8f0b1caf402c6b92a\": rpc error: code = NotFound desc = could not find container \"0d66ac326a05f25fc9b2fa6aea5c914ec9c3296dc44ce9f8f0b1caf402c6b92a\": container with ID starting with 0d66ac326a05f25fc9b2fa6aea5c914ec9c3296dc44ce9f8f0b1caf402c6b92a not found: ID does not exist" Dec 11 15:23:12 crc kubenswrapper[5050]: I1211 15:23:12.112819 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 11 15:23:12 crc kubenswrapper[5050]: I1211 15:23:12.129702 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 11 15:23:12 crc kubenswrapper[5050]: I1211 15:23:12.165925 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 11 15:23:12 crc kubenswrapper[5050]: E1211 15:23:12.166448 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7bdaee0-e5cc-41e6-8ebc-f471529a89a8" containerName="init" Dec 11 15:23:12 crc kubenswrapper[5050]: I1211 15:23:12.166477 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7bdaee0-e5cc-41e6-8ebc-f471529a89a8" containerName="init" Dec 11 15:23:12 crc kubenswrapper[5050]: E1211 15:23:12.166494 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63dac332-2a74-496b-bf44-83acbf69ad11" 
containerName="nova-cell1-novncproxy-novncproxy" Dec 11 15:23:12 crc kubenswrapper[5050]: I1211 15:23:12.166503 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="63dac332-2a74-496b-bf44-83acbf69ad11" containerName="nova-cell1-novncproxy-novncproxy" Dec 11 15:23:12 crc kubenswrapper[5050]: E1211 15:23:12.166539 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7bdaee0-e5cc-41e6-8ebc-f471529a89a8" containerName="dnsmasq-dns" Dec 11 15:23:12 crc kubenswrapper[5050]: I1211 15:23:12.166547 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7bdaee0-e5cc-41e6-8ebc-f471529a89a8" containerName="dnsmasq-dns" Dec 11 15:23:12 crc kubenswrapper[5050]: I1211 15:23:12.166781 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="63dac332-2a74-496b-bf44-83acbf69ad11" containerName="nova-cell1-novncproxy-novncproxy" Dec 11 15:23:12 crc kubenswrapper[5050]: I1211 15:23:12.166813 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7bdaee0-e5cc-41e6-8ebc-f471529a89a8" containerName="dnsmasq-dns" Dec 11 15:23:12 crc kubenswrapper[5050]: I1211 15:23:12.167701 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 11 15:23:12 crc kubenswrapper[5050]: I1211 15:23:12.171388 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Dec 11 15:23:12 crc kubenswrapper[5050]: I1211 15:23:12.184594 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 11 15:23:12 crc kubenswrapper[5050]: I1211 15:23:12.252840 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sdjpn\" (UniqueName: \"kubernetes.io/projected/09be86e5-17b2-4aef-a719-f080cd813391-kube-api-access-sdjpn\") pod \"nova-cell1-novncproxy-0\" (UID: \"09be86e5-17b2-4aef-a719-f080cd813391\") " pod="openstack/nova-cell1-novncproxy-0" Dec 11 15:23:12 crc kubenswrapper[5050]: I1211 15:23:12.252965 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09be86e5-17b2-4aef-a719-f080cd813391-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"09be86e5-17b2-4aef-a719-f080cd813391\") " pod="openstack/nova-cell1-novncproxy-0" Dec 11 15:23:12 crc kubenswrapper[5050]: I1211 15:23:12.253029 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09be86e5-17b2-4aef-a719-f080cd813391-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"09be86e5-17b2-4aef-a719-f080cd813391\") " pod="openstack/nova-cell1-novncproxy-0" Dec 11 15:23:12 crc kubenswrapper[5050]: I1211 15:23:12.354169 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sdjpn\" (UniqueName: \"kubernetes.io/projected/09be86e5-17b2-4aef-a719-f080cd813391-kube-api-access-sdjpn\") pod \"nova-cell1-novncproxy-0\" (UID: \"09be86e5-17b2-4aef-a719-f080cd813391\") " pod="openstack/nova-cell1-novncproxy-0" Dec 11 15:23:12 crc kubenswrapper[5050]: I1211 15:23:12.354266 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09be86e5-17b2-4aef-a719-f080cd813391-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"09be86e5-17b2-4aef-a719-f080cd813391\") " pod="openstack/nova-cell1-novncproxy-0" Dec 
11 15:23:12 crc kubenswrapper[5050]: I1211 15:23:12.354318 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09be86e5-17b2-4aef-a719-f080cd813391-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"09be86e5-17b2-4aef-a719-f080cd813391\") " pod="openstack/nova-cell1-novncproxy-0" Dec 11 15:23:12 crc kubenswrapper[5050]: I1211 15:23:12.362272 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09be86e5-17b2-4aef-a719-f080cd813391-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"09be86e5-17b2-4aef-a719-f080cd813391\") " pod="openstack/nova-cell1-novncproxy-0" Dec 11 15:23:12 crc kubenswrapper[5050]: I1211 15:23:12.367277 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09be86e5-17b2-4aef-a719-f080cd813391-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"09be86e5-17b2-4aef-a719-f080cd813391\") " pod="openstack/nova-cell1-novncproxy-0" Dec 11 15:23:12 crc kubenswrapper[5050]: I1211 15:23:12.368748 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sdjpn\" (UniqueName: \"kubernetes.io/projected/09be86e5-17b2-4aef-a719-f080cd813391-kube-api-access-sdjpn\") pod \"nova-cell1-novncproxy-0\" (UID: \"09be86e5-17b2-4aef-a719-f080cd813391\") " pod="openstack/nova-cell1-novncproxy-0" Dec 11 15:23:12 crc kubenswrapper[5050]: I1211 15:23:12.489773 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 11 15:23:13 crc kubenswrapper[5050]: I1211 15:23:13.094686 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 11 15:23:13 crc kubenswrapper[5050]: W1211 15:23:13.147872 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod09be86e5_17b2_4aef_a719_f080cd813391.slice/crio-ed97fa6e892fe00845222ac586a7119e24c7e963dccfb9f68d46adfca2638b41 WatchSource:0}: Error finding container ed97fa6e892fe00845222ac586a7119e24c7e963dccfb9f68d46adfca2638b41: Status 404 returned error can't find the container with id ed97fa6e892fe00845222ac586a7119e24c7e963dccfb9f68d46adfca2638b41 Dec 11 15:23:13 crc kubenswrapper[5050]: I1211 15:23:13.418716 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-api-0" podUID="04bd236f-ca75-4592-ba08-b2a0975485cf" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.1.73:8774/\": read tcp 10.217.0.2:43886->10.217.1.73:8774: read: connection reset by peer" Dec 11 15:23:13 crc kubenswrapper[5050]: I1211 15:23:13.418717 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-api-0" podUID="04bd236f-ca75-4592-ba08-b2a0975485cf" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.1.73:8774/\": read tcp 10.217.0.2:43888->10.217.1.73:8774: read: connection reset by peer" Dec 11 15:23:13 crc kubenswrapper[5050]: I1211 15:23:13.433466 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="93efaf6f-33d4-4fb3-ac60-f564a9496fdf" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"http://10.217.1.72:8775/\": read tcp 10.217.0.2:42886->10.217.1.72:8775: read: connection reset by peer" Dec 11 15:23:13 crc kubenswrapper[5050]: I1211 15:23:13.433511 5050 
prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="93efaf6f-33d4-4fb3-ac60-f564a9496fdf" containerName="nova-metadata-log" probeResult="failure" output="Get \"http://10.217.1.72:8775/\": read tcp 10.217.0.2:42882->10.217.1.72:8775: read: connection reset by peer" Dec 11 15:23:13 crc kubenswrapper[5050]: I1211 15:23:13.473615 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Dec 11 15:23:13 crc kubenswrapper[5050]: I1211 15:23:13.473834 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-conductor-0" podUID="11adb6b7-1e47-45c3-932b-9f3e248d7621" containerName="nova-cell1-conductor-conductor" containerID="cri-o://47f1a1038f9fbef7ae0b1178f8e3117cb60df69aa73d0374eaa42c0f61a1c05e" gracePeriod=30 Dec 11 15:23:13 crc kubenswrapper[5050]: I1211 15:23:13.565088 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="63dac332-2a74-496b-bf44-83acbf69ad11" path="/var/lib/kubelet/pods/63dac332-2a74-496b-bf44-83acbf69ad11/volumes" Dec 11 15:23:13 crc kubenswrapper[5050]: I1211 15:23:13.598568 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 11 15:23:13 crc kubenswrapper[5050]: I1211 15:23:13.779253 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b23edbac-1c5a-49ce-9cb3-39c6b3bf689e-combined-ca-bundle\") pod \"b23edbac-1c5a-49ce-9cb3-39c6b3bf689e\" (UID: \"b23edbac-1c5a-49ce-9cb3-39c6b3bf689e\") " Dec 11 15:23:13 crc kubenswrapper[5050]: I1211 15:23:13.779293 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k6n6g\" (UniqueName: \"kubernetes.io/projected/b23edbac-1c5a-49ce-9cb3-39c6b3bf689e-kube-api-access-k6n6g\") pod \"b23edbac-1c5a-49ce-9cb3-39c6b3bf689e\" (UID: \"b23edbac-1c5a-49ce-9cb3-39c6b3bf689e\") " Dec 11 15:23:13 crc kubenswrapper[5050]: I1211 15:23:13.779324 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b23edbac-1c5a-49ce-9cb3-39c6b3bf689e-config-data\") pod \"b23edbac-1c5a-49ce-9cb3-39c6b3bf689e\" (UID: \"b23edbac-1c5a-49ce-9cb3-39c6b3bf689e\") " Dec 11 15:23:13 crc kubenswrapper[5050]: I1211 15:23:13.792460 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b23edbac-1c5a-49ce-9cb3-39c6b3bf689e-kube-api-access-k6n6g" (OuterVolumeSpecName: "kube-api-access-k6n6g") pod "b23edbac-1c5a-49ce-9cb3-39c6b3bf689e" (UID: "b23edbac-1c5a-49ce-9cb3-39c6b3bf689e"). InnerVolumeSpecName "kube-api-access-k6n6g". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 15:23:13 crc kubenswrapper[5050]: I1211 15:23:13.808375 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b23edbac-1c5a-49ce-9cb3-39c6b3bf689e-config-data" (OuterVolumeSpecName: "config-data") pod "b23edbac-1c5a-49ce-9cb3-39c6b3bf689e" (UID: "b23edbac-1c5a-49ce-9cb3-39c6b3bf689e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 15:23:13 crc kubenswrapper[5050]: I1211 15:23:13.812514 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-68d6fbbf59-rwsxw" podUID="e7bdaee0-e5cc-41e6-8ebc-f471529a89a8" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.1.75:5353: i/o timeout" Dec 11 15:23:13 crc kubenswrapper[5050]: I1211 15:23:13.837707 5050 generic.go:334] "Generic (PLEG): container finished" podID="b23edbac-1c5a-49ce-9cb3-39c6b3bf689e" containerID="380ed4609ec1c05c25da4a2e18dc163423299fa05e358e41ced42424cd8e919a" exitCode=0 Dec 11 15:23:13 crc kubenswrapper[5050]: I1211 15:23:13.837796 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"b23edbac-1c5a-49ce-9cb3-39c6b3bf689e","Type":"ContainerDied","Data":"380ed4609ec1c05c25da4a2e18dc163423299fa05e358e41ced42424cd8e919a"} Dec 11 15:23:13 crc kubenswrapper[5050]: I1211 15:23:13.837828 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"b23edbac-1c5a-49ce-9cb3-39c6b3bf689e","Type":"ContainerDied","Data":"f1ba3da98603fd64978ea35e84a3525b084b8a51c799e62f060913ce6675436c"} Dec 11 15:23:13 crc kubenswrapper[5050]: I1211 15:23:13.837848 5050 scope.go:117] "RemoveContainer" containerID="380ed4609ec1c05c25da4a2e18dc163423299fa05e358e41ced42424cd8e919a" Dec 11 15:23:13 crc kubenswrapper[5050]: I1211 15:23:13.837991 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 11 15:23:13 crc kubenswrapper[5050]: I1211 15:23:13.844143 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b23edbac-1c5a-49ce-9cb3-39c6b3bf689e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b23edbac-1c5a-49ce-9cb3-39c6b3bf689e" (UID: "b23edbac-1c5a-49ce-9cb3-39c6b3bf689e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 15:23:13 crc kubenswrapper[5050]: I1211 15:23:13.847636 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"09be86e5-17b2-4aef-a719-f080cd813391","Type":"ContainerStarted","Data":"73eb9cfa2fd742a000bcec0648b00db6b46f79dfdc0826eecdc4311166a190a8"} Dec 11 15:23:13 crc kubenswrapper[5050]: I1211 15:23:13.847712 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"09be86e5-17b2-4aef-a719-f080cd813391","Type":"ContainerStarted","Data":"ed97fa6e892fe00845222ac586a7119e24c7e963dccfb9f68d46adfca2638b41"} Dec 11 15:23:13 crc kubenswrapper[5050]: I1211 15:23:13.857918 5050 generic.go:334] "Generic (PLEG): container finished" podID="93efaf6f-33d4-4fb3-ac60-f564a9496fdf" containerID="f9c2a19739f8baaac559ab31e190fdbf57aa90f2f7cd221cab39b46bb4dfa250" exitCode=0 Dec 11 15:23:13 crc kubenswrapper[5050]: I1211 15:23:13.857999 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"93efaf6f-33d4-4fb3-ac60-f564a9496fdf","Type":"ContainerDied","Data":"f9c2a19739f8baaac559ab31e190fdbf57aa90f2f7cd221cab39b46bb4dfa250"} Dec 11 15:23:13 crc kubenswrapper[5050]: I1211 15:23:13.862240 5050 generic.go:334] "Generic (PLEG): container finished" podID="04bd236f-ca75-4592-ba08-b2a0975485cf" containerID="6b074152650d06f0652cae8c5f58c063867201238db092a0eee18e9d607b157a" exitCode=0 Dec 11 15:23:13 crc kubenswrapper[5050]: I1211 15:23:13.862307 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"04bd236f-ca75-4592-ba08-b2a0975485cf","Type":"ContainerDied","Data":"6b074152650d06f0652cae8c5f58c063867201238db092a0eee18e9d607b157a"} Dec 11 15:23:13 crc kubenswrapper[5050]: I1211 15:23:13.872940 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=1.872923513 podStartE2EDuration="1.872923513s" podCreationTimestamp="2025-12-11 15:23:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 15:23:13.864429116 +0000 UTC m=+5684.708151702" watchObservedRunningTime="2025-12-11 15:23:13.872923513 +0000 UTC m=+5684.716646099" Dec 11 15:23:13 crc kubenswrapper[5050]: I1211 15:23:13.885527 5050 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b23edbac-1c5a-49ce-9cb3-39c6b3bf689e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 15:23:13 crc kubenswrapper[5050]: I1211 15:23:13.885554 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k6n6g\" (UniqueName: \"kubernetes.io/projected/b23edbac-1c5a-49ce-9cb3-39c6b3bf689e-kube-api-access-k6n6g\") on node \"crc\" DevicePath \"\"" Dec 11 15:23:13 crc kubenswrapper[5050]: I1211 15:23:13.885567 5050 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b23edbac-1c5a-49ce-9cb3-39c6b3bf689e-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 15:23:13 crc kubenswrapper[5050]: I1211 15:23:13.931241 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 11 15:23:13 crc kubenswrapper[5050]: I1211 15:23:13.941318 5050 scope.go:117] "RemoveContainer" containerID="380ed4609ec1c05c25da4a2e18dc163423299fa05e358e41ced42424cd8e919a" Dec 11 15:23:13 crc kubenswrapper[5050]: E1211 15:23:13.941670 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"380ed4609ec1c05c25da4a2e18dc163423299fa05e358e41ced42424cd8e919a\": container with ID starting with 380ed4609ec1c05c25da4a2e18dc163423299fa05e358e41ced42424cd8e919a not found: ID does not exist" containerID="380ed4609ec1c05c25da4a2e18dc163423299fa05e358e41ced42424cd8e919a" Dec 11 15:23:13 crc kubenswrapper[5050]: I1211 15:23:13.941699 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"380ed4609ec1c05c25da4a2e18dc163423299fa05e358e41ced42424cd8e919a"} err="failed to get container status \"380ed4609ec1c05c25da4a2e18dc163423299fa05e358e41ced42424cd8e919a\": rpc error: code = NotFound desc = could not find container \"380ed4609ec1c05c25da4a2e18dc163423299fa05e358e41ced42424cd8e919a\": container with ID starting with 380ed4609ec1c05c25da4a2e18dc163423299fa05e358e41ced42424cd8e919a not found: ID does not exist" Dec 11 15:23:14 crc kubenswrapper[5050]: I1211 15:23:14.030000 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 11 15:23:14 crc kubenswrapper[5050]: I1211 15:23:14.088198 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hsvmz\" (UniqueName: \"kubernetes.io/projected/04bd236f-ca75-4592-ba08-b2a0975485cf-kube-api-access-hsvmz\") pod \"04bd236f-ca75-4592-ba08-b2a0975485cf\" (UID: \"04bd236f-ca75-4592-ba08-b2a0975485cf\") " Dec 11 15:23:14 crc kubenswrapper[5050]: I1211 15:23:14.088247 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04bd236f-ca75-4592-ba08-b2a0975485cf-config-data\") pod \"04bd236f-ca75-4592-ba08-b2a0975485cf\" (UID: \"04bd236f-ca75-4592-ba08-b2a0975485cf\") " Dec 11 15:23:14 crc kubenswrapper[5050]: I1211 15:23:14.088324 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/04bd236f-ca75-4592-ba08-b2a0975485cf-logs\") pod \"04bd236f-ca75-4592-ba08-b2a0975485cf\" (UID: \"04bd236f-ca75-4592-ba08-b2a0975485cf\") " Dec 11 15:23:14 crc kubenswrapper[5050]: I1211 15:23:14.088473 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04bd236f-ca75-4592-ba08-b2a0975485cf-combined-ca-bundle\") pod \"04bd236f-ca75-4592-ba08-b2a0975485cf\" (UID: \"04bd236f-ca75-4592-ba08-b2a0975485cf\") " Dec 11 15:23:14 crc kubenswrapper[5050]: I1211 15:23:14.088778 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/04bd236f-ca75-4592-ba08-b2a0975485cf-logs" (OuterVolumeSpecName: "logs") pod "04bd236f-ca75-4592-ba08-b2a0975485cf" (UID: "04bd236f-ca75-4592-ba08-b2a0975485cf"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 15:23:14 crc kubenswrapper[5050]: I1211 15:23:14.088884 5050 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/04bd236f-ca75-4592-ba08-b2a0975485cf-logs\") on node \"crc\" DevicePath \"\"" Dec 11 15:23:14 crc kubenswrapper[5050]: I1211 15:23:14.094594 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04bd236f-ca75-4592-ba08-b2a0975485cf-kube-api-access-hsvmz" (OuterVolumeSpecName: "kube-api-access-hsvmz") pod "04bd236f-ca75-4592-ba08-b2a0975485cf" (UID: "04bd236f-ca75-4592-ba08-b2a0975485cf"). InnerVolumeSpecName "kube-api-access-hsvmz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 15:23:14 crc kubenswrapper[5050]: I1211 15:23:14.137071 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04bd236f-ca75-4592-ba08-b2a0975485cf-config-data" (OuterVolumeSpecName: "config-data") pod "04bd236f-ca75-4592-ba08-b2a0975485cf" (UID: "04bd236f-ca75-4592-ba08-b2a0975485cf"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 15:23:14 crc kubenswrapper[5050]: I1211 15:23:14.146106 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04bd236f-ca75-4592-ba08-b2a0975485cf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "04bd236f-ca75-4592-ba08-b2a0975485cf" (UID: "04bd236f-ca75-4592-ba08-b2a0975485cf"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 15:23:14 crc kubenswrapper[5050]: I1211 15:23:14.190301 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93efaf6f-33d4-4fb3-ac60-f564a9496fdf-combined-ca-bundle\") pod \"93efaf6f-33d4-4fb3-ac60-f564a9496fdf\" (UID: \"93efaf6f-33d4-4fb3-ac60-f564a9496fdf\") " Dec 11 15:23:14 crc kubenswrapper[5050]: I1211 15:23:14.190527 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gwrg9\" (UniqueName: \"kubernetes.io/projected/93efaf6f-33d4-4fb3-ac60-f564a9496fdf-kube-api-access-gwrg9\") pod \"93efaf6f-33d4-4fb3-ac60-f564a9496fdf\" (UID: \"93efaf6f-33d4-4fb3-ac60-f564a9496fdf\") " Dec 11 15:23:14 crc kubenswrapper[5050]: I1211 15:23:14.190755 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93efaf6f-33d4-4fb3-ac60-f564a9496fdf-config-data\") pod \"93efaf6f-33d4-4fb3-ac60-f564a9496fdf\" (UID: \"93efaf6f-33d4-4fb3-ac60-f564a9496fdf\") " Dec 11 15:23:14 crc kubenswrapper[5050]: I1211 15:23:14.190820 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/93efaf6f-33d4-4fb3-ac60-f564a9496fdf-logs\") pod \"93efaf6f-33d4-4fb3-ac60-f564a9496fdf\" (UID: \"93efaf6f-33d4-4fb3-ac60-f564a9496fdf\") " Dec 11 15:23:14 crc kubenswrapper[5050]: I1211 15:23:14.202057 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Dec 11 15:23:14 crc kubenswrapper[5050]: I1211 15:23:14.204323 5050 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04bd236f-ca75-4592-ba08-b2a0975485cf-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 15:23:14 crc kubenswrapper[5050]: I1211 15:23:14.204363 5050 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hsvmz\" (UniqueName: \"kubernetes.io/projected/04bd236f-ca75-4592-ba08-b2a0975485cf-kube-api-access-hsvmz\") on node \"crc\" DevicePath \"\"" Dec 11 15:23:14 crc kubenswrapper[5050]: I1211 15:23:14.204377 5050 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04bd236f-ca75-4592-ba08-b2a0975485cf-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 15:23:14 crc kubenswrapper[5050]: I1211 15:23:14.204895 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/93efaf6f-33d4-4fb3-ac60-f564a9496fdf-logs" (OuterVolumeSpecName: "logs") pod "93efaf6f-33d4-4fb3-ac60-f564a9496fdf" (UID: "93efaf6f-33d4-4fb3-ac60-f564a9496fdf"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 15:23:14 crc kubenswrapper[5050]: I1211 15:23:14.211287 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93efaf6f-33d4-4fb3-ac60-f564a9496fdf-kube-api-access-gwrg9" (OuterVolumeSpecName: "kube-api-access-gwrg9") pod "93efaf6f-33d4-4fb3-ac60-f564a9496fdf" (UID: "93efaf6f-33d4-4fb3-ac60-f564a9496fdf"). InnerVolumeSpecName "kube-api-access-gwrg9". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 15:23:14 crc kubenswrapper[5050]: I1211 15:23:14.227129 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Dec 11 15:23:14 crc kubenswrapper[5050]: I1211 15:23:14.251239 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93efaf6f-33d4-4fb3-ac60-f564a9496fdf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "93efaf6f-33d4-4fb3-ac60-f564a9496fdf" (UID: "93efaf6f-33d4-4fb3-ac60-f564a9496fdf"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 15:23:14 crc kubenswrapper[5050]: I1211 15:23:14.252165 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93efaf6f-33d4-4fb3-ac60-f564a9496fdf-config-data" (OuterVolumeSpecName: "config-data") pod "93efaf6f-33d4-4fb3-ac60-f564a9496fdf" (UID: "93efaf6f-33d4-4fb3-ac60-f564a9496fdf"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 15:23:14 crc kubenswrapper[5050]: I1211 15:23:14.256431 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Dec 11 15:23:14 crc kubenswrapper[5050]: E1211 15:23:14.256846 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04bd236f-ca75-4592-ba08-b2a0975485cf" containerName="nova-api-api" Dec 11 15:23:14 crc kubenswrapper[5050]: I1211 15:23:14.256863 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="04bd236f-ca75-4592-ba08-b2a0975485cf" containerName="nova-api-api" Dec 11 15:23:14 crc kubenswrapper[5050]: E1211 15:23:14.256884 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93efaf6f-33d4-4fb3-ac60-f564a9496fdf" containerName="nova-metadata-metadata" Dec 11 15:23:14 crc kubenswrapper[5050]: I1211 15:23:14.256891 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="93efaf6f-33d4-4fb3-ac60-f564a9496fdf" containerName="nova-metadata-metadata" Dec 11 15:23:14 crc kubenswrapper[5050]: E1211 15:23:14.256903 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04bd236f-ca75-4592-ba08-b2a0975485cf" containerName="nova-api-log" Dec 11 15:23:14 crc kubenswrapper[5050]: I1211 15:23:14.256908 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="04bd236f-ca75-4592-ba08-b2a0975485cf" containerName="nova-api-log" Dec 11 15:23:14 crc kubenswrapper[5050]: E1211 15:23:14.256915 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93efaf6f-33d4-4fb3-ac60-f564a9496fdf" containerName="nova-metadata-log" Dec 11 15:23:14 crc kubenswrapper[5050]: I1211 15:23:14.256921 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="93efaf6f-33d4-4fb3-ac60-f564a9496fdf" containerName="nova-metadata-log" Dec 11 15:23:14 crc kubenswrapper[5050]: E1211 15:23:14.256934 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b23edbac-1c5a-49ce-9cb3-39c6b3bf689e" containerName="nova-scheduler-scheduler" Dec 11 15:23:14 crc kubenswrapper[5050]: I1211 15:23:14.256940 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="b23edbac-1c5a-49ce-9cb3-39c6b3bf689e" containerName="nova-scheduler-scheduler" Dec 11 15:23:14 crc kubenswrapper[5050]: I1211 15:23:14.257108 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="04bd236f-ca75-4592-ba08-b2a0975485cf" containerName="nova-api-api" Dec 11 15:23:14 crc kubenswrapper[5050]: I1211 15:23:14.257126 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="b23edbac-1c5a-49ce-9cb3-39c6b3bf689e" containerName="nova-scheduler-scheduler" Dec 11 15:23:14 crc kubenswrapper[5050]: I1211 15:23:14.257135 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="93efaf6f-33d4-4fb3-ac60-f564a9496fdf" containerName="nova-metadata-log" Dec 11 15:23:14 crc kubenswrapper[5050]: I1211 15:23:14.257145 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="93efaf6f-33d4-4fb3-ac60-f564a9496fdf" containerName="nova-metadata-metadata" Dec 11 15:23:14 crc kubenswrapper[5050]: I1211 15:23:14.257155 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="04bd236f-ca75-4592-ba08-b2a0975485cf" containerName="nova-api-log" Dec 11 15:23:14 crc kubenswrapper[5050]: I1211 15:23:14.257928 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Dec 11 15:23:14 crc kubenswrapper[5050]: I1211 15:23:14.260592 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Dec 11 15:23:14 crc kubenswrapper[5050]: I1211 15:23:14.267820 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 11 15:23:14 crc kubenswrapper[5050]: I1211 15:23:14.305923 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gwrg9\" (UniqueName: \"kubernetes.io/projected/93efaf6f-33d4-4fb3-ac60-f564a9496fdf-kube-api-access-gwrg9\") on node \"crc\" DevicePath \"\"" Dec 11 15:23:14 crc kubenswrapper[5050]: I1211 15:23:14.305964 5050 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93efaf6f-33d4-4fb3-ac60-f564a9496fdf-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 15:23:14 crc kubenswrapper[5050]: I1211 15:23:14.305977 5050 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/93efaf6f-33d4-4fb3-ac60-f564a9496fdf-logs\") on node \"crc\" DevicePath \"\"" Dec 11 15:23:14 crc kubenswrapper[5050]: I1211 15:23:14.305990 5050 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93efaf6f-33d4-4fb3-ac60-f564a9496fdf-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 15:23:14 crc kubenswrapper[5050]: I1211 15:23:14.408077 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sm4xd\" (UniqueName: \"kubernetes.io/projected/08b1c29b-5161-48cd-ae7e-6267c4dca960-kube-api-access-sm4xd\") pod \"nova-scheduler-0\" (UID: \"08b1c29b-5161-48cd-ae7e-6267c4dca960\") " pod="openstack/nova-scheduler-0" Dec 11 15:23:14 crc kubenswrapper[5050]: I1211 15:23:14.408360 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08b1c29b-5161-48cd-ae7e-6267c4dca960-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"08b1c29b-5161-48cd-ae7e-6267c4dca960\") " pod="openstack/nova-scheduler-0" Dec 11 15:23:14 crc kubenswrapper[5050]: I1211 15:23:14.408418 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08b1c29b-5161-48cd-ae7e-6267c4dca960-config-data\") pod \"nova-scheduler-0\" (UID: \"08b1c29b-5161-48cd-ae7e-6267c4dca960\") " pod="openstack/nova-scheduler-0" Dec 11 15:23:14 crc kubenswrapper[5050]: I1211 15:23:14.516875 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sm4xd\" (UniqueName: \"kubernetes.io/projected/08b1c29b-5161-48cd-ae7e-6267c4dca960-kube-api-access-sm4xd\") pod \"nova-scheduler-0\" (UID: \"08b1c29b-5161-48cd-ae7e-6267c4dca960\") " pod="openstack/nova-scheduler-0" Dec 11 15:23:14 crc kubenswrapper[5050]: I1211 15:23:14.517284 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08b1c29b-5161-48cd-ae7e-6267c4dca960-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"08b1c29b-5161-48cd-ae7e-6267c4dca960\") " pod="openstack/nova-scheduler-0" Dec 11 15:23:14 crc kubenswrapper[5050]: I1211 15:23:14.517310 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/08b1c29b-5161-48cd-ae7e-6267c4dca960-config-data\") pod \"nova-scheduler-0\" (UID: \"08b1c29b-5161-48cd-ae7e-6267c4dca960\") " pod="openstack/nova-scheduler-0" Dec 11 15:23:14 crc kubenswrapper[5050]: I1211 15:23:14.521208 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08b1c29b-5161-48cd-ae7e-6267c4dca960-config-data\") pod \"nova-scheduler-0\" (UID: \"08b1c29b-5161-48cd-ae7e-6267c4dca960\") " pod="openstack/nova-scheduler-0" Dec 11 15:23:14 crc kubenswrapper[5050]: I1211 15:23:14.521800 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08b1c29b-5161-48cd-ae7e-6267c4dca960-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"08b1c29b-5161-48cd-ae7e-6267c4dca960\") " pod="openstack/nova-scheduler-0" Dec 11 15:23:14 crc kubenswrapper[5050]: I1211 15:23:14.535182 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sm4xd\" (UniqueName: \"kubernetes.io/projected/08b1c29b-5161-48cd-ae7e-6267c4dca960-kube-api-access-sm4xd\") pod \"nova-scheduler-0\" (UID: \"08b1c29b-5161-48cd-ae7e-6267c4dca960\") " pod="openstack/nova-scheduler-0" Dec 11 15:23:14 crc kubenswrapper[5050]: I1211 15:23:14.583389 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 11 15:23:14 crc kubenswrapper[5050]: I1211 15:23:14.880127 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"93efaf6f-33d4-4fb3-ac60-f564a9496fdf","Type":"ContainerDied","Data":"a0a02a5d8cf8dd39c12238fbb67886ab4dd31e979f84a5177f8032948f4de935"} Dec 11 15:23:14 crc kubenswrapper[5050]: I1211 15:23:14.880617 5050 scope.go:117] "RemoveContainer" containerID="f9c2a19739f8baaac559ab31e190fdbf57aa90f2f7cd221cab39b46bb4dfa250" Dec 11 15:23:14 crc kubenswrapper[5050]: I1211 15:23:14.880522 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 11 15:23:14 crc kubenswrapper[5050]: I1211 15:23:14.888124 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"04bd236f-ca75-4592-ba08-b2a0975485cf","Type":"ContainerDied","Data":"cb4318c61c892f16a485c3a3a4b56b7465f31c00e1d391096f2249a30627c3ee"} Dec 11 15:23:14 crc kubenswrapper[5050]: I1211 15:23:14.888220 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 11 15:23:14 crc kubenswrapper[5050]: I1211 15:23:14.913116 5050 scope.go:117] "RemoveContainer" containerID="b89e12da37cfa25c10b2219419b5164111657e01581a7ac6ad51cb02c0de55d6" Dec 11 15:23:14 crc kubenswrapper[5050]: I1211 15:23:14.972431 5050 scope.go:117] "RemoveContainer" containerID="6b074152650d06f0652cae8c5f58c063867201238db092a0eee18e9d607b157a" Dec 11 15:23:14 crc kubenswrapper[5050]: I1211 15:23:14.978499 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 11 15:23:15 crc kubenswrapper[5050]: I1211 15:23:15.009152 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Dec 11 15:23:15 crc kubenswrapper[5050]: E1211 15:23:15.015622 5050 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="03b2da0c6e232f7ddc475f0fe3660a9b619575601d4596fb72229cfda6a5e070" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Dec 11 15:23:15 crc kubenswrapper[5050]: E1211 15:23:15.018780 5050 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="03b2da0c6e232f7ddc475f0fe3660a9b619575601d4596fb72229cfda6a5e070" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Dec 11 15:23:15 crc kubenswrapper[5050]: E1211 15:23:15.020843 5050 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="03b2da0c6e232f7ddc475f0fe3660a9b619575601d4596fb72229cfda6a5e070" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Dec 11 15:23:15 crc kubenswrapper[5050]: E1211 15:23:15.020889 5050 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="063d7c41-b6e3-4e21-9b57-ae16dddec75e" containerName="nova-cell0-conductor-conductor" Dec 11 15:23:15 crc kubenswrapper[5050]: I1211 15:23:15.024387 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 11 15:23:15 crc kubenswrapper[5050]: I1211 15:23:15.031585 5050 scope.go:117] "RemoveContainer" containerID="362345edf73a8955a8db318f46b29fa3a2aded319afa6bb49fa65831f409cfb3" Dec 11 15:23:15 crc kubenswrapper[5050]: I1211 15:23:15.038705 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Dec 11 15:23:15 crc kubenswrapper[5050]: I1211 15:23:15.051836 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Dec 11 15:23:15 crc kubenswrapper[5050]: I1211 15:23:15.054490 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 11 15:23:15 crc kubenswrapper[5050]: I1211 15:23:15.056879 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Dec 11 15:23:15 crc kubenswrapper[5050]: I1211 15:23:15.074409 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 11 15:23:15 crc kubenswrapper[5050]: I1211 15:23:15.082953 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Dec 11 15:23:15 crc kubenswrapper[5050]: I1211 15:23:15.087586 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 11 15:23:15 crc kubenswrapper[5050]: I1211 15:23:15.089719 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Dec 11 15:23:15 crc kubenswrapper[5050]: I1211 15:23:15.094594 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 11 15:23:15 crc kubenswrapper[5050]: I1211 15:23:15.146583 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8f5e6aa6-3780-47e9-b274-2b4045ae3e3b-logs\") pod \"nova-metadata-0\" (UID: \"8f5e6aa6-3780-47e9-b274-2b4045ae3e3b\") " pod="openstack/nova-metadata-0" Dec 11 15:23:15 crc kubenswrapper[5050]: I1211 15:23:15.146653 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d482b1ac-2b67-4751-b701-79a91abfef1b-logs\") pod \"nova-api-0\" (UID: \"d482b1ac-2b67-4751-b701-79a91abfef1b\") " pod="openstack/nova-api-0" Dec 11 15:23:15 crc kubenswrapper[5050]: I1211 15:23:15.146681 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d482b1ac-2b67-4751-b701-79a91abfef1b-config-data\") pod \"nova-api-0\" (UID: \"d482b1ac-2b67-4751-b701-79a91abfef1b\") " pod="openstack/nova-api-0" Dec 11 15:23:15 crc kubenswrapper[5050]: I1211 15:23:15.146734 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6nvh\" (UniqueName: \"kubernetes.io/projected/8f5e6aa6-3780-47e9-b274-2b4045ae3e3b-kube-api-access-j6nvh\") pod \"nova-metadata-0\" (UID: \"8f5e6aa6-3780-47e9-b274-2b4045ae3e3b\") " pod="openstack/nova-metadata-0" Dec 11 15:23:15 crc kubenswrapper[5050]: I1211 15:23:15.146800 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f5e6aa6-3780-47e9-b274-2b4045ae3e3b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"8f5e6aa6-3780-47e9-b274-2b4045ae3e3b\") " pod="openstack/nova-metadata-0" Dec 11 15:23:15 crc kubenswrapper[5050]: I1211 15:23:15.146821 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f5e6aa6-3780-47e9-b274-2b4045ae3e3b-config-data\") pod \"nova-metadata-0\" (UID: \"8f5e6aa6-3780-47e9-b274-2b4045ae3e3b\") " pod="openstack/nova-metadata-0" Dec 11 15:23:15 crc kubenswrapper[5050]: I1211 15:23:15.146845 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrlzh\" (UniqueName: \"kubernetes.io/projected/d482b1ac-2b67-4751-b701-79a91abfef1b-kube-api-access-wrlzh\") pod \"nova-api-0\" (UID: 
\"d482b1ac-2b67-4751-b701-79a91abfef1b\") " pod="openstack/nova-api-0" Dec 11 15:23:15 crc kubenswrapper[5050]: I1211 15:23:15.146865 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d482b1ac-2b67-4751-b701-79a91abfef1b-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"d482b1ac-2b67-4751-b701-79a91abfef1b\") " pod="openstack/nova-api-0" Dec 11 15:23:15 crc kubenswrapper[5050]: I1211 15:23:15.151709 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 11 15:23:15 crc kubenswrapper[5050]: I1211 15:23:15.248821 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8f5e6aa6-3780-47e9-b274-2b4045ae3e3b-logs\") pod \"nova-metadata-0\" (UID: \"8f5e6aa6-3780-47e9-b274-2b4045ae3e3b\") " pod="openstack/nova-metadata-0" Dec 11 15:23:15 crc kubenswrapper[5050]: I1211 15:23:15.249260 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d482b1ac-2b67-4751-b701-79a91abfef1b-logs\") pod \"nova-api-0\" (UID: \"d482b1ac-2b67-4751-b701-79a91abfef1b\") " pod="openstack/nova-api-0" Dec 11 15:23:15 crc kubenswrapper[5050]: I1211 15:23:15.249307 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d482b1ac-2b67-4751-b701-79a91abfef1b-config-data\") pod \"nova-api-0\" (UID: \"d482b1ac-2b67-4751-b701-79a91abfef1b\") " pod="openstack/nova-api-0" Dec 11 15:23:15 crc kubenswrapper[5050]: I1211 15:23:15.249361 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j6nvh\" (UniqueName: \"kubernetes.io/projected/8f5e6aa6-3780-47e9-b274-2b4045ae3e3b-kube-api-access-j6nvh\") pod \"nova-metadata-0\" (UID: \"8f5e6aa6-3780-47e9-b274-2b4045ae3e3b\") " pod="openstack/nova-metadata-0" Dec 11 15:23:15 crc kubenswrapper[5050]: I1211 15:23:15.249440 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f5e6aa6-3780-47e9-b274-2b4045ae3e3b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"8f5e6aa6-3780-47e9-b274-2b4045ae3e3b\") " pod="openstack/nova-metadata-0" Dec 11 15:23:15 crc kubenswrapper[5050]: I1211 15:23:15.249463 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f5e6aa6-3780-47e9-b274-2b4045ae3e3b-config-data\") pod \"nova-metadata-0\" (UID: \"8f5e6aa6-3780-47e9-b274-2b4045ae3e3b\") " pod="openstack/nova-metadata-0" Dec 11 15:23:15 crc kubenswrapper[5050]: I1211 15:23:15.249486 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wrlzh\" (UniqueName: \"kubernetes.io/projected/d482b1ac-2b67-4751-b701-79a91abfef1b-kube-api-access-wrlzh\") pod \"nova-api-0\" (UID: \"d482b1ac-2b67-4751-b701-79a91abfef1b\") " pod="openstack/nova-api-0" Dec 11 15:23:15 crc kubenswrapper[5050]: I1211 15:23:15.249509 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d482b1ac-2b67-4751-b701-79a91abfef1b-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"d482b1ac-2b67-4751-b701-79a91abfef1b\") " pod="openstack/nova-api-0" Dec 11 15:23:15 crc kubenswrapper[5050]: I1211 15:23:15.249727 5050 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d482b1ac-2b67-4751-b701-79a91abfef1b-logs\") pod \"nova-api-0\" (UID: \"d482b1ac-2b67-4751-b701-79a91abfef1b\") " pod="openstack/nova-api-0" Dec 11 15:23:15 crc kubenswrapper[5050]: I1211 15:23:15.249892 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8f5e6aa6-3780-47e9-b274-2b4045ae3e3b-logs\") pod \"nova-metadata-0\" (UID: \"8f5e6aa6-3780-47e9-b274-2b4045ae3e3b\") " pod="openstack/nova-metadata-0" Dec 11 15:23:15 crc kubenswrapper[5050]: I1211 15:23:15.253164 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d482b1ac-2b67-4751-b701-79a91abfef1b-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"d482b1ac-2b67-4751-b701-79a91abfef1b\") " pod="openstack/nova-api-0" Dec 11 15:23:15 crc kubenswrapper[5050]: I1211 15:23:15.253808 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d482b1ac-2b67-4751-b701-79a91abfef1b-config-data\") pod \"nova-api-0\" (UID: \"d482b1ac-2b67-4751-b701-79a91abfef1b\") " pod="openstack/nova-api-0" Dec 11 15:23:15 crc kubenswrapper[5050]: I1211 15:23:15.254394 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f5e6aa6-3780-47e9-b274-2b4045ae3e3b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"8f5e6aa6-3780-47e9-b274-2b4045ae3e3b\") " pod="openstack/nova-metadata-0" Dec 11 15:23:15 crc kubenswrapper[5050]: I1211 15:23:15.255431 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f5e6aa6-3780-47e9-b274-2b4045ae3e3b-config-data\") pod \"nova-metadata-0\" (UID: \"8f5e6aa6-3780-47e9-b274-2b4045ae3e3b\") " pod="openstack/nova-metadata-0" Dec 11 15:23:15 crc kubenswrapper[5050]: I1211 15:23:15.268485 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wrlzh\" (UniqueName: \"kubernetes.io/projected/d482b1ac-2b67-4751-b701-79a91abfef1b-kube-api-access-wrlzh\") pod \"nova-api-0\" (UID: \"d482b1ac-2b67-4751-b701-79a91abfef1b\") " pod="openstack/nova-api-0" Dec 11 15:23:15 crc kubenswrapper[5050]: I1211 15:23:15.268522 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j6nvh\" (UniqueName: \"kubernetes.io/projected/8f5e6aa6-3780-47e9-b274-2b4045ae3e3b-kube-api-access-j6nvh\") pod \"nova-metadata-0\" (UID: \"8f5e6aa6-3780-47e9-b274-2b4045ae3e3b\") " pod="openstack/nova-metadata-0" Dec 11 15:23:15 crc kubenswrapper[5050]: I1211 15:23:15.377738 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 11 15:23:15 crc kubenswrapper[5050]: I1211 15:23:15.404863 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 11 15:23:15 crc kubenswrapper[5050]: I1211 15:23:15.565038 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="04bd236f-ca75-4592-ba08-b2a0975485cf" path="/var/lib/kubelet/pods/04bd236f-ca75-4592-ba08-b2a0975485cf/volumes" Dec 11 15:23:15 crc kubenswrapper[5050]: I1211 15:23:15.568189 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="93efaf6f-33d4-4fb3-ac60-f564a9496fdf" path="/var/lib/kubelet/pods/93efaf6f-33d4-4fb3-ac60-f564a9496fdf/volumes" Dec 11 15:23:15 crc kubenswrapper[5050]: I1211 15:23:15.569642 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b23edbac-1c5a-49ce-9cb3-39c6b3bf689e" path="/var/lib/kubelet/pods/b23edbac-1c5a-49ce-9cb3-39c6b3bf689e/volumes" Dec 11 15:23:15 crc kubenswrapper[5050]: I1211 15:23:15.906315 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 11 15:23:15 crc kubenswrapper[5050]: I1211 15:23:15.918544 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"08b1c29b-5161-48cd-ae7e-6267c4dca960","Type":"ContainerStarted","Data":"7df84e410b71e32cadade2291282fd30653fead25cca1a4f06d73e19ab2b865f"} Dec 11 15:23:15 crc kubenswrapper[5050]: I1211 15:23:15.918588 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"08b1c29b-5161-48cd-ae7e-6267c4dca960","Type":"ContainerStarted","Data":"9614e841c8ad045d92c37953c3b473f51a2f914a67fbbb073f499f9e08690099"} Dec 11 15:23:15 crc kubenswrapper[5050]: I1211 15:23:15.953061 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=1.9530383169999999 podStartE2EDuration="1.953038317s" podCreationTimestamp="2025-12-11 15:23:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 15:23:15.948786783 +0000 UTC m=+5686.792509369" watchObservedRunningTime="2025-12-11 15:23:15.953038317 +0000 UTC m=+5686.796760903" Dec 11 15:23:16 crc kubenswrapper[5050]: I1211 15:23:16.027821 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 11 15:23:16 crc kubenswrapper[5050]: I1211 15:23:16.934897 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8f5e6aa6-3780-47e9-b274-2b4045ae3e3b","Type":"ContainerStarted","Data":"0b5f8ccc0ea980e0e4f97cce68d50ed611b0671e8f51b90af49bc490ca426c16"} Dec 11 15:23:16 crc kubenswrapper[5050]: I1211 15:23:16.936393 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8f5e6aa6-3780-47e9-b274-2b4045ae3e3b","Type":"ContainerStarted","Data":"39f10a657b1ca63e7acdde9bb8ef1f5fa7104ed8de5e77fb2b6b91264728db77"} Dec 11 15:23:16 crc kubenswrapper[5050]: I1211 15:23:16.936456 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8f5e6aa6-3780-47e9-b274-2b4045ae3e3b","Type":"ContainerStarted","Data":"aabb97d93b221c17726ecdebca12e494f7040d63e57a63b2f2628f01a84061b2"} Dec 11 15:23:16 crc kubenswrapper[5050]: I1211 15:23:16.954830 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d482b1ac-2b67-4751-b701-79a91abfef1b","Type":"ContainerStarted","Data":"cdbea56a63b0a9e505bce148254437aca51b4d73eb991336e59be19be232806c"} Dec 11 15:23:16 crc kubenswrapper[5050]: I1211 15:23:16.954873 5050 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d482b1ac-2b67-4751-b701-79a91abfef1b","Type":"ContainerStarted","Data":"5d8c4a8b80e39fcacd5373e0a393864d141a14e1d0d632b5049e7b3b5904fe98"} Dec 11 15:23:16 crc kubenswrapper[5050]: I1211 15:23:16.954884 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d482b1ac-2b67-4751-b701-79a91abfef1b","Type":"ContainerStarted","Data":"c3d889b0f6dd2496226fc76881e2cecb6eb50ab60d7891f201cd452c0cc5041e"} Dec 11 15:23:16 crc kubenswrapper[5050]: I1211 15:23:16.978683 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.978665029 podStartE2EDuration="2.978665029s" podCreationTimestamp="2025-12-11 15:23:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 15:23:16.963131164 +0000 UTC m=+5687.806853750" watchObservedRunningTime="2025-12-11 15:23:16.978665029 +0000 UTC m=+5687.822387615" Dec 11 15:23:16 crc kubenswrapper[5050]: I1211 15:23:16.995023 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.9949892350000002 podStartE2EDuration="2.994989235s" podCreationTimestamp="2025-12-11 15:23:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 15:23:16.984103204 +0000 UTC m=+5687.827825790" watchObservedRunningTime="2025-12-11 15:23:16.994989235 +0000 UTC m=+5687.838711821" Dec 11 15:23:17 crc kubenswrapper[5050]: I1211 15:23:17.490731 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Dec 11 15:23:17 crc kubenswrapper[5050]: I1211 15:23:17.826078 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Dec 11 15:23:17 crc kubenswrapper[5050]: I1211 15:23:17.962970 5050 generic.go:334] "Generic (PLEG): container finished" podID="11adb6b7-1e47-45c3-932b-9f3e248d7621" containerID="47f1a1038f9fbef7ae0b1178f8e3117cb60df69aa73d0374eaa42c0f61a1c05e" exitCode=0 Dec 11 15:23:17 crc kubenswrapper[5050]: I1211 15:23:17.963047 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Dec 11 15:23:17 crc kubenswrapper[5050]: I1211 15:23:17.963085 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"11adb6b7-1e47-45c3-932b-9f3e248d7621","Type":"ContainerDied","Data":"47f1a1038f9fbef7ae0b1178f8e3117cb60df69aa73d0374eaa42c0f61a1c05e"} Dec 11 15:23:17 crc kubenswrapper[5050]: I1211 15:23:17.963143 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"11adb6b7-1e47-45c3-932b-9f3e248d7621","Type":"ContainerDied","Data":"e827c0ad139bbfa4c2cecdc5ef81e79f003a975f08ddbd66ac3a8db2f53a7bdc"} Dec 11 15:23:17 crc kubenswrapper[5050]: I1211 15:23:17.963183 5050 scope.go:117] "RemoveContainer" containerID="47f1a1038f9fbef7ae0b1178f8e3117cb60df69aa73d0374eaa42c0f61a1c05e" Dec 11 15:23:17 crc kubenswrapper[5050]: I1211 15:23:17.980607 5050 scope.go:117] "RemoveContainer" containerID="47f1a1038f9fbef7ae0b1178f8e3117cb60df69aa73d0374eaa42c0f61a1c05e" Dec 11 15:23:17 crc kubenswrapper[5050]: E1211 15:23:17.980973 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"47f1a1038f9fbef7ae0b1178f8e3117cb60df69aa73d0374eaa42c0f61a1c05e\": container with ID starting with 47f1a1038f9fbef7ae0b1178f8e3117cb60df69aa73d0374eaa42c0f61a1c05e not found: ID does not exist" containerID="47f1a1038f9fbef7ae0b1178f8e3117cb60df69aa73d0374eaa42c0f61a1c05e" Dec 11 15:23:17 crc kubenswrapper[5050]: I1211 15:23:17.981047 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"47f1a1038f9fbef7ae0b1178f8e3117cb60df69aa73d0374eaa42c0f61a1c05e"} err="failed to get container status \"47f1a1038f9fbef7ae0b1178f8e3117cb60df69aa73d0374eaa42c0f61a1c05e\": rpc error: code = NotFound desc = could not find container \"47f1a1038f9fbef7ae0b1178f8e3117cb60df69aa73d0374eaa42c0f61a1c05e\": container with ID starting with 47f1a1038f9fbef7ae0b1178f8e3117cb60df69aa73d0374eaa42c0f61a1c05e not found: ID does not exist" Dec 11 15:23:18 crc kubenswrapper[5050]: I1211 15:23:18.007719 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11adb6b7-1e47-45c3-932b-9f3e248d7621-combined-ca-bundle\") pod \"11adb6b7-1e47-45c3-932b-9f3e248d7621\" (UID: \"11adb6b7-1e47-45c3-932b-9f3e248d7621\") " Dec 11 15:23:18 crc kubenswrapper[5050]: I1211 15:23:18.008109 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11adb6b7-1e47-45c3-932b-9f3e248d7621-config-data\") pod \"11adb6b7-1e47-45c3-932b-9f3e248d7621\" (UID: \"11adb6b7-1e47-45c3-932b-9f3e248d7621\") " Dec 11 15:23:18 crc kubenswrapper[5050]: I1211 15:23:18.008234 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xbzxc\" (UniqueName: \"kubernetes.io/projected/11adb6b7-1e47-45c3-932b-9f3e248d7621-kube-api-access-xbzxc\") pod \"11adb6b7-1e47-45c3-932b-9f3e248d7621\" (UID: \"11adb6b7-1e47-45c3-932b-9f3e248d7621\") " Dec 11 15:23:18 crc kubenswrapper[5050]: I1211 15:23:18.016969 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11adb6b7-1e47-45c3-932b-9f3e248d7621-kube-api-access-xbzxc" (OuterVolumeSpecName: "kube-api-access-xbzxc") pod "11adb6b7-1e47-45c3-932b-9f3e248d7621" (UID: "11adb6b7-1e47-45c3-932b-9f3e248d7621"). 
InnerVolumeSpecName "kube-api-access-xbzxc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 15:23:18 crc kubenswrapper[5050]: I1211 15:23:18.033255 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11adb6b7-1e47-45c3-932b-9f3e248d7621-config-data" (OuterVolumeSpecName: "config-data") pod "11adb6b7-1e47-45c3-932b-9f3e248d7621" (UID: "11adb6b7-1e47-45c3-932b-9f3e248d7621"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 15:23:18 crc kubenswrapper[5050]: I1211 15:23:18.033298 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11adb6b7-1e47-45c3-932b-9f3e248d7621-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "11adb6b7-1e47-45c3-932b-9f3e248d7621" (UID: "11adb6b7-1e47-45c3-932b-9f3e248d7621"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 15:23:18 crc kubenswrapper[5050]: I1211 15:23:18.110954 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xbzxc\" (UniqueName: \"kubernetes.io/projected/11adb6b7-1e47-45c3-932b-9f3e248d7621-kube-api-access-xbzxc\") on node \"crc\" DevicePath \"\"" Dec 11 15:23:18 crc kubenswrapper[5050]: I1211 15:23:18.110993 5050 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11adb6b7-1e47-45c3-932b-9f3e248d7621-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 15:23:18 crc kubenswrapper[5050]: I1211 15:23:18.111036 5050 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11adb6b7-1e47-45c3-932b-9f3e248d7621-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 15:23:18 crc kubenswrapper[5050]: I1211 15:23:18.291243 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Dec 11 15:23:18 crc kubenswrapper[5050]: I1211 15:23:18.300857 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-0"] Dec 11 15:23:18 crc kubenswrapper[5050]: I1211 15:23:18.316039 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Dec 11 15:23:18 crc kubenswrapper[5050]: E1211 15:23:18.316482 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11adb6b7-1e47-45c3-932b-9f3e248d7621" containerName="nova-cell1-conductor-conductor" Dec 11 15:23:18 crc kubenswrapper[5050]: I1211 15:23:18.316506 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="11adb6b7-1e47-45c3-932b-9f3e248d7621" containerName="nova-cell1-conductor-conductor" Dec 11 15:23:18 crc kubenswrapper[5050]: I1211 15:23:18.316768 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="11adb6b7-1e47-45c3-932b-9f3e248d7621" containerName="nova-cell1-conductor-conductor" Dec 11 15:23:18 crc kubenswrapper[5050]: I1211 15:23:18.317599 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Dec 11 15:23:18 crc kubenswrapper[5050]: I1211 15:23:18.319600 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Dec 11 15:23:18 crc kubenswrapper[5050]: I1211 15:23:18.327641 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Dec 11 15:23:18 crc kubenswrapper[5050]: I1211 15:23:18.518035 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7m9br\" (UniqueName: \"kubernetes.io/projected/3e0c8b75-8354-41db-abde-064d71cea120-kube-api-access-7m9br\") pod \"nova-cell1-conductor-0\" (UID: \"3e0c8b75-8354-41db-abde-064d71cea120\") " pod="openstack/nova-cell1-conductor-0" Dec 11 15:23:18 crc kubenswrapper[5050]: I1211 15:23:18.518179 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e0c8b75-8354-41db-abde-064d71cea120-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"3e0c8b75-8354-41db-abde-064d71cea120\") " pod="openstack/nova-cell1-conductor-0" Dec 11 15:23:18 crc kubenswrapper[5050]: I1211 15:23:18.518345 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e0c8b75-8354-41db-abde-064d71cea120-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"3e0c8b75-8354-41db-abde-064d71cea120\") " pod="openstack/nova-cell1-conductor-0" Dec 11 15:23:18 crc kubenswrapper[5050]: I1211 15:23:18.621666 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7m9br\" (UniqueName: \"kubernetes.io/projected/3e0c8b75-8354-41db-abde-064d71cea120-kube-api-access-7m9br\") pod \"nova-cell1-conductor-0\" (UID: \"3e0c8b75-8354-41db-abde-064d71cea120\") " pod="openstack/nova-cell1-conductor-0" Dec 11 15:23:18 crc kubenswrapper[5050]: I1211 15:23:18.622413 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e0c8b75-8354-41db-abde-064d71cea120-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"3e0c8b75-8354-41db-abde-064d71cea120\") " pod="openstack/nova-cell1-conductor-0" Dec 11 15:23:18 crc kubenswrapper[5050]: I1211 15:23:18.622480 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e0c8b75-8354-41db-abde-064d71cea120-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"3e0c8b75-8354-41db-abde-064d71cea120\") " pod="openstack/nova-cell1-conductor-0" Dec 11 15:23:18 crc kubenswrapper[5050]: I1211 15:23:18.625258 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e0c8b75-8354-41db-abde-064d71cea120-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"3e0c8b75-8354-41db-abde-064d71cea120\") " pod="openstack/nova-cell1-conductor-0" Dec 11 15:23:18 crc kubenswrapper[5050]: I1211 15:23:18.626156 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e0c8b75-8354-41db-abde-064d71cea120-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"3e0c8b75-8354-41db-abde-064d71cea120\") " pod="openstack/nova-cell1-conductor-0" Dec 11 15:23:18 crc kubenswrapper[5050]: I1211 15:23:18.642789 5050 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7m9br\" (UniqueName: \"kubernetes.io/projected/3e0c8b75-8354-41db-abde-064d71cea120-kube-api-access-7m9br\") pod \"nova-cell1-conductor-0\" (UID: \"3e0c8b75-8354-41db-abde-064d71cea120\") " pod="openstack/nova-cell1-conductor-0" Dec 11 15:23:18 crc kubenswrapper[5050]: I1211 15:23:18.651451 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Dec 11 15:23:18 crc kubenswrapper[5050]: I1211 15:23:18.980917 5050 generic.go:334] "Generic (PLEG): container finished" podID="063d7c41-b6e3-4e21-9b57-ae16dddec75e" containerID="03b2da0c6e232f7ddc475f0fe3660a9b619575601d4596fb72229cfda6a5e070" exitCode=0 Dec 11 15:23:18 crc kubenswrapper[5050]: I1211 15:23:18.980986 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"063d7c41-b6e3-4e21-9b57-ae16dddec75e","Type":"ContainerDied","Data":"03b2da0c6e232f7ddc475f0fe3660a9b619575601d4596fb72229cfda6a5e070"} Dec 11 15:23:19 crc kubenswrapper[5050]: I1211 15:23:19.131918 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Dec 11 15:23:19 crc kubenswrapper[5050]: W1211 15:23:19.138595 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3e0c8b75_8354_41db_abde_064d71cea120.slice/crio-4f62132c94150b431c6040ae4171b6bee51c91a321220961a016b24243865f02 WatchSource:0}: Error finding container 4f62132c94150b431c6040ae4171b6bee51c91a321220961a016b24243865f02: Status 404 returned error can't find the container with id 4f62132c94150b431c6040ae4171b6bee51c91a321220961a016b24243865f02 Dec 11 15:23:19 crc kubenswrapper[5050]: I1211 15:23:19.244968 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Dec 11 15:23:19 crc kubenswrapper[5050]: I1211 15:23:19.339981 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/063d7c41-b6e3-4e21-9b57-ae16dddec75e-config-data\") pod \"063d7c41-b6e3-4e21-9b57-ae16dddec75e\" (UID: \"063d7c41-b6e3-4e21-9b57-ae16dddec75e\") " Dec 11 15:23:19 crc kubenswrapper[5050]: I1211 15:23:19.340084 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/063d7c41-b6e3-4e21-9b57-ae16dddec75e-combined-ca-bundle\") pod \"063d7c41-b6e3-4e21-9b57-ae16dddec75e\" (UID: \"063d7c41-b6e3-4e21-9b57-ae16dddec75e\") " Dec 11 15:23:19 crc kubenswrapper[5050]: I1211 15:23:19.340140 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9nh7w\" (UniqueName: \"kubernetes.io/projected/063d7c41-b6e3-4e21-9b57-ae16dddec75e-kube-api-access-9nh7w\") pod \"063d7c41-b6e3-4e21-9b57-ae16dddec75e\" (UID: \"063d7c41-b6e3-4e21-9b57-ae16dddec75e\") " Dec 11 15:23:19 crc kubenswrapper[5050]: I1211 15:23:19.344264 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/063d7c41-b6e3-4e21-9b57-ae16dddec75e-kube-api-access-9nh7w" (OuterVolumeSpecName: "kube-api-access-9nh7w") pod "063d7c41-b6e3-4e21-9b57-ae16dddec75e" (UID: "063d7c41-b6e3-4e21-9b57-ae16dddec75e"). InnerVolumeSpecName "kube-api-access-9nh7w". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 15:23:19 crc kubenswrapper[5050]: I1211 15:23:19.363702 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/063d7c41-b6e3-4e21-9b57-ae16dddec75e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "063d7c41-b6e3-4e21-9b57-ae16dddec75e" (UID: "063d7c41-b6e3-4e21-9b57-ae16dddec75e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 15:23:19 crc kubenswrapper[5050]: I1211 15:23:19.370348 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/063d7c41-b6e3-4e21-9b57-ae16dddec75e-config-data" (OuterVolumeSpecName: "config-data") pod "063d7c41-b6e3-4e21-9b57-ae16dddec75e" (UID: "063d7c41-b6e3-4e21-9b57-ae16dddec75e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 15:23:19 crc kubenswrapper[5050]: I1211 15:23:19.442204 5050 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/063d7c41-b6e3-4e21-9b57-ae16dddec75e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 15:23:19 crc kubenswrapper[5050]: I1211 15:23:19.442238 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9nh7w\" (UniqueName: \"kubernetes.io/projected/063d7c41-b6e3-4e21-9b57-ae16dddec75e-kube-api-access-9nh7w\") on node \"crc\" DevicePath \"\"" Dec 11 15:23:19 crc kubenswrapper[5050]: I1211 15:23:19.442249 5050 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/063d7c41-b6e3-4e21-9b57-ae16dddec75e-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 15:23:19 crc kubenswrapper[5050]: I1211 15:23:19.556067 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="11adb6b7-1e47-45c3-932b-9f3e248d7621" path="/var/lib/kubelet/pods/11adb6b7-1e47-45c3-932b-9f3e248d7621/volumes" Dec 11 15:23:19 crc kubenswrapper[5050]: I1211 15:23:19.584236 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Dec 11 15:23:19 crc kubenswrapper[5050]: I1211 15:23:19.999002 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"3e0c8b75-8354-41db-abde-064d71cea120","Type":"ContainerStarted","Data":"effe322bc66200ee09562a5e6685dc754717750ee89708a39bca333a548d3d02"} Dec 11 15:23:20 crc kubenswrapper[5050]: I1211 15:23:19.999386 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"3e0c8b75-8354-41db-abde-064d71cea120","Type":"ContainerStarted","Data":"4f62132c94150b431c6040ae4171b6bee51c91a321220961a016b24243865f02"} Dec 11 15:23:20 crc kubenswrapper[5050]: I1211 15:23:19.999487 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Dec 11 15:23:20 crc kubenswrapper[5050]: I1211 15:23:20.001840 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"063d7c41-b6e3-4e21-9b57-ae16dddec75e","Type":"ContainerDied","Data":"b4da6477b27c50c9a7a30fe27f11a9c81875b735aed0b377f1a067c3a98a0b97"} Dec 11 15:23:20 crc kubenswrapper[5050]: I1211 15:23:20.001901 5050 scope.go:117] "RemoveContainer" containerID="03b2da0c6e232f7ddc475f0fe3660a9b619575601d4596fb72229cfda6a5e070" Dec 11 15:23:20 crc kubenswrapper[5050]: I1211 15:23:20.002200 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Dec 11 15:23:20 crc kubenswrapper[5050]: I1211 15:23:20.045667 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.0456492 podStartE2EDuration="2.0456492s" podCreationTimestamp="2025-12-11 15:23:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 15:23:20.022465541 +0000 UTC m=+5690.866188147" watchObservedRunningTime="2025-12-11 15:23:20.0456492 +0000 UTC m=+5690.889371786" Dec 11 15:23:20 crc kubenswrapper[5050]: I1211 15:23:20.061987 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 11 15:23:20 crc kubenswrapper[5050]: I1211 15:23:20.090498 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 11 15:23:20 crc kubenswrapper[5050]: I1211 15:23:20.103303 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 11 15:23:20 crc kubenswrapper[5050]: E1211 15:23:20.103825 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="063d7c41-b6e3-4e21-9b57-ae16dddec75e" containerName="nova-cell0-conductor-conductor" Dec 11 15:23:20 crc kubenswrapper[5050]: I1211 15:23:20.103845 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="063d7c41-b6e3-4e21-9b57-ae16dddec75e" containerName="nova-cell0-conductor-conductor" Dec 11 15:23:20 crc kubenswrapper[5050]: I1211 15:23:20.104066 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="063d7c41-b6e3-4e21-9b57-ae16dddec75e" containerName="nova-cell0-conductor-conductor" Dec 11 15:23:20 crc kubenswrapper[5050]: I1211 15:23:20.104758 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Dec 11 15:23:20 crc kubenswrapper[5050]: I1211 15:23:20.107881 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Dec 11 15:23:20 crc kubenswrapper[5050]: I1211 15:23:20.128095 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 11 15:23:20 crc kubenswrapper[5050]: I1211 15:23:20.155245 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76689d33-fec8-4a70-b274-f47ecc684dc2-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"76689d33-fec8-4a70-b274-f47ecc684dc2\") " pod="openstack/nova-cell0-conductor-0" Dec 11 15:23:20 crc kubenswrapper[5050]: I1211 15:23:20.155408 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ks5n\" (UniqueName: \"kubernetes.io/projected/76689d33-fec8-4a70-b274-f47ecc684dc2-kube-api-access-4ks5n\") pod \"nova-cell0-conductor-0\" (UID: \"76689d33-fec8-4a70-b274-f47ecc684dc2\") " pod="openstack/nova-cell0-conductor-0" Dec 11 15:23:20 crc kubenswrapper[5050]: I1211 15:23:20.155483 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76689d33-fec8-4a70-b274-f47ecc684dc2-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"76689d33-fec8-4a70-b274-f47ecc684dc2\") " pod="openstack/nova-cell0-conductor-0" Dec 11 15:23:20 crc kubenswrapper[5050]: I1211 15:23:20.256928 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76689d33-fec8-4a70-b274-f47ecc684dc2-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"76689d33-fec8-4a70-b274-f47ecc684dc2\") " pod="openstack/nova-cell0-conductor-0" Dec 11 15:23:20 crc kubenswrapper[5050]: I1211 15:23:20.256983 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4ks5n\" (UniqueName: \"kubernetes.io/projected/76689d33-fec8-4a70-b274-f47ecc684dc2-kube-api-access-4ks5n\") pod \"nova-cell0-conductor-0\" (UID: \"76689d33-fec8-4a70-b274-f47ecc684dc2\") " pod="openstack/nova-cell0-conductor-0" Dec 11 15:23:20 crc kubenswrapper[5050]: I1211 15:23:20.257002 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76689d33-fec8-4a70-b274-f47ecc684dc2-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"76689d33-fec8-4a70-b274-f47ecc684dc2\") " pod="openstack/nova-cell0-conductor-0" Dec 11 15:23:20 crc kubenswrapper[5050]: I1211 15:23:20.261237 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76689d33-fec8-4a70-b274-f47ecc684dc2-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"76689d33-fec8-4a70-b274-f47ecc684dc2\") " pod="openstack/nova-cell0-conductor-0" Dec 11 15:23:20 crc kubenswrapper[5050]: I1211 15:23:20.274961 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76689d33-fec8-4a70-b274-f47ecc684dc2-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"76689d33-fec8-4a70-b274-f47ecc684dc2\") " pod="openstack/nova-cell0-conductor-0" Dec 11 15:23:20 crc kubenswrapper[5050]: I1211 15:23:20.290245 5050 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4ks5n\" (UniqueName: \"kubernetes.io/projected/76689d33-fec8-4a70-b274-f47ecc684dc2-kube-api-access-4ks5n\") pod \"nova-cell0-conductor-0\" (UID: \"76689d33-fec8-4a70-b274-f47ecc684dc2\") " pod="openstack/nova-cell0-conductor-0" Dec 11 15:23:20 crc kubenswrapper[5050]: I1211 15:23:20.378116 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 11 15:23:20 crc kubenswrapper[5050]: I1211 15:23:20.378168 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 11 15:23:20 crc kubenswrapper[5050]: I1211 15:23:20.425564 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Dec 11 15:23:20 crc kubenswrapper[5050]: I1211 15:23:20.892369 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 11 15:23:20 crc kubenswrapper[5050]: W1211 15:23:20.901593 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod76689d33_fec8_4a70_b274_f47ecc684dc2.slice/crio-d1304767cbfaa9133bdd16a500cc702ad0e01f5a2fabf44c0b26af9ada9f232d WatchSource:0}: Error finding container d1304767cbfaa9133bdd16a500cc702ad0e01f5a2fabf44c0b26af9ada9f232d: Status 404 returned error can't find the container with id d1304767cbfaa9133bdd16a500cc702ad0e01f5a2fabf44c0b26af9ada9f232d Dec 11 15:23:21 crc kubenswrapper[5050]: I1211 15:23:21.011407 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"76689d33-fec8-4a70-b274-f47ecc684dc2","Type":"ContainerStarted","Data":"d1304767cbfaa9133bdd16a500cc702ad0e01f5a2fabf44c0b26af9ada9f232d"} Dec 11 15:23:21 crc kubenswrapper[5050]: I1211 15:23:21.575587 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="063d7c41-b6e3-4e21-9b57-ae16dddec75e" path="/var/lib/kubelet/pods/063d7c41-b6e3-4e21-9b57-ae16dddec75e/volumes" Dec 11 15:23:22 crc kubenswrapper[5050]: I1211 15:23:22.024710 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"76689d33-fec8-4a70-b274-f47ecc684dc2","Type":"ContainerStarted","Data":"f460f5f088d9e870b874ec6e86b41f39e2d6c07436de5a3cdafb5b2320ce3163"} Dec 11 15:23:22 crc kubenswrapper[5050]: I1211 15:23:22.025814 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Dec 11 15:23:22 crc kubenswrapper[5050]: I1211 15:23:22.047384 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.04736517 podStartE2EDuration="2.04736517s" podCreationTimestamp="2025-12-11 15:23:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 15:23:22.040042825 +0000 UTC m=+5692.883765411" watchObservedRunningTime="2025-12-11 15:23:22.04736517 +0000 UTC m=+5692.891087756" Dec 11 15:23:22 crc kubenswrapper[5050]: I1211 15:23:22.490643 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Dec 11 15:23:22 crc kubenswrapper[5050]: I1211 15:23:22.514489 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Dec 11 15:23:23 crc kubenswrapper[5050]: I1211 15:23:23.048948 5050 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Dec 11 15:23:24 crc kubenswrapper[5050]: I1211 15:23:24.584083 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Dec 11 15:23:24 crc kubenswrapper[5050]: I1211 15:23:24.614559 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Dec 11 15:23:25 crc kubenswrapper[5050]: I1211 15:23:25.077037 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Dec 11 15:23:25 crc kubenswrapper[5050]: I1211 15:23:25.378621 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Dec 11 15:23:25 crc kubenswrapper[5050]: I1211 15:23:25.378699 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Dec 11 15:23:25 crc kubenswrapper[5050]: I1211 15:23:25.405733 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 11 15:23:25 crc kubenswrapper[5050]: I1211 15:23:25.405813 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 11 15:23:26 crc kubenswrapper[5050]: I1211 15:23:26.544254 5050 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="d482b1ac-2b67-4751-b701-79a91abfef1b" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.1.84:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:23:26 crc kubenswrapper[5050]: I1211 15:23:26.544247 5050 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="8f5e6aa6-3780-47e9-b274-2b4045ae3e3b" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"http://10.217.1.83:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:23:26 crc kubenswrapper[5050]: I1211 15:23:26.544297 5050 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="d482b1ac-2b67-4751-b701-79a91abfef1b" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.1.84:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:23:26 crc kubenswrapper[5050]: I1211 15:23:26.544291 5050 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="8f5e6aa6-3780-47e9-b274-2b4045ae3e3b" containerName="nova-metadata-log" probeResult="failure" output="Get \"http://10.217.1.83:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:23:28 crc kubenswrapper[5050]: I1211 15:23:28.677441 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Dec 11 15:23:28 crc kubenswrapper[5050]: I1211 15:23:28.907891 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Dec 11 15:23:28 crc kubenswrapper[5050]: I1211 15:23:28.909573 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 11 15:23:28 crc kubenswrapper[5050]: I1211 15:23:28.912006 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Dec 11 15:23:28 crc kubenswrapper[5050]: I1211 15:23:28.923001 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 11 15:23:29 crc kubenswrapper[5050]: I1211 15:23:29.034084 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f437b6e-75bc-4597-9000-d2082eec6a3a-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"3f437b6e-75bc-4597-9000-d2082eec6a3a\") " pod="openstack/cinder-scheduler-0" Dec 11 15:23:29 crc kubenswrapper[5050]: I1211 15:23:29.034731 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f437b6e-75bc-4597-9000-d2082eec6a3a-scripts\") pod \"cinder-scheduler-0\" (UID: \"3f437b6e-75bc-4597-9000-d2082eec6a3a\") " pod="openstack/cinder-scheduler-0" Dec 11 15:23:29 crc kubenswrapper[5050]: I1211 15:23:29.034842 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f437b6e-75bc-4597-9000-d2082eec6a3a-config-data\") pod \"cinder-scheduler-0\" (UID: \"3f437b6e-75bc-4597-9000-d2082eec6a3a\") " pod="openstack/cinder-scheduler-0" Dec 11 15:23:29 crc kubenswrapper[5050]: I1211 15:23:29.034982 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3f437b6e-75bc-4597-9000-d2082eec6a3a-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"3f437b6e-75bc-4597-9000-d2082eec6a3a\") " pod="openstack/cinder-scheduler-0" Dec 11 15:23:29 crc kubenswrapper[5050]: I1211 15:23:29.035233 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3f437b6e-75bc-4597-9000-d2082eec6a3a-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"3f437b6e-75bc-4597-9000-d2082eec6a3a\") " pod="openstack/cinder-scheduler-0" Dec 11 15:23:29 crc kubenswrapper[5050]: I1211 15:23:29.035402 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvqnl\" (UniqueName: \"kubernetes.io/projected/3f437b6e-75bc-4597-9000-d2082eec6a3a-kube-api-access-pvqnl\") pod \"cinder-scheduler-0\" (UID: \"3f437b6e-75bc-4597-9000-d2082eec6a3a\") " pod="openstack/cinder-scheduler-0" Dec 11 15:23:29 crc kubenswrapper[5050]: I1211 15:23:29.137108 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f437b6e-75bc-4597-9000-d2082eec6a3a-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"3f437b6e-75bc-4597-9000-d2082eec6a3a\") " pod="openstack/cinder-scheduler-0" Dec 11 15:23:29 crc kubenswrapper[5050]: I1211 15:23:29.137176 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f437b6e-75bc-4597-9000-d2082eec6a3a-scripts\") pod \"cinder-scheduler-0\" (UID: \"3f437b6e-75bc-4597-9000-d2082eec6a3a\") " pod="openstack/cinder-scheduler-0" Dec 11 15:23:29 crc kubenswrapper[5050]: I1211 15:23:29.137206 5050 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f437b6e-75bc-4597-9000-d2082eec6a3a-config-data\") pod \"cinder-scheduler-0\" (UID: \"3f437b6e-75bc-4597-9000-d2082eec6a3a\") " pod="openstack/cinder-scheduler-0" Dec 11 15:23:29 crc kubenswrapper[5050]: I1211 15:23:29.137239 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3f437b6e-75bc-4597-9000-d2082eec6a3a-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"3f437b6e-75bc-4597-9000-d2082eec6a3a\") " pod="openstack/cinder-scheduler-0" Dec 11 15:23:29 crc kubenswrapper[5050]: I1211 15:23:29.137258 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3f437b6e-75bc-4597-9000-d2082eec6a3a-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"3f437b6e-75bc-4597-9000-d2082eec6a3a\") " pod="openstack/cinder-scheduler-0" Dec 11 15:23:29 crc kubenswrapper[5050]: I1211 15:23:29.137292 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pvqnl\" (UniqueName: \"kubernetes.io/projected/3f437b6e-75bc-4597-9000-d2082eec6a3a-kube-api-access-pvqnl\") pod \"cinder-scheduler-0\" (UID: \"3f437b6e-75bc-4597-9000-d2082eec6a3a\") " pod="openstack/cinder-scheduler-0" Dec 11 15:23:29 crc kubenswrapper[5050]: I1211 15:23:29.137369 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3f437b6e-75bc-4597-9000-d2082eec6a3a-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"3f437b6e-75bc-4597-9000-d2082eec6a3a\") " pod="openstack/cinder-scheduler-0" Dec 11 15:23:29 crc kubenswrapper[5050]: I1211 15:23:29.143566 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f437b6e-75bc-4597-9000-d2082eec6a3a-scripts\") pod \"cinder-scheduler-0\" (UID: \"3f437b6e-75bc-4597-9000-d2082eec6a3a\") " pod="openstack/cinder-scheduler-0" Dec 11 15:23:29 crc kubenswrapper[5050]: I1211 15:23:29.143592 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f437b6e-75bc-4597-9000-d2082eec6a3a-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"3f437b6e-75bc-4597-9000-d2082eec6a3a\") " pod="openstack/cinder-scheduler-0" Dec 11 15:23:29 crc kubenswrapper[5050]: I1211 15:23:29.144555 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3f437b6e-75bc-4597-9000-d2082eec6a3a-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"3f437b6e-75bc-4597-9000-d2082eec6a3a\") " pod="openstack/cinder-scheduler-0" Dec 11 15:23:29 crc kubenswrapper[5050]: I1211 15:23:29.146731 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f437b6e-75bc-4597-9000-d2082eec6a3a-config-data\") pod \"cinder-scheduler-0\" (UID: \"3f437b6e-75bc-4597-9000-d2082eec6a3a\") " pod="openstack/cinder-scheduler-0" Dec 11 15:23:29 crc kubenswrapper[5050]: I1211 15:23:29.156399 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pvqnl\" (UniqueName: \"kubernetes.io/projected/3f437b6e-75bc-4597-9000-d2082eec6a3a-kube-api-access-pvqnl\") pod \"cinder-scheduler-0\" (UID: \"3f437b6e-75bc-4597-9000-d2082eec6a3a\") " 
pod="openstack/cinder-scheduler-0" Dec 11 15:23:29 crc kubenswrapper[5050]: I1211 15:23:29.227041 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 11 15:23:29 crc kubenswrapper[5050]: I1211 15:23:29.654741 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 11 15:23:30 crc kubenswrapper[5050]: I1211 15:23:30.096436 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"3f437b6e-75bc-4597-9000-d2082eec6a3a","Type":"ContainerStarted","Data":"9c4680f4d1db509d45fafcba9810f824e975b205c7232d86fb5f3ba3f4d94535"} Dec 11 15:23:30 crc kubenswrapper[5050]: I1211 15:23:30.499667 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Dec 11 15:23:30 crc kubenswrapper[5050]: I1211 15:23:30.602260 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Dec 11 15:23:30 crc kubenswrapper[5050]: I1211 15:23:30.602628 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="e156f879-4d43-4b29-86f2-fefc38253daf" containerName="cinder-api-log" containerID="cri-o://245acd9b72c2fe04bd7d0f8ee5cf7652f8ab12fa9477aae011d327afc8eb1cae" gracePeriod=30 Dec 11 15:23:30 crc kubenswrapper[5050]: I1211 15:23:30.602781 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="e156f879-4d43-4b29-86f2-fefc38253daf" containerName="cinder-api" containerID="cri-o://1070e5aad879429cfa80ace93bad4612b20bcc52f34a2099db27c5e2210f04da" gracePeriod=30 Dec 11 15:23:31 crc kubenswrapper[5050]: I1211 15:23:31.106905 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"3f437b6e-75bc-4597-9000-d2082eec6a3a","Type":"ContainerStarted","Data":"cf1af31957de81c35a430a19290aab38b1f64bf84d13638d5ea34538cef10142"} Dec 11 15:23:31 crc kubenswrapper[5050]: I1211 15:23:31.107212 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"3f437b6e-75bc-4597-9000-d2082eec6a3a","Type":"ContainerStarted","Data":"556e6e2f10f75df82d6e4ce17f2d0a4f7d543a9081b5fb407074feb768780908"} Dec 11 15:23:31 crc kubenswrapper[5050]: I1211 15:23:31.109022 5050 generic.go:334] "Generic (PLEG): container finished" podID="e156f879-4d43-4b29-86f2-fefc38253daf" containerID="245acd9b72c2fe04bd7d0f8ee5cf7652f8ab12fa9477aae011d327afc8eb1cae" exitCode=143 Dec 11 15:23:31 crc kubenswrapper[5050]: I1211 15:23:31.109054 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e156f879-4d43-4b29-86f2-fefc38253daf","Type":"ContainerDied","Data":"245acd9b72c2fe04bd7d0f8ee5cf7652f8ab12fa9477aae011d327afc8eb1cae"} Dec 11 15:23:31 crc kubenswrapper[5050]: I1211 15:23:31.350271 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.350250727 podStartE2EDuration="3.350250727s" podCreationTimestamp="2025-12-11 15:23:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 15:23:31.144675087 +0000 UTC m=+5701.988397673" watchObservedRunningTime="2025-12-11 15:23:31.350250727 +0000 UTC m=+5702.193973313" Dec 11 15:23:31 crc kubenswrapper[5050]: I1211 15:23:31.354880 5050 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/cinder-volume-volume1-0"] Dec 11 15:23:31 crc kubenswrapper[5050]: I1211 15:23:31.357384 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-volume-volume1-0" Dec 11 15:23:31 crc kubenswrapper[5050]: I1211 15:23:31.361472 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-volume-volume1-config-data" Dec 11 15:23:31 crc kubenswrapper[5050]: I1211 15:23:31.366005 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-volume1-0"] Dec 11 15:23:31 crc kubenswrapper[5050]: I1211 15:23:31.486115 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/b94bc94c-a636-4f0d-bd28-0f347e7b1143-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"b94bc94c-a636-4f0d-bd28-0f347e7b1143\") " pod="openstack/cinder-volume-volume1-0" Dec 11 15:23:31 crc kubenswrapper[5050]: I1211 15:23:31.486173 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/b94bc94c-a636-4f0d-bd28-0f347e7b1143-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"b94bc94c-a636-4f0d-bd28-0f347e7b1143\") " pod="openstack/cinder-volume-volume1-0" Dec 11 15:23:31 crc kubenswrapper[5050]: I1211 15:23:31.486266 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/b94bc94c-a636-4f0d-bd28-0f347e7b1143-run\") pod \"cinder-volume-volume1-0\" (UID: \"b94bc94c-a636-4f0d-bd28-0f347e7b1143\") " pod="openstack/cinder-volume-volume1-0" Dec 11 15:23:31 crc kubenswrapper[5050]: I1211 15:23:31.486303 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/b94bc94c-a636-4f0d-bd28-0f347e7b1143-sys\") pod \"cinder-volume-volume1-0\" (UID: \"b94bc94c-a636-4f0d-bd28-0f347e7b1143\") " pod="openstack/cinder-volume-volume1-0" Dec 11 15:23:31 crc kubenswrapper[5050]: I1211 15:23:31.486393 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/b94bc94c-a636-4f0d-bd28-0f347e7b1143-dev\") pod \"cinder-volume-volume1-0\" (UID: \"b94bc94c-a636-4f0d-bd28-0f347e7b1143\") " pod="openstack/cinder-volume-volume1-0" Dec 11 15:23:31 crc kubenswrapper[5050]: I1211 15:23:31.486458 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/b94bc94c-a636-4f0d-bd28-0f347e7b1143-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"b94bc94c-a636-4f0d-bd28-0f347e7b1143\") " pod="openstack/cinder-volume-volume1-0" Dec 11 15:23:31 crc kubenswrapper[5050]: I1211 15:23:31.486690 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b94bc94c-a636-4f0d-bd28-0f347e7b1143-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"b94bc94c-a636-4f0d-bd28-0f347e7b1143\") " pod="openstack/cinder-volume-volume1-0" Dec 11 15:23:31 crc kubenswrapper[5050]: I1211 15:23:31.486719 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/b94bc94c-a636-4f0d-bd28-0f347e7b1143-var-lib-cinder\") pod \"cinder-volume-volume1-0\" 
(UID: \"b94bc94c-a636-4f0d-bd28-0f347e7b1143\") " pod="openstack/cinder-volume-volume1-0" Dec 11 15:23:31 crc kubenswrapper[5050]: I1211 15:23:31.486793 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/b94bc94c-a636-4f0d-bd28-0f347e7b1143-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"b94bc94c-a636-4f0d-bd28-0f347e7b1143\") " pod="openstack/cinder-volume-volume1-0" Dec 11 15:23:31 crc kubenswrapper[5050]: I1211 15:23:31.486828 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4cvqz\" (UniqueName: \"kubernetes.io/projected/b94bc94c-a636-4f0d-bd28-0f347e7b1143-kube-api-access-4cvqz\") pod \"cinder-volume-volume1-0\" (UID: \"b94bc94c-a636-4f0d-bd28-0f347e7b1143\") " pod="openstack/cinder-volume-volume1-0" Dec 11 15:23:31 crc kubenswrapper[5050]: I1211 15:23:31.486860 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b94bc94c-a636-4f0d-bd28-0f347e7b1143-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"b94bc94c-a636-4f0d-bd28-0f347e7b1143\") " pod="openstack/cinder-volume-volume1-0" Dec 11 15:23:31 crc kubenswrapper[5050]: I1211 15:23:31.486956 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/b94bc94c-a636-4f0d-bd28-0f347e7b1143-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"b94bc94c-a636-4f0d-bd28-0f347e7b1143\") " pod="openstack/cinder-volume-volume1-0" Dec 11 15:23:31 crc kubenswrapper[5050]: I1211 15:23:31.486988 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b94bc94c-a636-4f0d-bd28-0f347e7b1143-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"b94bc94c-a636-4f0d-bd28-0f347e7b1143\") " pod="openstack/cinder-volume-volume1-0" Dec 11 15:23:31 crc kubenswrapper[5050]: I1211 15:23:31.487117 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b94bc94c-a636-4f0d-bd28-0f347e7b1143-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"b94bc94c-a636-4f0d-bd28-0f347e7b1143\") " pod="openstack/cinder-volume-volume1-0" Dec 11 15:23:31 crc kubenswrapper[5050]: I1211 15:23:31.487145 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b94bc94c-a636-4f0d-bd28-0f347e7b1143-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"b94bc94c-a636-4f0d-bd28-0f347e7b1143\") " pod="openstack/cinder-volume-volume1-0" Dec 11 15:23:31 crc kubenswrapper[5050]: I1211 15:23:31.487163 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b94bc94c-a636-4f0d-bd28-0f347e7b1143-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"b94bc94c-a636-4f0d-bd28-0f347e7b1143\") " pod="openstack/cinder-volume-volume1-0" Dec 11 15:23:31 crc kubenswrapper[5050]: I1211 15:23:31.589612 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/b94bc94c-a636-4f0d-bd28-0f347e7b1143-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: 
\"b94bc94c-a636-4f0d-bd28-0f347e7b1143\") " pod="openstack/cinder-volume-volume1-0" Dec 11 15:23:31 crc kubenswrapper[5050]: I1211 15:23:31.589668 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b94bc94c-a636-4f0d-bd28-0f347e7b1143-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"b94bc94c-a636-4f0d-bd28-0f347e7b1143\") " pod="openstack/cinder-volume-volume1-0" Dec 11 15:23:31 crc kubenswrapper[5050]: I1211 15:23:31.589738 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b94bc94c-a636-4f0d-bd28-0f347e7b1143-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"b94bc94c-a636-4f0d-bd28-0f347e7b1143\") " pod="openstack/cinder-volume-volume1-0" Dec 11 15:23:31 crc kubenswrapper[5050]: I1211 15:23:31.589764 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b94bc94c-a636-4f0d-bd28-0f347e7b1143-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"b94bc94c-a636-4f0d-bd28-0f347e7b1143\") " pod="openstack/cinder-volume-volume1-0" Dec 11 15:23:31 crc kubenswrapper[5050]: I1211 15:23:31.589784 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b94bc94c-a636-4f0d-bd28-0f347e7b1143-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"b94bc94c-a636-4f0d-bd28-0f347e7b1143\") " pod="openstack/cinder-volume-volume1-0" Dec 11 15:23:31 crc kubenswrapper[5050]: I1211 15:23:31.589835 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/b94bc94c-a636-4f0d-bd28-0f347e7b1143-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"b94bc94c-a636-4f0d-bd28-0f347e7b1143\") " pod="openstack/cinder-volume-volume1-0" Dec 11 15:23:31 crc kubenswrapper[5050]: I1211 15:23:31.589865 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/b94bc94c-a636-4f0d-bd28-0f347e7b1143-sys\") pod \"cinder-volume-volume1-0\" (UID: \"b94bc94c-a636-4f0d-bd28-0f347e7b1143\") " pod="openstack/cinder-volume-volume1-0" Dec 11 15:23:31 crc kubenswrapper[5050]: I1211 15:23:31.589885 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/b94bc94c-a636-4f0d-bd28-0f347e7b1143-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"b94bc94c-a636-4f0d-bd28-0f347e7b1143\") " pod="openstack/cinder-volume-volume1-0" Dec 11 15:23:31 crc kubenswrapper[5050]: I1211 15:23:31.589908 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/b94bc94c-a636-4f0d-bd28-0f347e7b1143-run\") pod \"cinder-volume-volume1-0\" (UID: \"b94bc94c-a636-4f0d-bd28-0f347e7b1143\") " pod="openstack/cinder-volume-volume1-0" Dec 11 15:23:31 crc kubenswrapper[5050]: I1211 15:23:31.589938 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/b94bc94c-a636-4f0d-bd28-0f347e7b1143-dev\") pod \"cinder-volume-volume1-0\" (UID: \"b94bc94c-a636-4f0d-bd28-0f347e7b1143\") " pod="openstack/cinder-volume-volume1-0" Dec 11 15:23:31 crc kubenswrapper[5050]: I1211 15:23:31.589962 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: 
\"kubernetes.io/host-path/b94bc94c-a636-4f0d-bd28-0f347e7b1143-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"b94bc94c-a636-4f0d-bd28-0f347e7b1143\") " pod="openstack/cinder-volume-volume1-0" Dec 11 15:23:31 crc kubenswrapper[5050]: I1211 15:23:31.590036 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b94bc94c-a636-4f0d-bd28-0f347e7b1143-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"b94bc94c-a636-4f0d-bd28-0f347e7b1143\") " pod="openstack/cinder-volume-volume1-0" Dec 11 15:23:31 crc kubenswrapper[5050]: I1211 15:23:31.590063 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/b94bc94c-a636-4f0d-bd28-0f347e7b1143-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"b94bc94c-a636-4f0d-bd28-0f347e7b1143\") " pod="openstack/cinder-volume-volume1-0" Dec 11 15:23:31 crc kubenswrapper[5050]: I1211 15:23:31.590121 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/b94bc94c-a636-4f0d-bd28-0f347e7b1143-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"b94bc94c-a636-4f0d-bd28-0f347e7b1143\") " pod="openstack/cinder-volume-volume1-0" Dec 11 15:23:31 crc kubenswrapper[5050]: I1211 15:23:31.590157 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4cvqz\" (UniqueName: \"kubernetes.io/projected/b94bc94c-a636-4f0d-bd28-0f347e7b1143-kube-api-access-4cvqz\") pod \"cinder-volume-volume1-0\" (UID: \"b94bc94c-a636-4f0d-bd28-0f347e7b1143\") " pod="openstack/cinder-volume-volume1-0" Dec 11 15:23:31 crc kubenswrapper[5050]: I1211 15:23:31.590183 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b94bc94c-a636-4f0d-bd28-0f347e7b1143-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"b94bc94c-a636-4f0d-bd28-0f347e7b1143\") " pod="openstack/cinder-volume-volume1-0" Dec 11 15:23:31 crc kubenswrapper[5050]: I1211 15:23:31.591449 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/b94bc94c-a636-4f0d-bd28-0f347e7b1143-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"b94bc94c-a636-4f0d-bd28-0f347e7b1143\") " pod="openstack/cinder-volume-volume1-0" Dec 11 15:23:31 crc kubenswrapper[5050]: I1211 15:23:31.591483 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/b94bc94c-a636-4f0d-bd28-0f347e7b1143-dev\") pod \"cinder-volume-volume1-0\" (UID: \"b94bc94c-a636-4f0d-bd28-0f347e7b1143\") " pod="openstack/cinder-volume-volume1-0" Dec 11 15:23:31 crc kubenswrapper[5050]: I1211 15:23:31.591517 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/b94bc94c-a636-4f0d-bd28-0f347e7b1143-sys\") pod \"cinder-volume-volume1-0\" (UID: \"b94bc94c-a636-4f0d-bd28-0f347e7b1143\") " pod="openstack/cinder-volume-volume1-0" Dec 11 15:23:31 crc kubenswrapper[5050]: I1211 15:23:31.591559 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/b94bc94c-a636-4f0d-bd28-0f347e7b1143-run\") pod \"cinder-volume-volume1-0\" (UID: \"b94bc94c-a636-4f0d-bd28-0f347e7b1143\") " pod="openstack/cinder-volume-volume1-0" Dec 11 15:23:31 crc 
kubenswrapper[5050]: I1211 15:23:31.591621 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b94bc94c-a636-4f0d-bd28-0f347e7b1143-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"b94bc94c-a636-4f0d-bd28-0f347e7b1143\") " pod="openstack/cinder-volume-volume1-0" Dec 11 15:23:31 crc kubenswrapper[5050]: I1211 15:23:31.591830 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b94bc94c-a636-4f0d-bd28-0f347e7b1143-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"b94bc94c-a636-4f0d-bd28-0f347e7b1143\") " pod="openstack/cinder-volume-volume1-0" Dec 11 15:23:31 crc kubenswrapper[5050]: I1211 15:23:31.591902 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/b94bc94c-a636-4f0d-bd28-0f347e7b1143-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"b94bc94c-a636-4f0d-bd28-0f347e7b1143\") " pod="openstack/cinder-volume-volume1-0" Dec 11 15:23:31 crc kubenswrapper[5050]: I1211 15:23:31.592197 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/b94bc94c-a636-4f0d-bd28-0f347e7b1143-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"b94bc94c-a636-4f0d-bd28-0f347e7b1143\") " pod="openstack/cinder-volume-volume1-0" Dec 11 15:23:31 crc kubenswrapper[5050]: I1211 15:23:31.592336 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/b94bc94c-a636-4f0d-bd28-0f347e7b1143-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"b94bc94c-a636-4f0d-bd28-0f347e7b1143\") " pod="openstack/cinder-volume-volume1-0" Dec 11 15:23:31 crc kubenswrapper[5050]: I1211 15:23:31.592384 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/b94bc94c-a636-4f0d-bd28-0f347e7b1143-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"b94bc94c-a636-4f0d-bd28-0f347e7b1143\") " pod="openstack/cinder-volume-volume1-0" Dec 11 15:23:31 crc kubenswrapper[5050]: I1211 15:23:31.595823 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b94bc94c-a636-4f0d-bd28-0f347e7b1143-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"b94bc94c-a636-4f0d-bd28-0f347e7b1143\") " pod="openstack/cinder-volume-volume1-0" Dec 11 15:23:31 crc kubenswrapper[5050]: I1211 15:23:31.595874 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b94bc94c-a636-4f0d-bd28-0f347e7b1143-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"b94bc94c-a636-4f0d-bd28-0f347e7b1143\") " pod="openstack/cinder-volume-volume1-0" Dec 11 15:23:31 crc kubenswrapper[5050]: I1211 15:23:31.596397 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b94bc94c-a636-4f0d-bd28-0f347e7b1143-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"b94bc94c-a636-4f0d-bd28-0f347e7b1143\") " pod="openstack/cinder-volume-volume1-0" Dec 11 15:23:31 crc kubenswrapper[5050]: I1211 15:23:31.609418 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/b94bc94c-a636-4f0d-bd28-0f347e7b1143-ceph\") pod 
\"cinder-volume-volume1-0\" (UID: \"b94bc94c-a636-4f0d-bd28-0f347e7b1143\") " pod="openstack/cinder-volume-volume1-0" Dec 11 15:23:31 crc kubenswrapper[5050]: I1211 15:23:31.613823 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b94bc94c-a636-4f0d-bd28-0f347e7b1143-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"b94bc94c-a636-4f0d-bd28-0f347e7b1143\") " pod="openstack/cinder-volume-volume1-0" Dec 11 15:23:31 crc kubenswrapper[5050]: I1211 15:23:31.619934 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4cvqz\" (UniqueName: \"kubernetes.io/projected/b94bc94c-a636-4f0d-bd28-0f347e7b1143-kube-api-access-4cvqz\") pod \"cinder-volume-volume1-0\" (UID: \"b94bc94c-a636-4f0d-bd28-0f347e7b1143\") " pod="openstack/cinder-volume-volume1-0" Dec 11 15:23:31 crc kubenswrapper[5050]: I1211 15:23:31.716141 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-volume-volume1-0" Dec 11 15:23:31 crc kubenswrapper[5050]: I1211 15:23:31.981604 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-backup-0"] Dec 11 15:23:31 crc kubenswrapper[5050]: I1211 15:23:31.983409 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-backup-0" Dec 11 15:23:31 crc kubenswrapper[5050]: I1211 15:23:31.986663 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-backup-config-data" Dec 11 15:23:32 crc kubenswrapper[5050]: I1211 15:23:32.007602 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-backup-0"] Dec 11 15:23:32 crc kubenswrapper[5050]: I1211 15:23:32.098892 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/0f954250-5982-4088-839a-8faf7bfe203c-sys\") pod \"cinder-backup-0\" (UID: \"0f954250-5982-4088-839a-8faf7bfe203c\") " pod="openstack/cinder-backup-0" Dec 11 15:23:32 crc kubenswrapper[5050]: I1211 15:23:32.098962 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/0f954250-5982-4088-839a-8faf7bfe203c-dev\") pod \"cinder-backup-0\" (UID: \"0f954250-5982-4088-839a-8faf7bfe203c\") " pod="openstack/cinder-backup-0" Dec 11 15:23:32 crc kubenswrapper[5050]: I1211 15:23:32.099203 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/0f954250-5982-4088-839a-8faf7bfe203c-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"0f954250-5982-4088-839a-8faf7bfe203c\") " pod="openstack/cinder-backup-0" Dec 11 15:23:32 crc kubenswrapper[5050]: I1211 15:23:32.099279 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/0f954250-5982-4088-839a-8faf7bfe203c-etc-nvme\") pod \"cinder-backup-0\" (UID: \"0f954250-5982-4088-839a-8faf7bfe203c\") " pod="openstack/cinder-backup-0" Dec 11 15:23:32 crc kubenswrapper[5050]: I1211 15:23:32.099305 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f954250-5982-4088-839a-8faf7bfe203c-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"0f954250-5982-4088-839a-8faf7bfe203c\") " pod="openstack/cinder-backup-0" Dec 11 15:23:32 crc 
kubenswrapper[5050]: I1211 15:23:32.099392 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/0f954250-5982-4088-839a-8faf7bfe203c-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"0f954250-5982-4088-839a-8faf7bfe203c\") " pod="openstack/cinder-backup-0" Dec 11 15:23:32 crc kubenswrapper[5050]: I1211 15:23:32.099564 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0f954250-5982-4088-839a-8faf7bfe203c-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"0f954250-5982-4088-839a-8faf7bfe203c\") " pod="openstack/cinder-backup-0" Dec 11 15:23:32 crc kubenswrapper[5050]: I1211 15:23:32.099675 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/0f954250-5982-4088-839a-8faf7bfe203c-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"0f954250-5982-4088-839a-8faf7bfe203c\") " pod="openstack/cinder-backup-0" Dec 11 15:23:32 crc kubenswrapper[5050]: I1211 15:23:32.099717 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/0f954250-5982-4088-839a-8faf7bfe203c-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"0f954250-5982-4088-839a-8faf7bfe203c\") " pod="openstack/cinder-backup-0" Dec 11 15:23:32 crc kubenswrapper[5050]: I1211 15:23:32.099741 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f954250-5982-4088-839a-8faf7bfe203c-config-data\") pod \"cinder-backup-0\" (UID: \"0f954250-5982-4088-839a-8faf7bfe203c\") " pod="openstack/cinder-backup-0" Dec 11 15:23:32 crc kubenswrapper[5050]: I1211 15:23:32.099766 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0f954250-5982-4088-839a-8faf7bfe203c-config-data-custom\") pod \"cinder-backup-0\" (UID: \"0f954250-5982-4088-839a-8faf7bfe203c\") " pod="openstack/cinder-backup-0" Dec 11 15:23:32 crc kubenswrapper[5050]: I1211 15:23:32.099940 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0f954250-5982-4088-839a-8faf7bfe203c-scripts\") pod \"cinder-backup-0\" (UID: \"0f954250-5982-4088-839a-8faf7bfe203c\") " pod="openstack/cinder-backup-0" Dec 11 15:23:32 crc kubenswrapper[5050]: I1211 15:23:32.100038 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0f954250-5982-4088-839a-8faf7bfe203c-lib-modules\") pod \"cinder-backup-0\" (UID: \"0f954250-5982-4088-839a-8faf7bfe203c\") " pod="openstack/cinder-backup-0" Dec 11 15:23:32 crc kubenswrapper[5050]: I1211 15:23:32.100066 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/0f954250-5982-4088-839a-8faf7bfe203c-ceph\") pod \"cinder-backup-0\" (UID: \"0f954250-5982-4088-839a-8faf7bfe203c\") " pod="openstack/cinder-backup-0" Dec 11 15:23:32 crc kubenswrapper[5050]: I1211 15:23:32.100332 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" 
(UniqueName: \"kubernetes.io/host-path/0f954250-5982-4088-839a-8faf7bfe203c-run\") pod \"cinder-backup-0\" (UID: \"0f954250-5982-4088-839a-8faf7bfe203c\") " pod="openstack/cinder-backup-0" Dec 11 15:23:32 crc kubenswrapper[5050]: I1211 15:23:32.100417 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rv5tl\" (UniqueName: \"kubernetes.io/projected/0f954250-5982-4088-839a-8faf7bfe203c-kube-api-access-rv5tl\") pod \"cinder-backup-0\" (UID: \"0f954250-5982-4088-839a-8faf7bfe203c\") " pod="openstack/cinder-backup-0" Dec 11 15:23:32 crc kubenswrapper[5050]: I1211 15:23:32.201851 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0f954250-5982-4088-839a-8faf7bfe203c-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"0f954250-5982-4088-839a-8faf7bfe203c\") " pod="openstack/cinder-backup-0" Dec 11 15:23:32 crc kubenswrapper[5050]: I1211 15:23:32.201961 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/0f954250-5982-4088-839a-8faf7bfe203c-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"0f954250-5982-4088-839a-8faf7bfe203c\") " pod="openstack/cinder-backup-0" Dec 11 15:23:32 crc kubenswrapper[5050]: I1211 15:23:32.202017 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/0f954250-5982-4088-839a-8faf7bfe203c-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"0f954250-5982-4088-839a-8faf7bfe203c\") " pod="openstack/cinder-backup-0" Dec 11 15:23:32 crc kubenswrapper[5050]: I1211 15:23:32.202041 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f954250-5982-4088-839a-8faf7bfe203c-config-data\") pod \"cinder-backup-0\" (UID: \"0f954250-5982-4088-839a-8faf7bfe203c\") " pod="openstack/cinder-backup-0" Dec 11 15:23:32 crc kubenswrapper[5050]: I1211 15:23:32.202036 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0f954250-5982-4088-839a-8faf7bfe203c-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"0f954250-5982-4088-839a-8faf7bfe203c\") " pod="openstack/cinder-backup-0" Dec 11 15:23:32 crc kubenswrapper[5050]: I1211 15:23:32.202064 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0f954250-5982-4088-839a-8faf7bfe203c-config-data-custom\") pod \"cinder-backup-0\" (UID: \"0f954250-5982-4088-839a-8faf7bfe203c\") " pod="openstack/cinder-backup-0" Dec 11 15:23:32 crc kubenswrapper[5050]: I1211 15:23:32.202176 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0f954250-5982-4088-839a-8faf7bfe203c-scripts\") pod \"cinder-backup-0\" (UID: \"0f954250-5982-4088-839a-8faf7bfe203c\") " pod="openstack/cinder-backup-0" Dec 11 15:23:32 crc kubenswrapper[5050]: I1211 15:23:32.202162 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/0f954250-5982-4088-839a-8faf7bfe203c-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"0f954250-5982-4088-839a-8faf7bfe203c\") " pod="openstack/cinder-backup-0" Dec 11 15:23:32 crc kubenswrapper[5050]: I1211 15:23:32.202198 5050 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/0f954250-5982-4088-839a-8faf7bfe203c-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"0f954250-5982-4088-839a-8faf7bfe203c\") " pod="openstack/cinder-backup-0" Dec 11 15:23:32 crc kubenswrapper[5050]: I1211 15:23:32.202571 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0f954250-5982-4088-839a-8faf7bfe203c-lib-modules\") pod \"cinder-backup-0\" (UID: \"0f954250-5982-4088-839a-8faf7bfe203c\") " pod="openstack/cinder-backup-0" Dec 11 15:23:32 crc kubenswrapper[5050]: I1211 15:23:32.202607 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/0f954250-5982-4088-839a-8faf7bfe203c-ceph\") pod \"cinder-backup-0\" (UID: \"0f954250-5982-4088-839a-8faf7bfe203c\") " pod="openstack/cinder-backup-0" Dec 11 15:23:32 crc kubenswrapper[5050]: I1211 15:23:32.202709 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/0f954250-5982-4088-839a-8faf7bfe203c-run\") pod \"cinder-backup-0\" (UID: \"0f954250-5982-4088-839a-8faf7bfe203c\") " pod="openstack/cinder-backup-0" Dec 11 15:23:32 crc kubenswrapper[5050]: I1211 15:23:32.202727 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0f954250-5982-4088-839a-8faf7bfe203c-lib-modules\") pod \"cinder-backup-0\" (UID: \"0f954250-5982-4088-839a-8faf7bfe203c\") " pod="openstack/cinder-backup-0" Dec 11 15:23:32 crc kubenswrapper[5050]: I1211 15:23:32.202804 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/0f954250-5982-4088-839a-8faf7bfe203c-run\") pod \"cinder-backup-0\" (UID: \"0f954250-5982-4088-839a-8faf7bfe203c\") " pod="openstack/cinder-backup-0" Dec 11 15:23:32 crc kubenswrapper[5050]: I1211 15:23:32.202815 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rv5tl\" (UniqueName: \"kubernetes.io/projected/0f954250-5982-4088-839a-8faf7bfe203c-kube-api-access-rv5tl\") pod \"cinder-backup-0\" (UID: \"0f954250-5982-4088-839a-8faf7bfe203c\") " pod="openstack/cinder-backup-0" Dec 11 15:23:32 crc kubenswrapper[5050]: I1211 15:23:32.202874 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/0f954250-5982-4088-839a-8faf7bfe203c-sys\") pod \"cinder-backup-0\" (UID: \"0f954250-5982-4088-839a-8faf7bfe203c\") " pod="openstack/cinder-backup-0" Dec 11 15:23:32 crc kubenswrapper[5050]: I1211 15:23:32.202939 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/0f954250-5982-4088-839a-8faf7bfe203c-dev\") pod \"cinder-backup-0\" (UID: \"0f954250-5982-4088-839a-8faf7bfe203c\") " pod="openstack/cinder-backup-0" Dec 11 15:23:32 crc kubenswrapper[5050]: I1211 15:23:32.202996 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/0f954250-5982-4088-839a-8faf7bfe203c-sys\") pod \"cinder-backup-0\" (UID: \"0f954250-5982-4088-839a-8faf7bfe203c\") " pod="openstack/cinder-backup-0" Dec 11 15:23:32 crc kubenswrapper[5050]: I1211 15:23:32.202998 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: 
\"kubernetes.io/host-path/0f954250-5982-4088-839a-8faf7bfe203c-dev\") pod \"cinder-backup-0\" (UID: \"0f954250-5982-4088-839a-8faf7bfe203c\") " pod="openstack/cinder-backup-0" Dec 11 15:23:32 crc kubenswrapper[5050]: I1211 15:23:32.203363 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/0f954250-5982-4088-839a-8faf7bfe203c-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"0f954250-5982-4088-839a-8faf7bfe203c\") " pod="openstack/cinder-backup-0" Dec 11 15:23:32 crc kubenswrapper[5050]: I1211 15:23:32.203433 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/0f954250-5982-4088-839a-8faf7bfe203c-etc-nvme\") pod \"cinder-backup-0\" (UID: \"0f954250-5982-4088-839a-8faf7bfe203c\") " pod="openstack/cinder-backup-0" Dec 11 15:23:32 crc kubenswrapper[5050]: I1211 15:23:32.203458 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/0f954250-5982-4088-839a-8faf7bfe203c-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"0f954250-5982-4088-839a-8faf7bfe203c\") " pod="openstack/cinder-backup-0" Dec 11 15:23:32 crc kubenswrapper[5050]: I1211 15:23:32.203469 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f954250-5982-4088-839a-8faf7bfe203c-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"0f954250-5982-4088-839a-8faf7bfe203c\") " pod="openstack/cinder-backup-0" Dec 11 15:23:32 crc kubenswrapper[5050]: I1211 15:23:32.203512 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/0f954250-5982-4088-839a-8faf7bfe203c-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"0f954250-5982-4088-839a-8faf7bfe203c\") " pod="openstack/cinder-backup-0" Dec 11 15:23:32 crc kubenswrapper[5050]: I1211 15:23:32.203513 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/0f954250-5982-4088-839a-8faf7bfe203c-etc-nvme\") pod \"cinder-backup-0\" (UID: \"0f954250-5982-4088-839a-8faf7bfe203c\") " pod="openstack/cinder-backup-0" Dec 11 15:23:32 crc kubenswrapper[5050]: I1211 15:23:32.203568 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/0f954250-5982-4088-839a-8faf7bfe203c-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"0f954250-5982-4088-839a-8faf7bfe203c\") " pod="openstack/cinder-backup-0" Dec 11 15:23:32 crc kubenswrapper[5050]: I1211 15:23:32.214313 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/0f954250-5982-4088-839a-8faf7bfe203c-ceph\") pod \"cinder-backup-0\" (UID: \"0f954250-5982-4088-839a-8faf7bfe203c\") " pod="openstack/cinder-backup-0" Dec 11 15:23:32 crc kubenswrapper[5050]: I1211 15:23:32.214311 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0f954250-5982-4088-839a-8faf7bfe203c-config-data-custom\") pod \"cinder-backup-0\" (UID: \"0f954250-5982-4088-839a-8faf7bfe203c\") " pod="openstack/cinder-backup-0" Dec 11 15:23:32 crc kubenswrapper[5050]: I1211 15:23:32.216915 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/0f954250-5982-4088-839a-8faf7bfe203c-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"0f954250-5982-4088-839a-8faf7bfe203c\") " pod="openstack/cinder-backup-0" Dec 11 15:23:32 crc kubenswrapper[5050]: I1211 15:23:32.217072 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f954250-5982-4088-839a-8faf7bfe203c-config-data\") pod \"cinder-backup-0\" (UID: \"0f954250-5982-4088-839a-8faf7bfe203c\") " pod="openstack/cinder-backup-0" Dec 11 15:23:32 crc kubenswrapper[5050]: I1211 15:23:32.217379 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0f954250-5982-4088-839a-8faf7bfe203c-scripts\") pod \"cinder-backup-0\" (UID: \"0f954250-5982-4088-839a-8faf7bfe203c\") " pod="openstack/cinder-backup-0" Dec 11 15:23:32 crc kubenswrapper[5050]: I1211 15:23:32.221299 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rv5tl\" (UniqueName: \"kubernetes.io/projected/0f954250-5982-4088-839a-8faf7bfe203c-kube-api-access-rv5tl\") pod \"cinder-backup-0\" (UID: \"0f954250-5982-4088-839a-8faf7bfe203c\") " pod="openstack/cinder-backup-0" Dec 11 15:23:32 crc kubenswrapper[5050]: I1211 15:23:32.290825 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-volume1-0"] Dec 11 15:23:32 crc kubenswrapper[5050]: I1211 15:23:32.295175 5050 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 11 15:23:32 crc kubenswrapper[5050]: I1211 15:23:32.311809 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-backup-0" Dec 11 15:23:32 crc kubenswrapper[5050]: I1211 15:23:32.888985 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-backup-0"] Dec 11 15:23:32 crc kubenswrapper[5050]: W1211 15:23:32.891153 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0f954250_5982_4088_839a_8faf7bfe203c.slice/crio-6e2523b5c870731d7ef762c5033587e7788f251c6d1b8c7abcbdd772c72e67bd WatchSource:0}: Error finding container 6e2523b5c870731d7ef762c5033587e7788f251c6d1b8c7abcbdd772c72e67bd: Status 404 returned error can't find the container with id 6e2523b5c870731d7ef762c5033587e7788f251c6d1b8c7abcbdd772c72e67bd Dec 11 15:23:33 crc kubenswrapper[5050]: I1211 15:23:33.124681 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"0f954250-5982-4088-839a-8faf7bfe203c","Type":"ContainerStarted","Data":"6e2523b5c870731d7ef762c5033587e7788f251c6d1b8c7abcbdd772c72e67bd"} Dec 11 15:23:33 crc kubenswrapper[5050]: I1211 15:23:33.126115 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"b94bc94c-a636-4f0d-bd28-0f347e7b1143","Type":"ContainerStarted","Data":"06115c5c215fe076fb7c94ddb1ce519da382e9c064aa37188fe965d27bd91c30"} Dec 11 15:23:33 crc kubenswrapper[5050]: I1211 15:23:33.763671 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="e156f879-4d43-4b29-86f2-fefc38253daf" containerName="cinder-api" probeResult="failure" output="Get \"http://10.217.1.80:8776/healthcheck\": read tcp 10.217.0.2:34276->10.217.1.80:8776: read: connection reset by peer" Dec 11 15:23:34 crc kubenswrapper[5050]: I1211 15:23:34.143989 5050 generic.go:334] "Generic (PLEG): container finished" 
podID="e156f879-4d43-4b29-86f2-fefc38253daf" containerID="1070e5aad879429cfa80ace93bad4612b20bcc52f34a2099db27c5e2210f04da" exitCode=0 Dec 11 15:23:34 crc kubenswrapper[5050]: I1211 15:23:34.144035 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e156f879-4d43-4b29-86f2-fefc38253daf","Type":"ContainerDied","Data":"1070e5aad879429cfa80ace93bad4612b20bcc52f34a2099db27c5e2210f04da"} Dec 11 15:23:34 crc kubenswrapper[5050]: I1211 15:23:34.145686 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"b94bc94c-a636-4f0d-bd28-0f347e7b1143","Type":"ContainerStarted","Data":"be93e909105a092ced14aca5a09c48a140e87c3ee935eba54b3b801f908c1e84"} Dec 11 15:23:34 crc kubenswrapper[5050]: I1211 15:23:34.227455 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Dec 11 15:23:35 crc kubenswrapper[5050]: I1211 15:23:35.383888 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Dec 11 15:23:35 crc kubenswrapper[5050]: I1211 15:23:35.384636 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Dec 11 15:23:35 crc kubenswrapper[5050]: I1211 15:23:35.388155 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Dec 11 15:23:35 crc kubenswrapper[5050]: I1211 15:23:35.389247 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Dec 11 15:23:35 crc kubenswrapper[5050]: I1211 15:23:35.422291 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Dec 11 15:23:35 crc kubenswrapper[5050]: I1211 15:23:35.423513 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Dec 11 15:23:35 crc kubenswrapper[5050]: I1211 15:23:35.426651 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Dec 11 15:23:35 crc kubenswrapper[5050]: I1211 15:23:35.448752 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Dec 11 15:23:35 crc kubenswrapper[5050]: I1211 15:23:35.873722 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Dec 11 15:23:35 crc kubenswrapper[5050]: I1211 15:23:35.983744 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e156f879-4d43-4b29-86f2-fefc38253daf-config-data\") pod \"e156f879-4d43-4b29-86f2-fefc38253daf\" (UID: \"e156f879-4d43-4b29-86f2-fefc38253daf\") " Dec 11 15:23:35 crc kubenswrapper[5050]: I1211 15:23:35.984223 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e156f879-4d43-4b29-86f2-fefc38253daf-combined-ca-bundle\") pod \"e156f879-4d43-4b29-86f2-fefc38253daf\" (UID: \"e156f879-4d43-4b29-86f2-fefc38253daf\") " Dec 11 15:23:35 crc kubenswrapper[5050]: I1211 15:23:35.984253 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e156f879-4d43-4b29-86f2-fefc38253daf-etc-machine-id\") pod \"e156f879-4d43-4b29-86f2-fefc38253daf\" (UID: \"e156f879-4d43-4b29-86f2-fefc38253daf\") " Dec 11 15:23:35 crc kubenswrapper[5050]: I1211 15:23:35.984323 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e156f879-4d43-4b29-86f2-fefc38253daf-config-data-custom\") pod \"e156f879-4d43-4b29-86f2-fefc38253daf\" (UID: \"e156f879-4d43-4b29-86f2-fefc38253daf\") " Dec 11 15:23:35 crc kubenswrapper[5050]: I1211 15:23:35.984389 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4dr5d\" (UniqueName: \"kubernetes.io/projected/e156f879-4d43-4b29-86f2-fefc38253daf-kube-api-access-4dr5d\") pod \"e156f879-4d43-4b29-86f2-fefc38253daf\" (UID: \"e156f879-4d43-4b29-86f2-fefc38253daf\") " Dec 11 15:23:35 crc kubenswrapper[5050]: I1211 15:23:35.984497 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e156f879-4d43-4b29-86f2-fefc38253daf-scripts\") pod \"e156f879-4d43-4b29-86f2-fefc38253daf\" (UID: \"e156f879-4d43-4b29-86f2-fefc38253daf\") " Dec 11 15:23:35 crc kubenswrapper[5050]: I1211 15:23:35.984532 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e156f879-4d43-4b29-86f2-fefc38253daf-logs\") pod \"e156f879-4d43-4b29-86f2-fefc38253daf\" (UID: \"e156f879-4d43-4b29-86f2-fefc38253daf\") " Dec 11 15:23:35 crc kubenswrapper[5050]: I1211 15:23:35.984868 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e156f879-4d43-4b29-86f2-fefc38253daf-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "e156f879-4d43-4b29-86f2-fefc38253daf" (UID: "e156f879-4d43-4b29-86f2-fefc38253daf"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 15:23:35 crc kubenswrapper[5050]: I1211 15:23:35.985292 5050 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e156f879-4d43-4b29-86f2-fefc38253daf-etc-machine-id\") on node \"crc\" DevicePath \"\"" Dec 11 15:23:35 crc kubenswrapper[5050]: I1211 15:23:35.985450 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e156f879-4d43-4b29-86f2-fefc38253daf-logs" (OuterVolumeSpecName: "logs") pod "e156f879-4d43-4b29-86f2-fefc38253daf" (UID: "e156f879-4d43-4b29-86f2-fefc38253daf"). 
InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 15:23:35 crc kubenswrapper[5050]: I1211 15:23:35.989919 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e156f879-4d43-4b29-86f2-fefc38253daf-kube-api-access-4dr5d" (OuterVolumeSpecName: "kube-api-access-4dr5d") pod "e156f879-4d43-4b29-86f2-fefc38253daf" (UID: "e156f879-4d43-4b29-86f2-fefc38253daf"). InnerVolumeSpecName "kube-api-access-4dr5d". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 15:23:36 crc kubenswrapper[5050]: I1211 15:23:36.007976 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e156f879-4d43-4b29-86f2-fefc38253daf-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "e156f879-4d43-4b29-86f2-fefc38253daf" (UID: "e156f879-4d43-4b29-86f2-fefc38253daf"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 15:23:36 crc kubenswrapper[5050]: I1211 15:23:36.011336 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e156f879-4d43-4b29-86f2-fefc38253daf-scripts" (OuterVolumeSpecName: "scripts") pod "e156f879-4d43-4b29-86f2-fefc38253daf" (UID: "e156f879-4d43-4b29-86f2-fefc38253daf"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 15:23:36 crc kubenswrapper[5050]: I1211 15:23:36.027591 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e156f879-4d43-4b29-86f2-fefc38253daf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e156f879-4d43-4b29-86f2-fefc38253daf" (UID: "e156f879-4d43-4b29-86f2-fefc38253daf"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 15:23:36 crc kubenswrapper[5050]: I1211 15:23:36.089505 5050 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e156f879-4d43-4b29-86f2-fefc38253daf-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 15:23:36 crc kubenswrapper[5050]: I1211 15:23:36.089549 5050 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e156f879-4d43-4b29-86f2-fefc38253daf-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 11 15:23:36 crc kubenswrapper[5050]: I1211 15:23:36.089562 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4dr5d\" (UniqueName: \"kubernetes.io/projected/e156f879-4d43-4b29-86f2-fefc38253daf-kube-api-access-4dr5d\") on node \"crc\" DevicePath \"\"" Dec 11 15:23:36 crc kubenswrapper[5050]: I1211 15:23:36.089575 5050 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e156f879-4d43-4b29-86f2-fefc38253daf-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 15:23:36 crc kubenswrapper[5050]: I1211 15:23:36.089585 5050 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e156f879-4d43-4b29-86f2-fefc38253daf-logs\") on node \"crc\" DevicePath \"\"" Dec 11 15:23:36 crc kubenswrapper[5050]: I1211 15:23:36.095467 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e156f879-4d43-4b29-86f2-fefc38253daf-config-data" (OuterVolumeSpecName: "config-data") pod "e156f879-4d43-4b29-86f2-fefc38253daf" (UID: "e156f879-4d43-4b29-86f2-fefc38253daf"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 15:23:36 crc kubenswrapper[5050]: I1211 15:23:36.171152 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"0f954250-5982-4088-839a-8faf7bfe203c","Type":"ContainerStarted","Data":"2b1e9cc4138c3cee1e26c2a91b99b470ec5f4b50f5007ad3b68d6bbde49915a9"} Dec 11 15:23:36 crc kubenswrapper[5050]: I1211 15:23:36.173839 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"b94bc94c-a636-4f0d-bd28-0f347e7b1143","Type":"ContainerStarted","Data":"70237ac067687f0726f725906d3b1520c514c5b2ca13c1941b427e936dff2395"} Dec 11 15:23:36 crc kubenswrapper[5050]: I1211 15:23:36.186231 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e156f879-4d43-4b29-86f2-fefc38253daf","Type":"ContainerDied","Data":"9284b456494778f5748668f61529385c3dad91ee797312a650d4cc2f2f4fc327"} Dec 11 15:23:36 crc kubenswrapper[5050]: I1211 15:23:36.186300 5050 scope.go:117] "RemoveContainer" containerID="1070e5aad879429cfa80ace93bad4612b20bcc52f34a2099db27c5e2210f04da" Dec 11 15:23:36 crc kubenswrapper[5050]: I1211 15:23:36.186496 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Dec 11 15:23:36 crc kubenswrapper[5050]: I1211 15:23:36.187342 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Dec 11 15:23:36 crc kubenswrapper[5050]: I1211 15:23:36.196751 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Dec 11 15:23:36 crc kubenswrapper[5050]: I1211 15:23:36.196857 5050 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e156f879-4d43-4b29-86f2-fefc38253daf-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 15:23:36 crc kubenswrapper[5050]: I1211 15:23:36.202294 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-volume-volume1-0" podStartSLOduration=4.045642831 podStartE2EDuration="5.202277412s" podCreationTimestamp="2025-12-11 15:23:31 +0000 UTC" firstStartedPulling="2025-12-11 15:23:32.294939177 +0000 UTC m=+5703.138661763" lastFinishedPulling="2025-12-11 15:23:33.451573758 +0000 UTC m=+5704.295296344" observedRunningTime="2025-12-11 15:23:36.195503251 +0000 UTC m=+5707.039225837" watchObservedRunningTime="2025-12-11 15:23:36.202277412 +0000 UTC m=+5707.045999998" Dec 11 15:23:36 crc kubenswrapper[5050]: I1211 15:23:36.231604 5050 scope.go:117] "RemoveContainer" containerID="245acd9b72c2fe04bd7d0f8ee5cf7652f8ab12fa9477aae011d327afc8eb1cae" Dec 11 15:23:36 crc kubenswrapper[5050]: I1211 15:23:36.250328 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Dec 11 15:23:36 crc kubenswrapper[5050]: I1211 15:23:36.259590 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Dec 11 15:23:36 crc kubenswrapper[5050]: I1211 15:23:36.275799 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Dec 11 15:23:36 crc kubenswrapper[5050]: E1211 15:23:36.280533 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e156f879-4d43-4b29-86f2-fefc38253daf" containerName="cinder-api-log" Dec 11 15:23:36 crc kubenswrapper[5050]: I1211 15:23:36.280572 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="e156f879-4d43-4b29-86f2-fefc38253daf" 
containerName="cinder-api-log" Dec 11 15:23:36 crc kubenswrapper[5050]: E1211 15:23:36.280600 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e156f879-4d43-4b29-86f2-fefc38253daf" containerName="cinder-api" Dec 11 15:23:36 crc kubenswrapper[5050]: I1211 15:23:36.280607 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="e156f879-4d43-4b29-86f2-fefc38253daf" containerName="cinder-api" Dec 11 15:23:36 crc kubenswrapper[5050]: I1211 15:23:36.281053 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="e156f879-4d43-4b29-86f2-fefc38253daf" containerName="cinder-api" Dec 11 15:23:36 crc kubenswrapper[5050]: I1211 15:23:36.281084 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="e156f879-4d43-4b29-86f2-fefc38253daf" containerName="cinder-api-log" Dec 11 15:23:36 crc kubenswrapper[5050]: I1211 15:23:36.282891 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Dec 11 15:23:36 crc kubenswrapper[5050]: I1211 15:23:36.289286 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Dec 11 15:23:36 crc kubenswrapper[5050]: I1211 15:23:36.302532 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Dec 11 15:23:36 crc kubenswrapper[5050]: I1211 15:23:36.401799 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w2997\" (UniqueName: \"kubernetes.io/projected/11155953-ebeb-4785-92b4-d5bd566139d6-kube-api-access-w2997\") pod \"cinder-api-0\" (UID: \"11155953-ebeb-4785-92b4-d5bd566139d6\") " pod="openstack/cinder-api-0" Dec 11 15:23:36 crc kubenswrapper[5050]: I1211 15:23:36.401850 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/11155953-ebeb-4785-92b4-d5bd566139d6-logs\") pod \"cinder-api-0\" (UID: \"11155953-ebeb-4785-92b4-d5bd566139d6\") " pod="openstack/cinder-api-0" Dec 11 15:23:36 crc kubenswrapper[5050]: I1211 15:23:36.401891 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/11155953-ebeb-4785-92b4-d5bd566139d6-etc-machine-id\") pod \"cinder-api-0\" (UID: \"11155953-ebeb-4785-92b4-d5bd566139d6\") " pod="openstack/cinder-api-0" Dec 11 15:23:36 crc kubenswrapper[5050]: I1211 15:23:36.402047 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/11155953-ebeb-4785-92b4-d5bd566139d6-scripts\") pod \"cinder-api-0\" (UID: \"11155953-ebeb-4785-92b4-d5bd566139d6\") " pod="openstack/cinder-api-0" Dec 11 15:23:36 crc kubenswrapper[5050]: I1211 15:23:36.402079 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11155953-ebeb-4785-92b4-d5bd566139d6-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"11155953-ebeb-4785-92b4-d5bd566139d6\") " pod="openstack/cinder-api-0" Dec 11 15:23:36 crc kubenswrapper[5050]: I1211 15:23:36.402106 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/11155953-ebeb-4785-92b4-d5bd566139d6-config-data-custom\") pod \"cinder-api-0\" (UID: \"11155953-ebeb-4785-92b4-d5bd566139d6\") " pod="openstack/cinder-api-0" Dec 11 
15:23:36 crc kubenswrapper[5050]: I1211 15:23:36.402122 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11155953-ebeb-4785-92b4-d5bd566139d6-config-data\") pod \"cinder-api-0\" (UID: \"11155953-ebeb-4785-92b4-d5bd566139d6\") " pod="openstack/cinder-api-0" Dec 11 15:23:36 crc kubenswrapper[5050]: I1211 15:23:36.503941 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/11155953-ebeb-4785-92b4-d5bd566139d6-scripts\") pod \"cinder-api-0\" (UID: \"11155953-ebeb-4785-92b4-d5bd566139d6\") " pod="openstack/cinder-api-0" Dec 11 15:23:36 crc kubenswrapper[5050]: I1211 15:23:36.503992 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11155953-ebeb-4785-92b4-d5bd566139d6-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"11155953-ebeb-4785-92b4-d5bd566139d6\") " pod="openstack/cinder-api-0" Dec 11 15:23:36 crc kubenswrapper[5050]: I1211 15:23:36.504038 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/11155953-ebeb-4785-92b4-d5bd566139d6-config-data-custom\") pod \"cinder-api-0\" (UID: \"11155953-ebeb-4785-92b4-d5bd566139d6\") " pod="openstack/cinder-api-0" Dec 11 15:23:36 crc kubenswrapper[5050]: I1211 15:23:36.504055 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11155953-ebeb-4785-92b4-d5bd566139d6-config-data\") pod \"cinder-api-0\" (UID: \"11155953-ebeb-4785-92b4-d5bd566139d6\") " pod="openstack/cinder-api-0" Dec 11 15:23:36 crc kubenswrapper[5050]: I1211 15:23:36.504105 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w2997\" (UniqueName: \"kubernetes.io/projected/11155953-ebeb-4785-92b4-d5bd566139d6-kube-api-access-w2997\") pod \"cinder-api-0\" (UID: \"11155953-ebeb-4785-92b4-d5bd566139d6\") " pod="openstack/cinder-api-0" Dec 11 15:23:36 crc kubenswrapper[5050]: I1211 15:23:36.504128 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/11155953-ebeb-4785-92b4-d5bd566139d6-logs\") pod \"cinder-api-0\" (UID: \"11155953-ebeb-4785-92b4-d5bd566139d6\") " pod="openstack/cinder-api-0" Dec 11 15:23:36 crc kubenswrapper[5050]: I1211 15:23:36.504158 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/11155953-ebeb-4785-92b4-d5bd566139d6-etc-machine-id\") pod \"cinder-api-0\" (UID: \"11155953-ebeb-4785-92b4-d5bd566139d6\") " pod="openstack/cinder-api-0" Dec 11 15:23:36 crc kubenswrapper[5050]: I1211 15:23:36.504236 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/11155953-ebeb-4785-92b4-d5bd566139d6-etc-machine-id\") pod \"cinder-api-0\" (UID: \"11155953-ebeb-4785-92b4-d5bd566139d6\") " pod="openstack/cinder-api-0" Dec 11 15:23:36 crc kubenswrapper[5050]: I1211 15:23:36.507319 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/11155953-ebeb-4785-92b4-d5bd566139d6-logs\") pod \"cinder-api-0\" (UID: \"11155953-ebeb-4785-92b4-d5bd566139d6\") " pod="openstack/cinder-api-0" Dec 11 15:23:36 crc 
kubenswrapper[5050]: I1211 15:23:36.510032 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11155953-ebeb-4785-92b4-d5bd566139d6-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"11155953-ebeb-4785-92b4-d5bd566139d6\") " pod="openstack/cinder-api-0" Dec 11 15:23:36 crc kubenswrapper[5050]: I1211 15:23:36.510513 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/11155953-ebeb-4785-92b4-d5bd566139d6-scripts\") pod \"cinder-api-0\" (UID: \"11155953-ebeb-4785-92b4-d5bd566139d6\") " pod="openstack/cinder-api-0" Dec 11 15:23:36 crc kubenswrapper[5050]: I1211 15:23:36.511169 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11155953-ebeb-4785-92b4-d5bd566139d6-config-data\") pod \"cinder-api-0\" (UID: \"11155953-ebeb-4785-92b4-d5bd566139d6\") " pod="openstack/cinder-api-0" Dec 11 15:23:36 crc kubenswrapper[5050]: I1211 15:23:36.512130 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/11155953-ebeb-4785-92b4-d5bd566139d6-config-data-custom\") pod \"cinder-api-0\" (UID: \"11155953-ebeb-4785-92b4-d5bd566139d6\") " pod="openstack/cinder-api-0" Dec 11 15:23:36 crc kubenswrapper[5050]: I1211 15:23:36.521983 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w2997\" (UniqueName: \"kubernetes.io/projected/11155953-ebeb-4785-92b4-d5bd566139d6-kube-api-access-w2997\") pod \"cinder-api-0\" (UID: \"11155953-ebeb-4785-92b4-d5bd566139d6\") " pod="openstack/cinder-api-0" Dec 11 15:23:36 crc kubenswrapper[5050]: I1211 15:23:36.606991 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Dec 11 15:23:36 crc kubenswrapper[5050]: I1211 15:23:36.718137 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-volume-volume1-0" Dec 11 15:23:37 crc kubenswrapper[5050]: I1211 15:23:37.105650 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Dec 11 15:23:37 crc kubenswrapper[5050]: I1211 15:23:37.199409 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"11155953-ebeb-4785-92b4-d5bd566139d6","Type":"ContainerStarted","Data":"966ebda30365316f228727274e162baed02b25de65daa3e2e949586c8ebf0e22"} Dec 11 15:23:37 crc kubenswrapper[5050]: I1211 15:23:37.206873 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"0f954250-5982-4088-839a-8faf7bfe203c","Type":"ContainerStarted","Data":"784d7a7731d09dddb22e535157d66709b59c7b2b5c523330745af886c0ad6b42"} Dec 11 15:23:37 crc kubenswrapper[5050]: I1211 15:23:37.312535 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-backup-0" Dec 11 15:23:37 crc kubenswrapper[5050]: I1211 15:23:37.562724 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e156f879-4d43-4b29-86f2-fefc38253daf" path="/var/lib/kubelet/pods/e156f879-4d43-4b29-86f2-fefc38253daf/volumes" Dec 11 15:23:38 crc kubenswrapper[5050]: I1211 15:23:38.220769 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"11155953-ebeb-4785-92b4-d5bd566139d6","Type":"ContainerStarted","Data":"9c785a443360d8959859fd39ed40cb9ca6269bcda7f0db711a21ed6c93d7d368"} Dec 11 15:23:39 crc kubenswrapper[5050]: I1211 15:23:39.229771 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"11155953-ebeb-4785-92b4-d5bd566139d6","Type":"ContainerStarted","Data":"c2e2bbf0dbd86f76c8bc7b48a4f2eaa5aeacc69ba62ac4e7f31311ec62389e9c"} Dec 11 15:23:39 crc kubenswrapper[5050]: I1211 15:23:39.247224 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-backup-0" podStartSLOduration=5.483376029 podStartE2EDuration="8.247206115s" podCreationTimestamp="2025-12-11 15:23:31 +0000 UTC" firstStartedPulling="2025-12-11 15:23:32.893455042 +0000 UTC m=+5703.737177628" lastFinishedPulling="2025-12-11 15:23:35.657285128 +0000 UTC m=+5706.501007714" observedRunningTime="2025-12-11 15:23:37.232796905 +0000 UTC m=+5708.076519501" watchObservedRunningTime="2025-12-11 15:23:39.247206115 +0000 UTC m=+5710.090928701" Dec 11 15:23:39 crc kubenswrapper[5050]: I1211 15:23:39.251686 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.251674834 podStartE2EDuration="3.251674834s" podCreationTimestamp="2025-12-11 15:23:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 15:23:39.245589512 +0000 UTC m=+5710.089312118" watchObservedRunningTime="2025-12-11 15:23:39.251674834 +0000 UTC m=+5710.095397420" Dec 11 15:23:39 crc kubenswrapper[5050]: I1211 15:23:39.451337 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Dec 11 15:23:39 crc kubenswrapper[5050]: I1211 15:23:39.494177 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 11 15:23:40 crc kubenswrapper[5050]: I1211 
15:23:40.241177 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Dec 11 15:23:40 crc kubenswrapper[5050]: I1211 15:23:40.241378 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="3f437b6e-75bc-4597-9000-d2082eec6a3a" containerName="cinder-scheduler" containerID="cri-o://556e6e2f10f75df82d6e4ce17f2d0a4f7d543a9081b5fb407074feb768780908" gracePeriod=30 Dec 11 15:23:40 crc kubenswrapper[5050]: I1211 15:23:40.241399 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="3f437b6e-75bc-4597-9000-d2082eec6a3a" containerName="probe" containerID="cri-o://cf1af31957de81c35a430a19290aab38b1f64bf84d13638d5ea34538cef10142" gracePeriod=30 Dec 11 15:23:41 crc kubenswrapper[5050]: I1211 15:23:41.251291 5050 generic.go:334] "Generic (PLEG): container finished" podID="3f437b6e-75bc-4597-9000-d2082eec6a3a" containerID="cf1af31957de81c35a430a19290aab38b1f64bf84d13638d5ea34538cef10142" exitCode=0 Dec 11 15:23:41 crc kubenswrapper[5050]: I1211 15:23:41.251679 5050 generic.go:334] "Generic (PLEG): container finished" podID="3f437b6e-75bc-4597-9000-d2082eec6a3a" containerID="556e6e2f10f75df82d6e4ce17f2d0a4f7d543a9081b5fb407074feb768780908" exitCode=0 Dec 11 15:23:41 crc kubenswrapper[5050]: I1211 15:23:41.251343 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"3f437b6e-75bc-4597-9000-d2082eec6a3a","Type":"ContainerDied","Data":"cf1af31957de81c35a430a19290aab38b1f64bf84d13638d5ea34538cef10142"} Dec 11 15:23:41 crc kubenswrapper[5050]: I1211 15:23:41.251755 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"3f437b6e-75bc-4597-9000-d2082eec6a3a","Type":"ContainerDied","Data":"556e6e2f10f75df82d6e4ce17f2d0a4f7d543a9081b5fb407074feb768780908"} Dec 11 15:23:41 crc kubenswrapper[5050]: I1211 15:23:41.929532 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-volume-volume1-0" Dec 11 15:23:42 crc kubenswrapper[5050]: I1211 15:23:42.162889 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 11 15:23:42 crc kubenswrapper[5050]: I1211 15:23:42.233544 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3f437b6e-75bc-4597-9000-d2082eec6a3a-etc-machine-id\") pod \"3f437b6e-75bc-4597-9000-d2082eec6a3a\" (UID: \"3f437b6e-75bc-4597-9000-d2082eec6a3a\") " Dec 11 15:23:42 crc kubenswrapper[5050]: I1211 15:23:42.233707 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f437b6e-75bc-4597-9000-d2082eec6a3a-config-data\") pod \"3f437b6e-75bc-4597-9000-d2082eec6a3a\" (UID: \"3f437b6e-75bc-4597-9000-d2082eec6a3a\") " Dec 11 15:23:42 crc kubenswrapper[5050]: I1211 15:23:42.233786 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f437b6e-75bc-4597-9000-d2082eec6a3a-combined-ca-bundle\") pod \"3f437b6e-75bc-4597-9000-d2082eec6a3a\" (UID: \"3f437b6e-75bc-4597-9000-d2082eec6a3a\") " Dec 11 15:23:42 crc kubenswrapper[5050]: I1211 15:23:42.233703 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3f437b6e-75bc-4597-9000-d2082eec6a3a-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "3f437b6e-75bc-4597-9000-d2082eec6a3a" (UID: "3f437b6e-75bc-4597-9000-d2082eec6a3a"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 15:23:42 crc kubenswrapper[5050]: I1211 15:23:42.233850 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pvqnl\" (UniqueName: \"kubernetes.io/projected/3f437b6e-75bc-4597-9000-d2082eec6a3a-kube-api-access-pvqnl\") pod \"3f437b6e-75bc-4597-9000-d2082eec6a3a\" (UID: \"3f437b6e-75bc-4597-9000-d2082eec6a3a\") " Dec 11 15:23:42 crc kubenswrapper[5050]: I1211 15:23:42.233919 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3f437b6e-75bc-4597-9000-d2082eec6a3a-config-data-custom\") pod \"3f437b6e-75bc-4597-9000-d2082eec6a3a\" (UID: \"3f437b6e-75bc-4597-9000-d2082eec6a3a\") " Dec 11 15:23:42 crc kubenswrapper[5050]: I1211 15:23:42.233992 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f437b6e-75bc-4597-9000-d2082eec6a3a-scripts\") pod \"3f437b6e-75bc-4597-9000-d2082eec6a3a\" (UID: \"3f437b6e-75bc-4597-9000-d2082eec6a3a\") " Dec 11 15:23:42 crc kubenswrapper[5050]: I1211 15:23:42.234488 5050 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3f437b6e-75bc-4597-9000-d2082eec6a3a-etc-machine-id\") on node \"crc\" DevicePath \"\"" Dec 11 15:23:42 crc kubenswrapper[5050]: I1211 15:23:42.239793 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f437b6e-75bc-4597-9000-d2082eec6a3a-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "3f437b6e-75bc-4597-9000-d2082eec6a3a" (UID: "3f437b6e-75bc-4597-9000-d2082eec6a3a"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 15:23:42 crc kubenswrapper[5050]: I1211 15:23:42.239825 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f437b6e-75bc-4597-9000-d2082eec6a3a-kube-api-access-pvqnl" (OuterVolumeSpecName: "kube-api-access-pvqnl") pod "3f437b6e-75bc-4597-9000-d2082eec6a3a" (UID: "3f437b6e-75bc-4597-9000-d2082eec6a3a"). InnerVolumeSpecName "kube-api-access-pvqnl". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 15:23:42 crc kubenswrapper[5050]: I1211 15:23:42.239923 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f437b6e-75bc-4597-9000-d2082eec6a3a-scripts" (OuterVolumeSpecName: "scripts") pod "3f437b6e-75bc-4597-9000-d2082eec6a3a" (UID: "3f437b6e-75bc-4597-9000-d2082eec6a3a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 15:23:42 crc kubenswrapper[5050]: I1211 15:23:42.267398 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"3f437b6e-75bc-4597-9000-d2082eec6a3a","Type":"ContainerDied","Data":"9c4680f4d1db509d45fafcba9810f824e975b205c7232d86fb5f3ba3f4d94535"} Dec 11 15:23:42 crc kubenswrapper[5050]: I1211 15:23:42.267734 5050 scope.go:117] "RemoveContainer" containerID="cf1af31957de81c35a430a19290aab38b1f64bf84d13638d5ea34538cef10142" Dec 11 15:23:42 crc kubenswrapper[5050]: I1211 15:23:42.267492 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 11 15:23:42 crc kubenswrapper[5050]: I1211 15:23:42.302578 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f437b6e-75bc-4597-9000-d2082eec6a3a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3f437b6e-75bc-4597-9000-d2082eec6a3a" (UID: "3f437b6e-75bc-4597-9000-d2082eec6a3a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 15:23:42 crc kubenswrapper[5050]: I1211 15:23:42.334678 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f437b6e-75bc-4597-9000-d2082eec6a3a-config-data" (OuterVolumeSpecName: "config-data") pod "3f437b6e-75bc-4597-9000-d2082eec6a3a" (UID: "3f437b6e-75bc-4597-9000-d2082eec6a3a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 15:23:42 crc kubenswrapper[5050]: I1211 15:23:42.335567 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f437b6e-75bc-4597-9000-d2082eec6a3a-config-data\") pod \"3f437b6e-75bc-4597-9000-d2082eec6a3a\" (UID: \"3f437b6e-75bc-4597-9000-d2082eec6a3a\") " Dec 11 15:23:42 crc kubenswrapper[5050]: W1211 15:23:42.335634 5050 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/3f437b6e-75bc-4597-9000-d2082eec6a3a/volumes/kubernetes.io~secret/config-data Dec 11 15:23:42 crc kubenswrapper[5050]: I1211 15:23:42.335646 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f437b6e-75bc-4597-9000-d2082eec6a3a-config-data" (OuterVolumeSpecName: "config-data") pod "3f437b6e-75bc-4597-9000-d2082eec6a3a" (UID: "3f437b6e-75bc-4597-9000-d2082eec6a3a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 15:23:42 crc kubenswrapper[5050]: I1211 15:23:42.336283 5050 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f437b6e-75bc-4597-9000-d2082eec6a3a-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 15:23:42 crc kubenswrapper[5050]: I1211 15:23:42.336307 5050 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f437b6e-75bc-4597-9000-d2082eec6a3a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 15:23:42 crc kubenswrapper[5050]: I1211 15:23:42.336322 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pvqnl\" (UniqueName: \"kubernetes.io/projected/3f437b6e-75bc-4597-9000-d2082eec6a3a-kube-api-access-pvqnl\") on node \"crc\" DevicePath \"\"" Dec 11 15:23:42 crc kubenswrapper[5050]: I1211 15:23:42.336335 5050 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3f437b6e-75bc-4597-9000-d2082eec6a3a-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 11 15:23:42 crc kubenswrapper[5050]: I1211 15:23:42.336346 5050 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f437b6e-75bc-4597-9000-d2082eec6a3a-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 15:23:42 crc kubenswrapper[5050]: I1211 15:23:42.411567 5050 scope.go:117] "RemoveContainer" containerID="556e6e2f10f75df82d6e4ce17f2d0a4f7d543a9081b5fb407074feb768780908" Dec 11 15:23:42 crc kubenswrapper[5050]: I1211 15:23:42.537794 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-backup-0" Dec 11 15:23:42 crc kubenswrapper[5050]: I1211 15:23:42.611360 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 11 15:23:42 crc kubenswrapper[5050]: I1211 15:23:42.622311 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 11 15:23:42 crc kubenswrapper[5050]: I1211 15:23:42.634297 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Dec 11 15:23:42 crc kubenswrapper[5050]: E1211 15:23:42.634723 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f437b6e-75bc-4597-9000-d2082eec6a3a" containerName="cinder-scheduler" Dec 11 15:23:42 crc kubenswrapper[5050]: I1211 15:23:42.634741 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f437b6e-75bc-4597-9000-d2082eec6a3a" containerName="cinder-scheduler" Dec 11 15:23:42 crc kubenswrapper[5050]: E1211 15:23:42.634781 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f437b6e-75bc-4597-9000-d2082eec6a3a" containerName="probe" Dec 11 15:23:42 crc kubenswrapper[5050]: I1211 15:23:42.634787 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f437b6e-75bc-4597-9000-d2082eec6a3a" containerName="probe" Dec 11 15:23:42 crc kubenswrapper[5050]: I1211 15:23:42.634951 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f437b6e-75bc-4597-9000-d2082eec6a3a" containerName="probe" Dec 11 15:23:42 crc kubenswrapper[5050]: I1211 15:23:42.634975 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f437b6e-75bc-4597-9000-d2082eec6a3a" containerName="cinder-scheduler" Dec 11 15:23:42 crc kubenswrapper[5050]: I1211 15:23:42.636354 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 11 15:23:42 crc kubenswrapper[5050]: I1211 15:23:42.638571 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Dec 11 15:23:42 crc kubenswrapper[5050]: I1211 15:23:42.655400 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 11 15:23:42 crc kubenswrapper[5050]: I1211 15:23:42.744858 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/131d56da-b770-4452-97c9-b585434da431-scripts\") pod \"cinder-scheduler-0\" (UID: \"131d56da-b770-4452-97c9-b585434da431\") " pod="openstack/cinder-scheduler-0" Dec 11 15:23:42 crc kubenswrapper[5050]: I1211 15:23:42.744984 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qkkjz\" (UniqueName: \"kubernetes.io/projected/131d56da-b770-4452-97c9-b585434da431-kube-api-access-qkkjz\") pod \"cinder-scheduler-0\" (UID: \"131d56da-b770-4452-97c9-b585434da431\") " pod="openstack/cinder-scheduler-0" Dec 11 15:23:42 crc kubenswrapper[5050]: I1211 15:23:42.745053 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/131d56da-b770-4452-97c9-b585434da431-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"131d56da-b770-4452-97c9-b585434da431\") " pod="openstack/cinder-scheduler-0" Dec 11 15:23:42 crc kubenswrapper[5050]: I1211 15:23:42.745090 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/131d56da-b770-4452-97c9-b585434da431-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"131d56da-b770-4452-97c9-b585434da431\") " pod="openstack/cinder-scheduler-0" Dec 11 15:23:42 crc kubenswrapper[5050]: I1211 15:23:42.745294 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/131d56da-b770-4452-97c9-b585434da431-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"131d56da-b770-4452-97c9-b585434da431\") " pod="openstack/cinder-scheduler-0" Dec 11 15:23:42 crc kubenswrapper[5050]: I1211 15:23:42.745342 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/131d56da-b770-4452-97c9-b585434da431-config-data\") pod \"cinder-scheduler-0\" (UID: \"131d56da-b770-4452-97c9-b585434da431\") " pod="openstack/cinder-scheduler-0" Dec 11 15:23:42 crc kubenswrapper[5050]: I1211 15:23:42.847066 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/131d56da-b770-4452-97c9-b585434da431-scripts\") pod \"cinder-scheduler-0\" (UID: \"131d56da-b770-4452-97c9-b585434da431\") " pod="openstack/cinder-scheduler-0" Dec 11 15:23:42 crc kubenswrapper[5050]: I1211 15:23:42.847181 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qkkjz\" (UniqueName: \"kubernetes.io/projected/131d56da-b770-4452-97c9-b585434da431-kube-api-access-qkkjz\") pod \"cinder-scheduler-0\" (UID: \"131d56da-b770-4452-97c9-b585434da431\") " pod="openstack/cinder-scheduler-0" Dec 11 15:23:42 crc kubenswrapper[5050]: I1211 15:23:42.847218 5050 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/131d56da-b770-4452-97c9-b585434da431-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"131d56da-b770-4452-97c9-b585434da431\") " pod="openstack/cinder-scheduler-0" Dec 11 15:23:42 crc kubenswrapper[5050]: I1211 15:23:42.847239 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/131d56da-b770-4452-97c9-b585434da431-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"131d56da-b770-4452-97c9-b585434da431\") " pod="openstack/cinder-scheduler-0" Dec 11 15:23:42 crc kubenswrapper[5050]: I1211 15:23:42.847287 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/131d56da-b770-4452-97c9-b585434da431-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"131d56da-b770-4452-97c9-b585434da431\") " pod="openstack/cinder-scheduler-0" Dec 11 15:23:42 crc kubenswrapper[5050]: I1211 15:23:42.847301 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/131d56da-b770-4452-97c9-b585434da431-config-data\") pod \"cinder-scheduler-0\" (UID: \"131d56da-b770-4452-97c9-b585434da431\") " pod="openstack/cinder-scheduler-0" Dec 11 15:23:42 crc kubenswrapper[5050]: I1211 15:23:42.847358 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/131d56da-b770-4452-97c9-b585434da431-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"131d56da-b770-4452-97c9-b585434da431\") " pod="openstack/cinder-scheduler-0" Dec 11 15:23:42 crc kubenswrapper[5050]: I1211 15:23:42.853006 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/131d56da-b770-4452-97c9-b585434da431-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"131d56da-b770-4452-97c9-b585434da431\") " pod="openstack/cinder-scheduler-0" Dec 11 15:23:42 crc kubenswrapper[5050]: I1211 15:23:42.853106 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/131d56da-b770-4452-97c9-b585434da431-config-data\") pod \"cinder-scheduler-0\" (UID: \"131d56da-b770-4452-97c9-b585434da431\") " pod="openstack/cinder-scheduler-0" Dec 11 15:23:42 crc kubenswrapper[5050]: I1211 15:23:42.853721 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/131d56da-b770-4452-97c9-b585434da431-scripts\") pod \"cinder-scheduler-0\" (UID: \"131d56da-b770-4452-97c9-b585434da431\") " pod="openstack/cinder-scheduler-0" Dec 11 15:23:42 crc kubenswrapper[5050]: I1211 15:23:42.854306 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/131d56da-b770-4452-97c9-b585434da431-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"131d56da-b770-4452-97c9-b585434da431\") " pod="openstack/cinder-scheduler-0" Dec 11 15:23:42 crc kubenswrapper[5050]: I1211 15:23:42.865679 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qkkjz\" (UniqueName: \"kubernetes.io/projected/131d56da-b770-4452-97c9-b585434da431-kube-api-access-qkkjz\") pod \"cinder-scheduler-0\" (UID: \"131d56da-b770-4452-97c9-b585434da431\") " pod="openstack/cinder-scheduler-0" Dec 11 
15:23:42 crc kubenswrapper[5050]: I1211 15:23:42.988481 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 11 15:23:43 crc kubenswrapper[5050]: I1211 15:23:43.430488 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 11 15:23:43 crc kubenswrapper[5050]: I1211 15:23:43.558324 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f437b6e-75bc-4597-9000-d2082eec6a3a" path="/var/lib/kubelet/pods/3f437b6e-75bc-4597-9000-d2082eec6a3a/volumes" Dec 11 15:23:44 crc kubenswrapper[5050]: I1211 15:23:44.296654 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"131d56da-b770-4452-97c9-b585434da431","Type":"ContainerStarted","Data":"d3e1fc093c150e4e423d3557b3ec8c637526707132012ee62f3903c848934b44"} Dec 11 15:23:44 crc kubenswrapper[5050]: I1211 15:23:44.296946 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"131d56da-b770-4452-97c9-b585434da431","Type":"ContainerStarted","Data":"847b84a6adf4c6b174b1c9d40f99f3462fa0811b8e18dc1c4ea8e7dc7f86beb8"} Dec 11 15:23:45 crc kubenswrapper[5050]: I1211 15:23:45.310518 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"131d56da-b770-4452-97c9-b585434da431","Type":"ContainerStarted","Data":"c04a0317369b0a487fd680292047296577499dd5900e9caa4278da65814c704c"} Dec 11 15:23:45 crc kubenswrapper[5050]: I1211 15:23:45.329613 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.32959248 podStartE2EDuration="3.32959248s" podCreationTimestamp="2025-12-11 15:23:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 15:23:45.324605897 +0000 UTC m=+5716.168328483" watchObservedRunningTime="2025-12-11 15:23:45.32959248 +0000 UTC m=+5716.173315066" Dec 11 15:23:47 crc kubenswrapper[5050]: I1211 15:23:47.989456 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Dec 11 15:23:48 crc kubenswrapper[5050]: I1211 15:23:48.483047 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Dec 11 15:23:53 crc kubenswrapper[5050]: I1211 15:23:53.186005 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Dec 11 15:24:10 crc kubenswrapper[5050]: I1211 15:24:10.796729 5050 patch_prober.go:28] interesting pod/machine-config-daemon-wcb2s container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 15:24:10 crc kubenswrapper[5050]: I1211 15:24:10.797432 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 15:24:40 crc kubenswrapper[5050]: I1211 15:24:40.796425 5050 patch_prober.go:28] interesting pod/machine-config-daemon-wcb2s container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 15:24:40 crc kubenswrapper[5050]: I1211 15:24:40.797165 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 15:24:46 crc kubenswrapper[5050]: I1211 15:24:46.695758 5050 scope.go:117] "RemoveContainer" containerID="2fecee04939a8bbc1c0882005945f113c0816a3b86833f66803dce3e80894c33" Dec 11 15:24:46 crc kubenswrapper[5050]: I1211 15:24:46.716245 5050 scope.go:117] "RemoveContainer" containerID="f40a12d709ef29585091a6c86e877fdf647a9c3f6d43d18d1c91b9be70195e74" Dec 11 15:25:10 crc kubenswrapper[5050]: I1211 15:25:10.796133 5050 patch_prober.go:28] interesting pod/machine-config-daemon-wcb2s container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 15:25:10 crc kubenswrapper[5050]: I1211 15:25:10.797143 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 15:25:10 crc kubenswrapper[5050]: I1211 15:25:10.797203 5050 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" Dec 11 15:25:10 crc kubenswrapper[5050]: I1211 15:25:10.797933 5050 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"cd874cd802dc062c08f2ff800368e85c46f12b5b02fbe3836d6e9375c3dc66f8"} pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 11 15:25:10 crc kubenswrapper[5050]: I1211 15:25:10.797989 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" containerName="machine-config-daemon" containerID="cri-o://cd874cd802dc062c08f2ff800368e85c46f12b5b02fbe3836d6e9375c3dc66f8" gracePeriod=600 Dec 11 15:25:12 crc kubenswrapper[5050]: I1211 15:25:12.158581 5050 generic.go:334] "Generic (PLEG): container finished" podID="7e849b2e-7cd7-4e49-acd2-deab139c699a" containerID="cd874cd802dc062c08f2ff800368e85c46f12b5b02fbe3836d6e9375c3dc66f8" exitCode=0 Dec 11 15:25:12 crc kubenswrapper[5050]: I1211 15:25:12.158699 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" event={"ID":"7e849b2e-7cd7-4e49-acd2-deab139c699a","Type":"ContainerDied","Data":"cd874cd802dc062c08f2ff800368e85c46f12b5b02fbe3836d6e9375c3dc66f8"} Dec 11 15:25:12 crc kubenswrapper[5050]: I1211 15:25:12.159072 5050 scope.go:117] "RemoveContainer" containerID="5e751c2ff817a404aae81889734d5a9bd57af497d8e885add288368bb5b0040e" Dec 11 15:25:14 crc kubenswrapper[5050]: I1211 15:25:14.181167 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" event={"ID":"7e849b2e-7cd7-4e49-acd2-deab139c699a","Type":"ContainerStarted","Data":"ebf03c01a0c1f974cd2ea46998cc404d2f6db1e8b59294fa047964a30636886e"} Dec 11 15:25:21 crc kubenswrapper[5050]: I1211 15:25:21.514282 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-64hw5"] Dec 11 15:25:21 crc kubenswrapper[5050]: I1211 15:25:21.517174 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-64hw5" Dec 11 15:25:21 crc kubenswrapper[5050]: I1211 15:25:21.522293 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-64hw5"] Dec 11 15:25:21 crc kubenswrapper[5050]: I1211 15:25:21.530221 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dk5t2\" (UniqueName: \"kubernetes.io/projected/577c3778-67c2-4488-9f29-ea57d11059d8-kube-api-access-dk5t2\") pod \"redhat-operators-64hw5\" (UID: \"577c3778-67c2-4488-9f29-ea57d11059d8\") " pod="openshift-marketplace/redhat-operators-64hw5" Dec 11 15:25:21 crc kubenswrapper[5050]: I1211 15:25:21.530282 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/577c3778-67c2-4488-9f29-ea57d11059d8-catalog-content\") pod \"redhat-operators-64hw5\" (UID: \"577c3778-67c2-4488-9f29-ea57d11059d8\") " pod="openshift-marketplace/redhat-operators-64hw5" Dec 11 15:25:21 crc kubenswrapper[5050]: I1211 15:25:21.530307 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/577c3778-67c2-4488-9f29-ea57d11059d8-utilities\") pod \"redhat-operators-64hw5\" (UID: \"577c3778-67c2-4488-9f29-ea57d11059d8\") " pod="openshift-marketplace/redhat-operators-64hw5" Dec 11 15:25:21 crc kubenswrapper[5050]: I1211 15:25:21.632318 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dk5t2\" (UniqueName: \"kubernetes.io/projected/577c3778-67c2-4488-9f29-ea57d11059d8-kube-api-access-dk5t2\") pod \"redhat-operators-64hw5\" (UID: \"577c3778-67c2-4488-9f29-ea57d11059d8\") " pod="openshift-marketplace/redhat-operators-64hw5" Dec 11 15:25:21 crc kubenswrapper[5050]: I1211 15:25:21.632368 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/577c3778-67c2-4488-9f29-ea57d11059d8-catalog-content\") pod \"redhat-operators-64hw5\" (UID: \"577c3778-67c2-4488-9f29-ea57d11059d8\") " pod="openshift-marketplace/redhat-operators-64hw5" Dec 11 15:25:21 crc kubenswrapper[5050]: I1211 15:25:21.632385 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/577c3778-67c2-4488-9f29-ea57d11059d8-utilities\") pod \"redhat-operators-64hw5\" (UID: \"577c3778-67c2-4488-9f29-ea57d11059d8\") " pod="openshift-marketplace/redhat-operators-64hw5" Dec 11 15:25:21 crc kubenswrapper[5050]: I1211 15:25:21.633652 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/577c3778-67c2-4488-9f29-ea57d11059d8-utilities\") pod \"redhat-operators-64hw5\" (UID: \"577c3778-67c2-4488-9f29-ea57d11059d8\") " pod="openshift-marketplace/redhat-operators-64hw5" Dec 11 15:25:21 crc 
kubenswrapper[5050]: I1211 15:25:21.633779 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/577c3778-67c2-4488-9f29-ea57d11059d8-catalog-content\") pod \"redhat-operators-64hw5\" (UID: \"577c3778-67c2-4488-9f29-ea57d11059d8\") " pod="openshift-marketplace/redhat-operators-64hw5" Dec 11 15:25:21 crc kubenswrapper[5050]: I1211 15:25:21.684276 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dk5t2\" (UniqueName: \"kubernetes.io/projected/577c3778-67c2-4488-9f29-ea57d11059d8-kube-api-access-dk5t2\") pod \"redhat-operators-64hw5\" (UID: \"577c3778-67c2-4488-9f29-ea57d11059d8\") " pod="openshift-marketplace/redhat-operators-64hw5" Dec 11 15:25:21 crc kubenswrapper[5050]: I1211 15:25:21.860369 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-64hw5" Dec 11 15:25:22 crc kubenswrapper[5050]: I1211 15:25:22.327028 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-64hw5"] Dec 11 15:25:23 crc kubenswrapper[5050]: I1211 15:25:23.256899 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-64hw5" event={"ID":"577c3778-67c2-4488-9f29-ea57d11059d8","Type":"ContainerStarted","Data":"a49edff9bafc76f0fd38f7aa8c604b252093157ec330ae46f296bb0a9e00e95f"} Dec 11 15:25:24 crc kubenswrapper[5050]: I1211 15:25:24.266218 5050 generic.go:334] "Generic (PLEG): container finished" podID="577c3778-67c2-4488-9f29-ea57d11059d8" containerID="f6c353ddf0851219239814f3fe58ceb7361901c361604b7ff3b2196b777c9a50" exitCode=0 Dec 11 15:25:24 crc kubenswrapper[5050]: I1211 15:25:24.266265 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-64hw5" event={"ID":"577c3778-67c2-4488-9f29-ea57d11059d8","Type":"ContainerDied","Data":"f6c353ddf0851219239814f3fe58ceb7361901c361604b7ff3b2196b777c9a50"} Dec 11 15:25:30 crc kubenswrapper[5050]: I1211 15:25:30.327415 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-64hw5" event={"ID":"577c3778-67c2-4488-9f29-ea57d11059d8","Type":"ContainerStarted","Data":"aba82de2f9bab7048c9c43ca4435d887d6d89f7f05e0bbf5e4e3add4fd7300b8"} Dec 11 15:25:32 crc kubenswrapper[5050]: I1211 15:25:32.347206 5050 generic.go:334] "Generic (PLEG): container finished" podID="577c3778-67c2-4488-9f29-ea57d11059d8" containerID="aba82de2f9bab7048c9c43ca4435d887d6d89f7f05e0bbf5e4e3add4fd7300b8" exitCode=0 Dec 11 15:25:32 crc kubenswrapper[5050]: I1211 15:25:32.347402 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-64hw5" event={"ID":"577c3778-67c2-4488-9f29-ea57d11059d8","Type":"ContainerDied","Data":"aba82de2f9bab7048c9c43ca4435d887d6d89f7f05e0bbf5e4e3add4fd7300b8"} Dec 11 15:25:35 crc kubenswrapper[5050]: I1211 15:25:35.344217 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-4494l"] Dec 11 15:25:35 crc kubenswrapper[5050]: I1211 15:25:35.346416 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-4494l" Dec 11 15:25:35 crc kubenswrapper[5050]: I1211 15:25:35.348568 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-fjq8l" Dec 11 15:25:35 crc kubenswrapper[5050]: I1211 15:25:35.354477 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-z8z22"] Dec 11 15:25:35 crc kubenswrapper[5050]: I1211 15:25:35.356759 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-z8z22" Dec 11 15:25:35 crc kubenswrapper[5050]: I1211 15:25:35.362596 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-4494l"] Dec 11 15:25:35 crc kubenswrapper[5050]: I1211 15:25:35.366276 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Dec 11 15:25:35 crc kubenswrapper[5050]: I1211 15:25:35.403590 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5ed6b539-c9da-4b55-8df5-dd96bfc586dc-scripts\") pod \"ovn-controller-ovs-z8z22\" (UID: \"5ed6b539-c9da-4b55-8df5-dd96bfc586dc\") " pod="openstack/ovn-controller-ovs-z8z22" Dec 11 15:25:35 crc kubenswrapper[5050]: I1211 15:25:35.403651 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/5ed6b539-c9da-4b55-8df5-dd96bfc586dc-etc-ovs\") pod \"ovn-controller-ovs-z8z22\" (UID: \"5ed6b539-c9da-4b55-8df5-dd96bfc586dc\") " pod="openstack/ovn-controller-ovs-z8z22" Dec 11 15:25:35 crc kubenswrapper[5050]: I1211 15:25:35.403916 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/d6797fb3-9b0d-4856-a741-eb4c640a65ba-var-run-ovn\") pod \"ovn-controller-4494l\" (UID: \"d6797fb3-9b0d-4856-a741-eb4c640a65ba\") " pod="openstack/ovn-controller-4494l" Dec 11 15:25:35 crc kubenswrapper[5050]: I1211 15:25:35.403947 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/d6797fb3-9b0d-4856-a741-eb4c640a65ba-var-log-ovn\") pod \"ovn-controller-4494l\" (UID: \"d6797fb3-9b0d-4856-a741-eb4c640a65ba\") " pod="openstack/ovn-controller-4494l" Dec 11 15:25:35 crc kubenswrapper[5050]: I1211 15:25:35.403985 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmb5r\" (UniqueName: \"kubernetes.io/projected/d6797fb3-9b0d-4856-a741-eb4c640a65ba-kube-api-access-dmb5r\") pod \"ovn-controller-4494l\" (UID: \"d6797fb3-9b0d-4856-a741-eb4c640a65ba\") " pod="openstack/ovn-controller-4494l" Dec 11 15:25:35 crc kubenswrapper[5050]: I1211 15:25:35.404028 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/5ed6b539-c9da-4b55-8df5-dd96bfc586dc-var-run\") pod \"ovn-controller-ovs-z8z22\" (UID: \"5ed6b539-c9da-4b55-8df5-dd96bfc586dc\") " pod="openstack/ovn-controller-ovs-z8z22" Dec 11 15:25:35 crc kubenswrapper[5050]: I1211 15:25:35.404193 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mnw99\" (UniqueName: \"kubernetes.io/projected/5ed6b539-c9da-4b55-8df5-dd96bfc586dc-kube-api-access-mnw99\") pod 
\"ovn-controller-ovs-z8z22\" (UID: \"5ed6b539-c9da-4b55-8df5-dd96bfc586dc\") " pod="openstack/ovn-controller-ovs-z8z22" Dec 11 15:25:35 crc kubenswrapper[5050]: I1211 15:25:35.404222 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/5ed6b539-c9da-4b55-8df5-dd96bfc586dc-var-lib\") pod \"ovn-controller-ovs-z8z22\" (UID: \"5ed6b539-c9da-4b55-8df5-dd96bfc586dc\") " pod="openstack/ovn-controller-ovs-z8z22" Dec 11 15:25:35 crc kubenswrapper[5050]: I1211 15:25:35.404405 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d6797fb3-9b0d-4856-a741-eb4c640a65ba-var-run\") pod \"ovn-controller-4494l\" (UID: \"d6797fb3-9b0d-4856-a741-eb4c640a65ba\") " pod="openstack/ovn-controller-4494l" Dec 11 15:25:35 crc kubenswrapper[5050]: I1211 15:25:35.404474 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/5ed6b539-c9da-4b55-8df5-dd96bfc586dc-var-log\") pod \"ovn-controller-ovs-z8z22\" (UID: \"5ed6b539-c9da-4b55-8df5-dd96bfc586dc\") " pod="openstack/ovn-controller-ovs-z8z22" Dec 11 15:25:35 crc kubenswrapper[5050]: I1211 15:25:35.404524 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d6797fb3-9b0d-4856-a741-eb4c640a65ba-scripts\") pod \"ovn-controller-4494l\" (UID: \"d6797fb3-9b0d-4856-a741-eb4c640a65ba\") " pod="openstack/ovn-controller-4494l" Dec 11 15:25:35 crc kubenswrapper[5050]: I1211 15:25:35.412667 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-z8z22"] Dec 11 15:25:35 crc kubenswrapper[5050]: I1211 15:25:35.505870 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d6797fb3-9b0d-4856-a741-eb4c640a65ba-var-run\") pod \"ovn-controller-4494l\" (UID: \"d6797fb3-9b0d-4856-a741-eb4c640a65ba\") " pod="openstack/ovn-controller-4494l" Dec 11 15:25:35 crc kubenswrapper[5050]: I1211 15:25:35.505922 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/5ed6b539-c9da-4b55-8df5-dd96bfc586dc-var-log\") pod \"ovn-controller-ovs-z8z22\" (UID: \"5ed6b539-c9da-4b55-8df5-dd96bfc586dc\") " pod="openstack/ovn-controller-ovs-z8z22" Dec 11 15:25:35 crc kubenswrapper[5050]: I1211 15:25:35.505961 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d6797fb3-9b0d-4856-a741-eb4c640a65ba-scripts\") pod \"ovn-controller-4494l\" (UID: \"d6797fb3-9b0d-4856-a741-eb4c640a65ba\") " pod="openstack/ovn-controller-4494l" Dec 11 15:25:35 crc kubenswrapper[5050]: I1211 15:25:35.505985 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5ed6b539-c9da-4b55-8df5-dd96bfc586dc-scripts\") pod \"ovn-controller-ovs-z8z22\" (UID: \"5ed6b539-c9da-4b55-8df5-dd96bfc586dc\") " pod="openstack/ovn-controller-ovs-z8z22" Dec 11 15:25:35 crc kubenswrapper[5050]: I1211 15:25:35.506005 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/5ed6b539-c9da-4b55-8df5-dd96bfc586dc-etc-ovs\") pod \"ovn-controller-ovs-z8z22\" (UID: 
\"5ed6b539-c9da-4b55-8df5-dd96bfc586dc\") " pod="openstack/ovn-controller-ovs-z8z22" Dec 11 15:25:35 crc kubenswrapper[5050]: I1211 15:25:35.506044 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/d6797fb3-9b0d-4856-a741-eb4c640a65ba-var-run-ovn\") pod \"ovn-controller-4494l\" (UID: \"d6797fb3-9b0d-4856-a741-eb4c640a65ba\") " pod="openstack/ovn-controller-4494l" Dec 11 15:25:35 crc kubenswrapper[5050]: I1211 15:25:35.506069 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/d6797fb3-9b0d-4856-a741-eb4c640a65ba-var-log-ovn\") pod \"ovn-controller-4494l\" (UID: \"d6797fb3-9b0d-4856-a741-eb4c640a65ba\") " pod="openstack/ovn-controller-4494l" Dec 11 15:25:35 crc kubenswrapper[5050]: I1211 15:25:35.506097 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dmb5r\" (UniqueName: \"kubernetes.io/projected/d6797fb3-9b0d-4856-a741-eb4c640a65ba-kube-api-access-dmb5r\") pod \"ovn-controller-4494l\" (UID: \"d6797fb3-9b0d-4856-a741-eb4c640a65ba\") " pod="openstack/ovn-controller-4494l" Dec 11 15:25:35 crc kubenswrapper[5050]: I1211 15:25:35.506122 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/5ed6b539-c9da-4b55-8df5-dd96bfc586dc-var-run\") pod \"ovn-controller-ovs-z8z22\" (UID: \"5ed6b539-c9da-4b55-8df5-dd96bfc586dc\") " pod="openstack/ovn-controller-ovs-z8z22" Dec 11 15:25:35 crc kubenswrapper[5050]: I1211 15:25:35.506245 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mnw99\" (UniqueName: \"kubernetes.io/projected/5ed6b539-c9da-4b55-8df5-dd96bfc586dc-kube-api-access-mnw99\") pod \"ovn-controller-ovs-z8z22\" (UID: \"5ed6b539-c9da-4b55-8df5-dd96bfc586dc\") " pod="openstack/ovn-controller-ovs-z8z22" Dec 11 15:25:35 crc kubenswrapper[5050]: I1211 15:25:35.506267 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/5ed6b539-c9da-4b55-8df5-dd96bfc586dc-var-lib\") pod \"ovn-controller-ovs-z8z22\" (UID: \"5ed6b539-c9da-4b55-8df5-dd96bfc586dc\") " pod="openstack/ovn-controller-ovs-z8z22" Dec 11 15:25:35 crc kubenswrapper[5050]: I1211 15:25:35.506324 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/5ed6b539-c9da-4b55-8df5-dd96bfc586dc-var-log\") pod \"ovn-controller-ovs-z8z22\" (UID: \"5ed6b539-c9da-4b55-8df5-dd96bfc586dc\") " pod="openstack/ovn-controller-ovs-z8z22" Dec 11 15:25:35 crc kubenswrapper[5050]: I1211 15:25:35.506343 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d6797fb3-9b0d-4856-a741-eb4c640a65ba-var-run\") pod \"ovn-controller-4494l\" (UID: \"d6797fb3-9b0d-4856-a741-eb4c640a65ba\") " pod="openstack/ovn-controller-4494l" Dec 11 15:25:35 crc kubenswrapper[5050]: I1211 15:25:35.506356 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/d6797fb3-9b0d-4856-a741-eb4c640a65ba-var-run-ovn\") pod \"ovn-controller-4494l\" (UID: \"d6797fb3-9b0d-4856-a741-eb4c640a65ba\") " pod="openstack/ovn-controller-4494l" Dec 11 15:25:35 crc kubenswrapper[5050]: I1211 15:25:35.506386 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/d6797fb3-9b0d-4856-a741-eb4c640a65ba-var-log-ovn\") pod \"ovn-controller-4494l\" (UID: \"d6797fb3-9b0d-4856-a741-eb4c640a65ba\") " pod="openstack/ovn-controller-4494l" Dec 11 15:25:35 crc kubenswrapper[5050]: I1211 15:25:35.506418 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/5ed6b539-c9da-4b55-8df5-dd96bfc586dc-var-lib\") pod \"ovn-controller-ovs-z8z22\" (UID: \"5ed6b539-c9da-4b55-8df5-dd96bfc586dc\") " pod="openstack/ovn-controller-ovs-z8z22" Dec 11 15:25:35 crc kubenswrapper[5050]: I1211 15:25:35.506665 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/5ed6b539-c9da-4b55-8df5-dd96bfc586dc-var-run\") pod \"ovn-controller-ovs-z8z22\" (UID: \"5ed6b539-c9da-4b55-8df5-dd96bfc586dc\") " pod="openstack/ovn-controller-ovs-z8z22" Dec 11 15:25:35 crc kubenswrapper[5050]: I1211 15:25:35.507148 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/5ed6b539-c9da-4b55-8df5-dd96bfc586dc-etc-ovs\") pod \"ovn-controller-ovs-z8z22\" (UID: \"5ed6b539-c9da-4b55-8df5-dd96bfc586dc\") " pod="openstack/ovn-controller-ovs-z8z22" Dec 11 15:25:35 crc kubenswrapper[5050]: I1211 15:25:35.508665 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5ed6b539-c9da-4b55-8df5-dd96bfc586dc-scripts\") pod \"ovn-controller-ovs-z8z22\" (UID: \"5ed6b539-c9da-4b55-8df5-dd96bfc586dc\") " pod="openstack/ovn-controller-ovs-z8z22" Dec 11 15:25:35 crc kubenswrapper[5050]: I1211 15:25:35.510749 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d6797fb3-9b0d-4856-a741-eb4c640a65ba-scripts\") pod \"ovn-controller-4494l\" (UID: \"d6797fb3-9b0d-4856-a741-eb4c640a65ba\") " pod="openstack/ovn-controller-4494l" Dec 11 15:25:35 crc kubenswrapper[5050]: I1211 15:25:35.533178 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dmb5r\" (UniqueName: \"kubernetes.io/projected/d6797fb3-9b0d-4856-a741-eb4c640a65ba-kube-api-access-dmb5r\") pod \"ovn-controller-4494l\" (UID: \"d6797fb3-9b0d-4856-a741-eb4c640a65ba\") " pod="openstack/ovn-controller-4494l" Dec 11 15:25:35 crc kubenswrapper[5050]: I1211 15:25:35.537955 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mnw99\" (UniqueName: \"kubernetes.io/projected/5ed6b539-c9da-4b55-8df5-dd96bfc586dc-kube-api-access-mnw99\") pod \"ovn-controller-ovs-z8z22\" (UID: \"5ed6b539-c9da-4b55-8df5-dd96bfc586dc\") " pod="openstack/ovn-controller-ovs-z8z22" Dec 11 15:25:35 crc kubenswrapper[5050]: I1211 15:25:35.676100 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-4494l" Dec 11 15:25:35 crc kubenswrapper[5050]: I1211 15:25:35.690411 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-z8z22" Dec 11 15:25:36 crc kubenswrapper[5050]: I1211 15:25:36.150169 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-4494l"] Dec 11 15:25:36 crc kubenswrapper[5050]: I1211 15:25:36.397516 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-4494l" event={"ID":"d6797fb3-9b0d-4856-a741-eb4c640a65ba","Type":"ContainerStarted","Data":"fac8d58c7cf6e4cc84dd9ba3bfa57d0b6fa0e98775a6f954fae3c02097e2ca0c"} Dec 11 15:25:37 crc kubenswrapper[5050]: I1211 15:25:37.922824 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-z8z22"] Dec 11 15:25:37 crc kubenswrapper[5050]: W1211 15:25:37.931069 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5ed6b539_c9da_4b55_8df5_dd96bfc586dc.slice/crio-ec0f6f043a02f82d5339c7989ffc97610966d29511e21866147d52cde42adec9 WatchSource:0}: Error finding container ec0f6f043a02f82d5339c7989ffc97610966d29511e21866147d52cde42adec9: Status 404 returned error can't find the container with id ec0f6f043a02f82d5339c7989ffc97610966d29511e21866147d52cde42adec9 Dec 11 15:25:38 crc kubenswrapper[5050]: I1211 15:25:38.030699 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-hhqwr"] Dec 11 15:25:38 crc kubenswrapper[5050]: I1211 15:25:38.038854 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-hhqwr" Dec 11 15:25:38 crc kubenswrapper[5050]: I1211 15:25:38.043881 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Dec 11 15:25:38 crc kubenswrapper[5050]: I1211 15:25:38.046402 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-hhqwr"] Dec 11 15:25:38 crc kubenswrapper[5050]: I1211 15:25:38.170175 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/c6ba7710-21fa-406a-87bd-a33fe84cb6ed-ovn-rundir\") pod \"ovn-controller-metrics-hhqwr\" (UID: \"c6ba7710-21fa-406a-87bd-a33fe84cb6ed\") " pod="openstack/ovn-controller-metrics-hhqwr" Dec 11 15:25:38 crc kubenswrapper[5050]: I1211 15:25:38.170230 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c6ba7710-21fa-406a-87bd-a33fe84cb6ed-config\") pod \"ovn-controller-metrics-hhqwr\" (UID: \"c6ba7710-21fa-406a-87bd-a33fe84cb6ed\") " pod="openstack/ovn-controller-metrics-hhqwr" Dec 11 15:25:38 crc kubenswrapper[5050]: I1211 15:25:38.170295 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/c6ba7710-21fa-406a-87bd-a33fe84cb6ed-ovs-rundir\") pod \"ovn-controller-metrics-hhqwr\" (UID: \"c6ba7710-21fa-406a-87bd-a33fe84cb6ed\") " pod="openstack/ovn-controller-metrics-hhqwr" Dec 11 15:25:38 crc kubenswrapper[5050]: I1211 15:25:38.170414 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bgjdj\" (UniqueName: \"kubernetes.io/projected/c6ba7710-21fa-406a-87bd-a33fe84cb6ed-kube-api-access-bgjdj\") pod \"ovn-controller-metrics-hhqwr\" (UID: \"c6ba7710-21fa-406a-87bd-a33fe84cb6ed\") " pod="openstack/ovn-controller-metrics-hhqwr" Dec 11 15:25:38 crc 
kubenswrapper[5050]: I1211 15:25:38.272065 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/c6ba7710-21fa-406a-87bd-a33fe84cb6ed-ovn-rundir\") pod \"ovn-controller-metrics-hhqwr\" (UID: \"c6ba7710-21fa-406a-87bd-a33fe84cb6ed\") " pod="openstack/ovn-controller-metrics-hhqwr" Dec 11 15:25:38 crc kubenswrapper[5050]: I1211 15:25:38.272142 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c6ba7710-21fa-406a-87bd-a33fe84cb6ed-config\") pod \"ovn-controller-metrics-hhqwr\" (UID: \"c6ba7710-21fa-406a-87bd-a33fe84cb6ed\") " pod="openstack/ovn-controller-metrics-hhqwr" Dec 11 15:25:38 crc kubenswrapper[5050]: I1211 15:25:38.272203 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/c6ba7710-21fa-406a-87bd-a33fe84cb6ed-ovs-rundir\") pod \"ovn-controller-metrics-hhqwr\" (UID: \"c6ba7710-21fa-406a-87bd-a33fe84cb6ed\") " pod="openstack/ovn-controller-metrics-hhqwr" Dec 11 15:25:38 crc kubenswrapper[5050]: I1211 15:25:38.272296 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bgjdj\" (UniqueName: \"kubernetes.io/projected/c6ba7710-21fa-406a-87bd-a33fe84cb6ed-kube-api-access-bgjdj\") pod \"ovn-controller-metrics-hhqwr\" (UID: \"c6ba7710-21fa-406a-87bd-a33fe84cb6ed\") " pod="openstack/ovn-controller-metrics-hhqwr" Dec 11 15:25:38 crc kubenswrapper[5050]: I1211 15:25:38.272314 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/c6ba7710-21fa-406a-87bd-a33fe84cb6ed-ovn-rundir\") pod \"ovn-controller-metrics-hhqwr\" (UID: \"c6ba7710-21fa-406a-87bd-a33fe84cb6ed\") " pod="openstack/ovn-controller-metrics-hhqwr" Dec 11 15:25:38 crc kubenswrapper[5050]: I1211 15:25:38.272540 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/c6ba7710-21fa-406a-87bd-a33fe84cb6ed-ovs-rundir\") pod \"ovn-controller-metrics-hhqwr\" (UID: \"c6ba7710-21fa-406a-87bd-a33fe84cb6ed\") " pod="openstack/ovn-controller-metrics-hhqwr" Dec 11 15:25:38 crc kubenswrapper[5050]: I1211 15:25:38.272947 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c6ba7710-21fa-406a-87bd-a33fe84cb6ed-config\") pod \"ovn-controller-metrics-hhqwr\" (UID: \"c6ba7710-21fa-406a-87bd-a33fe84cb6ed\") " pod="openstack/ovn-controller-metrics-hhqwr" Dec 11 15:25:38 crc kubenswrapper[5050]: I1211 15:25:38.297236 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bgjdj\" (UniqueName: \"kubernetes.io/projected/c6ba7710-21fa-406a-87bd-a33fe84cb6ed-kube-api-access-bgjdj\") pod \"ovn-controller-metrics-hhqwr\" (UID: \"c6ba7710-21fa-406a-87bd-a33fe84cb6ed\") " pod="openstack/ovn-controller-metrics-hhqwr" Dec 11 15:25:38 crc kubenswrapper[5050]: I1211 15:25:38.386648 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-hhqwr" Dec 11 15:25:38 crc kubenswrapper[5050]: I1211 15:25:38.417834 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-z8z22" event={"ID":"5ed6b539-c9da-4b55-8df5-dd96bfc586dc","Type":"ContainerStarted","Data":"ec0f6f043a02f82d5339c7989ffc97610966d29511e21866147d52cde42adec9"} Dec 11 15:25:38 crc kubenswrapper[5050]: I1211 15:25:38.840499 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-hhqwr"] Dec 11 15:25:39 crc kubenswrapper[5050]: I1211 15:25:39.427519 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-hhqwr" event={"ID":"c6ba7710-21fa-406a-87bd-a33fe84cb6ed","Type":"ContainerStarted","Data":"1dcfbbd7871ebba05a9a4ffe1b18056c456f189e3171cd4f68ec536fc7d95d4a"} Dec 11 15:25:42 crc kubenswrapper[5050]: I1211 15:25:42.684471 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-db-create-krjxg"] Dec 11 15:25:42 crc kubenswrapper[5050]: I1211 15:25:42.686469 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-db-create-krjxg" Dec 11 15:25:42 crc kubenswrapper[5050]: I1211 15:25:42.711380 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-db-create-krjxg"] Dec 11 15:25:42 crc kubenswrapper[5050]: I1211 15:25:42.858709 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/836453ed-a74b-46f5-a16e-7e5276f60c2a-operator-scripts\") pod \"octavia-db-create-krjxg\" (UID: \"836453ed-a74b-46f5-a16e-7e5276f60c2a\") " pod="openstack/octavia-db-create-krjxg" Dec 11 15:25:42 crc kubenswrapper[5050]: I1211 15:25:42.858784 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hv7kf\" (UniqueName: \"kubernetes.io/projected/836453ed-a74b-46f5-a16e-7e5276f60c2a-kube-api-access-hv7kf\") pod \"octavia-db-create-krjxg\" (UID: \"836453ed-a74b-46f5-a16e-7e5276f60c2a\") " pod="openstack/octavia-db-create-krjxg" Dec 11 15:25:42 crc kubenswrapper[5050]: I1211 15:25:42.961148 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hv7kf\" (UniqueName: \"kubernetes.io/projected/836453ed-a74b-46f5-a16e-7e5276f60c2a-kube-api-access-hv7kf\") pod \"octavia-db-create-krjxg\" (UID: \"836453ed-a74b-46f5-a16e-7e5276f60c2a\") " pod="openstack/octavia-db-create-krjxg" Dec 11 15:25:42 crc kubenswrapper[5050]: I1211 15:25:42.961382 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/836453ed-a74b-46f5-a16e-7e5276f60c2a-operator-scripts\") pod \"octavia-db-create-krjxg\" (UID: \"836453ed-a74b-46f5-a16e-7e5276f60c2a\") " pod="openstack/octavia-db-create-krjxg" Dec 11 15:25:42 crc kubenswrapper[5050]: I1211 15:25:42.962382 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/836453ed-a74b-46f5-a16e-7e5276f60c2a-operator-scripts\") pod \"octavia-db-create-krjxg\" (UID: \"836453ed-a74b-46f5-a16e-7e5276f60c2a\") " pod="openstack/octavia-db-create-krjxg" Dec 11 15:25:42 crc kubenswrapper[5050]: I1211 15:25:42.984946 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hv7kf\" (UniqueName: 
\"kubernetes.io/projected/836453ed-a74b-46f5-a16e-7e5276f60c2a-kube-api-access-hv7kf\") pod \"octavia-db-create-krjxg\" (UID: \"836453ed-a74b-46f5-a16e-7e5276f60c2a\") " pod="openstack/octavia-db-create-krjxg" Dec 11 15:25:43 crc kubenswrapper[5050]: I1211 15:25:43.039101 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-db-create-krjxg" Dec 11 15:25:43 crc kubenswrapper[5050]: I1211 15:25:43.462967 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-4494l" event={"ID":"d6797fb3-9b0d-4856-a741-eb4c640a65ba","Type":"ContainerStarted","Data":"c39c4100b65ec2808affe1a017775f8a6a1752d5c7724a7e948345d68b55e5b8"} Dec 11 15:25:43 crc kubenswrapper[5050]: I1211 15:25:43.463464 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-4494l" Dec 11 15:25:43 crc kubenswrapper[5050]: I1211 15:25:43.465046 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-hhqwr" event={"ID":"c6ba7710-21fa-406a-87bd-a33fe84cb6ed","Type":"ContainerStarted","Data":"8abff4893fe62ab5efe7ca72691c816bc24958156b7c1c2e32c87d05d281d673"} Dec 11 15:25:43 crc kubenswrapper[5050]: I1211 15:25:43.466620 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-z8z22" event={"ID":"5ed6b539-c9da-4b55-8df5-dd96bfc586dc","Type":"ContainerStarted","Data":"7b5d6e0297d57d5239eb9a9ce2329d0b0ed8eb12cbdb8934bad03fdce1d34d95"} Dec 11 15:25:43 crc kubenswrapper[5050]: I1211 15:25:43.491453 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-4494l" podStartSLOduration=8.491432465 podStartE2EDuration="8.491432465s" podCreationTimestamp="2025-12-11 15:25:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 15:25:43.484633453 +0000 UTC m=+5834.328356059" watchObservedRunningTime="2025-12-11 15:25:43.491432465 +0000 UTC m=+5834.335155051" Dec 11 15:25:43 crc kubenswrapper[5050]: I1211 15:25:43.503469 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-hhqwr" podStartSLOduration=6.503446576 podStartE2EDuration="6.503446576s" podCreationTimestamp="2025-12-11 15:25:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 15:25:43.498412071 +0000 UTC m=+5834.342134657" watchObservedRunningTime="2025-12-11 15:25:43.503446576 +0000 UTC m=+5834.347169162" Dec 11 15:25:43 crc kubenswrapper[5050]: I1211 15:25:43.571065 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-db-create-krjxg"] Dec 11 15:25:44 crc kubenswrapper[5050]: I1211 15:25:44.478143 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-64hw5" event={"ID":"577c3778-67c2-4488-9f29-ea57d11059d8","Type":"ContainerStarted","Data":"7c6cd0ca7c69892a2b0f10fa3d983551535fe721eacd4ef1e0c9003124603b17"} Dec 11 15:25:44 crc kubenswrapper[5050]: I1211 15:25:44.479230 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-create-krjxg" event={"ID":"836453ed-a74b-46f5-a16e-7e5276f60c2a","Type":"ContainerStarted","Data":"9e56f19e5869738342f96a8e222804d4c666589a16281530f19499f18f7ad123"} Dec 11 15:25:44 crc kubenswrapper[5050]: I1211 15:25:44.480358 5050 generic.go:334] "Generic (PLEG): container finished" 
podID="5ed6b539-c9da-4b55-8df5-dd96bfc586dc" containerID="7b5d6e0297d57d5239eb9a9ce2329d0b0ed8eb12cbdb8934bad03fdce1d34d95" exitCode=0 Dec 11 15:25:44 crc kubenswrapper[5050]: I1211 15:25:44.481257 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-z8z22" event={"ID":"5ed6b539-c9da-4b55-8df5-dd96bfc586dc","Type":"ContainerDied","Data":"7b5d6e0297d57d5239eb9a9ce2329d0b0ed8eb12cbdb8934bad03fdce1d34d95"} Dec 11 15:25:45 crc kubenswrapper[5050]: I1211 15:25:45.184621 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-b4a7-account-create-update-f95fr"] Dec 11 15:25:45 crc kubenswrapper[5050]: I1211 15:25:45.186315 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-b4a7-account-create-update-f95fr" Dec 11 15:25:45 crc kubenswrapper[5050]: I1211 15:25:45.187987 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-db-secret" Dec 11 15:25:45 crc kubenswrapper[5050]: I1211 15:25:45.192776 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-b4a7-account-create-update-f95fr"] Dec 11 15:25:45 crc kubenswrapper[5050]: I1211 15:25:45.331759 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/57a4b1c0-1a75-4092-9a96-b4171f480b4f-operator-scripts\") pod \"octavia-b4a7-account-create-update-f95fr\" (UID: \"57a4b1c0-1a75-4092-9a96-b4171f480b4f\") " pod="openstack/octavia-b4a7-account-create-update-f95fr" Dec 11 15:25:45 crc kubenswrapper[5050]: I1211 15:25:45.331914 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kbjwb\" (UniqueName: \"kubernetes.io/projected/57a4b1c0-1a75-4092-9a96-b4171f480b4f-kube-api-access-kbjwb\") pod \"octavia-b4a7-account-create-update-f95fr\" (UID: \"57a4b1c0-1a75-4092-9a96-b4171f480b4f\") " pod="openstack/octavia-b4a7-account-create-update-f95fr" Dec 11 15:25:45 crc kubenswrapper[5050]: I1211 15:25:45.433128 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kbjwb\" (UniqueName: \"kubernetes.io/projected/57a4b1c0-1a75-4092-9a96-b4171f480b4f-kube-api-access-kbjwb\") pod \"octavia-b4a7-account-create-update-f95fr\" (UID: \"57a4b1c0-1a75-4092-9a96-b4171f480b4f\") " pod="openstack/octavia-b4a7-account-create-update-f95fr" Dec 11 15:25:45 crc kubenswrapper[5050]: I1211 15:25:45.433243 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/57a4b1c0-1a75-4092-9a96-b4171f480b4f-operator-scripts\") pod \"octavia-b4a7-account-create-update-f95fr\" (UID: \"57a4b1c0-1a75-4092-9a96-b4171f480b4f\") " pod="openstack/octavia-b4a7-account-create-update-f95fr" Dec 11 15:25:45 crc kubenswrapper[5050]: I1211 15:25:45.434116 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/57a4b1c0-1a75-4092-9a96-b4171f480b4f-operator-scripts\") pod \"octavia-b4a7-account-create-update-f95fr\" (UID: \"57a4b1c0-1a75-4092-9a96-b4171f480b4f\") " pod="openstack/octavia-b4a7-account-create-update-f95fr" Dec 11 15:25:45 crc kubenswrapper[5050]: I1211 15:25:45.456028 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kbjwb\" (UniqueName: \"kubernetes.io/projected/57a4b1c0-1a75-4092-9a96-b4171f480b4f-kube-api-access-kbjwb\") pod 
\"octavia-b4a7-account-create-update-f95fr\" (UID: \"57a4b1c0-1a75-4092-9a96-b4171f480b4f\") " pod="openstack/octavia-b4a7-account-create-update-f95fr" Dec 11 15:25:45 crc kubenswrapper[5050]: I1211 15:25:45.493056 5050 generic.go:334] "Generic (PLEG): container finished" podID="836453ed-a74b-46f5-a16e-7e5276f60c2a" containerID="20b7d6ef6474684bc5192d50805612f0ca77ba2bcf3f51a98dc9f1be239188c9" exitCode=0 Dec 11 15:25:45 crc kubenswrapper[5050]: I1211 15:25:45.493216 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-create-krjxg" event={"ID":"836453ed-a74b-46f5-a16e-7e5276f60c2a","Type":"ContainerDied","Data":"20b7d6ef6474684bc5192d50805612f0ca77ba2bcf3f51a98dc9f1be239188c9"} Dec 11 15:25:45 crc kubenswrapper[5050]: I1211 15:25:45.506000 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-z8z22" event={"ID":"5ed6b539-c9da-4b55-8df5-dd96bfc586dc","Type":"ContainerStarted","Data":"6a60e167356fcda4a396aff13e5ead791b1d35a13537dfed7b7d0aff4bf7322d"} Dec 11 15:25:45 crc kubenswrapper[5050]: I1211 15:25:45.506065 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-z8z22" event={"ID":"5ed6b539-c9da-4b55-8df5-dd96bfc586dc","Type":"ContainerStarted","Data":"c5423e7c03642aae06bfbeb2fca78276826bd49c13ac1fbf07e0768a9912d749"} Dec 11 15:25:45 crc kubenswrapper[5050]: I1211 15:25:45.506566 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-z8z22" Dec 11 15:25:45 crc kubenswrapper[5050]: I1211 15:25:45.511910 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-b4a7-account-create-update-f95fr" Dec 11 15:25:45 crc kubenswrapper[5050]: I1211 15:25:45.544360 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-64hw5" podStartSLOduration=5.68625594 podStartE2EDuration="24.544342282s" podCreationTimestamp="2025-12-11 15:25:21 +0000 UTC" firstStartedPulling="2025-12-11 15:25:24.26818627 +0000 UTC m=+5815.111908856" lastFinishedPulling="2025-12-11 15:25:43.126272612 +0000 UTC m=+5833.969995198" observedRunningTime="2025-12-11 15:25:45.539477612 +0000 UTC m=+5836.383200198" watchObservedRunningTime="2025-12-11 15:25:45.544342282 +0000 UTC m=+5836.388064868" Dec 11 15:25:45 crc kubenswrapper[5050]: I1211 15:25:45.572061 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-z8z22" podStartSLOduration=10.572045682 podStartE2EDuration="10.572045682s" podCreationTimestamp="2025-12-11 15:25:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 15:25:45.567949993 +0000 UTC m=+5836.411672589" watchObservedRunningTime="2025-12-11 15:25:45.572045682 +0000 UTC m=+5836.415768268" Dec 11 15:25:45 crc kubenswrapper[5050]: I1211 15:25:45.690430 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-z8z22" Dec 11 15:25:45 crc kubenswrapper[5050]: I1211 15:25:45.999551 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-b4a7-account-create-update-f95fr"] Dec 11 15:25:46 crc kubenswrapper[5050]: I1211 15:25:46.519756 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-b4a7-account-create-update-f95fr" 
event={"ID":"57a4b1c0-1a75-4092-9a96-b4171f480b4f","Type":"ContainerStarted","Data":"b9d50468aa2e054abc49cf0f7babb6b1e4f301f68eb8b30d820facff22b3eb60"} Dec 11 15:25:46 crc kubenswrapper[5050]: I1211 15:25:46.520116 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-b4a7-account-create-update-f95fr" event={"ID":"57a4b1c0-1a75-4092-9a96-b4171f480b4f","Type":"ContainerStarted","Data":"dec0912dc247b12f8cfb021b0ec32c28bf82a5e48fe480301eaa0db4f742db89"} Dec 11 15:25:46 crc kubenswrapper[5050]: I1211 15:25:46.538220 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-b4a7-account-create-update-f95fr" podStartSLOduration=1.538202906 podStartE2EDuration="1.538202906s" podCreationTimestamp="2025-12-11 15:25:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 15:25:46.53349458 +0000 UTC m=+5837.377217176" watchObservedRunningTime="2025-12-11 15:25:46.538202906 +0000 UTC m=+5837.381925492" Dec 11 15:25:46 crc kubenswrapper[5050]: I1211 15:25:46.773545 5050 scope.go:117] "RemoveContainer" containerID="4129d7734129500ad151c362161ef43eddbe1207c26a158e0e19296e82353712" Dec 11 15:25:46 crc kubenswrapper[5050]: I1211 15:25:46.869694 5050 scope.go:117] "RemoveContainer" containerID="f8fb7f30263f3f868127bc338e25fba3a81a7a4d294e605ce240261839fdb05f" Dec 11 15:25:46 crc kubenswrapper[5050]: I1211 15:25:46.906968 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-db-create-krjxg" Dec 11 15:25:47 crc kubenswrapper[5050]: I1211 15:25:47.068499 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hv7kf\" (UniqueName: \"kubernetes.io/projected/836453ed-a74b-46f5-a16e-7e5276f60c2a-kube-api-access-hv7kf\") pod \"836453ed-a74b-46f5-a16e-7e5276f60c2a\" (UID: \"836453ed-a74b-46f5-a16e-7e5276f60c2a\") " Dec 11 15:25:47 crc kubenswrapper[5050]: I1211 15:25:47.068811 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/836453ed-a74b-46f5-a16e-7e5276f60c2a-operator-scripts\") pod \"836453ed-a74b-46f5-a16e-7e5276f60c2a\" (UID: \"836453ed-a74b-46f5-a16e-7e5276f60c2a\") " Dec 11 15:25:47 crc kubenswrapper[5050]: I1211 15:25:47.069596 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/836453ed-a74b-46f5-a16e-7e5276f60c2a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "836453ed-a74b-46f5-a16e-7e5276f60c2a" (UID: "836453ed-a74b-46f5-a16e-7e5276f60c2a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 15:25:47 crc kubenswrapper[5050]: I1211 15:25:47.074085 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/836453ed-a74b-46f5-a16e-7e5276f60c2a-kube-api-access-hv7kf" (OuterVolumeSpecName: "kube-api-access-hv7kf") pod "836453ed-a74b-46f5-a16e-7e5276f60c2a" (UID: "836453ed-a74b-46f5-a16e-7e5276f60c2a"). InnerVolumeSpecName "kube-api-access-hv7kf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 15:25:47 crc kubenswrapper[5050]: I1211 15:25:47.170428 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hv7kf\" (UniqueName: \"kubernetes.io/projected/836453ed-a74b-46f5-a16e-7e5276f60c2a-kube-api-access-hv7kf\") on node \"crc\" DevicePath \"\"" Dec 11 15:25:47 crc kubenswrapper[5050]: I1211 15:25:47.170464 5050 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/836453ed-a74b-46f5-a16e-7e5276f60c2a-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 15:25:47 crc kubenswrapper[5050]: I1211 15:25:47.531528 5050 generic.go:334] "Generic (PLEG): container finished" podID="57a4b1c0-1a75-4092-9a96-b4171f480b4f" containerID="b9d50468aa2e054abc49cf0f7babb6b1e4f301f68eb8b30d820facff22b3eb60" exitCode=0 Dec 11 15:25:47 crc kubenswrapper[5050]: I1211 15:25:47.531783 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-b4a7-account-create-update-f95fr" event={"ID":"57a4b1c0-1a75-4092-9a96-b4171f480b4f","Type":"ContainerDied","Data":"b9d50468aa2e054abc49cf0f7babb6b1e4f301f68eb8b30d820facff22b3eb60"} Dec 11 15:25:47 crc kubenswrapper[5050]: I1211 15:25:47.533772 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-db-create-krjxg" Dec 11 15:25:47 crc kubenswrapper[5050]: I1211 15:25:47.535122 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-create-krjxg" event={"ID":"836453ed-a74b-46f5-a16e-7e5276f60c2a","Type":"ContainerDied","Data":"9e56f19e5869738342f96a8e222804d4c666589a16281530f19499f18f7ad123"} Dec 11 15:25:47 crc kubenswrapper[5050]: I1211 15:25:47.535196 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9e56f19e5869738342f96a8e222804d4c666589a16281530f19499f18f7ad123" Dec 11 15:25:48 crc kubenswrapper[5050]: I1211 15:25:48.056390 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-2f73-account-create-update-q7tdz"] Dec 11 15:25:48 crc kubenswrapper[5050]: I1211 15:25:48.068141 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-6lrzt"] Dec 11 15:25:48 crc kubenswrapper[5050]: I1211 15:25:48.079733 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-2f73-account-create-update-q7tdz"] Dec 11 15:25:48 crc kubenswrapper[5050]: I1211 15:25:48.090649 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-6lrzt"] Dec 11 15:25:48 crc kubenswrapper[5050]: I1211 15:25:48.882446 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-b4a7-account-create-update-f95fr" Dec 11 15:25:49 crc kubenswrapper[5050]: I1211 15:25:49.019342 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/57a4b1c0-1a75-4092-9a96-b4171f480b4f-operator-scripts\") pod \"57a4b1c0-1a75-4092-9a96-b4171f480b4f\" (UID: \"57a4b1c0-1a75-4092-9a96-b4171f480b4f\") " Dec 11 15:25:49 crc kubenswrapper[5050]: I1211 15:25:49.019453 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kbjwb\" (UniqueName: \"kubernetes.io/projected/57a4b1c0-1a75-4092-9a96-b4171f480b4f-kube-api-access-kbjwb\") pod \"57a4b1c0-1a75-4092-9a96-b4171f480b4f\" (UID: \"57a4b1c0-1a75-4092-9a96-b4171f480b4f\") " Dec 11 15:25:49 crc kubenswrapper[5050]: I1211 15:25:49.020805 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/57a4b1c0-1a75-4092-9a96-b4171f480b4f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "57a4b1c0-1a75-4092-9a96-b4171f480b4f" (UID: "57a4b1c0-1a75-4092-9a96-b4171f480b4f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 15:25:49 crc kubenswrapper[5050]: I1211 15:25:49.026246 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a4b1c0-1a75-4092-9a96-b4171f480b4f-kube-api-access-kbjwb" (OuterVolumeSpecName: "kube-api-access-kbjwb") pod "57a4b1c0-1a75-4092-9a96-b4171f480b4f" (UID: "57a4b1c0-1a75-4092-9a96-b4171f480b4f"). InnerVolumeSpecName "kube-api-access-kbjwb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 15:25:49 crc kubenswrapper[5050]: I1211 15:25:49.121617 5050 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/57a4b1c0-1a75-4092-9a96-b4171f480b4f-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 15:25:49 crc kubenswrapper[5050]: I1211 15:25:49.121664 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kbjwb\" (UniqueName: \"kubernetes.io/projected/57a4b1c0-1a75-4092-9a96-b4171f480b4f-kube-api-access-kbjwb\") on node \"crc\" DevicePath \"\"" Dec 11 15:25:49 crc kubenswrapper[5050]: I1211 15:25:49.560577 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d7528a4-821a-4b77-8dc9-91b73ead942f" path="/var/lib/kubelet/pods/8d7528a4-821a-4b77-8dc9-91b73ead942f/volumes" Dec 11 15:25:49 crc kubenswrapper[5050]: I1211 15:25:49.561281 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e0df6fee-7c04-4607-9472-294071bcb806" path="/var/lib/kubelet/pods/e0df6fee-7c04-4607-9472-294071bcb806/volumes" Dec 11 15:25:49 crc kubenswrapper[5050]: I1211 15:25:49.562933 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-b4a7-account-create-update-f95fr" event={"ID":"57a4b1c0-1a75-4092-9a96-b4171f480b4f","Type":"ContainerDied","Data":"dec0912dc247b12f8cfb021b0ec32c28bf82a5e48fe480301eaa0db4f742db89"} Dec 11 15:25:49 crc kubenswrapper[5050]: I1211 15:25:49.562961 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dec0912dc247b12f8cfb021b0ec32c28bf82a5e48fe480301eaa0db4f742db89" Dec 11 15:25:49 crc kubenswrapper[5050]: I1211 15:25:49.563004 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-b4a7-account-create-update-f95fr" Dec 11 15:25:51 crc kubenswrapper[5050]: I1211 15:25:51.860657 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-64hw5" Dec 11 15:25:51 crc kubenswrapper[5050]: I1211 15:25:51.860982 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-64hw5" Dec 11 15:25:51 crc kubenswrapper[5050]: I1211 15:25:51.918137 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-64hw5" Dec 11 15:25:52 crc kubenswrapper[5050]: I1211 15:25:52.666731 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-64hw5" Dec 11 15:25:52 crc kubenswrapper[5050]: I1211 15:25:52.717714 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-64hw5"] Dec 11 15:25:53 crc kubenswrapper[5050]: I1211 15:25:53.592959 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-persistence-db-create-rg9ff"] Dec 11 15:25:53 crc kubenswrapper[5050]: E1211 15:25:53.593663 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57a4b1c0-1a75-4092-9a96-b4171f480b4f" containerName="mariadb-account-create-update" Dec 11 15:25:53 crc kubenswrapper[5050]: I1211 15:25:53.593682 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="57a4b1c0-1a75-4092-9a96-b4171f480b4f" containerName="mariadb-account-create-update" Dec 11 15:25:53 crc kubenswrapper[5050]: E1211 15:25:53.593720 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="836453ed-a74b-46f5-a16e-7e5276f60c2a" containerName="mariadb-database-create" Dec 11 15:25:53 crc kubenswrapper[5050]: I1211 15:25:53.593728 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="836453ed-a74b-46f5-a16e-7e5276f60c2a" containerName="mariadb-database-create" Dec 11 15:25:53 crc kubenswrapper[5050]: I1211 15:25:53.593967 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="836453ed-a74b-46f5-a16e-7e5276f60c2a" containerName="mariadb-database-create" Dec 11 15:25:53 crc kubenswrapper[5050]: I1211 15:25:53.593989 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="57a4b1c0-1a75-4092-9a96-b4171f480b4f" containerName="mariadb-account-create-update" Dec 11 15:25:53 crc kubenswrapper[5050]: I1211 15:25:53.594798 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-persistence-db-create-rg9ff" Dec 11 15:25:53 crc kubenswrapper[5050]: I1211 15:25:53.602899 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-persistence-db-create-rg9ff"] Dec 11 15:25:53 crc kubenswrapper[5050]: I1211 15:25:53.610086 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e95122a2-0a89-4c0d-a67e-1fbe72cbb208-operator-scripts\") pod \"octavia-persistence-db-create-rg9ff\" (UID: \"e95122a2-0a89-4c0d-a67e-1fbe72cbb208\") " pod="openstack/octavia-persistence-db-create-rg9ff" Dec 11 15:25:53 crc kubenswrapper[5050]: I1211 15:25:53.610424 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pff5n\" (UniqueName: \"kubernetes.io/projected/e95122a2-0a89-4c0d-a67e-1fbe72cbb208-kube-api-access-pff5n\") pod \"octavia-persistence-db-create-rg9ff\" (UID: \"e95122a2-0a89-4c0d-a67e-1fbe72cbb208\") " pod="openstack/octavia-persistence-db-create-rg9ff" Dec 11 15:25:53 crc kubenswrapper[5050]: I1211 15:25:53.712463 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e95122a2-0a89-4c0d-a67e-1fbe72cbb208-operator-scripts\") pod \"octavia-persistence-db-create-rg9ff\" (UID: \"e95122a2-0a89-4c0d-a67e-1fbe72cbb208\") " pod="openstack/octavia-persistence-db-create-rg9ff" Dec 11 15:25:53 crc kubenswrapper[5050]: I1211 15:25:53.712613 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pff5n\" (UniqueName: \"kubernetes.io/projected/e95122a2-0a89-4c0d-a67e-1fbe72cbb208-kube-api-access-pff5n\") pod \"octavia-persistence-db-create-rg9ff\" (UID: \"e95122a2-0a89-4c0d-a67e-1fbe72cbb208\") " pod="openstack/octavia-persistence-db-create-rg9ff" Dec 11 15:25:53 crc kubenswrapper[5050]: I1211 15:25:53.713374 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e95122a2-0a89-4c0d-a67e-1fbe72cbb208-operator-scripts\") pod \"octavia-persistence-db-create-rg9ff\" (UID: \"e95122a2-0a89-4c0d-a67e-1fbe72cbb208\") " pod="openstack/octavia-persistence-db-create-rg9ff" Dec 11 15:25:53 crc kubenswrapper[5050]: I1211 15:25:53.744732 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pff5n\" (UniqueName: \"kubernetes.io/projected/e95122a2-0a89-4c0d-a67e-1fbe72cbb208-kube-api-access-pff5n\") pod \"octavia-persistence-db-create-rg9ff\" (UID: \"e95122a2-0a89-4c0d-a67e-1fbe72cbb208\") " pod="openstack/octavia-persistence-db-create-rg9ff" Dec 11 15:25:53 crc kubenswrapper[5050]: I1211 15:25:53.917791 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-persistence-db-create-rg9ff" Dec 11 15:25:54 crc kubenswrapper[5050]: I1211 15:25:54.184093 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-9764-account-create-update-lp57s"] Dec 11 15:25:54 crc kubenswrapper[5050]: I1211 15:25:54.185767 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-9764-account-create-update-lp57s" Dec 11 15:25:54 crc kubenswrapper[5050]: I1211 15:25:54.187968 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-persistence-db-secret" Dec 11 15:25:54 crc kubenswrapper[5050]: I1211 15:25:54.211553 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-9764-account-create-update-lp57s"] Dec 11 15:25:54 crc kubenswrapper[5050]: I1211 15:25:54.323159 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ea7f0570-a2ef-4e47-a947-19341754adc1-operator-scripts\") pod \"octavia-9764-account-create-update-lp57s\" (UID: \"ea7f0570-a2ef-4e47-a947-19341754adc1\") " pod="openstack/octavia-9764-account-create-update-lp57s" Dec 11 15:25:54 crc kubenswrapper[5050]: I1211 15:25:54.323235 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nzdvc\" (UniqueName: \"kubernetes.io/projected/ea7f0570-a2ef-4e47-a947-19341754adc1-kube-api-access-nzdvc\") pod \"octavia-9764-account-create-update-lp57s\" (UID: \"ea7f0570-a2ef-4e47-a947-19341754adc1\") " pod="openstack/octavia-9764-account-create-update-lp57s" Dec 11 15:25:54 crc kubenswrapper[5050]: I1211 15:25:54.349308 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-persistence-db-create-rg9ff"] Dec 11 15:25:54 crc kubenswrapper[5050]: W1211 15:25:54.350033 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode95122a2_0a89_4c0d_a67e_1fbe72cbb208.slice/crio-0f203f2d416f97f09d1e639f16c1367629a142a58e0593cced82c1912904b4c9 WatchSource:0}: Error finding container 0f203f2d416f97f09d1e639f16c1367629a142a58e0593cced82c1912904b4c9: Status 404 returned error can't find the container with id 0f203f2d416f97f09d1e639f16c1367629a142a58e0593cced82c1912904b4c9 Dec 11 15:25:54 crc kubenswrapper[5050]: I1211 15:25:54.425142 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ea7f0570-a2ef-4e47-a947-19341754adc1-operator-scripts\") pod \"octavia-9764-account-create-update-lp57s\" (UID: \"ea7f0570-a2ef-4e47-a947-19341754adc1\") " pod="openstack/octavia-9764-account-create-update-lp57s" Dec 11 15:25:54 crc kubenswrapper[5050]: I1211 15:25:54.425236 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nzdvc\" (UniqueName: \"kubernetes.io/projected/ea7f0570-a2ef-4e47-a947-19341754adc1-kube-api-access-nzdvc\") pod \"octavia-9764-account-create-update-lp57s\" (UID: \"ea7f0570-a2ef-4e47-a947-19341754adc1\") " pod="openstack/octavia-9764-account-create-update-lp57s" Dec 11 15:25:54 crc kubenswrapper[5050]: I1211 15:25:54.427046 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ea7f0570-a2ef-4e47-a947-19341754adc1-operator-scripts\") pod \"octavia-9764-account-create-update-lp57s\" (UID: \"ea7f0570-a2ef-4e47-a947-19341754adc1\") " pod="openstack/octavia-9764-account-create-update-lp57s" Dec 11 15:25:54 crc kubenswrapper[5050]: I1211 15:25:54.443997 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nzdvc\" (UniqueName: \"kubernetes.io/projected/ea7f0570-a2ef-4e47-a947-19341754adc1-kube-api-access-nzdvc\") pod 
\"octavia-9764-account-create-update-lp57s\" (UID: \"ea7f0570-a2ef-4e47-a947-19341754adc1\") " pod="openstack/octavia-9764-account-create-update-lp57s" Dec 11 15:25:54 crc kubenswrapper[5050]: I1211 15:25:54.517456 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-9764-account-create-update-lp57s" Dec 11 15:25:54 crc kubenswrapper[5050]: I1211 15:25:54.651483 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-64hw5" podUID="577c3778-67c2-4488-9f29-ea57d11059d8" containerName="registry-server" containerID="cri-o://7c6cd0ca7c69892a2b0f10fa3d983551535fe721eacd4ef1e0c9003124603b17" gracePeriod=2 Dec 11 15:25:54 crc kubenswrapper[5050]: I1211 15:25:54.651812 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-persistence-db-create-rg9ff" event={"ID":"e95122a2-0a89-4c0d-a67e-1fbe72cbb208","Type":"ContainerStarted","Data":"0f203f2d416f97f09d1e639f16c1367629a142a58e0593cced82c1912904b4c9"} Dec 11 15:25:54 crc kubenswrapper[5050]: I1211 15:25:54.806731 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-9764-account-create-update-lp57s"] Dec 11 15:25:55 crc kubenswrapper[5050]: I1211 15:25:55.058500 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-nm5r6"] Dec 11 15:25:55 crc kubenswrapper[5050]: I1211 15:25:55.068156 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-nm5r6"] Dec 11 15:25:55 crc kubenswrapper[5050]: I1211 15:25:55.562145 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d056d58-517c-49db-bb61-1a0394fdd271" path="/var/lib/kubelet/pods/8d056d58-517c-49db-bb61-1a0394fdd271/volumes" Dec 11 15:25:55 crc kubenswrapper[5050]: I1211 15:25:55.693988 5050 generic.go:334] "Generic (PLEG): container finished" podID="e95122a2-0a89-4c0d-a67e-1fbe72cbb208" containerID="a7ddedfd8c099629fce721e3a016dcf36b6feb805345f717bc8590fa2767312b" exitCode=0 Dec 11 15:25:55 crc kubenswrapper[5050]: I1211 15:25:55.694104 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-persistence-db-create-rg9ff" event={"ID":"e95122a2-0a89-4c0d-a67e-1fbe72cbb208","Type":"ContainerDied","Data":"a7ddedfd8c099629fce721e3a016dcf36b6feb805345f717bc8590fa2767312b"} Dec 11 15:25:55 crc kubenswrapper[5050]: I1211 15:25:55.699384 5050 generic.go:334] "Generic (PLEG): container finished" podID="577c3778-67c2-4488-9f29-ea57d11059d8" containerID="7c6cd0ca7c69892a2b0f10fa3d983551535fe721eacd4ef1e0c9003124603b17" exitCode=0 Dec 11 15:25:55 crc kubenswrapper[5050]: I1211 15:25:55.699435 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-64hw5" event={"ID":"577c3778-67c2-4488-9f29-ea57d11059d8","Type":"ContainerDied","Data":"7c6cd0ca7c69892a2b0f10fa3d983551535fe721eacd4ef1e0c9003124603b17"} Dec 11 15:25:55 crc kubenswrapper[5050]: I1211 15:25:55.701367 5050 generic.go:334] "Generic (PLEG): container finished" podID="ea7f0570-a2ef-4e47-a947-19341754adc1" containerID="ca9002d0b3b3d6d9130a5971f97fa09de488225866dbeab6a3b09dc5aa324393" exitCode=0 Dec 11 15:25:55 crc kubenswrapper[5050]: I1211 15:25:55.701398 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-9764-account-create-update-lp57s" event={"ID":"ea7f0570-a2ef-4e47-a947-19341754adc1","Type":"ContainerDied","Data":"ca9002d0b3b3d6d9130a5971f97fa09de488225866dbeab6a3b09dc5aa324393"} Dec 11 15:25:55 crc kubenswrapper[5050]: 
I1211 15:25:55.701441 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-9764-account-create-update-lp57s" event={"ID":"ea7f0570-a2ef-4e47-a947-19341754adc1","Type":"ContainerStarted","Data":"3a00dccc2e30a5a383836309f4275f4a7b3889a613b2009cf739d08da334290e"} Dec 11 15:25:55 crc kubenswrapper[5050]: I1211 15:25:55.813120 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-64hw5" Dec 11 15:25:55 crc kubenswrapper[5050]: I1211 15:25:55.872360 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/577c3778-67c2-4488-9f29-ea57d11059d8-utilities\") pod \"577c3778-67c2-4488-9f29-ea57d11059d8\" (UID: \"577c3778-67c2-4488-9f29-ea57d11059d8\") " Dec 11 15:25:55 crc kubenswrapper[5050]: I1211 15:25:55.872644 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/577c3778-67c2-4488-9f29-ea57d11059d8-catalog-content\") pod \"577c3778-67c2-4488-9f29-ea57d11059d8\" (UID: \"577c3778-67c2-4488-9f29-ea57d11059d8\") " Dec 11 15:25:55 crc kubenswrapper[5050]: I1211 15:25:55.873196 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/577c3778-67c2-4488-9f29-ea57d11059d8-utilities" (OuterVolumeSpecName: "utilities") pod "577c3778-67c2-4488-9f29-ea57d11059d8" (UID: "577c3778-67c2-4488-9f29-ea57d11059d8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 15:25:55 crc kubenswrapper[5050]: I1211 15:25:55.887799 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dk5t2\" (UniqueName: \"kubernetes.io/projected/577c3778-67c2-4488-9f29-ea57d11059d8-kube-api-access-dk5t2\") pod \"577c3778-67c2-4488-9f29-ea57d11059d8\" (UID: \"577c3778-67c2-4488-9f29-ea57d11059d8\") " Dec 11 15:25:55 crc kubenswrapper[5050]: I1211 15:25:55.889334 5050 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/577c3778-67c2-4488-9f29-ea57d11059d8-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 15:25:55 crc kubenswrapper[5050]: I1211 15:25:55.893299 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/577c3778-67c2-4488-9f29-ea57d11059d8-kube-api-access-dk5t2" (OuterVolumeSpecName: "kube-api-access-dk5t2") pod "577c3778-67c2-4488-9f29-ea57d11059d8" (UID: "577c3778-67c2-4488-9f29-ea57d11059d8"). InnerVolumeSpecName "kube-api-access-dk5t2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 15:25:55 crc kubenswrapper[5050]: I1211 15:25:55.991773 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/577c3778-67c2-4488-9f29-ea57d11059d8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "577c3778-67c2-4488-9f29-ea57d11059d8" (UID: "577c3778-67c2-4488-9f29-ea57d11059d8"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 15:25:55 crc kubenswrapper[5050]: I1211 15:25:55.992585 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/577c3778-67c2-4488-9f29-ea57d11059d8-catalog-content\") pod \"577c3778-67c2-4488-9f29-ea57d11059d8\" (UID: \"577c3778-67c2-4488-9f29-ea57d11059d8\") " Dec 11 15:25:55 crc kubenswrapper[5050]: W1211 15:25:55.992681 5050 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/577c3778-67c2-4488-9f29-ea57d11059d8/volumes/kubernetes.io~empty-dir/catalog-content Dec 11 15:25:55 crc kubenswrapper[5050]: I1211 15:25:55.992706 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/577c3778-67c2-4488-9f29-ea57d11059d8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "577c3778-67c2-4488-9f29-ea57d11059d8" (UID: "577c3778-67c2-4488-9f29-ea57d11059d8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 15:25:55 crc kubenswrapper[5050]: I1211 15:25:55.993080 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dk5t2\" (UniqueName: \"kubernetes.io/projected/577c3778-67c2-4488-9f29-ea57d11059d8-kube-api-access-dk5t2\") on node \"crc\" DevicePath \"\"" Dec 11 15:25:55 crc kubenswrapper[5050]: I1211 15:25:55.993100 5050 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/577c3778-67c2-4488-9f29-ea57d11059d8-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 15:25:56 crc kubenswrapper[5050]: I1211 15:25:56.715133 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-64hw5" event={"ID":"577c3778-67c2-4488-9f29-ea57d11059d8","Type":"ContainerDied","Data":"a49edff9bafc76f0fd38f7aa8c604b252093157ec330ae46f296bb0a9e00e95f"} Dec 11 15:25:56 crc kubenswrapper[5050]: I1211 15:25:56.715284 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-64hw5" Dec 11 15:25:56 crc kubenswrapper[5050]: I1211 15:25:56.715371 5050 scope.go:117] "RemoveContainer" containerID="7c6cd0ca7c69892a2b0f10fa3d983551535fe721eacd4ef1e0c9003124603b17" Dec 11 15:25:56 crc kubenswrapper[5050]: I1211 15:25:56.765780 5050 scope.go:117] "RemoveContainer" containerID="aba82de2f9bab7048c9c43ca4435d887d6d89f7f05e0bbf5e4e3add4fd7300b8" Dec 11 15:25:56 crc kubenswrapper[5050]: I1211 15:25:56.767712 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-64hw5"] Dec 11 15:25:56 crc kubenswrapper[5050]: I1211 15:25:56.780641 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-64hw5"] Dec 11 15:25:56 crc kubenswrapper[5050]: I1211 15:25:56.810286 5050 scope.go:117] "RemoveContainer" containerID="f6c353ddf0851219239814f3fe58ceb7361901c361604b7ff3b2196b777c9a50" Dec 11 15:25:57 crc kubenswrapper[5050]: I1211 15:25:57.204847 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-persistence-db-create-rg9ff" Dec 11 15:25:57 crc kubenswrapper[5050]: I1211 15:25:57.213557 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-9764-account-create-update-lp57s" Dec 11 15:25:57 crc kubenswrapper[5050]: I1211 15:25:57.323369 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzdvc\" (UniqueName: \"kubernetes.io/projected/ea7f0570-a2ef-4e47-a947-19341754adc1-kube-api-access-nzdvc\") pod \"ea7f0570-a2ef-4e47-a947-19341754adc1\" (UID: \"ea7f0570-a2ef-4e47-a947-19341754adc1\") " Dec 11 15:25:57 crc kubenswrapper[5050]: I1211 15:25:57.323498 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ea7f0570-a2ef-4e47-a947-19341754adc1-operator-scripts\") pod \"ea7f0570-a2ef-4e47-a947-19341754adc1\" (UID: \"ea7f0570-a2ef-4e47-a947-19341754adc1\") " Dec 11 15:25:57 crc kubenswrapper[5050]: I1211 15:25:57.323542 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pff5n\" (UniqueName: \"kubernetes.io/projected/e95122a2-0a89-4c0d-a67e-1fbe72cbb208-kube-api-access-pff5n\") pod \"e95122a2-0a89-4c0d-a67e-1fbe72cbb208\" (UID: \"e95122a2-0a89-4c0d-a67e-1fbe72cbb208\") " Dec 11 15:25:57 crc kubenswrapper[5050]: I1211 15:25:57.323617 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e95122a2-0a89-4c0d-a67e-1fbe72cbb208-operator-scripts\") pod \"e95122a2-0a89-4c0d-a67e-1fbe72cbb208\" (UID: \"e95122a2-0a89-4c0d-a67e-1fbe72cbb208\") " Dec 11 15:25:57 crc kubenswrapper[5050]: I1211 15:25:57.324459 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ea7f0570-a2ef-4e47-a947-19341754adc1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ea7f0570-a2ef-4e47-a947-19341754adc1" (UID: "ea7f0570-a2ef-4e47-a947-19341754adc1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 15:25:57 crc kubenswrapper[5050]: I1211 15:25:57.324609 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e95122a2-0a89-4c0d-a67e-1fbe72cbb208-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e95122a2-0a89-4c0d-a67e-1fbe72cbb208" (UID: "e95122a2-0a89-4c0d-a67e-1fbe72cbb208"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 15:25:57 crc kubenswrapper[5050]: I1211 15:25:57.328768 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea7f0570-a2ef-4e47-a947-19341754adc1-kube-api-access-nzdvc" (OuterVolumeSpecName: "kube-api-access-nzdvc") pod "ea7f0570-a2ef-4e47-a947-19341754adc1" (UID: "ea7f0570-a2ef-4e47-a947-19341754adc1"). InnerVolumeSpecName "kube-api-access-nzdvc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 15:25:57 crc kubenswrapper[5050]: I1211 15:25:57.329094 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e95122a2-0a89-4c0d-a67e-1fbe72cbb208-kube-api-access-pff5n" (OuterVolumeSpecName: "kube-api-access-pff5n") pod "e95122a2-0a89-4c0d-a67e-1fbe72cbb208" (UID: "e95122a2-0a89-4c0d-a67e-1fbe72cbb208"). InnerVolumeSpecName "kube-api-access-pff5n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 15:25:57 crc kubenswrapper[5050]: I1211 15:25:57.425469 5050 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ea7f0570-a2ef-4e47-a947-19341754adc1-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 15:25:57 crc kubenswrapper[5050]: I1211 15:25:57.425503 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pff5n\" (UniqueName: \"kubernetes.io/projected/e95122a2-0a89-4c0d-a67e-1fbe72cbb208-kube-api-access-pff5n\") on node \"crc\" DevicePath \"\"" Dec 11 15:25:57 crc kubenswrapper[5050]: I1211 15:25:57.425547 5050 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e95122a2-0a89-4c0d-a67e-1fbe72cbb208-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 15:25:57 crc kubenswrapper[5050]: I1211 15:25:57.425556 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzdvc\" (UniqueName: \"kubernetes.io/projected/ea7f0570-a2ef-4e47-a947-19341754adc1-kube-api-access-nzdvc\") on node \"crc\" DevicePath \"\"" Dec 11 15:25:57 crc kubenswrapper[5050]: I1211 15:25:57.559734 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="577c3778-67c2-4488-9f29-ea57d11059d8" path="/var/lib/kubelet/pods/577c3778-67c2-4488-9f29-ea57d11059d8/volumes" Dec 11 15:25:57 crc kubenswrapper[5050]: I1211 15:25:57.726789 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-9764-account-create-update-lp57s" event={"ID":"ea7f0570-a2ef-4e47-a947-19341754adc1","Type":"ContainerDied","Data":"3a00dccc2e30a5a383836309f4275f4a7b3889a613b2009cf739d08da334290e"} Dec 11 15:25:57 crc kubenswrapper[5050]: I1211 15:25:57.727787 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3a00dccc2e30a5a383836309f4275f4a7b3889a613b2009cf739d08da334290e" Dec 11 15:25:57 crc kubenswrapper[5050]: I1211 15:25:57.726822 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-9764-account-create-update-lp57s" Dec 11 15:25:57 crc kubenswrapper[5050]: I1211 15:25:57.728529 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-persistence-db-create-rg9ff" event={"ID":"e95122a2-0a89-4c0d-a67e-1fbe72cbb208","Type":"ContainerDied","Data":"0f203f2d416f97f09d1e639f16c1367629a142a58e0593cced82c1912904b4c9"} Dec 11 15:25:57 crc kubenswrapper[5050]: I1211 15:25:57.728563 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0f203f2d416f97f09d1e639f16c1367629a142a58e0593cced82c1912904b4c9" Dec 11 15:25:57 crc kubenswrapper[5050]: I1211 15:25:57.728630 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-persistence-db-create-rg9ff" Dec 11 15:25:59 crc kubenswrapper[5050]: I1211 15:25:59.644530 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-api-55d86c656b-9rqm4"] Dec 11 15:25:59 crc kubenswrapper[5050]: E1211 15:25:59.645339 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea7f0570-a2ef-4e47-a947-19341754adc1" containerName="mariadb-account-create-update" Dec 11 15:25:59 crc kubenswrapper[5050]: I1211 15:25:59.645356 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea7f0570-a2ef-4e47-a947-19341754adc1" containerName="mariadb-account-create-update" Dec 11 15:25:59 crc kubenswrapper[5050]: E1211 15:25:59.645389 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e95122a2-0a89-4c0d-a67e-1fbe72cbb208" containerName="mariadb-database-create" Dec 11 15:25:59 crc kubenswrapper[5050]: I1211 15:25:59.645397 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="e95122a2-0a89-4c0d-a67e-1fbe72cbb208" containerName="mariadb-database-create" Dec 11 15:25:59 crc kubenswrapper[5050]: E1211 15:25:59.645423 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="577c3778-67c2-4488-9f29-ea57d11059d8" containerName="registry-server" Dec 11 15:25:59 crc kubenswrapper[5050]: I1211 15:25:59.645430 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="577c3778-67c2-4488-9f29-ea57d11059d8" containerName="registry-server" Dec 11 15:25:59 crc kubenswrapper[5050]: E1211 15:25:59.645443 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="577c3778-67c2-4488-9f29-ea57d11059d8" containerName="extract-content" Dec 11 15:25:59 crc kubenswrapper[5050]: I1211 15:25:59.645449 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="577c3778-67c2-4488-9f29-ea57d11059d8" containerName="extract-content" Dec 11 15:25:59 crc kubenswrapper[5050]: E1211 15:25:59.645467 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="577c3778-67c2-4488-9f29-ea57d11059d8" containerName="extract-utilities" Dec 11 15:25:59 crc kubenswrapper[5050]: I1211 15:25:59.645476 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="577c3778-67c2-4488-9f29-ea57d11059d8" containerName="extract-utilities" Dec 11 15:25:59 crc kubenswrapper[5050]: I1211 15:25:59.645693 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="577c3778-67c2-4488-9f29-ea57d11059d8" containerName="registry-server" Dec 11 15:25:59 crc kubenswrapper[5050]: I1211 15:25:59.645713 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea7f0570-a2ef-4e47-a947-19341754adc1" containerName="mariadb-account-create-update" Dec 11 15:25:59 crc kubenswrapper[5050]: I1211 15:25:59.645730 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="e95122a2-0a89-4c0d-a67e-1fbe72cbb208" containerName="mariadb-database-create" Dec 11 15:25:59 crc kubenswrapper[5050]: I1211 15:25:59.656388 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-api-55d86c656b-9rqm4" Dec 11 15:25:59 crc kubenswrapper[5050]: I1211 15:25:59.663972 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-api-config-data" Dec 11 15:25:59 crc kubenswrapper[5050]: I1211 15:25:59.664215 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-api-scripts" Dec 11 15:25:59 crc kubenswrapper[5050]: I1211 15:25:59.664458 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-octavia-dockercfg-h4g5n" Dec 11 15:25:59 crc kubenswrapper[5050]: I1211 15:25:59.680321 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-api-55d86c656b-9rqm4"] Dec 11 15:25:59 crc kubenswrapper[5050]: I1211 15:25:59.789320 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14c40f4c-7d89-4d8e-a1f5-923ab611e584-combined-ca-bundle\") pod \"octavia-api-55d86c656b-9rqm4\" (UID: \"14c40f4c-7d89-4d8e-a1f5-923ab611e584\") " pod="openstack/octavia-api-55d86c656b-9rqm4" Dec 11 15:25:59 crc kubenswrapper[5050]: I1211 15:25:59.789410 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/14c40f4c-7d89-4d8e-a1f5-923ab611e584-config-data-merged\") pod \"octavia-api-55d86c656b-9rqm4\" (UID: \"14c40f4c-7d89-4d8e-a1f5-923ab611e584\") " pod="openstack/octavia-api-55d86c656b-9rqm4" Dec 11 15:25:59 crc kubenswrapper[5050]: I1211 15:25:59.789548 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"octavia-run\" (UniqueName: \"kubernetes.io/empty-dir/14c40f4c-7d89-4d8e-a1f5-923ab611e584-octavia-run\") pod \"octavia-api-55d86c656b-9rqm4\" (UID: \"14c40f4c-7d89-4d8e-a1f5-923ab611e584\") " pod="openstack/octavia-api-55d86c656b-9rqm4" Dec 11 15:25:59 crc kubenswrapper[5050]: I1211 15:25:59.789579 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/14c40f4c-7d89-4d8e-a1f5-923ab611e584-scripts\") pod \"octavia-api-55d86c656b-9rqm4\" (UID: \"14c40f4c-7d89-4d8e-a1f5-923ab611e584\") " pod="openstack/octavia-api-55d86c656b-9rqm4" Dec 11 15:25:59 crc kubenswrapper[5050]: I1211 15:25:59.789630 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14c40f4c-7d89-4d8e-a1f5-923ab611e584-config-data\") pod \"octavia-api-55d86c656b-9rqm4\" (UID: \"14c40f4c-7d89-4d8e-a1f5-923ab611e584\") " pod="openstack/octavia-api-55d86c656b-9rqm4" Dec 11 15:25:59 crc kubenswrapper[5050]: I1211 15:25:59.891442 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14c40f4c-7d89-4d8e-a1f5-923ab611e584-config-data\") pod \"octavia-api-55d86c656b-9rqm4\" (UID: \"14c40f4c-7d89-4d8e-a1f5-923ab611e584\") " pod="openstack/octavia-api-55d86c656b-9rqm4" Dec 11 15:25:59 crc kubenswrapper[5050]: I1211 15:25:59.891515 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14c40f4c-7d89-4d8e-a1f5-923ab611e584-combined-ca-bundle\") pod \"octavia-api-55d86c656b-9rqm4\" (UID: \"14c40f4c-7d89-4d8e-a1f5-923ab611e584\") " pod="openstack/octavia-api-55d86c656b-9rqm4" Dec 11 15:25:59 
crc kubenswrapper[5050]: I1211 15:25:59.891558 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/14c40f4c-7d89-4d8e-a1f5-923ab611e584-config-data-merged\") pod \"octavia-api-55d86c656b-9rqm4\" (UID: \"14c40f4c-7d89-4d8e-a1f5-923ab611e584\") " pod="openstack/octavia-api-55d86c656b-9rqm4" Dec 11 15:25:59 crc kubenswrapper[5050]: I1211 15:25:59.891660 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"octavia-run\" (UniqueName: \"kubernetes.io/empty-dir/14c40f4c-7d89-4d8e-a1f5-923ab611e584-octavia-run\") pod \"octavia-api-55d86c656b-9rqm4\" (UID: \"14c40f4c-7d89-4d8e-a1f5-923ab611e584\") " pod="openstack/octavia-api-55d86c656b-9rqm4" Dec 11 15:25:59 crc kubenswrapper[5050]: I1211 15:25:59.891682 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/14c40f4c-7d89-4d8e-a1f5-923ab611e584-scripts\") pod \"octavia-api-55d86c656b-9rqm4\" (UID: \"14c40f4c-7d89-4d8e-a1f5-923ab611e584\") " pod="openstack/octavia-api-55d86c656b-9rqm4" Dec 11 15:25:59 crc kubenswrapper[5050]: I1211 15:25:59.892532 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/14c40f4c-7d89-4d8e-a1f5-923ab611e584-config-data-merged\") pod \"octavia-api-55d86c656b-9rqm4\" (UID: \"14c40f4c-7d89-4d8e-a1f5-923ab611e584\") " pod="openstack/octavia-api-55d86c656b-9rqm4" Dec 11 15:25:59 crc kubenswrapper[5050]: I1211 15:25:59.892549 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"octavia-run\" (UniqueName: \"kubernetes.io/empty-dir/14c40f4c-7d89-4d8e-a1f5-923ab611e584-octavia-run\") pod \"octavia-api-55d86c656b-9rqm4\" (UID: \"14c40f4c-7d89-4d8e-a1f5-923ab611e584\") " pod="openstack/octavia-api-55d86c656b-9rqm4" Dec 11 15:25:59 crc kubenswrapper[5050]: I1211 15:25:59.896949 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14c40f4c-7d89-4d8e-a1f5-923ab611e584-config-data\") pod \"octavia-api-55d86c656b-9rqm4\" (UID: \"14c40f4c-7d89-4d8e-a1f5-923ab611e584\") " pod="openstack/octavia-api-55d86c656b-9rqm4" Dec 11 15:25:59 crc kubenswrapper[5050]: I1211 15:25:59.901733 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14c40f4c-7d89-4d8e-a1f5-923ab611e584-combined-ca-bundle\") pod \"octavia-api-55d86c656b-9rqm4\" (UID: \"14c40f4c-7d89-4d8e-a1f5-923ab611e584\") " pod="openstack/octavia-api-55d86c656b-9rqm4" Dec 11 15:25:59 crc kubenswrapper[5050]: I1211 15:25:59.903346 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/14c40f4c-7d89-4d8e-a1f5-923ab611e584-scripts\") pod \"octavia-api-55d86c656b-9rqm4\" (UID: \"14c40f4c-7d89-4d8e-a1f5-923ab611e584\") " pod="openstack/octavia-api-55d86c656b-9rqm4" Dec 11 15:26:00 crc kubenswrapper[5050]: I1211 15:26:00.018709 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-api-55d86c656b-9rqm4" Dec 11 15:26:00 crc kubenswrapper[5050]: W1211 15:26:00.461402 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod14c40f4c_7d89_4d8e_a1f5_923ab611e584.slice/crio-332a96510f1687583a6c073e53d221d759945fd590ef010a5eac479ca3ffa930 WatchSource:0}: Error finding container 332a96510f1687583a6c073e53d221d759945fd590ef010a5eac479ca3ffa930: Status 404 returned error can't find the container with id 332a96510f1687583a6c073e53d221d759945fd590ef010a5eac479ca3ffa930 Dec 11 15:26:00 crc kubenswrapper[5050]: I1211 15:26:00.463268 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-api-55d86c656b-9rqm4"] Dec 11 15:26:00 crc kubenswrapper[5050]: I1211 15:26:00.809611 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-api-55d86c656b-9rqm4" event={"ID":"14c40f4c-7d89-4d8e-a1f5-923ab611e584","Type":"ContainerStarted","Data":"332a96510f1687583a6c073e53d221d759945fd590ef010a5eac479ca3ffa930"} Dec 11 15:26:08 crc kubenswrapper[5050]: I1211 15:26:08.038249 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-q4zxd"] Dec 11 15:26:08 crc kubenswrapper[5050]: I1211 15:26:08.054490 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-q4zxd"] Dec 11 15:26:09 crc kubenswrapper[5050]: I1211 15:26:09.557167 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="947107ea-e024-4476-944e-6c3662bc6557" path="/var/lib/kubelet/pods/947107ea-e024-4476-944e-6c3662bc6557/volumes" Dec 11 15:26:15 crc kubenswrapper[5050]: I1211 15:26:15.711980 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-4494l" podUID="d6797fb3-9b0d-4856-a741-eb4c640a65ba" containerName="ovn-controller" probeResult="failure" output=< Dec 11 15:26:15 crc kubenswrapper[5050]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Dec 11 15:26:15 crc kubenswrapper[5050]: > Dec 11 15:26:15 crc kubenswrapper[5050]: I1211 15:26:15.731232 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-z8z22" Dec 11 15:26:15 crc kubenswrapper[5050]: I1211 15:26:15.733556 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-z8z22" Dec 11 15:26:15 crc kubenswrapper[5050]: I1211 15:26:15.861043 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-4494l-config-w7vjc"] Dec 11 15:26:15 crc kubenswrapper[5050]: I1211 15:26:15.864228 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-4494l-config-w7vjc" Dec 11 15:26:15 crc kubenswrapper[5050]: I1211 15:26:15.866738 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Dec 11 15:26:15 crc kubenswrapper[5050]: I1211 15:26:15.872943 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-4494l-config-w7vjc"] Dec 11 15:26:15 crc kubenswrapper[5050]: I1211 15:26:15.942988 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/0bcbd257-d52a-4f0b-8ef4-53a97b8482f3-additional-scripts\") pod \"ovn-controller-4494l-config-w7vjc\" (UID: \"0bcbd257-d52a-4f0b-8ef4-53a97b8482f3\") " pod="openstack/ovn-controller-4494l-config-w7vjc" Dec 11 15:26:15 crc kubenswrapper[5050]: I1211 15:26:15.943103 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/0bcbd257-d52a-4f0b-8ef4-53a97b8482f3-var-log-ovn\") pod \"ovn-controller-4494l-config-w7vjc\" (UID: \"0bcbd257-d52a-4f0b-8ef4-53a97b8482f3\") " pod="openstack/ovn-controller-4494l-config-w7vjc" Dec 11 15:26:15 crc kubenswrapper[5050]: I1211 15:26:15.943218 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/0bcbd257-d52a-4f0b-8ef4-53a97b8482f3-var-run\") pod \"ovn-controller-4494l-config-w7vjc\" (UID: \"0bcbd257-d52a-4f0b-8ef4-53a97b8482f3\") " pod="openstack/ovn-controller-4494l-config-w7vjc" Dec 11 15:26:15 crc kubenswrapper[5050]: I1211 15:26:15.943289 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxcr9\" (UniqueName: \"kubernetes.io/projected/0bcbd257-d52a-4f0b-8ef4-53a97b8482f3-kube-api-access-zxcr9\") pod \"ovn-controller-4494l-config-w7vjc\" (UID: \"0bcbd257-d52a-4f0b-8ef4-53a97b8482f3\") " pod="openstack/ovn-controller-4494l-config-w7vjc" Dec 11 15:26:15 crc kubenswrapper[5050]: I1211 15:26:15.943392 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/0bcbd257-d52a-4f0b-8ef4-53a97b8482f3-var-run-ovn\") pod \"ovn-controller-4494l-config-w7vjc\" (UID: \"0bcbd257-d52a-4f0b-8ef4-53a97b8482f3\") " pod="openstack/ovn-controller-4494l-config-w7vjc" Dec 11 15:26:15 crc kubenswrapper[5050]: I1211 15:26:15.943548 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0bcbd257-d52a-4f0b-8ef4-53a97b8482f3-scripts\") pod \"ovn-controller-4494l-config-w7vjc\" (UID: \"0bcbd257-d52a-4f0b-8ef4-53a97b8482f3\") " pod="openstack/ovn-controller-4494l-config-w7vjc" Dec 11 15:26:16 crc kubenswrapper[5050]: I1211 15:26:16.045382 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/0bcbd257-d52a-4f0b-8ef4-53a97b8482f3-additional-scripts\") pod \"ovn-controller-4494l-config-w7vjc\" (UID: \"0bcbd257-d52a-4f0b-8ef4-53a97b8482f3\") " pod="openstack/ovn-controller-4494l-config-w7vjc" Dec 11 15:26:16 crc kubenswrapper[5050]: I1211 15:26:16.045449 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: 
\"kubernetes.io/host-path/0bcbd257-d52a-4f0b-8ef4-53a97b8482f3-var-log-ovn\") pod \"ovn-controller-4494l-config-w7vjc\" (UID: \"0bcbd257-d52a-4f0b-8ef4-53a97b8482f3\") " pod="openstack/ovn-controller-4494l-config-w7vjc" Dec 11 15:26:16 crc kubenswrapper[5050]: I1211 15:26:16.045508 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/0bcbd257-d52a-4f0b-8ef4-53a97b8482f3-var-run\") pod \"ovn-controller-4494l-config-w7vjc\" (UID: \"0bcbd257-d52a-4f0b-8ef4-53a97b8482f3\") " pod="openstack/ovn-controller-4494l-config-w7vjc" Dec 11 15:26:16 crc kubenswrapper[5050]: I1211 15:26:16.045532 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zxcr9\" (UniqueName: \"kubernetes.io/projected/0bcbd257-d52a-4f0b-8ef4-53a97b8482f3-kube-api-access-zxcr9\") pod \"ovn-controller-4494l-config-w7vjc\" (UID: \"0bcbd257-d52a-4f0b-8ef4-53a97b8482f3\") " pod="openstack/ovn-controller-4494l-config-w7vjc" Dec 11 15:26:16 crc kubenswrapper[5050]: I1211 15:26:16.045605 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/0bcbd257-d52a-4f0b-8ef4-53a97b8482f3-var-run-ovn\") pod \"ovn-controller-4494l-config-w7vjc\" (UID: \"0bcbd257-d52a-4f0b-8ef4-53a97b8482f3\") " pod="openstack/ovn-controller-4494l-config-w7vjc" Dec 11 15:26:16 crc kubenswrapper[5050]: I1211 15:26:16.045689 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0bcbd257-d52a-4f0b-8ef4-53a97b8482f3-scripts\") pod \"ovn-controller-4494l-config-w7vjc\" (UID: \"0bcbd257-d52a-4f0b-8ef4-53a97b8482f3\") " pod="openstack/ovn-controller-4494l-config-w7vjc" Dec 11 15:26:16 crc kubenswrapper[5050]: I1211 15:26:16.047343 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/0bcbd257-d52a-4f0b-8ef4-53a97b8482f3-var-log-ovn\") pod \"ovn-controller-4494l-config-w7vjc\" (UID: \"0bcbd257-d52a-4f0b-8ef4-53a97b8482f3\") " pod="openstack/ovn-controller-4494l-config-w7vjc" Dec 11 15:26:16 crc kubenswrapper[5050]: I1211 15:26:16.047390 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/0bcbd257-d52a-4f0b-8ef4-53a97b8482f3-var-run\") pod \"ovn-controller-4494l-config-w7vjc\" (UID: \"0bcbd257-d52a-4f0b-8ef4-53a97b8482f3\") " pod="openstack/ovn-controller-4494l-config-w7vjc" Dec 11 15:26:16 crc kubenswrapper[5050]: I1211 15:26:16.047465 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/0bcbd257-d52a-4f0b-8ef4-53a97b8482f3-var-run-ovn\") pod \"ovn-controller-4494l-config-w7vjc\" (UID: \"0bcbd257-d52a-4f0b-8ef4-53a97b8482f3\") " pod="openstack/ovn-controller-4494l-config-w7vjc" Dec 11 15:26:16 crc kubenswrapper[5050]: I1211 15:26:16.048745 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0bcbd257-d52a-4f0b-8ef4-53a97b8482f3-scripts\") pod \"ovn-controller-4494l-config-w7vjc\" (UID: \"0bcbd257-d52a-4f0b-8ef4-53a97b8482f3\") " pod="openstack/ovn-controller-4494l-config-w7vjc" Dec 11 15:26:16 crc kubenswrapper[5050]: I1211 15:26:16.049204 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: 
\"kubernetes.io/configmap/0bcbd257-d52a-4f0b-8ef4-53a97b8482f3-additional-scripts\") pod \"ovn-controller-4494l-config-w7vjc\" (UID: \"0bcbd257-d52a-4f0b-8ef4-53a97b8482f3\") " pod="openstack/ovn-controller-4494l-config-w7vjc" Dec 11 15:26:16 crc kubenswrapper[5050]: I1211 15:26:16.076690 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zxcr9\" (UniqueName: \"kubernetes.io/projected/0bcbd257-d52a-4f0b-8ef4-53a97b8482f3-kube-api-access-zxcr9\") pod \"ovn-controller-4494l-config-w7vjc\" (UID: \"0bcbd257-d52a-4f0b-8ef4-53a97b8482f3\") " pod="openstack/ovn-controller-4494l-config-w7vjc" Dec 11 15:26:16 crc kubenswrapper[5050]: I1211 15:26:16.181555 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-4494l-config-w7vjc" Dec 11 15:26:16 crc kubenswrapper[5050]: I1211 15:26:16.621798 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-4494l-config-w7vjc"] Dec 11 15:26:16 crc kubenswrapper[5050]: I1211 15:26:16.990641 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-4494l-config-w7vjc" event={"ID":"0bcbd257-d52a-4f0b-8ef4-53a97b8482f3","Type":"ContainerStarted","Data":"87f6e553d59012765d4bd8e5f96c831c449dc547f36b79edbb178faa3120fe8d"} Dec 11 15:26:18 crc kubenswrapper[5050]: I1211 15:26:18.000935 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-4494l-config-w7vjc" event={"ID":"0bcbd257-d52a-4f0b-8ef4-53a97b8482f3","Type":"ContainerStarted","Data":"ef87bc8a5bcc42721914762d982301661a280a48eb6b5900a1409172956af3fb"} Dec 11 15:26:19 crc kubenswrapper[5050]: I1211 15:26:19.010776 5050 generic.go:334] "Generic (PLEG): container finished" podID="0bcbd257-d52a-4f0b-8ef4-53a97b8482f3" containerID="ef87bc8a5bcc42721914762d982301661a280a48eb6b5900a1409172956af3fb" exitCode=0 Dec 11 15:26:19 crc kubenswrapper[5050]: I1211 15:26:19.010818 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-4494l-config-w7vjc" event={"ID":"0bcbd257-d52a-4f0b-8ef4-53a97b8482f3","Type":"ContainerDied","Data":"ef87bc8a5bcc42721914762d982301661a280a48eb6b5900a1409172956af3fb"} Dec 11 15:26:20 crc kubenswrapper[5050]: I1211 15:26:20.717603 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-4494l" Dec 11 15:26:28 crc kubenswrapper[5050]: I1211 15:26:28.108332 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-4494l-config-w7vjc" event={"ID":"0bcbd257-d52a-4f0b-8ef4-53a97b8482f3","Type":"ContainerDied","Data":"87f6e553d59012765d4bd8e5f96c831c449dc547f36b79edbb178faa3120fe8d"} Dec 11 15:26:28 crc kubenswrapper[5050]: I1211 15:26:28.108771 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="87f6e553d59012765d4bd8e5f96c831c449dc547f36b79edbb178faa3120fe8d" Dec 11 15:26:28 crc kubenswrapper[5050]: I1211 15:26:28.186720 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-4494l-config-w7vjc" Dec 11 15:26:28 crc kubenswrapper[5050]: I1211 15:26:28.302144 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/0bcbd257-d52a-4f0b-8ef4-53a97b8482f3-var-run\") pod \"0bcbd257-d52a-4f0b-8ef4-53a97b8482f3\" (UID: \"0bcbd257-d52a-4f0b-8ef4-53a97b8482f3\") " Dec 11 15:26:28 crc kubenswrapper[5050]: I1211 15:26:28.302294 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zxcr9\" (UniqueName: \"kubernetes.io/projected/0bcbd257-d52a-4f0b-8ef4-53a97b8482f3-kube-api-access-zxcr9\") pod \"0bcbd257-d52a-4f0b-8ef4-53a97b8482f3\" (UID: \"0bcbd257-d52a-4f0b-8ef4-53a97b8482f3\") " Dec 11 15:26:28 crc kubenswrapper[5050]: I1211 15:26:28.302330 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/0bcbd257-d52a-4f0b-8ef4-53a97b8482f3-additional-scripts\") pod \"0bcbd257-d52a-4f0b-8ef4-53a97b8482f3\" (UID: \"0bcbd257-d52a-4f0b-8ef4-53a97b8482f3\") " Dec 11 15:26:28 crc kubenswrapper[5050]: I1211 15:26:28.302260 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0bcbd257-d52a-4f0b-8ef4-53a97b8482f3-var-run" (OuterVolumeSpecName: "var-run") pod "0bcbd257-d52a-4f0b-8ef4-53a97b8482f3" (UID: "0bcbd257-d52a-4f0b-8ef4-53a97b8482f3"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 15:26:28 crc kubenswrapper[5050]: I1211 15:26:28.302487 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0bcbd257-d52a-4f0b-8ef4-53a97b8482f3-scripts\") pod \"0bcbd257-d52a-4f0b-8ef4-53a97b8482f3\" (UID: \"0bcbd257-d52a-4f0b-8ef4-53a97b8482f3\") " Dec 11 15:26:28 crc kubenswrapper[5050]: I1211 15:26:28.302531 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/0bcbd257-d52a-4f0b-8ef4-53a97b8482f3-var-run-ovn\") pod \"0bcbd257-d52a-4f0b-8ef4-53a97b8482f3\" (UID: \"0bcbd257-d52a-4f0b-8ef4-53a97b8482f3\") " Dec 11 15:26:28 crc kubenswrapper[5050]: I1211 15:26:28.302599 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/0bcbd257-d52a-4f0b-8ef4-53a97b8482f3-var-log-ovn\") pod \"0bcbd257-d52a-4f0b-8ef4-53a97b8482f3\" (UID: \"0bcbd257-d52a-4f0b-8ef4-53a97b8482f3\") " Dec 11 15:26:28 crc kubenswrapper[5050]: I1211 15:26:28.303071 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0bcbd257-d52a-4f0b-8ef4-53a97b8482f3-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "0bcbd257-d52a-4f0b-8ef4-53a97b8482f3" (UID: "0bcbd257-d52a-4f0b-8ef4-53a97b8482f3"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 15:26:28 crc kubenswrapper[5050]: I1211 15:26:28.303097 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0bcbd257-d52a-4f0b-8ef4-53a97b8482f3-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "0bcbd257-d52a-4f0b-8ef4-53a97b8482f3" (UID: "0bcbd257-d52a-4f0b-8ef4-53a97b8482f3"). InnerVolumeSpecName "var-run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 15:26:28 crc kubenswrapper[5050]: I1211 15:26:28.303164 5050 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/0bcbd257-d52a-4f0b-8ef4-53a97b8482f3-additional-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 15:26:28 crc kubenswrapper[5050]: I1211 15:26:28.303181 5050 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/0bcbd257-d52a-4f0b-8ef4-53a97b8482f3-var-run-ovn\") on node \"crc\" DevicePath \"\"" Dec 11 15:26:28 crc kubenswrapper[5050]: I1211 15:26:28.303193 5050 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/0bcbd257-d52a-4f0b-8ef4-53a97b8482f3-var-run\") on node \"crc\" DevicePath \"\"" Dec 11 15:26:28 crc kubenswrapper[5050]: I1211 15:26:28.303240 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0bcbd257-d52a-4f0b-8ef4-53a97b8482f3-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "0bcbd257-d52a-4f0b-8ef4-53a97b8482f3" (UID: "0bcbd257-d52a-4f0b-8ef4-53a97b8482f3"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 15:26:28 crc kubenswrapper[5050]: I1211 15:26:28.303772 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0bcbd257-d52a-4f0b-8ef4-53a97b8482f3-scripts" (OuterVolumeSpecName: "scripts") pod "0bcbd257-d52a-4f0b-8ef4-53a97b8482f3" (UID: "0bcbd257-d52a-4f0b-8ef4-53a97b8482f3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 15:26:28 crc kubenswrapper[5050]: I1211 15:26:28.307671 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0bcbd257-d52a-4f0b-8ef4-53a97b8482f3-kube-api-access-zxcr9" (OuterVolumeSpecName: "kube-api-access-zxcr9") pod "0bcbd257-d52a-4f0b-8ef4-53a97b8482f3" (UID: "0bcbd257-d52a-4f0b-8ef4-53a97b8482f3"). InnerVolumeSpecName "kube-api-access-zxcr9". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 15:26:28 crc kubenswrapper[5050]: I1211 15:26:28.405785 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zxcr9\" (UniqueName: \"kubernetes.io/projected/0bcbd257-d52a-4f0b-8ef4-53a97b8482f3-kube-api-access-zxcr9\") on node \"crc\" DevicePath \"\"" Dec 11 15:26:28 crc kubenswrapper[5050]: I1211 15:26:28.405843 5050 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0bcbd257-d52a-4f0b-8ef4-53a97b8482f3-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 15:26:28 crc kubenswrapper[5050]: I1211 15:26:28.405862 5050 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/0bcbd257-d52a-4f0b-8ef4-53a97b8482f3-var-log-ovn\") on node \"crc\" DevicePath \"\"" Dec 11 15:26:29 crc kubenswrapper[5050]: I1211 15:26:29.116514 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-4494l-config-w7vjc" Dec 11 15:26:29 crc kubenswrapper[5050]: I1211 15:26:29.295439 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-4494l-config-w7vjc"] Dec 11 15:26:29 crc kubenswrapper[5050]: I1211 15:26:29.306929 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-4494l-config-w7vjc"] Dec 11 15:26:29 crc kubenswrapper[5050]: I1211 15:26:29.401144 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-4494l-config-fbmz5"] Dec 11 15:26:29 crc kubenswrapper[5050]: E1211 15:26:29.403129 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0bcbd257-d52a-4f0b-8ef4-53a97b8482f3" containerName="ovn-config" Dec 11 15:26:29 crc kubenswrapper[5050]: I1211 15:26:29.403155 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="0bcbd257-d52a-4f0b-8ef4-53a97b8482f3" containerName="ovn-config" Dec 11 15:26:29 crc kubenswrapper[5050]: I1211 15:26:29.403334 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="0bcbd257-d52a-4f0b-8ef4-53a97b8482f3" containerName="ovn-config" Dec 11 15:26:29 crc kubenswrapper[5050]: I1211 15:26:29.404099 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-4494l-config-fbmz5" Dec 11 15:26:29 crc kubenswrapper[5050]: I1211 15:26:29.405986 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Dec 11 15:26:29 crc kubenswrapper[5050]: I1211 15:26:29.426545 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-4494l-config-fbmz5"] Dec 11 15:26:29 crc kubenswrapper[5050]: I1211 15:26:29.527548 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/33daebf9-9439-4c8a-a54a-7e8a472cc768-var-run\") pod \"ovn-controller-4494l-config-fbmz5\" (UID: \"33daebf9-9439-4c8a-a54a-7e8a472cc768\") " pod="openstack/ovn-controller-4494l-config-fbmz5" Dec 11 15:26:29 crc kubenswrapper[5050]: I1211 15:26:29.527599 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/33daebf9-9439-4c8a-a54a-7e8a472cc768-additional-scripts\") pod \"ovn-controller-4494l-config-fbmz5\" (UID: \"33daebf9-9439-4c8a-a54a-7e8a472cc768\") " pod="openstack/ovn-controller-4494l-config-fbmz5" Dec 11 15:26:29 crc kubenswrapper[5050]: I1211 15:26:29.527630 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5qb4\" (UniqueName: \"kubernetes.io/projected/33daebf9-9439-4c8a-a54a-7e8a472cc768-kube-api-access-w5qb4\") pod \"ovn-controller-4494l-config-fbmz5\" (UID: \"33daebf9-9439-4c8a-a54a-7e8a472cc768\") " pod="openstack/ovn-controller-4494l-config-fbmz5" Dec 11 15:26:29 crc kubenswrapper[5050]: I1211 15:26:29.527952 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/33daebf9-9439-4c8a-a54a-7e8a472cc768-var-log-ovn\") pod \"ovn-controller-4494l-config-fbmz5\" (UID: \"33daebf9-9439-4c8a-a54a-7e8a472cc768\") " pod="openstack/ovn-controller-4494l-config-fbmz5" Dec 11 15:26:29 crc kubenswrapper[5050]: I1211 15:26:29.528039 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" 
(UniqueName: \"kubernetes.io/host-path/33daebf9-9439-4c8a-a54a-7e8a472cc768-var-run-ovn\") pod \"ovn-controller-4494l-config-fbmz5\" (UID: \"33daebf9-9439-4c8a-a54a-7e8a472cc768\") " pod="openstack/ovn-controller-4494l-config-fbmz5" Dec 11 15:26:29 crc kubenswrapper[5050]: I1211 15:26:29.528103 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/33daebf9-9439-4c8a-a54a-7e8a472cc768-scripts\") pod \"ovn-controller-4494l-config-fbmz5\" (UID: \"33daebf9-9439-4c8a-a54a-7e8a472cc768\") " pod="openstack/ovn-controller-4494l-config-fbmz5" Dec 11 15:26:29 crc kubenswrapper[5050]: I1211 15:26:29.594194 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0bcbd257-d52a-4f0b-8ef4-53a97b8482f3" path="/var/lib/kubelet/pods/0bcbd257-d52a-4f0b-8ef4-53a97b8482f3/volumes" Dec 11 15:26:29 crc kubenswrapper[5050]: I1211 15:26:29.630567 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/33daebf9-9439-4c8a-a54a-7e8a472cc768-var-run\") pod \"ovn-controller-4494l-config-fbmz5\" (UID: \"33daebf9-9439-4c8a-a54a-7e8a472cc768\") " pod="openstack/ovn-controller-4494l-config-fbmz5" Dec 11 15:26:29 crc kubenswrapper[5050]: I1211 15:26:29.630626 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/33daebf9-9439-4c8a-a54a-7e8a472cc768-additional-scripts\") pod \"ovn-controller-4494l-config-fbmz5\" (UID: \"33daebf9-9439-4c8a-a54a-7e8a472cc768\") " pod="openstack/ovn-controller-4494l-config-fbmz5" Dec 11 15:26:29 crc kubenswrapper[5050]: I1211 15:26:29.630658 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w5qb4\" (UniqueName: \"kubernetes.io/projected/33daebf9-9439-4c8a-a54a-7e8a472cc768-kube-api-access-w5qb4\") pod \"ovn-controller-4494l-config-fbmz5\" (UID: \"33daebf9-9439-4c8a-a54a-7e8a472cc768\") " pod="openstack/ovn-controller-4494l-config-fbmz5" Dec 11 15:26:29 crc kubenswrapper[5050]: I1211 15:26:29.630751 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/33daebf9-9439-4c8a-a54a-7e8a472cc768-var-log-ovn\") pod \"ovn-controller-4494l-config-fbmz5\" (UID: \"33daebf9-9439-4c8a-a54a-7e8a472cc768\") " pod="openstack/ovn-controller-4494l-config-fbmz5" Dec 11 15:26:29 crc kubenswrapper[5050]: I1211 15:26:29.630775 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/33daebf9-9439-4c8a-a54a-7e8a472cc768-var-run-ovn\") pod \"ovn-controller-4494l-config-fbmz5\" (UID: \"33daebf9-9439-4c8a-a54a-7e8a472cc768\") " pod="openstack/ovn-controller-4494l-config-fbmz5" Dec 11 15:26:29 crc kubenswrapper[5050]: I1211 15:26:29.630802 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/33daebf9-9439-4c8a-a54a-7e8a472cc768-scripts\") pod \"ovn-controller-4494l-config-fbmz5\" (UID: \"33daebf9-9439-4c8a-a54a-7e8a472cc768\") " pod="openstack/ovn-controller-4494l-config-fbmz5" Dec 11 15:26:29 crc kubenswrapper[5050]: I1211 15:26:29.631176 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/33daebf9-9439-4c8a-a54a-7e8a472cc768-var-run\") pod \"ovn-controller-4494l-config-fbmz5\" (UID: 
\"33daebf9-9439-4c8a-a54a-7e8a472cc768\") " pod="openstack/ovn-controller-4494l-config-fbmz5" Dec 11 15:26:29 crc kubenswrapper[5050]: I1211 15:26:29.631735 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/33daebf9-9439-4c8a-a54a-7e8a472cc768-var-log-ovn\") pod \"ovn-controller-4494l-config-fbmz5\" (UID: \"33daebf9-9439-4c8a-a54a-7e8a472cc768\") " pod="openstack/ovn-controller-4494l-config-fbmz5" Dec 11 15:26:29 crc kubenswrapper[5050]: I1211 15:26:29.631893 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/33daebf9-9439-4c8a-a54a-7e8a472cc768-var-run-ovn\") pod \"ovn-controller-4494l-config-fbmz5\" (UID: \"33daebf9-9439-4c8a-a54a-7e8a472cc768\") " pod="openstack/ovn-controller-4494l-config-fbmz5" Dec 11 15:26:29 crc kubenswrapper[5050]: I1211 15:26:29.633346 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/33daebf9-9439-4c8a-a54a-7e8a472cc768-scripts\") pod \"ovn-controller-4494l-config-fbmz5\" (UID: \"33daebf9-9439-4c8a-a54a-7e8a472cc768\") " pod="openstack/ovn-controller-4494l-config-fbmz5" Dec 11 15:26:29 crc kubenswrapper[5050]: I1211 15:26:29.634528 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Dec 11 15:26:29 crc kubenswrapper[5050]: I1211 15:26:29.643890 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/33daebf9-9439-4c8a-a54a-7e8a472cc768-additional-scripts\") pod \"ovn-controller-4494l-config-fbmz5\" (UID: \"33daebf9-9439-4c8a-a54a-7e8a472cc768\") " pod="openstack/ovn-controller-4494l-config-fbmz5" Dec 11 15:26:29 crc kubenswrapper[5050]: I1211 15:26:29.653835 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w5qb4\" (UniqueName: \"kubernetes.io/projected/33daebf9-9439-4c8a-a54a-7e8a472cc768-kube-api-access-w5qb4\") pod \"ovn-controller-4494l-config-fbmz5\" (UID: \"33daebf9-9439-4c8a-a54a-7e8a472cc768\") " pod="openstack/ovn-controller-4494l-config-fbmz5" Dec 11 15:26:29 crc kubenswrapper[5050]: I1211 15:26:29.729075 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-4494l-config-fbmz5" Dec 11 15:26:30 crc kubenswrapper[5050]: E1211 15:26:30.793993 5050 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-octavia-api@sha256:8c3632033f8c004f31a1c7c57c5ca7b450a11e9170a220b8943b57f80717c70c" Dec 11 15:26:30 crc kubenswrapper[5050]: E1211 15:26:30.794570 5050 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-octavia-api@sha256:8c3632033f8c004f31a1c7c57c5ca7b450a11e9170a220b8943b57f80717c70c,Command:[/bin/bash],Args:[-c /usr/local/bin/container-scripts/init.sh],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-merged,ReadOnly:false,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42437,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42437,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-api-55d86c656b-9rqm4_openstack(14c40f4c-7d89-4d8e-a1f5-923ab611e584): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 11 15:26:30 crc kubenswrapper[5050]: E1211 15:26:30.795825 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/octavia-api-55d86c656b-9rqm4" podUID="14c40f4c-7d89-4d8e-a1f5-923ab611e584" Dec 11 15:26:31 crc kubenswrapper[5050]: E1211 15:26:31.136308 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-octavia-api@sha256:8c3632033f8c004f31a1c7c57c5ca7b450a11e9170a220b8943b57f80717c70c\\\"\"" pod="openstack/octavia-api-55d86c656b-9rqm4" podUID="14c40f4c-7d89-4d8e-a1f5-923ab611e584" Dec 11 15:26:31 crc kubenswrapper[5050]: I1211 15:26:31.196633 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-4494l-config-fbmz5"] Dec 11 15:26:32 crc kubenswrapper[5050]: I1211 15:26:32.143701 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-4494l-config-fbmz5" event={"ID":"33daebf9-9439-4c8a-a54a-7e8a472cc768","Type":"ContainerStarted","Data":"919526ce458eecdc706f3209ee706b33539871c9fa5dc8a0bdb096afb3e5c27a"} Dec 11 15:26:32 crc kubenswrapper[5050]: 
I1211 15:26:32.143984 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-4494l-config-fbmz5" event={"ID":"33daebf9-9439-4c8a-a54a-7e8a472cc768","Type":"ContainerStarted","Data":"b5f31061f0c6d2d8653ed21105cdef84c224ec1e8240c88a5b7744134b9930db"} Dec 11 15:26:32 crc kubenswrapper[5050]: I1211 15:26:32.162068 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-4494l-config-fbmz5" podStartSLOduration=3.162048523 podStartE2EDuration="3.162048523s" podCreationTimestamp="2025-12-11 15:26:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 15:26:32.160647265 +0000 UTC m=+5883.004369861" watchObservedRunningTime="2025-12-11 15:26:32.162048523 +0000 UTC m=+5883.005771129" Dec 11 15:26:33 crc kubenswrapper[5050]: I1211 15:26:33.160248 5050 generic.go:334] "Generic (PLEG): container finished" podID="33daebf9-9439-4c8a-a54a-7e8a472cc768" containerID="919526ce458eecdc706f3209ee706b33539871c9fa5dc8a0bdb096afb3e5c27a" exitCode=0 Dec 11 15:26:33 crc kubenswrapper[5050]: I1211 15:26:33.160366 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-4494l-config-fbmz5" event={"ID":"33daebf9-9439-4c8a-a54a-7e8a472cc768","Type":"ContainerDied","Data":"919526ce458eecdc706f3209ee706b33539871c9fa5dc8a0bdb096afb3e5c27a"} Dec 11 15:26:34 crc kubenswrapper[5050]: I1211 15:26:34.561134 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-4494l-config-fbmz5" Dec 11 15:26:34 crc kubenswrapper[5050]: I1211 15:26:34.642891 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w5qb4\" (UniqueName: \"kubernetes.io/projected/33daebf9-9439-4c8a-a54a-7e8a472cc768-kube-api-access-w5qb4\") pod \"33daebf9-9439-4c8a-a54a-7e8a472cc768\" (UID: \"33daebf9-9439-4c8a-a54a-7e8a472cc768\") " Dec 11 15:26:34 crc kubenswrapper[5050]: I1211 15:26:34.643117 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/33daebf9-9439-4c8a-a54a-7e8a472cc768-var-run-ovn\") pod \"33daebf9-9439-4c8a-a54a-7e8a472cc768\" (UID: \"33daebf9-9439-4c8a-a54a-7e8a472cc768\") " Dec 11 15:26:34 crc kubenswrapper[5050]: I1211 15:26:34.643196 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/33daebf9-9439-4c8a-a54a-7e8a472cc768-scripts\") pod \"33daebf9-9439-4c8a-a54a-7e8a472cc768\" (UID: \"33daebf9-9439-4c8a-a54a-7e8a472cc768\") " Dec 11 15:26:34 crc kubenswrapper[5050]: I1211 15:26:34.643277 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/33daebf9-9439-4c8a-a54a-7e8a472cc768-additional-scripts\") pod \"33daebf9-9439-4c8a-a54a-7e8a472cc768\" (UID: \"33daebf9-9439-4c8a-a54a-7e8a472cc768\") " Dec 11 15:26:34 crc kubenswrapper[5050]: I1211 15:26:34.643318 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/33daebf9-9439-4c8a-a54a-7e8a472cc768-var-log-ovn\") pod \"33daebf9-9439-4c8a-a54a-7e8a472cc768\" (UID: \"33daebf9-9439-4c8a-a54a-7e8a472cc768\") " Dec 11 15:26:34 crc kubenswrapper[5050]: I1211 15:26:34.643191 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/33daebf9-9439-4c8a-a54a-7e8a472cc768-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "33daebf9-9439-4c8a-a54a-7e8a472cc768" (UID: "33daebf9-9439-4c8a-a54a-7e8a472cc768"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 15:26:34 crc kubenswrapper[5050]: I1211 15:26:34.643397 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/33daebf9-9439-4c8a-a54a-7e8a472cc768-var-run\") pod \"33daebf9-9439-4c8a-a54a-7e8a472cc768\" (UID: \"33daebf9-9439-4c8a-a54a-7e8a472cc768\") " Dec 11 15:26:34 crc kubenswrapper[5050]: I1211 15:26:34.643461 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/33daebf9-9439-4c8a-a54a-7e8a472cc768-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "33daebf9-9439-4c8a-a54a-7e8a472cc768" (UID: "33daebf9-9439-4c8a-a54a-7e8a472cc768"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 15:26:34 crc kubenswrapper[5050]: I1211 15:26:34.643518 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/33daebf9-9439-4c8a-a54a-7e8a472cc768-var-run" (OuterVolumeSpecName: "var-run") pod "33daebf9-9439-4c8a-a54a-7e8a472cc768" (UID: "33daebf9-9439-4c8a-a54a-7e8a472cc768"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 15:26:34 crc kubenswrapper[5050]: I1211 15:26:34.643823 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/33daebf9-9439-4c8a-a54a-7e8a472cc768-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "33daebf9-9439-4c8a-a54a-7e8a472cc768" (UID: "33daebf9-9439-4c8a-a54a-7e8a472cc768"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 15:26:34 crc kubenswrapper[5050]: I1211 15:26:34.644195 5050 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/33daebf9-9439-4c8a-a54a-7e8a472cc768-var-run\") on node \"crc\" DevicePath \"\"" Dec 11 15:26:34 crc kubenswrapper[5050]: I1211 15:26:34.644218 5050 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/33daebf9-9439-4c8a-a54a-7e8a472cc768-var-run-ovn\") on node \"crc\" DevicePath \"\"" Dec 11 15:26:34 crc kubenswrapper[5050]: I1211 15:26:34.644230 5050 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/33daebf9-9439-4c8a-a54a-7e8a472cc768-additional-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 15:26:34 crc kubenswrapper[5050]: I1211 15:26:34.644241 5050 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/33daebf9-9439-4c8a-a54a-7e8a472cc768-var-log-ovn\") on node \"crc\" DevicePath \"\"" Dec 11 15:26:34 crc kubenswrapper[5050]: I1211 15:26:34.644618 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/33daebf9-9439-4c8a-a54a-7e8a472cc768-scripts" (OuterVolumeSpecName: "scripts") pod "33daebf9-9439-4c8a-a54a-7e8a472cc768" (UID: "33daebf9-9439-4c8a-a54a-7e8a472cc768"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 15:26:34 crc kubenswrapper[5050]: I1211 15:26:34.652452 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33daebf9-9439-4c8a-a54a-7e8a472cc768-kube-api-access-w5qb4" (OuterVolumeSpecName: "kube-api-access-w5qb4") pod "33daebf9-9439-4c8a-a54a-7e8a472cc768" (UID: "33daebf9-9439-4c8a-a54a-7e8a472cc768"). InnerVolumeSpecName "kube-api-access-w5qb4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 15:26:34 crc kubenswrapper[5050]: I1211 15:26:34.746481 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w5qb4\" (UniqueName: \"kubernetes.io/projected/33daebf9-9439-4c8a-a54a-7e8a472cc768-kube-api-access-w5qb4\") on node \"crc\" DevicePath \"\"" Dec 11 15:26:34 crc kubenswrapper[5050]: I1211 15:26:34.746521 5050 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/33daebf9-9439-4c8a-a54a-7e8a472cc768-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 15:26:35 crc kubenswrapper[5050]: I1211 15:26:35.179027 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-4494l-config-fbmz5" event={"ID":"33daebf9-9439-4c8a-a54a-7e8a472cc768","Type":"ContainerDied","Data":"b5f31061f0c6d2d8653ed21105cdef84c224ec1e8240c88a5b7744134b9930db"} Dec 11 15:26:35 crc kubenswrapper[5050]: I1211 15:26:35.179323 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b5f31061f0c6d2d8653ed21105cdef84c224ec1e8240c88a5b7744134b9930db" Dec 11 15:26:35 crc kubenswrapper[5050]: I1211 15:26:35.179093 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-4494l-config-fbmz5" Dec 11 15:26:35 crc kubenswrapper[5050]: I1211 15:26:35.657310 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-4494l-config-fbmz5"] Dec 11 15:26:35 crc kubenswrapper[5050]: I1211 15:26:35.666067 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-4494l-config-fbmz5"] Dec 11 15:26:37 crc kubenswrapper[5050]: I1211 15:26:37.384243 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-bx48h"] Dec 11 15:26:37 crc kubenswrapper[5050]: E1211 15:26:37.384994 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33daebf9-9439-4c8a-a54a-7e8a472cc768" containerName="ovn-config" Dec 11 15:26:37 crc kubenswrapper[5050]: I1211 15:26:37.385025 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="33daebf9-9439-4c8a-a54a-7e8a472cc768" containerName="ovn-config" Dec 11 15:26:37 crc kubenswrapper[5050]: I1211 15:26:37.385222 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="33daebf9-9439-4c8a-a54a-7e8a472cc768" containerName="ovn-config" Dec 11 15:26:37 crc kubenswrapper[5050]: I1211 15:26:37.386772 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bx48h" Dec 11 15:26:37 crc kubenswrapper[5050]: I1211 15:26:37.402767 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bx48h"] Dec 11 15:26:37 crc kubenswrapper[5050]: I1211 15:26:37.500511 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b09316c3-7eb9-476d-9d5a-fa9f93f19dd0-catalog-content\") pod \"redhat-marketplace-bx48h\" (UID: \"b09316c3-7eb9-476d-9d5a-fa9f93f19dd0\") " pod="openshift-marketplace/redhat-marketplace-bx48h" Dec 11 15:26:37 crc kubenswrapper[5050]: I1211 15:26:37.500599 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sg7cc\" (UniqueName: \"kubernetes.io/projected/b09316c3-7eb9-476d-9d5a-fa9f93f19dd0-kube-api-access-sg7cc\") pod \"redhat-marketplace-bx48h\" (UID: \"b09316c3-7eb9-476d-9d5a-fa9f93f19dd0\") " pod="openshift-marketplace/redhat-marketplace-bx48h" Dec 11 15:26:37 crc kubenswrapper[5050]: I1211 15:26:37.500908 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b09316c3-7eb9-476d-9d5a-fa9f93f19dd0-utilities\") pod \"redhat-marketplace-bx48h\" (UID: \"b09316c3-7eb9-476d-9d5a-fa9f93f19dd0\") " pod="openshift-marketplace/redhat-marketplace-bx48h" Dec 11 15:26:37 crc kubenswrapper[5050]: I1211 15:26:37.557743 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="33daebf9-9439-4c8a-a54a-7e8a472cc768" path="/var/lib/kubelet/pods/33daebf9-9439-4c8a-a54a-7e8a472cc768/volumes" Dec 11 15:26:37 crc kubenswrapper[5050]: I1211 15:26:37.603350 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b09316c3-7eb9-476d-9d5a-fa9f93f19dd0-utilities\") pod \"redhat-marketplace-bx48h\" (UID: \"b09316c3-7eb9-476d-9d5a-fa9f93f19dd0\") " pod="openshift-marketplace/redhat-marketplace-bx48h" Dec 11 15:26:37 crc kubenswrapper[5050]: I1211 15:26:37.603529 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b09316c3-7eb9-476d-9d5a-fa9f93f19dd0-catalog-content\") pod \"redhat-marketplace-bx48h\" (UID: \"b09316c3-7eb9-476d-9d5a-fa9f93f19dd0\") " pod="openshift-marketplace/redhat-marketplace-bx48h" Dec 11 15:26:37 crc kubenswrapper[5050]: I1211 15:26:37.603596 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sg7cc\" (UniqueName: \"kubernetes.io/projected/b09316c3-7eb9-476d-9d5a-fa9f93f19dd0-kube-api-access-sg7cc\") pod \"redhat-marketplace-bx48h\" (UID: \"b09316c3-7eb9-476d-9d5a-fa9f93f19dd0\") " pod="openshift-marketplace/redhat-marketplace-bx48h" Dec 11 15:26:37 crc kubenswrapper[5050]: I1211 15:26:37.603984 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b09316c3-7eb9-476d-9d5a-fa9f93f19dd0-catalog-content\") pod \"redhat-marketplace-bx48h\" (UID: \"b09316c3-7eb9-476d-9d5a-fa9f93f19dd0\") " pod="openshift-marketplace/redhat-marketplace-bx48h" Dec 11 15:26:37 crc kubenswrapper[5050]: I1211 15:26:37.604300 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b09316c3-7eb9-476d-9d5a-fa9f93f19dd0-utilities\") 
pod \"redhat-marketplace-bx48h\" (UID: \"b09316c3-7eb9-476d-9d5a-fa9f93f19dd0\") " pod="openshift-marketplace/redhat-marketplace-bx48h" Dec 11 15:26:37 crc kubenswrapper[5050]: I1211 15:26:37.626507 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sg7cc\" (UniqueName: \"kubernetes.io/projected/b09316c3-7eb9-476d-9d5a-fa9f93f19dd0-kube-api-access-sg7cc\") pod \"redhat-marketplace-bx48h\" (UID: \"b09316c3-7eb9-476d-9d5a-fa9f93f19dd0\") " pod="openshift-marketplace/redhat-marketplace-bx48h" Dec 11 15:26:37 crc kubenswrapper[5050]: I1211 15:26:37.711177 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bx48h" Dec 11 15:26:37 crc kubenswrapper[5050]: I1211 15:26:37.981075 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bx48h"] Dec 11 15:26:38 crc kubenswrapper[5050]: I1211 15:26:38.203966 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bx48h" event={"ID":"b09316c3-7eb9-476d-9d5a-fa9f93f19dd0","Type":"ContainerStarted","Data":"3e0bd91177eb51ae22ebdc262115bca219843ca02bde8fc2ea680ae8c2c75966"} Dec 11 15:26:39 crc kubenswrapper[5050]: I1211 15:26:39.215678 5050 generic.go:334] "Generic (PLEG): container finished" podID="b09316c3-7eb9-476d-9d5a-fa9f93f19dd0" containerID="d3535eaba13e16044a2c894505dc18bbb25c52b66e29d9123188449324f28b21" exitCode=0 Dec 11 15:26:39 crc kubenswrapper[5050]: I1211 15:26:39.215729 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bx48h" event={"ID":"b09316c3-7eb9-476d-9d5a-fa9f93f19dd0","Type":"ContainerDied","Data":"d3535eaba13e16044a2c894505dc18bbb25c52b66e29d9123188449324f28b21"} Dec 11 15:26:40 crc kubenswrapper[5050]: I1211 15:26:40.227571 5050 generic.go:334] "Generic (PLEG): container finished" podID="b09316c3-7eb9-476d-9d5a-fa9f93f19dd0" containerID="6622b52a942edfb5255ccf339c6bb12376c1547f93656c78dbc0a1a3b6849f61" exitCode=0 Dec 11 15:26:40 crc kubenswrapper[5050]: I1211 15:26:40.227619 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bx48h" event={"ID":"b09316c3-7eb9-476d-9d5a-fa9f93f19dd0","Type":"ContainerDied","Data":"6622b52a942edfb5255ccf339c6bb12376c1547f93656c78dbc0a1a3b6849f61"} Dec 11 15:26:42 crc kubenswrapper[5050]: I1211 15:26:42.249220 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bx48h" event={"ID":"b09316c3-7eb9-476d-9d5a-fa9f93f19dd0","Type":"ContainerStarted","Data":"6f120b298baea0995391e9796e65884226b3a66167c51e6c094584e10cf411bb"} Dec 11 15:26:42 crc kubenswrapper[5050]: I1211 15:26:42.276597 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-bx48h" podStartSLOduration=3.249144368 podStartE2EDuration="5.276577596s" podCreationTimestamp="2025-12-11 15:26:37 +0000 UTC" firstStartedPulling="2025-12-11 15:26:39.217632499 +0000 UTC m=+5890.061355085" lastFinishedPulling="2025-12-11 15:26:41.245065727 +0000 UTC m=+5892.088788313" observedRunningTime="2025-12-11 15:26:42.267495813 +0000 UTC m=+5893.111218399" watchObservedRunningTime="2025-12-11 15:26:42.276577596 +0000 UTC m=+5893.120300182" Dec 11 15:26:44 crc kubenswrapper[5050]: I1211 15:26:44.267142 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-api-55d86c656b-9rqm4" 
event={"ID":"14c40f4c-7d89-4d8e-a1f5-923ab611e584","Type":"ContainerStarted","Data":"2157ac7f94401f8b22eafafb0940f54c50120d088c126b6a387c9f14d66e60dd"} Dec 11 15:26:45 crc kubenswrapper[5050]: I1211 15:26:45.280599 5050 generic.go:334] "Generic (PLEG): container finished" podID="14c40f4c-7d89-4d8e-a1f5-923ab611e584" containerID="2157ac7f94401f8b22eafafb0940f54c50120d088c126b6a387c9f14d66e60dd" exitCode=0 Dec 11 15:26:45 crc kubenswrapper[5050]: I1211 15:26:45.280658 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-api-55d86c656b-9rqm4" event={"ID":"14c40f4c-7d89-4d8e-a1f5-923ab611e584","Type":"ContainerDied","Data":"2157ac7f94401f8b22eafafb0940f54c50120d088c126b6a387c9f14d66e60dd"} Dec 11 15:26:46 crc kubenswrapper[5050]: I1211 15:26:46.292444 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-api-55d86c656b-9rqm4" event={"ID":"14c40f4c-7d89-4d8e-a1f5-923ab611e584","Type":"ContainerStarted","Data":"3d039eeeee67585b2ba8d4ca58953b5d8c3b4372b74f301d1e6ba66a7402bd3c"} Dec 11 15:26:46 crc kubenswrapper[5050]: I1211 15:26:46.976244 5050 scope.go:117] "RemoveContainer" containerID="dad52fb14ae49173970fec9a1474f102876cd319baf5b98e1a2e258db6195b1c" Dec 11 15:26:46 crc kubenswrapper[5050]: I1211 15:26:46.998957 5050 scope.go:117] "RemoveContainer" containerID="bd0e53fec676ed986a2d1d9e01bc075c850dac71cef43be6f141386032947922" Dec 11 15:26:47 crc kubenswrapper[5050]: I1211 15:26:47.088055 5050 scope.go:117] "RemoveContainer" containerID="d05b3caa18e07de84e75cefb136db179f84b31d2cc199887e819c2c980ee5dd1" Dec 11 15:26:47 crc kubenswrapper[5050]: I1211 15:26:47.122231 5050 scope.go:117] "RemoveContainer" containerID="f011bec977616880d8a08188f06bc011ef7fbffa37afa2194d46c72b23506ff5" Dec 11 15:26:47 crc kubenswrapper[5050]: I1211 15:26:47.301428 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-api-55d86c656b-9rqm4" event={"ID":"14c40f4c-7d89-4d8e-a1f5-923ab611e584","Type":"ContainerStarted","Data":"ef11d114ad7fcb8a0bdc2963b633b8bf31c216f43dc85e64fb607e8610ac5282"} Dec 11 15:26:47 crc kubenswrapper[5050]: I1211 15:26:47.712091 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-bx48h" Dec 11 15:26:47 crc kubenswrapper[5050]: I1211 15:26:47.712378 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-bx48h" Dec 11 15:26:47 crc kubenswrapper[5050]: I1211 15:26:47.761927 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-bx48h" Dec 11 15:26:48 crc kubenswrapper[5050]: I1211 15:26:48.317654 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-api-55d86c656b-9rqm4" Dec 11 15:26:48 crc kubenswrapper[5050]: I1211 15:26:48.318061 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-api-55d86c656b-9rqm4" Dec 11 15:26:48 crc kubenswrapper[5050]: I1211 15:26:48.339392 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-api-55d86c656b-9rqm4" podStartSLOduration=6.642589716 podStartE2EDuration="49.339374258s" podCreationTimestamp="2025-12-11 15:25:59 +0000 UTC" firstStartedPulling="2025-12-11 15:26:00.463325391 +0000 UTC m=+5851.307047977" lastFinishedPulling="2025-12-11 15:26:43.160109933 +0000 UTC m=+5894.003832519" observedRunningTime="2025-12-11 15:26:48.337659293 +0000 UTC m=+5899.181381889" 
watchObservedRunningTime="2025-12-11 15:26:48.339374258 +0000 UTC m=+5899.183096844" Dec 11 15:26:48 crc kubenswrapper[5050]: I1211 15:26:48.372214 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-bx48h" Dec 11 15:26:48 crc kubenswrapper[5050]: I1211 15:26:48.416372 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bx48h"] Dec 11 15:26:50 crc kubenswrapper[5050]: I1211 15:26:50.332590 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-bx48h" podUID="b09316c3-7eb9-476d-9d5a-fa9f93f19dd0" containerName="registry-server" containerID="cri-o://6f120b298baea0995391e9796e65884226b3a66167c51e6c094584e10cf411bb" gracePeriod=2 Dec 11 15:26:52 crc kubenswrapper[5050]: I1211 15:26:52.638845 5050 generic.go:334] "Generic (PLEG): container finished" podID="b09316c3-7eb9-476d-9d5a-fa9f93f19dd0" containerID="6f120b298baea0995391e9796e65884226b3a66167c51e6c094584e10cf411bb" exitCode=0 Dec 11 15:26:52 crc kubenswrapper[5050]: I1211 15:26:52.638931 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bx48h" event={"ID":"b09316c3-7eb9-476d-9d5a-fa9f93f19dd0","Type":"ContainerDied","Data":"6f120b298baea0995391e9796e65884226b3a66167c51e6c094584e10cf411bb"} Dec 11 15:26:54 crc kubenswrapper[5050]: I1211 15:26:54.593261 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bx48h" Dec 11 15:26:54 crc kubenswrapper[5050]: I1211 15:26:54.659896 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bx48h" event={"ID":"b09316c3-7eb9-476d-9d5a-fa9f93f19dd0","Type":"ContainerDied","Data":"3e0bd91177eb51ae22ebdc262115bca219843ca02bde8fc2ea680ae8c2c75966"} Dec 11 15:26:54 crc kubenswrapper[5050]: I1211 15:26:54.659954 5050 scope.go:117] "RemoveContainer" containerID="6f120b298baea0995391e9796e65884226b3a66167c51e6c094584e10cf411bb" Dec 11 15:26:54 crc kubenswrapper[5050]: I1211 15:26:54.660150 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bx48h" Dec 11 15:26:54 crc kubenswrapper[5050]: I1211 15:26:54.682924 5050 scope.go:117] "RemoveContainer" containerID="6622b52a942edfb5255ccf339c6bb12376c1547f93656c78dbc0a1a3b6849f61" Dec 11 15:26:54 crc kubenswrapper[5050]: I1211 15:26:54.700063 5050 scope.go:117] "RemoveContainer" containerID="d3535eaba13e16044a2c894505dc18bbb25c52b66e29d9123188449324f28b21" Dec 11 15:26:54 crc kubenswrapper[5050]: I1211 15:26:54.783039 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b09316c3-7eb9-476d-9d5a-fa9f93f19dd0-catalog-content\") pod \"b09316c3-7eb9-476d-9d5a-fa9f93f19dd0\" (UID: \"b09316c3-7eb9-476d-9d5a-fa9f93f19dd0\") " Dec 11 15:26:54 crc kubenswrapper[5050]: I1211 15:26:54.783094 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sg7cc\" (UniqueName: \"kubernetes.io/projected/b09316c3-7eb9-476d-9d5a-fa9f93f19dd0-kube-api-access-sg7cc\") pod \"b09316c3-7eb9-476d-9d5a-fa9f93f19dd0\" (UID: \"b09316c3-7eb9-476d-9d5a-fa9f93f19dd0\") " Dec 11 15:26:54 crc kubenswrapper[5050]: I1211 15:26:54.783376 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b09316c3-7eb9-476d-9d5a-fa9f93f19dd0-utilities\") pod \"b09316c3-7eb9-476d-9d5a-fa9f93f19dd0\" (UID: \"b09316c3-7eb9-476d-9d5a-fa9f93f19dd0\") " Dec 11 15:26:54 crc kubenswrapper[5050]: I1211 15:26:54.784514 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b09316c3-7eb9-476d-9d5a-fa9f93f19dd0-utilities" (OuterVolumeSpecName: "utilities") pod "b09316c3-7eb9-476d-9d5a-fa9f93f19dd0" (UID: "b09316c3-7eb9-476d-9d5a-fa9f93f19dd0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 15:26:54 crc kubenswrapper[5050]: I1211 15:26:54.801724 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b09316c3-7eb9-476d-9d5a-fa9f93f19dd0-kube-api-access-sg7cc" (OuterVolumeSpecName: "kube-api-access-sg7cc") pod "b09316c3-7eb9-476d-9d5a-fa9f93f19dd0" (UID: "b09316c3-7eb9-476d-9d5a-fa9f93f19dd0"). InnerVolumeSpecName "kube-api-access-sg7cc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 15:26:54 crc kubenswrapper[5050]: I1211 15:26:54.811749 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b09316c3-7eb9-476d-9d5a-fa9f93f19dd0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b09316c3-7eb9-476d-9d5a-fa9f93f19dd0" (UID: "b09316c3-7eb9-476d-9d5a-fa9f93f19dd0"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 15:26:54 crc kubenswrapper[5050]: I1211 15:26:54.886881 5050 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b09316c3-7eb9-476d-9d5a-fa9f93f19dd0-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 15:26:54 crc kubenswrapper[5050]: I1211 15:26:54.886925 5050 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b09316c3-7eb9-476d-9d5a-fa9f93f19dd0-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 15:26:54 crc kubenswrapper[5050]: I1211 15:26:54.886939 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sg7cc\" (UniqueName: \"kubernetes.io/projected/b09316c3-7eb9-476d-9d5a-fa9f93f19dd0-kube-api-access-sg7cc\") on node \"crc\" DevicePath \"\"" Dec 11 15:26:55 crc kubenswrapper[5050]: I1211 15:26:55.010828 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bx48h"] Dec 11 15:26:55 crc kubenswrapper[5050]: I1211 15:26:55.025065 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-bx48h"] Dec 11 15:26:55 crc kubenswrapper[5050]: I1211 15:26:55.559407 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b09316c3-7eb9-476d-9d5a-fa9f93f19dd0" path="/var/lib/kubelet/pods/b09316c3-7eb9-476d-9d5a-fa9f93f19dd0/volumes" Dec 11 15:27:04 crc kubenswrapper[5050]: I1211 15:27:04.082080 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-api-55d86c656b-9rqm4" Dec 11 15:27:04 crc kubenswrapper[5050]: I1211 15:27:04.274036 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-api-55d86c656b-9rqm4" Dec 11 15:27:32 crc kubenswrapper[5050]: I1211 15:27:32.357121 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-rsyslog-qm6sk"] Dec 11 15:27:32 crc kubenswrapper[5050]: E1211 15:27:32.358152 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b09316c3-7eb9-476d-9d5a-fa9f93f19dd0" containerName="extract-content" Dec 11 15:27:32 crc kubenswrapper[5050]: I1211 15:27:32.358169 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="b09316c3-7eb9-476d-9d5a-fa9f93f19dd0" containerName="extract-content" Dec 11 15:27:32 crc kubenswrapper[5050]: E1211 15:27:32.358182 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b09316c3-7eb9-476d-9d5a-fa9f93f19dd0" containerName="extract-utilities" Dec 11 15:27:32 crc kubenswrapper[5050]: I1211 15:27:32.358189 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="b09316c3-7eb9-476d-9d5a-fa9f93f19dd0" containerName="extract-utilities" Dec 11 15:27:32 crc kubenswrapper[5050]: E1211 15:27:32.358231 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b09316c3-7eb9-476d-9d5a-fa9f93f19dd0" containerName="registry-server" Dec 11 15:27:32 crc kubenswrapper[5050]: I1211 15:27:32.358240 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="b09316c3-7eb9-476d-9d5a-fa9f93f19dd0" containerName="registry-server" Dec 11 15:27:32 crc kubenswrapper[5050]: I1211 15:27:32.358467 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="b09316c3-7eb9-476d-9d5a-fa9f93f19dd0" containerName="registry-server" Dec 11 15:27:32 crc kubenswrapper[5050]: I1211 15:27:32.360327 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-rsyslog-qm6sk" Dec 11 15:27:32 crc kubenswrapper[5050]: I1211 15:27:32.362989 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"octavia-hmport-map" Dec 11 15:27:32 crc kubenswrapper[5050]: I1211 15:27:32.363443 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-rsyslog-config-data" Dec 11 15:27:32 crc kubenswrapper[5050]: I1211 15:27:32.369343 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-rsyslog-qm6sk"] Dec 11 15:27:32 crc kubenswrapper[5050]: I1211 15:27:32.376507 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-rsyslog-scripts" Dec 11 15:27:32 crc kubenswrapper[5050]: I1211 15:27:32.504002 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4942da3-ee1e-4a43-8500-f0092c12a9c6-config-data\") pod \"octavia-rsyslog-qm6sk\" (UID: \"f4942da3-ee1e-4a43-8500-f0092c12a9c6\") " pod="openstack/octavia-rsyslog-qm6sk" Dec 11 15:27:32 crc kubenswrapper[5050]: I1211 15:27:32.504077 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f4942da3-ee1e-4a43-8500-f0092c12a9c6-scripts\") pod \"octavia-rsyslog-qm6sk\" (UID: \"f4942da3-ee1e-4a43-8500-f0092c12a9c6\") " pod="openstack/octavia-rsyslog-qm6sk" Dec 11 15:27:32 crc kubenswrapper[5050]: I1211 15:27:32.504252 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/f4942da3-ee1e-4a43-8500-f0092c12a9c6-config-data-merged\") pod \"octavia-rsyslog-qm6sk\" (UID: \"f4942da3-ee1e-4a43-8500-f0092c12a9c6\") " pod="openstack/octavia-rsyslog-qm6sk" Dec 11 15:27:32 crc kubenswrapper[5050]: I1211 15:27:32.504330 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/f4942da3-ee1e-4a43-8500-f0092c12a9c6-hm-ports\") pod \"octavia-rsyslog-qm6sk\" (UID: \"f4942da3-ee1e-4a43-8500-f0092c12a9c6\") " pod="openstack/octavia-rsyslog-qm6sk" Dec 11 15:27:32 crc kubenswrapper[5050]: I1211 15:27:32.606094 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/f4942da3-ee1e-4a43-8500-f0092c12a9c6-config-data-merged\") pod \"octavia-rsyslog-qm6sk\" (UID: \"f4942da3-ee1e-4a43-8500-f0092c12a9c6\") " pod="openstack/octavia-rsyslog-qm6sk" Dec 11 15:27:32 crc kubenswrapper[5050]: I1211 15:27:32.606196 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/f4942da3-ee1e-4a43-8500-f0092c12a9c6-hm-ports\") pod \"octavia-rsyslog-qm6sk\" (UID: \"f4942da3-ee1e-4a43-8500-f0092c12a9c6\") " pod="openstack/octavia-rsyslog-qm6sk" Dec 11 15:27:32 crc kubenswrapper[5050]: I1211 15:27:32.606230 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f4942da3-ee1e-4a43-8500-f0092c12a9c6-scripts\") pod \"octavia-rsyslog-qm6sk\" (UID: \"f4942da3-ee1e-4a43-8500-f0092c12a9c6\") " pod="openstack/octavia-rsyslog-qm6sk" Dec 11 15:27:32 crc kubenswrapper[5050]: I1211 15:27:32.606248 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/f4942da3-ee1e-4a43-8500-f0092c12a9c6-config-data\") pod \"octavia-rsyslog-qm6sk\" (UID: \"f4942da3-ee1e-4a43-8500-f0092c12a9c6\") " pod="openstack/octavia-rsyslog-qm6sk" Dec 11 15:27:32 crc kubenswrapper[5050]: I1211 15:27:32.606623 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/f4942da3-ee1e-4a43-8500-f0092c12a9c6-config-data-merged\") pod \"octavia-rsyslog-qm6sk\" (UID: \"f4942da3-ee1e-4a43-8500-f0092c12a9c6\") " pod="openstack/octavia-rsyslog-qm6sk" Dec 11 15:27:32 crc kubenswrapper[5050]: I1211 15:27:32.607512 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/f4942da3-ee1e-4a43-8500-f0092c12a9c6-hm-ports\") pod \"octavia-rsyslog-qm6sk\" (UID: \"f4942da3-ee1e-4a43-8500-f0092c12a9c6\") " pod="openstack/octavia-rsyslog-qm6sk" Dec 11 15:27:32 crc kubenswrapper[5050]: I1211 15:27:32.613544 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f4942da3-ee1e-4a43-8500-f0092c12a9c6-scripts\") pod \"octavia-rsyslog-qm6sk\" (UID: \"f4942da3-ee1e-4a43-8500-f0092c12a9c6\") " pod="openstack/octavia-rsyslog-qm6sk" Dec 11 15:27:32 crc kubenswrapper[5050]: I1211 15:27:32.635767 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4942da3-ee1e-4a43-8500-f0092c12a9c6-config-data\") pod \"octavia-rsyslog-qm6sk\" (UID: \"f4942da3-ee1e-4a43-8500-f0092c12a9c6\") " pod="openstack/octavia-rsyslog-qm6sk" Dec 11 15:27:32 crc kubenswrapper[5050]: I1211 15:27:32.739699 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-rsyslog-qm6sk" Dec 11 15:27:32 crc kubenswrapper[5050]: I1211 15:27:32.977057 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-image-upload-56c9f55b99-xv6vc"] Dec 11 15:27:32 crc kubenswrapper[5050]: I1211 15:27:32.979571 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-image-upload-56c9f55b99-xv6vc" Dec 11 15:27:32 crc kubenswrapper[5050]: I1211 15:27:32.983532 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-config-data" Dec 11 15:27:32 crc kubenswrapper[5050]: I1211 15:27:32.999069 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-image-upload-56c9f55b99-xv6vc"] Dec 11 15:27:33 crc kubenswrapper[5050]: I1211 15:27:33.117039 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/2ebd0d88-3bc1-494c-98a7-9e494d499d0e-amphora-image\") pod \"octavia-image-upload-56c9f55b99-xv6vc\" (UID: \"2ebd0d88-3bc1-494c-98a7-9e494d499d0e\") " pod="openstack/octavia-image-upload-56c9f55b99-xv6vc" Dec 11 15:27:33 crc kubenswrapper[5050]: I1211 15:27:33.117471 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/2ebd0d88-3bc1-494c-98a7-9e494d499d0e-httpd-config\") pod \"octavia-image-upload-56c9f55b99-xv6vc\" (UID: \"2ebd0d88-3bc1-494c-98a7-9e494d499d0e\") " pod="openstack/octavia-image-upload-56c9f55b99-xv6vc" Dec 11 15:27:33 crc kubenswrapper[5050]: I1211 15:27:33.219042 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/2ebd0d88-3bc1-494c-98a7-9e494d499d0e-amphora-image\") pod \"octavia-image-upload-56c9f55b99-xv6vc\" (UID: \"2ebd0d88-3bc1-494c-98a7-9e494d499d0e\") " pod="openstack/octavia-image-upload-56c9f55b99-xv6vc" Dec 11 15:27:33 crc kubenswrapper[5050]: I1211 15:27:33.219138 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/2ebd0d88-3bc1-494c-98a7-9e494d499d0e-httpd-config\") pod \"octavia-image-upload-56c9f55b99-xv6vc\" (UID: \"2ebd0d88-3bc1-494c-98a7-9e494d499d0e\") " pod="openstack/octavia-image-upload-56c9f55b99-xv6vc" Dec 11 15:27:33 crc kubenswrapper[5050]: I1211 15:27:33.219944 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/2ebd0d88-3bc1-494c-98a7-9e494d499d0e-amphora-image\") pod \"octavia-image-upload-56c9f55b99-xv6vc\" (UID: \"2ebd0d88-3bc1-494c-98a7-9e494d499d0e\") " pod="openstack/octavia-image-upload-56c9f55b99-xv6vc" Dec 11 15:27:33 crc kubenswrapper[5050]: I1211 15:27:33.227040 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/2ebd0d88-3bc1-494c-98a7-9e494d499d0e-httpd-config\") pod \"octavia-image-upload-56c9f55b99-xv6vc\" (UID: \"2ebd0d88-3bc1-494c-98a7-9e494d499d0e\") " pod="openstack/octavia-image-upload-56c9f55b99-xv6vc" Dec 11 15:27:33 crc kubenswrapper[5050]: I1211 15:27:33.311729 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-image-upload-56c9f55b99-xv6vc" Dec 11 15:27:33 crc kubenswrapper[5050]: I1211 15:27:33.410500 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-rsyslog-qm6sk"] Dec 11 15:27:33 crc kubenswrapper[5050]: I1211 15:27:33.767982 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-image-upload-56c9f55b99-xv6vc"] Dec 11 15:27:33 crc kubenswrapper[5050]: W1211 15:27:33.768832 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2ebd0d88_3bc1_494c_98a7_9e494d499d0e.slice/crio-2918cdc891bd6175bc806a6ee82a5535b130797de5025d69b6d5ee538df5fd08 WatchSource:0}: Error finding container 2918cdc891bd6175bc806a6ee82a5535b130797de5025d69b6d5ee538df5fd08: Status 404 returned error can't find the container with id 2918cdc891bd6175bc806a6ee82a5535b130797de5025d69b6d5ee538df5fd08 Dec 11 15:27:34 crc kubenswrapper[5050]: I1211 15:27:34.000122 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-rsyslog-qm6sk" event={"ID":"f4942da3-ee1e-4a43-8500-f0092c12a9c6","Type":"ContainerStarted","Data":"a0ee7ba55d7def49c262f14351fe4ba3adccc5146280c958bd5b2684d222d706"} Dec 11 15:27:34 crc kubenswrapper[5050]: I1211 15:27:34.001874 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-56c9f55b99-xv6vc" event={"ID":"2ebd0d88-3bc1-494c-98a7-9e494d499d0e","Type":"ContainerStarted","Data":"2918cdc891bd6175bc806a6ee82a5535b130797de5025d69b6d5ee538df5fd08"} Dec 11 15:27:36 crc kubenswrapper[5050]: I1211 15:27:36.033596 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-rsyslog-qm6sk" event={"ID":"f4942da3-ee1e-4a43-8500-f0092c12a9c6","Type":"ContainerStarted","Data":"09d744897185e6f7f994fa74acca86dca5ecedc8a0dec23258b2630bc5fc7e6e"} Dec 11 15:27:37 crc kubenswrapper[5050]: I1211 15:27:37.737697 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-healthmanager-xkf2c"] Dec 11 15:27:37 crc kubenswrapper[5050]: I1211 15:27:37.739758 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-healthmanager-xkf2c" Dec 11 15:27:37 crc kubenswrapper[5050]: I1211 15:27:37.741791 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-healthmanager-config-data" Dec 11 15:27:37 crc kubenswrapper[5050]: I1211 15:27:37.742071 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-healthmanager-scripts" Dec 11 15:27:37 crc kubenswrapper[5050]: I1211 15:27:37.742306 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-certs-secret" Dec 11 15:27:37 crc kubenswrapper[5050]: I1211 15:27:37.750672 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-healthmanager-xkf2c"] Dec 11 15:27:37 crc kubenswrapper[5050]: I1211 15:27:37.848032 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/bcaea4dd-4d84-4e56-8f9c-26b27575ef64-hm-ports\") pod \"octavia-healthmanager-xkf2c\" (UID: \"bcaea4dd-4d84-4e56-8f9c-26b27575ef64\") " pod="openstack/octavia-healthmanager-xkf2c" Dec 11 15:27:37 crc kubenswrapper[5050]: I1211 15:27:37.848094 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bcaea4dd-4d84-4e56-8f9c-26b27575ef64-scripts\") pod \"octavia-healthmanager-xkf2c\" (UID: \"bcaea4dd-4d84-4e56-8f9c-26b27575ef64\") " pod="openstack/octavia-healthmanager-xkf2c" Dec 11 15:27:37 crc kubenswrapper[5050]: I1211 15:27:37.848273 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/bcaea4dd-4d84-4e56-8f9c-26b27575ef64-amphora-certs\") pod \"octavia-healthmanager-xkf2c\" (UID: \"bcaea4dd-4d84-4e56-8f9c-26b27575ef64\") " pod="openstack/octavia-healthmanager-xkf2c" Dec 11 15:27:37 crc kubenswrapper[5050]: I1211 15:27:37.848528 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bcaea4dd-4d84-4e56-8f9c-26b27575ef64-combined-ca-bundle\") pod \"octavia-healthmanager-xkf2c\" (UID: \"bcaea4dd-4d84-4e56-8f9c-26b27575ef64\") " pod="openstack/octavia-healthmanager-xkf2c" Dec 11 15:27:37 crc kubenswrapper[5050]: I1211 15:27:37.848713 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bcaea4dd-4d84-4e56-8f9c-26b27575ef64-config-data\") pod \"octavia-healthmanager-xkf2c\" (UID: \"bcaea4dd-4d84-4e56-8f9c-26b27575ef64\") " pod="openstack/octavia-healthmanager-xkf2c" Dec 11 15:27:37 crc kubenswrapper[5050]: I1211 15:27:37.849135 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/bcaea4dd-4d84-4e56-8f9c-26b27575ef64-config-data-merged\") pod \"octavia-healthmanager-xkf2c\" (UID: \"bcaea4dd-4d84-4e56-8f9c-26b27575ef64\") " pod="openstack/octavia-healthmanager-xkf2c" Dec 11 15:27:37 crc kubenswrapper[5050]: I1211 15:27:37.951407 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bcaea4dd-4d84-4e56-8f9c-26b27575ef64-config-data\") pod \"octavia-healthmanager-xkf2c\" (UID: \"bcaea4dd-4d84-4e56-8f9c-26b27575ef64\") " pod="openstack/octavia-healthmanager-xkf2c" Dec 11 15:27:37 crc 
kubenswrapper[5050]: I1211 15:27:37.951601 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/bcaea4dd-4d84-4e56-8f9c-26b27575ef64-config-data-merged\") pod \"octavia-healthmanager-xkf2c\" (UID: \"bcaea4dd-4d84-4e56-8f9c-26b27575ef64\") " pod="openstack/octavia-healthmanager-xkf2c" Dec 11 15:27:37 crc kubenswrapper[5050]: I1211 15:27:37.951742 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/bcaea4dd-4d84-4e56-8f9c-26b27575ef64-hm-ports\") pod \"octavia-healthmanager-xkf2c\" (UID: \"bcaea4dd-4d84-4e56-8f9c-26b27575ef64\") " pod="openstack/octavia-healthmanager-xkf2c" Dec 11 15:27:37 crc kubenswrapper[5050]: I1211 15:27:37.951785 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bcaea4dd-4d84-4e56-8f9c-26b27575ef64-scripts\") pod \"octavia-healthmanager-xkf2c\" (UID: \"bcaea4dd-4d84-4e56-8f9c-26b27575ef64\") " pod="openstack/octavia-healthmanager-xkf2c" Dec 11 15:27:37 crc kubenswrapper[5050]: I1211 15:27:37.951870 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/bcaea4dd-4d84-4e56-8f9c-26b27575ef64-amphora-certs\") pod \"octavia-healthmanager-xkf2c\" (UID: \"bcaea4dd-4d84-4e56-8f9c-26b27575ef64\") " pod="openstack/octavia-healthmanager-xkf2c" Dec 11 15:27:37 crc kubenswrapper[5050]: I1211 15:27:37.951947 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bcaea4dd-4d84-4e56-8f9c-26b27575ef64-combined-ca-bundle\") pod \"octavia-healthmanager-xkf2c\" (UID: \"bcaea4dd-4d84-4e56-8f9c-26b27575ef64\") " pod="openstack/octavia-healthmanager-xkf2c" Dec 11 15:27:37 crc kubenswrapper[5050]: I1211 15:27:37.952330 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/bcaea4dd-4d84-4e56-8f9c-26b27575ef64-config-data-merged\") pod \"octavia-healthmanager-xkf2c\" (UID: \"bcaea4dd-4d84-4e56-8f9c-26b27575ef64\") " pod="openstack/octavia-healthmanager-xkf2c" Dec 11 15:27:37 crc kubenswrapper[5050]: I1211 15:27:37.953078 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/bcaea4dd-4d84-4e56-8f9c-26b27575ef64-hm-ports\") pod \"octavia-healthmanager-xkf2c\" (UID: \"bcaea4dd-4d84-4e56-8f9c-26b27575ef64\") " pod="openstack/octavia-healthmanager-xkf2c" Dec 11 15:27:37 crc kubenswrapper[5050]: I1211 15:27:37.958342 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/bcaea4dd-4d84-4e56-8f9c-26b27575ef64-amphora-certs\") pod \"octavia-healthmanager-xkf2c\" (UID: \"bcaea4dd-4d84-4e56-8f9c-26b27575ef64\") " pod="openstack/octavia-healthmanager-xkf2c" Dec 11 15:27:37 crc kubenswrapper[5050]: I1211 15:27:37.958385 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bcaea4dd-4d84-4e56-8f9c-26b27575ef64-combined-ca-bundle\") pod \"octavia-healthmanager-xkf2c\" (UID: \"bcaea4dd-4d84-4e56-8f9c-26b27575ef64\") " pod="openstack/octavia-healthmanager-xkf2c" Dec 11 15:27:37 crc kubenswrapper[5050]: I1211 15:27:37.967611 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/bcaea4dd-4d84-4e56-8f9c-26b27575ef64-scripts\") pod \"octavia-healthmanager-xkf2c\" (UID: \"bcaea4dd-4d84-4e56-8f9c-26b27575ef64\") " pod="openstack/octavia-healthmanager-xkf2c" Dec 11 15:27:37 crc kubenswrapper[5050]: I1211 15:27:37.985968 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bcaea4dd-4d84-4e56-8f9c-26b27575ef64-config-data\") pod \"octavia-healthmanager-xkf2c\" (UID: \"bcaea4dd-4d84-4e56-8f9c-26b27575ef64\") " pod="openstack/octavia-healthmanager-xkf2c" Dec 11 15:27:38 crc kubenswrapper[5050]: I1211 15:27:38.052220 5050 generic.go:334] "Generic (PLEG): container finished" podID="f4942da3-ee1e-4a43-8500-f0092c12a9c6" containerID="09d744897185e6f7f994fa74acca86dca5ecedc8a0dec23258b2630bc5fc7e6e" exitCode=0 Dec 11 15:27:38 crc kubenswrapper[5050]: I1211 15:27:38.052266 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-rsyslog-qm6sk" event={"ID":"f4942da3-ee1e-4a43-8500-f0092c12a9c6","Type":"ContainerDied","Data":"09d744897185e6f7f994fa74acca86dca5ecedc8a0dec23258b2630bc5fc7e6e"} Dec 11 15:27:38 crc kubenswrapper[5050]: I1211 15:27:38.078672 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-healthmanager-xkf2c" Dec 11 15:27:38 crc kubenswrapper[5050]: I1211 15:27:38.558849 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-db-sync-n8tmc"] Dec 11 15:27:38 crc kubenswrapper[5050]: I1211 15:27:38.561048 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-db-sync-n8tmc" Dec 11 15:27:38 crc kubenswrapper[5050]: I1211 15:27:38.569516 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-db-sync-n8tmc"] Dec 11 15:27:38 crc kubenswrapper[5050]: I1211 15:27:38.573410 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-scripts" Dec 11 15:27:38 crc kubenswrapper[5050]: I1211 15:27:38.674366 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6fb80c24-230a-4b3f-979a-9520f51ed32c-config-data\") pod \"octavia-db-sync-n8tmc\" (UID: \"6fb80c24-230a-4b3f-979a-9520f51ed32c\") " pod="openstack/octavia-db-sync-n8tmc" Dec 11 15:27:38 crc kubenswrapper[5050]: I1211 15:27:38.674439 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/6fb80c24-230a-4b3f-979a-9520f51ed32c-config-data-merged\") pod \"octavia-db-sync-n8tmc\" (UID: \"6fb80c24-230a-4b3f-979a-9520f51ed32c\") " pod="openstack/octavia-db-sync-n8tmc" Dec 11 15:27:38 crc kubenswrapper[5050]: I1211 15:27:38.674997 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6fb80c24-230a-4b3f-979a-9520f51ed32c-scripts\") pod \"octavia-db-sync-n8tmc\" (UID: \"6fb80c24-230a-4b3f-979a-9520f51ed32c\") " pod="openstack/octavia-db-sync-n8tmc" Dec 11 15:27:38 crc kubenswrapper[5050]: I1211 15:27:38.675804 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fb80c24-230a-4b3f-979a-9520f51ed32c-combined-ca-bundle\") pod \"octavia-db-sync-n8tmc\" (UID: \"6fb80c24-230a-4b3f-979a-9520f51ed32c\") " pod="openstack/octavia-db-sync-n8tmc" Dec 11 15:27:38 
crc kubenswrapper[5050]: I1211 15:27:38.697920 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-healthmanager-xkf2c"] Dec 11 15:27:38 crc kubenswrapper[5050]: I1211 15:27:38.787370 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/6fb80c24-230a-4b3f-979a-9520f51ed32c-config-data-merged\") pod \"octavia-db-sync-n8tmc\" (UID: \"6fb80c24-230a-4b3f-979a-9520f51ed32c\") " pod="openstack/octavia-db-sync-n8tmc" Dec 11 15:27:38 crc kubenswrapper[5050]: I1211 15:27:38.787524 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6fb80c24-230a-4b3f-979a-9520f51ed32c-scripts\") pod \"octavia-db-sync-n8tmc\" (UID: \"6fb80c24-230a-4b3f-979a-9520f51ed32c\") " pod="openstack/octavia-db-sync-n8tmc" Dec 11 15:27:38 crc kubenswrapper[5050]: I1211 15:27:38.787643 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fb80c24-230a-4b3f-979a-9520f51ed32c-combined-ca-bundle\") pod \"octavia-db-sync-n8tmc\" (UID: \"6fb80c24-230a-4b3f-979a-9520f51ed32c\") " pod="openstack/octavia-db-sync-n8tmc" Dec 11 15:27:38 crc kubenswrapper[5050]: I1211 15:27:38.787710 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6fb80c24-230a-4b3f-979a-9520f51ed32c-config-data\") pod \"octavia-db-sync-n8tmc\" (UID: \"6fb80c24-230a-4b3f-979a-9520f51ed32c\") " pod="openstack/octavia-db-sync-n8tmc" Dec 11 15:27:38 crc kubenswrapper[5050]: I1211 15:27:38.788219 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/6fb80c24-230a-4b3f-979a-9520f51ed32c-config-data-merged\") pod \"octavia-db-sync-n8tmc\" (UID: \"6fb80c24-230a-4b3f-979a-9520f51ed32c\") " pod="openstack/octavia-db-sync-n8tmc" Dec 11 15:27:38 crc kubenswrapper[5050]: I1211 15:27:38.794982 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fb80c24-230a-4b3f-979a-9520f51ed32c-combined-ca-bundle\") pod \"octavia-db-sync-n8tmc\" (UID: \"6fb80c24-230a-4b3f-979a-9520f51ed32c\") " pod="openstack/octavia-db-sync-n8tmc" Dec 11 15:27:38 crc kubenswrapper[5050]: I1211 15:27:38.795076 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6fb80c24-230a-4b3f-979a-9520f51ed32c-scripts\") pod \"octavia-db-sync-n8tmc\" (UID: \"6fb80c24-230a-4b3f-979a-9520f51ed32c\") " pod="openstack/octavia-db-sync-n8tmc" Dec 11 15:27:38 crc kubenswrapper[5050]: I1211 15:27:38.797038 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6fb80c24-230a-4b3f-979a-9520f51ed32c-config-data\") pod \"octavia-db-sync-n8tmc\" (UID: \"6fb80c24-230a-4b3f-979a-9520f51ed32c\") " pod="openstack/octavia-db-sync-n8tmc" Dec 11 15:27:38 crc kubenswrapper[5050]: I1211 15:27:38.896936 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-db-sync-n8tmc" Dec 11 15:27:39 crc kubenswrapper[5050]: I1211 15:27:39.078931 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-healthmanager-xkf2c" event={"ID":"bcaea4dd-4d84-4e56-8f9c-26b27575ef64","Type":"ContainerStarted","Data":"a630e620f48efd84726b1babbc42e7651cf09e0669db2c0d906a457aa6bf085f"} Dec 11 15:27:39 crc kubenswrapper[5050]: I1211 15:27:39.423046 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-db-sync-n8tmc"] Dec 11 15:27:40 crc kubenswrapper[5050]: I1211 15:27:40.089504 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-sync-n8tmc" event={"ID":"6fb80c24-230a-4b3f-979a-9520f51ed32c","Type":"ContainerStarted","Data":"1d099f4757689b0ae0ee3d38ad9bbbff8515ef57796bc52eada98c52d16da6c4"} Dec 11 15:27:40 crc kubenswrapper[5050]: I1211 15:27:40.403143 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-housekeeping-vjddp"] Dec 11 15:27:40 crc kubenswrapper[5050]: I1211 15:27:40.406883 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-housekeeping-vjddp" Dec 11 15:27:40 crc kubenswrapper[5050]: I1211 15:27:40.411957 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-housekeeping-config-data" Dec 11 15:27:40 crc kubenswrapper[5050]: I1211 15:27:40.414095 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-housekeeping-scripts" Dec 11 15:27:40 crc kubenswrapper[5050]: I1211 15:27:40.426137 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/6ad45f64-d96d-4509-ab75-653af9140565-config-data-merged\") pod \"octavia-housekeeping-vjddp\" (UID: \"6ad45f64-d96d-4509-ab75-653af9140565\") " pod="openstack/octavia-housekeeping-vjddp" Dec 11 15:27:40 crc kubenswrapper[5050]: I1211 15:27:40.426204 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/6ad45f64-d96d-4509-ab75-653af9140565-hm-ports\") pod \"octavia-housekeeping-vjddp\" (UID: \"6ad45f64-d96d-4509-ab75-653af9140565\") " pod="openstack/octavia-housekeeping-vjddp" Dec 11 15:27:40 crc kubenswrapper[5050]: I1211 15:27:40.426324 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ad45f64-d96d-4509-ab75-653af9140565-config-data\") pod \"octavia-housekeeping-vjddp\" (UID: \"6ad45f64-d96d-4509-ab75-653af9140565\") " pod="openstack/octavia-housekeeping-vjddp" Dec 11 15:27:40 crc kubenswrapper[5050]: I1211 15:27:40.426406 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6ad45f64-d96d-4509-ab75-653af9140565-scripts\") pod \"octavia-housekeeping-vjddp\" (UID: \"6ad45f64-d96d-4509-ab75-653af9140565\") " pod="openstack/octavia-housekeeping-vjddp" Dec 11 15:27:40 crc kubenswrapper[5050]: I1211 15:27:40.426481 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ad45f64-d96d-4509-ab75-653af9140565-combined-ca-bundle\") pod \"octavia-housekeeping-vjddp\" (UID: \"6ad45f64-d96d-4509-ab75-653af9140565\") " pod="openstack/octavia-housekeeping-vjddp" Dec 11 15:27:40 crc 
kubenswrapper[5050]: I1211 15:27:40.426662 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/6ad45f64-d96d-4509-ab75-653af9140565-amphora-certs\") pod \"octavia-housekeeping-vjddp\" (UID: \"6ad45f64-d96d-4509-ab75-653af9140565\") " pod="openstack/octavia-housekeeping-vjddp" Dec 11 15:27:40 crc kubenswrapper[5050]: I1211 15:27:40.430252 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-housekeeping-vjddp"] Dec 11 15:27:40 crc kubenswrapper[5050]: I1211 15:27:40.528640 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ad45f64-d96d-4509-ab75-653af9140565-config-data\") pod \"octavia-housekeeping-vjddp\" (UID: \"6ad45f64-d96d-4509-ab75-653af9140565\") " pod="openstack/octavia-housekeeping-vjddp" Dec 11 15:27:40 crc kubenswrapper[5050]: I1211 15:27:40.528971 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6ad45f64-d96d-4509-ab75-653af9140565-scripts\") pod \"octavia-housekeeping-vjddp\" (UID: \"6ad45f64-d96d-4509-ab75-653af9140565\") " pod="openstack/octavia-housekeeping-vjddp" Dec 11 15:27:40 crc kubenswrapper[5050]: I1211 15:27:40.529175 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ad45f64-d96d-4509-ab75-653af9140565-combined-ca-bundle\") pod \"octavia-housekeeping-vjddp\" (UID: \"6ad45f64-d96d-4509-ab75-653af9140565\") " pod="openstack/octavia-housekeeping-vjddp" Dec 11 15:27:40 crc kubenswrapper[5050]: I1211 15:27:40.529359 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/6ad45f64-d96d-4509-ab75-653af9140565-amphora-certs\") pod \"octavia-housekeeping-vjddp\" (UID: \"6ad45f64-d96d-4509-ab75-653af9140565\") " pod="openstack/octavia-housekeeping-vjddp" Dec 11 15:27:40 crc kubenswrapper[5050]: I1211 15:27:40.529498 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/6ad45f64-d96d-4509-ab75-653af9140565-config-data-merged\") pod \"octavia-housekeeping-vjddp\" (UID: \"6ad45f64-d96d-4509-ab75-653af9140565\") " pod="openstack/octavia-housekeeping-vjddp" Dec 11 15:27:40 crc kubenswrapper[5050]: I1211 15:27:40.530223 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/6ad45f64-d96d-4509-ab75-653af9140565-config-data-merged\") pod \"octavia-housekeeping-vjddp\" (UID: \"6ad45f64-d96d-4509-ab75-653af9140565\") " pod="openstack/octavia-housekeeping-vjddp" Dec 11 15:27:40 crc kubenswrapper[5050]: I1211 15:27:40.529528 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/6ad45f64-d96d-4509-ab75-653af9140565-hm-ports\") pod \"octavia-housekeeping-vjddp\" (UID: \"6ad45f64-d96d-4509-ab75-653af9140565\") " pod="openstack/octavia-housekeeping-vjddp" Dec 11 15:27:40 crc kubenswrapper[5050]: I1211 15:27:40.532778 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/6ad45f64-d96d-4509-ab75-653af9140565-hm-ports\") pod \"octavia-housekeeping-vjddp\" (UID: \"6ad45f64-d96d-4509-ab75-653af9140565\") " 
pod="openstack/octavia-housekeeping-vjddp" Dec 11 15:27:40 crc kubenswrapper[5050]: I1211 15:27:40.539106 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/6ad45f64-d96d-4509-ab75-653af9140565-amphora-certs\") pod \"octavia-housekeeping-vjddp\" (UID: \"6ad45f64-d96d-4509-ab75-653af9140565\") " pod="openstack/octavia-housekeeping-vjddp" Dec 11 15:27:40 crc kubenswrapper[5050]: I1211 15:27:40.539528 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ad45f64-d96d-4509-ab75-653af9140565-combined-ca-bundle\") pod \"octavia-housekeeping-vjddp\" (UID: \"6ad45f64-d96d-4509-ab75-653af9140565\") " pod="openstack/octavia-housekeeping-vjddp" Dec 11 15:27:40 crc kubenswrapper[5050]: I1211 15:27:40.540235 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6ad45f64-d96d-4509-ab75-653af9140565-scripts\") pod \"octavia-housekeeping-vjddp\" (UID: \"6ad45f64-d96d-4509-ab75-653af9140565\") " pod="openstack/octavia-housekeeping-vjddp" Dec 11 15:27:40 crc kubenswrapper[5050]: I1211 15:27:40.553506 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6ad45f64-d96d-4509-ab75-653af9140565-config-data\") pod \"octavia-housekeeping-vjddp\" (UID: \"6ad45f64-d96d-4509-ab75-653af9140565\") " pod="openstack/octavia-housekeeping-vjddp" Dec 11 15:27:40 crc kubenswrapper[5050]: I1211 15:27:40.744856 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-housekeeping-vjddp" Dec 11 15:27:40 crc kubenswrapper[5050]: I1211 15:27:40.796661 5050 patch_prober.go:28] interesting pod/machine-config-daemon-wcb2s container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 15:27:40 crc kubenswrapper[5050]: I1211 15:27:40.796731 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 15:27:41 crc kubenswrapper[5050]: I1211 15:27:41.108309 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-healthmanager-xkf2c" event={"ID":"bcaea4dd-4d84-4e56-8f9c-26b27575ef64","Type":"ContainerStarted","Data":"2757ca567cef654d65cd7d9046e71b89a094406ae81ea3e99cae6c9c40504e36"} Dec 11 15:27:41 crc kubenswrapper[5050]: I1211 15:27:41.117803 5050 generic.go:334] "Generic (PLEG): container finished" podID="6fb80c24-230a-4b3f-979a-9520f51ed32c" containerID="ef3cc1b152fe2e6c01d02bf653b28a0ff7cfa29a0a06fa31c7e70a25e9c5370b" exitCode=0 Dec 11 15:27:41 crc kubenswrapper[5050]: I1211 15:27:41.117863 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-sync-n8tmc" event={"ID":"6fb80c24-230a-4b3f-979a-9520f51ed32c","Type":"ContainerDied","Data":"ef3cc1b152fe2e6c01d02bf653b28a0ff7cfa29a0a06fa31c7e70a25e9c5370b"} Dec 11 15:27:41 crc kubenswrapper[5050]: I1211 15:27:41.314811 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-housekeeping-vjddp"] Dec 11 15:27:42 crc kubenswrapper[5050]: I1211 15:27:42.230189 
5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-worker-fnqhw"] Dec 11 15:27:42 crc kubenswrapper[5050]: I1211 15:27:42.233250 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-worker-fnqhw" Dec 11 15:27:42 crc kubenswrapper[5050]: I1211 15:27:42.242915 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-worker-config-data" Dec 11 15:27:42 crc kubenswrapper[5050]: I1211 15:27:42.243040 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-worker-scripts" Dec 11 15:27:42 crc kubenswrapper[5050]: I1211 15:27:42.245455 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-worker-fnqhw"] Dec 11 15:27:42 crc kubenswrapper[5050]: I1211 15:27:42.371091 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/29044957-e973-4fa2-90df-b93179e29721-amphora-certs\") pod \"octavia-worker-fnqhw\" (UID: \"29044957-e973-4fa2-90df-b93179e29721\") " pod="openstack/octavia-worker-fnqhw" Dec 11 15:27:42 crc kubenswrapper[5050]: I1211 15:27:42.371146 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/29044957-e973-4fa2-90df-b93179e29721-hm-ports\") pod \"octavia-worker-fnqhw\" (UID: \"29044957-e973-4fa2-90df-b93179e29721\") " pod="openstack/octavia-worker-fnqhw" Dec 11 15:27:42 crc kubenswrapper[5050]: I1211 15:27:42.371209 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29044957-e973-4fa2-90df-b93179e29721-config-data\") pod \"octavia-worker-fnqhw\" (UID: \"29044957-e973-4fa2-90df-b93179e29721\") " pod="openstack/octavia-worker-fnqhw" Dec 11 15:27:42 crc kubenswrapper[5050]: I1211 15:27:42.371267 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/29044957-e973-4fa2-90df-b93179e29721-config-data-merged\") pod \"octavia-worker-fnqhw\" (UID: \"29044957-e973-4fa2-90df-b93179e29721\") " pod="openstack/octavia-worker-fnqhw" Dec 11 15:27:42 crc kubenswrapper[5050]: I1211 15:27:42.371301 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/29044957-e973-4fa2-90df-b93179e29721-scripts\") pod \"octavia-worker-fnqhw\" (UID: \"29044957-e973-4fa2-90df-b93179e29721\") " pod="openstack/octavia-worker-fnqhw" Dec 11 15:27:42 crc kubenswrapper[5050]: I1211 15:27:42.371321 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29044957-e973-4fa2-90df-b93179e29721-combined-ca-bundle\") pod \"octavia-worker-fnqhw\" (UID: \"29044957-e973-4fa2-90df-b93179e29721\") " pod="openstack/octavia-worker-fnqhw" Dec 11 15:27:42 crc kubenswrapper[5050]: I1211 15:27:42.472815 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29044957-e973-4fa2-90df-b93179e29721-config-data\") pod \"octavia-worker-fnqhw\" (UID: \"29044957-e973-4fa2-90df-b93179e29721\") " pod="openstack/octavia-worker-fnqhw" Dec 11 15:27:42 crc kubenswrapper[5050]: I1211 15:27:42.473506 5050 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/29044957-e973-4fa2-90df-b93179e29721-config-data-merged\") pod \"octavia-worker-fnqhw\" (UID: \"29044957-e973-4fa2-90df-b93179e29721\") " pod="openstack/octavia-worker-fnqhw" Dec 11 15:27:42 crc kubenswrapper[5050]: I1211 15:27:42.473654 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/29044957-e973-4fa2-90df-b93179e29721-scripts\") pod \"octavia-worker-fnqhw\" (UID: \"29044957-e973-4fa2-90df-b93179e29721\") " pod="openstack/octavia-worker-fnqhw" Dec 11 15:27:42 crc kubenswrapper[5050]: I1211 15:27:42.473768 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29044957-e973-4fa2-90df-b93179e29721-combined-ca-bundle\") pod \"octavia-worker-fnqhw\" (UID: \"29044957-e973-4fa2-90df-b93179e29721\") " pod="openstack/octavia-worker-fnqhw" Dec 11 15:27:42 crc kubenswrapper[5050]: I1211 15:27:42.473948 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/29044957-e973-4fa2-90df-b93179e29721-amphora-certs\") pod \"octavia-worker-fnqhw\" (UID: \"29044957-e973-4fa2-90df-b93179e29721\") " pod="openstack/octavia-worker-fnqhw" Dec 11 15:27:42 crc kubenswrapper[5050]: I1211 15:27:42.474148 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/29044957-e973-4fa2-90df-b93179e29721-hm-ports\") pod \"octavia-worker-fnqhw\" (UID: \"29044957-e973-4fa2-90df-b93179e29721\") " pod="openstack/octavia-worker-fnqhw" Dec 11 15:27:42 crc kubenswrapper[5050]: I1211 15:27:42.474499 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/29044957-e973-4fa2-90df-b93179e29721-config-data-merged\") pod \"octavia-worker-fnqhw\" (UID: \"29044957-e973-4fa2-90df-b93179e29721\") " pod="openstack/octavia-worker-fnqhw" Dec 11 15:27:42 crc kubenswrapper[5050]: I1211 15:27:42.475043 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hm-ports\" (UniqueName: \"kubernetes.io/configmap/29044957-e973-4fa2-90df-b93179e29721-hm-ports\") pod \"octavia-worker-fnqhw\" (UID: \"29044957-e973-4fa2-90df-b93179e29721\") " pod="openstack/octavia-worker-fnqhw" Dec 11 15:27:42 crc kubenswrapper[5050]: I1211 15:27:42.479649 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/29044957-e973-4fa2-90df-b93179e29721-scripts\") pod \"octavia-worker-fnqhw\" (UID: \"29044957-e973-4fa2-90df-b93179e29721\") " pod="openstack/octavia-worker-fnqhw" Dec 11 15:27:42 crc kubenswrapper[5050]: I1211 15:27:42.480706 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"amphora-certs\" (UniqueName: \"kubernetes.io/secret/29044957-e973-4fa2-90df-b93179e29721-amphora-certs\") pod \"octavia-worker-fnqhw\" (UID: \"29044957-e973-4fa2-90df-b93179e29721\") " pod="openstack/octavia-worker-fnqhw" Dec 11 15:27:42 crc kubenswrapper[5050]: I1211 15:27:42.480793 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29044957-e973-4fa2-90df-b93179e29721-config-data\") pod \"octavia-worker-fnqhw\" (UID: \"29044957-e973-4fa2-90df-b93179e29721\") " pod="openstack/octavia-worker-fnqhw" Dec 11 
15:27:42 crc kubenswrapper[5050]: I1211 15:27:42.487880 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29044957-e973-4fa2-90df-b93179e29721-combined-ca-bundle\") pod \"octavia-worker-fnqhw\" (UID: \"29044957-e973-4fa2-90df-b93179e29721\") " pod="openstack/octavia-worker-fnqhw" Dec 11 15:27:42 crc kubenswrapper[5050]: I1211 15:27:42.551676 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-worker-fnqhw" Dec 11 15:27:44 crc kubenswrapper[5050]: I1211 15:27:44.147067 5050 generic.go:334] "Generic (PLEG): container finished" podID="bcaea4dd-4d84-4e56-8f9c-26b27575ef64" containerID="2757ca567cef654d65cd7d9046e71b89a094406ae81ea3e99cae6c9c40504e36" exitCode=0 Dec 11 15:27:44 crc kubenswrapper[5050]: I1211 15:27:44.147129 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-healthmanager-xkf2c" event={"ID":"bcaea4dd-4d84-4e56-8f9c-26b27575ef64","Type":"ContainerDied","Data":"2757ca567cef654d65cd7d9046e71b89a094406ae81ea3e99cae6c9c40504e36"} Dec 11 15:27:46 crc kubenswrapper[5050]: I1211 15:27:46.167455 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-housekeeping-vjddp" event={"ID":"6ad45f64-d96d-4509-ab75-653af9140565","Type":"ContainerStarted","Data":"fc46d279e79942110f52f27a512c0dcb83abbcb573fc9c93612cfe5f53f5607a"} Dec 11 15:27:58 crc kubenswrapper[5050]: E1211 15:27:58.089296 5050 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/gthiemonge/octavia-amphora-image:latest" Dec 11 15:27:58 crc kubenswrapper[5050]: E1211 15:27:58.090022 5050 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/gthiemonge/octavia-amphora-image,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DEST_DIR,Value:/usr/local/apache2/htdocs,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:amphora-image,ReadOnly:false,MountPath:/usr/local/apache2/htdocs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-image-upload-56c9f55b99-xv6vc_openstack(2ebd0d88-3bc1-494c-98a7-9e494d499d0e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 11 15:27:58 crc kubenswrapper[5050]: E1211 15:27:58.091211 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/octavia-image-upload-56c9f55b99-xv6vc" podUID="2ebd0d88-3bc1-494c-98a7-9e494d499d0e" Dec 11 15:27:58 crc kubenswrapper[5050]: I1211 15:27:58.100057 5050 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openstack/octavia-worker-fnqhw"] Dec 11 15:27:58 crc kubenswrapper[5050]: E1211 15:27:58.281140 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/gthiemonge/octavia-amphora-image\\\"\"" pod="openstack/octavia-image-upload-56c9f55b99-xv6vc" podUID="2ebd0d88-3bc1-494c-98a7-9e494d499d0e" Dec 11 15:27:59 crc kubenswrapper[5050]: I1211 15:27:59.290589 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-healthmanager-xkf2c" event={"ID":"bcaea4dd-4d84-4e56-8f9c-26b27575ef64","Type":"ContainerStarted","Data":"4f9b305004f0da8b077b71e0308f1a84a02a2de7a3ef865d922ef370eefa4a94"} Dec 11 15:27:59 crc kubenswrapper[5050]: I1211 15:27:59.292602 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-healthmanager-xkf2c" Dec 11 15:27:59 crc kubenswrapper[5050]: I1211 15:27:59.293709 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-rsyslog-qm6sk" event={"ID":"f4942da3-ee1e-4a43-8500-f0092c12a9c6","Type":"ContainerStarted","Data":"8a2c4c0587b90c161efb4cc45867a098cd54ddd331fbcfe70c0c7c03604ae8c5"} Dec 11 15:27:59 crc kubenswrapper[5050]: I1211 15:27:59.294549 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-rsyslog-qm6sk" Dec 11 15:27:59 crc kubenswrapper[5050]: I1211 15:27:59.298254 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-sync-n8tmc" event={"ID":"6fb80c24-230a-4b3f-979a-9520f51ed32c","Type":"ContainerStarted","Data":"f7a228099a0ae42dc9365fae502c1f5da022dac8d917141b5eab3dd3c293792c"} Dec 11 15:27:59 crc kubenswrapper[5050]: I1211 15:27:59.299768 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-worker-fnqhw" event={"ID":"29044957-e973-4fa2-90df-b93179e29721","Type":"ContainerStarted","Data":"acd446fa36f579219ffeef2681d41246ed06d3b789cb1adedddca1ca1e866dab"} Dec 11 15:27:59 crc kubenswrapper[5050]: I1211 15:27:59.301063 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-housekeeping-vjddp" event={"ID":"6ad45f64-d96d-4509-ab75-653af9140565","Type":"ContainerStarted","Data":"f424a520ede76c4d5a1228e061a3996d79c7bc427f5a2f017f64053fa8af3f15"} Dec 11 15:27:59 crc kubenswrapper[5050]: I1211 15:27:59.319880 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-healthmanager-xkf2c" podStartSLOduration=22.319851017 podStartE2EDuration="22.319851017s" podCreationTimestamp="2025-12-11 15:27:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 15:27:59.313845146 +0000 UTC m=+5970.157567762" watchObservedRunningTime="2025-12-11 15:27:59.319851017 +0000 UTC m=+5970.163573603" Dec 11 15:27:59 crc kubenswrapper[5050]: I1211 15:27:59.368239 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-db-sync-n8tmc" podStartSLOduration=21.368215278 podStartE2EDuration="21.368215278s" podCreationTimestamp="2025-12-11 15:27:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 15:27:59.357228845 +0000 UTC m=+5970.200951431" watchObservedRunningTime="2025-12-11 15:27:59.368215278 +0000 UTC m=+5970.211937864" Dec 11 15:27:59 crc kubenswrapper[5050]: I1211 15:27:59.381876 5050 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-rsyslog-qm6sk" podStartSLOduration=2.81706541 podStartE2EDuration="27.381848092s" podCreationTimestamp="2025-12-11 15:27:32 +0000 UTC" firstStartedPulling="2025-12-11 15:27:33.418484037 +0000 UTC m=+5944.262206623" lastFinishedPulling="2025-12-11 15:27:57.983266719 +0000 UTC m=+5968.826989305" observedRunningTime="2025-12-11 15:27:59.375108402 +0000 UTC m=+5970.218830988" watchObservedRunningTime="2025-12-11 15:27:59.381848092 +0000 UTC m=+5970.225570678" Dec 11 15:28:00 crc kubenswrapper[5050]: I1211 15:28:00.448508 5050 generic.go:334] "Generic (PLEG): container finished" podID="6ad45f64-d96d-4509-ab75-653af9140565" containerID="f424a520ede76c4d5a1228e061a3996d79c7bc427f5a2f017f64053fa8af3f15" exitCode=0 Dec 11 15:28:00 crc kubenswrapper[5050]: I1211 15:28:00.450513 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-housekeeping-vjddp" event={"ID":"6ad45f64-d96d-4509-ab75-653af9140565","Type":"ContainerDied","Data":"f424a520ede76c4d5a1228e061a3996d79c7bc427f5a2f017f64053fa8af3f15"} Dec 11 15:28:01 crc kubenswrapper[5050]: I1211 15:28:01.458737 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-housekeeping-vjddp" event={"ID":"6ad45f64-d96d-4509-ab75-653af9140565","Type":"ContainerStarted","Data":"8861fd6ad65e9c6c6041320599f1221d006c594b4e7fec69cab9fde7c8e17957"} Dec 11 15:28:01 crc kubenswrapper[5050]: I1211 15:28:01.459249 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-housekeeping-vjddp" Dec 11 15:28:01 crc kubenswrapper[5050]: I1211 15:28:01.460798 5050 generic.go:334] "Generic (PLEG): container finished" podID="6fb80c24-230a-4b3f-979a-9520f51ed32c" containerID="f7a228099a0ae42dc9365fae502c1f5da022dac8d917141b5eab3dd3c293792c" exitCode=0 Dec 11 15:28:01 crc kubenswrapper[5050]: I1211 15:28:01.460838 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-sync-n8tmc" event={"ID":"6fb80c24-230a-4b3f-979a-9520f51ed32c","Type":"ContainerDied","Data":"f7a228099a0ae42dc9365fae502c1f5da022dac8d917141b5eab3dd3c293792c"} Dec 11 15:28:01 crc kubenswrapper[5050]: I1211 15:28:01.462366 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-worker-fnqhw" event={"ID":"29044957-e973-4fa2-90df-b93179e29721","Type":"ContainerStarted","Data":"f2eba576c41e3a767b422df31156f04f984d7b56a5f4c1c97ccf4f1e8e1dc53f"} Dec 11 15:28:01 crc kubenswrapper[5050]: I1211 15:28:01.482457 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-housekeeping-vjddp" podStartSLOduration=9.016965733 podStartE2EDuration="21.482441703s" podCreationTimestamp="2025-12-11 15:27:40 +0000 UTC" firstStartedPulling="2025-12-11 15:27:46.143418258 +0000 UTC m=+5956.987140844" lastFinishedPulling="2025-12-11 15:27:58.608894228 +0000 UTC m=+5969.452616814" observedRunningTime="2025-12-11 15:28:01.480731068 +0000 UTC m=+5972.324453654" watchObservedRunningTime="2025-12-11 15:28:01.482441703 +0000 UTC m=+5972.326164289" Dec 11 15:28:02 crc kubenswrapper[5050]: I1211 15:28:02.473526 5050 generic.go:334] "Generic (PLEG): container finished" podID="29044957-e973-4fa2-90df-b93179e29721" containerID="f2eba576c41e3a767b422df31156f04f984d7b56a5f4c1c97ccf4f1e8e1dc53f" exitCode=0 Dec 11 15:28:02 crc kubenswrapper[5050]: I1211 15:28:02.474085 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-worker-fnqhw" 
event={"ID":"29044957-e973-4fa2-90df-b93179e29721","Type":"ContainerDied","Data":"f2eba576c41e3a767b422df31156f04f984d7b56a5f4c1c97ccf4f1e8e1dc53f"} Dec 11 15:28:02 crc kubenswrapper[5050]: I1211 15:28:02.950728 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-db-sync-n8tmc" Dec 11 15:28:03 crc kubenswrapper[5050]: I1211 15:28:03.029475 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6fb80c24-230a-4b3f-979a-9520f51ed32c-config-data\") pod \"6fb80c24-230a-4b3f-979a-9520f51ed32c\" (UID: \"6fb80c24-230a-4b3f-979a-9520f51ed32c\") " Dec 11 15:28:03 crc kubenswrapper[5050]: I1211 15:28:03.029574 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6fb80c24-230a-4b3f-979a-9520f51ed32c-scripts\") pod \"6fb80c24-230a-4b3f-979a-9520f51ed32c\" (UID: \"6fb80c24-230a-4b3f-979a-9520f51ed32c\") " Dec 11 15:28:03 crc kubenswrapper[5050]: I1211 15:28:03.029614 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fb80c24-230a-4b3f-979a-9520f51ed32c-combined-ca-bundle\") pod \"6fb80c24-230a-4b3f-979a-9520f51ed32c\" (UID: \"6fb80c24-230a-4b3f-979a-9520f51ed32c\") " Dec 11 15:28:03 crc kubenswrapper[5050]: I1211 15:28:03.029661 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/6fb80c24-230a-4b3f-979a-9520f51ed32c-config-data-merged\") pod \"6fb80c24-230a-4b3f-979a-9520f51ed32c\" (UID: \"6fb80c24-230a-4b3f-979a-9520f51ed32c\") " Dec 11 15:28:03 crc kubenswrapper[5050]: I1211 15:28:03.038722 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6fb80c24-230a-4b3f-979a-9520f51ed32c-config-data" (OuterVolumeSpecName: "config-data") pod "6fb80c24-230a-4b3f-979a-9520f51ed32c" (UID: "6fb80c24-230a-4b3f-979a-9520f51ed32c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 15:28:03 crc kubenswrapper[5050]: I1211 15:28:03.039624 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6fb80c24-230a-4b3f-979a-9520f51ed32c-scripts" (OuterVolumeSpecName: "scripts") pod "6fb80c24-230a-4b3f-979a-9520f51ed32c" (UID: "6fb80c24-230a-4b3f-979a-9520f51ed32c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 15:28:03 crc kubenswrapper[5050]: I1211 15:28:03.078059 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6fb80c24-230a-4b3f-979a-9520f51ed32c-config-data-merged" (OuterVolumeSpecName: "config-data-merged") pod "6fb80c24-230a-4b3f-979a-9520f51ed32c" (UID: "6fb80c24-230a-4b3f-979a-9520f51ed32c"). InnerVolumeSpecName "config-data-merged". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 15:28:03 crc kubenswrapper[5050]: I1211 15:28:03.079338 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6fb80c24-230a-4b3f-979a-9520f51ed32c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6fb80c24-230a-4b3f-979a-9520f51ed32c" (UID: "6fb80c24-230a-4b3f-979a-9520f51ed32c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 15:28:03 crc kubenswrapper[5050]: I1211 15:28:03.131708 5050 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6fb80c24-230a-4b3f-979a-9520f51ed32c-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 15:28:03 crc kubenswrapper[5050]: I1211 15:28:03.131745 5050 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fb80c24-230a-4b3f-979a-9520f51ed32c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 15:28:03 crc kubenswrapper[5050]: I1211 15:28:03.131757 5050 reconciler_common.go:293] "Volume detached for volume \"config-data-merged\" (UniqueName: \"kubernetes.io/empty-dir/6fb80c24-230a-4b3f-979a-9520f51ed32c-config-data-merged\") on node \"crc\" DevicePath \"\"" Dec 11 15:28:03 crc kubenswrapper[5050]: I1211 15:28:03.131765 5050 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6fb80c24-230a-4b3f-979a-9520f51ed32c-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 15:28:03 crc kubenswrapper[5050]: I1211 15:28:03.486400 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/octavia-db-sync-n8tmc" Dec 11 15:28:03 crc kubenswrapper[5050]: I1211 15:28:03.487590 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-db-sync-n8tmc" event={"ID":"6fb80c24-230a-4b3f-979a-9520f51ed32c","Type":"ContainerDied","Data":"1d099f4757689b0ae0ee3d38ad9bbbff8515ef57796bc52eada98c52d16da6c4"} Dec 11 15:28:03 crc kubenswrapper[5050]: I1211 15:28:03.487630 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1d099f4757689b0ae0ee3d38ad9bbbff8515ef57796bc52eada98c52d16da6c4" Dec 11 15:28:03 crc kubenswrapper[5050]: I1211 15:28:03.489401 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-worker-fnqhw" event={"ID":"29044957-e973-4fa2-90df-b93179e29721","Type":"ContainerStarted","Data":"e48fb2d277d4bb6d43aab6a32cacdad010f1d4a4e95ed0018a60bf77ab7e3150"} Dec 11 15:28:03 crc kubenswrapper[5050]: I1211 15:28:03.490738 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/octavia-worker-fnqhw" Dec 11 15:28:03 crc kubenswrapper[5050]: I1211 15:28:03.566763 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-worker-fnqhw" podStartSLOduration=20.005340989 podStartE2EDuration="21.56673432s" podCreationTimestamp="2025-12-11 15:27:42 +0000 UTC" firstStartedPulling="2025-12-11 15:27:58.520696993 +0000 UTC m=+5969.364419579" lastFinishedPulling="2025-12-11 15:28:00.082090334 +0000 UTC m=+5970.925812910" observedRunningTime="2025-12-11 15:28:03.526923757 +0000 UTC m=+5974.370646353" watchObservedRunningTime="2025-12-11 15:28:03.56673432 +0000 UTC m=+5974.410456906" Dec 11 15:28:08 crc kubenswrapper[5050]: I1211 15:28:08.135338 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-healthmanager-xkf2c" Dec 11 15:28:10 crc kubenswrapper[5050]: I1211 15:28:10.590772 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-56c9f55b99-xv6vc" event={"ID":"2ebd0d88-3bc1-494c-98a7-9e494d499d0e","Type":"ContainerStarted","Data":"9e5e27425b1c055cc1dae09b33e0fffaac58d6f7f8443f27c3df126117dcf4a1"} Dec 11 15:28:10 crc kubenswrapper[5050]: I1211 15:28:10.777987 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/octavia-housekeeping-vjddp" Dec 11 15:28:10 crc kubenswrapper[5050]: I1211 15:28:10.800739 5050 patch_prober.go:28] interesting pod/machine-config-daemon-wcb2s container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 15:28:10 crc kubenswrapper[5050]: I1211 15:28:10.800794 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 15:28:12 crc kubenswrapper[5050]: I1211 15:28:12.582973 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-worker-fnqhw" Dec 11 15:28:14 crc kubenswrapper[5050]: I1211 15:28:14.631476 5050 generic.go:334] "Generic (PLEG): container finished" podID="2ebd0d88-3bc1-494c-98a7-9e494d499d0e" containerID="9e5e27425b1c055cc1dae09b33e0fffaac58d6f7f8443f27c3df126117dcf4a1" exitCode=0 Dec 11 15:28:14 crc kubenswrapper[5050]: I1211 15:28:14.631572 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-56c9f55b99-xv6vc" event={"ID":"2ebd0d88-3bc1-494c-98a7-9e494d499d0e","Type":"ContainerDied","Data":"9e5e27425b1c055cc1dae09b33e0fffaac58d6f7f8443f27c3df126117dcf4a1"} Dec 11 15:28:15 crc kubenswrapper[5050]: I1211 15:28:15.644425 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-56c9f55b99-xv6vc" event={"ID":"2ebd0d88-3bc1-494c-98a7-9e494d499d0e","Type":"ContainerStarted","Data":"27626695d55b1f6bac43b5fb67e29c26b93cbf719f151929017a0e8abf60424b"} Dec 11 15:28:15 crc kubenswrapper[5050]: I1211 15:28:15.673182 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-image-upload-56c9f55b99-xv6vc" podStartSLOduration=7.167521948 podStartE2EDuration="43.673161001s" podCreationTimestamp="2025-12-11 15:27:32 +0000 UTC" firstStartedPulling="2025-12-11 15:27:33.771788853 +0000 UTC m=+5944.615511439" lastFinishedPulling="2025-12-11 15:28:10.277427906 +0000 UTC m=+5981.121150492" observedRunningTime="2025-12-11 15:28:15.657217896 +0000 UTC m=+5986.500940492" watchObservedRunningTime="2025-12-11 15:28:15.673161001 +0000 UTC m=+5986.516883577" Dec 11 15:28:17 crc kubenswrapper[5050]: I1211 15:28:17.768454 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/octavia-rsyslog-qm6sk" Dec 11 15:28:27 crc kubenswrapper[5050]: I1211 15:28:27.045844 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-053d-account-create-update-rpgfv"] Dec 11 15:28:27 crc kubenswrapper[5050]: I1211 15:28:27.059797 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-2h48r"] Dec 11 15:28:27 crc kubenswrapper[5050]: I1211 15:28:27.072578 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-2h48r"] Dec 11 15:28:27 crc kubenswrapper[5050]: I1211 15:28:27.087062 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-053d-account-create-update-rpgfv"] Dec 11 15:28:27 crc kubenswrapper[5050]: I1211 15:28:27.556671 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="603ef10e-2ec0-4d47-8be0-3cc91679ecd7" 
path="/var/lib/kubelet/pods/603ef10e-2ec0-4d47-8be0-3cc91679ecd7/volumes" Dec 11 15:28:27 crc kubenswrapper[5050]: I1211 15:28:27.557668 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf783186-0e66-4bf7-bbbf-0cbd6f432736" path="/var/lib/kubelet/pods/bf783186-0e66-4bf7-bbbf-0cbd6f432736/volumes" Dec 11 15:28:33 crc kubenswrapper[5050]: I1211 15:28:33.042160 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-ljxnd"] Dec 11 15:28:33 crc kubenswrapper[5050]: I1211 15:28:33.060815 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-ljxnd"] Dec 11 15:28:33 crc kubenswrapper[5050]: I1211 15:28:33.560924 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ceabef0-4c99-4a43-8920-7aba1337fbc9" path="/var/lib/kubelet/pods/6ceabef0-4c99-4a43-8920-7aba1337fbc9/volumes" Dec 11 15:28:38 crc kubenswrapper[5050]: I1211 15:28:38.368725 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-image-upload-56c9f55b99-xv6vc"] Dec 11 15:28:38 crc kubenswrapper[5050]: I1211 15:28:38.369546 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/octavia-image-upload-56c9f55b99-xv6vc" podUID="2ebd0d88-3bc1-494c-98a7-9e494d499d0e" containerName="octavia-amphora-httpd" containerID="cri-o://27626695d55b1f6bac43b5fb67e29c26b93cbf719f151929017a0e8abf60424b" gracePeriod=30 Dec 11 15:28:38 crc kubenswrapper[5050]: I1211 15:28:38.868098 5050 generic.go:334] "Generic (PLEG): container finished" podID="2ebd0d88-3bc1-494c-98a7-9e494d499d0e" containerID="27626695d55b1f6bac43b5fb67e29c26b93cbf719f151929017a0e8abf60424b" exitCode=0 Dec 11 15:28:38 crc kubenswrapper[5050]: I1211 15:28:38.868180 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-56c9f55b99-xv6vc" event={"ID":"2ebd0d88-3bc1-494c-98a7-9e494d499d0e","Type":"ContainerDied","Data":"27626695d55b1f6bac43b5fb67e29c26b93cbf719f151929017a0e8abf60424b"} Dec 11 15:28:38 crc kubenswrapper[5050]: I1211 15:28:38.868409 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-56c9f55b99-xv6vc" event={"ID":"2ebd0d88-3bc1-494c-98a7-9e494d499d0e","Type":"ContainerDied","Data":"2918cdc891bd6175bc806a6ee82a5535b130797de5025d69b6d5ee538df5fd08"} Dec 11 15:28:38 crc kubenswrapper[5050]: I1211 15:28:38.868421 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2918cdc891bd6175bc806a6ee82a5535b130797de5025d69b6d5ee538df5fd08" Dec 11 15:28:38 crc kubenswrapper[5050]: I1211 15:28:38.868696 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-image-upload-56c9f55b99-xv6vc" Dec 11 15:28:38 crc kubenswrapper[5050]: I1211 15:28:38.964706 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/2ebd0d88-3bc1-494c-98a7-9e494d499d0e-amphora-image\") pod \"2ebd0d88-3bc1-494c-98a7-9e494d499d0e\" (UID: \"2ebd0d88-3bc1-494c-98a7-9e494d499d0e\") " Dec 11 15:28:38 crc kubenswrapper[5050]: I1211 15:28:38.964949 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/2ebd0d88-3bc1-494c-98a7-9e494d499d0e-httpd-config\") pod \"2ebd0d88-3bc1-494c-98a7-9e494d499d0e\" (UID: \"2ebd0d88-3bc1-494c-98a7-9e494d499d0e\") " Dec 11 15:28:39 crc kubenswrapper[5050]: I1211 15:28:39.019289 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ebd0d88-3bc1-494c-98a7-9e494d499d0e-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "2ebd0d88-3bc1-494c-98a7-9e494d499d0e" (UID: "2ebd0d88-3bc1-494c-98a7-9e494d499d0e"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 15:28:39 crc kubenswrapper[5050]: I1211 15:28:39.046479 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2ebd0d88-3bc1-494c-98a7-9e494d499d0e-amphora-image" (OuterVolumeSpecName: "amphora-image") pod "2ebd0d88-3bc1-494c-98a7-9e494d499d0e" (UID: "2ebd0d88-3bc1-494c-98a7-9e494d499d0e"). InnerVolumeSpecName "amphora-image". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 15:28:39 crc kubenswrapper[5050]: I1211 15:28:39.066929 5050 reconciler_common.go:293] "Volume detached for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/2ebd0d88-3bc1-494c-98a7-9e494d499d0e-amphora-image\") on node \"crc\" DevicePath \"\"" Dec 11 15:28:39 crc kubenswrapper[5050]: I1211 15:28:39.066968 5050 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/2ebd0d88-3bc1-494c-98a7-9e494d499d0e-httpd-config\") on node \"crc\" DevicePath \"\"" Dec 11 15:28:39 crc kubenswrapper[5050]: I1211 15:28:39.880451 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-image-upload-56c9f55b99-xv6vc" Dec 11 15:28:39 crc kubenswrapper[5050]: I1211 15:28:39.912096 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-image-upload-56c9f55b99-xv6vc"] Dec 11 15:28:39 crc kubenswrapper[5050]: I1211 15:28:39.928655 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/octavia-image-upload-56c9f55b99-xv6vc"] Dec 11 15:28:40 crc kubenswrapper[5050]: I1211 15:28:40.796455 5050 patch_prober.go:28] interesting pod/machine-config-daemon-wcb2s container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 15:28:40 crc kubenswrapper[5050]: I1211 15:28:40.796719 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 15:28:40 crc kubenswrapper[5050]: I1211 15:28:40.796778 5050 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" Dec 11 15:28:40 crc kubenswrapper[5050]: I1211 15:28:40.797593 5050 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ebf03c01a0c1f974cd2ea46998cc404d2f6db1e8b59294fa047964a30636886e"} pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 11 15:28:40 crc kubenswrapper[5050]: I1211 15:28:40.797658 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" containerName="machine-config-daemon" containerID="cri-o://ebf03c01a0c1f974cd2ea46998cc404d2f6db1e8b59294fa047964a30636886e" gracePeriod=600 Dec 11 15:28:40 crc kubenswrapper[5050]: E1211 15:28:40.925267 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 15:28:41 crc kubenswrapper[5050]: I1211 15:28:41.577220 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ebd0d88-3bc1-494c-98a7-9e494d499d0e" path="/var/lib/kubelet/pods/2ebd0d88-3bc1-494c-98a7-9e494d499d0e/volumes" Dec 11 15:28:41 crc kubenswrapper[5050]: I1211 15:28:41.900464 5050 generic.go:334] "Generic (PLEG): container finished" podID="7e849b2e-7cd7-4e49-acd2-deab139c699a" containerID="ebf03c01a0c1f974cd2ea46998cc404d2f6db1e8b59294fa047964a30636886e" exitCode=0 Dec 11 15:28:41 crc kubenswrapper[5050]: I1211 15:28:41.900509 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" event={"ID":"7e849b2e-7cd7-4e49-acd2-deab139c699a","Type":"ContainerDied","Data":"ebf03c01a0c1f974cd2ea46998cc404d2f6db1e8b59294fa047964a30636886e"} Dec 11 
15:28:41 crc kubenswrapper[5050]: I1211 15:28:41.900576 5050 scope.go:117] "RemoveContainer" containerID="cd874cd802dc062c08f2ff800368e85c46f12b5b02fbe3836d6e9375c3dc66f8" Dec 11 15:28:41 crc kubenswrapper[5050]: I1211 15:28:41.901343 5050 scope.go:117] "RemoveContainer" containerID="ebf03c01a0c1f974cd2ea46998cc404d2f6db1e8b59294fa047964a30636886e" Dec 11 15:28:41 crc kubenswrapper[5050]: E1211 15:28:41.901698 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 15:28:43 crc kubenswrapper[5050]: I1211 15:28:43.409950 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/octavia-image-upload-56c9f55b99-fpj6c"] Dec 11 15:28:43 crc kubenswrapper[5050]: E1211 15:28:43.410718 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ebd0d88-3bc1-494c-98a7-9e494d499d0e" containerName="octavia-amphora-httpd" Dec 11 15:28:43 crc kubenswrapper[5050]: I1211 15:28:43.410730 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ebd0d88-3bc1-494c-98a7-9e494d499d0e" containerName="octavia-amphora-httpd" Dec 11 15:28:43 crc kubenswrapper[5050]: E1211 15:28:43.410739 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6fb80c24-230a-4b3f-979a-9520f51ed32c" containerName="octavia-db-sync" Dec 11 15:28:43 crc kubenswrapper[5050]: I1211 15:28:43.410745 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="6fb80c24-230a-4b3f-979a-9520f51ed32c" containerName="octavia-db-sync" Dec 11 15:28:43 crc kubenswrapper[5050]: E1211 15:28:43.410766 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ebd0d88-3bc1-494c-98a7-9e494d499d0e" containerName="init" Dec 11 15:28:43 crc kubenswrapper[5050]: I1211 15:28:43.410775 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ebd0d88-3bc1-494c-98a7-9e494d499d0e" containerName="init" Dec 11 15:28:43 crc kubenswrapper[5050]: E1211 15:28:43.410796 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6fb80c24-230a-4b3f-979a-9520f51ed32c" containerName="init" Dec 11 15:28:43 crc kubenswrapper[5050]: I1211 15:28:43.410803 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="6fb80c24-230a-4b3f-979a-9520f51ed32c" containerName="init" Dec 11 15:28:43 crc kubenswrapper[5050]: I1211 15:28:43.410995 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="6fb80c24-230a-4b3f-979a-9520f51ed32c" containerName="octavia-db-sync" Dec 11 15:28:43 crc kubenswrapper[5050]: I1211 15:28:43.411029 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ebd0d88-3bc1-494c-98a7-9e494d499d0e" containerName="octavia-amphora-httpd" Dec 11 15:28:43 crc kubenswrapper[5050]: I1211 15:28:43.412219 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-image-upload-56c9f55b99-fpj6c" Dec 11 15:28:43 crc kubenswrapper[5050]: I1211 15:28:43.414959 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-config-data" Dec 11 15:28:43 crc kubenswrapper[5050]: I1211 15:28:43.428991 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-image-upload-56c9f55b99-fpj6c"] Dec 11 15:28:43 crc kubenswrapper[5050]: I1211 15:28:43.456995 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/1534f3a8-a346-40ac-b46a-d39035b92a45-httpd-config\") pod \"octavia-image-upload-56c9f55b99-fpj6c\" (UID: \"1534f3a8-a346-40ac-b46a-d39035b92a45\") " pod="openstack/octavia-image-upload-56c9f55b99-fpj6c" Dec 11 15:28:43 crc kubenswrapper[5050]: I1211 15:28:43.457592 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/1534f3a8-a346-40ac-b46a-d39035b92a45-amphora-image\") pod \"octavia-image-upload-56c9f55b99-fpj6c\" (UID: \"1534f3a8-a346-40ac-b46a-d39035b92a45\") " pod="openstack/octavia-image-upload-56c9f55b99-fpj6c" Dec 11 15:28:43 crc kubenswrapper[5050]: I1211 15:28:43.559564 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/1534f3a8-a346-40ac-b46a-d39035b92a45-httpd-config\") pod \"octavia-image-upload-56c9f55b99-fpj6c\" (UID: \"1534f3a8-a346-40ac-b46a-d39035b92a45\") " pod="openstack/octavia-image-upload-56c9f55b99-fpj6c" Dec 11 15:28:43 crc kubenswrapper[5050]: I1211 15:28:43.560066 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/1534f3a8-a346-40ac-b46a-d39035b92a45-amphora-image\") pod \"octavia-image-upload-56c9f55b99-fpj6c\" (UID: \"1534f3a8-a346-40ac-b46a-d39035b92a45\") " pod="openstack/octavia-image-upload-56c9f55b99-fpj6c" Dec 11 15:28:43 crc kubenswrapper[5050]: I1211 15:28:43.560523 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"amphora-image\" (UniqueName: \"kubernetes.io/empty-dir/1534f3a8-a346-40ac-b46a-d39035b92a45-amphora-image\") pod \"octavia-image-upload-56c9f55b99-fpj6c\" (UID: \"1534f3a8-a346-40ac-b46a-d39035b92a45\") " pod="openstack/octavia-image-upload-56c9f55b99-fpj6c" Dec 11 15:28:43 crc kubenswrapper[5050]: I1211 15:28:43.572633 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/1534f3a8-a346-40ac-b46a-d39035b92a45-httpd-config\") pod \"octavia-image-upload-56c9f55b99-fpj6c\" (UID: \"1534f3a8-a346-40ac-b46a-d39035b92a45\") " pod="openstack/octavia-image-upload-56c9f55b99-fpj6c" Dec 11 15:28:43 crc kubenswrapper[5050]: I1211 15:28:43.745129 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/octavia-image-upload-56c9f55b99-fpj6c" Dec 11 15:28:44 crc kubenswrapper[5050]: I1211 15:28:44.194843 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/octavia-image-upload-56c9f55b99-fpj6c"] Dec 11 15:28:44 crc kubenswrapper[5050]: I1211 15:28:44.202878 5050 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 11 15:28:44 crc kubenswrapper[5050]: I1211 15:28:44.931189 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-56c9f55b99-fpj6c" event={"ID":"1534f3a8-a346-40ac-b46a-d39035b92a45","Type":"ContainerStarted","Data":"b83f56e5db259098db9f8d1ede95e90d0a94a6d67d140814fc091b20efc393b3"} Dec 11 15:28:44 crc kubenswrapper[5050]: I1211 15:28:44.931498 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-56c9f55b99-fpj6c" event={"ID":"1534f3a8-a346-40ac-b46a-d39035b92a45","Type":"ContainerStarted","Data":"5c077a62a57766814b75fe6613a9b322c413abb079b2470cf849816d6b64f5d8"} Dec 11 15:28:45 crc kubenswrapper[5050]: I1211 15:28:45.940875 5050 generic.go:334] "Generic (PLEG): container finished" podID="1534f3a8-a346-40ac-b46a-d39035b92a45" containerID="b83f56e5db259098db9f8d1ede95e90d0a94a6d67d140814fc091b20efc393b3" exitCode=0 Dec 11 15:28:45 crc kubenswrapper[5050]: I1211 15:28:45.940968 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-56c9f55b99-fpj6c" event={"ID":"1534f3a8-a346-40ac-b46a-d39035b92a45","Type":"ContainerDied","Data":"b83f56e5db259098db9f8d1ede95e90d0a94a6d67d140814fc091b20efc393b3"} Dec 11 15:28:46 crc kubenswrapper[5050]: I1211 15:28:46.953483 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/octavia-image-upload-56c9f55b99-fpj6c" event={"ID":"1534f3a8-a346-40ac-b46a-d39035b92a45","Type":"ContainerStarted","Data":"d94a52db5da7c6fec0fa095de64a972bd74caaaf7725d5bbf6413809eaadf349"} Dec 11 15:28:46 crc kubenswrapper[5050]: I1211 15:28:46.968735 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/octavia-image-upload-56c9f55b99-fpj6c" podStartSLOduration=3.467735188 podStartE2EDuration="3.968717788s" podCreationTimestamp="2025-12-11 15:28:43 +0000 UTC" firstStartedPulling="2025-12-11 15:28:44.202690244 +0000 UTC m=+6015.046412830" lastFinishedPulling="2025-12-11 15:28:44.703672844 +0000 UTC m=+6015.547395430" observedRunningTime="2025-12-11 15:28:46.966804617 +0000 UTC m=+6017.810527203" watchObservedRunningTime="2025-12-11 15:28:46.968717788 +0000 UTC m=+6017.812440374" Dec 11 15:28:53 crc kubenswrapper[5050]: I1211 15:28:53.547745 5050 scope.go:117] "RemoveContainer" containerID="ebf03c01a0c1f974cd2ea46998cc404d2f6db1e8b59294fa047964a30636886e" Dec 11 15:28:53 crc kubenswrapper[5050]: E1211 15:28:53.548589 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 15:28:57 crc kubenswrapper[5050]: I1211 15:28:57.582594 5050 scope.go:117] "RemoveContainer" containerID="d925b2e10fffcd83c9dca4bbccffba7cc18fcbc56b6c47081746d9e5501db6d7" Dec 11 15:28:59 crc kubenswrapper[5050]: I1211 15:28:59.046740 5050 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/neutron-db-create-6kwz5"] Dec 11 15:28:59 crc kubenswrapper[5050]: I1211 15:28:59.059397 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-3395-account-create-update-bft8b"] Dec 11 15:28:59 crc kubenswrapper[5050]: I1211 15:28:59.068196 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-6kwz5"] Dec 11 15:28:59 crc kubenswrapper[5050]: I1211 15:28:59.077339 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-3395-account-create-update-bft8b"] Dec 11 15:28:59 crc kubenswrapper[5050]: I1211 15:28:59.224954 5050 scope.go:117] "RemoveContainer" containerID="4651c66ce862378022d60a0fe8f3fc0e0447fc5a295f69811107ab8b831a9cd4" Dec 11 15:28:59 crc kubenswrapper[5050]: I1211 15:28:59.313182 5050 scope.go:117] "RemoveContainer" containerID="1915832d9493f4f7d5d3aa66249038d1213fb2e67070b301c39c161b5c64a9c0" Dec 11 15:28:59 crc kubenswrapper[5050]: I1211 15:28:59.344749 5050 scope.go:117] "RemoveContainer" containerID="ec40b8ff0d4d8946d6868b9474d48f69c789b8edcc09389bc0734b745a313f61" Dec 11 15:28:59 crc kubenswrapper[5050]: I1211 15:28:59.385082 5050 scope.go:117] "RemoveContainer" containerID="39a083055865128f8d2d36931caadeb9cfe2196d13a8e2829dd33dd4817eac5c" Dec 11 15:28:59 crc kubenswrapper[5050]: I1211 15:28:59.565637 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="13f1df50-df33-400c-8ee9-0d04a733c5c2" path="/var/lib/kubelet/pods/13f1df50-df33-400c-8ee9-0d04a733c5c2/volumes" Dec 11 15:28:59 crc kubenswrapper[5050]: I1211 15:28:59.566694 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e6afafbe-4972-4ecd-a7f9-102e6dc01e06" path="/var/lib/kubelet/pods/e6afafbe-4972-4ecd-a7f9-102e6dc01e06/volumes" Dec 11 15:29:06 crc kubenswrapper[5050]: I1211 15:29:06.036033 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-2z5q4"] Dec 11 15:29:06 crc kubenswrapper[5050]: I1211 15:29:06.046287 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-2z5q4"] Dec 11 15:29:07 crc kubenswrapper[5050]: I1211 15:29:07.564476 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b0c23522-39f3-4930-8afd-d56611078533" path="/var/lib/kubelet/pods/b0c23522-39f3-4930-8afd-d56611078533/volumes" Dec 11 15:29:08 crc kubenswrapper[5050]: I1211 15:29:08.546419 5050 scope.go:117] "RemoveContainer" containerID="ebf03c01a0c1f974cd2ea46998cc404d2f6db1e8b59294fa047964a30636886e" Dec 11 15:29:08 crc kubenswrapper[5050]: E1211 15:29:08.547553 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 15:29:23 crc kubenswrapper[5050]: I1211 15:29:23.545895 5050 scope.go:117] "RemoveContainer" containerID="ebf03c01a0c1f974cd2ea46998cc404d2f6db1e8b59294fa047964a30636886e" Dec 11 15:29:23 crc kubenswrapper[5050]: E1211 15:29:23.547061 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 15:29:28 crc kubenswrapper[5050]: I1211 15:29:28.627635 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-576555fb6f-qscc8"] Dec 11 15:29:28 crc kubenswrapper[5050]: I1211 15:29:28.630191 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-576555fb6f-qscc8" Dec 11 15:29:28 crc kubenswrapper[5050]: I1211 15:29:28.634583 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Dec 11 15:29:28 crc kubenswrapper[5050]: I1211 15:29:28.634894 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Dec 11 15:29:28 crc kubenswrapper[5050]: I1211 15:29:28.635109 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-d7bqh" Dec 11 15:29:28 crc kubenswrapper[5050]: I1211 15:29:28.635252 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Dec 11 15:29:28 crc kubenswrapper[5050]: I1211 15:29:28.641619 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-576555fb6f-qscc8"] Dec 11 15:29:28 crc kubenswrapper[5050]: I1211 15:29:28.696321 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 11 15:29:28 crc kubenswrapper[5050]: I1211 15:29:28.696726 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="56c61de9-b025-4705-8311-bade624f6e13" containerName="glance-log" containerID="cri-o://8176f6c1b0091367c95acae545668a0ba33f785d033543b0e3a3d92feb1aeef3" gracePeriod=30 Dec 11 15:29:28 crc kubenswrapper[5050]: I1211 15:29:28.696997 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="56c61de9-b025-4705-8311-bade624f6e13" containerName="glance-httpd" containerID="cri-o://f9dd0c437488bdff4d14d42a97042ad703e2262bd29aeb6d8358bded905a5645" gracePeriod=30 Dec 11 15:29:28 crc kubenswrapper[5050]: I1211 15:29:28.723933 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-76bf858989-5x589"] Dec 11 15:29:28 crc kubenswrapper[5050]: I1211 15:29:28.725942 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-76bf858989-5x589" Dec 11 15:29:28 crc kubenswrapper[5050]: I1211 15:29:28.735692 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/adbd909e-6283-4642-a386-8e5cfff2f199-config-data\") pod \"horizon-576555fb6f-qscc8\" (UID: \"adbd909e-6283-4642-a386-8e5cfff2f199\") " pod="openstack/horizon-576555fb6f-qscc8" Dec 11 15:29:28 crc kubenswrapper[5050]: I1211 15:29:28.735785 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dg65p\" (UniqueName: \"kubernetes.io/projected/adbd909e-6283-4642-a386-8e5cfff2f199-kube-api-access-dg65p\") pod \"horizon-576555fb6f-qscc8\" (UID: \"adbd909e-6283-4642-a386-8e5cfff2f199\") " pod="openstack/horizon-576555fb6f-qscc8" Dec 11 15:29:28 crc kubenswrapper[5050]: I1211 15:29:28.735868 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/adbd909e-6283-4642-a386-8e5cfff2f199-horizon-secret-key\") pod \"horizon-576555fb6f-qscc8\" (UID: \"adbd909e-6283-4642-a386-8e5cfff2f199\") " pod="openstack/horizon-576555fb6f-qscc8" Dec 11 15:29:28 crc kubenswrapper[5050]: I1211 15:29:28.735915 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/adbd909e-6283-4642-a386-8e5cfff2f199-scripts\") pod \"horizon-576555fb6f-qscc8\" (UID: \"adbd909e-6283-4642-a386-8e5cfff2f199\") " pod="openstack/horizon-576555fb6f-qscc8" Dec 11 15:29:28 crc kubenswrapper[5050]: I1211 15:29:28.735959 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/adbd909e-6283-4642-a386-8e5cfff2f199-logs\") pod \"horizon-576555fb6f-qscc8\" (UID: \"adbd909e-6283-4642-a386-8e5cfff2f199\") " pod="openstack/horizon-576555fb6f-qscc8" Dec 11 15:29:28 crc kubenswrapper[5050]: I1211 15:29:28.780083 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-76bf858989-5x589"] Dec 11 15:29:28 crc kubenswrapper[5050]: I1211 15:29:28.797335 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 11 15:29:28 crc kubenswrapper[5050]: I1211 15:29:28.797809 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="c444ddc6-1d1e-4d4c-8b33-bda628807710" containerName="glance-httpd" containerID="cri-o://8a328144a3a0b9b16561e611c2a2ff017b7a36d90e65e6d28fa4044c99f7daa3" gracePeriod=30 Dec 11 15:29:28 crc kubenswrapper[5050]: I1211 15:29:28.798055 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="c444ddc6-1d1e-4d4c-8b33-bda628807710" containerName="glance-log" containerID="cri-o://a717d621c73eecb2a3dc8285b2583252b255f5a56b6dc361078f2f205a706971" gracePeriod=30 Dec 11 15:29:28 crc kubenswrapper[5050]: I1211 15:29:28.838233 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/adbd909e-6283-4642-a386-8e5cfff2f199-horizon-secret-key\") pod \"horizon-576555fb6f-qscc8\" (UID: \"adbd909e-6283-4642-a386-8e5cfff2f199\") " pod="openstack/horizon-576555fb6f-qscc8" Dec 11 15:29:28 crc kubenswrapper[5050]: I1211 15:29:28.838294 5050 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/32b521d5-20fd-45a3-899f-bd6b605107a5-horizon-secret-key\") pod \"horizon-76bf858989-5x589\" (UID: \"32b521d5-20fd-45a3-899f-bd6b605107a5\") " pod="openstack/horizon-76bf858989-5x589" Dec 11 15:29:28 crc kubenswrapper[5050]: I1211 15:29:28.838320 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/32b521d5-20fd-45a3-899f-bd6b605107a5-logs\") pod \"horizon-76bf858989-5x589\" (UID: \"32b521d5-20fd-45a3-899f-bd6b605107a5\") " pod="openstack/horizon-76bf858989-5x589" Dec 11 15:29:28 crc kubenswrapper[5050]: I1211 15:29:28.838340 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/adbd909e-6283-4642-a386-8e5cfff2f199-scripts\") pod \"horizon-576555fb6f-qscc8\" (UID: \"adbd909e-6283-4642-a386-8e5cfff2f199\") " pod="openstack/horizon-576555fb6f-qscc8" Dec 11 15:29:28 crc kubenswrapper[5050]: I1211 15:29:28.838374 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/adbd909e-6283-4642-a386-8e5cfff2f199-logs\") pod \"horizon-576555fb6f-qscc8\" (UID: \"adbd909e-6283-4642-a386-8e5cfff2f199\") " pod="openstack/horizon-576555fb6f-qscc8" Dec 11 15:29:28 crc kubenswrapper[5050]: I1211 15:29:28.838407 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njjkt\" (UniqueName: \"kubernetes.io/projected/32b521d5-20fd-45a3-899f-bd6b605107a5-kube-api-access-njjkt\") pod \"horizon-76bf858989-5x589\" (UID: \"32b521d5-20fd-45a3-899f-bd6b605107a5\") " pod="openstack/horizon-76bf858989-5x589" Dec 11 15:29:28 crc kubenswrapper[5050]: I1211 15:29:28.838454 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/adbd909e-6283-4642-a386-8e5cfff2f199-config-data\") pod \"horizon-576555fb6f-qscc8\" (UID: \"adbd909e-6283-4642-a386-8e5cfff2f199\") " pod="openstack/horizon-576555fb6f-qscc8" Dec 11 15:29:28 crc kubenswrapper[5050]: I1211 15:29:28.838485 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/32b521d5-20fd-45a3-899f-bd6b605107a5-scripts\") pod \"horizon-76bf858989-5x589\" (UID: \"32b521d5-20fd-45a3-899f-bd6b605107a5\") " pod="openstack/horizon-76bf858989-5x589" Dec 11 15:29:28 crc kubenswrapper[5050]: I1211 15:29:28.838500 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/32b521d5-20fd-45a3-899f-bd6b605107a5-config-data\") pod \"horizon-76bf858989-5x589\" (UID: \"32b521d5-20fd-45a3-899f-bd6b605107a5\") " pod="openstack/horizon-76bf858989-5x589" Dec 11 15:29:28 crc kubenswrapper[5050]: I1211 15:29:28.838552 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dg65p\" (UniqueName: \"kubernetes.io/projected/adbd909e-6283-4642-a386-8e5cfff2f199-kube-api-access-dg65p\") pod \"horizon-576555fb6f-qscc8\" (UID: \"adbd909e-6283-4642-a386-8e5cfff2f199\") " pod="openstack/horizon-576555fb6f-qscc8" Dec 11 15:29:28 crc kubenswrapper[5050]: I1211 15:29:28.839505 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"scripts\" (UniqueName: \"kubernetes.io/configmap/adbd909e-6283-4642-a386-8e5cfff2f199-scripts\") pod \"horizon-576555fb6f-qscc8\" (UID: \"adbd909e-6283-4642-a386-8e5cfff2f199\") " pod="openstack/horizon-576555fb6f-qscc8" Dec 11 15:29:28 crc kubenswrapper[5050]: I1211 15:29:28.839729 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/adbd909e-6283-4642-a386-8e5cfff2f199-logs\") pod \"horizon-576555fb6f-qscc8\" (UID: \"adbd909e-6283-4642-a386-8e5cfff2f199\") " pod="openstack/horizon-576555fb6f-qscc8" Dec 11 15:29:28 crc kubenswrapper[5050]: I1211 15:29:28.841664 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/adbd909e-6283-4642-a386-8e5cfff2f199-config-data\") pod \"horizon-576555fb6f-qscc8\" (UID: \"adbd909e-6283-4642-a386-8e5cfff2f199\") " pod="openstack/horizon-576555fb6f-qscc8" Dec 11 15:29:28 crc kubenswrapper[5050]: I1211 15:29:28.853827 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/adbd909e-6283-4642-a386-8e5cfff2f199-horizon-secret-key\") pod \"horizon-576555fb6f-qscc8\" (UID: \"adbd909e-6283-4642-a386-8e5cfff2f199\") " pod="openstack/horizon-576555fb6f-qscc8" Dec 11 15:29:28 crc kubenswrapper[5050]: I1211 15:29:28.858944 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dg65p\" (UniqueName: \"kubernetes.io/projected/adbd909e-6283-4642-a386-8e5cfff2f199-kube-api-access-dg65p\") pod \"horizon-576555fb6f-qscc8\" (UID: \"adbd909e-6283-4642-a386-8e5cfff2f199\") " pod="openstack/horizon-576555fb6f-qscc8" Dec 11 15:29:28 crc kubenswrapper[5050]: I1211 15:29:28.940190 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/32b521d5-20fd-45a3-899f-bd6b605107a5-horizon-secret-key\") pod \"horizon-76bf858989-5x589\" (UID: \"32b521d5-20fd-45a3-899f-bd6b605107a5\") " pod="openstack/horizon-76bf858989-5x589" Dec 11 15:29:28 crc kubenswrapper[5050]: I1211 15:29:28.940234 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/32b521d5-20fd-45a3-899f-bd6b605107a5-logs\") pod \"horizon-76bf858989-5x589\" (UID: \"32b521d5-20fd-45a3-899f-bd6b605107a5\") " pod="openstack/horizon-76bf858989-5x589" Dec 11 15:29:28 crc kubenswrapper[5050]: I1211 15:29:28.940297 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-njjkt\" (UniqueName: \"kubernetes.io/projected/32b521d5-20fd-45a3-899f-bd6b605107a5-kube-api-access-njjkt\") pod \"horizon-76bf858989-5x589\" (UID: \"32b521d5-20fd-45a3-899f-bd6b605107a5\") " pod="openstack/horizon-76bf858989-5x589" Dec 11 15:29:28 crc kubenswrapper[5050]: I1211 15:29:28.940379 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/32b521d5-20fd-45a3-899f-bd6b605107a5-scripts\") pod \"horizon-76bf858989-5x589\" (UID: \"32b521d5-20fd-45a3-899f-bd6b605107a5\") " pod="openstack/horizon-76bf858989-5x589" Dec 11 15:29:28 crc kubenswrapper[5050]: I1211 15:29:28.940667 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/32b521d5-20fd-45a3-899f-bd6b605107a5-config-data\") pod \"horizon-76bf858989-5x589\" (UID: \"32b521d5-20fd-45a3-899f-bd6b605107a5\") " 
pod="openstack/horizon-76bf858989-5x589" Dec 11 15:29:28 crc kubenswrapper[5050]: I1211 15:29:28.940752 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/32b521d5-20fd-45a3-899f-bd6b605107a5-logs\") pod \"horizon-76bf858989-5x589\" (UID: \"32b521d5-20fd-45a3-899f-bd6b605107a5\") " pod="openstack/horizon-76bf858989-5x589" Dec 11 15:29:28 crc kubenswrapper[5050]: I1211 15:29:28.941256 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/32b521d5-20fd-45a3-899f-bd6b605107a5-scripts\") pod \"horizon-76bf858989-5x589\" (UID: \"32b521d5-20fd-45a3-899f-bd6b605107a5\") " pod="openstack/horizon-76bf858989-5x589" Dec 11 15:29:28 crc kubenswrapper[5050]: I1211 15:29:28.941717 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/32b521d5-20fd-45a3-899f-bd6b605107a5-config-data\") pod \"horizon-76bf858989-5x589\" (UID: \"32b521d5-20fd-45a3-899f-bd6b605107a5\") " pod="openstack/horizon-76bf858989-5x589" Dec 11 15:29:28 crc kubenswrapper[5050]: I1211 15:29:28.943503 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/32b521d5-20fd-45a3-899f-bd6b605107a5-horizon-secret-key\") pod \"horizon-76bf858989-5x589\" (UID: \"32b521d5-20fd-45a3-899f-bd6b605107a5\") " pod="openstack/horizon-76bf858989-5x589" Dec 11 15:29:28 crc kubenswrapper[5050]: I1211 15:29:28.955509 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-njjkt\" (UniqueName: \"kubernetes.io/projected/32b521d5-20fd-45a3-899f-bd6b605107a5-kube-api-access-njjkt\") pod \"horizon-76bf858989-5x589\" (UID: \"32b521d5-20fd-45a3-899f-bd6b605107a5\") " pod="openstack/horizon-76bf858989-5x589" Dec 11 15:29:28 crc kubenswrapper[5050]: I1211 15:29:28.962530 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-576555fb6f-qscc8" Dec 11 15:29:29 crc kubenswrapper[5050]: I1211 15:29:29.057506 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-76bf858989-5x589" Dec 11 15:29:29 crc kubenswrapper[5050]: I1211 15:29:29.357444 5050 generic.go:334] "Generic (PLEG): container finished" podID="56c61de9-b025-4705-8311-bade624f6e13" containerID="8176f6c1b0091367c95acae545668a0ba33f785d033543b0e3a3d92feb1aeef3" exitCode=143 Dec 11 15:29:29 crc kubenswrapper[5050]: I1211 15:29:29.357628 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"56c61de9-b025-4705-8311-bade624f6e13","Type":"ContainerDied","Data":"8176f6c1b0091367c95acae545668a0ba33f785d033543b0e3a3d92feb1aeef3"} Dec 11 15:29:29 crc kubenswrapper[5050]: I1211 15:29:29.361985 5050 generic.go:334] "Generic (PLEG): container finished" podID="c444ddc6-1d1e-4d4c-8b33-bda628807710" containerID="a717d621c73eecb2a3dc8285b2583252b255f5a56b6dc361078f2f205a706971" exitCode=143 Dec 11 15:29:29 crc kubenswrapper[5050]: I1211 15:29:29.362083 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c444ddc6-1d1e-4d4c-8b33-bda628807710","Type":"ContainerDied","Data":"a717d621c73eecb2a3dc8285b2583252b255f5a56b6dc361078f2f205a706971"} Dec 11 15:29:29 crc kubenswrapper[5050]: I1211 15:29:29.379856 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-76bf858989-5x589"] Dec 11 15:29:29 crc kubenswrapper[5050]: I1211 15:29:29.395890 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-576555fb6f-qscc8"] Dec 11 15:29:29 crc kubenswrapper[5050]: I1211 15:29:29.423795 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-77d699676c-cljf5"] Dec 11 15:29:29 crc kubenswrapper[5050]: I1211 15:29:29.425393 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-77d699676c-cljf5" Dec 11 15:29:29 crc kubenswrapper[5050]: I1211 15:29:29.482503 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-77d699676c-cljf5"] Dec 11 15:29:29 crc kubenswrapper[5050]: I1211 15:29:29.521039 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-576555fb6f-qscc8"] Dec 11 15:29:29 crc kubenswrapper[5050]: I1211 15:29:29.562995 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8mr4z\" (UniqueName: \"kubernetes.io/projected/7da61c1c-03d8-4bcc-8bc9-23ccb60c1652-kube-api-access-8mr4z\") pod \"horizon-77d699676c-cljf5\" (UID: \"7da61c1c-03d8-4bcc-8bc9-23ccb60c1652\") " pod="openstack/horizon-77d699676c-cljf5" Dec 11 15:29:29 crc kubenswrapper[5050]: I1211 15:29:29.563271 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7da61c1c-03d8-4bcc-8bc9-23ccb60c1652-config-data\") pod \"horizon-77d699676c-cljf5\" (UID: \"7da61c1c-03d8-4bcc-8bc9-23ccb60c1652\") " pod="openstack/horizon-77d699676c-cljf5" Dec 11 15:29:29 crc kubenswrapper[5050]: I1211 15:29:29.563322 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/7da61c1c-03d8-4bcc-8bc9-23ccb60c1652-horizon-secret-key\") pod \"horizon-77d699676c-cljf5\" (UID: \"7da61c1c-03d8-4bcc-8bc9-23ccb60c1652\") " pod="openstack/horizon-77d699676c-cljf5" Dec 11 15:29:29 crc kubenswrapper[5050]: I1211 15:29:29.563349 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7da61c1c-03d8-4bcc-8bc9-23ccb60c1652-scripts\") pod \"horizon-77d699676c-cljf5\" (UID: \"7da61c1c-03d8-4bcc-8bc9-23ccb60c1652\") " pod="openstack/horizon-77d699676c-cljf5" Dec 11 15:29:29 crc kubenswrapper[5050]: I1211 15:29:29.563368 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7da61c1c-03d8-4bcc-8bc9-23ccb60c1652-logs\") pod \"horizon-77d699676c-cljf5\" (UID: \"7da61c1c-03d8-4bcc-8bc9-23ccb60c1652\") " pod="openstack/horizon-77d699676c-cljf5" Dec 11 15:29:29 crc kubenswrapper[5050]: I1211 15:29:29.665232 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7da61c1c-03d8-4bcc-8bc9-23ccb60c1652-config-data\") pod \"horizon-77d699676c-cljf5\" (UID: \"7da61c1c-03d8-4bcc-8bc9-23ccb60c1652\") " pod="openstack/horizon-77d699676c-cljf5" Dec 11 15:29:29 crc kubenswrapper[5050]: I1211 15:29:29.665317 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/7da61c1c-03d8-4bcc-8bc9-23ccb60c1652-horizon-secret-key\") pod \"horizon-77d699676c-cljf5\" (UID: \"7da61c1c-03d8-4bcc-8bc9-23ccb60c1652\") " pod="openstack/horizon-77d699676c-cljf5" Dec 11 15:29:29 crc kubenswrapper[5050]: I1211 15:29:29.665357 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7da61c1c-03d8-4bcc-8bc9-23ccb60c1652-scripts\") pod \"horizon-77d699676c-cljf5\" (UID: \"7da61c1c-03d8-4bcc-8bc9-23ccb60c1652\") " pod="openstack/horizon-77d699676c-cljf5" Dec 11 15:29:29 crc kubenswrapper[5050]: I1211 
15:29:29.665383 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7da61c1c-03d8-4bcc-8bc9-23ccb60c1652-logs\") pod \"horizon-77d699676c-cljf5\" (UID: \"7da61c1c-03d8-4bcc-8bc9-23ccb60c1652\") " pod="openstack/horizon-77d699676c-cljf5" Dec 11 15:29:29 crc kubenswrapper[5050]: I1211 15:29:29.665423 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8mr4z\" (UniqueName: \"kubernetes.io/projected/7da61c1c-03d8-4bcc-8bc9-23ccb60c1652-kube-api-access-8mr4z\") pod \"horizon-77d699676c-cljf5\" (UID: \"7da61c1c-03d8-4bcc-8bc9-23ccb60c1652\") " pod="openstack/horizon-77d699676c-cljf5" Dec 11 15:29:29 crc kubenswrapper[5050]: I1211 15:29:29.667581 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7da61c1c-03d8-4bcc-8bc9-23ccb60c1652-scripts\") pod \"horizon-77d699676c-cljf5\" (UID: \"7da61c1c-03d8-4bcc-8bc9-23ccb60c1652\") " pod="openstack/horizon-77d699676c-cljf5" Dec 11 15:29:29 crc kubenswrapper[5050]: I1211 15:29:29.667736 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7da61c1c-03d8-4bcc-8bc9-23ccb60c1652-logs\") pod \"horizon-77d699676c-cljf5\" (UID: \"7da61c1c-03d8-4bcc-8bc9-23ccb60c1652\") " pod="openstack/horizon-77d699676c-cljf5" Dec 11 15:29:29 crc kubenswrapper[5050]: I1211 15:29:29.669183 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7da61c1c-03d8-4bcc-8bc9-23ccb60c1652-config-data\") pod \"horizon-77d699676c-cljf5\" (UID: \"7da61c1c-03d8-4bcc-8bc9-23ccb60c1652\") " pod="openstack/horizon-77d699676c-cljf5" Dec 11 15:29:29 crc kubenswrapper[5050]: I1211 15:29:29.675914 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/7da61c1c-03d8-4bcc-8bc9-23ccb60c1652-horizon-secret-key\") pod \"horizon-77d699676c-cljf5\" (UID: \"7da61c1c-03d8-4bcc-8bc9-23ccb60c1652\") " pod="openstack/horizon-77d699676c-cljf5" Dec 11 15:29:29 crc kubenswrapper[5050]: I1211 15:29:29.685461 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8mr4z\" (UniqueName: \"kubernetes.io/projected/7da61c1c-03d8-4bcc-8bc9-23ccb60c1652-kube-api-access-8mr4z\") pod \"horizon-77d699676c-cljf5\" (UID: \"7da61c1c-03d8-4bcc-8bc9-23ccb60c1652\") " pod="openstack/horizon-77d699676c-cljf5" Dec 11 15:29:29 crc kubenswrapper[5050]: I1211 15:29:29.770979 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-77d699676c-cljf5" Dec 11 15:29:30 crc kubenswrapper[5050]: I1211 15:29:30.259365 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-77d699676c-cljf5"] Dec 11 15:29:30 crc kubenswrapper[5050]: W1211 15:29:30.261784 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7da61c1c_03d8_4bcc_8bc9_23ccb60c1652.slice/crio-5b468a749f92c6d83f2044a8a336e7b8cf8669a554430c6b9e5742ad0b4d9de1 WatchSource:0}: Error finding container 5b468a749f92c6d83f2044a8a336e7b8cf8669a554430c6b9e5742ad0b4d9de1: Status 404 returned error can't find the container with id 5b468a749f92c6d83f2044a8a336e7b8cf8669a554430c6b9e5742ad0b4d9de1 Dec 11 15:29:30 crc kubenswrapper[5050]: I1211 15:29:30.373954 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-576555fb6f-qscc8" event={"ID":"adbd909e-6283-4642-a386-8e5cfff2f199","Type":"ContainerStarted","Data":"0465eea08e4e410d700bf1214ba2f04d38f95456fa78613d50e5f2bef8b5a18b"} Dec 11 15:29:30 crc kubenswrapper[5050]: I1211 15:29:30.376214 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-76bf858989-5x589" event={"ID":"32b521d5-20fd-45a3-899f-bd6b605107a5","Type":"ContainerStarted","Data":"f36a010f06a285ab0a590a050708d633eab1b6dc1afc038830c8580fa7977ca7"} Dec 11 15:29:30 crc kubenswrapper[5050]: I1211 15:29:30.377846 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-77d699676c-cljf5" event={"ID":"7da61c1c-03d8-4bcc-8bc9-23ccb60c1652","Type":"ContainerStarted","Data":"5b468a749f92c6d83f2044a8a336e7b8cf8669a554430c6b9e5742ad0b4d9de1"} Dec 11 15:29:31 crc kubenswrapper[5050]: I1211 15:29:31.924034 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-external-api-0" podUID="56c61de9-b025-4705-8311-bade624f6e13" containerName="glance-httpd" probeResult="failure" output="Get \"http://10.217.1.45:9292/healthcheck\": dial tcp 10.217.1.45:9292: connect: connection refused" Dec 11 15:29:31 crc kubenswrapper[5050]: I1211 15:29:31.924170 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-external-api-0" podUID="56c61de9-b025-4705-8311-bade624f6e13" containerName="glance-log" probeResult="failure" output="Get \"http://10.217.1.45:9292/healthcheck\": dial tcp 10.217.1.45:9292: connect: connection refused" Dec 11 15:29:32 crc kubenswrapper[5050]: I1211 15:29:32.399915 5050 generic.go:334] "Generic (PLEG): container finished" podID="56c61de9-b025-4705-8311-bade624f6e13" containerID="f9dd0c437488bdff4d14d42a97042ad703e2262bd29aeb6d8358bded905a5645" exitCode=0 Dec 11 15:29:32 crc kubenswrapper[5050]: I1211 15:29:32.399984 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"56c61de9-b025-4705-8311-bade624f6e13","Type":"ContainerDied","Data":"f9dd0c437488bdff4d14d42a97042ad703e2262bd29aeb6d8358bded905a5645"} Dec 11 15:29:32 crc kubenswrapper[5050]: I1211 15:29:32.402436 5050 generic.go:334] "Generic (PLEG): container finished" podID="c444ddc6-1d1e-4d4c-8b33-bda628807710" containerID="8a328144a3a0b9b16561e611c2a2ff017b7a36d90e65e6d28fa4044c99f7daa3" exitCode=0 Dec 11 15:29:32 crc kubenswrapper[5050]: I1211 15:29:32.402473 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"c444ddc6-1d1e-4d4c-8b33-bda628807710","Type":"ContainerDied","Data":"8a328144a3a0b9b16561e611c2a2ff017b7a36d90e65e6d28fa4044c99f7daa3"} Dec 11 15:29:33 crc kubenswrapper[5050]: I1211 15:29:33.921283 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-internal-api-0" podUID="c444ddc6-1d1e-4d4c-8b33-bda628807710" containerName="glance-log" probeResult="failure" output="Get \"http://10.217.1.46:9292/healthcheck\": dial tcp 10.217.1.46:9292: connect: connection refused" Dec 11 15:29:33 crc kubenswrapper[5050]: I1211 15:29:33.921283 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-internal-api-0" podUID="c444ddc6-1d1e-4d4c-8b33-bda628807710" containerName="glance-httpd" probeResult="failure" output="Get \"http://10.217.1.46:9292/healthcheck\": dial tcp 10.217.1.46:9292: connect: connection refused" Dec 11 15:29:36 crc kubenswrapper[5050]: E1211 15:29:36.906393 5050 kubelet_node_status.go:756] "Failed to set some node status fields" err="failed to validate nodeIP: route ip+net: no such network interface" node="crc" Dec 11 15:29:37 crc kubenswrapper[5050]: I1211 15:29:37.044466 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 11 15:29:37 crc kubenswrapper[5050]: I1211 15:29:37.220358 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56c61de9-b025-4705-8311-bade624f6e13-config-data\") pod \"56c61de9-b025-4705-8311-bade624f6e13\" (UID: \"56c61de9-b025-4705-8311-bade624f6e13\") " Dec 11 15:29:37 crc kubenswrapper[5050]: I1211 15:29:37.220460 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-29nrn\" (UniqueName: \"kubernetes.io/projected/56c61de9-b025-4705-8311-bade624f6e13-kube-api-access-29nrn\") pod \"56c61de9-b025-4705-8311-bade624f6e13\" (UID: \"56c61de9-b025-4705-8311-bade624f6e13\") " Dec 11 15:29:37 crc kubenswrapper[5050]: I1211 15:29:37.220675 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/56c61de9-b025-4705-8311-bade624f6e13-httpd-run\") pod \"56c61de9-b025-4705-8311-bade624f6e13\" (UID: \"56c61de9-b025-4705-8311-bade624f6e13\") " Dec 11 15:29:37 crc kubenswrapper[5050]: I1211 15:29:37.220721 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/56c61de9-b025-4705-8311-bade624f6e13-ceph\") pod \"56c61de9-b025-4705-8311-bade624f6e13\" (UID: \"56c61de9-b025-4705-8311-bade624f6e13\") " Dec 11 15:29:37 crc kubenswrapper[5050]: I1211 15:29:37.220804 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/56c61de9-b025-4705-8311-bade624f6e13-logs\") pod \"56c61de9-b025-4705-8311-bade624f6e13\" (UID: \"56c61de9-b025-4705-8311-bade624f6e13\") " Dec 11 15:29:37 crc kubenswrapper[5050]: I1211 15:29:37.220861 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/56c61de9-b025-4705-8311-bade624f6e13-scripts\") pod \"56c61de9-b025-4705-8311-bade624f6e13\" (UID: \"56c61de9-b025-4705-8311-bade624f6e13\") " Dec 11 15:29:37 crc kubenswrapper[5050]: I1211 15:29:37.220906 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/56c61de9-b025-4705-8311-bade624f6e13-combined-ca-bundle\") pod \"56c61de9-b025-4705-8311-bade624f6e13\" (UID: \"56c61de9-b025-4705-8311-bade624f6e13\") " Dec 11 15:29:37 crc kubenswrapper[5050]: I1211 15:29:37.229157 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/56c61de9-b025-4705-8311-bade624f6e13-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "56c61de9-b025-4705-8311-bade624f6e13" (UID: "56c61de9-b025-4705-8311-bade624f6e13"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 15:29:37 crc kubenswrapper[5050]: I1211 15:29:37.229263 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/56c61de9-b025-4705-8311-bade624f6e13-logs" (OuterVolumeSpecName: "logs") pod "56c61de9-b025-4705-8311-bade624f6e13" (UID: "56c61de9-b025-4705-8311-bade624f6e13"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 15:29:37 crc kubenswrapper[5050]: I1211 15:29:37.240225 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56c61de9-b025-4705-8311-bade624f6e13-kube-api-access-29nrn" (OuterVolumeSpecName: "kube-api-access-29nrn") pod "56c61de9-b025-4705-8311-bade624f6e13" (UID: "56c61de9-b025-4705-8311-bade624f6e13"). InnerVolumeSpecName "kube-api-access-29nrn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 15:29:37 crc kubenswrapper[5050]: I1211 15:29:37.241963 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56c61de9-b025-4705-8311-bade624f6e13-ceph" (OuterVolumeSpecName: "ceph") pod "56c61de9-b025-4705-8311-bade624f6e13" (UID: "56c61de9-b025-4705-8311-bade624f6e13"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 15:29:37 crc kubenswrapper[5050]: I1211 15:29:37.245620 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56c61de9-b025-4705-8311-bade624f6e13-scripts" (OuterVolumeSpecName: "scripts") pod "56c61de9-b025-4705-8311-bade624f6e13" (UID: "56c61de9-b025-4705-8311-bade624f6e13"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 15:29:37 crc kubenswrapper[5050]: I1211 15:29:37.304418 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56c61de9-b025-4705-8311-bade624f6e13-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "56c61de9-b025-4705-8311-bade624f6e13" (UID: "56c61de9-b025-4705-8311-bade624f6e13"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 15:29:37 crc kubenswrapper[5050]: I1211 15:29:37.324432 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-29nrn\" (UniqueName: \"kubernetes.io/projected/56c61de9-b025-4705-8311-bade624f6e13-kube-api-access-29nrn\") on node \"crc\" DevicePath \"\"" Dec 11 15:29:37 crc kubenswrapper[5050]: I1211 15:29:37.324459 5050 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/56c61de9-b025-4705-8311-bade624f6e13-httpd-run\") on node \"crc\" DevicePath \"\"" Dec 11 15:29:37 crc kubenswrapper[5050]: I1211 15:29:37.324469 5050 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/56c61de9-b025-4705-8311-bade624f6e13-ceph\") on node \"crc\" DevicePath \"\"" Dec 11 15:29:37 crc kubenswrapper[5050]: I1211 15:29:37.324477 5050 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/56c61de9-b025-4705-8311-bade624f6e13-logs\") on node \"crc\" DevicePath \"\"" Dec 11 15:29:37 crc kubenswrapper[5050]: I1211 15:29:37.324485 5050 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/56c61de9-b025-4705-8311-bade624f6e13-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 15:29:37 crc kubenswrapper[5050]: I1211 15:29:37.324492 5050 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56c61de9-b025-4705-8311-bade624f6e13-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 15:29:37 crc kubenswrapper[5050]: I1211 15:29:37.333180 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56c61de9-b025-4705-8311-bade624f6e13-config-data" (OuterVolumeSpecName: "config-data") pod "56c61de9-b025-4705-8311-bade624f6e13" (UID: "56c61de9-b025-4705-8311-bade624f6e13"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 15:29:37 crc kubenswrapper[5050]: I1211 15:29:37.426828 5050 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56c61de9-b025-4705-8311-bade624f6e13-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 15:29:37 crc kubenswrapper[5050]: I1211 15:29:37.453237 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"56c61de9-b025-4705-8311-bade624f6e13","Type":"ContainerDied","Data":"7b259b6daf4ab16b8cc98718fb4284dc2718dea8af03e0a5869e1df4703accdc"} Dec 11 15:29:37 crc kubenswrapper[5050]: I1211 15:29:37.453292 5050 scope.go:117] "RemoveContainer" containerID="f9dd0c437488bdff4d14d42a97042ad703e2262bd29aeb6d8358bded905a5645" Dec 11 15:29:37 crc kubenswrapper[5050]: I1211 15:29:37.453313 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 11 15:29:37 crc kubenswrapper[5050]: I1211 15:29:37.456329 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-576555fb6f-qscc8" event={"ID":"adbd909e-6283-4642-a386-8e5cfff2f199","Type":"ContainerStarted","Data":"06fda559c919c413e3b11c7a5acc0df2f40f7527385816c092affd1087075b2c"} Dec 11 15:29:37 crc kubenswrapper[5050]: I1211 15:29:37.456369 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-576555fb6f-qscc8" event={"ID":"adbd909e-6283-4642-a386-8e5cfff2f199","Type":"ContainerStarted","Data":"4ba813f858c8b2a29b98a985aae1143279a0ef2b672cb37677836cf6c86767ed"} Dec 11 15:29:37 crc kubenswrapper[5050]: I1211 15:29:37.456500 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-576555fb6f-qscc8" podUID="adbd909e-6283-4642-a386-8e5cfff2f199" containerName="horizon-log" containerID="cri-o://4ba813f858c8b2a29b98a985aae1143279a0ef2b672cb37677836cf6c86767ed" gracePeriod=30 Dec 11 15:29:37 crc kubenswrapper[5050]: I1211 15:29:37.456558 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-576555fb6f-qscc8" podUID="adbd909e-6283-4642-a386-8e5cfff2f199" containerName="horizon" containerID="cri-o://06fda559c919c413e3b11c7a5acc0df2f40f7527385816c092affd1087075b2c" gracePeriod=30 Dec 11 15:29:37 crc kubenswrapper[5050]: I1211 15:29:37.471846 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-76bf858989-5x589" event={"ID":"32b521d5-20fd-45a3-899f-bd6b605107a5","Type":"ContainerStarted","Data":"ed40918a857d442157ef5a86bc46bf4c1e750179bf25d5078680814160df7074"} Dec 11 15:29:37 crc kubenswrapper[5050]: I1211 15:29:37.471886 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-76bf858989-5x589" event={"ID":"32b521d5-20fd-45a3-899f-bd6b605107a5","Type":"ContainerStarted","Data":"645c0842ad484bd26a50d403490ba1090f7a0165e591f6f2020f8f2bae092ffa"} Dec 11 15:29:37 crc kubenswrapper[5050]: I1211 15:29:37.473739 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-77d699676c-cljf5" event={"ID":"7da61c1c-03d8-4bcc-8bc9-23ccb60c1652","Type":"ContainerStarted","Data":"5bf10d2460c1f2e2f63841e0a3f56ffd837b3829bff105ae681afd9617fd7ce8"} Dec 11 15:29:37 crc kubenswrapper[5050]: I1211 15:29:37.473769 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-77d699676c-cljf5" event={"ID":"7da61c1c-03d8-4bcc-8bc9-23ccb60c1652","Type":"ContainerStarted","Data":"5c6fe59cd09872bf34d0ff029b4368490e629638561d1ea2a03ca6d1f4fc6549"} Dec 11 15:29:37 crc kubenswrapper[5050]: I1211 15:29:37.484683 5050 scope.go:117] "RemoveContainer" containerID="8176f6c1b0091367c95acae545668a0ba33f785d033543b0e3a3d92feb1aeef3" Dec 11 15:29:37 crc kubenswrapper[5050]: I1211 15:29:37.488376 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-576555fb6f-qscc8" podStartSLOduration=2.211485913 podStartE2EDuration="9.488352649s" podCreationTimestamp="2025-12-11 15:29:28 +0000 UTC" firstStartedPulling="2025-12-11 15:29:29.492000308 +0000 UTC m=+6060.335722894" lastFinishedPulling="2025-12-11 15:29:36.768867044 +0000 UTC m=+6067.612589630" observedRunningTime="2025-12-11 15:29:37.47527328 +0000 UTC m=+6068.318995866" watchObservedRunningTime="2025-12-11 15:29:37.488352649 +0000 UTC m=+6068.332075235" Dec 11 15:29:37 crc kubenswrapper[5050]: I1211 15:29:37.510917 5050 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openstack/horizon-76bf858989-5x589" podStartSLOduration=2.205323647 podStartE2EDuration="9.510887421s" podCreationTimestamp="2025-12-11 15:29:28 +0000 UTC" firstStartedPulling="2025-12-11 15:29:29.384195618 +0000 UTC m=+6060.227918204" lastFinishedPulling="2025-12-11 15:29:36.689759392 +0000 UTC m=+6067.533481978" observedRunningTime="2025-12-11 15:29:37.500412051 +0000 UTC m=+6068.344134637" watchObservedRunningTime="2025-12-11 15:29:37.510887421 +0000 UTC m=+6068.354609997" Dec 11 15:29:37 crc kubenswrapper[5050]: I1211 15:29:37.527584 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-77d699676c-cljf5" podStartSLOduration=2.079994027 podStartE2EDuration="8.527557546s" podCreationTimestamp="2025-12-11 15:29:29 +0000 UTC" firstStartedPulling="2025-12-11 15:29:30.264641352 +0000 UTC m=+6061.108363938" lastFinishedPulling="2025-12-11 15:29:36.712204871 +0000 UTC m=+6067.555927457" observedRunningTime="2025-12-11 15:29:37.51984417 +0000 UTC m=+6068.363566756" watchObservedRunningTime="2025-12-11 15:29:37.527557546 +0000 UTC m=+6068.371280132" Dec 11 15:29:37 crc kubenswrapper[5050]: I1211 15:29:37.565122 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 11 15:29:37 crc kubenswrapper[5050]: I1211 15:29:37.577810 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 11 15:29:37 crc kubenswrapper[5050]: I1211 15:29:37.589085 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Dec 11 15:29:37 crc kubenswrapper[5050]: E1211 15:29:37.589676 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56c61de9-b025-4705-8311-bade624f6e13" containerName="glance-log" Dec 11 15:29:37 crc kubenswrapper[5050]: I1211 15:29:37.589702 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="56c61de9-b025-4705-8311-bade624f6e13" containerName="glance-log" Dec 11 15:29:37 crc kubenswrapper[5050]: E1211 15:29:37.589723 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56c61de9-b025-4705-8311-bade624f6e13" containerName="glance-httpd" Dec 11 15:29:37 crc kubenswrapper[5050]: I1211 15:29:37.589731 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="56c61de9-b025-4705-8311-bade624f6e13" containerName="glance-httpd" Dec 11 15:29:37 crc kubenswrapper[5050]: I1211 15:29:37.590043 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="56c61de9-b025-4705-8311-bade624f6e13" containerName="glance-log" Dec 11 15:29:37 crc kubenswrapper[5050]: I1211 15:29:37.590066 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="56c61de9-b025-4705-8311-bade624f6e13" containerName="glance-httpd" Dec 11 15:29:37 crc kubenswrapper[5050]: I1211 15:29:37.591635 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 11 15:29:37 crc kubenswrapper[5050]: I1211 15:29:37.600562 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 11 15:29:37 crc kubenswrapper[5050]: I1211 15:29:37.602659 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Dec 11 15:29:37 crc kubenswrapper[5050]: I1211 15:29:37.733230 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/04726990-8178-4558-9db5-d01140512d63-ceph\") pod \"glance-default-external-api-0\" (UID: \"04726990-8178-4558-9db5-d01140512d63\") " pod="openstack/glance-default-external-api-0" Dec 11 15:29:37 crc kubenswrapper[5050]: I1211 15:29:37.733296 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/04726990-8178-4558-9db5-d01140512d63-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"04726990-8178-4558-9db5-d01140512d63\") " pod="openstack/glance-default-external-api-0" Dec 11 15:29:37 crc kubenswrapper[5050]: I1211 15:29:37.733473 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04726990-8178-4558-9db5-d01140512d63-config-data\") pod \"glance-default-external-api-0\" (UID: \"04726990-8178-4558-9db5-d01140512d63\") " pod="openstack/glance-default-external-api-0" Dec 11 15:29:37 crc kubenswrapper[5050]: I1211 15:29:37.733524 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/04726990-8178-4558-9db5-d01140512d63-logs\") pod \"glance-default-external-api-0\" (UID: \"04726990-8178-4558-9db5-d01140512d63\") " pod="openstack/glance-default-external-api-0" Dec 11 15:29:37 crc kubenswrapper[5050]: I1211 15:29:37.733576 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/04726990-8178-4558-9db5-d01140512d63-scripts\") pod \"glance-default-external-api-0\" (UID: \"04726990-8178-4558-9db5-d01140512d63\") " pod="openstack/glance-default-external-api-0" Dec 11 15:29:37 crc kubenswrapper[5050]: I1211 15:29:37.733951 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04726990-8178-4558-9db5-d01140512d63-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"04726990-8178-4558-9db5-d01140512d63\") " pod="openstack/glance-default-external-api-0" Dec 11 15:29:37 crc kubenswrapper[5050]: I1211 15:29:37.733997 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n5vll\" (UniqueName: \"kubernetes.io/projected/04726990-8178-4558-9db5-d01140512d63-kube-api-access-n5vll\") pod \"glance-default-external-api-0\" (UID: \"04726990-8178-4558-9db5-d01140512d63\") " pod="openstack/glance-default-external-api-0" Dec 11 15:29:37 crc kubenswrapper[5050]: I1211 15:29:37.835839 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04726990-8178-4558-9db5-d01140512d63-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: 
\"04726990-8178-4558-9db5-d01140512d63\") " pod="openstack/glance-default-external-api-0" Dec 11 15:29:37 crc kubenswrapper[5050]: I1211 15:29:37.835891 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n5vll\" (UniqueName: \"kubernetes.io/projected/04726990-8178-4558-9db5-d01140512d63-kube-api-access-n5vll\") pod \"glance-default-external-api-0\" (UID: \"04726990-8178-4558-9db5-d01140512d63\") " pod="openstack/glance-default-external-api-0" Dec 11 15:29:37 crc kubenswrapper[5050]: I1211 15:29:37.835938 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/04726990-8178-4558-9db5-d01140512d63-ceph\") pod \"glance-default-external-api-0\" (UID: \"04726990-8178-4558-9db5-d01140512d63\") " pod="openstack/glance-default-external-api-0" Dec 11 15:29:37 crc kubenswrapper[5050]: I1211 15:29:37.835965 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/04726990-8178-4558-9db5-d01140512d63-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"04726990-8178-4558-9db5-d01140512d63\") " pod="openstack/glance-default-external-api-0" Dec 11 15:29:37 crc kubenswrapper[5050]: I1211 15:29:37.836003 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04726990-8178-4558-9db5-d01140512d63-config-data\") pod \"glance-default-external-api-0\" (UID: \"04726990-8178-4558-9db5-d01140512d63\") " pod="openstack/glance-default-external-api-0" Dec 11 15:29:37 crc kubenswrapper[5050]: I1211 15:29:37.836036 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/04726990-8178-4558-9db5-d01140512d63-logs\") pod \"glance-default-external-api-0\" (UID: \"04726990-8178-4558-9db5-d01140512d63\") " pod="openstack/glance-default-external-api-0" Dec 11 15:29:37 crc kubenswrapper[5050]: I1211 15:29:37.836060 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/04726990-8178-4558-9db5-d01140512d63-scripts\") pod \"glance-default-external-api-0\" (UID: \"04726990-8178-4558-9db5-d01140512d63\") " pod="openstack/glance-default-external-api-0" Dec 11 15:29:37 crc kubenswrapper[5050]: I1211 15:29:37.840179 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/04726990-8178-4558-9db5-d01140512d63-scripts\") pod \"glance-default-external-api-0\" (UID: \"04726990-8178-4558-9db5-d01140512d63\") " pod="openstack/glance-default-external-api-0" Dec 11 15:29:37 crc kubenswrapper[5050]: I1211 15:29:37.840533 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/04726990-8178-4558-9db5-d01140512d63-logs\") pod \"glance-default-external-api-0\" (UID: \"04726990-8178-4558-9db5-d01140512d63\") " pod="openstack/glance-default-external-api-0" Dec 11 15:29:37 crc kubenswrapper[5050]: I1211 15:29:37.840932 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04726990-8178-4558-9db5-d01140512d63-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"04726990-8178-4558-9db5-d01140512d63\") " pod="openstack/glance-default-external-api-0" Dec 11 15:29:37 crc kubenswrapper[5050]: I1211 15:29:37.840989 
5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/04726990-8178-4558-9db5-d01140512d63-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"04726990-8178-4558-9db5-d01140512d63\") " pod="openstack/glance-default-external-api-0" Dec 11 15:29:37 crc kubenswrapper[5050]: I1211 15:29:37.841878 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04726990-8178-4558-9db5-d01140512d63-config-data\") pod \"glance-default-external-api-0\" (UID: \"04726990-8178-4558-9db5-d01140512d63\") " pod="openstack/glance-default-external-api-0" Dec 11 15:29:37 crc kubenswrapper[5050]: I1211 15:29:37.842279 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/04726990-8178-4558-9db5-d01140512d63-ceph\") pod \"glance-default-external-api-0\" (UID: \"04726990-8178-4558-9db5-d01140512d63\") " pod="openstack/glance-default-external-api-0" Dec 11 15:29:37 crc kubenswrapper[5050]: I1211 15:29:37.874668 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n5vll\" (UniqueName: \"kubernetes.io/projected/04726990-8178-4558-9db5-d01140512d63-kube-api-access-n5vll\") pod \"glance-default-external-api-0\" (UID: \"04726990-8178-4558-9db5-d01140512d63\") " pod="openstack/glance-default-external-api-0" Dec 11 15:29:37 crc kubenswrapper[5050]: I1211 15:29:37.915117 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 11 15:29:38 crc kubenswrapper[5050]: I1211 15:29:38.093454 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 11 15:29:38 crc kubenswrapper[5050]: I1211 15:29:38.247833 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c444ddc6-1d1e-4d4c-8b33-bda628807710-config-data\") pod \"c444ddc6-1d1e-4d4c-8b33-bda628807710\" (UID: \"c444ddc6-1d1e-4d4c-8b33-bda628807710\") " Dec 11 15:29:38 crc kubenswrapper[5050]: I1211 15:29:38.247888 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bdjl2\" (UniqueName: \"kubernetes.io/projected/c444ddc6-1d1e-4d4c-8b33-bda628807710-kube-api-access-bdjl2\") pod \"c444ddc6-1d1e-4d4c-8b33-bda628807710\" (UID: \"c444ddc6-1d1e-4d4c-8b33-bda628807710\") " Dec 11 15:29:38 crc kubenswrapper[5050]: I1211 15:29:38.247946 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c444ddc6-1d1e-4d4c-8b33-bda628807710-httpd-run\") pod \"c444ddc6-1d1e-4d4c-8b33-bda628807710\" (UID: \"c444ddc6-1d1e-4d4c-8b33-bda628807710\") " Dec 11 15:29:38 crc kubenswrapper[5050]: I1211 15:29:38.247981 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c444ddc6-1d1e-4d4c-8b33-bda628807710-logs\") pod \"c444ddc6-1d1e-4d4c-8b33-bda628807710\" (UID: \"c444ddc6-1d1e-4d4c-8b33-bda628807710\") " Dec 11 15:29:38 crc kubenswrapper[5050]: I1211 15:29:38.248603 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c444ddc6-1d1e-4d4c-8b33-bda628807710-logs" (OuterVolumeSpecName: "logs") pod "c444ddc6-1d1e-4d4c-8b33-bda628807710" (UID: "c444ddc6-1d1e-4d4c-8b33-bda628807710"). 
InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 15:29:38 crc kubenswrapper[5050]: I1211 15:29:38.248836 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c444ddc6-1d1e-4d4c-8b33-bda628807710-combined-ca-bundle\") pod \"c444ddc6-1d1e-4d4c-8b33-bda628807710\" (UID: \"c444ddc6-1d1e-4d4c-8b33-bda628807710\") " Dec 11 15:29:38 crc kubenswrapper[5050]: I1211 15:29:38.248895 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c444ddc6-1d1e-4d4c-8b33-bda628807710-scripts\") pod \"c444ddc6-1d1e-4d4c-8b33-bda628807710\" (UID: \"c444ddc6-1d1e-4d4c-8b33-bda628807710\") " Dec 11 15:29:38 crc kubenswrapper[5050]: I1211 15:29:38.248991 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/c444ddc6-1d1e-4d4c-8b33-bda628807710-ceph\") pod \"c444ddc6-1d1e-4d4c-8b33-bda628807710\" (UID: \"c444ddc6-1d1e-4d4c-8b33-bda628807710\") " Dec 11 15:29:38 crc kubenswrapper[5050]: I1211 15:29:38.250764 5050 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c444ddc6-1d1e-4d4c-8b33-bda628807710-logs\") on node \"crc\" DevicePath \"\"" Dec 11 15:29:38 crc kubenswrapper[5050]: I1211 15:29:38.250794 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c444ddc6-1d1e-4d4c-8b33-bda628807710-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "c444ddc6-1d1e-4d4c-8b33-bda628807710" (UID: "c444ddc6-1d1e-4d4c-8b33-bda628807710"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 15:29:38 crc kubenswrapper[5050]: I1211 15:29:38.259225 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c444ddc6-1d1e-4d4c-8b33-bda628807710-ceph" (OuterVolumeSpecName: "ceph") pod "c444ddc6-1d1e-4d4c-8b33-bda628807710" (UID: "c444ddc6-1d1e-4d4c-8b33-bda628807710"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 15:29:38 crc kubenswrapper[5050]: I1211 15:29:38.260532 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c444ddc6-1d1e-4d4c-8b33-bda628807710-kube-api-access-bdjl2" (OuterVolumeSpecName: "kube-api-access-bdjl2") pod "c444ddc6-1d1e-4d4c-8b33-bda628807710" (UID: "c444ddc6-1d1e-4d4c-8b33-bda628807710"). InnerVolumeSpecName "kube-api-access-bdjl2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 15:29:38 crc kubenswrapper[5050]: I1211 15:29:38.261117 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c444ddc6-1d1e-4d4c-8b33-bda628807710-scripts" (OuterVolumeSpecName: "scripts") pod "c444ddc6-1d1e-4d4c-8b33-bda628807710" (UID: "c444ddc6-1d1e-4d4c-8b33-bda628807710"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 15:29:38 crc kubenswrapper[5050]: I1211 15:29:38.284682 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c444ddc6-1d1e-4d4c-8b33-bda628807710-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c444ddc6-1d1e-4d4c-8b33-bda628807710" (UID: "c444ddc6-1d1e-4d4c-8b33-bda628807710"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 15:29:38 crc kubenswrapper[5050]: I1211 15:29:38.304242 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c444ddc6-1d1e-4d4c-8b33-bda628807710-config-data" (OuterVolumeSpecName: "config-data") pod "c444ddc6-1d1e-4d4c-8b33-bda628807710" (UID: "c444ddc6-1d1e-4d4c-8b33-bda628807710"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 15:29:38 crc kubenswrapper[5050]: I1211 15:29:38.352876 5050 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c444ddc6-1d1e-4d4c-8b33-bda628807710-httpd-run\") on node \"crc\" DevicePath \"\"" Dec 11 15:29:38 crc kubenswrapper[5050]: I1211 15:29:38.352916 5050 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c444ddc6-1d1e-4d4c-8b33-bda628807710-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 15:29:38 crc kubenswrapper[5050]: I1211 15:29:38.352930 5050 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c444ddc6-1d1e-4d4c-8b33-bda628807710-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 15:29:38 crc kubenswrapper[5050]: I1211 15:29:38.352939 5050 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/c444ddc6-1d1e-4d4c-8b33-bda628807710-ceph\") on node \"crc\" DevicePath \"\"" Dec 11 15:29:38 crc kubenswrapper[5050]: I1211 15:29:38.352947 5050 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c444ddc6-1d1e-4d4c-8b33-bda628807710-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 15:29:38 crc kubenswrapper[5050]: I1211 15:29:38.352957 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bdjl2\" (UniqueName: \"kubernetes.io/projected/c444ddc6-1d1e-4d4c-8b33-bda628807710-kube-api-access-bdjl2\") on node \"crc\" DevicePath \"\"" Dec 11 15:29:38 crc kubenswrapper[5050]: I1211 15:29:38.501418 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c444ddc6-1d1e-4d4c-8b33-bda628807710","Type":"ContainerDied","Data":"75e77d9253de0f5df1c0abd79897278a28495a70e23110dcfbf58f9587dfee83"} Dec 11 15:29:38 crc kubenswrapper[5050]: I1211 15:29:38.501826 5050 scope.go:117] "RemoveContainer" containerID="8a328144a3a0b9b16561e611c2a2ff017b7a36d90e65e6d28fa4044c99f7daa3" Dec 11 15:29:38 crc kubenswrapper[5050]: I1211 15:29:38.501455 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 11 15:29:38 crc kubenswrapper[5050]: I1211 15:29:38.546251 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 11 15:29:38 crc kubenswrapper[5050]: I1211 15:29:38.546296 5050 scope.go:117] "RemoveContainer" containerID="ebf03c01a0c1f974cd2ea46998cc404d2f6db1e8b59294fa047964a30636886e" Dec 11 15:29:38 crc kubenswrapper[5050]: E1211 15:29:38.546560 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 15:29:38 crc kubenswrapper[5050]: I1211 15:29:38.547208 5050 scope.go:117] "RemoveContainer" containerID="a717d621c73eecb2a3dc8285b2583252b255f5a56b6dc361078f2f205a706971" Dec 11 15:29:38 crc kubenswrapper[5050]: I1211 15:29:38.558352 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 11 15:29:38 crc kubenswrapper[5050]: I1211 15:29:38.581540 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 11 15:29:38 crc kubenswrapper[5050]: E1211 15:29:38.582125 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c444ddc6-1d1e-4d4c-8b33-bda628807710" containerName="glance-log" Dec 11 15:29:38 crc kubenswrapper[5050]: I1211 15:29:38.582152 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="c444ddc6-1d1e-4d4c-8b33-bda628807710" containerName="glance-log" Dec 11 15:29:38 crc kubenswrapper[5050]: E1211 15:29:38.582183 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c444ddc6-1d1e-4d4c-8b33-bda628807710" containerName="glance-httpd" Dec 11 15:29:38 crc kubenswrapper[5050]: I1211 15:29:38.582194 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="c444ddc6-1d1e-4d4c-8b33-bda628807710" containerName="glance-httpd" Dec 11 15:29:38 crc kubenswrapper[5050]: I1211 15:29:38.582450 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="c444ddc6-1d1e-4d4c-8b33-bda628807710" containerName="glance-httpd" Dec 11 15:29:38 crc kubenswrapper[5050]: I1211 15:29:38.582495 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="c444ddc6-1d1e-4d4c-8b33-bda628807710" containerName="glance-log" Dec 11 15:29:38 crc kubenswrapper[5050]: I1211 15:29:38.583874 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 11 15:29:38 crc kubenswrapper[5050]: I1211 15:29:38.586783 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Dec 11 15:29:38 crc kubenswrapper[5050]: W1211 15:29:38.621771 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod04726990_8178_4558_9db5_d01140512d63.slice/crio-dd9569090df21ca4cb282ee533718bce2be1cef496607afd27397fd977f3e9a2 WatchSource:0}: Error finding container dd9569090df21ca4cb282ee533718bce2be1cef496607afd27397fd977f3e9a2: Status 404 returned error can't find the container with id dd9569090df21ca4cb282ee533718bce2be1cef496607afd27397fd977f3e9a2 Dec 11 15:29:38 crc kubenswrapper[5050]: I1211 15:29:38.637627 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 11 15:29:38 crc kubenswrapper[5050]: I1211 15:29:38.673900 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 11 15:29:38 crc kubenswrapper[5050]: I1211 15:29:38.764526 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ab1ef48-9579-472e-ab1d-a6efe2269dd9-config-data\") pod \"glance-default-internal-api-0\" (UID: \"8ab1ef48-9579-472e-ab1d-a6efe2269dd9\") " pod="openstack/glance-default-internal-api-0" Dec 11 15:29:38 crc kubenswrapper[5050]: I1211 15:29:38.764607 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/8ab1ef48-9579-472e-ab1d-a6efe2269dd9-ceph\") pod \"glance-default-internal-api-0\" (UID: \"8ab1ef48-9579-472e-ab1d-a6efe2269dd9\") " pod="openstack/glance-default-internal-api-0" Dec 11 15:29:38 crc kubenswrapper[5050]: I1211 15:29:38.764664 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8ab1ef48-9579-472e-ab1d-a6efe2269dd9-scripts\") pod \"glance-default-internal-api-0\" (UID: \"8ab1ef48-9579-472e-ab1d-a6efe2269dd9\") " pod="openstack/glance-default-internal-api-0" Dec 11 15:29:38 crc kubenswrapper[5050]: I1211 15:29:38.764751 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8v2gf\" (UniqueName: \"kubernetes.io/projected/8ab1ef48-9579-472e-ab1d-a6efe2269dd9-kube-api-access-8v2gf\") pod \"glance-default-internal-api-0\" (UID: \"8ab1ef48-9579-472e-ab1d-a6efe2269dd9\") " pod="openstack/glance-default-internal-api-0" Dec 11 15:29:38 crc kubenswrapper[5050]: I1211 15:29:38.764855 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8ab1ef48-9579-472e-ab1d-a6efe2269dd9-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"8ab1ef48-9579-472e-ab1d-a6efe2269dd9\") " pod="openstack/glance-default-internal-api-0" Dec 11 15:29:38 crc kubenswrapper[5050]: I1211 15:29:38.764939 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8ab1ef48-9579-472e-ab1d-a6efe2269dd9-logs\") pod \"glance-default-internal-api-0\" (UID: \"8ab1ef48-9579-472e-ab1d-a6efe2269dd9\") " pod="openstack/glance-default-internal-api-0" Dec 11 15:29:38 crc 
kubenswrapper[5050]: I1211 15:29:38.765103 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ab1ef48-9579-472e-ab1d-a6efe2269dd9-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"8ab1ef48-9579-472e-ab1d-a6efe2269dd9\") " pod="openstack/glance-default-internal-api-0" Dec 11 15:29:38 crc kubenswrapper[5050]: I1211 15:29:38.868533 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ab1ef48-9579-472e-ab1d-a6efe2269dd9-config-data\") pod \"glance-default-internal-api-0\" (UID: \"8ab1ef48-9579-472e-ab1d-a6efe2269dd9\") " pod="openstack/glance-default-internal-api-0" Dec 11 15:29:38 crc kubenswrapper[5050]: I1211 15:29:38.868623 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/8ab1ef48-9579-472e-ab1d-a6efe2269dd9-ceph\") pod \"glance-default-internal-api-0\" (UID: \"8ab1ef48-9579-472e-ab1d-a6efe2269dd9\") " pod="openstack/glance-default-internal-api-0" Dec 11 15:29:38 crc kubenswrapper[5050]: I1211 15:29:38.868647 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8ab1ef48-9579-472e-ab1d-a6efe2269dd9-scripts\") pod \"glance-default-internal-api-0\" (UID: \"8ab1ef48-9579-472e-ab1d-a6efe2269dd9\") " pod="openstack/glance-default-internal-api-0" Dec 11 15:29:38 crc kubenswrapper[5050]: I1211 15:29:38.868680 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8v2gf\" (UniqueName: \"kubernetes.io/projected/8ab1ef48-9579-472e-ab1d-a6efe2269dd9-kube-api-access-8v2gf\") pod \"glance-default-internal-api-0\" (UID: \"8ab1ef48-9579-472e-ab1d-a6efe2269dd9\") " pod="openstack/glance-default-internal-api-0" Dec 11 15:29:38 crc kubenswrapper[5050]: I1211 15:29:38.868731 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8ab1ef48-9579-472e-ab1d-a6efe2269dd9-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"8ab1ef48-9579-472e-ab1d-a6efe2269dd9\") " pod="openstack/glance-default-internal-api-0" Dec 11 15:29:38 crc kubenswrapper[5050]: I1211 15:29:38.868762 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8ab1ef48-9579-472e-ab1d-a6efe2269dd9-logs\") pod \"glance-default-internal-api-0\" (UID: \"8ab1ef48-9579-472e-ab1d-a6efe2269dd9\") " pod="openstack/glance-default-internal-api-0" Dec 11 15:29:38 crc kubenswrapper[5050]: I1211 15:29:38.868805 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ab1ef48-9579-472e-ab1d-a6efe2269dd9-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"8ab1ef48-9579-472e-ab1d-a6efe2269dd9\") " pod="openstack/glance-default-internal-api-0" Dec 11 15:29:38 crc kubenswrapper[5050]: I1211 15:29:38.871063 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8ab1ef48-9579-472e-ab1d-a6efe2269dd9-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"8ab1ef48-9579-472e-ab1d-a6efe2269dd9\") " pod="openstack/glance-default-internal-api-0" Dec 11 15:29:38 crc kubenswrapper[5050]: I1211 15:29:38.871850 5050 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8ab1ef48-9579-472e-ab1d-a6efe2269dd9-logs\") pod \"glance-default-internal-api-0\" (UID: \"8ab1ef48-9579-472e-ab1d-a6efe2269dd9\") " pod="openstack/glance-default-internal-api-0" Dec 11 15:29:38 crc kubenswrapper[5050]: I1211 15:29:38.876773 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8ab1ef48-9579-472e-ab1d-a6efe2269dd9-scripts\") pod \"glance-default-internal-api-0\" (UID: \"8ab1ef48-9579-472e-ab1d-a6efe2269dd9\") " pod="openstack/glance-default-internal-api-0" Dec 11 15:29:38 crc kubenswrapper[5050]: I1211 15:29:38.878303 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ab1ef48-9579-472e-ab1d-a6efe2269dd9-config-data\") pod \"glance-default-internal-api-0\" (UID: \"8ab1ef48-9579-472e-ab1d-a6efe2269dd9\") " pod="openstack/glance-default-internal-api-0" Dec 11 15:29:38 crc kubenswrapper[5050]: I1211 15:29:38.879092 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ab1ef48-9579-472e-ab1d-a6efe2269dd9-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"8ab1ef48-9579-472e-ab1d-a6efe2269dd9\") " pod="openstack/glance-default-internal-api-0" Dec 11 15:29:38 crc kubenswrapper[5050]: I1211 15:29:38.881605 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/8ab1ef48-9579-472e-ab1d-a6efe2269dd9-ceph\") pod \"glance-default-internal-api-0\" (UID: \"8ab1ef48-9579-472e-ab1d-a6efe2269dd9\") " pod="openstack/glance-default-internal-api-0" Dec 11 15:29:38 crc kubenswrapper[5050]: I1211 15:29:38.894716 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8v2gf\" (UniqueName: \"kubernetes.io/projected/8ab1ef48-9579-472e-ab1d-a6efe2269dd9-kube-api-access-8v2gf\") pod \"glance-default-internal-api-0\" (UID: \"8ab1ef48-9579-472e-ab1d-a6efe2269dd9\") " pod="openstack/glance-default-internal-api-0" Dec 11 15:29:38 crc kubenswrapper[5050]: I1211 15:29:38.902676 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 11 15:29:38 crc kubenswrapper[5050]: I1211 15:29:38.962850 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-576555fb6f-qscc8" Dec 11 15:29:39 crc kubenswrapper[5050]: I1211 15:29:39.059943 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-76bf858989-5x589" Dec 11 15:29:39 crc kubenswrapper[5050]: I1211 15:29:39.059978 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-76bf858989-5x589" Dec 11 15:29:39 crc kubenswrapper[5050]: I1211 15:29:39.483215 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 11 15:29:39 crc kubenswrapper[5050]: I1211 15:29:39.534777 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"8ab1ef48-9579-472e-ab1d-a6efe2269dd9","Type":"ContainerStarted","Data":"0f16ef080e866d5bdb1a0379d57f2e4e91ae98d2e10b7f50d8c9b21fbc89b6a7"} Dec 11 15:29:39 crc kubenswrapper[5050]: I1211 15:29:39.538358 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"04726990-8178-4558-9db5-d01140512d63","Type":"ContainerStarted","Data":"12b8b16d0b2428cb0c9b9c5d1cc9225b0456321bf2133c491b272a814e499c12"} Dec 11 15:29:39 crc kubenswrapper[5050]: I1211 15:29:39.538383 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"04726990-8178-4558-9db5-d01140512d63","Type":"ContainerStarted","Data":"dd9569090df21ca4cb282ee533718bce2be1cef496607afd27397fd977f3e9a2"} Dec 11 15:29:39 crc kubenswrapper[5050]: I1211 15:29:39.575824 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56c61de9-b025-4705-8311-bade624f6e13" path="/var/lib/kubelet/pods/56c61de9-b025-4705-8311-bade624f6e13/volumes" Dec 11 15:29:39 crc kubenswrapper[5050]: I1211 15:29:39.582231 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c444ddc6-1d1e-4d4c-8b33-bda628807710" path="/var/lib/kubelet/pods/c444ddc6-1d1e-4d4c-8b33-bda628807710/volumes" Dec 11 15:29:39 crc kubenswrapper[5050]: I1211 15:29:39.771300 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-77d699676c-cljf5" Dec 11 15:29:39 crc kubenswrapper[5050]: I1211 15:29:39.771617 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-77d699676c-cljf5" Dec 11 15:29:40 crc kubenswrapper[5050]: I1211 15:29:40.550105 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"8ab1ef48-9579-472e-ab1d-a6efe2269dd9","Type":"ContainerStarted","Data":"541d08243209c4553bea0eddce55bd750e1df1de1ca1f11f5d5586ff7e8e7646"} Dec 11 15:29:40 crc kubenswrapper[5050]: I1211 15:29:40.551776 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"04726990-8178-4558-9db5-d01140512d63","Type":"ContainerStarted","Data":"f557b94479b555716d475c130c07b1dfabc715159eaa488bd0d3ebcfff9ba53d"} Dec 11 15:29:41 crc kubenswrapper[5050]: I1211 15:29:41.577295 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"8ab1ef48-9579-472e-ab1d-a6efe2269dd9","Type":"ContainerStarted","Data":"e71e39d84593d655dc224154f1e1ec3fb31a2fe6cdbfc23a533076f8c671495c"} Dec 11 15:29:41 crc kubenswrapper[5050]: 
I1211 15:29:41.604241 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.604220054 podStartE2EDuration="3.604220054s" podCreationTimestamp="2025-12-11 15:29:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 15:29:41.602110177 +0000 UTC m=+6072.445832773" watchObservedRunningTime="2025-12-11 15:29:41.604220054 +0000 UTC m=+6072.447942640" Dec 11 15:29:41 crc kubenswrapper[5050]: I1211 15:29:41.607329 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.607306596 podStartE2EDuration="4.607306596s" podCreationTimestamp="2025-12-11 15:29:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 15:29:40.574435011 +0000 UTC m=+6071.418157597" watchObservedRunningTime="2025-12-11 15:29:41.607306596 +0000 UTC m=+6072.451029192" Dec 11 15:29:47 crc kubenswrapper[5050]: I1211 15:29:47.915586 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Dec 11 15:29:47 crc kubenswrapper[5050]: I1211 15:29:47.916088 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Dec 11 15:29:47 crc kubenswrapper[5050]: I1211 15:29:47.948631 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Dec 11 15:29:47 crc kubenswrapper[5050]: I1211 15:29:47.967401 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Dec 11 15:29:48 crc kubenswrapper[5050]: I1211 15:29:48.046431 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-4smbc"] Dec 11 15:29:48 crc kubenswrapper[5050]: I1211 15:29:48.054797 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-46d9-account-create-update-spksj"] Dec 11 15:29:48 crc kubenswrapper[5050]: I1211 15:29:48.062448 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-4smbc"] Dec 11 15:29:48 crc kubenswrapper[5050]: I1211 15:29:48.070432 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-46d9-account-create-update-spksj"] Dec 11 15:29:48 crc kubenswrapper[5050]: I1211 15:29:48.643911 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Dec 11 15:29:48 crc kubenswrapper[5050]: I1211 15:29:48.644197 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Dec 11 15:29:48 crc kubenswrapper[5050]: I1211 15:29:48.902967 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Dec 11 15:29:48 crc kubenswrapper[5050]: I1211 15:29:48.903120 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Dec 11 15:29:48 crc kubenswrapper[5050]: I1211 15:29:48.930670 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Dec 11 15:29:48 crc kubenswrapper[5050]: I1211 15:29:48.948832 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openstack/glance-default-internal-api-0" Dec 11 15:29:49 crc kubenswrapper[5050]: I1211 15:29:49.060831 5050 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-76bf858989-5x589" podUID="32b521d5-20fd-45a3-899f-bd6b605107a5" containerName="horizon" probeResult="failure" output="Get \"http://10.217.1.112:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.112:8080: connect: connection refused" Dec 11 15:29:49 crc kubenswrapper[5050]: I1211 15:29:49.561839 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="14a8814e-5b3e-4cfd-8646-40bd756bdec8" path="/var/lib/kubelet/pods/14a8814e-5b3e-4cfd-8646-40bd756bdec8/volumes" Dec 11 15:29:49 crc kubenswrapper[5050]: I1211 15:29:49.562883 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="557216c3-9af4-4436-b52d-5d77e1562f8d" path="/var/lib/kubelet/pods/557216c3-9af4-4436-b52d-5d77e1562f8d/volumes" Dec 11 15:29:49 crc kubenswrapper[5050]: I1211 15:29:49.651453 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Dec 11 15:29:49 crc kubenswrapper[5050]: I1211 15:29:49.651492 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Dec 11 15:29:49 crc kubenswrapper[5050]: I1211 15:29:49.773804 5050 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-77d699676c-cljf5" podUID="7da61c1c-03d8-4bcc-8bc9-23ccb60c1652" containerName="horizon" probeResult="failure" output="Get \"http://10.217.1.113:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.113:8080: connect: connection refused" Dec 11 15:29:50 crc kubenswrapper[5050]: I1211 15:29:50.563438 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Dec 11 15:29:50 crc kubenswrapper[5050]: I1211 15:29:50.660993 5050 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 11 15:29:50 crc kubenswrapper[5050]: I1211 15:29:50.698754 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Dec 11 15:29:51 crc kubenswrapper[5050]: I1211 15:29:51.546219 5050 scope.go:117] "RemoveContainer" containerID="ebf03c01a0c1f974cd2ea46998cc404d2f6db1e8b59294fa047964a30636886e" Dec 11 15:29:51 crc kubenswrapper[5050]: E1211 15:29:51.546774 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 15:29:51 crc kubenswrapper[5050]: I1211 15:29:51.928266 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Dec 11 15:29:51 crc kubenswrapper[5050]: I1211 15:29:51.928357 5050 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 11 15:29:51 crc kubenswrapper[5050]: I1211 15:29:51.938053 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Dec 11 15:29:57 crc kubenswrapper[5050]: I1211 15:29:57.045742 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-8p965"] Dec 11 15:29:57 crc 
kubenswrapper[5050]: I1211 15:29:57.057745 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-8p965"] Dec 11 15:29:57 crc kubenswrapper[5050]: I1211 15:29:57.563321 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="866d0d68-e4c3-4087-8158-4bc958909d2d" path="/var/lib/kubelet/pods/866d0d68-e4c3-4087-8158-4bc958909d2d/volumes" Dec 11 15:29:59 crc kubenswrapper[5050]: I1211 15:29:59.576692 5050 scope.go:117] "RemoveContainer" containerID="0813a32ca5633075f04d57d7481a616bfb3b1d79ea23b31d7a44c9ad1e2eaa70" Dec 11 15:30:00 crc kubenswrapper[5050]: I1211 15:30:00.121142 5050 scope.go:117] "RemoveContainer" containerID="50f8d2a0e4aea4a31f9c39bf2d8856658bd5a89d1549f2181eaf4daf44aefe00" Dec 11 15:30:00 crc kubenswrapper[5050]: I1211 15:30:00.152602 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29424450-mlg5s"] Dec 11 15:30:00 crc kubenswrapper[5050]: I1211 15:30:00.154198 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29424450-mlg5s" Dec 11 15:30:00 crc kubenswrapper[5050]: I1211 15:30:00.156847 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 11 15:30:00 crc kubenswrapper[5050]: I1211 15:30:00.157081 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 11 15:30:00 crc kubenswrapper[5050]: I1211 15:30:00.163870 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29424450-mlg5s"] Dec 11 15:30:00 crc kubenswrapper[5050]: I1211 15:30:00.181261 5050 scope.go:117] "RemoveContainer" containerID="5ffcd502ee6cb1a15ef61afcfc038cede45fce813cf7d92a6730ecc8be1238ea" Dec 11 15:30:00 crc kubenswrapper[5050]: I1211 15:30:00.203974 5050 scope.go:117] "RemoveContainer" containerID="ecb21c1d38076d022ead6ee9fc040f45ddec40fa36304511acb01c43faf73ad9" Dec 11 15:30:00 crc kubenswrapper[5050]: I1211 15:30:00.254281 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vnvj\" (UniqueName: \"kubernetes.io/projected/4d86435f-31af-44b2-aaf1-1b1e5ffec1da-kube-api-access-7vnvj\") pod \"collect-profiles-29424450-mlg5s\" (UID: \"4d86435f-31af-44b2-aaf1-1b1e5ffec1da\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424450-mlg5s" Dec 11 15:30:00 crc kubenswrapper[5050]: I1211 15:30:00.254545 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4d86435f-31af-44b2-aaf1-1b1e5ffec1da-config-volume\") pod \"collect-profiles-29424450-mlg5s\" (UID: \"4d86435f-31af-44b2-aaf1-1b1e5ffec1da\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424450-mlg5s" Dec 11 15:30:00 crc kubenswrapper[5050]: I1211 15:30:00.254777 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4d86435f-31af-44b2-aaf1-1b1e5ffec1da-secret-volume\") pod \"collect-profiles-29424450-mlg5s\" (UID: \"4d86435f-31af-44b2-aaf1-1b1e5ffec1da\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424450-mlg5s" Dec 11 15:30:00 crc kubenswrapper[5050]: I1211 15:30:00.291856 5050 scope.go:117] "RemoveContainer" 
containerID="5854ceb703a1c14a45377a756c4bfe757bd2470825586e48ec4d5ca3dcc1e542" Dec 11 15:30:00 crc kubenswrapper[5050]: I1211 15:30:00.330771 5050 scope.go:117] "RemoveContainer" containerID="4778bfc392f3f88f224ea74000c59e86b83c8d50962b3649f80cb6462b5b6e1f" Dec 11 15:30:00 crc kubenswrapper[5050]: I1211 15:30:00.357233 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7vnvj\" (UniqueName: \"kubernetes.io/projected/4d86435f-31af-44b2-aaf1-1b1e5ffec1da-kube-api-access-7vnvj\") pod \"collect-profiles-29424450-mlg5s\" (UID: \"4d86435f-31af-44b2-aaf1-1b1e5ffec1da\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424450-mlg5s" Dec 11 15:30:00 crc kubenswrapper[5050]: I1211 15:30:00.357304 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4d86435f-31af-44b2-aaf1-1b1e5ffec1da-config-volume\") pod \"collect-profiles-29424450-mlg5s\" (UID: \"4d86435f-31af-44b2-aaf1-1b1e5ffec1da\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424450-mlg5s" Dec 11 15:30:00 crc kubenswrapper[5050]: I1211 15:30:00.357379 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4d86435f-31af-44b2-aaf1-1b1e5ffec1da-secret-volume\") pod \"collect-profiles-29424450-mlg5s\" (UID: \"4d86435f-31af-44b2-aaf1-1b1e5ffec1da\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424450-mlg5s" Dec 11 15:30:00 crc kubenswrapper[5050]: I1211 15:30:00.359142 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4d86435f-31af-44b2-aaf1-1b1e5ffec1da-config-volume\") pod \"collect-profiles-29424450-mlg5s\" (UID: \"4d86435f-31af-44b2-aaf1-1b1e5ffec1da\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424450-mlg5s" Dec 11 15:30:00 crc kubenswrapper[5050]: I1211 15:30:00.363967 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4d86435f-31af-44b2-aaf1-1b1e5ffec1da-secret-volume\") pod \"collect-profiles-29424450-mlg5s\" (UID: \"4d86435f-31af-44b2-aaf1-1b1e5ffec1da\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424450-mlg5s" Dec 11 15:30:00 crc kubenswrapper[5050]: I1211 15:30:00.372624 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7vnvj\" (UniqueName: \"kubernetes.io/projected/4d86435f-31af-44b2-aaf1-1b1e5ffec1da-kube-api-access-7vnvj\") pod \"collect-profiles-29424450-mlg5s\" (UID: \"4d86435f-31af-44b2-aaf1-1b1e5ffec1da\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424450-mlg5s" Dec 11 15:30:00 crc kubenswrapper[5050]: I1211 15:30:00.581433 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29424450-mlg5s" Dec 11 15:30:02 crc kubenswrapper[5050]: I1211 15:30:00.939096 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-76bf858989-5x589" Dec 11 15:30:02 crc kubenswrapper[5050]: I1211 15:30:01.049845 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29424450-mlg5s"] Dec 11 15:30:02 crc kubenswrapper[5050]: W1211 15:30:01.055043 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4d86435f_31af_44b2_aaf1_1b1e5ffec1da.slice/crio-b476e98429491628debeb7644b3609c72e5f36c77b599d469ca675d8089b866a WatchSource:0}: Error finding container b476e98429491628debeb7644b3609c72e5f36c77b599d469ca675d8089b866a: Status 404 returned error can't find the container with id b476e98429491628debeb7644b3609c72e5f36c77b599d469ca675d8089b866a Dec 11 15:30:02 crc kubenswrapper[5050]: I1211 15:30:01.556212 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-77d699676c-cljf5" Dec 11 15:30:02 crc kubenswrapper[5050]: I1211 15:30:01.767647 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29424450-mlg5s" event={"ID":"4d86435f-31af-44b2-aaf1-1b1e5ffec1da","Type":"ContainerStarted","Data":"b476e98429491628debeb7644b3609c72e5f36c77b599d469ca675d8089b866a"} Dec 11 15:30:02 crc kubenswrapper[5050]: I1211 15:30:02.546119 5050 scope.go:117] "RemoveContainer" containerID="ebf03c01a0c1f974cd2ea46998cc404d2f6db1e8b59294fa047964a30636886e" Dec 11 15:30:02 crc kubenswrapper[5050]: E1211 15:30:02.546304 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 15:30:02 crc kubenswrapper[5050]: I1211 15:30:02.663386 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-76bf858989-5x589" Dec 11 15:30:02 crc kubenswrapper[5050]: I1211 15:30:02.793593 5050 generic.go:334] "Generic (PLEG): container finished" podID="4d86435f-31af-44b2-aaf1-1b1e5ffec1da" containerID="e090aa1a8f881dd9749d26e7cc5db46cd9fe8aa955ab3f74f0a3c694cff74d53" exitCode=0 Dec 11 15:30:02 crc kubenswrapper[5050]: I1211 15:30:02.793712 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29424450-mlg5s" event={"ID":"4d86435f-31af-44b2-aaf1-1b1e5ffec1da","Type":"ContainerDied","Data":"e090aa1a8f881dd9749d26e7cc5db46cd9fe8aa955ab3f74f0a3c694cff74d53"} Dec 11 15:30:03 crc kubenswrapper[5050]: I1211 15:30:03.358374 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-77d699676c-cljf5" Dec 11 15:30:03 crc kubenswrapper[5050]: I1211 15:30:03.465677 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-76bf858989-5x589"] Dec 11 15:30:03 crc kubenswrapper[5050]: I1211 15:30:03.465916 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-76bf858989-5x589" podUID="32b521d5-20fd-45a3-899f-bd6b605107a5" 
containerName="horizon-log" containerID="cri-o://645c0842ad484bd26a50d403490ba1090f7a0165e591f6f2020f8f2bae092ffa" gracePeriod=30 Dec 11 15:30:03 crc kubenswrapper[5050]: I1211 15:30:03.466185 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-76bf858989-5x589" podUID="32b521d5-20fd-45a3-899f-bd6b605107a5" containerName="horizon" containerID="cri-o://ed40918a857d442157ef5a86bc46bf4c1e750179bf25d5078680814160df7074" gracePeriod=30 Dec 11 15:30:04 crc kubenswrapper[5050]: I1211 15:30:04.161299 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29424450-mlg5s" Dec 11 15:30:04 crc kubenswrapper[5050]: I1211 15:30:04.249518 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7vnvj\" (UniqueName: \"kubernetes.io/projected/4d86435f-31af-44b2-aaf1-1b1e5ffec1da-kube-api-access-7vnvj\") pod \"4d86435f-31af-44b2-aaf1-1b1e5ffec1da\" (UID: \"4d86435f-31af-44b2-aaf1-1b1e5ffec1da\") " Dec 11 15:30:04 crc kubenswrapper[5050]: I1211 15:30:04.250397 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4d86435f-31af-44b2-aaf1-1b1e5ffec1da-config-volume\") pod \"4d86435f-31af-44b2-aaf1-1b1e5ffec1da\" (UID: \"4d86435f-31af-44b2-aaf1-1b1e5ffec1da\") " Dec 11 15:30:04 crc kubenswrapper[5050]: I1211 15:30:04.250530 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4d86435f-31af-44b2-aaf1-1b1e5ffec1da-secret-volume\") pod \"4d86435f-31af-44b2-aaf1-1b1e5ffec1da\" (UID: \"4d86435f-31af-44b2-aaf1-1b1e5ffec1da\") " Dec 11 15:30:04 crc kubenswrapper[5050]: I1211 15:30:04.251294 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d86435f-31af-44b2-aaf1-1b1e5ffec1da-config-volume" (OuterVolumeSpecName: "config-volume") pod "4d86435f-31af-44b2-aaf1-1b1e5ffec1da" (UID: "4d86435f-31af-44b2-aaf1-1b1e5ffec1da"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 15:30:04 crc kubenswrapper[5050]: I1211 15:30:04.256192 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d86435f-31af-44b2-aaf1-1b1e5ffec1da-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "4d86435f-31af-44b2-aaf1-1b1e5ffec1da" (UID: "4d86435f-31af-44b2-aaf1-1b1e5ffec1da"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 15:30:04 crc kubenswrapper[5050]: I1211 15:30:04.256273 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d86435f-31af-44b2-aaf1-1b1e5ffec1da-kube-api-access-7vnvj" (OuterVolumeSpecName: "kube-api-access-7vnvj") pod "4d86435f-31af-44b2-aaf1-1b1e5ffec1da" (UID: "4d86435f-31af-44b2-aaf1-1b1e5ffec1da"). InnerVolumeSpecName "kube-api-access-7vnvj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 15:30:04 crc kubenswrapper[5050]: I1211 15:30:04.354217 5050 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4d86435f-31af-44b2-aaf1-1b1e5ffec1da-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 11 15:30:04 crc kubenswrapper[5050]: I1211 15:30:04.354276 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7vnvj\" (UniqueName: \"kubernetes.io/projected/4d86435f-31af-44b2-aaf1-1b1e5ffec1da-kube-api-access-7vnvj\") on node \"crc\" DevicePath \"\"" Dec 11 15:30:04 crc kubenswrapper[5050]: I1211 15:30:04.354288 5050 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4d86435f-31af-44b2-aaf1-1b1e5ffec1da-config-volume\") on node \"crc\" DevicePath \"\"" Dec 11 15:30:04 crc kubenswrapper[5050]: I1211 15:30:04.814772 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29424450-mlg5s" event={"ID":"4d86435f-31af-44b2-aaf1-1b1e5ffec1da","Type":"ContainerDied","Data":"b476e98429491628debeb7644b3609c72e5f36c77b599d469ca675d8089b866a"} Dec 11 15:30:04 crc kubenswrapper[5050]: I1211 15:30:04.814814 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29424450-mlg5s" Dec 11 15:30:04 crc kubenswrapper[5050]: I1211 15:30:04.814819 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b476e98429491628debeb7644b3609c72e5f36c77b599d469ca675d8089b866a" Dec 11 15:30:05 crc kubenswrapper[5050]: I1211 15:30:05.226762 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29424405-j6477"] Dec 11 15:30:05 crc kubenswrapper[5050]: I1211 15:30:05.236834 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29424405-j6477"] Dec 11 15:30:05 crc kubenswrapper[5050]: I1211 15:30:05.561293 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ebc932e0-34cb-4293-a192-a5ec57f96e9c" path="/var/lib/kubelet/pods/ebc932e0-34cb-4293-a192-a5ec57f96e9c/volumes" Dec 11 15:30:06 crc kubenswrapper[5050]: I1211 15:30:06.846841 5050 generic.go:334] "Generic (PLEG): container finished" podID="32b521d5-20fd-45a3-899f-bd6b605107a5" containerID="ed40918a857d442157ef5a86bc46bf4c1e750179bf25d5078680814160df7074" exitCode=0 Dec 11 15:30:06 crc kubenswrapper[5050]: I1211 15:30:06.847117 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-76bf858989-5x589" event={"ID":"32b521d5-20fd-45a3-899f-bd6b605107a5","Type":"ContainerDied","Data":"ed40918a857d442157ef5a86bc46bf4c1e750179bf25d5078680814160df7074"} Dec 11 15:30:07 crc kubenswrapper[5050]: I1211 15:30:07.864685 5050 generic.go:334] "Generic (PLEG): container finished" podID="adbd909e-6283-4642-a386-8e5cfff2f199" containerID="06fda559c919c413e3b11c7a5acc0df2f40f7527385816c092affd1087075b2c" exitCode=137 Dec 11 15:30:07 crc kubenswrapper[5050]: I1211 15:30:07.865047 5050 generic.go:334] "Generic (PLEG): container finished" podID="adbd909e-6283-4642-a386-8e5cfff2f199" containerID="4ba813f858c8b2a29b98a985aae1143279a0ef2b672cb37677836cf6c86767ed" exitCode=137 Dec 11 15:30:07 crc kubenswrapper[5050]: I1211 15:30:07.865072 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-576555fb6f-qscc8" 
event={"ID":"adbd909e-6283-4642-a386-8e5cfff2f199","Type":"ContainerDied","Data":"06fda559c919c413e3b11c7a5acc0df2f40f7527385816c092affd1087075b2c"} Dec 11 15:30:07 crc kubenswrapper[5050]: I1211 15:30:07.865102 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-576555fb6f-qscc8" event={"ID":"adbd909e-6283-4642-a386-8e5cfff2f199","Type":"ContainerDied","Data":"4ba813f858c8b2a29b98a985aae1143279a0ef2b672cb37677836cf6c86767ed"} Dec 11 15:30:07 crc kubenswrapper[5050]: I1211 15:30:07.865114 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-576555fb6f-qscc8" event={"ID":"adbd909e-6283-4642-a386-8e5cfff2f199","Type":"ContainerDied","Data":"0465eea08e4e410d700bf1214ba2f04d38f95456fa78613d50e5f2bef8b5a18b"} Dec 11 15:30:07 crc kubenswrapper[5050]: I1211 15:30:07.865124 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0465eea08e4e410d700bf1214ba2f04d38f95456fa78613d50e5f2bef8b5a18b" Dec 11 15:30:07 crc kubenswrapper[5050]: I1211 15:30:07.917713 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-576555fb6f-qscc8" Dec 11 15:30:08 crc kubenswrapper[5050]: I1211 15:30:08.027999 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/adbd909e-6283-4642-a386-8e5cfff2f199-config-data\") pod \"adbd909e-6283-4642-a386-8e5cfff2f199\" (UID: \"adbd909e-6283-4642-a386-8e5cfff2f199\") " Dec 11 15:30:08 crc kubenswrapper[5050]: I1211 15:30:08.028145 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dg65p\" (UniqueName: \"kubernetes.io/projected/adbd909e-6283-4642-a386-8e5cfff2f199-kube-api-access-dg65p\") pod \"adbd909e-6283-4642-a386-8e5cfff2f199\" (UID: \"adbd909e-6283-4642-a386-8e5cfff2f199\") " Dec 11 15:30:08 crc kubenswrapper[5050]: I1211 15:30:08.028194 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/adbd909e-6283-4642-a386-8e5cfff2f199-logs\") pod \"adbd909e-6283-4642-a386-8e5cfff2f199\" (UID: \"adbd909e-6283-4642-a386-8e5cfff2f199\") " Dec 11 15:30:08 crc kubenswrapper[5050]: I1211 15:30:08.028272 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/adbd909e-6283-4642-a386-8e5cfff2f199-horizon-secret-key\") pod \"adbd909e-6283-4642-a386-8e5cfff2f199\" (UID: \"adbd909e-6283-4642-a386-8e5cfff2f199\") " Dec 11 15:30:08 crc kubenswrapper[5050]: I1211 15:30:08.028297 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/adbd909e-6283-4642-a386-8e5cfff2f199-scripts\") pod \"adbd909e-6283-4642-a386-8e5cfff2f199\" (UID: \"adbd909e-6283-4642-a386-8e5cfff2f199\") " Dec 11 15:30:08 crc kubenswrapper[5050]: I1211 15:30:08.028835 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/adbd909e-6283-4642-a386-8e5cfff2f199-logs" (OuterVolumeSpecName: "logs") pod "adbd909e-6283-4642-a386-8e5cfff2f199" (UID: "adbd909e-6283-4642-a386-8e5cfff2f199"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 15:30:08 crc kubenswrapper[5050]: I1211 15:30:08.029807 5050 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/adbd909e-6283-4642-a386-8e5cfff2f199-logs\") on node \"crc\" DevicePath \"\"" Dec 11 15:30:08 crc kubenswrapper[5050]: I1211 15:30:08.033093 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/adbd909e-6283-4642-a386-8e5cfff2f199-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "adbd909e-6283-4642-a386-8e5cfff2f199" (UID: "adbd909e-6283-4642-a386-8e5cfff2f199"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 15:30:08 crc kubenswrapper[5050]: I1211 15:30:08.034317 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/adbd909e-6283-4642-a386-8e5cfff2f199-kube-api-access-dg65p" (OuterVolumeSpecName: "kube-api-access-dg65p") pod "adbd909e-6283-4642-a386-8e5cfff2f199" (UID: "adbd909e-6283-4642-a386-8e5cfff2f199"). InnerVolumeSpecName "kube-api-access-dg65p". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 15:30:08 crc kubenswrapper[5050]: I1211 15:30:08.052548 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/adbd909e-6283-4642-a386-8e5cfff2f199-scripts" (OuterVolumeSpecName: "scripts") pod "adbd909e-6283-4642-a386-8e5cfff2f199" (UID: "adbd909e-6283-4642-a386-8e5cfff2f199"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 15:30:08 crc kubenswrapper[5050]: I1211 15:30:08.053885 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/adbd909e-6283-4642-a386-8e5cfff2f199-config-data" (OuterVolumeSpecName: "config-data") pod "adbd909e-6283-4642-a386-8e5cfff2f199" (UID: "adbd909e-6283-4642-a386-8e5cfff2f199"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 15:30:08 crc kubenswrapper[5050]: I1211 15:30:08.131115 5050 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/adbd909e-6283-4642-a386-8e5cfff2f199-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Dec 11 15:30:08 crc kubenswrapper[5050]: I1211 15:30:08.131148 5050 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/adbd909e-6283-4642-a386-8e5cfff2f199-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 15:30:08 crc kubenswrapper[5050]: I1211 15:30:08.131157 5050 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/adbd909e-6283-4642-a386-8e5cfff2f199-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 15:30:08 crc kubenswrapper[5050]: I1211 15:30:08.131188 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dg65p\" (UniqueName: \"kubernetes.io/projected/adbd909e-6283-4642-a386-8e5cfff2f199-kube-api-access-dg65p\") on node \"crc\" DevicePath \"\"" Dec 11 15:30:08 crc kubenswrapper[5050]: I1211 15:30:08.875035 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-576555fb6f-qscc8" Dec 11 15:30:08 crc kubenswrapper[5050]: I1211 15:30:08.916775 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-576555fb6f-qscc8"] Dec 11 15:30:08 crc kubenswrapper[5050]: I1211 15:30:08.924537 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-576555fb6f-qscc8"] Dec 11 15:30:09 crc kubenswrapper[5050]: I1211 15:30:09.058614 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-76bf858989-5x589" podUID="32b521d5-20fd-45a3-899f-bd6b605107a5" containerName="horizon" probeResult="failure" output="Get \"http://10.217.1.112:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.112:8080: connect: connection refused" Dec 11 15:30:09 crc kubenswrapper[5050]: I1211 15:30:09.558708 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="adbd909e-6283-4642-a386-8e5cfff2f199" path="/var/lib/kubelet/pods/adbd909e-6283-4642-a386-8e5cfff2f199/volumes" Dec 11 15:30:10 crc kubenswrapper[5050]: I1211 15:30:10.827948 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-5fb79d99b5-m4xgd"] Dec 11 15:30:10 crc kubenswrapper[5050]: E1211 15:30:10.829235 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d86435f-31af-44b2-aaf1-1b1e5ffec1da" containerName="collect-profiles" Dec 11 15:30:10 crc kubenswrapper[5050]: I1211 15:30:10.829261 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d86435f-31af-44b2-aaf1-1b1e5ffec1da" containerName="collect-profiles" Dec 11 15:30:10 crc kubenswrapper[5050]: E1211 15:30:10.829282 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="adbd909e-6283-4642-a386-8e5cfff2f199" containerName="horizon" Dec 11 15:30:10 crc kubenswrapper[5050]: I1211 15:30:10.829290 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="adbd909e-6283-4642-a386-8e5cfff2f199" containerName="horizon" Dec 11 15:30:10 crc kubenswrapper[5050]: E1211 15:30:10.829321 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="adbd909e-6283-4642-a386-8e5cfff2f199" containerName="horizon-log" Dec 11 15:30:10 crc kubenswrapper[5050]: I1211 15:30:10.829328 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="adbd909e-6283-4642-a386-8e5cfff2f199" containerName="horizon-log" Dec 11 15:30:10 crc kubenswrapper[5050]: I1211 15:30:10.829580 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="adbd909e-6283-4642-a386-8e5cfff2f199" containerName="horizon" Dec 11 15:30:10 crc kubenswrapper[5050]: I1211 15:30:10.829603 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="adbd909e-6283-4642-a386-8e5cfff2f199" containerName="horizon-log" Dec 11 15:30:10 crc kubenswrapper[5050]: I1211 15:30:10.829623 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d86435f-31af-44b2-aaf1-1b1e5ffec1da" containerName="collect-profiles" Dec 11 15:30:10 crc kubenswrapper[5050]: I1211 15:30:10.830892 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5fb79d99b5-m4xgd" Dec 11 15:30:10 crc kubenswrapper[5050]: I1211 15:30:10.851161 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5fb79d99b5-m4xgd"] Dec 11 15:30:10 crc kubenswrapper[5050]: I1211 15:30:10.985379 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wnqln\" (UniqueName: \"kubernetes.io/projected/2086bc41-00a2-4c97-a491-08511f3ed6e5-kube-api-access-wnqln\") pod \"horizon-5fb79d99b5-m4xgd\" (UID: \"2086bc41-00a2-4c97-a491-08511f3ed6e5\") " pod="openstack/horizon-5fb79d99b5-m4xgd" Dec 11 15:30:10 crc kubenswrapper[5050]: I1211 15:30:10.985456 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2086bc41-00a2-4c97-a491-08511f3ed6e5-config-data\") pod \"horizon-5fb79d99b5-m4xgd\" (UID: \"2086bc41-00a2-4c97-a491-08511f3ed6e5\") " pod="openstack/horizon-5fb79d99b5-m4xgd" Dec 11 15:30:10 crc kubenswrapper[5050]: I1211 15:30:10.985672 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/2086bc41-00a2-4c97-a491-08511f3ed6e5-horizon-secret-key\") pod \"horizon-5fb79d99b5-m4xgd\" (UID: \"2086bc41-00a2-4c97-a491-08511f3ed6e5\") " pod="openstack/horizon-5fb79d99b5-m4xgd" Dec 11 15:30:10 crc kubenswrapper[5050]: I1211 15:30:10.985821 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2086bc41-00a2-4c97-a491-08511f3ed6e5-scripts\") pod \"horizon-5fb79d99b5-m4xgd\" (UID: \"2086bc41-00a2-4c97-a491-08511f3ed6e5\") " pod="openstack/horizon-5fb79d99b5-m4xgd" Dec 11 15:30:10 crc kubenswrapper[5050]: I1211 15:30:10.986053 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2086bc41-00a2-4c97-a491-08511f3ed6e5-logs\") pod \"horizon-5fb79d99b5-m4xgd\" (UID: \"2086bc41-00a2-4c97-a491-08511f3ed6e5\") " pod="openstack/horizon-5fb79d99b5-m4xgd" Dec 11 15:30:11 crc kubenswrapper[5050]: I1211 15:30:11.088434 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wnqln\" (UniqueName: \"kubernetes.io/projected/2086bc41-00a2-4c97-a491-08511f3ed6e5-kube-api-access-wnqln\") pod \"horizon-5fb79d99b5-m4xgd\" (UID: \"2086bc41-00a2-4c97-a491-08511f3ed6e5\") " pod="openstack/horizon-5fb79d99b5-m4xgd" Dec 11 15:30:11 crc kubenswrapper[5050]: I1211 15:30:11.088567 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2086bc41-00a2-4c97-a491-08511f3ed6e5-config-data\") pod \"horizon-5fb79d99b5-m4xgd\" (UID: \"2086bc41-00a2-4c97-a491-08511f3ed6e5\") " pod="openstack/horizon-5fb79d99b5-m4xgd" Dec 11 15:30:11 crc kubenswrapper[5050]: I1211 15:30:11.088632 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/2086bc41-00a2-4c97-a491-08511f3ed6e5-horizon-secret-key\") pod \"horizon-5fb79d99b5-m4xgd\" (UID: \"2086bc41-00a2-4c97-a491-08511f3ed6e5\") " pod="openstack/horizon-5fb79d99b5-m4xgd" Dec 11 15:30:11 crc kubenswrapper[5050]: I1211 15:30:11.088669 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/2086bc41-00a2-4c97-a491-08511f3ed6e5-scripts\") pod \"horizon-5fb79d99b5-m4xgd\" (UID: \"2086bc41-00a2-4c97-a491-08511f3ed6e5\") " pod="openstack/horizon-5fb79d99b5-m4xgd" Dec 11 15:30:11 crc kubenswrapper[5050]: I1211 15:30:11.088733 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2086bc41-00a2-4c97-a491-08511f3ed6e5-logs\") pod \"horizon-5fb79d99b5-m4xgd\" (UID: \"2086bc41-00a2-4c97-a491-08511f3ed6e5\") " pod="openstack/horizon-5fb79d99b5-m4xgd" Dec 11 15:30:11 crc kubenswrapper[5050]: I1211 15:30:11.089265 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2086bc41-00a2-4c97-a491-08511f3ed6e5-logs\") pod \"horizon-5fb79d99b5-m4xgd\" (UID: \"2086bc41-00a2-4c97-a491-08511f3ed6e5\") " pod="openstack/horizon-5fb79d99b5-m4xgd" Dec 11 15:30:11 crc kubenswrapper[5050]: I1211 15:30:11.089697 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2086bc41-00a2-4c97-a491-08511f3ed6e5-scripts\") pod \"horizon-5fb79d99b5-m4xgd\" (UID: \"2086bc41-00a2-4c97-a491-08511f3ed6e5\") " pod="openstack/horizon-5fb79d99b5-m4xgd" Dec 11 15:30:11 crc kubenswrapper[5050]: I1211 15:30:11.091315 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2086bc41-00a2-4c97-a491-08511f3ed6e5-config-data\") pod \"horizon-5fb79d99b5-m4xgd\" (UID: \"2086bc41-00a2-4c97-a491-08511f3ed6e5\") " pod="openstack/horizon-5fb79d99b5-m4xgd" Dec 11 15:30:11 crc kubenswrapper[5050]: I1211 15:30:11.104856 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/2086bc41-00a2-4c97-a491-08511f3ed6e5-horizon-secret-key\") pod \"horizon-5fb79d99b5-m4xgd\" (UID: \"2086bc41-00a2-4c97-a491-08511f3ed6e5\") " pod="openstack/horizon-5fb79d99b5-m4xgd" Dec 11 15:30:11 crc kubenswrapper[5050]: I1211 15:30:11.114390 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wnqln\" (UniqueName: \"kubernetes.io/projected/2086bc41-00a2-4c97-a491-08511f3ed6e5-kube-api-access-wnqln\") pod \"horizon-5fb79d99b5-m4xgd\" (UID: \"2086bc41-00a2-4c97-a491-08511f3ed6e5\") " pod="openstack/horizon-5fb79d99b5-m4xgd" Dec 11 15:30:11 crc kubenswrapper[5050]: I1211 15:30:11.155987 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5fb79d99b5-m4xgd" Dec 11 15:30:11 crc kubenswrapper[5050]: I1211 15:30:11.641745 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5fb79d99b5-m4xgd"] Dec 11 15:30:11 crc kubenswrapper[5050]: I1211 15:30:11.914598 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5fb79d99b5-m4xgd" event={"ID":"2086bc41-00a2-4c97-a491-08511f3ed6e5","Type":"ContainerStarted","Data":"ade4785bb12143f657821adcba6c18936e5d36ff06762ef37e21832bef0ce26e"} Dec 11 15:30:11 crc kubenswrapper[5050]: I1211 15:30:11.914903 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5fb79d99b5-m4xgd" event={"ID":"2086bc41-00a2-4c97-a491-08511f3ed6e5","Type":"ContainerStarted","Data":"9cdfb6d09a7db550c9d608e530246bd8ff9c86f2fe6da6c46a83cd0ec58dd03e"} Dec 11 15:30:12 crc kubenswrapper[5050]: I1211 15:30:12.099414 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-create-9rvnd"] Dec 11 15:30:12 crc kubenswrapper[5050]: I1211 15:30:12.100684 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-9rvnd" Dec 11 15:30:12 crc kubenswrapper[5050]: I1211 15:30:12.116688 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-1848-account-create-update-npqlp"] Dec 11 15:30:12 crc kubenswrapper[5050]: I1211 15:30:12.118052 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-1848-account-create-update-npqlp" Dec 11 15:30:12 crc kubenswrapper[5050]: I1211 15:30:12.120277 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-db-secret" Dec 11 15:30:12 crc kubenswrapper[5050]: I1211 15:30:12.127180 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-9rvnd"] Dec 11 15:30:12 crc kubenswrapper[5050]: I1211 15:30:12.150098 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-1848-account-create-update-npqlp"] Dec 11 15:30:12 crc kubenswrapper[5050]: I1211 15:30:12.211001 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8f08ac59-fc5a-4846-ba07-e5de181aa3c8-operator-scripts\") pod \"heat-1848-account-create-update-npqlp\" (UID: \"8f08ac59-fc5a-4846-ba07-e5de181aa3c8\") " pod="openstack/heat-1848-account-create-update-npqlp" Dec 11 15:30:12 crc kubenswrapper[5050]: I1211 15:30:12.211499 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8962\" (UniqueName: \"kubernetes.io/projected/97e356fb-2d8a-47f3-b2cc-c2af075c658c-kube-api-access-n8962\") pod \"heat-db-create-9rvnd\" (UID: \"97e356fb-2d8a-47f3-b2cc-c2af075c658c\") " pod="openstack/heat-db-create-9rvnd" Dec 11 15:30:12 crc kubenswrapper[5050]: I1211 15:30:12.211560 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72lsm\" (UniqueName: \"kubernetes.io/projected/8f08ac59-fc5a-4846-ba07-e5de181aa3c8-kube-api-access-72lsm\") pod \"heat-1848-account-create-update-npqlp\" (UID: \"8f08ac59-fc5a-4846-ba07-e5de181aa3c8\") " pod="openstack/heat-1848-account-create-update-npqlp" Dec 11 15:30:12 crc kubenswrapper[5050]: I1211 15:30:12.211666 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/97e356fb-2d8a-47f3-b2cc-c2af075c658c-operator-scripts\") pod \"heat-db-create-9rvnd\" (UID: \"97e356fb-2d8a-47f3-b2cc-c2af075c658c\") " pod="openstack/heat-db-create-9rvnd" Dec 11 15:30:12 crc kubenswrapper[5050]: I1211 15:30:12.313200 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n8962\" (UniqueName: \"kubernetes.io/projected/97e356fb-2d8a-47f3-b2cc-c2af075c658c-kube-api-access-n8962\") pod \"heat-db-create-9rvnd\" (UID: \"97e356fb-2d8a-47f3-b2cc-c2af075c658c\") " pod="openstack/heat-db-create-9rvnd" Dec 11 15:30:12 crc kubenswrapper[5050]: I1211 15:30:12.313272 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-72lsm\" (UniqueName: \"kubernetes.io/projected/8f08ac59-fc5a-4846-ba07-e5de181aa3c8-kube-api-access-72lsm\") pod \"heat-1848-account-create-update-npqlp\" (UID: \"8f08ac59-fc5a-4846-ba07-e5de181aa3c8\") " pod="openstack/heat-1848-account-create-update-npqlp" Dec 11 15:30:12 crc kubenswrapper[5050]: I1211 15:30:12.313344 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/97e356fb-2d8a-47f3-b2cc-c2af075c658c-operator-scripts\") pod \"heat-db-create-9rvnd\" (UID: \"97e356fb-2d8a-47f3-b2cc-c2af075c658c\") " pod="openstack/heat-db-create-9rvnd" Dec 11 15:30:12 crc kubenswrapper[5050]: I1211 15:30:12.313435 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8f08ac59-fc5a-4846-ba07-e5de181aa3c8-operator-scripts\") pod \"heat-1848-account-create-update-npqlp\" (UID: \"8f08ac59-fc5a-4846-ba07-e5de181aa3c8\") " pod="openstack/heat-1848-account-create-update-npqlp" Dec 11 15:30:12 crc kubenswrapper[5050]: I1211 15:30:12.314078 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/97e356fb-2d8a-47f3-b2cc-c2af075c658c-operator-scripts\") pod \"heat-db-create-9rvnd\" (UID: \"97e356fb-2d8a-47f3-b2cc-c2af075c658c\") " pod="openstack/heat-db-create-9rvnd" Dec 11 15:30:12 crc kubenswrapper[5050]: I1211 15:30:12.314265 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8f08ac59-fc5a-4846-ba07-e5de181aa3c8-operator-scripts\") pod \"heat-1848-account-create-update-npqlp\" (UID: \"8f08ac59-fc5a-4846-ba07-e5de181aa3c8\") " pod="openstack/heat-1848-account-create-update-npqlp" Dec 11 15:30:12 crc kubenswrapper[5050]: I1211 15:30:12.331281 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n8962\" (UniqueName: \"kubernetes.io/projected/97e356fb-2d8a-47f3-b2cc-c2af075c658c-kube-api-access-n8962\") pod \"heat-db-create-9rvnd\" (UID: \"97e356fb-2d8a-47f3-b2cc-c2af075c658c\") " pod="openstack/heat-db-create-9rvnd" Dec 11 15:30:12 crc kubenswrapper[5050]: I1211 15:30:12.331619 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-72lsm\" (UniqueName: \"kubernetes.io/projected/8f08ac59-fc5a-4846-ba07-e5de181aa3c8-kube-api-access-72lsm\") pod \"heat-1848-account-create-update-npqlp\" (UID: \"8f08ac59-fc5a-4846-ba07-e5de181aa3c8\") " pod="openstack/heat-1848-account-create-update-npqlp" Dec 11 15:30:12 crc kubenswrapper[5050]: I1211 15:30:12.456170 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-create-9rvnd" Dec 11 15:30:12 crc kubenswrapper[5050]: I1211 15:30:12.479960 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-1848-account-create-update-npqlp" Dec 11 15:30:12 crc kubenswrapper[5050]: I1211 15:30:12.930810 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5fb79d99b5-m4xgd" event={"ID":"2086bc41-00a2-4c97-a491-08511f3ed6e5","Type":"ContainerStarted","Data":"c2abb4cc449a9d8127c16b79dfe68ac27a46606a1e98e66a409122244f1e0137"} Dec 11 15:30:12 crc kubenswrapper[5050]: I1211 15:30:12.969567 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-9rvnd"] Dec 11 15:30:12 crc kubenswrapper[5050]: W1211 15:30:12.980311 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod97e356fb_2d8a_47f3_b2cc_c2af075c658c.slice/crio-786b7646337534e36b0de8b697c4877afd276ab422e917e6cb68b5d539217df3 WatchSource:0}: Error finding container 786b7646337534e36b0de8b697c4877afd276ab422e917e6cb68b5d539217df3: Status 404 returned error can't find the container with id 786b7646337534e36b0de8b697c4877afd276ab422e917e6cb68b5d539217df3 Dec 11 15:30:12 crc kubenswrapper[5050]: I1211 15:30:12.985706 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-5fb79d99b5-m4xgd" podStartSLOduration=2.985687893 podStartE2EDuration="2.985687893s" podCreationTimestamp="2025-12-11 15:30:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 15:30:12.98557971 +0000 UTC m=+6103.829302296" watchObservedRunningTime="2025-12-11 15:30:12.985687893 +0000 UTC m=+6103.829410479" Dec 11 15:30:13 crc kubenswrapper[5050]: I1211 15:30:13.103117 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-1848-account-create-update-npqlp"] Dec 11 15:30:13 crc kubenswrapper[5050]: E1211 15:30:13.797603 5050 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8f08ac59_fc5a_4846_ba07_e5de181aa3c8.slice/crio-a50da491ac8c14648bab31f323eb7ac6bcc69a84651a8d2540ce227e0ff2ebef.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8f08ac59_fc5a_4846_ba07_e5de181aa3c8.slice/crio-conmon-a50da491ac8c14648bab31f323eb7ac6bcc69a84651a8d2540ce227e0ff2ebef.scope\": RecentStats: unable to find data in memory cache]" Dec 11 15:30:13 crc kubenswrapper[5050]: I1211 15:30:13.944300 5050 generic.go:334] "Generic (PLEG): container finished" podID="8f08ac59-fc5a-4846-ba07-e5de181aa3c8" containerID="a50da491ac8c14648bab31f323eb7ac6bcc69a84651a8d2540ce227e0ff2ebef" exitCode=0 Dec 11 15:30:13 crc kubenswrapper[5050]: I1211 15:30:13.944402 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-1848-account-create-update-npqlp" event={"ID":"8f08ac59-fc5a-4846-ba07-e5de181aa3c8","Type":"ContainerDied","Data":"a50da491ac8c14648bab31f323eb7ac6bcc69a84651a8d2540ce227e0ff2ebef"} Dec 11 15:30:13 crc kubenswrapper[5050]: I1211 15:30:13.944437 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-1848-account-create-update-npqlp" 
event={"ID":"8f08ac59-fc5a-4846-ba07-e5de181aa3c8","Type":"ContainerStarted","Data":"2875c9e7dbad3c7071b418daea49c9af77ca7f6795a705197a6bad94d2bb0bd4"} Dec 11 15:30:13 crc kubenswrapper[5050]: I1211 15:30:13.953050 5050 generic.go:334] "Generic (PLEG): container finished" podID="97e356fb-2d8a-47f3-b2cc-c2af075c658c" containerID="bc0e5c33022195409827a4a5e6edb5b46a7222d53c1dc50c05cb0771a9835ecd" exitCode=0 Dec 11 15:30:13 crc kubenswrapper[5050]: I1211 15:30:13.953460 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-9rvnd" event={"ID":"97e356fb-2d8a-47f3-b2cc-c2af075c658c","Type":"ContainerDied","Data":"bc0e5c33022195409827a4a5e6edb5b46a7222d53c1dc50c05cb0771a9835ecd"} Dec 11 15:30:13 crc kubenswrapper[5050]: I1211 15:30:13.953497 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-9rvnd" event={"ID":"97e356fb-2d8a-47f3-b2cc-c2af075c658c","Type":"ContainerStarted","Data":"786b7646337534e36b0de8b697c4877afd276ab422e917e6cb68b5d539217df3"} Dec 11 15:30:14 crc kubenswrapper[5050]: I1211 15:30:14.547028 5050 scope.go:117] "RemoveContainer" containerID="ebf03c01a0c1f974cd2ea46998cc404d2f6db1e8b59294fa047964a30636886e" Dec 11 15:30:14 crc kubenswrapper[5050]: E1211 15:30:14.547684 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 15:30:15 crc kubenswrapper[5050]: I1211 15:30:15.382926 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-1848-account-create-update-npqlp" Dec 11 15:30:15 crc kubenswrapper[5050]: I1211 15:30:15.392879 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-create-9rvnd" Dec 11 15:30:15 crc kubenswrapper[5050]: I1211 15:30:15.486755 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n8962\" (UniqueName: \"kubernetes.io/projected/97e356fb-2d8a-47f3-b2cc-c2af075c658c-kube-api-access-n8962\") pod \"97e356fb-2d8a-47f3-b2cc-c2af075c658c\" (UID: \"97e356fb-2d8a-47f3-b2cc-c2af075c658c\") " Dec 11 15:30:15 crc kubenswrapper[5050]: I1211 15:30:15.486922 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-72lsm\" (UniqueName: \"kubernetes.io/projected/8f08ac59-fc5a-4846-ba07-e5de181aa3c8-kube-api-access-72lsm\") pod \"8f08ac59-fc5a-4846-ba07-e5de181aa3c8\" (UID: \"8f08ac59-fc5a-4846-ba07-e5de181aa3c8\") " Dec 11 15:30:15 crc kubenswrapper[5050]: I1211 15:30:15.487023 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8f08ac59-fc5a-4846-ba07-e5de181aa3c8-operator-scripts\") pod \"8f08ac59-fc5a-4846-ba07-e5de181aa3c8\" (UID: \"8f08ac59-fc5a-4846-ba07-e5de181aa3c8\") " Dec 11 15:30:15 crc kubenswrapper[5050]: I1211 15:30:15.487053 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/97e356fb-2d8a-47f3-b2cc-c2af075c658c-operator-scripts\") pod \"97e356fb-2d8a-47f3-b2cc-c2af075c658c\" (UID: \"97e356fb-2d8a-47f3-b2cc-c2af075c658c\") " Dec 11 15:30:15 crc kubenswrapper[5050]: I1211 15:30:15.487751 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/97e356fb-2d8a-47f3-b2cc-c2af075c658c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "97e356fb-2d8a-47f3-b2cc-c2af075c658c" (UID: "97e356fb-2d8a-47f3-b2cc-c2af075c658c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 15:30:15 crc kubenswrapper[5050]: I1211 15:30:15.487763 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f08ac59-fc5a-4846-ba07-e5de181aa3c8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8f08ac59-fc5a-4846-ba07-e5de181aa3c8" (UID: "8f08ac59-fc5a-4846-ba07-e5de181aa3c8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 15:30:15 crc kubenswrapper[5050]: I1211 15:30:15.488194 5050 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8f08ac59-fc5a-4846-ba07-e5de181aa3c8-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 15:30:15 crc kubenswrapper[5050]: I1211 15:30:15.488217 5050 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/97e356fb-2d8a-47f3-b2cc-c2af075c658c-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 15:30:15 crc kubenswrapper[5050]: I1211 15:30:15.493696 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f08ac59-fc5a-4846-ba07-e5de181aa3c8-kube-api-access-72lsm" (OuterVolumeSpecName: "kube-api-access-72lsm") pod "8f08ac59-fc5a-4846-ba07-e5de181aa3c8" (UID: "8f08ac59-fc5a-4846-ba07-e5de181aa3c8"). InnerVolumeSpecName "kube-api-access-72lsm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 15:30:15 crc kubenswrapper[5050]: I1211 15:30:15.498552 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97e356fb-2d8a-47f3-b2cc-c2af075c658c-kube-api-access-n8962" (OuterVolumeSpecName: "kube-api-access-n8962") pod "97e356fb-2d8a-47f3-b2cc-c2af075c658c" (UID: "97e356fb-2d8a-47f3-b2cc-c2af075c658c"). InnerVolumeSpecName "kube-api-access-n8962". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 15:30:15 crc kubenswrapper[5050]: I1211 15:30:15.589874 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n8962\" (UniqueName: \"kubernetes.io/projected/97e356fb-2d8a-47f3-b2cc-c2af075c658c-kube-api-access-n8962\") on node \"crc\" DevicePath \"\"" Dec 11 15:30:15 crc kubenswrapper[5050]: I1211 15:30:15.589992 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-72lsm\" (UniqueName: \"kubernetes.io/projected/8f08ac59-fc5a-4846-ba07-e5de181aa3c8-kube-api-access-72lsm\") on node \"crc\" DevicePath \"\"" Dec 11 15:30:15 crc kubenswrapper[5050]: I1211 15:30:15.972576 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-1848-account-create-update-npqlp" event={"ID":"8f08ac59-fc5a-4846-ba07-e5de181aa3c8","Type":"ContainerDied","Data":"2875c9e7dbad3c7071b418daea49c9af77ca7f6795a705197a6bad94d2bb0bd4"} Dec 11 15:30:15 crc kubenswrapper[5050]: I1211 15:30:15.973103 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2875c9e7dbad3c7071b418daea49c9af77ca7f6795a705197a6bad94d2bb0bd4" Dec 11 15:30:15 crc kubenswrapper[5050]: I1211 15:30:15.973231 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-1848-account-create-update-npqlp" Dec 11 15:30:15 crc kubenswrapper[5050]: I1211 15:30:15.975756 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-9rvnd" event={"ID":"97e356fb-2d8a-47f3-b2cc-c2af075c658c","Type":"ContainerDied","Data":"786b7646337534e36b0de8b697c4877afd276ab422e917e6cb68b5d539217df3"} Dec 11 15:30:15 crc kubenswrapper[5050]: I1211 15:30:15.975886 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="786b7646337534e36b0de8b697c4877afd276ab422e917e6cb68b5d539217df3" Dec 11 15:30:15 crc kubenswrapper[5050]: I1211 15:30:15.975983 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-create-9rvnd" Dec 11 15:30:17 crc kubenswrapper[5050]: I1211 15:30:17.341623 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-sync-wbkw4"] Dec 11 15:30:17 crc kubenswrapper[5050]: E1211 15:30:17.342167 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97e356fb-2d8a-47f3-b2cc-c2af075c658c" containerName="mariadb-database-create" Dec 11 15:30:17 crc kubenswrapper[5050]: I1211 15:30:17.342202 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="97e356fb-2d8a-47f3-b2cc-c2af075c658c" containerName="mariadb-database-create" Dec 11 15:30:17 crc kubenswrapper[5050]: E1211 15:30:17.342245 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f08ac59-fc5a-4846-ba07-e5de181aa3c8" containerName="mariadb-account-create-update" Dec 11 15:30:17 crc kubenswrapper[5050]: I1211 15:30:17.342257 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f08ac59-fc5a-4846-ba07-e5de181aa3c8" containerName="mariadb-account-create-update" Dec 11 15:30:17 crc kubenswrapper[5050]: I1211 15:30:17.342490 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f08ac59-fc5a-4846-ba07-e5de181aa3c8" containerName="mariadb-account-create-update" Dec 11 15:30:17 crc kubenswrapper[5050]: I1211 15:30:17.342514 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="97e356fb-2d8a-47f3-b2cc-c2af075c658c" containerName="mariadb-database-create" Dec 11 15:30:17 crc kubenswrapper[5050]: I1211 15:30:17.343420 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-wbkw4" Dec 11 15:30:17 crc kubenswrapper[5050]: I1211 15:30:17.345969 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-mz9rx" Dec 11 15:30:17 crc kubenswrapper[5050]: I1211 15:30:17.347329 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Dec 11 15:30:17 crc kubenswrapper[5050]: I1211 15:30:17.352526 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-wbkw4"] Dec 11 15:30:17 crc kubenswrapper[5050]: I1211 15:30:17.427642 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k72z2\" (UniqueName: \"kubernetes.io/projected/756e8ca6-c0b8-4051-b88c-0cb6b0159661-kube-api-access-k72z2\") pod \"heat-db-sync-wbkw4\" (UID: \"756e8ca6-c0b8-4051-b88c-0cb6b0159661\") " pod="openstack/heat-db-sync-wbkw4" Dec 11 15:30:17 crc kubenswrapper[5050]: I1211 15:30:17.427733 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/756e8ca6-c0b8-4051-b88c-0cb6b0159661-combined-ca-bundle\") pod \"heat-db-sync-wbkw4\" (UID: \"756e8ca6-c0b8-4051-b88c-0cb6b0159661\") " pod="openstack/heat-db-sync-wbkw4" Dec 11 15:30:17 crc kubenswrapper[5050]: I1211 15:30:17.429295 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/756e8ca6-c0b8-4051-b88c-0cb6b0159661-config-data\") pod \"heat-db-sync-wbkw4\" (UID: \"756e8ca6-c0b8-4051-b88c-0cb6b0159661\") " pod="openstack/heat-db-sync-wbkw4" Dec 11 15:30:17 crc kubenswrapper[5050]: I1211 15:30:17.531822 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k72z2\" (UniqueName: 
\"kubernetes.io/projected/756e8ca6-c0b8-4051-b88c-0cb6b0159661-kube-api-access-k72z2\") pod \"heat-db-sync-wbkw4\" (UID: \"756e8ca6-c0b8-4051-b88c-0cb6b0159661\") " pod="openstack/heat-db-sync-wbkw4" Dec 11 15:30:17 crc kubenswrapper[5050]: I1211 15:30:17.531974 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/756e8ca6-c0b8-4051-b88c-0cb6b0159661-combined-ca-bundle\") pod \"heat-db-sync-wbkw4\" (UID: \"756e8ca6-c0b8-4051-b88c-0cb6b0159661\") " pod="openstack/heat-db-sync-wbkw4" Dec 11 15:30:17 crc kubenswrapper[5050]: I1211 15:30:17.532101 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/756e8ca6-c0b8-4051-b88c-0cb6b0159661-config-data\") pod \"heat-db-sync-wbkw4\" (UID: \"756e8ca6-c0b8-4051-b88c-0cb6b0159661\") " pod="openstack/heat-db-sync-wbkw4" Dec 11 15:30:17 crc kubenswrapper[5050]: I1211 15:30:17.537879 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/756e8ca6-c0b8-4051-b88c-0cb6b0159661-config-data\") pod \"heat-db-sync-wbkw4\" (UID: \"756e8ca6-c0b8-4051-b88c-0cb6b0159661\") " pod="openstack/heat-db-sync-wbkw4" Dec 11 15:30:17 crc kubenswrapper[5050]: I1211 15:30:17.542565 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/756e8ca6-c0b8-4051-b88c-0cb6b0159661-combined-ca-bundle\") pod \"heat-db-sync-wbkw4\" (UID: \"756e8ca6-c0b8-4051-b88c-0cb6b0159661\") " pod="openstack/heat-db-sync-wbkw4" Dec 11 15:30:17 crc kubenswrapper[5050]: I1211 15:30:17.572605 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k72z2\" (UniqueName: \"kubernetes.io/projected/756e8ca6-c0b8-4051-b88c-0cb6b0159661-kube-api-access-k72z2\") pod \"heat-db-sync-wbkw4\" (UID: \"756e8ca6-c0b8-4051-b88c-0cb6b0159661\") " pod="openstack/heat-db-sync-wbkw4" Dec 11 15:30:17 crc kubenswrapper[5050]: I1211 15:30:17.668406 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-wbkw4" Dec 11 15:30:18 crc kubenswrapper[5050]: I1211 15:30:18.161356 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-wbkw4"] Dec 11 15:30:18 crc kubenswrapper[5050]: W1211 15:30:18.162897 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod756e8ca6_c0b8_4051_b88c_0cb6b0159661.slice/crio-85bad4170bba0a31c68d3182c6bee81e43fb5d3c1b781b9af6bcc7c4e827d8bc WatchSource:0}: Error finding container 85bad4170bba0a31c68d3182c6bee81e43fb5d3c1b781b9af6bcc7c4e827d8bc: Status 404 returned error can't find the container with id 85bad4170bba0a31c68d3182c6bee81e43fb5d3c1b781b9af6bcc7c4e827d8bc Dec 11 15:30:19 crc kubenswrapper[5050]: I1211 15:30:19.004665 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-wbkw4" event={"ID":"756e8ca6-c0b8-4051-b88c-0cb6b0159661","Type":"ContainerStarted","Data":"85bad4170bba0a31c68d3182c6bee81e43fb5d3c1b781b9af6bcc7c4e827d8bc"} Dec 11 15:30:19 crc kubenswrapper[5050]: I1211 15:30:19.058644 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-76bf858989-5x589" podUID="32b521d5-20fd-45a3-899f-bd6b605107a5" containerName="horizon" probeResult="failure" output="Get \"http://10.217.1.112:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.112:8080: connect: connection refused" Dec 11 15:30:21 crc kubenswrapper[5050]: I1211 15:30:21.156330 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-5fb79d99b5-m4xgd" Dec 11 15:30:21 crc kubenswrapper[5050]: I1211 15:30:21.156380 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-5fb79d99b5-m4xgd" Dec 11 15:30:25 crc kubenswrapper[5050]: I1211 15:30:25.072285 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-wbkw4" event={"ID":"756e8ca6-c0b8-4051-b88c-0cb6b0159661","Type":"ContainerStarted","Data":"72c434b48dfb3f08da8a1635c5bb353f39eacf953520b12c042dccad30273b9c"} Dec 11 15:30:25 crc kubenswrapper[5050]: I1211 15:30:25.103220 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-sync-wbkw4" podStartSLOduration=1.585594753 podStartE2EDuration="8.103194001s" podCreationTimestamp="2025-12-11 15:30:17 +0000 UTC" firstStartedPulling="2025-12-11 15:30:18.165466112 +0000 UTC m=+6109.009188688" lastFinishedPulling="2025-12-11 15:30:24.68306535 +0000 UTC m=+6115.526787936" observedRunningTime="2025-12-11 15:30:25.09117914 +0000 UTC m=+6115.934901726" watchObservedRunningTime="2025-12-11 15:30:25.103194001 +0000 UTC m=+6115.946916587" Dec 11 15:30:26 crc kubenswrapper[5050]: I1211 15:30:26.546777 5050 scope.go:117] "RemoveContainer" containerID="ebf03c01a0c1f974cd2ea46998cc404d2f6db1e8b59294fa047964a30636886e" Dec 11 15:30:26 crc kubenswrapper[5050]: E1211 15:30:26.547507 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 15:30:27 crc kubenswrapper[5050]: I1211 15:30:27.068572 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/placement-1e1b-account-create-update-df2bl"] Dec 11 15:30:27 crc kubenswrapper[5050]: I1211 15:30:27.089242 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-l2kb8"] Dec 11 15:30:27 crc kubenswrapper[5050]: I1211 15:30:27.099608 5050 generic.go:334] "Generic (PLEG): container finished" podID="756e8ca6-c0b8-4051-b88c-0cb6b0159661" containerID="72c434b48dfb3f08da8a1635c5bb353f39eacf953520b12c042dccad30273b9c" exitCode=0 Dec 11 15:30:27 crc kubenswrapper[5050]: I1211 15:30:27.099656 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-wbkw4" event={"ID":"756e8ca6-c0b8-4051-b88c-0cb6b0159661","Type":"ContainerDied","Data":"72c434b48dfb3f08da8a1635c5bb353f39eacf953520b12c042dccad30273b9c"} Dec 11 15:30:27 crc kubenswrapper[5050]: I1211 15:30:27.101326 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-1e1b-account-create-update-df2bl"] Dec 11 15:30:27 crc kubenswrapper[5050]: I1211 15:30:27.113747 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-l2kb8"] Dec 11 15:30:27 crc kubenswrapper[5050]: I1211 15:30:27.560640 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c46e6785-6125-4081-9817-5d5bfd9d9731" path="/var/lib/kubelet/pods/c46e6785-6125-4081-9817-5d5bfd9d9731/volumes" Dec 11 15:30:27 crc kubenswrapper[5050]: I1211 15:30:27.561773 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c67e3e32-26d1-4e14-8a9b-0ba00d5c4df4" path="/var/lib/kubelet/pods/c67e3e32-26d1-4e14-8a9b-0ba00d5c4df4/volumes" Dec 11 15:30:28 crc kubenswrapper[5050]: I1211 15:30:28.518631 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-wbkw4" Dec 11 15:30:28 crc kubenswrapper[5050]: I1211 15:30:28.584962 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/756e8ca6-c0b8-4051-b88c-0cb6b0159661-config-data\") pod \"756e8ca6-c0b8-4051-b88c-0cb6b0159661\" (UID: \"756e8ca6-c0b8-4051-b88c-0cb6b0159661\") " Dec 11 15:30:28 crc kubenswrapper[5050]: I1211 15:30:28.585025 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k72z2\" (UniqueName: \"kubernetes.io/projected/756e8ca6-c0b8-4051-b88c-0cb6b0159661-kube-api-access-k72z2\") pod \"756e8ca6-c0b8-4051-b88c-0cb6b0159661\" (UID: \"756e8ca6-c0b8-4051-b88c-0cb6b0159661\") " Dec 11 15:30:28 crc kubenswrapper[5050]: I1211 15:30:28.585099 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/756e8ca6-c0b8-4051-b88c-0cb6b0159661-combined-ca-bundle\") pod \"756e8ca6-c0b8-4051-b88c-0cb6b0159661\" (UID: \"756e8ca6-c0b8-4051-b88c-0cb6b0159661\") " Dec 11 15:30:28 crc kubenswrapper[5050]: I1211 15:30:28.593461 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/756e8ca6-c0b8-4051-b88c-0cb6b0159661-kube-api-access-k72z2" (OuterVolumeSpecName: "kube-api-access-k72z2") pod "756e8ca6-c0b8-4051-b88c-0cb6b0159661" (UID: "756e8ca6-c0b8-4051-b88c-0cb6b0159661"). InnerVolumeSpecName "kube-api-access-k72z2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 15:30:28 crc kubenswrapper[5050]: I1211 15:30:28.617100 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/756e8ca6-c0b8-4051-b88c-0cb6b0159661-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "756e8ca6-c0b8-4051-b88c-0cb6b0159661" (UID: "756e8ca6-c0b8-4051-b88c-0cb6b0159661"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 15:30:28 crc kubenswrapper[5050]: I1211 15:30:28.688597 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k72z2\" (UniqueName: \"kubernetes.io/projected/756e8ca6-c0b8-4051-b88c-0cb6b0159661-kube-api-access-k72z2\") on node \"crc\" DevicePath \"\"" Dec 11 15:30:28 crc kubenswrapper[5050]: I1211 15:30:28.688649 5050 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/756e8ca6-c0b8-4051-b88c-0cb6b0159661-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 15:30:28 crc kubenswrapper[5050]: I1211 15:30:28.689450 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/756e8ca6-c0b8-4051-b88c-0cb6b0159661-config-data" (OuterVolumeSpecName: "config-data") pod "756e8ca6-c0b8-4051-b88c-0cb6b0159661" (UID: "756e8ca6-c0b8-4051-b88c-0cb6b0159661"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 15:30:28 crc kubenswrapper[5050]: I1211 15:30:28.792096 5050 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/756e8ca6-c0b8-4051-b88c-0cb6b0159661-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 15:30:29 crc kubenswrapper[5050]: I1211 15:30:29.058857 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-76bf858989-5x589" podUID="32b521d5-20fd-45a3-899f-bd6b605107a5" containerName="horizon" probeResult="failure" output="Get \"http://10.217.1.112:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.112:8080: connect: connection refused" Dec 11 15:30:29 crc kubenswrapper[5050]: I1211 15:30:29.059296 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-76bf858989-5x589" Dec 11 15:30:29 crc kubenswrapper[5050]: I1211 15:30:29.125501 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-wbkw4" event={"ID":"756e8ca6-c0b8-4051-b88c-0cb6b0159661","Type":"ContainerDied","Data":"85bad4170bba0a31c68d3182c6bee81e43fb5d3c1b781b9af6bcc7c4e827d8bc"} Dec 11 15:30:29 crc kubenswrapper[5050]: I1211 15:30:29.125542 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="85bad4170bba0a31c68d3182c6bee81e43fb5d3c1b781b9af6bcc7c4e827d8bc" Dec 11 15:30:29 crc kubenswrapper[5050]: I1211 15:30:29.125606 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-wbkw4" Dec 11 15:30:30 crc kubenswrapper[5050]: I1211 15:30:30.136248 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-6d45f4c84f-k64rk"] Dec 11 15:30:30 crc kubenswrapper[5050]: E1211 15:30:30.136989 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="756e8ca6-c0b8-4051-b88c-0cb6b0159661" containerName="heat-db-sync" Dec 11 15:30:30 crc kubenswrapper[5050]: I1211 15:30:30.137003 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="756e8ca6-c0b8-4051-b88c-0cb6b0159661" containerName="heat-db-sync" Dec 11 15:30:30 crc kubenswrapper[5050]: I1211 15:30:30.144079 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="756e8ca6-c0b8-4051-b88c-0cb6b0159661" containerName="heat-db-sync" Dec 11 15:30:30 crc kubenswrapper[5050]: I1211 15:30:30.144862 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-6d45f4c84f-k64rk" Dec 11 15:30:30 crc kubenswrapper[5050]: W1211 15:30:30.148657 5050 reflector.go:561] object-"openstack"/"heat-engine-config-data": failed to list *v1.Secret: secrets "heat-engine-config-data" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openstack": no relationship found between node 'crc' and this object Dec 11 15:30:30 crc kubenswrapper[5050]: E1211 15:30:30.148716 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"heat-engine-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"heat-engine-config-data\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openstack\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 11 15:30:30 crc kubenswrapper[5050]: I1211 15:30:30.148816 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Dec 11 15:30:30 crc kubenswrapper[5050]: I1211 15:30:30.152312 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-mz9rx" Dec 11 15:30:30 crc kubenswrapper[5050]: I1211 15:30:30.158391 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-6d45f4c84f-k64rk"] Dec 11 15:30:30 crc kubenswrapper[5050]: I1211 15:30:30.226467 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e18cc71c-f742-40f6-8bc6-e3cd7dff6315-config-data-custom\") pod \"heat-engine-6d45f4c84f-k64rk\" (UID: \"e18cc71c-f742-40f6-8bc6-e3cd7dff6315\") " pod="openstack/heat-engine-6d45f4c84f-k64rk" Dec 11 15:30:30 crc kubenswrapper[5050]: I1211 15:30:30.226562 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zwjb5\" (UniqueName: \"kubernetes.io/projected/e18cc71c-f742-40f6-8bc6-e3cd7dff6315-kube-api-access-zwjb5\") pod \"heat-engine-6d45f4c84f-k64rk\" (UID: \"e18cc71c-f742-40f6-8bc6-e3cd7dff6315\") " pod="openstack/heat-engine-6d45f4c84f-k64rk" Dec 11 15:30:30 crc kubenswrapper[5050]: I1211 15:30:30.226617 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e18cc71c-f742-40f6-8bc6-e3cd7dff6315-combined-ca-bundle\") pod \"heat-engine-6d45f4c84f-k64rk\" (UID: \"e18cc71c-f742-40f6-8bc6-e3cd7dff6315\") " 
pod="openstack/heat-engine-6d45f4c84f-k64rk" Dec 11 15:30:30 crc kubenswrapper[5050]: I1211 15:30:30.226679 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e18cc71c-f742-40f6-8bc6-e3cd7dff6315-config-data\") pod \"heat-engine-6d45f4c84f-k64rk\" (UID: \"e18cc71c-f742-40f6-8bc6-e3cd7dff6315\") " pod="openstack/heat-engine-6d45f4c84f-k64rk" Dec 11 15:30:30 crc kubenswrapper[5050]: I1211 15:30:30.267074 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-66454d9cff-jngd7"] Dec 11 15:30:30 crc kubenswrapper[5050]: I1211 15:30:30.268442 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-66454d9cff-jngd7" Dec 11 15:30:30 crc kubenswrapper[5050]: I1211 15:30:30.270385 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-api-config-data" Dec 11 15:30:30 crc kubenswrapper[5050]: I1211 15:30:30.290740 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-66454d9cff-jngd7"] Dec 11 15:30:30 crc kubenswrapper[5050]: I1211 15:30:30.328127 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6mkq\" (UniqueName: \"kubernetes.io/projected/a7fdc57d-9899-4200-b4df-cc6d2f9deff4-kube-api-access-q6mkq\") pod \"heat-api-66454d9cff-jngd7\" (UID: \"a7fdc57d-9899-4200-b4df-cc6d2f9deff4\") " pod="openstack/heat-api-66454d9cff-jngd7" Dec 11 15:30:30 crc kubenswrapper[5050]: I1211 15:30:30.328217 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e18cc71c-f742-40f6-8bc6-e3cd7dff6315-config-data-custom\") pod \"heat-engine-6d45f4c84f-k64rk\" (UID: \"e18cc71c-f742-40f6-8bc6-e3cd7dff6315\") " pod="openstack/heat-engine-6d45f4c84f-k64rk" Dec 11 15:30:30 crc kubenswrapper[5050]: I1211 15:30:30.328241 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7fdc57d-9899-4200-b4df-cc6d2f9deff4-combined-ca-bundle\") pod \"heat-api-66454d9cff-jngd7\" (UID: \"a7fdc57d-9899-4200-b4df-cc6d2f9deff4\") " pod="openstack/heat-api-66454d9cff-jngd7" Dec 11 15:30:30 crc kubenswrapper[5050]: I1211 15:30:30.328262 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7fdc57d-9899-4200-b4df-cc6d2f9deff4-config-data\") pod \"heat-api-66454d9cff-jngd7\" (UID: \"a7fdc57d-9899-4200-b4df-cc6d2f9deff4\") " pod="openstack/heat-api-66454d9cff-jngd7" Dec 11 15:30:30 crc kubenswrapper[5050]: I1211 15:30:30.328283 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a7fdc57d-9899-4200-b4df-cc6d2f9deff4-config-data-custom\") pod \"heat-api-66454d9cff-jngd7\" (UID: \"a7fdc57d-9899-4200-b4df-cc6d2f9deff4\") " pod="openstack/heat-api-66454d9cff-jngd7" Dec 11 15:30:30 crc kubenswrapper[5050]: I1211 15:30:30.328309 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zwjb5\" (UniqueName: \"kubernetes.io/projected/e18cc71c-f742-40f6-8bc6-e3cd7dff6315-kube-api-access-zwjb5\") pod \"heat-engine-6d45f4c84f-k64rk\" (UID: \"e18cc71c-f742-40f6-8bc6-e3cd7dff6315\") " pod="openstack/heat-engine-6d45f4c84f-k64rk" Dec 11 15:30:30 crc 
kubenswrapper[5050]: I1211 15:30:30.328336 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e18cc71c-f742-40f6-8bc6-e3cd7dff6315-combined-ca-bundle\") pod \"heat-engine-6d45f4c84f-k64rk\" (UID: \"e18cc71c-f742-40f6-8bc6-e3cd7dff6315\") " pod="openstack/heat-engine-6d45f4c84f-k64rk" Dec 11 15:30:30 crc kubenswrapper[5050]: I1211 15:30:30.328371 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e18cc71c-f742-40f6-8bc6-e3cd7dff6315-config-data\") pod \"heat-engine-6d45f4c84f-k64rk\" (UID: \"e18cc71c-f742-40f6-8bc6-e3cd7dff6315\") " pod="openstack/heat-engine-6d45f4c84f-k64rk" Dec 11 15:30:30 crc kubenswrapper[5050]: I1211 15:30:30.334735 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e18cc71c-f742-40f6-8bc6-e3cd7dff6315-combined-ca-bundle\") pod \"heat-engine-6d45f4c84f-k64rk\" (UID: \"e18cc71c-f742-40f6-8bc6-e3cd7dff6315\") " pod="openstack/heat-engine-6d45f4c84f-k64rk" Dec 11 15:30:30 crc kubenswrapper[5050]: I1211 15:30:30.336116 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e18cc71c-f742-40f6-8bc6-e3cd7dff6315-config-data\") pod \"heat-engine-6d45f4c84f-k64rk\" (UID: \"e18cc71c-f742-40f6-8bc6-e3cd7dff6315\") " pod="openstack/heat-engine-6d45f4c84f-k64rk" Dec 11 15:30:30 crc kubenswrapper[5050]: I1211 15:30:30.355784 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zwjb5\" (UniqueName: \"kubernetes.io/projected/e18cc71c-f742-40f6-8bc6-e3cd7dff6315-kube-api-access-zwjb5\") pod \"heat-engine-6d45f4c84f-k64rk\" (UID: \"e18cc71c-f742-40f6-8bc6-e3cd7dff6315\") " pod="openstack/heat-engine-6d45f4c84f-k64rk" Dec 11 15:30:30 crc kubenswrapper[5050]: I1211 15:30:30.356170 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-77d9f97d54-shpxl"] Dec 11 15:30:30 crc kubenswrapper[5050]: I1211 15:30:30.358471 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-77d9f97d54-shpxl" Dec 11 15:30:30 crc kubenswrapper[5050]: I1211 15:30:30.362338 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-cfnapi-config-data" Dec 11 15:30:30 crc kubenswrapper[5050]: I1211 15:30:30.386306 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-77d9f97d54-shpxl"] Dec 11 15:30:30 crc kubenswrapper[5050]: I1211 15:30:30.430406 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7fdc57d-9899-4200-b4df-cc6d2f9deff4-combined-ca-bundle\") pod \"heat-api-66454d9cff-jngd7\" (UID: \"a7fdc57d-9899-4200-b4df-cc6d2f9deff4\") " pod="openstack/heat-api-66454d9cff-jngd7" Dec 11 15:30:30 crc kubenswrapper[5050]: I1211 15:30:30.430464 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/48b1dd3a-f26b-4a9f-a7dd-b7e826f9b158-config-data\") pod \"heat-cfnapi-77d9f97d54-shpxl\" (UID: \"48b1dd3a-f26b-4a9f-a7dd-b7e826f9b158\") " pod="openstack/heat-cfnapi-77d9f97d54-shpxl" Dec 11 15:30:30 crc kubenswrapper[5050]: I1211 15:30:30.430523 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7fdc57d-9899-4200-b4df-cc6d2f9deff4-config-data\") pod \"heat-api-66454d9cff-jngd7\" (UID: \"a7fdc57d-9899-4200-b4df-cc6d2f9deff4\") " pod="openstack/heat-api-66454d9cff-jngd7" Dec 11 15:30:30 crc kubenswrapper[5050]: I1211 15:30:30.430554 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a7fdc57d-9899-4200-b4df-cc6d2f9deff4-config-data-custom\") pod \"heat-api-66454d9cff-jngd7\" (UID: \"a7fdc57d-9899-4200-b4df-cc6d2f9deff4\") " pod="openstack/heat-api-66454d9cff-jngd7" Dec 11 15:30:30 crc kubenswrapper[5050]: I1211 15:30:30.430955 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8v6c\" (UniqueName: \"kubernetes.io/projected/48b1dd3a-f26b-4a9f-a7dd-b7e826f9b158-kube-api-access-w8v6c\") pod \"heat-cfnapi-77d9f97d54-shpxl\" (UID: \"48b1dd3a-f26b-4a9f-a7dd-b7e826f9b158\") " pod="openstack/heat-cfnapi-77d9f97d54-shpxl" Dec 11 15:30:30 crc kubenswrapper[5050]: I1211 15:30:30.431269 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q6mkq\" (UniqueName: \"kubernetes.io/projected/a7fdc57d-9899-4200-b4df-cc6d2f9deff4-kube-api-access-q6mkq\") pod \"heat-api-66454d9cff-jngd7\" (UID: \"a7fdc57d-9899-4200-b4df-cc6d2f9deff4\") " pod="openstack/heat-api-66454d9cff-jngd7" Dec 11 15:30:30 crc kubenswrapper[5050]: I1211 15:30:30.434235 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48b1dd3a-f26b-4a9f-a7dd-b7e826f9b158-combined-ca-bundle\") pod \"heat-cfnapi-77d9f97d54-shpxl\" (UID: \"48b1dd3a-f26b-4a9f-a7dd-b7e826f9b158\") " pod="openstack/heat-cfnapi-77d9f97d54-shpxl" Dec 11 15:30:30 crc kubenswrapper[5050]: I1211 15:30:30.434444 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/48b1dd3a-f26b-4a9f-a7dd-b7e826f9b158-config-data-custom\") pod \"heat-cfnapi-77d9f97d54-shpxl\" (UID: \"48b1dd3a-f26b-4a9f-a7dd-b7e826f9b158\") " 
pod="openstack/heat-cfnapi-77d9f97d54-shpxl" Dec 11 15:30:30 crc kubenswrapper[5050]: I1211 15:30:30.436924 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a7fdc57d-9899-4200-b4df-cc6d2f9deff4-config-data-custom\") pod \"heat-api-66454d9cff-jngd7\" (UID: \"a7fdc57d-9899-4200-b4df-cc6d2f9deff4\") " pod="openstack/heat-api-66454d9cff-jngd7" Dec 11 15:30:30 crc kubenswrapper[5050]: I1211 15:30:30.438466 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7fdc57d-9899-4200-b4df-cc6d2f9deff4-config-data\") pod \"heat-api-66454d9cff-jngd7\" (UID: \"a7fdc57d-9899-4200-b4df-cc6d2f9deff4\") " pod="openstack/heat-api-66454d9cff-jngd7" Dec 11 15:30:30 crc kubenswrapper[5050]: I1211 15:30:30.447308 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7fdc57d-9899-4200-b4df-cc6d2f9deff4-combined-ca-bundle\") pod \"heat-api-66454d9cff-jngd7\" (UID: \"a7fdc57d-9899-4200-b4df-cc6d2f9deff4\") " pod="openstack/heat-api-66454d9cff-jngd7" Dec 11 15:30:30 crc kubenswrapper[5050]: I1211 15:30:30.451755 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q6mkq\" (UniqueName: \"kubernetes.io/projected/a7fdc57d-9899-4200-b4df-cc6d2f9deff4-kube-api-access-q6mkq\") pod \"heat-api-66454d9cff-jngd7\" (UID: \"a7fdc57d-9899-4200-b4df-cc6d2f9deff4\") " pod="openstack/heat-api-66454d9cff-jngd7" Dec 11 15:30:30 crc kubenswrapper[5050]: I1211 15:30:30.536393 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w8v6c\" (UniqueName: \"kubernetes.io/projected/48b1dd3a-f26b-4a9f-a7dd-b7e826f9b158-kube-api-access-w8v6c\") pod \"heat-cfnapi-77d9f97d54-shpxl\" (UID: \"48b1dd3a-f26b-4a9f-a7dd-b7e826f9b158\") " pod="openstack/heat-cfnapi-77d9f97d54-shpxl" Dec 11 15:30:30 crc kubenswrapper[5050]: I1211 15:30:30.536514 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48b1dd3a-f26b-4a9f-a7dd-b7e826f9b158-combined-ca-bundle\") pod \"heat-cfnapi-77d9f97d54-shpxl\" (UID: \"48b1dd3a-f26b-4a9f-a7dd-b7e826f9b158\") " pod="openstack/heat-cfnapi-77d9f97d54-shpxl" Dec 11 15:30:30 crc kubenswrapper[5050]: I1211 15:30:30.536549 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/48b1dd3a-f26b-4a9f-a7dd-b7e826f9b158-config-data-custom\") pod \"heat-cfnapi-77d9f97d54-shpxl\" (UID: \"48b1dd3a-f26b-4a9f-a7dd-b7e826f9b158\") " pod="openstack/heat-cfnapi-77d9f97d54-shpxl" Dec 11 15:30:30 crc kubenswrapper[5050]: I1211 15:30:30.536613 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/48b1dd3a-f26b-4a9f-a7dd-b7e826f9b158-config-data\") pod \"heat-cfnapi-77d9f97d54-shpxl\" (UID: \"48b1dd3a-f26b-4a9f-a7dd-b7e826f9b158\") " pod="openstack/heat-cfnapi-77d9f97d54-shpxl" Dec 11 15:30:30 crc kubenswrapper[5050]: I1211 15:30:30.543041 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48b1dd3a-f26b-4a9f-a7dd-b7e826f9b158-combined-ca-bundle\") pod \"heat-cfnapi-77d9f97d54-shpxl\" (UID: \"48b1dd3a-f26b-4a9f-a7dd-b7e826f9b158\") " pod="openstack/heat-cfnapi-77d9f97d54-shpxl" Dec 11 15:30:30 crc 
kubenswrapper[5050]: I1211 15:30:30.545903 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/48b1dd3a-f26b-4a9f-a7dd-b7e826f9b158-config-data-custom\") pod \"heat-cfnapi-77d9f97d54-shpxl\" (UID: \"48b1dd3a-f26b-4a9f-a7dd-b7e826f9b158\") " pod="openstack/heat-cfnapi-77d9f97d54-shpxl" Dec 11 15:30:30 crc kubenswrapper[5050]: I1211 15:30:30.550112 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/48b1dd3a-f26b-4a9f-a7dd-b7e826f9b158-config-data\") pod \"heat-cfnapi-77d9f97d54-shpxl\" (UID: \"48b1dd3a-f26b-4a9f-a7dd-b7e826f9b158\") " pod="openstack/heat-cfnapi-77d9f97d54-shpxl" Dec 11 15:30:30 crc kubenswrapper[5050]: I1211 15:30:30.553757 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w8v6c\" (UniqueName: \"kubernetes.io/projected/48b1dd3a-f26b-4a9f-a7dd-b7e826f9b158-kube-api-access-w8v6c\") pod \"heat-cfnapi-77d9f97d54-shpxl\" (UID: \"48b1dd3a-f26b-4a9f-a7dd-b7e826f9b158\") " pod="openstack/heat-cfnapi-77d9f97d54-shpxl" Dec 11 15:30:30 crc kubenswrapper[5050]: I1211 15:30:30.596698 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-66454d9cff-jngd7" Dec 11 15:30:30 crc kubenswrapper[5050]: I1211 15:30:30.830988 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-77d9f97d54-shpxl" Dec 11 15:30:31 crc kubenswrapper[5050]: I1211 15:30:31.119324 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-66454d9cff-jngd7"] Dec 11 15:30:31 crc kubenswrapper[5050]: I1211 15:30:31.172995 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-66454d9cff-jngd7" event={"ID":"a7fdc57d-9899-4200-b4df-cc6d2f9deff4","Type":"ContainerStarted","Data":"336377b4c31e8f172fec7f402eb2b3b575b4dbbf9a0a95827ada81e97f865e5a"} Dec 11 15:30:31 crc kubenswrapper[5050]: W1211 15:30:31.293981 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod48b1dd3a_f26b_4a9f_a7dd_b7e826f9b158.slice/crio-0cc7adb73e476b72e42c2e6ed4844846b9567a8bab38d06c8cb0436d4148e5d7 WatchSource:0}: Error finding container 0cc7adb73e476b72e42c2e6ed4844846b9567a8bab38d06c8cb0436d4148e5d7: Status 404 returned error can't find the container with id 0cc7adb73e476b72e42c2e6ed4844846b9567a8bab38d06c8cb0436d4148e5d7 Dec 11 15:30:31 crc kubenswrapper[5050]: I1211 15:30:31.298622 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-77d9f97d54-shpxl"] Dec 11 15:30:31 crc kubenswrapper[5050]: E1211 15:30:31.330565 5050 secret.go:188] Couldn't get secret openstack/heat-engine-config-data: failed to sync secret cache: timed out waiting for the condition Dec 11 15:30:31 crc kubenswrapper[5050]: E1211 15:30:31.330664 5050 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e18cc71c-f742-40f6-8bc6-e3cd7dff6315-config-data-custom podName:e18cc71c-f742-40f6-8bc6-e3cd7dff6315 nodeName:}" failed. No retries permitted until 2025-12-11 15:30:31.8306382 +0000 UTC m=+6122.674360786 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config-data-custom" (UniqueName: "kubernetes.io/secret/e18cc71c-f742-40f6-8bc6-e3cd7dff6315-config-data-custom") pod "heat-engine-6d45f4c84f-k64rk" (UID: "e18cc71c-f742-40f6-8bc6-e3cd7dff6315") : failed to sync secret cache: timed out waiting for the condition Dec 11 15:30:31 crc kubenswrapper[5050]: I1211 15:30:31.604040 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-engine-config-data" Dec 11 15:30:31 crc kubenswrapper[5050]: I1211 15:30:31.867960 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e18cc71c-f742-40f6-8bc6-e3cd7dff6315-config-data-custom\") pod \"heat-engine-6d45f4c84f-k64rk\" (UID: \"e18cc71c-f742-40f6-8bc6-e3cd7dff6315\") " pod="openstack/heat-engine-6d45f4c84f-k64rk" Dec 11 15:30:31 crc kubenswrapper[5050]: I1211 15:30:31.881630 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e18cc71c-f742-40f6-8bc6-e3cd7dff6315-config-data-custom\") pod \"heat-engine-6d45f4c84f-k64rk\" (UID: \"e18cc71c-f742-40f6-8bc6-e3cd7dff6315\") " pod="openstack/heat-engine-6d45f4c84f-k64rk" Dec 11 15:30:31 crc kubenswrapper[5050]: I1211 15:30:31.997907 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-6d45f4c84f-k64rk" Dec 11 15:30:32 crc kubenswrapper[5050]: I1211 15:30:32.050105 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-656c6"] Dec 11 15:30:32 crc kubenswrapper[5050]: I1211 15:30:32.063328 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-656c6"] Dec 11 15:30:32 crc kubenswrapper[5050]: I1211 15:30:32.194188 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-77d9f97d54-shpxl" event={"ID":"48b1dd3a-f26b-4a9f-a7dd-b7e826f9b158","Type":"ContainerStarted","Data":"0cc7adb73e476b72e42c2e6ed4844846b9567a8bab38d06c8cb0436d4148e5d7"} Dec 11 15:30:32 crc kubenswrapper[5050]: I1211 15:30:32.410704 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-6d45f4c84f-k64rk"] Dec 11 15:30:32 crc kubenswrapper[5050]: W1211 15:30:32.420683 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode18cc71c_f742_40f6_8bc6_e3cd7dff6315.slice/crio-e76aa6f97c3b2d90646cacb3fb46e4a8eafcc6cf250f3c6b4038a7a8dcaeaea2 WatchSource:0}: Error finding container e76aa6f97c3b2d90646cacb3fb46e4a8eafcc6cf250f3c6b4038a7a8dcaeaea2: Status 404 returned error can't find the container with id e76aa6f97c3b2d90646cacb3fb46e4a8eafcc6cf250f3c6b4038a7a8dcaeaea2 Dec 11 15:30:33 crc kubenswrapper[5050]: I1211 15:30:33.012349 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-5fb79d99b5-m4xgd" Dec 11 15:30:33 crc kubenswrapper[5050]: I1211 15:30:33.209323 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-6d45f4c84f-k64rk" event={"ID":"e18cc71c-f742-40f6-8bc6-e3cd7dff6315","Type":"ContainerStarted","Data":"462773e88f3cde486d461df4c65edbfeff990e4f98b2e41877c1d55b206dcc4d"} Dec 11 15:30:33 crc kubenswrapper[5050]: I1211 15:30:33.209370 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-6d45f4c84f-k64rk" 
event={"ID":"e18cc71c-f742-40f6-8bc6-e3cd7dff6315","Type":"ContainerStarted","Data":"e76aa6f97c3b2d90646cacb3fb46e4a8eafcc6cf250f3c6b4038a7a8dcaeaea2"} Dec 11 15:30:33 crc kubenswrapper[5050]: I1211 15:30:33.209649 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-6d45f4c84f-k64rk" Dec 11 15:30:33 crc kubenswrapper[5050]: I1211 15:30:33.246842 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-6d45f4c84f-k64rk" podStartSLOduration=3.246801236 podStartE2EDuration="3.246801236s" podCreationTimestamp="2025-12-11 15:30:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 15:30:33.231635761 +0000 UTC m=+6124.075358347" watchObservedRunningTime="2025-12-11 15:30:33.246801236 +0000 UTC m=+6124.090523822" Dec 11 15:30:33 crc kubenswrapper[5050]: I1211 15:30:33.561524 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b1e097af-88c0-4cd0-b4bc-92793ae0f1f0" path="/var/lib/kubelet/pods/b1e097af-88c0-4cd0-b4bc-92793ae0f1f0/volumes" Dec 11 15:30:34 crc kubenswrapper[5050]: I1211 15:30:34.219700 5050 generic.go:334] "Generic (PLEG): container finished" podID="32b521d5-20fd-45a3-899f-bd6b605107a5" containerID="645c0842ad484bd26a50d403490ba1090f7a0165e591f6f2020f8f2bae092ffa" exitCode=137 Dec 11 15:30:34 crc kubenswrapper[5050]: I1211 15:30:34.220639 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-76bf858989-5x589" event={"ID":"32b521d5-20fd-45a3-899f-bd6b605107a5","Type":"ContainerDied","Data":"645c0842ad484bd26a50d403490ba1090f7a0165e591f6f2020f8f2bae092ffa"} Dec 11 15:30:34 crc kubenswrapper[5050]: I1211 15:30:34.941852 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-5fb79d99b5-m4xgd" Dec 11 15:30:35 crc kubenswrapper[5050]: I1211 15:30:35.058923 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-77d699676c-cljf5"] Dec 11 15:30:35 crc kubenswrapper[5050]: I1211 15:30:35.059165 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-77d699676c-cljf5" podUID="7da61c1c-03d8-4bcc-8bc9-23ccb60c1652" containerName="horizon-log" containerID="cri-o://5c6fe59cd09872bf34d0ff029b4368490e629638561d1ea2a03ca6d1f4fc6549" gracePeriod=30 Dec 11 15:30:35 crc kubenswrapper[5050]: I1211 15:30:35.059300 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-77d699676c-cljf5" podUID="7da61c1c-03d8-4bcc-8bc9-23ccb60c1652" containerName="horizon" containerID="cri-o://5bf10d2460c1f2e2f63841e0a3f56ffd837b3829bff105ae681afd9617fd7ce8" gracePeriod=30 Dec 11 15:30:35 crc kubenswrapper[5050]: I1211 15:30:35.182796 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-76bf858989-5x589" Dec 11 15:30:35 crc kubenswrapper[5050]: I1211 15:30:35.256866 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-76bf858989-5x589" event={"ID":"32b521d5-20fd-45a3-899f-bd6b605107a5","Type":"ContainerDied","Data":"f36a010f06a285ab0a590a050708d633eab1b6dc1afc038830c8580fa7977ca7"} Dec 11 15:30:35 crc kubenswrapper[5050]: I1211 15:30:35.257218 5050 scope.go:117] "RemoveContainer" containerID="ed40918a857d442157ef5a86bc46bf4c1e750179bf25d5078680814160df7074" Dec 11 15:30:35 crc kubenswrapper[5050]: I1211 15:30:35.258630 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-76bf858989-5x589" Dec 11 15:30:35 crc kubenswrapper[5050]: I1211 15:30:35.267399 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-njjkt\" (UniqueName: \"kubernetes.io/projected/32b521d5-20fd-45a3-899f-bd6b605107a5-kube-api-access-njjkt\") pod \"32b521d5-20fd-45a3-899f-bd6b605107a5\" (UID: \"32b521d5-20fd-45a3-899f-bd6b605107a5\") " Dec 11 15:30:35 crc kubenswrapper[5050]: I1211 15:30:35.267477 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/32b521d5-20fd-45a3-899f-bd6b605107a5-logs\") pod \"32b521d5-20fd-45a3-899f-bd6b605107a5\" (UID: \"32b521d5-20fd-45a3-899f-bd6b605107a5\") " Dec 11 15:30:35 crc kubenswrapper[5050]: I1211 15:30:35.267671 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/32b521d5-20fd-45a3-899f-bd6b605107a5-scripts\") pod \"32b521d5-20fd-45a3-899f-bd6b605107a5\" (UID: \"32b521d5-20fd-45a3-899f-bd6b605107a5\") " Dec 11 15:30:35 crc kubenswrapper[5050]: I1211 15:30:35.267756 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/32b521d5-20fd-45a3-899f-bd6b605107a5-horizon-secret-key\") pod \"32b521d5-20fd-45a3-899f-bd6b605107a5\" (UID: \"32b521d5-20fd-45a3-899f-bd6b605107a5\") " Dec 11 15:30:35 crc kubenswrapper[5050]: I1211 15:30:35.267818 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/32b521d5-20fd-45a3-899f-bd6b605107a5-config-data\") pod \"32b521d5-20fd-45a3-899f-bd6b605107a5\" (UID: \"32b521d5-20fd-45a3-899f-bd6b605107a5\") " Dec 11 15:30:35 crc kubenswrapper[5050]: I1211 15:30:35.269718 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/32b521d5-20fd-45a3-899f-bd6b605107a5-logs" (OuterVolumeSpecName: "logs") pod "32b521d5-20fd-45a3-899f-bd6b605107a5" (UID: "32b521d5-20fd-45a3-899f-bd6b605107a5"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 15:30:35 crc kubenswrapper[5050]: I1211 15:30:35.292859 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32b521d5-20fd-45a3-899f-bd6b605107a5-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "32b521d5-20fd-45a3-899f-bd6b605107a5" (UID: "32b521d5-20fd-45a3-899f-bd6b605107a5"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 15:30:35 crc kubenswrapper[5050]: I1211 15:30:35.298838 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32b521d5-20fd-45a3-899f-bd6b605107a5-kube-api-access-njjkt" (OuterVolumeSpecName: "kube-api-access-njjkt") pod "32b521d5-20fd-45a3-899f-bd6b605107a5" (UID: "32b521d5-20fd-45a3-899f-bd6b605107a5"). InnerVolumeSpecName "kube-api-access-njjkt". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 15:30:35 crc kubenswrapper[5050]: I1211 15:30:35.327428 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/32b521d5-20fd-45a3-899f-bd6b605107a5-config-data" (OuterVolumeSpecName: "config-data") pod "32b521d5-20fd-45a3-899f-bd6b605107a5" (UID: "32b521d5-20fd-45a3-899f-bd6b605107a5"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 15:30:35 crc kubenswrapper[5050]: I1211 15:30:35.344778 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/32b521d5-20fd-45a3-899f-bd6b605107a5-scripts" (OuterVolumeSpecName: "scripts") pod "32b521d5-20fd-45a3-899f-bd6b605107a5" (UID: "32b521d5-20fd-45a3-899f-bd6b605107a5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 15:30:35 crc kubenswrapper[5050]: I1211 15:30:35.373084 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-njjkt\" (UniqueName: \"kubernetes.io/projected/32b521d5-20fd-45a3-899f-bd6b605107a5-kube-api-access-njjkt\") on node \"crc\" DevicePath \"\"" Dec 11 15:30:35 crc kubenswrapper[5050]: I1211 15:30:35.373132 5050 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/32b521d5-20fd-45a3-899f-bd6b605107a5-logs\") on node \"crc\" DevicePath \"\"" Dec 11 15:30:35 crc kubenswrapper[5050]: I1211 15:30:35.373142 5050 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/32b521d5-20fd-45a3-899f-bd6b605107a5-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 15:30:35 crc kubenswrapper[5050]: I1211 15:30:35.373153 5050 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/32b521d5-20fd-45a3-899f-bd6b605107a5-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Dec 11 15:30:35 crc kubenswrapper[5050]: I1211 15:30:35.373164 5050 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/32b521d5-20fd-45a3-899f-bd6b605107a5-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 15:30:35 crc kubenswrapper[5050]: I1211 15:30:35.533767 5050 scope.go:117] "RemoveContainer" containerID="645c0842ad484bd26a50d403490ba1090f7a0165e591f6f2020f8f2bae092ffa" Dec 11 15:30:35 crc kubenswrapper[5050]: I1211 15:30:35.612144 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-76bf858989-5x589"] Dec 11 15:30:35 crc kubenswrapper[5050]: I1211 15:30:35.623892 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-76bf858989-5x589"] Dec 11 15:30:36 crc kubenswrapper[5050]: I1211 15:30:36.273370 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-77d9f97d54-shpxl" event={"ID":"48b1dd3a-f26b-4a9f-a7dd-b7e826f9b158","Type":"ContainerStarted","Data":"2d58e7f063066f4b86ad9dc0d64e4a26a01aea25439b88938249326d1b1cfe34"} Dec 11 15:30:36 crc kubenswrapper[5050]: I1211 15:30:36.273869 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-77d9f97d54-shpxl" Dec 11 15:30:36 crc kubenswrapper[5050]: I1211 15:30:36.276735 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-66454d9cff-jngd7" event={"ID":"a7fdc57d-9899-4200-b4df-cc6d2f9deff4","Type":"ContainerStarted","Data":"9f13f6fbc19f444ccd6c6f7a228b59dd52741f2f6ae2301392ea96e2c4d95438"} Dec 11 15:30:36 crc kubenswrapper[5050]: I1211 15:30:36.276827 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-66454d9cff-jngd7" Dec 11 15:30:36 crc kubenswrapper[5050]: I1211 15:30:36.292547 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-77d9f97d54-shpxl" podStartSLOduration=2.598761479 podStartE2EDuration="6.292527s" podCreationTimestamp="2025-12-11 
15:30:30 +0000 UTC" firstStartedPulling="2025-12-11 15:30:31.296410836 +0000 UTC m=+6122.140133422" lastFinishedPulling="2025-12-11 15:30:34.990176357 +0000 UTC m=+6125.833898943" observedRunningTime="2025-12-11 15:30:36.287416113 +0000 UTC m=+6127.131138719" watchObservedRunningTime="2025-12-11 15:30:36.292527 +0000 UTC m=+6127.136249586" Dec 11 15:30:36 crc kubenswrapper[5050]: I1211 15:30:36.319882 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-66454d9cff-jngd7" podStartSLOduration=2.461862742 podStartE2EDuration="6.31986558s" podCreationTimestamp="2025-12-11 15:30:30 +0000 UTC" firstStartedPulling="2025-12-11 15:30:31.131878071 +0000 UTC m=+6121.975600657" lastFinishedPulling="2025-12-11 15:30:34.989880909 +0000 UTC m=+6125.833603495" observedRunningTime="2025-12-11 15:30:36.307770067 +0000 UTC m=+6127.151492673" watchObservedRunningTime="2025-12-11 15:30:36.31986558 +0000 UTC m=+6127.163588166" Dec 11 15:30:37 crc kubenswrapper[5050]: I1211 15:30:37.556805 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="32b521d5-20fd-45a3-899f-bd6b605107a5" path="/var/lib/kubelet/pods/32b521d5-20fd-45a3-899f-bd6b605107a5/volumes" Dec 11 15:30:39 crc kubenswrapper[5050]: I1211 15:30:39.306576 5050 generic.go:334] "Generic (PLEG): container finished" podID="7da61c1c-03d8-4bcc-8bc9-23ccb60c1652" containerID="5bf10d2460c1f2e2f63841e0a3f56ffd837b3829bff105ae681afd9617fd7ce8" exitCode=0 Dec 11 15:30:39 crc kubenswrapper[5050]: I1211 15:30:39.306682 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-77d699676c-cljf5" event={"ID":"7da61c1c-03d8-4bcc-8bc9-23ccb60c1652","Type":"ContainerDied","Data":"5bf10d2460c1f2e2f63841e0a3f56ffd837b3829bff105ae681afd9617fd7ce8"} Dec 11 15:30:39 crc kubenswrapper[5050]: I1211 15:30:39.552080 5050 scope.go:117] "RemoveContainer" containerID="ebf03c01a0c1f974cd2ea46998cc404d2f6db1e8b59294fa047964a30636886e" Dec 11 15:30:39 crc kubenswrapper[5050]: E1211 15:30:39.552373 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 15:30:39 crc kubenswrapper[5050]: I1211 15:30:39.771920 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-77d699676c-cljf5" podUID="7da61c1c-03d8-4bcc-8bc9-23ccb60c1652" containerName="horizon" probeResult="failure" output="Get \"http://10.217.1.113:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.113:8080: connect: connection refused" Dec 11 15:30:41 crc kubenswrapper[5050]: I1211 15:30:41.955960 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-66454d9cff-jngd7" Dec 11 15:30:42 crc kubenswrapper[5050]: I1211 15:30:42.194977 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-cfnapi-77d9f97d54-shpxl" Dec 11 15:30:49 crc kubenswrapper[5050]: I1211 15:30:49.772767 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-77d699676c-cljf5" podUID="7da61c1c-03d8-4bcc-8bc9-23ccb60c1652" containerName="horizon" probeResult="failure" output="Get \"http://10.217.1.113:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 
10.217.1.113:8080: connect: connection refused" Dec 11 15:30:52 crc kubenswrapper[5050]: I1211 15:30:52.033731 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-6d45f4c84f-k64rk" Dec 11 15:30:53 crc kubenswrapper[5050]: I1211 15:30:53.546132 5050 scope.go:117] "RemoveContainer" containerID="ebf03c01a0c1f974cd2ea46998cc404d2f6db1e8b59294fa047964a30636886e" Dec 11 15:30:53 crc kubenswrapper[5050]: E1211 15:30:53.546767 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 15:30:59 crc kubenswrapper[5050]: I1211 15:30:59.773378 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-77d699676c-cljf5" podUID="7da61c1c-03d8-4bcc-8bc9-23ccb60c1652" containerName="horizon" probeResult="failure" output="Get \"http://10.217.1.113:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.113:8080: connect: connection refused" Dec 11 15:30:59 crc kubenswrapper[5050]: I1211 15:30:59.773734 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-77d699676c-cljf5" Dec 11 15:31:00 crc kubenswrapper[5050]: I1211 15:31:00.480405 5050 scope.go:117] "RemoveContainer" containerID="96f8515acd5c83fecb09b21a9b2ad08bdcc909ac21984b7a5ff9c25887d4caec" Dec 11 15:31:00 crc kubenswrapper[5050]: I1211 15:31:00.519623 5050 scope.go:117] "RemoveContainer" containerID="8f5a5a163ac44f0f51572872b8b0a4ede464c6b2c468e853bfc87060f5a44b9e" Dec 11 15:31:00 crc kubenswrapper[5050]: I1211 15:31:00.582967 5050 scope.go:117] "RemoveContainer" containerID="883d3a1157268e68ce7d2e5061a1ca26470c59d2d42a2665e63e237577658da8" Dec 11 15:31:00 crc kubenswrapper[5050]: I1211 15:31:00.621584 5050 scope.go:117] "RemoveContainer" containerID="d7ffde2eff7975ba1cdc812b8c5f6af2773623225a74498593e92e9e9d01c9d7" Dec 11 15:31:05 crc kubenswrapper[5050]: I1211 15:31:05.548083 5050 generic.go:334] "Generic (PLEG): container finished" podID="7da61c1c-03d8-4bcc-8bc9-23ccb60c1652" containerID="5c6fe59cd09872bf34d0ff029b4368490e629638561d1ea2a03ca6d1f4fc6549" exitCode=137 Dec 11 15:31:05 crc kubenswrapper[5050]: I1211 15:31:05.557925 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-77d699676c-cljf5" event={"ID":"7da61c1c-03d8-4bcc-8bc9-23ccb60c1652","Type":"ContainerDied","Data":"5c6fe59cd09872bf34d0ff029b4368490e629638561d1ea2a03ca6d1f4fc6549"} Dec 11 15:31:05 crc kubenswrapper[5050]: I1211 15:31:05.557974 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-77d699676c-cljf5" event={"ID":"7da61c1c-03d8-4bcc-8bc9-23ccb60c1652","Type":"ContainerDied","Data":"5b468a749f92c6d83f2044a8a336e7b8cf8669a554430c6b9e5742ad0b4d9de1"} Dec 11 15:31:05 crc kubenswrapper[5050]: I1211 15:31:05.557993 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5b468a749f92c6d83f2044a8a336e7b8cf8669a554430c6b9e5742ad0b4d9de1" Dec 11 15:31:05 crc kubenswrapper[5050]: I1211 15:31:05.574112 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-77d699676c-cljf5" Dec 11 15:31:05 crc kubenswrapper[5050]: I1211 15:31:05.685899 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/7da61c1c-03d8-4bcc-8bc9-23ccb60c1652-horizon-secret-key\") pod \"7da61c1c-03d8-4bcc-8bc9-23ccb60c1652\" (UID: \"7da61c1c-03d8-4bcc-8bc9-23ccb60c1652\") " Dec 11 15:31:05 crc kubenswrapper[5050]: I1211 15:31:05.685971 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8mr4z\" (UniqueName: \"kubernetes.io/projected/7da61c1c-03d8-4bcc-8bc9-23ccb60c1652-kube-api-access-8mr4z\") pod \"7da61c1c-03d8-4bcc-8bc9-23ccb60c1652\" (UID: \"7da61c1c-03d8-4bcc-8bc9-23ccb60c1652\") " Dec 11 15:31:05 crc kubenswrapper[5050]: I1211 15:31:05.686065 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7da61c1c-03d8-4bcc-8bc9-23ccb60c1652-scripts\") pod \"7da61c1c-03d8-4bcc-8bc9-23ccb60c1652\" (UID: \"7da61c1c-03d8-4bcc-8bc9-23ccb60c1652\") " Dec 11 15:31:05 crc kubenswrapper[5050]: I1211 15:31:05.686100 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7da61c1c-03d8-4bcc-8bc9-23ccb60c1652-config-data\") pod \"7da61c1c-03d8-4bcc-8bc9-23ccb60c1652\" (UID: \"7da61c1c-03d8-4bcc-8bc9-23ccb60c1652\") " Dec 11 15:31:05 crc kubenswrapper[5050]: I1211 15:31:05.686175 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7da61c1c-03d8-4bcc-8bc9-23ccb60c1652-logs\") pod \"7da61c1c-03d8-4bcc-8bc9-23ccb60c1652\" (UID: \"7da61c1c-03d8-4bcc-8bc9-23ccb60c1652\") " Dec 11 15:31:05 crc kubenswrapper[5050]: I1211 15:31:05.687316 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7da61c1c-03d8-4bcc-8bc9-23ccb60c1652-logs" (OuterVolumeSpecName: "logs") pod "7da61c1c-03d8-4bcc-8bc9-23ccb60c1652" (UID: "7da61c1c-03d8-4bcc-8bc9-23ccb60c1652"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 15:31:05 crc kubenswrapper[5050]: I1211 15:31:05.788333 5050 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7da61c1c-03d8-4bcc-8bc9-23ccb60c1652-logs\") on node \"crc\" DevicePath \"\"" Dec 11 15:31:05 crc kubenswrapper[5050]: I1211 15:31:05.871262 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7da61c1c-03d8-4bcc-8bc9-23ccb60c1652-kube-api-access-8mr4z" (OuterVolumeSpecName: "kube-api-access-8mr4z") pod "7da61c1c-03d8-4bcc-8bc9-23ccb60c1652" (UID: "7da61c1c-03d8-4bcc-8bc9-23ccb60c1652"). InnerVolumeSpecName "kube-api-access-8mr4z". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 15:31:05 crc kubenswrapper[5050]: I1211 15:31:05.871389 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7da61c1c-03d8-4bcc-8bc9-23ccb60c1652-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "7da61c1c-03d8-4bcc-8bc9-23ccb60c1652" (UID: "7da61c1c-03d8-4bcc-8bc9-23ccb60c1652"). InnerVolumeSpecName "horizon-secret-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 15:31:05 crc kubenswrapper[5050]: I1211 15:31:05.888657 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7da61c1c-03d8-4bcc-8bc9-23ccb60c1652-config-data" (OuterVolumeSpecName: "config-data") pod "7da61c1c-03d8-4bcc-8bc9-23ccb60c1652" (UID: "7da61c1c-03d8-4bcc-8bc9-23ccb60c1652"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 15:31:05 crc kubenswrapper[5050]: I1211 15:31:05.891256 5050 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/7da61c1c-03d8-4bcc-8bc9-23ccb60c1652-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Dec 11 15:31:05 crc kubenswrapper[5050]: I1211 15:31:05.891301 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8mr4z\" (UniqueName: \"kubernetes.io/projected/7da61c1c-03d8-4bcc-8bc9-23ccb60c1652-kube-api-access-8mr4z\") on node \"crc\" DevicePath \"\"" Dec 11 15:31:05 crc kubenswrapper[5050]: I1211 15:31:05.891318 5050 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7da61c1c-03d8-4bcc-8bc9-23ccb60c1652-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 15:31:05 crc kubenswrapper[5050]: I1211 15:31:05.892749 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7da61c1c-03d8-4bcc-8bc9-23ccb60c1652-scripts" (OuterVolumeSpecName: "scripts") pod "7da61c1c-03d8-4bcc-8bc9-23ccb60c1652" (UID: "7da61c1c-03d8-4bcc-8bc9-23ccb60c1652"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 15:31:05 crc kubenswrapper[5050]: I1211 15:31:05.992712 5050 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7da61c1c-03d8-4bcc-8bc9-23ccb60c1652-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 15:31:06 crc kubenswrapper[5050]: I1211 15:31:06.546641 5050 scope.go:117] "RemoveContainer" containerID="ebf03c01a0c1f974cd2ea46998cc404d2f6db1e8b59294fa047964a30636886e" Dec 11 15:31:06 crc kubenswrapper[5050]: E1211 15:31:06.546909 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 15:31:06 crc kubenswrapper[5050]: I1211 15:31:06.558087 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-77d699676c-cljf5" Dec 11 15:31:06 crc kubenswrapper[5050]: I1211 15:31:06.594219 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-77d699676c-cljf5"] Dec 11 15:31:06 crc kubenswrapper[5050]: I1211 15:31:06.603428 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-77d699676c-cljf5"] Dec 11 15:31:07 crc kubenswrapper[5050]: I1211 15:31:07.557989 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7da61c1c-03d8-4bcc-8bc9-23ccb60c1652" path="/var/lib/kubelet/pods/7da61c1c-03d8-4bcc-8bc9-23ccb60c1652/volumes" Dec 11 15:31:14 crc kubenswrapper[5050]: I1211 15:31:14.837406 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210jjrqn"] Dec 11 15:31:14 crc kubenswrapper[5050]: E1211 15:31:14.838450 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7da61c1c-03d8-4bcc-8bc9-23ccb60c1652" containerName="horizon-log" Dec 11 15:31:14 crc kubenswrapper[5050]: I1211 15:31:14.838469 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="7da61c1c-03d8-4bcc-8bc9-23ccb60c1652" containerName="horizon-log" Dec 11 15:31:14 crc kubenswrapper[5050]: E1211 15:31:14.838491 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32b521d5-20fd-45a3-899f-bd6b605107a5" containerName="horizon" Dec 11 15:31:14 crc kubenswrapper[5050]: I1211 15:31:14.838498 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="32b521d5-20fd-45a3-899f-bd6b605107a5" containerName="horizon" Dec 11 15:31:14 crc kubenswrapper[5050]: E1211 15:31:14.838514 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7da61c1c-03d8-4bcc-8bc9-23ccb60c1652" containerName="horizon" Dec 11 15:31:14 crc kubenswrapper[5050]: I1211 15:31:14.838522 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="7da61c1c-03d8-4bcc-8bc9-23ccb60c1652" containerName="horizon" Dec 11 15:31:14 crc kubenswrapper[5050]: E1211 15:31:14.838539 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32b521d5-20fd-45a3-899f-bd6b605107a5" containerName="horizon-log" Dec 11 15:31:14 crc kubenswrapper[5050]: I1211 15:31:14.838546 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="32b521d5-20fd-45a3-899f-bd6b605107a5" containerName="horizon-log" Dec 11 15:31:14 crc kubenswrapper[5050]: I1211 15:31:14.838767 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="7da61c1c-03d8-4bcc-8bc9-23ccb60c1652" containerName="horizon-log" Dec 11 15:31:14 crc kubenswrapper[5050]: I1211 15:31:14.838802 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="7da61c1c-03d8-4bcc-8bc9-23ccb60c1652" containerName="horizon" Dec 11 15:31:14 crc kubenswrapper[5050]: I1211 15:31:14.838818 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="32b521d5-20fd-45a3-899f-bd6b605107a5" containerName="horizon" Dec 11 15:31:14 crc kubenswrapper[5050]: I1211 15:31:14.838827 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="32b521d5-20fd-45a3-899f-bd6b605107a5" containerName="horizon-log" Dec 11 15:31:14 crc kubenswrapper[5050]: I1211 15:31:14.843345 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210jjrqn" Dec 11 15:31:14 crc kubenswrapper[5050]: I1211 15:31:14.846168 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Dec 11 15:31:14 crc kubenswrapper[5050]: I1211 15:31:14.854814 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210jjrqn"] Dec 11 15:31:14 crc kubenswrapper[5050]: I1211 15:31:14.967448 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7a7d3252-f0a4-4414-950b-4048d1824b3f-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210jjrqn\" (UID: \"7a7d3252-f0a4-4414-950b-4048d1824b3f\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210jjrqn" Dec 11 15:31:14 crc kubenswrapper[5050]: I1211 15:31:14.967515 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cv7qg\" (UniqueName: \"kubernetes.io/projected/7a7d3252-f0a4-4414-950b-4048d1824b3f-kube-api-access-cv7qg\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210jjrqn\" (UID: \"7a7d3252-f0a4-4414-950b-4048d1824b3f\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210jjrqn" Dec 11 15:31:14 crc kubenswrapper[5050]: I1211 15:31:14.967615 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7a7d3252-f0a4-4414-950b-4048d1824b3f-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210jjrqn\" (UID: \"7a7d3252-f0a4-4414-950b-4048d1824b3f\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210jjrqn" Dec 11 15:31:15 crc kubenswrapper[5050]: I1211 15:31:15.069851 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cv7qg\" (UniqueName: \"kubernetes.io/projected/7a7d3252-f0a4-4414-950b-4048d1824b3f-kube-api-access-cv7qg\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210jjrqn\" (UID: \"7a7d3252-f0a4-4414-950b-4048d1824b3f\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210jjrqn" Dec 11 15:31:15 crc kubenswrapper[5050]: I1211 15:31:15.069976 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7a7d3252-f0a4-4414-950b-4048d1824b3f-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210jjrqn\" (UID: \"7a7d3252-f0a4-4414-950b-4048d1824b3f\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210jjrqn" Dec 11 15:31:15 crc kubenswrapper[5050]: I1211 15:31:15.070092 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7a7d3252-f0a4-4414-950b-4048d1824b3f-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210jjrqn\" (UID: \"7a7d3252-f0a4-4414-950b-4048d1824b3f\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210jjrqn" Dec 11 15:31:15 crc kubenswrapper[5050]: I1211 15:31:15.070647 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/7a7d3252-f0a4-4414-950b-4048d1824b3f-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210jjrqn\" (UID: \"7a7d3252-f0a4-4414-950b-4048d1824b3f\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210jjrqn" Dec 11 15:31:15 crc kubenswrapper[5050]: I1211 15:31:15.070694 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7a7d3252-f0a4-4414-950b-4048d1824b3f-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210jjrqn\" (UID: \"7a7d3252-f0a4-4414-950b-4048d1824b3f\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210jjrqn" Dec 11 15:31:15 crc kubenswrapper[5050]: I1211 15:31:15.087299 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cv7qg\" (UniqueName: \"kubernetes.io/projected/7a7d3252-f0a4-4414-950b-4048d1824b3f-kube-api-access-cv7qg\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210jjrqn\" (UID: \"7a7d3252-f0a4-4414-950b-4048d1824b3f\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210jjrqn" Dec 11 15:31:15 crc kubenswrapper[5050]: I1211 15:31:15.172943 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210jjrqn" Dec 11 15:31:15 crc kubenswrapper[5050]: I1211 15:31:15.669767 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210jjrqn"] Dec 11 15:31:16 crc kubenswrapper[5050]: I1211 15:31:16.649114 5050 generic.go:334] "Generic (PLEG): container finished" podID="7a7d3252-f0a4-4414-950b-4048d1824b3f" containerID="3e85a576bc8c98b5429fdd9ecade2e1f4059154127e72b73305c6ea0b4ed1744" exitCode=0 Dec 11 15:31:16 crc kubenswrapper[5050]: I1211 15:31:16.649601 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210jjrqn" event={"ID":"7a7d3252-f0a4-4414-950b-4048d1824b3f","Type":"ContainerDied","Data":"3e85a576bc8c98b5429fdd9ecade2e1f4059154127e72b73305c6ea0b4ed1744"} Dec 11 15:31:16 crc kubenswrapper[5050]: I1211 15:31:16.649624 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210jjrqn" event={"ID":"7a7d3252-f0a4-4414-950b-4048d1824b3f","Type":"ContainerStarted","Data":"198338695e4751ea399f06d12a27388856b6f33c426172d3a24a64b101d60f22"} Dec 11 15:31:18 crc kubenswrapper[5050]: I1211 15:31:18.673486 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210jjrqn" event={"ID":"7a7d3252-f0a4-4414-950b-4048d1824b3f","Type":"ContainerStarted","Data":"699006f338361d559b1947ee3f7c5e0c02aafd59b82f2ea0d5c45201313ee6bb"} Dec 11 15:31:19 crc kubenswrapper[5050]: I1211 15:31:19.685505 5050 generic.go:334] "Generic (PLEG): container finished" podID="7a7d3252-f0a4-4414-950b-4048d1824b3f" containerID="699006f338361d559b1947ee3f7c5e0c02aafd59b82f2ea0d5c45201313ee6bb" exitCode=0 Dec 11 15:31:19 crc kubenswrapper[5050]: I1211 15:31:19.685541 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210jjrqn" 
event={"ID":"7a7d3252-f0a4-4414-950b-4048d1824b3f","Type":"ContainerDied","Data":"699006f338361d559b1947ee3f7c5e0c02aafd59b82f2ea0d5c45201313ee6bb"} Dec 11 15:31:20 crc kubenswrapper[5050]: I1211 15:31:20.755149 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210jjrqn" event={"ID":"7a7d3252-f0a4-4414-950b-4048d1824b3f","Type":"ContainerStarted","Data":"d8fff2abb8f76e345e9270ad262c39403cf05d58d1a87360673b8faa8a70b28b"} Dec 11 15:31:20 crc kubenswrapper[5050]: I1211 15:31:20.776575 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210jjrqn" podStartSLOduration=5.066643458 podStartE2EDuration="6.776553955s" podCreationTimestamp="2025-12-11 15:31:14 +0000 UTC" firstStartedPulling="2025-12-11 15:31:16.651988218 +0000 UTC m=+6167.495710804" lastFinishedPulling="2025-12-11 15:31:18.361898715 +0000 UTC m=+6169.205621301" observedRunningTime="2025-12-11 15:31:20.774903151 +0000 UTC m=+6171.618625757" watchObservedRunningTime="2025-12-11 15:31:20.776553955 +0000 UTC m=+6171.620276581" Dec 11 15:31:21 crc kubenswrapper[5050]: I1211 15:31:21.546692 5050 scope.go:117] "RemoveContainer" containerID="ebf03c01a0c1f974cd2ea46998cc404d2f6db1e8b59294fa047964a30636886e" Dec 11 15:31:21 crc kubenswrapper[5050]: E1211 15:31:21.547149 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 15:31:21 crc kubenswrapper[5050]: I1211 15:31:21.765843 5050 generic.go:334] "Generic (PLEG): container finished" podID="7a7d3252-f0a4-4414-950b-4048d1824b3f" containerID="d8fff2abb8f76e345e9270ad262c39403cf05d58d1a87360673b8faa8a70b28b" exitCode=0 Dec 11 15:31:21 crc kubenswrapper[5050]: I1211 15:31:21.765916 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210jjrqn" event={"ID":"7a7d3252-f0a4-4414-950b-4048d1824b3f","Type":"ContainerDied","Data":"d8fff2abb8f76e345e9270ad262c39403cf05d58d1a87360673b8faa8a70b28b"} Dec 11 15:31:23 crc kubenswrapper[5050]: I1211 15:31:23.099623 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210jjrqn" Dec 11 15:31:23 crc kubenswrapper[5050]: I1211 15:31:23.266433 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cv7qg\" (UniqueName: \"kubernetes.io/projected/7a7d3252-f0a4-4414-950b-4048d1824b3f-kube-api-access-cv7qg\") pod \"7a7d3252-f0a4-4414-950b-4048d1824b3f\" (UID: \"7a7d3252-f0a4-4414-950b-4048d1824b3f\") " Dec 11 15:31:23 crc kubenswrapper[5050]: I1211 15:31:23.266582 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7a7d3252-f0a4-4414-950b-4048d1824b3f-util\") pod \"7a7d3252-f0a4-4414-950b-4048d1824b3f\" (UID: \"7a7d3252-f0a4-4414-950b-4048d1824b3f\") " Dec 11 15:31:23 crc kubenswrapper[5050]: I1211 15:31:23.266674 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7a7d3252-f0a4-4414-950b-4048d1824b3f-bundle\") pod \"7a7d3252-f0a4-4414-950b-4048d1824b3f\" (UID: \"7a7d3252-f0a4-4414-950b-4048d1824b3f\") " Dec 11 15:31:23 crc kubenswrapper[5050]: I1211 15:31:23.270426 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7a7d3252-f0a4-4414-950b-4048d1824b3f-bundle" (OuterVolumeSpecName: "bundle") pod "7a7d3252-f0a4-4414-950b-4048d1824b3f" (UID: "7a7d3252-f0a4-4414-950b-4048d1824b3f"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 15:31:23 crc kubenswrapper[5050]: I1211 15:31:23.274369 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a7d3252-f0a4-4414-950b-4048d1824b3f-kube-api-access-cv7qg" (OuterVolumeSpecName: "kube-api-access-cv7qg") pod "7a7d3252-f0a4-4414-950b-4048d1824b3f" (UID: "7a7d3252-f0a4-4414-950b-4048d1824b3f"). InnerVolumeSpecName "kube-api-access-cv7qg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 15:31:23 crc kubenswrapper[5050]: I1211 15:31:23.278376 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7a7d3252-f0a4-4414-950b-4048d1824b3f-util" (OuterVolumeSpecName: "util") pod "7a7d3252-f0a4-4414-950b-4048d1824b3f" (UID: "7a7d3252-f0a4-4414-950b-4048d1824b3f"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 15:31:23 crc kubenswrapper[5050]: I1211 15:31:23.369143 5050 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/7a7d3252-f0a4-4414-950b-4048d1824b3f-util\") on node \"crc\" DevicePath \"\"" Dec 11 15:31:23 crc kubenswrapper[5050]: I1211 15:31:23.369199 5050 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/7a7d3252-f0a4-4414-950b-4048d1824b3f-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 15:31:23 crc kubenswrapper[5050]: I1211 15:31:23.369212 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cv7qg\" (UniqueName: \"kubernetes.io/projected/7a7d3252-f0a4-4414-950b-4048d1824b3f-kube-api-access-cv7qg\") on node \"crc\" DevicePath \"\"" Dec 11 15:31:23 crc kubenswrapper[5050]: I1211 15:31:23.783451 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210jjrqn" event={"ID":"7a7d3252-f0a4-4414-950b-4048d1824b3f","Type":"ContainerDied","Data":"198338695e4751ea399f06d12a27388856b6f33c426172d3a24a64b101d60f22"} Dec 11 15:31:23 crc kubenswrapper[5050]: I1211 15:31:23.783728 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="198338695e4751ea399f06d12a27388856b6f33c426172d3a24a64b101d60f22" Dec 11 15:31:23 crc kubenswrapper[5050]: I1211 15:31:23.783790 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210jjrqn" Dec 11 15:31:29 crc kubenswrapper[5050]: I1211 15:31:29.088136 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-45df-account-create-update-ckxx8"] Dec 11 15:31:29 crc kubenswrapper[5050]: I1211 15:31:29.096546 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-45df-account-create-update-ckxx8"] Dec 11 15:31:29 crc kubenswrapper[5050]: I1211 15:31:29.582106 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a630a5b-c349-4e2f-876a-0b82485a8221" path="/var/lib/kubelet/pods/5a630a5b-c349-4e2f-876a-0b82485a8221/volumes" Dec 11 15:31:30 crc kubenswrapper[5050]: I1211 15:31:30.069679 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-x9fv9"] Dec 11 15:31:30 crc kubenswrapper[5050]: I1211 15:31:30.083628 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-m7c2g"] Dec 11 15:31:30 crc kubenswrapper[5050]: I1211 15:31:30.092670 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-m7c2g"] Dec 11 15:31:30 crc kubenswrapper[5050]: I1211 15:31:30.107455 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-x9fv9"] Dec 11 15:31:30 crc kubenswrapper[5050]: I1211 15:31:30.117918 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-9fc1-account-create-update-prg9d"] Dec 11 15:31:30 crc kubenswrapper[5050]: I1211 15:31:30.126911 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-3433-account-create-update-wknwz"] Dec 11 15:31:30 crc kubenswrapper[5050]: I1211 15:31:30.138552 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-3433-account-create-update-wknwz"] Dec 11 15:31:30 crc kubenswrapper[5050]: I1211 15:31:30.151297 5050 kubelet.go:2431] "SyncLoop REMOVE" 
source="api" pods=["openstack/nova-cell0-9fc1-account-create-update-prg9d"] Dec 11 15:31:30 crc kubenswrapper[5050]: I1211 15:31:30.164452 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-gtv8c"] Dec 11 15:31:30 crc kubenswrapper[5050]: I1211 15:31:30.174695 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-gtv8c"] Dec 11 15:31:31 crc kubenswrapper[5050]: I1211 15:31:31.558099 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="04dc7ae1-2485-4ffe-a853-4ef671794e68" path="/var/lib/kubelet/pods/04dc7ae1-2485-4ffe-a853-4ef671794e68/volumes" Dec 11 15:31:31 crc kubenswrapper[5050]: I1211 15:31:31.559599 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="465e991b-2d30-4579-aa62-8fc4ab7afe21" path="/var/lib/kubelet/pods/465e991b-2d30-4579-aa62-8fc4ab7afe21/volumes" Dec 11 15:31:31 crc kubenswrapper[5050]: I1211 15:31:31.560446 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a80b79e5-138b-4e71-ab5e-aa8805cce0b5" path="/var/lib/kubelet/pods/a80b79e5-138b-4e71-ab5e-aa8805cce0b5/volumes" Dec 11 15:31:31 crc kubenswrapper[5050]: I1211 15:31:31.561146 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae46e8c2-edc3-46dc-a160-75c77cb2bafb" path="/var/lib/kubelet/pods/ae46e8c2-edc3-46dc-a160-75c77cb2bafb/volumes" Dec 11 15:31:31 crc kubenswrapper[5050]: I1211 15:31:31.562507 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d35e2a89-ca99-46a0-86ce-83d7eac9733e" path="/var/lib/kubelet/pods/d35e2a89-ca99-46a0-86ce-83d7eac9733e/volumes" Dec 11 15:31:33 crc kubenswrapper[5050]: I1211 15:31:33.108666 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-668cf9dfbb-lnbxw"] Dec 11 15:31:33 crc kubenswrapper[5050]: E1211 15:31:33.109472 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a7d3252-f0a4-4414-950b-4048d1824b3f" containerName="util" Dec 11 15:31:33 crc kubenswrapper[5050]: I1211 15:31:33.109492 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a7d3252-f0a4-4414-950b-4048d1824b3f" containerName="util" Dec 11 15:31:33 crc kubenswrapper[5050]: E1211 15:31:33.109516 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a7d3252-f0a4-4414-950b-4048d1824b3f" containerName="extract" Dec 11 15:31:33 crc kubenswrapper[5050]: I1211 15:31:33.109525 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a7d3252-f0a4-4414-950b-4048d1824b3f" containerName="extract" Dec 11 15:31:33 crc kubenswrapper[5050]: E1211 15:31:33.109571 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a7d3252-f0a4-4414-950b-4048d1824b3f" containerName="pull" Dec 11 15:31:33 crc kubenswrapper[5050]: I1211 15:31:33.109580 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a7d3252-f0a4-4414-950b-4048d1824b3f" containerName="pull" Dec 11 15:31:33 crc kubenswrapper[5050]: I1211 15:31:33.109822 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a7d3252-f0a4-4414-950b-4048d1824b3f" containerName="extract" Dec 11 15:31:33 crc kubenswrapper[5050]: I1211 15:31:33.110767 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-lnbxw" Dec 11 15:31:33 crc kubenswrapper[5050]: I1211 15:31:33.112935 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Dec 11 15:31:33 crc kubenswrapper[5050]: I1211 15:31:33.114607 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Dec 11 15:31:33 crc kubenswrapper[5050]: I1211 15:31:33.114824 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-mpfzr" Dec 11 15:31:33 crc kubenswrapper[5050]: I1211 15:31:33.126267 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-668cf9dfbb-lnbxw"] Dec 11 15:31:33 crc kubenswrapper[5050]: I1211 15:31:33.165376 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-88668db7c-r75jd"] Dec 11 15:31:33 crc kubenswrapper[5050]: I1211 15:31:33.166863 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-88668db7c-r75jd" Dec 11 15:31:33 crc kubenswrapper[5050]: I1211 15:31:33.170881 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-g8gr8" Dec 11 15:31:33 crc kubenswrapper[5050]: I1211 15:31:33.172396 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Dec 11 15:31:33 crc kubenswrapper[5050]: I1211 15:31:33.200174 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-88668db7c-r75jd"] Dec 11 15:31:33 crc kubenswrapper[5050]: I1211 15:31:33.264810 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-88668db7c-dgfc6"] Dec 11 15:31:33 crc kubenswrapper[5050]: I1211 15:31:33.266489 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-88668db7c-dgfc6" Dec 11 15:31:33 crc kubenswrapper[5050]: I1211 15:31:33.270373 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9fdf7601-ba17-4c54-b9aa-d45acd66f48f-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-88668db7c-r75jd\" (UID: \"9fdf7601-ba17-4c54-b9aa-d45acd66f48f\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-88668db7c-r75jd" Dec 11 15:31:33 crc kubenswrapper[5050]: I1211 15:31:33.270460 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9fdf7601-ba17-4c54-b9aa-d45acd66f48f-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-88668db7c-r75jd\" (UID: \"9fdf7601-ba17-4c54-b9aa-d45acd66f48f\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-88668db7c-r75jd" Dec 11 15:31:33 crc kubenswrapper[5050]: I1211 15:31:33.270512 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dcl6m\" (UniqueName: \"kubernetes.io/projected/a0eb6722-facd-448a-97aa-3a2206d037d1-kube-api-access-dcl6m\") pod \"obo-prometheus-operator-668cf9dfbb-lnbxw\" (UID: \"a0eb6722-facd-448a-97aa-3a2206d037d1\") " pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-lnbxw" Dec 11 15:31:33 crc kubenswrapper[5050]: I1211 15:31:33.289179 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-88668db7c-dgfc6"] Dec 11 15:31:33 crc kubenswrapper[5050]: I1211 15:31:33.335423 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-d8bb48f5d-wwdcc"] Dec 11 15:31:33 crc kubenswrapper[5050]: I1211 15:31:33.336695 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-wwdcc" Dec 11 15:31:33 crc kubenswrapper[5050]: I1211 15:31:33.339525 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Dec 11 15:31:33 crc kubenswrapper[5050]: I1211 15:31:33.343314 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-f5g9s" Dec 11 15:31:33 crc kubenswrapper[5050]: I1211 15:31:33.366899 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-d8bb48f5d-wwdcc"] Dec 11 15:31:33 crc kubenswrapper[5050]: I1211 15:31:33.374690 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9fdf7601-ba17-4c54-b9aa-d45acd66f48f-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-88668db7c-r75jd\" (UID: \"9fdf7601-ba17-4c54-b9aa-d45acd66f48f\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-88668db7c-r75jd" Dec 11 15:31:33 crc kubenswrapper[5050]: I1211 15:31:33.374791 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7b531a96-800b-4ce0-a9d5-f913c90693ba-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-88668db7c-dgfc6\" (UID: \"7b531a96-800b-4ce0-a9d5-f913c90693ba\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-88668db7c-dgfc6" Dec 11 15:31:33 crc kubenswrapper[5050]: I1211 15:31:33.374840 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9fdf7601-ba17-4c54-b9aa-d45acd66f48f-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-88668db7c-r75jd\" (UID: \"9fdf7601-ba17-4c54-b9aa-d45acd66f48f\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-88668db7c-r75jd" Dec 11 15:31:33 crc kubenswrapper[5050]: I1211 15:31:33.374883 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7b531a96-800b-4ce0-a9d5-f913c90693ba-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-88668db7c-dgfc6\" (UID: \"7b531a96-800b-4ce0-a9d5-f913c90693ba\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-88668db7c-dgfc6" Dec 11 15:31:33 crc kubenswrapper[5050]: I1211 15:31:33.374932 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dcl6m\" (UniqueName: \"kubernetes.io/projected/a0eb6722-facd-448a-97aa-3a2206d037d1-kube-api-access-dcl6m\") pod \"obo-prometheus-operator-668cf9dfbb-lnbxw\" (UID: \"a0eb6722-facd-448a-97aa-3a2206d037d1\") " pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-lnbxw" Dec 11 15:31:33 crc kubenswrapper[5050]: I1211 15:31:33.383074 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9fdf7601-ba17-4c54-b9aa-d45acd66f48f-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-88668db7c-r75jd\" (UID: \"9fdf7601-ba17-4c54-b9aa-d45acd66f48f\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-88668db7c-r75jd" Dec 11 15:31:33 crc kubenswrapper[5050]: I1211 15:31:33.384830 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: 
\"kubernetes.io/secret/9fdf7601-ba17-4c54-b9aa-d45acd66f48f-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-88668db7c-r75jd\" (UID: \"9fdf7601-ba17-4c54-b9aa-d45acd66f48f\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-88668db7c-r75jd" Dec 11 15:31:33 crc kubenswrapper[5050]: I1211 15:31:33.392993 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dcl6m\" (UniqueName: \"kubernetes.io/projected/a0eb6722-facd-448a-97aa-3a2206d037d1-kube-api-access-dcl6m\") pod \"obo-prometheus-operator-668cf9dfbb-lnbxw\" (UID: \"a0eb6722-facd-448a-97aa-3a2206d037d1\") " pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-lnbxw" Dec 11 15:31:33 crc kubenswrapper[5050]: I1211 15:31:33.437873 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-lnbxw" Dec 11 15:31:33 crc kubenswrapper[5050]: I1211 15:31:33.477177 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/1c8331c1-b8ee-456b-baa6-110917427b64-observability-operator-tls\") pod \"observability-operator-d8bb48f5d-wwdcc\" (UID: \"1c8331c1-b8ee-456b-baa6-110917427b64\") " pod="openshift-operators/observability-operator-d8bb48f5d-wwdcc" Dec 11 15:31:33 crc kubenswrapper[5050]: I1211 15:31:33.477625 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7b531a96-800b-4ce0-a9d5-f913c90693ba-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-88668db7c-dgfc6\" (UID: \"7b531a96-800b-4ce0-a9d5-f913c90693ba\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-88668db7c-dgfc6" Dec 11 15:31:33 crc kubenswrapper[5050]: I1211 15:31:33.477753 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7b531a96-800b-4ce0-a9d5-f913c90693ba-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-88668db7c-dgfc6\" (UID: \"7b531a96-800b-4ce0-a9d5-f913c90693ba\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-88668db7c-dgfc6" Dec 11 15:31:33 crc kubenswrapper[5050]: I1211 15:31:33.477959 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hgv5\" (UniqueName: \"kubernetes.io/projected/1c8331c1-b8ee-456b-baa6-110917427b64-kube-api-access-8hgv5\") pod \"observability-operator-d8bb48f5d-wwdcc\" (UID: \"1c8331c1-b8ee-456b-baa6-110917427b64\") " pod="openshift-operators/observability-operator-d8bb48f5d-wwdcc" Dec 11 15:31:33 crc kubenswrapper[5050]: I1211 15:31:33.482698 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7b531a96-800b-4ce0-a9d5-f913c90693ba-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-88668db7c-dgfc6\" (UID: \"7b531a96-800b-4ce0-a9d5-f913c90693ba\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-88668db7c-dgfc6" Dec 11 15:31:33 crc kubenswrapper[5050]: I1211 15:31:33.483181 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7b531a96-800b-4ce0-a9d5-f913c90693ba-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-88668db7c-dgfc6\" (UID: \"7b531a96-800b-4ce0-a9d5-f913c90693ba\") " 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-88668db7c-dgfc6" Dec 11 15:31:33 crc kubenswrapper[5050]: I1211 15:31:33.528412 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-88668db7c-r75jd" Dec 11 15:31:33 crc kubenswrapper[5050]: I1211 15:31:33.532238 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5446b9c989-dqk4m"] Dec 11 15:31:33 crc kubenswrapper[5050]: I1211 15:31:33.533556 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-dqk4m" Dec 11 15:31:33 crc kubenswrapper[5050]: I1211 15:31:33.538868 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-xflrf" Dec 11 15:31:33 crc kubenswrapper[5050]: I1211 15:31:33.547999 5050 scope.go:117] "RemoveContainer" containerID="ebf03c01a0c1f974cd2ea46998cc404d2f6db1e8b59294fa047964a30636886e" Dec 11 15:31:33 crc kubenswrapper[5050]: E1211 15:31:33.548223 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 15:31:33 crc kubenswrapper[5050]: I1211 15:31:33.580382 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/051b7665-675e-4109-a8e8-5a416c8b49cc-openshift-service-ca\") pod \"perses-operator-5446b9c989-dqk4m\" (UID: \"051b7665-675e-4109-a8e8-5a416c8b49cc\") " pod="openshift-operators/perses-operator-5446b9c989-dqk4m" Dec 11 15:31:33 crc kubenswrapper[5050]: I1211 15:31:33.580438 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8hgv5\" (UniqueName: \"kubernetes.io/projected/1c8331c1-b8ee-456b-baa6-110917427b64-kube-api-access-8hgv5\") pod \"observability-operator-d8bb48f5d-wwdcc\" (UID: \"1c8331c1-b8ee-456b-baa6-110917427b64\") " pod="openshift-operators/observability-operator-d8bb48f5d-wwdcc" Dec 11 15:31:33 crc kubenswrapper[5050]: I1211 15:31:33.582881 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5446b9c989-dqk4m"] Dec 11 15:31:33 crc kubenswrapper[5050]: I1211 15:31:33.580469 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2pxf\" (UniqueName: \"kubernetes.io/projected/051b7665-675e-4109-a8e8-5a416c8b49cc-kube-api-access-n2pxf\") pod \"perses-operator-5446b9c989-dqk4m\" (UID: \"051b7665-675e-4109-a8e8-5a416c8b49cc\") " pod="openshift-operators/perses-operator-5446b9c989-dqk4m" Dec 11 15:31:33 crc kubenswrapper[5050]: I1211 15:31:33.591992 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/1c8331c1-b8ee-456b-baa6-110917427b64-observability-operator-tls\") pod \"observability-operator-d8bb48f5d-wwdcc\" (UID: \"1c8331c1-b8ee-456b-baa6-110917427b64\") " pod="openshift-operators/observability-operator-d8bb48f5d-wwdcc" Dec 11 15:31:33 crc kubenswrapper[5050]: I1211 15:31:33.592465 5050 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-88668db7c-dgfc6" Dec 11 15:31:33 crc kubenswrapper[5050]: I1211 15:31:33.596473 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/1c8331c1-b8ee-456b-baa6-110917427b64-observability-operator-tls\") pod \"observability-operator-d8bb48f5d-wwdcc\" (UID: \"1c8331c1-b8ee-456b-baa6-110917427b64\") " pod="openshift-operators/observability-operator-d8bb48f5d-wwdcc" Dec 11 15:31:33 crc kubenswrapper[5050]: I1211 15:31:33.616468 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8hgv5\" (UniqueName: \"kubernetes.io/projected/1c8331c1-b8ee-456b-baa6-110917427b64-kube-api-access-8hgv5\") pod \"observability-operator-d8bb48f5d-wwdcc\" (UID: \"1c8331c1-b8ee-456b-baa6-110917427b64\") " pod="openshift-operators/observability-operator-d8bb48f5d-wwdcc" Dec 11 15:31:33 crc kubenswrapper[5050]: I1211 15:31:33.680867 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-wwdcc" Dec 11 15:31:33 crc kubenswrapper[5050]: I1211 15:31:33.697363 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/051b7665-675e-4109-a8e8-5a416c8b49cc-openshift-service-ca\") pod \"perses-operator-5446b9c989-dqk4m\" (UID: \"051b7665-675e-4109-a8e8-5a416c8b49cc\") " pod="openshift-operators/perses-operator-5446b9c989-dqk4m" Dec 11 15:31:33 crc kubenswrapper[5050]: I1211 15:31:33.697438 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n2pxf\" (UniqueName: \"kubernetes.io/projected/051b7665-675e-4109-a8e8-5a416c8b49cc-kube-api-access-n2pxf\") pod \"perses-operator-5446b9c989-dqk4m\" (UID: \"051b7665-675e-4109-a8e8-5a416c8b49cc\") " pod="openshift-operators/perses-operator-5446b9c989-dqk4m" Dec 11 15:31:33 crc kubenswrapper[5050]: I1211 15:31:33.699032 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/051b7665-675e-4109-a8e8-5a416c8b49cc-openshift-service-ca\") pod \"perses-operator-5446b9c989-dqk4m\" (UID: \"051b7665-675e-4109-a8e8-5a416c8b49cc\") " pod="openshift-operators/perses-operator-5446b9c989-dqk4m" Dec 11 15:31:33 crc kubenswrapper[5050]: I1211 15:31:33.732090 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n2pxf\" (UniqueName: \"kubernetes.io/projected/051b7665-675e-4109-a8e8-5a416c8b49cc-kube-api-access-n2pxf\") pod \"perses-operator-5446b9c989-dqk4m\" (UID: \"051b7665-675e-4109-a8e8-5a416c8b49cc\") " pod="openshift-operators/perses-operator-5446b9c989-dqk4m" Dec 11 15:31:33 crc kubenswrapper[5050]: I1211 15:31:33.974134 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-dqk4m" Dec 11 15:31:34 crc kubenswrapper[5050]: I1211 15:31:34.111933 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-668cf9dfbb-lnbxw"] Dec 11 15:31:34 crc kubenswrapper[5050]: W1211 15:31:34.127964 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda0eb6722_facd_448a_97aa_3a2206d037d1.slice/crio-814a6788cae783a4758d6da43895db2247ee88001cced4000d2b4e4fc6eee145 WatchSource:0}: Error finding container 814a6788cae783a4758d6da43895db2247ee88001cced4000d2b4e4fc6eee145: Status 404 returned error can't find the container with id 814a6788cae783a4758d6da43895db2247ee88001cced4000d2b4e4fc6eee145 Dec 11 15:31:34 crc kubenswrapper[5050]: I1211 15:31:34.245278 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-88668db7c-dgfc6"] Dec 11 15:31:34 crc kubenswrapper[5050]: W1211 15:31:34.257615 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7b531a96_800b_4ce0_a9d5_f913c90693ba.slice/crio-29831af2e23f3b84a82938559332b818015d60899ff8a7e8dbea105a02cc5f73 WatchSource:0}: Error finding container 29831af2e23f3b84a82938559332b818015d60899ff8a7e8dbea105a02cc5f73: Status 404 returned error can't find the container with id 29831af2e23f3b84a82938559332b818015d60899ff8a7e8dbea105a02cc5f73 Dec 11 15:31:34 crc kubenswrapper[5050]: I1211 15:31:34.335119 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-88668db7c-r75jd"] Dec 11 15:31:34 crc kubenswrapper[5050]: I1211 15:31:34.354729 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-d8bb48f5d-wwdcc"] Dec 11 15:31:34 crc kubenswrapper[5050]: W1211 15:31:34.356942 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9fdf7601_ba17_4c54_b9aa_d45acd66f48f.slice/crio-d7ca5e962b105ecb08ff71e94c945aa62a1809b64e0d2fb7f874b533c59745b3 WatchSource:0}: Error finding container d7ca5e962b105ecb08ff71e94c945aa62a1809b64e0d2fb7f874b533c59745b3: Status 404 returned error can't find the container with id d7ca5e962b105ecb08ff71e94c945aa62a1809b64e0d2fb7f874b533c59745b3 Dec 11 15:31:34 crc kubenswrapper[5050]: I1211 15:31:34.516161 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5446b9c989-dqk4m"] Dec 11 15:31:34 crc kubenswrapper[5050]: W1211 15:31:34.527533 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod051b7665_675e_4109_a8e8_5a416c8b49cc.slice/crio-e70cf55ebf095433f91d201bea982298b45d98c92815004a973dcc99a866c0ba WatchSource:0}: Error finding container e70cf55ebf095433f91d201bea982298b45d98c92815004a973dcc99a866c0ba: Status 404 returned error can't find the container with id e70cf55ebf095433f91d201bea982298b45d98c92815004a973dcc99a866c0ba Dec 11 15:31:34 crc kubenswrapper[5050]: I1211 15:31:34.906240 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-88668db7c-dgfc6" event={"ID":"7b531a96-800b-4ce0-a9d5-f913c90693ba","Type":"ContainerStarted","Data":"29831af2e23f3b84a82938559332b818015d60899ff8a7e8dbea105a02cc5f73"} Dec 11 
15:31:34 crc kubenswrapper[5050]: I1211 15:31:34.922706 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-lnbxw" event={"ID":"a0eb6722-facd-448a-97aa-3a2206d037d1","Type":"ContainerStarted","Data":"814a6788cae783a4758d6da43895db2247ee88001cced4000d2b4e4fc6eee145"} Dec 11 15:31:34 crc kubenswrapper[5050]: I1211 15:31:34.933476 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-d8bb48f5d-wwdcc" event={"ID":"1c8331c1-b8ee-456b-baa6-110917427b64","Type":"ContainerStarted","Data":"6d1ccf7ca4761dcc3f11a28dc22a144f0cf76e903b9cd9e6fe7350550af1a0d5"} Dec 11 15:31:34 crc kubenswrapper[5050]: I1211 15:31:34.942998 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-88668db7c-r75jd" event={"ID":"9fdf7601-ba17-4c54-b9aa-d45acd66f48f","Type":"ContainerStarted","Data":"d7ca5e962b105ecb08ff71e94c945aa62a1809b64e0d2fb7f874b533c59745b3"} Dec 11 15:31:34 crc kubenswrapper[5050]: I1211 15:31:34.954413 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5446b9c989-dqk4m" event={"ID":"051b7665-675e-4109-a8e8-5a416c8b49cc","Type":"ContainerStarted","Data":"e70cf55ebf095433f91d201bea982298b45d98c92815004a973dcc99a866c0ba"} Dec 11 15:31:39 crc kubenswrapper[5050]: I1211 15:31:39.028934 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-tx9kf"] Dec 11 15:31:39 crc kubenswrapper[5050]: I1211 15:31:39.040582 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-tx9kf"] Dec 11 15:31:39 crc kubenswrapper[5050]: I1211 15:31:39.563221 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="768c7508-49b4-4465-9cf9-f1388a1ca283" path="/var/lib/kubelet/pods/768c7508-49b4-4465-9cf9-f1388a1ca283/volumes" Dec 11 15:31:40 crc kubenswrapper[5050]: I1211 15:31:40.918863 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-mwbzz"] Dec 11 15:31:40 crc kubenswrapper[5050]: I1211 15:31:40.921809 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mwbzz" Dec 11 15:31:40 crc kubenswrapper[5050]: I1211 15:31:40.932111 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mwbzz"] Dec 11 15:31:40 crc kubenswrapper[5050]: I1211 15:31:40.990327 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7df56cd6-2ce7-49f3-90fd-3c79af9f9c8a-catalog-content\") pod \"certified-operators-mwbzz\" (UID: \"7df56cd6-2ce7-49f3-90fd-3c79af9f9c8a\") " pod="openshift-marketplace/certified-operators-mwbzz" Dec 11 15:31:40 crc kubenswrapper[5050]: I1211 15:31:40.990621 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pzf98\" (UniqueName: \"kubernetes.io/projected/7df56cd6-2ce7-49f3-90fd-3c79af9f9c8a-kube-api-access-pzf98\") pod \"certified-operators-mwbzz\" (UID: \"7df56cd6-2ce7-49f3-90fd-3c79af9f9c8a\") " pod="openshift-marketplace/certified-operators-mwbzz" Dec 11 15:31:40 crc kubenswrapper[5050]: I1211 15:31:40.990755 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7df56cd6-2ce7-49f3-90fd-3c79af9f9c8a-utilities\") pod \"certified-operators-mwbzz\" (UID: \"7df56cd6-2ce7-49f3-90fd-3c79af9f9c8a\") " pod="openshift-marketplace/certified-operators-mwbzz" Dec 11 15:31:41 crc kubenswrapper[5050]: I1211 15:31:41.096672 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7df56cd6-2ce7-49f3-90fd-3c79af9f9c8a-catalog-content\") pod \"certified-operators-mwbzz\" (UID: \"7df56cd6-2ce7-49f3-90fd-3c79af9f9c8a\") " pod="openshift-marketplace/certified-operators-mwbzz" Dec 11 15:31:41 crc kubenswrapper[5050]: I1211 15:31:41.096724 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pzf98\" (UniqueName: \"kubernetes.io/projected/7df56cd6-2ce7-49f3-90fd-3c79af9f9c8a-kube-api-access-pzf98\") pod \"certified-operators-mwbzz\" (UID: \"7df56cd6-2ce7-49f3-90fd-3c79af9f9c8a\") " pod="openshift-marketplace/certified-operators-mwbzz" Dec 11 15:31:41 crc kubenswrapper[5050]: I1211 15:31:41.096931 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7df56cd6-2ce7-49f3-90fd-3c79af9f9c8a-utilities\") pod \"certified-operators-mwbzz\" (UID: \"7df56cd6-2ce7-49f3-90fd-3c79af9f9c8a\") " pod="openshift-marketplace/certified-operators-mwbzz" Dec 11 15:31:41 crc kubenswrapper[5050]: I1211 15:31:41.098978 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7df56cd6-2ce7-49f3-90fd-3c79af9f9c8a-utilities\") pod \"certified-operators-mwbzz\" (UID: \"7df56cd6-2ce7-49f3-90fd-3c79af9f9c8a\") " pod="openshift-marketplace/certified-operators-mwbzz" Dec 11 15:31:41 crc kubenswrapper[5050]: I1211 15:31:41.098992 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7df56cd6-2ce7-49f3-90fd-3c79af9f9c8a-catalog-content\") pod \"certified-operators-mwbzz\" (UID: \"7df56cd6-2ce7-49f3-90fd-3c79af9f9c8a\") " pod="openshift-marketplace/certified-operators-mwbzz" Dec 11 15:31:41 crc kubenswrapper[5050]: I1211 15:31:41.122323 5050 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-pzf98\" (UniqueName: \"kubernetes.io/projected/7df56cd6-2ce7-49f3-90fd-3c79af9f9c8a-kube-api-access-pzf98\") pod \"certified-operators-mwbzz\" (UID: \"7df56cd6-2ce7-49f3-90fd-3c79af9f9c8a\") " pod="openshift-marketplace/certified-operators-mwbzz" Dec 11 15:31:41 crc kubenswrapper[5050]: I1211 15:31:41.265847 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mwbzz" Dec 11 15:31:43 crc kubenswrapper[5050]: I1211 15:31:43.324574 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-n72qg"] Dec 11 15:31:43 crc kubenswrapper[5050]: I1211 15:31:43.327159 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-n72qg" Dec 11 15:31:43 crc kubenswrapper[5050]: I1211 15:31:43.331563 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-n72qg"] Dec 11 15:31:43 crc kubenswrapper[5050]: I1211 15:31:43.442965 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/573bf003-8d0e-432d-8729-b007d972ea7b-catalog-content\") pod \"community-operators-n72qg\" (UID: \"573bf003-8d0e-432d-8729-b007d972ea7b\") " pod="openshift-marketplace/community-operators-n72qg" Dec 11 15:31:43 crc kubenswrapper[5050]: I1211 15:31:43.443307 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/573bf003-8d0e-432d-8729-b007d972ea7b-utilities\") pod \"community-operators-n72qg\" (UID: \"573bf003-8d0e-432d-8729-b007d972ea7b\") " pod="openshift-marketplace/community-operators-n72qg" Dec 11 15:31:43 crc kubenswrapper[5050]: I1211 15:31:43.443354 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bxxwq\" (UniqueName: \"kubernetes.io/projected/573bf003-8d0e-432d-8729-b007d972ea7b-kube-api-access-bxxwq\") pod \"community-operators-n72qg\" (UID: \"573bf003-8d0e-432d-8729-b007d972ea7b\") " pod="openshift-marketplace/community-operators-n72qg" Dec 11 15:31:43 crc kubenswrapper[5050]: I1211 15:31:43.545312 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/573bf003-8d0e-432d-8729-b007d972ea7b-utilities\") pod \"community-operators-n72qg\" (UID: \"573bf003-8d0e-432d-8729-b007d972ea7b\") " pod="openshift-marketplace/community-operators-n72qg" Dec 11 15:31:43 crc kubenswrapper[5050]: I1211 15:31:43.545646 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bxxwq\" (UniqueName: \"kubernetes.io/projected/573bf003-8d0e-432d-8729-b007d972ea7b-kube-api-access-bxxwq\") pod \"community-operators-n72qg\" (UID: \"573bf003-8d0e-432d-8729-b007d972ea7b\") " pod="openshift-marketplace/community-operators-n72qg" Dec 11 15:31:43 crc kubenswrapper[5050]: I1211 15:31:43.545793 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/573bf003-8d0e-432d-8729-b007d972ea7b-utilities\") pod \"community-operators-n72qg\" (UID: \"573bf003-8d0e-432d-8729-b007d972ea7b\") " pod="openshift-marketplace/community-operators-n72qg" Dec 11 15:31:43 crc kubenswrapper[5050]: I1211 15:31:43.546295 5050 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/573bf003-8d0e-432d-8729-b007d972ea7b-catalog-content\") pod \"community-operators-n72qg\" (UID: \"573bf003-8d0e-432d-8729-b007d972ea7b\") " pod="openshift-marketplace/community-operators-n72qg" Dec 11 15:31:43 crc kubenswrapper[5050]: I1211 15:31:43.546717 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/573bf003-8d0e-432d-8729-b007d972ea7b-catalog-content\") pod \"community-operators-n72qg\" (UID: \"573bf003-8d0e-432d-8729-b007d972ea7b\") " pod="openshift-marketplace/community-operators-n72qg" Dec 11 15:31:43 crc kubenswrapper[5050]: I1211 15:31:43.570156 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bxxwq\" (UniqueName: \"kubernetes.io/projected/573bf003-8d0e-432d-8729-b007d972ea7b-kube-api-access-bxxwq\") pod \"community-operators-n72qg\" (UID: \"573bf003-8d0e-432d-8729-b007d972ea7b\") " pod="openshift-marketplace/community-operators-n72qg" Dec 11 15:31:43 crc kubenswrapper[5050]: I1211 15:31:43.649545 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-n72qg" Dec 11 15:31:44 crc kubenswrapper[5050]: I1211 15:31:44.546779 5050 scope.go:117] "RemoveContainer" containerID="ebf03c01a0c1f974cd2ea46998cc404d2f6db1e8b59294fa047964a30636886e" Dec 11 15:31:44 crc kubenswrapper[5050]: E1211 15:31:44.547359 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 15:31:48 crc kubenswrapper[5050]: I1211 15:31:48.004177 5050 patch_prober.go:28] interesting pod/console-6758fcc465-5n5wb container/console namespace/openshift-console: Readiness probe status=failure output="Get \"https://10.217.0.46:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 11 15:31:48 crc kubenswrapper[5050]: I1211 15:31:48.004986 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-6758fcc465-5n5wb" podUID="a71bf2e0-2e1a-4591-8e3b-7db34508a3cd" containerName="console" probeResult="failure" output="Get \"https://10.217.0.46:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 11 15:31:51 crc kubenswrapper[5050]: E1211 15:31:51.790242 5050 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/cluster-observability-operator/obo-prometheus-rhel9-operator@sha256:203cf5b9dc1460f09e75f58d8b5cf7df5e57c18c8c6a41c14b5e8977d83263f3" Dec 11 15:31:51 crc kubenswrapper[5050]: E1211 15:31:51.792089 5050 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:prometheus-operator,Image:registry.redhat.io/cluster-observability-operator/obo-prometheus-rhel9-operator@sha256:203cf5b9dc1460f09e75f58d8b5cf7df5e57c18c8c6a41c14b5e8977d83263f3,Command:[],Args:[--prometheus-config-reloader=$(RELATED_IMAGE_PROMETHEUS_CONFIG_RELOADER) --prometheus-instance-selector=app.kubernetes.io/managed-by=observability-operator --alertmanager-instance-selector=app.kubernetes.io/managed-by=observability-operator --thanos-ruler-instance-selector=app.kubernetes.io/managed-by=observability-operator],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http,HostPort:0,ContainerPort:8080,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:GOGC,Value:30,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_PROMETHEUS_CONFIG_RELOADER,Value:registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-prometheus-config-reloader-rhel9@sha256:1133c973c7472c665f910a722e19c8e2e27accb34b90fab67f14548627ce9c62,ValueFrom:nil,},EnvVar{Name:OPERATOR_CONDITION_NAME,Value:cluster-observability-operator.v1.3.0,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{100 -3} {} 100m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{157286400 0} {} 150Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dcl6m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod obo-prometheus-operator-668cf9dfbb-lnbxw_openshift-operators(a0eb6722-facd-448a-97aa-3a2206d037d1): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 11 15:31:51 crc kubenswrapper[5050]: E1211 15:31:51.793667 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"prometheus-operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-lnbxw" podUID="a0eb6722-facd-448a-97aa-3a2206d037d1" Dec 11 15:31:52 crc kubenswrapper[5050]: I1211 15:31:52.046215 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-qqfkl"] Dec 11 15:31:52 crc kubenswrapper[5050]: I1211 15:31:52.055726 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-qqfkl"] Dec 11 15:31:52 crc kubenswrapper[5050]: E1211 15:31:52.353929 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"prometheus-operator\" with ImagePullBackOff: \"Back-off pulling image 
\\\"registry.redhat.io/cluster-observability-operator/obo-prometheus-rhel9-operator@sha256:203cf5b9dc1460f09e75f58d8b5cf7df5e57c18c8c6a41c14b5e8977d83263f3\\\"\"" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-lnbxw" podUID="a0eb6722-facd-448a-97aa-3a2206d037d1" Dec 11 15:31:53 crc kubenswrapper[5050]: I1211 15:31:53.047648 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-qx6t6"] Dec 11 15:31:53 crc kubenswrapper[5050]: I1211 15:31:53.058430 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-qx6t6"] Dec 11 15:31:53 crc kubenswrapper[5050]: I1211 15:31:53.277519 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5446b9c989-dqk4m" Dec 11 15:31:53 crc kubenswrapper[5050]: I1211 15:31:53.313144 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mwbzz"] Dec 11 15:31:53 crc kubenswrapper[5050]: I1211 15:31:53.319578 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5446b9c989-dqk4m" podStartSLOduration=2.050774152 podStartE2EDuration="20.319558275s" podCreationTimestamp="2025-12-11 15:31:33 +0000 UTC" firstStartedPulling="2025-12-11 15:31:34.53021312 +0000 UTC m=+6185.373935706" lastFinishedPulling="2025-12-11 15:31:52.798997243 +0000 UTC m=+6203.642719829" observedRunningTime="2025-12-11 15:31:53.300628199 +0000 UTC m=+6204.144350805" watchObservedRunningTime="2025-12-11 15:31:53.319558275 +0000 UTC m=+6204.163280861" Dec 11 15:31:53 crc kubenswrapper[5050]: I1211 15:31:53.417839 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-n72qg"] Dec 11 15:31:53 crc kubenswrapper[5050]: I1211 15:31:53.558794 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="afe7472c-e7d5-4aef-859d-fc0f5e43e05f" path="/var/lib/kubelet/pods/afe7472c-e7d5-4aef-859d-fc0f5e43e05f/volumes" Dec 11 15:31:53 crc kubenswrapper[5050]: I1211 15:31:53.559382 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c3e50969-a69c-4b66-9788-ca2566127898" path="/var/lib/kubelet/pods/c3e50969-a69c-4b66-9788-ca2566127898/volumes" Dec 11 15:31:54 crc kubenswrapper[5050]: I1211 15:31:54.286587 5050 generic.go:334] "Generic (PLEG): container finished" podID="573bf003-8d0e-432d-8729-b007d972ea7b" containerID="32a7793c6c2569549adf3e8a2dde32dd2d9293ccdde68140aa8881cec389b874" exitCode=0 Dec 11 15:31:54 crc kubenswrapper[5050]: I1211 15:31:54.286664 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n72qg" event={"ID":"573bf003-8d0e-432d-8729-b007d972ea7b","Type":"ContainerDied","Data":"32a7793c6c2569549adf3e8a2dde32dd2d9293ccdde68140aa8881cec389b874"} Dec 11 15:31:54 crc kubenswrapper[5050]: I1211 15:31:54.286991 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n72qg" event={"ID":"573bf003-8d0e-432d-8729-b007d972ea7b","Type":"ContainerStarted","Data":"6d287e7c69eb7914a2ec84044908d32654a799d65c02a0621cad12e1ed1b8d57"} Dec 11 15:31:54 crc kubenswrapper[5050]: I1211 15:31:54.288856 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5446b9c989-dqk4m" event={"ID":"051b7665-675e-4109-a8e8-5a416c8b49cc","Type":"ContainerStarted","Data":"24d044e6002ceea70e407df4c2114b5a4d2691b9219b5504ac9d1c6b18f96c6a"} Dec 11 15:31:54 crc kubenswrapper[5050]: 
I1211 15:31:54.306575 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-88668db7c-dgfc6" event={"ID":"7b531a96-800b-4ce0-a9d5-f913c90693ba","Type":"ContainerStarted","Data":"bc25f77291c5f483ddae4409d698fc2139cd9bb0fc44465ac4d1ac3f9dacc12c"} Dec 11 15:31:54 crc kubenswrapper[5050]: I1211 15:31:54.309027 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-d8bb48f5d-wwdcc" event={"ID":"1c8331c1-b8ee-456b-baa6-110917427b64","Type":"ContainerStarted","Data":"bc1a7c414fb7ed1343e941a8a7ba794d8eed3ccaad27a52f69cfbbfb3ef248e7"} Dec 11 15:31:54 crc kubenswrapper[5050]: I1211 15:31:54.309577 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-d8bb48f5d-wwdcc" Dec 11 15:31:54 crc kubenswrapper[5050]: I1211 15:31:54.311035 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-d8bb48f5d-wwdcc" Dec 11 15:31:54 crc kubenswrapper[5050]: I1211 15:31:54.312142 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-88668db7c-r75jd" event={"ID":"9fdf7601-ba17-4c54-b9aa-d45acd66f48f","Type":"ContainerStarted","Data":"fe523c62dc0746a6c39f9e28b22f84b9a2ba4e46ca4a742972f74f2ce63661b0"} Dec 11 15:31:54 crc kubenswrapper[5050]: I1211 15:31:54.313750 5050 generic.go:334] "Generic (PLEG): container finished" podID="7df56cd6-2ce7-49f3-90fd-3c79af9f9c8a" containerID="32f6110f67e367b895cde401023f13d5b1f85c3f15377b7b2569f60a1a7a72a5" exitCode=0 Dec 11 15:31:54 crc kubenswrapper[5050]: I1211 15:31:54.313799 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mwbzz" event={"ID":"7df56cd6-2ce7-49f3-90fd-3c79af9f9c8a","Type":"ContainerDied","Data":"32f6110f67e367b895cde401023f13d5b1f85c3f15377b7b2569f60a1a7a72a5"} Dec 11 15:31:54 crc kubenswrapper[5050]: I1211 15:31:54.313825 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mwbzz" event={"ID":"7df56cd6-2ce7-49f3-90fd-3c79af9f9c8a","Type":"ContainerStarted","Data":"f49174df14f24904a1a6ff93a3909e0fb96ebe4350e98915c9d727d8fb683010"} Dec 11 15:31:54 crc kubenswrapper[5050]: I1211 15:31:54.355113 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-d8bb48f5d-wwdcc" podStartSLOduration=2.922028431 podStartE2EDuration="21.355076451s" podCreationTimestamp="2025-12-11 15:31:33 +0000 UTC" firstStartedPulling="2025-12-11 15:31:34.36283621 +0000 UTC m=+6185.206558796" lastFinishedPulling="2025-12-11 15:31:52.79588423 +0000 UTC m=+6203.639606816" observedRunningTime="2025-12-11 15:31:54.349441911 +0000 UTC m=+6205.193164497" watchObservedRunningTime="2025-12-11 15:31:54.355076451 +0000 UTC m=+6205.198799037" Dec 11 15:31:54 crc kubenswrapper[5050]: I1211 15:31:54.404271 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-88668db7c-dgfc6" podStartSLOduration=2.855804692 podStartE2EDuration="21.404250684s" podCreationTimestamp="2025-12-11 15:31:33 +0000 UTC" firstStartedPulling="2025-12-11 15:31:34.26062573 +0000 UTC m=+6185.104348316" lastFinishedPulling="2025-12-11 15:31:52.809071722 +0000 UTC m=+6203.652794308" observedRunningTime="2025-12-11 15:31:54.400610537 +0000 UTC m=+6205.244333123" 
watchObservedRunningTime="2025-12-11 15:31:54.404250684 +0000 UTC m=+6205.247973270" Dec 11 15:31:54 crc kubenswrapper[5050]: I1211 15:31:54.435477 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-88668db7c-r75jd" podStartSLOduration=2.9881320860000002 podStartE2EDuration="21.435460468s" podCreationTimestamp="2025-12-11 15:31:33 +0000 UTC" firstStartedPulling="2025-12-11 15:31:34.36174168 +0000 UTC m=+6185.205464266" lastFinishedPulling="2025-12-11 15:31:52.809070062 +0000 UTC m=+6203.652792648" observedRunningTime="2025-12-11 15:31:54.426368805 +0000 UTC m=+6205.270091391" watchObservedRunningTime="2025-12-11 15:31:54.435460468 +0000 UTC m=+6205.279183054" Dec 11 15:31:55 crc kubenswrapper[5050]: I1211 15:31:55.339810 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n72qg" event={"ID":"573bf003-8d0e-432d-8729-b007d972ea7b","Type":"ContainerStarted","Data":"8a7acb3e92523020874601ab04d189f24eb76757d95dbc3bd6ae594154cc3f12"} Dec 11 15:31:56 crc kubenswrapper[5050]: I1211 15:31:56.350177 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mwbzz" event={"ID":"7df56cd6-2ce7-49f3-90fd-3c79af9f9c8a","Type":"ContainerStarted","Data":"90b5200b45db6423e146e8d7c3067953d0e92db44e384e3bdfc475dc90df1046"} Dec 11 15:31:57 crc kubenswrapper[5050]: I1211 15:31:57.363998 5050 generic.go:334] "Generic (PLEG): container finished" podID="573bf003-8d0e-432d-8729-b007d972ea7b" containerID="8a7acb3e92523020874601ab04d189f24eb76757d95dbc3bd6ae594154cc3f12" exitCode=0 Dec 11 15:31:57 crc kubenswrapper[5050]: I1211 15:31:57.364068 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n72qg" event={"ID":"573bf003-8d0e-432d-8729-b007d972ea7b","Type":"ContainerDied","Data":"8a7acb3e92523020874601ab04d189f24eb76757d95dbc3bd6ae594154cc3f12"} Dec 11 15:31:57 crc kubenswrapper[5050]: I1211 15:31:57.546942 5050 scope.go:117] "RemoveContainer" containerID="ebf03c01a0c1f974cd2ea46998cc404d2f6db1e8b59294fa047964a30636886e" Dec 11 15:31:57 crc kubenswrapper[5050]: E1211 15:31:57.547318 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 15:31:59 crc kubenswrapper[5050]: I1211 15:31:59.383661 5050 generic.go:334] "Generic (PLEG): container finished" podID="7df56cd6-2ce7-49f3-90fd-3c79af9f9c8a" containerID="90b5200b45db6423e146e8d7c3067953d0e92db44e384e3bdfc475dc90df1046" exitCode=0 Dec 11 15:31:59 crc kubenswrapper[5050]: I1211 15:31:59.383748 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mwbzz" event={"ID":"7df56cd6-2ce7-49f3-90fd-3c79af9f9c8a","Type":"ContainerDied","Data":"90b5200b45db6423e146e8d7c3067953d0e92db44e384e3bdfc475dc90df1046"} Dec 11 15:32:00 crc kubenswrapper[5050]: I1211 15:32:00.393806 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mwbzz" event={"ID":"7df56cd6-2ce7-49f3-90fd-3c79af9f9c8a","Type":"ContainerStarted","Data":"9e9347371c6a2ec24f63bfee61027b068b67cffdfddac7e82b50c4fcc2803480"} 
Dec 11 15:32:00 crc kubenswrapper[5050]: I1211 15:32:00.397002 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n72qg" event={"ID":"573bf003-8d0e-432d-8729-b007d972ea7b","Type":"ContainerStarted","Data":"a8b4d2b93d8f8f671064bbe3501bf8f6e77eaa2c8520ade9f059b9f462d05ca1"} Dec 11 15:32:00 crc kubenswrapper[5050]: I1211 15:32:00.424096 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-mwbzz" podStartSLOduration=14.959126874 podStartE2EDuration="20.424079349s" podCreationTimestamp="2025-12-11 15:31:40 +0000 UTC" firstStartedPulling="2025-12-11 15:31:54.315297739 +0000 UTC m=+6205.159020325" lastFinishedPulling="2025-12-11 15:31:59.780250214 +0000 UTC m=+6210.623972800" observedRunningTime="2025-12-11 15:32:00.415024147 +0000 UTC m=+6211.258746753" watchObservedRunningTime="2025-12-11 15:32:00.424079349 +0000 UTC m=+6211.267801935" Dec 11 15:32:00 crc kubenswrapper[5050]: I1211 15:32:00.445485 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-n72qg" podStartSLOduration=12.386420885 podStartE2EDuration="17.44546328s" podCreationTimestamp="2025-12-11 15:31:43 +0000 UTC" firstStartedPulling="2025-12-11 15:31:54.288703428 +0000 UTC m=+6205.132426014" lastFinishedPulling="2025-12-11 15:31:59.347745823 +0000 UTC m=+6210.191468409" observedRunningTime="2025-12-11 15:32:00.440333723 +0000 UTC m=+6211.284056319" watchObservedRunningTime="2025-12-11 15:32:00.44546328 +0000 UTC m=+6211.289185876" Dec 11 15:32:00 crc kubenswrapper[5050]: I1211 15:32:00.875702 5050 scope.go:117] "RemoveContainer" containerID="bba526dafd51c2c91e6a8bc3cb18cc4b2a68f042ba7fc81d2819805dac2c26a4" Dec 11 15:32:00 crc kubenswrapper[5050]: I1211 15:32:00.913184 5050 scope.go:117] "RemoveContainer" containerID="c02660abc3fc5fb076ff5ad5bf2e511e26185a715de3f3abc635e5d8f4dcacdc" Dec 11 15:32:00 crc kubenswrapper[5050]: I1211 15:32:00.949457 5050 scope.go:117] "RemoveContainer" containerID="c1e7a916f86387d6bb6a96e63dcb5d510f10fc66d19aee95197e783fdfbcaf87" Dec 11 15:32:00 crc kubenswrapper[5050]: I1211 15:32:00.997458 5050 scope.go:117] "RemoveContainer" containerID="266d44f0eadb34c4b91b10f9eaec54f29a10ca04027a57ae77f6f1e133cf194e" Dec 11 15:32:01 crc kubenswrapper[5050]: I1211 15:32:01.053403 5050 scope.go:117] "RemoveContainer" containerID="b42d6621c7a82a0d50cf771a1c285c9d99fc393f239c5167d830d58ee4694b91" Dec 11 15:32:01 crc kubenswrapper[5050]: I1211 15:32:01.081578 5050 scope.go:117] "RemoveContainer" containerID="c06af4b1d0beb3613d151bbf71668b5e3490434ca6e76f526229fa627f332232" Dec 11 15:32:01 crc kubenswrapper[5050]: I1211 15:32:01.147307 5050 scope.go:117] "RemoveContainer" containerID="1adc4ab25db7fa2cbe239da32fd3d06314408717077a26f512cd6812c2d70dc4" Dec 11 15:32:01 crc kubenswrapper[5050]: I1211 15:32:01.170893 5050 scope.go:117] "RemoveContainer" containerID="069c9a16de9de821816198ef02460338c610d667d1330d2725bf537d9b806e8b" Dec 11 15:32:01 crc kubenswrapper[5050]: I1211 15:32:01.194505 5050 scope.go:117] "RemoveContainer" containerID="f7f1b6337ce8ed39b35e0217d22c76a014c4bda1bad0bd9c450cb00397f0876e" Dec 11 15:32:01 crc kubenswrapper[5050]: I1211 15:32:01.266980 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-mwbzz" Dec 11 15:32:01 crc kubenswrapper[5050]: I1211 15:32:01.267033 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/certified-operators-mwbzz" Dec 11 15:32:02 crc kubenswrapper[5050]: I1211 15:32:02.314244 5050 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-mwbzz" podUID="7df56cd6-2ce7-49f3-90fd-3c79af9f9c8a" containerName="registry-server" probeResult="failure" output=< Dec 11 15:32:02 crc kubenswrapper[5050]: timeout: failed to connect service ":50051" within 1s Dec 11 15:32:02 crc kubenswrapper[5050]: > Dec 11 15:32:03 crc kubenswrapper[5050]: I1211 15:32:03.650551 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-n72qg" Dec 11 15:32:03 crc kubenswrapper[5050]: I1211 15:32:03.650927 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-n72qg" Dec 11 15:32:03 crc kubenswrapper[5050]: I1211 15:32:03.698505 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-n72qg" Dec 11 15:32:03 crc kubenswrapper[5050]: I1211 15:32:03.976972 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5446b9c989-dqk4m" Dec 11 15:32:04 crc kubenswrapper[5050]: I1211 15:32:04.521579 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-n72qg" Dec 11 15:32:05 crc kubenswrapper[5050]: I1211 15:32:05.452136 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-lnbxw" event={"ID":"a0eb6722-facd-448a-97aa-3a2206d037d1","Type":"ContainerStarted","Data":"1edf5687574483082af9814aad527130571b16e73a2c8de03974e31f86fa41c8"} Dec 11 15:32:05 crc kubenswrapper[5050]: I1211 15:32:05.702903 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-lnbxw" podStartSLOduration=2.5488218849999997 podStartE2EDuration="32.702878632s" podCreationTimestamp="2025-12-11 15:31:33 +0000 UTC" firstStartedPulling="2025-12-11 15:31:34.130734571 +0000 UTC m=+6184.974457157" lastFinishedPulling="2025-12-11 15:32:04.284791318 +0000 UTC m=+6215.128513904" observedRunningTime="2025-12-11 15:32:05.476752073 +0000 UTC m=+6216.320474659" watchObservedRunningTime="2025-12-11 15:32:05.702878632 +0000 UTC m=+6216.546601228" Dec 11 15:32:05 crc kubenswrapper[5050]: I1211 15:32:05.721738 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-n72qg"] Dec 11 15:32:06 crc kubenswrapper[5050]: I1211 15:32:06.458777 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-n72qg" podUID="573bf003-8d0e-432d-8729-b007d972ea7b" containerName="registry-server" containerID="cri-o://a8b4d2b93d8f8f671064bbe3501bf8f6e77eaa2c8520ade9f059b9f462d05ca1" gracePeriod=2 Dec 11 15:32:07 crc kubenswrapper[5050]: I1211 15:32:07.068977 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-n72qg" Dec 11 15:32:07 crc kubenswrapper[5050]: I1211 15:32:07.078830 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-q2m5c"] Dec 11 15:32:07 crc kubenswrapper[5050]: I1211 15:32:07.092232 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-q2m5c"] Dec 11 15:32:07 crc kubenswrapper[5050]: I1211 15:32:07.143621 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bxxwq\" (UniqueName: \"kubernetes.io/projected/573bf003-8d0e-432d-8729-b007d972ea7b-kube-api-access-bxxwq\") pod \"573bf003-8d0e-432d-8729-b007d972ea7b\" (UID: \"573bf003-8d0e-432d-8729-b007d972ea7b\") " Dec 11 15:32:07 crc kubenswrapper[5050]: I1211 15:32:07.143790 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/573bf003-8d0e-432d-8729-b007d972ea7b-catalog-content\") pod \"573bf003-8d0e-432d-8729-b007d972ea7b\" (UID: \"573bf003-8d0e-432d-8729-b007d972ea7b\") " Dec 11 15:32:07 crc kubenswrapper[5050]: I1211 15:32:07.143955 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/573bf003-8d0e-432d-8729-b007d972ea7b-utilities\") pod \"573bf003-8d0e-432d-8729-b007d972ea7b\" (UID: \"573bf003-8d0e-432d-8729-b007d972ea7b\") " Dec 11 15:32:07 crc kubenswrapper[5050]: I1211 15:32:07.144546 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/573bf003-8d0e-432d-8729-b007d972ea7b-utilities" (OuterVolumeSpecName: "utilities") pod "573bf003-8d0e-432d-8729-b007d972ea7b" (UID: "573bf003-8d0e-432d-8729-b007d972ea7b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 15:32:07 crc kubenswrapper[5050]: I1211 15:32:07.151450 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/573bf003-8d0e-432d-8729-b007d972ea7b-kube-api-access-bxxwq" (OuterVolumeSpecName: "kube-api-access-bxxwq") pod "573bf003-8d0e-432d-8729-b007d972ea7b" (UID: "573bf003-8d0e-432d-8729-b007d972ea7b"). InnerVolumeSpecName "kube-api-access-bxxwq". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 15:32:07 crc kubenswrapper[5050]: I1211 15:32:07.210038 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/573bf003-8d0e-432d-8729-b007d972ea7b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "573bf003-8d0e-432d-8729-b007d972ea7b" (UID: "573bf003-8d0e-432d-8729-b007d972ea7b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 15:32:07 crc kubenswrapper[5050]: I1211 15:32:07.246681 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bxxwq\" (UniqueName: \"kubernetes.io/projected/573bf003-8d0e-432d-8729-b007d972ea7b-kube-api-access-bxxwq\") on node \"crc\" DevicePath \"\"" Dec 11 15:32:07 crc kubenswrapper[5050]: I1211 15:32:07.246717 5050 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/573bf003-8d0e-432d-8729-b007d972ea7b-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 15:32:07 crc kubenswrapper[5050]: I1211 15:32:07.246726 5050 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/573bf003-8d0e-432d-8729-b007d972ea7b-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 15:32:07 crc kubenswrapper[5050]: I1211 15:32:07.471583 5050 generic.go:334] "Generic (PLEG): container finished" podID="573bf003-8d0e-432d-8729-b007d972ea7b" containerID="a8b4d2b93d8f8f671064bbe3501bf8f6e77eaa2c8520ade9f059b9f462d05ca1" exitCode=0 Dec 11 15:32:07 crc kubenswrapper[5050]: I1211 15:32:07.471645 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n72qg" event={"ID":"573bf003-8d0e-432d-8729-b007d972ea7b","Type":"ContainerDied","Data":"a8b4d2b93d8f8f671064bbe3501bf8f6e77eaa2c8520ade9f059b9f462d05ca1"} Dec 11 15:32:07 crc kubenswrapper[5050]: I1211 15:32:07.471677 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n72qg" event={"ID":"573bf003-8d0e-432d-8729-b007d972ea7b","Type":"ContainerDied","Data":"6d287e7c69eb7914a2ec84044908d32654a799d65c02a0621cad12e1ed1b8d57"} Dec 11 15:32:07 crc kubenswrapper[5050]: I1211 15:32:07.471692 5050 scope.go:117] "RemoveContainer" containerID="a8b4d2b93d8f8f671064bbe3501bf8f6e77eaa2c8520ade9f059b9f462d05ca1" Dec 11 15:32:07 crc kubenswrapper[5050]: I1211 15:32:07.471882 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-n72qg" Dec 11 15:32:07 crc kubenswrapper[5050]: I1211 15:32:07.495805 5050 scope.go:117] "RemoveContainer" containerID="8a7acb3e92523020874601ab04d189f24eb76757d95dbc3bd6ae594154cc3f12" Dec 11 15:32:07 crc kubenswrapper[5050]: I1211 15:32:07.530341 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-n72qg"] Dec 11 15:32:07 crc kubenswrapper[5050]: I1211 15:32:07.540853 5050 scope.go:117] "RemoveContainer" containerID="32a7793c6c2569549adf3e8a2dde32dd2d9293ccdde68140aa8881cec389b874" Dec 11 15:32:07 crc kubenswrapper[5050]: I1211 15:32:07.567090 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="50d84a10-51b0-4e8e-a413-727685826a4d" path="/var/lib/kubelet/pods/50d84a10-51b0-4e8e-a413-727685826a4d/volumes" Dec 11 15:32:07 crc kubenswrapper[5050]: I1211 15:32:07.567950 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-n72qg"] Dec 11 15:32:07 crc kubenswrapper[5050]: I1211 15:32:07.591167 5050 scope.go:117] "RemoveContainer" containerID="a8b4d2b93d8f8f671064bbe3501bf8f6e77eaa2c8520ade9f059b9f462d05ca1" Dec 11 15:32:07 crc kubenswrapper[5050]: E1211 15:32:07.598482 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a8b4d2b93d8f8f671064bbe3501bf8f6e77eaa2c8520ade9f059b9f462d05ca1\": container with ID starting with a8b4d2b93d8f8f671064bbe3501bf8f6e77eaa2c8520ade9f059b9f462d05ca1 not found: ID does not exist" containerID="a8b4d2b93d8f8f671064bbe3501bf8f6e77eaa2c8520ade9f059b9f462d05ca1" Dec 11 15:32:07 crc kubenswrapper[5050]: I1211 15:32:07.598523 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a8b4d2b93d8f8f671064bbe3501bf8f6e77eaa2c8520ade9f059b9f462d05ca1"} err="failed to get container status \"a8b4d2b93d8f8f671064bbe3501bf8f6e77eaa2c8520ade9f059b9f462d05ca1\": rpc error: code = NotFound desc = could not find container \"a8b4d2b93d8f8f671064bbe3501bf8f6e77eaa2c8520ade9f059b9f462d05ca1\": container with ID starting with a8b4d2b93d8f8f671064bbe3501bf8f6e77eaa2c8520ade9f059b9f462d05ca1 not found: ID does not exist" Dec 11 15:32:07 crc kubenswrapper[5050]: I1211 15:32:07.598555 5050 scope.go:117] "RemoveContainer" containerID="8a7acb3e92523020874601ab04d189f24eb76757d95dbc3bd6ae594154cc3f12" Dec 11 15:32:07 crc kubenswrapper[5050]: E1211 15:32:07.598975 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8a7acb3e92523020874601ab04d189f24eb76757d95dbc3bd6ae594154cc3f12\": container with ID starting with 8a7acb3e92523020874601ab04d189f24eb76757d95dbc3bd6ae594154cc3f12 not found: ID does not exist" containerID="8a7acb3e92523020874601ab04d189f24eb76757d95dbc3bd6ae594154cc3f12" Dec 11 15:32:07 crc kubenswrapper[5050]: I1211 15:32:07.599051 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8a7acb3e92523020874601ab04d189f24eb76757d95dbc3bd6ae594154cc3f12"} err="failed to get container status \"8a7acb3e92523020874601ab04d189f24eb76757d95dbc3bd6ae594154cc3f12\": rpc error: code = NotFound desc = could not find container \"8a7acb3e92523020874601ab04d189f24eb76757d95dbc3bd6ae594154cc3f12\": container with ID starting with 8a7acb3e92523020874601ab04d189f24eb76757d95dbc3bd6ae594154cc3f12 not found: ID does not exist" Dec 11 15:32:07 crc kubenswrapper[5050]: I1211 
15:32:07.599086 5050 scope.go:117] "RemoveContainer" containerID="32a7793c6c2569549adf3e8a2dde32dd2d9293ccdde68140aa8881cec389b874" Dec 11 15:32:07 crc kubenswrapper[5050]: E1211 15:32:07.599687 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"32a7793c6c2569549adf3e8a2dde32dd2d9293ccdde68140aa8881cec389b874\": container with ID starting with 32a7793c6c2569549adf3e8a2dde32dd2d9293ccdde68140aa8881cec389b874 not found: ID does not exist" containerID="32a7793c6c2569549adf3e8a2dde32dd2d9293ccdde68140aa8881cec389b874" Dec 11 15:32:07 crc kubenswrapper[5050]: I1211 15:32:07.599715 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32a7793c6c2569549adf3e8a2dde32dd2d9293ccdde68140aa8881cec389b874"} err="failed to get container status \"32a7793c6c2569549adf3e8a2dde32dd2d9293ccdde68140aa8881cec389b874\": rpc error: code = NotFound desc = could not find container \"32a7793c6c2569549adf3e8a2dde32dd2d9293ccdde68140aa8881cec389b874\": container with ID starting with 32a7793c6c2569549adf3e8a2dde32dd2d9293ccdde68140aa8881cec389b874 not found: ID does not exist" Dec 11 15:32:07 crc kubenswrapper[5050]: I1211 15:32:07.717322 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Dec 11 15:32:07 crc kubenswrapper[5050]: I1211 15:32:07.717605 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstackclient" podUID="9dab0594-84c1-48fa-b0f9-a010ae461c08" containerName="openstackclient" containerID="cri-o://c4217da09ab7b280b39f45e31215e32e283a8504c6d9bb8d8c904940e92c9f87" gracePeriod=2 Dec 11 15:32:07 crc kubenswrapper[5050]: I1211 15:32:07.729523 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Dec 11 15:32:07 crc kubenswrapper[5050]: I1211 15:32:07.771867 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Dec 11 15:32:07 crc kubenswrapper[5050]: E1211 15:32:07.772465 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="573bf003-8d0e-432d-8729-b007d972ea7b" containerName="registry-server" Dec 11 15:32:07 crc kubenswrapper[5050]: I1211 15:32:07.772485 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="573bf003-8d0e-432d-8729-b007d972ea7b" containerName="registry-server" Dec 11 15:32:07 crc kubenswrapper[5050]: E1211 15:32:07.772500 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="573bf003-8d0e-432d-8729-b007d972ea7b" containerName="extract-content" Dec 11 15:32:07 crc kubenswrapper[5050]: I1211 15:32:07.772508 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="573bf003-8d0e-432d-8729-b007d972ea7b" containerName="extract-content" Dec 11 15:32:07 crc kubenswrapper[5050]: E1211 15:32:07.772544 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="573bf003-8d0e-432d-8729-b007d972ea7b" containerName="extract-utilities" Dec 11 15:32:07 crc kubenswrapper[5050]: I1211 15:32:07.772550 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="573bf003-8d0e-432d-8729-b007d972ea7b" containerName="extract-utilities" Dec 11 15:32:07 crc kubenswrapper[5050]: E1211 15:32:07.772558 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9dab0594-84c1-48fa-b0f9-a010ae461c08" containerName="openstackclient" Dec 11 15:32:07 crc kubenswrapper[5050]: I1211 15:32:07.772563 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="9dab0594-84c1-48fa-b0f9-a010ae461c08" 
containerName="openstackclient" Dec 11 15:32:07 crc kubenswrapper[5050]: I1211 15:32:07.772799 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="9dab0594-84c1-48fa-b0f9-a010ae461c08" containerName="openstackclient" Dec 11 15:32:07 crc kubenswrapper[5050]: I1211 15:32:07.772816 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="573bf003-8d0e-432d-8729-b007d972ea7b" containerName="registry-server" Dec 11 15:32:07 crc kubenswrapper[5050]: I1211 15:32:07.773487 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Dec 11 15:32:07 crc kubenswrapper[5050]: I1211 15:32:07.789426 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Dec 11 15:32:07 crc kubenswrapper[5050]: I1211 15:32:07.801684 5050 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="9dab0594-84c1-48fa-b0f9-a010ae461c08" podUID="d90cb39c-1181-4407-b380-fc88daaf0cc2" Dec 11 15:32:07 crc kubenswrapper[5050]: I1211 15:32:07.864436 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/d90cb39c-1181-4407-b380-fc88daaf0cc2-openstack-config\") pod \"openstackclient\" (UID: \"d90cb39c-1181-4407-b380-fc88daaf0cc2\") " pod="openstack/openstackclient" Dec 11 15:32:07 crc kubenswrapper[5050]: I1211 15:32:07.864493 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/d90cb39c-1181-4407-b380-fc88daaf0cc2-openstack-config-secret\") pod \"openstackclient\" (UID: \"d90cb39c-1181-4407-b380-fc88daaf0cc2\") " pod="openstack/openstackclient" Dec 11 15:32:07 crc kubenswrapper[5050]: I1211 15:32:07.864537 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86pr5\" (UniqueName: \"kubernetes.io/projected/d90cb39c-1181-4407-b380-fc88daaf0cc2-kube-api-access-86pr5\") pod \"openstackclient\" (UID: \"d90cb39c-1181-4407-b380-fc88daaf0cc2\") " pod="openstack/openstackclient" Dec 11 15:32:07 crc kubenswrapper[5050]: I1211 15:32:07.965976 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/d90cb39c-1181-4407-b380-fc88daaf0cc2-openstack-config-secret\") pod \"openstackclient\" (UID: \"d90cb39c-1181-4407-b380-fc88daaf0cc2\") " pod="openstack/openstackclient" Dec 11 15:32:07 crc kubenswrapper[5050]: I1211 15:32:07.966065 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-86pr5\" (UniqueName: \"kubernetes.io/projected/d90cb39c-1181-4407-b380-fc88daaf0cc2-kube-api-access-86pr5\") pod \"openstackclient\" (UID: \"d90cb39c-1181-4407-b380-fc88daaf0cc2\") " pod="openstack/openstackclient" Dec 11 15:32:07 crc kubenswrapper[5050]: I1211 15:32:07.966415 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/d90cb39c-1181-4407-b380-fc88daaf0cc2-openstack-config\") pod \"openstackclient\" (UID: \"d90cb39c-1181-4407-b380-fc88daaf0cc2\") " pod="openstack/openstackclient" Dec 11 15:32:07 crc kubenswrapper[5050]: I1211 15:32:07.967256 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: 
\"kubernetes.io/configmap/d90cb39c-1181-4407-b380-fc88daaf0cc2-openstack-config\") pod \"openstackclient\" (UID: \"d90cb39c-1181-4407-b380-fc88daaf0cc2\") " pod="openstack/openstackclient" Dec 11 15:32:07 crc kubenswrapper[5050]: I1211 15:32:07.970811 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/d90cb39c-1181-4407-b380-fc88daaf0cc2-openstack-config-secret\") pod \"openstackclient\" (UID: \"d90cb39c-1181-4407-b380-fc88daaf0cc2\") " pod="openstack/openstackclient" Dec 11 15:32:08 crc kubenswrapper[5050]: I1211 15:32:08.021445 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Dec 11 15:32:08 crc kubenswrapper[5050]: I1211 15:32:08.022918 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-86pr5\" (UniqueName: \"kubernetes.io/projected/d90cb39c-1181-4407-b380-fc88daaf0cc2-kube-api-access-86pr5\") pod \"openstackclient\" (UID: \"d90cb39c-1181-4407-b380-fc88daaf0cc2\") " pod="openstack/openstackclient" Dec 11 15:32:08 crc kubenswrapper[5050]: I1211 15:32:08.066522 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 11 15:32:08 crc kubenswrapper[5050]: I1211 15:32:08.066652 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 11 15:32:08 crc kubenswrapper[5050]: I1211 15:32:08.082381 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-nl629" Dec 11 15:32:08 crc kubenswrapper[5050]: I1211 15:32:08.125605 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Dec 11 15:32:08 crc kubenswrapper[5050]: I1211 15:32:08.171907 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhmv9\" (UniqueName: \"kubernetes.io/projected/71218193-88fc-4811-bf04-33a4f4a87898-kube-api-access-jhmv9\") pod \"kube-state-metrics-0\" (UID: \"71218193-88fc-4811-bf04-33a4f4a87898\") " pod="openstack/kube-state-metrics-0" Dec 11 15:32:08 crc kubenswrapper[5050]: I1211 15:32:08.274294 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jhmv9\" (UniqueName: \"kubernetes.io/projected/71218193-88fc-4811-bf04-33a4f4a87898-kube-api-access-jhmv9\") pod \"kube-state-metrics-0\" (UID: \"71218193-88fc-4811-bf04-33a4f4a87898\") " pod="openstack/kube-state-metrics-0" Dec 11 15:32:08 crc kubenswrapper[5050]: I1211 15:32:08.305890 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jhmv9\" (UniqueName: \"kubernetes.io/projected/71218193-88fc-4811-bf04-33a4f4a87898-kube-api-access-jhmv9\") pod \"kube-state-metrics-0\" (UID: \"71218193-88fc-4811-bf04-33a4f4a87898\") " pod="openstack/kube-state-metrics-0" Dec 11 15:32:08 crc kubenswrapper[5050]: I1211 15:32:08.467378 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 11 15:32:08 crc kubenswrapper[5050]: I1211 15:32:08.907607 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/alertmanager-metric-storage-0"] Dec 11 15:32:08 crc kubenswrapper[5050]: I1211 15:32:08.909920 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/alertmanager-metric-storage-0" Dec 11 15:32:08 crc kubenswrapper[5050]: I1211 15:32:08.919659 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-generated" Dec 11 15:32:08 crc kubenswrapper[5050]: I1211 15:32:08.934434 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-web-config" Dec 11 15:32:08 crc kubenswrapper[5050]: I1211 15:32:08.936752 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-alertmanager-dockercfg-c2fxf" Dec 11 15:32:08 crc kubenswrapper[5050]: I1211 15:32:08.936953 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-tls-assets-0" Dec 11 15:32:08 crc kubenswrapper[5050]: I1211 15:32:08.937340 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-cluster-tls-config" Dec 11 15:32:08 crc kubenswrapper[5050]: I1211 15:32:08.952690 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/alertmanager-metric-storage-0"] Dec 11 15:32:09 crc kubenswrapper[5050]: I1211 15:32:09.003255 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmcxb\" (UniqueName: \"kubernetes.io/projected/63d4aef8-b968-4728-b048-c8a3879e18c6-kube-api-access-wmcxb\") pod \"alertmanager-metric-storage-0\" (UID: \"63d4aef8-b968-4728-b048-c8a3879e18c6\") " pod="openstack/alertmanager-metric-storage-0" Dec 11 15:32:09 crc kubenswrapper[5050]: I1211 15:32:09.003358 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/63d4aef8-b968-4728-b048-c8a3879e18c6-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"63d4aef8-b968-4728-b048-c8a3879e18c6\") " pod="openstack/alertmanager-metric-storage-0" Dec 11 15:32:09 crc kubenswrapper[5050]: I1211 15:32:09.003402 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/63d4aef8-b968-4728-b048-c8a3879e18c6-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"63d4aef8-b968-4728-b048-c8a3879e18c6\") " pod="openstack/alertmanager-metric-storage-0" Dec 11 15:32:09 crc kubenswrapper[5050]: I1211 15:32:09.003476 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/63d4aef8-b968-4728-b048-c8a3879e18c6-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"63d4aef8-b968-4728-b048-c8a3879e18c6\") " pod="openstack/alertmanager-metric-storage-0" Dec 11 15:32:09 crc kubenswrapper[5050]: I1211 15:32:09.003493 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/63d4aef8-b968-4728-b048-c8a3879e18c6-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"63d4aef8-b968-4728-b048-c8a3879e18c6\") " pod="openstack/alertmanager-metric-storage-0" Dec 11 15:32:09 crc kubenswrapper[5050]: I1211 15:32:09.003507 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/63d4aef8-b968-4728-b048-c8a3879e18c6-config-out\") pod \"alertmanager-metric-storage-0\" (UID: 
\"63d4aef8-b968-4728-b048-c8a3879e18c6\") " pod="openstack/alertmanager-metric-storage-0" Dec 11 15:32:09 crc kubenswrapper[5050]: I1211 15:32:09.003527 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/63d4aef8-b968-4728-b048-c8a3879e18c6-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"63d4aef8-b968-4728-b048-c8a3879e18c6\") " pod="openstack/alertmanager-metric-storage-0" Dec 11 15:32:09 crc kubenswrapper[5050]: I1211 15:32:09.039497 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Dec 11 15:32:09 crc kubenswrapper[5050]: I1211 15:32:09.110138 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/63d4aef8-b968-4728-b048-c8a3879e18c6-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"63d4aef8-b968-4728-b048-c8a3879e18c6\") " pod="openstack/alertmanager-metric-storage-0" Dec 11 15:32:09 crc kubenswrapper[5050]: I1211 15:32:09.110400 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/63d4aef8-b968-4728-b048-c8a3879e18c6-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"63d4aef8-b968-4728-b048-c8a3879e18c6\") " pod="openstack/alertmanager-metric-storage-0" Dec 11 15:32:09 crc kubenswrapper[5050]: I1211 15:32:09.110419 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/63d4aef8-b968-4728-b048-c8a3879e18c6-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"63d4aef8-b968-4728-b048-c8a3879e18c6\") " pod="openstack/alertmanager-metric-storage-0" Dec 11 15:32:09 crc kubenswrapper[5050]: I1211 15:32:09.110437 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/63d4aef8-b968-4728-b048-c8a3879e18c6-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"63d4aef8-b968-4728-b048-c8a3879e18c6\") " pod="openstack/alertmanager-metric-storage-0" Dec 11 15:32:09 crc kubenswrapper[5050]: I1211 15:32:09.110513 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wmcxb\" (UniqueName: \"kubernetes.io/projected/63d4aef8-b968-4728-b048-c8a3879e18c6-kube-api-access-wmcxb\") pod \"alertmanager-metric-storage-0\" (UID: \"63d4aef8-b968-4728-b048-c8a3879e18c6\") " pod="openstack/alertmanager-metric-storage-0" Dec 11 15:32:09 crc kubenswrapper[5050]: I1211 15:32:09.110592 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/63d4aef8-b968-4728-b048-c8a3879e18c6-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"63d4aef8-b968-4728-b048-c8a3879e18c6\") " pod="openstack/alertmanager-metric-storage-0" Dec 11 15:32:09 crc kubenswrapper[5050]: I1211 15:32:09.110627 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/63d4aef8-b968-4728-b048-c8a3879e18c6-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"63d4aef8-b968-4728-b048-c8a3879e18c6\") " pod="openstack/alertmanager-metric-storage-0" Dec 11 15:32:09 crc kubenswrapper[5050]: I1211 15:32:09.115976 5050 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/63d4aef8-b968-4728-b048-c8a3879e18c6-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"63d4aef8-b968-4728-b048-c8a3879e18c6\") " pod="openstack/alertmanager-metric-storage-0" Dec 11 15:32:09 crc kubenswrapper[5050]: I1211 15:32:09.130347 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/63d4aef8-b968-4728-b048-c8a3879e18c6-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"63d4aef8-b968-4728-b048-c8a3879e18c6\") " pod="openstack/alertmanager-metric-storage-0" Dec 11 15:32:09 crc kubenswrapper[5050]: I1211 15:32:09.135483 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/63d4aef8-b968-4728-b048-c8a3879e18c6-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"63d4aef8-b968-4728-b048-c8a3879e18c6\") " pod="openstack/alertmanager-metric-storage-0" Dec 11 15:32:09 crc kubenswrapper[5050]: I1211 15:32:09.136186 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/63d4aef8-b968-4728-b048-c8a3879e18c6-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"63d4aef8-b968-4728-b048-c8a3879e18c6\") " pod="openstack/alertmanager-metric-storage-0" Dec 11 15:32:09 crc kubenswrapper[5050]: I1211 15:32:09.140840 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/63d4aef8-b968-4728-b048-c8a3879e18c6-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"63d4aef8-b968-4728-b048-c8a3879e18c6\") " pod="openstack/alertmanager-metric-storage-0" Dec 11 15:32:09 crc kubenswrapper[5050]: I1211 15:32:09.141137 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/63d4aef8-b968-4728-b048-c8a3879e18c6-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"63d4aef8-b968-4728-b048-c8a3879e18c6\") " pod="openstack/alertmanager-metric-storage-0" Dec 11 15:32:09 crc kubenswrapper[5050]: I1211 15:32:09.162751 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wmcxb\" (UniqueName: \"kubernetes.io/projected/63d4aef8-b968-4728-b048-c8a3879e18c6-kube-api-access-wmcxb\") pod \"alertmanager-metric-storage-0\" (UID: \"63d4aef8-b968-4728-b048-c8a3879e18c6\") " pod="openstack/alertmanager-metric-storage-0" Dec 11 15:32:09 crc kubenswrapper[5050]: I1211 15:32:09.270475 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/alertmanager-metric-storage-0" Dec 11 15:32:09 crc kubenswrapper[5050]: I1211 15:32:09.386545 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Dec 11 15:32:09 crc kubenswrapper[5050]: I1211 15:32:09.389897 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Dec 11 15:32:09 crc kubenswrapper[5050]: I1211 15:32:09.395501 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Dec 11 15:32:09 crc kubenswrapper[5050]: I1211 15:32:09.395708 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Dec 11 15:32:09 crc kubenswrapper[5050]: I1211 15:32:09.395823 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Dec 11 15:32:09 crc kubenswrapper[5050]: I1211 15:32:09.395966 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-4drvb" Dec 11 15:32:09 crc kubenswrapper[5050]: I1211 15:32:09.396128 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Dec 11 15:32:09 crc kubenswrapper[5050]: I1211 15:32:09.396292 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Dec 11 15:32:09 crc kubenswrapper[5050]: I1211 15:32:09.433638 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Dec 11 15:32:09 crc kubenswrapper[5050]: I1211 15:32:09.521140 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/7fb00b03-fe6e-4c66-bd36-adf9443871a8-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"7fb00b03-fe6e-4c66-bd36-adf9443871a8\") " pod="openstack/prometheus-metric-storage-0" Dec 11 15:32:09 crc kubenswrapper[5050]: I1211 15:32:09.521459 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/7fb00b03-fe6e-4c66-bd36-adf9443871a8-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"7fb00b03-fe6e-4c66-bd36-adf9443871a8\") " pod="openstack/prometheus-metric-storage-0" Dec 11 15:32:09 crc kubenswrapper[5050]: I1211 15:32:09.521528 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/7fb00b03-fe6e-4c66-bd36-adf9443871a8-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"7fb00b03-fe6e-4c66-bd36-adf9443871a8\") " pod="openstack/prometheus-metric-storage-0" Dec 11 15:32:09 crc kubenswrapper[5050]: I1211 15:32:09.521583 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/7fb00b03-fe6e-4c66-bd36-adf9443871a8-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"7fb00b03-fe6e-4c66-bd36-adf9443871a8\") " pod="openstack/prometheus-metric-storage-0" Dec 11 15:32:09 crc kubenswrapper[5050]: I1211 15:32:09.521609 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/7fb00b03-fe6e-4c66-bd36-adf9443871a8-config\") pod \"prometheus-metric-storage-0\" (UID: \"7fb00b03-fe6e-4c66-bd36-adf9443871a8\") " pod="openstack/prometheus-metric-storage-0" Dec 11 15:32:09 crc kubenswrapper[5050]: I1211 15:32:09.521631 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/7fb00b03-fe6e-4c66-bd36-adf9443871a8-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"7fb00b03-fe6e-4c66-bd36-adf9443871a8\") " pod="openstack/prometheus-metric-storage-0" Dec 11 15:32:09 crc kubenswrapper[5050]: I1211 15:32:09.521725 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-170bfb36-39b1-4ff5-90ce-120a0cfe0952\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-170bfb36-39b1-4ff5-90ce-120a0cfe0952\") pod \"prometheus-metric-storage-0\" (UID: \"7fb00b03-fe6e-4c66-bd36-adf9443871a8\") " pod="openstack/prometheus-metric-storage-0" Dec 11 15:32:09 crc kubenswrapper[5050]: I1211 15:32:09.521756 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8x7ww\" (UniqueName: \"kubernetes.io/projected/7fb00b03-fe6e-4c66-bd36-adf9443871a8-kube-api-access-8x7ww\") pod \"prometheus-metric-storage-0\" (UID: \"7fb00b03-fe6e-4c66-bd36-adf9443871a8\") " pod="openstack/prometheus-metric-storage-0" Dec 11 15:32:09 crc kubenswrapper[5050]: I1211 15:32:09.531552 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 11 15:32:09 crc kubenswrapper[5050]: I1211 15:32:09.619026 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="573bf003-8d0e-432d-8729-b007d972ea7b" path="/var/lib/kubelet/pods/573bf003-8d0e-432d-8729-b007d972ea7b/volumes" Dec 11 15:32:09 crc kubenswrapper[5050]: I1211 15:32:09.620081 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"d90cb39c-1181-4407-b380-fc88daaf0cc2","Type":"ContainerStarted","Data":"c53f85740ffd48a69dc50804292b85aaf8763314672182a6dcb94f2b58c04940"} Dec 11 15:32:09 crc kubenswrapper[5050]: I1211 15:32:09.628821 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-170bfb36-39b1-4ff5-90ce-120a0cfe0952\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-170bfb36-39b1-4ff5-90ce-120a0cfe0952\") pod \"prometheus-metric-storage-0\" (UID: \"7fb00b03-fe6e-4c66-bd36-adf9443871a8\") " pod="openstack/prometheus-metric-storage-0" Dec 11 15:32:09 crc kubenswrapper[5050]: I1211 15:32:09.629097 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8x7ww\" (UniqueName: \"kubernetes.io/projected/7fb00b03-fe6e-4c66-bd36-adf9443871a8-kube-api-access-8x7ww\") pod \"prometheus-metric-storage-0\" (UID: \"7fb00b03-fe6e-4c66-bd36-adf9443871a8\") " pod="openstack/prometheus-metric-storage-0" Dec 11 15:32:09 crc kubenswrapper[5050]: I1211 15:32:09.629334 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/7fb00b03-fe6e-4c66-bd36-adf9443871a8-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"7fb00b03-fe6e-4c66-bd36-adf9443871a8\") " pod="openstack/prometheus-metric-storage-0" Dec 11 15:32:09 crc kubenswrapper[5050]: I1211 15:32:09.629458 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/7fb00b03-fe6e-4c66-bd36-adf9443871a8-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"7fb00b03-fe6e-4c66-bd36-adf9443871a8\") " pod="openstack/prometheus-metric-storage-0" Dec 11 15:32:09 crc kubenswrapper[5050]: I1211 
15:32:09.629639 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/7fb00b03-fe6e-4c66-bd36-adf9443871a8-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"7fb00b03-fe6e-4c66-bd36-adf9443871a8\") " pod="openstack/prometheus-metric-storage-0" Dec 11 15:32:09 crc kubenswrapper[5050]: I1211 15:32:09.629788 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/7fb00b03-fe6e-4c66-bd36-adf9443871a8-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"7fb00b03-fe6e-4c66-bd36-adf9443871a8\") " pod="openstack/prometheus-metric-storage-0" Dec 11 15:32:09 crc kubenswrapper[5050]: I1211 15:32:09.629889 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/7fb00b03-fe6e-4c66-bd36-adf9443871a8-config\") pod \"prometheus-metric-storage-0\" (UID: \"7fb00b03-fe6e-4c66-bd36-adf9443871a8\") " pod="openstack/prometheus-metric-storage-0" Dec 11 15:32:09 crc kubenswrapper[5050]: I1211 15:32:09.629999 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/7fb00b03-fe6e-4c66-bd36-adf9443871a8-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"7fb00b03-fe6e-4c66-bd36-adf9443871a8\") " pod="openstack/prometheus-metric-storage-0" Dec 11 15:32:09 crc kubenswrapper[5050]: I1211 15:32:09.635994 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/7fb00b03-fe6e-4c66-bd36-adf9443871a8-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"7fb00b03-fe6e-4c66-bd36-adf9443871a8\") " pod="openstack/prometheus-metric-storage-0" Dec 11 15:32:09 crc kubenswrapper[5050]: I1211 15:32:09.638212 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/7fb00b03-fe6e-4c66-bd36-adf9443871a8-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"7fb00b03-fe6e-4c66-bd36-adf9443871a8\") " pod="openstack/prometheus-metric-storage-0" Dec 11 15:32:09 crc kubenswrapper[5050]: I1211 15:32:09.639157 5050 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
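(Editor's note on the "Skipping MountDevice" entry above: the kubelet's CSI attacher asks the node plugin for its capabilities before staging a volume; because the kubevirt.io.hostpath-provisioner driver does not advertise STAGE_UNSTAGE_VOLUME, the kubelet skips NodeStageVolume ("MountDevice") and goes straight to NodePublishVolume ("SetUp"), which is why the PVC mount below succeeds without a staging step. A minimal Go sketch of querying that capability over a CSI node socket follows; the socket path, timeout, and output strings are illustrative assumptions, not taken from this log.)

// capcheck.go: ask a CSI node plugin whether it supports STAGE_UNSTAGE_VOLUME.
// Minimal sketch; the endpoint below is an assumed example path, not from the log.
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	csi "github.com/container-storage-interface/spec/lib/go/csi"
	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
)

func main() {
	// Hypothetical node-plugin socket; node plugins typically register under
	// /var/lib/kubelet/plugins/<driver-name>/csi.sock on the host.
	const endpoint = "unix:///var/lib/kubelet/plugins/kubevirt.io.hostpath-provisioner/csi.sock"

	conn, err := grpc.Dial(endpoint, grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatalf("dial CSI endpoint: %v", err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	// NodeGetCapabilities is the RPC the kubelet consults to decide whether a
	// staging call (NodeStageVolume / "MountDevice") is needed before publish.
	resp, err := csi.NewNodeClient(conn).NodeGetCapabilities(ctx, &csi.NodeGetCapabilitiesRequest{})
	if err != nil {
		log.Fatalf("NodeGetCapabilities: %v", err)
	}

	supportsStaging := false
	for _, c := range resp.GetCapabilities() {
		if c.GetRpc().GetType() == csi.NodeServiceCapability_RPC_STAGE_UNSTAGE_VOLUME {
			supportsStaging = true
		}
	}
	if supportsStaging {
		fmt.Println("driver advertises STAGE_UNSTAGE_VOLUME: NodeStageVolume runs before publish")
	} else {
		fmt.Println("capability not set: staging is skipped and the volume is published directly")
	}
}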
Dec 11 15:32:09 crc kubenswrapper[5050]: I1211 15:32:09.639246 5050 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-170bfb36-39b1-4ff5-90ce-120a0cfe0952\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-170bfb36-39b1-4ff5-90ce-120a0cfe0952\") pod \"prometheus-metric-storage-0\" (UID: \"7fb00b03-fe6e-4c66-bd36-adf9443871a8\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/02bf95617fb6a47cd1827bb56fbf3877084337f10c060a275f03037ad807be5d/globalmount\"" pod="openstack/prometheus-metric-storage-0" Dec 11 15:32:09 crc kubenswrapper[5050]: I1211 15:32:09.641821 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/7fb00b03-fe6e-4c66-bd36-adf9443871a8-config\") pod \"prometheus-metric-storage-0\" (UID: \"7fb00b03-fe6e-4c66-bd36-adf9443871a8\") " pod="openstack/prometheus-metric-storage-0" Dec 11 15:32:09 crc kubenswrapper[5050]: I1211 15:32:09.660435 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/7fb00b03-fe6e-4c66-bd36-adf9443871a8-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"7fb00b03-fe6e-4c66-bd36-adf9443871a8\") " pod="openstack/prometheus-metric-storage-0" Dec 11 15:32:09 crc kubenswrapper[5050]: I1211 15:32:09.661321 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/7fb00b03-fe6e-4c66-bd36-adf9443871a8-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"7fb00b03-fe6e-4c66-bd36-adf9443871a8\") " pod="openstack/prometheus-metric-storage-0" Dec 11 15:32:09 crc kubenswrapper[5050]: I1211 15:32:09.663027 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/7fb00b03-fe6e-4c66-bd36-adf9443871a8-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"7fb00b03-fe6e-4c66-bd36-adf9443871a8\") " pod="openstack/prometheus-metric-storage-0" Dec 11 15:32:09 crc kubenswrapper[5050]: I1211 15:32:09.665834 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8x7ww\" (UniqueName: \"kubernetes.io/projected/7fb00b03-fe6e-4c66-bd36-adf9443871a8-kube-api-access-8x7ww\") pod \"prometheus-metric-storage-0\" (UID: \"7fb00b03-fe6e-4c66-bd36-adf9443871a8\") " pod="openstack/prometheus-metric-storage-0" Dec 11 15:32:09 crc kubenswrapper[5050]: I1211 15:32:09.752446 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-170bfb36-39b1-4ff5-90ce-120a0cfe0952\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-170bfb36-39b1-4ff5-90ce-120a0cfe0952\") pod \"prometheus-metric-storage-0\" (UID: \"7fb00b03-fe6e-4c66-bd36-adf9443871a8\") " pod="openstack/prometheus-metric-storage-0" Dec 11 15:32:09 crc kubenswrapper[5050]: I1211 15:32:09.761796 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Dec 11 15:32:10 crc kubenswrapper[5050]: I1211 15:32:10.228161 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/alertmanager-metric-storage-0"] Dec 11 15:32:10 crc kubenswrapper[5050]: W1211 15:32:10.287809 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod63d4aef8_b968_4728_b048_c8a3879e18c6.slice/crio-ee435f4e51178a5b87f549213e2d2b1dd69382164f7ead76ee308765d04fbef3 WatchSource:0}: Error finding container ee435f4e51178a5b87f549213e2d2b1dd69382164f7ead76ee308765d04fbef3: Status 404 returned error can't find the container with id ee435f4e51178a5b87f549213e2d2b1dd69382164f7ead76ee308765d04fbef3 Dec 11 15:32:10 crc kubenswrapper[5050]: I1211 15:32:10.294241 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Dec 11 15:32:10 crc kubenswrapper[5050]: I1211 15:32:10.299913 5050 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="9dab0594-84c1-48fa-b0f9-a010ae461c08" podUID="d90cb39c-1181-4407-b380-fc88daaf0cc2" Dec 11 15:32:10 crc kubenswrapper[5050]: I1211 15:32:10.345518 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/9dab0594-84c1-48fa-b0f9-a010ae461c08-openstack-config\") pod \"9dab0594-84c1-48fa-b0f9-a010ae461c08\" (UID: \"9dab0594-84c1-48fa-b0f9-a010ae461c08\") " Dec 11 15:32:10 crc kubenswrapper[5050]: I1211 15:32:10.345579 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/9dab0594-84c1-48fa-b0f9-a010ae461c08-openstack-config-secret\") pod \"9dab0594-84c1-48fa-b0f9-a010ae461c08\" (UID: \"9dab0594-84c1-48fa-b0f9-a010ae461c08\") " Dec 11 15:32:10 crc kubenswrapper[5050]: I1211 15:32:10.345621 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-75lr8\" (UniqueName: \"kubernetes.io/projected/9dab0594-84c1-48fa-b0f9-a010ae461c08-kube-api-access-75lr8\") pod \"9dab0594-84c1-48fa-b0f9-a010ae461c08\" (UID: \"9dab0594-84c1-48fa-b0f9-a010ae461c08\") " Dec 11 15:32:10 crc kubenswrapper[5050]: I1211 15:32:10.351195 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9dab0594-84c1-48fa-b0f9-a010ae461c08-kube-api-access-75lr8" (OuterVolumeSpecName: "kube-api-access-75lr8") pod "9dab0594-84c1-48fa-b0f9-a010ae461c08" (UID: "9dab0594-84c1-48fa-b0f9-a010ae461c08"). InnerVolumeSpecName "kube-api-access-75lr8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 15:32:10 crc kubenswrapper[5050]: I1211 15:32:10.389116 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9dab0594-84c1-48fa-b0f9-a010ae461c08-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "9dab0594-84c1-48fa-b0f9-a010ae461c08" (UID: "9dab0594-84c1-48fa-b0f9-a010ae461c08"). InnerVolumeSpecName "openstack-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 15:32:10 crc kubenswrapper[5050]: I1211 15:32:10.419738 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9dab0594-84c1-48fa-b0f9-a010ae461c08-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "9dab0594-84c1-48fa-b0f9-a010ae461c08" (UID: "9dab0594-84c1-48fa-b0f9-a010ae461c08"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 15:32:10 crc kubenswrapper[5050]: I1211 15:32:10.452128 5050 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/9dab0594-84c1-48fa-b0f9-a010ae461c08-openstack-config\") on node \"crc\" DevicePath \"\"" Dec 11 15:32:10 crc kubenswrapper[5050]: I1211 15:32:10.452165 5050 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/9dab0594-84c1-48fa-b0f9-a010ae461c08-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Dec 11 15:32:10 crc kubenswrapper[5050]: I1211 15:32:10.452177 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-75lr8\" (UniqueName: \"kubernetes.io/projected/9dab0594-84c1-48fa-b0f9-a010ae461c08-kube-api-access-75lr8\") on node \"crc\" DevicePath \"\"" Dec 11 15:32:10 crc kubenswrapper[5050]: I1211 15:32:10.471711 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Dec 11 15:32:10 crc kubenswrapper[5050]: I1211 15:32:10.597899 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"63d4aef8-b968-4728-b048-c8a3879e18c6","Type":"ContainerStarted","Data":"ee435f4e51178a5b87f549213e2d2b1dd69382164f7ead76ee308765d04fbef3"} Dec 11 15:32:10 crc kubenswrapper[5050]: I1211 15:32:10.599724 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"d90cb39c-1181-4407-b380-fc88daaf0cc2","Type":"ContainerStarted","Data":"c7e4eaa0d32b9d60944a030361f771321ef275d1709553e7a3c745ba76d86f8f"} Dec 11 15:32:10 crc kubenswrapper[5050]: I1211 15:32:10.601375 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"7fb00b03-fe6e-4c66-bd36-adf9443871a8","Type":"ContainerStarted","Data":"e7e2309c6d2f1890af6f9d57a1ce2ca3b148a47bf770068840c6f0d9aaf6bf60"} Dec 11 15:32:10 crc kubenswrapper[5050]: I1211 15:32:10.602945 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"71218193-88fc-4811-bf04-33a4f4a87898","Type":"ContainerStarted","Data":"d6d78ae06a556ee38d684c35f4b28a4e8362d122891149382720399f1ed0ffed"} Dec 11 15:32:10 crc kubenswrapper[5050]: I1211 15:32:10.602970 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"71218193-88fc-4811-bf04-33a4f4a87898","Type":"ContainerStarted","Data":"92212d39573136bdb7e08c58d617ca217d7999f15ac8cedce6cd667085c7b965"} Dec 11 15:32:10 crc kubenswrapper[5050]: I1211 15:32:10.603883 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Dec 11 15:32:10 crc kubenswrapper[5050]: I1211 15:32:10.605211 5050 generic.go:334] "Generic (PLEG): container finished" podID="9dab0594-84c1-48fa-b0f9-a010ae461c08" containerID="c4217da09ab7b280b39f45e31215e32e283a8504c6d9bb8d8c904940e92c9f87" exitCode=137 Dec 11 15:32:10 crc kubenswrapper[5050]: I1211 15:32:10.605281 
5050 scope.go:117] "RemoveContainer" containerID="c4217da09ab7b280b39f45e31215e32e283a8504c6d9bb8d8c904940e92c9f87" Dec 11 15:32:10 crc kubenswrapper[5050]: I1211 15:32:10.605288 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Dec 11 15:32:10 crc kubenswrapper[5050]: I1211 15:32:10.641637 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=3.641571733 podStartE2EDuration="3.641571733s" podCreationTimestamp="2025-12-11 15:32:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 15:32:10.635883121 +0000 UTC m=+6221.479605707" watchObservedRunningTime="2025-12-11 15:32:10.641571733 +0000 UTC m=+6221.485294339" Dec 11 15:32:10 crc kubenswrapper[5050]: I1211 15:32:10.664523 5050 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="9dab0594-84c1-48fa-b0f9-a010ae461c08" podUID="d90cb39c-1181-4407-b380-fc88daaf0cc2" Dec 11 15:32:10 crc kubenswrapper[5050]: I1211 15:32:10.674190 5050 scope.go:117] "RemoveContainer" containerID="c4217da09ab7b280b39f45e31215e32e283a8504c6d9bb8d8c904940e92c9f87" Dec 11 15:32:10 crc kubenswrapper[5050]: E1211 15:32:10.674734 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c4217da09ab7b280b39f45e31215e32e283a8504c6d9bb8d8c904940e92c9f87\": container with ID starting with c4217da09ab7b280b39f45e31215e32e283a8504c6d9bb8d8c904940e92c9f87 not found: ID does not exist" containerID="c4217da09ab7b280b39f45e31215e32e283a8504c6d9bb8d8c904940e92c9f87" Dec 11 15:32:10 crc kubenswrapper[5050]: I1211 15:32:10.674834 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c4217da09ab7b280b39f45e31215e32e283a8504c6d9bb8d8c904940e92c9f87"} err="failed to get container status \"c4217da09ab7b280b39f45e31215e32e283a8504c6d9bb8d8c904940e92c9f87\": rpc error: code = NotFound desc = could not find container \"c4217da09ab7b280b39f45e31215e32e283a8504c6d9bb8d8c904940e92c9f87\": container with ID starting with c4217da09ab7b280b39f45e31215e32e283a8504c6d9bb8d8c904940e92c9f87 not found: ID does not exist" Dec 11 15:32:10 crc kubenswrapper[5050]: I1211 15:32:10.675634 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=3.190979308 podStartE2EDuration="3.675608892s" podCreationTimestamp="2025-12-11 15:32:07 +0000 UTC" firstStartedPulling="2025-12-11 15:32:09.587598233 +0000 UTC m=+6220.431320819" lastFinishedPulling="2025-12-11 15:32:10.072227827 +0000 UTC m=+6220.915950403" observedRunningTime="2025-12-11 15:32:10.661579177 +0000 UTC m=+6221.505301773" watchObservedRunningTime="2025-12-11 15:32:10.675608892 +0000 UTC m=+6221.519331478" Dec 11 15:32:10 crc kubenswrapper[5050]: I1211 15:32:10.686089 5050 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="9dab0594-84c1-48fa-b0f9-a010ae461c08" podUID="d90cb39c-1181-4407-b380-fc88daaf0cc2" Dec 11 15:32:11 crc kubenswrapper[5050]: I1211 15:32:11.317273 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-mwbzz" Dec 11 15:32:11 crc kubenswrapper[5050]: I1211 15:32:11.367602 5050 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-mwbzz" Dec 11 15:32:11 crc kubenswrapper[5050]: I1211 15:32:11.564838 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9dab0594-84c1-48fa-b0f9-a010ae461c08" path="/var/lib/kubelet/pods/9dab0594-84c1-48fa-b0f9-a010ae461c08/volumes" Dec 11 15:32:12 crc kubenswrapper[5050]: I1211 15:32:12.546881 5050 scope.go:117] "RemoveContainer" containerID="ebf03c01a0c1f974cd2ea46998cc404d2f6db1e8b59294fa047964a30636886e" Dec 11 15:32:12 crc kubenswrapper[5050]: E1211 15:32:12.547256 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 15:32:13 crc kubenswrapper[5050]: I1211 15:32:13.111179 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mwbzz"] Dec 11 15:32:13 crc kubenswrapper[5050]: I1211 15:32:13.111829 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-mwbzz" podUID="7df56cd6-2ce7-49f3-90fd-3c79af9f9c8a" containerName="registry-server" containerID="cri-o://9e9347371c6a2ec24f63bfee61027b068b67cffdfddac7e82b50c4fcc2803480" gracePeriod=2 Dec 11 15:32:13 crc kubenswrapper[5050]: I1211 15:32:13.580401 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mwbzz" Dec 11 15:32:13 crc kubenswrapper[5050]: I1211 15:32:13.628267 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7df56cd6-2ce7-49f3-90fd-3c79af9f9c8a-catalog-content\") pod \"7df56cd6-2ce7-49f3-90fd-3c79af9f9c8a\" (UID: \"7df56cd6-2ce7-49f3-90fd-3c79af9f9c8a\") " Dec 11 15:32:13 crc kubenswrapper[5050]: I1211 15:32:13.628328 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pzf98\" (UniqueName: \"kubernetes.io/projected/7df56cd6-2ce7-49f3-90fd-3c79af9f9c8a-kube-api-access-pzf98\") pod \"7df56cd6-2ce7-49f3-90fd-3c79af9f9c8a\" (UID: \"7df56cd6-2ce7-49f3-90fd-3c79af9f9c8a\") " Dec 11 15:32:13 crc kubenswrapper[5050]: I1211 15:32:13.628445 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7df56cd6-2ce7-49f3-90fd-3c79af9f9c8a-utilities\") pod \"7df56cd6-2ce7-49f3-90fd-3c79af9f9c8a\" (UID: \"7df56cd6-2ce7-49f3-90fd-3c79af9f9c8a\") " Dec 11 15:32:13 crc kubenswrapper[5050]: I1211 15:32:13.629369 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7df56cd6-2ce7-49f3-90fd-3c79af9f9c8a-utilities" (OuterVolumeSpecName: "utilities") pod "7df56cd6-2ce7-49f3-90fd-3c79af9f9c8a" (UID: "7df56cd6-2ce7-49f3-90fd-3c79af9f9c8a"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 15:32:13 crc kubenswrapper[5050]: I1211 15:32:13.632978 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7df56cd6-2ce7-49f3-90fd-3c79af9f9c8a-kube-api-access-pzf98" (OuterVolumeSpecName: "kube-api-access-pzf98") pod "7df56cd6-2ce7-49f3-90fd-3c79af9f9c8a" (UID: "7df56cd6-2ce7-49f3-90fd-3c79af9f9c8a"). InnerVolumeSpecName "kube-api-access-pzf98". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 15:32:13 crc kubenswrapper[5050]: I1211 15:32:13.676071 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7df56cd6-2ce7-49f3-90fd-3c79af9f9c8a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7df56cd6-2ce7-49f3-90fd-3c79af9f9c8a" (UID: "7df56cd6-2ce7-49f3-90fd-3c79af9f9c8a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 15:32:13 crc kubenswrapper[5050]: I1211 15:32:13.696497 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mwbzz" Dec 11 15:32:13 crc kubenswrapper[5050]: I1211 15:32:13.696512 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mwbzz" event={"ID":"7df56cd6-2ce7-49f3-90fd-3c79af9f9c8a","Type":"ContainerDied","Data":"9e9347371c6a2ec24f63bfee61027b068b67cffdfddac7e82b50c4fcc2803480"} Dec 11 15:32:13 crc kubenswrapper[5050]: I1211 15:32:13.696564 5050 scope.go:117] "RemoveContainer" containerID="9e9347371c6a2ec24f63bfee61027b068b67cffdfddac7e82b50c4fcc2803480" Dec 11 15:32:13 crc kubenswrapper[5050]: I1211 15:32:13.696659 5050 generic.go:334] "Generic (PLEG): container finished" podID="7df56cd6-2ce7-49f3-90fd-3c79af9f9c8a" containerID="9e9347371c6a2ec24f63bfee61027b068b67cffdfddac7e82b50c4fcc2803480" exitCode=0 Dec 11 15:32:13 crc kubenswrapper[5050]: I1211 15:32:13.696681 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mwbzz" event={"ID":"7df56cd6-2ce7-49f3-90fd-3c79af9f9c8a","Type":"ContainerDied","Data":"f49174df14f24904a1a6ff93a3909e0fb96ebe4350e98915c9d727d8fb683010"} Dec 11 15:32:13 crc kubenswrapper[5050]: I1211 15:32:13.717235 5050 scope.go:117] "RemoveContainer" containerID="90b5200b45db6423e146e8d7c3067953d0e92db44e384e3bdfc475dc90df1046" Dec 11 15:32:13 crc kubenswrapper[5050]: I1211 15:32:13.730797 5050 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7df56cd6-2ce7-49f3-90fd-3c79af9f9c8a-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 15:32:13 crc kubenswrapper[5050]: I1211 15:32:13.730839 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pzf98\" (UniqueName: \"kubernetes.io/projected/7df56cd6-2ce7-49f3-90fd-3c79af9f9c8a-kube-api-access-pzf98\") on node \"crc\" DevicePath \"\"" Dec 11 15:32:13 crc kubenswrapper[5050]: I1211 15:32:13.730849 5050 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7df56cd6-2ce7-49f3-90fd-3c79af9f9c8a-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 15:32:13 crc kubenswrapper[5050]: I1211 15:32:13.739215 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mwbzz"] Dec 11 15:32:13 crc kubenswrapper[5050]: I1211 15:32:13.747416 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-marketplace/certified-operators-mwbzz"] Dec 11 15:32:13 crc kubenswrapper[5050]: I1211 15:32:13.753449 5050 scope.go:117] "RemoveContainer" containerID="32f6110f67e367b895cde401023f13d5b1f85c3f15377b7b2569f60a1a7a72a5" Dec 11 15:32:13 crc kubenswrapper[5050]: I1211 15:32:13.778536 5050 scope.go:117] "RemoveContainer" containerID="9e9347371c6a2ec24f63bfee61027b068b67cffdfddac7e82b50c4fcc2803480" Dec 11 15:32:13 crc kubenswrapper[5050]: E1211 15:32:13.779137 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9e9347371c6a2ec24f63bfee61027b068b67cffdfddac7e82b50c4fcc2803480\": container with ID starting with 9e9347371c6a2ec24f63bfee61027b068b67cffdfddac7e82b50c4fcc2803480 not found: ID does not exist" containerID="9e9347371c6a2ec24f63bfee61027b068b67cffdfddac7e82b50c4fcc2803480" Dec 11 15:32:13 crc kubenswrapper[5050]: I1211 15:32:13.779184 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e9347371c6a2ec24f63bfee61027b068b67cffdfddac7e82b50c4fcc2803480"} err="failed to get container status \"9e9347371c6a2ec24f63bfee61027b068b67cffdfddac7e82b50c4fcc2803480\": rpc error: code = NotFound desc = could not find container \"9e9347371c6a2ec24f63bfee61027b068b67cffdfddac7e82b50c4fcc2803480\": container with ID starting with 9e9347371c6a2ec24f63bfee61027b068b67cffdfddac7e82b50c4fcc2803480 not found: ID does not exist" Dec 11 15:32:13 crc kubenswrapper[5050]: I1211 15:32:13.779217 5050 scope.go:117] "RemoveContainer" containerID="90b5200b45db6423e146e8d7c3067953d0e92db44e384e3bdfc475dc90df1046" Dec 11 15:32:13 crc kubenswrapper[5050]: E1211 15:32:13.779604 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"90b5200b45db6423e146e8d7c3067953d0e92db44e384e3bdfc475dc90df1046\": container with ID starting with 90b5200b45db6423e146e8d7c3067953d0e92db44e384e3bdfc475dc90df1046 not found: ID does not exist" containerID="90b5200b45db6423e146e8d7c3067953d0e92db44e384e3bdfc475dc90df1046" Dec 11 15:32:13 crc kubenswrapper[5050]: I1211 15:32:13.779638 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"90b5200b45db6423e146e8d7c3067953d0e92db44e384e3bdfc475dc90df1046"} err="failed to get container status \"90b5200b45db6423e146e8d7c3067953d0e92db44e384e3bdfc475dc90df1046\": rpc error: code = NotFound desc = could not find container \"90b5200b45db6423e146e8d7c3067953d0e92db44e384e3bdfc475dc90df1046\": container with ID starting with 90b5200b45db6423e146e8d7c3067953d0e92db44e384e3bdfc475dc90df1046 not found: ID does not exist" Dec 11 15:32:13 crc kubenswrapper[5050]: I1211 15:32:13.779665 5050 scope.go:117] "RemoveContainer" containerID="32f6110f67e367b895cde401023f13d5b1f85c3f15377b7b2569f60a1a7a72a5" Dec 11 15:32:13 crc kubenswrapper[5050]: E1211 15:32:13.779975 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"32f6110f67e367b895cde401023f13d5b1f85c3f15377b7b2569f60a1a7a72a5\": container with ID starting with 32f6110f67e367b895cde401023f13d5b1f85c3f15377b7b2569f60a1a7a72a5 not found: ID does not exist" containerID="32f6110f67e367b895cde401023f13d5b1f85c3f15377b7b2569f60a1a7a72a5" Dec 11 15:32:13 crc kubenswrapper[5050]: I1211 15:32:13.779994 5050 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"32f6110f67e367b895cde401023f13d5b1f85c3f15377b7b2569f60a1a7a72a5"} err="failed to get container status \"32f6110f67e367b895cde401023f13d5b1f85c3f15377b7b2569f60a1a7a72a5\": rpc error: code = NotFound desc = could not find container \"32f6110f67e367b895cde401023f13d5b1f85c3f15377b7b2569f60a1a7a72a5\": container with ID starting with 32f6110f67e367b895cde401023f13d5b1f85c3f15377b7b2569f60a1a7a72a5 not found: ID does not exist" Dec 11 15:32:15 crc kubenswrapper[5050]: I1211 15:32:15.558873 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7df56cd6-2ce7-49f3-90fd-3c79af9f9c8a" path="/var/lib/kubelet/pods/7df56cd6-2ce7-49f3-90fd-3c79af9f9c8a/volumes" Dec 11 15:32:16 crc kubenswrapper[5050]: I1211 15:32:16.743416 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"7fb00b03-fe6e-4c66-bd36-adf9443871a8","Type":"ContainerStarted","Data":"ab16c4db4fb9a94a51a26ef2f6813876393fb17f070ad6f3d6eecc2067ccb1cc"} Dec 11 15:32:16 crc kubenswrapper[5050]: I1211 15:32:16.746799 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"63d4aef8-b968-4728-b048-c8a3879e18c6","Type":"ContainerStarted","Data":"64effc42f07ee9d990dba2a73ef91c412d1a5e1faba46ba1792ce37bc627c0c3"} Dec 11 15:32:18 crc kubenswrapper[5050]: I1211 15:32:18.471614 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Dec 11 15:32:22 crc kubenswrapper[5050]: I1211 15:32:22.817135 5050 generic.go:334] "Generic (PLEG): container finished" podID="63d4aef8-b968-4728-b048-c8a3879e18c6" containerID="64effc42f07ee9d990dba2a73ef91c412d1a5e1faba46ba1792ce37bc627c0c3" exitCode=0 Dec 11 15:32:22 crc kubenswrapper[5050]: I1211 15:32:22.817578 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"63d4aef8-b968-4728-b048-c8a3879e18c6","Type":"ContainerDied","Data":"64effc42f07ee9d990dba2a73ef91c412d1a5e1faba46ba1792ce37bc627c0c3"} Dec 11 15:32:22 crc kubenswrapper[5050]: I1211 15:32:22.821877 5050 generic.go:334] "Generic (PLEG): container finished" podID="7fb00b03-fe6e-4c66-bd36-adf9443871a8" containerID="ab16c4db4fb9a94a51a26ef2f6813876393fb17f070ad6f3d6eecc2067ccb1cc" exitCode=0 Dec 11 15:32:22 crc kubenswrapper[5050]: I1211 15:32:22.821943 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"7fb00b03-fe6e-4c66-bd36-adf9443871a8","Type":"ContainerDied","Data":"ab16c4db4fb9a94a51a26ef2f6813876393fb17f070ad6f3d6eecc2067ccb1cc"} Dec 11 15:32:24 crc kubenswrapper[5050]: I1211 15:32:24.546191 5050 scope.go:117] "RemoveContainer" containerID="ebf03c01a0c1f974cd2ea46998cc404d2f6db1e8b59294fa047964a30636886e" Dec 11 15:32:24 crc kubenswrapper[5050]: E1211 15:32:24.546823 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 15:32:25 crc kubenswrapper[5050]: I1211 15:32:25.864657 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" 
event={"ID":"63d4aef8-b968-4728-b048-c8a3879e18c6","Type":"ContainerStarted","Data":"bb5e6e2ca1463892d94b5fe30b950cbe9d96a66f04232e3ba0987248fc895fd7"} Dec 11 15:32:28 crc kubenswrapper[5050]: I1211 15:32:28.904486 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"63d4aef8-b968-4728-b048-c8a3879e18c6","Type":"ContainerStarted","Data":"948f85ff603bbc9a26641152c225fbda9c1450426052fb6b08fd1172ff4b86f8"} Dec 11 15:32:28 crc kubenswrapper[5050]: I1211 15:32:28.905046 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/alertmanager-metric-storage-0" Dec 11 15:32:28 crc kubenswrapper[5050]: I1211 15:32:28.908301 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/alertmanager-metric-storage-0" Dec 11 15:32:28 crc kubenswrapper[5050]: I1211 15:32:28.931584 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/alertmanager-metric-storage-0" podStartSLOduration=6.343363147 podStartE2EDuration="20.931549611s" podCreationTimestamp="2025-12-11 15:32:08 +0000 UTC" firstStartedPulling="2025-12-11 15:32:10.292563951 +0000 UTC m=+6221.136286537" lastFinishedPulling="2025-12-11 15:32:24.880750415 +0000 UTC m=+6235.724473001" observedRunningTime="2025-12-11 15:32:28.926171658 +0000 UTC m=+6239.769894264" watchObservedRunningTime="2025-12-11 15:32:28.931549611 +0000 UTC m=+6239.775272197" Dec 11 15:32:30 crc kubenswrapper[5050]: I1211 15:32:30.922375 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"7fb00b03-fe6e-4c66-bd36-adf9443871a8","Type":"ContainerStarted","Data":"0c5246f2871c8299a1d9fbf47c96087b750b411a58afd3370be94ca88a66119e"} Dec 11 15:32:33 crc kubenswrapper[5050]: I1211 15:32:33.950817 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"7fb00b03-fe6e-4c66-bd36-adf9443871a8","Type":"ContainerStarted","Data":"ab695d20544fc11a3431b132f0b6289242334b38c883474548e329e8746ad1aa"} Dec 11 15:32:36 crc kubenswrapper[5050]: I1211 15:32:36.991382 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"7fb00b03-fe6e-4c66-bd36-adf9443871a8","Type":"ContainerStarted","Data":"9d01f1bed3af21ed8e66720265b8777eb8c388382e8feda39b38ad55c6c4c086"} Dec 11 15:32:37 crc kubenswrapper[5050]: I1211 15:32:37.013328 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=2.80737041 podStartE2EDuration="29.013312334s" podCreationTimestamp="2025-12-11 15:32:08 +0000 UTC" firstStartedPulling="2025-12-11 15:32:10.476560305 +0000 UTC m=+6221.320282891" lastFinishedPulling="2025-12-11 15:32:36.682502219 +0000 UTC m=+6247.526224815" observedRunningTime="2025-12-11 15:32:37.012946324 +0000 UTC m=+6247.856668910" watchObservedRunningTime="2025-12-11 15:32:37.013312334 +0000 UTC m=+6247.857034920" Dec 11 15:32:39 crc kubenswrapper[5050]: I1211 15:32:39.552385 5050 scope.go:117] "RemoveContainer" containerID="ebf03c01a0c1f974cd2ea46998cc404d2f6db1e8b59294fa047964a30636886e" Dec 11 15:32:39 crc kubenswrapper[5050]: E1211 15:32:39.552850 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 15:32:39 crc kubenswrapper[5050]: I1211 15:32:39.762838 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Dec 11 15:32:39 crc kubenswrapper[5050]: I1211 15:32:39.762904 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Dec 11 15:32:39 crc kubenswrapper[5050]: I1211 15:32:39.764944 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Dec 11 15:32:40 crc kubenswrapper[5050]: I1211 15:32:40.024738 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Dec 11 15:32:42 crc kubenswrapper[5050]: I1211 15:32:42.403097 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 11 15:32:42 crc kubenswrapper[5050]: E1211 15:32:42.404151 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7df56cd6-2ce7-49f3-90fd-3c79af9f9c8a" containerName="registry-server" Dec 11 15:32:42 crc kubenswrapper[5050]: I1211 15:32:42.404168 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="7df56cd6-2ce7-49f3-90fd-3c79af9f9c8a" containerName="registry-server" Dec 11 15:32:42 crc kubenswrapper[5050]: E1211 15:32:42.404196 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7df56cd6-2ce7-49f3-90fd-3c79af9f9c8a" containerName="extract-utilities" Dec 11 15:32:42 crc kubenswrapper[5050]: I1211 15:32:42.404204 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="7df56cd6-2ce7-49f3-90fd-3c79af9f9c8a" containerName="extract-utilities" Dec 11 15:32:42 crc kubenswrapper[5050]: E1211 15:32:42.404219 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7df56cd6-2ce7-49f3-90fd-3c79af9f9c8a" containerName="extract-content" Dec 11 15:32:42 crc kubenswrapper[5050]: I1211 15:32:42.404238 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="7df56cd6-2ce7-49f3-90fd-3c79af9f9c8a" containerName="extract-content" Dec 11 15:32:42 crc kubenswrapper[5050]: I1211 15:32:42.404511 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="7df56cd6-2ce7-49f3-90fd-3c79af9f9c8a" containerName="registry-server" Dec 11 15:32:42 crc kubenswrapper[5050]: I1211 15:32:42.406887 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 11 15:32:42 crc kubenswrapper[5050]: I1211 15:32:42.412455 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 11 15:32:42 crc kubenswrapper[5050]: I1211 15:32:42.412573 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 11 15:32:42 crc kubenswrapper[5050]: I1211 15:32:42.416414 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 11 15:32:42 crc kubenswrapper[5050]: I1211 15:32:42.560222 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d697bc1-dfee-4c73-a14e-1efe3755b265-config-data\") pod \"ceilometer-0\" (UID: \"2d697bc1-dfee-4c73-a14e-1efe3755b265\") " pod="openstack/ceilometer-0" Dec 11 15:32:42 crc kubenswrapper[5050]: I1211 15:32:42.560436 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2d697bc1-dfee-4c73-a14e-1efe3755b265-scripts\") pod \"ceilometer-0\" (UID: \"2d697bc1-dfee-4c73-a14e-1efe3755b265\") " pod="openstack/ceilometer-0" Dec 11 15:32:42 crc kubenswrapper[5050]: I1211 15:32:42.560577 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2d697bc1-dfee-4c73-a14e-1efe3755b265-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2d697bc1-dfee-4c73-a14e-1efe3755b265\") " pod="openstack/ceilometer-0" Dec 11 15:32:42 crc kubenswrapper[5050]: I1211 15:32:42.560729 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8jxh\" (UniqueName: \"kubernetes.io/projected/2d697bc1-dfee-4c73-a14e-1efe3755b265-kube-api-access-d8jxh\") pod \"ceilometer-0\" (UID: \"2d697bc1-dfee-4c73-a14e-1efe3755b265\") " pod="openstack/ceilometer-0" Dec 11 15:32:42 crc kubenswrapper[5050]: I1211 15:32:42.560826 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2d697bc1-dfee-4c73-a14e-1efe3755b265-run-httpd\") pod \"ceilometer-0\" (UID: \"2d697bc1-dfee-4c73-a14e-1efe3755b265\") " pod="openstack/ceilometer-0" Dec 11 15:32:42 crc kubenswrapper[5050]: I1211 15:32:42.560904 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d697bc1-dfee-4c73-a14e-1efe3755b265-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2d697bc1-dfee-4c73-a14e-1efe3755b265\") " pod="openstack/ceilometer-0" Dec 11 15:32:42 crc kubenswrapper[5050]: I1211 15:32:42.560934 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2d697bc1-dfee-4c73-a14e-1efe3755b265-log-httpd\") pod \"ceilometer-0\" (UID: \"2d697bc1-dfee-4c73-a14e-1efe3755b265\") " pod="openstack/ceilometer-0" Dec 11 15:32:42 crc kubenswrapper[5050]: I1211 15:32:42.662989 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d8jxh\" (UniqueName: \"kubernetes.io/projected/2d697bc1-dfee-4c73-a14e-1efe3755b265-kube-api-access-d8jxh\") pod \"ceilometer-0\" (UID: \"2d697bc1-dfee-4c73-a14e-1efe3755b265\") " pod="openstack/ceilometer-0" Dec 11 15:32:42 crc kubenswrapper[5050]: 
I1211 15:32:42.663074 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2d697bc1-dfee-4c73-a14e-1efe3755b265-run-httpd\") pod \"ceilometer-0\" (UID: \"2d697bc1-dfee-4c73-a14e-1efe3755b265\") " pod="openstack/ceilometer-0" Dec 11 15:32:42 crc kubenswrapper[5050]: I1211 15:32:42.663118 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d697bc1-dfee-4c73-a14e-1efe3755b265-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2d697bc1-dfee-4c73-a14e-1efe3755b265\") " pod="openstack/ceilometer-0" Dec 11 15:32:42 crc kubenswrapper[5050]: I1211 15:32:42.663139 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2d697bc1-dfee-4c73-a14e-1efe3755b265-log-httpd\") pod \"ceilometer-0\" (UID: \"2d697bc1-dfee-4c73-a14e-1efe3755b265\") " pod="openstack/ceilometer-0" Dec 11 15:32:42 crc kubenswrapper[5050]: I1211 15:32:42.663228 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d697bc1-dfee-4c73-a14e-1efe3755b265-config-data\") pod \"ceilometer-0\" (UID: \"2d697bc1-dfee-4c73-a14e-1efe3755b265\") " pod="openstack/ceilometer-0" Dec 11 15:32:42 crc kubenswrapper[5050]: I1211 15:32:42.663312 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2d697bc1-dfee-4c73-a14e-1efe3755b265-scripts\") pod \"ceilometer-0\" (UID: \"2d697bc1-dfee-4c73-a14e-1efe3755b265\") " pod="openstack/ceilometer-0" Dec 11 15:32:42 crc kubenswrapper[5050]: I1211 15:32:42.663367 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2d697bc1-dfee-4c73-a14e-1efe3755b265-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2d697bc1-dfee-4c73-a14e-1efe3755b265\") " pod="openstack/ceilometer-0" Dec 11 15:32:42 crc kubenswrapper[5050]: I1211 15:32:42.664500 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2d697bc1-dfee-4c73-a14e-1efe3755b265-log-httpd\") pod \"ceilometer-0\" (UID: \"2d697bc1-dfee-4c73-a14e-1efe3755b265\") " pod="openstack/ceilometer-0" Dec 11 15:32:42 crc kubenswrapper[5050]: I1211 15:32:42.664577 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2d697bc1-dfee-4c73-a14e-1efe3755b265-run-httpd\") pod \"ceilometer-0\" (UID: \"2d697bc1-dfee-4c73-a14e-1efe3755b265\") " pod="openstack/ceilometer-0" Dec 11 15:32:42 crc kubenswrapper[5050]: I1211 15:32:42.669509 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2d697bc1-dfee-4c73-a14e-1efe3755b265-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2d697bc1-dfee-4c73-a14e-1efe3755b265\") " pod="openstack/ceilometer-0" Dec 11 15:32:42 crc kubenswrapper[5050]: I1211 15:32:42.670520 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d697bc1-dfee-4c73-a14e-1efe3755b265-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2d697bc1-dfee-4c73-a14e-1efe3755b265\") " pod="openstack/ceilometer-0" Dec 11 15:32:42 crc kubenswrapper[5050]: I1211 15:32:42.680435 5050 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2d697bc1-dfee-4c73-a14e-1efe3755b265-scripts\") pod \"ceilometer-0\" (UID: \"2d697bc1-dfee-4c73-a14e-1efe3755b265\") " pod="openstack/ceilometer-0" Dec 11 15:32:42 crc kubenswrapper[5050]: I1211 15:32:42.684923 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d697bc1-dfee-4c73-a14e-1efe3755b265-config-data\") pod \"ceilometer-0\" (UID: \"2d697bc1-dfee-4c73-a14e-1efe3755b265\") " pod="openstack/ceilometer-0" Dec 11 15:32:42 crc kubenswrapper[5050]: I1211 15:32:42.689433 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d8jxh\" (UniqueName: \"kubernetes.io/projected/2d697bc1-dfee-4c73-a14e-1efe3755b265-kube-api-access-d8jxh\") pod \"ceilometer-0\" (UID: \"2d697bc1-dfee-4c73-a14e-1efe3755b265\") " pod="openstack/ceilometer-0" Dec 11 15:32:42 crc kubenswrapper[5050]: I1211 15:32:42.751903 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 11 15:32:43 crc kubenswrapper[5050]: I1211 15:32:43.289277 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 11 15:32:44 crc kubenswrapper[5050]: I1211 15:32:44.067869 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2d697bc1-dfee-4c73-a14e-1efe3755b265","Type":"ContainerStarted","Data":"94911bcee3a6fcf1635f2041b11d709e64d8d1586ea247f5d5c39fc74466fd08"} Dec 11 15:32:49 crc kubenswrapper[5050]: I1211 15:32:49.042975 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-b147-account-create-update-qst8q"] Dec 11 15:32:49 crc kubenswrapper[5050]: I1211 15:32:49.054150 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-jcgz4"] Dec 11 15:32:49 crc kubenswrapper[5050]: I1211 15:32:49.063600 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-b147-account-create-update-qst8q"] Dec 11 15:32:49 crc kubenswrapper[5050]: I1211 15:32:49.073271 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-jcgz4"] Dec 11 15:32:49 crc kubenswrapper[5050]: I1211 15:32:49.127249 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2d697bc1-dfee-4c73-a14e-1efe3755b265","Type":"ContainerStarted","Data":"99fa98e356068c680fdfe51b711841d524733de1086baaa29f8ba75d19536fce"} Dec 11 15:32:49 crc kubenswrapper[5050]: I1211 15:32:49.557945 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a860a2e6-279e-4b60-81cb-895bab7f0525" path="/var/lib/kubelet/pods/a860a2e6-279e-4b60-81cb-895bab7f0525/volumes" Dec 11 15:32:49 crc kubenswrapper[5050]: I1211 15:32:49.558938 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ddb9d2a0-555b-482f-98a9-2aacd8129ead" path="/var/lib/kubelet/pods/ddb9d2a0-555b-482f-98a9-2aacd8129ead/volumes" Dec 11 15:32:51 crc kubenswrapper[5050]: I1211 15:32:51.423375 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2d697bc1-dfee-4c73-a14e-1efe3755b265","Type":"ContainerStarted","Data":"026e8e7c07286a0dc108c50444d79365466e237beb839637de0849eaecd4a51b"} Dec 11 15:32:51 crc kubenswrapper[5050]: I1211 15:32:51.547705 5050 scope.go:117] "RemoveContainer" containerID="ebf03c01a0c1f974cd2ea46998cc404d2f6db1e8b59294fa047964a30636886e" Dec 11 15:32:51 crc kubenswrapper[5050]: E1211 
15:32:51.548334 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 15:32:53 crc kubenswrapper[5050]: I1211 15:32:53.448881 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2d697bc1-dfee-4c73-a14e-1efe3755b265","Type":"ContainerStarted","Data":"c9efaf60bba126e1e72e580cd48fe0427b3f9a1636123f9f4e5eb48a7f90277b"} Dec 11 15:32:54 crc kubenswrapper[5050]: I1211 15:32:54.461303 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2d697bc1-dfee-4c73-a14e-1efe3755b265","Type":"ContainerStarted","Data":"2f00152d880c0063c28f861ec958d5222480e1906bfd18662dee2e6e36c7f041"} Dec 11 15:32:54 crc kubenswrapper[5050]: I1211 15:32:54.461960 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 11 15:32:54 crc kubenswrapper[5050]: I1211 15:32:54.487557 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.878892075 podStartE2EDuration="12.487536156s" podCreationTimestamp="2025-12-11 15:32:42 +0000 UTC" firstStartedPulling="2025-12-11 15:32:43.310865165 +0000 UTC m=+6254.154587751" lastFinishedPulling="2025-12-11 15:32:53.919509246 +0000 UTC m=+6264.763231832" observedRunningTime="2025-12-11 15:32:54.482950013 +0000 UTC m=+6265.326672599" watchObservedRunningTime="2025-12-11 15:32:54.487536156 +0000 UTC m=+6265.331258752" Dec 11 15:32:57 crc kubenswrapper[5050]: I1211 15:32:57.052497 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-nxd6x"] Dec 11 15:32:57 crc kubenswrapper[5050]: I1211 15:32:57.063374 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-nxd6x"] Dec 11 15:32:57 crc kubenswrapper[5050]: I1211 15:32:57.557747 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4f664216-f326-4ee5-aa8a-167f41efbd65" path="/var/lib/kubelet/pods/4f664216-f326-4ee5-aa8a-167f41efbd65/volumes" Dec 11 15:32:58 crc kubenswrapper[5050]: I1211 15:32:58.930810 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-create-76vp7"] Dec 11 15:32:58 crc kubenswrapper[5050]: I1211 15:32:58.932652 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-76vp7" Dec 11 15:32:58 crc kubenswrapper[5050]: I1211 15:32:58.941159 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-76vp7"] Dec 11 15:32:59 crc kubenswrapper[5050]: I1211 15:32:59.054052 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-059a-account-create-update-hpdrc"] Dec 11 15:32:59 crc kubenswrapper[5050]: I1211 15:32:59.055637 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-059a-account-create-update-hpdrc" Dec 11 15:32:59 crc kubenswrapper[5050]: I1211 15:32:59.058790 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-db-secret" Dec 11 15:32:59 crc kubenswrapper[5050]: I1211 15:32:59.074558 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-059a-account-create-update-hpdrc"] Dec 11 15:32:59 crc kubenswrapper[5050]: I1211 15:32:59.098455 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2g6g\" (UniqueName: \"kubernetes.io/projected/690a3b25-657c-45f2-b9ea-0524747cfc73-kube-api-access-h2g6g\") pod \"aodh-db-create-76vp7\" (UID: \"690a3b25-657c-45f2-b9ea-0524747cfc73\") " pod="openstack/aodh-db-create-76vp7" Dec 11 15:32:59 crc kubenswrapper[5050]: I1211 15:32:59.098573 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/690a3b25-657c-45f2-b9ea-0524747cfc73-operator-scripts\") pod \"aodh-db-create-76vp7\" (UID: \"690a3b25-657c-45f2-b9ea-0524747cfc73\") " pod="openstack/aodh-db-create-76vp7" Dec 11 15:32:59 crc kubenswrapper[5050]: I1211 15:32:59.200933 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6c2lh\" (UniqueName: \"kubernetes.io/projected/4adb707f-3ea4-4240-945e-56011d9af159-kube-api-access-6c2lh\") pod \"aodh-059a-account-create-update-hpdrc\" (UID: \"4adb707f-3ea4-4240-945e-56011d9af159\") " pod="openstack/aodh-059a-account-create-update-hpdrc" Dec 11 15:32:59 crc kubenswrapper[5050]: I1211 15:32:59.201003 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h2g6g\" (UniqueName: \"kubernetes.io/projected/690a3b25-657c-45f2-b9ea-0524747cfc73-kube-api-access-h2g6g\") pod \"aodh-db-create-76vp7\" (UID: \"690a3b25-657c-45f2-b9ea-0524747cfc73\") " pod="openstack/aodh-db-create-76vp7" Dec 11 15:32:59 crc kubenswrapper[5050]: I1211 15:32:59.201124 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4adb707f-3ea4-4240-945e-56011d9af159-operator-scripts\") pod \"aodh-059a-account-create-update-hpdrc\" (UID: \"4adb707f-3ea4-4240-945e-56011d9af159\") " pod="openstack/aodh-059a-account-create-update-hpdrc" Dec 11 15:32:59 crc kubenswrapper[5050]: I1211 15:32:59.201153 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/690a3b25-657c-45f2-b9ea-0524747cfc73-operator-scripts\") pod \"aodh-db-create-76vp7\" (UID: \"690a3b25-657c-45f2-b9ea-0524747cfc73\") " pod="openstack/aodh-db-create-76vp7" Dec 11 15:32:59 crc kubenswrapper[5050]: I1211 15:32:59.202075 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/690a3b25-657c-45f2-b9ea-0524747cfc73-operator-scripts\") pod \"aodh-db-create-76vp7\" (UID: \"690a3b25-657c-45f2-b9ea-0524747cfc73\") " pod="openstack/aodh-db-create-76vp7" Dec 11 15:32:59 crc kubenswrapper[5050]: I1211 15:32:59.224741 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h2g6g\" (UniqueName: \"kubernetes.io/projected/690a3b25-657c-45f2-b9ea-0524747cfc73-kube-api-access-h2g6g\") pod \"aodh-db-create-76vp7\" (UID: 
\"690a3b25-657c-45f2-b9ea-0524747cfc73\") " pod="openstack/aodh-db-create-76vp7" Dec 11 15:32:59 crc kubenswrapper[5050]: I1211 15:32:59.257896 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-76vp7" Dec 11 15:32:59 crc kubenswrapper[5050]: I1211 15:32:59.303601 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4adb707f-3ea4-4240-945e-56011d9af159-operator-scripts\") pod \"aodh-059a-account-create-update-hpdrc\" (UID: \"4adb707f-3ea4-4240-945e-56011d9af159\") " pod="openstack/aodh-059a-account-create-update-hpdrc" Dec 11 15:32:59 crc kubenswrapper[5050]: I1211 15:32:59.303751 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6c2lh\" (UniqueName: \"kubernetes.io/projected/4adb707f-3ea4-4240-945e-56011d9af159-kube-api-access-6c2lh\") pod \"aodh-059a-account-create-update-hpdrc\" (UID: \"4adb707f-3ea4-4240-945e-56011d9af159\") " pod="openstack/aodh-059a-account-create-update-hpdrc" Dec 11 15:32:59 crc kubenswrapper[5050]: I1211 15:32:59.304454 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4adb707f-3ea4-4240-945e-56011d9af159-operator-scripts\") pod \"aodh-059a-account-create-update-hpdrc\" (UID: \"4adb707f-3ea4-4240-945e-56011d9af159\") " pod="openstack/aodh-059a-account-create-update-hpdrc" Dec 11 15:32:59 crc kubenswrapper[5050]: I1211 15:32:59.321793 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6c2lh\" (UniqueName: \"kubernetes.io/projected/4adb707f-3ea4-4240-945e-56011d9af159-kube-api-access-6c2lh\") pod \"aodh-059a-account-create-update-hpdrc\" (UID: \"4adb707f-3ea4-4240-945e-56011d9af159\") " pod="openstack/aodh-059a-account-create-update-hpdrc" Dec 11 15:32:59 crc kubenswrapper[5050]: I1211 15:32:59.383546 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-059a-account-create-update-hpdrc" Dec 11 15:32:59 crc kubenswrapper[5050]: I1211 15:32:59.785004 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-76vp7"] Dec 11 15:32:59 crc kubenswrapper[5050]: W1211 15:32:59.924705 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4adb707f_3ea4_4240_945e_56011d9af159.slice/crio-22d20fcb7a80139c203d917a8ecfd0acb082697e516527b83ea2ab39ca14d95d WatchSource:0}: Error finding container 22d20fcb7a80139c203d917a8ecfd0acb082697e516527b83ea2ab39ca14d95d: Status 404 returned error can't find the container with id 22d20fcb7a80139c203d917a8ecfd0acb082697e516527b83ea2ab39ca14d95d Dec 11 15:32:59 crc kubenswrapper[5050]: I1211 15:32:59.929313 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-059a-account-create-update-hpdrc"] Dec 11 15:33:00 crc kubenswrapper[5050]: I1211 15:33:00.521136 5050 generic.go:334] "Generic (PLEG): container finished" podID="690a3b25-657c-45f2-b9ea-0524747cfc73" containerID="d13fdec15ae43ef5e65e61a5ba702be22b0ed8bf81a2bc8d3525f301f55db3de" exitCode=0 Dec 11 15:33:00 crc kubenswrapper[5050]: I1211 15:33:00.521188 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-76vp7" event={"ID":"690a3b25-657c-45f2-b9ea-0524747cfc73","Type":"ContainerDied","Data":"d13fdec15ae43ef5e65e61a5ba702be22b0ed8bf81a2bc8d3525f301f55db3de"} Dec 11 15:33:00 crc kubenswrapper[5050]: I1211 15:33:00.521257 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-76vp7" event={"ID":"690a3b25-657c-45f2-b9ea-0524747cfc73","Type":"ContainerStarted","Data":"8be1fe63533e27717366bf02b5580ea154999ce9261e13c60ba9ff54a3a8cb6e"} Dec 11 15:33:00 crc kubenswrapper[5050]: I1211 15:33:00.523195 5050 generic.go:334] "Generic (PLEG): container finished" podID="4adb707f-3ea4-4240-945e-56011d9af159" containerID="08c868ab19a3bbdb777661860d40e3b26650d8c18672365c50e50d93b4f1cb98" exitCode=0 Dec 11 15:33:00 crc kubenswrapper[5050]: I1211 15:33:00.523239 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-059a-account-create-update-hpdrc" event={"ID":"4adb707f-3ea4-4240-945e-56011d9af159","Type":"ContainerDied","Data":"08c868ab19a3bbdb777661860d40e3b26650d8c18672365c50e50d93b4f1cb98"} Dec 11 15:33:00 crc kubenswrapper[5050]: I1211 15:33:00.523266 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-059a-account-create-update-hpdrc" event={"ID":"4adb707f-3ea4-4240-945e-56011d9af159","Type":"ContainerStarted","Data":"22d20fcb7a80139c203d917a8ecfd0acb082697e516527b83ea2ab39ca14d95d"} Dec 11 15:33:01 crc kubenswrapper[5050]: I1211 15:33:01.480802 5050 scope.go:117] "RemoveContainer" containerID="578b239ac0eb1eceac16034f9bd1b64d1b33ce112cb38d916f0219ee5ec91442" Dec 11 15:33:01 crc kubenswrapper[5050]: I1211 15:33:01.503646 5050 scope.go:117] "RemoveContainer" containerID="919526ce458eecdc706f3209ee706b33539871c9fa5dc8a0bdb096afb3e5c27a" Dec 11 15:33:01 crc kubenswrapper[5050]: I1211 15:33:01.612948 5050 scope.go:117] "RemoveContainer" containerID="ef87bc8a5bcc42721914762d982301661a280a48eb6b5900a1409172956af3fb" Dec 11 15:33:01 crc kubenswrapper[5050]: I1211 15:33:01.644443 5050 scope.go:117] "RemoveContainer" containerID="d5a02bbceed282d5531ddaefcb14aa55611a9672386803752c6e363d5bcef2ec" Dec 11 15:33:01 crc kubenswrapper[5050]: I1211 15:33:01.696660 5050 scope.go:117] "RemoveContainer" 
containerID="7e044c9a621d31bd2d96de1d63132fefaf26eba1190070e5637504856cb5b072" Dec 11 15:33:01 crc kubenswrapper[5050]: I1211 15:33:01.726413 5050 scope.go:117] "RemoveContainer" containerID="19c6a9a722efda4436f056ed0806f41b1bf98af98b3841b6c0ef07ac48fc68f3" Dec 11 15:33:03 crc kubenswrapper[5050]: I1211 15:33:03.380696 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-059a-account-create-update-hpdrc" Dec 11 15:33:03 crc kubenswrapper[5050]: I1211 15:33:03.387875 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-76vp7" Dec 11 15:33:03 crc kubenswrapper[5050]: I1211 15:33:03.501325 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4adb707f-3ea4-4240-945e-56011d9af159-operator-scripts\") pod \"4adb707f-3ea4-4240-945e-56011d9af159\" (UID: \"4adb707f-3ea4-4240-945e-56011d9af159\") " Dec 11 15:33:03 crc kubenswrapper[5050]: I1211 15:33:03.501406 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/690a3b25-657c-45f2-b9ea-0524747cfc73-operator-scripts\") pod \"690a3b25-657c-45f2-b9ea-0524747cfc73\" (UID: \"690a3b25-657c-45f2-b9ea-0524747cfc73\") " Dec 11 15:33:03 crc kubenswrapper[5050]: I1211 15:33:03.501433 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6c2lh\" (UniqueName: \"kubernetes.io/projected/4adb707f-3ea4-4240-945e-56011d9af159-kube-api-access-6c2lh\") pod \"4adb707f-3ea4-4240-945e-56011d9af159\" (UID: \"4adb707f-3ea4-4240-945e-56011d9af159\") " Dec 11 15:33:03 crc kubenswrapper[5050]: I1211 15:33:03.501713 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h2g6g\" (UniqueName: \"kubernetes.io/projected/690a3b25-657c-45f2-b9ea-0524747cfc73-kube-api-access-h2g6g\") pod \"690a3b25-657c-45f2-b9ea-0524747cfc73\" (UID: \"690a3b25-657c-45f2-b9ea-0524747cfc73\") " Dec 11 15:33:03 crc kubenswrapper[5050]: I1211 15:33:03.502065 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/690a3b25-657c-45f2-b9ea-0524747cfc73-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "690a3b25-657c-45f2-b9ea-0524747cfc73" (UID: "690a3b25-657c-45f2-b9ea-0524747cfc73"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 15:33:03 crc kubenswrapper[5050]: I1211 15:33:03.502354 5050 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/690a3b25-657c-45f2-b9ea-0524747cfc73-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 15:33:03 crc kubenswrapper[5050]: I1211 15:33:03.502065 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4adb707f-3ea4-4240-945e-56011d9af159-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4adb707f-3ea4-4240-945e-56011d9af159" (UID: "4adb707f-3ea4-4240-945e-56011d9af159"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 15:33:03 crc kubenswrapper[5050]: I1211 15:33:03.507414 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/690a3b25-657c-45f2-b9ea-0524747cfc73-kube-api-access-h2g6g" (OuterVolumeSpecName: "kube-api-access-h2g6g") pod "690a3b25-657c-45f2-b9ea-0524747cfc73" (UID: "690a3b25-657c-45f2-b9ea-0524747cfc73"). InnerVolumeSpecName "kube-api-access-h2g6g". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 15:33:03 crc kubenswrapper[5050]: I1211 15:33:03.507563 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4adb707f-3ea4-4240-945e-56011d9af159-kube-api-access-6c2lh" (OuterVolumeSpecName: "kube-api-access-6c2lh") pod "4adb707f-3ea4-4240-945e-56011d9af159" (UID: "4adb707f-3ea4-4240-945e-56011d9af159"). InnerVolumeSpecName "kube-api-access-6c2lh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 15:33:03 crc kubenswrapper[5050]: I1211 15:33:03.573821 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-76vp7" event={"ID":"690a3b25-657c-45f2-b9ea-0524747cfc73","Type":"ContainerDied","Data":"8be1fe63533e27717366bf02b5580ea154999ce9261e13c60ba9ff54a3a8cb6e"} Dec 11 15:33:03 crc kubenswrapper[5050]: I1211 15:33:03.573856 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-76vp7" Dec 11 15:33:03 crc kubenswrapper[5050]: I1211 15:33:03.573862 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8be1fe63533e27717366bf02b5580ea154999ce9261e13c60ba9ff54a3a8cb6e" Dec 11 15:33:03 crc kubenswrapper[5050]: I1211 15:33:03.575413 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-059a-account-create-update-hpdrc" event={"ID":"4adb707f-3ea4-4240-945e-56011d9af159","Type":"ContainerDied","Data":"22d20fcb7a80139c203d917a8ecfd0acb082697e516527b83ea2ab39ca14d95d"} Dec 11 15:33:03 crc kubenswrapper[5050]: I1211 15:33:03.575473 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="22d20fcb7a80139c203d917a8ecfd0acb082697e516527b83ea2ab39ca14d95d" Dec 11 15:33:03 crc kubenswrapper[5050]: I1211 15:33:03.575436 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-059a-account-create-update-hpdrc" Dec 11 15:33:03 crc kubenswrapper[5050]: I1211 15:33:03.619541 5050 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4adb707f-3ea4-4240-945e-56011d9af159-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 15:33:03 crc kubenswrapper[5050]: I1211 15:33:03.619792 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6c2lh\" (UniqueName: \"kubernetes.io/projected/4adb707f-3ea4-4240-945e-56011d9af159-kube-api-access-6c2lh\") on node \"crc\" DevicePath \"\"" Dec 11 15:33:03 crc kubenswrapper[5050]: I1211 15:33:03.619904 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h2g6g\" (UniqueName: \"kubernetes.io/projected/690a3b25-657c-45f2-b9ea-0524747cfc73-kube-api-access-h2g6g\") on node \"crc\" DevicePath \"\"" Dec 11 15:33:06 crc kubenswrapper[5050]: I1211 15:33:06.545717 5050 scope.go:117] "RemoveContainer" containerID="ebf03c01a0c1f974cd2ea46998cc404d2f6db1e8b59294fa047964a30636886e" Dec 11 15:33:06 crc kubenswrapper[5050]: E1211 15:33:06.546397 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 15:33:09 crc kubenswrapper[5050]: I1211 15:33:09.412466 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-sync-d74n6"] Dec 11 15:33:09 crc kubenswrapper[5050]: E1211 15:33:09.414603 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="690a3b25-657c-45f2-b9ea-0524747cfc73" containerName="mariadb-database-create" Dec 11 15:33:09 crc kubenswrapper[5050]: I1211 15:33:09.414637 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="690a3b25-657c-45f2-b9ea-0524747cfc73" containerName="mariadb-database-create" Dec 11 15:33:09 crc kubenswrapper[5050]: E1211 15:33:09.414679 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4adb707f-3ea4-4240-945e-56011d9af159" containerName="mariadb-account-create-update" Dec 11 15:33:09 crc kubenswrapper[5050]: I1211 15:33:09.414688 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="4adb707f-3ea4-4240-945e-56011d9af159" containerName="mariadb-account-create-update" Dec 11 15:33:09 crc kubenswrapper[5050]: I1211 15:33:09.414974 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="4adb707f-3ea4-4240-945e-56011d9af159" containerName="mariadb-account-create-update" Dec 11 15:33:09 crc kubenswrapper[5050]: I1211 15:33:09.415024 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="690a3b25-657c-45f2-b9ea-0524747cfc73" containerName="mariadb-database-create" Dec 11 15:33:09 crc kubenswrapper[5050]: I1211 15:33:09.416015 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-d74n6" Dec 11 15:33:09 crc kubenswrapper[5050]: I1211 15:33:09.417852 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Dec 11 15:33:09 crc kubenswrapper[5050]: I1211 15:33:09.418548 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-7ght8" Dec 11 15:33:09 crc kubenswrapper[5050]: I1211 15:33:09.419043 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Dec 11 15:33:09 crc kubenswrapper[5050]: I1211 15:33:09.420421 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Dec 11 15:33:09 crc kubenswrapper[5050]: I1211 15:33:09.424112 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-d74n6"] Dec 11 15:33:09 crc kubenswrapper[5050]: I1211 15:33:09.539849 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6glq7\" (UniqueName: \"kubernetes.io/projected/8bc5b4ae-cf53-4fc7-8233-bcb362806684-kube-api-access-6glq7\") pod \"aodh-db-sync-d74n6\" (UID: \"8bc5b4ae-cf53-4fc7-8233-bcb362806684\") " pod="openstack/aodh-db-sync-d74n6" Dec 11 15:33:09 crc kubenswrapper[5050]: I1211 15:33:09.539902 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8bc5b4ae-cf53-4fc7-8233-bcb362806684-scripts\") pod \"aodh-db-sync-d74n6\" (UID: \"8bc5b4ae-cf53-4fc7-8233-bcb362806684\") " pod="openstack/aodh-db-sync-d74n6" Dec 11 15:33:09 crc kubenswrapper[5050]: I1211 15:33:09.539952 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8bc5b4ae-cf53-4fc7-8233-bcb362806684-combined-ca-bundle\") pod \"aodh-db-sync-d74n6\" (UID: \"8bc5b4ae-cf53-4fc7-8233-bcb362806684\") " pod="openstack/aodh-db-sync-d74n6" Dec 11 15:33:09 crc kubenswrapper[5050]: I1211 15:33:09.539999 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8bc5b4ae-cf53-4fc7-8233-bcb362806684-config-data\") pod \"aodh-db-sync-d74n6\" (UID: \"8bc5b4ae-cf53-4fc7-8233-bcb362806684\") " pod="openstack/aodh-db-sync-d74n6" Dec 11 15:33:09 crc kubenswrapper[5050]: I1211 15:33:09.641541 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6glq7\" (UniqueName: \"kubernetes.io/projected/8bc5b4ae-cf53-4fc7-8233-bcb362806684-kube-api-access-6glq7\") pod \"aodh-db-sync-d74n6\" (UID: \"8bc5b4ae-cf53-4fc7-8233-bcb362806684\") " pod="openstack/aodh-db-sync-d74n6" Dec 11 15:33:09 crc kubenswrapper[5050]: I1211 15:33:09.641588 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8bc5b4ae-cf53-4fc7-8233-bcb362806684-scripts\") pod \"aodh-db-sync-d74n6\" (UID: \"8bc5b4ae-cf53-4fc7-8233-bcb362806684\") " pod="openstack/aodh-db-sync-d74n6" Dec 11 15:33:09 crc kubenswrapper[5050]: I1211 15:33:09.641644 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8bc5b4ae-cf53-4fc7-8233-bcb362806684-combined-ca-bundle\") pod \"aodh-db-sync-d74n6\" (UID: \"8bc5b4ae-cf53-4fc7-8233-bcb362806684\") " pod="openstack/aodh-db-sync-d74n6" Dec 11 15:33:09 crc 
kubenswrapper[5050]: I1211 15:33:09.641685 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8bc5b4ae-cf53-4fc7-8233-bcb362806684-config-data\") pod \"aodh-db-sync-d74n6\" (UID: \"8bc5b4ae-cf53-4fc7-8233-bcb362806684\") " pod="openstack/aodh-db-sync-d74n6" Dec 11 15:33:09 crc kubenswrapper[5050]: I1211 15:33:09.651435 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8bc5b4ae-cf53-4fc7-8233-bcb362806684-config-data\") pod \"aodh-db-sync-d74n6\" (UID: \"8bc5b4ae-cf53-4fc7-8233-bcb362806684\") " pod="openstack/aodh-db-sync-d74n6" Dec 11 15:33:09 crc kubenswrapper[5050]: I1211 15:33:09.656670 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8bc5b4ae-cf53-4fc7-8233-bcb362806684-combined-ca-bundle\") pod \"aodh-db-sync-d74n6\" (UID: \"8bc5b4ae-cf53-4fc7-8233-bcb362806684\") " pod="openstack/aodh-db-sync-d74n6" Dec 11 15:33:09 crc kubenswrapper[5050]: I1211 15:33:09.657008 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8bc5b4ae-cf53-4fc7-8233-bcb362806684-scripts\") pod \"aodh-db-sync-d74n6\" (UID: \"8bc5b4ae-cf53-4fc7-8233-bcb362806684\") " pod="openstack/aodh-db-sync-d74n6" Dec 11 15:33:09 crc kubenswrapper[5050]: I1211 15:33:09.659392 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6glq7\" (UniqueName: \"kubernetes.io/projected/8bc5b4ae-cf53-4fc7-8233-bcb362806684-kube-api-access-6glq7\") pod \"aodh-db-sync-d74n6\" (UID: \"8bc5b4ae-cf53-4fc7-8233-bcb362806684\") " pod="openstack/aodh-db-sync-d74n6" Dec 11 15:33:09 crc kubenswrapper[5050]: I1211 15:33:09.792870 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-d74n6" Dec 11 15:33:10 crc kubenswrapper[5050]: I1211 15:33:10.244126 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-d74n6"] Dec 11 15:33:10 crc kubenswrapper[5050]: I1211 15:33:10.649070 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-d74n6" event={"ID":"8bc5b4ae-cf53-4fc7-8233-bcb362806684","Type":"ContainerStarted","Data":"f77a564cee0f110b3451f180d66edbf04e9242436065e46b05d857daecae6729"} Dec 11 15:33:12 crc kubenswrapper[5050]: I1211 15:33:12.883757 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Dec 11 15:33:16 crc kubenswrapper[5050]: I1211 15:33:16.725052 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-d74n6" event={"ID":"8bc5b4ae-cf53-4fc7-8233-bcb362806684","Type":"ContainerStarted","Data":"d106b55970922c5710ed694066cf811edd1799badadc2a6dc2ce8e89552a73a3"} Dec 11 15:33:17 crc kubenswrapper[5050]: I1211 15:33:17.781562 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-db-sync-d74n6" podStartSLOduration=3.215196777 podStartE2EDuration="8.78154232s" podCreationTimestamp="2025-12-11 15:33:09 +0000 UTC" firstStartedPulling="2025-12-11 15:33:10.247823763 +0000 UTC m=+6281.091546349" lastFinishedPulling="2025-12-11 15:33:15.814169306 +0000 UTC m=+6286.657891892" observedRunningTime="2025-12-11 15:33:17.773268579 +0000 UTC m=+6288.616991165" watchObservedRunningTime="2025-12-11 15:33:17.78154232 +0000 UTC m=+6288.625264896" Dec 11 15:33:18 crc kubenswrapper[5050]: I1211 15:33:18.546264 5050 scope.go:117] "RemoveContainer" containerID="ebf03c01a0c1f974cd2ea46998cc404d2f6db1e8b59294fa047964a30636886e" Dec 11 15:33:18 crc kubenswrapper[5050]: E1211 15:33:18.546610 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 15:33:22 crc kubenswrapper[5050]: I1211 15:33:22.796989 5050 generic.go:334] "Generic (PLEG): container finished" podID="8bc5b4ae-cf53-4fc7-8233-bcb362806684" containerID="d106b55970922c5710ed694066cf811edd1799badadc2a6dc2ce8e89552a73a3" exitCode=0 Dec 11 15:33:22 crc kubenswrapper[5050]: I1211 15:33:22.797048 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-d74n6" event={"ID":"8bc5b4ae-cf53-4fc7-8233-bcb362806684","Type":"ContainerDied","Data":"d106b55970922c5710ed694066cf811edd1799badadc2a6dc2ce8e89552a73a3"} Dec 11 15:33:24 crc kubenswrapper[5050]: I1211 15:33:24.202556 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-d74n6" Dec 11 15:33:24 crc kubenswrapper[5050]: I1211 15:33:24.303271 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8bc5b4ae-cf53-4fc7-8233-bcb362806684-combined-ca-bundle\") pod \"8bc5b4ae-cf53-4fc7-8233-bcb362806684\" (UID: \"8bc5b4ae-cf53-4fc7-8233-bcb362806684\") " Dec 11 15:33:24 crc kubenswrapper[5050]: I1211 15:33:24.303348 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6glq7\" (UniqueName: \"kubernetes.io/projected/8bc5b4ae-cf53-4fc7-8233-bcb362806684-kube-api-access-6glq7\") pod \"8bc5b4ae-cf53-4fc7-8233-bcb362806684\" (UID: \"8bc5b4ae-cf53-4fc7-8233-bcb362806684\") " Dec 11 15:33:24 crc kubenswrapper[5050]: I1211 15:33:24.303622 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8bc5b4ae-cf53-4fc7-8233-bcb362806684-config-data\") pod \"8bc5b4ae-cf53-4fc7-8233-bcb362806684\" (UID: \"8bc5b4ae-cf53-4fc7-8233-bcb362806684\") " Dec 11 15:33:24 crc kubenswrapper[5050]: I1211 15:33:24.303667 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8bc5b4ae-cf53-4fc7-8233-bcb362806684-scripts\") pod \"8bc5b4ae-cf53-4fc7-8233-bcb362806684\" (UID: \"8bc5b4ae-cf53-4fc7-8233-bcb362806684\") " Dec 11 15:33:24 crc kubenswrapper[5050]: I1211 15:33:24.308957 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8bc5b4ae-cf53-4fc7-8233-bcb362806684-kube-api-access-6glq7" (OuterVolumeSpecName: "kube-api-access-6glq7") pod "8bc5b4ae-cf53-4fc7-8233-bcb362806684" (UID: "8bc5b4ae-cf53-4fc7-8233-bcb362806684"). InnerVolumeSpecName "kube-api-access-6glq7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 15:33:24 crc kubenswrapper[5050]: I1211 15:33:24.321836 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8bc5b4ae-cf53-4fc7-8233-bcb362806684-scripts" (OuterVolumeSpecName: "scripts") pod "8bc5b4ae-cf53-4fc7-8233-bcb362806684" (UID: "8bc5b4ae-cf53-4fc7-8233-bcb362806684"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 15:33:24 crc kubenswrapper[5050]: I1211 15:33:24.331441 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8bc5b4ae-cf53-4fc7-8233-bcb362806684-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8bc5b4ae-cf53-4fc7-8233-bcb362806684" (UID: "8bc5b4ae-cf53-4fc7-8233-bcb362806684"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 15:33:24 crc kubenswrapper[5050]: I1211 15:33:24.336613 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8bc5b4ae-cf53-4fc7-8233-bcb362806684-config-data" (OuterVolumeSpecName: "config-data") pod "8bc5b4ae-cf53-4fc7-8233-bcb362806684" (UID: "8bc5b4ae-cf53-4fc7-8233-bcb362806684"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 15:33:24 crc kubenswrapper[5050]: I1211 15:33:24.406909 5050 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8bc5b4ae-cf53-4fc7-8233-bcb362806684-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 15:33:24 crc kubenswrapper[5050]: I1211 15:33:24.406946 5050 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8bc5b4ae-cf53-4fc7-8233-bcb362806684-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 15:33:24 crc kubenswrapper[5050]: I1211 15:33:24.406958 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6glq7\" (UniqueName: \"kubernetes.io/projected/8bc5b4ae-cf53-4fc7-8233-bcb362806684-kube-api-access-6glq7\") on node \"crc\" DevicePath \"\"" Dec 11 15:33:24 crc kubenswrapper[5050]: I1211 15:33:24.406967 5050 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8bc5b4ae-cf53-4fc7-8233-bcb362806684-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 15:33:24 crc kubenswrapper[5050]: I1211 15:33:24.819514 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-d74n6" event={"ID":"8bc5b4ae-cf53-4fc7-8233-bcb362806684","Type":"ContainerDied","Data":"f77a564cee0f110b3451f180d66edbf04e9242436065e46b05d857daecae6729"} Dec 11 15:33:24 crc kubenswrapper[5050]: I1211 15:33:24.819572 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f77a564cee0f110b3451f180d66edbf04e9242436065e46b05d857daecae6729" Dec 11 15:33:24 crc kubenswrapper[5050]: I1211 15:33:24.819639 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-d74n6" Dec 11 15:33:29 crc kubenswrapper[5050]: I1211 15:33:29.306998 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Dec 11 15:33:29 crc kubenswrapper[5050]: E1211 15:33:29.308261 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8bc5b4ae-cf53-4fc7-8233-bcb362806684" containerName="aodh-db-sync" Dec 11 15:33:29 crc kubenswrapper[5050]: I1211 15:33:29.308286 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bc5b4ae-cf53-4fc7-8233-bcb362806684" containerName="aodh-db-sync" Dec 11 15:33:29 crc kubenswrapper[5050]: I1211 15:33:29.308605 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="8bc5b4ae-cf53-4fc7-8233-bcb362806684" containerName="aodh-db-sync" Dec 11 15:33:29 crc kubenswrapper[5050]: I1211 15:33:29.310860 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Dec 11 15:33:29 crc kubenswrapper[5050]: I1211 15:33:29.312901 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Dec 11 15:33:29 crc kubenswrapper[5050]: I1211 15:33:29.315791 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-7ght8" Dec 11 15:33:29 crc kubenswrapper[5050]: I1211 15:33:29.315819 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Dec 11 15:33:29 crc kubenswrapper[5050]: I1211 15:33:29.325532 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Dec 11 15:33:29 crc kubenswrapper[5050]: I1211 15:33:29.412443 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e89ae685-619c-4c74-80e9-d71728108118-config-data\") pod \"aodh-0\" (UID: \"e89ae685-619c-4c74-80e9-d71728108118\") " pod="openstack/aodh-0" Dec 11 15:33:29 crc kubenswrapper[5050]: I1211 15:33:29.415581 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e89ae685-619c-4c74-80e9-d71728108118-scripts\") pod \"aodh-0\" (UID: \"e89ae685-619c-4c74-80e9-d71728108118\") " pod="openstack/aodh-0" Dec 11 15:33:29 crc kubenswrapper[5050]: I1211 15:33:29.415768 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e89ae685-619c-4c74-80e9-d71728108118-combined-ca-bundle\") pod \"aodh-0\" (UID: \"e89ae685-619c-4c74-80e9-d71728108118\") " pod="openstack/aodh-0" Dec 11 15:33:29 crc kubenswrapper[5050]: I1211 15:33:29.415951 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4r6zq\" (UniqueName: \"kubernetes.io/projected/e89ae685-619c-4c74-80e9-d71728108118-kube-api-access-4r6zq\") pod \"aodh-0\" (UID: \"e89ae685-619c-4c74-80e9-d71728108118\") " pod="openstack/aodh-0" Dec 11 15:33:29 crc kubenswrapper[5050]: I1211 15:33:29.518689 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e89ae685-619c-4c74-80e9-d71728108118-config-data\") pod \"aodh-0\" (UID: \"e89ae685-619c-4c74-80e9-d71728108118\") " pod="openstack/aodh-0" Dec 11 15:33:29 crc kubenswrapper[5050]: I1211 15:33:29.518740 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e89ae685-619c-4c74-80e9-d71728108118-scripts\") pod \"aodh-0\" (UID: \"e89ae685-619c-4c74-80e9-d71728108118\") " pod="openstack/aodh-0" Dec 11 15:33:29 crc kubenswrapper[5050]: I1211 15:33:29.518838 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e89ae685-619c-4c74-80e9-d71728108118-combined-ca-bundle\") pod \"aodh-0\" (UID: \"e89ae685-619c-4c74-80e9-d71728108118\") " pod="openstack/aodh-0" Dec 11 15:33:29 crc kubenswrapper[5050]: I1211 15:33:29.519411 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4r6zq\" (UniqueName: \"kubernetes.io/projected/e89ae685-619c-4c74-80e9-d71728108118-kube-api-access-4r6zq\") pod \"aodh-0\" (UID: \"e89ae685-619c-4c74-80e9-d71728108118\") " pod="openstack/aodh-0" Dec 11 15:33:29 crc kubenswrapper[5050]: 
I1211 15:33:29.521038 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Dec 11 15:33:29 crc kubenswrapper[5050]: I1211 15:33:29.521091 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Dec 11 15:33:29 crc kubenswrapper[5050]: I1211 15:33:29.538572 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e89ae685-619c-4c74-80e9-d71728108118-config-data\") pod \"aodh-0\" (UID: \"e89ae685-619c-4c74-80e9-d71728108118\") " pod="openstack/aodh-0" Dec 11 15:33:29 crc kubenswrapper[5050]: I1211 15:33:29.540276 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4r6zq\" (UniqueName: \"kubernetes.io/projected/e89ae685-619c-4c74-80e9-d71728108118-kube-api-access-4r6zq\") pod \"aodh-0\" (UID: \"e89ae685-619c-4c74-80e9-d71728108118\") " pod="openstack/aodh-0" Dec 11 15:33:29 crc kubenswrapper[5050]: I1211 15:33:29.541572 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e89ae685-619c-4c74-80e9-d71728108118-scripts\") pod \"aodh-0\" (UID: \"e89ae685-619c-4c74-80e9-d71728108118\") " pod="openstack/aodh-0" Dec 11 15:33:29 crc kubenswrapper[5050]: I1211 15:33:29.552743 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e89ae685-619c-4c74-80e9-d71728108118-combined-ca-bundle\") pod \"aodh-0\" (UID: \"e89ae685-619c-4c74-80e9-d71728108118\") " pod="openstack/aodh-0" Dec 11 15:33:29 crc kubenswrapper[5050]: I1211 15:33:29.643664 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-7ght8" Dec 11 15:33:29 crc kubenswrapper[5050]: I1211 15:33:29.651317 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Dec 11 15:33:30 crc kubenswrapper[5050]: I1211 15:33:30.169448 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Dec 11 15:33:30 crc kubenswrapper[5050]: I1211 15:33:30.873168 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"e89ae685-619c-4c74-80e9-d71728108118","Type":"ContainerStarted","Data":"149a1f354d1749958ec671a06410768dc52476a340a5396799c9fd011d9c8ac5"} Dec 11 15:33:31 crc kubenswrapper[5050]: I1211 15:33:31.461347 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 11 15:33:31 crc kubenswrapper[5050]: I1211 15:33:31.461857 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2d697bc1-dfee-4c73-a14e-1efe3755b265" containerName="ceilometer-central-agent" containerID="cri-o://99fa98e356068c680fdfe51b711841d524733de1086baaa29f8ba75d19536fce" gracePeriod=30 Dec 11 15:33:31 crc kubenswrapper[5050]: I1211 15:33:31.461889 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2d697bc1-dfee-4c73-a14e-1efe3755b265" containerName="proxy-httpd" containerID="cri-o://2f00152d880c0063c28f861ec958d5222480e1906bfd18662dee2e6e36c7f041" gracePeriod=30 Dec 11 15:33:31 crc kubenswrapper[5050]: I1211 15:33:31.461925 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2d697bc1-dfee-4c73-a14e-1efe3755b265" containerName="sg-core" containerID="cri-o://c9efaf60bba126e1e72e580cd48fe0427b3f9a1636123f9f4e5eb48a7f90277b" gracePeriod=30 Dec 11 15:33:31 crc kubenswrapper[5050]: I1211 15:33:31.461997 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2d697bc1-dfee-4c73-a14e-1efe3755b265" containerName="ceilometer-notification-agent" containerID="cri-o://026e8e7c07286a0dc108c50444d79365466e237beb839637de0849eaecd4a51b" gracePeriod=30 Dec 11 15:33:31 crc kubenswrapper[5050]: I1211 15:33:31.883942 5050 generic.go:334] "Generic (PLEG): container finished" podID="2d697bc1-dfee-4c73-a14e-1efe3755b265" containerID="2f00152d880c0063c28f861ec958d5222480e1906bfd18662dee2e6e36c7f041" exitCode=0 Dec 11 15:33:31 crc kubenswrapper[5050]: I1211 15:33:31.884232 5050 generic.go:334] "Generic (PLEG): container finished" podID="2d697bc1-dfee-4c73-a14e-1efe3755b265" containerID="c9efaf60bba126e1e72e580cd48fe0427b3f9a1636123f9f4e5eb48a7f90277b" exitCode=2 Dec 11 15:33:31 crc kubenswrapper[5050]: I1211 15:33:31.884033 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2d697bc1-dfee-4c73-a14e-1efe3755b265","Type":"ContainerDied","Data":"2f00152d880c0063c28f861ec958d5222480e1906bfd18662dee2e6e36c7f041"} Dec 11 15:33:31 crc kubenswrapper[5050]: I1211 15:33:31.884296 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2d697bc1-dfee-4c73-a14e-1efe3755b265","Type":"ContainerDied","Data":"c9efaf60bba126e1e72e580cd48fe0427b3f9a1636123f9f4e5eb48a7f90277b"} Dec 11 15:33:31 crc kubenswrapper[5050]: I1211 15:33:31.886749 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"e89ae685-619c-4c74-80e9-d71728108118","Type":"ContainerStarted","Data":"8a6a4221a1e50ac3cc0345bf70143f2a6ee75ced7b778800b90db8b008783373"} Dec 11 15:33:32 crc kubenswrapper[5050]: I1211 15:33:32.900778 5050 generic.go:334] "Generic (PLEG): container finished" 
podID="2d697bc1-dfee-4c73-a14e-1efe3755b265" containerID="99fa98e356068c680fdfe51b711841d524733de1086baaa29f8ba75d19536fce" exitCode=0 Dec 11 15:33:32 crc kubenswrapper[5050]: I1211 15:33:32.900876 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2d697bc1-dfee-4c73-a14e-1efe3755b265","Type":"ContainerDied","Data":"99fa98e356068c680fdfe51b711841d524733de1086baaa29f8ba75d19536fce"} Dec 11 15:33:33 crc kubenswrapper[5050]: I1211 15:33:33.546816 5050 scope.go:117] "RemoveContainer" containerID="ebf03c01a0c1f974cd2ea46998cc404d2f6db1e8b59294fa047964a30636886e" Dec 11 15:33:33 crc kubenswrapper[5050]: E1211 15:33:33.547147 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 15:33:34 crc kubenswrapper[5050]: I1211 15:33:34.920689 5050 generic.go:334] "Generic (PLEG): container finished" podID="2d697bc1-dfee-4c73-a14e-1efe3755b265" containerID="026e8e7c07286a0dc108c50444d79365466e237beb839637de0849eaecd4a51b" exitCode=0 Dec 11 15:33:34 crc kubenswrapper[5050]: I1211 15:33:34.920885 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2d697bc1-dfee-4c73-a14e-1efe3755b265","Type":"ContainerDied","Data":"026e8e7c07286a0dc108c50444d79365466e237beb839637de0849eaecd4a51b"} Dec 11 15:33:35 crc kubenswrapper[5050]: I1211 15:33:35.390792 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 11 15:33:35 crc kubenswrapper[5050]: I1211 15:33:35.454208 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2d697bc1-dfee-4c73-a14e-1efe3755b265-sg-core-conf-yaml\") pod \"2d697bc1-dfee-4c73-a14e-1efe3755b265\" (UID: \"2d697bc1-dfee-4c73-a14e-1efe3755b265\") " Dec 11 15:33:35 crc kubenswrapper[5050]: I1211 15:33:35.454306 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2d697bc1-dfee-4c73-a14e-1efe3755b265-scripts\") pod \"2d697bc1-dfee-4c73-a14e-1efe3755b265\" (UID: \"2d697bc1-dfee-4c73-a14e-1efe3755b265\") " Dec 11 15:33:35 crc kubenswrapper[5050]: I1211 15:33:35.454387 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2d697bc1-dfee-4c73-a14e-1efe3755b265-run-httpd\") pod \"2d697bc1-dfee-4c73-a14e-1efe3755b265\" (UID: \"2d697bc1-dfee-4c73-a14e-1efe3755b265\") " Dec 11 15:33:35 crc kubenswrapper[5050]: I1211 15:33:35.454447 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d8jxh\" (UniqueName: \"kubernetes.io/projected/2d697bc1-dfee-4c73-a14e-1efe3755b265-kube-api-access-d8jxh\") pod \"2d697bc1-dfee-4c73-a14e-1efe3755b265\" (UID: \"2d697bc1-dfee-4c73-a14e-1efe3755b265\") " Dec 11 15:33:35 crc kubenswrapper[5050]: I1211 15:33:35.461767 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2d697bc1-dfee-4c73-a14e-1efe3755b265-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "2d697bc1-dfee-4c73-a14e-1efe3755b265" (UID: 
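
The ceilometer-0 teardown above is an ordinary graceful shutdown: each container is killed with gracePeriod=30, and the ContainerDied events for proxy-httpd, sg-core, ceilometer-central-agent and ceilometer-notification-agent all arrive within a few seconds of the kill (sg-core exits 2, the others 0), well inside the grace period. A minimal sketch, not part of the log, that recomputes that window from a plain-text capture of kubelet journal lines, assuming one journal entry per line; the input path is a placeholder.

#!/usr/bin/env python3
"""Measure a pod's graceful-shutdown window from kubelet journal lines.

Sketch only: assumes one journal entry per line and a placeholder input path.
"""
import re
from datetime import datetime

# klog header, e.g. "I1211 15:33:31.461857" (month, day, wall-clock time; no year)
KLOG_TS = re.compile(r"[IEW](\d{2})(\d{2}) (\d{2}:\d{2}:\d{2}\.\d+) ")

def klog_time(line):
    m = KLOG_TS.search(line)
    if not m:
        return None
    month, day, hms = m.groups()
    # klog headers carry no year; 2025 is taken from the UTC timestamps elsewhere in this log.
    return datetime.strptime(f"2025-{month}-{day} {hms}", "%Y-%m-%d %H:%M:%S.%f")

def shutdown_window(lines, pod_uid):
    """Return (first kill, last ContainerDied, elapsed seconds) for the given pod UID."""
    kills, deaths = [], []
    for line in lines:
        if pod_uid not in line:
            continue
        ts = klog_time(line)
        if ts is None:
            continue
        if "Killing container with a grace period" in line:
            kills.append(ts)
        elif '"Type":"ContainerDied"' in line:
            deaths.append(ts)
    if not kills or not deaths:
        return None
    return min(kills), max(deaths), (max(deaths) - min(kills)).total_seconds()

if __name__ == "__main__":
    with open("kubelet-events.log") as fh:          # placeholder path
        window = shutdown_window(fh, "2d697bc1-dfee-4c73-a14e-1efe3755b265")
    if window:
        first_kill, last_died, elapsed = window
        print(f"first kill {first_kill:%H:%M:%S.%f}, last ContainerDied {last_died:%H:%M:%S.%f}, "
              f"elapsed {elapsed:.3f}s (gracePeriod was 30)")
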
"2d697bc1-dfee-4c73-a14e-1efe3755b265"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 15:33:35 crc kubenswrapper[5050]: I1211 15:33:35.465139 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d697bc1-dfee-4c73-a14e-1efe3755b265-scripts" (OuterVolumeSpecName: "scripts") pod "2d697bc1-dfee-4c73-a14e-1efe3755b265" (UID: "2d697bc1-dfee-4c73-a14e-1efe3755b265"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 15:33:35 crc kubenswrapper[5050]: I1211 15:33:35.465345 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d697bc1-dfee-4c73-a14e-1efe3755b265-kube-api-access-d8jxh" (OuterVolumeSpecName: "kube-api-access-d8jxh") pod "2d697bc1-dfee-4c73-a14e-1efe3755b265" (UID: "2d697bc1-dfee-4c73-a14e-1efe3755b265"). InnerVolumeSpecName "kube-api-access-d8jxh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 15:33:35 crc kubenswrapper[5050]: I1211 15:33:35.491876 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d697bc1-dfee-4c73-a14e-1efe3755b265-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "2d697bc1-dfee-4c73-a14e-1efe3755b265" (UID: "2d697bc1-dfee-4c73-a14e-1efe3755b265"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 15:33:35 crc kubenswrapper[5050]: I1211 15:33:35.557292 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d697bc1-dfee-4c73-a14e-1efe3755b265-config-data\") pod \"2d697bc1-dfee-4c73-a14e-1efe3755b265\" (UID: \"2d697bc1-dfee-4c73-a14e-1efe3755b265\") " Dec 11 15:33:35 crc kubenswrapper[5050]: I1211 15:33:35.557387 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d697bc1-dfee-4c73-a14e-1efe3755b265-combined-ca-bundle\") pod \"2d697bc1-dfee-4c73-a14e-1efe3755b265\" (UID: \"2d697bc1-dfee-4c73-a14e-1efe3755b265\") " Dec 11 15:33:35 crc kubenswrapper[5050]: I1211 15:33:35.557964 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2d697bc1-dfee-4c73-a14e-1efe3755b265-log-httpd\") pod \"2d697bc1-dfee-4c73-a14e-1efe3755b265\" (UID: \"2d697bc1-dfee-4c73-a14e-1efe3755b265\") " Dec 11 15:33:35 crc kubenswrapper[5050]: I1211 15:33:35.558845 5050 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2d697bc1-dfee-4c73-a14e-1efe3755b265-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 15:33:35 crc kubenswrapper[5050]: I1211 15:33:35.558865 5050 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2d697bc1-dfee-4c73-a14e-1efe3755b265-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 11 15:33:35 crc kubenswrapper[5050]: I1211 15:33:35.558878 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d8jxh\" (UniqueName: \"kubernetes.io/projected/2d697bc1-dfee-4c73-a14e-1efe3755b265-kube-api-access-d8jxh\") on node \"crc\" DevicePath \"\"" Dec 11 15:33:35 crc kubenswrapper[5050]: I1211 15:33:35.558888 5050 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2d697bc1-dfee-4c73-a14e-1efe3755b265-sg-core-conf-yaml\") on node \"crc\" 
DevicePath \"\"" Dec 11 15:33:35 crc kubenswrapper[5050]: I1211 15:33:35.559966 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2d697bc1-dfee-4c73-a14e-1efe3755b265-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "2d697bc1-dfee-4c73-a14e-1efe3755b265" (UID: "2d697bc1-dfee-4c73-a14e-1efe3755b265"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 15:33:35 crc kubenswrapper[5050]: I1211 15:33:35.652760 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d697bc1-dfee-4c73-a14e-1efe3755b265-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2d697bc1-dfee-4c73-a14e-1efe3755b265" (UID: "2d697bc1-dfee-4c73-a14e-1efe3755b265"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 15:33:35 crc kubenswrapper[5050]: I1211 15:33:35.660482 5050 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d697bc1-dfee-4c73-a14e-1efe3755b265-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 15:33:35 crc kubenswrapper[5050]: I1211 15:33:35.660519 5050 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2d697bc1-dfee-4c73-a14e-1efe3755b265-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 11 15:33:35 crc kubenswrapper[5050]: I1211 15:33:35.669914 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d697bc1-dfee-4c73-a14e-1efe3755b265-config-data" (OuterVolumeSpecName: "config-data") pod "2d697bc1-dfee-4c73-a14e-1efe3755b265" (UID: "2d697bc1-dfee-4c73-a14e-1efe3755b265"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 15:33:35 crc kubenswrapper[5050]: I1211 15:33:35.762964 5050 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d697bc1-dfee-4c73-a14e-1efe3755b265-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 15:33:35 crc kubenswrapper[5050]: I1211 15:33:35.931947 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2d697bc1-dfee-4c73-a14e-1efe3755b265","Type":"ContainerDied","Data":"94911bcee3a6fcf1635f2041b11d709e64d8d1586ea247f5d5c39fc74466fd08"} Dec 11 15:33:35 crc kubenswrapper[5050]: I1211 15:33:35.931991 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 11 15:33:35 crc kubenswrapper[5050]: I1211 15:33:35.932028 5050 scope.go:117] "RemoveContainer" containerID="2f00152d880c0063c28f861ec958d5222480e1906bfd18662dee2e6e36c7f041" Dec 11 15:33:35 crc kubenswrapper[5050]: I1211 15:33:35.934650 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"e89ae685-619c-4c74-80e9-d71728108118","Type":"ContainerStarted","Data":"12df0f544098652e53a5f0bad43ddc3938deb90b698804a8191f31960d36c850"} Dec 11 15:33:35 crc kubenswrapper[5050]: I1211 15:33:35.977175 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 11 15:33:35 crc kubenswrapper[5050]: I1211 15:33:35.995123 5050 scope.go:117] "RemoveContainer" containerID="c9efaf60bba126e1e72e580cd48fe0427b3f9a1636123f9f4e5eb48a7f90277b" Dec 11 15:33:36 crc kubenswrapper[5050]: I1211 15:33:36.001053 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 11 15:33:36 crc kubenswrapper[5050]: I1211 15:33:36.013989 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 11 15:33:36 crc kubenswrapper[5050]: E1211 15:33:36.014557 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d697bc1-dfee-4c73-a14e-1efe3755b265" containerName="ceilometer-notification-agent" Dec 11 15:33:36 crc kubenswrapper[5050]: I1211 15:33:36.014579 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d697bc1-dfee-4c73-a14e-1efe3755b265" containerName="ceilometer-notification-agent" Dec 11 15:33:36 crc kubenswrapper[5050]: E1211 15:33:36.014593 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d697bc1-dfee-4c73-a14e-1efe3755b265" containerName="sg-core" Dec 11 15:33:36 crc kubenswrapper[5050]: I1211 15:33:36.014599 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d697bc1-dfee-4c73-a14e-1efe3755b265" containerName="sg-core" Dec 11 15:33:36 crc kubenswrapper[5050]: E1211 15:33:36.014610 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d697bc1-dfee-4c73-a14e-1efe3755b265" containerName="ceilometer-central-agent" Dec 11 15:33:36 crc kubenswrapper[5050]: I1211 15:33:36.014616 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d697bc1-dfee-4c73-a14e-1efe3755b265" containerName="ceilometer-central-agent" Dec 11 15:33:36 crc kubenswrapper[5050]: E1211 15:33:36.014634 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d697bc1-dfee-4c73-a14e-1efe3755b265" containerName="proxy-httpd" Dec 11 15:33:36 crc kubenswrapper[5050]: I1211 15:33:36.014640 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d697bc1-dfee-4c73-a14e-1efe3755b265" containerName="proxy-httpd" Dec 11 15:33:36 crc kubenswrapper[5050]: I1211 15:33:36.014815 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d697bc1-dfee-4c73-a14e-1efe3755b265" containerName="sg-core" Dec 11 15:33:36 crc kubenswrapper[5050]: I1211 15:33:36.014836 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d697bc1-dfee-4c73-a14e-1efe3755b265" containerName="ceilometer-central-agent" Dec 11 15:33:36 crc kubenswrapper[5050]: I1211 15:33:36.014843 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d697bc1-dfee-4c73-a14e-1efe3755b265" containerName="ceilometer-notification-agent" Dec 11 15:33:36 crc kubenswrapper[5050]: I1211 15:33:36.014851 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d697bc1-dfee-4c73-a14e-1efe3755b265" 
containerName="proxy-httpd" Dec 11 15:33:36 crc kubenswrapper[5050]: I1211 15:33:36.017181 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 11 15:33:36 crc kubenswrapper[5050]: I1211 15:33:36.019918 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 11 15:33:36 crc kubenswrapper[5050]: I1211 15:33:36.020877 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 11 15:33:36 crc kubenswrapper[5050]: I1211 15:33:36.031294 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 11 15:33:36 crc kubenswrapper[5050]: I1211 15:33:36.042495 5050 scope.go:117] "RemoveContainer" containerID="026e8e7c07286a0dc108c50444d79365466e237beb839637de0849eaecd4a51b" Dec 11 15:33:36 crc kubenswrapper[5050]: I1211 15:33:36.072342 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hw888\" (UniqueName: \"kubernetes.io/projected/4f789e1f-2171-4126-baee-8507b4411dbb-kube-api-access-hw888\") pod \"ceilometer-0\" (UID: \"4f789e1f-2171-4126-baee-8507b4411dbb\") " pod="openstack/ceilometer-0" Dec 11 15:33:36 crc kubenswrapper[5050]: I1211 15:33:36.072498 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4f789e1f-2171-4126-baee-8507b4411dbb-log-httpd\") pod \"ceilometer-0\" (UID: \"4f789e1f-2171-4126-baee-8507b4411dbb\") " pod="openstack/ceilometer-0" Dec 11 15:33:36 crc kubenswrapper[5050]: I1211 15:33:36.072529 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f789e1f-2171-4126-baee-8507b4411dbb-config-data\") pod \"ceilometer-0\" (UID: \"4f789e1f-2171-4126-baee-8507b4411dbb\") " pod="openstack/ceilometer-0" Dec 11 15:33:36 crc kubenswrapper[5050]: I1211 15:33:36.072568 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4f789e1f-2171-4126-baee-8507b4411dbb-scripts\") pod \"ceilometer-0\" (UID: \"4f789e1f-2171-4126-baee-8507b4411dbb\") " pod="openstack/ceilometer-0" Dec 11 15:33:36 crc kubenswrapper[5050]: I1211 15:33:36.072931 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f789e1f-2171-4126-baee-8507b4411dbb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4f789e1f-2171-4126-baee-8507b4411dbb\") " pod="openstack/ceilometer-0" Dec 11 15:33:36 crc kubenswrapper[5050]: I1211 15:33:36.073225 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4f789e1f-2171-4126-baee-8507b4411dbb-run-httpd\") pod \"ceilometer-0\" (UID: \"4f789e1f-2171-4126-baee-8507b4411dbb\") " pod="openstack/ceilometer-0" Dec 11 15:33:36 crc kubenswrapper[5050]: I1211 15:33:36.073287 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4f789e1f-2171-4126-baee-8507b4411dbb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4f789e1f-2171-4126-baee-8507b4411dbb\") " pod="openstack/ceilometer-0" Dec 11 15:33:36 crc kubenswrapper[5050]: I1211 15:33:36.081963 5050 scope.go:117] 
"RemoveContainer" containerID="99fa98e356068c680fdfe51b711841d524733de1086baaa29f8ba75d19536fce" Dec 11 15:33:36 crc kubenswrapper[5050]: I1211 15:33:36.175460 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4f789e1f-2171-4126-baee-8507b4411dbb-log-httpd\") pod \"ceilometer-0\" (UID: \"4f789e1f-2171-4126-baee-8507b4411dbb\") " pod="openstack/ceilometer-0" Dec 11 15:33:36 crc kubenswrapper[5050]: I1211 15:33:36.175517 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f789e1f-2171-4126-baee-8507b4411dbb-config-data\") pod \"ceilometer-0\" (UID: \"4f789e1f-2171-4126-baee-8507b4411dbb\") " pod="openstack/ceilometer-0" Dec 11 15:33:36 crc kubenswrapper[5050]: I1211 15:33:36.175570 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4f789e1f-2171-4126-baee-8507b4411dbb-scripts\") pod \"ceilometer-0\" (UID: \"4f789e1f-2171-4126-baee-8507b4411dbb\") " pod="openstack/ceilometer-0" Dec 11 15:33:36 crc kubenswrapper[5050]: I1211 15:33:36.175633 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f789e1f-2171-4126-baee-8507b4411dbb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4f789e1f-2171-4126-baee-8507b4411dbb\") " pod="openstack/ceilometer-0" Dec 11 15:33:36 crc kubenswrapper[5050]: I1211 15:33:36.175695 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4f789e1f-2171-4126-baee-8507b4411dbb-run-httpd\") pod \"ceilometer-0\" (UID: \"4f789e1f-2171-4126-baee-8507b4411dbb\") " pod="openstack/ceilometer-0" Dec 11 15:33:36 crc kubenswrapper[5050]: I1211 15:33:36.175718 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4f789e1f-2171-4126-baee-8507b4411dbb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4f789e1f-2171-4126-baee-8507b4411dbb\") " pod="openstack/ceilometer-0" Dec 11 15:33:36 crc kubenswrapper[5050]: I1211 15:33:36.175795 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hw888\" (UniqueName: \"kubernetes.io/projected/4f789e1f-2171-4126-baee-8507b4411dbb-kube-api-access-hw888\") pod \"ceilometer-0\" (UID: \"4f789e1f-2171-4126-baee-8507b4411dbb\") " pod="openstack/ceilometer-0" Dec 11 15:33:36 crc kubenswrapper[5050]: I1211 15:33:36.176120 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4f789e1f-2171-4126-baee-8507b4411dbb-log-httpd\") pod \"ceilometer-0\" (UID: \"4f789e1f-2171-4126-baee-8507b4411dbb\") " pod="openstack/ceilometer-0" Dec 11 15:33:36 crc kubenswrapper[5050]: I1211 15:33:36.176206 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4f789e1f-2171-4126-baee-8507b4411dbb-run-httpd\") pod \"ceilometer-0\" (UID: \"4f789e1f-2171-4126-baee-8507b4411dbb\") " pod="openstack/ceilometer-0" Dec 11 15:33:36 crc kubenswrapper[5050]: I1211 15:33:36.180955 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4f789e1f-2171-4126-baee-8507b4411dbb-scripts\") pod \"ceilometer-0\" (UID: \"4f789e1f-2171-4126-baee-8507b4411dbb\") " 
pod="openstack/ceilometer-0" Dec 11 15:33:36 crc kubenswrapper[5050]: I1211 15:33:36.181154 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f789e1f-2171-4126-baee-8507b4411dbb-config-data\") pod \"ceilometer-0\" (UID: \"4f789e1f-2171-4126-baee-8507b4411dbb\") " pod="openstack/ceilometer-0" Dec 11 15:33:36 crc kubenswrapper[5050]: I1211 15:33:36.183874 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f789e1f-2171-4126-baee-8507b4411dbb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4f789e1f-2171-4126-baee-8507b4411dbb\") " pod="openstack/ceilometer-0" Dec 11 15:33:36 crc kubenswrapper[5050]: I1211 15:33:36.184697 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4f789e1f-2171-4126-baee-8507b4411dbb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4f789e1f-2171-4126-baee-8507b4411dbb\") " pod="openstack/ceilometer-0" Dec 11 15:33:36 crc kubenswrapper[5050]: I1211 15:33:36.191739 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hw888\" (UniqueName: \"kubernetes.io/projected/4f789e1f-2171-4126-baee-8507b4411dbb-kube-api-access-hw888\") pod \"ceilometer-0\" (UID: \"4f789e1f-2171-4126-baee-8507b4411dbb\") " pod="openstack/ceilometer-0" Dec 11 15:33:36 crc kubenswrapper[5050]: I1211 15:33:36.345320 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 11 15:33:37 crc kubenswrapper[5050]: I1211 15:33:37.215134 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 11 15:33:37 crc kubenswrapper[5050]: W1211 15:33:37.215727 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4f789e1f_2171_4126_baee_8507b4411dbb.slice/crio-685da66938a11245ece68d2a553c447ebefc63fee91c411bc185edf566daba6e WatchSource:0}: Error finding container 685da66938a11245ece68d2a553c447ebefc63fee91c411bc185edf566daba6e: Status 404 returned error can't find the container with id 685da66938a11245ece68d2a553c447ebefc63fee91c411bc185edf566daba6e Dec 11 15:33:37 crc kubenswrapper[5050]: I1211 15:33:37.559641 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d697bc1-dfee-4c73-a14e-1efe3755b265" path="/var/lib/kubelet/pods/2d697bc1-dfee-4c73-a14e-1efe3755b265/volumes" Dec 11 15:33:37 crc kubenswrapper[5050]: I1211 15:33:37.971655 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4f789e1f-2171-4126-baee-8507b4411dbb","Type":"ContainerStarted","Data":"685da66938a11245ece68d2a553c447ebefc63fee91c411bc185edf566daba6e"} Dec 11 15:33:37 crc kubenswrapper[5050]: I1211 15:33:37.973479 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"e89ae685-619c-4c74-80e9-d71728108118","Type":"ContainerStarted","Data":"8e319f36afd9e903b311d159ec01a8c0567538b2be3d1fd0477c2a4cccf046ff"} Dec 11 15:33:38 crc kubenswrapper[5050]: I1211 15:33:38.983442 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4f789e1f-2171-4126-baee-8507b4411dbb","Type":"ContainerStarted","Data":"f958aa1522472d79dcdf0daf0cba7e2e0b8ade5eca4bf022d39c6e775e12abb2"} Dec 11 15:33:41 crc kubenswrapper[5050]: I1211 15:33:41.009790 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"4f789e1f-2171-4126-baee-8507b4411dbb","Type":"ContainerStarted","Data":"22b1dfb4a5657b19c2f23bf8514de39b07b2810851f4ca44610259f4c81cb7d7"} Dec 11 15:33:41 crc kubenswrapper[5050]: I1211 15:33:41.014020 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"e89ae685-619c-4c74-80e9-d71728108118","Type":"ContainerStarted","Data":"5a13f05c9d958f216c2977e05cdc20ba464ddf8aee38f9a427cbc376f0046582"} Dec 11 15:33:41 crc kubenswrapper[5050]: I1211 15:33:41.040587 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=1.982998496 podStartE2EDuration="12.040569089s" podCreationTimestamp="2025-12-11 15:33:29 +0000 UTC" firstStartedPulling="2025-12-11 15:33:30.180419032 +0000 UTC m=+6301.024141618" lastFinishedPulling="2025-12-11 15:33:40.237989545 +0000 UTC m=+6311.081712211" observedRunningTime="2025-12-11 15:33:41.032408421 +0000 UTC m=+6311.876131007" watchObservedRunningTime="2025-12-11 15:33:41.040569089 +0000 UTC m=+6311.884291675" Dec 11 15:33:42 crc kubenswrapper[5050]: I1211 15:33:42.026098 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4f789e1f-2171-4126-baee-8507b4411dbb","Type":"ContainerStarted","Data":"7f24b2bebdda17df7d919c4a53800aa15481b7f4278bdd020fa13306048a2873"} Dec 11 15:33:44 crc kubenswrapper[5050]: I1211 15:33:44.050935 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4f789e1f-2171-4126-baee-8507b4411dbb","Type":"ContainerStarted","Data":"3fbf19dabea36cc7a385d0fbcc8915b21baa754a31df40dd82f2ef4000c00f60"} Dec 11 15:33:44 crc kubenswrapper[5050]: I1211 15:33:44.051458 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 11 15:33:44 crc kubenswrapper[5050]: I1211 15:33:44.081054 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.017347527 podStartE2EDuration="9.081037003s" podCreationTimestamp="2025-12-11 15:33:35 +0000 UTC" firstStartedPulling="2025-12-11 15:33:37.218226744 +0000 UTC m=+6308.061949340" lastFinishedPulling="2025-12-11 15:33:43.28191622 +0000 UTC m=+6314.125638816" observedRunningTime="2025-12-11 15:33:44.073536293 +0000 UTC m=+6314.917258879" watchObservedRunningTime="2025-12-11 15:33:44.081037003 +0000 UTC m=+6314.924759589" Dec 11 15:33:44 crc kubenswrapper[5050]: I1211 15:33:44.547926 5050 scope.go:117] "RemoveContainer" containerID="ebf03c01a0c1f974cd2ea46998cc404d2f6db1e8b59294fa047964a30636886e" Dec 11 15:33:45 crc kubenswrapper[5050]: I1211 15:33:45.064761 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" event={"ID":"7e849b2e-7cd7-4e49-acd2-deab139c699a","Type":"ContainerStarted","Data":"8e3528ca5f78bfb9e7123275334aa38f06e1578dafb2743ec06b823dfbe30e85"} Dec 11 15:33:46 crc kubenswrapper[5050]: I1211 15:33:46.051675 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-db-create-zmz4j"] Dec 11 15:33:46 crc kubenswrapper[5050]: I1211 15:33:46.053669 5050 util.go:30] "No sandbox for pod can be found. 
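
The two pod_startup_latency_tracker entries above are internally consistent: podStartE2EDuration equals watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration equals that end-to-end figure minus the image-pull window, with the pull window taken from the monotonic m=+... readings (firstStartedPulling to lastFinishedPulling). A quick check of that arithmetic using the numbers copied from the log; the relationship is inferred from the logged values and the dict below is only a local convenience, not a kubelet structure.

#!/usr/bin/env python3
"""Re-derive the podStartSLOduration values reported above.

Relationship inferred from the logged numbers:
  SLO = podStartE2EDuration - (lastFinishedPulling - firstStartedPulling)
with the pull window taken from the monotonic m=+... readings.
"""

pods = {
    # pod: (E2E duration s, firstStartedPulling m=+, lastFinishedPulling m=+, reported SLO s)
    "openstack/aodh-0":       (12.040569089, 6301.024141618, 6311.081712211, 1.982998496),
    "openstack/ceilometer-0": ( 9.081037003, 6308.061949340, 6314.125638816, 3.017347527),
}

for pod, (e2e, pull_start, pull_end, reported) in pods.items():
    derived = e2e - (pull_end - pull_start)
    print(f"{pod}: derived SLO {derived:.9f}s, reported {reported}s, "
          f"match={abs(derived - reported) < 1e-6}")
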
Need to start a new one" pod="openstack/manila-db-create-zmz4j" Dec 11 15:33:46 crc kubenswrapper[5050]: I1211 15:33:46.061667 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-create-zmz4j"] Dec 11 15:33:46 crc kubenswrapper[5050]: I1211 15:33:46.138848 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-f133-account-create-update-4k82x"] Dec 11 15:33:46 crc kubenswrapper[5050]: I1211 15:33:46.140301 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-f133-account-create-update-4k82x" Dec 11 15:33:46 crc kubenswrapper[5050]: I1211 15:33:46.142279 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-db-secret" Dec 11 15:33:46 crc kubenswrapper[5050]: I1211 15:33:46.149689 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-f133-account-create-update-4k82x"] Dec 11 15:33:46 crc kubenswrapper[5050]: I1211 15:33:46.192956 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4v5qw\" (UniqueName: \"kubernetes.io/projected/8af6ed6f-5644-47f2-b05f-21d9d019e926-kube-api-access-4v5qw\") pod \"manila-db-create-zmz4j\" (UID: \"8af6ed6f-5644-47f2-b05f-21d9d019e926\") " pod="openstack/manila-db-create-zmz4j" Dec 11 15:33:46 crc kubenswrapper[5050]: I1211 15:33:46.193805 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8af6ed6f-5644-47f2-b05f-21d9d019e926-operator-scripts\") pod \"manila-db-create-zmz4j\" (UID: \"8af6ed6f-5644-47f2-b05f-21d9d019e926\") " pod="openstack/manila-db-create-zmz4j" Dec 11 15:33:46 crc kubenswrapper[5050]: I1211 15:33:46.295320 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8af6ed6f-5644-47f2-b05f-21d9d019e926-operator-scripts\") pod \"manila-db-create-zmz4j\" (UID: \"8af6ed6f-5644-47f2-b05f-21d9d019e926\") " pod="openstack/manila-db-create-zmz4j" Dec 11 15:33:46 crc kubenswrapper[5050]: I1211 15:33:46.295606 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4v5qw\" (UniqueName: \"kubernetes.io/projected/8af6ed6f-5644-47f2-b05f-21d9d019e926-kube-api-access-4v5qw\") pod \"manila-db-create-zmz4j\" (UID: \"8af6ed6f-5644-47f2-b05f-21d9d019e926\") " pod="openstack/manila-db-create-zmz4j" Dec 11 15:33:46 crc kubenswrapper[5050]: I1211 15:33:46.295640 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a8ebf3f9-d34a-4eff-8911-2760f8bb9b55-operator-scripts\") pod \"manila-f133-account-create-update-4k82x\" (UID: \"a8ebf3f9-d34a-4eff-8911-2760f8bb9b55\") " pod="openstack/manila-f133-account-create-update-4k82x" Dec 11 15:33:46 crc kubenswrapper[5050]: I1211 15:33:46.295714 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vzdxn\" (UniqueName: \"kubernetes.io/projected/a8ebf3f9-d34a-4eff-8911-2760f8bb9b55-kube-api-access-vzdxn\") pod \"manila-f133-account-create-update-4k82x\" (UID: \"a8ebf3f9-d34a-4eff-8911-2760f8bb9b55\") " pod="openstack/manila-f133-account-create-update-4k82x" Dec 11 15:33:46 crc kubenswrapper[5050]: I1211 15:33:46.297067 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/8af6ed6f-5644-47f2-b05f-21d9d019e926-operator-scripts\") pod \"manila-db-create-zmz4j\" (UID: \"8af6ed6f-5644-47f2-b05f-21d9d019e926\") " pod="openstack/manila-db-create-zmz4j" Dec 11 15:33:46 crc kubenswrapper[5050]: I1211 15:33:46.312502 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4v5qw\" (UniqueName: \"kubernetes.io/projected/8af6ed6f-5644-47f2-b05f-21d9d019e926-kube-api-access-4v5qw\") pod \"manila-db-create-zmz4j\" (UID: \"8af6ed6f-5644-47f2-b05f-21d9d019e926\") " pod="openstack/manila-db-create-zmz4j" Dec 11 15:33:46 crc kubenswrapper[5050]: I1211 15:33:46.390664 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-create-zmz4j" Dec 11 15:33:46 crc kubenswrapper[5050]: I1211 15:33:46.397684 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a8ebf3f9-d34a-4eff-8911-2760f8bb9b55-operator-scripts\") pod \"manila-f133-account-create-update-4k82x\" (UID: \"a8ebf3f9-d34a-4eff-8911-2760f8bb9b55\") " pod="openstack/manila-f133-account-create-update-4k82x" Dec 11 15:33:46 crc kubenswrapper[5050]: I1211 15:33:46.397798 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vzdxn\" (UniqueName: \"kubernetes.io/projected/a8ebf3f9-d34a-4eff-8911-2760f8bb9b55-kube-api-access-vzdxn\") pod \"manila-f133-account-create-update-4k82x\" (UID: \"a8ebf3f9-d34a-4eff-8911-2760f8bb9b55\") " pod="openstack/manila-f133-account-create-update-4k82x" Dec 11 15:33:46 crc kubenswrapper[5050]: I1211 15:33:46.402900 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a8ebf3f9-d34a-4eff-8911-2760f8bb9b55-operator-scripts\") pod \"manila-f133-account-create-update-4k82x\" (UID: \"a8ebf3f9-d34a-4eff-8911-2760f8bb9b55\") " pod="openstack/manila-f133-account-create-update-4k82x" Dec 11 15:33:46 crc kubenswrapper[5050]: I1211 15:33:46.423638 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vzdxn\" (UniqueName: \"kubernetes.io/projected/a8ebf3f9-d34a-4eff-8911-2760f8bb9b55-kube-api-access-vzdxn\") pod \"manila-f133-account-create-update-4k82x\" (UID: \"a8ebf3f9-d34a-4eff-8911-2760f8bb9b55\") " pod="openstack/manila-f133-account-create-update-4k82x" Dec 11 15:33:46 crc kubenswrapper[5050]: I1211 15:33:46.461077 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-f133-account-create-update-4k82x" Dec 11 15:33:47 crc kubenswrapper[5050]: I1211 15:33:47.042809 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-create-zmz4j"] Dec 11 15:33:47 crc kubenswrapper[5050]: I1211 15:33:47.098160 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-f133-account-create-update-4k82x"] Dec 11 15:33:47 crc kubenswrapper[5050]: I1211 15:33:47.098206 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-create-zmz4j" event={"ID":"8af6ed6f-5644-47f2-b05f-21d9d019e926","Type":"ContainerStarted","Data":"d4c8ca023fef3bd8cb5e652ab2632dddf28583393cf3a3630b0a623f2f998ae0"} Dec 11 15:33:47 crc kubenswrapper[5050]: W1211 15:33:47.102586 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda8ebf3f9_d34a_4eff_8911_2760f8bb9b55.slice/crio-02a3ff95cc2b8ef43f662b3ad6d15dec941fed691feda18b8dfa571c6ec57c1f WatchSource:0}: Error finding container 02a3ff95cc2b8ef43f662b3ad6d15dec941fed691feda18b8dfa571c6ec57c1f: Status 404 returned error can't find the container with id 02a3ff95cc2b8ef43f662b3ad6d15dec941fed691feda18b8dfa571c6ec57c1f Dec 11 15:33:48 crc kubenswrapper[5050]: I1211 15:33:48.109743 5050 generic.go:334] "Generic (PLEG): container finished" podID="a8ebf3f9-d34a-4eff-8911-2760f8bb9b55" containerID="55a2e9d9bf123fb10a3d360673f3ae553986b43cda11c9367ddb0244986c5899" exitCode=0 Dec 11 15:33:48 crc kubenswrapper[5050]: I1211 15:33:48.109820 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-f133-account-create-update-4k82x" event={"ID":"a8ebf3f9-d34a-4eff-8911-2760f8bb9b55","Type":"ContainerDied","Data":"55a2e9d9bf123fb10a3d360673f3ae553986b43cda11c9367ddb0244986c5899"} Dec 11 15:33:48 crc kubenswrapper[5050]: I1211 15:33:48.111347 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-f133-account-create-update-4k82x" event={"ID":"a8ebf3f9-d34a-4eff-8911-2760f8bb9b55","Type":"ContainerStarted","Data":"02a3ff95cc2b8ef43f662b3ad6d15dec941fed691feda18b8dfa571c6ec57c1f"} Dec 11 15:33:48 crc kubenswrapper[5050]: I1211 15:33:48.114244 5050 generic.go:334] "Generic (PLEG): container finished" podID="8af6ed6f-5644-47f2-b05f-21d9d019e926" containerID="b6f836f0045db03d7529079df1faf86d67993a6817fb87815a04a34c22c372df" exitCode=0 Dec 11 15:33:48 crc kubenswrapper[5050]: I1211 15:33:48.114285 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-create-zmz4j" event={"ID":"8af6ed6f-5644-47f2-b05f-21d9d019e926","Type":"ContainerDied","Data":"b6f836f0045db03d7529079df1faf86d67993a6817fb87815a04a34c22c372df"} Dec 11 15:33:49 crc kubenswrapper[5050]: I1211 15:33:49.665859 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-f133-account-create-update-4k82x" Dec 11 15:33:49 crc kubenswrapper[5050]: I1211 15:33:49.675435 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-db-create-zmz4j" Dec 11 15:33:49 crc kubenswrapper[5050]: I1211 15:33:49.681096 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a8ebf3f9-d34a-4eff-8911-2760f8bb9b55-operator-scripts\") pod \"a8ebf3f9-d34a-4eff-8911-2760f8bb9b55\" (UID: \"a8ebf3f9-d34a-4eff-8911-2760f8bb9b55\") " Dec 11 15:33:49 crc kubenswrapper[5050]: I1211 15:33:49.681318 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vzdxn\" (UniqueName: \"kubernetes.io/projected/a8ebf3f9-d34a-4eff-8911-2760f8bb9b55-kube-api-access-vzdxn\") pod \"a8ebf3f9-d34a-4eff-8911-2760f8bb9b55\" (UID: \"a8ebf3f9-d34a-4eff-8911-2760f8bb9b55\") " Dec 11 15:33:49 crc kubenswrapper[5050]: I1211 15:33:49.681367 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4v5qw\" (UniqueName: \"kubernetes.io/projected/8af6ed6f-5644-47f2-b05f-21d9d019e926-kube-api-access-4v5qw\") pod \"8af6ed6f-5644-47f2-b05f-21d9d019e926\" (UID: \"8af6ed6f-5644-47f2-b05f-21d9d019e926\") " Dec 11 15:33:49 crc kubenswrapper[5050]: I1211 15:33:49.681428 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8af6ed6f-5644-47f2-b05f-21d9d019e926-operator-scripts\") pod \"8af6ed6f-5644-47f2-b05f-21d9d019e926\" (UID: \"8af6ed6f-5644-47f2-b05f-21d9d019e926\") " Dec 11 15:33:49 crc kubenswrapper[5050]: I1211 15:33:49.682552 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8af6ed6f-5644-47f2-b05f-21d9d019e926-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8af6ed6f-5644-47f2-b05f-21d9d019e926" (UID: "8af6ed6f-5644-47f2-b05f-21d9d019e926"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 15:33:49 crc kubenswrapper[5050]: I1211 15:33:49.682763 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a8ebf3f9-d34a-4eff-8911-2760f8bb9b55-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a8ebf3f9-d34a-4eff-8911-2760f8bb9b55" (UID: "a8ebf3f9-d34a-4eff-8911-2760f8bb9b55"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 15:33:49 crc kubenswrapper[5050]: I1211 15:33:49.689046 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8ebf3f9-d34a-4eff-8911-2760f8bb9b55-kube-api-access-vzdxn" (OuterVolumeSpecName: "kube-api-access-vzdxn") pod "a8ebf3f9-d34a-4eff-8911-2760f8bb9b55" (UID: "a8ebf3f9-d34a-4eff-8911-2760f8bb9b55"). InnerVolumeSpecName "kube-api-access-vzdxn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 15:33:49 crc kubenswrapper[5050]: I1211 15:33:49.695880 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8af6ed6f-5644-47f2-b05f-21d9d019e926-kube-api-access-4v5qw" (OuterVolumeSpecName: "kube-api-access-4v5qw") pod "8af6ed6f-5644-47f2-b05f-21d9d019e926" (UID: "8af6ed6f-5644-47f2-b05f-21d9d019e926"). InnerVolumeSpecName "kube-api-access-4v5qw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 15:33:49 crc kubenswrapper[5050]: I1211 15:33:49.783809 5050 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a8ebf3f9-d34a-4eff-8911-2760f8bb9b55-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 15:33:49 crc kubenswrapper[5050]: I1211 15:33:49.783854 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vzdxn\" (UniqueName: \"kubernetes.io/projected/a8ebf3f9-d34a-4eff-8911-2760f8bb9b55-kube-api-access-vzdxn\") on node \"crc\" DevicePath \"\"" Dec 11 15:33:49 crc kubenswrapper[5050]: I1211 15:33:49.783870 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4v5qw\" (UniqueName: \"kubernetes.io/projected/8af6ed6f-5644-47f2-b05f-21d9d019e926-kube-api-access-4v5qw\") on node \"crc\" DevicePath \"\"" Dec 11 15:33:49 crc kubenswrapper[5050]: I1211 15:33:49.783882 5050 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8af6ed6f-5644-47f2-b05f-21d9d019e926-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 15:33:50 crc kubenswrapper[5050]: I1211 15:33:50.133781 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-create-zmz4j" event={"ID":"8af6ed6f-5644-47f2-b05f-21d9d019e926","Type":"ContainerDied","Data":"d4c8ca023fef3bd8cb5e652ab2632dddf28583393cf3a3630b0a623f2f998ae0"} Dec 11 15:33:50 crc kubenswrapper[5050]: I1211 15:33:50.134127 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d4c8ca023fef3bd8cb5e652ab2632dddf28583393cf3a3630b0a623f2f998ae0" Dec 11 15:33:50 crc kubenswrapper[5050]: I1211 15:33:50.134207 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-create-zmz4j" Dec 11 15:33:50 crc kubenswrapper[5050]: I1211 15:33:50.135868 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-f133-account-create-update-4k82x" event={"ID":"a8ebf3f9-d34a-4eff-8911-2760f8bb9b55","Type":"ContainerDied","Data":"02a3ff95cc2b8ef43f662b3ad6d15dec941fed691feda18b8dfa571c6ec57c1f"} Dec 11 15:33:50 crc kubenswrapper[5050]: I1211 15:33:50.135893 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="02a3ff95cc2b8ef43f662b3ad6d15dec941fed691feda18b8dfa571c6ec57c1f" Dec 11 15:33:50 crc kubenswrapper[5050]: I1211 15:33:50.135928 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-f133-account-create-update-4k82x" Dec 11 15:33:51 crc kubenswrapper[5050]: I1211 15:33:51.636521 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-db-sync-t56dz"] Dec 11 15:33:51 crc kubenswrapper[5050]: E1211 15:33:51.637601 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8af6ed6f-5644-47f2-b05f-21d9d019e926" containerName="mariadb-database-create" Dec 11 15:33:51 crc kubenswrapper[5050]: I1211 15:33:51.637621 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="8af6ed6f-5644-47f2-b05f-21d9d019e926" containerName="mariadb-database-create" Dec 11 15:33:51 crc kubenswrapper[5050]: E1211 15:33:51.637657 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8ebf3f9-d34a-4eff-8911-2760f8bb9b55" containerName="mariadb-account-create-update" Dec 11 15:33:51 crc kubenswrapper[5050]: I1211 15:33:51.637667 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8ebf3f9-d34a-4eff-8911-2760f8bb9b55" containerName="mariadb-account-create-update" Dec 11 15:33:51 crc kubenswrapper[5050]: I1211 15:33:51.637897 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="8af6ed6f-5644-47f2-b05f-21d9d019e926" containerName="mariadb-database-create" Dec 11 15:33:51 crc kubenswrapper[5050]: I1211 15:33:51.637914 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8ebf3f9-d34a-4eff-8911-2760f8bb9b55" containerName="mariadb-account-create-update" Dec 11 15:33:51 crc kubenswrapper[5050]: I1211 15:33:51.638914 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-sync-t56dz" Dec 11 15:33:51 crc kubenswrapper[5050]: I1211 15:33:51.642350 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-manila-dockercfg-d7578" Dec 11 15:33:51 crc kubenswrapper[5050]: I1211 15:33:51.642567 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-config-data" Dec 11 15:33:51 crc kubenswrapper[5050]: I1211 15:33:51.647347 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-sync-t56dz"] Dec 11 15:33:51 crc kubenswrapper[5050]: I1211 15:33:51.733613 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m65vj\" (UniqueName: \"kubernetes.io/projected/e6e01872-2756-4ca0-b7d6-a8bf1c80ed46-kube-api-access-m65vj\") pod \"manila-db-sync-t56dz\" (UID: \"e6e01872-2756-4ca0-b7d6-a8bf1c80ed46\") " pod="openstack/manila-db-sync-t56dz" Dec 11 15:33:51 crc kubenswrapper[5050]: I1211 15:33:51.733733 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6e01872-2756-4ca0-b7d6-a8bf1c80ed46-config-data\") pod \"manila-db-sync-t56dz\" (UID: \"e6e01872-2756-4ca0-b7d6-a8bf1c80ed46\") " pod="openstack/manila-db-sync-t56dz" Dec 11 15:33:51 crc kubenswrapper[5050]: I1211 15:33:51.733762 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/e6e01872-2756-4ca0-b7d6-a8bf1c80ed46-job-config-data\") pod \"manila-db-sync-t56dz\" (UID: \"e6e01872-2756-4ca0-b7d6-a8bf1c80ed46\") " pod="openstack/manila-db-sync-t56dz" Dec 11 15:33:51 crc kubenswrapper[5050]: I1211 15:33:51.733817 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/e6e01872-2756-4ca0-b7d6-a8bf1c80ed46-combined-ca-bundle\") pod \"manila-db-sync-t56dz\" (UID: \"e6e01872-2756-4ca0-b7d6-a8bf1c80ed46\") " pod="openstack/manila-db-sync-t56dz" Dec 11 15:33:51 crc kubenswrapper[5050]: I1211 15:33:51.836452 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m65vj\" (UniqueName: \"kubernetes.io/projected/e6e01872-2756-4ca0-b7d6-a8bf1c80ed46-kube-api-access-m65vj\") pod \"manila-db-sync-t56dz\" (UID: \"e6e01872-2756-4ca0-b7d6-a8bf1c80ed46\") " pod="openstack/manila-db-sync-t56dz" Dec 11 15:33:51 crc kubenswrapper[5050]: I1211 15:33:51.836509 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6e01872-2756-4ca0-b7d6-a8bf1c80ed46-config-data\") pod \"manila-db-sync-t56dz\" (UID: \"e6e01872-2756-4ca0-b7d6-a8bf1c80ed46\") " pod="openstack/manila-db-sync-t56dz" Dec 11 15:33:51 crc kubenswrapper[5050]: I1211 15:33:51.836535 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/e6e01872-2756-4ca0-b7d6-a8bf1c80ed46-job-config-data\") pod \"manila-db-sync-t56dz\" (UID: \"e6e01872-2756-4ca0-b7d6-a8bf1c80ed46\") " pod="openstack/manila-db-sync-t56dz" Dec 11 15:33:51 crc kubenswrapper[5050]: I1211 15:33:51.836588 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6e01872-2756-4ca0-b7d6-a8bf1c80ed46-combined-ca-bundle\") pod \"manila-db-sync-t56dz\" (UID: \"e6e01872-2756-4ca0-b7d6-a8bf1c80ed46\") " pod="openstack/manila-db-sync-t56dz" Dec 11 15:33:51 crc kubenswrapper[5050]: I1211 15:33:51.844662 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6e01872-2756-4ca0-b7d6-a8bf1c80ed46-combined-ca-bundle\") pod \"manila-db-sync-t56dz\" (UID: \"e6e01872-2756-4ca0-b7d6-a8bf1c80ed46\") " pod="openstack/manila-db-sync-t56dz" Dec 11 15:33:51 crc kubenswrapper[5050]: I1211 15:33:51.851840 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/e6e01872-2756-4ca0-b7d6-a8bf1c80ed46-job-config-data\") pod \"manila-db-sync-t56dz\" (UID: \"e6e01872-2756-4ca0-b7d6-a8bf1c80ed46\") " pod="openstack/manila-db-sync-t56dz" Dec 11 15:33:51 crc kubenswrapper[5050]: I1211 15:33:51.856951 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6e01872-2756-4ca0-b7d6-a8bf1c80ed46-config-data\") pod \"manila-db-sync-t56dz\" (UID: \"e6e01872-2756-4ca0-b7d6-a8bf1c80ed46\") " pod="openstack/manila-db-sync-t56dz" Dec 11 15:33:51 crc kubenswrapper[5050]: I1211 15:33:51.971270 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m65vj\" (UniqueName: \"kubernetes.io/projected/e6e01872-2756-4ca0-b7d6-a8bf1c80ed46-kube-api-access-m65vj\") pod \"manila-db-sync-t56dz\" (UID: \"e6e01872-2756-4ca0-b7d6-a8bf1c80ed46\") " pod="openstack/manila-db-sync-t56dz" Dec 11 15:33:52 crc kubenswrapper[5050]: I1211 15:33:52.267268 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-db-sync-t56dz" Dec 11 15:33:52 crc kubenswrapper[5050]: I1211 15:33:52.817731 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-sync-t56dz"] Dec 11 15:33:52 crc kubenswrapper[5050]: I1211 15:33:52.821225 5050 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 11 15:33:53 crc kubenswrapper[5050]: I1211 15:33:53.162607 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-t56dz" event={"ID":"e6e01872-2756-4ca0-b7d6-a8bf1c80ed46","Type":"ContainerStarted","Data":"1b88b38c3c29ce139eaca1d1e92fb894d336a05c246cab2fa3dac00349a1c2c8"} Dec 11 15:33:57 crc kubenswrapper[5050]: I1211 15:33:57.200368 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-t56dz" event={"ID":"e6e01872-2756-4ca0-b7d6-a8bf1c80ed46","Type":"ContainerStarted","Data":"389145b73a7bb74e1f167cd4a81a913e92f7842804320675f2d543e6870ecc00"} Dec 11 15:33:57 crc kubenswrapper[5050]: I1211 15:33:57.223405 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-db-sync-t56dz" podStartSLOduration=2.30197919 podStartE2EDuration="6.223382081s" podCreationTimestamp="2025-12-11 15:33:51 +0000 UTC" firstStartedPulling="2025-12-11 15:33:52.820943123 +0000 UTC m=+6323.664665709" lastFinishedPulling="2025-12-11 15:33:56.742346014 +0000 UTC m=+6327.586068600" observedRunningTime="2025-12-11 15:33:57.217283958 +0000 UTC m=+6328.061006544" watchObservedRunningTime="2025-12-11 15:33:57.223382081 +0000 UTC m=+6328.067104667" Dec 11 15:33:59 crc kubenswrapper[5050]: I1211 15:33:59.218415 5050 generic.go:334] "Generic (PLEG): container finished" podID="e6e01872-2756-4ca0-b7d6-a8bf1c80ed46" containerID="389145b73a7bb74e1f167cd4a81a913e92f7842804320675f2d543e6870ecc00" exitCode=0 Dec 11 15:33:59 crc kubenswrapper[5050]: I1211 15:33:59.218506 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-t56dz" event={"ID":"e6e01872-2756-4ca0-b7d6-a8bf1c80ed46","Type":"ContainerDied","Data":"389145b73a7bb74e1f167cd4a81a913e92f7842804320675f2d543e6870ecc00"} Dec 11 15:34:00 crc kubenswrapper[5050]: I1211 15:34:00.662599 5050 util.go:48] "No ready sandbox for pod can be found. 
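
Each volume for manila-db-sync-t56dz appears three times in the reconciler output above (VerifyControllerAttachedVolume, MountVolume started, MountVolume.SetUp succeeded), so the effective per-pod volume list is easiest to read by grouping only the "SetUp succeeded" entries. A small sketch that does that grouping over a plain-text capture of lines quoted the way the kubelet prints them here, one journal entry per line; the input path is a placeholder.

#!/usr/bin/env python3
"""Group MountVolume.SetUp successes by pod to list what actually got mounted.

Sketch only: expects kubelet journal lines escaped as shown above.
"""
import re
from collections import defaultdict

# e.g. "MountVolume.SetUp succeeded for volume \"config-data\" ... " pod="openstack/manila-db-sync-t56dz"
SETUP_OK = re.compile(
    r'MountVolume\.SetUp succeeded for volume \\"(?P<volume>[^"\\]+)\\".*pod="(?P<pod>[^"]+)"'
)

def mounted_volumes(lines):
    by_pod = defaultdict(set)
    for line in lines:
        m = SETUP_OK.search(line)
        if m:
            by_pod[m.group("pod")].add(m.group("volume"))
    return by_pod

if __name__ == "__main__":
    with open("kubelet-events.log") as fh:          # placeholder path
        summary = mounted_volumes(fh)
    for pod in sorted(summary):
        print(f'{pod}: {", ".join(sorted(summary[pod]))}')
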
Need to start a new one" pod="openstack/manila-db-sync-t56dz" Dec 11 15:34:00 crc kubenswrapper[5050]: I1211 15:34:00.747584 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6e01872-2756-4ca0-b7d6-a8bf1c80ed46-config-data\") pod \"e6e01872-2756-4ca0-b7d6-a8bf1c80ed46\" (UID: \"e6e01872-2756-4ca0-b7d6-a8bf1c80ed46\") " Dec 11 15:34:00 crc kubenswrapper[5050]: I1211 15:34:00.747639 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6e01872-2756-4ca0-b7d6-a8bf1c80ed46-combined-ca-bundle\") pod \"e6e01872-2756-4ca0-b7d6-a8bf1c80ed46\" (UID: \"e6e01872-2756-4ca0-b7d6-a8bf1c80ed46\") " Dec 11 15:34:00 crc kubenswrapper[5050]: I1211 15:34:00.747808 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/e6e01872-2756-4ca0-b7d6-a8bf1c80ed46-job-config-data\") pod \"e6e01872-2756-4ca0-b7d6-a8bf1c80ed46\" (UID: \"e6e01872-2756-4ca0-b7d6-a8bf1c80ed46\") " Dec 11 15:34:00 crc kubenswrapper[5050]: I1211 15:34:00.747901 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m65vj\" (UniqueName: \"kubernetes.io/projected/e6e01872-2756-4ca0-b7d6-a8bf1c80ed46-kube-api-access-m65vj\") pod \"e6e01872-2756-4ca0-b7d6-a8bf1c80ed46\" (UID: \"e6e01872-2756-4ca0-b7d6-a8bf1c80ed46\") " Dec 11 15:34:00 crc kubenswrapper[5050]: I1211 15:34:00.754151 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6e01872-2756-4ca0-b7d6-a8bf1c80ed46-kube-api-access-m65vj" (OuterVolumeSpecName: "kube-api-access-m65vj") pod "e6e01872-2756-4ca0-b7d6-a8bf1c80ed46" (UID: "e6e01872-2756-4ca0-b7d6-a8bf1c80ed46"). InnerVolumeSpecName "kube-api-access-m65vj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 15:34:00 crc kubenswrapper[5050]: I1211 15:34:00.758550 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6e01872-2756-4ca0-b7d6-a8bf1c80ed46-job-config-data" (OuterVolumeSpecName: "job-config-data") pod "e6e01872-2756-4ca0-b7d6-a8bf1c80ed46" (UID: "e6e01872-2756-4ca0-b7d6-a8bf1c80ed46"). InnerVolumeSpecName "job-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 15:34:00 crc kubenswrapper[5050]: I1211 15:34:00.761172 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6e01872-2756-4ca0-b7d6-a8bf1c80ed46-config-data" (OuterVolumeSpecName: "config-data") pod "e6e01872-2756-4ca0-b7d6-a8bf1c80ed46" (UID: "e6e01872-2756-4ca0-b7d6-a8bf1c80ed46"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 15:34:00 crc kubenswrapper[5050]: I1211 15:34:00.789173 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6e01872-2756-4ca0-b7d6-a8bf1c80ed46-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e6e01872-2756-4ca0-b7d6-a8bf1c80ed46" (UID: "e6e01872-2756-4ca0-b7d6-a8bf1c80ed46"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 15:34:00 crc kubenswrapper[5050]: I1211 15:34:00.851090 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m65vj\" (UniqueName: \"kubernetes.io/projected/e6e01872-2756-4ca0-b7d6-a8bf1c80ed46-kube-api-access-m65vj\") on node \"crc\" DevicePath \"\"" Dec 11 15:34:00 crc kubenswrapper[5050]: I1211 15:34:00.851254 5050 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6e01872-2756-4ca0-b7d6-a8bf1c80ed46-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 15:34:00 crc kubenswrapper[5050]: I1211 15:34:00.851317 5050 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6e01872-2756-4ca0-b7d6-a8bf1c80ed46-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 15:34:00 crc kubenswrapper[5050]: I1211 15:34:00.851381 5050 reconciler_common.go:293] "Volume detached for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/e6e01872-2756-4ca0-b7d6-a8bf1c80ed46-job-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 15:34:01 crc kubenswrapper[5050]: I1211 15:34:01.236763 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-t56dz" event={"ID":"e6e01872-2756-4ca0-b7d6-a8bf1c80ed46","Type":"ContainerDied","Data":"1b88b38c3c29ce139eaca1d1e92fb894d336a05c246cab2fa3dac00349a1c2c8"} Dec 11 15:34:01 crc kubenswrapper[5050]: I1211 15:34:01.236802 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1b88b38c3c29ce139eaca1d1e92fb894d336a05c246cab2fa3dac00349a1c2c8" Dec 11 15:34:01 crc kubenswrapper[5050]: I1211 15:34:01.236817 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-sync-t56dz" Dec 11 15:34:01 crc kubenswrapper[5050]: I1211 15:34:01.570571 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-scheduler-0"] Dec 11 15:34:01 crc kubenswrapper[5050]: E1211 15:34:01.571028 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6e01872-2756-4ca0-b7d6-a8bf1c80ed46" containerName="manila-db-sync" Dec 11 15:34:01 crc kubenswrapper[5050]: I1211 15:34:01.571046 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6e01872-2756-4ca0-b7d6-a8bf1c80ed46" containerName="manila-db-sync" Dec 11 15:34:01 crc kubenswrapper[5050]: I1211 15:34:01.571282 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6e01872-2756-4ca0-b7d6-a8bf1c80ed46" containerName="manila-db-sync" Dec 11 15:34:01 crc kubenswrapper[5050]: I1211 15:34:01.575362 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-share-share1-0"] Dec 11 15:34:01 crc kubenswrapper[5050]: I1211 15:34:01.576736 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-scheduler-0" Dec 11 15:34:01 crc kubenswrapper[5050]: I1211 15:34:01.578387 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-share-share1-0" Dec 11 15:34:01 crc kubenswrapper[5050]: I1211 15:34:01.580659 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-scripts" Dec 11 15:34:01 crc kubenswrapper[5050]: I1211 15:34:01.580962 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-manila-dockercfg-d7578" Dec 11 15:34:01 crc kubenswrapper[5050]: I1211 15:34:01.581143 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-config-data" Dec 11 15:34:01 crc kubenswrapper[5050]: I1211 15:34:01.581029 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-scheduler-config-data" Dec 11 15:34:01 crc kubenswrapper[5050]: I1211 15:34:01.598569 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-share-share1-config-data" Dec 11 15:34:01 crc kubenswrapper[5050]: I1211 15:34:01.605531 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"] Dec 11 15:34:01 crc kubenswrapper[5050]: I1211 15:34:01.625423 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"] Dec 11 15:34:01 crc kubenswrapper[5050]: I1211 15:34:01.671767 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/29321ad8-528b-46ed-8c14-21a74038cddb-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"29321ad8-528b-46ed-8c14-21a74038cddb\") " pod="openstack/manila-share-share1-0" Dec 11 15:34:01 crc kubenswrapper[5050]: I1211 15:34:01.671816 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9c5fd2fd-4df8-4f0f-982c-d3e6df852669-scripts\") pod \"manila-scheduler-0\" (UID: \"9c5fd2fd-4df8-4f0f-982c-d3e6df852669\") " pod="openstack/manila-scheduler-0" Dec 11 15:34:01 crc kubenswrapper[5050]: I1211 15:34:01.671838 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q89cg\" (UniqueName: \"kubernetes.io/projected/29321ad8-528b-46ed-8c14-21a74038cddb-kube-api-access-q89cg\") pod \"manila-share-share1-0\" (UID: \"29321ad8-528b-46ed-8c14-21a74038cddb\") " pod="openstack/manila-share-share1-0" Dec 11 15:34:01 crc kubenswrapper[5050]: I1211 15:34:01.671862 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c5fd2fd-4df8-4f0f-982c-d3e6df852669-config-data\") pod \"manila-scheduler-0\" (UID: \"9c5fd2fd-4df8-4f0f-982c-d3e6df852669\") " pod="openstack/manila-scheduler-0" Dec 11 15:34:01 crc kubenswrapper[5050]: I1211 15:34:01.671930 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9c5fd2fd-4df8-4f0f-982c-d3e6df852669-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"9c5fd2fd-4df8-4f0f-982c-d3e6df852669\") " pod="openstack/manila-scheduler-0" Dec 11 15:34:01 crc kubenswrapper[5050]: I1211 15:34:01.671962 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29321ad8-528b-46ed-8c14-21a74038cddb-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"29321ad8-528b-46ed-8c14-21a74038cddb\") " 
pod="openstack/manila-share-share1-0" Dec 11 15:34:01 crc kubenswrapper[5050]: I1211 15:34:01.671980 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/29321ad8-528b-46ed-8c14-21a74038cddb-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"29321ad8-528b-46ed-8c14-21a74038cddb\") " pod="openstack/manila-share-share1-0" Dec 11 15:34:01 crc kubenswrapper[5050]: I1211 15:34:01.672089 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c5fd2fd-4df8-4f0f-982c-d3e6df852669-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"9c5fd2fd-4df8-4f0f-982c-d3e6df852669\") " pod="openstack/manila-scheduler-0" Dec 11 15:34:01 crc kubenswrapper[5050]: I1211 15:34:01.672125 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9c5fd2fd-4df8-4f0f-982c-d3e6df852669-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"9c5fd2fd-4df8-4f0f-982c-d3e6df852669\") " pod="openstack/manila-scheduler-0" Dec 11 15:34:01 crc kubenswrapper[5050]: I1211 15:34:01.672143 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/29321ad8-528b-46ed-8c14-21a74038cddb-ceph\") pod \"manila-share-share1-0\" (UID: \"29321ad8-528b-46ed-8c14-21a74038cddb\") " pod="openstack/manila-share-share1-0" Dec 11 15:34:01 crc kubenswrapper[5050]: I1211 15:34:01.672164 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/29321ad8-528b-46ed-8c14-21a74038cddb-scripts\") pod \"manila-share-share1-0\" (UID: \"29321ad8-528b-46ed-8c14-21a74038cddb\") " pod="openstack/manila-share-share1-0" Dec 11 15:34:01 crc kubenswrapper[5050]: I1211 15:34:01.672201 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29321ad8-528b-46ed-8c14-21a74038cddb-config-data\") pod \"manila-share-share1-0\" (UID: \"29321ad8-528b-46ed-8c14-21a74038cddb\") " pod="openstack/manila-share-share1-0" Dec 11 15:34:01 crc kubenswrapper[5050]: I1211 15:34:01.672218 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/29321ad8-528b-46ed-8c14-21a74038cddb-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"29321ad8-528b-46ed-8c14-21a74038cddb\") " pod="openstack/manila-share-share1-0" Dec 11 15:34:01 crc kubenswrapper[5050]: I1211 15:34:01.672282 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kkwc8\" (UniqueName: \"kubernetes.io/projected/9c5fd2fd-4df8-4f0f-982c-d3e6df852669-kube-api-access-kkwc8\") pod \"manila-scheduler-0\" (UID: \"9c5fd2fd-4df8-4f0f-982c-d3e6df852669\") " pod="openstack/manila-scheduler-0" Dec 11 15:34:01 crc kubenswrapper[5050]: I1211 15:34:01.688306 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-d495dfb55-bp7n6"] Dec 11 15:34:01 crc kubenswrapper[5050]: I1211 15:34:01.690360 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-d495dfb55-bp7n6" Dec 11 15:34:01 crc kubenswrapper[5050]: I1211 15:34:01.712254 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-d495dfb55-bp7n6"] Dec 11 15:34:01 crc kubenswrapper[5050]: I1211 15:34:01.774474 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9c5fd2fd-4df8-4f0f-982c-d3e6df852669-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"9c5fd2fd-4df8-4f0f-982c-d3e6df852669\") " pod="openstack/manila-scheduler-0" Dec 11 15:34:01 crc kubenswrapper[5050]: I1211 15:34:01.774538 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29321ad8-528b-46ed-8c14-21a74038cddb-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"29321ad8-528b-46ed-8c14-21a74038cddb\") " pod="openstack/manila-share-share1-0" Dec 11 15:34:01 crc kubenswrapper[5050]: I1211 15:34:01.774557 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/29321ad8-528b-46ed-8c14-21a74038cddb-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"29321ad8-528b-46ed-8c14-21a74038cddb\") " pod="openstack/manila-share-share1-0" Dec 11 15:34:01 crc kubenswrapper[5050]: I1211 15:34:01.774593 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c5fd2fd-4df8-4f0f-982c-d3e6df852669-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"9c5fd2fd-4df8-4f0f-982c-d3e6df852669\") " pod="openstack/manila-scheduler-0" Dec 11 15:34:01 crc kubenswrapper[5050]: I1211 15:34:01.774620 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9c5fd2fd-4df8-4f0f-982c-d3e6df852669-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"9c5fd2fd-4df8-4f0f-982c-d3e6df852669\") " pod="openstack/manila-scheduler-0" Dec 11 15:34:01 crc kubenswrapper[5050]: I1211 15:34:01.774636 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/29321ad8-528b-46ed-8c14-21a74038cddb-ceph\") pod \"manila-share-share1-0\" (UID: \"29321ad8-528b-46ed-8c14-21a74038cddb\") " pod="openstack/manila-share-share1-0" Dec 11 15:34:01 crc kubenswrapper[5050]: I1211 15:34:01.774658 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/29321ad8-528b-46ed-8c14-21a74038cddb-scripts\") pod \"manila-share-share1-0\" (UID: \"29321ad8-528b-46ed-8c14-21a74038cddb\") " pod="openstack/manila-share-share1-0" Dec 11 15:34:01 crc kubenswrapper[5050]: I1211 15:34:01.774699 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29321ad8-528b-46ed-8c14-21a74038cddb-config-data\") pod \"manila-share-share1-0\" (UID: \"29321ad8-528b-46ed-8c14-21a74038cddb\") " pod="openstack/manila-share-share1-0" Dec 11 15:34:01 crc kubenswrapper[5050]: I1211 15:34:01.774719 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/29321ad8-528b-46ed-8c14-21a74038cddb-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"29321ad8-528b-46ed-8c14-21a74038cddb\") " 
pod="openstack/manila-share-share1-0" Dec 11 15:34:01 crc kubenswrapper[5050]: I1211 15:34:01.774757 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2412d0a8-7e8f-4fae-915b-c794621f9655-config\") pod \"dnsmasq-dns-d495dfb55-bp7n6\" (UID: \"2412d0a8-7e8f-4fae-915b-c794621f9655\") " pod="openstack/dnsmasq-dns-d495dfb55-bp7n6" Dec 11 15:34:01 crc kubenswrapper[5050]: I1211 15:34:01.774773 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2412d0a8-7e8f-4fae-915b-c794621f9655-ovsdbserver-nb\") pod \"dnsmasq-dns-d495dfb55-bp7n6\" (UID: \"2412d0a8-7e8f-4fae-915b-c794621f9655\") " pod="openstack/dnsmasq-dns-d495dfb55-bp7n6" Dec 11 15:34:01 crc kubenswrapper[5050]: I1211 15:34:01.774794 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kkwc8\" (UniqueName: \"kubernetes.io/projected/9c5fd2fd-4df8-4f0f-982c-d3e6df852669-kube-api-access-kkwc8\") pod \"manila-scheduler-0\" (UID: \"9c5fd2fd-4df8-4f0f-982c-d3e6df852669\") " pod="openstack/manila-scheduler-0" Dec 11 15:34:01 crc kubenswrapper[5050]: I1211 15:34:01.774818 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2412d0a8-7e8f-4fae-915b-c794621f9655-ovsdbserver-sb\") pod \"dnsmasq-dns-d495dfb55-bp7n6\" (UID: \"2412d0a8-7e8f-4fae-915b-c794621f9655\") " pod="openstack/dnsmasq-dns-d495dfb55-bp7n6" Dec 11 15:34:01 crc kubenswrapper[5050]: I1211 15:34:01.774852 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wrk9\" (UniqueName: \"kubernetes.io/projected/2412d0a8-7e8f-4fae-915b-c794621f9655-kube-api-access-9wrk9\") pod \"dnsmasq-dns-d495dfb55-bp7n6\" (UID: \"2412d0a8-7e8f-4fae-915b-c794621f9655\") " pod="openstack/dnsmasq-dns-d495dfb55-bp7n6" Dec 11 15:34:01 crc kubenswrapper[5050]: I1211 15:34:01.774892 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2412d0a8-7e8f-4fae-915b-c794621f9655-dns-svc\") pod \"dnsmasq-dns-d495dfb55-bp7n6\" (UID: \"2412d0a8-7e8f-4fae-915b-c794621f9655\") " pod="openstack/dnsmasq-dns-d495dfb55-bp7n6" Dec 11 15:34:01 crc kubenswrapper[5050]: I1211 15:34:01.774925 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/29321ad8-528b-46ed-8c14-21a74038cddb-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"29321ad8-528b-46ed-8c14-21a74038cddb\") " pod="openstack/manila-share-share1-0" Dec 11 15:34:01 crc kubenswrapper[5050]: I1211 15:34:01.774943 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9c5fd2fd-4df8-4f0f-982c-d3e6df852669-scripts\") pod \"manila-scheduler-0\" (UID: \"9c5fd2fd-4df8-4f0f-982c-d3e6df852669\") " pod="openstack/manila-scheduler-0" Dec 11 15:34:01 crc kubenswrapper[5050]: I1211 15:34:01.774960 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q89cg\" (UniqueName: \"kubernetes.io/projected/29321ad8-528b-46ed-8c14-21a74038cddb-kube-api-access-q89cg\") pod \"manila-share-share1-0\" (UID: \"29321ad8-528b-46ed-8c14-21a74038cddb\") " 
pod="openstack/manila-share-share1-0" Dec 11 15:34:01 crc kubenswrapper[5050]: I1211 15:34:01.774979 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c5fd2fd-4df8-4f0f-982c-d3e6df852669-config-data\") pod \"manila-scheduler-0\" (UID: \"9c5fd2fd-4df8-4f0f-982c-d3e6df852669\") " pod="openstack/manila-scheduler-0" Dec 11 15:34:01 crc kubenswrapper[5050]: I1211 15:34:01.775389 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/29321ad8-528b-46ed-8c14-21a74038cddb-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"29321ad8-528b-46ed-8c14-21a74038cddb\") " pod="openstack/manila-share-share1-0" Dec 11 15:34:01 crc kubenswrapper[5050]: I1211 15:34:01.777508 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/29321ad8-528b-46ed-8c14-21a74038cddb-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"29321ad8-528b-46ed-8c14-21a74038cddb\") " pod="openstack/manila-share-share1-0" Dec 11 15:34:01 crc kubenswrapper[5050]: I1211 15:34:01.777518 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9c5fd2fd-4df8-4f0f-982c-d3e6df852669-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"9c5fd2fd-4df8-4f0f-982c-d3e6df852669\") " pod="openstack/manila-scheduler-0" Dec 11 15:34:01 crc kubenswrapper[5050]: I1211 15:34:01.786969 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c5fd2fd-4df8-4f0f-982c-d3e6df852669-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"9c5fd2fd-4df8-4f0f-982c-d3e6df852669\") " pod="openstack/manila-scheduler-0" Dec 11 15:34:01 crc kubenswrapper[5050]: I1211 15:34:01.787502 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9c5fd2fd-4df8-4f0f-982c-d3e6df852669-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"9c5fd2fd-4df8-4f0f-982c-d3e6df852669\") " pod="openstack/manila-scheduler-0" Dec 11 15:34:01 crc kubenswrapper[5050]: I1211 15:34:01.787910 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c5fd2fd-4df8-4f0f-982c-d3e6df852669-config-data\") pod \"manila-scheduler-0\" (UID: \"9c5fd2fd-4df8-4f0f-982c-d3e6df852669\") " pod="openstack/manila-scheduler-0" Dec 11 15:34:01 crc kubenswrapper[5050]: I1211 15:34:01.788055 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29321ad8-528b-46ed-8c14-21a74038cddb-config-data\") pod \"manila-share-share1-0\" (UID: \"29321ad8-528b-46ed-8c14-21a74038cddb\") " pod="openstack/manila-share-share1-0" Dec 11 15:34:01 crc kubenswrapper[5050]: I1211 15:34:01.799921 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29321ad8-528b-46ed-8c14-21a74038cddb-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"29321ad8-528b-46ed-8c14-21a74038cddb\") " pod="openstack/manila-share-share1-0" Dec 11 15:34:01 crc kubenswrapper[5050]: I1211 15:34:01.800396 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/29321ad8-528b-46ed-8c14-21a74038cddb-ceph\") pod 
\"manila-share-share1-0\" (UID: \"29321ad8-528b-46ed-8c14-21a74038cddb\") " pod="openstack/manila-share-share1-0" Dec 11 15:34:01 crc kubenswrapper[5050]: I1211 15:34:01.800794 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9c5fd2fd-4df8-4f0f-982c-d3e6df852669-scripts\") pod \"manila-scheduler-0\" (UID: \"9c5fd2fd-4df8-4f0f-982c-d3e6df852669\") " pod="openstack/manila-scheduler-0" Dec 11 15:34:01 crc kubenswrapper[5050]: I1211 15:34:01.805152 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/29321ad8-528b-46ed-8c14-21a74038cddb-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"29321ad8-528b-46ed-8c14-21a74038cddb\") " pod="openstack/manila-share-share1-0" Dec 11 15:34:01 crc kubenswrapper[5050]: I1211 15:34:01.808963 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/29321ad8-528b-46ed-8c14-21a74038cddb-scripts\") pod \"manila-share-share1-0\" (UID: \"29321ad8-528b-46ed-8c14-21a74038cddb\") " pod="openstack/manila-share-share1-0" Dec 11 15:34:01 crc kubenswrapper[5050]: I1211 15:34:01.812181 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kkwc8\" (UniqueName: \"kubernetes.io/projected/9c5fd2fd-4df8-4f0f-982c-d3e6df852669-kube-api-access-kkwc8\") pod \"manila-scheduler-0\" (UID: \"9c5fd2fd-4df8-4f0f-982c-d3e6df852669\") " pod="openstack/manila-scheduler-0" Dec 11 15:34:01 crc kubenswrapper[5050]: I1211 15:34:01.812773 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q89cg\" (UniqueName: \"kubernetes.io/projected/29321ad8-528b-46ed-8c14-21a74038cddb-kube-api-access-q89cg\") pod \"manila-share-share1-0\" (UID: \"29321ad8-528b-46ed-8c14-21a74038cddb\") " pod="openstack/manila-share-share1-0" Dec 11 15:34:01 crc kubenswrapper[5050]: I1211 15:34:01.819123 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-api-0"] Dec 11 15:34:01 crc kubenswrapper[5050]: I1211 15:34:01.821025 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-api-0" Dec 11 15:34:01 crc kubenswrapper[5050]: I1211 15:34:01.824443 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-api-config-data" Dec 11 15:34:01 crc kubenswrapper[5050]: I1211 15:34:01.829865 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"] Dec 11 15:34:01 crc kubenswrapper[5050]: I1211 15:34:01.877911 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2412d0a8-7e8f-4fae-915b-c794621f9655-config\") pod \"dnsmasq-dns-d495dfb55-bp7n6\" (UID: \"2412d0a8-7e8f-4fae-915b-c794621f9655\") " pod="openstack/dnsmasq-dns-d495dfb55-bp7n6" Dec 11 15:34:01 crc kubenswrapper[5050]: I1211 15:34:01.877969 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2412d0a8-7e8f-4fae-915b-c794621f9655-ovsdbserver-nb\") pod \"dnsmasq-dns-d495dfb55-bp7n6\" (UID: \"2412d0a8-7e8f-4fae-915b-c794621f9655\") " pod="openstack/dnsmasq-dns-d495dfb55-bp7n6" Dec 11 15:34:01 crc kubenswrapper[5050]: I1211 15:34:01.878022 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2412d0a8-7e8f-4fae-915b-c794621f9655-ovsdbserver-sb\") pod \"dnsmasq-dns-d495dfb55-bp7n6\" (UID: \"2412d0a8-7e8f-4fae-915b-c794621f9655\") " pod="openstack/dnsmasq-dns-d495dfb55-bp7n6" Dec 11 15:34:01 crc kubenswrapper[5050]: I1211 15:34:01.878063 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9wrk9\" (UniqueName: \"kubernetes.io/projected/2412d0a8-7e8f-4fae-915b-c794621f9655-kube-api-access-9wrk9\") pod \"dnsmasq-dns-d495dfb55-bp7n6\" (UID: \"2412d0a8-7e8f-4fae-915b-c794621f9655\") " pod="openstack/dnsmasq-dns-d495dfb55-bp7n6" Dec 11 15:34:01 crc kubenswrapper[5050]: I1211 15:34:01.878114 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2412d0a8-7e8f-4fae-915b-c794621f9655-dns-svc\") pod \"dnsmasq-dns-d495dfb55-bp7n6\" (UID: \"2412d0a8-7e8f-4fae-915b-c794621f9655\") " pod="openstack/dnsmasq-dns-d495dfb55-bp7n6" Dec 11 15:34:01 crc kubenswrapper[5050]: I1211 15:34:01.879647 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2412d0a8-7e8f-4fae-915b-c794621f9655-dns-svc\") pod \"dnsmasq-dns-d495dfb55-bp7n6\" (UID: \"2412d0a8-7e8f-4fae-915b-c794621f9655\") " pod="openstack/dnsmasq-dns-d495dfb55-bp7n6" Dec 11 15:34:01 crc kubenswrapper[5050]: I1211 15:34:01.879840 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2412d0a8-7e8f-4fae-915b-c794621f9655-ovsdbserver-nb\") pod \"dnsmasq-dns-d495dfb55-bp7n6\" (UID: \"2412d0a8-7e8f-4fae-915b-c794621f9655\") " pod="openstack/dnsmasq-dns-d495dfb55-bp7n6" Dec 11 15:34:01 crc kubenswrapper[5050]: I1211 15:34:01.879878 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2412d0a8-7e8f-4fae-915b-c794621f9655-config\") pod \"dnsmasq-dns-d495dfb55-bp7n6\" (UID: \"2412d0a8-7e8f-4fae-915b-c794621f9655\") " pod="openstack/dnsmasq-dns-d495dfb55-bp7n6" Dec 11 15:34:01 crc kubenswrapper[5050]: I1211 15:34:01.882674 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" 
(UniqueName: \"kubernetes.io/configmap/2412d0a8-7e8f-4fae-915b-c794621f9655-ovsdbserver-sb\") pod \"dnsmasq-dns-d495dfb55-bp7n6\" (UID: \"2412d0a8-7e8f-4fae-915b-c794621f9655\") " pod="openstack/dnsmasq-dns-d495dfb55-bp7n6" Dec 11 15:34:01 crc kubenswrapper[5050]: I1211 15:34:01.894451 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9wrk9\" (UniqueName: \"kubernetes.io/projected/2412d0a8-7e8f-4fae-915b-c794621f9655-kube-api-access-9wrk9\") pod \"dnsmasq-dns-d495dfb55-bp7n6\" (UID: \"2412d0a8-7e8f-4fae-915b-c794621f9655\") " pod="openstack/dnsmasq-dns-d495dfb55-bp7n6" Dec 11 15:34:01 crc kubenswrapper[5050]: I1211 15:34:01.903471 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-scheduler-0" Dec 11 15:34:01 crc kubenswrapper[5050]: I1211 15:34:01.921094 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-share-share1-0" Dec 11 15:34:01 crc kubenswrapper[5050]: I1211 15:34:01.980752 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cdfa62cb-9c4b-4684-bab0-698433c7e69a-config-data-custom\") pod \"manila-api-0\" (UID: \"cdfa62cb-9c4b-4684-bab0-698433c7e69a\") " pod="openstack/manila-api-0" Dec 11 15:34:01 crc kubenswrapper[5050]: I1211 15:34:01.980800 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tl85t\" (UniqueName: \"kubernetes.io/projected/cdfa62cb-9c4b-4684-bab0-698433c7e69a-kube-api-access-tl85t\") pod \"manila-api-0\" (UID: \"cdfa62cb-9c4b-4684-bab0-698433c7e69a\") " pod="openstack/manila-api-0" Dec 11 15:34:01 crc kubenswrapper[5050]: I1211 15:34:01.980824 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/cdfa62cb-9c4b-4684-bab0-698433c7e69a-etc-machine-id\") pod \"manila-api-0\" (UID: \"cdfa62cb-9c4b-4684-bab0-698433c7e69a\") " pod="openstack/manila-api-0" Dec 11 15:34:01 crc kubenswrapper[5050]: I1211 15:34:01.980840 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cdfa62cb-9c4b-4684-bab0-698433c7e69a-scripts\") pod \"manila-api-0\" (UID: \"cdfa62cb-9c4b-4684-bab0-698433c7e69a\") " pod="openstack/manila-api-0" Dec 11 15:34:01 crc kubenswrapper[5050]: I1211 15:34:01.980860 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cdfa62cb-9c4b-4684-bab0-698433c7e69a-logs\") pod \"manila-api-0\" (UID: \"cdfa62cb-9c4b-4684-bab0-698433c7e69a\") " pod="openstack/manila-api-0" Dec 11 15:34:01 crc kubenswrapper[5050]: I1211 15:34:01.980890 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cdfa62cb-9c4b-4684-bab0-698433c7e69a-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"cdfa62cb-9c4b-4684-bab0-698433c7e69a\") " pod="openstack/manila-api-0" Dec 11 15:34:01 crc kubenswrapper[5050]: I1211 15:34:01.980970 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cdfa62cb-9c4b-4684-bab0-698433c7e69a-config-data\") pod \"manila-api-0\" (UID: \"cdfa62cb-9c4b-4684-bab0-698433c7e69a\") " 
pod="openstack/manila-api-0" Dec 11 15:34:02 crc kubenswrapper[5050]: I1211 15:34:02.019650 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-d495dfb55-bp7n6" Dec 11 15:34:02 crc kubenswrapper[5050]: I1211 15:34:02.094556 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cdfa62cb-9c4b-4684-bab0-698433c7e69a-config-data\") pod \"manila-api-0\" (UID: \"cdfa62cb-9c4b-4684-bab0-698433c7e69a\") " pod="openstack/manila-api-0" Dec 11 15:34:02 crc kubenswrapper[5050]: I1211 15:34:02.095002 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cdfa62cb-9c4b-4684-bab0-698433c7e69a-config-data-custom\") pod \"manila-api-0\" (UID: \"cdfa62cb-9c4b-4684-bab0-698433c7e69a\") " pod="openstack/manila-api-0" Dec 11 15:34:02 crc kubenswrapper[5050]: I1211 15:34:02.095291 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tl85t\" (UniqueName: \"kubernetes.io/projected/cdfa62cb-9c4b-4684-bab0-698433c7e69a-kube-api-access-tl85t\") pod \"manila-api-0\" (UID: \"cdfa62cb-9c4b-4684-bab0-698433c7e69a\") " pod="openstack/manila-api-0" Dec 11 15:34:02 crc kubenswrapper[5050]: I1211 15:34:02.095362 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/cdfa62cb-9c4b-4684-bab0-698433c7e69a-etc-machine-id\") pod \"manila-api-0\" (UID: \"cdfa62cb-9c4b-4684-bab0-698433c7e69a\") " pod="openstack/manila-api-0" Dec 11 15:34:02 crc kubenswrapper[5050]: I1211 15:34:02.095383 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cdfa62cb-9c4b-4684-bab0-698433c7e69a-scripts\") pod \"manila-api-0\" (UID: \"cdfa62cb-9c4b-4684-bab0-698433c7e69a\") " pod="openstack/manila-api-0" Dec 11 15:34:02 crc kubenswrapper[5050]: I1211 15:34:02.095432 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cdfa62cb-9c4b-4684-bab0-698433c7e69a-logs\") pod \"manila-api-0\" (UID: \"cdfa62cb-9c4b-4684-bab0-698433c7e69a\") " pod="openstack/manila-api-0" Dec 11 15:34:02 crc kubenswrapper[5050]: I1211 15:34:02.095489 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cdfa62cb-9c4b-4684-bab0-698433c7e69a-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"cdfa62cb-9c4b-4684-bab0-698433c7e69a\") " pod="openstack/manila-api-0" Dec 11 15:34:02 crc kubenswrapper[5050]: I1211 15:34:02.096322 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/cdfa62cb-9c4b-4684-bab0-698433c7e69a-etc-machine-id\") pod \"manila-api-0\" (UID: \"cdfa62cb-9c4b-4684-bab0-698433c7e69a\") " pod="openstack/manila-api-0" Dec 11 15:34:02 crc kubenswrapper[5050]: I1211 15:34:02.096709 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cdfa62cb-9c4b-4684-bab0-698433c7e69a-logs\") pod \"manila-api-0\" (UID: \"cdfa62cb-9c4b-4684-bab0-698433c7e69a\") " pod="openstack/manila-api-0" Dec 11 15:34:02 crc kubenswrapper[5050]: I1211 15:34:02.108974 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/cdfa62cb-9c4b-4684-bab0-698433c7e69a-scripts\") pod \"manila-api-0\" (UID: \"cdfa62cb-9c4b-4684-bab0-698433c7e69a\") " pod="openstack/manila-api-0" Dec 11 15:34:02 crc kubenswrapper[5050]: I1211 15:34:02.110348 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cdfa62cb-9c4b-4684-bab0-698433c7e69a-config-data-custom\") pod \"manila-api-0\" (UID: \"cdfa62cb-9c4b-4684-bab0-698433c7e69a\") " pod="openstack/manila-api-0" Dec 11 15:34:02 crc kubenswrapper[5050]: I1211 15:34:02.111792 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cdfa62cb-9c4b-4684-bab0-698433c7e69a-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"cdfa62cb-9c4b-4684-bab0-698433c7e69a\") " pod="openstack/manila-api-0" Dec 11 15:34:02 crc kubenswrapper[5050]: I1211 15:34:02.119956 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cdfa62cb-9c4b-4684-bab0-698433c7e69a-config-data\") pod \"manila-api-0\" (UID: \"cdfa62cb-9c4b-4684-bab0-698433c7e69a\") " pod="openstack/manila-api-0" Dec 11 15:34:02 crc kubenswrapper[5050]: I1211 15:34:02.120515 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tl85t\" (UniqueName: \"kubernetes.io/projected/cdfa62cb-9c4b-4684-bab0-698433c7e69a-kube-api-access-tl85t\") pod \"manila-api-0\" (UID: \"cdfa62cb-9c4b-4684-bab0-698433c7e69a\") " pod="openstack/manila-api-0" Dec 11 15:34:02 crc kubenswrapper[5050]: I1211 15:34:02.370792 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-api-0" Dec 11 15:34:02 crc kubenswrapper[5050]: I1211 15:34:02.453371 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"] Dec 11 15:34:02 crc kubenswrapper[5050]: I1211 15:34:02.678692 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"] Dec 11 15:34:02 crc kubenswrapper[5050]: W1211 15:34:02.687177 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod29321ad8_528b_46ed_8c14_21a74038cddb.slice/crio-dfc377512557e9b9f63748f50ecc4fe7d871e6abffff093c68ac04906aec0ed7 WatchSource:0}: Error finding container dfc377512557e9b9f63748f50ecc4fe7d871e6abffff093c68ac04906aec0ed7: Status 404 returned error can't find the container with id dfc377512557e9b9f63748f50ecc4fe7d871e6abffff093c68ac04906aec0ed7 Dec 11 15:34:02 crc kubenswrapper[5050]: I1211 15:34:02.742412 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-d495dfb55-bp7n6"] Dec 11 15:34:03 crc kubenswrapper[5050]: I1211 15:34:03.013192 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"] Dec 11 15:34:03 crc kubenswrapper[5050]: I1211 15:34:03.269497 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"cdfa62cb-9c4b-4684-bab0-698433c7e69a","Type":"ContainerStarted","Data":"71135d33308cb2ceaa1dc61dfe9ca2e04180f3b0cff0d7263892993774fee27e"} Dec 11 15:34:03 crc kubenswrapper[5050]: I1211 15:34:03.272319 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d495dfb55-bp7n6" event={"ID":"2412d0a8-7e8f-4fae-915b-c794621f9655","Type":"ContainerStarted","Data":"702ffa5224c6a87bb507f75231c2608d8d14e2b53f2c36c06ad8688acd7b0ad8"} Dec 11 15:34:03 crc 
kubenswrapper[5050]: I1211 15:34:03.275082 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"29321ad8-528b-46ed-8c14-21a74038cddb","Type":"ContainerStarted","Data":"dfc377512557e9b9f63748f50ecc4fe7d871e6abffff093c68ac04906aec0ed7"} Dec 11 15:34:03 crc kubenswrapper[5050]: I1211 15:34:03.276291 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"9c5fd2fd-4df8-4f0f-982c-d3e6df852669","Type":"ContainerStarted","Data":"618aff2b454686e1b457e08897fbf2f194791845f723d97f53a74032c188213c"} Dec 11 15:34:04 crc kubenswrapper[5050]: I1211 15:34:04.294556 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"cdfa62cb-9c4b-4684-bab0-698433c7e69a","Type":"ContainerStarted","Data":"eacfe0d3a371280cd5752d93b117deac768c60ee64ce3658a74ffddb6d4aa9df"} Dec 11 15:34:04 crc kubenswrapper[5050]: I1211 15:34:04.296352 5050 generic.go:334] "Generic (PLEG): container finished" podID="2412d0a8-7e8f-4fae-915b-c794621f9655" containerID="b9cc781ac06d5864326978c46d2320f33805c8c774c5b235f79f80ab0db31c10" exitCode=0 Dec 11 15:34:04 crc kubenswrapper[5050]: I1211 15:34:04.296387 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d495dfb55-bp7n6" event={"ID":"2412d0a8-7e8f-4fae-915b-c794621f9655","Type":"ContainerDied","Data":"b9cc781ac06d5864326978c46d2320f33805c8c774c5b235f79f80ab0db31c10"} Dec 11 15:34:05 crc kubenswrapper[5050]: I1211 15:34:05.311645 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"cdfa62cb-9c4b-4684-bab0-698433c7e69a","Type":"ContainerStarted","Data":"1c6e5335faba3dbffb844b71fa38799aba48c905041e991d3ebbbd36b45af4f9"} Dec 11 15:34:05 crc kubenswrapper[5050]: I1211 15:34:05.312101 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/manila-api-0" Dec 11 15:34:05 crc kubenswrapper[5050]: I1211 15:34:05.321144 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d495dfb55-bp7n6" event={"ID":"2412d0a8-7e8f-4fae-915b-c794621f9655","Type":"ContainerStarted","Data":"b47ef13d30c6603edd9078e5b603930cd327a27621b2f004ae2f9949d960884b"} Dec 11 15:34:05 crc kubenswrapper[5050]: I1211 15:34:05.321414 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-d495dfb55-bp7n6" Dec 11 15:34:05 crc kubenswrapper[5050]: I1211 15:34:05.325996 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"9c5fd2fd-4df8-4f0f-982c-d3e6df852669","Type":"ContainerStarted","Data":"21d82c8ed257c6bbf41d15d936e7dbaada66175dbed0e14ecf37991389e1a973"} Dec 11 15:34:05 crc kubenswrapper[5050]: I1211 15:34:05.342971 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-api-0" podStartSLOduration=4.342949835 podStartE2EDuration="4.342949835s" podCreationTimestamp="2025-12-11 15:34:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 15:34:05.333896973 +0000 UTC m=+6336.177619559" watchObservedRunningTime="2025-12-11 15:34:05.342949835 +0000 UTC m=+6336.186672421" Dec 11 15:34:05 crc kubenswrapper[5050]: I1211 15:34:05.360769 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-d495dfb55-bp7n6" podStartSLOduration=4.36075078 podStartE2EDuration="4.36075078s" podCreationTimestamp="2025-12-11 15:34:01 
+0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 15:34:05.359360953 +0000 UTC m=+6336.203083539" watchObservedRunningTime="2025-12-11 15:34:05.36075078 +0000 UTC m=+6336.204473366" Dec 11 15:34:06 crc kubenswrapper[5050]: I1211 15:34:06.339756 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"9c5fd2fd-4df8-4f0f-982c-d3e6df852669","Type":"ContainerStarted","Data":"838009be48876fbf3a998b6c983ee056868389a806476822d35e85d0a5f9f705"} Dec 11 15:34:06 crc kubenswrapper[5050]: I1211 15:34:06.367065 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-scheduler-0" podStartSLOduration=3.101815445 podStartE2EDuration="5.367046125s" podCreationTimestamp="2025-12-11 15:34:01 +0000 UTC" firstStartedPulling="2025-12-11 15:34:02.483762872 +0000 UTC m=+6333.327485458" lastFinishedPulling="2025-12-11 15:34:04.748993552 +0000 UTC m=+6335.592716138" observedRunningTime="2025-12-11 15:34:06.35785628 +0000 UTC m=+6337.201578866" watchObservedRunningTime="2025-12-11 15:34:06.367046125 +0000 UTC m=+6337.210768711" Dec 11 15:34:06 crc kubenswrapper[5050]: I1211 15:34:06.422118 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Dec 11 15:34:11 crc kubenswrapper[5050]: I1211 15:34:11.907326 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-scheduler-0" Dec 11 15:34:12 crc kubenswrapper[5050]: I1211 15:34:12.021737 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-d495dfb55-bp7n6" Dec 11 15:34:12 crc kubenswrapper[5050]: I1211 15:34:12.119183 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8fc54567-bwk9j"] Dec 11 15:34:12 crc kubenswrapper[5050]: I1211 15:34:12.119662 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-b8fc54567-bwk9j" podUID="88c56e95-bf88-47a2-9c36-63f9092746c9" containerName="dnsmasq-dns" containerID="cri-o://336110a6025e6d0b53ff8f886b6d86222090ad7a85dd7af0ba57ecc585ec26a9" gracePeriod=10 Dec 11 15:34:12 crc kubenswrapper[5050]: I1211 15:34:12.411383 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"29321ad8-528b-46ed-8c14-21a74038cddb","Type":"ContainerStarted","Data":"89a3244f39469542afef978a8a085efc27bf1f9f1160cd1c8ba170b9c2b63c6e"} Dec 11 15:34:12 crc kubenswrapper[5050]: I1211 15:34:12.411436 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"29321ad8-528b-46ed-8c14-21a74038cddb","Type":"ContainerStarted","Data":"938a7854156ef3bfb81302582302a81ef99f4c5853767f52c07821fcdff360d2"} Dec 11 15:34:12 crc kubenswrapper[5050]: I1211 15:34:12.414906 5050 generic.go:334] "Generic (PLEG): container finished" podID="88c56e95-bf88-47a2-9c36-63f9092746c9" containerID="336110a6025e6d0b53ff8f886b6d86222090ad7a85dd7af0ba57ecc585ec26a9" exitCode=0 Dec 11 15:34:12 crc kubenswrapper[5050]: I1211 15:34:12.414940 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fc54567-bwk9j" event={"ID":"88c56e95-bf88-47a2-9c36-63f9092746c9","Type":"ContainerDied","Data":"336110a6025e6d0b53ff8f886b6d86222090ad7a85dd7af0ba57ecc585ec26a9"} Dec 11 15:34:12 crc kubenswrapper[5050]: I1211 15:34:12.442630 5050 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openstack/manila-share-share1-0" podStartSLOduration=2.839989174 podStartE2EDuration="11.442593368s" podCreationTimestamp="2025-12-11 15:34:01 +0000 UTC" firstStartedPulling="2025-12-11 15:34:02.688969033 +0000 UTC m=+6333.532691619" lastFinishedPulling="2025-12-11 15:34:11.291573227 +0000 UTC m=+6342.135295813" observedRunningTime="2025-12-11 15:34:12.436871646 +0000 UTC m=+6343.280594242" watchObservedRunningTime="2025-12-11 15:34:12.442593368 +0000 UTC m=+6343.286315954" Dec 11 15:34:12 crc kubenswrapper[5050]: I1211 15:34:12.798224 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fc54567-bwk9j" Dec 11 15:34:12 crc kubenswrapper[5050]: I1211 15:34:12.963046 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88c56e95-bf88-47a2-9c36-63f9092746c9-config\") pod \"88c56e95-bf88-47a2-9c36-63f9092746c9\" (UID: \"88c56e95-bf88-47a2-9c36-63f9092746c9\") " Dec 11 15:34:12 crc kubenswrapper[5050]: I1211 15:34:12.963405 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5g5f5\" (UniqueName: \"kubernetes.io/projected/88c56e95-bf88-47a2-9c36-63f9092746c9-kube-api-access-5g5f5\") pod \"88c56e95-bf88-47a2-9c36-63f9092746c9\" (UID: \"88c56e95-bf88-47a2-9c36-63f9092746c9\") " Dec 11 15:34:12 crc kubenswrapper[5050]: I1211 15:34:12.963472 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/88c56e95-bf88-47a2-9c36-63f9092746c9-ovsdbserver-nb\") pod \"88c56e95-bf88-47a2-9c36-63f9092746c9\" (UID: \"88c56e95-bf88-47a2-9c36-63f9092746c9\") " Dec 11 15:34:12 crc kubenswrapper[5050]: I1211 15:34:12.963597 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/88c56e95-bf88-47a2-9c36-63f9092746c9-ovsdbserver-sb\") pod \"88c56e95-bf88-47a2-9c36-63f9092746c9\" (UID: \"88c56e95-bf88-47a2-9c36-63f9092746c9\") " Dec 11 15:34:12 crc kubenswrapper[5050]: I1211 15:34:12.963654 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/88c56e95-bf88-47a2-9c36-63f9092746c9-dns-svc\") pod \"88c56e95-bf88-47a2-9c36-63f9092746c9\" (UID: \"88c56e95-bf88-47a2-9c36-63f9092746c9\") " Dec 11 15:34:12 crc kubenswrapper[5050]: I1211 15:34:12.969464 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88c56e95-bf88-47a2-9c36-63f9092746c9-kube-api-access-5g5f5" (OuterVolumeSpecName: "kube-api-access-5g5f5") pod "88c56e95-bf88-47a2-9c36-63f9092746c9" (UID: "88c56e95-bf88-47a2-9c36-63f9092746c9"). InnerVolumeSpecName "kube-api-access-5g5f5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 15:34:13 crc kubenswrapper[5050]: I1211 15:34:13.021116 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/88c56e95-bf88-47a2-9c36-63f9092746c9-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "88c56e95-bf88-47a2-9c36-63f9092746c9" (UID: "88c56e95-bf88-47a2-9c36-63f9092746c9"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 15:34:13 crc kubenswrapper[5050]: I1211 15:34:13.024585 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/88c56e95-bf88-47a2-9c36-63f9092746c9-config" (OuterVolumeSpecName: "config") pod "88c56e95-bf88-47a2-9c36-63f9092746c9" (UID: "88c56e95-bf88-47a2-9c36-63f9092746c9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 15:34:13 crc kubenswrapper[5050]: I1211 15:34:13.025640 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/88c56e95-bf88-47a2-9c36-63f9092746c9-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "88c56e95-bf88-47a2-9c36-63f9092746c9" (UID: "88c56e95-bf88-47a2-9c36-63f9092746c9"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 15:34:13 crc kubenswrapper[5050]: I1211 15:34:13.026887 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/88c56e95-bf88-47a2-9c36-63f9092746c9-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "88c56e95-bf88-47a2-9c36-63f9092746c9" (UID: "88c56e95-bf88-47a2-9c36-63f9092746c9"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 15:34:13 crc kubenswrapper[5050]: I1211 15:34:13.066765 5050 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88c56e95-bf88-47a2-9c36-63f9092746c9-config\") on node \"crc\" DevicePath \"\"" Dec 11 15:34:13 crc kubenswrapper[5050]: I1211 15:34:13.066808 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5g5f5\" (UniqueName: \"kubernetes.io/projected/88c56e95-bf88-47a2-9c36-63f9092746c9-kube-api-access-5g5f5\") on node \"crc\" DevicePath \"\"" Dec 11 15:34:13 crc kubenswrapper[5050]: I1211 15:34:13.066821 5050 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/88c56e95-bf88-47a2-9c36-63f9092746c9-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 11 15:34:13 crc kubenswrapper[5050]: I1211 15:34:13.066830 5050 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/88c56e95-bf88-47a2-9c36-63f9092746c9-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 11 15:34:13 crc kubenswrapper[5050]: I1211 15:34:13.066838 5050 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/88c56e95-bf88-47a2-9c36-63f9092746c9-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 11 15:34:13 crc kubenswrapper[5050]: I1211 15:34:13.429974 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fc54567-bwk9j" event={"ID":"88c56e95-bf88-47a2-9c36-63f9092746c9","Type":"ContainerDied","Data":"70d0cf17706a8e3c8e6e0d9f8a7cb955f69b519e62a30f01baa6ffba3d3cf71f"} Dec 11 15:34:13 crc kubenswrapper[5050]: I1211 15:34:13.430000 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-b8fc54567-bwk9j" Dec 11 15:34:13 crc kubenswrapper[5050]: I1211 15:34:13.430102 5050 scope.go:117] "RemoveContainer" containerID="336110a6025e6d0b53ff8f886b6d86222090ad7a85dd7af0ba57ecc585ec26a9" Dec 11 15:34:13 crc kubenswrapper[5050]: I1211 15:34:13.468834 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8fc54567-bwk9j"] Dec 11 15:34:13 crc kubenswrapper[5050]: I1211 15:34:13.479661 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-b8fc54567-bwk9j"] Dec 11 15:34:13 crc kubenswrapper[5050]: I1211 15:34:13.559561 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="88c56e95-bf88-47a2-9c36-63f9092746c9" path="/var/lib/kubelet/pods/88c56e95-bf88-47a2-9c36-63f9092746c9/volumes" Dec 11 15:34:13 crc kubenswrapper[5050]: I1211 15:34:13.706291 5050 scope.go:117] "RemoveContainer" containerID="15e413a3a05acf5a23d5ae4d8f76ea61da79e0f5d6ed7933204f75b733537e8a" Dec 11 15:34:14 crc kubenswrapper[5050]: I1211 15:34:14.796017 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 11 15:34:14 crc kubenswrapper[5050]: I1211 15:34:14.797320 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4f789e1f-2171-4126-baee-8507b4411dbb" containerName="ceilometer-central-agent" containerID="cri-o://f958aa1522472d79dcdf0daf0cba7e2e0b8ade5eca4bf022d39c6e775e12abb2" gracePeriod=30 Dec 11 15:34:14 crc kubenswrapper[5050]: I1211 15:34:14.797476 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4f789e1f-2171-4126-baee-8507b4411dbb" containerName="sg-core" containerID="cri-o://7f24b2bebdda17df7d919c4a53800aa15481b7f4278bdd020fa13306048a2873" gracePeriod=30 Dec 11 15:34:14 crc kubenswrapper[5050]: I1211 15:34:14.797533 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4f789e1f-2171-4126-baee-8507b4411dbb" containerName="proxy-httpd" containerID="cri-o://3fbf19dabea36cc7a385d0fbcc8915b21baa754a31df40dd82f2ef4000c00f60" gracePeriod=30 Dec 11 15:34:14 crc kubenswrapper[5050]: I1211 15:34:14.797476 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4f789e1f-2171-4126-baee-8507b4411dbb" containerName="ceilometer-notification-agent" containerID="cri-o://22b1dfb4a5657b19c2f23bf8514de39b07b2810851f4ca44610259f4c81cb7d7" gracePeriod=30 Dec 11 15:34:15 crc kubenswrapper[5050]: I1211 15:34:15.475068 5050 generic.go:334] "Generic (PLEG): container finished" podID="4f789e1f-2171-4126-baee-8507b4411dbb" containerID="3fbf19dabea36cc7a385d0fbcc8915b21baa754a31df40dd82f2ef4000c00f60" exitCode=0 Dec 11 15:34:15 crc kubenswrapper[5050]: I1211 15:34:15.475372 5050 generic.go:334] "Generic (PLEG): container finished" podID="4f789e1f-2171-4126-baee-8507b4411dbb" containerID="7f24b2bebdda17df7d919c4a53800aa15481b7f4278bdd020fa13306048a2873" exitCode=2 Dec 11 15:34:15 crc kubenswrapper[5050]: I1211 15:34:15.475384 5050 generic.go:334] "Generic (PLEG): container finished" podID="4f789e1f-2171-4126-baee-8507b4411dbb" containerID="f958aa1522472d79dcdf0daf0cba7e2e0b8ade5eca4bf022d39c6e775e12abb2" exitCode=0 Dec 11 15:34:15 crc kubenswrapper[5050]: I1211 15:34:15.475127 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"4f789e1f-2171-4126-baee-8507b4411dbb","Type":"ContainerDied","Data":"3fbf19dabea36cc7a385d0fbcc8915b21baa754a31df40dd82f2ef4000c00f60"} Dec 11 15:34:15 crc kubenswrapper[5050]: I1211 15:34:15.475424 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4f789e1f-2171-4126-baee-8507b4411dbb","Type":"ContainerDied","Data":"7f24b2bebdda17df7d919c4a53800aa15481b7f4278bdd020fa13306048a2873"} Dec 11 15:34:15 crc kubenswrapper[5050]: I1211 15:34:15.475448 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4f789e1f-2171-4126-baee-8507b4411dbb","Type":"ContainerDied","Data":"f958aa1522472d79dcdf0daf0cba7e2e0b8ade5eca4bf022d39c6e775e12abb2"} Dec 11 15:34:17 crc kubenswrapper[5050]: I1211 15:34:17.502686 5050 generic.go:334] "Generic (PLEG): container finished" podID="4f789e1f-2171-4126-baee-8507b4411dbb" containerID="22b1dfb4a5657b19c2f23bf8514de39b07b2810851f4ca44610259f4c81cb7d7" exitCode=0 Dec 11 15:34:17 crc kubenswrapper[5050]: I1211 15:34:17.502749 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4f789e1f-2171-4126-baee-8507b4411dbb","Type":"ContainerDied","Data":"22b1dfb4a5657b19c2f23bf8514de39b07b2810851f4ca44610259f4c81cb7d7"} Dec 11 15:34:17 crc kubenswrapper[5050]: I1211 15:34:17.503057 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4f789e1f-2171-4126-baee-8507b4411dbb","Type":"ContainerDied","Data":"685da66938a11245ece68d2a553c447ebefc63fee91c411bc185edf566daba6e"} Dec 11 15:34:17 crc kubenswrapper[5050]: I1211 15:34:17.503073 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="685da66938a11245ece68d2a553c447ebefc63fee91c411bc185edf566daba6e" Dec 11 15:34:17 crc kubenswrapper[5050]: I1211 15:34:17.522745 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 11 15:34:17 crc kubenswrapper[5050]: I1211 15:34:17.671893 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hw888\" (UniqueName: \"kubernetes.io/projected/4f789e1f-2171-4126-baee-8507b4411dbb-kube-api-access-hw888\") pod \"4f789e1f-2171-4126-baee-8507b4411dbb\" (UID: \"4f789e1f-2171-4126-baee-8507b4411dbb\") " Dec 11 15:34:17 crc kubenswrapper[5050]: I1211 15:34:17.671946 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4f789e1f-2171-4126-baee-8507b4411dbb-scripts\") pod \"4f789e1f-2171-4126-baee-8507b4411dbb\" (UID: \"4f789e1f-2171-4126-baee-8507b4411dbb\") " Dec 11 15:34:17 crc kubenswrapper[5050]: I1211 15:34:17.671965 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4f789e1f-2171-4126-baee-8507b4411dbb-sg-core-conf-yaml\") pod \"4f789e1f-2171-4126-baee-8507b4411dbb\" (UID: \"4f789e1f-2171-4126-baee-8507b4411dbb\") " Dec 11 15:34:17 crc kubenswrapper[5050]: I1211 15:34:17.672000 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f789e1f-2171-4126-baee-8507b4411dbb-config-data\") pod \"4f789e1f-2171-4126-baee-8507b4411dbb\" (UID: \"4f789e1f-2171-4126-baee-8507b4411dbb\") " Dec 11 15:34:17 crc kubenswrapper[5050]: I1211 15:34:17.672065 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4f789e1f-2171-4126-baee-8507b4411dbb-run-httpd\") pod \"4f789e1f-2171-4126-baee-8507b4411dbb\" (UID: \"4f789e1f-2171-4126-baee-8507b4411dbb\") " Dec 11 15:34:17 crc kubenswrapper[5050]: I1211 15:34:17.672235 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4f789e1f-2171-4126-baee-8507b4411dbb-log-httpd\") pod \"4f789e1f-2171-4126-baee-8507b4411dbb\" (UID: \"4f789e1f-2171-4126-baee-8507b4411dbb\") " Dec 11 15:34:17 crc kubenswrapper[5050]: I1211 15:34:17.672278 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f789e1f-2171-4126-baee-8507b4411dbb-combined-ca-bundle\") pod \"4f789e1f-2171-4126-baee-8507b4411dbb\" (UID: \"4f789e1f-2171-4126-baee-8507b4411dbb\") " Dec 11 15:34:17 crc kubenswrapper[5050]: I1211 15:34:17.672997 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4f789e1f-2171-4126-baee-8507b4411dbb-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "4f789e1f-2171-4126-baee-8507b4411dbb" (UID: "4f789e1f-2171-4126-baee-8507b4411dbb"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 15:34:17 crc kubenswrapper[5050]: I1211 15:34:17.673240 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4f789e1f-2171-4126-baee-8507b4411dbb-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "4f789e1f-2171-4126-baee-8507b4411dbb" (UID: "4f789e1f-2171-4126-baee-8507b4411dbb"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 15:34:17 crc kubenswrapper[5050]: I1211 15:34:17.681379 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f789e1f-2171-4126-baee-8507b4411dbb-kube-api-access-hw888" (OuterVolumeSpecName: "kube-api-access-hw888") pod "4f789e1f-2171-4126-baee-8507b4411dbb" (UID: "4f789e1f-2171-4126-baee-8507b4411dbb"). InnerVolumeSpecName "kube-api-access-hw888". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 15:34:17 crc kubenswrapper[5050]: I1211 15:34:17.688589 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f789e1f-2171-4126-baee-8507b4411dbb-scripts" (OuterVolumeSpecName: "scripts") pod "4f789e1f-2171-4126-baee-8507b4411dbb" (UID: "4f789e1f-2171-4126-baee-8507b4411dbb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 15:34:17 crc kubenswrapper[5050]: I1211 15:34:17.709415 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f789e1f-2171-4126-baee-8507b4411dbb-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "4f789e1f-2171-4126-baee-8507b4411dbb" (UID: "4f789e1f-2171-4126-baee-8507b4411dbb"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 15:34:17 crc kubenswrapper[5050]: I1211 15:34:17.774737 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hw888\" (UniqueName: \"kubernetes.io/projected/4f789e1f-2171-4126-baee-8507b4411dbb-kube-api-access-hw888\") on node \"crc\" DevicePath \"\"" Dec 11 15:34:17 crc kubenswrapper[5050]: I1211 15:34:17.775151 5050 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4f789e1f-2171-4126-baee-8507b4411dbb-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 15:34:17 crc kubenswrapper[5050]: I1211 15:34:17.775160 5050 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4f789e1f-2171-4126-baee-8507b4411dbb-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 11 15:34:17 crc kubenswrapper[5050]: I1211 15:34:17.775168 5050 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4f789e1f-2171-4126-baee-8507b4411dbb-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 11 15:34:17 crc kubenswrapper[5050]: I1211 15:34:17.775178 5050 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4f789e1f-2171-4126-baee-8507b4411dbb-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 11 15:34:17 crc kubenswrapper[5050]: I1211 15:34:17.781167 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f789e1f-2171-4126-baee-8507b4411dbb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4f789e1f-2171-4126-baee-8507b4411dbb" (UID: "4f789e1f-2171-4126-baee-8507b4411dbb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 15:34:17 crc kubenswrapper[5050]: I1211 15:34:17.796310 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f789e1f-2171-4126-baee-8507b4411dbb-config-data" (OuterVolumeSpecName: "config-data") pod "4f789e1f-2171-4126-baee-8507b4411dbb" (UID: "4f789e1f-2171-4126-baee-8507b4411dbb"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 15:34:17 crc kubenswrapper[5050]: I1211 15:34:17.877237 5050 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f789e1f-2171-4126-baee-8507b4411dbb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 15:34:17 crc kubenswrapper[5050]: I1211 15:34:17.877278 5050 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f789e1f-2171-4126-baee-8507b4411dbb-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 15:34:18 crc kubenswrapper[5050]: I1211 15:34:18.512707 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 11 15:34:18 crc kubenswrapper[5050]: I1211 15:34:18.549469 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 11 15:34:18 crc kubenswrapper[5050]: I1211 15:34:18.558109 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 11 15:34:18 crc kubenswrapper[5050]: I1211 15:34:18.573778 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 11 15:34:18 crc kubenswrapper[5050]: E1211 15:34:18.574297 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88c56e95-bf88-47a2-9c36-63f9092746c9" containerName="init" Dec 11 15:34:18 crc kubenswrapper[5050]: I1211 15:34:18.574319 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="88c56e95-bf88-47a2-9c36-63f9092746c9" containerName="init" Dec 11 15:34:18 crc kubenswrapper[5050]: E1211 15:34:18.574371 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88c56e95-bf88-47a2-9c36-63f9092746c9" containerName="dnsmasq-dns" Dec 11 15:34:18 crc kubenswrapper[5050]: I1211 15:34:18.574379 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="88c56e95-bf88-47a2-9c36-63f9092746c9" containerName="dnsmasq-dns" Dec 11 15:34:18 crc kubenswrapper[5050]: E1211 15:34:18.574409 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f789e1f-2171-4126-baee-8507b4411dbb" containerName="ceilometer-notification-agent" Dec 11 15:34:18 crc kubenswrapper[5050]: I1211 15:34:18.574417 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f789e1f-2171-4126-baee-8507b4411dbb" containerName="ceilometer-notification-agent" Dec 11 15:34:18 crc kubenswrapper[5050]: E1211 15:34:18.574433 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f789e1f-2171-4126-baee-8507b4411dbb" containerName="ceilometer-central-agent" Dec 11 15:34:18 crc kubenswrapper[5050]: I1211 15:34:18.574440 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f789e1f-2171-4126-baee-8507b4411dbb" containerName="ceilometer-central-agent" Dec 11 15:34:18 crc kubenswrapper[5050]: E1211 15:34:18.574458 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f789e1f-2171-4126-baee-8507b4411dbb" containerName="sg-core" Dec 11 15:34:18 crc kubenswrapper[5050]: I1211 15:34:18.574466 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f789e1f-2171-4126-baee-8507b4411dbb" containerName="sg-core" Dec 11 15:34:18 crc kubenswrapper[5050]: E1211 15:34:18.574529 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f789e1f-2171-4126-baee-8507b4411dbb" containerName="proxy-httpd" Dec 11 15:34:18 crc kubenswrapper[5050]: I1211 15:34:18.574538 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f789e1f-2171-4126-baee-8507b4411dbb" 
containerName="proxy-httpd" Dec 11 15:34:18 crc kubenswrapper[5050]: I1211 15:34:18.574760 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f789e1f-2171-4126-baee-8507b4411dbb" containerName="ceilometer-notification-agent" Dec 11 15:34:18 crc kubenswrapper[5050]: I1211 15:34:18.574783 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="88c56e95-bf88-47a2-9c36-63f9092746c9" containerName="dnsmasq-dns" Dec 11 15:34:18 crc kubenswrapper[5050]: I1211 15:34:18.574799 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f789e1f-2171-4126-baee-8507b4411dbb" containerName="ceilometer-central-agent" Dec 11 15:34:18 crc kubenswrapper[5050]: I1211 15:34:18.574822 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f789e1f-2171-4126-baee-8507b4411dbb" containerName="proxy-httpd" Dec 11 15:34:18 crc kubenswrapper[5050]: I1211 15:34:18.574831 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f789e1f-2171-4126-baee-8507b4411dbb" containerName="sg-core" Dec 11 15:34:18 crc kubenswrapper[5050]: I1211 15:34:18.577795 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 11 15:34:18 crc kubenswrapper[5050]: I1211 15:34:18.579847 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 11 15:34:18 crc kubenswrapper[5050]: I1211 15:34:18.580001 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 11 15:34:18 crc kubenswrapper[5050]: I1211 15:34:18.593703 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 11 15:34:18 crc kubenswrapper[5050]: I1211 15:34:18.693908 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fef8d631-c968-4ccd-92ec-e6fc5a2f6731-config-data\") pod \"ceilometer-0\" (UID: \"fef8d631-c968-4ccd-92ec-e6fc5a2f6731\") " pod="openstack/ceilometer-0" Dec 11 15:34:18 crc kubenswrapper[5050]: I1211 15:34:18.694529 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fef8d631-c968-4ccd-92ec-e6fc5a2f6731-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fef8d631-c968-4ccd-92ec-e6fc5a2f6731\") " pod="openstack/ceilometer-0" Dec 11 15:34:18 crc kubenswrapper[5050]: I1211 15:34:18.694582 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fef8d631-c968-4ccd-92ec-e6fc5a2f6731-log-httpd\") pod \"ceilometer-0\" (UID: \"fef8d631-c968-4ccd-92ec-e6fc5a2f6731\") " pod="openstack/ceilometer-0" Dec 11 15:34:18 crc kubenswrapper[5050]: I1211 15:34:18.694618 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dllww\" (UniqueName: \"kubernetes.io/projected/fef8d631-c968-4ccd-92ec-e6fc5a2f6731-kube-api-access-dllww\") pod \"ceilometer-0\" (UID: \"fef8d631-c968-4ccd-92ec-e6fc5a2f6731\") " pod="openstack/ceilometer-0" Dec 11 15:34:18 crc kubenswrapper[5050]: I1211 15:34:18.694700 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fef8d631-c968-4ccd-92ec-e6fc5a2f6731-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fef8d631-c968-4ccd-92ec-e6fc5a2f6731\") " 
pod="openstack/ceilometer-0" Dec 11 15:34:18 crc kubenswrapper[5050]: I1211 15:34:18.694728 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fef8d631-c968-4ccd-92ec-e6fc5a2f6731-run-httpd\") pod \"ceilometer-0\" (UID: \"fef8d631-c968-4ccd-92ec-e6fc5a2f6731\") " pod="openstack/ceilometer-0" Dec 11 15:34:18 crc kubenswrapper[5050]: I1211 15:34:18.694771 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fef8d631-c968-4ccd-92ec-e6fc5a2f6731-scripts\") pod \"ceilometer-0\" (UID: \"fef8d631-c968-4ccd-92ec-e6fc5a2f6731\") " pod="openstack/ceilometer-0" Dec 11 15:34:18 crc kubenswrapper[5050]: I1211 15:34:18.796859 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fef8d631-c968-4ccd-92ec-e6fc5a2f6731-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fef8d631-c968-4ccd-92ec-e6fc5a2f6731\") " pod="openstack/ceilometer-0" Dec 11 15:34:18 crc kubenswrapper[5050]: I1211 15:34:18.796983 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fef8d631-c968-4ccd-92ec-e6fc5a2f6731-log-httpd\") pod \"ceilometer-0\" (UID: \"fef8d631-c968-4ccd-92ec-e6fc5a2f6731\") " pod="openstack/ceilometer-0" Dec 11 15:34:18 crc kubenswrapper[5050]: I1211 15:34:18.797071 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dllww\" (UniqueName: \"kubernetes.io/projected/fef8d631-c968-4ccd-92ec-e6fc5a2f6731-kube-api-access-dllww\") pod \"ceilometer-0\" (UID: \"fef8d631-c968-4ccd-92ec-e6fc5a2f6731\") " pod="openstack/ceilometer-0" Dec 11 15:34:18 crc kubenswrapper[5050]: I1211 15:34:18.797214 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fef8d631-c968-4ccd-92ec-e6fc5a2f6731-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fef8d631-c968-4ccd-92ec-e6fc5a2f6731\") " pod="openstack/ceilometer-0" Dec 11 15:34:18 crc kubenswrapper[5050]: I1211 15:34:18.797270 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fef8d631-c968-4ccd-92ec-e6fc5a2f6731-run-httpd\") pod \"ceilometer-0\" (UID: \"fef8d631-c968-4ccd-92ec-e6fc5a2f6731\") " pod="openstack/ceilometer-0" Dec 11 15:34:18 crc kubenswrapper[5050]: I1211 15:34:18.797347 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fef8d631-c968-4ccd-92ec-e6fc5a2f6731-scripts\") pod \"ceilometer-0\" (UID: \"fef8d631-c968-4ccd-92ec-e6fc5a2f6731\") " pod="openstack/ceilometer-0" Dec 11 15:34:18 crc kubenswrapper[5050]: I1211 15:34:18.797395 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fef8d631-c968-4ccd-92ec-e6fc5a2f6731-config-data\") pod \"ceilometer-0\" (UID: \"fef8d631-c968-4ccd-92ec-e6fc5a2f6731\") " pod="openstack/ceilometer-0" Dec 11 15:34:18 crc kubenswrapper[5050]: I1211 15:34:18.797597 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fef8d631-c968-4ccd-92ec-e6fc5a2f6731-log-httpd\") pod \"ceilometer-0\" (UID: \"fef8d631-c968-4ccd-92ec-e6fc5a2f6731\") " 
pod="openstack/ceilometer-0" Dec 11 15:34:18 crc kubenswrapper[5050]: I1211 15:34:18.799109 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fef8d631-c968-4ccd-92ec-e6fc5a2f6731-run-httpd\") pod \"ceilometer-0\" (UID: \"fef8d631-c968-4ccd-92ec-e6fc5a2f6731\") " pod="openstack/ceilometer-0" Dec 11 15:34:18 crc kubenswrapper[5050]: I1211 15:34:18.800807 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fef8d631-c968-4ccd-92ec-e6fc5a2f6731-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fef8d631-c968-4ccd-92ec-e6fc5a2f6731\") " pod="openstack/ceilometer-0" Dec 11 15:34:18 crc kubenswrapper[5050]: I1211 15:34:18.808917 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fef8d631-c968-4ccd-92ec-e6fc5a2f6731-scripts\") pod \"ceilometer-0\" (UID: \"fef8d631-c968-4ccd-92ec-e6fc5a2f6731\") " pod="openstack/ceilometer-0" Dec 11 15:34:18 crc kubenswrapper[5050]: I1211 15:34:18.809264 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fef8d631-c968-4ccd-92ec-e6fc5a2f6731-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fef8d631-c968-4ccd-92ec-e6fc5a2f6731\") " pod="openstack/ceilometer-0" Dec 11 15:34:18 crc kubenswrapper[5050]: I1211 15:34:18.810374 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fef8d631-c968-4ccd-92ec-e6fc5a2f6731-config-data\") pod \"ceilometer-0\" (UID: \"fef8d631-c968-4ccd-92ec-e6fc5a2f6731\") " pod="openstack/ceilometer-0" Dec 11 15:34:18 crc kubenswrapper[5050]: I1211 15:34:18.818440 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dllww\" (UniqueName: \"kubernetes.io/projected/fef8d631-c968-4ccd-92ec-e6fc5a2f6731-kube-api-access-dllww\") pod \"ceilometer-0\" (UID: \"fef8d631-c968-4ccd-92ec-e6fc5a2f6731\") " pod="openstack/ceilometer-0" Dec 11 15:34:18 crc kubenswrapper[5050]: I1211 15:34:18.956125 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 11 15:34:19 crc kubenswrapper[5050]: W1211 15:34:19.251866 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfef8d631_c968_4ccd_92ec_e6fc5a2f6731.slice/crio-46391b25676ce7c40ddd60d6fe6c5450ecf459e6a529bf4e92c5d2bf78e9b221 WatchSource:0}: Error finding container 46391b25676ce7c40ddd60d6fe6c5450ecf459e6a529bf4e92c5d2bf78e9b221: Status 404 returned error can't find the container with id 46391b25676ce7c40ddd60d6fe6c5450ecf459e6a529bf4e92c5d2bf78e9b221 Dec 11 15:34:19 crc kubenswrapper[5050]: I1211 15:34:19.256243 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 11 15:34:19 crc kubenswrapper[5050]: I1211 15:34:19.524848 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fef8d631-c968-4ccd-92ec-e6fc5a2f6731","Type":"ContainerStarted","Data":"46391b25676ce7c40ddd60d6fe6c5450ecf459e6a529bf4e92c5d2bf78e9b221"} Dec 11 15:34:19 crc kubenswrapper[5050]: I1211 15:34:19.577436 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4f789e1f-2171-4126-baee-8507b4411dbb" path="/var/lib/kubelet/pods/4f789e1f-2171-4126-baee-8507b4411dbb/volumes" Dec 11 15:34:20 crc kubenswrapper[5050]: I1211 15:34:20.535353 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fef8d631-c968-4ccd-92ec-e6fc5a2f6731","Type":"ContainerStarted","Data":"166c7ea8f22887bff1aac5363e204edce18d6f260bad2031a294155043ca2094"} Dec 11 15:34:21 crc kubenswrapper[5050]: I1211 15:34:21.563031 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fef8d631-c968-4ccd-92ec-e6fc5a2f6731","Type":"ContainerStarted","Data":"bfec6cc59ed05acb62e1f0f824dd77874de93bb26b4282c3f9bd4cc5ffdc26e5"} Dec 11 15:34:21 crc kubenswrapper[5050]: I1211 15:34:21.922677 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-share-share1-0" Dec 11 15:34:22 crc kubenswrapper[5050]: I1211 15:34:22.564261 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fef8d631-c968-4ccd-92ec-e6fc5a2f6731","Type":"ContainerStarted","Data":"b9b274e2d472015de4dd1c4c3f97672872088324100a2de88752780d943ad089"} Dec 11 15:34:23 crc kubenswrapper[5050]: I1211 15:34:23.497909 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-scheduler-0" Dec 11 15:34:23 crc kubenswrapper[5050]: I1211 15:34:23.526899 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-share-share1-0" Dec 11 15:34:23 crc kubenswrapper[5050]: I1211 15:34:23.576982 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fef8d631-c968-4ccd-92ec-e6fc5a2f6731","Type":"ContainerStarted","Data":"a21e794cca3499f312f6ffdd98e5ef906efabb603ad146423f45dcdbd353f652"} Dec 11 15:34:23 crc kubenswrapper[5050]: I1211 15:34:23.577511 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 11 15:34:23 crc kubenswrapper[5050]: I1211 15:34:23.600232 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.139512333 podStartE2EDuration="5.600215939s" podCreationTimestamp="2025-12-11 15:34:18 +0000 UTC" firstStartedPulling="2025-12-11 15:34:19.254336553 +0000 UTC m=+6350.098059139" 
lastFinishedPulling="2025-12-11 15:34:22.715040119 +0000 UTC m=+6353.558762745" observedRunningTime="2025-12-11 15:34:23.597651361 +0000 UTC m=+6354.441373947" watchObservedRunningTime="2025-12-11 15:34:23.600215939 +0000 UTC m=+6354.443938525" Dec 11 15:34:23 crc kubenswrapper[5050]: I1211 15:34:23.980285 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/manila-api-0" Dec 11 15:34:48 crc kubenswrapper[5050]: I1211 15:34:48.962164 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Dec 11 15:35:04 crc kubenswrapper[5050]: I1211 15:35:04.287266 5050 scope.go:117] "RemoveContainer" containerID="9e5e27425b1c055cc1dae09b33e0fffaac58d6f7f8443f27c3df126117dcf4a1" Dec 11 15:35:04 crc kubenswrapper[5050]: I1211 15:35:04.315066 5050 scope.go:117] "RemoveContainer" containerID="27626695d55b1f6bac43b5fb67e29c26b93cbf719f151929017a0e8abf60424b" Dec 11 15:35:15 crc kubenswrapper[5050]: I1211 15:35:15.159613 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-d5569cdf5-rw89v"] Dec 11 15:35:15 crc kubenswrapper[5050]: I1211 15:35:15.162319 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-d5569cdf5-rw89v" Dec 11 15:35:15 crc kubenswrapper[5050]: I1211 15:35:15.165267 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1" Dec 11 15:35:15 crc kubenswrapper[5050]: I1211 15:35:15.220448 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-d5569cdf5-rw89v"] Dec 11 15:35:15 crc kubenswrapper[5050]: I1211 15:35:15.286159 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9sl9m\" (UniqueName: \"kubernetes.io/projected/975e5b80-467b-409e-95c7-7339b86e4627-kube-api-access-9sl9m\") pod \"dnsmasq-dns-d5569cdf5-rw89v\" (UID: \"975e5b80-467b-409e-95c7-7339b86e4627\") " pod="openstack/dnsmasq-dns-d5569cdf5-rw89v" Dec 11 15:35:15 crc kubenswrapper[5050]: I1211 15:35:15.286287 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/975e5b80-467b-409e-95c7-7339b86e4627-ovsdbserver-nb\") pod \"dnsmasq-dns-d5569cdf5-rw89v\" (UID: \"975e5b80-467b-409e-95c7-7339b86e4627\") " pod="openstack/dnsmasq-dns-d5569cdf5-rw89v" Dec 11 15:35:15 crc kubenswrapper[5050]: I1211 15:35:15.286326 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/975e5b80-467b-409e-95c7-7339b86e4627-ovsdbserver-sb\") pod \"dnsmasq-dns-d5569cdf5-rw89v\" (UID: \"975e5b80-467b-409e-95c7-7339b86e4627\") " pod="openstack/dnsmasq-dns-d5569cdf5-rw89v" Dec 11 15:35:15 crc kubenswrapper[5050]: I1211 15:35:15.286420 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/975e5b80-467b-409e-95c7-7339b86e4627-dns-svc\") pod \"dnsmasq-dns-d5569cdf5-rw89v\" (UID: \"975e5b80-467b-409e-95c7-7339b86e4627\") " pod="openstack/dnsmasq-dns-d5569cdf5-rw89v" Dec 11 15:35:15 crc kubenswrapper[5050]: I1211 15:35:15.286638 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/975e5b80-467b-409e-95c7-7339b86e4627-config\") pod \"dnsmasq-dns-d5569cdf5-rw89v\" (UID: 
\"975e5b80-467b-409e-95c7-7339b86e4627\") " pod="openstack/dnsmasq-dns-d5569cdf5-rw89v" Dec 11 15:35:15 crc kubenswrapper[5050]: I1211 15:35:15.286741 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/975e5b80-467b-409e-95c7-7339b86e4627-openstack-cell1\") pod \"dnsmasq-dns-d5569cdf5-rw89v\" (UID: \"975e5b80-467b-409e-95c7-7339b86e4627\") " pod="openstack/dnsmasq-dns-d5569cdf5-rw89v" Dec 11 15:35:15 crc kubenswrapper[5050]: I1211 15:35:15.388516 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9sl9m\" (UniqueName: \"kubernetes.io/projected/975e5b80-467b-409e-95c7-7339b86e4627-kube-api-access-9sl9m\") pod \"dnsmasq-dns-d5569cdf5-rw89v\" (UID: \"975e5b80-467b-409e-95c7-7339b86e4627\") " pod="openstack/dnsmasq-dns-d5569cdf5-rw89v" Dec 11 15:35:15 crc kubenswrapper[5050]: I1211 15:35:15.388713 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/975e5b80-467b-409e-95c7-7339b86e4627-ovsdbserver-nb\") pod \"dnsmasq-dns-d5569cdf5-rw89v\" (UID: \"975e5b80-467b-409e-95c7-7339b86e4627\") " pod="openstack/dnsmasq-dns-d5569cdf5-rw89v" Dec 11 15:35:15 crc kubenswrapper[5050]: I1211 15:35:15.388758 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/975e5b80-467b-409e-95c7-7339b86e4627-ovsdbserver-sb\") pod \"dnsmasq-dns-d5569cdf5-rw89v\" (UID: \"975e5b80-467b-409e-95c7-7339b86e4627\") " pod="openstack/dnsmasq-dns-d5569cdf5-rw89v" Dec 11 15:35:15 crc kubenswrapper[5050]: I1211 15:35:15.388857 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/975e5b80-467b-409e-95c7-7339b86e4627-dns-svc\") pod \"dnsmasq-dns-d5569cdf5-rw89v\" (UID: \"975e5b80-467b-409e-95c7-7339b86e4627\") " pod="openstack/dnsmasq-dns-d5569cdf5-rw89v" Dec 11 15:35:15 crc kubenswrapper[5050]: I1211 15:35:15.388919 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/975e5b80-467b-409e-95c7-7339b86e4627-config\") pod \"dnsmasq-dns-d5569cdf5-rw89v\" (UID: \"975e5b80-467b-409e-95c7-7339b86e4627\") " pod="openstack/dnsmasq-dns-d5569cdf5-rw89v" Dec 11 15:35:15 crc kubenswrapper[5050]: I1211 15:35:15.388970 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/975e5b80-467b-409e-95c7-7339b86e4627-openstack-cell1\") pod \"dnsmasq-dns-d5569cdf5-rw89v\" (UID: \"975e5b80-467b-409e-95c7-7339b86e4627\") " pod="openstack/dnsmasq-dns-d5569cdf5-rw89v" Dec 11 15:35:15 crc kubenswrapper[5050]: I1211 15:35:15.389781 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/975e5b80-467b-409e-95c7-7339b86e4627-ovsdbserver-nb\") pod \"dnsmasq-dns-d5569cdf5-rw89v\" (UID: \"975e5b80-467b-409e-95c7-7339b86e4627\") " pod="openstack/dnsmasq-dns-d5569cdf5-rw89v" Dec 11 15:35:15 crc kubenswrapper[5050]: I1211 15:35:15.389959 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/975e5b80-467b-409e-95c7-7339b86e4627-dns-svc\") pod \"dnsmasq-dns-d5569cdf5-rw89v\" (UID: \"975e5b80-467b-409e-95c7-7339b86e4627\") " pod="openstack/dnsmasq-dns-d5569cdf5-rw89v" Dec 11 
15:35:15 crc kubenswrapper[5050]: I1211 15:35:15.390126 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/975e5b80-467b-409e-95c7-7339b86e4627-openstack-cell1\") pod \"dnsmasq-dns-d5569cdf5-rw89v\" (UID: \"975e5b80-467b-409e-95c7-7339b86e4627\") " pod="openstack/dnsmasq-dns-d5569cdf5-rw89v" Dec 11 15:35:15 crc kubenswrapper[5050]: I1211 15:35:15.390722 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/975e5b80-467b-409e-95c7-7339b86e4627-ovsdbserver-sb\") pod \"dnsmasq-dns-d5569cdf5-rw89v\" (UID: \"975e5b80-467b-409e-95c7-7339b86e4627\") " pod="openstack/dnsmasq-dns-d5569cdf5-rw89v" Dec 11 15:35:15 crc kubenswrapper[5050]: I1211 15:35:15.390820 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/975e5b80-467b-409e-95c7-7339b86e4627-config\") pod \"dnsmasq-dns-d5569cdf5-rw89v\" (UID: \"975e5b80-467b-409e-95c7-7339b86e4627\") " pod="openstack/dnsmasq-dns-d5569cdf5-rw89v" Dec 11 15:35:15 crc kubenswrapper[5050]: I1211 15:35:15.411410 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9sl9m\" (UniqueName: \"kubernetes.io/projected/975e5b80-467b-409e-95c7-7339b86e4627-kube-api-access-9sl9m\") pod \"dnsmasq-dns-d5569cdf5-rw89v\" (UID: \"975e5b80-467b-409e-95c7-7339b86e4627\") " pod="openstack/dnsmasq-dns-d5569cdf5-rw89v" Dec 11 15:35:15 crc kubenswrapper[5050]: I1211 15:35:15.484480 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-d5569cdf5-rw89v" Dec 11 15:35:16 crc kubenswrapper[5050]: I1211 15:35:16.064940 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-d5569cdf5-rw89v"] Dec 11 15:35:16 crc kubenswrapper[5050]: I1211 15:35:16.095460 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d5569cdf5-rw89v" event={"ID":"975e5b80-467b-409e-95c7-7339b86e4627","Type":"ContainerStarted","Data":"bcadec739545d4753997551f6d0a458d580aab4cc43f150bd802ec1c7f42853a"} Dec 11 15:35:17 crc kubenswrapper[5050]: I1211 15:35:17.107065 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d5569cdf5-rw89v" event={"ID":"975e5b80-467b-409e-95c7-7339b86e4627","Type":"ContainerDied","Data":"a042e4776a5186c62f34d9cf34a3bb0dc1a23e92835d14206533482c85ac1b49"} Dec 11 15:35:17 crc kubenswrapper[5050]: I1211 15:35:17.107005 5050 generic.go:334] "Generic (PLEG): container finished" podID="975e5b80-467b-409e-95c7-7339b86e4627" containerID="a042e4776a5186c62f34d9cf34a3bb0dc1a23e92835d14206533482c85ac1b49" exitCode=0 Dec 11 15:35:18 crc kubenswrapper[5050]: I1211 15:35:18.128356 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d5569cdf5-rw89v" event={"ID":"975e5b80-467b-409e-95c7-7339b86e4627","Type":"ContainerStarted","Data":"140023bf2bcc04a98abc646555e2dae3e233570e234b1081a36e0db242f995de"} Dec 11 15:35:18 crc kubenswrapper[5050]: I1211 15:35:18.128932 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-d5569cdf5-rw89v" Dec 11 15:35:18 crc kubenswrapper[5050]: I1211 15:35:18.154272 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-d5569cdf5-rw89v" podStartSLOduration=3.15425756 podStartE2EDuration="3.15425756s" podCreationTimestamp="2025-12-11 15:35:15 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 15:35:18.153026987 +0000 UTC m=+6408.996749583" watchObservedRunningTime="2025-12-11 15:35:18.15425756 +0000 UTC m=+6408.997980146" Dec 11 15:35:25 crc kubenswrapper[5050]: I1211 15:35:25.486283 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-d5569cdf5-rw89v" Dec 11 15:35:25 crc kubenswrapper[5050]: I1211 15:35:25.581109 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-d495dfb55-bp7n6"] Dec 11 15:35:25 crc kubenswrapper[5050]: I1211 15:35:25.581372 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-d495dfb55-bp7n6" podUID="2412d0a8-7e8f-4fae-915b-c794621f9655" containerName="dnsmasq-dns" containerID="cri-o://b47ef13d30c6603edd9078e5b603930cd327a27621b2f004ae2f9949d960884b" gracePeriod=10 Dec 11 15:35:25 crc kubenswrapper[5050]: I1211 15:35:25.790238 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7fb668979f-9bbkc"] Dec 11 15:35:25 crc kubenswrapper[5050]: I1211 15:35:25.793234 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fb668979f-9bbkc" Dec 11 15:35:25 crc kubenswrapper[5050]: I1211 15:35:25.816338 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fb668979f-9bbkc"] Dec 11 15:35:25 crc kubenswrapper[5050]: I1211 15:35:25.947194 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/c77c08a9-c188-4d22-b635-1e930b6e086e-openstack-cell1\") pod \"dnsmasq-dns-7fb668979f-9bbkc\" (UID: \"c77c08a9-c188-4d22-b635-1e930b6e086e\") " pod="openstack/dnsmasq-dns-7fb668979f-9bbkc" Dec 11 15:35:25 crc kubenswrapper[5050]: I1211 15:35:25.947490 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c77c08a9-c188-4d22-b635-1e930b6e086e-dns-svc\") pod \"dnsmasq-dns-7fb668979f-9bbkc\" (UID: \"c77c08a9-c188-4d22-b635-1e930b6e086e\") " pod="openstack/dnsmasq-dns-7fb668979f-9bbkc" Dec 11 15:35:25 crc kubenswrapper[5050]: I1211 15:35:25.947599 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n79nt\" (UniqueName: \"kubernetes.io/projected/c77c08a9-c188-4d22-b635-1e930b6e086e-kube-api-access-n79nt\") pod \"dnsmasq-dns-7fb668979f-9bbkc\" (UID: \"c77c08a9-c188-4d22-b635-1e930b6e086e\") " pod="openstack/dnsmasq-dns-7fb668979f-9bbkc" Dec 11 15:35:25 crc kubenswrapper[5050]: I1211 15:35:25.947702 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c77c08a9-c188-4d22-b635-1e930b6e086e-ovsdbserver-nb\") pod \"dnsmasq-dns-7fb668979f-9bbkc\" (UID: \"c77c08a9-c188-4d22-b635-1e930b6e086e\") " pod="openstack/dnsmasq-dns-7fb668979f-9bbkc" Dec 11 15:35:25 crc kubenswrapper[5050]: I1211 15:35:25.947803 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c77c08a9-c188-4d22-b635-1e930b6e086e-config\") pod \"dnsmasq-dns-7fb668979f-9bbkc\" (UID: \"c77c08a9-c188-4d22-b635-1e930b6e086e\") " pod="openstack/dnsmasq-dns-7fb668979f-9bbkc" Dec 11 15:35:25 crc kubenswrapper[5050]: I1211 
15:35:25.948039 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c77c08a9-c188-4d22-b635-1e930b6e086e-ovsdbserver-sb\") pod \"dnsmasq-dns-7fb668979f-9bbkc\" (UID: \"c77c08a9-c188-4d22-b635-1e930b6e086e\") " pod="openstack/dnsmasq-dns-7fb668979f-9bbkc" Dec 11 15:35:26 crc kubenswrapper[5050]: I1211 15:35:26.050076 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c77c08a9-c188-4d22-b635-1e930b6e086e-ovsdbserver-sb\") pod \"dnsmasq-dns-7fb668979f-9bbkc\" (UID: \"c77c08a9-c188-4d22-b635-1e930b6e086e\") " pod="openstack/dnsmasq-dns-7fb668979f-9bbkc" Dec 11 15:35:26 crc kubenswrapper[5050]: I1211 15:35:26.050161 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/c77c08a9-c188-4d22-b635-1e930b6e086e-openstack-cell1\") pod \"dnsmasq-dns-7fb668979f-9bbkc\" (UID: \"c77c08a9-c188-4d22-b635-1e930b6e086e\") " pod="openstack/dnsmasq-dns-7fb668979f-9bbkc" Dec 11 15:35:26 crc kubenswrapper[5050]: I1211 15:35:26.050226 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c77c08a9-c188-4d22-b635-1e930b6e086e-dns-svc\") pod \"dnsmasq-dns-7fb668979f-9bbkc\" (UID: \"c77c08a9-c188-4d22-b635-1e930b6e086e\") " pod="openstack/dnsmasq-dns-7fb668979f-9bbkc" Dec 11 15:35:26 crc kubenswrapper[5050]: I1211 15:35:26.050267 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n79nt\" (UniqueName: \"kubernetes.io/projected/c77c08a9-c188-4d22-b635-1e930b6e086e-kube-api-access-n79nt\") pod \"dnsmasq-dns-7fb668979f-9bbkc\" (UID: \"c77c08a9-c188-4d22-b635-1e930b6e086e\") " pod="openstack/dnsmasq-dns-7fb668979f-9bbkc" Dec 11 15:35:26 crc kubenswrapper[5050]: I1211 15:35:26.050290 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c77c08a9-c188-4d22-b635-1e930b6e086e-ovsdbserver-nb\") pod \"dnsmasq-dns-7fb668979f-9bbkc\" (UID: \"c77c08a9-c188-4d22-b635-1e930b6e086e\") " pod="openstack/dnsmasq-dns-7fb668979f-9bbkc" Dec 11 15:35:26 crc kubenswrapper[5050]: I1211 15:35:26.050311 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c77c08a9-c188-4d22-b635-1e930b6e086e-config\") pod \"dnsmasq-dns-7fb668979f-9bbkc\" (UID: \"c77c08a9-c188-4d22-b635-1e930b6e086e\") " pod="openstack/dnsmasq-dns-7fb668979f-9bbkc" Dec 11 15:35:26 crc kubenswrapper[5050]: I1211 15:35:26.051641 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c77c08a9-c188-4d22-b635-1e930b6e086e-ovsdbserver-sb\") pod \"dnsmasq-dns-7fb668979f-9bbkc\" (UID: \"c77c08a9-c188-4d22-b635-1e930b6e086e\") " pod="openstack/dnsmasq-dns-7fb668979f-9bbkc" Dec 11 15:35:26 crc kubenswrapper[5050]: I1211 15:35:26.051782 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c77c08a9-c188-4d22-b635-1e930b6e086e-config\") pod \"dnsmasq-dns-7fb668979f-9bbkc\" (UID: \"c77c08a9-c188-4d22-b635-1e930b6e086e\") " pod="openstack/dnsmasq-dns-7fb668979f-9bbkc" Dec 11 15:35:26 crc kubenswrapper[5050]: I1211 15:35:26.052215 5050 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c77c08a9-c188-4d22-b635-1e930b6e086e-dns-svc\") pod \"dnsmasq-dns-7fb668979f-9bbkc\" (UID: \"c77c08a9-c188-4d22-b635-1e930b6e086e\") " pod="openstack/dnsmasq-dns-7fb668979f-9bbkc" Dec 11 15:35:26 crc kubenswrapper[5050]: I1211 15:35:26.052351 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c77c08a9-c188-4d22-b635-1e930b6e086e-ovsdbserver-nb\") pod \"dnsmasq-dns-7fb668979f-9bbkc\" (UID: \"c77c08a9-c188-4d22-b635-1e930b6e086e\") " pod="openstack/dnsmasq-dns-7fb668979f-9bbkc" Dec 11 15:35:26 crc kubenswrapper[5050]: I1211 15:35:26.052535 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/c77c08a9-c188-4d22-b635-1e930b6e086e-openstack-cell1\") pod \"dnsmasq-dns-7fb668979f-9bbkc\" (UID: \"c77c08a9-c188-4d22-b635-1e930b6e086e\") " pod="openstack/dnsmasq-dns-7fb668979f-9bbkc" Dec 11 15:35:26 crc kubenswrapper[5050]: I1211 15:35:26.072131 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n79nt\" (UniqueName: \"kubernetes.io/projected/c77c08a9-c188-4d22-b635-1e930b6e086e-kube-api-access-n79nt\") pod \"dnsmasq-dns-7fb668979f-9bbkc\" (UID: \"c77c08a9-c188-4d22-b635-1e930b6e086e\") " pod="openstack/dnsmasq-dns-7fb668979f-9bbkc" Dec 11 15:35:26 crc kubenswrapper[5050]: I1211 15:35:26.144783 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fb668979f-9bbkc" Dec 11 15:35:26 crc kubenswrapper[5050]: I1211 15:35:26.212533 5050 generic.go:334] "Generic (PLEG): container finished" podID="2412d0a8-7e8f-4fae-915b-c794621f9655" containerID="b47ef13d30c6603edd9078e5b603930cd327a27621b2f004ae2f9949d960884b" exitCode=0 Dec 11 15:35:26 crc kubenswrapper[5050]: I1211 15:35:26.212576 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d495dfb55-bp7n6" event={"ID":"2412d0a8-7e8f-4fae-915b-c794621f9655","Type":"ContainerDied","Data":"b47ef13d30c6603edd9078e5b603930cd327a27621b2f004ae2f9949d960884b"} Dec 11 15:35:26 crc kubenswrapper[5050]: I1211 15:35:26.680243 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-d495dfb55-bp7n6" Dec 11 15:35:26 crc kubenswrapper[5050]: I1211 15:35:26.703070 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fb668979f-9bbkc"] Dec 11 15:35:26 crc kubenswrapper[5050]: I1211 15:35:26.767486 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2412d0a8-7e8f-4fae-915b-c794621f9655-ovsdbserver-nb\") pod \"2412d0a8-7e8f-4fae-915b-c794621f9655\" (UID: \"2412d0a8-7e8f-4fae-915b-c794621f9655\") " Dec 11 15:35:26 crc kubenswrapper[5050]: I1211 15:35:26.767823 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2412d0a8-7e8f-4fae-915b-c794621f9655-ovsdbserver-sb\") pod \"2412d0a8-7e8f-4fae-915b-c794621f9655\" (UID: \"2412d0a8-7e8f-4fae-915b-c794621f9655\") " Dec 11 15:35:26 crc kubenswrapper[5050]: I1211 15:35:26.767853 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2412d0a8-7e8f-4fae-915b-c794621f9655-dns-svc\") pod \"2412d0a8-7e8f-4fae-915b-c794621f9655\" (UID: \"2412d0a8-7e8f-4fae-915b-c794621f9655\") " Dec 11 15:35:26 crc kubenswrapper[5050]: I1211 15:35:26.767924 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2412d0a8-7e8f-4fae-915b-c794621f9655-config\") pod \"2412d0a8-7e8f-4fae-915b-c794621f9655\" (UID: \"2412d0a8-7e8f-4fae-915b-c794621f9655\") " Dec 11 15:35:26 crc kubenswrapper[5050]: I1211 15:35:26.767962 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9wrk9\" (UniqueName: \"kubernetes.io/projected/2412d0a8-7e8f-4fae-915b-c794621f9655-kube-api-access-9wrk9\") pod \"2412d0a8-7e8f-4fae-915b-c794621f9655\" (UID: \"2412d0a8-7e8f-4fae-915b-c794621f9655\") " Dec 11 15:35:26 crc kubenswrapper[5050]: I1211 15:35:26.772093 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2412d0a8-7e8f-4fae-915b-c794621f9655-kube-api-access-9wrk9" (OuterVolumeSpecName: "kube-api-access-9wrk9") pod "2412d0a8-7e8f-4fae-915b-c794621f9655" (UID: "2412d0a8-7e8f-4fae-915b-c794621f9655"). InnerVolumeSpecName "kube-api-access-9wrk9". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 15:35:26 crc kubenswrapper[5050]: I1211 15:35:26.836251 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2412d0a8-7e8f-4fae-915b-c794621f9655-config" (OuterVolumeSpecName: "config") pod "2412d0a8-7e8f-4fae-915b-c794621f9655" (UID: "2412d0a8-7e8f-4fae-915b-c794621f9655"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 15:35:26 crc kubenswrapper[5050]: I1211 15:35:26.850332 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2412d0a8-7e8f-4fae-915b-c794621f9655-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "2412d0a8-7e8f-4fae-915b-c794621f9655" (UID: "2412d0a8-7e8f-4fae-915b-c794621f9655"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 15:35:26 crc kubenswrapper[5050]: I1211 15:35:26.856679 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2412d0a8-7e8f-4fae-915b-c794621f9655-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2412d0a8-7e8f-4fae-915b-c794621f9655" (UID: "2412d0a8-7e8f-4fae-915b-c794621f9655"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 15:35:26 crc kubenswrapper[5050]: I1211 15:35:26.857192 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2412d0a8-7e8f-4fae-915b-c794621f9655-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "2412d0a8-7e8f-4fae-915b-c794621f9655" (UID: "2412d0a8-7e8f-4fae-915b-c794621f9655"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 15:35:26 crc kubenswrapper[5050]: I1211 15:35:26.871239 5050 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2412d0a8-7e8f-4fae-915b-c794621f9655-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 11 15:35:26 crc kubenswrapper[5050]: I1211 15:35:26.871280 5050 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2412d0a8-7e8f-4fae-915b-c794621f9655-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 11 15:35:26 crc kubenswrapper[5050]: I1211 15:35:26.871290 5050 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2412d0a8-7e8f-4fae-915b-c794621f9655-config\") on node \"crc\" DevicePath \"\"" Dec 11 15:35:26 crc kubenswrapper[5050]: I1211 15:35:26.871336 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9wrk9\" (UniqueName: \"kubernetes.io/projected/2412d0a8-7e8f-4fae-915b-c794621f9655-kube-api-access-9wrk9\") on node \"crc\" DevicePath \"\"" Dec 11 15:35:26 crc kubenswrapper[5050]: I1211 15:35:26.871347 5050 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2412d0a8-7e8f-4fae-915b-c794621f9655-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 11 15:35:27 crc kubenswrapper[5050]: I1211 15:35:27.226108 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-d495dfb55-bp7n6" Dec 11 15:35:27 crc kubenswrapper[5050]: I1211 15:35:27.226102 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d495dfb55-bp7n6" event={"ID":"2412d0a8-7e8f-4fae-915b-c794621f9655","Type":"ContainerDied","Data":"702ffa5224c6a87bb507f75231c2608d8d14e2b53f2c36c06ad8688acd7b0ad8"} Dec 11 15:35:27 crc kubenswrapper[5050]: I1211 15:35:27.226234 5050 scope.go:117] "RemoveContainer" containerID="b47ef13d30c6603edd9078e5b603930cd327a27621b2f004ae2f9949d960884b" Dec 11 15:35:27 crc kubenswrapper[5050]: I1211 15:35:27.227519 5050 generic.go:334] "Generic (PLEG): container finished" podID="c77c08a9-c188-4d22-b635-1e930b6e086e" containerID="8d9f516e91169861cefb8db64fe267d4905857e8660b149444ac48cc99d892b6" exitCode=0 Dec 11 15:35:27 crc kubenswrapper[5050]: I1211 15:35:27.227562 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fb668979f-9bbkc" event={"ID":"c77c08a9-c188-4d22-b635-1e930b6e086e","Type":"ContainerDied","Data":"8d9f516e91169861cefb8db64fe267d4905857e8660b149444ac48cc99d892b6"} Dec 11 15:35:27 crc kubenswrapper[5050]: I1211 15:35:27.227591 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fb668979f-9bbkc" event={"ID":"c77c08a9-c188-4d22-b635-1e930b6e086e","Type":"ContainerStarted","Data":"e139a26abe60c3fadafc79762dece21d7d398eb369751317424c407272c15e9c"} Dec 11 15:35:27 crc kubenswrapper[5050]: I1211 15:35:27.263357 5050 scope.go:117] "RemoveContainer" containerID="b9cc781ac06d5864326978c46d2320f33805c8c774c5b235f79f80ab0db31c10" Dec 11 15:35:27 crc kubenswrapper[5050]: I1211 15:35:27.284734 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-d495dfb55-bp7n6"] Dec 11 15:35:27 crc kubenswrapper[5050]: I1211 15:35:27.295554 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-d495dfb55-bp7n6"] Dec 11 15:35:27 crc kubenswrapper[5050]: I1211 15:35:27.558574 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2412d0a8-7e8f-4fae-915b-c794621f9655" path="/var/lib/kubelet/pods/2412d0a8-7e8f-4fae-915b-c794621f9655/volumes" Dec 11 15:35:28 crc kubenswrapper[5050]: I1211 15:35:28.238513 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fb668979f-9bbkc" event={"ID":"c77c08a9-c188-4d22-b635-1e930b6e086e","Type":"ContainerStarted","Data":"5c1785cf34c80d6d2c0b475d1d1a3c3a9c7e7626d4ba7bcc86abacf56f74cfe5"} Dec 11 15:35:28 crc kubenswrapper[5050]: I1211 15:35:28.238914 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7fb668979f-9bbkc" Dec 11 15:35:28 crc kubenswrapper[5050]: I1211 15:35:28.257394 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7fb668979f-9bbkc" podStartSLOduration=3.257375649 podStartE2EDuration="3.257375649s" podCreationTimestamp="2025-12-11 15:35:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 15:35:28.255083607 +0000 UTC m=+6419.098806203" watchObservedRunningTime="2025-12-11 15:35:28.257375649 +0000 UTC m=+6419.101098235" Dec 11 15:35:36 crc kubenswrapper[5050]: I1211 15:35:36.146146 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7fb668979f-9bbkc" Dec 11 15:35:36 crc kubenswrapper[5050]: I1211 15:35:36.209395 5050 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/dnsmasq-dns-d5569cdf5-rw89v"] Dec 11 15:35:36 crc kubenswrapper[5050]: I1211 15:35:36.209760 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-d5569cdf5-rw89v" podUID="975e5b80-467b-409e-95c7-7339b86e4627" containerName="dnsmasq-dns" containerID="cri-o://140023bf2bcc04a98abc646555e2dae3e233570e234b1081a36e0db242f995de" gracePeriod=10 Dec 11 15:35:36 crc kubenswrapper[5050]: I1211 15:35:36.750720 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-d5569cdf5-rw89v" Dec 11 15:35:36 crc kubenswrapper[5050]: I1211 15:35:36.794575 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/975e5b80-467b-409e-95c7-7339b86e4627-dns-svc\") pod \"975e5b80-467b-409e-95c7-7339b86e4627\" (UID: \"975e5b80-467b-409e-95c7-7339b86e4627\") " Dec 11 15:35:36 crc kubenswrapper[5050]: I1211 15:35:36.794710 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/975e5b80-467b-409e-95c7-7339b86e4627-ovsdbserver-nb\") pod \"975e5b80-467b-409e-95c7-7339b86e4627\" (UID: \"975e5b80-467b-409e-95c7-7339b86e4627\") " Dec 11 15:35:36 crc kubenswrapper[5050]: I1211 15:35:36.795174 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/975e5b80-467b-409e-95c7-7339b86e4627-ovsdbserver-sb\") pod \"975e5b80-467b-409e-95c7-7339b86e4627\" (UID: \"975e5b80-467b-409e-95c7-7339b86e4627\") " Dec 11 15:35:36 crc kubenswrapper[5050]: I1211 15:35:36.795197 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/975e5b80-467b-409e-95c7-7339b86e4627-config\") pod \"975e5b80-467b-409e-95c7-7339b86e4627\" (UID: \"975e5b80-467b-409e-95c7-7339b86e4627\") " Dec 11 15:35:36 crc kubenswrapper[5050]: I1211 15:35:36.795263 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/975e5b80-467b-409e-95c7-7339b86e4627-openstack-cell1\") pod \"975e5b80-467b-409e-95c7-7339b86e4627\" (UID: \"975e5b80-467b-409e-95c7-7339b86e4627\") " Dec 11 15:35:36 crc kubenswrapper[5050]: I1211 15:35:36.795375 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9sl9m\" (UniqueName: \"kubernetes.io/projected/975e5b80-467b-409e-95c7-7339b86e4627-kube-api-access-9sl9m\") pod \"975e5b80-467b-409e-95c7-7339b86e4627\" (UID: \"975e5b80-467b-409e-95c7-7339b86e4627\") " Dec 11 15:35:36 crc kubenswrapper[5050]: I1211 15:35:36.801462 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/975e5b80-467b-409e-95c7-7339b86e4627-kube-api-access-9sl9m" (OuterVolumeSpecName: "kube-api-access-9sl9m") pod "975e5b80-467b-409e-95c7-7339b86e4627" (UID: "975e5b80-467b-409e-95c7-7339b86e4627"). InnerVolumeSpecName "kube-api-access-9sl9m". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 15:35:36 crc kubenswrapper[5050]: I1211 15:35:36.881324 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/975e5b80-467b-409e-95c7-7339b86e4627-openstack-cell1" (OuterVolumeSpecName: "openstack-cell1") pod "975e5b80-467b-409e-95c7-7339b86e4627" (UID: "975e5b80-467b-409e-95c7-7339b86e4627"). 
InnerVolumeSpecName "openstack-cell1". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 15:35:36 crc kubenswrapper[5050]: I1211 15:35:36.892230 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/975e5b80-467b-409e-95c7-7339b86e4627-config" (OuterVolumeSpecName: "config") pod "975e5b80-467b-409e-95c7-7339b86e4627" (UID: "975e5b80-467b-409e-95c7-7339b86e4627"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 15:35:36 crc kubenswrapper[5050]: I1211 15:35:36.896275 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/975e5b80-467b-409e-95c7-7339b86e4627-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "975e5b80-467b-409e-95c7-7339b86e4627" (UID: "975e5b80-467b-409e-95c7-7339b86e4627"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 15:35:36 crc kubenswrapper[5050]: I1211 15:35:36.898241 5050 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/975e5b80-467b-409e-95c7-7339b86e4627-config\") on node \"crc\" DevicePath \"\"" Dec 11 15:35:36 crc kubenswrapper[5050]: I1211 15:35:36.898284 5050 reconciler_common.go:293] "Volume detached for volume \"openstack-cell1\" (UniqueName: \"kubernetes.io/configmap/975e5b80-467b-409e-95c7-7339b86e4627-openstack-cell1\") on node \"crc\" DevicePath \"\"" Dec 11 15:35:36 crc kubenswrapper[5050]: I1211 15:35:36.898301 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9sl9m\" (UniqueName: \"kubernetes.io/projected/975e5b80-467b-409e-95c7-7339b86e4627-kube-api-access-9sl9m\") on node \"crc\" DevicePath \"\"" Dec 11 15:35:36 crc kubenswrapper[5050]: I1211 15:35:36.898315 5050 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/975e5b80-467b-409e-95c7-7339b86e4627-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 11 15:35:36 crc kubenswrapper[5050]: I1211 15:35:36.900215 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/975e5b80-467b-409e-95c7-7339b86e4627-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "975e5b80-467b-409e-95c7-7339b86e4627" (UID: "975e5b80-467b-409e-95c7-7339b86e4627"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 15:35:36 crc kubenswrapper[5050]: I1211 15:35:36.916257 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/975e5b80-467b-409e-95c7-7339b86e4627-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "975e5b80-467b-409e-95c7-7339b86e4627" (UID: "975e5b80-467b-409e-95c7-7339b86e4627"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 15:35:36 crc kubenswrapper[5050]: I1211 15:35:36.999861 5050 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/975e5b80-467b-409e-95c7-7339b86e4627-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 11 15:35:36 crc kubenswrapper[5050]: I1211 15:35:36.999899 5050 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/975e5b80-467b-409e-95c7-7339b86e4627-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 11 15:35:37 crc kubenswrapper[5050]: I1211 15:35:37.320851 5050 generic.go:334] "Generic (PLEG): container finished" podID="975e5b80-467b-409e-95c7-7339b86e4627" containerID="140023bf2bcc04a98abc646555e2dae3e233570e234b1081a36e0db242f995de" exitCode=0 Dec 11 15:35:37 crc kubenswrapper[5050]: I1211 15:35:37.320905 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d5569cdf5-rw89v" event={"ID":"975e5b80-467b-409e-95c7-7339b86e4627","Type":"ContainerDied","Data":"140023bf2bcc04a98abc646555e2dae3e233570e234b1081a36e0db242f995de"} Dec 11 15:35:37 crc kubenswrapper[5050]: I1211 15:35:37.320938 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d5569cdf5-rw89v" event={"ID":"975e5b80-467b-409e-95c7-7339b86e4627","Type":"ContainerDied","Data":"bcadec739545d4753997551f6d0a458d580aab4cc43f150bd802ec1c7f42853a"} Dec 11 15:35:37 crc kubenswrapper[5050]: I1211 15:35:37.320957 5050 scope.go:117] "RemoveContainer" containerID="140023bf2bcc04a98abc646555e2dae3e233570e234b1081a36e0db242f995de" Dec 11 15:35:37 crc kubenswrapper[5050]: I1211 15:35:37.321161 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-d5569cdf5-rw89v" Dec 11 15:35:37 crc kubenswrapper[5050]: I1211 15:35:37.413137 5050 scope.go:117] "RemoveContainer" containerID="a042e4776a5186c62f34d9cf34a3bb0dc1a23e92835d14206533482c85ac1b49" Dec 11 15:35:37 crc kubenswrapper[5050]: I1211 15:35:37.421377 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-d5569cdf5-rw89v"] Dec 11 15:35:37 crc kubenswrapper[5050]: I1211 15:35:37.431743 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-d5569cdf5-rw89v"] Dec 11 15:35:37 crc kubenswrapper[5050]: I1211 15:35:37.449127 5050 scope.go:117] "RemoveContainer" containerID="140023bf2bcc04a98abc646555e2dae3e233570e234b1081a36e0db242f995de" Dec 11 15:35:37 crc kubenswrapper[5050]: E1211 15:35:37.450002 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"140023bf2bcc04a98abc646555e2dae3e233570e234b1081a36e0db242f995de\": container with ID starting with 140023bf2bcc04a98abc646555e2dae3e233570e234b1081a36e0db242f995de not found: ID does not exist" containerID="140023bf2bcc04a98abc646555e2dae3e233570e234b1081a36e0db242f995de" Dec 11 15:35:37 crc kubenswrapper[5050]: I1211 15:35:37.450074 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"140023bf2bcc04a98abc646555e2dae3e233570e234b1081a36e0db242f995de"} err="failed to get container status \"140023bf2bcc04a98abc646555e2dae3e233570e234b1081a36e0db242f995de\": rpc error: code = NotFound desc = could not find container \"140023bf2bcc04a98abc646555e2dae3e233570e234b1081a36e0db242f995de\": container with ID starting with 140023bf2bcc04a98abc646555e2dae3e233570e234b1081a36e0db242f995de not 
found: ID does not exist" Dec 11 15:35:37 crc kubenswrapper[5050]: I1211 15:35:37.450121 5050 scope.go:117] "RemoveContainer" containerID="a042e4776a5186c62f34d9cf34a3bb0dc1a23e92835d14206533482c85ac1b49" Dec 11 15:35:37 crc kubenswrapper[5050]: E1211 15:35:37.450457 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a042e4776a5186c62f34d9cf34a3bb0dc1a23e92835d14206533482c85ac1b49\": container with ID starting with a042e4776a5186c62f34d9cf34a3bb0dc1a23e92835d14206533482c85ac1b49 not found: ID does not exist" containerID="a042e4776a5186c62f34d9cf34a3bb0dc1a23e92835d14206533482c85ac1b49" Dec 11 15:35:37 crc kubenswrapper[5050]: I1211 15:35:37.450491 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a042e4776a5186c62f34d9cf34a3bb0dc1a23e92835d14206533482c85ac1b49"} err="failed to get container status \"a042e4776a5186c62f34d9cf34a3bb0dc1a23e92835d14206533482c85ac1b49\": rpc error: code = NotFound desc = could not find container \"a042e4776a5186c62f34d9cf34a3bb0dc1a23e92835d14206533482c85ac1b49\": container with ID starting with a042e4776a5186c62f34d9cf34a3bb0dc1a23e92835d14206533482c85ac1b49 not found: ID does not exist" Dec 11 15:35:37 crc kubenswrapper[5050]: I1211 15:35:37.558861 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="975e5b80-467b-409e-95c7-7339b86e4627" path="/var/lib/kubelet/pods/975e5b80-467b-409e-95c7-7339b86e4627/volumes" Dec 11 15:35:46 crc kubenswrapper[5050]: I1211 15:35:46.851069 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c87cmd"] Dec 11 15:35:46 crc kubenswrapper[5050]: E1211 15:35:46.851982 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2412d0a8-7e8f-4fae-915b-c794621f9655" containerName="dnsmasq-dns" Dec 11 15:35:46 crc kubenswrapper[5050]: I1211 15:35:46.851996 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="2412d0a8-7e8f-4fae-915b-c794621f9655" containerName="dnsmasq-dns" Dec 11 15:35:46 crc kubenswrapper[5050]: E1211 15:35:46.852027 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="975e5b80-467b-409e-95c7-7339b86e4627" containerName="dnsmasq-dns" Dec 11 15:35:46 crc kubenswrapper[5050]: I1211 15:35:46.852033 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="975e5b80-467b-409e-95c7-7339b86e4627" containerName="dnsmasq-dns" Dec 11 15:35:46 crc kubenswrapper[5050]: E1211 15:35:46.852058 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2412d0a8-7e8f-4fae-915b-c794621f9655" containerName="init" Dec 11 15:35:46 crc kubenswrapper[5050]: I1211 15:35:46.852064 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="2412d0a8-7e8f-4fae-915b-c794621f9655" containerName="init" Dec 11 15:35:46 crc kubenswrapper[5050]: E1211 15:35:46.852085 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="975e5b80-467b-409e-95c7-7339b86e4627" containerName="init" Dec 11 15:35:46 crc kubenswrapper[5050]: I1211 15:35:46.852091 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="975e5b80-467b-409e-95c7-7339b86e4627" containerName="init" Dec 11 15:35:46 crc kubenswrapper[5050]: I1211 15:35:46.853074 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="975e5b80-467b-409e-95c7-7339b86e4627" containerName="dnsmasq-dns" Dec 11 15:35:46 crc kubenswrapper[5050]: I1211 15:35:46.853106 5050 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="2412d0a8-7e8f-4fae-915b-c794621f9655" containerName="dnsmasq-dns" Dec 11 15:35:46 crc kubenswrapper[5050]: I1211 15:35:46.853898 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c87cmd" Dec 11 15:35:46 crc kubenswrapper[5050]: I1211 15:35:46.861594 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-mvxd9" Dec 11 15:35:46 crc kubenswrapper[5050]: I1211 15:35:46.861926 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Dec 11 15:35:46 crc kubenswrapper[5050]: I1211 15:35:46.862329 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 11 15:35:46 crc kubenswrapper[5050]: I1211 15:35:46.863984 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Dec 11 15:35:46 crc kubenswrapper[5050]: I1211 15:35:46.864800 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c87cmd"] Dec 11 15:35:46 crc kubenswrapper[5050]: I1211 15:35:46.921822 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pre-adoption-validation-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a062fa14-8745-4b82-85a8-2a483d34682a-pre-adoption-validation-combined-ca-bundle\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-c87cmd\" (UID: \"a062fa14-8745-4b82-85a8-2a483d34682a\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c87cmd" Dec 11 15:35:46 crc kubenswrapper[5050]: I1211 15:35:46.922047 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/a062fa14-8745-4b82-85a8-2a483d34682a-ceph\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-c87cmd\" (UID: \"a062fa14-8745-4b82-85a8-2a483d34682a\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c87cmd" Dec 11 15:35:46 crc kubenswrapper[5050]: I1211 15:35:46.922192 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwsw6\" (UniqueName: \"kubernetes.io/projected/a062fa14-8745-4b82-85a8-2a483d34682a-kube-api-access-bwsw6\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-c87cmd\" (UID: \"a062fa14-8745-4b82-85a8-2a483d34682a\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c87cmd" Dec 11 15:35:46 crc kubenswrapper[5050]: I1211 15:35:46.922397 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a062fa14-8745-4b82-85a8-2a483d34682a-ssh-key\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-c87cmd\" (UID: \"a062fa14-8745-4b82-85a8-2a483d34682a\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c87cmd" Dec 11 15:35:46 crc kubenswrapper[5050]: I1211 15:35:46.922472 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a062fa14-8745-4b82-85a8-2a483d34682a-inventory\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-c87cmd\" (UID: \"a062fa14-8745-4b82-85a8-2a483d34682a\") " 
pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c87cmd" Dec 11 15:35:47 crc kubenswrapper[5050]: I1211 15:35:47.024180 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bwsw6\" (UniqueName: \"kubernetes.io/projected/a062fa14-8745-4b82-85a8-2a483d34682a-kube-api-access-bwsw6\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-c87cmd\" (UID: \"a062fa14-8745-4b82-85a8-2a483d34682a\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c87cmd" Dec 11 15:35:47 crc kubenswrapper[5050]: I1211 15:35:47.024572 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a062fa14-8745-4b82-85a8-2a483d34682a-ssh-key\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-c87cmd\" (UID: \"a062fa14-8745-4b82-85a8-2a483d34682a\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c87cmd" Dec 11 15:35:47 crc kubenswrapper[5050]: I1211 15:35:47.024612 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a062fa14-8745-4b82-85a8-2a483d34682a-inventory\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-c87cmd\" (UID: \"a062fa14-8745-4b82-85a8-2a483d34682a\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c87cmd" Dec 11 15:35:47 crc kubenswrapper[5050]: I1211 15:35:47.024710 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pre-adoption-validation-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a062fa14-8745-4b82-85a8-2a483d34682a-pre-adoption-validation-combined-ca-bundle\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-c87cmd\" (UID: \"a062fa14-8745-4b82-85a8-2a483d34682a\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c87cmd" Dec 11 15:35:47 crc kubenswrapper[5050]: I1211 15:35:47.024792 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/a062fa14-8745-4b82-85a8-2a483d34682a-ceph\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-c87cmd\" (UID: \"a062fa14-8745-4b82-85a8-2a483d34682a\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c87cmd" Dec 11 15:35:47 crc kubenswrapper[5050]: I1211 15:35:47.030552 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a062fa14-8745-4b82-85a8-2a483d34682a-inventory\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-c87cmd\" (UID: \"a062fa14-8745-4b82-85a8-2a483d34682a\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c87cmd" Dec 11 15:35:47 crc kubenswrapper[5050]: I1211 15:35:47.031263 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pre-adoption-validation-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a062fa14-8745-4b82-85a8-2a483d34682a-pre-adoption-validation-combined-ca-bundle\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-c87cmd\" (UID: \"a062fa14-8745-4b82-85a8-2a483d34682a\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c87cmd" Dec 11 15:35:47 crc kubenswrapper[5050]: I1211 15:35:47.033554 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: 
\"kubernetes.io/secret/a062fa14-8745-4b82-85a8-2a483d34682a-ssh-key\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-c87cmd\" (UID: \"a062fa14-8745-4b82-85a8-2a483d34682a\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c87cmd" Dec 11 15:35:47 crc kubenswrapper[5050]: I1211 15:35:47.033940 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/a062fa14-8745-4b82-85a8-2a483d34682a-ceph\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-c87cmd\" (UID: \"a062fa14-8745-4b82-85a8-2a483d34682a\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c87cmd" Dec 11 15:35:47 crc kubenswrapper[5050]: I1211 15:35:47.051245 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bwsw6\" (UniqueName: \"kubernetes.io/projected/a062fa14-8745-4b82-85a8-2a483d34682a-kube-api-access-bwsw6\") pod \"pre-adoption-validation-openstack-pre-adoption-openstack-c87cmd\" (UID: \"a062fa14-8745-4b82-85a8-2a483d34682a\") " pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c87cmd" Dec 11 15:35:47 crc kubenswrapper[5050]: I1211 15:35:47.057824 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-db-create-krjxg"] Dec 11 15:35:47 crc kubenswrapper[5050]: I1211 15:35:47.072194 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/octavia-db-create-krjxg"] Dec 11 15:35:47 crc kubenswrapper[5050]: I1211 15:35:47.179158 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c87cmd" Dec 11 15:35:47 crc kubenswrapper[5050]: I1211 15:35:47.563649 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="836453ed-a74b-46f5-a16e-7e5276f60c2a" path="/var/lib/kubelet/pods/836453ed-a74b-46f5-a16e-7e5276f60c2a/volumes" Dec 11 15:35:47 crc kubenswrapper[5050]: I1211 15:35:47.782176 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c87cmd"] Dec 11 15:35:48 crc kubenswrapper[5050]: I1211 15:35:48.421390 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c87cmd" event={"ID":"a062fa14-8745-4b82-85a8-2a483d34682a","Type":"ContainerStarted","Data":"272396a259e3d991a8c8229f904e0b71775cc388e50d6123098836f1c6461155"} Dec 11 15:35:48 crc kubenswrapper[5050]: I1211 15:35:48.697828 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-2jvzc"] Dec 11 15:35:48 crc kubenswrapper[5050]: I1211 15:35:48.701971 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2jvzc" Dec 11 15:35:48 crc kubenswrapper[5050]: I1211 15:35:48.714110 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2jvzc"] Dec 11 15:35:48 crc kubenswrapper[5050]: I1211 15:35:48.778684 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1bd755ba-c38c-438f-9d17-45e15a264292-utilities\") pod \"redhat-operators-2jvzc\" (UID: \"1bd755ba-c38c-438f-9d17-45e15a264292\") " pod="openshift-marketplace/redhat-operators-2jvzc" Dec 11 15:35:48 crc kubenswrapper[5050]: I1211 15:35:48.778970 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1bd755ba-c38c-438f-9d17-45e15a264292-catalog-content\") pod \"redhat-operators-2jvzc\" (UID: \"1bd755ba-c38c-438f-9d17-45e15a264292\") " pod="openshift-marketplace/redhat-operators-2jvzc" Dec 11 15:35:48 crc kubenswrapper[5050]: I1211 15:35:48.779231 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n5zc2\" (UniqueName: \"kubernetes.io/projected/1bd755ba-c38c-438f-9d17-45e15a264292-kube-api-access-n5zc2\") pod \"redhat-operators-2jvzc\" (UID: \"1bd755ba-c38c-438f-9d17-45e15a264292\") " pod="openshift-marketplace/redhat-operators-2jvzc" Dec 11 15:35:48 crc kubenswrapper[5050]: I1211 15:35:48.882260 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1bd755ba-c38c-438f-9d17-45e15a264292-utilities\") pod \"redhat-operators-2jvzc\" (UID: \"1bd755ba-c38c-438f-9d17-45e15a264292\") " pod="openshift-marketplace/redhat-operators-2jvzc" Dec 11 15:35:48 crc kubenswrapper[5050]: I1211 15:35:48.882460 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1bd755ba-c38c-438f-9d17-45e15a264292-catalog-content\") pod \"redhat-operators-2jvzc\" (UID: \"1bd755ba-c38c-438f-9d17-45e15a264292\") " pod="openshift-marketplace/redhat-operators-2jvzc" Dec 11 15:35:48 crc kubenswrapper[5050]: I1211 15:35:48.882565 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n5zc2\" (UniqueName: \"kubernetes.io/projected/1bd755ba-c38c-438f-9d17-45e15a264292-kube-api-access-n5zc2\") pod \"redhat-operators-2jvzc\" (UID: \"1bd755ba-c38c-438f-9d17-45e15a264292\") " pod="openshift-marketplace/redhat-operators-2jvzc" Dec 11 15:35:48 crc kubenswrapper[5050]: I1211 15:35:48.882988 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1bd755ba-c38c-438f-9d17-45e15a264292-utilities\") pod \"redhat-operators-2jvzc\" (UID: \"1bd755ba-c38c-438f-9d17-45e15a264292\") " pod="openshift-marketplace/redhat-operators-2jvzc" Dec 11 15:35:48 crc kubenswrapper[5050]: I1211 15:35:48.883046 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1bd755ba-c38c-438f-9d17-45e15a264292-catalog-content\") pod \"redhat-operators-2jvzc\" (UID: \"1bd755ba-c38c-438f-9d17-45e15a264292\") " pod="openshift-marketplace/redhat-operators-2jvzc" Dec 11 15:35:48 crc kubenswrapper[5050]: I1211 15:35:48.903045 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-n5zc2\" (UniqueName: \"kubernetes.io/projected/1bd755ba-c38c-438f-9d17-45e15a264292-kube-api-access-n5zc2\") pod \"redhat-operators-2jvzc\" (UID: \"1bd755ba-c38c-438f-9d17-45e15a264292\") " pod="openshift-marketplace/redhat-operators-2jvzc" Dec 11 15:35:49 crc kubenswrapper[5050]: I1211 15:35:49.031483 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-b4a7-account-create-update-f95fr"] Dec 11 15:35:49 crc kubenswrapper[5050]: I1211 15:35:49.034304 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2jvzc" Dec 11 15:35:49 crc kubenswrapper[5050]: I1211 15:35:49.043377 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/octavia-b4a7-account-create-update-f95fr"] Dec 11 15:35:49 crc kubenswrapper[5050]: I1211 15:35:49.560281 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a4b1c0-1a75-4092-9a96-b4171f480b4f" path="/var/lib/kubelet/pods/57a4b1c0-1a75-4092-9a96-b4171f480b4f/volumes" Dec 11 15:35:49 crc kubenswrapper[5050]: I1211 15:35:49.570365 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2jvzc"] Dec 11 15:35:49 crc kubenswrapper[5050]: W1211 15:35:49.573183 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1bd755ba_c38c_438f_9d17_45e15a264292.slice/crio-73e516d97d5d8d7508ee67b16005c1a1b42f222d122d277908073724cb1aa323 WatchSource:0}: Error finding container 73e516d97d5d8d7508ee67b16005c1a1b42f222d122d277908073724cb1aa323: Status 404 returned error can't find the container with id 73e516d97d5d8d7508ee67b16005c1a1b42f222d122d277908073724cb1aa323 Dec 11 15:35:50 crc kubenswrapper[5050]: I1211 15:35:50.456624 5050 generic.go:334] "Generic (PLEG): container finished" podID="1bd755ba-c38c-438f-9d17-45e15a264292" containerID="3a496f9b44a6cd1d82794d8dc8d3fbbf548c6f282a18732b07135b4a43132cc9" exitCode=0 Dec 11 15:35:50 crc kubenswrapper[5050]: I1211 15:35:50.456890 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2jvzc" event={"ID":"1bd755ba-c38c-438f-9d17-45e15a264292","Type":"ContainerDied","Data":"3a496f9b44a6cd1d82794d8dc8d3fbbf548c6f282a18732b07135b4a43132cc9"} Dec 11 15:35:50 crc kubenswrapper[5050]: I1211 15:35:50.456916 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2jvzc" event={"ID":"1bd755ba-c38c-438f-9d17-45e15a264292","Type":"ContainerStarted","Data":"73e516d97d5d8d7508ee67b16005c1a1b42f222d122d277908073724cb1aa323"} Dec 11 15:35:58 crc kubenswrapper[5050]: I1211 15:35:58.046218 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-persistence-db-create-rg9ff"] Dec 11 15:35:58 crc kubenswrapper[5050]: I1211 15:35:58.056561 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-9764-account-create-update-lp57s"] Dec 11 15:35:58 crc kubenswrapper[5050]: I1211 15:35:58.068337 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/octavia-persistence-db-create-rg9ff"] Dec 11 15:35:58 crc kubenswrapper[5050]: I1211 15:35:58.078685 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/octavia-9764-account-create-update-lp57s"] Dec 11 15:35:59 crc kubenswrapper[5050]: I1211 15:35:59.587416 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e95122a2-0a89-4c0d-a67e-1fbe72cbb208" 
path="/var/lib/kubelet/pods/e95122a2-0a89-4c0d-a67e-1fbe72cbb208/volumes" Dec 11 15:35:59 crc kubenswrapper[5050]: I1211 15:35:59.588120 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ea7f0570-a2ef-4e47-a947-19341754adc1" path="/var/lib/kubelet/pods/ea7f0570-a2ef-4e47-a947-19341754adc1/volumes" Dec 11 15:36:02 crc kubenswrapper[5050]: I1211 15:36:02.621206 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2jvzc" event={"ID":"1bd755ba-c38c-438f-9d17-45e15a264292","Type":"ContainerStarted","Data":"9189a77a14c2de0447acb5823a2189cae45ca5f41f11ad4696d9290f1b332d6e"} Dec 11 15:36:02 crc kubenswrapper[5050]: I1211 15:36:02.628982 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c87cmd" event={"ID":"a062fa14-8745-4b82-85a8-2a483d34682a","Type":"ContainerStarted","Data":"0595b3a7c776571ee8c9eef465aa1c3dd553158b2428a6bdf63a1345aa37111b"} Dec 11 15:36:02 crc kubenswrapper[5050]: I1211 15:36:02.870726 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c87cmd" podStartSLOduration=2.928689058 podStartE2EDuration="16.870708054s" podCreationTimestamp="2025-12-11 15:35:46 +0000 UTC" firstStartedPulling="2025-12-11 15:35:47.821353973 +0000 UTC m=+6438.665076559" lastFinishedPulling="2025-12-11 15:36:01.763372969 +0000 UTC m=+6452.607095555" observedRunningTime="2025-12-11 15:36:02.841624997 +0000 UTC m=+6453.685347593" watchObservedRunningTime="2025-12-11 15:36:02.870708054 +0000 UTC m=+6453.714430640" Dec 11 15:36:04 crc kubenswrapper[5050]: I1211 15:36:04.476120 5050 scope.go:117] "RemoveContainer" containerID="b9d50468aa2e054abc49cf0f7babb6b1e4f301f68eb8b30d820facff22b3eb60" Dec 11 15:36:04 crc kubenswrapper[5050]: I1211 15:36:04.638830 5050 scope.go:117] "RemoveContainer" containerID="4ba813f858c8b2a29b98a985aae1143279a0ef2b672cb37677836cf6c86767ed" Dec 11 15:36:04 crc kubenswrapper[5050]: I1211 15:36:04.662548 5050 scope.go:117] "RemoveContainer" containerID="20b7d6ef6474684bc5192d50805612f0ca77ba2bcf3f51a98dc9f1be239188c9" Dec 11 15:36:04 crc kubenswrapper[5050]: I1211 15:36:04.684092 5050 scope.go:117] "RemoveContainer" containerID="a7ddedfd8c099629fce721e3a016dcf36b6feb805345f717bc8590fa2767312b" Dec 11 15:36:04 crc kubenswrapper[5050]: I1211 15:36:04.733776 5050 scope.go:117] "RemoveContainer" containerID="5bf10d2460c1f2e2f63841e0a3f56ffd837b3829bff105ae681afd9617fd7ce8" Dec 11 15:36:04 crc kubenswrapper[5050]: I1211 15:36:04.938070 5050 scope.go:117] "RemoveContainer" containerID="06fda559c919c413e3b11c7a5acc0df2f40f7527385816c092affd1087075b2c" Dec 11 15:36:05 crc kubenswrapper[5050]: I1211 15:36:05.857293 5050 scope.go:117] "RemoveContainer" containerID="5c6fe59cd09872bf34d0ff029b4368490e629638561d1ea2a03ca6d1f4fc6549" Dec 11 15:36:05 crc kubenswrapper[5050]: I1211 15:36:05.995866 5050 scope.go:117] "RemoveContainer" containerID="ca9002d0b3b3d6d9130a5971f97fa09de488225866dbeab6a3b09dc5aa324393" Dec 11 15:36:08 crc kubenswrapper[5050]: I1211 15:36:08.702706 5050 generic.go:334] "Generic (PLEG): container finished" podID="1bd755ba-c38c-438f-9d17-45e15a264292" containerID="9189a77a14c2de0447acb5823a2189cae45ca5f41f11ad4696d9290f1b332d6e" exitCode=0 Dec 11 15:36:08 crc kubenswrapper[5050]: I1211 15:36:08.702826 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2jvzc" 
event={"ID":"1bd755ba-c38c-438f-9d17-45e15a264292","Type":"ContainerDied","Data":"9189a77a14c2de0447acb5823a2189cae45ca5f41f11ad4696d9290f1b332d6e"} Dec 11 15:36:09 crc kubenswrapper[5050]: I1211 15:36:09.715519 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2jvzc" event={"ID":"1bd755ba-c38c-438f-9d17-45e15a264292","Type":"ContainerStarted","Data":"1e2fce975148ddc535dcb5381946e5dd6885461def5bcaec568ea9112ad15049"} Dec 11 15:36:09 crc kubenswrapper[5050]: I1211 15:36:09.746632 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-2jvzc" podStartSLOduration=8.374555648 podStartE2EDuration="21.746608852s" podCreationTimestamp="2025-12-11 15:35:48 +0000 UTC" firstStartedPulling="2025-12-11 15:35:55.758813003 +0000 UTC m=+6446.602535589" lastFinishedPulling="2025-12-11 15:36:09.130866207 +0000 UTC m=+6459.974588793" observedRunningTime="2025-12-11 15:36:09.733736008 +0000 UTC m=+6460.577458604" watchObservedRunningTime="2025-12-11 15:36:09.746608852 +0000 UTC m=+6460.590331448" Dec 11 15:36:10 crc kubenswrapper[5050]: I1211 15:36:10.796450 5050 patch_prober.go:28] interesting pod/machine-config-daemon-wcb2s container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 15:36:10 crc kubenswrapper[5050]: I1211 15:36:10.796764 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 15:36:15 crc kubenswrapper[5050]: I1211 15:36:15.780793 5050 generic.go:334] "Generic (PLEG): container finished" podID="a062fa14-8745-4b82-85a8-2a483d34682a" containerID="0595b3a7c776571ee8c9eef465aa1c3dd553158b2428a6bdf63a1345aa37111b" exitCode=0 Dec 11 15:36:15 crc kubenswrapper[5050]: I1211 15:36:15.780882 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c87cmd" event={"ID":"a062fa14-8745-4b82-85a8-2a483d34682a","Type":"ContainerDied","Data":"0595b3a7c776571ee8c9eef465aa1c3dd553158b2428a6bdf63a1345aa37111b"} Dec 11 15:36:17 crc kubenswrapper[5050]: I1211 15:36:17.297710 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c87cmd" Dec 11 15:36:17 crc kubenswrapper[5050]: I1211 15:36:17.310428 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bwsw6\" (UniqueName: \"kubernetes.io/projected/a062fa14-8745-4b82-85a8-2a483d34682a-kube-api-access-bwsw6\") pod \"a062fa14-8745-4b82-85a8-2a483d34682a\" (UID: \"a062fa14-8745-4b82-85a8-2a483d34682a\") " Dec 11 15:36:17 crc kubenswrapper[5050]: I1211 15:36:17.310556 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a062fa14-8745-4b82-85a8-2a483d34682a-ssh-key\") pod \"a062fa14-8745-4b82-85a8-2a483d34682a\" (UID: \"a062fa14-8745-4b82-85a8-2a483d34682a\") " Dec 11 15:36:17 crc kubenswrapper[5050]: I1211 15:36:17.310590 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/a062fa14-8745-4b82-85a8-2a483d34682a-ceph\") pod \"a062fa14-8745-4b82-85a8-2a483d34682a\" (UID: \"a062fa14-8745-4b82-85a8-2a483d34682a\") " Dec 11 15:36:17 crc kubenswrapper[5050]: I1211 15:36:17.310629 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pre-adoption-validation-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a062fa14-8745-4b82-85a8-2a483d34682a-pre-adoption-validation-combined-ca-bundle\") pod \"a062fa14-8745-4b82-85a8-2a483d34682a\" (UID: \"a062fa14-8745-4b82-85a8-2a483d34682a\") " Dec 11 15:36:17 crc kubenswrapper[5050]: I1211 15:36:17.310679 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a062fa14-8745-4b82-85a8-2a483d34682a-inventory\") pod \"a062fa14-8745-4b82-85a8-2a483d34682a\" (UID: \"a062fa14-8745-4b82-85a8-2a483d34682a\") " Dec 11 15:36:17 crc kubenswrapper[5050]: I1211 15:36:17.320753 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a062fa14-8745-4b82-85a8-2a483d34682a-pre-adoption-validation-combined-ca-bundle" (OuterVolumeSpecName: "pre-adoption-validation-combined-ca-bundle") pod "a062fa14-8745-4b82-85a8-2a483d34682a" (UID: "a062fa14-8745-4b82-85a8-2a483d34682a"). InnerVolumeSpecName "pre-adoption-validation-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 15:36:17 crc kubenswrapper[5050]: I1211 15:36:17.320967 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a062fa14-8745-4b82-85a8-2a483d34682a-ceph" (OuterVolumeSpecName: "ceph") pod "a062fa14-8745-4b82-85a8-2a483d34682a" (UID: "a062fa14-8745-4b82-85a8-2a483d34682a"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 15:36:17 crc kubenswrapper[5050]: I1211 15:36:17.340927 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a062fa14-8745-4b82-85a8-2a483d34682a-kube-api-access-bwsw6" (OuterVolumeSpecName: "kube-api-access-bwsw6") pod "a062fa14-8745-4b82-85a8-2a483d34682a" (UID: "a062fa14-8745-4b82-85a8-2a483d34682a"). InnerVolumeSpecName "kube-api-access-bwsw6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 15:36:17 crc kubenswrapper[5050]: I1211 15:36:17.344084 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a062fa14-8745-4b82-85a8-2a483d34682a-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "a062fa14-8745-4b82-85a8-2a483d34682a" (UID: "a062fa14-8745-4b82-85a8-2a483d34682a"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 15:36:17 crc kubenswrapper[5050]: I1211 15:36:17.362484 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a062fa14-8745-4b82-85a8-2a483d34682a-inventory" (OuterVolumeSpecName: "inventory") pod "a062fa14-8745-4b82-85a8-2a483d34682a" (UID: "a062fa14-8745-4b82-85a8-2a483d34682a"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 15:36:17 crc kubenswrapper[5050]: I1211 15:36:17.413715 5050 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a062fa14-8745-4b82-85a8-2a483d34682a-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 11 15:36:17 crc kubenswrapper[5050]: I1211 15:36:17.413748 5050 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/a062fa14-8745-4b82-85a8-2a483d34682a-ceph\") on node \"crc\" DevicePath \"\"" Dec 11 15:36:17 crc kubenswrapper[5050]: I1211 15:36:17.413758 5050 reconciler_common.go:293] "Volume detached for volume \"pre-adoption-validation-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a062fa14-8745-4b82-85a8-2a483d34682a-pre-adoption-validation-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 15:36:17 crc kubenswrapper[5050]: I1211 15:36:17.413801 5050 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a062fa14-8745-4b82-85a8-2a483d34682a-inventory\") on node \"crc\" DevicePath \"\"" Dec 11 15:36:17 crc kubenswrapper[5050]: I1211 15:36:17.413812 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bwsw6\" (UniqueName: \"kubernetes.io/projected/a062fa14-8745-4b82-85a8-2a483d34682a-kube-api-access-bwsw6\") on node \"crc\" DevicePath \"\"" Dec 11 15:36:17 crc kubenswrapper[5050]: I1211 15:36:17.814772 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c87cmd" event={"ID":"a062fa14-8745-4b82-85a8-2a483d34682a","Type":"ContainerDied","Data":"272396a259e3d991a8c8229f904e0b71775cc388e50d6123098836f1c6461155"} Dec 11 15:36:17 crc kubenswrapper[5050]: I1211 15:36:17.814821 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="272396a259e3d991a8c8229f904e0b71775cc388e50d6123098836f1c6461155" Dec 11 15:36:17 crc kubenswrapper[5050]: I1211 15:36:17.814834 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/pre-adoption-validation-openstack-pre-adoption-openstack-c87cmd" Dec 11 15:36:19 crc kubenswrapper[5050]: I1211 15:36:19.035938 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-2jvzc" Dec 11 15:36:19 crc kubenswrapper[5050]: I1211 15:36:19.036356 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-2jvzc" Dec 11 15:36:19 crc kubenswrapper[5050]: I1211 15:36:19.086223 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-2jvzc" Dec 11 15:36:19 crc kubenswrapper[5050]: I1211 15:36:19.605926 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-9k9mx"] Dec 11 15:36:19 crc kubenswrapper[5050]: E1211 15:36:19.606604 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a062fa14-8745-4b82-85a8-2a483d34682a" containerName="pre-adoption-validation-openstack-pre-adoption-openstack-cell1" Dec 11 15:36:19 crc kubenswrapper[5050]: I1211 15:36:19.606630 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="a062fa14-8745-4b82-85a8-2a483d34682a" containerName="pre-adoption-validation-openstack-pre-adoption-openstack-cell1" Dec 11 15:36:19 crc kubenswrapper[5050]: I1211 15:36:19.606948 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="a062fa14-8745-4b82-85a8-2a483d34682a" containerName="pre-adoption-validation-openstack-pre-adoption-openstack-cell1" Dec 11 15:36:19 crc kubenswrapper[5050]: I1211 15:36:19.607869 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-9k9mx" Dec 11 15:36:19 crc kubenswrapper[5050]: I1211 15:36:19.611276 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 11 15:36:19 crc kubenswrapper[5050]: I1211 15:36:19.611564 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Dec 11 15:36:19 crc kubenswrapper[5050]: I1211 15:36:19.611721 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Dec 11 15:36:19 crc kubenswrapper[5050]: I1211 15:36:19.612554 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-mvxd9" Dec 11 15:36:19 crc kubenswrapper[5050]: I1211 15:36:19.620685 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-9k9mx"] Dec 11 15:36:19 crc kubenswrapper[5050]: I1211 15:36:19.659119 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ac9abb3a-77e8-4ff7-90c7-8eb335c95eb8-ssh-key\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-9k9mx\" (UID: \"ac9abb3a-77e8-4ff7-90c7-8eb335c95eb8\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-9k9mx" Dec 11 15:36:19 crc kubenswrapper[5050]: I1211 15:36:19.659198 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tripleo-cleanup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac9abb3a-77e8-4ff7-90c7-8eb335c95eb8-tripleo-cleanup-combined-ca-bundle\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-9k9mx\" (UID: \"ac9abb3a-77e8-4ff7-90c7-8eb335c95eb8\") " 
pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-9k9mx" Dec 11 15:36:19 crc kubenswrapper[5050]: I1211 15:36:19.659348 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ac9abb3a-77e8-4ff7-90c7-8eb335c95eb8-inventory\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-9k9mx\" (UID: \"ac9abb3a-77e8-4ff7-90c7-8eb335c95eb8\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-9k9mx" Dec 11 15:36:19 crc kubenswrapper[5050]: I1211 15:36:19.659415 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ac9abb3a-77e8-4ff7-90c7-8eb335c95eb8-ceph\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-9k9mx\" (UID: \"ac9abb3a-77e8-4ff7-90c7-8eb335c95eb8\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-9k9mx" Dec 11 15:36:19 crc kubenswrapper[5050]: I1211 15:36:19.659481 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-czk49\" (UniqueName: \"kubernetes.io/projected/ac9abb3a-77e8-4ff7-90c7-8eb335c95eb8-kube-api-access-czk49\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-9k9mx\" (UID: \"ac9abb3a-77e8-4ff7-90c7-8eb335c95eb8\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-9k9mx" Dec 11 15:36:19 crc kubenswrapper[5050]: I1211 15:36:19.760895 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ac9abb3a-77e8-4ff7-90c7-8eb335c95eb8-inventory\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-9k9mx\" (UID: \"ac9abb3a-77e8-4ff7-90c7-8eb335c95eb8\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-9k9mx" Dec 11 15:36:19 crc kubenswrapper[5050]: I1211 15:36:19.761021 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ac9abb3a-77e8-4ff7-90c7-8eb335c95eb8-ceph\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-9k9mx\" (UID: \"ac9abb3a-77e8-4ff7-90c7-8eb335c95eb8\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-9k9mx" Dec 11 15:36:19 crc kubenswrapper[5050]: I1211 15:36:19.761120 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-czk49\" (UniqueName: \"kubernetes.io/projected/ac9abb3a-77e8-4ff7-90c7-8eb335c95eb8-kube-api-access-czk49\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-9k9mx\" (UID: \"ac9abb3a-77e8-4ff7-90c7-8eb335c95eb8\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-9k9mx" Dec 11 15:36:19 crc kubenswrapper[5050]: I1211 15:36:19.761246 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ac9abb3a-77e8-4ff7-90c7-8eb335c95eb8-ssh-key\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-9k9mx\" (UID: \"ac9abb3a-77e8-4ff7-90c7-8eb335c95eb8\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-9k9mx" Dec 11 15:36:19 crc kubenswrapper[5050]: I1211 15:36:19.761301 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tripleo-cleanup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac9abb3a-77e8-4ff7-90c7-8eb335c95eb8-tripleo-cleanup-combined-ca-bundle\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-9k9mx\" (UID: \"ac9abb3a-77e8-4ff7-90c7-8eb335c95eb8\") " 
pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-9k9mx" Dec 11 15:36:19 crc kubenswrapper[5050]: I1211 15:36:19.767768 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ac9abb3a-77e8-4ff7-90c7-8eb335c95eb8-ssh-key\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-9k9mx\" (UID: \"ac9abb3a-77e8-4ff7-90c7-8eb335c95eb8\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-9k9mx" Dec 11 15:36:19 crc kubenswrapper[5050]: I1211 15:36:19.767840 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tripleo-cleanup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac9abb3a-77e8-4ff7-90c7-8eb335c95eb8-tripleo-cleanup-combined-ca-bundle\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-9k9mx\" (UID: \"ac9abb3a-77e8-4ff7-90c7-8eb335c95eb8\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-9k9mx" Dec 11 15:36:19 crc kubenswrapper[5050]: I1211 15:36:19.781881 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ac9abb3a-77e8-4ff7-90c7-8eb335c95eb8-inventory\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-9k9mx\" (UID: \"ac9abb3a-77e8-4ff7-90c7-8eb335c95eb8\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-9k9mx" Dec 11 15:36:19 crc kubenswrapper[5050]: I1211 15:36:19.781916 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ac9abb3a-77e8-4ff7-90c7-8eb335c95eb8-ceph\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-9k9mx\" (UID: \"ac9abb3a-77e8-4ff7-90c7-8eb335c95eb8\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-9k9mx" Dec 11 15:36:19 crc kubenswrapper[5050]: I1211 15:36:19.791836 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-czk49\" (UniqueName: \"kubernetes.io/projected/ac9abb3a-77e8-4ff7-90c7-8eb335c95eb8-kube-api-access-czk49\") pod \"tripleo-cleanup-tripleo-cleanup-openstack-cell1-9k9mx\" (UID: \"ac9abb3a-77e8-4ff7-90c7-8eb335c95eb8\") " pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-9k9mx" Dec 11 15:36:19 crc kubenswrapper[5050]: I1211 15:36:19.889684 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-2jvzc" Dec 11 15:36:19 crc kubenswrapper[5050]: I1211 15:36:19.932692 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-9k9mx" Dec 11 15:36:20 crc kubenswrapper[5050]: I1211 15:36:20.278181 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-9k9mx"] Dec 11 15:36:20 crc kubenswrapper[5050]: I1211 15:36:20.848470 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-9k9mx" event={"ID":"ac9abb3a-77e8-4ff7-90c7-8eb335c95eb8","Type":"ContainerStarted","Data":"541861f96a87d2c6e3ee2d5d6669b92fca0a9b346227bb5ebfbf8d08b082f85d"} Dec 11 15:36:21 crc kubenswrapper[5050]: I1211 15:36:21.859098 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-9k9mx" event={"ID":"ac9abb3a-77e8-4ff7-90c7-8eb335c95eb8","Type":"ContainerStarted","Data":"1b74abef546ddc219dd2a15737230b4e90328b8a61b3f97039ec276041179c6c"} Dec 11 15:36:21 crc kubenswrapper[5050]: I1211 15:36:21.884611 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-9k9mx" podStartSLOduration=1.951239978 podStartE2EDuration="2.884590686s" podCreationTimestamp="2025-12-11 15:36:19 +0000 UTC" firstStartedPulling="2025-12-11 15:36:20.289502565 +0000 UTC m=+6471.133225151" lastFinishedPulling="2025-12-11 15:36:21.222853273 +0000 UTC m=+6472.066575859" observedRunningTime="2025-12-11 15:36:21.875358839 +0000 UTC m=+6472.719081435" watchObservedRunningTime="2025-12-11 15:36:21.884590686 +0000 UTC m=+6472.728313272" Dec 11 15:36:22 crc kubenswrapper[5050]: I1211 15:36:22.655588 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2jvzc"] Dec 11 15:36:22 crc kubenswrapper[5050]: I1211 15:36:22.656121 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-2jvzc" podUID="1bd755ba-c38c-438f-9d17-45e15a264292" containerName="registry-server" containerID="cri-o://1e2fce975148ddc535dcb5381946e5dd6885461def5bcaec568ea9112ad15049" gracePeriod=2 Dec 11 15:36:22 crc kubenswrapper[5050]: I1211 15:36:22.870476 5050 generic.go:334] "Generic (PLEG): container finished" podID="1bd755ba-c38c-438f-9d17-45e15a264292" containerID="1e2fce975148ddc535dcb5381946e5dd6885461def5bcaec568ea9112ad15049" exitCode=0 Dec 11 15:36:22 crc kubenswrapper[5050]: I1211 15:36:22.870555 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2jvzc" event={"ID":"1bd755ba-c38c-438f-9d17-45e15a264292","Type":"ContainerDied","Data":"1e2fce975148ddc535dcb5381946e5dd6885461def5bcaec568ea9112ad15049"} Dec 11 15:36:23 crc kubenswrapper[5050]: I1211 15:36:23.175068 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2jvzc" Dec 11 15:36:23 crc kubenswrapper[5050]: I1211 15:36:23.339346 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1bd755ba-c38c-438f-9d17-45e15a264292-catalog-content\") pod \"1bd755ba-c38c-438f-9d17-45e15a264292\" (UID: \"1bd755ba-c38c-438f-9d17-45e15a264292\") " Dec 11 15:36:23 crc kubenswrapper[5050]: I1211 15:36:23.339503 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n5zc2\" (UniqueName: \"kubernetes.io/projected/1bd755ba-c38c-438f-9d17-45e15a264292-kube-api-access-n5zc2\") pod \"1bd755ba-c38c-438f-9d17-45e15a264292\" (UID: \"1bd755ba-c38c-438f-9d17-45e15a264292\") " Dec 11 15:36:23 crc kubenswrapper[5050]: I1211 15:36:23.339661 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1bd755ba-c38c-438f-9d17-45e15a264292-utilities\") pod \"1bd755ba-c38c-438f-9d17-45e15a264292\" (UID: \"1bd755ba-c38c-438f-9d17-45e15a264292\") " Dec 11 15:36:23 crc kubenswrapper[5050]: I1211 15:36:23.343763 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1bd755ba-c38c-438f-9d17-45e15a264292-utilities" (OuterVolumeSpecName: "utilities") pod "1bd755ba-c38c-438f-9d17-45e15a264292" (UID: "1bd755ba-c38c-438f-9d17-45e15a264292"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 15:36:23 crc kubenswrapper[5050]: I1211 15:36:23.346080 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bd755ba-c38c-438f-9d17-45e15a264292-kube-api-access-n5zc2" (OuterVolumeSpecName: "kube-api-access-n5zc2") pod "1bd755ba-c38c-438f-9d17-45e15a264292" (UID: "1bd755ba-c38c-438f-9d17-45e15a264292"). InnerVolumeSpecName "kube-api-access-n5zc2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 15:36:23 crc kubenswrapper[5050]: I1211 15:36:23.442701 5050 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1bd755ba-c38c-438f-9d17-45e15a264292-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 15:36:23 crc kubenswrapper[5050]: I1211 15:36:23.442749 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n5zc2\" (UniqueName: \"kubernetes.io/projected/1bd755ba-c38c-438f-9d17-45e15a264292-kube-api-access-n5zc2\") on node \"crc\" DevicePath \"\"" Dec 11 15:36:23 crc kubenswrapper[5050]: I1211 15:36:23.467349 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1bd755ba-c38c-438f-9d17-45e15a264292-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1bd755ba-c38c-438f-9d17-45e15a264292" (UID: "1bd755ba-c38c-438f-9d17-45e15a264292"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 15:36:23 crc kubenswrapper[5050]: I1211 15:36:23.545435 5050 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1bd755ba-c38c-438f-9d17-45e15a264292-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 15:36:23 crc kubenswrapper[5050]: I1211 15:36:23.880704 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2jvzc" event={"ID":"1bd755ba-c38c-438f-9d17-45e15a264292","Type":"ContainerDied","Data":"73e516d97d5d8d7508ee67b16005c1a1b42f222d122d277908073724cb1aa323"} Dec 11 15:36:23 crc kubenswrapper[5050]: I1211 15:36:23.880739 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2jvzc" Dec 11 15:36:23 crc kubenswrapper[5050]: I1211 15:36:23.880757 5050 scope.go:117] "RemoveContainer" containerID="1e2fce975148ddc535dcb5381946e5dd6885461def5bcaec568ea9112ad15049" Dec 11 15:36:23 crc kubenswrapper[5050]: I1211 15:36:23.908798 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2jvzc"] Dec 11 15:36:23 crc kubenswrapper[5050]: I1211 15:36:23.914799 5050 scope.go:117] "RemoveContainer" containerID="9189a77a14c2de0447acb5823a2189cae45ca5f41f11ad4696d9290f1b332d6e" Dec 11 15:36:23 crc kubenswrapper[5050]: I1211 15:36:23.920494 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-2jvzc"] Dec 11 15:36:23 crc kubenswrapper[5050]: I1211 15:36:23.937410 5050 scope.go:117] "RemoveContainer" containerID="3a496f9b44a6cd1d82794d8dc8d3fbbf548c6f282a18732b07135b4a43132cc9" Dec 11 15:36:25 crc kubenswrapper[5050]: I1211 15:36:25.558619 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bd755ba-c38c-438f-9d17-45e15a264292" path="/var/lib/kubelet/pods/1bd755ba-c38c-438f-9d17-45e15a264292/volumes" Dec 11 15:36:40 crc kubenswrapper[5050]: I1211 15:36:40.796738 5050 patch_prober.go:28] interesting pod/machine-config-daemon-wcb2s container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 15:36:40 crc kubenswrapper[5050]: I1211 15:36:40.797601 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 15:37:10 crc kubenswrapper[5050]: I1211 15:37:10.797324 5050 patch_prober.go:28] interesting pod/machine-config-daemon-wcb2s container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 15:37:10 crc kubenswrapper[5050]: I1211 15:37:10.798555 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 15:37:10 crc kubenswrapper[5050]: I1211 15:37:10.798630 5050 kubelet.go:2542] 
"SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" Dec 11 15:37:10 crc kubenswrapper[5050]: I1211 15:37:10.800679 5050 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8e3528ca5f78bfb9e7123275334aa38f06e1578dafb2743ec06b823dfbe30e85"} pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 11 15:37:10 crc kubenswrapper[5050]: I1211 15:37:10.800751 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" containerName="machine-config-daemon" containerID="cri-o://8e3528ca5f78bfb9e7123275334aa38f06e1578dafb2743ec06b823dfbe30e85" gracePeriod=600 Dec 11 15:37:11 crc kubenswrapper[5050]: I1211 15:37:11.338752 5050 generic.go:334] "Generic (PLEG): container finished" podID="7e849b2e-7cd7-4e49-acd2-deab139c699a" containerID="8e3528ca5f78bfb9e7123275334aa38f06e1578dafb2743ec06b823dfbe30e85" exitCode=0 Dec 11 15:37:11 crc kubenswrapper[5050]: I1211 15:37:11.338846 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" event={"ID":"7e849b2e-7cd7-4e49-acd2-deab139c699a","Type":"ContainerDied","Data":"8e3528ca5f78bfb9e7123275334aa38f06e1578dafb2743ec06b823dfbe30e85"} Dec 11 15:37:11 crc kubenswrapper[5050]: I1211 15:37:11.339119 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" event={"ID":"7e849b2e-7cd7-4e49-acd2-deab139c699a","Type":"ContainerStarted","Data":"e1193cb2672c9b23c9602bdf8ae6d04616c4f7a2af4ed4a2aeb230e558518c57"} Dec 11 15:37:11 crc kubenswrapper[5050]: I1211 15:37:11.339142 5050 scope.go:117] "RemoveContainer" containerID="ebf03c01a0c1f974cd2ea46998cc404d2f6db1e8b59294fa047964a30636886e" Dec 11 15:37:29 crc kubenswrapper[5050]: I1211 15:37:29.951872 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-6vgw5"] Dec 11 15:37:29 crc kubenswrapper[5050]: E1211 15:37:29.955555 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1bd755ba-c38c-438f-9d17-45e15a264292" containerName="registry-server" Dec 11 15:37:29 crc kubenswrapper[5050]: I1211 15:37:29.955753 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="1bd755ba-c38c-438f-9d17-45e15a264292" containerName="registry-server" Dec 11 15:37:29 crc kubenswrapper[5050]: E1211 15:37:29.955881 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1bd755ba-c38c-438f-9d17-45e15a264292" containerName="extract-content" Dec 11 15:37:29 crc kubenswrapper[5050]: I1211 15:37:29.955964 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="1bd755ba-c38c-438f-9d17-45e15a264292" containerName="extract-content" Dec 11 15:37:29 crc kubenswrapper[5050]: E1211 15:37:29.956098 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1bd755ba-c38c-438f-9d17-45e15a264292" containerName="extract-utilities" Dec 11 15:37:29 crc kubenswrapper[5050]: I1211 15:37:29.956185 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="1bd755ba-c38c-438f-9d17-45e15a264292" containerName="extract-utilities" Dec 11 15:37:29 crc kubenswrapper[5050]: I1211 15:37:29.956616 5050 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="1bd755ba-c38c-438f-9d17-45e15a264292" containerName="registry-server" Dec 11 15:37:29 crc kubenswrapper[5050]: I1211 15:37:29.959710 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6vgw5" Dec 11 15:37:29 crc kubenswrapper[5050]: I1211 15:37:29.963128 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6vgw5"] Dec 11 15:37:30 crc kubenswrapper[5050]: I1211 15:37:30.109500 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/afa7987a-d45a-4198-8694-9228cf1d1203-catalog-content\") pod \"redhat-marketplace-6vgw5\" (UID: \"afa7987a-d45a-4198-8694-9228cf1d1203\") " pod="openshift-marketplace/redhat-marketplace-6vgw5" Dec 11 15:37:30 crc kubenswrapper[5050]: I1211 15:37:30.109609 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6hpv\" (UniqueName: \"kubernetes.io/projected/afa7987a-d45a-4198-8694-9228cf1d1203-kube-api-access-s6hpv\") pod \"redhat-marketplace-6vgw5\" (UID: \"afa7987a-d45a-4198-8694-9228cf1d1203\") " pod="openshift-marketplace/redhat-marketplace-6vgw5" Dec 11 15:37:30 crc kubenswrapper[5050]: I1211 15:37:30.109673 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/afa7987a-d45a-4198-8694-9228cf1d1203-utilities\") pod \"redhat-marketplace-6vgw5\" (UID: \"afa7987a-d45a-4198-8694-9228cf1d1203\") " pod="openshift-marketplace/redhat-marketplace-6vgw5" Dec 11 15:37:30 crc kubenswrapper[5050]: I1211 15:37:30.211350 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/afa7987a-d45a-4198-8694-9228cf1d1203-utilities\") pod \"redhat-marketplace-6vgw5\" (UID: \"afa7987a-d45a-4198-8694-9228cf1d1203\") " pod="openshift-marketplace/redhat-marketplace-6vgw5" Dec 11 15:37:30 crc kubenswrapper[5050]: I1211 15:37:30.211551 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/afa7987a-d45a-4198-8694-9228cf1d1203-catalog-content\") pod \"redhat-marketplace-6vgw5\" (UID: \"afa7987a-d45a-4198-8694-9228cf1d1203\") " pod="openshift-marketplace/redhat-marketplace-6vgw5" Dec 11 15:37:30 crc kubenswrapper[5050]: I1211 15:37:30.211643 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s6hpv\" (UniqueName: \"kubernetes.io/projected/afa7987a-d45a-4198-8694-9228cf1d1203-kube-api-access-s6hpv\") pod \"redhat-marketplace-6vgw5\" (UID: \"afa7987a-d45a-4198-8694-9228cf1d1203\") " pod="openshift-marketplace/redhat-marketplace-6vgw5" Dec 11 15:37:30 crc kubenswrapper[5050]: I1211 15:37:30.212364 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/afa7987a-d45a-4198-8694-9228cf1d1203-utilities\") pod \"redhat-marketplace-6vgw5\" (UID: \"afa7987a-d45a-4198-8694-9228cf1d1203\") " pod="openshift-marketplace/redhat-marketplace-6vgw5" Dec 11 15:37:30 crc kubenswrapper[5050]: I1211 15:37:30.212482 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/afa7987a-d45a-4198-8694-9228cf1d1203-catalog-content\") pod \"redhat-marketplace-6vgw5\" (UID: 
\"afa7987a-d45a-4198-8694-9228cf1d1203\") " pod="openshift-marketplace/redhat-marketplace-6vgw5" Dec 11 15:37:30 crc kubenswrapper[5050]: I1211 15:37:30.242859 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s6hpv\" (UniqueName: \"kubernetes.io/projected/afa7987a-d45a-4198-8694-9228cf1d1203-kube-api-access-s6hpv\") pod \"redhat-marketplace-6vgw5\" (UID: \"afa7987a-d45a-4198-8694-9228cf1d1203\") " pod="openshift-marketplace/redhat-marketplace-6vgw5" Dec 11 15:37:30 crc kubenswrapper[5050]: I1211 15:37:30.285673 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6vgw5" Dec 11 15:37:30 crc kubenswrapper[5050]: I1211 15:37:30.836427 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6vgw5"] Dec 11 15:37:31 crc kubenswrapper[5050]: I1211 15:37:31.562318 5050 generic.go:334] "Generic (PLEG): container finished" podID="afa7987a-d45a-4198-8694-9228cf1d1203" containerID="a135c698b5a3073b1d23af850d45df83e76a84739923ada8fbc2f254e05b1a66" exitCode=0 Dec 11 15:37:31 crc kubenswrapper[5050]: I1211 15:37:31.562470 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6vgw5" event={"ID":"afa7987a-d45a-4198-8694-9228cf1d1203","Type":"ContainerDied","Data":"a135c698b5a3073b1d23af850d45df83e76a84739923ada8fbc2f254e05b1a66"} Dec 11 15:37:31 crc kubenswrapper[5050]: I1211 15:37:31.562628 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6vgw5" event={"ID":"afa7987a-d45a-4198-8694-9228cf1d1203","Type":"ContainerStarted","Data":"59725683aa180307431c5564a8ec5391f179a6aa47fcb631af54aefa4c843da8"} Dec 11 15:37:35 crc kubenswrapper[5050]: I1211 15:37:35.617639 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6vgw5" event={"ID":"afa7987a-d45a-4198-8694-9228cf1d1203","Type":"ContainerStarted","Data":"0acb6f17180dcf3e66f0196b7da51e6e8eed3e21c1050326f53684107e1f5e4a"} Dec 11 15:37:37 crc kubenswrapper[5050]: I1211 15:37:37.708765 5050 generic.go:334] "Generic (PLEG): container finished" podID="afa7987a-d45a-4198-8694-9228cf1d1203" containerID="0acb6f17180dcf3e66f0196b7da51e6e8eed3e21c1050326f53684107e1f5e4a" exitCode=0 Dec 11 15:37:37 crc kubenswrapper[5050]: I1211 15:37:37.708840 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6vgw5" event={"ID":"afa7987a-d45a-4198-8694-9228cf1d1203","Type":"ContainerDied","Data":"0acb6f17180dcf3e66f0196b7da51e6e8eed3e21c1050326f53684107e1f5e4a"} Dec 11 15:37:41 crc kubenswrapper[5050]: I1211 15:37:41.763112 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6vgw5" event={"ID":"afa7987a-d45a-4198-8694-9228cf1d1203","Type":"ContainerStarted","Data":"71e4267ad0e718214cdf24fd8791560ce43a38c66085efeefe0c0d8d55df1f31"} Dec 11 15:37:41 crc kubenswrapper[5050]: I1211 15:37:41.796556 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-6vgw5" podStartSLOduration=4.063729599 podStartE2EDuration="12.796539471s" podCreationTimestamp="2025-12-11 15:37:29 +0000 UTC" firstStartedPulling="2025-12-11 15:37:31.56415405 +0000 UTC m=+6542.407876646" lastFinishedPulling="2025-12-11 15:37:40.296963932 +0000 UTC m=+6551.140686518" observedRunningTime="2025-12-11 15:37:41.795850553 +0000 UTC m=+6552.639573139" 
watchObservedRunningTime="2025-12-11 15:37:41.796539471 +0000 UTC m=+6552.640262057" Dec 11 15:37:50 crc kubenswrapper[5050]: I1211 15:37:50.292636 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-6vgw5" Dec 11 15:37:50 crc kubenswrapper[5050]: I1211 15:37:50.294237 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-6vgw5" Dec 11 15:37:50 crc kubenswrapper[5050]: I1211 15:37:50.343563 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-6vgw5" Dec 11 15:37:51 crc kubenswrapper[5050]: I1211 15:37:50.900084 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-6vgw5" Dec 11 15:37:51 crc kubenswrapper[5050]: I1211 15:37:51.545202 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6vgw5"] Dec 11 15:37:52 crc kubenswrapper[5050]: I1211 15:37:52.872148 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-6vgw5" podUID="afa7987a-d45a-4198-8694-9228cf1d1203" containerName="registry-server" containerID="cri-o://71e4267ad0e718214cdf24fd8791560ce43a38c66085efeefe0c0d8d55df1f31" gracePeriod=2 Dec 11 15:37:53 crc kubenswrapper[5050]: I1211 15:37:53.465526 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6vgw5" Dec 11 15:37:53 crc kubenswrapper[5050]: I1211 15:37:53.514261 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/afa7987a-d45a-4198-8694-9228cf1d1203-utilities\") pod \"afa7987a-d45a-4198-8694-9228cf1d1203\" (UID: \"afa7987a-d45a-4198-8694-9228cf1d1203\") " Dec 11 15:37:53 crc kubenswrapper[5050]: I1211 15:37:53.514584 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s6hpv\" (UniqueName: \"kubernetes.io/projected/afa7987a-d45a-4198-8694-9228cf1d1203-kube-api-access-s6hpv\") pod \"afa7987a-d45a-4198-8694-9228cf1d1203\" (UID: \"afa7987a-d45a-4198-8694-9228cf1d1203\") " Dec 11 15:37:53 crc kubenswrapper[5050]: I1211 15:37:53.514613 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/afa7987a-d45a-4198-8694-9228cf1d1203-catalog-content\") pod \"afa7987a-d45a-4198-8694-9228cf1d1203\" (UID: \"afa7987a-d45a-4198-8694-9228cf1d1203\") " Dec 11 15:37:53 crc kubenswrapper[5050]: I1211 15:37:53.515316 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/afa7987a-d45a-4198-8694-9228cf1d1203-utilities" (OuterVolumeSpecName: "utilities") pod "afa7987a-d45a-4198-8694-9228cf1d1203" (UID: "afa7987a-d45a-4198-8694-9228cf1d1203"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 15:37:53 crc kubenswrapper[5050]: I1211 15:37:53.530287 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/afa7987a-d45a-4198-8694-9228cf1d1203-kube-api-access-s6hpv" (OuterVolumeSpecName: "kube-api-access-s6hpv") pod "afa7987a-d45a-4198-8694-9228cf1d1203" (UID: "afa7987a-d45a-4198-8694-9228cf1d1203"). InnerVolumeSpecName "kube-api-access-s6hpv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 15:37:53 crc kubenswrapper[5050]: I1211 15:37:53.538069 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/afa7987a-d45a-4198-8694-9228cf1d1203-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "afa7987a-d45a-4198-8694-9228cf1d1203" (UID: "afa7987a-d45a-4198-8694-9228cf1d1203"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 15:37:53 crc kubenswrapper[5050]: I1211 15:37:53.618072 5050 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/afa7987a-d45a-4198-8694-9228cf1d1203-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 15:37:53 crc kubenswrapper[5050]: I1211 15:37:53.618647 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s6hpv\" (UniqueName: \"kubernetes.io/projected/afa7987a-d45a-4198-8694-9228cf1d1203-kube-api-access-s6hpv\") on node \"crc\" DevicePath \"\"" Dec 11 15:37:53 crc kubenswrapper[5050]: I1211 15:37:53.618678 5050 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/afa7987a-d45a-4198-8694-9228cf1d1203-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 15:37:53 crc kubenswrapper[5050]: I1211 15:37:53.882802 5050 generic.go:334] "Generic (PLEG): container finished" podID="afa7987a-d45a-4198-8694-9228cf1d1203" containerID="71e4267ad0e718214cdf24fd8791560ce43a38c66085efeefe0c0d8d55df1f31" exitCode=0 Dec 11 15:37:53 crc kubenswrapper[5050]: I1211 15:37:53.882869 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6vgw5" event={"ID":"afa7987a-d45a-4198-8694-9228cf1d1203","Type":"ContainerDied","Data":"71e4267ad0e718214cdf24fd8791560ce43a38c66085efeefe0c0d8d55df1f31"} Dec 11 15:37:53 crc kubenswrapper[5050]: I1211 15:37:53.882874 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6vgw5" Dec 11 15:37:53 crc kubenswrapper[5050]: I1211 15:37:53.882911 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6vgw5" event={"ID":"afa7987a-d45a-4198-8694-9228cf1d1203","Type":"ContainerDied","Data":"59725683aa180307431c5564a8ec5391f179a6aa47fcb631af54aefa4c843da8"} Dec 11 15:37:53 crc kubenswrapper[5050]: I1211 15:37:53.882934 5050 scope.go:117] "RemoveContainer" containerID="71e4267ad0e718214cdf24fd8791560ce43a38c66085efeefe0c0d8d55df1f31" Dec 11 15:37:53 crc kubenswrapper[5050]: I1211 15:37:53.911915 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6vgw5"] Dec 11 15:37:53 crc kubenswrapper[5050]: I1211 15:37:53.916458 5050 scope.go:117] "RemoveContainer" containerID="0acb6f17180dcf3e66f0196b7da51e6e8eed3e21c1050326f53684107e1f5e4a" Dec 11 15:37:53 crc kubenswrapper[5050]: I1211 15:37:53.923617 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-6vgw5"] Dec 11 15:37:53 crc kubenswrapper[5050]: I1211 15:37:53.936363 5050 scope.go:117] "RemoveContainer" containerID="a135c698b5a3073b1d23af850d45df83e76a84739923ada8fbc2f254e05b1a66" Dec 11 15:37:53 crc kubenswrapper[5050]: I1211 15:37:53.982279 5050 scope.go:117] "RemoveContainer" containerID="71e4267ad0e718214cdf24fd8791560ce43a38c66085efeefe0c0d8d55df1f31" Dec 11 15:37:53 crc kubenswrapper[5050]: E1211 15:37:53.982815 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"71e4267ad0e718214cdf24fd8791560ce43a38c66085efeefe0c0d8d55df1f31\": container with ID starting with 71e4267ad0e718214cdf24fd8791560ce43a38c66085efeefe0c0d8d55df1f31 not found: ID does not exist" containerID="71e4267ad0e718214cdf24fd8791560ce43a38c66085efeefe0c0d8d55df1f31" Dec 11 15:37:53 crc kubenswrapper[5050]: I1211 15:37:53.982862 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"71e4267ad0e718214cdf24fd8791560ce43a38c66085efeefe0c0d8d55df1f31"} err="failed to get container status \"71e4267ad0e718214cdf24fd8791560ce43a38c66085efeefe0c0d8d55df1f31\": rpc error: code = NotFound desc = could not find container \"71e4267ad0e718214cdf24fd8791560ce43a38c66085efeefe0c0d8d55df1f31\": container with ID starting with 71e4267ad0e718214cdf24fd8791560ce43a38c66085efeefe0c0d8d55df1f31 not found: ID does not exist" Dec 11 15:37:53 crc kubenswrapper[5050]: I1211 15:37:53.982891 5050 scope.go:117] "RemoveContainer" containerID="0acb6f17180dcf3e66f0196b7da51e6e8eed3e21c1050326f53684107e1f5e4a" Dec 11 15:37:53 crc kubenswrapper[5050]: E1211 15:37:53.983213 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0acb6f17180dcf3e66f0196b7da51e6e8eed3e21c1050326f53684107e1f5e4a\": container with ID starting with 0acb6f17180dcf3e66f0196b7da51e6e8eed3e21c1050326f53684107e1f5e4a not found: ID does not exist" containerID="0acb6f17180dcf3e66f0196b7da51e6e8eed3e21c1050326f53684107e1f5e4a" Dec 11 15:37:53 crc kubenswrapper[5050]: I1211 15:37:53.983236 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0acb6f17180dcf3e66f0196b7da51e6e8eed3e21c1050326f53684107e1f5e4a"} err="failed to get container status \"0acb6f17180dcf3e66f0196b7da51e6e8eed3e21c1050326f53684107e1f5e4a\": rpc error: code = NotFound desc = could not find 
container \"0acb6f17180dcf3e66f0196b7da51e6e8eed3e21c1050326f53684107e1f5e4a\": container with ID starting with 0acb6f17180dcf3e66f0196b7da51e6e8eed3e21c1050326f53684107e1f5e4a not found: ID does not exist" Dec 11 15:37:53 crc kubenswrapper[5050]: I1211 15:37:53.983250 5050 scope.go:117] "RemoveContainer" containerID="a135c698b5a3073b1d23af850d45df83e76a84739923ada8fbc2f254e05b1a66" Dec 11 15:37:53 crc kubenswrapper[5050]: E1211 15:37:53.983503 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a135c698b5a3073b1d23af850d45df83e76a84739923ada8fbc2f254e05b1a66\": container with ID starting with a135c698b5a3073b1d23af850d45df83e76a84739923ada8fbc2f254e05b1a66 not found: ID does not exist" containerID="a135c698b5a3073b1d23af850d45df83e76a84739923ada8fbc2f254e05b1a66" Dec 11 15:37:53 crc kubenswrapper[5050]: I1211 15:37:53.983556 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a135c698b5a3073b1d23af850d45df83e76a84739923ada8fbc2f254e05b1a66"} err="failed to get container status \"a135c698b5a3073b1d23af850d45df83e76a84739923ada8fbc2f254e05b1a66\": rpc error: code = NotFound desc = could not find container \"a135c698b5a3073b1d23af850d45df83e76a84739923ada8fbc2f254e05b1a66\": container with ID starting with a135c698b5a3073b1d23af850d45df83e76a84739923ada8fbc2f254e05b1a66 not found: ID does not exist" Dec 11 15:37:55 crc kubenswrapper[5050]: I1211 15:37:55.557655 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="afa7987a-d45a-4198-8694-9228cf1d1203" path="/var/lib/kubelet/pods/afa7987a-d45a-4198-8694-9228cf1d1203/volumes" Dec 11 15:38:03 crc kubenswrapper[5050]: I1211 15:38:03.057792 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/octavia-db-sync-n8tmc"] Dec 11 15:38:03 crc kubenswrapper[5050]: I1211 15:38:03.069300 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/octavia-db-sync-n8tmc"] Dec 11 15:38:03 crc kubenswrapper[5050]: I1211 15:38:03.556719 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6fb80c24-230a-4b3f-979a-9520f51ed32c" path="/var/lib/kubelet/pods/6fb80c24-230a-4b3f-979a-9520f51ed32c/volumes" Dec 11 15:38:06 crc kubenswrapper[5050]: I1211 15:38:06.402845 5050 scope.go:117] "RemoveContainer" containerID="f7a228099a0ae42dc9365fae502c1f5da022dac8d917141b5eab3dd3c293792c" Dec 11 15:38:06 crc kubenswrapper[5050]: I1211 15:38:06.428689 5050 scope.go:117] "RemoveContainer" containerID="ef3cc1b152fe2e6c01d02bf653b28a0ff7cfa29a0a06fa31c7e70a25e9c5370b" Dec 11 15:39:40 crc kubenswrapper[5050]: I1211 15:39:40.796894 5050 patch_prober.go:28] interesting pod/machine-config-daemon-wcb2s container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 15:39:40 crc kubenswrapper[5050]: I1211 15:39:40.797409 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 15:40:06 crc kubenswrapper[5050]: I1211 15:40:06.543907 5050 scope.go:117] "RemoveContainer" containerID="22b1dfb4a5657b19c2f23bf8514de39b07b2810851f4ca44610259f4c81cb7d7" Dec 11 
15:40:06 crc kubenswrapper[5050]: I1211 15:40:06.568208 5050 scope.go:117] "RemoveContainer" containerID="7f24b2bebdda17df7d919c4a53800aa15481b7f4278bdd020fa13306048a2873" Dec 11 15:40:06 crc kubenswrapper[5050]: I1211 15:40:06.591225 5050 scope.go:117] "RemoveContainer" containerID="3fbf19dabea36cc7a385d0fbcc8915b21baa754a31df40dd82f2ef4000c00f60" Dec 11 15:40:06 crc kubenswrapper[5050]: I1211 15:40:06.622664 5050 scope.go:117] "RemoveContainer" containerID="f958aa1522472d79dcdf0daf0cba7e2e0b8ade5eca4bf022d39c6e775e12abb2" Dec 11 15:40:10 crc kubenswrapper[5050]: I1211 15:40:10.796340 5050 patch_prober.go:28] interesting pod/machine-config-daemon-wcb2s container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 15:40:10 crc kubenswrapper[5050]: I1211 15:40:10.797043 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 15:40:16 crc kubenswrapper[5050]: I1211 15:40:16.044858 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-create-9rvnd"] Dec 11 15:40:16 crc kubenswrapper[5050]: I1211 15:40:16.060995 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-1848-account-create-update-npqlp"] Dec 11 15:40:16 crc kubenswrapper[5050]: I1211 15:40:16.070669 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-create-9rvnd"] Dec 11 15:40:16 crc kubenswrapper[5050]: I1211 15:40:16.080118 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-1848-account-create-update-npqlp"] Dec 11 15:40:17 crc kubenswrapper[5050]: I1211 15:40:17.560385 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f08ac59-fc5a-4846-ba07-e5de181aa3c8" path="/var/lib/kubelet/pods/8f08ac59-fc5a-4846-ba07-e5de181aa3c8/volumes" Dec 11 15:40:17 crc kubenswrapper[5050]: I1211 15:40:17.561440 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="97e356fb-2d8a-47f3-b2cc-c2af075c658c" path="/var/lib/kubelet/pods/97e356fb-2d8a-47f3-b2cc-c2af075c658c/volumes" Dec 11 15:40:24 crc kubenswrapper[5050]: I1211 15:40:24.954267 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-5g8lw" podUID="3477354d-838b-48cc-a6c3-612088d82640" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.80:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:40:24 crc kubenswrapper[5050]: I1211 15:40:24.954316 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-5g8lw" podUID="3477354d-838b-48cc-a6c3-612088d82640" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.80:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:40:29 crc kubenswrapper[5050]: I1211 15:40:29.052391 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-sync-wbkw4"] Dec 11 15:40:29 crc kubenswrapper[5050]: I1211 15:40:29.064080 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/heat-db-sync-wbkw4"] Dec 11 15:40:29 crc kubenswrapper[5050]: I1211 15:40:29.560075 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="756e8ca6-c0b8-4051-b88c-0cb6b0159661" path="/var/lib/kubelet/pods/756e8ca6-c0b8-4051-b88c-0cb6b0159661/volumes" Dec 11 15:40:40 crc kubenswrapper[5050]: I1211 15:40:40.798218 5050 patch_prober.go:28] interesting pod/machine-config-daemon-wcb2s container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 15:40:40 crc kubenswrapper[5050]: I1211 15:40:40.798777 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 15:40:40 crc kubenswrapper[5050]: I1211 15:40:40.798831 5050 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" Dec 11 15:40:40 crc kubenswrapper[5050]: I1211 15:40:40.799742 5050 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e1193cb2672c9b23c9602bdf8ae6d04616c4f7a2af4ed4a2aeb230e558518c57"} pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 11 15:40:40 crc kubenswrapper[5050]: I1211 15:40:40.799798 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" containerName="machine-config-daemon" containerID="cri-o://e1193cb2672c9b23c9602bdf8ae6d04616c4f7a2af4ed4a2aeb230e558518c57" gracePeriod=600 Dec 11 15:40:40 crc kubenswrapper[5050]: E1211 15:40:40.922207 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 15:40:41 crc kubenswrapper[5050]: I1211 15:40:41.584558 5050 generic.go:334] "Generic (PLEG): container finished" podID="7e849b2e-7cd7-4e49-acd2-deab139c699a" containerID="e1193cb2672c9b23c9602bdf8ae6d04616c4f7a2af4ed4a2aeb230e558518c57" exitCode=0 Dec 11 15:40:41 crc kubenswrapper[5050]: I1211 15:40:41.584605 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" event={"ID":"7e849b2e-7cd7-4e49-acd2-deab139c699a","Type":"ContainerDied","Data":"e1193cb2672c9b23c9602bdf8ae6d04616c4f7a2af4ed4a2aeb230e558518c57"} Dec 11 15:40:41 crc kubenswrapper[5050]: I1211 15:40:41.584640 5050 scope.go:117] "RemoveContainer" containerID="8e3528ca5f78bfb9e7123275334aa38f06e1578dafb2743ec06b823dfbe30e85" Dec 11 15:40:41 crc kubenswrapper[5050]: I1211 15:40:41.585196 5050 scope.go:117] "RemoveContainer" containerID="e1193cb2672c9b23c9602bdf8ae6d04616c4f7a2af4ed4a2aeb230e558518c57" Dec 11 15:40:41 crc kubenswrapper[5050]: 
E1211 15:40:41.585591 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 15:40:54 crc kubenswrapper[5050]: I1211 15:40:54.547273 5050 scope.go:117] "RemoveContainer" containerID="e1193cb2672c9b23c9602bdf8ae6d04616c4f7a2af4ed4a2aeb230e558518c57" Dec 11 15:40:54 crc kubenswrapper[5050]: E1211 15:40:54.548663 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 15:41:06 crc kubenswrapper[5050]: I1211 15:41:06.545928 5050 scope.go:117] "RemoveContainer" containerID="e1193cb2672c9b23c9602bdf8ae6d04616c4f7a2af4ed4a2aeb230e558518c57" Dec 11 15:41:06 crc kubenswrapper[5050]: E1211 15:41:06.546772 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 15:41:06 crc kubenswrapper[5050]: I1211 15:41:06.687932 5050 scope.go:117] "RemoveContainer" containerID="72c434b48dfb3f08da8a1635c5bb353f39eacf953520b12c042dccad30273b9c" Dec 11 15:41:06 crc kubenswrapper[5050]: I1211 15:41:06.725326 5050 scope.go:117] "RemoveContainer" containerID="a50da491ac8c14648bab31f323eb7ac6bcc69a84651a8d2540ce227e0ff2ebef" Dec 11 15:41:06 crc kubenswrapper[5050]: I1211 15:41:06.763193 5050 scope.go:117] "RemoveContainer" containerID="bc0e5c33022195409827a4a5e6edb5b46a7222d53c1dc50c05cb0771a9835ecd" Dec 11 15:41:17 crc kubenswrapper[5050]: I1211 15:41:17.547213 5050 scope.go:117] "RemoveContainer" containerID="e1193cb2672c9b23c9602bdf8ae6d04616c4f7a2af4ed4a2aeb230e558518c57" Dec 11 15:41:17 crc kubenswrapper[5050]: E1211 15:41:17.547982 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 15:41:32 crc kubenswrapper[5050]: I1211 15:41:32.547529 5050 scope.go:117] "RemoveContainer" containerID="e1193cb2672c9b23c9602bdf8ae6d04616c4f7a2af4ed4a2aeb230e558518c57" Dec 11 15:41:32 crc kubenswrapper[5050]: E1211 15:41:32.548360 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 15:41:46 crc kubenswrapper[5050]: I1211 15:41:46.546930 5050 scope.go:117] "RemoveContainer" containerID="e1193cb2672c9b23c9602bdf8ae6d04616c4f7a2af4ed4a2aeb230e558518c57" Dec 11 15:41:46 crc kubenswrapper[5050]: E1211 15:41:46.547912 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 15:42:01 crc kubenswrapper[5050]: I1211 15:42:01.546187 5050 scope.go:117] "RemoveContainer" containerID="e1193cb2672c9b23c9602bdf8ae6d04616c4f7a2af4ed4a2aeb230e558518c57" Dec 11 15:42:01 crc kubenswrapper[5050]: E1211 15:42:01.546994 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 15:42:15 crc kubenswrapper[5050]: I1211 15:42:15.546881 5050 scope.go:117] "RemoveContainer" containerID="e1193cb2672c9b23c9602bdf8ae6d04616c4f7a2af4ed4a2aeb230e558518c57" Dec 11 15:42:15 crc kubenswrapper[5050]: E1211 15:42:15.548417 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 15:42:28 crc kubenswrapper[5050]: I1211 15:42:28.546548 5050 scope.go:117] "RemoveContainer" containerID="e1193cb2672c9b23c9602bdf8ae6d04616c4f7a2af4ed4a2aeb230e558518c57" Dec 11 15:42:28 crc kubenswrapper[5050]: E1211 15:42:28.547375 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 15:42:40 crc kubenswrapper[5050]: I1211 15:42:40.546589 5050 scope.go:117] "RemoveContainer" containerID="e1193cb2672c9b23c9602bdf8ae6d04616c4f7a2af4ed4a2aeb230e558518c57" Dec 11 15:42:40 crc kubenswrapper[5050]: E1211 15:42:40.547422 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" 
podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 15:42:47 crc kubenswrapper[5050]: I1211 15:42:47.045815 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-zmbmw"] Dec 11 15:42:47 crc kubenswrapper[5050]: E1211 15:42:47.055499 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afa7987a-d45a-4198-8694-9228cf1d1203" containerName="extract-content" Dec 11 15:42:47 crc kubenswrapper[5050]: I1211 15:42:47.056209 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="afa7987a-d45a-4198-8694-9228cf1d1203" containerName="extract-content" Dec 11 15:42:47 crc kubenswrapper[5050]: E1211 15:42:47.056451 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afa7987a-d45a-4198-8694-9228cf1d1203" containerName="registry-server" Dec 11 15:42:47 crc kubenswrapper[5050]: I1211 15:42:47.056464 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="afa7987a-d45a-4198-8694-9228cf1d1203" containerName="registry-server" Dec 11 15:42:47 crc kubenswrapper[5050]: E1211 15:42:47.056685 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afa7987a-d45a-4198-8694-9228cf1d1203" containerName="extract-utilities" Dec 11 15:42:47 crc kubenswrapper[5050]: I1211 15:42:47.056699 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="afa7987a-d45a-4198-8694-9228cf1d1203" containerName="extract-utilities" Dec 11 15:42:47 crc kubenswrapper[5050]: I1211 15:42:47.058216 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="afa7987a-d45a-4198-8694-9228cf1d1203" containerName="registry-server" Dec 11 15:42:47 crc kubenswrapper[5050]: I1211 15:42:47.068551 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zmbmw"] Dec 11 15:42:47 crc kubenswrapper[5050]: I1211 15:42:47.068705 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-zmbmw" Dec 11 15:42:47 crc kubenswrapper[5050]: I1211 15:42:47.171170 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bfb3fcc4-1637-480a-990e-9f047794183b-utilities\") pod \"community-operators-zmbmw\" (UID: \"bfb3fcc4-1637-480a-990e-9f047794183b\") " pod="openshift-marketplace/community-operators-zmbmw" Dec 11 15:42:47 crc kubenswrapper[5050]: I1211 15:42:47.171258 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrdlv\" (UniqueName: \"kubernetes.io/projected/bfb3fcc4-1637-480a-990e-9f047794183b-kube-api-access-xrdlv\") pod \"community-operators-zmbmw\" (UID: \"bfb3fcc4-1637-480a-990e-9f047794183b\") " pod="openshift-marketplace/community-operators-zmbmw" Dec 11 15:42:47 crc kubenswrapper[5050]: I1211 15:42:47.171528 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bfb3fcc4-1637-480a-990e-9f047794183b-catalog-content\") pod \"community-operators-zmbmw\" (UID: \"bfb3fcc4-1637-480a-990e-9f047794183b\") " pod="openshift-marketplace/community-operators-zmbmw" Dec 11 15:42:47 crc kubenswrapper[5050]: I1211 15:42:47.273931 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bfb3fcc4-1637-480a-990e-9f047794183b-catalog-content\") pod \"community-operators-zmbmw\" (UID: \"bfb3fcc4-1637-480a-990e-9f047794183b\") " pod="openshift-marketplace/community-operators-zmbmw" Dec 11 15:42:47 crc kubenswrapper[5050]: I1211 15:42:47.274089 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bfb3fcc4-1637-480a-990e-9f047794183b-utilities\") pod \"community-operators-zmbmw\" (UID: \"bfb3fcc4-1637-480a-990e-9f047794183b\") " pod="openshift-marketplace/community-operators-zmbmw" Dec 11 15:42:47 crc kubenswrapper[5050]: I1211 15:42:47.274163 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xrdlv\" (UniqueName: \"kubernetes.io/projected/bfb3fcc4-1637-480a-990e-9f047794183b-kube-api-access-xrdlv\") pod \"community-operators-zmbmw\" (UID: \"bfb3fcc4-1637-480a-990e-9f047794183b\") " pod="openshift-marketplace/community-operators-zmbmw" Dec 11 15:42:47 crc kubenswrapper[5050]: I1211 15:42:47.274538 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bfb3fcc4-1637-480a-990e-9f047794183b-utilities\") pod \"community-operators-zmbmw\" (UID: \"bfb3fcc4-1637-480a-990e-9f047794183b\") " pod="openshift-marketplace/community-operators-zmbmw" Dec 11 15:42:47 crc kubenswrapper[5050]: I1211 15:42:47.274535 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bfb3fcc4-1637-480a-990e-9f047794183b-catalog-content\") pod \"community-operators-zmbmw\" (UID: \"bfb3fcc4-1637-480a-990e-9f047794183b\") " pod="openshift-marketplace/community-operators-zmbmw" Dec 11 15:42:47 crc kubenswrapper[5050]: I1211 15:42:47.301204 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xrdlv\" (UniqueName: \"kubernetes.io/projected/bfb3fcc4-1637-480a-990e-9f047794183b-kube-api-access-xrdlv\") pod 
\"community-operators-zmbmw\" (UID: \"bfb3fcc4-1637-480a-990e-9f047794183b\") " pod="openshift-marketplace/community-operators-zmbmw" Dec 11 15:42:47 crc kubenswrapper[5050]: I1211 15:42:47.394559 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zmbmw" Dec 11 15:42:48 crc kubenswrapper[5050]: I1211 15:42:48.781729 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zmbmw"] Dec 11 15:42:48 crc kubenswrapper[5050]: I1211 15:42:48.930952 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zmbmw" event={"ID":"bfb3fcc4-1637-480a-990e-9f047794183b","Type":"ContainerStarted","Data":"3ec2a5ee0dbaea51d821223b1a0ee0e995db3dfb89d2f37d5851d2b3d6d97ec1"} Dec 11 15:42:49 crc kubenswrapper[5050]: I1211 15:42:49.942421 5050 generic.go:334] "Generic (PLEG): container finished" podID="bfb3fcc4-1637-480a-990e-9f047794183b" containerID="6f0a9ad35a40d95332cb00251c833ef5c5928b647111369d575373494e99d72e" exitCode=0 Dec 11 15:42:49 crc kubenswrapper[5050]: I1211 15:42:49.942764 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zmbmw" event={"ID":"bfb3fcc4-1637-480a-990e-9f047794183b","Type":"ContainerDied","Data":"6f0a9ad35a40d95332cb00251c833ef5c5928b647111369d575373494e99d72e"} Dec 11 15:42:49 crc kubenswrapper[5050]: I1211 15:42:49.945262 5050 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 11 15:42:51 crc kubenswrapper[5050]: I1211 15:42:51.965315 5050 generic.go:334] "Generic (PLEG): container finished" podID="bfb3fcc4-1637-480a-990e-9f047794183b" containerID="ecb6d95491f8343d87b207d4c5b6187b81a9d627f8d9d06a110987e465f86464" exitCode=0 Dec 11 15:42:51 crc kubenswrapper[5050]: I1211 15:42:51.965377 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zmbmw" event={"ID":"bfb3fcc4-1637-480a-990e-9f047794183b","Type":"ContainerDied","Data":"ecb6d95491f8343d87b207d4c5b6187b81a9d627f8d9d06a110987e465f86464"} Dec 11 15:42:52 crc kubenswrapper[5050]: I1211 15:42:52.983671 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zmbmw" event={"ID":"bfb3fcc4-1637-480a-990e-9f047794183b","Type":"ContainerStarted","Data":"14a7c98f02f8bb21a4c3925134cb4a367d929b9d393fcd9affd448c520811e5b"} Dec 11 15:42:53 crc kubenswrapper[5050]: I1211 15:42:53.009152 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-zmbmw" podStartSLOduration=4.523116063 podStartE2EDuration="7.009135652s" podCreationTimestamp="2025-12-11 15:42:46 +0000 UTC" firstStartedPulling="2025-12-11 15:42:49.944991755 +0000 UTC m=+6860.788714341" lastFinishedPulling="2025-12-11 15:42:52.431011344 +0000 UTC m=+6863.274733930" observedRunningTime="2025-12-11 15:42:53.005543426 +0000 UTC m=+6863.849266002" watchObservedRunningTime="2025-12-11 15:42:53.009135652 +0000 UTC m=+6863.852858238" Dec 11 15:42:54 crc kubenswrapper[5050]: I1211 15:42:54.545991 5050 scope.go:117] "RemoveContainer" containerID="e1193cb2672c9b23c9602bdf8ae6d04616c4f7a2af4ed4a2aeb230e558518c57" Dec 11 15:42:54 crc kubenswrapper[5050]: E1211 15:42:54.546505 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 15:42:57 crc kubenswrapper[5050]: I1211 15:42:57.394863 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-zmbmw" Dec 11 15:42:57 crc kubenswrapper[5050]: I1211 15:42:57.395164 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-zmbmw" Dec 11 15:42:57 crc kubenswrapper[5050]: I1211 15:42:57.452195 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-zmbmw" Dec 11 15:42:58 crc kubenswrapper[5050]: I1211 15:42:58.096673 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-zmbmw" Dec 11 15:42:58 crc kubenswrapper[5050]: I1211 15:42:58.152686 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zmbmw"] Dec 11 15:42:58 crc kubenswrapper[5050]: I1211 15:42:58.214651 5050 patch_prober.go:28] interesting pod/nmstate-webhook-f8fb84555-jjj6z container/nmstate-webhook namespace/openshift-nmstate: Readiness probe status=failure output="Get \"https://10.217.0.44:9443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 11 15:42:58 crc kubenswrapper[5050]: I1211 15:42:58.214719 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-nmstate/nmstate-webhook-f8fb84555-jjj6z" podUID="3921dc89-0902-4125-9b83-ff0a3c1c486c" containerName="nmstate-webhook" probeResult="failure" output="Get \"https://10.217.0.44:9443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 11 15:43:00 crc kubenswrapper[5050]: I1211 15:43:00.070913 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-zmbmw" podUID="bfb3fcc4-1637-480a-990e-9f047794183b" containerName="registry-server" containerID="cri-o://14a7c98f02f8bb21a4c3925134cb4a367d929b9d393fcd9affd448c520811e5b" gracePeriod=2 Dec 11 15:43:01 crc kubenswrapper[5050]: I1211 15:43:01.080434 5050 generic.go:334] "Generic (PLEG): container finished" podID="bfb3fcc4-1637-480a-990e-9f047794183b" containerID="14a7c98f02f8bb21a4c3925134cb4a367d929b9d393fcd9affd448c520811e5b" exitCode=0 Dec 11 15:43:01 crc kubenswrapper[5050]: I1211 15:43:01.080520 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zmbmw" event={"ID":"bfb3fcc4-1637-480a-990e-9f047794183b","Type":"ContainerDied","Data":"14a7c98f02f8bb21a4c3925134cb4a367d929b9d393fcd9affd448c520811e5b"} Dec 11 15:43:01 crc kubenswrapper[5050]: I1211 15:43:01.792786 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-zmbmw" Dec 11 15:43:01 crc kubenswrapper[5050]: I1211 15:43:01.925925 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xrdlv\" (UniqueName: \"kubernetes.io/projected/bfb3fcc4-1637-480a-990e-9f047794183b-kube-api-access-xrdlv\") pod \"bfb3fcc4-1637-480a-990e-9f047794183b\" (UID: \"bfb3fcc4-1637-480a-990e-9f047794183b\") " Dec 11 15:43:01 crc kubenswrapper[5050]: I1211 15:43:01.926062 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bfb3fcc4-1637-480a-990e-9f047794183b-catalog-content\") pod \"bfb3fcc4-1637-480a-990e-9f047794183b\" (UID: \"bfb3fcc4-1637-480a-990e-9f047794183b\") " Dec 11 15:43:01 crc kubenswrapper[5050]: I1211 15:43:01.926109 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bfb3fcc4-1637-480a-990e-9f047794183b-utilities\") pod \"bfb3fcc4-1637-480a-990e-9f047794183b\" (UID: \"bfb3fcc4-1637-480a-990e-9f047794183b\") " Dec 11 15:43:01 crc kubenswrapper[5050]: I1211 15:43:01.927276 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bfb3fcc4-1637-480a-990e-9f047794183b-utilities" (OuterVolumeSpecName: "utilities") pod "bfb3fcc4-1637-480a-990e-9f047794183b" (UID: "bfb3fcc4-1637-480a-990e-9f047794183b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 15:43:01 crc kubenswrapper[5050]: I1211 15:43:01.937250 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bfb3fcc4-1637-480a-990e-9f047794183b-kube-api-access-xrdlv" (OuterVolumeSpecName: "kube-api-access-xrdlv") pod "bfb3fcc4-1637-480a-990e-9f047794183b" (UID: "bfb3fcc4-1637-480a-990e-9f047794183b"). InnerVolumeSpecName "kube-api-access-xrdlv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 15:43:01 crc kubenswrapper[5050]: I1211 15:43:01.972526 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bfb3fcc4-1637-480a-990e-9f047794183b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bfb3fcc4-1637-480a-990e-9f047794183b" (UID: "bfb3fcc4-1637-480a-990e-9f047794183b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 15:43:02 crc kubenswrapper[5050]: I1211 15:43:02.028570 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xrdlv\" (UniqueName: \"kubernetes.io/projected/bfb3fcc4-1637-480a-990e-9f047794183b-kube-api-access-xrdlv\") on node \"crc\" DevicePath \"\"" Dec 11 15:43:02 crc kubenswrapper[5050]: I1211 15:43:02.028607 5050 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bfb3fcc4-1637-480a-990e-9f047794183b-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 15:43:02 crc kubenswrapper[5050]: I1211 15:43:02.028616 5050 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bfb3fcc4-1637-480a-990e-9f047794183b-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 15:43:02 crc kubenswrapper[5050]: I1211 15:43:02.091699 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zmbmw" event={"ID":"bfb3fcc4-1637-480a-990e-9f047794183b","Type":"ContainerDied","Data":"3ec2a5ee0dbaea51d821223b1a0ee0e995db3dfb89d2f37d5851d2b3d6d97ec1"} Dec 11 15:43:02 crc kubenswrapper[5050]: I1211 15:43:02.091756 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zmbmw" Dec 11 15:43:02 crc kubenswrapper[5050]: I1211 15:43:02.091770 5050 scope.go:117] "RemoveContainer" containerID="14a7c98f02f8bb21a4c3925134cb4a367d929b9d393fcd9affd448c520811e5b" Dec 11 15:43:02 crc kubenswrapper[5050]: I1211 15:43:02.127167 5050 scope.go:117] "RemoveContainer" containerID="ecb6d95491f8343d87b207d4c5b6187b81a9d627f8d9d06a110987e465f86464" Dec 11 15:43:02 crc kubenswrapper[5050]: I1211 15:43:02.129141 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zmbmw"] Dec 11 15:43:02 crc kubenswrapper[5050]: I1211 15:43:02.141645 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-zmbmw"] Dec 11 15:43:02 crc kubenswrapper[5050]: I1211 15:43:02.177329 5050 scope.go:117] "RemoveContainer" containerID="6f0a9ad35a40d95332cb00251c833ef5c5928b647111369d575373494e99d72e" Dec 11 15:43:03 crc kubenswrapper[5050]: I1211 15:43:03.559849 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bfb3fcc4-1637-480a-990e-9f047794183b" path="/var/lib/kubelet/pods/bfb3fcc4-1637-480a-990e-9f047794183b/volumes" Dec 11 15:43:04 crc kubenswrapper[5050]: I1211 15:43:04.052638 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-create-76vp7"] Dec 11 15:43:04 crc kubenswrapper[5050]: I1211 15:43:04.063043 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-059a-account-create-update-hpdrc"] Dec 11 15:43:04 crc kubenswrapper[5050]: I1211 15:43:04.073528 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-create-76vp7"] Dec 11 15:43:04 crc kubenswrapper[5050]: I1211 15:43:04.081730 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-059a-account-create-update-hpdrc"] Dec 11 15:43:05 crc kubenswrapper[5050]: I1211 15:43:05.562086 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4adb707f-3ea4-4240-945e-56011d9af159" path="/var/lib/kubelet/pods/4adb707f-3ea4-4240-945e-56011d9af159/volumes" Dec 11 15:43:05 crc kubenswrapper[5050]: I1211 15:43:05.563740 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="690a3b25-657c-45f2-b9ea-0524747cfc73" path="/var/lib/kubelet/pods/690a3b25-657c-45f2-b9ea-0524747cfc73/volumes" Dec 11 15:43:06 crc kubenswrapper[5050]: I1211 15:43:06.900261 5050 scope.go:117] "RemoveContainer" containerID="08c868ab19a3bbdb777661860d40e3b26650d8c18672365c50e50d93b4f1cb98" Dec 11 15:43:06 crc kubenswrapper[5050]: I1211 15:43:06.926530 5050 scope.go:117] "RemoveContainer" containerID="d13fdec15ae43ef5e65e61a5ba702be22b0ed8bf81a2bc8d3525f301f55db3de" Dec 11 15:43:09 crc kubenswrapper[5050]: I1211 15:43:09.556907 5050 scope.go:117] "RemoveContainer" containerID="e1193cb2672c9b23c9602bdf8ae6d04616c4f7a2af4ed4a2aeb230e558518c57" Dec 11 15:43:09 crc kubenswrapper[5050]: E1211 15:43:09.557790 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 15:43:20 crc kubenswrapper[5050]: I1211 15:43:20.547468 5050 scope.go:117] "RemoveContainer" containerID="e1193cb2672c9b23c9602bdf8ae6d04616c4f7a2af4ed4a2aeb230e558518c57" Dec 11 15:43:20 crc kubenswrapper[5050]: E1211 15:43:20.549429 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 15:43:24 crc kubenswrapper[5050]: I1211 15:43:24.043818 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-sync-d74n6"] Dec 11 15:43:24 crc kubenswrapper[5050]: I1211 15:43:24.075933 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-sync-d74n6"] Dec 11 15:43:25 crc kubenswrapper[5050]: I1211 15:43:25.557665 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8bc5b4ae-cf53-4fc7-8233-bcb362806684" path="/var/lib/kubelet/pods/8bc5b4ae-cf53-4fc7-8233-bcb362806684/volumes" Dec 11 15:43:32 crc kubenswrapper[5050]: I1211 15:43:32.545972 5050 scope.go:117] "RemoveContainer" containerID="e1193cb2672c9b23c9602bdf8ae6d04616c4f7a2af4ed4a2aeb230e558518c57" Dec 11 15:43:32 crc kubenswrapper[5050]: E1211 15:43:32.546847 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 15:43:44 crc kubenswrapper[5050]: I1211 15:43:44.546929 5050 scope.go:117] "RemoveContainer" containerID="e1193cb2672c9b23c9602bdf8ae6d04616c4f7a2af4ed4a2aeb230e558518c57" Dec 11 15:43:44 crc kubenswrapper[5050]: E1211 15:43:44.548759 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 15:43:50 crc kubenswrapper[5050]: I1211 15:43:50.032621 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-f133-account-create-update-4k82x"] Dec 11 15:43:50 crc kubenswrapper[5050]: I1211 15:43:50.042593 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-db-create-zmz4j"] Dec 11 15:43:50 crc kubenswrapper[5050]: I1211 15:43:50.053329 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-f133-account-create-update-4k82x"] Dec 11 15:43:50 crc kubenswrapper[5050]: I1211 15:43:50.062276 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-db-create-zmz4j"] Dec 11 15:43:51 crc kubenswrapper[5050]: I1211 15:43:51.559664 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8af6ed6f-5644-47f2-b05f-21d9d019e926" path="/var/lib/kubelet/pods/8af6ed6f-5644-47f2-b05f-21d9d019e926/volumes" Dec 11 15:43:51 crc kubenswrapper[5050]: I1211 15:43:51.560575 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a8ebf3f9-d34a-4eff-8911-2760f8bb9b55" path="/var/lib/kubelet/pods/a8ebf3f9-d34a-4eff-8911-2760f8bb9b55/volumes" Dec 11 15:43:57 crc kubenswrapper[5050]: I1211 15:43:57.547152 5050 scope.go:117] "RemoveContainer" containerID="e1193cb2672c9b23c9602bdf8ae6d04616c4f7a2af4ed4a2aeb230e558518c57" Dec 11 15:43:57 crc kubenswrapper[5050]: E1211 15:43:57.548503 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 15:44:01 crc kubenswrapper[5050]: I1211 15:44:01.052967 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-db-sync-t56dz"] Dec 11 15:44:01 crc kubenswrapper[5050]: I1211 15:44:01.062288 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-db-sync-t56dz"] Dec 11 15:44:01 crc kubenswrapper[5050]: I1211 15:44:01.559878 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e6e01872-2756-4ca0-b7d6-a8bf1c80ed46" path="/var/lib/kubelet/pods/e6e01872-2756-4ca0-b7d6-a8bf1c80ed46/volumes" Dec 11 15:44:07 crc kubenswrapper[5050]: I1211 15:44:07.038978 5050 scope.go:117] "RemoveContainer" containerID="b6f836f0045db03d7529079df1faf86d67993a6817fb87815a04a34c22c372df" Dec 11 15:44:07 crc kubenswrapper[5050]: I1211 15:44:07.081301 5050 scope.go:117] "RemoveContainer" containerID="55a2e9d9bf123fb10a3d360673f3ae553986b43cda11c9367ddb0244986c5899" Dec 11 15:44:07 crc kubenswrapper[5050]: I1211 15:44:07.168210 5050 scope.go:117] "RemoveContainer" containerID="d106b55970922c5710ed694066cf811edd1799badadc2a6dc2ce8e89552a73a3" Dec 11 15:44:07 crc kubenswrapper[5050]: I1211 15:44:07.244187 5050 scope.go:117] "RemoveContainer" containerID="389145b73a7bb74e1f167cd4a81a913e92f7842804320675f2d543e6870ecc00" Dec 11 15:44:12 crc kubenswrapper[5050]: I1211 15:44:12.547369 5050 scope.go:117] "RemoveContainer" containerID="e1193cb2672c9b23c9602bdf8ae6d04616c4f7a2af4ed4a2aeb230e558518c57" Dec 11 15:44:12 crc 
kubenswrapper[5050]: E1211 15:44:12.548167 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 15:44:27 crc kubenswrapper[5050]: I1211 15:44:27.548063 5050 scope.go:117] "RemoveContainer" containerID="e1193cb2672c9b23c9602bdf8ae6d04616c4f7a2af4ed4a2aeb230e558518c57" Dec 11 15:44:27 crc kubenswrapper[5050]: E1211 15:44:27.548936 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 15:44:40 crc kubenswrapper[5050]: I1211 15:44:40.546958 5050 scope.go:117] "RemoveContainer" containerID="e1193cb2672c9b23c9602bdf8ae6d04616c4f7a2af4ed4a2aeb230e558518c57" Dec 11 15:44:40 crc kubenswrapper[5050]: E1211 15:44:40.547817 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 15:44:52 crc kubenswrapper[5050]: I1211 15:44:52.546689 5050 scope.go:117] "RemoveContainer" containerID="e1193cb2672c9b23c9602bdf8ae6d04616c4f7a2af4ed4a2aeb230e558518c57" Dec 11 15:44:52 crc kubenswrapper[5050]: E1211 15:44:52.547439 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 15:45:00 crc kubenswrapper[5050]: I1211 15:45:00.354366 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29424465-826cb"] Dec 11 15:45:00 crc kubenswrapper[5050]: E1211 15:45:00.355475 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bfb3fcc4-1637-480a-990e-9f047794183b" containerName="extract-utilities" Dec 11 15:45:00 crc kubenswrapper[5050]: I1211 15:45:00.355495 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfb3fcc4-1637-480a-990e-9f047794183b" containerName="extract-utilities" Dec 11 15:45:00 crc kubenswrapper[5050]: E1211 15:45:00.355519 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bfb3fcc4-1637-480a-990e-9f047794183b" containerName="registry-server" Dec 11 15:45:00 crc kubenswrapper[5050]: I1211 15:45:00.355530 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfb3fcc4-1637-480a-990e-9f047794183b" containerName="registry-server" Dec 11 15:45:00 crc kubenswrapper[5050]: E1211 15:45:00.355563 5050 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bfb3fcc4-1637-480a-990e-9f047794183b" containerName="extract-content" Dec 11 15:45:00 crc kubenswrapper[5050]: I1211 15:45:00.355572 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfb3fcc4-1637-480a-990e-9f047794183b" containerName="extract-content" Dec 11 15:45:00 crc kubenswrapper[5050]: I1211 15:45:00.355834 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="bfb3fcc4-1637-480a-990e-9f047794183b" containerName="registry-server" Dec 11 15:45:00 crc kubenswrapper[5050]: I1211 15:45:00.356697 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29424465-826cb" Dec 11 15:45:00 crc kubenswrapper[5050]: I1211 15:45:00.359337 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 11 15:45:00 crc kubenswrapper[5050]: I1211 15:45:00.359678 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 11 15:45:00 crc kubenswrapper[5050]: I1211 15:45:00.368220 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29424465-826cb"] Dec 11 15:45:00 crc kubenswrapper[5050]: I1211 15:45:00.431699 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6lqv\" (UniqueName: \"kubernetes.io/projected/b5140445-efab-4fd8-8ba9-5db4a8e87d96-kube-api-access-w6lqv\") pod \"collect-profiles-29424465-826cb\" (UID: \"b5140445-efab-4fd8-8ba9-5db4a8e87d96\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424465-826cb" Dec 11 15:45:00 crc kubenswrapper[5050]: I1211 15:45:00.431818 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b5140445-efab-4fd8-8ba9-5db4a8e87d96-config-volume\") pod \"collect-profiles-29424465-826cb\" (UID: \"b5140445-efab-4fd8-8ba9-5db4a8e87d96\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424465-826cb" Dec 11 15:45:00 crc kubenswrapper[5050]: I1211 15:45:00.431982 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b5140445-efab-4fd8-8ba9-5db4a8e87d96-secret-volume\") pod \"collect-profiles-29424465-826cb\" (UID: \"b5140445-efab-4fd8-8ba9-5db4a8e87d96\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424465-826cb" Dec 11 15:45:00 crc kubenswrapper[5050]: I1211 15:45:00.534232 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b5140445-efab-4fd8-8ba9-5db4a8e87d96-secret-volume\") pod \"collect-profiles-29424465-826cb\" (UID: \"b5140445-efab-4fd8-8ba9-5db4a8e87d96\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424465-826cb" Dec 11 15:45:00 crc kubenswrapper[5050]: I1211 15:45:00.534368 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w6lqv\" (UniqueName: \"kubernetes.io/projected/b5140445-efab-4fd8-8ba9-5db4a8e87d96-kube-api-access-w6lqv\") pod \"collect-profiles-29424465-826cb\" (UID: \"b5140445-efab-4fd8-8ba9-5db4a8e87d96\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424465-826cb" Dec 11 15:45:00 crc 
kubenswrapper[5050]: I1211 15:45:00.534448 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b5140445-efab-4fd8-8ba9-5db4a8e87d96-config-volume\") pod \"collect-profiles-29424465-826cb\" (UID: \"b5140445-efab-4fd8-8ba9-5db4a8e87d96\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424465-826cb" Dec 11 15:45:00 crc kubenswrapper[5050]: I1211 15:45:00.535530 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b5140445-efab-4fd8-8ba9-5db4a8e87d96-config-volume\") pod \"collect-profiles-29424465-826cb\" (UID: \"b5140445-efab-4fd8-8ba9-5db4a8e87d96\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424465-826cb" Dec 11 15:45:00 crc kubenswrapper[5050]: I1211 15:45:00.539827 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b5140445-efab-4fd8-8ba9-5db4a8e87d96-secret-volume\") pod \"collect-profiles-29424465-826cb\" (UID: \"b5140445-efab-4fd8-8ba9-5db4a8e87d96\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424465-826cb" Dec 11 15:45:00 crc kubenswrapper[5050]: I1211 15:45:00.552761 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w6lqv\" (UniqueName: \"kubernetes.io/projected/b5140445-efab-4fd8-8ba9-5db4a8e87d96-kube-api-access-w6lqv\") pod \"collect-profiles-29424465-826cb\" (UID: \"b5140445-efab-4fd8-8ba9-5db4a8e87d96\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424465-826cb" Dec 11 15:45:00 crc kubenswrapper[5050]: I1211 15:45:00.685125 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29424465-826cb" Dec 11 15:45:01 crc kubenswrapper[5050]: I1211 15:45:01.180320 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29424465-826cb"] Dec 11 15:45:01 crc kubenswrapper[5050]: I1211 15:45:01.293918 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29424465-826cb" event={"ID":"b5140445-efab-4fd8-8ba9-5db4a8e87d96","Type":"ContainerStarted","Data":"4f5656b8180cce5751f570716925e8eaa87e794302ffece85ed795bfd746ecae"} Dec 11 15:45:02 crc kubenswrapper[5050]: I1211 15:45:02.304065 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29424465-826cb" event={"ID":"b5140445-efab-4fd8-8ba9-5db4a8e87d96","Type":"ContainerStarted","Data":"3fbcd6203d895b12bf7867883cc332ba37ded1f0018d37983dd6aa5e5e81fcb2"} Dec 11 15:45:02 crc kubenswrapper[5050]: I1211 15:45:02.322482 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29424465-826cb" podStartSLOduration=2.322460931 podStartE2EDuration="2.322460931s" podCreationTimestamp="2025-12-11 15:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 15:45:02.317403015 +0000 UTC m=+6993.161125601" watchObservedRunningTime="2025-12-11 15:45:02.322460931 +0000 UTC m=+6993.166183517" Dec 11 15:45:03 crc kubenswrapper[5050]: I1211 15:45:03.315208 5050 generic.go:334] "Generic (PLEG): container finished" podID="b5140445-efab-4fd8-8ba9-5db4a8e87d96" 
containerID="3fbcd6203d895b12bf7867883cc332ba37ded1f0018d37983dd6aa5e5e81fcb2" exitCode=0 Dec 11 15:45:03 crc kubenswrapper[5050]: I1211 15:45:03.315321 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29424465-826cb" event={"ID":"b5140445-efab-4fd8-8ba9-5db4a8e87d96","Type":"ContainerDied","Data":"3fbcd6203d895b12bf7867883cc332ba37ded1f0018d37983dd6aa5e5e81fcb2"} Dec 11 15:45:04 crc kubenswrapper[5050]: I1211 15:45:04.548857 5050 scope.go:117] "RemoveContainer" containerID="e1193cb2672c9b23c9602bdf8ae6d04616c4f7a2af4ed4a2aeb230e558518c57" Dec 11 15:45:04 crc kubenswrapper[5050]: E1211 15:45:04.549303 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 15:45:04 crc kubenswrapper[5050]: I1211 15:45:04.716440 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29424465-826cb" Dec 11 15:45:04 crc kubenswrapper[5050]: I1211 15:45:04.837528 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w6lqv\" (UniqueName: \"kubernetes.io/projected/b5140445-efab-4fd8-8ba9-5db4a8e87d96-kube-api-access-w6lqv\") pod \"b5140445-efab-4fd8-8ba9-5db4a8e87d96\" (UID: \"b5140445-efab-4fd8-8ba9-5db4a8e87d96\") " Dec 11 15:45:04 crc kubenswrapper[5050]: I1211 15:45:04.837851 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b5140445-efab-4fd8-8ba9-5db4a8e87d96-config-volume\") pod \"b5140445-efab-4fd8-8ba9-5db4a8e87d96\" (UID: \"b5140445-efab-4fd8-8ba9-5db4a8e87d96\") " Dec 11 15:45:04 crc kubenswrapper[5050]: I1211 15:45:04.838056 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b5140445-efab-4fd8-8ba9-5db4a8e87d96-secret-volume\") pod \"b5140445-efab-4fd8-8ba9-5db4a8e87d96\" (UID: \"b5140445-efab-4fd8-8ba9-5db4a8e87d96\") " Dec 11 15:45:04 crc kubenswrapper[5050]: I1211 15:45:04.838673 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b5140445-efab-4fd8-8ba9-5db4a8e87d96-config-volume" (OuterVolumeSpecName: "config-volume") pod "b5140445-efab-4fd8-8ba9-5db4a8e87d96" (UID: "b5140445-efab-4fd8-8ba9-5db4a8e87d96"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 15:45:04 crc kubenswrapper[5050]: I1211 15:45:04.843126 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5140445-efab-4fd8-8ba9-5db4a8e87d96-kube-api-access-w6lqv" (OuterVolumeSpecName: "kube-api-access-w6lqv") pod "b5140445-efab-4fd8-8ba9-5db4a8e87d96" (UID: "b5140445-efab-4fd8-8ba9-5db4a8e87d96"). InnerVolumeSpecName "kube-api-access-w6lqv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 15:45:04 crc kubenswrapper[5050]: I1211 15:45:04.843363 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5140445-efab-4fd8-8ba9-5db4a8e87d96-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "b5140445-efab-4fd8-8ba9-5db4a8e87d96" (UID: "b5140445-efab-4fd8-8ba9-5db4a8e87d96"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 15:45:04 crc kubenswrapper[5050]: I1211 15:45:04.940518 5050 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b5140445-efab-4fd8-8ba9-5db4a8e87d96-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 11 15:45:04 crc kubenswrapper[5050]: I1211 15:45:04.940570 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w6lqv\" (UniqueName: \"kubernetes.io/projected/b5140445-efab-4fd8-8ba9-5db4a8e87d96-kube-api-access-w6lqv\") on node \"crc\" DevicePath \"\"" Dec 11 15:45:04 crc kubenswrapper[5050]: I1211 15:45:04.940653 5050 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b5140445-efab-4fd8-8ba9-5db4a8e87d96-config-volume\") on node \"crc\" DevicePath \"\"" Dec 11 15:45:05 crc kubenswrapper[5050]: I1211 15:45:05.336587 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29424465-826cb" event={"ID":"b5140445-efab-4fd8-8ba9-5db4a8e87d96","Type":"ContainerDied","Data":"4f5656b8180cce5751f570716925e8eaa87e794302ffece85ed795bfd746ecae"} Dec 11 15:45:05 crc kubenswrapper[5050]: I1211 15:45:05.336632 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4f5656b8180cce5751f570716925e8eaa87e794302ffece85ed795bfd746ecae" Dec 11 15:45:05 crc kubenswrapper[5050]: I1211 15:45:05.337051 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29424465-826cb" Dec 11 15:45:05 crc kubenswrapper[5050]: I1211 15:45:05.403513 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29424420-lmctp"] Dec 11 15:45:05 crc kubenswrapper[5050]: I1211 15:45:05.414465 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29424420-lmctp"] Dec 11 15:45:05 crc kubenswrapper[5050]: I1211 15:45:05.563307 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e7cfb00-52ec-46c0-af09-c9f8a67d69f7" path="/var/lib/kubelet/pods/4e7cfb00-52ec-46c0-af09-c9f8a67d69f7/volumes" Dec 11 15:45:07 crc kubenswrapper[5050]: I1211 15:45:07.697426 5050 scope.go:117] "RemoveContainer" containerID="3f39034e0cc12ca4d052108fa9c64b15358a201f93093de596d7110520e9635a" Dec 11 15:45:16 crc kubenswrapper[5050]: I1211 15:45:16.546734 5050 scope.go:117] "RemoveContainer" containerID="e1193cb2672c9b23c9602bdf8ae6d04616c4f7a2af4ed4a2aeb230e558518c57" Dec 11 15:45:16 crc kubenswrapper[5050]: E1211 15:45:16.547570 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 15:45:28 crc kubenswrapper[5050]: I1211 15:45:28.546055 5050 scope.go:117] "RemoveContainer" containerID="e1193cb2672c9b23c9602bdf8ae6d04616c4f7a2af4ed4a2aeb230e558518c57" Dec 11 15:45:28 crc kubenswrapper[5050]: E1211 15:45:28.546840 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 15:45:39 crc kubenswrapper[5050]: I1211 15:45:39.555180 5050 scope.go:117] "RemoveContainer" containerID="e1193cb2672c9b23c9602bdf8ae6d04616c4f7a2af4ed4a2aeb230e558518c57" Dec 11 15:45:39 crc kubenswrapper[5050]: E1211 15:45:39.556182 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 15:45:52 crc kubenswrapper[5050]: I1211 15:45:52.546752 5050 scope.go:117] "RemoveContainer" containerID="e1193cb2672c9b23c9602bdf8ae6d04616c4f7a2af4ed4a2aeb230e558518c57" Dec 11 15:45:53 crc kubenswrapper[5050]: I1211 15:45:53.449768 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-trcz8"] Dec 11 15:45:53 crc kubenswrapper[5050]: E1211 15:45:53.450728 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5140445-efab-4fd8-8ba9-5db4a8e87d96" containerName="collect-profiles" Dec 11 15:45:53 crc kubenswrapper[5050]: I1211 15:45:53.450746 
5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5140445-efab-4fd8-8ba9-5db4a8e87d96" containerName="collect-profiles" Dec 11 15:45:53 crc kubenswrapper[5050]: I1211 15:45:53.451110 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5140445-efab-4fd8-8ba9-5db4a8e87d96" containerName="collect-profiles" Dec 11 15:45:53 crc kubenswrapper[5050]: I1211 15:45:53.453321 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-trcz8" Dec 11 15:45:53 crc kubenswrapper[5050]: I1211 15:45:53.468559 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-trcz8"] Dec 11 15:45:53 crc kubenswrapper[5050]: I1211 15:45:53.530552 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/005f0c2b-06d5-4f61-b9c3-94070dc09608-catalog-content\") pod \"certified-operators-trcz8\" (UID: \"005f0c2b-06d5-4f61-b9c3-94070dc09608\") " pod="openshift-marketplace/certified-operators-trcz8" Dec 11 15:45:53 crc kubenswrapper[5050]: I1211 15:45:53.530635 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/005f0c2b-06d5-4f61-b9c3-94070dc09608-utilities\") pod \"certified-operators-trcz8\" (UID: \"005f0c2b-06d5-4f61-b9c3-94070dc09608\") " pod="openshift-marketplace/certified-operators-trcz8" Dec 11 15:45:53 crc kubenswrapper[5050]: I1211 15:45:53.530957 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w7qkd\" (UniqueName: \"kubernetes.io/projected/005f0c2b-06d5-4f61-b9c3-94070dc09608-kube-api-access-w7qkd\") pod \"certified-operators-trcz8\" (UID: \"005f0c2b-06d5-4f61-b9c3-94070dc09608\") " pod="openshift-marketplace/certified-operators-trcz8" Dec 11 15:45:53 crc kubenswrapper[5050]: I1211 15:45:53.633912 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/005f0c2b-06d5-4f61-b9c3-94070dc09608-catalog-content\") pod \"certified-operators-trcz8\" (UID: \"005f0c2b-06d5-4f61-b9c3-94070dc09608\") " pod="openshift-marketplace/certified-operators-trcz8" Dec 11 15:45:53 crc kubenswrapper[5050]: I1211 15:45:53.634031 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/005f0c2b-06d5-4f61-b9c3-94070dc09608-utilities\") pod \"certified-operators-trcz8\" (UID: \"005f0c2b-06d5-4f61-b9c3-94070dc09608\") " pod="openshift-marketplace/certified-operators-trcz8" Dec 11 15:45:53 crc kubenswrapper[5050]: I1211 15:45:53.634157 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w7qkd\" (UniqueName: \"kubernetes.io/projected/005f0c2b-06d5-4f61-b9c3-94070dc09608-kube-api-access-w7qkd\") pod \"certified-operators-trcz8\" (UID: \"005f0c2b-06d5-4f61-b9c3-94070dc09608\") " pod="openshift-marketplace/certified-operators-trcz8" Dec 11 15:45:53 crc kubenswrapper[5050]: I1211 15:45:53.634749 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/005f0c2b-06d5-4f61-b9c3-94070dc09608-utilities\") pod \"certified-operators-trcz8\" (UID: \"005f0c2b-06d5-4f61-b9c3-94070dc09608\") " pod="openshift-marketplace/certified-operators-trcz8" Dec 11 15:45:53 crc 
kubenswrapper[5050]: I1211 15:45:53.634997 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/005f0c2b-06d5-4f61-b9c3-94070dc09608-catalog-content\") pod \"certified-operators-trcz8\" (UID: \"005f0c2b-06d5-4f61-b9c3-94070dc09608\") " pod="openshift-marketplace/certified-operators-trcz8" Dec 11 15:45:53 crc kubenswrapper[5050]: I1211 15:45:53.656653 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w7qkd\" (UniqueName: \"kubernetes.io/projected/005f0c2b-06d5-4f61-b9c3-94070dc09608-kube-api-access-w7qkd\") pod \"certified-operators-trcz8\" (UID: \"005f0c2b-06d5-4f61-b9c3-94070dc09608\") " pod="openshift-marketplace/certified-operators-trcz8" Dec 11 15:45:53 crc kubenswrapper[5050]: I1211 15:45:53.796900 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-trcz8" Dec 11 15:45:53 crc kubenswrapper[5050]: I1211 15:45:53.815697 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" event={"ID":"7e849b2e-7cd7-4e49-acd2-deab139c699a","Type":"ContainerStarted","Data":"5fa6a64e32e1f24d28d1f983c96e1edb7963e605f315af22dbc0a41da96ac168"} Dec 11 15:45:54 crc kubenswrapper[5050]: W1211 15:45:54.392310 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod005f0c2b_06d5_4f61_b9c3_94070dc09608.slice/crio-083f05e5dcfbe9932cde54e3c71744ac92fbb84da01d31603bd40b9dd1c29161 WatchSource:0}: Error finding container 083f05e5dcfbe9932cde54e3c71744ac92fbb84da01d31603bd40b9dd1c29161: Status 404 returned error can't find the container with id 083f05e5dcfbe9932cde54e3c71744ac92fbb84da01d31603bd40b9dd1c29161 Dec 11 15:45:54 crc kubenswrapper[5050]: I1211 15:45:54.394390 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-trcz8"] Dec 11 15:45:54 crc kubenswrapper[5050]: I1211 15:45:54.826480 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-trcz8" event={"ID":"005f0c2b-06d5-4f61-b9c3-94070dc09608","Type":"ContainerStarted","Data":"083f05e5dcfbe9932cde54e3c71744ac92fbb84da01d31603bd40b9dd1c29161"} Dec 11 15:45:55 crc kubenswrapper[5050]: I1211 15:45:55.243871 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-l568j"] Dec 11 15:45:55 crc kubenswrapper[5050]: I1211 15:45:55.247303 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-l568j" Dec 11 15:45:55 crc kubenswrapper[5050]: I1211 15:45:55.272598 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-l568j"] Dec 11 15:45:55 crc kubenswrapper[5050]: I1211 15:45:55.370480 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4lqbg\" (UniqueName: \"kubernetes.io/projected/d412378a-f569-4a89-86d9-eba8d19b9f40-kube-api-access-4lqbg\") pod \"redhat-operators-l568j\" (UID: \"d412378a-f569-4a89-86d9-eba8d19b9f40\") " pod="openshift-marketplace/redhat-operators-l568j" Dec 11 15:45:55 crc kubenswrapper[5050]: I1211 15:45:55.370656 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d412378a-f569-4a89-86d9-eba8d19b9f40-catalog-content\") pod \"redhat-operators-l568j\" (UID: \"d412378a-f569-4a89-86d9-eba8d19b9f40\") " pod="openshift-marketplace/redhat-operators-l568j" Dec 11 15:45:55 crc kubenswrapper[5050]: I1211 15:45:55.370744 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d412378a-f569-4a89-86d9-eba8d19b9f40-utilities\") pod \"redhat-operators-l568j\" (UID: \"d412378a-f569-4a89-86d9-eba8d19b9f40\") " pod="openshift-marketplace/redhat-operators-l568j" Dec 11 15:45:55 crc kubenswrapper[5050]: I1211 15:45:55.472984 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4lqbg\" (UniqueName: \"kubernetes.io/projected/d412378a-f569-4a89-86d9-eba8d19b9f40-kube-api-access-4lqbg\") pod \"redhat-operators-l568j\" (UID: \"d412378a-f569-4a89-86d9-eba8d19b9f40\") " pod="openshift-marketplace/redhat-operators-l568j" Dec 11 15:45:55 crc kubenswrapper[5050]: I1211 15:45:55.473129 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d412378a-f569-4a89-86d9-eba8d19b9f40-catalog-content\") pod \"redhat-operators-l568j\" (UID: \"d412378a-f569-4a89-86d9-eba8d19b9f40\") " pod="openshift-marketplace/redhat-operators-l568j" Dec 11 15:45:55 crc kubenswrapper[5050]: I1211 15:45:55.473189 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d412378a-f569-4a89-86d9-eba8d19b9f40-utilities\") pod \"redhat-operators-l568j\" (UID: \"d412378a-f569-4a89-86d9-eba8d19b9f40\") " pod="openshift-marketplace/redhat-operators-l568j" Dec 11 15:45:55 crc kubenswrapper[5050]: I1211 15:45:55.474044 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d412378a-f569-4a89-86d9-eba8d19b9f40-utilities\") pod \"redhat-operators-l568j\" (UID: \"d412378a-f569-4a89-86d9-eba8d19b9f40\") " pod="openshift-marketplace/redhat-operators-l568j" Dec 11 15:45:55 crc kubenswrapper[5050]: I1211 15:45:55.474036 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d412378a-f569-4a89-86d9-eba8d19b9f40-catalog-content\") pod \"redhat-operators-l568j\" (UID: \"d412378a-f569-4a89-86d9-eba8d19b9f40\") " pod="openshift-marketplace/redhat-operators-l568j" Dec 11 15:45:55 crc kubenswrapper[5050]: I1211 15:45:55.504840 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-4lqbg\" (UniqueName: \"kubernetes.io/projected/d412378a-f569-4a89-86d9-eba8d19b9f40-kube-api-access-4lqbg\") pod \"redhat-operators-l568j\" (UID: \"d412378a-f569-4a89-86d9-eba8d19b9f40\") " pod="openshift-marketplace/redhat-operators-l568j" Dec 11 15:45:55 crc kubenswrapper[5050]: I1211 15:45:55.573794 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-l568j" Dec 11 15:45:55 crc kubenswrapper[5050]: I1211 15:45:55.842119 5050 generic.go:334] "Generic (PLEG): container finished" podID="005f0c2b-06d5-4f61-b9c3-94070dc09608" containerID="7f46ca7ac201e4d62a0d56f8012c203e224db88997168098043be1e7ac6dfbde" exitCode=0 Dec 11 15:45:55 crc kubenswrapper[5050]: I1211 15:45:55.842453 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-trcz8" event={"ID":"005f0c2b-06d5-4f61-b9c3-94070dc09608","Type":"ContainerDied","Data":"7f46ca7ac201e4d62a0d56f8012c203e224db88997168098043be1e7ac6dfbde"} Dec 11 15:45:56 crc kubenswrapper[5050]: I1211 15:45:56.088333 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-l568j"] Dec 11 15:45:56 crc kubenswrapper[5050]: I1211 15:45:56.853976 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l568j" event={"ID":"d412378a-f569-4a89-86d9-eba8d19b9f40","Type":"ContainerStarted","Data":"c2ffc2ba3ac2e5d3d41b8536d94337c4f003fb0d064ccab0bd6c4814c32a58d2"} Dec 11 15:45:57 crc kubenswrapper[5050]: I1211 15:45:57.863961 5050 generic.go:334] "Generic (PLEG): container finished" podID="d412378a-f569-4a89-86d9-eba8d19b9f40" containerID="7cfc48fdac1c223162e1012caaf72d79ddcdb1f01f863713a85f5e1ab6a8c34a" exitCode=0 Dec 11 15:45:57 crc kubenswrapper[5050]: I1211 15:45:57.863999 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l568j" event={"ID":"d412378a-f569-4a89-86d9-eba8d19b9f40","Type":"ContainerDied","Data":"7cfc48fdac1c223162e1012caaf72d79ddcdb1f01f863713a85f5e1ab6a8c34a"} Dec 11 15:45:58 crc kubenswrapper[5050]: I1211 15:45:58.879897 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-trcz8" event={"ID":"005f0c2b-06d5-4f61-b9c3-94070dc09608","Type":"ContainerStarted","Data":"6b41132fe7397ffb099b0998c9315168f69bcd3685b73d23374e320369f6aea2"} Dec 11 15:46:00 crc kubenswrapper[5050]: I1211 15:46:00.903239 5050 generic.go:334] "Generic (PLEG): container finished" podID="005f0c2b-06d5-4f61-b9c3-94070dc09608" containerID="6b41132fe7397ffb099b0998c9315168f69bcd3685b73d23374e320369f6aea2" exitCode=0 Dec 11 15:46:00 crc kubenswrapper[5050]: I1211 15:46:00.903428 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-trcz8" event={"ID":"005f0c2b-06d5-4f61-b9c3-94070dc09608","Type":"ContainerDied","Data":"6b41132fe7397ffb099b0998c9315168f69bcd3685b73d23374e320369f6aea2"} Dec 11 15:46:01 crc kubenswrapper[5050]: I1211 15:46:01.923369 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-trcz8" event={"ID":"005f0c2b-06d5-4f61-b9c3-94070dc09608","Type":"ContainerStarted","Data":"991308ed4d1d51643b70d85b95a08c5dd76f1031606f578f81eee498a5bd6dff"} Dec 11 15:46:01 crc kubenswrapper[5050]: I1211 15:46:01.952563 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-trcz8" podStartSLOduration=3.410378888 
podStartE2EDuration="8.95254741s" podCreationTimestamp="2025-12-11 15:45:53 +0000 UTC" firstStartedPulling="2025-12-11 15:45:55.845594359 +0000 UTC m=+7046.689316945" lastFinishedPulling="2025-12-11 15:46:01.387762881 +0000 UTC m=+7052.231485467" observedRunningTime="2025-12-11 15:46:01.942405189 +0000 UTC m=+7052.786127785" watchObservedRunningTime="2025-12-11 15:46:01.95254741 +0000 UTC m=+7052.796269996" Dec 11 15:46:03 crc kubenswrapper[5050]: I1211 15:46:03.797361 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-trcz8" Dec 11 15:46:03 crc kubenswrapper[5050]: I1211 15:46:03.797613 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-trcz8" Dec 11 15:46:04 crc kubenswrapper[5050]: I1211 15:46:04.842155 5050 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-trcz8" podUID="005f0c2b-06d5-4f61-b9c3-94070dc09608" containerName="registry-server" probeResult="failure" output=< Dec 11 15:46:04 crc kubenswrapper[5050]: timeout: failed to connect service ":50051" within 1s Dec 11 15:46:04 crc kubenswrapper[5050]: > Dec 11 15:46:08 crc kubenswrapper[5050]: I1211 15:46:08.781839 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="fef8d631-c968-4ccd-92ec-e6fc5a2f6731" containerName="ceilometer-central-agent" probeResult="failure" output="command timed out" Dec 11 15:46:13 crc kubenswrapper[5050]: I1211 15:46:13.848079 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-trcz8" Dec 11 15:46:13 crc kubenswrapper[5050]: I1211 15:46:13.899617 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-trcz8" Dec 11 15:46:14 crc kubenswrapper[5050]: I1211 15:46:14.082209 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-trcz8"] Dec 11 15:46:15 crc kubenswrapper[5050]: I1211 15:46:15.064560 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-trcz8" podUID="005f0c2b-06d5-4f61-b9c3-94070dc09608" containerName="registry-server" containerID="cri-o://991308ed4d1d51643b70d85b95a08c5dd76f1031606f578f81eee498a5bd6dff" gracePeriod=2 Dec 11 15:46:16 crc kubenswrapper[5050]: I1211 15:46:16.074294 5050 generic.go:334] "Generic (PLEG): container finished" podID="005f0c2b-06d5-4f61-b9c3-94070dc09608" containerID="991308ed4d1d51643b70d85b95a08c5dd76f1031606f578f81eee498a5bd6dff" exitCode=0 Dec 11 15:46:16 crc kubenswrapper[5050]: I1211 15:46:16.074379 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-trcz8" event={"ID":"005f0c2b-06d5-4f61-b9c3-94070dc09608","Type":"ContainerDied","Data":"991308ed4d1d51643b70d85b95a08c5dd76f1031606f578f81eee498a5bd6dff"} Dec 11 15:46:16 crc kubenswrapper[5050]: E1211 15:46:16.235967 5050 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Dec 11 15:46:16 crc kubenswrapper[5050]: E1211 15:46:16.236132 5050 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4lqbg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-l568j_openshift-marketplace(d412378a-f569-4a89-86d9-eba8d19b9f40): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 11 15:46:16 crc kubenswrapper[5050]: E1211 15:46:16.237841 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-l568j" podUID="d412378a-f569-4a89-86d9-eba8d19b9f40" Dec 11 15:46:16 crc kubenswrapper[5050]: I1211 15:46:16.780453 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-trcz8" Dec 11 15:46:16 crc kubenswrapper[5050]: I1211 15:46:16.837854 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/005f0c2b-06d5-4f61-b9c3-94070dc09608-utilities\") pod \"005f0c2b-06d5-4f61-b9c3-94070dc09608\" (UID: \"005f0c2b-06d5-4f61-b9c3-94070dc09608\") " Dec 11 15:46:16 crc kubenswrapper[5050]: I1211 15:46:16.838141 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/005f0c2b-06d5-4f61-b9c3-94070dc09608-catalog-content\") pod \"005f0c2b-06d5-4f61-b9c3-94070dc09608\" (UID: \"005f0c2b-06d5-4f61-b9c3-94070dc09608\") " Dec 11 15:46:16 crc kubenswrapper[5050]: I1211 15:46:16.838341 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7qkd\" (UniqueName: \"kubernetes.io/projected/005f0c2b-06d5-4f61-b9c3-94070dc09608-kube-api-access-w7qkd\") pod \"005f0c2b-06d5-4f61-b9c3-94070dc09608\" (UID: \"005f0c2b-06d5-4f61-b9c3-94070dc09608\") " Dec 11 15:46:16 crc kubenswrapper[5050]: I1211 15:46:16.838615 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/005f0c2b-06d5-4f61-b9c3-94070dc09608-utilities" (OuterVolumeSpecName: "utilities") pod "005f0c2b-06d5-4f61-b9c3-94070dc09608" (UID: "005f0c2b-06d5-4f61-b9c3-94070dc09608"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 15:46:16 crc kubenswrapper[5050]: I1211 15:46:16.839035 5050 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/005f0c2b-06d5-4f61-b9c3-94070dc09608-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 15:46:16 crc kubenswrapper[5050]: I1211 15:46:16.844116 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/005f0c2b-06d5-4f61-b9c3-94070dc09608-kube-api-access-w7qkd" (OuterVolumeSpecName: "kube-api-access-w7qkd") pod "005f0c2b-06d5-4f61-b9c3-94070dc09608" (UID: "005f0c2b-06d5-4f61-b9c3-94070dc09608"). InnerVolumeSpecName "kube-api-access-w7qkd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 15:46:16 crc kubenswrapper[5050]: I1211 15:46:16.885887 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/005f0c2b-06d5-4f61-b9c3-94070dc09608-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "005f0c2b-06d5-4f61-b9c3-94070dc09608" (UID: "005f0c2b-06d5-4f61-b9c3-94070dc09608"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 15:46:16 crc kubenswrapper[5050]: I1211 15:46:16.941363 5050 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/005f0c2b-06d5-4f61-b9c3-94070dc09608-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 15:46:16 crc kubenswrapper[5050]: I1211 15:46:16.941775 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7qkd\" (UniqueName: \"kubernetes.io/projected/005f0c2b-06d5-4f61-b9c3-94070dc09608-kube-api-access-w7qkd\") on node \"crc\" DevicePath \"\"" Dec 11 15:46:17 crc kubenswrapper[5050]: I1211 15:46:17.086345 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-trcz8" Dec 11 15:46:17 crc kubenswrapper[5050]: I1211 15:46:17.086343 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-trcz8" event={"ID":"005f0c2b-06d5-4f61-b9c3-94070dc09608","Type":"ContainerDied","Data":"083f05e5dcfbe9932cde54e3c71744ac92fbb84da01d31603bd40b9dd1c29161"} Dec 11 15:46:17 crc kubenswrapper[5050]: I1211 15:46:17.086413 5050 scope.go:117] "RemoveContainer" containerID="991308ed4d1d51643b70d85b95a08c5dd76f1031606f578f81eee498a5bd6dff" Dec 11 15:46:17 crc kubenswrapper[5050]: E1211 15:46:17.089090 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-l568j" podUID="d412378a-f569-4a89-86d9-eba8d19b9f40" Dec 11 15:46:17 crc kubenswrapper[5050]: I1211 15:46:17.119619 5050 scope.go:117] "RemoveContainer" containerID="6b41132fe7397ffb099b0998c9315168f69bcd3685b73d23374e320369f6aea2" Dec 11 15:46:17 crc kubenswrapper[5050]: I1211 15:46:17.137367 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-trcz8"] Dec 11 15:46:17 crc kubenswrapper[5050]: I1211 15:46:17.146670 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-trcz8"] Dec 11 15:46:17 crc kubenswrapper[5050]: I1211 15:46:17.150595 5050 scope.go:117] "RemoveContainer" containerID="7f46ca7ac201e4d62a0d56f8012c203e224db88997168098043be1e7ac6dfbde" Dec 11 15:46:17 crc kubenswrapper[5050]: I1211 15:46:17.558928 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="005f0c2b-06d5-4f61-b9c3-94070dc09608" path="/var/lib/kubelet/pods/005f0c2b-06d5-4f61-b9c3-94070dc09608/volumes" Dec 11 15:46:30 crc kubenswrapper[5050]: I1211 15:46:30.207266 5050 generic.go:334] "Generic (PLEG): container finished" podID="ac9abb3a-77e8-4ff7-90c7-8eb335c95eb8" containerID="1b74abef546ddc219dd2a15737230b4e90328b8a61b3f97039ec276041179c6c" exitCode=0 Dec 11 15:46:30 crc kubenswrapper[5050]: I1211 15:46:30.207367 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-9k9mx" event={"ID":"ac9abb3a-77e8-4ff7-90c7-8eb335c95eb8","Type":"ContainerDied","Data":"1b74abef546ddc219dd2a15737230b4e90328b8a61b3f97039ec276041179c6c"} Dec 11 15:46:31 crc kubenswrapper[5050]: I1211 15:46:31.665470 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-9k9mx" Dec 11 15:46:31 crc kubenswrapper[5050]: I1211 15:46:31.688958 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ac9abb3a-77e8-4ff7-90c7-8eb335c95eb8-inventory\") pod \"ac9abb3a-77e8-4ff7-90c7-8eb335c95eb8\" (UID: \"ac9abb3a-77e8-4ff7-90c7-8eb335c95eb8\") " Dec 11 15:46:31 crc kubenswrapper[5050]: I1211 15:46:31.689152 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ac9abb3a-77e8-4ff7-90c7-8eb335c95eb8-ceph\") pod \"ac9abb3a-77e8-4ff7-90c7-8eb335c95eb8\" (UID: \"ac9abb3a-77e8-4ff7-90c7-8eb335c95eb8\") " Dec 11 15:46:31 crc kubenswrapper[5050]: I1211 15:46:31.689228 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-czk49\" (UniqueName: \"kubernetes.io/projected/ac9abb3a-77e8-4ff7-90c7-8eb335c95eb8-kube-api-access-czk49\") pod \"ac9abb3a-77e8-4ff7-90c7-8eb335c95eb8\" (UID: \"ac9abb3a-77e8-4ff7-90c7-8eb335c95eb8\") " Dec 11 15:46:31 crc kubenswrapper[5050]: I1211 15:46:31.689277 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ac9abb3a-77e8-4ff7-90c7-8eb335c95eb8-ssh-key\") pod \"ac9abb3a-77e8-4ff7-90c7-8eb335c95eb8\" (UID: \"ac9abb3a-77e8-4ff7-90c7-8eb335c95eb8\") " Dec 11 15:46:31 crc kubenswrapper[5050]: I1211 15:46:31.689370 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tripleo-cleanup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac9abb3a-77e8-4ff7-90c7-8eb335c95eb8-tripleo-cleanup-combined-ca-bundle\") pod \"ac9abb3a-77e8-4ff7-90c7-8eb335c95eb8\" (UID: \"ac9abb3a-77e8-4ff7-90c7-8eb335c95eb8\") " Dec 11 15:46:31 crc kubenswrapper[5050]: I1211 15:46:31.695346 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac9abb3a-77e8-4ff7-90c7-8eb335c95eb8-ceph" (OuterVolumeSpecName: "ceph") pod "ac9abb3a-77e8-4ff7-90c7-8eb335c95eb8" (UID: "ac9abb3a-77e8-4ff7-90c7-8eb335c95eb8"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 15:46:31 crc kubenswrapper[5050]: I1211 15:46:31.698327 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac9abb3a-77e8-4ff7-90c7-8eb335c95eb8-kube-api-access-czk49" (OuterVolumeSpecName: "kube-api-access-czk49") pod "ac9abb3a-77e8-4ff7-90c7-8eb335c95eb8" (UID: "ac9abb3a-77e8-4ff7-90c7-8eb335c95eb8"). InnerVolumeSpecName "kube-api-access-czk49". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 15:46:31 crc kubenswrapper[5050]: I1211 15:46:31.709360 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac9abb3a-77e8-4ff7-90c7-8eb335c95eb8-tripleo-cleanup-combined-ca-bundle" (OuterVolumeSpecName: "tripleo-cleanup-combined-ca-bundle") pod "ac9abb3a-77e8-4ff7-90c7-8eb335c95eb8" (UID: "ac9abb3a-77e8-4ff7-90c7-8eb335c95eb8"). InnerVolumeSpecName "tripleo-cleanup-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 15:46:31 crc kubenswrapper[5050]: I1211 15:46:31.724022 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac9abb3a-77e8-4ff7-90c7-8eb335c95eb8-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "ac9abb3a-77e8-4ff7-90c7-8eb335c95eb8" (UID: "ac9abb3a-77e8-4ff7-90c7-8eb335c95eb8"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 15:46:31 crc kubenswrapper[5050]: I1211 15:46:31.740315 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac9abb3a-77e8-4ff7-90c7-8eb335c95eb8-inventory" (OuterVolumeSpecName: "inventory") pod "ac9abb3a-77e8-4ff7-90c7-8eb335c95eb8" (UID: "ac9abb3a-77e8-4ff7-90c7-8eb335c95eb8"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 15:46:31 crc kubenswrapper[5050]: I1211 15:46:31.790952 5050 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ac9abb3a-77e8-4ff7-90c7-8eb335c95eb8-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 11 15:46:31 crc kubenswrapper[5050]: I1211 15:46:31.791026 5050 reconciler_common.go:293] "Volume detached for volume \"tripleo-cleanup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac9abb3a-77e8-4ff7-90c7-8eb335c95eb8-tripleo-cleanup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 15:46:31 crc kubenswrapper[5050]: I1211 15:46:31.791039 5050 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ac9abb3a-77e8-4ff7-90c7-8eb335c95eb8-inventory\") on node \"crc\" DevicePath \"\"" Dec 11 15:46:31 crc kubenswrapper[5050]: I1211 15:46:31.791050 5050 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ac9abb3a-77e8-4ff7-90c7-8eb335c95eb8-ceph\") on node \"crc\" DevicePath \"\"" Dec 11 15:46:31 crc kubenswrapper[5050]: I1211 15:46:31.791058 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-czk49\" (UniqueName: \"kubernetes.io/projected/ac9abb3a-77e8-4ff7-90c7-8eb335c95eb8-kube-api-access-czk49\") on node \"crc\" DevicePath \"\"" Dec 11 15:46:32 crc kubenswrapper[5050]: I1211 15:46:32.227813 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-9k9mx" event={"ID":"ac9abb3a-77e8-4ff7-90c7-8eb335c95eb8","Type":"ContainerDied","Data":"541861f96a87d2c6e3ee2d5d6669b92fca0a9b346227bb5ebfbf8d08b082f85d"} Dec 11 15:46:32 crc kubenswrapper[5050]: I1211 15:46:32.228159 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="541861f96a87d2c6e3ee2d5d6669b92fca0a9b346227bb5ebfbf8d08b082f85d" Dec 11 15:46:32 crc kubenswrapper[5050]: I1211 15:46:32.227848 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tripleo-cleanup-tripleo-cleanup-openstack-cell1-9k9mx" Dec 11 15:46:32 crc kubenswrapper[5050]: I1211 15:46:32.230576 5050 generic.go:334] "Generic (PLEG): container finished" podID="d412378a-f569-4a89-86d9-eba8d19b9f40" containerID="0b98f93056031e2c249b24fe6f86a9f4ed3162f45306415d2b2e6e8f07a388e4" exitCode=0 Dec 11 15:46:32 crc kubenswrapper[5050]: I1211 15:46:32.230619 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l568j" event={"ID":"d412378a-f569-4a89-86d9-eba8d19b9f40","Type":"ContainerDied","Data":"0b98f93056031e2c249b24fe6f86a9f4ed3162f45306415d2b2e6e8f07a388e4"} Dec 11 15:46:33 crc kubenswrapper[5050]: I1211 15:46:33.240969 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-l568j" event={"ID":"d412378a-f569-4a89-86d9-eba8d19b9f40","Type":"ContainerStarted","Data":"ffce080159f0323da9740717a7dc3b4d55aaacbe131c269285ca5bae081bba24"} Dec 11 15:46:33 crc kubenswrapper[5050]: I1211 15:46:33.261413 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-l568j" podStartSLOduration=3.432613324 podStartE2EDuration="38.261396849s" podCreationTimestamp="2025-12-11 15:45:55 +0000 UTC" firstStartedPulling="2025-12-11 15:45:57.866784546 +0000 UTC m=+7048.710507132" lastFinishedPulling="2025-12-11 15:46:32.695568081 +0000 UTC m=+7083.539290657" observedRunningTime="2025-12-11 15:46:33.256493037 +0000 UTC m=+7084.100215623" watchObservedRunningTime="2025-12-11 15:46:33.261396849 +0000 UTC m=+7084.105119425" Dec 11 15:46:35 crc kubenswrapper[5050]: I1211 15:46:35.574205 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-l568j" Dec 11 15:46:35 crc kubenswrapper[5050]: I1211 15:46:35.574955 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-l568j" Dec 11 15:46:36 crc kubenswrapper[5050]: I1211 15:46:36.536247 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-openstack-openstack-cell1-6vts5"] Dec 11 15:46:36 crc kubenswrapper[5050]: E1211 15:46:36.537084 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="005f0c2b-06d5-4f61-b9c3-94070dc09608" containerName="extract-content" Dec 11 15:46:36 crc kubenswrapper[5050]: I1211 15:46:36.537104 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="005f0c2b-06d5-4f61-b9c3-94070dc09608" containerName="extract-content" Dec 11 15:46:36 crc kubenswrapper[5050]: E1211 15:46:36.537137 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="005f0c2b-06d5-4f61-b9c3-94070dc09608" containerName="extract-utilities" Dec 11 15:46:36 crc kubenswrapper[5050]: I1211 15:46:36.537145 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="005f0c2b-06d5-4f61-b9c3-94070dc09608" containerName="extract-utilities" Dec 11 15:46:36 crc kubenswrapper[5050]: E1211 15:46:36.537176 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="005f0c2b-06d5-4f61-b9c3-94070dc09608" containerName="registry-server" Dec 11 15:46:36 crc kubenswrapper[5050]: I1211 15:46:36.537183 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="005f0c2b-06d5-4f61-b9c3-94070dc09608" containerName="registry-server" Dec 11 15:46:36 crc kubenswrapper[5050]: E1211 15:46:36.537220 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac9abb3a-77e8-4ff7-90c7-8eb335c95eb8" 
containerName="tripleo-cleanup-tripleo-cleanup-openstack-cell1" Dec 11 15:46:36 crc kubenswrapper[5050]: I1211 15:46:36.537242 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac9abb3a-77e8-4ff7-90c7-8eb335c95eb8" containerName="tripleo-cleanup-tripleo-cleanup-openstack-cell1" Dec 11 15:46:36 crc kubenswrapper[5050]: I1211 15:46:36.537587 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="005f0c2b-06d5-4f61-b9c3-94070dc09608" containerName="registry-server" Dec 11 15:46:36 crc kubenswrapper[5050]: I1211 15:46:36.537609 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac9abb3a-77e8-4ff7-90c7-8eb335c95eb8" containerName="tripleo-cleanup-tripleo-cleanup-openstack-cell1" Dec 11 15:46:36 crc kubenswrapper[5050]: I1211 15:46:36.538628 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-openstack-openstack-cell1-6vts5" Dec 11 15:46:36 crc kubenswrapper[5050]: I1211 15:46:36.541000 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-mvxd9" Dec 11 15:46:36 crc kubenswrapper[5050]: I1211 15:46:36.541144 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Dec 11 15:46:36 crc kubenswrapper[5050]: I1211 15:46:36.543274 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Dec 11 15:46:36 crc kubenswrapper[5050]: I1211 15:46:36.547317 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 11 15:46:36 crc kubenswrapper[5050]: I1211 15:46:36.547319 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-openstack-openstack-cell1-6vts5"] Dec 11 15:46:36 crc kubenswrapper[5050]: I1211 15:46:36.626768 5050 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-l568j" podUID="d412378a-f569-4a89-86d9-eba8d19b9f40" containerName="registry-server" probeResult="failure" output=< Dec 11 15:46:36 crc kubenswrapper[5050]: timeout: failed to connect service ":50051" within 1s Dec 11 15:46:36 crc kubenswrapper[5050]: > Dec 11 15:46:36 crc kubenswrapper[5050]: I1211 15:46:36.688419 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a9c0b12c-757d-4918-854f-c44f3fa6e403-inventory\") pod \"bootstrap-openstack-openstack-cell1-6vts5\" (UID: \"a9c0b12c-757d-4918-854f-c44f3fa6e403\") " pod="openstack/bootstrap-openstack-openstack-cell1-6vts5" Dec 11 15:46:36 crc kubenswrapper[5050]: I1211 15:46:36.688507 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/a9c0b12c-757d-4918-854f-c44f3fa6e403-ceph\") pod \"bootstrap-openstack-openstack-cell1-6vts5\" (UID: \"a9c0b12c-757d-4918-854f-c44f3fa6e403\") " pod="openstack/bootstrap-openstack-openstack-cell1-6vts5" Dec 11 15:46:36 crc kubenswrapper[5050]: I1211 15:46:36.688634 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2cnq7\" (UniqueName: \"kubernetes.io/projected/a9c0b12c-757d-4918-854f-c44f3fa6e403-kube-api-access-2cnq7\") pod \"bootstrap-openstack-openstack-cell1-6vts5\" (UID: \"a9c0b12c-757d-4918-854f-c44f3fa6e403\") " pod="openstack/bootstrap-openstack-openstack-cell1-6vts5" Dec 11 15:46:36 crc kubenswrapper[5050]: I1211 15:46:36.688870 
5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9c0b12c-757d-4918-854f-c44f3fa6e403-bootstrap-combined-ca-bundle\") pod \"bootstrap-openstack-openstack-cell1-6vts5\" (UID: \"a9c0b12c-757d-4918-854f-c44f3fa6e403\") " pod="openstack/bootstrap-openstack-openstack-cell1-6vts5" Dec 11 15:46:36 crc kubenswrapper[5050]: I1211 15:46:36.688928 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a9c0b12c-757d-4918-854f-c44f3fa6e403-ssh-key\") pod \"bootstrap-openstack-openstack-cell1-6vts5\" (UID: \"a9c0b12c-757d-4918-854f-c44f3fa6e403\") " pod="openstack/bootstrap-openstack-openstack-cell1-6vts5" Dec 11 15:46:36 crc kubenswrapper[5050]: I1211 15:46:36.791339 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a9c0b12c-757d-4918-854f-c44f3fa6e403-inventory\") pod \"bootstrap-openstack-openstack-cell1-6vts5\" (UID: \"a9c0b12c-757d-4918-854f-c44f3fa6e403\") " pod="openstack/bootstrap-openstack-openstack-cell1-6vts5" Dec 11 15:46:36 crc kubenswrapper[5050]: I1211 15:46:36.791426 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/a9c0b12c-757d-4918-854f-c44f3fa6e403-ceph\") pod \"bootstrap-openstack-openstack-cell1-6vts5\" (UID: \"a9c0b12c-757d-4918-854f-c44f3fa6e403\") " pod="openstack/bootstrap-openstack-openstack-cell1-6vts5" Dec 11 15:46:36 crc kubenswrapper[5050]: I1211 15:46:36.791517 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2cnq7\" (UniqueName: \"kubernetes.io/projected/a9c0b12c-757d-4918-854f-c44f3fa6e403-kube-api-access-2cnq7\") pod \"bootstrap-openstack-openstack-cell1-6vts5\" (UID: \"a9c0b12c-757d-4918-854f-c44f3fa6e403\") " pod="openstack/bootstrap-openstack-openstack-cell1-6vts5" Dec 11 15:46:36 crc kubenswrapper[5050]: I1211 15:46:36.791699 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9c0b12c-757d-4918-854f-c44f3fa6e403-bootstrap-combined-ca-bundle\") pod \"bootstrap-openstack-openstack-cell1-6vts5\" (UID: \"a9c0b12c-757d-4918-854f-c44f3fa6e403\") " pod="openstack/bootstrap-openstack-openstack-cell1-6vts5" Dec 11 15:46:36 crc kubenswrapper[5050]: I1211 15:46:36.791746 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a9c0b12c-757d-4918-854f-c44f3fa6e403-ssh-key\") pod \"bootstrap-openstack-openstack-cell1-6vts5\" (UID: \"a9c0b12c-757d-4918-854f-c44f3fa6e403\") " pod="openstack/bootstrap-openstack-openstack-cell1-6vts5" Dec 11 15:46:36 crc kubenswrapper[5050]: I1211 15:46:36.797284 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/a9c0b12c-757d-4918-854f-c44f3fa6e403-ceph\") pod \"bootstrap-openstack-openstack-cell1-6vts5\" (UID: \"a9c0b12c-757d-4918-854f-c44f3fa6e403\") " pod="openstack/bootstrap-openstack-openstack-cell1-6vts5" Dec 11 15:46:36 crc kubenswrapper[5050]: I1211 15:46:36.797315 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a9c0b12c-757d-4918-854f-c44f3fa6e403-ssh-key\") pod \"bootstrap-openstack-openstack-cell1-6vts5\" (UID: 
\"a9c0b12c-757d-4918-854f-c44f3fa6e403\") " pod="openstack/bootstrap-openstack-openstack-cell1-6vts5" Dec 11 15:46:36 crc kubenswrapper[5050]: I1211 15:46:36.797509 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9c0b12c-757d-4918-854f-c44f3fa6e403-bootstrap-combined-ca-bundle\") pod \"bootstrap-openstack-openstack-cell1-6vts5\" (UID: \"a9c0b12c-757d-4918-854f-c44f3fa6e403\") " pod="openstack/bootstrap-openstack-openstack-cell1-6vts5" Dec 11 15:46:36 crc kubenswrapper[5050]: I1211 15:46:36.807690 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a9c0b12c-757d-4918-854f-c44f3fa6e403-inventory\") pod \"bootstrap-openstack-openstack-cell1-6vts5\" (UID: \"a9c0b12c-757d-4918-854f-c44f3fa6e403\") " pod="openstack/bootstrap-openstack-openstack-cell1-6vts5" Dec 11 15:46:36 crc kubenswrapper[5050]: I1211 15:46:36.809234 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2cnq7\" (UniqueName: \"kubernetes.io/projected/a9c0b12c-757d-4918-854f-c44f3fa6e403-kube-api-access-2cnq7\") pod \"bootstrap-openstack-openstack-cell1-6vts5\" (UID: \"a9c0b12c-757d-4918-854f-c44f3fa6e403\") " pod="openstack/bootstrap-openstack-openstack-cell1-6vts5" Dec 11 15:46:36 crc kubenswrapper[5050]: I1211 15:46:36.859355 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-openstack-openstack-cell1-6vts5" Dec 11 15:46:37 crc kubenswrapper[5050]: I1211 15:46:37.485025 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-openstack-openstack-cell1-6vts5"] Dec 11 15:46:38 crc kubenswrapper[5050]: I1211 15:46:38.291438 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-openstack-openstack-cell1-6vts5" event={"ID":"a9c0b12c-757d-4918-854f-c44f3fa6e403","Type":"ContainerStarted","Data":"32680a83a0ebe814a7475868c73429d8f4b5e486dfd09adb8bd8bd7aa80568d2"} Dec 11 15:46:44 crc kubenswrapper[5050]: I1211 15:46:44.345992 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-openstack-openstack-cell1-6vts5" event={"ID":"a9c0b12c-757d-4918-854f-c44f3fa6e403","Type":"ContainerStarted","Data":"118704ea17e304845f15ed057fd4d04ca42c492a1e350957399dee1fb377c3c2"} Dec 11 15:46:44 crc kubenswrapper[5050]: I1211 15:46:44.371294 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-openstack-openstack-cell1-6vts5" podStartSLOduration=2.818042217 podStartE2EDuration="8.371276006s" podCreationTimestamp="2025-12-11 15:46:36 +0000 UTC" firstStartedPulling="2025-12-11 15:46:37.489810524 +0000 UTC m=+7088.333533110" lastFinishedPulling="2025-12-11 15:46:43.043044313 +0000 UTC m=+7093.886766899" observedRunningTime="2025-12-11 15:46:44.361024982 +0000 UTC m=+7095.204747578" watchObservedRunningTime="2025-12-11 15:46:44.371276006 +0000 UTC m=+7095.214998592" Dec 11 15:46:45 crc kubenswrapper[5050]: I1211 15:46:45.629373 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-l568j" Dec 11 15:46:45 crc kubenswrapper[5050]: I1211 15:46:45.683959 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-l568j" Dec 11 15:46:45 crc kubenswrapper[5050]: I1211 15:46:45.782737 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-marketplace/redhat-operators-l568j"] Dec 11 15:46:45 crc kubenswrapper[5050]: I1211 15:46:45.869305 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7kjsc"] Dec 11 15:46:45 crc kubenswrapper[5050]: I1211 15:46:45.869587 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-7kjsc" podUID="2f44abb7-49c6-4244-9a69-309876fe3215" containerName="registry-server" containerID="cri-o://e074530b0d4da97fcaefbc59cb64d7c912012148bd9038b67f9390ef6a6645fb" gracePeriod=2 Dec 11 15:46:46 crc kubenswrapper[5050]: I1211 15:46:46.373301 5050 generic.go:334] "Generic (PLEG): container finished" podID="2f44abb7-49c6-4244-9a69-309876fe3215" containerID="e074530b0d4da97fcaefbc59cb64d7c912012148bd9038b67f9390ef6a6645fb" exitCode=0 Dec 11 15:46:46 crc kubenswrapper[5050]: I1211 15:46:46.373408 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7kjsc" event={"ID":"2f44abb7-49c6-4244-9a69-309876fe3215","Type":"ContainerDied","Data":"e074530b0d4da97fcaefbc59cb64d7c912012148bd9038b67f9390ef6a6645fb"} Dec 11 15:46:46 crc kubenswrapper[5050]: I1211 15:46:46.950334 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7kjsc" Dec 11 15:46:47 crc kubenswrapper[5050]: I1211 15:46:47.142090 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f44abb7-49c6-4244-9a69-309876fe3215-utilities\") pod \"2f44abb7-49c6-4244-9a69-309876fe3215\" (UID: \"2f44abb7-49c6-4244-9a69-309876fe3215\") " Dec 11 15:46:47 crc kubenswrapper[5050]: I1211 15:46:47.142240 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-85lkz\" (UniqueName: \"kubernetes.io/projected/2f44abb7-49c6-4244-9a69-309876fe3215-kube-api-access-85lkz\") pod \"2f44abb7-49c6-4244-9a69-309876fe3215\" (UID: \"2f44abb7-49c6-4244-9a69-309876fe3215\") " Dec 11 15:46:47 crc kubenswrapper[5050]: I1211 15:46:47.142286 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f44abb7-49c6-4244-9a69-309876fe3215-catalog-content\") pod \"2f44abb7-49c6-4244-9a69-309876fe3215\" (UID: \"2f44abb7-49c6-4244-9a69-309876fe3215\") " Dec 11 15:46:47 crc kubenswrapper[5050]: I1211 15:46:47.142498 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2f44abb7-49c6-4244-9a69-309876fe3215-utilities" (OuterVolumeSpecName: "utilities") pod "2f44abb7-49c6-4244-9a69-309876fe3215" (UID: "2f44abb7-49c6-4244-9a69-309876fe3215"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 15:46:47 crc kubenswrapper[5050]: I1211 15:46:47.142917 5050 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f44abb7-49c6-4244-9a69-309876fe3215-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 15:46:47 crc kubenswrapper[5050]: I1211 15:46:47.147865 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f44abb7-49c6-4244-9a69-309876fe3215-kube-api-access-85lkz" (OuterVolumeSpecName: "kube-api-access-85lkz") pod "2f44abb7-49c6-4244-9a69-309876fe3215" (UID: "2f44abb7-49c6-4244-9a69-309876fe3215"). InnerVolumeSpecName "kube-api-access-85lkz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 15:46:47 crc kubenswrapper[5050]: I1211 15:46:47.245476 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-85lkz\" (UniqueName: \"kubernetes.io/projected/2f44abb7-49c6-4244-9a69-309876fe3215-kube-api-access-85lkz\") on node \"crc\" DevicePath \"\"" Dec 11 15:46:47 crc kubenswrapper[5050]: I1211 15:46:47.388551 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7kjsc" event={"ID":"2f44abb7-49c6-4244-9a69-309876fe3215","Type":"ContainerDied","Data":"c0231efe495739b0bff814650c65b38c96e38cb53146a56abb8a428b393a3fcc"} Dec 11 15:46:47 crc kubenswrapper[5050]: I1211 15:46:47.388602 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7kjsc" Dec 11 15:46:47 crc kubenswrapper[5050]: I1211 15:46:47.388611 5050 scope.go:117] "RemoveContainer" containerID="e074530b0d4da97fcaefbc59cb64d7c912012148bd9038b67f9390ef6a6645fb" Dec 11 15:46:47 crc kubenswrapper[5050]: I1211 15:46:47.389887 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2f44abb7-49c6-4244-9a69-309876fe3215-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2f44abb7-49c6-4244-9a69-309876fe3215" (UID: "2f44abb7-49c6-4244-9a69-309876fe3215"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 15:46:47 crc kubenswrapper[5050]: I1211 15:46:47.451961 5050 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f44abb7-49c6-4244-9a69-309876fe3215-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 15:46:47 crc kubenswrapper[5050]: I1211 15:46:47.519548 5050 scope.go:117] "RemoveContainer" containerID="75afbd52823f82c9a6df1a3f529de63347c49cd0697253058d9ef3c27264f2ca" Dec 11 15:46:47 crc kubenswrapper[5050]: I1211 15:46:47.545377 5050 scope.go:117] "RemoveContainer" containerID="b8057aa76b3cf8c210c5ce30bf935274f7040762e0bfa9e6f5fb9ff13ba68b76" Dec 11 15:46:47 crc kubenswrapper[5050]: I1211 15:46:47.718948 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7kjsc"] Dec 11 15:46:47 crc kubenswrapper[5050]: I1211 15:46:47.728977 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-7kjsc"] Dec 11 15:46:49 crc kubenswrapper[5050]: I1211 15:46:49.559043 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2f44abb7-49c6-4244-9a69-309876fe3215" path="/var/lib/kubelet/pods/2f44abb7-49c6-4244-9a69-309876fe3215/volumes" Dec 11 15:48:10 crc kubenswrapper[5050]: I1211 15:48:10.796658 5050 patch_prober.go:28] interesting pod/machine-config-daemon-wcb2s container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 15:48:10 crc kubenswrapper[5050]: I1211 15:48:10.797248 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 15:48:18 crc kubenswrapper[5050]: I1211 15:48:18.625007 5050 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/redhat-marketplace-gwmpm"] Dec 11 15:48:18 crc kubenswrapper[5050]: E1211 15:48:18.626179 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f44abb7-49c6-4244-9a69-309876fe3215" containerName="extract-utilities" Dec 11 15:48:18 crc kubenswrapper[5050]: I1211 15:48:18.626198 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f44abb7-49c6-4244-9a69-309876fe3215" containerName="extract-utilities" Dec 11 15:48:18 crc kubenswrapper[5050]: E1211 15:48:18.626220 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f44abb7-49c6-4244-9a69-309876fe3215" containerName="extract-content" Dec 11 15:48:18 crc kubenswrapper[5050]: I1211 15:48:18.626228 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f44abb7-49c6-4244-9a69-309876fe3215" containerName="extract-content" Dec 11 15:48:18 crc kubenswrapper[5050]: E1211 15:48:18.626251 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f44abb7-49c6-4244-9a69-309876fe3215" containerName="registry-server" Dec 11 15:48:18 crc kubenswrapper[5050]: I1211 15:48:18.626261 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f44abb7-49c6-4244-9a69-309876fe3215" containerName="registry-server" Dec 11 15:48:18 crc kubenswrapper[5050]: I1211 15:48:18.626523 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f44abb7-49c6-4244-9a69-309876fe3215" containerName="registry-server" Dec 11 15:48:18 crc kubenswrapper[5050]: I1211 15:48:18.628754 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gwmpm" Dec 11 15:48:18 crc kubenswrapper[5050]: I1211 15:48:18.636107 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gwmpm"] Dec 11 15:48:18 crc kubenswrapper[5050]: I1211 15:48:18.783433 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89224ef4-18e2-4d7b-a1ac-2610a4af73df-utilities\") pod \"redhat-marketplace-gwmpm\" (UID: \"89224ef4-18e2-4d7b-a1ac-2610a4af73df\") " pod="openshift-marketplace/redhat-marketplace-gwmpm" Dec 11 15:48:18 crc kubenswrapper[5050]: I1211 15:48:18.783635 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52q92\" (UniqueName: \"kubernetes.io/projected/89224ef4-18e2-4d7b-a1ac-2610a4af73df-kube-api-access-52q92\") pod \"redhat-marketplace-gwmpm\" (UID: \"89224ef4-18e2-4d7b-a1ac-2610a4af73df\") " pod="openshift-marketplace/redhat-marketplace-gwmpm" Dec 11 15:48:18 crc kubenswrapper[5050]: I1211 15:48:18.783701 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89224ef4-18e2-4d7b-a1ac-2610a4af73df-catalog-content\") pod \"redhat-marketplace-gwmpm\" (UID: \"89224ef4-18e2-4d7b-a1ac-2610a4af73df\") " pod="openshift-marketplace/redhat-marketplace-gwmpm" Dec 11 15:48:18 crc kubenswrapper[5050]: I1211 15:48:18.885664 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-52q92\" (UniqueName: \"kubernetes.io/projected/89224ef4-18e2-4d7b-a1ac-2610a4af73df-kube-api-access-52q92\") pod \"redhat-marketplace-gwmpm\" (UID: \"89224ef4-18e2-4d7b-a1ac-2610a4af73df\") " pod="openshift-marketplace/redhat-marketplace-gwmpm" Dec 11 15:48:18 crc kubenswrapper[5050]: I1211 15:48:18.885742 5050 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89224ef4-18e2-4d7b-a1ac-2610a4af73df-catalog-content\") pod \"redhat-marketplace-gwmpm\" (UID: \"89224ef4-18e2-4d7b-a1ac-2610a4af73df\") " pod="openshift-marketplace/redhat-marketplace-gwmpm" Dec 11 15:48:18 crc kubenswrapper[5050]: I1211 15:48:18.885822 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89224ef4-18e2-4d7b-a1ac-2610a4af73df-utilities\") pod \"redhat-marketplace-gwmpm\" (UID: \"89224ef4-18e2-4d7b-a1ac-2610a4af73df\") " pod="openshift-marketplace/redhat-marketplace-gwmpm" Dec 11 15:48:18 crc kubenswrapper[5050]: I1211 15:48:18.886311 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89224ef4-18e2-4d7b-a1ac-2610a4af73df-catalog-content\") pod \"redhat-marketplace-gwmpm\" (UID: \"89224ef4-18e2-4d7b-a1ac-2610a4af73df\") " pod="openshift-marketplace/redhat-marketplace-gwmpm" Dec 11 15:48:18 crc kubenswrapper[5050]: I1211 15:48:18.886343 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89224ef4-18e2-4d7b-a1ac-2610a4af73df-utilities\") pod \"redhat-marketplace-gwmpm\" (UID: \"89224ef4-18e2-4d7b-a1ac-2610a4af73df\") " pod="openshift-marketplace/redhat-marketplace-gwmpm" Dec 11 15:48:18 crc kubenswrapper[5050]: I1211 15:48:18.904900 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-52q92\" (UniqueName: \"kubernetes.io/projected/89224ef4-18e2-4d7b-a1ac-2610a4af73df-kube-api-access-52q92\") pod \"redhat-marketplace-gwmpm\" (UID: \"89224ef4-18e2-4d7b-a1ac-2610a4af73df\") " pod="openshift-marketplace/redhat-marketplace-gwmpm" Dec 11 15:48:18 crc kubenswrapper[5050]: I1211 15:48:18.947615 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gwmpm" Dec 11 15:48:19 crc kubenswrapper[5050]: I1211 15:48:19.469259 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gwmpm"] Dec 11 15:48:20 crc kubenswrapper[5050]: I1211 15:48:20.289105 5050 generic.go:334] "Generic (PLEG): container finished" podID="89224ef4-18e2-4d7b-a1ac-2610a4af73df" containerID="373c85552c453c57e1443170f8ba4f22590b41288d6e8282885f84e4662d9f5c" exitCode=0 Dec 11 15:48:20 crc kubenswrapper[5050]: I1211 15:48:20.289208 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gwmpm" event={"ID":"89224ef4-18e2-4d7b-a1ac-2610a4af73df","Type":"ContainerDied","Data":"373c85552c453c57e1443170f8ba4f22590b41288d6e8282885f84e4662d9f5c"} Dec 11 15:48:20 crc kubenswrapper[5050]: I1211 15:48:20.289422 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gwmpm" event={"ID":"89224ef4-18e2-4d7b-a1ac-2610a4af73df","Type":"ContainerStarted","Data":"89707cf3bf8566f0d9eedba198e37f4c7f66ab2945e8d16555105bf12dcf84f0"} Dec 11 15:48:20 crc kubenswrapper[5050]: I1211 15:48:20.291890 5050 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 11 15:48:22 crc kubenswrapper[5050]: I1211 15:48:22.316446 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gwmpm" event={"ID":"89224ef4-18e2-4d7b-a1ac-2610a4af73df","Type":"ContainerStarted","Data":"d6aaa61be9dde9ed3bc9551b10f7854ab3be7c7fecc3edf5253c65c2c3dbe4a3"} Dec 11 15:48:23 crc kubenswrapper[5050]: I1211 15:48:23.329848 5050 generic.go:334] "Generic (PLEG): container finished" podID="89224ef4-18e2-4d7b-a1ac-2610a4af73df" containerID="d6aaa61be9dde9ed3bc9551b10f7854ab3be7c7fecc3edf5253c65c2c3dbe4a3" exitCode=0 Dec 11 15:48:23 crc kubenswrapper[5050]: I1211 15:48:23.329964 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gwmpm" event={"ID":"89224ef4-18e2-4d7b-a1ac-2610a4af73df","Type":"ContainerDied","Data":"d6aaa61be9dde9ed3bc9551b10f7854ab3be7c7fecc3edf5253c65c2c3dbe4a3"} Dec 11 15:48:23 crc kubenswrapper[5050]: I1211 15:48:23.330361 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gwmpm" event={"ID":"89224ef4-18e2-4d7b-a1ac-2610a4af73df","Type":"ContainerStarted","Data":"ed254cd7c80d35b960b73a99134609559f2f5827c8bb8f4bff02cfe0ac8512e9"} Dec 11 15:48:23 crc kubenswrapper[5050]: I1211 15:48:23.356554 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-gwmpm" podStartSLOduration=2.818191423 podStartE2EDuration="5.356526014s" podCreationTimestamp="2025-12-11 15:48:18 +0000 UTC" firstStartedPulling="2025-12-11 15:48:20.291651347 +0000 UTC m=+7191.135373933" lastFinishedPulling="2025-12-11 15:48:22.829985938 +0000 UTC m=+7193.673708524" observedRunningTime="2025-12-11 15:48:23.350121022 +0000 UTC m=+7194.193843618" watchObservedRunningTime="2025-12-11 15:48:23.356526014 +0000 UTC m=+7194.200248600" Dec 11 15:48:28 crc kubenswrapper[5050]: I1211 15:48:28.947835 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-gwmpm" Dec 11 15:48:28 crc kubenswrapper[5050]: I1211 15:48:28.948335 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-gwmpm" Dec 11 
15:48:28 crc kubenswrapper[5050]: I1211 15:48:28.996667 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-gwmpm" Dec 11 15:48:29 crc kubenswrapper[5050]: I1211 15:48:29.428364 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-gwmpm" Dec 11 15:48:29 crc kubenswrapper[5050]: I1211 15:48:29.473674 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gwmpm"] Dec 11 15:48:31 crc kubenswrapper[5050]: I1211 15:48:31.400093 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-gwmpm" podUID="89224ef4-18e2-4d7b-a1ac-2610a4af73df" containerName="registry-server" containerID="cri-o://ed254cd7c80d35b960b73a99134609559f2f5827c8bb8f4bff02cfe0ac8512e9" gracePeriod=2 Dec 11 15:48:31 crc kubenswrapper[5050]: I1211 15:48:31.905585 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gwmpm" Dec 11 15:48:32 crc kubenswrapper[5050]: I1211 15:48:32.026603 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89224ef4-18e2-4d7b-a1ac-2610a4af73df-utilities\") pod \"89224ef4-18e2-4d7b-a1ac-2610a4af73df\" (UID: \"89224ef4-18e2-4d7b-a1ac-2610a4af73df\") " Dec 11 15:48:32 crc kubenswrapper[5050]: I1211 15:48:32.027218 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-52q92\" (UniqueName: \"kubernetes.io/projected/89224ef4-18e2-4d7b-a1ac-2610a4af73df-kube-api-access-52q92\") pod \"89224ef4-18e2-4d7b-a1ac-2610a4af73df\" (UID: \"89224ef4-18e2-4d7b-a1ac-2610a4af73df\") " Dec 11 15:48:32 crc kubenswrapper[5050]: I1211 15:48:32.027278 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89224ef4-18e2-4d7b-a1ac-2610a4af73df-catalog-content\") pod \"89224ef4-18e2-4d7b-a1ac-2610a4af73df\" (UID: \"89224ef4-18e2-4d7b-a1ac-2610a4af73df\") " Dec 11 15:48:32 crc kubenswrapper[5050]: I1211 15:48:32.027370 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/89224ef4-18e2-4d7b-a1ac-2610a4af73df-utilities" (OuterVolumeSpecName: "utilities") pod "89224ef4-18e2-4d7b-a1ac-2610a4af73df" (UID: "89224ef4-18e2-4d7b-a1ac-2610a4af73df"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 15:48:32 crc kubenswrapper[5050]: I1211 15:48:32.027912 5050 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89224ef4-18e2-4d7b-a1ac-2610a4af73df-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 15:48:32 crc kubenswrapper[5050]: I1211 15:48:32.032709 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89224ef4-18e2-4d7b-a1ac-2610a4af73df-kube-api-access-52q92" (OuterVolumeSpecName: "kube-api-access-52q92") pod "89224ef4-18e2-4d7b-a1ac-2610a4af73df" (UID: "89224ef4-18e2-4d7b-a1ac-2610a4af73df"). InnerVolumeSpecName "kube-api-access-52q92". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 15:48:32 crc kubenswrapper[5050]: I1211 15:48:32.051569 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/89224ef4-18e2-4d7b-a1ac-2610a4af73df-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "89224ef4-18e2-4d7b-a1ac-2610a4af73df" (UID: "89224ef4-18e2-4d7b-a1ac-2610a4af73df"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 15:48:32 crc kubenswrapper[5050]: I1211 15:48:32.129765 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-52q92\" (UniqueName: \"kubernetes.io/projected/89224ef4-18e2-4d7b-a1ac-2610a4af73df-kube-api-access-52q92\") on node \"crc\" DevicePath \"\"" Dec 11 15:48:32 crc kubenswrapper[5050]: I1211 15:48:32.129800 5050 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89224ef4-18e2-4d7b-a1ac-2610a4af73df-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 15:48:32 crc kubenswrapper[5050]: I1211 15:48:32.412851 5050 generic.go:334] "Generic (PLEG): container finished" podID="89224ef4-18e2-4d7b-a1ac-2610a4af73df" containerID="ed254cd7c80d35b960b73a99134609559f2f5827c8bb8f4bff02cfe0ac8512e9" exitCode=0 Dec 11 15:48:32 crc kubenswrapper[5050]: I1211 15:48:32.412900 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gwmpm" event={"ID":"89224ef4-18e2-4d7b-a1ac-2610a4af73df","Type":"ContainerDied","Data":"ed254cd7c80d35b960b73a99134609559f2f5827c8bb8f4bff02cfe0ac8512e9"} Dec 11 15:48:32 crc kubenswrapper[5050]: I1211 15:48:32.412927 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gwmpm" Dec 11 15:48:32 crc kubenswrapper[5050]: I1211 15:48:32.412944 5050 scope.go:117] "RemoveContainer" containerID="ed254cd7c80d35b960b73a99134609559f2f5827c8bb8f4bff02cfe0ac8512e9" Dec 11 15:48:32 crc kubenswrapper[5050]: I1211 15:48:32.412931 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gwmpm" event={"ID":"89224ef4-18e2-4d7b-a1ac-2610a4af73df","Type":"ContainerDied","Data":"89707cf3bf8566f0d9eedba198e37f4c7f66ab2945e8d16555105bf12dcf84f0"} Dec 11 15:48:32 crc kubenswrapper[5050]: I1211 15:48:32.448106 5050 scope.go:117] "RemoveContainer" containerID="d6aaa61be9dde9ed3bc9551b10f7854ab3be7c7fecc3edf5253c65c2c3dbe4a3" Dec 11 15:48:32 crc kubenswrapper[5050]: I1211 15:48:32.449349 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gwmpm"] Dec 11 15:48:32 crc kubenswrapper[5050]: I1211 15:48:32.462792 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-gwmpm"] Dec 11 15:48:32 crc kubenswrapper[5050]: I1211 15:48:32.477799 5050 scope.go:117] "RemoveContainer" containerID="373c85552c453c57e1443170f8ba4f22590b41288d6e8282885f84e4662d9f5c" Dec 11 15:48:32 crc kubenswrapper[5050]: I1211 15:48:32.521572 5050 scope.go:117] "RemoveContainer" containerID="ed254cd7c80d35b960b73a99134609559f2f5827c8bb8f4bff02cfe0ac8512e9" Dec 11 15:48:32 crc kubenswrapper[5050]: E1211 15:48:32.521957 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ed254cd7c80d35b960b73a99134609559f2f5827c8bb8f4bff02cfe0ac8512e9\": container with ID starting with 
ed254cd7c80d35b960b73a99134609559f2f5827c8bb8f4bff02cfe0ac8512e9 not found: ID does not exist" containerID="ed254cd7c80d35b960b73a99134609559f2f5827c8bb8f4bff02cfe0ac8512e9" Dec 11 15:48:32 crc kubenswrapper[5050]: I1211 15:48:32.521995 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed254cd7c80d35b960b73a99134609559f2f5827c8bb8f4bff02cfe0ac8512e9"} err="failed to get container status \"ed254cd7c80d35b960b73a99134609559f2f5827c8bb8f4bff02cfe0ac8512e9\": rpc error: code = NotFound desc = could not find container \"ed254cd7c80d35b960b73a99134609559f2f5827c8bb8f4bff02cfe0ac8512e9\": container with ID starting with ed254cd7c80d35b960b73a99134609559f2f5827c8bb8f4bff02cfe0ac8512e9 not found: ID does not exist" Dec 11 15:48:32 crc kubenswrapper[5050]: I1211 15:48:32.522031 5050 scope.go:117] "RemoveContainer" containerID="d6aaa61be9dde9ed3bc9551b10f7854ab3be7c7fecc3edf5253c65c2c3dbe4a3" Dec 11 15:48:32 crc kubenswrapper[5050]: E1211 15:48:32.522361 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d6aaa61be9dde9ed3bc9551b10f7854ab3be7c7fecc3edf5253c65c2c3dbe4a3\": container with ID starting with d6aaa61be9dde9ed3bc9551b10f7854ab3be7c7fecc3edf5253c65c2c3dbe4a3 not found: ID does not exist" containerID="d6aaa61be9dde9ed3bc9551b10f7854ab3be7c7fecc3edf5253c65c2c3dbe4a3" Dec 11 15:48:32 crc kubenswrapper[5050]: I1211 15:48:32.522389 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d6aaa61be9dde9ed3bc9551b10f7854ab3be7c7fecc3edf5253c65c2c3dbe4a3"} err="failed to get container status \"d6aaa61be9dde9ed3bc9551b10f7854ab3be7c7fecc3edf5253c65c2c3dbe4a3\": rpc error: code = NotFound desc = could not find container \"d6aaa61be9dde9ed3bc9551b10f7854ab3be7c7fecc3edf5253c65c2c3dbe4a3\": container with ID starting with d6aaa61be9dde9ed3bc9551b10f7854ab3be7c7fecc3edf5253c65c2c3dbe4a3 not found: ID does not exist" Dec 11 15:48:32 crc kubenswrapper[5050]: I1211 15:48:32.522404 5050 scope.go:117] "RemoveContainer" containerID="373c85552c453c57e1443170f8ba4f22590b41288d6e8282885f84e4662d9f5c" Dec 11 15:48:32 crc kubenswrapper[5050]: E1211 15:48:32.522670 5050 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"373c85552c453c57e1443170f8ba4f22590b41288d6e8282885f84e4662d9f5c\": container with ID starting with 373c85552c453c57e1443170f8ba4f22590b41288d6e8282885f84e4662d9f5c not found: ID does not exist" containerID="373c85552c453c57e1443170f8ba4f22590b41288d6e8282885f84e4662d9f5c" Dec 11 15:48:32 crc kubenswrapper[5050]: I1211 15:48:32.522689 5050 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"373c85552c453c57e1443170f8ba4f22590b41288d6e8282885f84e4662d9f5c"} err="failed to get container status \"373c85552c453c57e1443170f8ba4f22590b41288d6e8282885f84e4662d9f5c\": rpc error: code = NotFound desc = could not find container \"373c85552c453c57e1443170f8ba4f22590b41288d6e8282885f84e4662d9f5c\": container with ID starting with 373c85552c453c57e1443170f8ba4f22590b41288d6e8282885f84e4662d9f5c not found: ID does not exist" Dec 11 15:48:33 crc kubenswrapper[5050]: I1211 15:48:33.574910 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89224ef4-18e2-4d7b-a1ac-2610a4af73df" path="/var/lib/kubelet/pods/89224ef4-18e2-4d7b-a1ac-2610a4af73df/volumes" Dec 11 15:48:40 crc kubenswrapper[5050]: I1211 15:48:40.797029 
5050 patch_prober.go:28] interesting pod/machine-config-daemon-wcb2s container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 15:48:40 crc kubenswrapper[5050]: I1211 15:48:40.797516 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 15:49:10 crc kubenswrapper[5050]: I1211 15:49:10.796351 5050 patch_prober.go:28] interesting pod/machine-config-daemon-wcb2s container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 15:49:10 crc kubenswrapper[5050]: I1211 15:49:10.797072 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 15:49:10 crc kubenswrapper[5050]: I1211 15:49:10.797494 5050 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" Dec 11 15:49:10 crc kubenswrapper[5050]: I1211 15:49:10.798339 5050 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5fa6a64e32e1f24d28d1f983c96e1edb7963e605f315af22dbc0a41da96ac168"} pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 11 15:49:10 crc kubenswrapper[5050]: I1211 15:49:10.798391 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" containerName="machine-config-daemon" containerID="cri-o://5fa6a64e32e1f24d28d1f983c96e1edb7963e605f315af22dbc0a41da96ac168" gracePeriod=600 Dec 11 15:49:11 crc kubenswrapper[5050]: I1211 15:49:11.811425 5050 generic.go:334] "Generic (PLEG): container finished" podID="7e849b2e-7cd7-4e49-acd2-deab139c699a" containerID="5fa6a64e32e1f24d28d1f983c96e1edb7963e605f315af22dbc0a41da96ac168" exitCode=0 Dec 11 15:49:11 crc kubenswrapper[5050]: I1211 15:49:11.811479 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" event={"ID":"7e849b2e-7cd7-4e49-acd2-deab139c699a","Type":"ContainerDied","Data":"5fa6a64e32e1f24d28d1f983c96e1edb7963e605f315af22dbc0a41da96ac168"} Dec 11 15:49:11 crc kubenswrapper[5050]: I1211 15:49:11.811975 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" event={"ID":"7e849b2e-7cd7-4e49-acd2-deab139c699a","Type":"ContainerStarted","Data":"33899e24adacef82929b0fbbcd58af7cb4f7aa3935b49ec9b5b4045a51da1d14"} Dec 11 15:49:11 crc kubenswrapper[5050]: I1211 15:49:11.812000 5050 scope.go:117] "RemoveContainer" 
containerID="e1193cb2672c9b23c9602bdf8ae6d04616c4f7a2af4ed4a2aeb230e558518c57" Dec 11 15:49:48 crc kubenswrapper[5050]: I1211 15:49:48.135704 5050 generic.go:334] "Generic (PLEG): container finished" podID="a9c0b12c-757d-4918-854f-c44f3fa6e403" containerID="118704ea17e304845f15ed057fd4d04ca42c492a1e350957399dee1fb377c3c2" exitCode=0 Dec 11 15:49:48 crc kubenswrapper[5050]: I1211 15:49:48.135794 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-openstack-openstack-cell1-6vts5" event={"ID":"a9c0b12c-757d-4918-854f-c44f3fa6e403","Type":"ContainerDied","Data":"118704ea17e304845f15ed057fd4d04ca42c492a1e350957399dee1fb377c3c2"} Dec 11 15:49:49 crc kubenswrapper[5050]: I1211 15:49:49.627573 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-openstack-openstack-cell1-6vts5" Dec 11 15:49:49 crc kubenswrapper[5050]: I1211 15:49:49.738662 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2cnq7\" (UniqueName: \"kubernetes.io/projected/a9c0b12c-757d-4918-854f-c44f3fa6e403-kube-api-access-2cnq7\") pod \"a9c0b12c-757d-4918-854f-c44f3fa6e403\" (UID: \"a9c0b12c-757d-4918-854f-c44f3fa6e403\") " Dec 11 15:49:49 crc kubenswrapper[5050]: I1211 15:49:49.738888 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a9c0b12c-757d-4918-854f-c44f3fa6e403-ssh-key\") pod \"a9c0b12c-757d-4918-854f-c44f3fa6e403\" (UID: \"a9c0b12c-757d-4918-854f-c44f3fa6e403\") " Dec 11 15:49:49 crc kubenswrapper[5050]: I1211 15:49:49.738944 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/a9c0b12c-757d-4918-854f-c44f3fa6e403-ceph\") pod \"a9c0b12c-757d-4918-854f-c44f3fa6e403\" (UID: \"a9c0b12c-757d-4918-854f-c44f3fa6e403\") " Dec 11 15:49:49 crc kubenswrapper[5050]: I1211 15:49:49.739087 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9c0b12c-757d-4918-854f-c44f3fa6e403-bootstrap-combined-ca-bundle\") pod \"a9c0b12c-757d-4918-854f-c44f3fa6e403\" (UID: \"a9c0b12c-757d-4918-854f-c44f3fa6e403\") " Dec 11 15:49:49 crc kubenswrapper[5050]: I1211 15:49:49.739209 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a9c0b12c-757d-4918-854f-c44f3fa6e403-inventory\") pod \"a9c0b12c-757d-4918-854f-c44f3fa6e403\" (UID: \"a9c0b12c-757d-4918-854f-c44f3fa6e403\") " Dec 11 15:49:49 crc kubenswrapper[5050]: I1211 15:49:49.744606 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9c0b12c-757d-4918-854f-c44f3fa6e403-kube-api-access-2cnq7" (OuterVolumeSpecName: "kube-api-access-2cnq7") pod "a9c0b12c-757d-4918-854f-c44f3fa6e403" (UID: "a9c0b12c-757d-4918-854f-c44f3fa6e403"). InnerVolumeSpecName "kube-api-access-2cnq7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 15:49:49 crc kubenswrapper[5050]: I1211 15:49:49.745092 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9c0b12c-757d-4918-854f-c44f3fa6e403-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "a9c0b12c-757d-4918-854f-c44f3fa6e403" (UID: "a9c0b12c-757d-4918-854f-c44f3fa6e403"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 15:49:49 crc kubenswrapper[5050]: I1211 15:49:49.745115 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9c0b12c-757d-4918-854f-c44f3fa6e403-ceph" (OuterVolumeSpecName: "ceph") pod "a9c0b12c-757d-4918-854f-c44f3fa6e403" (UID: "a9c0b12c-757d-4918-854f-c44f3fa6e403"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 15:49:49 crc kubenswrapper[5050]: I1211 15:49:49.766531 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9c0b12c-757d-4918-854f-c44f3fa6e403-inventory" (OuterVolumeSpecName: "inventory") pod "a9c0b12c-757d-4918-854f-c44f3fa6e403" (UID: "a9c0b12c-757d-4918-854f-c44f3fa6e403"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 15:49:49 crc kubenswrapper[5050]: I1211 15:49:49.768807 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9c0b12c-757d-4918-854f-c44f3fa6e403-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "a9c0b12c-757d-4918-854f-c44f3fa6e403" (UID: "a9c0b12c-757d-4918-854f-c44f3fa6e403"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 15:49:49 crc kubenswrapper[5050]: I1211 15:49:49.843088 5050 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/a9c0b12c-757d-4918-854f-c44f3fa6e403-ceph\") on node \"crc\" DevicePath \"\"" Dec 11 15:49:49 crc kubenswrapper[5050]: I1211 15:49:49.843116 5050 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9c0b12c-757d-4918-854f-c44f3fa6e403-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 15:49:49 crc kubenswrapper[5050]: I1211 15:49:49.843140 5050 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a9c0b12c-757d-4918-854f-c44f3fa6e403-inventory\") on node \"crc\" DevicePath \"\"" Dec 11 15:49:49 crc kubenswrapper[5050]: I1211 15:49:49.843150 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2cnq7\" (UniqueName: \"kubernetes.io/projected/a9c0b12c-757d-4918-854f-c44f3fa6e403-kube-api-access-2cnq7\") on node \"crc\" DevicePath \"\"" Dec 11 15:49:49 crc kubenswrapper[5050]: I1211 15:49:49.843158 5050 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a9c0b12c-757d-4918-854f-c44f3fa6e403-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 11 15:49:50 crc kubenswrapper[5050]: I1211 15:49:50.160694 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-openstack-openstack-cell1-6vts5" event={"ID":"a9c0b12c-757d-4918-854f-c44f3fa6e403","Type":"ContainerDied","Data":"32680a83a0ebe814a7475868c73429d8f4b5e486dfd09adb8bd8bd7aa80568d2"} Dec 11 15:49:50 crc kubenswrapper[5050]: I1211 15:49:50.161089 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="32680a83a0ebe814a7475868c73429d8f4b5e486dfd09adb8bd8bd7aa80568d2" Dec 11 15:49:50 crc kubenswrapper[5050]: I1211 15:49:50.160756 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-openstack-openstack-cell1-6vts5" Dec 11 15:49:50 crc kubenswrapper[5050]: I1211 15:49:50.309614 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-openstack-openstack-cell1-w86fd"] Dec 11 15:49:50 crc kubenswrapper[5050]: E1211 15:49:50.310293 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89224ef4-18e2-4d7b-a1ac-2610a4af73df" containerName="registry-server" Dec 11 15:49:50 crc kubenswrapper[5050]: I1211 15:49:50.310433 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="89224ef4-18e2-4d7b-a1ac-2610a4af73df" containerName="registry-server" Dec 11 15:49:50 crc kubenswrapper[5050]: E1211 15:49:50.310535 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89224ef4-18e2-4d7b-a1ac-2610a4af73df" containerName="extract-content" Dec 11 15:49:50 crc kubenswrapper[5050]: I1211 15:49:50.310598 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="89224ef4-18e2-4d7b-a1ac-2610a4af73df" containerName="extract-content" Dec 11 15:49:50 crc kubenswrapper[5050]: E1211 15:49:50.310669 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89224ef4-18e2-4d7b-a1ac-2610a4af73df" containerName="extract-utilities" Dec 11 15:49:50 crc kubenswrapper[5050]: I1211 15:49:50.310739 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="89224ef4-18e2-4d7b-a1ac-2610a4af73df" containerName="extract-utilities" Dec 11 15:49:50 crc kubenswrapper[5050]: E1211 15:49:50.310833 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9c0b12c-757d-4918-854f-c44f3fa6e403" containerName="bootstrap-openstack-openstack-cell1" Dec 11 15:49:50 crc kubenswrapper[5050]: I1211 15:49:50.310889 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9c0b12c-757d-4918-854f-c44f3fa6e403" containerName="bootstrap-openstack-openstack-cell1" Dec 11 15:49:50 crc kubenswrapper[5050]: I1211 15:49:50.311186 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="89224ef4-18e2-4d7b-a1ac-2610a4af73df" containerName="registry-server" Dec 11 15:49:50 crc kubenswrapper[5050]: I1211 15:49:50.311312 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9c0b12c-757d-4918-854f-c44f3fa6e403" containerName="bootstrap-openstack-openstack-cell1" Dec 11 15:49:50 crc kubenswrapper[5050]: I1211 15:49:50.312330 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-openstack-openstack-cell1-w86fd" Dec 11 15:49:50 crc kubenswrapper[5050]: I1211 15:49:50.314867 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Dec 11 15:49:50 crc kubenswrapper[5050]: I1211 15:49:50.315166 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Dec 11 15:49:50 crc kubenswrapper[5050]: I1211 15:49:50.315577 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 11 15:49:50 crc kubenswrapper[5050]: I1211 15:49:50.315677 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-mvxd9" Dec 11 15:49:50 crc kubenswrapper[5050]: I1211 15:49:50.333546 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-openstack-openstack-cell1-w86fd"] Dec 11 15:49:50 crc kubenswrapper[5050]: I1211 15:49:50.456521 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/cc0a18bd-14c6-490d-9470-ba2dbf8523e2-ssh-key\") pod \"download-cache-openstack-openstack-cell1-w86fd\" (UID: \"cc0a18bd-14c6-490d-9470-ba2dbf8523e2\") " pod="openstack/download-cache-openstack-openstack-cell1-w86fd" Dec 11 15:49:50 crc kubenswrapper[5050]: I1211 15:49:50.456852 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cc0a18bd-14c6-490d-9470-ba2dbf8523e2-inventory\") pod \"download-cache-openstack-openstack-cell1-w86fd\" (UID: \"cc0a18bd-14c6-490d-9470-ba2dbf8523e2\") " pod="openstack/download-cache-openstack-openstack-cell1-w86fd" Dec 11 15:49:50 crc kubenswrapper[5050]: I1211 15:49:50.456991 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/cc0a18bd-14c6-490d-9470-ba2dbf8523e2-ceph\") pod \"download-cache-openstack-openstack-cell1-w86fd\" (UID: \"cc0a18bd-14c6-490d-9470-ba2dbf8523e2\") " pod="openstack/download-cache-openstack-openstack-cell1-w86fd" Dec 11 15:49:50 crc kubenswrapper[5050]: I1211 15:49:50.457138 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7gm5s\" (UniqueName: \"kubernetes.io/projected/cc0a18bd-14c6-490d-9470-ba2dbf8523e2-kube-api-access-7gm5s\") pod \"download-cache-openstack-openstack-cell1-w86fd\" (UID: \"cc0a18bd-14c6-490d-9470-ba2dbf8523e2\") " pod="openstack/download-cache-openstack-openstack-cell1-w86fd" Dec 11 15:49:50 crc kubenswrapper[5050]: I1211 15:49:50.559396 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cc0a18bd-14c6-490d-9470-ba2dbf8523e2-inventory\") pod \"download-cache-openstack-openstack-cell1-w86fd\" (UID: \"cc0a18bd-14c6-490d-9470-ba2dbf8523e2\") " pod="openstack/download-cache-openstack-openstack-cell1-w86fd" Dec 11 15:49:50 crc kubenswrapper[5050]: I1211 15:49:50.559483 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/cc0a18bd-14c6-490d-9470-ba2dbf8523e2-ceph\") pod \"download-cache-openstack-openstack-cell1-w86fd\" (UID: \"cc0a18bd-14c6-490d-9470-ba2dbf8523e2\") " pod="openstack/download-cache-openstack-openstack-cell1-w86fd" Dec 11 15:49:50 crc 
kubenswrapper[5050]: I1211 15:49:50.559523 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7gm5s\" (UniqueName: \"kubernetes.io/projected/cc0a18bd-14c6-490d-9470-ba2dbf8523e2-kube-api-access-7gm5s\") pod \"download-cache-openstack-openstack-cell1-w86fd\" (UID: \"cc0a18bd-14c6-490d-9470-ba2dbf8523e2\") " pod="openstack/download-cache-openstack-openstack-cell1-w86fd" Dec 11 15:49:50 crc kubenswrapper[5050]: I1211 15:49:50.559576 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/cc0a18bd-14c6-490d-9470-ba2dbf8523e2-ssh-key\") pod \"download-cache-openstack-openstack-cell1-w86fd\" (UID: \"cc0a18bd-14c6-490d-9470-ba2dbf8523e2\") " pod="openstack/download-cache-openstack-openstack-cell1-w86fd" Dec 11 15:49:50 crc kubenswrapper[5050]: I1211 15:49:50.564575 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/cc0a18bd-14c6-490d-9470-ba2dbf8523e2-ssh-key\") pod \"download-cache-openstack-openstack-cell1-w86fd\" (UID: \"cc0a18bd-14c6-490d-9470-ba2dbf8523e2\") " pod="openstack/download-cache-openstack-openstack-cell1-w86fd" Dec 11 15:49:50 crc kubenswrapper[5050]: I1211 15:49:50.564691 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cc0a18bd-14c6-490d-9470-ba2dbf8523e2-inventory\") pod \"download-cache-openstack-openstack-cell1-w86fd\" (UID: \"cc0a18bd-14c6-490d-9470-ba2dbf8523e2\") " pod="openstack/download-cache-openstack-openstack-cell1-w86fd" Dec 11 15:49:50 crc kubenswrapper[5050]: I1211 15:49:50.565959 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/cc0a18bd-14c6-490d-9470-ba2dbf8523e2-ceph\") pod \"download-cache-openstack-openstack-cell1-w86fd\" (UID: \"cc0a18bd-14c6-490d-9470-ba2dbf8523e2\") " pod="openstack/download-cache-openstack-openstack-cell1-w86fd" Dec 11 15:49:50 crc kubenswrapper[5050]: I1211 15:49:50.576147 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7gm5s\" (UniqueName: \"kubernetes.io/projected/cc0a18bd-14c6-490d-9470-ba2dbf8523e2-kube-api-access-7gm5s\") pod \"download-cache-openstack-openstack-cell1-w86fd\" (UID: \"cc0a18bd-14c6-490d-9470-ba2dbf8523e2\") " pod="openstack/download-cache-openstack-openstack-cell1-w86fd" Dec 11 15:49:50 crc kubenswrapper[5050]: I1211 15:49:50.630886 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-openstack-openstack-cell1-w86fd" Dec 11 15:49:51 crc kubenswrapper[5050]: I1211 15:49:51.187776 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-openstack-openstack-cell1-w86fd"] Dec 11 15:49:51 crc kubenswrapper[5050]: W1211 15:49:51.191321 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcc0a18bd_14c6_490d_9470_ba2dbf8523e2.slice/crio-1379d172e02103a112f0883f141c55d9dc4f7e3893cd4d577fc3d9392625549c WatchSource:0}: Error finding container 1379d172e02103a112f0883f141c55d9dc4f7e3893cd4d577fc3d9392625549c: Status 404 returned error can't find the container with id 1379d172e02103a112f0883f141c55d9dc4f7e3893cd4d577fc3d9392625549c Dec 11 15:49:52 crc kubenswrapper[5050]: I1211 15:49:52.181960 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-openstack-openstack-cell1-w86fd" event={"ID":"cc0a18bd-14c6-490d-9470-ba2dbf8523e2","Type":"ContainerStarted","Data":"6153afd9f04f13a6d36363fe3dba9f6c6810a3784079824aec9c8749b0de0375"} Dec 11 15:49:52 crc kubenswrapper[5050]: I1211 15:49:52.182260 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-openstack-openstack-cell1-w86fd" event={"ID":"cc0a18bd-14c6-490d-9470-ba2dbf8523e2","Type":"ContainerStarted","Data":"1379d172e02103a112f0883f141c55d9dc4f7e3893cd4d577fc3d9392625549c"} Dec 11 15:49:52 crc kubenswrapper[5050]: I1211 15:49:52.209332 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-openstack-openstack-cell1-w86fd" podStartSLOduration=1.610141343 podStartE2EDuration="2.209314165s" podCreationTimestamp="2025-12-11 15:49:50 +0000 UTC" firstStartedPulling="2025-12-11 15:49:51.193525542 +0000 UTC m=+7282.037248128" lastFinishedPulling="2025-12-11 15:49:51.792698364 +0000 UTC m=+7282.636420950" observedRunningTime="2025-12-11 15:49:52.201746322 +0000 UTC m=+7283.045468918" watchObservedRunningTime="2025-12-11 15:49:52.209314165 +0000 UTC m=+7283.053036751" Dec 11 15:51:26 crc kubenswrapper[5050]: I1211 15:51:26.081478 5050 generic.go:334] "Generic (PLEG): container finished" podID="cc0a18bd-14c6-490d-9470-ba2dbf8523e2" containerID="6153afd9f04f13a6d36363fe3dba9f6c6810a3784079824aec9c8749b0de0375" exitCode=0 Dec 11 15:51:26 crc kubenswrapper[5050]: I1211 15:51:26.081594 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-openstack-openstack-cell1-w86fd" event={"ID":"cc0a18bd-14c6-490d-9470-ba2dbf8523e2","Type":"ContainerDied","Data":"6153afd9f04f13a6d36363fe3dba9f6c6810a3784079824aec9c8749b0de0375"} Dec 11 15:51:27 crc kubenswrapper[5050]: I1211 15:51:27.508484 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-openstack-openstack-cell1-w86fd" Dec 11 15:51:27 crc kubenswrapper[5050]: I1211 15:51:27.636833 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cc0a18bd-14c6-490d-9470-ba2dbf8523e2-inventory\") pod \"cc0a18bd-14c6-490d-9470-ba2dbf8523e2\" (UID: \"cc0a18bd-14c6-490d-9470-ba2dbf8523e2\") " Dec 11 15:51:27 crc kubenswrapper[5050]: I1211 15:51:27.637592 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/cc0a18bd-14c6-490d-9470-ba2dbf8523e2-ssh-key\") pod \"cc0a18bd-14c6-490d-9470-ba2dbf8523e2\" (UID: \"cc0a18bd-14c6-490d-9470-ba2dbf8523e2\") " Dec 11 15:51:27 crc kubenswrapper[5050]: I1211 15:51:27.637721 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7gm5s\" (UniqueName: \"kubernetes.io/projected/cc0a18bd-14c6-490d-9470-ba2dbf8523e2-kube-api-access-7gm5s\") pod \"cc0a18bd-14c6-490d-9470-ba2dbf8523e2\" (UID: \"cc0a18bd-14c6-490d-9470-ba2dbf8523e2\") " Dec 11 15:51:27 crc kubenswrapper[5050]: I1211 15:51:27.637851 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/cc0a18bd-14c6-490d-9470-ba2dbf8523e2-ceph\") pod \"cc0a18bd-14c6-490d-9470-ba2dbf8523e2\" (UID: \"cc0a18bd-14c6-490d-9470-ba2dbf8523e2\") " Dec 11 15:51:27 crc kubenswrapper[5050]: I1211 15:51:27.642706 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc0a18bd-14c6-490d-9470-ba2dbf8523e2-ceph" (OuterVolumeSpecName: "ceph") pod "cc0a18bd-14c6-490d-9470-ba2dbf8523e2" (UID: "cc0a18bd-14c6-490d-9470-ba2dbf8523e2"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 15:51:27 crc kubenswrapper[5050]: I1211 15:51:27.643724 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc0a18bd-14c6-490d-9470-ba2dbf8523e2-kube-api-access-7gm5s" (OuterVolumeSpecName: "kube-api-access-7gm5s") pod "cc0a18bd-14c6-490d-9470-ba2dbf8523e2" (UID: "cc0a18bd-14c6-490d-9470-ba2dbf8523e2"). InnerVolumeSpecName "kube-api-access-7gm5s". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 15:51:27 crc kubenswrapper[5050]: I1211 15:51:27.666949 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc0a18bd-14c6-490d-9470-ba2dbf8523e2-inventory" (OuterVolumeSpecName: "inventory") pod "cc0a18bd-14c6-490d-9470-ba2dbf8523e2" (UID: "cc0a18bd-14c6-490d-9470-ba2dbf8523e2"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 15:51:27 crc kubenswrapper[5050]: I1211 15:51:27.670024 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc0a18bd-14c6-490d-9470-ba2dbf8523e2-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "cc0a18bd-14c6-490d-9470-ba2dbf8523e2" (UID: "cc0a18bd-14c6-490d-9470-ba2dbf8523e2"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 15:51:27 crc kubenswrapper[5050]: I1211 15:51:27.740162 5050 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cc0a18bd-14c6-490d-9470-ba2dbf8523e2-inventory\") on node \"crc\" DevicePath \"\"" Dec 11 15:51:27 crc kubenswrapper[5050]: I1211 15:51:27.740197 5050 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/cc0a18bd-14c6-490d-9470-ba2dbf8523e2-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 11 15:51:27 crc kubenswrapper[5050]: I1211 15:51:27.740213 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7gm5s\" (UniqueName: \"kubernetes.io/projected/cc0a18bd-14c6-490d-9470-ba2dbf8523e2-kube-api-access-7gm5s\") on node \"crc\" DevicePath \"\"" Dec 11 15:51:27 crc kubenswrapper[5050]: I1211 15:51:27.740226 5050 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/cc0a18bd-14c6-490d-9470-ba2dbf8523e2-ceph\") on node \"crc\" DevicePath \"\"" Dec 11 15:51:28 crc kubenswrapper[5050]: I1211 15:51:28.106162 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-openstack-openstack-cell1-w86fd" event={"ID":"cc0a18bd-14c6-490d-9470-ba2dbf8523e2","Type":"ContainerDied","Data":"1379d172e02103a112f0883f141c55d9dc4f7e3893cd4d577fc3d9392625549c"} Dec 11 15:51:28 crc kubenswrapper[5050]: I1211 15:51:28.106209 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1379d172e02103a112f0883f141c55d9dc4f7e3893cd4d577fc3d9392625549c" Dec 11 15:51:28 crc kubenswrapper[5050]: I1211 15:51:28.106244 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-openstack-openstack-cell1-w86fd" Dec 11 15:51:28 crc kubenswrapper[5050]: I1211 15:51:28.182161 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-openstack-openstack-cell1-9t26p"] Dec 11 15:51:28 crc kubenswrapper[5050]: E1211 15:51:28.182834 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc0a18bd-14c6-490d-9470-ba2dbf8523e2" containerName="download-cache-openstack-openstack-cell1" Dec 11 15:51:28 crc kubenswrapper[5050]: I1211 15:51:28.182860 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc0a18bd-14c6-490d-9470-ba2dbf8523e2" containerName="download-cache-openstack-openstack-cell1" Dec 11 15:51:28 crc kubenswrapper[5050]: I1211 15:51:28.183157 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc0a18bd-14c6-490d-9470-ba2dbf8523e2" containerName="download-cache-openstack-openstack-cell1" Dec 11 15:51:28 crc kubenswrapper[5050]: I1211 15:51:28.184337 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-openstack-openstack-cell1-9t26p" Dec 11 15:51:28 crc kubenswrapper[5050]: I1211 15:51:28.186958 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Dec 11 15:51:28 crc kubenswrapper[5050]: I1211 15:51:28.187164 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-mvxd9" Dec 11 15:51:28 crc kubenswrapper[5050]: I1211 15:51:28.187347 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Dec 11 15:51:28 crc kubenswrapper[5050]: I1211 15:51:28.187519 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 11 15:51:28 crc kubenswrapper[5050]: I1211 15:51:28.192754 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-openstack-openstack-cell1-9t26p"] Dec 11 15:51:28 crc kubenswrapper[5050]: I1211 15:51:28.251747 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lx99r\" (UniqueName: \"kubernetes.io/projected/8c985e38-46ae-4bd1-ad63-3d95d5e5455b-kube-api-access-lx99r\") pod \"configure-network-openstack-openstack-cell1-9t26p\" (UID: \"8c985e38-46ae-4bd1-ad63-3d95d5e5455b\") " pod="openstack/configure-network-openstack-openstack-cell1-9t26p" Dec 11 15:51:28 crc kubenswrapper[5050]: I1211 15:51:28.252061 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/8c985e38-46ae-4bd1-ad63-3d95d5e5455b-ceph\") pod \"configure-network-openstack-openstack-cell1-9t26p\" (UID: \"8c985e38-46ae-4bd1-ad63-3d95d5e5455b\") " pod="openstack/configure-network-openstack-openstack-cell1-9t26p" Dec 11 15:51:28 crc kubenswrapper[5050]: I1211 15:51:28.252196 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8c985e38-46ae-4bd1-ad63-3d95d5e5455b-ssh-key\") pod \"configure-network-openstack-openstack-cell1-9t26p\" (UID: \"8c985e38-46ae-4bd1-ad63-3d95d5e5455b\") " pod="openstack/configure-network-openstack-openstack-cell1-9t26p" Dec 11 15:51:28 crc kubenswrapper[5050]: I1211 15:51:28.252413 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8c985e38-46ae-4bd1-ad63-3d95d5e5455b-inventory\") pod \"configure-network-openstack-openstack-cell1-9t26p\" (UID: \"8c985e38-46ae-4bd1-ad63-3d95d5e5455b\") " pod="openstack/configure-network-openstack-openstack-cell1-9t26p" Dec 11 15:51:28 crc kubenswrapper[5050]: I1211 15:51:28.354677 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8c985e38-46ae-4bd1-ad63-3d95d5e5455b-ssh-key\") pod \"configure-network-openstack-openstack-cell1-9t26p\" (UID: \"8c985e38-46ae-4bd1-ad63-3d95d5e5455b\") " pod="openstack/configure-network-openstack-openstack-cell1-9t26p" Dec 11 15:51:28 crc kubenswrapper[5050]: I1211 15:51:28.355506 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8c985e38-46ae-4bd1-ad63-3d95d5e5455b-inventory\") pod \"configure-network-openstack-openstack-cell1-9t26p\" (UID: \"8c985e38-46ae-4bd1-ad63-3d95d5e5455b\") " 
pod="openstack/configure-network-openstack-openstack-cell1-9t26p" Dec 11 15:51:28 crc kubenswrapper[5050]: I1211 15:51:28.356176 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lx99r\" (UniqueName: \"kubernetes.io/projected/8c985e38-46ae-4bd1-ad63-3d95d5e5455b-kube-api-access-lx99r\") pod \"configure-network-openstack-openstack-cell1-9t26p\" (UID: \"8c985e38-46ae-4bd1-ad63-3d95d5e5455b\") " pod="openstack/configure-network-openstack-openstack-cell1-9t26p" Dec 11 15:51:28 crc kubenswrapper[5050]: I1211 15:51:28.356408 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/8c985e38-46ae-4bd1-ad63-3d95d5e5455b-ceph\") pod \"configure-network-openstack-openstack-cell1-9t26p\" (UID: \"8c985e38-46ae-4bd1-ad63-3d95d5e5455b\") " pod="openstack/configure-network-openstack-openstack-cell1-9t26p" Dec 11 15:51:28 crc kubenswrapper[5050]: I1211 15:51:28.358304 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8c985e38-46ae-4bd1-ad63-3d95d5e5455b-ssh-key\") pod \"configure-network-openstack-openstack-cell1-9t26p\" (UID: \"8c985e38-46ae-4bd1-ad63-3d95d5e5455b\") " pod="openstack/configure-network-openstack-openstack-cell1-9t26p" Dec 11 15:51:28 crc kubenswrapper[5050]: I1211 15:51:28.359485 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8c985e38-46ae-4bd1-ad63-3d95d5e5455b-inventory\") pod \"configure-network-openstack-openstack-cell1-9t26p\" (UID: \"8c985e38-46ae-4bd1-ad63-3d95d5e5455b\") " pod="openstack/configure-network-openstack-openstack-cell1-9t26p" Dec 11 15:51:28 crc kubenswrapper[5050]: I1211 15:51:28.369352 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/8c985e38-46ae-4bd1-ad63-3d95d5e5455b-ceph\") pod \"configure-network-openstack-openstack-cell1-9t26p\" (UID: \"8c985e38-46ae-4bd1-ad63-3d95d5e5455b\") " pod="openstack/configure-network-openstack-openstack-cell1-9t26p" Dec 11 15:51:28 crc kubenswrapper[5050]: I1211 15:51:28.376512 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lx99r\" (UniqueName: \"kubernetes.io/projected/8c985e38-46ae-4bd1-ad63-3d95d5e5455b-kube-api-access-lx99r\") pod \"configure-network-openstack-openstack-cell1-9t26p\" (UID: \"8c985e38-46ae-4bd1-ad63-3d95d5e5455b\") " pod="openstack/configure-network-openstack-openstack-cell1-9t26p" Dec 11 15:51:28 crc kubenswrapper[5050]: I1211 15:51:28.503945 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-openstack-openstack-cell1-9t26p" Dec 11 15:51:29 crc kubenswrapper[5050]: I1211 15:51:29.051178 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-openstack-openstack-cell1-9t26p"] Dec 11 15:51:29 crc kubenswrapper[5050]: I1211 15:51:29.139990 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-openstack-openstack-cell1-9t26p" event={"ID":"8c985e38-46ae-4bd1-ad63-3d95d5e5455b","Type":"ContainerStarted","Data":"b3ef2fc06afb2e53bec64121945b5d2f754fc426eaa68e01b5320424456ac8ab"} Dec 11 15:51:29 crc kubenswrapper[5050]: I1211 15:51:29.682839 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 11 15:51:30 crc kubenswrapper[5050]: I1211 15:51:30.153647 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-openstack-openstack-cell1-9t26p" event={"ID":"8c985e38-46ae-4bd1-ad63-3d95d5e5455b","Type":"ContainerStarted","Data":"b070932e42b92fc9124f9f8303ed480683e7e68f8a754488f40b6ef76df02449"} Dec 11 15:51:30 crc kubenswrapper[5050]: I1211 15:51:30.180941 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-openstack-openstack-cell1-9t26p" podStartSLOduration=1.558315516 podStartE2EDuration="2.180917575s" podCreationTimestamp="2025-12-11 15:51:28 +0000 UTC" firstStartedPulling="2025-12-11 15:51:29.057403538 +0000 UTC m=+7379.901126134" lastFinishedPulling="2025-12-11 15:51:29.680005607 +0000 UTC m=+7380.523728193" observedRunningTime="2025-12-11 15:51:30.167498836 +0000 UTC m=+7381.011221432" watchObservedRunningTime="2025-12-11 15:51:30.180917575 +0000 UTC m=+7381.024640181" Dec 11 15:51:40 crc kubenswrapper[5050]: I1211 15:51:40.796303 5050 patch_prober.go:28] interesting pod/machine-config-daemon-wcb2s container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 15:51:40 crc kubenswrapper[5050]: I1211 15:51:40.796988 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 15:52:10 crc kubenswrapper[5050]: I1211 15:52:10.795994 5050 patch_prober.go:28] interesting pod/machine-config-daemon-wcb2s container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 15:52:10 crc kubenswrapper[5050]: I1211 15:52:10.796637 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 15:52:40 crc kubenswrapper[5050]: I1211 15:52:40.795977 5050 patch_prober.go:28] interesting pod/machine-config-daemon-wcb2s container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial 
tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 15:52:40 crc kubenswrapper[5050]: I1211 15:52:40.796524 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 15:52:40 crc kubenswrapper[5050]: I1211 15:52:40.796575 5050 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" Dec 11 15:52:40 crc kubenswrapper[5050]: I1211 15:52:40.797508 5050 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"33899e24adacef82929b0fbbcd58af7cb4f7aa3935b49ec9b5b4045a51da1d14"} pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 11 15:52:40 crc kubenswrapper[5050]: I1211 15:52:40.797560 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" containerName="machine-config-daemon" containerID="cri-o://33899e24adacef82929b0fbbcd58af7cb4f7aa3935b49ec9b5b4045a51da1d14" gracePeriod=600 Dec 11 15:52:41 crc kubenswrapper[5050]: I1211 15:52:41.831150 5050 generic.go:334] "Generic (PLEG): container finished" podID="7e849b2e-7cd7-4e49-acd2-deab139c699a" containerID="33899e24adacef82929b0fbbcd58af7cb4f7aa3935b49ec9b5b4045a51da1d14" exitCode=0 Dec 11 15:52:41 crc kubenswrapper[5050]: I1211 15:52:41.831314 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" event={"ID":"7e849b2e-7cd7-4e49-acd2-deab139c699a","Type":"ContainerDied","Data":"33899e24adacef82929b0fbbcd58af7cb4f7aa3935b49ec9b5b4045a51da1d14"} Dec 11 15:52:41 crc kubenswrapper[5050]: I1211 15:52:41.831504 5050 scope.go:117] "RemoveContainer" containerID="5fa6a64e32e1f24d28d1f983c96e1edb7963e605f315af22dbc0a41da96ac168" Dec 11 15:52:41 crc kubenswrapper[5050]: E1211 15:52:41.907566 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 15:52:42 crc kubenswrapper[5050]: I1211 15:52:42.851871 5050 scope.go:117] "RemoveContainer" containerID="33899e24adacef82929b0fbbcd58af7cb4f7aa3935b49ec9b5b4045a51da1d14" Dec 11 15:52:42 crc kubenswrapper[5050]: E1211 15:52:42.852456 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 15:52:50 crc kubenswrapper[5050]: I1211 15:52:50.921098 5050 generic.go:334] "Generic (PLEG): container finished" 
podID="8c985e38-46ae-4bd1-ad63-3d95d5e5455b" containerID="b070932e42b92fc9124f9f8303ed480683e7e68f8a754488f40b6ef76df02449" exitCode=0 Dec 11 15:52:50 crc kubenswrapper[5050]: I1211 15:52:50.921197 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-openstack-openstack-cell1-9t26p" event={"ID":"8c985e38-46ae-4bd1-ad63-3d95d5e5455b","Type":"ContainerDied","Data":"b070932e42b92fc9124f9f8303ed480683e7e68f8a754488f40b6ef76df02449"} Dec 11 15:52:52 crc kubenswrapper[5050]: I1211 15:52:52.591114 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-openstack-openstack-cell1-9t26p" Dec 11 15:52:52 crc kubenswrapper[5050]: I1211 15:52:52.644490 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/8c985e38-46ae-4bd1-ad63-3d95d5e5455b-ceph\") pod \"8c985e38-46ae-4bd1-ad63-3d95d5e5455b\" (UID: \"8c985e38-46ae-4bd1-ad63-3d95d5e5455b\") " Dec 11 15:52:52 crc kubenswrapper[5050]: I1211 15:52:52.644607 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8c985e38-46ae-4bd1-ad63-3d95d5e5455b-inventory\") pod \"8c985e38-46ae-4bd1-ad63-3d95d5e5455b\" (UID: \"8c985e38-46ae-4bd1-ad63-3d95d5e5455b\") " Dec 11 15:52:52 crc kubenswrapper[5050]: I1211 15:52:52.645357 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lx99r\" (UniqueName: \"kubernetes.io/projected/8c985e38-46ae-4bd1-ad63-3d95d5e5455b-kube-api-access-lx99r\") pod \"8c985e38-46ae-4bd1-ad63-3d95d5e5455b\" (UID: \"8c985e38-46ae-4bd1-ad63-3d95d5e5455b\") " Dec 11 15:52:52 crc kubenswrapper[5050]: I1211 15:52:52.645442 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8c985e38-46ae-4bd1-ad63-3d95d5e5455b-ssh-key\") pod \"8c985e38-46ae-4bd1-ad63-3d95d5e5455b\" (UID: \"8c985e38-46ae-4bd1-ad63-3d95d5e5455b\") " Dec 11 15:52:52 crc kubenswrapper[5050]: I1211 15:52:52.650805 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c985e38-46ae-4bd1-ad63-3d95d5e5455b-ceph" (OuterVolumeSpecName: "ceph") pod "8c985e38-46ae-4bd1-ad63-3d95d5e5455b" (UID: "8c985e38-46ae-4bd1-ad63-3d95d5e5455b"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 15:52:52 crc kubenswrapper[5050]: I1211 15:52:52.671832 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c985e38-46ae-4bd1-ad63-3d95d5e5455b-kube-api-access-lx99r" (OuterVolumeSpecName: "kube-api-access-lx99r") pod "8c985e38-46ae-4bd1-ad63-3d95d5e5455b" (UID: "8c985e38-46ae-4bd1-ad63-3d95d5e5455b"). InnerVolumeSpecName "kube-api-access-lx99r". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 15:52:52 crc kubenswrapper[5050]: I1211 15:52:52.677896 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c985e38-46ae-4bd1-ad63-3d95d5e5455b-inventory" (OuterVolumeSpecName: "inventory") pod "8c985e38-46ae-4bd1-ad63-3d95d5e5455b" (UID: "8c985e38-46ae-4bd1-ad63-3d95d5e5455b"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 15:52:52 crc kubenswrapper[5050]: I1211 15:52:52.688669 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c985e38-46ae-4bd1-ad63-3d95d5e5455b-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "8c985e38-46ae-4bd1-ad63-3d95d5e5455b" (UID: "8c985e38-46ae-4bd1-ad63-3d95d5e5455b"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 15:52:52 crc kubenswrapper[5050]: I1211 15:52:52.748294 5050 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/8c985e38-46ae-4bd1-ad63-3d95d5e5455b-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 11 15:52:52 crc kubenswrapper[5050]: I1211 15:52:52.748326 5050 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/8c985e38-46ae-4bd1-ad63-3d95d5e5455b-ceph\") on node \"crc\" DevicePath \"\"" Dec 11 15:52:52 crc kubenswrapper[5050]: I1211 15:52:52.748337 5050 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8c985e38-46ae-4bd1-ad63-3d95d5e5455b-inventory\") on node \"crc\" DevicePath \"\"" Dec 11 15:52:52 crc kubenswrapper[5050]: I1211 15:52:52.748348 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lx99r\" (UniqueName: \"kubernetes.io/projected/8c985e38-46ae-4bd1-ad63-3d95d5e5455b-kube-api-access-lx99r\") on node \"crc\" DevicePath \"\"" Dec 11 15:52:52 crc kubenswrapper[5050]: I1211 15:52:52.944481 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-openstack-openstack-cell1-9t26p" event={"ID":"8c985e38-46ae-4bd1-ad63-3d95d5e5455b","Type":"ContainerDied","Data":"b3ef2fc06afb2e53bec64121945b5d2f754fc426eaa68e01b5320424456ac8ab"} Dec 11 15:52:52 crc kubenswrapper[5050]: I1211 15:52:52.944526 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b3ef2fc06afb2e53bec64121945b5d2f754fc426eaa68e01b5320424456ac8ab" Dec 11 15:52:52 crc kubenswrapper[5050]: I1211 15:52:52.944541 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-openstack-openstack-cell1-9t26p" Dec 11 15:52:53 crc kubenswrapper[5050]: I1211 15:52:53.027889 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-openstack-openstack-cell1-5pb7q"] Dec 11 15:52:53 crc kubenswrapper[5050]: E1211 15:52:53.028441 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c985e38-46ae-4bd1-ad63-3d95d5e5455b" containerName="configure-network-openstack-openstack-cell1" Dec 11 15:52:53 crc kubenswrapper[5050]: I1211 15:52:53.028473 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c985e38-46ae-4bd1-ad63-3d95d5e5455b" containerName="configure-network-openstack-openstack-cell1" Dec 11 15:52:53 crc kubenswrapper[5050]: I1211 15:52:53.028679 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c985e38-46ae-4bd1-ad63-3d95d5e5455b" containerName="configure-network-openstack-openstack-cell1" Dec 11 15:52:53 crc kubenswrapper[5050]: I1211 15:52:53.029458 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-openstack-openstack-cell1-5pb7q" Dec 11 15:52:53 crc kubenswrapper[5050]: I1211 15:52:53.032451 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Dec 11 15:52:53 crc kubenswrapper[5050]: I1211 15:52:53.032628 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Dec 11 15:52:53 crc kubenswrapper[5050]: I1211 15:52:53.032757 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-mvxd9" Dec 11 15:52:53 crc kubenswrapper[5050]: I1211 15:52:53.033309 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 11 15:52:53 crc kubenswrapper[5050]: I1211 15:52:53.044646 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-openstack-openstack-cell1-5pb7q"] Dec 11 15:52:53 crc kubenswrapper[5050]: I1211 15:52:53.155979 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/6c2d8343-b085-4545-9a26-2dd0bf907b5e-ceph\") pod \"validate-network-openstack-openstack-cell1-5pb7q\" (UID: \"6c2d8343-b085-4545-9a26-2dd0bf907b5e\") " pod="openstack/validate-network-openstack-openstack-cell1-5pb7q" Dec 11 15:52:53 crc kubenswrapper[5050]: I1211 15:52:53.156300 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrls2\" (UniqueName: \"kubernetes.io/projected/6c2d8343-b085-4545-9a26-2dd0bf907b5e-kube-api-access-jrls2\") pod \"validate-network-openstack-openstack-cell1-5pb7q\" (UID: \"6c2d8343-b085-4545-9a26-2dd0bf907b5e\") " pod="openstack/validate-network-openstack-openstack-cell1-5pb7q" Dec 11 15:52:53 crc kubenswrapper[5050]: I1211 15:52:53.156354 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6c2d8343-b085-4545-9a26-2dd0bf907b5e-ssh-key\") pod \"validate-network-openstack-openstack-cell1-5pb7q\" (UID: \"6c2d8343-b085-4545-9a26-2dd0bf907b5e\") " pod="openstack/validate-network-openstack-openstack-cell1-5pb7q" Dec 11 15:52:53 crc kubenswrapper[5050]: I1211 15:52:53.156725 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6c2d8343-b085-4545-9a26-2dd0bf907b5e-inventory\") pod \"validate-network-openstack-openstack-cell1-5pb7q\" (UID: \"6c2d8343-b085-4545-9a26-2dd0bf907b5e\") " pod="openstack/validate-network-openstack-openstack-cell1-5pb7q" Dec 11 15:52:53 crc kubenswrapper[5050]: I1211 15:52:53.258409 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6c2d8343-b085-4545-9a26-2dd0bf907b5e-inventory\") pod \"validate-network-openstack-openstack-cell1-5pb7q\" (UID: \"6c2d8343-b085-4545-9a26-2dd0bf907b5e\") " pod="openstack/validate-network-openstack-openstack-cell1-5pb7q" Dec 11 15:52:53 crc kubenswrapper[5050]: I1211 15:52:53.258496 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/6c2d8343-b085-4545-9a26-2dd0bf907b5e-ceph\") pod \"validate-network-openstack-openstack-cell1-5pb7q\" (UID: \"6c2d8343-b085-4545-9a26-2dd0bf907b5e\") " pod="openstack/validate-network-openstack-openstack-cell1-5pb7q" Dec 11 
15:52:53 crc kubenswrapper[5050]: I1211 15:52:53.258593 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jrls2\" (UniqueName: \"kubernetes.io/projected/6c2d8343-b085-4545-9a26-2dd0bf907b5e-kube-api-access-jrls2\") pod \"validate-network-openstack-openstack-cell1-5pb7q\" (UID: \"6c2d8343-b085-4545-9a26-2dd0bf907b5e\") " pod="openstack/validate-network-openstack-openstack-cell1-5pb7q" Dec 11 15:52:53 crc kubenswrapper[5050]: I1211 15:52:53.258625 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6c2d8343-b085-4545-9a26-2dd0bf907b5e-ssh-key\") pod \"validate-network-openstack-openstack-cell1-5pb7q\" (UID: \"6c2d8343-b085-4545-9a26-2dd0bf907b5e\") " pod="openstack/validate-network-openstack-openstack-cell1-5pb7q" Dec 11 15:52:53 crc kubenswrapper[5050]: I1211 15:52:53.262703 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6c2d8343-b085-4545-9a26-2dd0bf907b5e-inventory\") pod \"validate-network-openstack-openstack-cell1-5pb7q\" (UID: \"6c2d8343-b085-4545-9a26-2dd0bf907b5e\") " pod="openstack/validate-network-openstack-openstack-cell1-5pb7q" Dec 11 15:52:53 crc kubenswrapper[5050]: I1211 15:52:53.263323 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6c2d8343-b085-4545-9a26-2dd0bf907b5e-ssh-key\") pod \"validate-network-openstack-openstack-cell1-5pb7q\" (UID: \"6c2d8343-b085-4545-9a26-2dd0bf907b5e\") " pod="openstack/validate-network-openstack-openstack-cell1-5pb7q" Dec 11 15:52:53 crc kubenswrapper[5050]: I1211 15:52:53.264139 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/6c2d8343-b085-4545-9a26-2dd0bf907b5e-ceph\") pod \"validate-network-openstack-openstack-cell1-5pb7q\" (UID: \"6c2d8343-b085-4545-9a26-2dd0bf907b5e\") " pod="openstack/validate-network-openstack-openstack-cell1-5pb7q" Dec 11 15:52:53 crc kubenswrapper[5050]: I1211 15:52:53.281992 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jrls2\" (UniqueName: \"kubernetes.io/projected/6c2d8343-b085-4545-9a26-2dd0bf907b5e-kube-api-access-jrls2\") pod \"validate-network-openstack-openstack-cell1-5pb7q\" (UID: \"6c2d8343-b085-4545-9a26-2dd0bf907b5e\") " pod="openstack/validate-network-openstack-openstack-cell1-5pb7q" Dec 11 15:52:53 crc kubenswrapper[5050]: I1211 15:52:53.346835 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-openstack-openstack-cell1-5pb7q" Dec 11 15:52:53 crc kubenswrapper[5050]: I1211 15:52:53.931451 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-openstack-openstack-cell1-5pb7q"] Dec 11 15:52:53 crc kubenswrapper[5050]: W1211 15:52:53.937094 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6c2d8343_b085_4545_9a26_2dd0bf907b5e.slice/crio-b3b7be253ec6a8381c39dad3c2a10a0e22c3a93ac47b1a7613858931d1dc2f57 WatchSource:0}: Error finding container b3b7be253ec6a8381c39dad3c2a10a0e22c3a93ac47b1a7613858931d1dc2f57: Status 404 returned error can't find the container with id b3b7be253ec6a8381c39dad3c2a10a0e22c3a93ac47b1a7613858931d1dc2f57 Dec 11 15:52:53 crc kubenswrapper[5050]: I1211 15:52:53.978943 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-openstack-openstack-cell1-5pb7q" event={"ID":"6c2d8343-b085-4545-9a26-2dd0bf907b5e","Type":"ContainerStarted","Data":"b3b7be253ec6a8381c39dad3c2a10a0e22c3a93ac47b1a7613858931d1dc2f57"} Dec 11 15:52:56 crc kubenswrapper[5050]: I1211 15:52:56.546916 5050 scope.go:117] "RemoveContainer" containerID="33899e24adacef82929b0fbbcd58af7cb4f7aa3935b49ec9b5b4045a51da1d14" Dec 11 15:52:56 crc kubenswrapper[5050]: E1211 15:52:56.547656 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 15:52:57 crc kubenswrapper[5050]: I1211 15:52:57.008549 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-openstack-openstack-cell1-5pb7q" event={"ID":"6c2d8343-b085-4545-9a26-2dd0bf907b5e","Type":"ContainerStarted","Data":"e507e88b18168e702516add495e9defa31579d4b24045010d2d2bbc99fd7ddb9"} Dec 11 15:52:57 crc kubenswrapper[5050]: I1211 15:52:57.029648 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-openstack-openstack-cell1-5pb7q" podStartSLOduration=2.284264928 podStartE2EDuration="4.029630386s" podCreationTimestamp="2025-12-11 15:52:53 +0000 UTC" firstStartedPulling="2025-12-11 15:52:53.953922889 +0000 UTC m=+7464.797645475" lastFinishedPulling="2025-12-11 15:52:55.699288347 +0000 UTC m=+7466.543010933" observedRunningTime="2025-12-11 15:52:57.025058524 +0000 UTC m=+7467.868781110" watchObservedRunningTime="2025-12-11 15:52:57.029630386 +0000 UTC m=+7467.873352972" Dec 11 15:53:03 crc kubenswrapper[5050]: I1211 15:53:03.510311 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/kube-state-metrics-0" podUID="71218193-88fc-4811-bf04-33a4f4a87898" containerName="kube-state-metrics" probeResult="failure" output="Get \"http://10.217.1.133:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:53:03 crc kubenswrapper[5050]: I1211 15:53:03.772704 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="fef8d631-c968-4ccd-92ec-e6fc5a2f6731" containerName="ceilometer-central-agent" probeResult="failure" output="command timed out" Dec 11 15:53:08 crc kubenswrapper[5050]: I1211 15:53:08.775529 5050 prober.go:107] 
"Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="fef8d631-c968-4ccd-92ec-e6fc5a2f6731" containerName="ceilometer-central-agent" probeResult="failure" output="command timed out" Dec 11 15:53:09 crc kubenswrapper[5050]: I1211 15:53:09.558664 5050 scope.go:117] "RemoveContainer" containerID="33899e24adacef82929b0fbbcd58af7cb4f7aa3935b49ec9b5b4045a51da1d14" Dec 11 15:53:09 crc kubenswrapper[5050]: E1211 15:53:09.559345 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 15:53:13 crc kubenswrapper[5050]: I1211 15:53:13.276989 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-g2bpg"] Dec 11 15:53:13 crc kubenswrapper[5050]: I1211 15:53:13.324989 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-g2bpg" Dec 11 15:53:13 crc kubenswrapper[5050]: I1211 15:53:13.328387 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-g2bpg"] Dec 11 15:53:13 crc kubenswrapper[5050]: I1211 15:53:13.453652 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/daba01c5-6d3c-4e32-84ce-d8f67b685671-catalog-content\") pod \"community-operators-g2bpg\" (UID: \"daba01c5-6d3c-4e32-84ce-d8f67b685671\") " pod="openshift-marketplace/community-operators-g2bpg" Dec 11 15:53:13 crc kubenswrapper[5050]: I1211 15:53:13.453773 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mhkwl\" (UniqueName: \"kubernetes.io/projected/daba01c5-6d3c-4e32-84ce-d8f67b685671-kube-api-access-mhkwl\") pod \"community-operators-g2bpg\" (UID: \"daba01c5-6d3c-4e32-84ce-d8f67b685671\") " pod="openshift-marketplace/community-operators-g2bpg" Dec 11 15:53:13 crc kubenswrapper[5050]: I1211 15:53:13.453824 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/daba01c5-6d3c-4e32-84ce-d8f67b685671-utilities\") pod \"community-operators-g2bpg\" (UID: \"daba01c5-6d3c-4e32-84ce-d8f67b685671\") " pod="openshift-marketplace/community-operators-g2bpg" Dec 11 15:53:13 crc kubenswrapper[5050]: I1211 15:53:13.555036 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mhkwl\" (UniqueName: \"kubernetes.io/projected/daba01c5-6d3c-4e32-84ce-d8f67b685671-kube-api-access-mhkwl\") pod \"community-operators-g2bpg\" (UID: \"daba01c5-6d3c-4e32-84ce-d8f67b685671\") " pod="openshift-marketplace/community-operators-g2bpg" Dec 11 15:53:13 crc kubenswrapper[5050]: I1211 15:53:13.555127 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/daba01c5-6d3c-4e32-84ce-d8f67b685671-utilities\") pod \"community-operators-g2bpg\" (UID: \"daba01c5-6d3c-4e32-84ce-d8f67b685671\") " pod="openshift-marketplace/community-operators-g2bpg" Dec 11 15:53:13 crc kubenswrapper[5050]: I1211 15:53:13.555282 5050 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/daba01c5-6d3c-4e32-84ce-d8f67b685671-catalog-content\") pod \"community-operators-g2bpg\" (UID: \"daba01c5-6d3c-4e32-84ce-d8f67b685671\") " pod="openshift-marketplace/community-operators-g2bpg" Dec 11 15:53:13 crc kubenswrapper[5050]: I1211 15:53:13.555665 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/daba01c5-6d3c-4e32-84ce-d8f67b685671-utilities\") pod \"community-operators-g2bpg\" (UID: \"daba01c5-6d3c-4e32-84ce-d8f67b685671\") " pod="openshift-marketplace/community-operators-g2bpg" Dec 11 15:53:13 crc kubenswrapper[5050]: I1211 15:53:13.555792 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/daba01c5-6d3c-4e32-84ce-d8f67b685671-catalog-content\") pod \"community-operators-g2bpg\" (UID: \"daba01c5-6d3c-4e32-84ce-d8f67b685671\") " pod="openshift-marketplace/community-operators-g2bpg" Dec 11 15:53:13 crc kubenswrapper[5050]: I1211 15:53:13.574923 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mhkwl\" (UniqueName: \"kubernetes.io/projected/daba01c5-6d3c-4e32-84ce-d8f67b685671-kube-api-access-mhkwl\") pod \"community-operators-g2bpg\" (UID: \"daba01c5-6d3c-4e32-84ce-d8f67b685671\") " pod="openshift-marketplace/community-operators-g2bpg" Dec 11 15:53:13 crc kubenswrapper[5050]: I1211 15:53:13.654729 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-g2bpg" Dec 11 15:53:13 crc kubenswrapper[5050]: I1211 15:53:13.776929 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="fef8d631-c968-4ccd-92ec-e6fc5a2f6731" containerName="ceilometer-central-agent" probeResult="failure" output="command timed out" Dec 11 15:53:13 crc kubenswrapper[5050]: I1211 15:53:13.777317 5050 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/ceilometer-0" Dec 11 15:53:13 crc kubenswrapper[5050]: I1211 15:53:13.778262 5050 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="ceilometer-central-agent" containerStatusID={"Type":"cri-o","ID":"166c7ea8f22887bff1aac5363e204edce18d6f260bad2031a294155043ca2094"} pod="openstack/ceilometer-0" containerMessage="Container ceilometer-central-agent failed liveness probe, will be restarted" Dec 11 15:53:13 crc kubenswrapper[5050]: I1211 15:53:13.778373 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fef8d631-c968-4ccd-92ec-e6fc5a2f6731" containerName="ceilometer-central-agent" containerID="cri-o://166c7ea8f22887bff1aac5363e204edce18d6f260bad2031a294155043ca2094" gracePeriod=30 Dec 11 15:53:18 crc kubenswrapper[5050]: I1211 15:53:18.278302 5050 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-r54sd container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.23:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 11 15:53:18 crc kubenswrapper[5050]: I1211 15:53:18.278976 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r54sd" podUID="dbd5b107-5d08-43af-881c-11540f395267" containerName="packageserver" 
probeResult="failure" output="Get \"https://10.217.0.23:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 11 15:53:18 crc kubenswrapper[5050]: I1211 15:53:18.278333 5050 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-r54sd container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.23:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 11 15:53:18 crc kubenswrapper[5050]: I1211 15:53:18.279064 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r54sd" podUID="dbd5b107-5d08-43af-881c-11540f395267" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.23:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 11 15:53:21 crc kubenswrapper[5050]: I1211 15:53:21.545745 5050 scope.go:117] "RemoveContainer" containerID="33899e24adacef82929b0fbbcd58af7cb4f7aa3935b49ec9b5b4045a51da1d14" Dec 11 15:53:21 crc kubenswrapper[5050]: E1211 15:53:21.546354 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 15:53:23 crc kubenswrapper[5050]: I1211 15:53:23.550187 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/kube-state-metrics-0" podUID="71218193-88fc-4811-bf04-33a4f4a87898" containerName="kube-state-metrics" probeResult="failure" output="Get \"http://10.217.1.133:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:53:23 crc kubenswrapper[5050]: I1211 15:53:23.550253 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/kube-state-metrics-0" podUID="71218193-88fc-4811-bf04-33a4f4a87898" containerName="kube-state-metrics" probeResult="failure" output="Get \"http://10.217.1.133:8080/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:53:23 crc kubenswrapper[5050]: I1211 15:53:23.774922 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="fef8d631-c968-4ccd-92ec-e6fc5a2f6731" containerName="ceilometer-notification-agent" probeResult="failure" output="command timed out" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:53:28.279390 5050 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-r54sd container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.23:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:53:28.279961 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r54sd" podUID="dbd5b107-5d08-43af-881c-11540f395267" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.23:5443/healthz\": net/http: request canceled while waiting for 
connection (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:53:28.279397 5050 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-r54sd container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.23:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:53:28.280019 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r54sd" podUID="dbd5b107-5d08-43af-881c-11540f395267" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.23:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:53:30.564510 5050 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:6443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:53:30.564844 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:53:30.836939 5050 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:53:32.639241 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-qdrgd" podUID="4b4e8aa9-a0f6-4bd2-92d2-c524aedf6389" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.74:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:53:32.639366 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-qdrgd" podUID="4b4e8aa9-a0f6-4bd2-92d2-c524aedf6389" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.74:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:53:32.721643 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-n7crp" podUID="85683dfb-37fb-4301-8c7a-fbb7453b303d" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.75:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:53:32.722419 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-n7crp" podUID="85683dfb-37fb-4301-8c7a-fbb7453b303d" containerName="manager" probeResult="failure" output="Get 
\"http://10.217.0.75:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:53:33.219765 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-xgbp2" podUID="06df6c8c-640d-431b-b216-78345a9054e1" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.82:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:53:33.219877 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-xgbp2" podUID="06df6c8c-640d-431b-b216-78345a9054e1" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.82:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:53:33.552271 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/kube-state-metrics-0" podUID="71218193-88fc-4811-bf04-33a4f4a87898" containerName="kube-state-metrics" probeResult="failure" output="Get \"http://10.217.1.133:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:53:33.552315 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/kube-state-metrics-0" podUID="71218193-88fc-4811-bf04-33a4f4a87898" containerName="kube-state-metrics" probeResult="failure" output="Get \"http://10.217.1.133:8080/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:53:33.594361 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-pjhfc" podUID="3c9e825c-0aee-42b9-a7a5-3191486f301d" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.85:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:53:33.762259 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-ttg8w" podUID="b885fa10-3ed3-41fd-94ae-2b7442519450" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.89:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:53:33.890260 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-8jnzj" podUID="a6345cf8-abc2-4c9a-bfe6-8b65187ada2d" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.93:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:53:33.890260 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-8jnzj" podUID="a6345cf8-abc2-4c9a-bfe6-8b65187ada2d" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.93:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:53:34.547200 5050 scope.go:117] "RemoveContainer" containerID="33899e24adacef82929b0fbbcd58af7cb4f7aa3935b49ec9b5b4045a51da1d14" Dec 11 
15:54:41 crc kubenswrapper[5050]: E1211 15:53:34.547537 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:53:34.723266 5050 patch_prober.go:28] interesting pod/observability-operator-d8bb48f5d-wwdcc container/operator namespace/openshift-operators: Liveness probe status=failure output="Get \"http://10.217.1.128:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:53:34.723303 5050 patch_prober.go:28] interesting pod/observability-operator-d8bb48f5d-wwdcc container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.1.128:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:53:34.723330 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operators/observability-operator-d8bb48f5d-wwdcc" podUID="1c8331c1-b8ee-456b-baa6-110917427b64" containerName="operator" probeResult="failure" output="Get \"http://10.217.1.128:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:53:34.723372 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/observability-operator-d8bb48f5d-wwdcc" podUID="1c8331c1-b8ee-456b-baa6-110917427b64" containerName="operator" probeResult="failure" output="Get \"http://10.217.1.128:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:53:34.914338 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-5g8lw" podUID="3477354d-838b-48cc-a6c3-612088d82640" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.80:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:53:35.564675 5050 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:6443/livez?exclude=etcd\": context deadline exceeded" start-of-body= Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:53:35.564978 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/livez?exclude=etcd\": context deadline exceeded" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:53:38.280062 5050 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-r54sd container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.23:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 11 15:54:41 crc 
kubenswrapper[5050]: I1211 15:53:38.280510 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r54sd" podUID="dbd5b107-5d08-43af-881c-11540f395267" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.23:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:53:38.280596 5050 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r54sd" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:53:38.280110 5050 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-r54sd container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.23:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:53:38.281035 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r54sd" podUID="dbd5b107-5d08-43af-881c-11540f395267" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.23:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:53:38.281113 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r54sd" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:53:38.281822 5050 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="packageserver" containerStatusID={"Type":"cri-o","ID":"c4fcd3e0277409e398f4cc4d635cc0accc5ab2df1e1cb5b5780d5fabcb6748cf"} pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r54sd" containerMessage="Container packageserver failed liveness probe, will be restarted" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:53:38.281867 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r54sd" podUID="dbd5b107-5d08-43af-881c-11540f395267" containerName="packageserver" containerID="cri-o://c4fcd3e0277409e398f4cc4d635cc0accc5ab2df1e1cb5b5780d5fabcb6748cf" gracePeriod=30 Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:53:40.565585 5050 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:6443/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:53:40.566192 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:53:40.643344 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-operator-799b66f579-tqs2v" podUID="53a1a6d1-6999-4fa1-a0ce-a20b83f1f347" containerName="operator" 
probeResult="failure" output="Get \"http://10.217.0.73:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:53:40.837250 5050 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:53:42.600520 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-qdrgd" podUID="4b4e8aa9-a0f6-4bd2-92d2-c524aedf6389" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.74:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:53:42.673273 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-n7crp" podUID="85683dfb-37fb-4301-8c7a-fbb7453b303d" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.75:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:53:42.771471 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="14ad1594-090d-4024-a999-9ffe77ce58d8" containerName="galera" probeResult="failure" output="command timed out" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:53:42.771471 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="14ad1594-090d-4024-a999-9ffe77ce58d8" containerName="galera" probeResult="failure" output="command timed out" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:53:43.179330 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-xgbp2" podUID="06df6c8c-640d-431b-b216-78345a9054e1" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.82:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:53:43.351380 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-65swh" podUID="58a60a6f-fcf5-4aa0-b6e4-97d7df2b2bd2" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.84:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:53:43.554236 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/kube-state-metrics-0" podUID="71218193-88fc-4811-bf04-33a4f4a87898" containerName="kube-state-metrics" probeResult="failure" output="Get \"http://10.217.1.133:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:53:43.554420 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/kube-state-metrics-0" podUID="71218193-88fc-4811-bf04-33a4f4a87898" containerName="kube-state-metrics" probeResult="failure" output="Get \"http://10.217.1.133:8080/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:53:43.569602 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openstack/kube-state-metrics-0" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:53:43.569635 5050 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/kube-state-metrics-0" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:53:43.570413 5050 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-state-metrics" containerStatusID={"Type":"cri-o","ID":"d6d78ae06a556ee38d684c35f4b28a4e8362d122891149382720399f1ed0ffed"} pod="openstack/kube-state-metrics-0" containerMessage="Container kube-state-metrics failed liveness probe, will be restarted" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:53:43.570459 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="71218193-88fc-4811-bf04-33a4f4a87898" containerName="kube-state-metrics" containerID="cri-o://d6d78ae06a556ee38d684c35f4b28a4e8362d122891149382720399f1ed0ffed" gracePeriod=30 Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:53:43.595238 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-pjhfc" podUID="3c9e825c-0aee-42b9-a7a5-3191486f301d" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.85:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:53:43.597434 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/kube-state-metrics-0" podUID="71218193-88fc-4811-bf04-33a4f4a87898" containerName="kube-state-metrics" probeResult="failure" output="Get \"http://10.217.1.133:8081/readyz\": read tcp 10.217.0.2:58050->10.217.1.133:8081: read: connection reset by peer" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:53:43.846248 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-8jnzj" podUID="a6345cf8-abc2-4c9a-bfe6-8b65187ada2d" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.93:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:53:44.723237 5050 patch_prober.go:28] interesting pod/observability-operator-d8bb48f5d-wwdcc container/operator namespace/openshift-operators: Liveness probe status=failure output="Get \"http://10.217.1.128:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:53:44.723296 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operators/observability-operator-d8bb48f5d-wwdcc" podUID="1c8331c1-b8ee-456b-baa6-110917427b64" containerName="operator" probeResult="failure" output="Get \"http://10.217.1.128:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:53:44.765193 5050 patch_prober.go:28] interesting pod/observability-operator-d8bb48f5d-wwdcc container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.1.128:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:53:44.765239 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/observability-operator-d8bb48f5d-wwdcc" podUID="1c8331c1-b8ee-456b-baa6-110917427b64" 
containerName="operator" probeResult="failure" output="Get \"http://10.217.1.128:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:53:44.955240 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-5g8lw" podUID="3477354d-838b-48cc-a6c3-612088d82640" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.80:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:53:44.955339 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-5g8lw" podUID="3477354d-838b-48cc-a6c3-612088d82640" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.80:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:53:45.565295 5050 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:6443/livez?exclude=etcd\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:53:45.565635 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/livez?exclude=etcd\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:53:48.008219 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/speaker-w4tzc" podUID="54e98831-cd88-4dee-90db-e8fbb006e9c3" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:53:48.008369 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/speaker-w4tzc" podUID="54e98831-cd88-4dee-90db-e8fbb006e9c3" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:53:48.279053 5050 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-r54sd container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.23:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:53:48.279115 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r54sd" podUID="dbd5b107-5d08-43af-881c-11540f395267" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.23:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:53:48.469368 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/kube-state-metrics-0" 
podUID="71218193-88fc-4811-bf04-33a4f4a87898" containerName="kube-state-metrics" probeResult="failure" output="Get \"http://10.217.1.133:8081/readyz\": dial tcp 10.217.1.133:8081: connect: connection refused" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:53:49.546966 5050 scope.go:117] "RemoveContainer" containerID="33899e24adacef82929b0fbbcd58af7cb4f7aa3935b49ec9b5b4045a51da1d14" Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:53:49.547551 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:53:50.567056 5050 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:6443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:53:50.567106 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:53:50.567286 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:53:50.685272 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-operator-799b66f579-tqs2v" podUID="53a1a6d1-6999-4fa1-a0ce-a20b83f1f347" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.73:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:53:50.685297 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-controller-operator-799b66f579-tqs2v" podUID="53a1a6d1-6999-4fa1-a0ce-a20b83f1f347" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.73:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:53:50.838235 5050 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:53:51.171281 5050 generic.go:334] "Generic (PLEG): container finished" podID="71218193-88fc-4811-bf04-33a4f4a87898" containerID="d6d78ae06a556ee38d684c35f4b28a4e8362d122891149382720399f1ed0ffed" exitCode=-1 Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:53:51.171312 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" 
event={"ID":"71218193-88fc-4811-bf04-33a4f4a87898","Type":"ContainerDied","Data":"d6d78ae06a556ee38d684c35f4b28a4e8362d122891149382720399f1ed0ffed"} Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:53:52.640442 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-qdrgd" podUID="4b4e8aa9-a0f6-4bd2-92d2-c524aedf6389" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.74:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:53:52.640808 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-qdrgd" podUID="4b4e8aa9-a0f6-4bd2-92d2-c524aedf6389" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.74:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:53:52.640920 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-qdrgd" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:53:52.723272 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-n7crp" podUID="85683dfb-37fb-4301-8c7a-fbb7453b303d" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.75:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:53:52.723817 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-n7crp" podUID="85683dfb-37fb-4301-8c7a-fbb7453b303d" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.75:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:53:52.723875 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-n7crp" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:53:52.771328 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="14ad1594-090d-4024-a999-9ffe77ce58d8" containerName="galera" probeResult="failure" output="command timed out" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:53:52.772804 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="14ad1594-090d-4024-a999-9ffe77ce58d8" containerName="galera" probeResult="failure" output="command timed out" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:53:53.043283 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-qbc2f" podUID="9726c3f9-bcae-4722-a054-5a66c161953b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.79:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:53:53.221365 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-xgbp2" podUID="06df6c8c-640d-431b-b216-78345a9054e1" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.82:8081/readyz\": context deadline exceeded 
(Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:53:53.221541 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-xgbp2" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:53:53.221697 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-xgbp2" podUID="06df6c8c-640d-431b-b216-78345a9054e1" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.82:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:53:53.393188 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-65swh" podUID="58a60a6f-fcf5-4aa0-b6e4-97d7df2b2bd2" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.84:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:53:53.393238 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-65swh" podUID="58a60a6f-fcf5-4aa0-b6e4-97d7df2b2bd2" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.84:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:53:53.624337 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-pjhfc" podUID="3c9e825c-0aee-42b9-a7a5-3191486f301d" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.85:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:53:53.624405 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-pjhfc" podUID="3c9e825c-0aee-42b9-a7a5-3191486f301d" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.85:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:53:53.682706 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-qdrgd" podUID="4b4e8aa9-a0f6-4bd2-92d2-c524aedf6389" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.74:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:53:53.766220 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-n7crp" podUID="85683dfb-37fb-4301-8c7a-fbb7453b303d" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.75:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:53:53.774207 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="fef8d631-c968-4ccd-92ec-e6fc5a2f6731" containerName="ceilometer-notification-agent" probeResult="failure" output="command timed out" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:53:53.887244 5050 prober.go:107] "Probe failed" 
probeType="Readiness" pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-8jnzj" podUID="a6345cf8-abc2-4c9a-bfe6-8b65187ada2d" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.93:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:53:53.887318 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-8jnzj" podUID="a6345cf8-abc2-4c9a-bfe6-8b65187ada2d" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.93:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:53:53.887405 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-8jnzj" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:53:54.264271 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-xgbp2" podUID="06df6c8c-640d-431b-b216-78345a9054e1" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.82:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:53:54.766620 5050 patch_prober.go:28] interesting pod/observability-operator-d8bb48f5d-wwdcc container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.1.128:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:53:54.766617 5050 patch_prober.go:28] interesting pod/observability-operator-d8bb48f5d-wwdcc container/operator namespace/openshift-operators: Liveness probe status=failure output="Get \"http://10.217.1.128:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:53:54.767280 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/observability-operator-d8bb48f5d-wwdcc" podUID="1c8331c1-b8ee-456b-baa6-110917427b64" containerName="operator" probeResult="failure" output="Get \"http://10.217.1.128:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:53:54.767319 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operators/observability-operator-d8bb48f5d-wwdcc" podUID="1c8331c1-b8ee-456b-baa6-110917427b64" containerName="operator" probeResult="failure" output="Get \"http://10.217.1.128:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:53:54.767397 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-d8bb48f5d-wwdcc" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:53:54.767446 5050 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-operators/observability-operator-d8bb48f5d-wwdcc" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:53:54.912348 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-5g8lw" podUID="3477354d-838b-48cc-a6c3-612088d82640" containerName="manager" 
probeResult="failure" output="Get \"http://10.217.0.80:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:53:54.912467 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-5g8lw" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:53:54.912465 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-8jnzj" podUID="a6345cf8-abc2-4c9a-bfe6-8b65187ada2d" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.93:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:53:55.204192 5050 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="operator" containerStatusID={"Type":"cri-o","ID":"bc1a7c414fb7ed1343e941a8a7ba794d8eed3ccaad27a52f69cfbbfb3ef248e7"} pod="openshift-operators/observability-operator-d8bb48f5d-wwdcc" containerMessage="Container operator failed liveness probe, will be restarted" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:53:55.204264 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-operators/observability-operator-d8bb48f5d-wwdcc" podUID="1c8331c1-b8ee-456b-baa6-110917427b64" containerName="operator" containerID="cri-o://bc1a7c414fb7ed1343e941a8a7ba794d8eed3ccaad27a52f69cfbbfb3ef248e7" gracePeriod=30 Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:53:55.402186 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-controller-manager-5d666c4679-krnwf" podUID="a6130f1a-c95b-445f-8235-e57fdcb270fe" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.48:8080/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:53:55.565858 5050 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:6443/livez?exclude=etcd\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:53:55.565924 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/livez?exclude=etcd\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:53:55.565998 5050 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:53:55.566140 5050 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:53:55.567282 5050 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-apiserver" containerStatusID={"Type":"cri-o","ID":"bdc5023a5b80cbc534e9fae8c924add56e2bae71eb5c0725fc0e14f4b3419495"} pod="openshift-kube-apiserver/kube-apiserver-crc" containerMessage="Container kube-apiserver failed liveness probe, will be restarted" Dec 11 15:54:41 crc 
kubenswrapper[5050]: I1211 15:53:55.567410 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" containerName="kube-apiserver" containerID="cri-o://bdc5023a5b80cbc534e9fae8c924add56e2bae71eb5c0725fc0e14f4b3419495" gracePeriod=15 Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:53:55.954242 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-5g8lw" podUID="3477354d-838b-48cc-a6c3-612088d82640" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.80:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:53:56.281375 5050 patch_prober.go:28] interesting pod/observability-operator-d8bb48f5d-wwdcc container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.1.128:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:53:56.281466 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/observability-operator-d8bb48f5d-wwdcc" podUID="1c8331c1-b8ee-456b-baa6-110917427b64" containerName="operator" probeResult="failure" output="Get \"http://10.217.1.128:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:53:58.009204 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/speaker-w4tzc" podUID="54e98831-cd88-4dee-90db-e8fbb006e9c3" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:53:58.009233 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/speaker-w4tzc" podUID="54e98831-cd88-4dee-90db-e8fbb006e9c3" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:53:58.279838 5050 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-r54sd container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.23:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:53:58.280184 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r54sd" podUID="dbd5b107-5d08-43af-881c-11540f395267" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.23:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:53:58.468547 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/kube-state-metrics-0" podUID="71218193-88fc-4811-bf04-33a4f4a87898" containerName="kube-state-metrics" probeResult="failure" output="Get \"http://10.217.1.133:8081/readyz\": dial tcp 10.217.1.133:8081: connect: connection refused" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:53:59.272306 5050 
prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5c747c4-l2hpx" podUID="f5bec9f7-072c-4c21-80ea-af9f59313eef" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.88:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.279729 5050 reflector.go:484] object-"openshift-service-ca-operator"/"service-ca-operator-config": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.279780 5050 reflector.go:484] object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.279811 5050 reflector.go:484] object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.279841 5050 reflector.go:484] object-"metallb-system"/"speaker-dockercfg-p2rzt": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.279922 5050 reflector.go:484] object-"openshift-nmstate"/"nmstate-operator-dockercfg-b8vjd": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.279955 5050 reflector.go:484] object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.279985 5050 reflector.go:484] object-"openstack"/"cinder-scheduler-config-data": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.280087 5050 reflector.go:484] object-"openshift-console"/"service-ca": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.280119 5050 reflector.go:484] object-"openshift-machine-config-operator"/"machine-config-operator-images": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.280154 5050 reflector.go:484] object-"openstack"/"alertmanager-metric-storage-cluster-tls-config": watch of *v1.Secret ended with: an error on the 
server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.280186 5050 reflector.go:484] object-"openshift-multus"/"default-cni-sysctl-allowlist": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.280217 5050 reflector.go:484] object-"openshift-ovn-kubernetes"/"ovnkube-config": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.280247 5050 reflector.go:484] object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.280283 5050 reflector.go:484] object-"openshift-operators"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.280313 5050 reflector.go:484] object-"openstack"/"manila-api-config-data": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.280343 5050 reflector.go:484] object-"openstack"/"ovncontroller-metrics-config": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.280373 5050 reflector.go:484] object-"openshift-image-registry"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.280402 5050 reflector.go:484] object-"openshift-apiserver"/"encryption-config-1": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.280451 5050 reflector.go:484] object-"openstack"/"keystone-scripts": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.280483 5050 reflector.go:484] object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.280512 5050 reflector.go:484] 
object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-6tg85": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.280554 5050 reflector.go:484] object-"openstack"/"galera-openstack-cell1-dockercfg-vwxp8": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.280593 5050 reflector.go:484] object-"openshift-config-operator"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.280635 5050 reflector.go:484] object-"openstack"/"placement-placement-dockercfg-4zzmp": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.280666 5050 reflector.go:484] object-"openshift-machine-api"/"kube-rbac-proxy": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.280693 5050 reflector.go:484] object-"openshift-apiserver"/"etcd-client": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.280791 5050 reflector.go:484] object-"openshift-apiserver"/"etcd-serving-ca": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.280824 5050 reflector.go:484] object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.280852 5050 reflector.go:484] object-"openstack"/"galera-openstack-dockercfg-5gcmv": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.280881 5050 reflector.go:484] object-"openshift-cluster-machine-approver"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.280908 5050 reflector.go:484] object-"openstack"/"ovndbcluster-sb-scripts": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc 
kubenswrapper[5050]: W1211 15:54:00.280937 5050 reflector.go:484] object-"openshift-marketplace"/"marketplace-trusted-ca": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.280967 5050 reflector.go:484] object-"openshift-ingress-canary"/"canary-serving-cert": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.280995 5050 reflector.go:484] object-"openshift-marketplace"/"marketplace-operator-metrics": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.281043 5050 reflector.go:484] object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-d5dn6": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.281072 5050 reflector.go:484] object-"openshift-machine-api"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.281102 5050 reflector.go:484] object-"openstack"/"octavia-housekeeping-config-data": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.281129 5050 reflector.go:484] object-"openshift-machine-config-operator"/"node-bootstrapper-token": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.281158 5050 reflector.go:484] object-"openstack"/"ceilometer-config-data": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.281188 5050 reflector.go:484] object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.281228 5050 reflector.go:484] object-"openstack-operators"/"infra-operator-webhook-server-cert": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.281257 5050 reflector.go:484] object-"openshift-apiserver-operator"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has 
prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.284242 5050 reflector.go:484] object-"openshift-ingress"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.284578 5050 reflector.go:484] object-"openshift-multus"/"metrics-daemon-secret": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.284619 5050 reflector.go:484] object-"openstack"/"octavia-config-data": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.284662 5050 reflector.go:484] object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.284705 5050 reflector.go:484] object-"openshift-operators"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.284747 5050 reflector.go:484] object-"openstack"/"aodh-config-data": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.284790 5050 reflector.go:484] object-"openshift-cluster-version"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.284833 5050 reflector.go:484] object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-n57x7": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.284873 5050 reflector.go:484] object-"openstack"/"openstack-cell1-scripts": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.284915 5050 reflector.go:484] object-"openshift-ingress"/"router-metrics-certs-default": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.284955 5050 reflector.go:484] object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: 
client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.284997 5050 reflector.go:484] object-"openshift-authentication"/"v4-0-config-user-template-error": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.285059 5050 reflector.go:484] object-"cert-manager-operator"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.285096 5050 reflector.go:484] object-"openshift-dns"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.285135 5050 reflector.go:484] object-"openshift-nmstate"/"plugin-serving-cert": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.285173 5050 reflector.go:484] object-"cert-manager"/"cert-manager-cainjector-dockercfg-sxrxp": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.285212 5050 reflector.go:484] object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-2bw5c": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.285251 5050 reflector.go:484] object-"metallb-system"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.285293 5050 reflector.go:484] object-"openstack"/"default-dockercfg-tmtdn": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.285330 5050 reflector.go:484] object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.285367 5050 reflector.go:484] object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.285405 5050 reflector.go:484] object-"openshift-machine-config-operator"/"machine-config-server-tls": watch of 
*v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.285443 5050 reflector.go:484] object-"openshift-ingress"/"service-ca-bundle": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.285479 5050 reflector.go:484] object-"openshift-nmstate"/"default-dockercfg-vtnxn": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.285514 5050 reflector.go:484] object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.285551 5050 reflector.go:484] object-"openshift-console-operator"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.285587 5050 reflector.go:484] object-"openshift-dns"/"dns-dockercfg-jwfmh": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.285626 5050 reflector.go:484] object-"openshift-machine-config-operator"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.285666 5050 reflector.go:484] object-"openshift-marketplace"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.285703 5050 reflector.go:484] object-"openshift-authentication-operator"/"serving-cert": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.285738 5050 reflector.go:484] object-"openshift-image-registry"/"image-registry-tls": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.285776 5050 reflector.go:484] object-"openshift-operators"/"observability-operator-tls": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.285814 5050 reflector.go:484] 
object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-mz95s": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.285867 5050 reflector.go:484] object-"hostpath-provisioner"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.285904 5050 reflector.go:484] object-"openshift-machine-config-operator"/"mcc-proxy-tls": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.285940 5050 reflector.go:484] object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-djswv": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.285978 5050 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.286070 5050 reflector.go:484] object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.286109 5050 reflector.go:484] object-"openshift-multus"/"default-dockercfg-2q5b6": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.286147 5050 reflector.go:484] object-"openstack"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.286183 5050 reflector.go:484] object-"openshift-cluster-machine-approver"/"kube-rbac-proxy": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.286220 5050 reflector.go:484] object-"openstack"/"horizon-horizon-dockercfg-d7bqh": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.286254 5050 reflector.go:484] object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-t4t27": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc 
kubenswrapper[5050]: W1211 15:54:00.286292 5050 reflector.go:484] object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.286331 5050 reflector.go:484] object-"openstack"/"alertmanager-metric-storage-generated": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.286368 5050 reflector.go:484] object-"openstack"/"alertmanager-metric-storage-tls-assets-0": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.286405 5050 reflector.go:484] object-"openshift-controller-manager"/"openshift-global-ca": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.286441 5050 reflector.go:484] object-"cert-manager"/"cert-manager-webhook-dockercfg-kbhwz": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.286476 5050 reflector.go:484] object-"openstack"/"glance-glance-dockercfg-ndgnr": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.286512 5050 reflector.go:484] object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.286554 5050 reflector.go:484] object-"openstack"/"cinder-scripts": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.286605 5050 reflector.go:484] object-"openshift-apiserver"/"image-import-ca": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.286651 5050 reflector.go:484] object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.286691 5050 reflector.go:484] object-"openshift-machine-config-operator"/"proxy-tls": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has 
prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.286727 5050 reflector.go:484] object-"openstack"/"openstack-scripts": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.286766 5050 reflector.go:484] object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.286805 5050 reflector.go:484] object-"openshift-authentication"/"v4-0-config-user-template-provider-selection": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.286844 5050 reflector.go:484] object-"metallb-system"/"controller-certs-secret": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.286884 5050 reflector.go:484] object-"openstack"/"openstack-cell1-dockercfg-mvxd9": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.286925 5050 reflector.go:484] object-"openstack-operators"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.286965 5050 reflector.go:484] object-"openstack"/"keystone-config-data": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.287004 5050 reflector.go:484] object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.287212 5050 reflector.go:484] object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.287268 5050 reflector.go:484] object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-g8gr8": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.287309 5050 reflector.go:484] object-"openshift-console-operator"/"console-operator-config": watch of *v1.ConfigMap ended with: an 
error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.287352 5050 reflector.go:484] object-"openstack"/"ovndbcluster-nb-scripts": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.287400 5050 reflector.go:484] object-"openshift-ingress-operator"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.287438 5050 reflector.go:484] object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-w847r": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.287477 5050 reflector.go:484] object-"openshift-image-registry"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.287515 5050 reflector.go:484] object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.287554 5050 reflector.go:484] object-"openshift-network-diagnostics"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.287591 5050 reflector.go:484] object-"openshift-kube-storage-version-migrator-operator"/"config": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.287629 5050 reflector.go:484] object-"openstack"/"nova-nova-dockercfg-mdjbl": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.287668 5050 reflector.go:484] object-"openshift-network-node-identity"/"network-node-identity-cert": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.287704 5050 reflector.go:484] object-"openshift-oauth-apiserver"/"encryption-config-1": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.287741 5050 reflector.go:484] 
object-"openstack"/"horizon": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.287779 5050 reflector.go:484] object-"openstack"/"prometheus-metric-storage-web-config": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.287815 5050 reflector.go:484] object-"openshift-machine-config-operator"/"kube-rbac-proxy": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.287851 5050 reflector.go:484] object-"openshift-operators"/"observability-operator-sa-dockercfg-f5g9s": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.287888 5050 reflector.go:484] object-"openstack"/"ovndbcluster-sb-config": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.287923 5050 reflector.go:484] object-"openstack"/"cinder-config-data": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.287960 5050 reflector.go:484] object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.287997 5050 reflector.go:484] object-"openshift-oauth-apiserver"/"etcd-serving-ca": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.288054 5050 reflector.go:484] object-"openstack"/"octavia-housekeeping-scripts": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.288093 5050 reflector.go:484] object-"openstack"/"octavia-rsyslog-scripts": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.288132 5050 reflector.go:484] object-"openshift-config-operator"/"config-operator-serving-cert": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.288169 5050 reflector.go:484] 
object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.288205 5050 reflector.go:484] object-"openshift-apiserver"/"serving-cert": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.288243 5050 reflector.go:484] object-"openshift-console"/"console-config": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.288279 5050 reflector.go:484] object-"openshift-etcd-operator"/"etcd-operator-config": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.288319 5050 reflector.go:484] object-"metallb-system"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.288356 5050 reflector.go:484] object-"openstack"/"octavia-hmport-map": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.288391 5050 reflector.go:484] object-"openshift-authentication"/"v4-0-config-system-cliconfig": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.288429 5050 reflector.go:484] object-"openstack"/"dns": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.288494 5050 reflector.go:484] object-"openstack"/"dataplane-adoption-secret": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.288552 5050 reflector.go:484] object-"openstack"/"keystone": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.288587 5050 reflector.go:484] object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-mc6vn": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.288627 5050 reflector.go:484] 
object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.288663 5050 reflector.go:484] object-"openshift-controller-manager-operator"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.288699 5050 reflector.go:484] object-"openstack"/"nova-cell0-conductor-config-data": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.288735 5050 reflector.go:484] object-"metallb-system"/"manager-account-dockercfg-m6zt9": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.288772 5050 reflector.go:484] object-"openstack"/"nova-metadata-config-data": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.288809 5050 reflector.go:484] object-"metallb-system"/"frr-startup": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.288845 5050 reflector.go:484] object-"openshift-etcd-operator"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.288879 5050 reflector.go:484] object-"openstack"/"openstack-aee-default-env": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.288916 5050 reflector.go:484] object-"metallb-system"/"metallb-webhook-cert": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.288953 5050 reflector.go:484] object-"openstack"/"ceph-conf-files": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.288989 5050 reflector.go:484] object-"openstack"/"heat-config-data": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.289043 5050 reflector.go:484] 
object-"openshift-ingress-canary"/"default-dockercfg-2llfx": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.289079 5050 reflector.go:484] object-"openstack"/"nova-cell1-conductor-config-data": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.289116 5050 reflector.go:484] object-"openshift-cluster-version"/"cluster-version-operator-serving-cert": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.289148 5050 reflector.go:484] object-"openshift-controller-manager"/"config": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.289185 5050 reflector.go:484] object-"openshift-ovn-kubernetes"/"ovnkube-script-lib": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.289221 5050 reflector.go:484] object-"openshift-etcd-operator"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.289256 5050 reflector.go:484] object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.289289 5050 reflector.go:484] object-"openshift-console-operator"/"serving-cert": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.289323 5050 reflector.go:484] object-"openshift-authentication-operator"/"authentication-operator-config": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.289360 5050 reflector.go:484] object-"openshift-machine-api"/"machine-api-operator-tls": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.289396 5050 reflector.go:484] object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding 
Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.289430 5050 reflector.go:484] object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.289467 5050 reflector.go:484] object-"openshift-controller-manager"/"client-ca": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.289501 5050 reflector.go:484] object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-25r77": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.289538 5050 reflector.go:484] object-"openshift-cluster-machine-approver"/"machine-approver-tls": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.289574 5050 reflector.go:484] object-"cert-manager"/"cert-manager-dockercfg-8z6ch": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.289610 5050 reflector.go:484] object-"openshift-service-ca-operator"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.289643 5050 reflector.go:484] object-"openstack"/"barbican-config-data": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.289680 5050 reflector.go:484] object-"openshift-ingress"/"router-certs-default": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.289714 5050 reflector.go:484] object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.289749 5050 reflector.go:484] object-"openshift-cluster-samples-operator"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.289787 5050 reflector.go:484] object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: 
client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.289823 5050 reflector.go:484] object-"openstack"/"aodh-scripts": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.289860 5050 reflector.go:484] object-"openstack"/"octavia-certs-secret": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.289895 5050 reflector.go:484] object-"openstack"/"neutron-neutron-dockercfg-ctkbt": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.289936 5050 reflector.go:484] object-"openshift-etcd-operator"/"etcd-ca-bundle": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.289971 5050 reflector.go:484] object-"openshift-authentication"/"audit": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.290095 5050 reflector.go:484] object-"openshift-oauth-apiserver"/"audit-1": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.290141 5050 reflector.go:484] object-"metallb-system"/"controller-dockercfg-5zwsv": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.290212 5050 reflector.go:484] object-"openshift-cluster-machine-approver"/"machine-approver-config": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.290245 5050 reflector.go:484] object-"openshift-ingress-operator"/"metrics-tls": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.290307 5050 reflector.go:484] object-"openshift-dns"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.290364 5050 reflector.go:484] object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the 
request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.290401 5050 reflector.go:484] object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.290465 5050 reflector.go:484] object-"openstack"/"octavia-api-scripts": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.290543 5050 reflector.go:484] object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.290589 5050 reflector.go:484] object-"openstack"/"heat-cfnapi-config-data": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.290676 5050 reflector.go:484] object-"openstack"/"dnsmasq-dns-dockercfg-kpmgv": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.290757 5050 reflector.go:484] object-"openshift-controller-manager"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.290841 5050 reflector.go:484] object-"openshift-ingress"/"router-dockercfg-zdk86": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.290925 5050 reflector.go:484] object-"openstack"/"neutron-httpd-config": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.290999 5050 reflector.go:484] object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.291087 5050 reflector.go:484] object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.291165 5050 reflector.go:484] object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2": watch of *v1.Secret ended with: an error on the server ("unable to 
decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.291244 5050 reflector.go:484] object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.291322 5050 reflector.go:484] object-"openshift-controller-manager"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.291404 5050 reflector.go:484] object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.291453 5050 reflector.go:484] object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.291532 5050 reflector.go:484] object-"cert-manager-operator"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.291611 5050 reflector.go:484] object-"openshift-operators"/"obo-prometheus-operator-dockercfg-mpfzr": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.291692 5050 reflector.go:484] object-"openshift-service-ca-operator"/"serving-cert": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.291759 5050 reflector.go:484] object-"openshift-console-operator"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.291794 5050 reflector.go:484] object-"openstack"/"rabbitmq-erlang-cookie": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.291876 5050 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.291960 5050 reflector.go:484] 
object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.292049 5050 reflector.go:484] object-"openstack"/"glance-default-internal-config-data": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.292129 5050 reflector.go:484] object-"openstack"/"cinder-volume-volume1-config-data": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.292208 5050 reflector.go:484] object-"openshift-nmstate"/"openshift-nmstate-webhook": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.292288 5050 reflector.go:484] object-"openshift-service-ca"/"signing-key": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.292363 5050 reflector.go:484] object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.292414 5050 reflector.go:484] object-"openstack"/"openstack-config-data": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.292524 5050 reflector.go:484] object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.292598 5050 reflector.go:484] object-"openshift-service-ca"/"service-ca-dockercfg-pn86c": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.292656 5050 reflector.go:484] object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.292696 5050 reflector.go:484] object-"openshift-multus"/"multus-admission-controller-secret": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding 
Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.292755 5050 reflector.go:484] object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.292791 5050 reflector.go:484] object-"openshift-dns"/"node-resolver-dockercfg-kz9s7": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.292851 5050 reflector.go:484] object-"openstack"/"nova-api-config-data": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.292914 5050 reflector.go:484] object-"openstack"/"rabbitmq-plugins-conf": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.292951 5050 reflector.go:484] object-"openshift-etcd-operator"/"etcd-operator-serving-cert": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.293032 5050 reflector.go:484] object-"openstack"/"openstack-config": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.293070 5050 reflector.go:484] object-"openstack"/"ovncontroller-ovncontroller-dockercfg-fjq8l": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.293137 5050 reflector.go:484] object-"openshift-oauth-apiserver"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.293172 5050 reflector.go:484] object-"openshift-machine-config-operator"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.293232 5050 reflector.go:484] object-"openshift-apiserver"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.293290 5050 reflector.go:484] object-"metallb-system"/"metallb-memberlist": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc 
kubenswrapper[5050]: W1211 15:54:00.293327 5050 reflector.go:484] object-"openstack"/"ovsdbserver-sb": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.293384 5050 reflector.go:484] object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.293446 5050 reflector.go:484] object-"openshift-ingress-canary"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.293482 5050 reflector.go:484] object-"openstack"/"prometheus-metric-storage": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.293539 5050 reflector.go:484] object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.293572 5050 reflector.go:484] object-"openshift-kube-storage-version-migrator-operator"/"serving-cert": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.293631 5050 reflector.go:484] object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-jbnlr": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.293667 5050 reflector.go:484] object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.293743 5050 reflector.go:484] object-"openstack"/"memcached-config-data": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.293797 5050 reflector.go:484] object-"openstack"/"ovnnorthd-config": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.293862 5050 reflector.go:484] object-"openstack"/"rabbitmq-cell1-server-dockercfg-bvxnm": watch of *v1.Secret ended with: an error on the server ("unable to decode an 
event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.293899 5050 reflector.go:484] object-"openstack"/"octavia-rsyslog-config-data": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.293933 5050 reflector.go:484] object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-glgrh": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.293971 5050 reflector.go:484] object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.294003 5050 reflector.go:484] object-"openshift-dns"/"dns-default-metrics-tls": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.294053 5050 reflector.go:484] object-"openshift-config-operator"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.294112 5050 reflector.go:484] object-"openshift-nmstate"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.294145 5050 reflector.go:484] object-"openstack"/"glance-scripts": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.294179 5050 reflector.go:484] object-"openstack"/"ovsdbserver-nb": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.294213 5050 reflector.go:484] object-"openshift-nmstate"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.294244 5050 reflector.go:484] object-"openshift-oauth-apiserver"/"trusted-ca-bundle": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.294278 5050 reflector.go:484] object-"metallb-system"/"frr-k8s-certs-secret": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch 
stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.294311 5050 reflector.go:484] object-"openstack"/"rabbitmq-cell1-server-conf": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.294344 5050 reflector.go:484] object-"cert-manager-operator"/"cert-manager-operator-controller-manager-dockercfg-p54cz": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.294378 5050 reflector.go:484] object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-crgwv": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.294410 5050 reflector.go:484] object-"openshift-service-ca"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.294444 5050 reflector.go:484] object-"openshift-console"/"console-oauth-config": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.294502 5050 reflector.go:484] object-"openshift-network-node-identity"/"env-overrides": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.294535 5050 reflector.go:484] object-"openstack"/"cinder-cinder-dockercfg-lvj2r": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.294569 5050 reflector.go:484] object-"openshift-service-ca"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.294601 5050 reflector.go:484] object-"openshift-image-registry"/"image-registry-operator-tls": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.294635 5050 reflector.go:484] object-"openstack"/"manila-scheduler-config-data": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.294672 5050 reflector.go:484] object-"openstack"/"openstack-cell1": watch of *v1.ConfigMap ended with: an error on the server 
("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.294708 5050 reflector.go:484] object-"openstack"/"prometheus-metric-storage-tls-assets-0": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.294756 5050 reflector.go:484] object-"openshift-controller-manager-operator"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.294792 5050 reflector.go:484] object-"openstack"/"ceilometer-scripts": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.294829 5050 reflector.go:484] object-"openshift-apiserver"/"config": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.294860 5050 reflector.go:484] object-"openshift-route-controller-manager"/"client-ca": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.294892 5050 reflector.go:484] object-"openstack"/"manila-share-share1-config-data": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.294925 5050 reflector.go:484] object-"openstack"/"horizon-config-data": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:00.295030 5050 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openstack/persistence-rabbitmq-server-0: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/persistentvolumeclaims/persistence-rabbitmq-server-0\": http2: client connection lost" pod="openstack/rabbitmq-server-0" volumeName="persistence" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.295223 5050 reflector.go:484] object-"openshift-network-diagnostics"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.295474 5050 reflector.go:484] object-"metallb-system"/"metallb-operator-controller-manager-service-cert": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.295489 5050 
reflector.go:484] object-"openstack"/"cert-galera-openstack-cell1-svc": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.295522 5050 reflector.go:484] object-"openshift-machine-api"/"machine-api-operator-images": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.295540 5050 reflector.go:484] object-"cert-manager"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.295500 5050 reflector.go:484] object-"openstack-operators"/"webhook-server-cert": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.295540 5050 reflector.go:484] object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.295512 5050 reflector.go:484] object-"openstack"/"manila-config-data": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.295470 5050 reflector.go:484] object-"openstack"/"combined-ca-bundle": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.295606 5050 reflector.go:484] object-"openshift-apiserver"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.295625 5050 reflector.go:484] object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.295650 5050 reflector.go:484] object-"openshift-marketplace"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.295665 5050 reflector.go:484] object-"openshift-operator-lifecycle-manager"/"pprof-cert": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 
15:54:00.295679 5050 reflector.go:484] object-"openstack"/"ovndbcluster-nb-config": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.295709 5050 reflector.go:484] object-"openstack"/"octavia-worker-scripts": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.295742 5050 reflector.go:484] object-"openshift-image-registry"/"installation-pull-secrets": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:00.295688 5050 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/events\": http2: client connection lost" event=< Dec 11 15:54:41 crc kubenswrapper[5050]: &Event{ObjectMeta:{packageserver-d55dfcdfc-r54sd.18803427859863d6 openshift-operator-lifecycle-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-operator-lifecycle-manager,Name:packageserver-d55dfcdfc-r54sd,UID:dbd5b107-5d08-43af-881c-11540f395267,APIVersion:v1,ResourceVersion:27090,FieldPath:spec.containers{packageserver},},Reason:ProbeError,Message:Liveness probe error: Get "https://10.217.0.23:5443/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Dec 11 15:54:41 crc kubenswrapper[5050]: body: Dec 11 15:54:41 crc kubenswrapper[5050]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 15:53:18.278960086 +0000 UTC m=+7489.122682672,LastTimestamp:2025-12-11 15:53:18.278960086 +0000 UTC m=+7489.122682672,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Dec 11 15:54:41 crc kubenswrapper[5050]: > Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.295771 5050 reflector.go:484] object-"openshift-authentication-operator"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.295804 5050 reflector.go:484] object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.295817 5050 reflector.go:484] object-"openshift-console"/"console-serving-cert": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.295826 5050 reflector.go:484] object-"openstack"/"openstackclient-openstackclient-dockercfg-c88pf": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch 
stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.295851 5050 reflector.go:484] object-"openshift-ovn-kubernetes"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.295869 5050 reflector.go:484] object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.295893 5050 reflector.go:484] object-"openshift-network-operator"/"metrics-tls": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.295917 5050 reflector.go:484] object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-x9p8j": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.295942 5050 reflector.go:484] object-"openshift-network-operator"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.295946 5050 reflector.go:484] object-"openshift-machine-api"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.295965 5050 reflector.go:484] object-"openstack"/"nova-scheduler-config-data": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.295987 5050 reflector.go:484] object-"openstack"/"cert-galera-openstack-svc": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.296001 5050 reflector.go:484] object-"openshift-authentication"/"v4-0-config-system-service-ca": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.296006 5050 reflector.go:484] object-"openstack"/"prometheus-metric-storage-rulefiles-0": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.296043 5050 reflector.go:484] object-"openstack"/"rabbitmq-server-dockercfg-zd6qh": watch of *v1.Secret ended with: an error on the server 
("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.296061 5050 reflector.go:484] object-"openshift-service-ca-operator"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.296136 5050 reflector.go:484] object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.296144 5050 reflector.go:484] object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.296168 5050 reflector.go:484] object-"openshift-apiserver-operator"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.296181 5050 reflector.go:484] object-"openshift-machine-api"/"control-plane-machine-set-operator-tls": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.296191 5050 reflector.go:484] object-"openshift-machine-config-operator"/"mco-proxy-tls": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.296217 5050 reflector.go:484] object-"openstack"/"dns-svc": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.296226 5050 reflector.go:484] object-"openstack"/"dataplanenodeset-openstack-cell1": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.296255 5050 reflector.go:484] object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.296281 5050 reflector.go:484] object-"openshift-cluster-samples-operator"/"samples-operator-tls": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.296288 5050 reflector.go:484] 
object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.296322 5050 reflector.go:484] object-"hostpath-provisioner"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.296325 5050 reflector.go:484] object-"openshift-authentication"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.296349 5050 reflector.go:484] object-"openshift-ovn-kubernetes"/"env-overrides": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.296373 5050 reflector.go:484] object-"openshift-route-controller-manager"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.296397 5050 reflector.go:484] object-"openshift-ingress-operator"/"trusted-ca": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.296398 5050 reflector.go:484] object-"openshift-etcd-operator"/"etcd-service-ca-bundle": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.296423 5050 reflector.go:484] object-"openshift-authentication"/"v4-0-config-user-template-login": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.296480 5050 reflector.go:484] object-"openstack"/"telemetry-autoscaling-dockercfg-7ght8": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.296491 5050 reflector.go:484] object-"openstack"/"barbican-barbican-dockercfg-fxl2b": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.296508 5050 reflector.go:484] object-"openstack"/"nova-cell1-novncproxy-config-data": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc 
kubenswrapper[5050]: W1211 15:54:00.296529 5050 reflector.go:484] object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.296423 5050 reflector.go:484] object-"openshift-console"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.296529 5050 reflector.go:484] object-"openshift-image-registry"/"trusted-ca": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.296355 5050 reflector.go:484] object-"openstack"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.296601 5050 reflector.go:484] object-"metallb-system"/"speaker-certs-secret": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.296620 5050 reflector.go:484] object-"openshift-etcd-operator"/"etcd-client": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.296613 5050 reflector.go:484] object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-gkldc": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.296641 5050 reflector.go:484] object-"openstack"/"glance-default-external-config-data": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.296638 5050 reflector.go:484] object-"openstack"/"horizon-scripts": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.296668 5050 reflector.go:484] object-"openshift-route-controller-manager"/"config": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.296678 5050 reflector.go:484] object-"openstack"/"placement-scripts": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: 
W1211 15:54:00.296691 5050 reflector.go:484] object-"openshift-multus"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.296724 5050 reflector.go:484] object-"openshift-multus"/"cni-copy-resources": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.296727 5050 reflector.go:484] object-"openstack"/"octavia-healthmanager-config-data": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.296735 5050 reflector.go:484] object-"openstack"/"heat-engine-config-data": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.296725 5050 reflector.go:484] object-"metallb-system"/"frr-k8s-webhook-server-cert": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.296770 5050 reflector.go:484] object-"openstack"/"barbican-keystone-listener-config-data": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.296784 5050 reflector.go:484] object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.296803 5050 reflector.go:484] object-"openshift-network-console"/"networking-console-plugin-cert": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.296807 5050 reflector.go:484] object-"openstack"/"telemetry-ceilometer-dockercfg-nl629": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.296842 5050 reflector.go:484] object-"openstack-operators"/"metrics-server-cert": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.296859 5050 reflector.go:484] object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-7bf58": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 
15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.296845 5050 reflector.go:484] object-"openstack"/"keystone-keystone-dockercfg-jrbb7": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.296874 5050 reflector.go:484] object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-qd9ll": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.296864 5050 reflector.go:484] object-"openshift-ingress"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.296886 5050 reflector.go:484] object-"openshift-operators"/"perses-operator-dockercfg-xflrf": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.296903 5050 reflector.go:484] object-"openstack"/"openstack-config-secret": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.296933 5050 reflector.go:484] object-"openshift-oauth-apiserver"/"etcd-client": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.296940 5050 reflector.go:484] object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-nffdg": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.296953 5050 reflector.go:484] object-"openshift-console"/"oauth-serving-cert": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.296980 5050 reflector.go:484] object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.297002 5050 reflector.go:484] object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-8nf9c": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.297066 5050 reflector.go:484] object-"openshift-network-node-identity"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: 
client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.297084 5050 reflector.go:484] object-"openstack"/"manila-manila-dockercfg-d7578": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.297099 5050 reflector.go:484] object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.297005 5050 reflector.go:484] object-"openstack"/"heat-heat-dockercfg-mz9rx": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.297134 5050 reflector.go:484] object-"openstack"/"rabbitmq-default-user": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.297103 5050 reflector.go:484] object-"openshift-authentication"/"v4-0-config-system-router-certs": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.297152 5050 reflector.go:484] object-"openshift-multus"/"multus-ac-dockercfg-9lkdf": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.297175 5050 reflector.go:484] object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.297197 5050 reflector.go:484] object-"openshift-route-controller-manager"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.297218 5050 reflector.go:484] object-"openshift-oauth-apiserver"/"serving-cert": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.297218 5050 reflector.go:484] object-"openshift-authentication"/"v4-0-config-system-session": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.297237 5050 reflector.go:484] object-"openshift-oauth-apiserver"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server 
("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.297239 5050 reflector.go:484] object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-55k4b": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.297259 5050 reflector.go:484] object-"openstack"/"neutron-config": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.297273 5050 reflector.go:484] object-"openstack"/"rabbitmq-cell1-plugins-conf": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.297287 5050 reflector.go:484] object-"openstack-operators"/"openstack-operator-index-dockercfg-f86tg": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.297293 5050 reflector.go:484] object-"openshift-multus"/"multus-daemon-config": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.297180 5050 reflector.go:484] object-"openstack"/"metric-storage-prometheus-dockercfg-4drvb": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.297294 5050 reflector.go:484] object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.297307 5050 reflector.go:484] object-"openstack"/"barbican-api-config-data": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.297332 5050 reflector.go:484] object-"openshift-ingress"/"router-stats-default": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.297311 5050 reflector.go:484] object-"openshift-image-registry"/"registry-dockercfg-kzzsd": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.297313 5050 reflector.go:484] object-"openshift-dns"/"dns-default": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode 
an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.297333 5050 reflector.go:484] object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.297369 5050 reflector.go:484] object-"openstack"/"ovncontroller-scripts": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.297339 5050 reflector.go:484] object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-zlz4m": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.297362 5050 reflector.go:484] object-"openshift-ingress-canary"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.297376 5050 reflector.go:484] object-"openstack"/"barbican-worker-config-data": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.297401 5050 reflector.go:484] object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.297370 5050 reflector.go:484] object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-7zqpj": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.297416 5050 reflector.go:484] object-"openshift-network-node-identity"/"ovnkube-identity-cm": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.297385 5050 reflector.go:484] object-"openshift-multus"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.297405 5050 reflector.go:484] object-"openshift-controller-manager"/"serving-cert": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.297439 5050 reflector.go:484] 
object-"openshift-network-operator"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.297439 5050 reflector.go:484] object-"openshift-nmstate"/"nginx-conf": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.297439 5050 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Node ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.297460 5050 reflector.go:484] object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.297460 5050 reflector.go:484] object-"openshift-marketplace"/"community-operators-dockercfg-dmngl": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.297440 5050 reflector.go:484] object-"openstack"/"memcached-memcached-dockercfg-kl4q7": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.297518 5050 reflector.go:484] object-"openshift-apiserver"/"audit-1": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.297459 5050 reflector.go:484] object-"openshift-console"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.297538 5050 reflector.go:484] object-"openshift-console-operator"/"trusted-ca": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.297470 5050 reflector.go:484] object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.297548 5050 reflector.go:484] object-"metallb-system"/"frr-k8s-daemon-dockercfg-wks79": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.297479 5050 
reflector.go:484] object-"openstack"/"metric-storage-alertmanager-dockercfg-c2fxf": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.297549 5050 reflector.go:484] object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-4x88l": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.297565 5050 reflector.go:484] pkg/kubelet/config/apiserver.go:66: watch of *v1.Pod ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.297484 5050 reflector.go:484] object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.297487 5050 reflector.go:484] object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.297584 5050 reflector.go:484] object-"openshift-dns-operator"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.297588 5050 reflector.go:484] object-"openstack-operators"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.297520 5050 reflector.go:484] object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.297520 5050 reflector.go:484] object-"openstack"/"manila-scripts": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.297613 5050 reflector.go:484] object-"openstack"/"ovn-data-cert": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.297537 5050 reflector.go:484] object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: 
W1211 15:54:00.297623 5050 reflector.go:484] object-"openstack"/"rabbitmq-cell1-default-user": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.297561 5050 reflector.go:484] object-"openstack"/"ovnnorthd-scripts": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.297634 5050 reflector.go:484] object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-whqpr": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.297637 5050 reflector.go:484] object-"openshift-console"/"default-dockercfg-chnjx": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.297590 5050 reflector.go:484] object-"openstack-operators"/"openstack-operator-controller-operator-dockercfg-68tnd": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.297494 5050 reflector.go:484] object-"openshift-nmstate"/"nmstate-handler-dockercfg-94hht": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.297656 5050 reflector.go:484] object-"openstack"/"octavia-api-config-data": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.297665 5050 reflector.go:484] object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.297667 5050 reflector.go:484] object-"openshift-authentication"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.297682 5050 reflector.go:484] object-"openshift-image-registry"/"node-ca-dockercfg-4777p": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.297605 5050 reflector.go:484] object-"openshift-cluster-version"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 
11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.297690 5050 reflector.go:484] object-"openshift-apiserver"/"trusted-ca-bundle": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.297710 5050 reflector.go:484] object-"openshift-console"/"console-dockercfg-f62pw": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.297625 5050 reflector.go:484] object-"openshift-console"/"trusted-ca-bundle": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.297487 5050 reflector.go:484] object-"openstack"/"cinder-api-config-data": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.297724 5050 reflector.go:484] object-"openshift-network-console"/"networking-console-plugin": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.297657 5050 reflector.go:484] object-"openshift-network-operator"/"iptables-alerter-script": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.297745 5050 reflector.go:484] object-"openshift-dns-operator"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.297671 5050 reflector.go:484] object-"metallb-system"/"metallb-excludel2": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.297701 5050 reflector.go:484] object-"openstack"/"openstack-cell1-config-data": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.297767 5050 reflector.go:484] object-"openshift-cluster-version"/"default-dockercfg-gxtc4": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.297727 5050 reflector.go:484] object-"openstack"/"octavia-healthmanager-scripts": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc 
kubenswrapper[5050]: W1211 15:54:00.297784 5050 reflector.go:484] object-"openshift-network-node-identity"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.297787 5050 reflector.go:484] object-"openstack"/"placement-config-data": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.297788 5050 reflector.go:484] object-"openstack"/"alertmanager-metric-storage-web-config": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.297810 5050 reflector.go:484] object-"openshift-authentication-operator"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.297813 5050 reflector.go:484] object-"openshift-authentication"/"v4-0-config-system-serving-cert": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.297826 5050 reflector.go:484] object-"openshift-authentication-operator"/"trusted-ca-bundle": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.297839 5050 reflector.go:484] object-"openstack-operators"/"test-operator-controller-manager-dockercfg-cclxg": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.297842 5050 reflector.go:484] object-"openshift-ingress-operator"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.297839 5050 reflector.go:484] object-"openshift-image-registry"/"image-registry-certificates": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.297856 5050 reflector.go:484] object-"openstack"/"octavia-worker-config-data": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.297869 5050 reflector.go:484] object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-f52rf": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch 
stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.297877 5050 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.297879 5050 reflector.go:484] object-"metallb-system"/"metallb-operator-webhook-server-service-cert": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.297901 5050 reflector.go:484] object-"openstack"/"rabbitmq-cell1-erlang-cookie": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.297907 5050 reflector.go:484] object-"openstack"/"heat-api-config-data": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.297921 5050 reflector.go:484] object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.297932 5050 reflector.go:484] object-"openshift-dns-operator"/"metrics-tls": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.297948 5050 reflector.go:484] object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.297953 5050 reflector.go:484] object-"openstack"/"cinder-backup-config-data": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:00.297952 5050 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": http2: client connection lost" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.297969 5050 reflector.go:484] object-"openshift-service-ca"/"signing-cabundle": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.297988 5050 reflector.go:484] object-"cert-manager"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the 
request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.297990 5050 reflector.go:484] object-"openshift-route-controller-manager"/"serving-cert": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.297989 5050 reflector.go:484] object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:00.297985 5050 status_manager.go:851] "Failed to get status for pod" podUID="dbd5b107-5d08-43af-881c-11540f395267" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r54sd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/packageserver-d55dfcdfc-r54sd\": http2: client connection lost" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.298022 5050 reflector.go:484] object-"openstack"/"rabbitmq-server-conf": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.298022 5050 reflector.go:484] object-"openshift-authentication-operator"/"service-ca-bundle": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:00.299199 5050 reflector.go:484] object-"openstack"/"octavia-octavia-dockercfg-h4g5n": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:00.568131 5050 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:6443/readyz\": context deadline exceeded" start-of-body= Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:00.568202 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/readyz\": context deadline exceeded" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:00.642627 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-operator-799b66f579-tqs2v" podUID="53a1a6d1-6999-4fa1-a0ce-a20b83f1f347" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.73:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:00.642753 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-operator-799b66f579-tqs2v" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:01.546262 5050 scope.go:117] "RemoveContainer" containerID="33899e24adacef82929b0fbbcd58af7cb4f7aa3935b49ec9b5b4045a51da1d14" Dec 11 15:54:41 crc 
kubenswrapper[5050]: E1211 15:54:01.546526 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:01.611486 5050 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Liveness probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:01.611541 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:01.684244 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-operator-799b66f579-tqs2v" podUID="53a1a6d1-6999-4fa1-a0ce-a20b83f1f347" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.73:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:02.388798 5050 request.go:700] Waited for 1.0052496s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/cert-manager-operator/secrets?fieldSelector=metadata.name%3Dcert-manager-operator-controller-manager-dockercfg-p54cz&resourceVersion=109512 Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:02.603272 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-qdrgd" podUID="4b4e8aa9-a0f6-4bd2-92d2-c524aedf6389" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.74:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:02.673162 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-n7crp" podUID="85683dfb-37fb-4301-8c7a-fbb7453b303d" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.75:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:02.772302 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="14ad1594-090d-4024-a999-9ffe77ce58d8" containerName="galera" probeResult="failure" output="command timed out" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:02.772373 5050 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/openstack-galera-0" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:02.772993 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="14ad1594-090d-4024-a999-9ffe77ce58d8" 
containerName="galera" probeResult="failure" output="command timed out" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:02.773087 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:02.773206 5050 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="galera" containerStatusID={"Type":"cri-o","ID":"11ec72e9f06368997b90f40f70878174683db5a652ebb2eb2c432e34abde950e"} pod="openstack/openstack-galera-0" containerMessage="Container galera failed liveness probe, will be restarted" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:02.847212 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/prometheus-metric-storage-0" podUID="7fb00b03-fe6e-4c66-bd36-adf9443871a8" containerName="prometheus" probeResult="failure" output="Get \"http://10.217.1.135:9090/-/healthy\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:02.847223 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="7fb00b03-fe6e-4c66-bd36-adf9443871a8" containerName="prometheus" probeResult="failure" output="Get \"http://10.217.1.135:9090/-/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:03.179200 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-xgbp2" podUID="06df6c8c-640d-431b-b216-78345a9054e1" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.82:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:03.351320 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-65swh" podUID="58a60a6f-fcf5-4aa0-b6e4-97d7df2b2bd2" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.84:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:03.351443 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-65swh" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:03.409535 5050 request.go:700] Waited for 1.93581806s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Drabbitmq-cell1-default-user&resourceVersion=109846 Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:03.583247 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-pjhfc" podUID="3c9e825c-0aee-42b9-a7a5-3191486f301d" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.85:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:03.583404 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-pjhfc" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:03.846266 5050 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-8jnzj" podUID="a6345cf8-abc2-4c9a-bfe6-8b65187ada2d" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.93:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:04.394265 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-65swh" podUID="58a60a6f-fcf5-4aa0-b6e4-97d7df2b2bd2" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.84:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:04.429471 5050 request.go:700] Waited for 2.859888706s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dmanila-scripts&resourceVersion=109803 Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:04.435217 5050 patch_prober.go:28] interesting pod/dns-default-25p7l container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.217.0.27:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:04.435276 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-25p7l" podUID="66dfe3f5-9e7a-4e1c-b03a-b6f01dab6ef4" containerName="dns" probeResult="failure" output="Get \"http://10.217.0.27:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:04.501311 5050 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:04.501384 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:04.724314 5050 patch_prober.go:28] interesting pod/observability-operator-d8bb48f5d-wwdcc container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.1.128:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:04.724382 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/observability-operator-d8bb48f5d-wwdcc" podUID="1c8331c1-b8ee-456b-baa6-110917427b64" containerName="operator" probeResult="failure" output="Get \"http://10.217.1.128:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:04.953334 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-5g8lw" podUID="3477354d-838b-48cc-a6c3-612088d82640" 
containerName="manager" probeResult="failure" output="Get \"http://10.217.0.80:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:04.953356 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-5g8lw" podUID="3477354d-838b-48cc-a6c3-612088d82640" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.80:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:05.016282 5050 patch_prober.go:28] interesting pod/perses-operator-5446b9c989-dqk4m container/perses-operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.1.129:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:05.016329 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/perses-operator-5446b9c989-dqk4m" podUID="051b7665-675e-4109-a8e8-5a416c8b49cc" containerName="perses-operator" probeResult="failure" output="Get \"http://10.217.1.129:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:05.127258 5050 patch_prober.go:28] interesting pod/controller-manager-69cffd76bd-8bkp6 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.65:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:05.127289 5050 patch_prober.go:28] interesting pod/controller-manager-69cffd76bd-8bkp6 container/controller-manager namespace/openshift-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.65:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:05.127312 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-69cffd76bd-8bkp6" podUID="6a956942-e4db-4c66-a7c2-1c370c1569f4" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.65:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:05.127318 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-controller-manager/controller-manager-69cffd76bd-8bkp6" podUID="6a956942-e4db-4c66-a7c2-1c370c1569f4" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.65:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:05.325162 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-pjhfc" podUID="3c9e825c-0aee-42b9-a7a5-3191486f301d" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.85:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:05.401173 
5050 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-controller-manager-5d666c4679-krnwf" podUID="a6130f1a-c95b-445f-8235-e57fdcb270fe" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.48:8080/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:05.429587 5050 request.go:700] Waited for 3.758369155s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/configmaps?fieldSelector=metadata.name%3Dconfig&resourceVersion=109522 Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:06.137191 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="hostpath-provisioner/csi-hostpathplugin-mbbdj" podUID="2b344bba-8e1d-415b-9b5f-e21d3144fe42" containerName="hostpath-provisioner" probeResult="failure" output="Get \"http://10.217.0.31:9898/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:06.240151 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-5fb79d99b5-m4xgd" podUID="2086bc41-00a2-4c97-a491-08511f3ed6e5" containerName="horizon" probeResult="failure" output="Get \"http://10.217.1.117:8080/dashboard/auth/login/?next=/dashboard/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:06.240144 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/horizon-5fb79d99b5-m4xgd" podUID="2086bc41-00a2-4c97-a491-08511f3ed6e5" containerName="horizon" probeResult="failure" output="Get \"http://10.217.1.117:8080/dashboard/auth/login/?next=/dashboard/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:06.449026 5050 request.go:700] Waited for 4.68840937s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-galera-openstack-svc&resourceVersion=109087 Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:07.468821 5050 request.go:700] Waited for 5.625364242s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication-operator/configmaps?fieldSelector=metadata.name%3Dauthentication-operator-config&resourceVersion=109602 Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:07.478222 5050 patch_prober.go:28] interesting pod/dns-default-25p7l container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.217.0.27:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:07.478285 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-25p7l" podUID="66dfe3f5-9e7a-4e1c-b03a-b6f01dab6ef4" containerName="dns" probeResult="failure" output="Get \"http://10.217.0.27:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:07.847195 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="7fb00b03-fe6e-4c66-bd36-adf9443871a8" containerName="prometheus" probeResult="failure" output="Get 
\"http://10.217.1.135:9090/-/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:07.847248 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/prometheus-metric-storage-0" podUID="7fb00b03-fe6e-4c66-bd36-adf9443871a8" containerName="prometheus" probeResult="failure" output="Get \"http://10.217.1.135:9090/-/healthy\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:08.010194 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/speaker-w4tzc" podUID="54e98831-cd88-4dee-90db-e8fbb006e9c3" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:08.010201 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/speaker-w4tzc" podUID="54e98831-cd88-4dee-90db-e8fbb006e9c3" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:08.010297 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-w4tzc" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:08.010328 5050 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="metallb-system/speaker-w4tzc" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:08.011070 5050 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="speaker" containerStatusID={"Type":"cri-o","ID":"dc143bce1375c19fe135afda15f93779f6bf099de8cfba514d7efadf417d3fa5"} pod="metallb-system/speaker-w4tzc" containerMessage="Container speaker failed liveness probe, will be restarted" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:08.011124 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="metallb-system/speaker-w4tzc" podUID="54e98831-cd88-4dee-90db-e8fbb006e9c3" containerName="speaker" containerID="cri-o://dc143bce1375c19fe135afda15f93779f6bf099de8cfba514d7efadf417d3fa5" gracePeriod=2 Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:08.093296 5050 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-9jns7 container/package-server-manager namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"http://10.217.0.39:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:08.093359 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9jns7" podUID="d1f35830-d883-4b41-ab97-7c382dec0387" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.39:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:08.093297 5050 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-9jns7 container/package-server-manager namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"http://10.217.0.39:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 11 15:54:41 crc 
kubenswrapper[5050]: I1211 15:54:08.093421 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9jns7" podUID="d1f35830-d883-4b41-ab97-7c382dec0387" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.39:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:08.279174 5050 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-r54sd container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.23:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:08.279235 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r54sd" podUID="dbd5b107-5d08-43af-881c-11540f395267" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.23:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:08.365201 5050 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-8qmpk container/catalog-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.42:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:08.365280 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8qmpk" podUID="81255772-fddc-4936-8de3-da4649c32d1f" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.42:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:08.365301 5050 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-8qmpk container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.42:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:08.365352 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8qmpk" podUID="81255772-fddc-4936-8de3-da4649c32d1f" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.42:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:08.467922 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/kube-state-metrics-0" podUID="71218193-88fc-4811-bf04-33a4f4a87898" containerName="kube-state-metrics" probeResult="failure" output="Get \"http://10.217.1.133:8081/readyz\": dial tcp 10.217.1.133:8081: connect: connection refused" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:09.053383 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/speaker-w4tzc" 
podUID="54e98831-cd88-4dee-90db-e8fbb006e9c3" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:09.314251 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5c747c4-l2hpx" podUID="f5bec9f7-072c-4c21-80ea-af9f59313eef" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.88:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:09.314342 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5c747c4-l2hpx" podUID="f5bec9f7-072c-4c21-80ea-af9f59313eef" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.88:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:10.217636 5050 patch_prober.go:28] interesting pod/apiserver-7bbb656c7d-cnp7n container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:10.217696 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cnp7n" podUID="1d89350d-55e9-4ef6-8182-287894b6c14b" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.217.0.10:8443/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:10.298936 5050 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:10.298991 5050 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:10.299022 5050 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": net/http: TLS handshake timeout" Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:10.299165 5050 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openstack/mysql-db-openstack-galera-0: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/persistentvolumeclaims/mysql-db-openstack-galera-0\": net/http: TLS handshake timeout" pod="openstack/openstack-galera-0" volumeName="mysql-db" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:10.520240 5050 patch_prober.go:28] interesting pod/dns-default-25p7l container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.217.0.27:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:10.520373 5050 
prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-25p7l" podUID="66dfe3f5-9e7a-4e1c-b03a-b6f01dab6ef4" containerName="dns" probeResult="failure" output="Get \"http://10.217.0.27:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:10.520526 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-25p7l" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:10.569276 5050 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:6443/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:10.569326 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:10.682351 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-operator-799b66f579-tqs2v" podUID="53a1a6d1-6999-4fa1-a0ce-a20b83f1f347" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.73:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:10.682282 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-controller-operator-799b66f579-tqs2v" podUID="53a1a6d1-6999-4fa1-a0ce-a20b83f1f347" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.73:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:10.889779 5050 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/events\": read tcp 38.102.83.147:36124->38.102.83.147:6443: read: connection reset by peer" event=< Dec 11 15:54:41 crc kubenswrapper[5050]: &Event{ObjectMeta:{packageserver-d55dfcdfc-r54sd.18803427859863d6 openshift-operator-lifecycle-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-operator-lifecycle-manager,Name:packageserver-d55dfcdfc-r54sd,UID:dbd5b107-5d08-43af-881c-11540f395267,APIVersion:v1,ResourceVersion:27090,FieldPath:spec.containers{packageserver},},Reason:ProbeError,Message:Liveness probe error: Get "https://10.217.0.23:5443/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Dec 11 15:54:41 crc kubenswrapper[5050]: body: Dec 11 15:54:41 crc kubenswrapper[5050]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 15:53:18.278960086 +0000 UTC m=+7489.122682672,LastTimestamp:2025-12-11 15:53:18.278960086 +0000 UTC m=+7489.122682672,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Dec 11 15:54:41 crc kubenswrapper[5050]: > Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 
15:54:10.890459 5050 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:6443/readyz\": read tcp 192.168.126.11:52390->192.168.126.11:6443: read: connection reset by peer" start-of-body= Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:10.890506 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/readyz\": read tcp 192.168.126.11:52390->192.168.126.11:6443: read: connection reset by peer" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:11.612499 5050 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Liveness probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:11.612744 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:11.879516 5050 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T15:54:11Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T15:54:11Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T15:54:11Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T15:54:11Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.147:6443: connect: connection refused" Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:11.880081 5050 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.147:6443: connect: connection refused" Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:11.880541 5050 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.147:6443: connect: connection refused" Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:11.880737 5050 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": 
Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.147:6443: connect: connection refused" Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:11.881048 5050 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.147:6443: connect: connection refused" Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:11.881075 5050 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:11.883179 5050 reflector.go:561] object-"openshift-ovn-kubernetes"/"env-overrides": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ovn-kubernetes/configmaps?fieldSelector=metadata.name%3Denv-overrides&resourceVersion=109762": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:32888->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:11.883204 5050 reflector.go:561] object-"openstack"/"octavia-rsyslog-scripts": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Doctavia-rsyslog-scripts&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:60746->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:11.883232 5050 trace.go:236] Trace[1995588346]: "Reflector ListAndWatch" name:object-"openshift-ovn-kubernetes"/"env-overrides" (11-Dec-2025 15:54:01.166) (total time: 10716ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1995588346]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ovn-kubernetes/configmaps?fieldSelector=metadata.name%3Denv-overrides&resourceVersion=109762": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:32888->38.102.83.147:6443: read: connection reset by peer 10716ms (15:54:11.883) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1995588346]: [10.716287445s] [10.716287445s] END Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:11.883231 5050 reflector.go:561] object-"openshift-ingress-operator"/"trusted-ca": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/configmaps?fieldSelector=metadata.name%3Dtrusted-ca&resourceVersion=109762": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:60988->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:11.883250 5050 trace.go:236] Trace[1028996714]: "Reflector ListAndWatch" name:object-"openstack"/"octavia-rsyslog-scripts" (11-Dec-2025 15:54:01.101) (total time: 10781ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1028996714]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Doctavia-rsyslog-scripts&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:60746->38.102.83.147:6443: read: connection reset by peer 10781ms (15:54:11.883) Dec 11 15:54:41 crc kubenswrapper[5050]: 
Trace[1028996714]: [10.781781909s] [10.781781909s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:11.883255 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"env-overrides\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ovn-kubernetes/configmaps?fieldSelector=metadata.name%3Denv-overrides&resourceVersion=109762\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:32888->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:11.883223 5050 reflector.go:561] object-"openstack"/"octavia-healthmanager-scripts": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Doctavia-healthmanager-scripts&resourceVersion=109586": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:32858->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:11.883282 5050 trace.go:236] Trace[323892776]: "Reflector ListAndWatch" name:object-"openshift-ingress-operator"/"trusted-ca" (11-Dec-2025 15:54:01.142) (total time: 10740ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[323892776]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/configmaps?fieldSelector=metadata.name%3Dtrusted-ca&resourceVersion=109762": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:60988->38.102.83.147:6443: read: connection reset by peer 10740ms (15:54:11.883) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[323892776]: [10.740967006s] [10.740967006s] END Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:11.883300 5050 trace.go:236] Trace[1781268561]: "Reflector ListAndWatch" name:object-"openstack"/"octavia-healthmanager-scripts" (11-Dec-2025 15:54:01.160) (total time: 10722ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1781268561]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Doctavia-healthmanager-scripts&resourceVersion=109586": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:32858->38.102.83.147:6443: read: connection reset by peer 10722ms (15:54:11.883) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1781268561]: [10.722628995s] [10.722628995s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:11.883297 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-ingress-operator\"/\"trusted-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/configmaps?fieldSelector=metadata.name%3Dtrusted-ca&resourceVersion=109762\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:60988->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:11.883301 5050 reflector.go:561] object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-djswv": failed to list *v1.Secret: Get 
"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Doctavia-operator-controller-manager-dockercfg-djswv&resourceVersion=109803": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33064->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:11.883328 5050 reflector.go:561] object-"openstack"/"ovncontroller-ovncontroller-dockercfg-fjq8l": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dovncontroller-ovncontroller-dockercfg-fjq8l&resourceVersion=109586": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:32918->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:11.883350 5050 trace.go:236] Trace[1393292268]: "Reflector ListAndWatch" name:object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-djswv" (11-Dec-2025 15:54:01.210) (total time: 10672ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1393292268]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Doctavia-operator-controller-manager-dockercfg-djswv&resourceVersion=109803": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33064->38.102.83.147:6443: read: connection reset by peer 10672ms (15:54:11.883) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1393292268]: [10.672505212s] [10.672505212s] END Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:11.883410 5050 reflector.go:561] object-"openshift-apiserver"/"image-import-ca": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/configmaps?fieldSelector=metadata.name%3Dimage-import-ca&resourceVersion=109685": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:32768->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:11.883409 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"octavia-operator-controller-manager-dockercfg-djswv\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Doctavia-operator-controller-manager-dockercfg-djswv&resourceVersion=109803\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33064->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:11.883358 5050 trace.go:236] Trace[1214236765]: "Reflector ListAndWatch" name:object-"openstack"/"ovncontroller-ovncontroller-dockercfg-fjq8l" (11-Dec-2025 15:54:01.178) (total time: 10705ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1214236765]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dovncontroller-ovncontroller-dockercfg-fjq8l&resourceVersion=109586": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:32918->38.102.83.147:6443: read: connection reset by peer 10705ms (15:54:11.883) Dec 11 
15:54:41 crc kubenswrapper[5050]: Trace[1214236765]: [10.705160176s] [10.705160176s] END Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:11.883420 5050 reflector.go:561] object-"openshift-service-ca"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109762": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:60924->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:11.883437 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"ovncontroller-ovncontroller-dockercfg-fjq8l\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dovncontroller-ovncontroller-dockercfg-fjq8l&resourceVersion=109586\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:32918->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:11.883376 5050 reflector.go:561] object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/secrets?fieldSelector=metadata.name%3Dopenshift-controller-manager-sa-dockercfg-msq4c&resourceVersion=109671": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33122->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:11.883470 5050 trace.go:236] Trace[1377081074]: "Reflector ListAndWatch" name:object-"openshift-service-ca"/"kube-root-ca.crt" (11-Dec-2025 15:54:01.126) (total time: 10756ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1377081074]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109762": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:60924->38.102.83.147:6443: read: connection reset by peer 10756ms (15:54:11.883) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1377081074]: [10.756780099s] [10.756780099s] END Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:11.883488 5050 trace.go:236] Trace[184701040]: "Reflector ListAndWatch" name:object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" (11-Dec-2025 15:54:01.224) (total time: 10659ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[184701040]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/secrets?fieldSelector=metadata.name%3Dopenshift-controller-manager-sa-dockercfg-msq4c&resourceVersion=109671": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33122->38.102.83.147:6443: read: connection reset by peer 10659ms (15:54:11.883) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[184701040]: [10.65937718s] [10.65937718s] END Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:11.883483 5050 reflector.go:561] object-"openstack"/"rabbitmq-cell1-plugins-conf": failed to list *v1.ConfigMap: Get 
"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Drabbitmq-cell1-plugins-conf&resourceVersion=109522": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:60936->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:11.883487 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-service-ca\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109762\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:60924->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:11.883493 5050 reflector.go:561] object-"openstack"/"ovnnorthd-scripts": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dovnnorthd-scripts&resourceVersion=109559": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33146->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:11.883501 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"openshift-controller-manager-sa-dockercfg-msq4c\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/secrets?fieldSelector=metadata.name%3Dopenshift-controller-manager-sa-dockercfg-msq4c&resourceVersion=109671\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33122->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:11.883514 5050 trace.go:236] Trace[1844601784]: "Reflector ListAndWatch" name:object-"openstack"/"rabbitmq-cell1-plugins-conf" (11-Dec-2025 15:54:01.127) (total time: 10756ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1844601784]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Drabbitmq-cell1-plugins-conf&resourceVersion=109522": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:60936->38.102.83.147:6443: read: connection reset by peer 10756ms (15:54:11.883) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1844601784]: [10.756403749s] [10.756403749s] END Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:11.883437 5050 trace.go:236] Trace[1989524451]: "Reflector ListAndWatch" name:object-"openshift-apiserver"/"image-import-ca" (11-Dec-2025 15:54:01.142) (total time: 10741ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1989524451]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/configmaps?fieldSelector=metadata.name%3Dimage-import-ca&resourceVersion=109685": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:32768->38.102.83.147:6443: read: connection reset by peer 10741ms (15:54:11.883) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1989524451]: [10.741094119s] 
[10.741094119s] END Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:11.883543 5050 trace.go:236] Trace[1577716451]: "Reflector ListAndWatch" name:object-"openstack"/"ovnnorthd-scripts" (11-Dec-2025 15:54:01.226) (total time: 10656ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1577716451]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dovnnorthd-scripts&resourceVersion=109559": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33146->38.102.83.147:6443: read: connection reset by peer 10656ms (15:54:11.883) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1577716451]: [10.656798491s] [10.656798491s] END Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:11.883545 5050 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?resourceVersion=109167": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:60718->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:11.883560 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"ovnnorthd-scripts\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dovnnorthd-scripts&resourceVersion=109559\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33146->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:11.883426 5050 reflector.go:561] object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-whqpr": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dovncluster-ovndbcluster-sb-dockercfg-whqpr&resourceVersion=109137": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:60914->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:11.883583 5050 trace.go:236] Trace[687158145]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (11-Dec-2025 15:54:01.088) (total time: 10795ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[687158145]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?resourceVersion=109167": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:60718->38.102.83.147:6443: read: connection reset by peer 10795ms (15:54:11.883) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[687158145]: [10.795132247s] [10.795132247s] END Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:11.883593 5050 reflector.go:561] object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler-operator/secrets?fieldSelector=metadata.name%3Dkube-scheduler-operator-serving-cert&resourceVersion=109887": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:60938->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc 
kubenswrapper[5050]: I1211 15:54:11.883610 5050 trace.go:236] Trace[782527302]: "Reflector ListAndWatch" name:object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-whqpr" (11-Dec-2025 15:54:01.122) (total time: 10760ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[782527302]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dovncluster-ovndbcluster-sb-dockercfg-whqpr&resourceVersion=109137": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:60914->38.102.83.147:6443: read: connection reset by peer 10760ms (15:54:11.883) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[782527302]: [10.760701685s] [10.760701685s] END Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:11.883624 5050 trace.go:236] Trace[2111207144]: "Reflector ListAndWatch" name:object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" (11-Dec-2025 15:54:01.129) (total time: 10754ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[2111207144]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler-operator/secrets?fieldSelector=metadata.name%3Dkube-scheduler-operator-serving-cert&resourceVersion=109887": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:60938->38.102.83.147:6443: read: connection reset by peer 10753ms (15:54:11.883) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[2111207144]: [10.754008615s] [10.754008615s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:11.883626 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"ovncluster-ovndbcluster-sb-dockercfg-whqpr\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dovncluster-ovndbcluster-sb-dockercfg-whqpr&resourceVersion=109137\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:60914->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:11.883635 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-kube-scheduler-operator\"/\"kube-scheduler-operator-serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler-operator/secrets?fieldSelector=metadata.name%3Dkube-scheduler-operator-serving-cert&resourceVersion=109887\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:60938->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:11.883526 5050 reflector.go:561] object-"openshift-marketplace"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109067": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33034->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:11.883658 5050 reflector.go:561] object-"openstack"/"octavia-worker-scripts": failed to list *v1.Secret: Get 
"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Doctavia-worker-scripts&resourceVersion=109671": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:60762->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:11.883678 5050 reflector.go:561] object-"openshift-authentication"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109762": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33094->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:11.883693 5050 reflector.go:561] object-"openshift-controller-manager"/"openshift-global-ca": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/configmaps?fieldSelector=metadata.name%3Dopenshift-global-ca&resourceVersion=109685": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:32872->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:11.883709 5050 trace.go:236] Trace[22438371]: "Reflector ListAndWatch" name:object-"openstack"/"octavia-worker-scripts" (11-Dec-2025 15:54:01.104) (total time: 10779ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[22438371]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Doctavia-worker-scripts&resourceVersion=109671": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:60762->38.102.83.147:6443: read: connection reset by peer 10779ms (15:54:11.883) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[22438371]: [10.779645252s] [10.779645252s] END Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:11.883723 5050 reflector.go:561] object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ovn-kubernetes/secrets?fieldSelector=metadata.name%3Dovn-control-plane-metrics-cert&resourceVersion=109671": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:32852->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:11.883730 5050 trace.go:236] Trace[1842155119]: "Reflector ListAndWatch" name:object-"openshift-authentication"/"openshift-service-ca.crt" (11-Dec-2025 15:54:01.213) (total time: 10669ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1842155119]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109762": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33094->38.102.83.147:6443: read: connection reset by peer 10669ms (15:54:11.883) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1842155119]: [10.669735038s] [10.669735038s] END Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:11.883755 5050 trace.go:236] Trace[377977237]: "Reflector ListAndWatch" 
name:object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" (11-Dec-2025 15:54:01.160) (total time: 10723ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[377977237]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ovn-kubernetes/secrets?fieldSelector=metadata.name%3Dovn-control-plane-metrics-cert&resourceVersion=109671": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:32852->38.102.83.147:6443: read: connection reset by peer 10723ms (15:54:11.883) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[377977237]: [10.723058326s] [10.723058326s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:11.883755 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109762\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33094->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:11.883658 5050 reflector.go:561] object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-n57x7": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dnova-operator-controller-manager-dockercfg-n57x7&resourceVersion=109887": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:32780->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:11.883766 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"ovn-control-plane-metrics-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ovn-kubernetes/secrets?fieldSelector=metadata.name%3Dovn-control-plane-metrics-cert&resourceVersion=109671\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:32852->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:11.883595 5050 reflector.go:561] object-"openshift-console-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109888": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:60722->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:11.883800 5050 trace.go:236] Trace[1876270000]: "Reflector ListAndWatch" name:object-"openshift-console-operator"/"openshift-service-ca.crt" (11-Dec-2025 15:54:01.092) (total time: 10790ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1876270000]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109888": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous 
attempt: read tcp 38.102.83.147:60722->38.102.83.147:6443: read: connection reset by peer 10790ms (15:54:11.883) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1876270000]: [10.790967045s] [10.790967045s] END Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:11.883794 5050 reflector.go:561] object-"openshift-network-node-identity"/"ovnkube-identity-cm": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/configmaps?fieldSelector=metadata.name%3Dovnkube-identity-cm&resourceVersion=109602": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:60906->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:11.883801 5050 reflector.go:561] object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-2bw5c": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dneutron-operator-controller-manager-dockercfg-2bw5c&resourceVersion=109623": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33164->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:11.883809 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-console-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109888\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:60722->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:11.883810 5050 reflector.go:561] object-"cert-manager-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/cert-manager-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109762": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:32804->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:11.883828 5050 trace.go:236] Trace[1720263484]: "Reflector ListAndWatch" name:object-"openshift-network-node-identity"/"ovnkube-identity-cm" (11-Dec-2025 15:54:01.120) (total time: 10763ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1720263484]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/configmaps?fieldSelector=metadata.name%3Dovnkube-identity-cm&resourceVersion=109602": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:60906->38.102.83.147:6443: read: connection reset by peer 10763ms (15:54:11.883) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1720263484]: [10.763380486s] [10.763380486s] END Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:11.883838 5050 trace.go:236] Trace[1135020023]: "Reflector ListAndWatch" name:object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-2bw5c" (11-Dec-2025 15:54:01.226) (total time: 10656ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1135020023]: ---"Objects 
listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dneutron-operator-controller-manager-dockercfg-2bw5c&resourceVersion=109623": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33164->38.102.83.147:6443: read: connection reset by peer 10656ms (15:54:11.883) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1135020023]: [10.656906844s] [10.656906844s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:11.883838 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-node-identity\"/\"ovnkube-identity-cm\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/configmaps?fieldSelector=metadata.name%3Dovnkube-identity-cm&resourceVersion=109602\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:60906->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:11.883846 5050 trace.go:236] Trace[282693525]: "Reflector ListAndWatch" name:object-"cert-manager-operator"/"openshift-service-ca.crt" (11-Dec-2025 15:54:01.152) (total time: 10731ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[282693525]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/cert-manager-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109762": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:32804->38.102.83.147:6443: read: connection reset by peer 10731ms (15:54:11.883) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[282693525]: [10.731155162s] [10.731155162s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:11.883850 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"neutron-operator-controller-manager-dockercfg-2bw5c\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dneutron-operator-controller-manager-dockercfg-2bw5c&resourceVersion=109623\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33164->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:11.883731 5050 trace.go:236] Trace[1013951247]: "Reflector ListAndWatch" name:object-"openshift-controller-manager"/"openshift-global-ca" (11-Dec-2025 15:54:01.164) (total time: 10719ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1013951247]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/configmaps?fieldSelector=metadata.name%3Dopenshift-global-ca&resourceVersion=109685": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:32872->38.102.83.147:6443: read: connection reset by peer 10719ms (15:54:11.883) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1013951247]: [10.71946843s] [10.71946843s] END Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:11.883870 5050 reflector.go:561] object-"metallb-system"/"controller-dockercfg-5zwsv": failed to list *v1.Secret: Get 
"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/secrets?fieldSelector=metadata.name%3Dcontroller-dockercfg-5zwsv&resourceVersion=109087": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:60734->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:11.883875 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"openshift-global-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/configmaps?fieldSelector=metadata.name%3Dopenshift-global-ca&resourceVersion=109685\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:32872->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:11.883805 5050 trace.go:236] Trace[168840701]: "Reflector ListAndWatch" name:object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-n57x7" (11-Dec-2025 15:54:01.145) (total time: 10738ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[168840701]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dnova-operator-controller-manager-dockercfg-n57x7&resourceVersion=109887": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:32780->38.102.83.147:6443: read: connection reset by peer 10737ms (15:54:11.883) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[168840701]: [10.738110549s] [10.738110549s] END Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:11.883900 5050 trace.go:236] Trace[411435896]: "Reflector ListAndWatch" name:object-"metallb-system"/"controller-dockercfg-5zwsv" (11-Dec-2025 15:54:01.094) (total time: 10789ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[411435896]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/secrets?fieldSelector=metadata.name%3Dcontroller-dockercfg-5zwsv&resourceVersion=109087": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:60734->38.102.83.147:6443: read: connection reset by peer 10789ms (15:54:11.883) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[411435896]: [10.789703382s] [10.789703382s] END Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:11.883893 5050 reflector.go:561] object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/secrets?fieldSelector=metadata.name%3Droute-controller-manager-sa-dockercfg-h2zr2&resourceVersion=109846": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33114->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:11.883915 5050 reflector.go:158] "Unhandled Error" err="object-\"metallb-system\"/\"controller-dockercfg-5zwsv\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/secrets?fieldSelector=metadata.name%3Dcontroller-dockercfg-5zwsv&resourceVersion=109087\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a 
previous attempt: read tcp 38.102.83.147:60734->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:11.883640 5050 reflector.go:561] object-"openshift-machine-config-operator"/"mco-proxy-tls": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmco-proxy-tls&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:60888->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:11.883925 5050 reflector.go:561] object-"openstack"/"rabbitmq-cell1-server-conf": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Drabbitmq-cell1-server-conf&resourceVersion=109762": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:60974->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:11.883944 5050 trace.go:236] Trace[2126010841]: "Reflector ListAndWatch" name:object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" (11-Dec-2025 15:54:01.222) (total time: 10661ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[2126010841]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/secrets?fieldSelector=metadata.name%3Droute-controller-manager-sa-dockercfg-h2zr2&resourceVersion=109846": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33114->38.102.83.147:6443: read: connection reset by peer 10661ms (15:54:11.883) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[2126010841]: [10.661145308s] [10.661145308s] END Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:11.883954 5050 trace.go:236] Trace[1213880805]: "Reflector ListAndWatch" name:object-"openstack"/"rabbitmq-cell1-server-conf" (11-Dec-2025 15:54:01.135) (total time: 10748ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1213880805]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Drabbitmq-cell1-server-conf&resourceVersion=109762": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:60974->38.102.83.147:6443: read: connection reset by peer 10748ms (15:54:11.883) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1213880805]: [10.748069726s] [10.748069726s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:11.883897 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"nova-operator-controller-manager-dockercfg-n57x7\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dnova-operator-controller-manager-dockercfg-n57x7&resourceVersion=109887\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:32780->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:11.883961 5050 reflector.go:561] object-"openstack-operators"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get 
"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109888": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:32900->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:11.883970 5050 reflector.go:561] object-"openstack"/"cinder-cinder-dockercfg-lvj2r": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcinder-cinder-dockercfg-lvj2r&resourceVersion=109803": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33188->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:11.884053 5050 reflector.go:561] object-"openstack-operators"/"openstack-operator-index-dockercfg-f86tg": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dopenstack-operator-index-dockercfg-f86tg&resourceVersion=109137": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:32956->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:11.884066 5050 trace.go:236] Trace[138948153]: "Reflector ListAndWatch" name:object-"openstack"/"cinder-cinder-dockercfg-lvj2r" (11-Dec-2025 15:54:01.229) (total time: 10654ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[138948153]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcinder-cinder-dockercfg-lvj2r&resourceVersion=109803": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33188->38.102.83.147:6443: read: connection reset by peer 10654ms (15:54:11.883) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[138948153]: [10.654373216s] [10.654373216s] END Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:11.884072 5050 reflector.go:561] object-"openstack"/"alertmanager-metric-storage-generated": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dalertmanager-metric-storage-generated&resourceVersion=109846": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:32926->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:11.884086 5050 trace.go:236] Trace[1557683027]: "Reflector ListAndWatch" name:object-"openstack-operators"/"openstack-operator-index-dockercfg-f86tg" (11-Dec-2025 15:54:01.183) (total time: 10700ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1557683027]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dopenstack-operator-index-dockercfg-f86tg&resourceVersion=109137": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:32956->38.102.83.147:6443: read: connection reset by peer 10700ms (15:54:11.884) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1557683027]: [10.700544223s] [10.700544223s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:11.884083 5050 reflector.go:158] "Unhandled Error" 
err="object-\"openstack\"/\"cinder-cinder-dockercfg-lvj2r\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcinder-cinder-dockercfg-lvj2r&resourceVersion=109803\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33188->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:11.883961 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-route-controller-manager\"/\"route-controller-manager-sa-dockercfg-h2zr2\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/secrets?fieldSelector=metadata.name%3Droute-controller-manager-sa-dockercfg-h2zr2&resourceVersion=109846\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33114->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:11.884097 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"openstack-operator-index-dockercfg-f86tg\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dopenstack-operator-index-dockercfg-f86tg&resourceVersion=109137\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:32956->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:11.884080 5050 reflector.go:561] object-"openshift-oauth-apiserver"/"trusted-ca-bundle": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-oauth-apiserver/configmaps?fieldSelector=metadata.name%3Dtrusted-ca-bundle&resourceVersion=109559": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:60872->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:11.884149 5050 trace.go:236] Trace[1801285521]: "Reflector ListAndWatch" name:object-"openshift-oauth-apiserver"/"trusted-ca-bundle" (11-Dec-2025 15:54:01.113) (total time: 10770ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1801285521]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-oauth-apiserver/configmaps?fieldSelector=metadata.name%3Dtrusted-ca-bundle&resourceVersion=109559": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:60872->38.102.83.147:6443: read: connection reset by peer 10770ms (15:54:11.884) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1801285521]: [10.770152377s] [10.770152377s] END Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:11.884159 5050 trace.go:236] Trace[259992763]: "Reflector ListAndWatch" name:object-"openstack-operators"/"kube-root-ca.crt" (11-Dec-2025 15:54:01.167) (total time: 10716ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[259992763]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109888": dial tcp 38.102.83.147:6443: connect: connection 
refused - error from a previous attempt: read tcp 38.102.83.147:32900->38.102.83.147:6443: read: connection reset by peer 10716ms (15:54:11.883) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[259992763]: [10.716820959s] [10.716820959s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:11.884209 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109888\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:32900->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:11.884104 5050 trace.go:236] Trace[809903828]: "Reflector ListAndWatch" name:object-"openstack"/"alertmanager-metric-storage-generated" (11-Dec-2025 15:54:01.178) (total time: 10705ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[809903828]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dalertmanager-metric-storage-generated&resourceVersion=109846": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:32926->38.102.83.147:6443: read: connection reset by peer 10705ms (15:54:11.884) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[809903828]: [10.705477385s] [10.705477385s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:11.884231 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"alertmanager-metric-storage-generated\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dalertmanager-metric-storage-generated&resourceVersion=109846\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:32926->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:11.884178 5050 reflector.go:561] object-"metallb-system"/"metallb-memberlist": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/secrets?fieldSelector=metadata.name%3Dmetallb-memberlist&resourceVersion=109887": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33134->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:11.884243 5050 reflector.go:561] object-"openshift-dns"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-dns/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109888": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33286->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:11.884254 5050 reflector.go:561] object-"openstack-operators"/"openstack-operator-controller-operator-dockercfg-68tnd": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dopenstack-operator-controller-operator-dockercfg-68tnd&resourceVersion=109586": 
dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33270->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:11.883908 5050 reflector.go:561] object-"openshift-console"/"console-oauth-config": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/secrets?fieldSelector=metadata.name%3Dconsole-oauth-config&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:32938->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:11.884271 5050 reflector.go:561] object-"openshift-multus"/"multus-admission-controller-secret": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-multus/secrets?fieldSelector=metadata.name%3Dmultus-admission-controller-secret&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33174->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:11.884290 5050 trace.go:236] Trace[1447302693]: "Reflector ListAndWatch" name:object-"openstack-operators"/"openstack-operator-controller-operator-dockercfg-68tnd" (11-Dec-2025 15:54:01.240) (total time: 10643ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1447302693]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dopenstack-operator-controller-operator-dockercfg-68tnd&resourceVersion=109586": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33270->38.102.83.147:6443: read: connection reset by peer 10643ms (15:54:11.884) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1447302693]: [10.643513785s] [10.643513785s] END Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:11.884272 5050 reflector.go:561] object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/secrets?fieldSelector=metadata.name%3Dopenshift-apiserver-sa-dockercfg-djjff&resourceVersion=109407": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33364->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:11.884277 5050 trace.go:236] Trace[3434803]: "Reflector ListAndWatch" name:object-"openshift-dns"/"kube-root-ca.crt" (11-Dec-2025 15:54:01.246) (total time: 10637ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[3434803]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-dns/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109888": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33286->38.102.83.147:6443: read: connection reset by peer 10637ms (15:54:11.884) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[3434803]: [10.63772426s] [10.63772426s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:11.884303 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"openstack-operator-controller-operator-dockercfg-68tnd\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dopenstack-operator-controller-operator-dockercfg-68tnd&resourceVersion=109586\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33270->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:11.884266 5050 trace.go:236] Trace[2108068661]: "Reflector ListAndWatch" name:object-"metallb-system"/"metallb-memberlist" (11-Dec-2025 15:54:01.226) (total time: 10657ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[2108068661]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/secrets?fieldSelector=metadata.name%3Dmetallb-memberlist&resourceVersion=109887": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33134->38.102.83.147:6443: read: connection reset by peer 10657ms (15:54:11.884) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[2108068661]: [10.657549201s] [10.657549201s] END Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:11.884326 5050 trace.go:236] Trace[495117544]: "Reflector ListAndWatch" name:object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" (11-Dec-2025 15:54:01.266) (total time: 10617ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[495117544]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/secrets?fieldSelector=metadata.name%3Dopenshift-apiserver-sa-dockercfg-djjff&resourceVersion=109407": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33364->38.102.83.147:6443: read: connection reset by peer 10617ms (15:54:11.884) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[495117544]: [10.617448117s] [10.617448117s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:11.884335 5050 reflector.go:158] "Unhandled Error" err="object-\"metallb-system\"/\"metallb-memberlist\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/secrets?fieldSelector=metadata.name%3Dmetallb-memberlist&resourceVersion=109887\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33134->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:11.883595 5050 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?resourceVersion=109167\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:60718->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:11.883860 5050 reflector.go:158] "Unhandled Error" err="object-\"cert-manager-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/cert-manager-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109762\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 
38.102.83.147:32804->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:11.884339 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"openshift-apiserver-sa-dockercfg-djjff\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/secrets?fieldSelector=metadata.name%3Dopenshift-apiserver-sa-dockercfg-djjff&resourceVersion=109407\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33364->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:11.884178 5050 reflector.go:561] object-"openstack"/"octavia-worker-config-data": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Doctavia-worker-config-data&resourceVersion=109671": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:60922->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:11.884366 5050 reflector.go:561] object-"openstack"/"neutron-neutron-dockercfg-ctkbt": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dneutron-neutron-dockercfg-ctkbt&resourceVersion=109846": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33432->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:11.884321 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-dns\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-dns/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109888\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33286->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:11.884394 5050 trace.go:236] Trace[1237103766]: "Reflector ListAndWatch" name:object-"openstack"/"octavia-worker-config-data" (11-Dec-2025 15:54:01.124) (total time: 10760ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1237103766]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Doctavia-worker-config-data&resourceVersion=109671": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:60922->38.102.83.147:6443: read: connection reset by peer 10759ms (15:54:11.884) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1237103766]: [10.760111669s] [10.760111669s] END Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:11.884398 5050 trace.go:236] Trace[1207811993]: "Reflector ListAndWatch" name:object-"openstack"/"neutron-neutron-dockercfg-ctkbt" (11-Dec-2025 15:54:01.280) (total time: 10603ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1207811993]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dneutron-neutron-dockercfg-ctkbt&resourceVersion=109846": dial tcp 38.102.83.147:6443: connect: 
connection refused - error from a previous attempt: read tcp 38.102.83.147:33432->38.102.83.147:6443: read: connection reset by peer 10603ms (15:54:11.884) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1207811993]: [10.603963945s] [10.603963945s] END Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:11.884397 5050 reflector.go:561] object-"openshift-multus"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-multus/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109522": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33130->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:11.884411 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"neutron-neutron-dockercfg-ctkbt\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dneutron-neutron-dockercfg-ctkbt&resourceVersion=109846\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33432->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:11.884373 5050 reflector.go:561] object-"openshift-ingress"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109685": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:60818->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:11.884419 5050 reflector.go:561] object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ovn-kubernetes/secrets?fieldSelector=metadata.name%3Dovn-kubernetes-node-dockercfg-pwtwl&resourceVersion=109671": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33288->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:11.883266 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"octavia-rsyslog-scripts\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Doctavia-rsyslog-scripts&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:60746->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:11.883727 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"octavia-worker-scripts\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Doctavia-worker-scripts&resourceVersion=109671\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:60762->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:11.883965 5050 
reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"rabbitmq-cell1-server-conf\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Drabbitmq-cell1-server-conf&resourceVersion=109762\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:60974->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:11.884178 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-oauth-apiserver\"/\"trusted-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-oauth-apiserver/configmaps?fieldSelector=metadata.name%3Dtrusted-ca-bundle&resourceVersion=109559\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:60872->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:11.884417 5050 reflector.go:561] object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/secrets?fieldSelector=metadata.name%3Dpackageserver-service-cert&resourceVersion=109671": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:32982->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:11.884443 5050 reflector.go:561] object-"openshift-ingress-canary"/"default-dockercfg-2llfx": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-canary/secrets?fieldSelector=metadata.name%3Ddefault-dockercfg-2llfx&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33078->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:11.884477 5050 trace.go:236] Trace[853277113]: "Reflector ListAndWatch" name:object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" (11-Dec-2025 15:54:01.187) (total time: 10697ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[853277113]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/secrets?fieldSelector=metadata.name%3Dpackageserver-service-cert&resourceVersion=109671": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:32982->38.102.83.147:6443: read: connection reset by peer 10697ms (15:54:11.884) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[853277113]: [10.6970961s] [10.6970961s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:11.883549 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"image-import-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/configmaps?fieldSelector=metadata.name%3Dimage-import-ca&resourceVersion=109685\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:32768->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 
15:54:41 crc kubenswrapper[5050]: E1211 15:54:11.884406 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"octavia-worker-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Doctavia-worker-config-data&resourceVersion=109671\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:60922->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:11.884489 5050 reflector.go:561] object-"openshift-console-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109762": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33242->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:11.884509 5050 reflector.go:561] object-"openstack"/"horizon": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dhorizon&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:32916->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:11.884536 5050 trace.go:236] Trace[483419299]: "Reflector ListAndWatch" name:object-"openshift-console-operator"/"kube-root-ca.crt" (11-Dec-2025 15:54:01.234) (total time: 10649ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[483419299]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109762": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33242->38.102.83.147:6443: read: connection reset by peer 10649ms (15:54:11.884) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[483419299]: [10.649604249s] [10.649604249s] END Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:11.884547 5050 reflector.go:561] object-"cert-manager"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/cert-manager/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109888": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:60984->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:11.884553 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-console-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109762\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33242->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:11.884491 5050 reflector.go:158] "Unhandled Error" 
err="object-\"openshift-operator-lifecycle-manager\"/\"packageserver-service-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/secrets?fieldSelector=metadata.name%3Dpackageserver-service-cert&resourceVersion=109671\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:32982->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:11.883954 5050 trace.go:236] Trace[1525165714]: "Reflector ListAndWatch" name:object-"openshift-machine-config-operator"/"mco-proxy-tls" (11-Dec-2025 15:54:01.115) (total time: 10768ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1525165714]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmco-proxy-tls&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:60888->38.102.83.147:6443: read: connection reset by peer 10768ms (15:54:11.883) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1525165714]: [10.768512653s] [10.768512653s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:11.884588 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"mco-proxy-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmco-proxy-tls&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:60888->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:11.883523 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"rabbitmq-cell1-plugins-conf\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Drabbitmq-cell1-plugins-conf&resourceVersion=109522\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:60936->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:11.883311 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"octavia-healthmanager-scripts\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Doctavia-healthmanager-scripts&resourceVersion=109586\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:32858->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:11.884592 5050 reflector.go:561] object-"openshift-operators"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operators/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109522": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33098->38.102.83.147:6443: read: connection reset by peer Dec 
11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:11.884634 5050 reflector.go:561] object-"openstack"/"cinder-backup-config-data": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcinder-backup-config-data&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:60840->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:11.884643 5050 trace.go:236] Trace[586766450]: "Reflector ListAndWatch" name:object-"openshift-operators"/"kube-root-ca.crt" (11-Dec-2025 15:54:01.216) (total time: 10668ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[586766450]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operators/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109522": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33098->38.102.83.147:6443: read: connection reset by peer 10668ms (15:54:11.884) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[586766450]: [10.668162706s] [10.668162706s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:11.884655 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-operators\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operators/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109522\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33098->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:11.883725 5050 reflector.go:561] object-"openstack"/"ovndbcluster-sb-scripts": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dovndbcluster-sb-scripts&resourceVersion=109158": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:32792->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:11.884686 5050 trace.go:236] Trace[414355079]: "Reflector ListAndWatch" name:object-"openstack"/"ovndbcluster-sb-scripts" (11-Dec-2025 15:54:01.151) (total time: 10733ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[414355079]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dovndbcluster-sb-scripts&resourceVersion=109158": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:32792->38.102.83.147:6443: read: connection reset by peer 10732ms (15:54:11.883) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[414355079]: [10.733308771s] [10.733308771s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:11.884697 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"ovndbcluster-sb-scripts\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dovndbcluster-sb-scripts&resourceVersion=109158\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 
38.102.83.147:32792->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:11.884695 5050 reflector.go:561] object-"openshift-oauth-apiserver"/"serving-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-oauth-apiserver/secrets?fieldSelector=metadata.name%3Dserving-cert&resourceVersion=109846": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:32996->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:11.884720 5050 reflector.go:561] object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109158": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:32816->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:11.884750 5050 trace.go:236] Trace[512452829]: "Reflector ListAndWatch" name:object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" (11-Dec-2025 15:54:01.155) (total time: 10729ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[512452829]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109158": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:32816->38.102.83.147:6443: read: connection reset by peer 10729ms (15:54:11.884) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[512452829]: [10.729464848s] [10.729464848s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:11.884760 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-kube-apiserver-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109158\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:32816->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:11.884305 5050 trace.go:236] Trace[669417980]: "Reflector ListAndWatch" name:object-"openshift-multus"/"multus-admission-controller-secret" (11-Dec-2025 15:54:01.228) (total time: 10656ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[669417980]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-multus/secrets?fieldSelector=metadata.name%3Dmultus-admission-controller-secret&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33174->38.102.83.147:6443: read: connection reset by peer 10655ms (15:54:11.884) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[669417980]: [10.6560142s] [10.6560142s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:11.884780 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"multus-admission-controller-secret\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-multus/secrets?fieldSelector=metadata.name%3Dmultus-admission-controller-secret&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33174->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:11.884539 5050 trace.go:236] Trace[1384237605]: "Reflector ListAndWatch" name:object-"openstack"/"horizon" (11-Dec-2025 15:54:01.175) (total time: 10708ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1384237605]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dhorizon&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:32916->38.102.83.147:6443: read: connection reset by peer 10708ms (15:54:11.884) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1384237605]: [10.708736652s] [10.708736652s] END Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:11.884792 5050 reflector.go:561] object-"openshift-machine-api"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-api/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109522": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33464->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:11.884802 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"horizon\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dhorizon&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:32916->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:11.884729 5050 trace.go:236] Trace[178234609]: "Reflector ListAndWatch" name:object-"openshift-oauth-apiserver"/"serving-cert" (11-Dec-2025 15:54:01.188) (total time: 10696ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[178234609]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-oauth-apiserver/secrets?fieldSelector=metadata.name%3Dserving-cert&resourceVersion=109846": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:32996->38.102.83.147:6443: read: connection reset by peer 10695ms (15:54:11.884) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[178234609]: [10.696015021s] [10.696015021s] END Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:11.884821 5050 trace.go:236] Trace[396644186]: "Reflector ListAndWatch" name:object-"openshift-machine-api"/"kube-root-ca.crt" (11-Dec-2025 15:54:01.284) (total time: 10600ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[396644186]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-api/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109522": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33464->38.102.83.147:6443: read: connection reset by peer 10600ms (15:54:11.884) 
Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[396644186]: [10.600502592s] [10.600502592s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:11.884824 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-oauth-apiserver\"/\"serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-oauth-apiserver/secrets?fieldSelector=metadata.name%3Dserving-cert&resourceVersion=109846\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:32996->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:11.884471 5050 reflector.go:561] object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109602": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:60940->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:11.884854 5050 trace.go:236] Trace[980199779]: "Reflector ListAndWatch" name:object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" (11-Dec-2025 15:54:01.129) (total time: 10755ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[980199779]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109602": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:60940->38.102.83.147:6443: read: connection reset by peer 10754ms (15:54:11.884) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[980199779]: [10.755214977s] [10.755214977s] END Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:11.884850 5050 reflector.go:561] object-"openshift-multus"/"cni-copy-resources": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-multus/configmaps?fieldSelector=metadata.name%3Dcni-copy-resources&resourceVersion=109888": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33254->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:11.884864 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-operator-lifecycle-manager\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109602\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:60940->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:11.884421 5050 reflector.go:561] object-"openshift-apiserver"/"trusted-ca-bundle": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/configmaps?fieldSelector=metadata.name%3Dtrusted-ca-bundle&resourceVersion=109602": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 
38.102.83.147:60956->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:11.884882 5050 trace.go:236] Trace[1882260358]: "Reflector ListAndWatch" name:object-"openshift-multus"/"cni-copy-resources" (11-Dec-2025 15:54:01.234) (total time: 10649ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1882260358]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-multus/configmaps?fieldSelector=metadata.name%3Dcni-copy-resources&resourceVersion=109888": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33254->38.102.83.147:6443: read: connection reset by peer 10649ms (15:54:11.884) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1882260358]: [10.649970418s] [10.649970418s] END Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:11.884895 5050 trace.go:236] Trace[1211583873]: "Reflector ListAndWatch" name:object-"openshift-apiserver"/"trusted-ca-bundle" (11-Dec-2025 15:54:01.130) (total time: 10753ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1211583873]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/configmaps?fieldSelector=metadata.name%3Dtrusted-ca-bundle&resourceVersion=109602": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:60956->38.102.83.147:6443: read: connection reset by peer 10753ms (15:54:11.884) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1211583873]: [10.753889251s] [10.753889251s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:11.884893 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"cni-copy-resources\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-multus/configmaps?fieldSelector=metadata.name%3Dcni-copy-resources&resourceVersion=109888\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33254->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:11.884891 5050 reflector.go:561] object-"openshift-oauth-apiserver"/"etcd-serving-ca": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-oauth-apiserver/configmaps?fieldSelector=metadata.name%3Detcd-serving-ca&resourceVersion=109762": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33256->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:11.884903 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"trusted-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/configmaps?fieldSelector=metadata.name%3Dtrusted-ca-bundle&resourceVersion=109602\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:60956->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:11.884180 5050 reflector.go:561] object-"openshift-image-registry"/"node-ca-dockercfg-4777p": failed to list *v1.Secret: Get 
"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/secrets?fieldSelector=metadata.name%3Dnode-ca-dockercfg-4777p&resourceVersion=109434": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33226->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:11.884915 5050 reflector.go:561] object-"openshift-controller-manager"/"serving-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/secrets?fieldSelector=metadata.name%3Dserving-cert&resourceVersion=109137": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:32954->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:11.884927 5050 trace.go:236] Trace[1257721111]: "Reflector ListAndWatch" name:object-"openshift-oauth-apiserver"/"etcd-serving-ca" (11-Dec-2025 15:54:01.234) (total time: 10649ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1257721111]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-oauth-apiserver/configmaps?fieldSelector=metadata.name%3Detcd-serving-ca&resourceVersion=109762": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33256->38.102.83.147:6443: read: connection reset by peer 10649ms (15:54:11.884) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1257721111]: [10.649987518s] [10.649987518s] END Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:11.884937 5050 trace.go:236] Trace[926433944]: "Reflector ListAndWatch" name:object-"openshift-image-registry"/"node-ca-dockercfg-4777p" (11-Dec-2025 15:54:01.233) (total time: 10651ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[926433944]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/secrets?fieldSelector=metadata.name%3Dnode-ca-dockercfg-4777p&resourceVersion=109434": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33226->38.102.83.147:6443: read: connection reset by peer 10650ms (15:54:11.884) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[926433944]: [10.651345705s] [10.651345705s] END Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:11.884941 5050 trace.go:236] Trace[1318175152]: "Reflector ListAndWatch" name:object-"openshift-controller-manager"/"serving-cert" (11-Dec-2025 15:54:01.181) (total time: 10703ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1318175152]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/secrets?fieldSelector=metadata.name%3Dserving-cert&resourceVersion=109137": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:32954->38.102.83.147:6443: read: connection reset by peer 10703ms (15:54:11.884) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1318175152]: [10.703806s] [10.703806s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:11.884947 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-image-registry\"/\"node-ca-dockercfg-4777p\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/secrets?fieldSelector=metadata.name%3Dnode-ca-dockercfg-4777p&resourceVersion=109434\": 
dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33226->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:11.884939 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-oauth-apiserver\"/\"etcd-serving-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-oauth-apiserver/configmaps?fieldSelector=metadata.name%3Detcd-serving-ca&resourceVersion=109762\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33256->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:11.884951 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/secrets?fieldSelector=metadata.name%3Dserving-cert&resourceVersion=109137\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:32954->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:11.884971 5050 reflector.go:561] object-"openshift-authentication-operator"/"trusted-ca-bundle": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication-operator/configmaps?fieldSelector=metadata.name%3Dtrusted-ca-bundle&resourceVersion=109807": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33326->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:11.885030 5050 trace.go:236] Trace[1153758278]: "Reflector ListAndWatch" name:object-"openshift-authentication-operator"/"trusted-ca-bundle" (11-Dec-2025 15:54:01.251) (total time: 10633ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1153758278]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication-operator/configmaps?fieldSelector=metadata.name%3Dtrusted-ca-bundle&resourceVersion=109807": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33326->38.102.83.147:6443: read: connection reset by peer 10633ms (15:54:11.884) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1153758278]: [10.633780794s] [10.633780794s] END Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:11.885039 5050 reflector.go:561] object-"openshift-oauth-apiserver"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-oauth-apiserver/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109117": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:32906->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:11.885043 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication-operator\"/\"trusted-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication-operator/configmaps?fieldSelector=metadata.name%3Dtrusted-ca-bundle&resourceVersion=109807\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33326->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:11.885050 5050 reflector.go:561] object-"openstack"/"keystone-keystone-dockercfg-jrbb7": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dkeystone-keystone-dockercfg-jrbb7&resourceVersion=109887": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33202->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:11.885066 5050 trace.go:236] Trace[591090046]: "Reflector ListAndWatch" name:object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" (11-Dec-2025 15:54:01.168) (total time: 10716ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[591090046]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-oauth-apiserver/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109117": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:32906->38.102.83.147:6443: read: connection reset by peer 10716ms (15:54:11.885) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[591090046]: [10.716573212s] [10.716573212s] END Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:11.885065 5050 reflector.go:561] object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-zlz4m": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dbarbican-operator-controller-manager-dockercfg-zlz4m&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33390->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:11.885188 5050 reflector.go:561] object-"openshift-ingress"/"router-dockercfg-zdk86": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress/secrets?fieldSelector=metadata.name%3Drouter-dockercfg-zdk86&resourceVersion=109671": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:60822->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:11.885190 5050 reflector.go:561] object-"metallb-system"/"speaker-certs-secret": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/secrets?fieldSelector=metadata.name%3Dspeaker-certs-secret&resourceVersion=109846": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:60890->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:11.885195 5050 reflector.go:561] object-"openstack"/"combined-ca-bundle": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcombined-ca-bundle&resourceVersion=109803": dial tcp 38.102.83.147:6443: connect: 
connection refused - error from a previous attempt: read tcp 38.102.83.147:60740->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:11.885220 5050 trace.go:236] Trace[490696119]: "Reflector ListAndWatch" name:object-"openshift-ingress"/"router-dockercfg-zdk86" (11-Dec-2025 15:54:01.110) (total time: 10774ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[490696119]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress/secrets?fieldSelector=metadata.name%3Drouter-dockercfg-zdk86&resourceVersion=109671": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:60822->38.102.83.147:6443: read: connection reset by peer 10774ms (15:54:11.885) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[490696119]: [10.77436892s] [10.77436892s] END Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:11.885221 5050 trace.go:236] Trace[1450949749]: "Reflector ListAndWatch" name:object-"metallb-system"/"speaker-certs-secret" (11-Dec-2025 15:54:01.116) (total time: 10768ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1450949749]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/secrets?fieldSelector=metadata.name%3Dspeaker-certs-secret&resourceVersion=109846": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:60890->38.102.83.147:6443: read: connection reset by peer 10768ms (15:54:11.885) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1450949749]: [10.768507093s] [10.768507093s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:11.885231 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-ingress\"/\"router-dockercfg-zdk86\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress/secrets?fieldSelector=metadata.name%3Drouter-dockercfg-zdk86&resourceVersion=109671\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:60822->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:11.885235 5050 reflector.go:158] "Unhandled Error" err="object-\"metallb-system\"/\"speaker-certs-secret\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/secrets?fieldSelector=metadata.name%3Dspeaker-certs-secret&resourceVersion=109846\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:60890->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:11.884831 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-api/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109522\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33464->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:11.885245 5050 trace.go:236] Trace[874947697]: "Reflector ListAndWatch" 
name:object-"openstack"/"combined-ca-bundle" (11-Dec-2025 15:54:01.100) (total time: 10785ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[874947697]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcombined-ca-bundle&resourceVersion=109803": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:60740->38.102.83.147:6443: read: connection reset by peer 10785ms (15:54:11.885) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[874947697]: [10.785155369s] [10.785155369s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:11.885265 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"combined-ca-bundle\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcombined-ca-bundle&resourceVersion=109803\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:60740->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:11.885078 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-oauth-apiserver\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-oauth-apiserver/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109117\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:32906->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:11.885081 5050 trace.go:236] Trace[534621416]: "Reflector ListAndWatch" name:object-"openstack"/"keystone-keystone-dockercfg-jrbb7" (11-Dec-2025 15:54:01.229) (total time: 10655ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[534621416]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dkeystone-keystone-dockercfg-jrbb7&resourceVersion=109887": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33202->38.102.83.147:6443: read: connection reset by peer 10655ms (15:54:11.885) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[534621416]: [10.655470935s] [10.655470935s] END Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:11.885287 5050 reflector.go:561] object-"openshift-console"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109559": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33454->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:11.885318 5050 trace.go:236] Trace[433626778]: "Reflector ListAndWatch" name:object-"openshift-console"/"kube-root-ca.crt" (11-Dec-2025 15:54:01.284) (total time: 10601ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[433626778]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109559": dial tcp 38.102.83.147:6443: connect: 
connection refused - error from a previous attempt: read tcp 38.102.83.147:33454->38.102.83.147:6443: read: connection reset by peer 10601ms (15:54:11.885) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[433626778]: [10.601022847s] [10.601022847s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:11.885312 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"keystone-keystone-dockercfg-jrbb7\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dkeystone-keystone-dockercfg-jrbb7&resourceVersion=109887\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33202->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:11.885326 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-console\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109559\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33454->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:11.885223 5050 trace.go:236] Trace[1462238119]: "Reflector ListAndWatch" name:object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-zlz4m" (11-Dec-2025 15:54:01.271) (total time: 10613ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1462238119]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dbarbican-operator-controller-manager-dockercfg-zlz4m&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33390->38.102.83.147:6443: read: connection reset by peer 10613ms (15:54:11.885) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1462238119]: [10.613355957s] [10.613355957s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:11.885344 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"barbican-operator-controller-manager-dockercfg-zlz4m\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dbarbican-operator-controller-manager-dockercfg-zlz4m&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33390->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:11.884363 5050 reflector.go:561] object-"openstack"/"rabbitmq-default-user": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Drabbitmq-default-user&resourceVersion=109743": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33310->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:11.885394 5050 trace.go:236] Trace[1411097394]: "Reflector ListAndWatch" name:object-"openstack"/"rabbitmq-default-user" (11-Dec-2025 
15:54:01.251) (total time: 10634ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1411097394]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Drabbitmq-default-user&resourceVersion=109743": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33310->38.102.83.147:6443: read: connection reset by peer 10633ms (15:54:11.884) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1411097394]: [10.634166254s] [10.634166254s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:11.885404 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"rabbitmq-default-user\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Drabbitmq-default-user&resourceVersion=109743\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33310->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:11.884361 5050 reflector.go:561] object-"openstack"/"rabbitmq-server-conf": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Drabbitmq-server-conf&resourceVersion=109762": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:60868->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:11.885430 5050 reflector.go:561] object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-7zqpj": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dmariadb-operator-controller-manager-dockercfg-7zqpj&resourceVersion=109846": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33008->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:11.885474 5050 trace.go:236] Trace[477570563]: "Reflector ListAndWatch" name:object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-7zqpj" (11-Dec-2025 15:54:01.189) (total time: 10695ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[477570563]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dmariadb-operator-controller-manager-dockercfg-7zqpj&resourceVersion=109846": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33008->38.102.83.147:6443: read: connection reset by peer 10695ms (15:54:11.885) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[477570563]: [10.695481017s] [10.695481017s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:11.885488 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"mariadb-operator-controller-manager-dockercfg-7zqpj\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dmariadb-operator-controller-manager-dockercfg-7zqpj&resourceVersion=109846\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 
38.102.83.147:33008->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:11.885439 5050 trace.go:236] Trace[1421700443]: "Reflector ListAndWatch" name:object-"openstack"/"rabbitmq-server-conf" (11-Dec-2025 15:54:01.114) (total time: 10771ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1421700443]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Drabbitmq-server-conf&resourceVersion=109762": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:60868->38.102.83.147:6443: read: connection reset by peer 10770ms (15:54:11.884) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1421700443]: [10.771295138s] [10.771295138s] END Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:11.885508 5050 reflector.go:561] object-"openshift-multus"/"default-dockercfg-2q5b6": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-multus/secrets?fieldSelector=metadata.name%3Ddefault-dockercfg-2q5b6&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:60854->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:11.885515 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"rabbitmq-server-conf\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Drabbitmq-server-conf&resourceVersion=109762\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:60868->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:11.883668 5050 trace.go:236] Trace[448323245]: "Reflector ListAndWatch" name:object-"openshift-marketplace"/"openshift-service-ca.crt" (11-Dec-2025 15:54:01.198) (total time: 10685ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[448323245]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109067": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33034->38.102.83.147:6443: read: connection reset by peer 10684ms (15:54:11.883) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[448323245]: [10.685020827s] [10.685020827s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:11.885540 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-marketplace\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109067\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33034->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:11.884436 5050 reflector.go:561] object-"openshift-cluster-version"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get 
"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-version/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109559": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33386->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:11.885580 5050 trace.go:236] Trace[1078464413]: "Reflector ListAndWatch" name:object-"openshift-cluster-version"/"openshift-service-ca.crt" (11-Dec-2025 15:54:01.269) (total time: 10616ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1078464413]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-version/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109559": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33386->38.102.83.147:6443: read: connection reset by peer 10614ms (15:54:11.884) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1078464413]: [10.616095701s] [10.616095701s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:11.885592 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-version\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-version/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109559\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33386->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:11.885598 5050 reflector.go:561] object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmachine-config-server-dockercfg-qx5rd&resourceVersion=109803": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33082->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:11.884453 5050 trace.go:236] Trace[2107766763]: "Reflector ListAndWatch" name:object-"openshift-ingress"/"kube-root-ca.crt" (11-Dec-2025 15:54:01.108) (total time: 10775ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[2107766763]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109685": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:60818->38.102.83.147:6443: read: connection reset by peer 10775ms (15:54:11.884) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[2107766763]: [10.77548958s] [10.77548958s] END Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:11.884349 5050 trace.go:236] Trace[2088911120]: "Reflector ListAndWatch" name:object-"openshift-console"/"console-oauth-config" (11-Dec-2025 15:54:01.178) (total time: 10705ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[2088911120]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/secrets?fieldSelector=metadata.name%3Dconsole-oauth-config&resourceVersion=109512": dial tcp 
38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:32938->38.102.83.147:6443: read: connection reset by peer 10705ms (15:54:11.883) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[2088911120]: [10.705573237s] [10.705573237s] END Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:11.884457 5050 trace.go:236] Trace[1450366114]: "Reflector ListAndWatch" name:object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" (11-Dec-2025 15:54:01.246) (total time: 10637ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1450366114]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ovn-kubernetes/secrets?fieldSelector=metadata.name%3Dovn-kubernetes-node-dockercfg-pwtwl&resourceVersion=109671": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33288->38.102.83.147:6443: read: connection reset by peer 10637ms (15:54:11.884) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1450366114]: [10.637876834s] [10.637876834s] END Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:11.884459 5050 trace.go:236] Trace[545442817]: "Reflector ListAndWatch" name:object-"openshift-multus"/"kube-root-ca.crt" (11-Dec-2025 15:54:01.225) (total time: 10659ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[545442817]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-multus/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109522": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33130->38.102.83.147:6443: read: connection reset by peer 10658ms (15:54:11.884) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[545442817]: [10.65900979s] [10.65900979s] END Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:11.884496 5050 trace.go:236] Trace[1312695506]: "Reflector ListAndWatch" name:object-"openshift-ingress-canary"/"default-dockercfg-2llfx" (11-Dec-2025 15:54:01.211) (total time: 10673ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1312695506]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-canary/secrets?fieldSelector=metadata.name%3Ddefault-dockercfg-2llfx&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33078->38.102.83.147:6443: read: connection reset by peer 10673ms (15:54:11.884) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1312695506]: [10.67316773s] [10.67316773s] END Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:11.884574 5050 trace.go:236] Trace[435415803]: "Reflector ListAndWatch" name:object-"cert-manager"/"openshift-service-ca.crt" (11-Dec-2025 15:54:01.135) (total time: 10748ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[435415803]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/cert-manager/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109888": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:60984->38.102.83.147:6443: read: connection reset by peer 10748ms (15:54:11.884) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[435415803]: [10.748725874s] [10.748725874s] END Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:11.884663 5050 trace.go:236] Trace[381174829]: "Reflector ListAndWatch" 
name:object-"openstack"/"cinder-backup-config-data" (11-Dec-2025 15:54:01.113) (total time: 10771ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[381174829]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcinder-backup-config-data&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:60840->38.102.83.147:6443: read: connection reset by peer 10771ms (15:54:11.884) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[381174829]: [10.771107063s] [10.771107063s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:11.885648 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"cinder-backup-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcinder-backup-config-data&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:60840->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:11.884784 5050 reflector.go:561] object-"openshift-network-diagnostics"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109888": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:60776->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:11.885697 5050 trace.go:236] Trace[348684433]: "Reflector ListAndWatch" name:object-"openshift-network-diagnostics"/"openshift-service-ca.crt" (11-Dec-2025 15:54:01.104) (total time: 10781ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[348684433]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109888": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:60776->38.102.83.147:6443: read: connection reset by peer 10780ms (15:54:11.884) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[348684433]: [10.781401569s] [10.781401569s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:11.885709 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109888\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:60776->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:11.885087 5050 reflector.go:561] object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-dns-operator/secrets?fieldSelector=metadata.name%3Ddns-operator-dockercfg-9mqw5&resourceVersion=109382": dial tcp 38.102.83.147:6443: connect: connection refused - 
error from a previous attempt: read tcp 38.102.83.147:33352->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:11.885755 5050 trace.go:236] Trace[2058064183]: "Reflector ListAndWatch" name:object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" (11-Dec-2025 15:54:01.257) (total time: 10628ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[2058064183]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-dns-operator/secrets?fieldSelector=metadata.name%3Ddns-operator-dockercfg-9mqw5&resourceVersion=109382": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33352->38.102.83.147:6443: read: connection reset by peer 10627ms (15:54:11.885) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[2058064183]: [10.628241166s] [10.628241166s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:11.885769 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-dns-operator\"/\"dns-operator-dockercfg-9mqw5\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-dns-operator/secrets?fieldSelector=metadata.name%3Ddns-operator-dockercfg-9mqw5&resourceVersion=109382\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33352->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:11.885862 5050 trace.go:236] Trace[1529683129]: "Reflector ListAndWatch" name:object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" (11-Dec-2025 15:54:01.212) (total time: 10673ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1529683129]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmachine-config-server-dockercfg-qx5rd&resourceVersion=109803": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33082->38.102.83.147:6443: read: connection reset by peer 10673ms (15:54:11.885) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1529683129]: [10.673264182s] [10.673264182s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:11.885879 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"machine-config-server-dockercfg-qx5rd\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmachine-config-server-dockercfg-qx5rd&resourceVersion=109803\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33082->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:11.885901 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-ingress\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109685\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:60818->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" 
Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:11.885923 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-console\"/\"console-oauth-config\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/secrets?fieldSelector=metadata.name%3Dconsole-oauth-config&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:32938->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:11.885940 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-node-dockercfg-pwtwl\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ovn-kubernetes/secrets?fieldSelector=metadata.name%3Dovn-kubernetes-node-dockercfg-pwtwl&resourceVersion=109671\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33288->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:11.885957 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-multus/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109522\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33130->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:11.885976 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-ingress-canary\"/\"default-dockercfg-2llfx\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-canary/secrets?fieldSelector=metadata.name%3Ddefault-dockercfg-2llfx&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33078->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:11.885996 5050 reflector.go:158] "Unhandled Error" err="object-\"cert-manager\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/cert-manager/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109888\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:60984->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:11.886037 5050 reflector.go:561] object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-api/secrets?fieldSelector=metadata.name%3Dcontrol-plane-machine-set-operator-dockercfg-k9rxt&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33110->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 
15:54:11.886081 5050 trace.go:236] Trace[769668074]: "Reflector ListAndWatch" name:object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" (11-Dec-2025 15:54:01.221) (total time: 10664ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[769668074]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-api/secrets?fieldSelector=metadata.name%3Dcontrol-plane-machine-set-operator-dockercfg-k9rxt&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33110->38.102.83.147:6443: read: connection reset by peer 10664ms (15:54:11.886) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[769668074]: [10.664710293s] [10.664710293s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:11.886097 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-dockercfg-k9rxt\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-api/secrets?fieldSelector=metadata.name%3Dcontrol-plane-machine-set-operator-dockercfg-k9rxt&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33110->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:11.886095 5050 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&resourceVersion=109780": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:32842->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:11.886122 5050 reflector.go:561] object-"openshift-authentication"/"v4-0-config-system-cliconfig": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/configmaps?fieldSelector=metadata.name%3Dv4-0-config-system-cliconfig&resourceVersion=109158": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:60798->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:11.886143 5050 trace.go:236] Trace[610097140]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (11-Dec-2025 15:54:01.157) (total time: 10728ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[610097140]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&resourceVersion=109780": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:32842->38.102.83.147:6443: read: connection reset by peer 10728ms (15:54:11.886) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[610097140]: [10.728176153s] [10.728176153s] END Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:11.886160 5050 trace.go:236] Trace[815065853]: "Reflector ListAndWatch" name:object-"openshift-authentication"/"v4-0-config-system-cliconfig" (11-Dec-2025 15:54:01.104) (total time: 10781ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[815065853]: ---"Objects listed" error:Get 
"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/configmaps?fieldSelector=metadata.name%3Dv4-0-config-system-cliconfig&resourceVersion=109158": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:60798->38.102.83.147:6443: read: connection reset by peer 10781ms (15:54:11.886) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[815065853]: [10.781600614s] [10.781600614s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:11.886159 5050 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&resourceVersion=109780\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:32842->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:11.885146 5050 reflector.go:561] object-"openstack"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109642": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:32960->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:11.886175 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"v4-0-config-system-cliconfig\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/configmaps?fieldSelector=metadata.name%3Dv4-0-config-system-cliconfig&resourceVersion=109158\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:60798->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:11.886251 5050 trace.go:236] Trace[1383506002]: "Reflector ListAndWatch" name:object-"openstack"/"openshift-service-ca.crt" (11-Dec-2025 15:54:01.185) (total time: 10700ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1383506002]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109642": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:32960->38.102.83.147:6443: read: connection reset by peer 10699ms (15:54:11.885) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1383506002]: [10.700295947s] [10.700295947s] END Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:11.886264 5050 reflector.go:561] object-"openstack"/"nova-api-config-data": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dnova-api-config-data&resourceVersion=109137": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33218->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:11.885145 5050 reflector.go:561] object-"openstack"/"cinder-scheduler-config-data": failed to list *v1.Secret: Get 
"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcinder-scheduler-config-data&resourceVersion=109671": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33414->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:11.886303 5050 trace.go:236] Trace[1362153086]: "Reflector ListAndWatch" name:object-"openstack"/"nova-api-config-data" (11-Dec-2025 15:54:01.231) (total time: 10655ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1362153086]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dnova-api-config-data&resourceVersion=109137": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33218->38.102.83.147:6443: read: connection reset by peer 10654ms (15:54:11.886) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1362153086]: [10.655025403s] [10.655025403s] END Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:11.886310 5050 trace.go:236] Trace[675872308]: "Reflector ListAndWatch" name:object-"openstack"/"cinder-scheduler-config-data" (11-Dec-2025 15:54:01.279) (total time: 10607ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[675872308]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcinder-scheduler-config-data&resourceVersion=109671": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33414->38.102.83.147:6443: read: connection reset by peer 10605ms (15:54:11.885) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[675872308]: [10.607062188s] [10.607062188s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:11.886315 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"nova-api-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dnova-api-config-data&resourceVersion=109137\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33218->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:11.886323 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"cinder-scheduler-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcinder-scheduler-config-data&resourceVersion=109671\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33414->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:11.885147 5050 reflector.go:561] object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver-operator/secrets?fieldSelector=metadata.name%3Dopenshift-apiserver-operator-serving-cert&resourceVersion=109846": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33332->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc 
kubenswrapper[5050]: W1211 15:54:11.886380 5050 reflector.go:561] object-"openshift-etcd-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-etcd-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109158": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:60804->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:11.886410 5050 trace.go:236] Trace[919769866]: "Reflector ListAndWatch" name:object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" (11-Dec-2025 15:54:01.251) (total time: 10635ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[919769866]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver-operator/secrets?fieldSelector=metadata.name%3Dopenshift-apiserver-operator-serving-cert&resourceVersion=109846": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33332->38.102.83.147:6443: read: connection reset by peer 10633ms (15:54:11.885) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[919769866]: [10.63511571s] [10.63511571s] END Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:11.886429 5050 trace.go:236] Trace[1886927158]: "Reflector ListAndWatch" name:object-"openshift-etcd-operator"/"openshift-service-ca.crt" (11-Dec-2025 15:54:01.107) (total time: 10778ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1886927158]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-etcd-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109158": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:60804->38.102.83.147:6443: read: connection reset by peer 10778ms (15:54:11.886) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1886927158]: [10.778796179s] [10.778796179s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:11.886428 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver-operator/secrets?fieldSelector=metadata.name%3Dopenshift-apiserver-operator-serving-cert&resourceVersion=109846\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33332->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:11.884352 5050 reflector.go:561] object-"openshift-network-node-identity"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109559": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:32880->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:11.886271 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109642\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:32960->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:11.886482 5050 trace.go:236] Trace[2035005769]: "Reflector ListAndWatch" name:object-"openshift-network-node-identity"/"openshift-service-ca.crt" (11-Dec-2025 15:54:01.165) (total time: 10720ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[2035005769]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109559": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:32880->38.102.83.147:6443: read: connection reset by peer 10718ms (15:54:11.884) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[2035005769]: [10.720898168s] [10.720898168s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:11.886495 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-node-identity\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109559\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:32880->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:11.886446 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-etcd-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-etcd-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109158\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:60804->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:11.885167 5050 reflector.go:561] object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/secrets?fieldSelector=metadata.name%3Dcatalog-operator-serving-cert&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33022->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:11.886523 5050 reflector.go:561] object-"openshift-service-ca-operator"/"serving-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca-operator/secrets?fieldSelector=metadata.name%3Dserving-cert&resourceVersion=109671": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:60788->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:11.886551 5050 trace.go:236] Trace[808318015]: "Reflector ListAndWatch" 
name:object-"openshift-service-ca-operator"/"serving-cert" (11-Dec-2025 15:54:01.104) (total time: 10782ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[808318015]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca-operator/secrets?fieldSelector=metadata.name%3Dserving-cert&resourceVersion=109671": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:60788->38.102.83.147:6443: read: connection reset by peer 10782ms (15:54:11.886) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[808318015]: [10.782444837s] [10.782444837s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:11.886560 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-service-ca-operator\"/\"serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca-operator/secrets?fieldSelector=metadata.name%3Dserving-cert&resourceVersion=109671\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:60788->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:11.886618 5050 reflector.go:561] object-"openstack-operators"/"metrics-server-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dmetrics-server-cert&resourceVersion=109671": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:60800->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:11.886642 5050 trace.go:236] Trace[1266202250]: "Reflector ListAndWatch" name:object-"openstack-operators"/"metrics-server-cert" (11-Dec-2025 15:54:01.107) (total time: 10779ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1266202250]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dmetrics-server-cert&resourceVersion=109671": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:60800->38.102.83.147:6443: read: connection reset by peer 10779ms (15:54:11.886) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1266202250]: [10.779300152s] [10.779300152s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:11.886650 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"metrics-server-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dmetrics-server-cert&resourceVersion=109671\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:60800->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:11.886693 5050 reflector.go:561] object-"openstack"/"openstack-cell1-dockercfg-mvxd9": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dopenstack-cell1-dockercfg-mvxd9&resourceVersion=109671": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33308->38.102.83.147:6443: read: connection 
reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:11.886719 5050 trace.go:236] Trace[840754803]: "Reflector ListAndWatch" name:object-"openstack"/"openstack-cell1-dockercfg-mvxd9" (11-Dec-2025 15:54:01.246) (total time: 10639ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[840754803]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dopenstack-cell1-dockercfg-mvxd9&resourceVersion=109671": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33308->38.102.83.147:6443: read: connection reset by peer 10639ms (15:54:11.886) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[840754803]: [10.639744174s] [10.639744174s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:11.886728 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"openstack-cell1-dockercfg-mvxd9\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dopenstack-cell1-dockercfg-mvxd9&resourceVersion=109671\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33308->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:11.886774 5050 reflector.go:561] object-"openstack"/"alertmanager-metric-storage-tls-assets-0": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dalertmanager-metric-storage-tls-assets-0&resourceVersion=109846": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33294->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:11.886800 5050 trace.go:236] Trace[353483308]: "Reflector ListAndWatch" name:object-"openstack"/"alertmanager-metric-storage-tls-assets-0" (11-Dec-2025 15:54:01.246) (total time: 10640ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[353483308]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dalertmanager-metric-storage-tls-assets-0&resourceVersion=109846": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33294->38.102.83.147:6443: read: connection reset by peer 10640ms (15:54:11.886) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[353483308]: [10.640243267s] [10.640243267s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:11.886809 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"alertmanager-metric-storage-tls-assets-0\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dalertmanager-metric-storage-tls-assets-0&resourceVersion=109846\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33294->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:11.886850 5050 reflector.go:561] object-"openshift-nmstate"/"default-dockercfg-vtnxn": failed to list *v1.Secret: Get 
"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-nmstate/secrets?fieldSelector=metadata.name%3Ddefault-dockercfg-vtnxn&resourceVersion=109137": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:60904->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:11.886873 5050 trace.go:236] Trace[736558945]: "Reflector ListAndWatch" name:object-"openshift-nmstate"/"default-dockercfg-vtnxn" (11-Dec-2025 15:54:01.119) (total time: 10767ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[736558945]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-nmstate/secrets?fieldSelector=metadata.name%3Ddefault-dockercfg-vtnxn&resourceVersion=109137": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:60904->38.102.83.147:6443: read: connection reset by peer 10767ms (15:54:11.886) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[736558945]: [10.767754453s] [10.767754453s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:11.886882 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-nmstate\"/\"default-dockercfg-vtnxn\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-nmstate/secrets?fieldSelector=metadata.name%3Ddefault-dockercfg-vtnxn&resourceVersion=109137\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:60904->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:11.886924 5050 reflector.go:561] object-"openstack"/"horizon-scripts": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dhorizon-scripts&resourceVersion=109522": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33338->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:11.886948 5050 trace.go:236] Trace[482586460]: "Reflector ListAndWatch" name:object-"openstack"/"horizon-scripts" (11-Dec-2025 15:54:01.256) (total time: 10630ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[482586460]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dhorizon-scripts&resourceVersion=109522": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33338->38.102.83.147:6443: read: connection reset by peer 10630ms (15:54:11.886) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[482586460]: [10.630743303s] [10.630743303s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:11.886957 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"horizon-scripts\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dhorizon-scripts&resourceVersion=109522\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33338->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:11.887001 5050 reflector.go:561] 
object-"openstack"/"horizon-config-data": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dhorizon-config-data&resourceVersion=109522": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33402->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:11.887041 5050 trace.go:236] Trace[1119280771]: "Reflector ListAndWatch" name:object-"openstack"/"horizon-config-data" (11-Dec-2025 15:54:01.276) (total time: 10610ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1119280771]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dhorizon-config-data&resourceVersion=109522": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33402->38.102.83.147:6443: read: connection reset by peer 10610ms (15:54:11.886) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1119280771]: [10.610072629s] [10.610072629s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:11.887050 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"horizon-config-data\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dhorizon-config-data&resourceVersion=109522\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33402->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:11.887098 5050 reflector.go:561] object-"openshift-nmstate"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-nmstate/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109762": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:60964->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:11.887121 5050 trace.go:236] Trace[1281509084]: "Reflector ListAndWatch" name:object-"openshift-nmstate"/"kube-root-ca.crt" (11-Dec-2025 15:54:01.132) (total time: 10754ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1281509084]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-nmstate/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109762": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:60964->38.102.83.147:6443: read: connection reset by peer 10754ms (15:54:11.887) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1281509084]: [10.754698743s] [10.754698743s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:11.887130 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-nmstate\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-nmstate/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109762\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:60964->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 
crc kubenswrapper[5050]: W1211 15:54:11.887187 5050 reflector.go:561] object-"openshift-console-operator"/"console-operator-config": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/configmaps?fieldSelector=metadata.name%3Dconsole-operator-config&resourceVersion=109807": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33438->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:11.887227 5050 trace.go:236] Trace[1312140750]: "Reflector ListAndWatch" name:object-"openshift-console-operator"/"console-operator-config" (11-Dec-2025 15:54:01.282) (total time: 10604ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1312140750]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/configmaps?fieldSelector=metadata.name%3Dconsole-operator-config&resourceVersion=109807": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33438->38.102.83.147:6443: read: connection reset by peer 10604ms (15:54:11.887) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1312140750]: [10.604456419s] [10.604456419s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:11.887239 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-console-operator\"/\"console-operator-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/configmaps?fieldSelector=metadata.name%3Dconsole-operator-config&resourceVersion=109807\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33438->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:11.887288 5050 reflector.go:561] object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operators/secrets?fieldSelector=metadata.name%3Dobo-prometheus-operator-admission-webhook-service-cert&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:32846->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:11.887316 5050 trace.go:236] Trace[1629952969]: "Reflector ListAndWatch" name:object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" (11-Dec-2025 15:54:01.159) (total time: 10727ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1629952969]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operators/secrets?fieldSelector=metadata.name%3Dobo-prometheus-operator-admission-webhook-service-cert&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:32846->38.102.83.147:6443: read: connection reset by peer 10727ms (15:54:11.887) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1629952969]: [10.727972417s] [10.727972417s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:11.887326 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-operators\"/\"obo-prometheus-operator-admission-webhook-service-cert\": Failed to watch *v1.Secret: failed to list 
*v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operators/secrets?fieldSelector=metadata.name%3Dobo-prometheus-operator-admission-webhook-service-cert&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:32846->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:11.887368 5050 reflector.go:561] object-"openshift-network-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109888": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:60826->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:11.887392 5050 trace.go:236] Trace[821965287]: "Reflector ListAndWatch" name:object-"openshift-network-operator"/"kube-root-ca.crt" (11-Dec-2025 15:54:01.112) (total time: 10775ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[821965287]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109888": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:60826->38.102.83.147:6443: read: connection reset by peer 10775ms (15:54:11.887) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[821965287]: [10.775207243s] [10.775207243s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:11.887400 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109888\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:60826->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:11.888141 5050 trace.go:236] Trace[150629466]: "Reflector ListAndWatch" name:object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" (11-Dec-2025 15:54:01.198) (total time: 10689ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[150629466]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/secrets?fieldSelector=metadata.name%3Dcatalog-operator-serving-cert&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33022->38.102.83.147:6443: read: connection reset by peer 10686ms (15:54:11.885) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[150629466]: [10.689468716s] [10.689468716s] END Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:11.884269 5050 reflector.go:561] object-"openstack"/"keystone-config-data": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dkeystone-config-data&resourceVersion=109887": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:60812->38.102.83.147:6443: read: 
connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:11.888162 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-operator-lifecycle-manager\"/\"catalog-operator-serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/secrets?fieldSelector=metadata.name%3Dcatalog-operator-serving-cert&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33022->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:11.888215 5050 trace.go:236] Trace[700992294]: "Reflector ListAndWatch" name:object-"openstack"/"keystone-config-data" (11-Dec-2025 15:54:01.108) (total time: 10779ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[700992294]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dkeystone-config-data&resourceVersion=109887": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:60812->38.102.83.147:6443: read: connection reset by peer 10775ms (15:54:11.884) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[700992294]: [10.779213101s] [10.779213101s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:11.888230 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"keystone-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dkeystone-config-data&resourceVersion=109887\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:60812->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:11.885550 5050 trace.go:236] Trace[241487403]: "Reflector ListAndWatch" name:object-"openshift-multus"/"default-dockercfg-2q5b6" (11-Dec-2025 15:54:01.113) (total time: 10771ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[241487403]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-multus/secrets?fieldSelector=metadata.name%3Ddefault-dockercfg-2q5b6&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:60854->38.102.83.147:6443: read: connection reset by peer 10771ms (15:54:11.885) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[241487403]: [10.771768691s] [10.771768691s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:11.888277 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"default-dockercfg-2q5b6\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-multus/secrets?fieldSelector=metadata.name%3Ddefault-dockercfg-2q5b6&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:60854->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:11.886511 5050 reflector.go:561] object-"openstack"/"heat-cfnapi-config-data": failed to list *v1.Secret: Get 
"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dheat-cfnapi-config-data&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:32814->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:11.888313 5050 trace.go:236] Trace[1755826089]: "Reflector ListAndWatch" name:object-"openstack"/"heat-cfnapi-config-data" (11-Dec-2025 15:54:01.153) (total time: 10734ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1755826089]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dheat-cfnapi-config-data&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:32814->38.102.83.147:6443: read: connection reset by peer 10732ms (15:54:11.886) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1755826089]: [10.734314368s] [10.734314368s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:11.888328 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"heat-cfnapi-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dheat-cfnapi-config-data&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:32814->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:11.890630 5050 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:36582->38.102.83.147:6443: read: connection reset by peer" interval="200ms" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:11.902492 5050 reflector.go:561] object-"openshift-apiserver-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109888": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:32790->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:11.902547 5050 trace.go:236] Trace[1280934375]: "Reflector ListAndWatch" name:object-"openshift-apiserver-operator"/"kube-root-ca.crt" (11-Dec-2025 15:54:01.148) (total time: 10754ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1280934375]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109888": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:32790->38.102.83.147:6443: read: connection reset by peer 10754ms (15:54:11.902) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1280934375]: [10.754486618s] [10.754486618s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:11.902564 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver-operator\"/\"kube-root-ca.crt\": 
Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109888\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:32790->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:11.922305 5050 reflector.go:561] object-"openshift-authentication"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109888": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:32828->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:11.922344 5050 trace.go:236] Trace[1383018275]: "Reflector ListAndWatch" name:object-"openshift-authentication"/"kube-root-ca.crt" (11-Dec-2025 15:54:01.157) (total time: 10764ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1383018275]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109888": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:32828->38.102.83.147:6443: read: connection reset by peer 10764ms (15:54:11.922) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1383018275]: [10.764398564s] [10.764398564s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:11.922355 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109888\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:32828->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:11.942946 5050 reflector.go:561] object-"openshift-image-registry"/"installation-pull-secrets": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/secrets?fieldSelector=metadata.name%3Dinstallation-pull-secrets&resourceVersion=109846": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:32826->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:11.943001 5050 trace.go:236] Trace[1895275326]: "Reflector ListAndWatch" name:object-"openshift-image-registry"/"installation-pull-secrets" (11-Dec-2025 15:54:01.156) (total time: 10786ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1895275326]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/secrets?fieldSelector=metadata.name%3Dinstallation-pull-secrets&resourceVersion=109846": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:32826->38.102.83.147:6443: read: connection reset by peer 10786ms (15:54:11.942) Dec 11 15:54:41 crc kubenswrapper[5050]: 
Trace[1895275326]: [10.786341522s] [10.786341522s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:11.943027 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-image-registry\"/\"installation-pull-secrets\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/secrets?fieldSelector=metadata.name%3Dinstallation-pull-secrets&resourceVersion=109846\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:32826->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:11.963437 5050 reflector.go:561] object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/secrets?fieldSelector=metadata.name%3Dv4-0-config-system-ocp-branding-template&resourceVersion=109887": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33382->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:11.963478 5050 trace.go:236] Trace[582833918]: "Reflector ListAndWatch" name:object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" (11-Dec-2025 15:54:01.267) (total time: 10696ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[582833918]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/secrets?fieldSelector=metadata.name%3Dv4-0-config-system-ocp-branding-template&resourceVersion=109887": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33382->38.102.83.147:6443: read: connection reset by peer 10696ms (15:54:11.963) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[582833918]: [10.696436692s] [10.696436692s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:11.963491 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"v4-0-config-system-ocp-branding-template\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/secrets?fieldSelector=metadata.name%3Dv4-0-config-system-ocp-branding-template&resourceVersion=109887\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33382->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:11.982426 5050 reflector.go:561] object-"openshift-cluster-machine-approver"/"kube-rbac-proxy": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-machine-approver/configmaps?fieldSelector=metadata.name%3Dkube-rbac-proxy&resourceVersion=109851": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33100->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:11.982469 5050 trace.go:236] Trace[427057942]: "Reflector ListAndWatch" name:object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" (11-Dec-2025 15:54:01.216) (total time: 10765ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[427057942]: ---"Objects listed" error:Get 
"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-machine-approver/configmaps?fieldSelector=metadata.name%3Dkube-rbac-proxy&resourceVersion=109851": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33100->38.102.83.147:6443: read: connection reset by peer 10765ms (15:54:11.982) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[427057942]: [10.765883233s] [10.765883233s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:11.982483 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-machine-approver\"/\"kube-rbac-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-machine-approver/configmaps?fieldSelector=metadata.name%3Dkube-rbac-proxy&resourceVersion=109851\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33100->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:12.003211 5050 reflector.go:561] object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-oauth-apiserver/secrets?fieldSelector=metadata.name%3Doauth-apiserver-sa-dockercfg-6r2bq&resourceVersion=109501": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33498->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:12.003283 5050 trace.go:236] Trace[1246928765]: "Reflector ListAndWatch" name:object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" (11-Dec-2025 15:54:01.288) (total time: 10715ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1246928765]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-oauth-apiserver/secrets?fieldSelector=metadata.name%3Doauth-apiserver-sa-dockercfg-6r2bq&resourceVersion=109501": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33498->38.102.83.147:6443: read: connection reset by peer 10715ms (15:54:12.003) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1246928765]: [10.715134153s] [10.715134153s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:12.003309 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-oauth-apiserver\"/\"oauth-apiserver-sa-dockercfg-6r2bq\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-oauth-apiserver/secrets?fieldSelector=metadata.name%3Doauth-apiserver-sa-dockercfg-6r2bq&resourceVersion=109501\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33498->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:12.022292 5050 reflector.go:561] object-"openstack"/"openstack-scripts": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dopenstack-scripts&resourceVersion=109762": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33530->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 
15:54:12.022342 5050 trace.go:236] Trace[1752513139]: "Reflector ListAndWatch" name:object-"openstack"/"openstack-scripts" (11-Dec-2025 15:54:01.293) (total time: 10728ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1752513139]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dopenstack-scripts&resourceVersion=109762": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33530->38.102.83.147:6443: read: connection reset by peer 10728ms (15:54:12.022) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1752513139]: [10.728585724s] [10.728585724s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:12.022359 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"openstack-scripts\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dopenstack-scripts&resourceVersion=109762\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33530->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:12.042847 5050 reflector.go:561] object-"openshift-config-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109762": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33556->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:12.042910 5050 trace.go:236] Trace[1326105199]: "Reflector ListAndWatch" name:object-"openshift-config-operator"/"openshift-service-ca.crt" (11-Dec-2025 15:54:01.298) (total time: 10744ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1326105199]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109762": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33556->38.102.83.147:6443: read: connection reset by peer 10744ms (15:54:12.042) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1326105199]: [10.74484348s] [10.74484348s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:12.042931 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-config-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109762\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33556->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:12.063306 5050 reflector.go:561] object-"openstack"/"barbican-keystone-listener-config-data": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dbarbican-keystone-listener-config-data&resourceVersion=109512": dial tcp 38.102.83.147:6443: 
connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33588->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:12.063352 5050 trace.go:236] Trace[489472650]: "Reflector ListAndWatch" name:object-"openstack"/"barbican-keystone-listener-config-data" (11-Dec-2025 15:54:01.305) (total time: 10757ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[489472650]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dbarbican-keystone-listener-config-data&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33588->38.102.83.147:6443: read: connection reset by peer 10757ms (15:54:12.063) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[489472650]: [10.757959091s] [10.757959091s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:12.063367 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"barbican-keystone-listener-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dbarbican-keystone-listener-config-data&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33588->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:12.084353 5050 reflector.go:561] object-"openstack"/"barbican-config-data": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dbarbican-config-data&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33596->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:12.084426 5050 trace.go:236] Trace[1717726941]: "Reflector ListAndWatch" name:object-"openstack"/"barbican-config-data" (11-Dec-2025 15:54:01.311) (total time: 10773ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1717726941]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dbarbican-config-data&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33596->38.102.83.147:6443: read: connection reset by peer 10773ms (15:54:12.084) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1717726941]: [10.773294202s] [10.773294202s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:12.084452 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"barbican-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dbarbican-config-data&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33596->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:12.091443 5050 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.147:6443: connect: connection refused" interval="400ms" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:12.102619 5050 reflector.go:561] object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dopenstack-baremetal-operator-webhook-server-cert&resourceVersion=109803": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33616->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:12.102680 5050 trace.go:236] Trace[938291798]: "Reflector ListAndWatch" name:object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" (11-Dec-2025 15:54:01.317) (total time: 10785ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[938291798]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dopenstack-baremetal-operator-webhook-server-cert&resourceVersion=109803": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33616->38.102.83.147:6443: read: connection reset by peer 10785ms (15:54:12.102) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[938291798]: [10.785436067s] [10.785436067s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:12.102701 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"openstack-baremetal-operator-webhook-server-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dopenstack-baremetal-operator-webhook-server-cert&resourceVersion=109803\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33616->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:12.122943 5050 reflector.go:561] object-"openstack"/"dns-svc": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Ddns-svc&resourceVersion=109559": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33634->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:12.123043 5050 trace.go:236] Trace[691484057]: "Reflector ListAndWatch" name:object-"openstack"/"dns-svc" (11-Dec-2025 15:54:01.322) (total time: 10800ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[691484057]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Ddns-svc&resourceVersion=109559": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33634->38.102.83.147:6443: read: connection reset by peer 10800ms (15:54:12.122) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[691484057]: [10.800440959s] [10.800440959s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:12.123071 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"dns-svc\": Failed to watch *v1.ConfigMap: 
failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Ddns-svc&resourceVersion=109559\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33634->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:12.142919 5050 reflector.go:561] object-"openshift-machine-config-operator"/"proxy-tls": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dproxy-tls&resourceVersion=109671": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33214->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:12.142979 5050 trace.go:236] Trace[1553616790]: "Reflector ListAndWatch" name:object-"openshift-machine-config-operator"/"proxy-tls" (11-Dec-2025 15:54:01.231) (total time: 10911ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1553616790]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dproxy-tls&resourceVersion=109671": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33214->38.102.83.147:6443: read: connection reset by peer 10911ms (15:54:12.142) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1553616790]: [10.911839963s] [10.911839963s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:12.142998 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"proxy-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dproxy-tls&resourceVersion=109671\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33214->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:12.162688 5050 reflector.go:561] object-"openshift-network-node-identity"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109158": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33684->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:12.162750 5050 trace.go:236] Trace[479315046]: "Reflector ListAndWatch" name:object-"openshift-network-node-identity"/"kube-root-ca.crt" (11-Dec-2025 15:54:01.335) (total time: 10827ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[479315046]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109158": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33684->38.102.83.147:6443: read: connection reset by peer 10827ms (15:54:12.162) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[479315046]: [10.827636678s] [10.827636678s] END Dec 11 15:54:41 crc 
kubenswrapper[5050]: E1211 15:54:12.162775 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-node-identity\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109158\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33684->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:12.182681 5050 reflector.go:561] object-"openshift-multus"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-multus/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109522": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33702->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:12.182724 5050 trace.go:236] Trace[937204196]: "Reflector ListAndWatch" name:object-"openshift-multus"/"openshift-service-ca.crt" (11-Dec-2025 15:54:01.339) (total time: 10843ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[937204196]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-multus/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109522": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33702->38.102.83.147:6443: read: connection reset by peer 10843ms (15:54:12.182) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[937204196]: [10.843273437s] [10.843273437s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:12.182745 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-multus/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109522\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33702->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:12.202555 5050 reflector.go:561] object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-storage-version-migrator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109888": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33756->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:12.202611 5050 trace.go:236] Trace[1048771372]: "Reflector ListAndWatch" name:object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" (11-Dec-2025 15:54:01.341) (total time: 10860ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1048771372]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-storage-version-migrator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109888": dial tcp 38.102.83.147:6443: connect: connection 
refused - error from a previous attempt: read tcp 38.102.83.147:33756->38.102.83.147:6443: read: connection reset by peer 10860ms (15:54:12.202) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1048771372]: [10.860969061s] [10.860969061s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:12.202631 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-kube-storage-version-migrator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-storage-version-migrator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109888\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33756->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:12.222492 5050 reflector.go:561] object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109685": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33760->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:12.222544 5050 trace.go:236] Trace[1586824437]: "Reflector ListAndWatch" name:object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" (11-Dec-2025 15:54:01.349) (total time: 10873ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1586824437]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109685": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33760->38.102.83.147:6443: read: connection reset by peer 10873ms (15:54:12.222) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1586824437]: [10.873430185s] [10.873430185s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:12.222560 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-operator-lifecycle-manager\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109685\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33760->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:12.243066 5050 reflector.go:561] object-"openstack"/"openstack-config-secret": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dopenstack-config-secret&resourceVersion=109671": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33718->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:12.243109 5050 trace.go:236] Trace[427364965]: "Reflector ListAndWatch" name:object-"openstack"/"openstack-config-secret" (11-Dec-2025 15:54:01.339) (total time: 
10903ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[427364965]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dopenstack-config-secret&resourceVersion=109671": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33718->38.102.83.147:6443: read: connection reset by peer 10903ms (15:54:12.243) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[427364965]: [10.903667185s] [10.903667185s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:12.243123 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"openstack-config-secret\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dopenstack-config-secret&resourceVersion=109671\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33718->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:12.262407 5050 reflector.go:561] object-"openstack"/"barbican-api-config-data": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dbarbican-api-config-data&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33724->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:12.262455 5050 trace.go:236] Trace[1678507603]: "Reflector ListAndWatch" name:object-"openstack"/"barbican-api-config-data" (11-Dec-2025 15:54:01.340) (total time: 10921ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1678507603]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dbarbican-api-config-data&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33724->38.102.83.147:6443: read: connection reset by peer 10921ms (15:54:12.262) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1678507603]: [10.921915013s] [10.921915013s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:12.262471 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"barbican-api-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dbarbican-api-config-data&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33724->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:12.282758 5050 reflector.go:561] object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109602": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33780->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:12.282827 5050 trace.go:236] Trace[1448175412]: 
"Reflector ListAndWatch" name:object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" (11-Dec-2025 15:54:01.360) (total time: 10921ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1448175412]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109602": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33780->38.102.83.147:6443: read: connection reset by peer 10921ms (15:54:12.282) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1448175412]: [10.921886883s] [10.921886883s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:12.282846 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-kube-controller-manager-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109602\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33780->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:12.303461 5050 reflector.go:561] object-"metallb-system"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109522": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33796->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:12.303535 5050 trace.go:236] Trace[1498799557]: "Reflector ListAndWatch" name:object-"metallb-system"/"openshift-service-ca.crt" (11-Dec-2025 15:54:01.360) (total time: 10942ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1498799557]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109522": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33796->38.102.83.147:6443: read: connection reset by peer 10942ms (15:54:12.303) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1498799557]: [10.942561436s] [10.942561436s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:12.303558 5050 reflector.go:158] "Unhandled Error" err="object-\"metallb-system\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109522\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33796->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:12.323079 5050 reflector.go:561] object-"openstack"/"dnsmasq-dns-dockercfg-kpmgv": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Ddnsmasq-dns-dockercfg-kpmgv&resourceVersion=109137": dial tcp 38.102.83.147:6443: connect: 
connection refused - error from a previous attempt: read tcp 38.102.83.147:33810->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:12.323139 5050 trace.go:236] Trace[2099504318]: "Reflector ListAndWatch" name:object-"openstack"/"dnsmasq-dns-dockercfg-kpmgv" (11-Dec-2025 15:54:01.360) (total time: 10962ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[2099504318]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Ddnsmasq-dns-dockercfg-kpmgv&resourceVersion=109137": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33810->38.102.83.147:6443: read: connection reset by peer 10962ms (15:54:12.323) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[2099504318]: [10.962162402s] [10.962162402s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:12.323157 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"dnsmasq-dns-dockercfg-kpmgv\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Ddnsmasq-dns-dockercfg-kpmgv&resourceVersion=109137\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33810->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:12.342673 5050 reflector.go:561] object-"openshift-dns"/"dns-dockercfg-jwfmh": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-dns/secrets?fieldSelector=metadata.name%3Ddns-dockercfg-jwfmh&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33828->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:12.342720 5050 trace.go:236] Trace[530786859]: "Reflector ListAndWatch" name:object-"openshift-dns"/"dns-dockercfg-jwfmh" (11-Dec-2025 15:54:01.365) (total time: 10976ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[530786859]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-dns/secrets?fieldSelector=metadata.name%3Ddns-dockercfg-jwfmh&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33828->38.102.83.147:6443: read: connection reset by peer 10976ms (15:54:12.342) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[530786859]: [10.976800654s] [10.976800654s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:12.342737 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-dns\"/\"dns-dockercfg-jwfmh\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-dns/secrets?fieldSelector=metadata.name%3Ddns-dockercfg-jwfmh&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33828->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:12.362540 5050 reflector.go:561] object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-qd9ll": failed to list *v1.Secret: Get 
"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dironic-operator-controller-manager-dockercfg-qd9ll&resourceVersion=109846": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33870->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:12.362601 5050 trace.go:236] Trace[1319744490]: "Reflector ListAndWatch" name:object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-qd9ll" (11-Dec-2025 15:54:01.370) (total time: 10992ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1319744490]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dironic-operator-controller-manager-dockercfg-qd9ll&resourceVersion=109846": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33870->38.102.83.147:6443: read: connection reset by peer 10992ms (15:54:12.362) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1319744490]: [10.992144835s] [10.992144835s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:12.362621 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"ironic-operator-controller-manager-dockercfg-qd9ll\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dironic-operator-controller-manager-dockercfg-qd9ll&resourceVersion=109846\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33870->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:12.382918 5050 reflector.go:561] object-"openshift-machine-config-operator"/"machine-config-server-tls": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmachine-config-server-tls&resourceVersion=109887": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33856->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:12.382984 5050 trace.go:236] Trace[1445273443]: "Reflector ListAndWatch" name:object-"openshift-machine-config-operator"/"machine-config-server-tls" (11-Dec-2025 15:54:01.369) (total time: 11013ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1445273443]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmachine-config-server-tls&resourceVersion=109887": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33856->38.102.83.147:6443: read: connection reset by peer 11013ms (15:54:12.382) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1445273443]: [11.013875967s] [11.013875967s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:12.383008 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"machine-config-server-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmachine-config-server-tls&resourceVersion=109887\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33856->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:12.403035 5050 reflector.go:561] object-"openshift-route-controller-manager"/"config": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/configmaps?fieldSelector=metadata.name%3Dconfig&resourceVersion=109602": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33886->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:12.403094 5050 trace.go:236] Trace[763398079]: "Reflector ListAndWatch" name:object-"openshift-route-controller-manager"/"config" (11-Dec-2025 15:54:01.377) (total time: 11025ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[763398079]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/configmaps?fieldSelector=metadata.name%3Dconfig&resourceVersion=109602": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33886->38.102.83.147:6443: read: connection reset by peer 11025ms (15:54:12.403) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[763398079]: [11.025074787s] [11.025074787s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:12.403111 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-route-controller-manager\"/\"config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/configmaps?fieldSelector=metadata.name%3Dconfig&resourceVersion=109602\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33886->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:12.416349 5050 patch_prober.go:28] interesting pod/dns-default-25p7l container/dns namespace/openshift-dns: Liveness probe status=failure output="Get \"http://10.217.0.27:8080/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:12.416401 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-dns/dns-default-25p7l" podUID="66dfe3f5-9e7a-4e1c-b03a-b6f01dab6ef4" containerName="dns" probeResult="failure" output="Get \"http://10.217.0.27:8080/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:12.422886 5050 reflector.go:561] object-"openstack"/"alertmanager-metric-storage-cluster-tls-config": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dalertmanager-metric-storage-cluster-tls-config&resourceVersion=109846": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33836->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:12.423000 5050 trace.go:236] 
Trace[1965901532]: "Reflector ListAndWatch" name:object-"openstack"/"alertmanager-metric-storage-cluster-tls-config" (11-Dec-2025 15:54:01.369) (total time: 11053ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1965901532]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dalertmanager-metric-storage-cluster-tls-config&resourceVersion=109846": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33836->38.102.83.147:6443: read: connection reset by peer 11053ms (15:54:12.422) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1965901532]: [11.05393226s] [11.05393226s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:12.423049 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"alertmanager-metric-storage-cluster-tls-config\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dalertmanager-metric-storage-cluster-tls-config&resourceVersion=109846\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33836->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:12.442876 5050 reflector.go:561] object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-multus/secrets?fieldSelector=metadata.name%3Dmetrics-daemon-sa-dockercfg-d427c&resourceVersion=109743": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33908->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:12.442948 5050 trace.go:236] Trace[68977698]: "Reflector ListAndWatch" name:object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" (11-Dec-2025 15:54:01.378) (total time: 11064ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[68977698]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-multus/secrets?fieldSelector=metadata.name%3Dmetrics-daemon-sa-dockercfg-d427c&resourceVersion=109743": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33908->38.102.83.147:6443: read: connection reset by peer 11064ms (15:54:12.442) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[68977698]: [11.064856063s] [11.064856063s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:12.442976 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"metrics-daemon-sa-dockercfg-d427c\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-multus/secrets?fieldSelector=metadata.name%3Dmetrics-daemon-sa-dockercfg-d427c&resourceVersion=109743\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33908->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:12.462839 5050 reflector.go:561] object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl": failed to list *v1.Secret: Get 
"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca-operator/secrets?fieldSelector=metadata.name%3Dservice-ca-operator-dockercfg-rg9jl&resourceVersion=109623": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33048->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:12.462913 5050 trace.go:236] Trace[1594157831]: "Reflector ListAndWatch" name:object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" (11-Dec-2025 15:54:01.208) (total time: 11254ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1594157831]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca-operator/secrets?fieldSelector=metadata.name%3Dservice-ca-operator-dockercfg-rg9jl&resourceVersion=109623": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33048->38.102.83.147:6443: read: connection reset by peer 11254ms (15:54:12.462) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1594157831]: [11.2547478s] [11.2547478s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:12.462932 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-service-ca-operator\"/\"service-ca-operator-dockercfg-rg9jl\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca-operator/secrets?fieldSelector=metadata.name%3Dservice-ca-operator-dockercfg-rg9jl&resourceVersion=109623\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33048->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:12.483395 5050 reflector.go:561] object-"openstack"/"ceilometer-scripts": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dceilometer-scripts&resourceVersion=109137": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:32870->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:12.483461 5050 trace.go:236] Trace[757642152]: "Reflector ListAndWatch" name:object-"openstack"/"ceilometer-scripts" (11-Dec-2025 15:54:01.164) (total time: 11319ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[757642152]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dceilometer-scripts&resourceVersion=109137": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:32870->38.102.83.147:6443: read: connection reset by peer 11319ms (15:54:12.483) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[757642152]: [11.319330129s] [11.319330129s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:12.483483 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"ceilometer-scripts\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dceilometer-scripts&resourceVersion=109137\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:32870->38.102.83.147:6443: read: connection reset by peer" 
logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:12.491961 5050 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.147:6443: connect: connection refused" interval="800ms" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:12.502918 5050 reflector.go:561] object-"openshift-console"/"oauth-serving-cert": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/configmaps?fieldSelector=metadata.name%3Doauth-serving-cert&resourceVersion=109067": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:32970->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:12.502983 5050 trace.go:236] Trace[931739647]: "Reflector ListAndWatch" name:object-"openshift-console"/"oauth-serving-cert" (11-Dec-2025 15:54:01.187) (total time: 11315ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[931739647]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/configmaps?fieldSelector=metadata.name%3Doauth-serving-cert&resourceVersion=109067": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:32970->38.102.83.147:6443: read: connection reset by peer 11315ms (15:54:12.502) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[931739647]: [11.315723473s] [11.315723473s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:12.503005 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-console\"/\"oauth-serving-cert\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/configmaps?fieldSelector=metadata.name%3Doauth-serving-cert&resourceVersion=109067\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:32970->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:12.522831 5050 reflector.go:561] object-"openstack"/"nova-cell1-conductor-config-data": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dnova-cell1-conductor-config-data&resourceVersion=109137": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33882->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:12.522918 5050 trace.go:236] Trace[392133115]: "Reflector ListAndWatch" name:object-"openstack"/"nova-cell1-conductor-config-data" (11-Dec-2025 15:54:01.371) (total time: 11151ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[392133115]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dnova-cell1-conductor-config-data&resourceVersion=109137": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33882->38.102.83.147:6443: read: connection reset by peer 11151ms (15:54:12.522) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[392133115]: [11.151395081s] [11.151395081s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:12.522946 5050 reflector.go:158] 
"Unhandled Error" err="object-\"openstack\"/\"nova-cell1-conductor-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dnova-cell1-conductor-config-data&resourceVersion=109137\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33882->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:12.542544 5050 reflector.go:561] object-"openshift-ingress"/"router-metrics-certs-default": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress/secrets?fieldSelector=metadata.name%3Drouter-metrics-certs-default&resourceVersion=109846": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33772->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:12.542613 5050 trace.go:236] Trace[1709302096]: "Reflector ListAndWatch" name:object-"openshift-ingress"/"router-metrics-certs-default" (11-Dec-2025 15:54:01.351) (total time: 11190ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1709302096]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress/secrets?fieldSelector=metadata.name%3Drouter-metrics-certs-default&resourceVersion=109846": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33772->38.102.83.147:6443: read: connection reset by peer 11190ms (15:54:12.542) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1709302096]: [11.19090942s] [11.19090942s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:12.542636 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-ingress\"/\"router-metrics-certs-default\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress/secrets?fieldSelector=metadata.name%3Drouter-metrics-certs-default&resourceVersion=109846\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33772->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:12.562869 5050 reflector.go:561] object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-7bf58": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dswift-operator-controller-manager-dockercfg-7bf58&resourceVersion=109671": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33660->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:12.562934 5050 trace.go:236] Trace[775166590]: "Reflector ListAndWatch" name:object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-7bf58" (11-Dec-2025 15:54:01.333) (total time: 11228ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[775166590]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dswift-operator-controller-manager-dockercfg-7bf58&resourceVersion=109671": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous 
attempt: read tcp 38.102.83.147:33660->38.102.83.147:6443: read: connection reset by peer 11228ms (15:54:12.562) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[775166590]: [11.228930818s] [11.228930818s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:12.562957 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"swift-operator-controller-manager-dockercfg-7bf58\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dswift-operator-controller-manager-dockercfg-7bf58&resourceVersion=109671\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33660->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:12.584471 5050 reflector.go:561] object-"openshift-dns"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-dns/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109807": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33924->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:12.584533 5050 trace.go:236] Trace[848726799]: "Reflector ListAndWatch" name:object-"openshift-dns"/"openshift-service-ca.crt" (11-Dec-2025 15:54:01.381) (total time: 11203ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[848726799]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-dns/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109807": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33924->38.102.83.147:6443: read: connection reset by peer 11203ms (15:54:12.584) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[848726799]: [11.203315032s] [11.203315032s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:12.584556 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-dns\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-dns/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109807\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33924->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:12.602546 5050 reflector.go:561] object-"openstack"/"nova-nova-dockercfg-mdjbl": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dnova-nova-dockercfg-mdjbl&resourceVersion=109671": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33148->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:12.602609 5050 trace.go:236] Trace[906191383]: "Reflector ListAndWatch" name:object-"openstack"/"nova-nova-dockercfg-mdjbl" (11-Dec-2025 15:54:01.226) (total time: 11375ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[906191383]: ---"Objects listed" error:Get 
"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dnova-nova-dockercfg-mdjbl&resourceVersion=109671": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33148->38.102.83.147:6443: read: connection reset by peer 11375ms (15:54:12.602) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[906191383]: [11.375768242s] [11.375768242s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:12.602630 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"nova-nova-dockercfg-mdjbl\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dnova-nova-dockercfg-mdjbl&resourceVersion=109671\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33148->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:12.622555 5050 reflector.go:561] object-"openshift-operator-lifecycle-manager"/"pprof-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/secrets?fieldSelector=metadata.name%3Dpprof-cert&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33104->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:12.622620 5050 trace.go:236] Trace[2135119819]: "Reflector ListAndWatch" name:object-"openshift-operator-lifecycle-manager"/"pprof-cert" (11-Dec-2025 15:54:01.218) (total time: 11403ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[2135119819]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/secrets?fieldSelector=metadata.name%3Dpprof-cert&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33104->38.102.83.147:6443: read: connection reset by peer 11403ms (15:54:12.622) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[2135119819]: [11.403648058s] [11.403648058s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:12.622641 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-operator-lifecycle-manager\"/\"pprof-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/secrets?fieldSelector=metadata.name%3Dpprof-cert&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33104->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:12.639430 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-qdrgd" podUID="4b4e8aa9-a0f6-4bd2-92d2-c524aedf6389" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.74:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:12.639787 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-qdrgd" podUID="4b4e8aa9-a0f6-4bd2-92d2-c524aedf6389" 
containerName="manager" probeResult="failure" output="Get \"http://10.217.0.74:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:12.639816 5050 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-qdrgd" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:12.640601 5050 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="manager" containerStatusID={"Type":"cri-o","ID":"af7a54f4263f021d4aff8a9a4ae17f2163472123dd835db7a0edb1c97d4ed3a2"} pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-qdrgd" containerMessage="Container manager failed liveness probe, will be restarted" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:12.640647 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-qdrgd" podUID="4b4e8aa9-a0f6-4bd2-92d2-c524aedf6389" containerName="manager" containerID="cri-o://af7a54f4263f021d4aff8a9a4ae17f2163472123dd835db7a0edb1c97d4ed3a2" gracePeriod=10 Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:12.642617 5050 reflector.go:561] object-"openshift-ingress-canary"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-canary/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109559": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33206->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:12.642668 5050 trace.go:236] Trace[1022552983]: "Reflector ListAndWatch" name:object-"openshift-ingress-canary"/"kube-root-ca.crt" (11-Dec-2025 15:54:01.229) (total time: 11412ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1022552983]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-canary/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109559": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33206->38.102.83.147:6443: read: connection reset by peer 11412ms (15:54:12.642) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1022552983]: [11.412906627s] [11.412906627s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:12.642685 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-ingress-canary\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-canary/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109559\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33206->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:12.663400 5050 reflector.go:561] object-"openstack"/"prometheus-metric-storage-rulefiles-0": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dprometheus-metric-storage-rulefiles-0&resourceVersion=109888": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33656->38.102.83.147:6443: 
read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:12.663463 5050 trace.go:236] Trace[1252378392]: "Reflector ListAndWatch" name:object-"openstack"/"prometheus-metric-storage-rulefiles-0" (11-Dec-2025 15:54:01.332) (total time: 11330ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1252378392]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dprometheus-metric-storage-rulefiles-0&resourceVersion=109888": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33656->38.102.83.147:6443: read: connection reset by peer 11330ms (15:54:12.663) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1252378392]: [11.330562761s] [11.330562761s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:12.663482 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"prometheus-metric-storage-rulefiles-0\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dprometheus-metric-storage-rulefiles-0&resourceVersion=109888\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33656->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:12.683397 5050 reflector.go:561] object-"openshift-authentication-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109685": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33698->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:12.683442 5050 trace.go:236] Trace[546219960]: "Reflector ListAndWatch" name:object-"openshift-authentication-operator"/"kube-root-ca.crt" (11-Dec-2025 15:54:01.339) (total time: 11344ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[546219960]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109685": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33698->38.102.83.147:6443: read: connection reset by peer 11343ms (15:54:12.683) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[546219960]: [11.344023371s] [11.344023371s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:12.683458 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109685\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33698->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:12.703185 5050 reflector.go:561] object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z": failed to list *v1.Secret: Get 
"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/secrets?fieldSelector=metadata.name%3Dopenshift-config-operator-dockercfg-7pc5z&resourceVersion=109307": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33824->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:12.703240 5050 trace.go:236] Trace[1906114078]: "Reflector ListAndWatch" name:object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" (11-Dec-2025 15:54:01.365) (total time: 11337ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1906114078]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/secrets?fieldSelector=metadata.name%3Dopenshift-config-operator-dockercfg-7pc5z&resourceVersion=109307": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33824->38.102.83.147:6443: read: connection reset by peer 11337ms (15:54:12.703) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1906114078]: [11.337341062s] [11.337341062s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:12.703256 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-config-operator\"/\"openshift-config-operator-dockercfg-7pc5z\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/secrets?fieldSelector=metadata.name%3Dopenshift-config-operator-dockercfg-7pc5z&resourceVersion=109307\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33824->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:12.722307 5050 reflector.go:561] object-"openshift-apiserver"/"etcd-serving-ca": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/configmaps?fieldSelector=metadata.name%3Detcd-serving-ca&resourceVersion=109807": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33380->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:12.722316 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-n7crp" podUID="85683dfb-37fb-4301-8c7a-fbb7453b303d" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.75:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:12.722354 5050 trace.go:236] Trace[326507041]: "Reflector ListAndWatch" name:object-"openshift-apiserver"/"etcd-serving-ca" (11-Dec-2025 15:54:01.266) (total time: 11455ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[326507041]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/configmaps?fieldSelector=metadata.name%3Detcd-serving-ca&resourceVersion=109807": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33380->38.102.83.147:6443: read: connection reset by peer 11455ms (15:54:12.722) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[326507041]: [11.455387715s] [11.455387715s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:12.722367 5050 
reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"etcd-serving-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/configmaps?fieldSelector=metadata.name%3Detcd-serving-ca&resourceVersion=109807\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33380->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:12.722373 5050 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-n7crp" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:12.722947 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-n7crp" podUID="85683dfb-37fb-4301-8c7a-fbb7453b303d" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.75:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:12.723218 5050 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="manager" containerStatusID={"Type":"cri-o","ID":"17ce3bb0fb08a6af9be92adfec958eef19d6b30d1b82e27337ec77e555a96524"} pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-n7crp" containerMessage="Container manager failed liveness probe, will be restarted" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:12.723264 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-n7crp" podUID="85683dfb-37fb-4301-8c7a-fbb7453b303d" containerName="manager" containerID="cri-o://17ce3bb0fb08a6af9be92adfec958eef19d6b30d1b82e27337ec77e555a96524" gracePeriod=10 Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:12.742844 5050 reflector.go:561] object-"openshift-service-ca-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109888": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33396->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:12.742897 5050 trace.go:236] Trace[1987735080]: "Reflector ListAndWatch" name:object-"openshift-service-ca-operator"/"openshift-service-ca.crt" (11-Dec-2025 15:54:01.274) (total time: 11468ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1987735080]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109888": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33396->38.102.83.147:6443: read: connection reset by peer 11468ms (15:54:12.742) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1987735080]: [11.468459445s] [11.468459445s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:12.742915 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-service-ca-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109888\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33396->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:12.764502 5050 reflector.go:561] object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-storage-version-migrator/secrets?fieldSelector=metadata.name%3Dkube-storage-version-migrator-sa-dockercfg-5xfcg&resourceVersion=109237": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33334->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:12.764557 5050 trace.go:236] Trace[2000446578]: "Reflector ListAndWatch" name:object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" (11-Dec-2025 15:54:01.254) (total time: 11510ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[2000446578]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-storage-version-migrator/secrets?fieldSelector=metadata.name%3Dkube-storage-version-migrator-sa-dockercfg-5xfcg&resourceVersion=109237": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33334->38.102.83.147:6443: read: connection reset by peer 11510ms (15:54:12.764) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[2000446578]: [11.510501611s] [11.510501611s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:12.764575 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-kube-storage-version-migrator\"/\"kube-storage-version-migrator-sa-dockercfg-5xfcg\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-storage-version-migrator/secrets?fieldSelector=metadata.name%3Dkube-storage-version-migrator-sa-dockercfg-5xfcg&resourceVersion=109237\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33334->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:12.771141 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="14ad1594-090d-4024-a999-9ffe77ce58d8" containerName="galera" probeResult="failure" output="command timed out" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:12.783204 5050 reflector.go:561] object-"openshift-multus"/"default-cni-sysctl-allowlist": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-multus/configmaps?fieldSelector=metadata.name%3Ddefault-cni-sysctl-allowlist&resourceVersion=109158": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33480->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:12.783271 5050 trace.go:236] Trace[2059948553]: "Reflector ListAndWatch" name:object-"openshift-multus"/"default-cni-sysctl-allowlist" (11-Dec-2025 15:54:01.284) (total time: 11498ms): Dec 11 15:54:41 crc 
kubenswrapper[5050]: Trace[2059948553]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-multus/configmaps?fieldSelector=metadata.name%3Ddefault-cni-sysctl-allowlist&resourceVersion=109158": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33480->38.102.83.147:6443: read: connection reset by peer 11498ms (15:54:12.783) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[2059948553]: [11.498277704s] [11.498277704s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:12.783287 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"default-cni-sysctl-allowlist\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-multus/configmaps?fieldSelector=metadata.name%3Ddefault-cni-sysctl-allowlist&resourceVersion=109158\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33480->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:12.803472 5050 reflector.go:561] object-"openshift-network-node-identity"/"network-node-identity-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/secrets?fieldSelector=metadata.name%3Dnetwork-node-identity-cert&resourceVersion=109887": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33644->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:12.803531 5050 trace.go:236] Trace[951458050]: "Reflector ListAndWatch" name:object-"openshift-network-node-identity"/"network-node-identity-cert" (11-Dec-2025 15:54:01.327) (total time: 11475ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[951458050]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/secrets?fieldSelector=metadata.name%3Dnetwork-node-identity-cert&resourceVersion=109887": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33644->38.102.83.147:6443: read: connection reset by peer 11475ms (15:54:12.803) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[951458050]: [11.475824392s] [11.475824392s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:12.803549 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-node-identity\"/\"network-node-identity-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/secrets?fieldSelector=metadata.name%3Dnetwork-node-identity-cert&resourceVersion=109887\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33644->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:12.823059 5050 reflector.go:561] object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-g8gr8": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operators/secrets?fieldSelector=metadata.name%3Dobo-prometheus-operator-admission-webhook-dockercfg-g8gr8&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous 
attempt: read tcp 38.102.83.147:33424->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:12.823143 5050 trace.go:236] Trace[15114699]: "Reflector ListAndWatch" name:object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-g8gr8" (11-Dec-2025 15:54:01.280) (total time: 11542ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[15114699]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operators/secrets?fieldSelector=metadata.name%3Dobo-prometheus-operator-admission-webhook-dockercfg-g8gr8&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33424->38.102.83.147:6443: read: connection reset by peer 11542ms (15:54:12.823) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[15114699]: [11.542769316s] [11.542769316s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:12.823178 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-operators\"/\"obo-prometheus-operator-admission-webhook-dockercfg-g8gr8\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operators/secrets?fieldSelector=metadata.name%3Dobo-prometheus-operator-admission-webhook-dockercfg-g8gr8&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33424->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:12.842950 5050 reflector.go:561] object-"openshift-nmstate"/"plugin-serving-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-nmstate/secrets?fieldSelector=metadata.name%3Dplugin-serving-cert&resourceVersion=109743": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33752->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:12.843035 5050 trace.go:236] Trace[2078609738]: "Reflector ListAndWatch" name:object-"openshift-nmstate"/"plugin-serving-cert" (11-Dec-2025 15:54:01.341) (total time: 11501ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[2078609738]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-nmstate/secrets?fieldSelector=metadata.name%3Dplugin-serving-cert&resourceVersion=109743": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33752->38.102.83.147:6443: read: connection reset by peer 11501ms (15:54:12.842) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[2078609738]: [11.501385187s] [11.501385187s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:12.843058 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-nmstate\"/\"plugin-serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-nmstate/secrets?fieldSelector=metadata.name%3Dplugin-serving-cert&resourceVersion=109743\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33752->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:12.847184 5050 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack/prometheus-metric-storage-0" podUID="7fb00b03-fe6e-4c66-bd36-adf9443871a8" containerName="prometheus" probeResult="failure" output="Get \"http://10.217.1.135:9090/-/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:12.847287 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:12.848784 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/prometheus-metric-storage-0" podUID="7fb00b03-fe6e-4c66-bd36-adf9443871a8" containerName="prometheus" probeResult="failure" output="Get \"http://10.217.1.135:9090/-/healthy\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:12.851111 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-8jnzj" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:12.868597 5050 reflector.go:561] object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-kube-scheduler-operator-config&resourceVersion=109158": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33696->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:12.868698 5050 trace.go:236] Trace[1271907269]: "Reflector ListAndWatch" name:object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" (11-Dec-2025 15:54:01.338) (total time: 11530ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1271907269]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-kube-scheduler-operator-config&resourceVersion=109158": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33696->38.102.83.147:6443: read: connection reset by peer 11530ms (15:54:12.868) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1271907269]: [11.530345323s] [11.530345323s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:12.868753 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-kube-scheduler-operator-config&resourceVersion=109158\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33696->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:12.883439 5050 reflector.go:561] object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ovn-kubernetes/secrets?fieldSelector=metadata.name%3Dovn-kubernetes-control-plane-dockercfg-gs7dd&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused - 
error from a previous attempt: read tcp 38.102.83.147:33494->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:12.883486 5050 trace.go:236] Trace[561011502]: "Reflector ListAndWatch" name:object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" (11-Dec-2025 15:54:01.288) (total time: 11595ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[561011502]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ovn-kubernetes/secrets?fieldSelector=metadata.name%3Dovn-kubernetes-control-plane-dockercfg-gs7dd&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33494->38.102.83.147:6443: read: connection reset by peer 11595ms (15:54:12.883) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[561011502]: [11.595360295s] [11.595360295s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:12.883500 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-control-plane-dockercfg-gs7dd\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ovn-kubernetes/secrets?fieldSelector=metadata.name%3Dovn-kubernetes-control-plane-dockercfg-gs7dd&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33494->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:12.903084 5050 request.go:700] Waited for 1.017667244s, retries: 1, retry-after: 1s - retry-reason: due to retryable error, error: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dbarbican-barbican-dockercfg-fxl2b&resourceVersion=109512": read tcp 38.102.83.147:33570->38.102.83.147:6443: read: connection reset by peer - request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dbarbican-barbican-dockercfg-fxl2b&resourceVersion=109512 Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:12.903530 5050 reflector.go:561] object-"openstack"/"barbican-barbican-dockercfg-fxl2b": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dbarbican-barbican-dockercfg-fxl2b&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33570->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:12.903606 5050 trace.go:236] Trace[2056201276]: "Reflector ListAndWatch" name:object-"openstack"/"barbican-barbican-dockercfg-fxl2b" (11-Dec-2025 15:54:01.304) (total time: 11599ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[2056201276]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dbarbican-barbican-dockercfg-fxl2b&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33570->38.102.83.147:6443: read: connection reset by peer 11599ms (15:54:12.903) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[2056201276]: [11.599401523s] [11.599401523s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:12.903625 5050 reflector.go:158] "Unhandled Error" 
err="object-\"openstack\"/\"barbican-barbican-dockercfg-fxl2b\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dbarbican-barbican-dockercfg-fxl2b&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33570->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:12.923389 5050 reflector.go:561] object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/configmaps?fieldSelector=metadata.name%3Dkube-controller-manager-operator-config&resourceVersion=109158": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33578->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:12.923464 5050 trace.go:236] Trace[1497632781]: "Reflector ListAndWatch" name:object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" (11-Dec-2025 15:54:01.305) (total time: 11618ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1497632781]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/configmaps?fieldSelector=metadata.name%3Dkube-controller-manager-operator-config&resourceVersion=109158": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33578->38.102.83.147:6443: read: connection reset by peer 11618ms (15:54:12.923) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1497632781]: [11.618059833s] [11.618059833s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:12.923484 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/configmaps?fieldSelector=metadata.name%3Dkube-controller-manager-operator-config&resourceVersion=109158\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33578->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:12.943296 5050 reflector.go:561] object-"openshift-machine-config-operator"/"kube-rbac-proxy": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/configmaps?fieldSelector=metadata.name%3Dkube-rbac-proxy&resourceVersion=109851": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33604->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:12.943355 5050 trace.go:236] Trace[1988080215]: "Reflector ListAndWatch" name:object-"openshift-machine-config-operator"/"kube-rbac-proxy" (11-Dec-2025 15:54:01.317) (total time: 11626ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1988080215]: ---"Objects listed" error:Get 
"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/configmaps?fieldSelector=metadata.name%3Dkube-rbac-proxy&resourceVersion=109851": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33604->38.102.83.147:6443: read: connection reset by peer 11626ms (15:54:12.943) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1988080215]: [11.626126309s] [11.626126309s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:12.943371 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"kube-rbac-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/configmaps?fieldSelector=metadata.name%3Dkube-rbac-proxy&resourceVersion=109851\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33604->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:12.962896 5050 reflector.go:561] object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/secrets?fieldSelector=metadata.name%3Dingress-operator-dockercfg-7lnqk&resourceVersion=109803": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33594->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:12.963029 5050 trace.go:236] Trace[1622790410]: "Reflector ListAndWatch" name:object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" (11-Dec-2025 15:54:01.307) (total time: 11655ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1622790410]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/secrets?fieldSelector=metadata.name%3Dingress-operator-dockercfg-7lnqk&resourceVersion=109803": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33594->38.102.83.147:6443: read: connection reset by peer 11655ms (15:54:12.962) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1622790410]: [11.655099505s] [11.655099505s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:12.963057 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-ingress-operator\"/\"ingress-operator-dockercfg-7lnqk\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/secrets?fieldSelector=metadata.name%3Dingress-operator-dockercfg-7lnqk&resourceVersion=109803\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33594->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:12.983350 5050 reflector.go:561] object-"openshift-operators"/"observability-operator-tls": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operators/secrets?fieldSelector=metadata.name%3Dobservability-operator-tls&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33514->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc 
kubenswrapper[5050]: I1211 15:54:12.983439 5050 trace.go:236] Trace[391116163]: "Reflector ListAndWatch" name:object-"openshift-operators"/"observability-operator-tls" (11-Dec-2025 15:54:01.289) (total time: 11694ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[391116163]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operators/secrets?fieldSelector=metadata.name%3Dobservability-operator-tls&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33514->38.102.83.147:6443: read: connection reset by peer 11694ms (15:54:12.983) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[391116163]: [11.694211923s] [11.694211923s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:12.983462 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-operators\"/\"observability-operator-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operators/secrets?fieldSelector=metadata.name%3Dobservability-operator-tls&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33514->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:13.002375 5050 reflector.go:561] object-"openstack"/"octavia-healthmanager-config-data": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Doctavia-healthmanager-config-data&resourceVersion=109586": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33626->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:13.002432 5050 trace.go:236] Trace[1862246503]: "Reflector ListAndWatch" name:object-"openstack"/"octavia-healthmanager-config-data" (11-Dec-2025 15:54:01.319) (total time: 11683ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1862246503]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Doctavia-healthmanager-config-data&resourceVersion=109586": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33626->38.102.83.147:6443: read: connection reset by peer 11683ms (15:54:13.002) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1862246503]: [11.683043154s] [11.683043154s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:13.002450 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"octavia-healthmanager-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Doctavia-healthmanager-config-data&resourceVersion=109586\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33626->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:13.023000 5050 reflector.go:561] object-"openshift-network-node-identity"/"env-overrides": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/configmaps?fieldSelector=metadata.name%3Denv-overrides&resourceVersion=109522": 
dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33546->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:13.023088 5050 trace.go:236] Trace[300222206]: "Reflector ListAndWatch" name:object-"openshift-network-node-identity"/"env-overrides" (11-Dec-2025 15:54:01.295) (total time: 11727ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[300222206]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/configmaps?fieldSelector=metadata.name%3Denv-overrides&resourceVersion=109522": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33546->38.102.83.147:6443: read: connection reset by peer 11727ms (15:54:13.022) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[300222206]: [11.727180367s] [11.727180367s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:13.023106 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-node-identity\"/\"env-overrides\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/configmaps?fieldSelector=metadata.name%3Denv-overrides&resourceVersion=109522\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33546->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:13.042238 5050 patch_prober.go:28] interesting pod/route-controller-manager-767f6d799d-cv7mn container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.64:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:13.042300 5050 patch_prober.go:28] interesting pod/route-controller-manager-767f6d799d-cv7mn container/route-controller-manager namespace/openshift-route-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.64:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:13.042257 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-qbc2f" podUID="9726c3f9-bcae-4722-a054-5a66c161953b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.79:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:13.042330 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-route-controller-manager/route-controller-manager-767f6d799d-cv7mn" podUID="5b7ceea3-4e92-46ee-81de-5b8f932144ad" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.64:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:13.042352 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-767f6d799d-cv7mn" podUID="5b7ceea3-4e92-46ee-81de-5b8f932144ad" containerName="route-controller-manager" probeResult="failure" output="Get 
\"https://10.217.0.64:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:13.042753 5050 reflector.go:561] object-"openshift-nmstate"/"openshift-nmstate-webhook": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-nmstate/secrets?fieldSelector=metadata.name%3Dopenshift-nmstate-webhook&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33554->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:13.043005 5050 trace.go:236] Trace[655829446]: "Reflector ListAndWatch" name:object-"openshift-nmstate"/"openshift-nmstate-webhook" (11-Dec-2025 15:54:01.296) (total time: 11746ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[655829446]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-nmstate/secrets?fieldSelector=metadata.name%3Dopenshift-nmstate-webhook&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33554->38.102.83.147:6443: read: connection reset by peer 11745ms (15:54:13.042) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[655829446]: [11.746031982s] [11.746031982s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:13.043049 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-nmstate\"/\"openshift-nmstate-webhook\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-nmstate/secrets?fieldSelector=metadata.name%3Dopenshift-nmstate-webhook&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33554->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:13.063506 5050 reflector.go:561] object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/secrets?fieldSelector=metadata.name%3Dmarketplace-operator-dockercfg-5nsgg&resourceVersion=109803": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33670->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:13.063582 5050 trace.go:236] Trace[313694293]: "Reflector ListAndWatch" name:object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" (11-Dec-2025 15:54:01.335) (total time: 11728ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[313694293]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/secrets?fieldSelector=metadata.name%3Dmarketplace-operator-dockercfg-5nsgg&resourceVersion=109803": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33670->38.102.83.147:6443: read: connection reset by peer 11728ms (15:54:13.063) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[313694293]: [11.728477251s] [11.728477251s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:13.063599 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-marketplace\"/\"marketplace-operator-dockercfg-5nsgg\": Failed to watch *v1.Secret: 
failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/secrets?fieldSelector=metadata.name%3Dmarketplace-operator-dockercfg-5nsgg&resourceVersion=109803\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33670->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:13.082548 5050 reflector.go:561] object-"openshift-image-registry"/"image-registry-certificates": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/configmaps?fieldSelector=metadata.name%3Dimage-registry-certificates&resourceVersion=109522": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33528->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:13.082608 5050 trace.go:236] Trace[1092034774]: "Reflector ListAndWatch" name:object-"openshift-image-registry"/"image-registry-certificates" (11-Dec-2025 15:54:01.291) (total time: 11790ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1092034774]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/configmaps?fieldSelector=metadata.name%3Dimage-registry-certificates&resourceVersion=109522": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33528->38.102.83.147:6443: read: connection reset by peer 11790ms (15:54:13.082) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1092034774]: [11.790996926s] [11.790996926s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:13.082625 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-image-registry\"/\"image-registry-certificates\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/configmaps?fieldSelector=metadata.name%3Dimage-registry-certificates&resourceVersion=109522\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33528->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:13.103253 5050 reflector.go:561] object-"openshift-cluster-version"/"default-dockercfg-gxtc4": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-version/secrets?fieldSelector=metadata.name%3Ddefault-dockercfg-gxtc4&resourceVersion=109743": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33282->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:13.103303 5050 trace.go:236] Trace[1349325702]: "Reflector ListAndWatch" name:object-"openshift-cluster-version"/"default-dockercfg-gxtc4" (11-Dec-2025 15:54:01.240) (total time: 11862ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1349325702]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-version/secrets?fieldSelector=metadata.name%3Ddefault-dockercfg-gxtc4&resourceVersion=109743": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33282->38.102.83.147:6443: read: connection reset by peer 11862ms (15:54:13.103) Dec 11 
15:54:41 crc kubenswrapper[5050]: Trace[1349325702]: [11.862497161s] [11.862497161s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:13.103319 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-version\"/\"default-dockercfg-gxtc4\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-version/secrets?fieldSelector=metadata.name%3Ddefault-dockercfg-gxtc4&resourceVersion=109743\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33282->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:13.122936 5050 reflector.go:561] object-"openshift-cluster-version"/"cluster-version-operator-serving-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-version/secrets?fieldSelector=metadata.name%3Dcluster-version-operator-serving-cert&resourceVersion=109586": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34008->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:13.122986 5050 trace.go:236] Trace[572132929]: "Reflector ListAndWatch" name:object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" (11-Dec-2025 15:54:01.384) (total time: 11738ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[572132929]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-version/secrets?fieldSelector=metadata.name%3Dcluster-version-operator-serving-cert&resourceVersion=109586": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34008->38.102.83.147:6443: read: connection reset by peer 11738ms (15:54:13.122) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[572132929]: [11.738417307s] [11.738417307s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:13.122998 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-version\"/\"cluster-version-operator-serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-version/secrets?fieldSelector=metadata.name%3Dcluster-version-operator-serving-cert&resourceVersion=109586\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34008->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:13.142998 5050 reflector.go:561] object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-4x88l": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dovn-operator-controller-manager-dockercfg-4x88l&resourceVersion=109803": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34132->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:13.143062 5050 trace.go:236] Trace[1429846870]: "Reflector ListAndWatch" name:object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-4x88l" (11-Dec-2025 15:54:01.399) (total time: 11744ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1429846870]: ---"Objects 
listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dovn-operator-controller-manager-dockercfg-4x88l&resourceVersion=109803": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34132->38.102.83.147:6443: read: connection reset by peer 11743ms (15:54:13.142) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1429846870]: [11.744000447s] [11.744000447s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:13.143077 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"ovn-operator-controller-manager-dockercfg-4x88l\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dovn-operator-controller-manager-dockercfg-4x88l&resourceVersion=109803\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34132->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:13.162875 5050 reflector.go:561] object-"openstack-operators"/"infra-operator-webhook-server-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dinfra-operator-webhook-server-cert&resourceVersion=109803": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34158->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:13.162964 5050 trace.go:236] Trace[1646062302]: "Reflector ListAndWatch" name:object-"openstack-operators"/"infra-operator-webhook-server-cert" (11-Dec-2025 15:54:01.402) (total time: 11759ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1646062302]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dinfra-operator-webhook-server-cert&resourceVersion=109803": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34158->38.102.83.147:6443: read: connection reset by peer 11759ms (15:54:13.162) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1646062302]: [11.759933274s] [11.759933274s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:13.163000 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"infra-operator-webhook-server-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dinfra-operator-webhook-server-cert&resourceVersion=109803\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34158->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:13.183067 5050 reflector.go:561] object-"hostpath-provisioner"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/hostpath-provisioner/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109158": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33958->38.102.83.147:6443: read: connection reset by peer Dec 
11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:13.183108 5050 trace.go:236] Trace[627095399]: "Reflector ListAndWatch" name:object-"hostpath-provisioner"/"kube-root-ca.crt" (11-Dec-2025 15:54:01.381) (total time: 11801ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[627095399]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/hostpath-provisioner/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109158": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33958->38.102.83.147:6443: read: connection reset by peer 11801ms (15:54:13.183) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[627095399]: [11.801743724s] [11.801743724s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:13.183131 5050 reflector.go:158] "Unhandled Error" err="object-\"hostpath-provisioner\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/hostpath-provisioner/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109158\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33958->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:13.203109 5050 reflector.go:561] object-"openstack"/"octavia-hmport-map": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Doctavia-hmport-map&resourceVersion=109522": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34174->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:13.203148 5050 trace.go:236] Trace[1132025824]: "Reflector ListAndWatch" name:object-"openstack"/"octavia-hmport-map" (11-Dec-2025 15:54:01.405) (total time: 11797ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1132025824]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Doctavia-hmport-map&resourceVersion=109522": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34174->38.102.83.147:6443: read: connection reset by peer 11797ms (15:54:13.203) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1132025824]: [11.797935782s] [11.797935782s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:13.203162 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"octavia-hmport-map\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Doctavia-hmport-map&resourceVersion=109522\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34174->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:13.220130 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-xgbp2" podUID="06df6c8c-640d-431b-b216-78345a9054e1" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.82:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 
15:54:41 crc kubenswrapper[5050]: I1211 15:54:13.220197 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-xgbp2" podUID="06df6c8c-640d-431b-b216-78345a9054e1" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.82:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:13.220246 5050 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-xgbp2" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:13.221045 5050 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="manager" containerStatusID={"Type":"cri-o","ID":"9285260a37630061624ba74a7c92327ead3d7a69163e896c20c90b3da8d7a4b6"} pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-xgbp2" containerMessage="Container manager failed liveness probe, will be restarted" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:13.221104 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-xgbp2" podUID="06df6c8c-640d-431b-b216-78345a9054e1" containerName="manager" containerID="cri-o://9285260a37630061624ba74a7c92327ead3d7a69163e896c20c90b3da8d7a4b6" gracePeriod=10 Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:13.223191 5050 reflector.go:561] object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager-operator/secrets?fieldSelector=metadata.name%3Dopenshift-controller-manager-operator-serving-cert&resourceVersion=109137": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34190->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:13.223233 5050 trace.go:236] Trace[567913199]: "Reflector ListAndWatch" name:object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" (11-Dec-2025 15:54:01.407) (total time: 11815ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[567913199]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager-operator/secrets?fieldSelector=metadata.name%3Dopenshift-controller-manager-operator-serving-cert&resourceVersion=109137": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34190->38.102.83.147:6443: read: connection reset by peer 11815ms (15:54:13.223) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[567913199]: [11.815824821s] [11.815824821s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:13.223247 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager-operator/secrets?fieldSelector=metadata.name%3Dopenshift-controller-manager-operator-serving-cert&resourceVersion=109137\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34190->38.102.83.147:6443: read: connection reset by peer" 
logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:13.242636 5050 reflector.go:561] object-"openshift-etcd-operator"/"etcd-operator-config": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-etcd-operator/configmaps?fieldSelector=metadata.name%3Detcd-operator-config&resourceVersion=109158": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34228->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:13.242713 5050 trace.go:236] Trace[146742187]: "Reflector ListAndWatch" name:object-"openshift-etcd-operator"/"etcd-operator-config" (11-Dec-2025 15:54:01.411) (total time: 11830ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[146742187]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-etcd-operator/configmaps?fieldSelector=metadata.name%3Detcd-operator-config&resourceVersion=109158": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34228->38.102.83.147:6443: read: connection reset by peer 11830ms (15:54:13.242) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[146742187]: [11.830837843s] [11.830837843s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:13.242732 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-etcd-operator\"/\"etcd-operator-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-etcd-operator/configmaps?fieldSelector=metadata.name%3Detcd-operator-config&resourceVersion=109158\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34228->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:13.263307 5050 reflector.go:561] object-"openstack"/"cinder-volume-volume1-config-data": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcinder-volume-volume1-config-data&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34258->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:13.263376 5050 trace.go:236] Trace[405583418]: "Reflector ListAndWatch" name:object-"openstack"/"cinder-volume-volume1-config-data" (11-Dec-2025 15:54:01.420) (total time: 11843ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[405583418]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcinder-volume-volume1-config-data&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34258->38.102.83.147:6443: read: connection reset by peer 11843ms (15:54:13.263) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[405583418]: [11.84303419s] [11.84303419s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:13.263391 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"cinder-volume-volume1-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcinder-volume-volume1-config-data&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34258->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:13.282853 5050 reflector.go:561] object-"openshift-config-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109522": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34270->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:13.282901 5050 trace.go:236] Trace[1269130181]: "Reflector ListAndWatch" name:object-"openshift-config-operator"/"kube-root-ca.crt" (11-Dec-2025 15:54:01.428) (total time: 11854ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1269130181]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109522": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34270->38.102.83.147:6443: read: connection reset by peer 11854ms (15:54:13.282) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1269130181]: [11.854045685s] [11.854045685s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:13.282917 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-config-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109522\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34270->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:13.293715 5050 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.147:6443: connect: connection refused" interval="1.6s" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:13.303284 5050 reflector.go:561] object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-controller-manager-operator-config&resourceVersion=109559": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34288->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:13.303342 5050 trace.go:236] Trace[1449660779]: "Reflector ListAndWatch" name:object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" (11-Dec-2025 15:54:01.433) (total time: 11869ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1449660779]: ---"Objects listed" error:Get 
"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-controller-manager-operator-config&resourceVersion=109559": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34288->38.102.83.147:6443: read: connection reset by peer 11869ms (15:54:13.303) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1449660779]: [11.869936591s] [11.869936591s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:13.303361 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-controller-manager-operator-config&resourceVersion=109559\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34288->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:13.323219 5050 reflector.go:561] object-"openshift-machine-api"/"control-plane-machine-set-operator-tls": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-api/secrets?fieldSelector=metadata.name%3Dcontrol-plane-machine-set-operator-tls&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34244->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:13.323286 5050 trace.go:236] Trace[1239615350]: "Reflector ListAndWatch" name:object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" (11-Dec-2025 15:54:01.417) (total time: 11906ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1239615350]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-api/secrets?fieldSelector=metadata.name%3Dcontrol-plane-machine-set-operator-tls&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34244->38.102.83.147:6443: read: connection reset by peer 11906ms (15:54:13.323) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1239615350]: [11.906183891s] [11.906183891s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:13.323309 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-api/secrets?fieldSelector=metadata.name%3Dcontrol-plane-machine-set-operator-tls&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34244->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:13.342695 5050 reflector.go:561] object-"openshift-console"/"console-dockercfg-f62pw": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/secrets?fieldSelector=metadata.name%3Dconsole-dockercfg-f62pw&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 
38.102.83.147:34610->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:13.342772 5050 trace.go:236] Trace[1434679514]: "Reflector ListAndWatch" name:object-"openshift-console"/"console-dockercfg-f62pw" (11-Dec-2025 15:54:01.512) (total time: 11829ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1434679514]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/secrets?fieldSelector=metadata.name%3Dconsole-dockercfg-f62pw&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34610->38.102.83.147:6443: read: connection reset by peer 11829ms (15:54:13.342) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1434679514]: [11.829900278s] [11.829900278s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:13.342796 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-console\"/\"console-dockercfg-f62pw\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/secrets?fieldSelector=metadata.name%3Dconsole-dockercfg-f62pw&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34610->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:13.362655 5050 reflector.go:561] object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/secrets?fieldSelector=metadata.name%3Dconsole-operator-dockercfg-4xjcr&resourceVersion=109846": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34600->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:13.362763 5050 trace.go:236] Trace[992827829]: "Reflector ListAndWatch" name:object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" (11-Dec-2025 15:54:01.508) (total time: 11854ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[992827829]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/secrets?fieldSelector=metadata.name%3Dconsole-operator-dockercfg-4xjcr&resourceVersion=109846": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34600->38.102.83.147:6443: read: connection reset by peer 11854ms (15:54:13.362) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[992827829]: [11.85460387s] [11.85460387s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:13.362787 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-console-operator\"/\"console-operator-dockercfg-4xjcr\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/secrets?fieldSelector=metadata.name%3Dconsole-operator-dockercfg-4xjcr&resourceVersion=109846\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34600->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:13.382508 5050 reflector.go:561] object-"openstack"/"octavia-config-data": failed to list *v1.Secret: Get 
"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Doctavia-config-data&resourceVersion=109671": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34116->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:13.382617 5050 trace.go:236] Trace[293033601]: "Reflector ListAndWatch" name:object-"openstack"/"octavia-config-data" (11-Dec-2025 15:54:01.396) (total time: 11985ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[293033601]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Doctavia-config-data&resourceVersion=109671": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34116->38.102.83.147:6443: read: connection reset by peer 11985ms (15:54:13.382) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[293033601]: [11.985685202s] [11.985685202s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:13.382641 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"octavia-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Doctavia-config-data&resourceVersion=109671\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34116->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:13.392206 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-65swh" podUID="58a60a6f-fcf5-4aa0-b6e4-97d7df2b2bd2" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.84:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:13.392258 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-65swh" podUID="58a60a6f-fcf5-4aa0-b6e4-97d7df2b2bd2" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.84:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:13.402676 5050 reflector.go:561] object-"openshift-operators"/"perses-operator-dockercfg-xflrf": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operators/secrets?fieldSelector=metadata.name%3Dperses-operator-dockercfg-xflrf&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34160->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:13.402799 5050 trace.go:236] Trace[1878622697]: "Reflector ListAndWatch" name:object-"openshift-operators"/"perses-operator-dockercfg-xflrf" (11-Dec-2025 15:54:01.404) (total time: 11998ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1878622697]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operators/secrets?fieldSelector=metadata.name%3Dperses-operator-dockercfg-xflrf&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous 
attempt: read tcp 38.102.83.147:34160->38.102.83.147:6443: read: connection reset by peer 11998ms (15:54:13.402) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1878622697]: [11.998654219s] [11.998654219s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:13.402827 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-operators\"/\"perses-operator-dockercfg-xflrf\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operators/secrets?fieldSelector=metadata.name%3Dperses-operator-dockercfg-xflrf&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34160->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:13.422363 5050 reflector.go:561] object-"openshift-network-console"/"networking-console-plugin": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-console/configmaps?fieldSelector=metadata.name%3Dnetworking-console-plugin&resourceVersion=109522": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34196->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:13.422415 5050 trace.go:236] Trace[1935486746]: "Reflector ListAndWatch" name:object-"openshift-network-console"/"networking-console-plugin" (11-Dec-2025 15:54:01.407) (total time: 12014ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1935486746]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-console/configmaps?fieldSelector=metadata.name%3Dnetworking-console-plugin&resourceVersion=109522": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34196->38.102.83.147:6443: read: connection reset by peer 12014ms (15:54:13.422) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1935486746]: [12.014989067s] [12.014989067s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:13.422432 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-console\"/\"networking-console-plugin\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-console/configmaps?fieldSelector=metadata.name%3Dnetworking-console-plugin&resourceVersion=109522\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34196->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:13.443225 5050 reflector.go:561] object-"openshift-apiserver"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109158": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34260->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:13.443281 5050 trace.go:236] Trace[178689151]: "Reflector ListAndWatch" name:object-"openshift-apiserver"/"openshift-service-ca.crt" (11-Dec-2025 15:54:01.424) (total time: 12018ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[178689151]: 
---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109158": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34260->38.102.83.147:6443: read: connection reset by peer 12018ms (15:54:13.443) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[178689151]: [12.018792349s] [12.018792349s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:13.443350 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109158\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34260->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:13.463503 5050 reflector.go:561] object-"openshift-apiserver"/"etcd-client": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/secrets?fieldSelector=metadata.name%3Detcd-client&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34348->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:13.463585 5050 trace.go:236] Trace[596395552]: "Reflector ListAndWatch" name:object-"openshift-apiserver"/"etcd-client" (11-Dec-2025 15:54:01.451) (total time: 12012ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[596395552]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/secrets?fieldSelector=metadata.name%3Detcd-client&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34348->38.102.83.147:6443: read: connection reset by peer 12012ms (15:54:13.463) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[596395552]: [12.012367246s] [12.012367246s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:13.463606 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"etcd-client\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/secrets?fieldSelector=metadata.name%3Detcd-client&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34348->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:13.482839 5050 reflector.go:561] object-"openshift-controller-manager-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109559": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34384->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:13.482882 5050 trace.go:236] Trace[75397394]: "Reflector ListAndWatch" 
name:object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" (11-Dec-2025 15:54:01.457) (total time: 12025ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[75397394]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109559": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34384->38.102.83.147:6443: read: connection reset by peer 12025ms (15:54:13.482) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[75397394]: [12.025230971s] [12.025230971s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:13.482897 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109559\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34384->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:13.503066 5050 reflector.go:561] object-"openstack"/"manila-scheduler-config-data": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dmanila-scheduler-config-data&resourceVersion=109803": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34414->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:13.503133 5050 trace.go:236] Trace[1946233461]: "Reflector ListAndWatch" name:object-"openstack"/"manila-scheduler-config-data" (11-Dec-2025 15:54:01.472) (total time: 12030ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1946233461]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dmanila-scheduler-config-data&resourceVersion=109803": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34414->38.102.83.147:6443: read: connection reset by peer 12030ms (15:54:13.503) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1946233461]: [12.030685987s] [12.030685987s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:13.503153 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"manila-scheduler-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dmanila-scheduler-config-data&resourceVersion=109803\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34414->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:13.523241 5050 reflector.go:561] object-"openstack"/"horizon-horizon-dockercfg-d7bqh": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dhorizon-horizon-dockercfg-d7bqh&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused - error from a 
previous attempt: read tcp 38.102.83.147:34458->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:13.523298 5050 trace.go:236] Trace[1876905488]: "Reflector ListAndWatch" name:object-"openstack"/"horizon-horizon-dockercfg-d7bqh" (11-Dec-2025 15:54:01.487) (total time: 12036ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1876905488]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dhorizon-horizon-dockercfg-d7bqh&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34458->38.102.83.147:6443: read: connection reset by peer 12036ms (15:54:13.523) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1876905488]: [12.036093712s] [12.036093712s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:13.523317 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"horizon-horizon-dockercfg-d7bqh\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dhorizon-horizon-dockercfg-d7bqh&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34458->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:13.542545 5050 reflector.go:561] object-"openstack"/"openstack-aee-default-env": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dopenstack-aee-default-env&resourceVersion=109685": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34486->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:13.542607 5050 trace.go:236] Trace[1390136627]: "Reflector ListAndWatch" name:object-"openstack"/"openstack-aee-default-env" (11-Dec-2025 15:54:01.490) (total time: 12052ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1390136627]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dopenstack-aee-default-env&resourceVersion=109685": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34486->38.102.83.147:6443: read: connection reset by peer 12052ms (15:54:13.542) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1390136627]: [12.05205116s] [12.05205116s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:13.542626 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"openstack-aee-default-env\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dopenstack-aee-default-env&resourceVersion=109685\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34486->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:13.564696 5050 reflector.go:561] object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn": failed to list *v1.Secret: Get 
"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-etcd-operator/secrets?fieldSelector=metadata.name%3Detcd-operator-dockercfg-r9srn&resourceVersion=109887": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34546->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:13.564795 5050 trace.go:236] Trace[717606518]: "Reflector ListAndWatch" name:object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" (11-Dec-2025 15:54:01.499) (total time: 12065ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[717606518]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-etcd-operator/secrets?fieldSelector=metadata.name%3Detcd-operator-dockercfg-r9srn&resourceVersion=109887": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34546->38.102.83.147:6443: read: connection reset by peer 12064ms (15:54:13.564) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[717606518]: [12.065020087s] [12.065020087s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:13.564814 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-etcd-operator\"/\"etcd-operator-dockercfg-r9srn\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-etcd-operator/secrets?fieldSelector=metadata.name%3Detcd-operator-dockercfg-r9srn&resourceVersion=109887\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34546->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:13.584416 5050 reflector.go:561] object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-6tg85": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dplacement-operator-controller-manager-dockercfg-6tg85&resourceVersion=109087": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34620->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:13.584483 5050 trace.go:236] Trace[1615564097]: "Reflector ListAndWatch" name:object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-6tg85" (11-Dec-2025 15:54:01.513) (total time: 12070ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1615564097]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dplacement-operator-controller-manager-dockercfg-6tg85&resourceVersion=109087": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34620->38.102.83.147:6443: read: connection reset by peer 12070ms (15:54:13.584) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1615564097]: [12.070499114s] [12.070499114s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:13.584541 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"placement-operator-controller-manager-dockercfg-6tg85\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dplacement-operator-controller-manager-dockercfg-6tg85&resourceVersion=109087\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34620->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:13.603205 5050 reflector.go:561] object-"openshift-authentication-operator"/"serving-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication-operator/secrets?fieldSelector=metadata.name%3Dserving-cert&resourceVersion=109671": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35072->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:13.603282 5050 trace.go:236] Trace[1701193308]: "Reflector ListAndWatch" name:object-"openshift-authentication-operator"/"serving-cert" (11-Dec-2025 15:54:01.617) (total time: 11986ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1701193308]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication-operator/secrets?fieldSelector=metadata.name%3Dserving-cert&resourceVersion=109671": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35072->38.102.83.147:6443: read: connection reset by peer 11986ms (15:54:13.603) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1701193308]: [11.986116413s] [11.986116413s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:13.603301 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication-operator\"/\"serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication-operator/secrets?fieldSelector=metadata.name%3Dserving-cert&resourceVersion=109671\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35072->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:13.623135 5050 reflector.go:561] object-"openstack"/"placement-scripts": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dplacement-scripts&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33948->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:13.623177 5050 trace.go:236] Trace[805672825]: "Reflector ListAndWatch" name:object-"openstack"/"placement-scripts" (11-Dec-2025 15:54:01.381) (total time: 12241ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[805672825]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dplacement-scripts&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33948->38.102.83.147:6443: read: connection reset by peer 12241ms (15:54:13.623) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[805672825]: [12.241912166s] [12.241912166s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:13.623190 5050 
reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"placement-scripts\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dplacement-scripts&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33948->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:13.643552 5050 reflector.go:561] object-"cert-manager-operator"/"cert-manager-operator-controller-manager-dockercfg-p54cz": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/cert-manager-operator/secrets?fieldSelector=metadata.name%3Dcert-manager-operator-controller-manager-dockercfg-p54cz&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33974->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:13.643616 5050 trace.go:236] Trace[1418355092]: "Reflector ListAndWatch" name:object-"cert-manager-operator"/"cert-manager-operator-controller-manager-dockercfg-p54cz" (11-Dec-2025 15:54:01.383) (total time: 12260ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1418355092]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/cert-manager-operator/secrets?fieldSelector=metadata.name%3Dcert-manager-operator-controller-manager-dockercfg-p54cz&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33974->38.102.83.147:6443: read: connection reset by peer 12260ms (15:54:13.643) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1418355092]: [12.260135644s] [12.260135644s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:13.643637 5050 reflector.go:158] "Unhandled Error" err="object-\"cert-manager-operator\"/\"cert-manager-operator-controller-manager-dockercfg-p54cz\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/cert-manager-operator/secrets?fieldSelector=metadata.name%3Dcert-manager-operator-controller-manager-dockercfg-p54cz&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33974->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:13.653473 5050 patch_prober.go:28] interesting pod/dns-default-25p7l container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.217.0.27:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:13.653526 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-pjhfc" podUID="3c9e825c-0aee-42b9-a7a5-3191486f301d" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.85:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:13.653540 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-25p7l" podUID="66dfe3f5-9e7a-4e1c-b03a-b6f01dab6ef4" containerName="dns" probeResult="failure" 
output="Get \"http://10.217.0.27:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:13.653592 5050 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-pjhfc" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:13.654442 5050 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="manager" containerStatusID={"Type":"cri-o","ID":"ce2bc8fd07e25246673f9423055ae960e439b06b2f99f0e8154eb011bf21074d"} pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-pjhfc" containerMessage="Container manager failed liveness probe, will be restarted" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:13.654489 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-pjhfc" podUID="3c9e825c-0aee-42b9-a7a5-3191486f301d" containerName="manager" containerID="cri-o://ce2bc8fd07e25246673f9423055ae960e439b06b2f99f0e8154eb011bf21074d" gracePeriod=10 Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:13.663461 5050 reflector.go:561] object-"openshift-route-controller-manager"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109559": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34012->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:13.663534 5050 trace.go:236] Trace[1308369156]: "Reflector ListAndWatch" name:object-"openshift-route-controller-manager"/"kube-root-ca.crt" (11-Dec-2025 15:54:01.387) (total time: 12275ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1308369156]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109559": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34012->38.102.83.147:6443: read: connection reset by peer 12275ms (15:54:13.663) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1308369156]: [12.275804984s] [12.275804984s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:13.663577 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-route-controller-manager\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109559\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34012->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:13.683140 5050 reflector.go:561] object-"openshift-network-operator"/"iptables-alerter-script": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/configmaps?fieldSelector=metadata.name%3Diptables-alerter-script&resourceVersion=109158": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 
38.102.83.147:34050->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:13.683196 5050 trace.go:236] Trace[1391567346]: "Reflector ListAndWatch" name:object-"openshift-network-operator"/"iptables-alerter-script" (11-Dec-2025 15:54:01.392) (total time: 12290ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1391567346]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/configmaps?fieldSelector=metadata.name%3Diptables-alerter-script&resourceVersion=109158": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34050->38.102.83.147:6443: read: connection reset by peer 12290ms (15:54:13.683) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1391567346]: [12.290633471s] [12.290633471s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:13.683219 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-operator\"/\"iptables-alerter-script\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/configmaps?fieldSelector=metadata.name%3Diptables-alerter-script&resourceVersion=109158\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34050->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:13.695390 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/octavia-operator-controller-manager-998648c74-lvbdb" podUID="fa0985f7-7d87-41b3-9916-f22375a0489c" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.87:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:13.703456 5050 reflector.go:561] object-"openstack"/"heat-engine-config-data": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dheat-engine-config-data&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34078->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:13.703519 5050 trace.go:236] Trace[738923504]: "Reflector ListAndWatch" name:object-"openstack"/"heat-engine-config-data" (11-Dec-2025 15:54:01.394) (total time: 12308ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[738923504]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dheat-engine-config-data&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34078->38.102.83.147:6443: read: connection reset by peer 12308ms (15:54:13.703) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[738923504]: [12.308822149s] [12.308822149s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:13.703532 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"heat-engine-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dheat-engine-config-data&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused - error 
from a previous attempt: read tcp 38.102.83.147:34078->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:13.722765 5050 reflector.go:561] object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-mc6vn": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Ddesignate-operator-controller-manager-dockercfg-mc6vn&resourceVersion=109087": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34356->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:13.722836 5050 trace.go:236] Trace[621275267]: "Reflector ListAndWatch" name:object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-mc6vn" (11-Dec-2025 15:54:01.455) (total time: 12267ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[621275267]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Ddesignate-operator-controller-manager-dockercfg-mc6vn&resourceVersion=109087": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34356->38.102.83.147:6443: read: connection reset by peer 12267ms (15:54:13.722) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[621275267]: [12.267346177s] [12.267346177s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:13.722859 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"designate-operator-controller-manager-dockercfg-mc6vn\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Ddesignate-operator-controller-manager-dockercfg-mc6vn&resourceVersion=109087\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34356->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:13.737167 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-pjhfc" podUID="3c9e825c-0aee-42b9-a7a5-3191486f301d" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.85:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:13.737224 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/octavia-operator-controller-manager-998648c74-lvbdb" podUID="fa0985f7-7d87-41b3-9916-f22375a0489c" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.87:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:13.742319 5050 reflector.go:561] object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler-operator/secrets?fieldSelector=metadata.name%3Dopenshift-kube-scheduler-operator-dockercfg-qt55r&resourceVersion=109846": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34402->38.102.83.147:6443: 
read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:13.742376 5050 trace.go:236] Trace[995734693]: "Reflector ListAndWatch" name:object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" (11-Dec-2025 15:54:01.466) (total time: 12275ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[995734693]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler-operator/secrets?fieldSelector=metadata.name%3Dopenshift-kube-scheduler-operator-dockercfg-qt55r&resourceVersion=109846": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34402->38.102.83.147:6443: read: connection reset by peer 12275ms (15:54:13.742) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[995734693]: [12.275581168s] [12.275581168s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:13.742392 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-dockercfg-qt55r\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler-operator/secrets?fieldSelector=metadata.name%3Dopenshift-kube-scheduler-operator-dockercfg-qt55r&resourceVersion=109846\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34402->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:13.763403 5050 reflector.go:561] object-"openshift-route-controller-manager"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109642": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34482->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:13.763454 5050 trace.go:236] Trace[820326490]: "Reflector ListAndWatch" name:object-"openshift-route-controller-manager"/"openshift-service-ca.crt" (11-Dec-2025 15:54:01.489) (total time: 12273ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[820326490]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109642": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34482->38.102.83.147:6443: read: connection reset by peer 12273ms (15:54:13.763) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[820326490]: [12.273781799s] [12.273781799s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:13.763474 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-route-controller-manager\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109642\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34482->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc 
kubenswrapper[5050]: I1211 15:54:13.779519 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-qdrgd" podUID="4b4e8aa9-a0f6-4bd2-92d2-c524aedf6389" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.74:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:13.782431 5050 reflector.go:561] object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-machine-approver/secrets?fieldSelector=metadata.name%3Dmachine-approver-sa-dockercfg-nl2j4&resourceVersion=109586": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34494->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:13.782488 5050 trace.go:236] Trace[1170291128]: "Reflector ListAndWatch" name:object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" (11-Dec-2025 15:54:01.490) (total time: 12291ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1170291128]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-machine-approver/secrets?fieldSelector=metadata.name%3Dmachine-approver-sa-dockercfg-nl2j4&resourceVersion=109586": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34494->38.102.83.147:6443: read: connection reset by peer 12291ms (15:54:13.782) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1170291128]: [12.291913365s] [12.291913365s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:13.782509 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-machine-approver\"/\"machine-approver-sa-dockercfg-nl2j4\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-machine-approver/secrets?fieldSelector=metadata.name%3Dmachine-approver-sa-dockercfg-nl2j4&resourceVersion=109586\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34494->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:13.802756 5050 reflector.go:561] object-"openshift-console-operator"/"serving-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/secrets?fieldSelector=metadata.name%3Dserving-cert&resourceVersion=109846": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34608->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:13.802812 5050 trace.go:236] Trace[1193538326]: "Reflector ListAndWatch" name:object-"openshift-console-operator"/"serving-cert" (11-Dec-2025 15:54:01.511) (total time: 12291ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1193538326]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/secrets?fieldSelector=metadata.name%3Dserving-cert&resourceVersion=109846": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34608->38.102.83.147:6443: read: connection reset 
by peer 12290ms (15:54:13.802) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1193538326]: [12.291033452s] [12.291033452s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:13.802834 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-console-operator\"/\"serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/secrets?fieldSelector=metadata.name%3Dserving-cert&resourceVersion=109846\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34608->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:13.822707 5050 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&resourceVersion=109887": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34686->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:13.822771 5050 trace.go:236] Trace[1734388796]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (11-Dec-2025 15:54:01.525) (total time: 12297ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1734388796]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&resourceVersion=109887": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34686->38.102.83.147:6443: read: connection reset by peer 12296ms (15:54:13.822) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1734388796]: [12.297000641s] [12.297000641s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:13.822797 5050 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&resourceVersion=109887\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34686->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:13.843053 5050 reflector.go:561] object-"openstack"/"telemetry-ceilometer-dockercfg-nl629": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dtelemetry-ceilometer-dockercfg-nl629&resourceVersion=109846": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34790->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:13.843116 5050 trace.go:236] Trace[1950445655]: "Reflector ListAndWatch" name:object-"openstack"/"telemetry-ceilometer-dockercfg-nl629" (11-Dec-2025 15:54:01.545) (total time: 12297ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1950445655]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dtelemetry-ceilometer-dockercfg-nl629&resourceVersion=109846": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34790->38.102.83.147:6443: read: connection reset by peer 12297ms (15:54:13.843) 
Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1950445655]: [12.297480054s] [12.297480054s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:13.843138 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"telemetry-ceilometer-dockercfg-nl629\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dtelemetry-ceilometer-dockercfg-nl629&resourceVersion=109846\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34790->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:13.863159 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-n7crp" podUID="85683dfb-37fb-4301-8c7a-fbb7453b303d" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.75:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:13.863293 5050 reflector.go:561] object-"openshift-image-registry"/"registry-dockercfg-kzzsd": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/secrets?fieldSelector=metadata.name%3Dregistry-dockercfg-kzzsd&resourceVersion=109887": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34834->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:13.863368 5050 trace.go:236] Trace[1404947014]: "Reflector ListAndWatch" name:object-"openshift-image-registry"/"registry-dockercfg-kzzsd" (11-Dec-2025 15:54:01.559) (total time: 12304ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1404947014]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/secrets?fieldSelector=metadata.name%3Dregistry-dockercfg-kzzsd&resourceVersion=109887": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34834->38.102.83.147:6443: read: connection reset by peer 12304ms (15:54:13.863) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1404947014]: [12.304080992s] [12.304080992s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:13.863387 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-image-registry\"/\"registry-dockercfg-kzzsd\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/secrets?fieldSelector=metadata.name%3Dregistry-dockercfg-kzzsd&resourceVersion=109887\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34834->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:13.882495 5050 reflector.go:561] object-"openshift-network-operator"/"metrics-tls": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/secrets?fieldSelector=metadata.name%3Dmetrics-tls&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34880->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:13.882547 
5050 trace.go:236] Trace[2031836163]: "Reflector ListAndWatch" name:object-"openshift-network-operator"/"metrics-tls" (11-Dec-2025 15:54:01.566) (total time: 12316ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[2031836163]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/secrets?fieldSelector=metadata.name%3Dmetrics-tls&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34880->38.102.83.147:6443: read: connection reset by peer 12316ms (15:54:13.882) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[2031836163]: [12.316399812s] [12.316399812s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:13.882566 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-operator\"/\"metrics-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/secrets?fieldSelector=metadata.name%3Dmetrics-tls&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34880->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:13.903025 5050 reflector.go:561] object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-glgrh": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dheat-operator-controller-manager-dockercfg-glgrh&resourceVersion=109846": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34984->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:13.903120 5050 trace.go:236] Trace[920621622]: "Reflector ListAndWatch" name:object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-glgrh" (11-Dec-2025 15:54:01.593) (total time: 12309ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[920621622]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dheat-operator-controller-manager-dockercfg-glgrh&resourceVersion=109846": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34984->38.102.83.147:6443: read: connection reset by peer 12309ms (15:54:13.903) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[920621622]: [12.309465415s] [12.309465415s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:13.903141 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"heat-operator-controller-manager-dockercfg-glgrh\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dheat-operator-controller-manager-dockercfg-glgrh&resourceVersion=109846\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34984->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:13.922372 5050 request.go:700] Waited for 2.034228956s, retries: 1, retry-after: 1s - retry-reason: due to retryable error, error: Get 
"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-samples-operator/secrets?fieldSelector=metadata.name%3Dcluster-samples-operator-dockercfg-xpp9w&resourceVersion=109512": read tcp 38.102.83.147:35066->38.102.83.147:6443: read: connection reset by peer - request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-samples-operator/secrets?fieldSelector=metadata.name%3Dcluster-samples-operator-dockercfg-xpp9w&resourceVersion=109512 Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:13.922898 5050 reflector.go:561] object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-samples-operator/secrets?fieldSelector=metadata.name%3Dcluster-samples-operator-dockercfg-xpp9w&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35066->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:13.922954 5050 trace.go:236] Trace[1845782631]: "Reflector ListAndWatch" name:object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" (11-Dec-2025 15:54:01.611) (total time: 12311ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1845782631]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-samples-operator/secrets?fieldSelector=metadata.name%3Dcluster-samples-operator-dockercfg-xpp9w&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35066->38.102.83.147:6443: read: connection reset by peer 12311ms (15:54:13.922) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1845782631]: [12.311428958s] [12.311428958s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:13.922974 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-samples-operator\"/\"cluster-samples-operator-dockercfg-xpp9w\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-samples-operator/secrets?fieldSelector=metadata.name%3Dcluster-samples-operator-dockercfg-xpp9w&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35066->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:13.943164 5050 reflector.go:561] object-"openstack"/"ceph-conf-files": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dceph-conf-files&resourceVersion=109743": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35142->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:13.943241 5050 trace.go:236] Trace[240439214]: "Reflector ListAndWatch" name:object-"openstack"/"ceph-conf-files" (11-Dec-2025 15:54:01.629) (total time: 12313ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[240439214]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dceph-conf-files&resourceVersion=109743": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 
38.102.83.147:35142->38.102.83.147:6443: read: connection reset by peer 12313ms (15:54:13.943) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[240439214]: [12.313691639s] [12.313691639s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:13.943287 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"ceph-conf-files\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dceph-conf-files&resourceVersion=109743\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35142->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:13.945318 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/test-operator-controller-manager-5854674fcc-qqb7f" podUID="12778398-2baa-44cb-9fd1-f2034870e9fc" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.91:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:13.962842 5050 reflector.go:561] object-"openshift-cluster-version"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-version/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109685": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35082->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:13.962916 5050 trace.go:236] Trace[227201424]: "Reflector ListAndWatch" name:object-"openshift-cluster-version"/"kube-root-ca.crt" (11-Dec-2025 15:54:01.619) (total time: 12343ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[227201424]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-version/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109685": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35082->38.102.83.147:6443: read: connection reset by peer 12343ms (15:54:13.962) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[227201424]: [12.343557729s] [12.343557729s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:13.962940 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-version\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-version/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109685\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35082->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:13.982597 5050 reflector.go:561] object-"openshift-network-diagnostics"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109067": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35228->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 
crc kubenswrapper[5050]: I1211 15:54:13.982671 5050 trace.go:236] Trace[1118638753]: "Reflector ListAndWatch" name:object-"openshift-network-diagnostics"/"kube-root-ca.crt" (11-Dec-2025 15:54:01.651) (total time: 12330ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1118638753]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109067": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35228->38.102.83.147:6443: read: connection reset by peer 12330ms (15:54:13.982) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1118638753]: [12.330992062s] [12.330992062s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:13.982691 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109067\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35228->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:14.002590 5050 reflector.go:561] object-"openstack"/"heat-config-data": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dheat-config-data&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35132->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:14.002648 5050 trace.go:236] Trace[632999103]: "Reflector ListAndWatch" name:object-"openstack"/"heat-config-data" (11-Dec-2025 15:54:01.626) (total time: 12376ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[632999103]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dheat-config-data&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35132->38.102.83.147:6443: read: connection reset by peer 12376ms (15:54:14.002) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[632999103]: [12.376251794s] [12.376251794s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:14.002667 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"heat-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dheat-config-data&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35132->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:14.022494 5050 reflector.go:561] object-"openshift-authentication"/"v4-0-config-system-service-ca": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/configmaps?fieldSelector=metadata.name%3Dv4-0-config-system-service-ca&resourceVersion=109067": dial tcp 38.102.83.147:6443: connect: connection refused - error 
from a previous attempt: read tcp 38.102.83.147:34698->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:14.022541 5050 trace.go:236] Trace[346947895]: "Reflector ListAndWatch" name:object-"openshift-authentication"/"v4-0-config-system-service-ca" (11-Dec-2025 15:54:01.530) (total time: 12492ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[346947895]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/configmaps?fieldSelector=metadata.name%3Dv4-0-config-system-service-ca&resourceVersion=109067": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34698->38.102.83.147:6443: read: connection reset by peer 12492ms (15:54:14.022) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[346947895]: [12.492042456s] [12.492042456s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:14.022557 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"v4-0-config-system-service-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/configmaps?fieldSelector=metadata.name%3Dv4-0-config-system-service-ca&resourceVersion=109067\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34698->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:14.028164 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-ttg8w" podUID="b885fa10-3ed3-41fd-94ae-2b7442519450" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.89:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:14.028250 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/test-operator-controller-manager-5854674fcc-qqb7f" podUID="12778398-2baa-44cb-9fd1-f2034870e9fc" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.91:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:14.028306 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/watcher-operator-controller-manager-75944c9b7-grhdp" podUID="712f888b-7c45-4c1f-95d8-ccc464b7c15f" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.95:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:14.028217 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/watcher-operator-controller-manager-75944c9b7-grhdp" podUID="712f888b-7c45-4c1f-95d8-ccc464b7c15f" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.95:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:14.043310 5050 reflector.go:561] object-"openshift-controller-manager"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109888": dial tcp 
38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34754->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:14.043395 5050 trace.go:236] Trace[1078790413]: "Reflector ListAndWatch" name:object-"openshift-controller-manager"/"openshift-service-ca.crt" (11-Dec-2025 15:54:01.541) (total time: 12502ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1078790413]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109888": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34754->38.102.83.147:6443: read: connection reset by peer 12502ms (15:54:14.043) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1078790413]: [12.502343553s] [12.502343553s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:14.043415 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109888\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34754->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:14.062336 5050 reflector.go:561] object-"openstack"/"ovn-data-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dovn-data-cert&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34808->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:14.062387 5050 trace.go:236] Trace[2132050836]: "Reflector ListAndWatch" name:object-"openstack"/"ovn-data-cert" (11-Dec-2025 15:54:01.551) (total time: 12510ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[2132050836]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dovn-data-cert&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34808->38.102.83.147:6443: read: connection reset by peer 12510ms (15:54:14.062) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[2132050836]: [12.510686757s] [12.510686757s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:14.062405 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"ovn-data-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dovn-data-cert&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34808->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:14.082477 5050 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get 
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?resourceVersion=109666": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35824->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:14.082552 5050 trace.go:236] Trace[1409694501]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (11-Dec-2025 15:54:01.773) (total time: 12308ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1409694501]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?resourceVersion=109666": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35824->38.102.83.147:6443: read: connection reset by peer 12308ms (15:54:14.082) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1409694501]: [12.308628633s] [12.308628633s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:14.082570 5050 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?resourceVersion=109666\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35824->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:14.102388 5050 reflector.go:561] object-"openstack"/"default-dockercfg-tmtdn": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Ddefault-dockercfg-tmtdn&resourceVersion=109623": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34710->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:14.102455 5050 trace.go:236] Trace[1977207454]: "Reflector ListAndWatch" name:object-"openstack"/"default-dockercfg-tmtdn" (11-Dec-2025 15:54:01.532) (total time: 12569ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1977207454]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Ddefault-dockercfg-tmtdn&resourceVersion=109623": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34710->38.102.83.147:6443: read: connection reset by peer 12569ms (15:54:14.102) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1977207454]: [12.569849671s] [12.569849671s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:14.102476 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"default-dockercfg-tmtdn\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Ddefault-dockercfg-tmtdn&resourceVersion=109623\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34710->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:14.122306 5050 reflector.go:561] object-"openstack"/"prometheus-metric-storage-tls-assets-0": failed to list *v1.Secret: Get 
"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dprometheus-metric-storage-tls-assets-0&resourceVersion=109846": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34792->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:14.122351 5050 trace.go:236] Trace[1290462183]: "Reflector ListAndWatch" name:object-"openstack"/"prometheus-metric-storage-tls-assets-0" (11-Dec-2025 15:54:01.546) (total time: 12575ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1290462183]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dprometheus-metric-storage-tls-assets-0&resourceVersion=109846": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34792->38.102.83.147:6443: read: connection reset by peer 12575ms (15:54:14.122) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1290462183]: [12.575930244s] [12.575930244s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:14.122390 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"prometheus-metric-storage-tls-assets-0\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dprometheus-metric-storage-tls-assets-0&resourceVersion=109846\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34792->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:14.137357 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="hostpath-provisioner/csi-hostpathplugin-mbbdj" podUID="2b344bba-8e1d-415b-9b5f-e21d3144fe42" containerName="hostpath-provisioner" probeResult="failure" output="Get \"http://10.217.0.31:9898/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:14.142284 5050 reflector.go:561] object-"openshift-etcd-operator"/"etcd-ca-bundle": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-etcd-operator/configmaps?fieldSelector=metadata.name%3Detcd-ca-bundle&resourceVersion=109762": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34330->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:14.142332 5050 trace.go:236] Trace[688430922]: "Reflector ListAndWatch" name:object-"openshift-etcd-operator"/"etcd-ca-bundle" (11-Dec-2025 15:54:01.448) (total time: 12693ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[688430922]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-etcd-operator/configmaps?fieldSelector=metadata.name%3Detcd-ca-bundle&resourceVersion=109762": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34330->38.102.83.147:6443: read: connection reset by peer 12693ms (15:54:14.142) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[688430922]: [12.693523985s] [12.693523985s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:14.142348 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-etcd-operator\"/\"etcd-ca-bundle\": Failed to 
watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-etcd-operator/configmaps?fieldSelector=metadata.name%3Detcd-ca-bundle&resourceVersion=109762\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34330->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:14.162326 5050 reflector.go:561] object-"metallb-system"/"frr-startup": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/configmaps?fieldSelector=metadata.name%3Dfrr-startup&resourceVersion=109158": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34768->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:14.162386 5050 trace.go:236] Trace[63918704]: "Reflector ListAndWatch" name:object-"metallb-system"/"frr-startup" (11-Dec-2025 15:54:01.544) (total time: 12618ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[63918704]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/configmaps?fieldSelector=metadata.name%3Dfrr-startup&resourceVersion=109158": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34768->38.102.83.147:6443: read: connection reset by peer 12618ms (15:54:14.162) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[63918704]: [12.618074883s] [12.618074883s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:14.162401 5050 reflector.go:158] "Unhandled Error" err="object-\"metallb-system\"/\"frr-startup\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/configmaps?fieldSelector=metadata.name%3Dfrr-startup&resourceVersion=109158\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34768->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:14.182700 5050 reflector.go:561] object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-storage-version-migrator-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109522": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34940->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:14.182752 5050 trace.go:236] Trace[474462962]: "Reflector ListAndWatch" name:object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" (11-Dec-2025 15:54:01.581) (total time: 12601ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[474462962]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-storage-version-migrator-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109522": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34940->38.102.83.147:6443: read: connection reset by peer 12601ms (15:54:14.182) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[474462962]: [12.601124539s] [12.601124539s] END 
Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:14.182771 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-storage-version-migrator-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109522\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34940->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:14.202699 5050 reflector.go:561] object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-jbnlr": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dopenstack-baremetal-operator-controller-manager-dockercfg-jbnlr&resourceVersion=109743": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35118->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:14.202825 5050 trace.go:236] Trace[894724601]: "Reflector ListAndWatch" name:object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-jbnlr" (11-Dec-2025 15:54:01.621) (total time: 12581ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[894724601]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dopenstack-baremetal-operator-controller-manager-dockercfg-jbnlr&resourceVersion=109743": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35118->38.102.83.147:6443: read: connection reset by peer 12581ms (15:54:14.202) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[894724601]: [12.581426672s] [12.581426672s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:14.202850 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"openstack-baremetal-operator-controller-manager-dockercfg-jbnlr\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dopenstack-baremetal-operator-controller-manager-dockercfg-jbnlr&resourceVersion=109743\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35118->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:14.222476 5050 reflector.go:561] object-"openshift-cluster-samples-operator"/"samples-operator-tls": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-samples-operator/secrets?fieldSelector=metadata.name%3Dsamples-operator-tls&resourceVersion=109671": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35190->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:14.222629 5050 trace.go:236] Trace[149794366]: "Reflector ListAndWatch" name:object-"openshift-cluster-samples-operator"/"samples-operator-tls" (11-Dec-2025 15:54:01.642) (total time: 12579ms): Dec 11 15:54:41 crc kubenswrapper[5050]: 
Trace[149794366]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-samples-operator/secrets?fieldSelector=metadata.name%3Dsamples-operator-tls&resourceVersion=109671": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35190->38.102.83.147:6443: read: connection reset by peer 12579ms (15:54:14.222) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[149794366]: [12.579716306s] [12.579716306s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:14.222650 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-samples-operator\"/\"samples-operator-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-samples-operator/secrets?fieldSelector=metadata.name%3Dsamples-operator-tls&resourceVersion=109671\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35190->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:14.242633 5050 reflector.go:561] object-"openshift-kube-storage-version-migrator-operator"/"serving-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-storage-version-migrator-operator/secrets?fieldSelector=metadata.name%3Dserving-cert&resourceVersion=109846": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35232->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:14.242688 5050 trace.go:236] Trace[226450889]: "Reflector ListAndWatch" name:object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" (11-Dec-2025 15:54:01.652) (total time: 12589ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[226450889]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-storage-version-migrator-operator/secrets?fieldSelector=metadata.name%3Dserving-cert&resourceVersion=109846": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35232->38.102.83.147:6443: read: connection reset by peer 12589ms (15:54:14.242) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[226450889]: [12.589918579s] [12.589918579s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:14.242706 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-kube-storage-version-migrator-operator\"/\"serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-storage-version-migrator-operator/secrets?fieldSelector=metadata.name%3Dserving-cert&resourceVersion=109846\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35232->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:14.261191 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-xgbp2" podUID="06df6c8c-640d-431b-b216-78345a9054e1" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.82:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 
15:54:14.262449 5050 reflector.go:561] object-"openshift-dns-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-dns-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109158": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35288->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:14.262583 5050 trace.go:236] Trace[485939716]: "Reflector ListAndWatch" name:object-"openshift-dns-operator"/"kube-root-ca.crt" (11-Dec-2025 15:54:01.659) (total time: 12602ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[485939716]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-dns-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109158": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35288->38.102.83.147:6443: read: connection reset by peer 12602ms (15:54:14.262) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[485939716]: [12.602549037s] [12.602549037s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:14.262599 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-dns-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-dns-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109158\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35288->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:14.282363 5050 reflector.go:561] object-"openshift-ingress-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109807": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35298->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:14.282425 5050 trace.go:236] Trace[350854396]: "Reflector ListAndWatch" name:object-"openshift-ingress-operator"/"kube-root-ca.crt" (11-Dec-2025 15:54:01.668) (total time: 12613ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[350854396]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109807": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35298->38.102.83.147:6443: read: connection reset by peer 12613ms (15:54:14.282) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[350854396]: [12.613736436s] [12.613736436s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:14.282449 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-ingress-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109807\": dial tcp 38.102.83.147:6443: connect: connection refused - error 
from a previous attempt: read tcp 38.102.83.147:35298->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:14.303109 5050 reflector.go:561] object-"openstack"/"cinder-scripts": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcinder-scripts&resourceVersion=109803": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35398->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:14.303184 5050 trace.go:236] Trace[163813871]: "Reflector ListAndWatch" name:object-"openstack"/"cinder-scripts" (11-Dec-2025 15:54:01.682) (total time: 12620ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[163813871]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcinder-scripts&resourceVersion=109803": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35398->38.102.83.147:6443: read: connection reset by peer 12620ms (15:54:14.303) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[163813871]: [12.620168969s] [12.620168969s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:14.303207 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"cinder-scripts\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcinder-scripts&resourceVersion=109803\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35398->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:14.322981 5050 reflector.go:561] object-"openshift-route-controller-manager"/"serving-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/secrets?fieldSelector=metadata.name%3Dserving-cert&resourceVersion=109846": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35410->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:14.323060 5050 trace.go:236] Trace[1614904712]: "Reflector ListAndWatch" name:object-"openshift-route-controller-manager"/"serving-cert" (11-Dec-2025 15:54:01.687) (total time: 12635ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1614904712]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/secrets?fieldSelector=metadata.name%3Dserving-cert&resourceVersion=109846": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35410->38.102.83.147:6443: read: connection reset by peer 12635ms (15:54:14.322) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1614904712]: [12.635988963s] [12.635988963s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:14.323081 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-route-controller-manager\"/\"serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/secrets?fieldSelector=metadata.name%3Dserving-cert&resourceVersion=109846\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35410->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:14.343206 5050 reflector.go:561] object-"openstack"/"octavia-api-config-data": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Doctavia-api-config-data&resourceVersion=109803": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35478->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:14.343271 5050 trace.go:236] Trace[1344916016]: "Reflector ListAndWatch" name:object-"openstack"/"octavia-api-config-data" (11-Dec-2025 15:54:01.702) (total time: 12640ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1344916016]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Doctavia-api-config-data&resourceVersion=109803": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35478->38.102.83.147:6443: read: connection reset by peer 12640ms (15:54:14.343) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1344916016]: [12.640269197s] [12.640269197s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:14.343291 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"octavia-api-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Doctavia-api-config-data&resourceVersion=109803\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35478->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:14.362353 5050 reflector.go:561] object-"openshift-apiserver"/"config": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/configmaps?fieldSelector=metadata.name%3Dconfig&resourceVersion=109685": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35502->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:14.362406 5050 trace.go:236] Trace[1294988419]: "Reflector ListAndWatch" name:object-"openshift-apiserver"/"config" (11-Dec-2025 15:54:01.708) (total time: 12653ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1294988419]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/configmaps?fieldSelector=metadata.name%3Dconfig&resourceVersion=109685": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35502->38.102.83.147:6443: read: connection reset by peer 12653ms (15:54:14.362) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1294988419]: [12.653998585s] [12.653998585s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:14.362424 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"config\": Failed to watch 
*v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/configmaps?fieldSelector=metadata.name%3Dconfig&resourceVersion=109685\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35502->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:14.382783 5050 reflector.go:561] object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-api/secrets?fieldSelector=metadata.name%3Dmachine-api-operator-dockercfg-mfbb7&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35690->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:14.382858 5050 trace.go:236] Trace[1519219599]: "Reflector ListAndWatch" name:object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" (11-Dec-2025 15:54:01.743) (total time: 12638ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1519219599]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-api/secrets?fieldSelector=metadata.name%3Dmachine-api-operator-dockercfg-mfbb7&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35690->38.102.83.147:6443: read: connection reset by peer 12638ms (15:54:14.382) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1519219599]: [12.63887035s] [12.63887035s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:14.382875 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"machine-api-operator-dockercfg-mfbb7\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-api/secrets?fieldSelector=metadata.name%3Dmachine-api-operator-dockercfg-mfbb7&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35690->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:14.402864 5050 reflector.go:561] object-"openstack"/"rabbitmq-erlang-cookie": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Drabbitmq-erlang-cookie&resourceVersion=109743": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35784->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:14.402934 5050 trace.go:236] Trace[1598068398]: "Reflector ListAndWatch" name:object-"openstack"/"rabbitmq-erlang-cookie" (11-Dec-2025 15:54:01.764) (total time: 12638ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1598068398]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Drabbitmq-erlang-cookie&resourceVersion=109743": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35784->38.102.83.147:6443: read: connection reset by peer 12638ms (15:54:14.402) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1598068398]: 
[12.638090139s] [12.638090139s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:14.402952 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"rabbitmq-erlang-cookie\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Drabbitmq-erlang-cookie&resourceVersion=109743\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35784->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:14.422536 5050 reflector.go:561] object-"openshift-cluster-machine-approver"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-machine-approver/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109158": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34956->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:14.422598 5050 trace.go:236] Trace[1969068304]: "Reflector ListAndWatch" name:object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" (11-Dec-2025 15:54:01.589) (total time: 12833ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1969068304]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-machine-approver/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109158": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34956->38.102.83.147:6443: read: connection reset by peer 12833ms (15:54:14.422) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1969068304]: [12.833425463s] [12.833425463s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:14.422615 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-machine-approver\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-machine-approver/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109158\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34956->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:14.442518 5050 reflector.go:561] object-"openshift-dns"/"dns-default": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-dns/configmaps?fieldSelector=metadata.name%3Ddns-default&resourceVersion=109158": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35044->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:14.442567 5050 trace.go:236] Trace[1725622399]: "Reflector ListAndWatch" name:object-"openshift-dns"/"dns-default" (11-Dec-2025 15:54:01.604) (total time: 12838ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1725622399]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-dns/configmaps?fieldSelector=metadata.name%3Ddns-default&resourceVersion=109158": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 
38.102.83.147:35044->38.102.83.147:6443: read: connection reset by peer 12838ms (15:54:14.442) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1725622399]: [12.838329604s] [12.838329604s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:14.442580 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-dns\"/\"dns-default\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-dns/configmaps?fieldSelector=metadata.name%3Ddns-default&resourceVersion=109158\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35044->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:14.462592 5050 reflector.go:561] object-"openshift-config-operator"/"config-operator-serving-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/secrets?fieldSelector=metadata.name%3Dconfig-operator-serving-cert&resourceVersion=109846": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35868->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:14.462667 5050 trace.go:236] Trace[241418893]: "Reflector ListAndWatch" name:object-"openshift-config-operator"/"config-operator-serving-cert" (11-Dec-2025 15:54:01.789) (total time: 12673ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[241418893]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/secrets?fieldSelector=metadata.name%3Dconfig-operator-serving-cert&resourceVersion=109846": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35868->38.102.83.147:6443: read: connection reset by peer 12673ms (15:54:14.462) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[241418893]: [12.673515418s] [12.673515418s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:14.462682 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-config-operator\"/\"config-operator-serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/secrets?fieldSelector=metadata.name%3Dconfig-operator-serving-cert&resourceVersion=109846\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35868->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:14.482910 5050 reflector.go:561] object-"openstack"/"octavia-octavia-dockercfg-h4g5n": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Doctavia-octavia-dockercfg-h4g5n&resourceVersion=109803": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35330->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:14.482969 5050 trace.go:236] Trace[1553946752]: "Reflector ListAndWatch" name:object-"openstack"/"octavia-octavia-dockercfg-h4g5n" (11-Dec-2025 15:54:01.673) (total time: 12809ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1553946752]: ---"Objects listed" error:Get 
"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Doctavia-octavia-dockercfg-h4g5n&resourceVersion=109803": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35330->38.102.83.147:6443: read: connection reset by peer 12809ms (15:54:14.482) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1553946752]: [12.809621045s] [12.809621045s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:14.482986 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"octavia-octavia-dockercfg-h4g5n\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Doctavia-octavia-dockercfg-h4g5n&resourceVersion=109803\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35330->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:14.501766 5050 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:14.501821 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:14.502536 5050 reflector.go:561] object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-w847r": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dwatcher-operator-controller-manager-dockercfg-w847r&resourceVersion=109137": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35384->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:14.502578 5050 trace.go:236] Trace[1509788259]: "Reflector ListAndWatch" name:object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-w847r" (11-Dec-2025 15:54:01.681) (total time: 12820ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1509788259]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dwatcher-operator-controller-manager-dockercfg-w847r&resourceVersion=109137": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35384->38.102.83.147:6443: read: connection reset by peer 12820ms (15:54:14.502) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1509788259]: [12.820582998s] [12.820582998s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:14.502594 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"watcher-operator-controller-manager-dockercfg-w847r\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dwatcher-operator-controller-manager-dockercfg-w847r&resourceVersion=109137\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35384->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:14.523799 5050 reflector.go:561] object-"openshift-machine-api"/"machine-api-operator-tls": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-api/secrets?fieldSelector=metadata.name%3Dmachine-api-operator-tls&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35466->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:14.524086 5050 trace.go:236] Trace[1651857922]: "Reflector ListAndWatch" name:object-"openshift-machine-api"/"machine-api-operator-tls" (11-Dec-2025 15:54:01.694) (total time: 12829ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1651857922]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-api/secrets?fieldSelector=metadata.name%3Dmachine-api-operator-tls&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35466->38.102.83.147:6443: read: connection reset by peer 12829ms (15:54:14.523) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1651857922]: [12.829359573s] [12.829359573s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:14.524129 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"machine-api-operator-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-api/secrets?fieldSelector=metadata.name%3Dmachine-api-operator-tls&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35466->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:14.542685 5050 reflector.go:561] object-"openstack"/"dataplane-adoption-secret": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Ddataplane-adoption-secret&resourceVersion=109671": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35526->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:14.542758 5050 trace.go:236] Trace[296674098]: "Reflector ListAndWatch" name:object-"openstack"/"dataplane-adoption-secret" (11-Dec-2025 15:54:01.714) (total time: 12828ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[296674098]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Ddataplane-adoption-secret&resourceVersion=109671": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35526->38.102.83.147:6443: read: connection reset by peer 12828ms (15:54:14.542) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[296674098]: [12.828338546s] [12.828338546s] END Dec 11 15:54:41 crc kubenswrapper[5050]: 
E1211 15:54:14.542797 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"dataplane-adoption-secret\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Ddataplane-adoption-secret&resourceVersion=109671\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35526->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:14.563023 5050 reflector.go:561] object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-x9p8j": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dkeystone-operator-controller-manager-dockercfg-x9p8j&resourceVersion=109586": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35584->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:14.563095 5050 trace.go:236] Trace[1197087636]: "Reflector ListAndWatch" name:object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-x9p8j" (11-Dec-2025 15:54:01.724) (total time: 12838ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1197087636]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dkeystone-operator-controller-manager-dockercfg-x9p8j&resourceVersion=109586": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35584->38.102.83.147:6443: read: connection reset by peer 12838ms (15:54:14.562) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1197087636]: [12.838310453s] [12.838310453s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:14.563116 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"keystone-operator-controller-manager-dockercfg-x9p8j\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dkeystone-operator-controller-manager-dockercfg-x9p8j&resourceVersion=109586\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35584->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:14.582644 5050 reflector.go:561] object-"openshift-apiserver"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109685": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35616->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:14.582716 5050 trace.go:236] Trace[1541184087]: "Reflector ListAndWatch" name:object-"openshift-apiserver"/"kube-root-ca.crt" (11-Dec-2025 15:54:01.743) (total time: 12838ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1541184087]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109685": dial tcp 
38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35616->38.102.83.147:6443: read: connection reset by peer 12838ms (15:54:14.582) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1541184087]: [12.838844797s] [12.838844797s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:14.582731 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109685\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35616->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:14.603388 5050 reflector.go:561] object-"openstack"/"rabbitmq-cell1-server-dockercfg-bvxnm": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Drabbitmq-cell1-server-dockercfg-bvxnm&resourceVersion=109846": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35676->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:14.603470 5050 trace.go:236] Trace[1934741716]: "Reflector ListAndWatch" name:object-"openstack"/"rabbitmq-cell1-server-dockercfg-bvxnm" (11-Dec-2025 15:54:01.748) (total time: 12855ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1934741716]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Drabbitmq-cell1-server-dockercfg-bvxnm&resourceVersion=109846": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35676->38.102.83.147:6443: read: connection reset by peer 12855ms (15:54:14.603) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1934741716]: [12.855242827s] [12.855242827s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:14.603488 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"rabbitmq-cell1-server-dockercfg-bvxnm\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Drabbitmq-cell1-server-dockercfg-bvxnm&resourceVersion=109846\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35676->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:14.622652 5050 reflector.go:561] object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/hostpath-provisioner/secrets?fieldSelector=metadata.name%3Dcsi-hostpath-provisioner-sa-dockercfg-qd74k&resourceVersion=109184": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35864->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:14.622728 5050 trace.go:236] Trace[1548351348]: "Reflector ListAndWatch" name:object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" (11-Dec-2025 15:54:01.789) (total time: 12833ms): Dec 
11 15:54:41 crc kubenswrapper[5050]: Trace[1548351348]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/hostpath-provisioner/secrets?fieldSelector=metadata.name%3Dcsi-hostpath-provisioner-sa-dockercfg-qd74k&resourceVersion=109184": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35864->38.102.83.147:6443: read: connection reset by peer 12833ms (15:54:14.622) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1548351348]: [12.833605507s] [12.833605507s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:14.622747 5050 reflector.go:158] "Unhandled Error" err="object-\"hostpath-provisioner\"/\"csi-hostpath-provisioner-sa-dockercfg-qd74k\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/hostpath-provisioner/secrets?fieldSelector=metadata.name%3Dcsi-hostpath-provisioner-sa-dockercfg-qd74k&resourceVersion=109184\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35864->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:14.643221 5050 reflector.go:561] object-"openstack"/"keystone": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dkeystone&resourceVersion=109887": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35756->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:14.643282 5050 trace.go:236] Trace[741946029]: "Reflector ListAndWatch" name:object-"openstack"/"keystone" (11-Dec-2025 15:54:01.754) (total time: 12889ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[741946029]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dkeystone&resourceVersion=109887": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35756->38.102.83.147:6443: read: connection reset by peer 12889ms (15:54:14.643) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[741946029]: [12.889074053s] [12.889074053s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:14.643302 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"keystone\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dkeystone&resourceVersion=109887\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35756->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:14.662873 5050 reflector.go:561] object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-nffdg": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dglance-operator-controller-manager-dockercfg-nffdg&resourceVersion=109671": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34282->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:14.662943 5050 trace.go:236] Trace[1659461351]: 
"Reflector ListAndWatch" name:object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-nffdg" (11-Dec-2025 15:54:01.430) (total time: 13232ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1659461351]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dglance-operator-controller-manager-dockercfg-nffdg&resourceVersion=109671": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34282->38.102.83.147:6443: read: connection reset by peer 13232ms (15:54:14.662) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1659461351]: [13.23273566s] [13.23273566s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:14.662965 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"glance-operator-controller-manager-dockercfg-nffdg\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dglance-operator-controller-manager-dockercfg-nffdg&resourceVersion=109671\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34282->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:14.682529 5050 reflector.go:561] object-"openshift-authentication"/"audit": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/configmaps?fieldSelector=metadata.name%3Daudit&resourceVersion=109158": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34300->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:14.682584 5050 trace.go:236] Trace[1072767527]: "Reflector ListAndWatch" name:object-"openshift-authentication"/"audit" (11-Dec-2025 15:54:01.438) (total time: 13244ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1072767527]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/configmaps?fieldSelector=metadata.name%3Daudit&resourceVersion=109158": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34300->38.102.83.147:6443: read: connection reset by peer 13244ms (15:54:14.682) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1072767527]: [13.244255829s] [13.244255829s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:14.682605 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"audit\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/configmaps?fieldSelector=metadata.name%3Daudit&resourceVersion=109158\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34300->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:14.702826 5050 reflector.go:561] object-"metallb-system"/"manager-account-dockercfg-m6zt9": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/secrets?fieldSelector=metadata.name%3Dmanager-account-dockercfg-m6zt9&resourceVersion=109586": dial tcp 38.102.83.147:6443: connect: connection 
refused - error from a previous attempt: read tcp 38.102.83.147:34312->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:14.702903 5050 trace.go:236] Trace[1271562522]: "Reflector ListAndWatch" name:object-"metallb-system"/"manager-account-dockercfg-m6zt9" (11-Dec-2025 15:54:01.445) (total time: 13257ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1271562522]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/secrets?fieldSelector=metadata.name%3Dmanager-account-dockercfg-m6zt9&resourceVersion=109586": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34312->38.102.83.147:6443: read: connection reset by peer 13257ms (15:54:14.702) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1271562522]: [13.257350019s] [13.257350019s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:14.702927 5050 reflector.go:158] "Unhandled Error" err="object-\"metallb-system\"/\"manager-account-dockercfg-m6zt9\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/secrets?fieldSelector=metadata.name%3Dmanager-account-dockercfg-m6zt9&resourceVersion=109586\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34312->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:14.722947 5050 reflector.go:561] object-"openshift-nmstate"/"nmstate-operator-dockercfg-b8vjd": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-nmstate/secrets?fieldSelector=metadata.name%3Dnmstate-operator-dockercfg-b8vjd&resourceVersion=109846": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34526->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:14.723046 5050 trace.go:236] Trace[1168230350]: "Reflector ListAndWatch" name:object-"openshift-nmstate"/"nmstate-operator-dockercfg-b8vjd" (11-Dec-2025 15:54:01.494) (total time: 13228ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1168230350]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-nmstate/secrets?fieldSelector=metadata.name%3Dnmstate-operator-dockercfg-b8vjd&resourceVersion=109846": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34526->38.102.83.147:6443: read: connection reset by peer 13228ms (15:54:14.722) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1168230350]: [13.228524607s] [13.228524607s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:14.723066 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-nmstate\"/\"nmstate-operator-dockercfg-b8vjd\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-nmstate/secrets?fieldSelector=metadata.name%3Dnmstate-operator-dockercfg-b8vjd&resourceVersion=109846\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34526->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:14.723630 5050 patch_prober.go:28] interesting pod/observability-operator-d8bb48f5d-wwdcc 
container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.1.128:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:14.723659 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/observability-operator-d8bb48f5d-wwdcc" podUID="1c8331c1-b8ee-456b-baa6-110917427b64" containerName="operator" probeResult="failure" output="Get \"http://10.217.1.128:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:14.743283 5050 reflector.go:561] object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication-operator/secrets?fieldSelector=metadata.name%3Dauthentication-operator-dockercfg-mz9bj&resourceVersion=109743": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34652->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:14.743355 5050 trace.go:236] Trace[157695936]: "Reflector ListAndWatch" name:object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" (11-Dec-2025 15:54:01.522) (total time: 13220ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[157695936]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication-operator/secrets?fieldSelector=metadata.name%3Dauthentication-operator-dockercfg-mz9bj&resourceVersion=109743": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34652->38.102.83.147:6443: read: connection reset by peer 13220ms (15:54:14.743) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[157695936]: [13.220859202s] [13.220859202s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:14.743383 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication-operator\"/\"authentication-operator-dockercfg-mz9bj\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication-operator/secrets?fieldSelector=metadata.name%3Dauthentication-operator-dockercfg-mz9bj&resourceVersion=109743\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34652->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:14.763661 5050 reflector.go:561] object-"openstack"/"glance-default-internal-config-data": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dglance-default-internal-config-data&resourceVersion=109586": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34732->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:14.763802 5050 trace.go:236] Trace[937355636]: "Reflector ListAndWatch" name:object-"openstack"/"glance-default-internal-config-data" (11-Dec-2025 15:54:01.533) (total time: 13230ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[937355636]: ---"Objects listed" error:Get 
"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dglance-default-internal-config-data&resourceVersion=109586": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34732->38.102.83.147:6443: read: connection reset by peer 13229ms (15:54:14.763) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[937355636]: [13.230093819s] [13.230093819s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:14.763827 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"glance-default-internal-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dglance-default-internal-config-data&resourceVersion=109586\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34732->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:14.779357 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-pjhfc" podUID="3c9e825c-0aee-42b9-a7a5-3191486f301d" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.85:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:14.782654 5050 reflector.go:561] object-"openstack"/"keystone-scripts": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dkeystone-scripts&resourceVersion=109887": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34788->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:14.782715 5050 trace.go:236] Trace[1853508614]: "Reflector ListAndWatch" name:object-"openstack"/"keystone-scripts" (11-Dec-2025 15:54:01.545) (total time: 13237ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1853508614]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dkeystone-scripts&resourceVersion=109887": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34788->38.102.83.147:6443: read: connection reset by peer 13236ms (15:54:14.782) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1853508614]: [13.237045685s] [13.237045685s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:14.782737 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"keystone-scripts\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dkeystone-scripts&resourceVersion=109887\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34788->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:14.802588 5050 reflector.go:561] object-"openshift-authentication"/"v4-0-config-system-session": failed to list *v1.Secret: Get 
"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/secrets?fieldSelector=metadata.name%3Dv4-0-config-system-session&resourceVersion=109137": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34872->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:14.802642 5050 trace.go:236] Trace[1346805754]: "Reflector ListAndWatch" name:object-"openshift-authentication"/"v4-0-config-system-session" (11-Dec-2025 15:54:01.559) (total time: 13242ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1346805754]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/secrets?fieldSelector=metadata.name%3Dv4-0-config-system-session&resourceVersion=109137": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34872->38.102.83.147:6443: read: connection reset by peer 13242ms (15:54:14.802) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1346805754]: [13.24281086s] [13.24281086s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:14.802661 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"v4-0-config-system-session\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/secrets?fieldSelector=metadata.name%3Dv4-0-config-system-session&resourceVersion=109137\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34872->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:14.823472 5050 reflector.go:561] object-"openshift-controller-manager-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109522": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34908->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:14.823543 5050 trace.go:236] Trace[269838022]: "Reflector ListAndWatch" name:object-"openshift-controller-manager-operator"/"kube-root-ca.crt" (11-Dec-2025 15:54:01.571) (total time: 13251ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[269838022]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109522": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34908->38.102.83.147:6443: read: connection reset by peer 13251ms (15:54:14.823) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[269838022]: [13.251534764s] [13.251534764s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:14.823563 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109522\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a 
previous attempt: read tcp 38.102.83.147:34908->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:14.842549 5050 reflector.go:561] object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ovn-kubernetes/secrets?fieldSelector=metadata.name%3Dovn-node-metrics-cert&resourceVersion=109887": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34938->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:14.842605 5050 trace.go:236] Trace[1915532741]: "Reflector ListAndWatch" name:object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" (11-Dec-2025 15:54:01.580) (total time: 13262ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1915532741]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ovn-kubernetes/secrets?fieldSelector=metadata.name%3Dovn-node-metrics-cert&resourceVersion=109887": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34938->38.102.83.147:6443: read: connection reset by peer 13262ms (15:54:14.842) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1915532741]: [13.262069616s] [13.262069616s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:14.842621 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"ovn-node-metrics-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ovn-kubernetes/secrets?fieldSelector=metadata.name%3Dovn-node-metrics-cert&resourceVersion=109887\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34938->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:14.862703 5050 reflector.go:561] object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/configmaps?fieldSelector=metadata.name%3Dv4-0-config-system-trusted-ca-bundle&resourceVersion=109888": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34948->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:14.862773 5050 trace.go:236] Trace[1402251184]: "Reflector ListAndWatch" name:object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" (11-Dec-2025 15:54:01.584) (total time: 13278ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1402251184]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/configmaps?fieldSelector=metadata.name%3Dv4-0-config-system-trusted-ca-bundle&resourceVersion=109888": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34948->38.102.83.147:6443: read: connection reset by peer 13278ms (15:54:14.862) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1402251184]: [13.27827598s] [13.27827598s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:14.862798 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"v4-0-config-system-trusted-ca-bundle\": 
Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/configmaps?fieldSelector=metadata.name%3Dv4-0-config-system-trusted-ca-bundle&resourceVersion=109888\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34948->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:14.882732 5050 reflector.go:561] object-"openstack"/"manila-api-config-data": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dmanila-api-config-data&resourceVersion=109803": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34978->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:14.882791 5050 trace.go:236] Trace[1766674921]: "Reflector ListAndWatch" name:object-"openstack"/"manila-api-config-data" (11-Dec-2025 15:54:01.591) (total time: 13291ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1766674921]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dmanila-api-config-data&resourceVersion=109803": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34978->38.102.83.147:6443: read: connection reset by peer 13291ms (15:54:14.882) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1766674921]: [13.29134587s] [13.29134587s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:14.882809 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"manila-api-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dmanila-api-config-data&resourceVersion=109803\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34978->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:14.894568 5050 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.147:6443: connect: connection refused" interval="3.2s" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:14.904542 5050 reflector.go:561] object-"openstack"/"ovndbcluster-nb-scripts": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dovndbcluster-nb-scripts&resourceVersion=109807": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35366->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:14.904589 5050 trace.go:236] Trace[2010135789]: "Reflector ListAndWatch" name:object-"openstack"/"ovndbcluster-nb-scripts" (11-Dec-2025 15:54:01.677) (total time: 13226ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[2010135789]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dovndbcluster-nb-scripts&resourceVersion=109807": dial tcp 38.102.83.147:6443: 
connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35366->38.102.83.147:6443: read: connection reset by peer 13226ms (15:54:14.904) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[2010135789]: [13.226833131s] [13.226833131s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:14.904706 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"ovndbcluster-nb-scripts\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dovndbcluster-nb-scripts&resourceVersion=109807\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35366->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:14.912371 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-5g8lw" podUID="3477354d-838b-48cc-a6c3-612088d82640" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.80:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:14.922943 5050 request.go:700] Waited for 3.033360953s, retries: 1, retry-after: 1s - retry-reason: due to retryable error, error: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dgalera-openstack-dockercfg-5gcmv&resourceVersion=109743": read tcp 38.102.83.147:35450->38.102.83.147:6443: read: connection reset by peer - request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dgalera-openstack-dockercfg-5gcmv&resourceVersion=109743 Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:14.923530 5050 reflector.go:561] object-"openstack"/"galera-openstack-dockercfg-5gcmv": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dgalera-openstack-dockercfg-5gcmv&resourceVersion=109743": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35450->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:14.923592 5050 trace.go:236] Trace[920927807]: "Reflector ListAndWatch" name:object-"openstack"/"galera-openstack-dockercfg-5gcmv" (11-Dec-2025 15:54:01.694) (total time: 13228ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[920927807]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dgalera-openstack-dockercfg-5gcmv&resourceVersion=109743": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35450->38.102.83.147:6443: read: connection reset by peer 13228ms (15:54:14.923) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[920927807]: [13.228839515s] [13.228839515s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:14.923610 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"galera-openstack-dockercfg-5gcmv\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dgalera-openstack-dockercfg-5gcmv&resourceVersion=109743\": dial tcp 38.102.83.147:6443: connect: connection 
refused - error from a previous attempt: read tcp 38.102.83.147:35450->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:14.942572 5050 reflector.go:561] object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/secrets?fieldSelector=metadata.name%3Dolm-operator-serving-cert&resourceVersion=109623": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35492->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:14.942648 5050 trace.go:236] Trace[909999600]: "Reflector ListAndWatch" name:object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" (11-Dec-2025 15:54:01.706) (total time: 13236ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[909999600]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/secrets?fieldSelector=metadata.name%3Dolm-operator-serving-cert&resourceVersion=109623": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35492->38.102.83.147:6443: read: connection reset by peer 13236ms (15:54:14.942) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[909999600]: [13.23647713s] [13.23647713s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:14.942666 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/secrets?fieldSelector=metadata.name%3Dolm-operator-serving-cert&resourceVersion=109623\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35492->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:14.962871 5050 reflector.go:561] object-"openshift-ingress"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109067": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35520->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:14.962946 5050 trace.go:236] Trace[1838628847]: "Reflector ListAndWatch" name:object-"openshift-ingress"/"openshift-service-ca.crt" (11-Dec-2025 15:54:01.714) (total time: 13248ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1838628847]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109067": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35520->38.102.83.147:6443: read: connection reset by peer 13248ms (15:54:14.962) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1838628847]: [13.248547483s] [13.248547483s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:14.962969 5050 reflector.go:158] "Unhandled Error" 
err="object-\"openshift-ingress\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109067\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35520->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:14.982917 5050 reflector.go:561] object-"openstack"/"ceilometer-config-data": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dceilometer-config-data&resourceVersion=109137": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35644->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:14.983033 5050 trace.go:236] Trace[1877278636]: "Reflector ListAndWatch" name:object-"openstack"/"ceilometer-config-data" (11-Dec-2025 15:54:01.744) (total time: 13238ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1877278636]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dceilometer-config-data&resourceVersion=109137": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35644->38.102.83.147:6443: read: connection reset by peer 13238ms (15:54:14.982) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1877278636]: [13.238970637s] [13.238970637s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:14.983051 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"ceilometer-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dceilometer-config-data&resourceVersion=109137\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35644->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:15.003249 5050 reflector.go:561] object-"openshift-etcd-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-etcd-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109158": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35334->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:15.003332 5050 trace.go:236] Trace[141981909]: "Reflector ListAndWatch" name:object-"openshift-etcd-operator"/"kube-root-ca.crt" (11-Dec-2025 15:54:01.673) (total time: 13329ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[141981909]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-etcd-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109158": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35334->38.102.83.147:6443: read: connection reset by peer 13329ms (15:54:15.003) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[141981909]: [13.329893272s] [13.329893272s] 
END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:15.003350 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-etcd-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-etcd-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109158\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35334->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:15.024453 5050 reflector.go:561] object-"openstack"/"openstackclient-openstackclient-dockercfg-c88pf": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dopenstackclient-openstackclient-dockercfg-c88pf&resourceVersion=109671": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35596->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:15.024641 5050 trace.go:236] Trace[1013153513]: "Reflector ListAndWatch" name:object-"openstack"/"openstackclient-openstackclient-dockercfg-c88pf" (11-Dec-2025 15:54:01.727) (total time: 13297ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1013153513]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dopenstackclient-openstackclient-dockercfg-c88pf&resourceVersion=109671": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35596->38.102.83.147:6443: read: connection reset by peer 13297ms (15:54:15.024) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1013153513]: [13.297299829s] [13.297299829s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:15.024666 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"openstackclient-openstackclient-dockercfg-c88pf\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dopenstackclient-openstackclient-dockercfg-c88pf&resourceVersion=109671\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35596->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:15.042951 5050 reflector.go:561] object-"openshift-authentication-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109559": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35618->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:15.043099 5050 trace.go:236] Trace[1706958716]: "Reflector ListAndWatch" name:object-"openshift-authentication-operator"/"openshift-service-ca.crt" (11-Dec-2025 15:54:01.743) (total time: 13299ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1706958716]: ---"Objects listed" error:Get 
"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109559": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35618->38.102.83.147:6443: read: connection reset by peer 13299ms (15:54:15.042) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1706958716]: [13.299231991s] [13.299231991s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:15.043118 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109559\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35618->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:15.058195 5050 patch_prober.go:28] interesting pod/perses-operator-5446b9c989-dqk4m container/perses-operator namespace/openshift-operators: Liveness probe status=failure output="Get \"http://10.217.1.129:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:15.058209 5050 patch_prober.go:28] interesting pod/perses-operator-5446b9c989-dqk4m container/perses-operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.1.129:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:15.058237 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/perses-operator-5446b9c989-dqk4m" podUID="051b7665-675e-4109-a8e8-5a416c8b49cc" containerName="perses-operator" probeResult="failure" output="Get \"http://10.217.1.129:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:15.058232 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operators/perses-operator-5446b9c989-dqk4m" podUID="051b7665-675e-4109-a8e8-5a416c8b49cc" containerName="perses-operator" probeResult="failure" output="Get \"http://10.217.1.129:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:15.063413 5050 reflector.go:561] object-"openstack-operators"/"test-operator-controller-manager-dockercfg-cclxg": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dtest-operator-controller-manager-dockercfg-cclxg&resourceVersion=109803": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35712->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:15.063516 5050 trace.go:236] Trace[1792802807]: "Reflector ListAndWatch" name:object-"openstack-operators"/"test-operator-controller-manager-dockercfg-cclxg" (11-Dec-2025 15:54:01.751) (total time: 13312ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1792802807]: ---"Objects listed" error:Get 
"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dtest-operator-controller-manager-dockercfg-cclxg&resourceVersion=109803": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35712->38.102.83.147:6443: read: connection reset by peer 13312ms (15:54:15.063) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1792802807]: [13.31225592s] [13.31225592s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:15.063538 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"test-operator-controller-manager-dockercfg-cclxg\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dtest-operator-controller-manager-dockercfg-cclxg&resourceVersion=109803\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35712->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:15.082480 5050 reflector.go:561] object-"openstack"/"dataplanenodeset-openstack-cell1": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Ddataplanenodeset-openstack-cell1&resourceVersion=109671": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35770->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:15.082552 5050 trace.go:236] Trace[1917734088]: "Reflector ListAndWatch" name:object-"openstack"/"dataplanenodeset-openstack-cell1" (11-Dec-2025 15:54:01.763) (total time: 13318ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1917734088]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Ddataplanenodeset-openstack-cell1&resourceVersion=109671": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35770->38.102.83.147:6443: read: connection reset by peer 13318ms (15:54:15.082) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1917734088]: [13.318813266s] [13.318813266s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:15.082570 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"dataplanenodeset-openstack-cell1\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Ddataplanenodeset-openstack-cell1&resourceVersion=109671\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35770->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:15.102633 5050 reflector.go:561] object-"openshift-route-controller-manager"/"client-ca": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/configmaps?fieldSelector=metadata.name%3Dclient-ca&resourceVersion=109762": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35846->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:15.102688 5050 
trace.go:236] Trace[1737725458]: "Reflector ListAndWatch" name:object-"openshift-route-controller-manager"/"client-ca" (11-Dec-2025 15:54:01.778) (total time: 13324ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1737725458]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/configmaps?fieldSelector=metadata.name%3Dclient-ca&resourceVersion=109762": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35846->38.102.83.147:6443: read: connection reset by peer 13324ms (15:54:15.102) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1737725458]: [13.324352594s] [13.324352594s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:15.102701 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-route-controller-manager\"/\"client-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/configmaps?fieldSelector=metadata.name%3Dclient-ca&resourceVersion=109762\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35846->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:15.123435 5050 reflector.go:561] object-"openshift-apiserver"/"serving-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/secrets?fieldSelector=metadata.name%3Dserving-cert&resourceVersion=109846": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34086->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:15.123487 5050 trace.go:236] Trace[227427146]: "Reflector ListAndWatch" name:object-"openshift-apiserver"/"serving-cert" (11-Dec-2025 15:54:01.395) (total time: 13727ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[227427146]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/secrets?fieldSelector=metadata.name%3Dserving-cert&resourceVersion=109846": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34086->38.102.83.147:6443: read: connection reset by peer 13727ms (15:54:15.123) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[227427146]: [13.727698089s] [13.727698089s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:15.123503 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/secrets?fieldSelector=metadata.name%3Dserving-cert&resourceVersion=109846\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34086->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:15.127658 5050 patch_prober.go:28] interesting pod/controller-manager-69cffd76bd-8bkp6 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.65:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 11 15:54:41 crc kubenswrapper[5050]: 
I1211 15:54:15.127658 5050 patch_prober.go:28] interesting pod/controller-manager-69cffd76bd-8bkp6 container/controller-manager namespace/openshift-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.65:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:15.127695 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-69cffd76bd-8bkp6" podUID="6a956942-e4db-4c66-a7c2-1c370c1569f4" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.65:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:15.127726 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-controller-manager/controller-manager-69cffd76bd-8bkp6" podUID="6a956942-e4db-4c66-a7c2-1c370c1569f4" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.65:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:15.142807 5050 reflector.go:561] object-"openstack"/"manila-config-data": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dmanila-config-data&resourceVersion=109803": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34214->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:15.142870 5050 trace.go:236] Trace[1310314646]: "Reflector ListAndWatch" name:object-"openstack"/"manila-config-data" (11-Dec-2025 15:54:01.408) (total time: 13734ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1310314646]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dmanila-config-data&resourceVersion=109803": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34214->38.102.83.147:6443: read: connection reset by peer 13734ms (15:54:15.142) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1310314646]: [13.734507342s] [13.734507342s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:15.142888 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"manila-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dmanila-config-data&resourceVersion=109803\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34214->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:15.162968 5050 reflector.go:561] object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-t4t27": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dovncluster-ovndbcluster-nb-dockercfg-t4t27&resourceVersion=109887": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34266->38.102.83.147:6443: read: connection reset 
by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:15.163114 5050 trace.go:236] Trace[583272446]: "Reflector ListAndWatch" name:object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-t4t27" (11-Dec-2025 15:54:01.426) (total time: 13736ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[583272446]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dovncluster-ovndbcluster-nb-dockercfg-t4t27&resourceVersion=109887": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34266->38.102.83.147:6443: read: connection reset by peer 13736ms (15:54:15.162) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[583272446]: [13.736367682s] [13.736367682s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:15.163141 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"ovncluster-ovndbcluster-nb-dockercfg-t4t27\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dovncluster-ovndbcluster-nb-dockercfg-t4t27&resourceVersion=109887\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34266->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:15.183049 5050 reflector.go:561] object-"openshift-multus"/"metrics-daemon-secret": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-multus/secrets?fieldSelector=metadata.name%3Dmetrics-daemon-secret&resourceVersion=109137": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34432->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:15.183108 5050 trace.go:236] Trace[1928117557]: "Reflector ListAndWatch" name:object-"openshift-multus"/"metrics-daemon-secret" (11-Dec-2025 15:54:01.475) (total time: 13707ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1928117557]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-multus/secrets?fieldSelector=metadata.name%3Dmetrics-daemon-secret&resourceVersion=109137": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34432->38.102.83.147:6443: read: connection reset by peer 13707ms (15:54:15.183) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1928117557]: [13.707381556s] [13.707381556s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:15.183131 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"metrics-daemon-secret\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-multus/secrets?fieldSelector=metadata.name%3Dmetrics-daemon-secret&resourceVersion=109137\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34432->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:15.203260 5050 reflector.go:561] object-"openstack"/"memcached-config-data": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dmemcached-config-data&resourceVersion=109522": dial 
tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34538->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:15.203348 5050 trace.go:236] Trace[1449201583]: "Reflector ListAndWatch" name:object-"openstack"/"memcached-config-data" (11-Dec-2025 15:54:01.495) (total time: 13707ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1449201583]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dmemcached-config-data&resourceVersion=109522": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34538->38.102.83.147:6443: read: connection reset by peer 13707ms (15:54:15.203) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1449201583]: [13.707771266s] [13.707771266s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:15.203372 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"memcached-config-data\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dmemcached-config-data&resourceVersion=109522\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34538->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:15.209123 5050 patch_prober.go:28] interesting pod/apiserver-76f77b778f-cd66n container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/readyz?exclude=etcd&exclude=etcd-readiness\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:15.209176 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-76f77b778f-cd66n" podUID="ea884e88-c0df-4212-976a-0d7ce1731fdc" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.9:8443/readyz?exclude=etcd&exclude=etcd-readiness\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:15.209208 5050 patch_prober.go:28] interesting pod/apiserver-76f77b778f-cd66n container/openshift-apiserver namespace/openshift-apiserver: Liveness probe status=failure output="Get \"https://10.217.0.9:8443/livez?exclude=etcd\": context deadline exceeded" start-of-body= Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:15.209278 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-apiserver/apiserver-76f77b778f-cd66n" podUID="ea884e88-c0df-4212-976a-0d7ce1731fdc" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.9:8443/livez?exclude=etcd\": context deadline exceeded" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:15.223243 5050 reflector.go:561] object-"openshift-console"/"console-serving-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/secrets?fieldSelector=metadata.name%3Dconsole-serving-cert&resourceVersion=109846": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34590->38.102.83.147:6443: read: connection 
reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:15.223320 5050 trace.go:236] Trace[1427474337]: "Reflector ListAndWatch" name:object-"openshift-console"/"console-serving-cert" (11-Dec-2025 15:54:01.507) (total time: 13716ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1427474337]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/secrets?fieldSelector=metadata.name%3Dconsole-serving-cert&resourceVersion=109846": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34590->38.102.83.147:6443: read: connection reset by peer 13716ms (15:54:15.223) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1427474337]: [13.716184032s] [13.716184032s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:15.223343 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-console\"/\"console-serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/secrets?fieldSelector=metadata.name%3Dconsole-serving-cert&resourceVersion=109846\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34590->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:15.242434 5050 reflector.go:561] object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-multus/secrets?fieldSelector=metadata.name%3Dmultus-ancillary-tools-dockercfg-vnmsz&resourceVersion=109360": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34630->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:15.242502 5050 trace.go:236] Trace[759021614]: "Reflector ListAndWatch" name:object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" (11-Dec-2025 15:54:01.518) (total time: 13724ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[759021614]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-multus/secrets?fieldSelector=metadata.name%3Dmultus-ancillary-tools-dockercfg-vnmsz&resourceVersion=109360": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34630->38.102.83.147:6443: read: connection reset by peer 13724ms (15:54:15.242) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[759021614]: [13.72431243s] [13.72431243s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:15.242521 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"multus-ancillary-tools-dockercfg-vnmsz\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-multus/secrets?fieldSelector=metadata.name%3Dmultus-ancillary-tools-dockercfg-vnmsz&resourceVersion=109360\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34630->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:15.262739 5050 reflector.go:561] object-"openshift-nmstate"/"nmstate-handler-dockercfg-94hht": failed to list *v1.Secret: Get 
"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-nmstate/secrets?fieldSelector=metadata.name%3Dnmstate-handler-dockercfg-94hht&resourceVersion=109743": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34844->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:15.262790 5050 trace.go:236] Trace[311408994]: "Reflector ListAndWatch" name:object-"openshift-nmstate"/"nmstate-handler-dockercfg-94hht" (11-Dec-2025 15:54:01.559) (total time: 13703ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[311408994]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-nmstate/secrets?fieldSelector=metadata.name%3Dnmstate-handler-dockercfg-94hht&resourceVersion=109743": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34844->38.102.83.147:6443: read: connection reset by peer 13703ms (15:54:15.262) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[311408994]: [13.703414159s] [13.703414159s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:15.262806 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-nmstate\"/\"nmstate-handler-dockercfg-94hht\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-nmstate/secrets?fieldSelector=metadata.name%3Dnmstate-handler-dockercfg-94hht&resourceVersion=109743\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34844->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:15.282560 5050 reflector.go:561] object-"openstack"/"manila-scripts": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dmanila-scripts&resourceVersion=109803": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34896->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:15.282640 5050 trace.go:236] Trace[1768828237]: "Reflector ListAndWatch" name:object-"openstack"/"manila-scripts" (11-Dec-2025 15:54:01.569) (total time: 13713ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1768828237]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dmanila-scripts&resourceVersion=109803": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34896->38.102.83.147:6443: read: connection reset by peer 13713ms (15:54:15.282) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1768828237]: [13.7131556s] [13.7131556s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:15.282660 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"manila-scripts\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dmanila-scripts&resourceVersion=109803\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34896->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:15.302780 5050 reflector.go:561] 
object-"openshift-image-registry"/"image-registry-tls": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/secrets?fieldSelector=metadata.name%3Dimage-registry-tls&resourceVersion=109743": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34928->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:15.302833 5050 trace.go:236] Trace[546491959]: "Reflector ListAndWatch" name:object-"openshift-image-registry"/"image-registry-tls" (11-Dec-2025 15:54:01.579) (total time: 13723ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[546491959]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/secrets?fieldSelector=metadata.name%3Dimage-registry-tls&resourceVersion=109743": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34928->38.102.83.147:6443: read: connection reset by peer 13723ms (15:54:15.302) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[546491959]: [13.723397565s] [13.723397565s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:15.302852 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-image-registry\"/\"image-registry-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/secrets?fieldSelector=metadata.name%3Dimage-registry-tls&resourceVersion=109743\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34928->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:15.322743 5050 reflector.go:561] object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-mz95s": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dopenstack-operator-controller-manager-dockercfg-mz95s&resourceVersion=109743": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34966->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:15.322796 5050 trace.go:236] Trace[956003630]: "Reflector ListAndWatch" name:object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-mz95s" (11-Dec-2025 15:54:01.590) (total time: 13732ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[956003630]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dopenstack-operator-controller-manager-dockercfg-mz95s&resourceVersion=109743": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34966->38.102.83.147:6443: read: connection reset by peer 13732ms (15:54:15.322) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[956003630]: [13.732421726s] [13.732421726s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:15.322813 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"openstack-operator-controller-manager-dockercfg-mz95s\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dopenstack-operator-controller-manager-dockercfg-mz95s&resourceVersion=109743\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34966->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:15.343004 5050 reflector.go:561] object-"openstack"/"cert-galera-openstack-cell1-svc": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-galera-openstack-cell1-svc&resourceVersion=109743": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35040->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:15.343082 5050 trace.go:236] Trace[1740064885]: "Reflector ListAndWatch" name:object-"openstack"/"cert-galera-openstack-cell1-svc" (11-Dec-2025 15:54:01.601) (total time: 13742ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1740064885]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-galera-openstack-cell1-svc&resourceVersion=109743": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35040->38.102.83.147:6443: read: connection reset by peer 13741ms (15:54:15.342) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1740064885]: [13.742019513s] [13.742019513s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:15.343127 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"cert-galera-openstack-cell1-svc\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-galera-openstack-cell1-svc&resourceVersion=109743\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35040->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:15.362736 5050 reflector.go:561] object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-f52rf": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dtelemetry-operator-controller-manager-dockercfg-f52rf&resourceVersion=109803": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35096->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:15.362815 5050 trace.go:236] Trace[693539987]: "Reflector ListAndWatch" name:object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-f52rf" (11-Dec-2025 15:54:01.619) (total time: 13743ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[693539987]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dtelemetry-operator-controller-manager-dockercfg-f52rf&resourceVersion=109803": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35096->38.102.83.147:6443: read: connection reset by peer 13743ms (15:54:15.362) Dec 11 15:54:41 
crc kubenswrapper[5050]: Trace[693539987]: [13.743352139s] [13.743352139s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:15.362840 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"telemetry-operator-controller-manager-dockercfg-f52rf\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dtelemetry-operator-controller-manager-dockercfg-f52rf&resourceVersion=109803\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35096->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:15.382733 5050 reflector.go:561] object-"openshift-etcd-operator"/"etcd-client": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-etcd-operator/secrets?fieldSelector=metadata.name%3Detcd-client&resourceVersion=109137": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35116->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:15.382807 5050 trace.go:236] Trace[1416078524]: "Reflector ListAndWatch" name:object-"openshift-etcd-operator"/"etcd-client" (11-Dec-2025 15:54:01.621) (total time: 13761ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1416078524]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-etcd-operator/secrets?fieldSelector=metadata.name%3Detcd-client&resourceVersion=109137": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35116->38.102.83.147:6443: read: connection reset by peer 13761ms (15:54:15.382) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1416078524]: [13.761373942s] [13.761373942s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:15.382827 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-etcd-operator\"/\"etcd-client\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-etcd-operator/secrets?fieldSelector=metadata.name%3Detcd-client&resourceVersion=109137\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35116->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:15.401241 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-controller-manager-5d666c4679-krnwf" podUID="a6130f1a-c95b-445f-8235-e57fdcb270fe" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.48:8080/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:15.401834 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-5d666c4679-krnwf" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:15.402514 5050 reflector.go:561] object-"openshift-cluster-samples-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-samples-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109888": dial tcp 38.102.83.147:6443: connect: 
connection refused - error from a previous attempt: read tcp 38.102.83.147:35102->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:15.402598 5050 trace.go:236] Trace[641508969]: "Reflector ListAndWatch" name:object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" (11-Dec-2025 15:54:01.620) (total time: 13782ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[641508969]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-samples-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109888": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35102->38.102.83.147:6443: read: connection reset by peer 13782ms (15:54:15.402) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[641508969]: [13.782146477s] [13.782146477s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:15.402619 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-samples-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-samples-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109888\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35102->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:15.423470 5050 reflector.go:561] object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/secrets?fieldSelector=metadata.name%3Dpackage-server-manager-serving-cert&resourceVersion=109743": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35348->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:15.423538 5050 trace.go:236] Trace[622063366]: "Reflector ListAndWatch" name:object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" (11-Dec-2025 15:54:01.674) (total time: 13749ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[622063366]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/secrets?fieldSelector=metadata.name%3Dpackage-server-manager-serving-cert&resourceVersion=109743": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35348->38.102.83.147:6443: read: connection reset by peer 13748ms (15:54:15.423) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[622063366]: [13.749030271s] [13.749030271s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:15.423557 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-operator-lifecycle-manager\"/\"package-server-manager-serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/secrets?fieldSelector=metadata.name%3Dpackage-server-manager-serving-cert&resourceVersion=109743\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35348->38.102.83.147:6443: read: connection reset by peer" 
logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:15.442768 5050 reflector.go:561] object-"openshift-network-console"/"networking-console-plugin-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-console/secrets?fieldSelector=metadata.name%3Dnetworking-console-plugin-cert&resourceVersion=109623": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35422->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:15.442843 5050 trace.go:236] Trace[732561238]: "Reflector ListAndWatch" name:object-"openshift-network-console"/"networking-console-plugin-cert" (11-Dec-2025 15:54:01.688) (total time: 13754ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[732561238]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-console/secrets?fieldSelector=metadata.name%3Dnetworking-console-plugin-cert&resourceVersion=109623": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35422->38.102.83.147:6443: read: connection reset by peer 13754ms (15:54:15.442) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[732561238]: [13.754677642s] [13.754677642s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:15.442865 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-console\"/\"networking-console-plugin-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-console/secrets?fieldSelector=metadata.name%3Dnetworking-console-plugin-cert&resourceVersion=109623\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35422->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:15.463287 5050 reflector.go:561] object-"openstack"/"neutron-httpd-config": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dneutron-httpd-config&resourceVersion=109846": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35472->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:15.463352 5050 trace.go:236] Trace[80622854]: "Reflector ListAndWatch" name:object-"openstack"/"neutron-httpd-config" (11-Dec-2025 15:54:01.702) (total time: 13760ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[80622854]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dneutron-httpd-config&resourceVersion=109846": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35472->38.102.83.147:6443: read: connection reset by peer 13760ms (15:54:15.463) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[80622854]: [13.760416505s] [13.760416505s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:15.463373 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"neutron-httpd-config\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dneutron-httpd-config&resourceVersion=109846\": dial tcp 
38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35472->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:15.483263 5050 reflector.go:561] object-"openshift-service-ca"/"service-ca-dockercfg-pn86c": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca/secrets?fieldSelector=metadata.name%3Dservice-ca-dockercfg-pn86c&resourceVersion=109803": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35498->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:15.483322 5050 trace.go:236] Trace[953248744]: "Reflector ListAndWatch" name:object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" (11-Dec-2025 15:54:01.706) (total time: 13777ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[953248744]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca/secrets?fieldSelector=metadata.name%3Dservice-ca-dockercfg-pn86c&resourceVersion=109803": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35498->38.102.83.147:6443: read: connection reset by peer 13777ms (15:54:15.483) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[953248744]: [13.777182134s] [13.777182134s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:15.483343 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-service-ca\"/\"service-ca-dockercfg-pn86c\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca/secrets?fieldSelector=metadata.name%3Dservice-ca-dockercfg-pn86c&resourceVersion=109803\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35498->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:15.503025 5050 reflector.go:561] object-"openshift-ingress"/"service-ca-bundle": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress/configmaps?fieldSelector=metadata.name%3Dservice-ca-bundle&resourceVersion=109888": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35570->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:15.503085 5050 trace.go:236] Trace[1176110535]: "Reflector ListAndWatch" name:object-"openshift-ingress"/"service-ca-bundle" (11-Dec-2025 15:54:01.722) (total time: 13780ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1176110535]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress/configmaps?fieldSelector=metadata.name%3Dservice-ca-bundle&resourceVersion=109888": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35570->38.102.83.147:6443: read: connection reset by peer 13780ms (15:54:15.502) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1176110535]: [13.780524284s] [13.780524284s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:15.503109 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-ingress\"/\"service-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress/configmaps?fieldSelector=metadata.name%3Dservice-ca-bundle&resourceVersion=109888\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35570->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:15.523048 5050 reflector.go:561] object-"openshift-operators"/"obo-prometheus-operator-dockercfg-mpfzr": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operators/secrets?fieldSelector=metadata.name%3Dobo-prometheus-operator-dockercfg-mpfzr&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35680->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:15.523126 5050 trace.go:236] Trace[256142501]: "Reflector ListAndWatch" name:object-"openshift-operators"/"obo-prometheus-operator-dockercfg-mpfzr" (11-Dec-2025 15:54:01.748) (total time: 13774ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[256142501]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operators/secrets?fieldSelector=metadata.name%3Dobo-prometheus-operator-dockercfg-mpfzr&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35680->38.102.83.147:6443: read: connection reset by peer 13774ms (15:54:15.523) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[256142501]: [13.774239046s] [13.774239046s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:15.523146 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-operators\"/\"obo-prometheus-operator-dockercfg-mpfzr\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operators/secrets?fieldSelector=metadata.name%3Dobo-prometheus-operator-dockercfg-mpfzr&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35680->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:15.542835 5050 reflector.go:561] object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmachine-config-daemon-dockercfg-r5tcq&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35728->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:15.542919 5050 trace.go:236] Trace[779923772]: "Reflector ListAndWatch" name:object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" (11-Dec-2025 15:54:01.751) (total time: 13791ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[779923772]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmachine-config-daemon-dockercfg-r5tcq&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35728->38.102.83.147:6443: read: 
connection reset by peer 13791ms (15:54:15.542) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[779923772]: [13.791113968s] [13.791113968s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:15.542937 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"machine-config-daemon-dockercfg-r5tcq\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmachine-config-daemon-dockercfg-r5tcq&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35728->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:15.545940 5050 scope.go:117] "RemoveContainer" containerID="33899e24adacef82929b0fbbcd58af7cb4f7aa3935b49ec9b5b4045a51da1d14" Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:15.546238 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:15.563417 5050 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:6443/readyz\": dial tcp 192.168.126.11:6443: connect: connection refused" start-of-body= Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:15.563460 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/readyz\": dial tcp 192.168.126.11:6443: connect: connection refused" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:15.563502 5050 reflector.go:561] object-"openstack"/"placement-placement-dockercfg-4zzmp": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dplacement-placement-dockercfg-4zzmp&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35842->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:15.563642 5050 trace.go:236] Trace[662971247]: "Reflector ListAndWatch" name:object-"openstack"/"placement-placement-dockercfg-4zzmp" (11-Dec-2025 15:54:01.776) (total time: 13787ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[662971247]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dplacement-placement-dockercfg-4zzmp&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35842->38.102.83.147:6443: read: connection reset by peer 13786ms (15:54:15.563) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[662971247]: [13.78746267s] [13.78746267s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:15.563671 5050 reflector.go:158] "Unhandled 
Error" err="object-\"openstack\"/\"placement-placement-dockercfg-4zzmp\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dplacement-placement-dockercfg-4zzmp&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35842->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:15.583998 5050 reflector.go:561] object-"openstack"/"cinder-api-config-data": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcinder-api-config-data&resourceVersion=109586": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34322->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:15.584072 5050 trace.go:236] Trace[1542335768]: "Reflector ListAndWatch" name:object-"openstack"/"cinder-api-config-data" (11-Dec-2025 15:54:01.447) (total time: 14136ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1542335768]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcinder-api-config-data&resourceVersion=109586": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34322->38.102.83.147:6443: read: connection reset by peer 14136ms (15:54:15.583) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1542335768]: [14.136363297s] [14.136363297s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:15.584093 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"cinder-api-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcinder-api-config-data&resourceVersion=109586\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34322->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:15.603254 5050 reflector.go:561] object-"openshift-ingress-canary"/"canary-serving-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-canary/secrets?fieldSelector=metadata.name%3Dcanary-serving-cert&resourceVersion=109803": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34472->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:15.603328 5050 trace.go:236] Trace[1081745703]: "Reflector ListAndWatch" name:object-"openshift-ingress-canary"/"canary-serving-cert" (11-Dec-2025 15:54:01.489) (total time: 14113ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1081745703]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-canary/secrets?fieldSelector=metadata.name%3Dcanary-serving-cert&resourceVersion=109803": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34472->38.102.83.147:6443: read: connection reset by peer 14113ms (15:54:15.603) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1081745703]: [14.113962007s] 
[14.113962007s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:15.603351 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-ingress-canary\"/\"canary-serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-canary/secrets?fieldSelector=metadata.name%3Dcanary-serving-cert&resourceVersion=109803\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34472->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:15.623377 5050 reflector.go:561] object-"metallb-system"/"frr-k8s-daemon-dockercfg-wks79": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/secrets?fieldSelector=metadata.name%3Dfrr-k8s-daemon-dockercfg-wks79&resourceVersion=109586": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34556->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:15.623445 5050 trace.go:236] Trace[988594394]: "Reflector ListAndWatch" name:object-"metallb-system"/"frr-k8s-daemon-dockercfg-wks79" (11-Dec-2025 15:54:01.500) (total time: 14122ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[988594394]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/secrets?fieldSelector=metadata.name%3Dfrr-k8s-daemon-dockercfg-wks79&resourceVersion=109586": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34556->38.102.83.147:6443: read: connection reset by peer 14122ms (15:54:15.623) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[988594394]: [14.122539496s] [14.122539496s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:15.623463 5050 reflector.go:158] "Unhandled Error" err="object-\"metallb-system\"/\"frr-k8s-daemon-dockercfg-wks79\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/secrets?fieldSelector=metadata.name%3Dfrr-k8s-daemon-dockercfg-wks79&resourceVersion=109586\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34556->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:15.642607 5050 reflector.go:561] object-"openstack"/"cinder-config-data": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcinder-config-data&resourceVersion=109803": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34606->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:15.642961 5050 trace.go:236] Trace[1632869125]: "Reflector ListAndWatch" name:object-"openstack"/"cinder-config-data" (11-Dec-2025 15:54:01.509) (total time: 14133ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1632869125]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcinder-config-data&resourceVersion=109803": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34606->38.102.83.147:6443: read: 
connection reset by peer 14133ms (15:54:15.642) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1632869125]: [14.133651674s] [14.133651674s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:15.642987 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"cinder-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcinder-config-data&resourceVersion=109803\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34606->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:15.662440 5050 reflector.go:561] object-"openshift-etcd-operator"/"etcd-service-ca-bundle": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-etcd-operator/configmaps?fieldSelector=metadata.name%3Detcd-service-ca-bundle&resourceVersion=109067": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34858->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:15.662499 5050 trace.go:236] Trace[1369022996]: "Reflector ListAndWatch" name:object-"openshift-etcd-operator"/"etcd-service-ca-bundle" (11-Dec-2025 15:54:01.559) (total time: 14103ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1369022996]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-etcd-operator/configmaps?fieldSelector=metadata.name%3Detcd-service-ca-bundle&resourceVersion=109067": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34858->38.102.83.147:6443: read: connection reset by peer 14103ms (15:54:15.662) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1369022996]: [14.103079655s] [14.103079655s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:15.662517 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-etcd-operator\"/\"etcd-service-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-etcd-operator/configmaps?fieldSelector=metadata.name%3Detcd-service-ca-bundle&resourceVersion=109067\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34858->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:15.682918 5050 reflector.go:561] object-"openshift-kube-storage-version-migrator-operator"/"config": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-storage-version-migrator-operator/configmaps?fieldSelector=metadata.name%3Dconfig&resourceVersion=109067": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34922->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:15.682969 5050 trace.go:236] Trace[1282266690]: "Reflector ListAndWatch" name:object-"openshift-kube-storage-version-migrator-operator"/"config" (11-Dec-2025 15:54:01.571) (total time: 14110ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1282266690]: ---"Objects listed" error:Get 
"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-storage-version-migrator-operator/configmaps?fieldSelector=metadata.name%3Dconfig&resourceVersion=109067": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34922->38.102.83.147:6443: read: connection reset by peer 14110ms (15:54:15.682) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1282266690]: [14.110954406s] [14.110954406s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:15.682985 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-kube-storage-version-migrator-operator\"/\"config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-storage-version-migrator-operator/configmaps?fieldSelector=metadata.name%3Dconfig&resourceVersion=109067\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34922->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:15.703117 5050 reflector.go:561] object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/secrets?fieldSelector=metadata.name%3Dcertified-operators-dockercfg-4rs5g&resourceVersion=109743": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34974->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:15.703171 5050 trace.go:236] Trace[1105235963]: "Reflector ListAndWatch" name:object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" (11-Dec-2025 15:54:01.590) (total time: 14112ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1105235963]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/secrets?fieldSelector=metadata.name%3Dcertified-operators-dockercfg-4rs5g&resourceVersion=109743": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34974->38.102.83.147:6443: read: connection reset by peer 14112ms (15:54:15.703) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1105235963]: [14.112780625s] [14.112780625s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:15.703189 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-4rs5g\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/secrets?fieldSelector=metadata.name%3Dcertified-operators-dockercfg-4rs5g&resourceVersion=109743\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34974->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:15.723306 5050 reflector.go:561] object-"openstack"/"ovndbcluster-nb-config": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dovndbcluster-nb-config&resourceVersion=109807": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34998->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc 
kubenswrapper[5050]: I1211 15:54:15.723393 5050 trace.go:236] Trace[651012818]: "Reflector ListAndWatch" name:object-"openstack"/"ovndbcluster-nb-config" (11-Dec-2025 15:54:01.593) (total time: 14129ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[651012818]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dovndbcluster-nb-config&resourceVersion=109807": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34998->38.102.83.147:6443: read: connection reset by peer 14129ms (15:54:15.723) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[651012818]: [14.129711449s] [14.129711449s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:15.723416 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"ovndbcluster-nb-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dovndbcluster-nb-config&resourceVersion=109807\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34998->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:15.743585 5050 reflector.go:561] object-"openstack"/"ovnnorthd-config": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dovnnorthd-config&resourceVersion=109602": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34982->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:15.743655 5050 trace.go:236] Trace[997071342]: "Reflector ListAndWatch" name:object-"openstack"/"ovnnorthd-config" (11-Dec-2025 15:54:01.591) (total time: 14152ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[997071342]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dovnnorthd-config&resourceVersion=109602": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34982->38.102.83.147:6443: read: connection reset by peer 14152ms (15:54:15.743) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[997071342]: [14.152217611s] [14.152217611s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:15.743676 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"ovnnorthd-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dovnnorthd-config&resourceVersion=109602\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34982->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:15.763068 5050 reflector.go:561] object-"metallb-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109602": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35058->38.102.83.147:6443: read: connection 
reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:15.763121 5050 trace.go:236] Trace[1051579051]: "Reflector ListAndWatch" name:object-"metallb-system"/"kube-root-ca.crt" (11-Dec-2025 15:54:01.606) (total time: 14156ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1051579051]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109602": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35058->38.102.83.147:6443: read: connection reset by peer 14156ms (15:54:15.763) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1051579051]: [14.156577808s] [14.156577808s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:15.763138 5050 reflector.go:158] "Unhandled Error" err="object-\"metallb-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109602\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35058->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:15.782972 5050 reflector.go:561] object-"openshift-nmstate"/"nginx-conf": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-nmstate/configmaps?fieldSelector=metadata.name%3Dnginx-conf&resourceVersion=109559": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35030->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:15.783064 5050 trace.go:236] Trace[592238877]: "Reflector ListAndWatch" name:object-"openshift-nmstate"/"nginx-conf" (11-Dec-2025 15:54:01.599) (total time: 14183ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[592238877]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-nmstate/configmaps?fieldSelector=metadata.name%3Dnginx-conf&resourceVersion=109559": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35030->38.102.83.147:6443: read: connection reset by peer 14183ms (15:54:15.782) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[592238877]: [14.183082798s] [14.183082798s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:15.783083 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-nmstate\"/\"nginx-conf\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-nmstate/configmaps?fieldSelector=metadata.name%3Dnginx-conf&resourceVersion=109559\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35030->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:15.802404 5050 reflector.go:561] object-"openshift-oauth-apiserver"/"audit-1": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-oauth-apiserver/configmaps?fieldSelector=metadata.name%3Daudit-1&resourceVersion=109642": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 
38.102.83.147:35172->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:15.802478 5050 trace.go:236] Trace[336054302]: "Reflector ListAndWatch" name:object-"openshift-oauth-apiserver"/"audit-1" (11-Dec-2025 15:54:01.639) (total time: 14162ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[336054302]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-oauth-apiserver/configmaps?fieldSelector=metadata.name%3Daudit-1&resourceVersion=109642": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35172->38.102.83.147:6443: read: connection reset by peer 14162ms (15:54:15.802) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[336054302]: [14.162949889s] [14.162949889s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:15.802495 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-oauth-apiserver\"/\"audit-1\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-oauth-apiserver/configmaps?fieldSelector=metadata.name%3Daudit-1&resourceVersion=109642\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35172->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:15.822616 5050 reflector.go:561] object-"openshift-service-ca"/"signing-key": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca/secrets?fieldSelector=metadata.name%3Dsigning-key&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35186->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:15.822671 5050 trace.go:236] Trace[2062904566]: "Reflector ListAndWatch" name:object-"openshift-service-ca"/"signing-key" (11-Dec-2025 15:54:01.640) (total time: 14182ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[2062904566]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca/secrets?fieldSelector=metadata.name%3Dsigning-key&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35186->38.102.83.147:6443: read: connection reset by peer 14182ms (15:54:15.822) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[2062904566]: [14.182059811s] [14.182059811s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:15.822688 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-service-ca\"/\"signing-key\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca/secrets?fieldSelector=metadata.name%3Dsigning-key&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35186->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:15.842792 5050 reflector.go:561] object-"openstack"/"octavia-housekeeping-scripts": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Doctavia-housekeeping-scripts&resourceVersion=109671": dial tcp 38.102.83.147:6443: 
connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35292->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:15.842846 5050 trace.go:236] Trace[1652102079]: "Reflector ListAndWatch" name:object-"openstack"/"octavia-housekeeping-scripts" (11-Dec-2025 15:54:01.663) (total time: 14179ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1652102079]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Doctavia-housekeeping-scripts&resourceVersion=109671": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35292->38.102.83.147:6443: read: connection reset by peer 14179ms (15:54:15.842) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1652102079]: [14.179724099s] [14.179724099s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:15.842867 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"octavia-housekeeping-scripts\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Doctavia-housekeeping-scripts&resourceVersion=109671\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35292->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:15.862834 5050 reflector.go:561] object-"openshift-controller-manager"/"config": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/configmaps?fieldSelector=metadata.name%3Dconfig&resourceVersion=109522": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35314->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:15.862893 5050 trace.go:236] Trace[2049082925]: "Reflector ListAndWatch" name:object-"openshift-controller-manager"/"config" (11-Dec-2025 15:54:01.671) (total time: 14191ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[2049082925]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/configmaps?fieldSelector=metadata.name%3Dconfig&resourceVersion=109522": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35314->38.102.83.147:6443: read: connection reset by peer 14191ms (15:54:15.862) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[2049082925]: [14.191705069s] [14.191705069s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:15.862913 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/configmaps?fieldSelector=metadata.name%3Dconfig&resourceVersion=109522\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35314->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:15.882758 5050 reflector.go:561] object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get 
"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-samples-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109762": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35588->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:15.882805 5050 trace.go:236] Trace[1360471899]: "Reflector ListAndWatch" name:object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" (11-Dec-2025 15:54:01.726) (total time: 14155ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1360471899]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-samples-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109762": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35588->38.102.83.147:6443: read: connection reset by peer 14155ms (15:54:15.882) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1360471899]: [14.155792987s] [14.155792987s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:15.882817 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-samples-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-samples-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109762\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35588->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:15.890245 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="7fb00b03-fe6e-4c66-bd36-adf9443871a8" containerName="prometheus" probeResult="failure" output="Get \"http://10.217.1.135:9090/-/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:15.902579 5050 reflector.go:561] object-"openshift-authentication"/"v4-0-config-system-router-certs": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/secrets?fieldSelector=metadata.name%3Dv4-0-config-system-router-certs&resourceVersion=109846": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35798->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:15.902663 5050 trace.go:236] Trace[398371947]: "Reflector ListAndWatch" name:object-"openshift-authentication"/"v4-0-config-system-router-certs" (11-Dec-2025 15:54:01.767) (total time: 14135ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[398371947]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/secrets?fieldSelector=metadata.name%3Dv4-0-config-system-router-certs&resourceVersion=109846": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35798->38.102.83.147:6443: read: connection reset by peer 14135ms (15:54:15.902) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[398371947]: [14.135602306s] [14.135602306s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 
15:54:15.902683 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"v4-0-config-system-router-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/secrets?fieldSelector=metadata.name%3Dv4-0-config-system-router-certs&resourceVersion=109846\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35798->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:15.923232 5050 reflector.go:561] object-"openshift-machine-config-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109602": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35980->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:15.923332 5050 trace.go:236] Trace[1006857302]: "Reflector ListAndWatch" name:object-"openshift-machine-config-operator"/"kube-root-ca.crt" (11-Dec-2025 15:54:01.805) (total time: 14118ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1006857302]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109602": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35980->38.102.83.147:6443: read: connection reset by peer 14118ms (15:54:15.923) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1006857302]: [14.11820111s] [14.11820111s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:15.923351 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109602\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35980->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:15.942765 5050 request.go:700] Waited for 4.05307387s, retries: 1, retry-after: 1s - retry-reason: due to retryable error, error: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmcc-proxy-tls&resourceVersion=109846": read tcp 38.102.83.147:34238->38.102.83.147:6443: read: connection reset by peer - request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmcc-proxy-tls&resourceVersion=109846 Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:15.943333 5050 reflector.go:561] object-"openshift-machine-config-operator"/"mcc-proxy-tls": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmcc-proxy-tls&resourceVersion=109846": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 
38.102.83.147:34238->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:15.943388 5050 trace.go:236] Trace[1135102218]: "Reflector ListAndWatch" name:object-"openshift-machine-config-operator"/"mcc-proxy-tls" (11-Dec-2025 15:54:01.412) (total time: 14530ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1135102218]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmcc-proxy-tls&resourceVersion=109846": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34238->38.102.83.147:6443: read: connection reset by peer 14530ms (15:54:15.943) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1135102218]: [14.530413343s] [14.530413343s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:15.943409 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"mcc-proxy-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmcc-proxy-tls&resourceVersion=109846\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34238->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:15.963289 5050 reflector.go:561] object-"openstack"/"octavia-api-scripts": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Doctavia-api-scripts&resourceVersion=109803": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34280->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:15.963391 5050 trace.go:236] Trace[1705664842]: "Reflector ListAndWatch" name:object-"openstack"/"octavia-api-scripts" (11-Dec-2025 15:54:01.428) (total time: 14534ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1705664842]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Doctavia-api-scripts&resourceVersion=109803": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34280->38.102.83.147:6443: read: connection reset by peer 14534ms (15:54:15.963) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1705664842]: [14.53440506s] [14.53440506s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:15.963419 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"octavia-api-scripts\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Doctavia-api-scripts&resourceVersion=109803\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34280->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:15.983321 5050 reflector.go:561] object-"metallb-system"/"frr-k8s-webhook-server-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/secrets?fieldSelector=metadata.name%3Dfrr-k8s-webhook-server-cert&resourceVersion=109743": 
dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34308->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:15.983407 5050 trace.go:236] Trace[634879528]: "Reflector ListAndWatch" name:object-"metallb-system"/"frr-k8s-webhook-server-cert" (11-Dec-2025 15:54:01.439) (total time: 14543ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[634879528]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/secrets?fieldSelector=metadata.name%3Dfrr-k8s-webhook-server-cert&resourceVersion=109743": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34308->38.102.83.147:6443: read: connection reset by peer 14543ms (15:54:15.983) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[634879528]: [14.543979377s] [14.543979377s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:15.983434 5050 reflector.go:158] "Unhandled Error" err="object-\"metallb-system\"/\"frr-k8s-webhook-server-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/secrets?fieldSelector=metadata.name%3Dfrr-k8s-webhook-server-cert&resourceVersion=109743\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34308->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:16.003204 5050 reflector.go:561] object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager-operator/secrets?fieldSelector=metadata.name%3Dopenshift-controller-manager-operator-dockercfg-vw8fw&resourceVersion=109743": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34368->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:16.003294 5050 trace.go:236] Trace[585508720]: "Reflector ListAndWatch" name:object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" (11-Dec-2025 15:54:01.456) (total time: 14546ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[585508720]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager-operator/secrets?fieldSelector=metadata.name%3Dopenshift-controller-manager-operator-dockercfg-vw8fw&resourceVersion=109743": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34368->38.102.83.147:6443: read: connection reset by peer 14546ms (15:54:16.003) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[585508720]: [14.54668352s] [14.54668352s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:16.003319 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-dockercfg-vw8fw\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager-operator/secrets?fieldSelector=metadata.name%3Dopenshift-controller-manager-operator-dockercfg-vw8fw&resourceVersion=109743\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous 
attempt: read tcp 38.102.83.147:34368->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:16.022632 5050 reflector.go:561] object-"openstack"/"rabbitmq-cell1-default-user": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Drabbitmq-cell1-default-user&resourceVersion=109846": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34420->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:16.022702 5050 trace.go:236] Trace[108367848]: "Reflector ListAndWatch" name:object-"openstack"/"rabbitmq-cell1-default-user" (11-Dec-2025 15:54:01.473) (total time: 14549ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[108367848]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Drabbitmq-cell1-default-user&resourceVersion=109846": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34420->38.102.83.147:6443: read: connection reset by peer 14549ms (15:54:16.022) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[108367848]: [14.549101384s] [14.549101384s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:16.022729 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"rabbitmq-cell1-default-user\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Drabbitmq-cell1-default-user&resourceVersion=109846\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34420->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:16.043151 5050 reflector.go:561] object-"openshift-cluster-machine-approver"/"machine-approver-config": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-machine-approver/configmaps?fieldSelector=metadata.name%3Dmachine-approver-config&resourceVersion=109067": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34446->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:16.043237 5050 trace.go:236] Trace[134443705]: "Reflector ListAndWatch" name:object-"openshift-cluster-machine-approver"/"machine-approver-config" (11-Dec-2025 15:54:01.484) (total time: 14558ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[134443705]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-machine-approver/configmaps?fieldSelector=metadata.name%3Dmachine-approver-config&resourceVersion=109067": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34446->38.102.83.147:6443: read: connection reset by peer 14558ms (15:54:16.043) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[134443705]: [14.55827692s] [14.55827692s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:16.043258 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-machine-approver\"/\"machine-approver-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-machine-approver/configmaps?fieldSelector=metadata.name%3Dmachine-approver-config&resourceVersion=109067\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34446->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:16.063348 5050 reflector.go:561] object-"openshift-etcd-operator"/"etcd-operator-serving-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-etcd-operator/secrets?fieldSelector=metadata.name%3Detcd-operator-serving-cert&resourceVersion=109137": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34564->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:16.063412 5050 trace.go:236] Trace[274282025]: "Reflector ListAndWatch" name:object-"openshift-etcd-operator"/"etcd-operator-serving-cert" (11-Dec-2025 15:54:01.500) (total time: 14562ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[274282025]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-etcd-operator/secrets?fieldSelector=metadata.name%3Detcd-operator-serving-cert&resourceVersion=109137": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34564->38.102.83.147:6443: read: connection reset by peer 14562ms (15:54:16.063) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[274282025]: [14.562517544s] [14.562517544s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:16.063432 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-etcd-operator\"/\"etcd-operator-serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-etcd-operator/secrets?fieldSelector=metadata.name%3Detcd-operator-serving-cert&resourceVersion=109137\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34564->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:16.083206 5050 reflector.go:561] object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-machine-approver/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109888": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34580->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:16.083281 5050 trace.go:236] Trace[1183528843]: "Reflector ListAndWatch" name:object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" (11-Dec-2025 15:54:01.507) (total time: 14576ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1183528843]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-machine-approver/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109888": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34580->38.102.83.147:6443: read: connection reset by peer 14576ms (15:54:16.083) Dec 11 15:54:41 crc 
kubenswrapper[5050]: Trace[1183528843]: [14.57622145s] [14.57622145s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:16.083296 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-machine-approver\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-machine-approver/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109888\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34580->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:16.103153 5050 reflector.go:561] object-"openshift-network-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109602": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34720->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:16.103212 5050 trace.go:236] Trace[591466694]: "Reflector ListAndWatch" name:object-"openshift-network-operator"/"openshift-service-ca.crt" (11-Dec-2025 15:54:01.533) (total time: 14569ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[591466694]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109602": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34720->38.102.83.147:6443: read: connection reset by peer 14569ms (15:54:16.103) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[591466694]: [14.569540682s] [14.569540682s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:16.103232 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109602\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34720->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:16.122683 5050 reflector.go:561] object-"openshift-oauth-apiserver"/"encryption-config-1": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-oauth-apiserver/secrets?fieldSelector=metadata.name%3Dencryption-config-1&resourceVersion=109137": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34794->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:16.122750 5050 trace.go:236] Trace[642296659]: "Reflector ListAndWatch" name:object-"openshift-oauth-apiserver"/"encryption-config-1" (11-Dec-2025 15:54:01.550) (total time: 14572ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[642296659]: ---"Objects listed" error:Get 
"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-oauth-apiserver/secrets?fieldSelector=metadata.name%3Dencryption-config-1&resourceVersion=109137": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34794->38.102.83.147:6443: read: connection reset by peer 14572ms (15:54:16.122) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[642296659]: [14.572233133s] [14.572233133s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:16.122773 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-oauth-apiserver\"/\"encryption-config-1\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-oauth-apiserver/secrets?fieldSelector=metadata.name%3Dencryption-config-1&resourceVersion=109137\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34794->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:16.142906 5050 reflector.go:561] object-"openshift-service-ca"/"signing-cabundle": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca/configmaps?fieldSelector=metadata.name%3Dsigning-cabundle&resourceVersion=109642": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34836->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:16.142973 5050 trace.go:236] Trace[1907587547]: "Reflector ListAndWatch" name:object-"openshift-service-ca"/"signing-cabundle" (11-Dec-2025 15:54:01.559) (total time: 14583ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1907587547]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca/configmaps?fieldSelector=metadata.name%3Dsigning-cabundle&resourceVersion=109642": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34836->38.102.83.147:6443: read: connection reset by peer 14583ms (15:54:16.142) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1907587547]: [14.583609148s] [14.583609148s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:16.142992 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-service-ca\"/\"signing-cabundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca/configmaps?fieldSelector=metadata.name%3Dsigning-cabundle&resourceVersion=109642\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34836->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:16.163343 5050 reflector.go:561] object-"metallb-system"/"metallb-excludel2": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/configmaps?fieldSelector=metadata.name%3Dmetallb-excludel2&resourceVersion=109762": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35014->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:16.163414 5050 trace.go:236] Trace[207440526]: "Reflector ListAndWatch" name:object-"metallb-system"/"metallb-excludel2" 
(11-Dec-2025 15:54:01.596) (total time: 14566ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[207440526]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/configmaps?fieldSelector=metadata.name%3Dmetallb-excludel2&resourceVersion=109762": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35014->38.102.83.147:6443: read: connection reset by peer 14566ms (15:54:16.163) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[207440526]: [14.566625214s] [14.566625214s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:16.163432 5050 reflector.go:158] "Unhandled Error" err="object-\"metallb-system\"/\"metallb-excludel2\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/configmaps?fieldSelector=metadata.name%3Dmetallb-excludel2&resourceVersion=109762\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35014->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:16.183313 5050 reflector.go:561] object-"openshift-ovn-kubernetes"/"ovnkube-config": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ovn-kubernetes/configmaps?fieldSelector=metadata.name%3Dovnkube-config&resourceVersion=109762": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35922->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:16.183403 5050 trace.go:236] Trace[656144582]: "Reflector ListAndWatch" name:object-"openshift-ovn-kubernetes"/"ovnkube-config" (11-Dec-2025 15:54:01.795) (total time: 14388ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[656144582]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ovn-kubernetes/configmaps?fieldSelector=metadata.name%3Dovnkube-config&resourceVersion=109762": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35922->38.102.83.147:6443: read: connection reset by peer 14388ms (15:54:16.183) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[656144582]: [14.388283505s] [14.388283505s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:16.183421 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"ovnkube-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ovn-kubernetes/configmaps?fieldSelector=metadata.name%3Dovnkube-config&resourceVersion=109762\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35922->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:16.202602 5050 reflector.go:561] object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/secrets?fieldSelector=metadata.name%3Doauth-openshift-dockercfg-znhcc&resourceVersion=109803": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:36022->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc 
kubenswrapper[5050]: I1211 15:54:16.202695 5050 trace.go:236] Trace[946956234]: "Reflector ListAndWatch" name:object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" (11-Dec-2025 15:54:01.809) (total time: 14393ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[946956234]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/secrets?fieldSelector=metadata.name%3Doauth-openshift-dockercfg-znhcc&resourceVersion=109803": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:36022->38.102.83.147:6443: read: connection reset by peer 14393ms (15:54:16.202) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[946956234]: [14.393123415s] [14.393123415s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:16.202715 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"oauth-openshift-dockercfg-znhcc\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/secrets?fieldSelector=metadata.name%3Doauth-openshift-dockercfg-znhcc&resourceVersion=109803\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:36022->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:16.222826 5050 reflector.go:561] object-"openstack"/"manila-share-share1-config-data": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dmanila-share-share1-config-data&resourceVersion=109803": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34352->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:16.222903 5050 trace.go:236] Trace[179570605]: "Reflector ListAndWatch" name:object-"openstack"/"manila-share-share1-config-data" (11-Dec-2025 15:54:01.453) (total time: 14769ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[179570605]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dmanila-share-share1-config-data&resourceVersion=109803": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34352->38.102.83.147:6443: read: connection reset by peer 14769ms (15:54:16.222) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[179570605]: [14.769528009s] [14.769528009s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:16.222940 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"manila-share-share1-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dmanila-share-share1-config-data&resourceVersion=109803\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34352->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:16.239200 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/horizon-5fb79d99b5-m4xgd" podUID="2086bc41-00a2-4c97-a491-08511f3ed6e5" containerName="horizon" probeResult="failure" output="Get 
\"http://10.217.1.117:8080/dashboard/auth/login/?next=/dashboard/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:16.239278 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-5fb79d99b5-m4xgd" podUID="2086bc41-00a2-4c97-a491-08511f3ed6e5" containerName="horizon" probeResult="failure" output="Get \"http://10.217.1.117:8080/dashboard/auth/login/?next=/dashboard/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:16.243213 5050 reflector.go:561] object-"metallb-system"/"speaker-dockercfg-p2rzt": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/secrets?fieldSelector=metadata.name%3Dspeaker-dockercfg-p2rzt&resourceVersion=109887": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34818->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:16.243267 5050 trace.go:236] Trace[378864284]: "Reflector ListAndWatch" name:object-"metallb-system"/"speaker-dockercfg-p2rzt" (11-Dec-2025 15:54:01.559) (total time: 14683ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[378864284]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/secrets?fieldSelector=metadata.name%3Dspeaker-dockercfg-p2rzt&resourceVersion=109887": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34818->38.102.83.147:6443: read: connection reset by peer 14683ms (15:54:16.243) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[378864284]: [14.683995568s] [14.683995568s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:16.243285 5050 reflector.go:158] "Unhandled Error" err="object-\"metallb-system\"/\"speaker-dockercfg-p2rzt\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/secrets?fieldSelector=metadata.name%3Dspeaker-dockercfg-p2rzt&resourceVersion=109887\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34818->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:16.262570 5050 reflector.go:561] object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/secrets?fieldSelector=metadata.name%3Dcluster-image-registry-operator-dockercfg-m4qtx&resourceVersion=109397": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34890->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:16.262632 5050 trace.go:236] Trace[172589854]: "Reflector ListAndWatch" name:object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" (11-Dec-2025 15:54:01.569) (total time: 14693ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[172589854]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/secrets?fieldSelector=metadata.name%3Dcluster-image-registry-operator-dockercfg-m4qtx&resourceVersion=109397": dial tcp 38.102.83.147:6443: connect: connection refused - error 
from a previous attempt: read tcp 38.102.83.147:34890->38.102.83.147:6443: read: connection reset by peer 14693ms (15:54:16.262) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[172589854]: [14.693198545s] [14.693198545s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:16.262654 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-image-registry\"/\"cluster-image-registry-operator-dockercfg-m4qtx\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/secrets?fieldSelector=metadata.name%3Dcluster-image-registry-operator-dockercfg-m4qtx&resourceVersion=109397\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34890->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:16.282900 5050 reflector.go:561] object-"openshift-service-ca-operator"/"service-ca-operator-config": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca-operator/configmaps?fieldSelector=metadata.name%3Dservice-ca-operator-config&resourceVersion=109807": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34944->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:16.283002 5050 trace.go:236] Trace[27739532]: "Reflector ListAndWatch" name:object-"openshift-service-ca-operator"/"service-ca-operator-config" (11-Dec-2025 15:54:01.584) (total time: 14698ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[27739532]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca-operator/configmaps?fieldSelector=metadata.name%3Dservice-ca-operator-config&resourceVersion=109807": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34944->38.102.83.147:6443: read: connection reset by peer 14698ms (15:54:16.282) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[27739532]: [14.698525657s] [14.698525657s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:16.283033 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-service-ca-operator\"/\"service-ca-operator-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca-operator/configmaps?fieldSelector=metadata.name%3Dservice-ca-operator-config&resourceVersion=109807\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34944->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:16.303907 5050 reflector.go:561] object-"openshift-marketplace"/"marketplace-trusted-ca": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/configmaps?fieldSelector=metadata.name%3Dmarketplace-trusted-ca&resourceVersion=109117": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35048->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:16.303970 5050 trace.go:236] Trace[2057991123]: "Reflector ListAndWatch" name:object-"openshift-marketplace"/"marketplace-trusted-ca" (11-Dec-2025 15:54:01.605) (total 
time: 14698ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[2057991123]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/configmaps?fieldSelector=metadata.name%3Dmarketplace-trusted-ca&resourceVersion=109117": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35048->38.102.83.147:6443: read: connection reset by peer 14698ms (15:54:16.303) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[2057991123]: [14.698536947s] [14.698536947s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:16.303989 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-marketplace\"/\"marketplace-trusted-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/configmaps?fieldSelector=metadata.name%3Dmarketplace-trusted-ca&resourceVersion=109117\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35048->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:16.323141 5050 reflector.go:561] object-"openshift-authentication"/"v4-0-config-system-serving-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/secrets?fieldSelector=metadata.name%3Dv4-0-config-system-serving-cert&resourceVersion=109803": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35152->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:16.323202 5050 trace.go:236] Trace[2126728365]: "Reflector ListAndWatch" name:object-"openshift-authentication"/"v4-0-config-system-serving-cert" (11-Dec-2025 15:54:01.634) (total time: 14688ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[2126728365]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/secrets?fieldSelector=metadata.name%3Dv4-0-config-system-serving-cert&resourceVersion=109803": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35152->38.102.83.147:6443: read: connection reset by peer 14688ms (15:54:16.323) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[2126728365]: [14.68855146s] [14.68855146s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:16.323220 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"v4-0-config-system-serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/secrets?fieldSelector=metadata.name%3Dv4-0-config-system-serving-cert&resourceVersion=109803\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35152->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:16.343297 5050 reflector.go:561] pkg/kubelet/config/apiserver.go:66: failed to list *v1.Pod: Get "https://api-int.crc.testing:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dcrc&resourceVersion=109930": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35166->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc 
kubenswrapper[5050]: I1211 15:54:16.343373 5050 trace.go:236] Trace[1250582021]: "Reflector ListAndWatch" name:pkg/kubelet/config/apiserver.go:66 (11-Dec-2025 15:54:01.634) (total time: 14708ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1250582021]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dcrc&resourceVersion=109930": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35166->38.102.83.147:6443: read: connection reset by peer 14708ms (15:54:16.343) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1250582021]: [14.70869284s] [14.70869284s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:16.343397 5050 reflector.go:158] "Unhandled Error" err="pkg/kubelet/config/apiserver.go:66: Failed to watch *v1.Pod: failed to list *v1.Pod: Get \"https://api-int.crc.testing:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dcrc&resourceVersion=109930\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35166->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:16.363231 5050 reflector.go:561] object-"openshift-machine-api"/"kube-rbac-proxy": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-api/configmaps?fieldSelector=metadata.name%3Dkube-rbac-proxy&resourceVersion=109685": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35246->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:16.363358 5050 trace.go:236] Trace[487167224]: "Reflector ListAndWatch" name:object-"openshift-machine-api"/"kube-rbac-proxy" (11-Dec-2025 15:54:01.652) (total time: 14710ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[487167224]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-api/configmaps?fieldSelector=metadata.name%3Dkube-rbac-proxy&resourceVersion=109685": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35246->38.102.83.147:6443: read: connection reset by peer 14710ms (15:54:16.363) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[487167224]: [14.710483748s] [14.710483748s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:16.363378 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"kube-rbac-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-api/configmaps?fieldSelector=metadata.name%3Dkube-rbac-proxy&resourceVersion=109685\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35246->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:16.384195 5050 reflector.go:561] object-"openstack"/"telemetry-autoscaling-dockercfg-7ght8": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dtelemetry-autoscaling-dockercfg-7ght8&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35364->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc 
kubenswrapper[5050]: I1211 15:54:16.384249 5050 trace.go:236] Trace[759601031]: "Reflector ListAndWatch" name:object-"openstack"/"telemetry-autoscaling-dockercfg-7ght8" (11-Dec-2025 15:54:01.675) (total time: 14708ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[759601031]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dtelemetry-autoscaling-dockercfg-7ght8&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35364->38.102.83.147:6443: read: connection reset by peer 14708ms (15:54:16.384) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[759601031]: [14.708624388s] [14.708624388s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:16.384267 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"telemetry-autoscaling-dockercfg-7ght8\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dtelemetry-autoscaling-dockercfg-7ght8&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35364->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:16.403866 5050 reflector.go:561] object-"openshift-console-operator"/"trusted-ca": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/configmaps?fieldSelector=metadata.name%3Dtrusted-ca&resourceVersion=109602": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35546->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:16.404185 5050 trace.go:236] Trace[1607673018]: "Reflector ListAndWatch" name:object-"openshift-console-operator"/"trusted-ca" (11-Dec-2025 15:54:01.719) (total time: 14685ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1607673018]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/configmaps?fieldSelector=metadata.name%3Dtrusted-ca&resourceVersion=109602": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35546->38.102.83.147:6443: read: connection reset by peer 14684ms (15:54:16.403) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1607673018]: [14.685023146s] [14.685023146s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:16.404224 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-console-operator\"/\"trusted-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/configmaps?fieldSelector=metadata.name%3Dtrusted-ca&resourceVersion=109602\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35546->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:16.422322 5050 reflector.go:561] object-"openstack"/"alertmanager-metric-storage-web-config": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dalertmanager-metric-storage-web-config&resourceVersion=109846": dial tcp 
38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35608->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:16.422390 5050 trace.go:236] Trace[100035428]: "Reflector ListAndWatch" name:object-"openstack"/"alertmanager-metric-storage-web-config" (11-Dec-2025 15:54:01.743) (total time: 14678ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[100035428]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dalertmanager-metric-storage-web-config&resourceVersion=109846": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35608->38.102.83.147:6443: read: connection reset by peer 14678ms (15:54:16.422) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[100035428]: [14.678536592s] [14.678536592s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:16.422402 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"alertmanager-metric-storage-web-config\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dalertmanager-metric-storage-web-config&resourceVersion=109846\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35608->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:16.442318 5050 reflector.go:561] object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-storage-version-migrator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109762": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35632->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:16.442371 5050 trace.go:236] Trace[2130931943]: "Reflector ListAndWatch" name:object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" (11-Dec-2025 15:54:01.743) (total time: 14698ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[2130931943]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-storage-version-migrator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109762": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35632->38.102.83.147:6443: read: connection reset by peer 14698ms (15:54:16.442) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[2130931943]: [14.698409094s] [14.698409094s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:16.442382 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-kube-storage-version-migrator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-storage-version-migrator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109762\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35632->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 
15:54:41 crc kubenswrapper[5050]: I1211 15:54:16.445237 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-controller-manager-5d666c4679-krnwf" podUID="a6130f1a-c95b-445f-8235-e57fdcb270fe" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.48:8080/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:16.462847 5050 reflector.go:561] object-"openshift-oauth-apiserver"/"etcd-client": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-oauth-apiserver/secrets?fieldSelector=metadata.name%3Detcd-client&resourceVersion=109671": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35698->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:16.462931 5050 trace.go:236] Trace[1545629394]: "Reflector ListAndWatch" name:object-"openshift-oauth-apiserver"/"etcd-client" (11-Dec-2025 15:54:01.750) (total time: 14712ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1545629394]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-oauth-apiserver/secrets?fieldSelector=metadata.name%3Detcd-client&resourceVersion=109671": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35698->38.102.83.147:6443: read: connection reset by peer 14712ms (15:54:16.462) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1545629394]: [14.712743648s] [14.712743648s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:16.462951 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-oauth-apiserver\"/\"etcd-client\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-oauth-apiserver/secrets?fieldSelector=metadata.name%3Detcd-client&resourceVersion=109671\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35698->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:16.482963 5050 reflector.go:561] object-"openshift-nmstate"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-nmstate/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109851": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35786->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:16.483065 5050 trace.go:236] Trace[1832556253]: "Reflector ListAndWatch" name:object-"openshift-nmstate"/"openshift-service-ca.crt" (11-Dec-2025 15:54:01.766) (total time: 14716ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1832556253]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-nmstate/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109851": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35786->38.102.83.147:6443: read: connection reset by peer 14715ms (15:54:16.482) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1832556253]: [14.716053976s] [14.716053976s] END Dec 11 15:54:41 crc kubenswrapper[5050]: 
E1211 15:54:16.483084 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-nmstate\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-nmstate/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109851\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35786->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:16.502920 5050 reflector.go:561] object-"cert-manager"/"cert-manager-dockercfg-8z6ch": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/cert-manager/secrets?fieldSelector=metadata.name%3Dcert-manager-dockercfg-8z6ch&resourceVersion=109623": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35862->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:16.503003 5050 trace.go:236] Trace[2141879990]: "Reflector ListAndWatch" name:object-"cert-manager"/"cert-manager-dockercfg-8z6ch" (11-Dec-2025 15:54:01.784) (total time: 14718ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[2141879990]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/cert-manager/secrets?fieldSelector=metadata.name%3Dcert-manager-dockercfg-8z6ch&resourceVersion=109623": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35862->38.102.83.147:6443: read: connection reset by peer 14718ms (15:54:16.502) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[2141879990]: [14.718268706s] [14.718268706s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:16.503063 5050 reflector.go:158] "Unhandled Error" err="object-\"cert-manager\"/\"cert-manager-dockercfg-8z6ch\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/cert-manager/secrets?fieldSelector=metadata.name%3Dcert-manager-dockercfg-8z6ch&resourceVersion=109623\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35862->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:16.522898 5050 reflector.go:561] object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-storage-version-migrator-operator/secrets?fieldSelector=metadata.name%3Dkube-storage-version-migrator-operator-dockercfg-2bh8d&resourceVersion=109225": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35900->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:16.522957 5050 trace.go:236] Trace[1787903684]: "Reflector ListAndWatch" name:object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" (11-Dec-2025 15:54:01.790) (total time: 14732ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1787903684]: ---"Objects listed" error:Get 
"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-storage-version-migrator-operator/secrets?fieldSelector=metadata.name%3Dkube-storage-version-migrator-operator-dockercfg-2bh8d&resourceVersion=109225": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35900->38.102.83.147:6443: read: connection reset by peer 14732ms (15:54:16.522) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1787903684]: [14.732154098s] [14.732154098s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:16.522972 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-dockercfg-2bh8d\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-storage-version-migrator-operator/secrets?fieldSelector=metadata.name%3Dkube-storage-version-migrator-operator-dockercfg-2bh8d&resourceVersion=109225\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35900->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:16.542144 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/controller-5bddd4b946-644bs" podUID="5f0d9e74-baf0-4759-bb9f-3ff0a25e95d9" containerName="controller" probeResult="failure" output="Get \"http://10.217.0.51:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:16.542162 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/controller-5bddd4b946-644bs" podUID="5f0d9e74-baf0-4759-bb9f-3ff0a25e95d9" containerName="controller" probeResult="failure" output="Get \"http://10.217.0.51:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:16.542598 5050 reflector.go:561] object-"openstack"/"nova-scheduler-config-data": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dnova-scheduler-config-data&resourceVersion=109887": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35950->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:16.542678 5050 trace.go:236] Trace[1996307970]: "Reflector ListAndWatch" name:object-"openstack"/"nova-scheduler-config-data" (11-Dec-2025 15:54:01.799) (total time: 14743ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1996307970]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dnova-scheduler-config-data&resourceVersion=109887": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35950->38.102.83.147:6443: read: connection reset by peer 14743ms (15:54:16.542) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1996307970]: [14.743566923s] [14.743566923s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:16.542697 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"nova-scheduler-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dnova-scheduler-config-data&resourceVersion=109887\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35950->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:16.562616 5050 reflector.go:561] object-"openshift-console"/"default-dockercfg-chnjx": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/secrets?fieldSelector=metadata.name%3Ddefault-dockercfg-chnjx&resourceVersion=109743": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:36072->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:16.562689 5050 trace.go:236] Trace[1181121588]: "Reflector ListAndWatch" name:object-"openshift-console"/"default-dockercfg-chnjx" (11-Dec-2025 15:54:01.815) (total time: 14747ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1181121588]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/secrets?fieldSelector=metadata.name%3Ddefault-dockercfg-chnjx&resourceVersion=109743": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:36072->38.102.83.147:6443: read: connection reset by peer 14747ms (15:54:16.562) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1181121588]: [14.747294523s] [14.747294523s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:16.562703 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-console\"/\"default-dockercfg-chnjx\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/secrets?fieldSelector=metadata.name%3Ddefault-dockercfg-chnjx&resourceVersion=109743\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:36072->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:16.583082 5050 reflector.go:561] object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/secrets?fieldSelector=metadata.name%3Dkube-controller-manager-operator-serving-cert&resourceVersion=109846": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35996->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:16.583131 5050 trace.go:236] Trace[393627118]: "Reflector ListAndWatch" name:object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" (11-Dec-2025 15:54:01.807) (total time: 14775ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[393627118]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/secrets?fieldSelector=metadata.name%3Dkube-controller-manager-operator-serving-cert&resourceVersion=109846": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35996->38.102.83.147:6443: read: connection reset by peer 14775ms 
(15:54:16.583) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[393627118]: [14.775812367s] [14.775812367s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:16.583143 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/secrets?fieldSelector=metadata.name%3Dkube-controller-manager-operator-serving-cert&resourceVersion=109846\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35996->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:16.602624 5050 reflector.go:561] object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmachine-config-operator-dockercfg-98p87&resourceVersion=109846": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35296->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:16.602667 5050 trace.go:236] Trace[1878093767]: "Reflector ListAndWatch" name:object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" (11-Dec-2025 15:54:01.667) (total time: 14935ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1878093767]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmachine-config-operator-dockercfg-98p87&resourceVersion=109846": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35296->38.102.83.147:6443: read: connection reset by peer 14935ms (15:54:16.602) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1878093767]: [14.935407014s] [14.935407014s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:16.602679 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"machine-config-operator-dockercfg-98p87\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmachine-config-operator-dockercfg-98p87&resourceVersion=109846\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35296->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:16.622879 5050 reflector.go:561] object-"openstack"/"glance-default-external-config-data": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dglance-default-external-config-data&resourceVersion=109586": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35316->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:16.622958 5050 trace.go:236] Trace[657577680]: "Reflector ListAndWatch" name:object-"openstack"/"glance-default-external-config-data" (11-Dec-2025 15:54:01.672) 
(total time: 14950ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[657577680]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dglance-default-external-config-data&resourceVersion=109586": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35316->38.102.83.147:6443: read: connection reset by peer 14950ms (15:54:16.622) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[657577680]: [14.950679822s] [14.950679822s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:16.622980 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"glance-default-external-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dglance-default-external-config-data&resourceVersion=109586\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35316->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:16.642820 5050 reflector.go:561] object-"openstack-operators"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109158": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35542->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:16.642892 5050 trace.go:236] Trace[1397284253]: "Reflector ListAndWatch" name:object-"openstack-operators"/"openshift-service-ca.crt" (11-Dec-2025 15:54:01.716) (total time: 14926ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1397284253]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109158": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35542->38.102.83.147:6443: read: connection reset by peer 14926ms (15:54:16.642) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1397284253]: [14.926285639s] [14.926285639s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:16.642922 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109158\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35542->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:16.663179 5050 reflector.go:561] object-"openshift-console"/"console-config": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/configmaps?fieldSelector=metadata.name%3Dconsole-config&resourceVersion=109602": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35874->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc 
kubenswrapper[5050]: I1211 15:54:16.663268 5050 trace.go:236] Trace[477263311]: "Reflector ListAndWatch" name:object-"openshift-console"/"console-config" (11-Dec-2025 15:54:01.789) (total time: 14874ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[477263311]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/configmaps?fieldSelector=metadata.name%3Dconsole-config&resourceVersion=109602": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35874->38.102.83.147:6443: read: connection reset by peer 14874ms (15:54:16.663) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[477263311]: [14.874088741s] [14.874088741s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:16.663286 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-console\"/\"console-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/configmaps?fieldSelector=metadata.name%3Dconsole-config&resourceVersion=109602\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35874->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:16.683123 5050 reflector.go:561] object-"openstack"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109685": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:36066->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:16.683202 5050 trace.go:236] Trace[1589905328]: "Reflector ListAndWatch" name:object-"openstack"/"kube-root-ca.crt" (11-Dec-2025 15:54:01.814) (total time: 14868ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1589905328]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109685": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:36066->38.102.83.147:6443: read: connection reset by peer 14868ms (15:54:16.683) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1589905328]: [14.868920562s] [14.868920562s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:16.683224 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109685\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:36066->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:16.695197 5050 patch_prober.go:28] interesting pod/dns-default-25p7l container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.217.0.27:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:16.695249 5050 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-dns/dns-default-25p7l" podUID="66dfe3f5-9e7a-4e1c-b03a-b6f01dab6ef4" containerName="dns" probeResult="failure" output="Get \"http://10.217.0.27:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:16.702716 5050 reflector.go:561] object-"openstack"/"placement-config-data": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dplacement-config-data&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35998->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:16.702780 5050 trace.go:236] Trace[2095790701]: "Reflector ListAndWatch" name:object-"openstack"/"placement-config-data" (11-Dec-2025 15:54:01.807) (total time: 14895ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[2095790701]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dplacement-config-data&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35998->38.102.83.147:6443: read: connection reset by peer 14895ms (15:54:16.702) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[2095790701]: [14.895444322s] [14.895444322s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:16.702797 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"placement-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dplacement-config-data&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35998->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:16.723229 5050 reflector.go:561] object-"openshift-controller-manager"/"client-ca": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/configmaps?fieldSelector=metadata.name%3Dclient-ca&resourceVersion=109685": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:36028->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:16.723284 5050 trace.go:236] Trace[1373311325]: "Reflector ListAndWatch" name:object-"openshift-controller-manager"/"client-ca" (11-Dec-2025 15:54:01.810) (total time: 14913ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1373311325]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/configmaps?fieldSelector=metadata.name%3Dclient-ca&resourceVersion=109685": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:36028->38.102.83.147:6443: read: connection reset by peer 14913ms (15:54:16.723) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1373311325]: [14.913070145s] [14.913070145s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:16.723297 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"client-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/configmaps?fieldSelector=metadata.name%3Dclient-ca&resourceVersion=109685\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:36028->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:16.742664 5050 reflector.go:561] object-"openstack"/"ovsdbserver-nb": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dovsdbserver-nb&resourceVersion=109851": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35144->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:16.742732 5050 trace.go:236] Trace[1247166438]: "Reflector ListAndWatch" name:object-"openstack"/"ovsdbserver-nb" (11-Dec-2025 15:54:01.630) (total time: 15112ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1247166438]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dovsdbserver-nb&resourceVersion=109851": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35144->38.102.83.147:6443: read: connection reset by peer 15112ms (15:54:16.742) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1247166438]: [15.112614331s] [15.112614331s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:16.742750 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"ovsdbserver-nb\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dovsdbserver-nb&resourceVersion=109851\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35144->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:16.763308 5050 reflector.go:561] object-"openstack"/"rabbitmq-server-dockercfg-zd6qh": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Drabbitmq-server-dockercfg-zd6qh&resourceVersion=109743": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35222->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:16.763362 5050 trace.go:236] Trace[201843827]: "Reflector ListAndWatch" name:object-"openstack"/"rabbitmq-server-dockercfg-zd6qh" (11-Dec-2025 15:54:01.649) (total time: 15114ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[201843827]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Drabbitmq-server-dockercfg-zd6qh&resourceVersion=109743": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35222->38.102.83.147:6443: read: connection reset by peer 15114ms (15:54:16.763) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[201843827]: [15.114181302s] [15.114181302s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:16.763380 5050 reflector.go:158] "Unhandled Error" 
err="object-\"openstack\"/\"rabbitmq-server-dockercfg-zd6qh\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Drabbitmq-server-dockercfg-zd6qh&resourceVersion=109743\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35222->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:16.783021 5050 reflector.go:561] object-"openstack"/"openstack-cell1-scripts": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dopenstack-cell1-scripts&resourceVersion=109762": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35276->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:16.783072 5050 trace.go:236] Trace[2005840293]: "Reflector ListAndWatch" name:object-"openstack"/"openstack-cell1-scripts" (11-Dec-2025 15:54:01.655) (total time: 15127ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[2005840293]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dopenstack-cell1-scripts&resourceVersion=109762": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35276->38.102.83.147:6443: read: connection reset by peer 15127ms (15:54:16.782) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[2005840293]: [15.127216091s] [15.127216091s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:16.783090 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"openstack-cell1-scripts\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dopenstack-cell1-scripts&resourceVersion=109762\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35276->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:16.803293 5050 reflector.go:561] object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-8nf9c": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dcinder-operator-controller-manager-dockercfg-8nf9c&resourceVersion=109887": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35338->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:16.803348 5050 trace.go:236] Trace[1139766935]: "Reflector ListAndWatch" name:object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-8nf9c" (11-Dec-2025 15:54:01.674) (total time: 15128ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1139766935]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dcinder-operator-controller-manager-dockercfg-8nf9c&resourceVersion=109887": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35338->38.102.83.147:6443: read: connection 
reset by peer 15128ms (15:54:16.803) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1139766935]: [15.128915568s] [15.128915568s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:16.803366 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"cinder-operator-controller-manager-dockercfg-8nf9c\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dcinder-operator-controller-manager-dockercfg-8nf9c&resourceVersion=109887\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35338->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:16.822583 5050 reflector.go:561] object-"openstack"/"openstack-config": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dopenstack-config&resourceVersion=109685": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35434->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:16.822626 5050 trace.go:236] Trace[368177459]: "Reflector ListAndWatch" name:object-"openstack"/"openstack-config" (11-Dec-2025 15:54:01.692) (total time: 15130ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[368177459]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dopenstack-config&resourceVersion=109685": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35434->38.102.83.147:6443: read: connection reset by peer 15129ms (15:54:16.822) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[368177459]: [15.130028627s] [15.130028627s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:16.822639 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"openstack-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dopenstack-config&resourceVersion=109685\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35434->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:16.842771 5050 reflector.go:561] object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/secrets?fieldSelector=metadata.name%3Dkube-apiserver-operator-serving-cert&resourceVersion=109137": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35508->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:16.842856 5050 trace.go:236] Trace[1310353037]: "Reflector ListAndWatch" name:object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" (11-Dec-2025 15:54:01.711) (total time: 15131ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1310353037]: ---"Objects listed" error:Get 
"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/secrets?fieldSelector=metadata.name%3Dkube-apiserver-operator-serving-cert&resourceVersion=109137": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35508->38.102.83.147:6443: read: connection reset by peer 15131ms (15:54:16.842) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1310353037]: [15.131759834s] [15.131759834s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:16.842883 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/secrets?fieldSelector=metadata.name%3Dkube-apiserver-operator-serving-cert&resourceVersion=109137\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35508->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:16.863134 5050 reflector.go:561] object-"openstack"/"manila-manila-dockercfg-d7578": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dmanila-manila-dockercfg-d7578&resourceVersion=109803": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35648->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:16.863211 5050 trace.go:236] Trace[120986438]: "Reflector ListAndWatch" name:object-"openstack"/"manila-manila-dockercfg-d7578" (11-Dec-2025 15:54:01.743) (total time: 15119ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[120986438]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dmanila-manila-dockercfg-d7578&resourceVersion=109803": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35648->38.102.83.147:6443: read: connection reset by peer 15119ms (15:54:16.863) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[120986438]: [15.119285589s] [15.119285589s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:16.863228 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"manila-manila-dockercfg-d7578\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dmanila-manila-dockercfg-d7578&resourceVersion=109803\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35648->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:16.883047 5050 reflector.go:561] object-"openstack"/"glance-scripts": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dglance-scripts&resourceVersion=109743": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35708->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:16.883103 5050 trace.go:236] Trace[1177344209]: "Reflector ListAndWatch" 
name:object-"openstack"/"glance-scripts" (11-Dec-2025 15:54:01.750) (total time: 15132ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1177344209]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dglance-scripts&resourceVersion=109743": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35708->38.102.83.147:6443: read: connection reset by peer 15132ms (15:54:16.883) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1177344209]: [15.132864123s] [15.132864123s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:16.883117 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"glance-scripts\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dglance-scripts&resourceVersion=109743\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35708->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:16.902415 5050 reflector.go:561] object-"cert-manager-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/cert-manager-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109807": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35740->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:16.902483 5050 trace.go:236] Trace[230937174]: "Reflector ListAndWatch" name:object-"cert-manager-operator"/"kube-root-ca.crt" (11-Dec-2025 15:54:01.754) (total time: 15148ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[230937174]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/cert-manager-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109807": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35740->38.102.83.147:6443: read: connection reset by peer 15148ms (15:54:16.902) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[230937174]: [15.148323547s] [15.148323547s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:16.902502 5050 reflector.go:158] "Unhandled Error" err="object-\"cert-manager-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/cert-manager-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109807\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35740->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:16.922900 5050 reflector.go:561] object-"openstack"/"cert-galera-openstack-svc": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-galera-openstack-svc&resourceVersion=109087": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35760->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 
15:54:16.922946 5050 trace.go:236] Trace[294537504]: "Reflector ListAndWatch" name:object-"openstack"/"cert-galera-openstack-svc" (11-Dec-2025 15:54:01.760) (total time: 15162ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[294537504]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-galera-openstack-svc&resourceVersion=109087": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35760->38.102.83.147:6443: read: connection reset by peer 15162ms (15:54:16.922) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[294537504]: [15.162403814s] [15.162403814s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:16.922957 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"cert-galera-openstack-svc\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-galera-openstack-svc&resourceVersion=109087\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35760->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:16.928316 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/speaker-w4tzc" podUID="54e98831-cd88-4dee-90db-e8fbb006e9c3" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": dial tcp [::1]:29150: connect: connection refused" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:16.942345 5050 reflector.go:561] object-"openshift-ingress"/"router-stats-default": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress/secrets?fieldSelector=metadata.name%3Drouter-stats-default&resourceVersion=109887": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35840->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:16.942418 5050 trace.go:236] Trace[1947211573]: "Reflector ListAndWatch" name:object-"openshift-ingress"/"router-stats-default" (11-Dec-2025 15:54:01.775) (total time: 15166ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1947211573]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress/secrets?fieldSelector=metadata.name%3Drouter-stats-default&resourceVersion=109887": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35840->38.102.83.147:6443: read: connection reset by peer 15166ms (15:54:16.942) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1947211573]: [15.166632277s] [15.166632277s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:16.942438 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-ingress\"/\"router-stats-default\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress/secrets?fieldSelector=metadata.name%3Drouter-stats-default&resourceVersion=109887\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35840->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:16.962411 5050 request.go:700] Waited for 
5.072585583s, retries: 1, retry-after: 1s - retry-reason: due to retryable error, error: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/secrets?fieldSelector=metadata.name%3Dmetrics-tls&resourceVersion=109512": read tcp 38.102.83.147:35886->38.102.83.147:6443: read: connection reset by peer - request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/secrets?fieldSelector=metadata.name%3Dmetrics-tls&resourceVersion=109512 Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:16.962939 5050 reflector.go:561] object-"openshift-ingress-operator"/"metrics-tls": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/secrets?fieldSelector=metadata.name%3Dmetrics-tls&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35886->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:16.963036 5050 trace.go:236] Trace[1616514383]: "Reflector ListAndWatch" name:object-"openshift-ingress-operator"/"metrics-tls" (11-Dec-2025 15:54:01.790) (total time: 15172ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1616514383]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/secrets?fieldSelector=metadata.name%3Dmetrics-tls&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35886->38.102.83.147:6443: read: connection reset by peer 15172ms (15:54:16.962) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1616514383]: [15.172254288s] [15.172254288s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:16.963055 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-ingress-operator\"/\"metrics-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/secrets?fieldSelector=metadata.name%3Dmetrics-tls&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35886->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:16.982878 5050 reflector.go:561] object-"openstack"/"barbican-worker-config-data": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dbarbican-worker-config-data&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35944->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:16.982927 5050 trace.go:236] Trace[1268848094]: "Reflector ListAndWatch" name:object-"openstack"/"barbican-worker-config-data" (11-Dec-2025 15:54:01.797) (total time: 15185ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1268848094]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dbarbican-worker-config-data&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35944->38.102.83.147:6443: read: connection reset by peer 15185ms (15:54:16.982) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1268848094]: 
[15.185778881s] [15.185778881s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:16.982941 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"barbican-worker-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dbarbican-worker-config-data&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35944->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:17.003155 5050 reflector.go:561] object-"openstack"/"metric-storage-prometheus-dockercfg-4drvb": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dmetric-storage-prometheus-dockercfg-4drvb&resourceVersion=109846": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35966->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:17.003214 5050 trace.go:236] Trace[1775948046]: "Reflector ListAndWatch" name:object-"openstack"/"metric-storage-prometheus-dockercfg-4drvb" (11-Dec-2025 15:54:01.805) (total time: 15198ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1775948046]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dmetric-storage-prometheus-dockercfg-4drvb&resourceVersion=109846": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35966->38.102.83.147:6443: read: connection reset by peer 15198ms (15:54:17.003) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1775948046]: [15.198099501s] [15.198099501s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:17.003229 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"metric-storage-prometheus-dockercfg-4drvb\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dmetric-storage-prometheus-dockercfg-4drvb&resourceVersion=109846\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35966->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:17.022779 5050 reflector.go:561] object-"openstack"/"octavia-housekeeping-config-data": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Doctavia-housekeeping-config-data&resourceVersion=109671": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33970->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:17.022846 5050 trace.go:236] Trace[1628743076]: "Reflector ListAndWatch" name:object-"openstack"/"octavia-housekeeping-config-data" (11-Dec-2025 15:54:01.382) (total time: 15640ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1628743076]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Doctavia-housekeeping-config-data&resourceVersion=109671": dial tcp 38.102.83.147:6443: connect: connection 
refused - error from a previous attempt: read tcp 38.102.83.147:33970->38.102.83.147:6443: read: connection reset by peer 15640ms (15:54:17.022) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1628743076]: [15.640494182s] [15.640494182s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:17.022865 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"octavia-housekeeping-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Doctavia-housekeeping-config-data&resourceVersion=109671\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33970->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:17.042729 5050 reflector.go:561] object-"cert-manager"/"cert-manager-cainjector-dockercfg-sxrxp": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/cert-manager/secrets?fieldSelector=metadata.name%3Dcert-manager-cainjector-dockercfg-sxrxp&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34030->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:17.042809 5050 trace.go:236] Trace[1208759519]: "Reflector ListAndWatch" name:object-"cert-manager"/"cert-manager-cainjector-dockercfg-sxrxp" (11-Dec-2025 15:54:01.390) (total time: 15652ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1208759519]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/cert-manager/secrets?fieldSelector=metadata.name%3Dcert-manager-cainjector-dockercfg-sxrxp&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34030->38.102.83.147:6443: read: connection reset by peer 15652ms (15:54:17.042) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1208759519]: [15.652438582s] [15.652438582s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:17.042834 5050 reflector.go:158] "Unhandled Error" err="object-\"cert-manager\"/\"cert-manager-cainjector-dockercfg-sxrxp\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/cert-manager/secrets?fieldSelector=metadata.name%3Dcert-manager-cainjector-dockercfg-sxrxp&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34030->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:17.063194 5050 reflector.go:561] object-"openshift-service-ca"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109807": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34040->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:17.063261 5050 trace.go:236] Trace[1103522978]: "Reflector ListAndWatch" name:object-"openshift-service-ca"/"openshift-service-ca.crt" (11-Dec-2025 15:54:01.390) (total time: 15672ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1103522978]: 
---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109807": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34040->38.102.83.147:6443: read: connection reset by peer 15672ms (15:54:17.063) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1103522978]: [15.67288876s] [15.67288876s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:17.063287 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-service-ca\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109807\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34040->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:17.082430 5050 reflector.go:561] object-"openshift-authentication"/"v4-0-config-user-template-error": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/secrets?fieldSelector=metadata.name%3Dv4-0-config-user-template-error&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34062->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:17.082472 5050 trace.go:236] Trace[365976472]: "Reflector ListAndWatch" name:object-"openshift-authentication"/"v4-0-config-user-template-error" (11-Dec-2025 15:54:01.392) (total time: 15689ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[365976472]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/secrets?fieldSelector=metadata.name%3Dv4-0-config-user-template-error&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34062->38.102.83.147:6443: read: connection reset by peer 15689ms (15:54:17.082) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[365976472]: [15.689896115s] [15.689896115s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:17.082490 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"v4-0-config-user-template-error\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/secrets?fieldSelector=metadata.name%3Dv4-0-config-user-template-error&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34062->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:17.102645 5050 reflector.go:561] object-"metallb-system"/"metallb-webhook-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/secrets?fieldSelector=metadata.name%3Dmetallb-webhook-cert&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34138->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 
15:54:17.102691 5050 trace.go:236] Trace[1551805569]: "Reflector ListAndWatch" name:object-"metallb-system"/"metallb-webhook-cert" (11-Dec-2025 15:54:01.400) (total time: 15702ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1551805569]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/secrets?fieldSelector=metadata.name%3Dmetallb-webhook-cert&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34138->38.102.83.147:6443: read: connection reset by peer 15702ms (15:54:17.102) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1551805569]: [15.70239043s] [15.70239043s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:17.102703 5050 reflector.go:158] "Unhandled Error" err="object-\"metallb-system\"/\"metallb-webhook-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/secrets?fieldSelector=metadata.name%3Dmetallb-webhook-cert&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34138->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:17.122344 5050 reflector.go:561] object-"openshift-multus"/"multus-ac-dockercfg-9lkdf": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-multus/secrets?fieldSelector=metadata.name%3Dmultus-ac-dockercfg-9lkdf&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34914->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:17.122394 5050 trace.go:236] Trace[557861111]: "Reflector ListAndWatch" name:object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" (11-Dec-2025 15:54:01.572) (total time: 15550ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[557861111]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-multus/secrets?fieldSelector=metadata.name%3Dmultus-ac-dockercfg-9lkdf&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34914->38.102.83.147:6443: read: connection reset by peer 15550ms (15:54:17.122) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[557861111]: [15.550336867s] [15.550336867s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:17.122410 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"multus-ac-dockercfg-9lkdf\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-multus/secrets?fieldSelector=metadata.name%3Dmultus-ac-dockercfg-9lkdf&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34914->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:17.142900 5050 reflector.go:561] object-"openstack"/"nova-metadata-config-data": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dnova-metadata-config-data&resourceVersion=109137": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 
38.102.83.147:35370->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:17.142971 5050 trace.go:236] Trace[1335347747]: "Reflector ListAndWatch" name:object-"openstack"/"nova-metadata-config-data" (11-Dec-2025 15:54:01.680) (total time: 15462ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1335347747]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dnova-metadata-config-data&resourceVersion=109137": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35370->38.102.83.147:6443: read: connection reset by peer 15461ms (15:54:17.142) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1335347747]: [15.462050132s] [15.462050132s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:17.142990 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"nova-metadata-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dnova-metadata-config-data&resourceVersion=109137\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35370->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:17.163442 5050 reflector.go:561] object-"hostpath-provisioner"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/hostpath-provisioner/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109559": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35514->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:17.163506 5050 trace.go:236] Trace[511451528]: "Reflector ListAndWatch" name:object-"hostpath-provisioner"/"openshift-service-ca.crt" (11-Dec-2025 15:54:01.713) (total time: 15450ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[511451528]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/hostpath-provisioner/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109559": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35514->38.102.83.147:6443: read: connection reset by peer 15450ms (15:54:17.163) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[511451528]: [15.450224745s] [15.450224745s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:17.163520 5050 reflector.go:158] "Unhandled Error" err="object-\"hostpath-provisioner\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/hostpath-provisioner/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109559\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35514->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:17.178175 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="hostpath-provisioner/csi-hostpathplugin-mbbdj" podUID="2b344bba-8e1d-415b-9b5f-e21d3144fe42" containerName="hostpath-provisioner" 
probeResult="failure" output="Get \"http://10.217.0.31:9898/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:17.182955 5050 reflector.go:561] object-"openstack"/"octavia-rsyslog-config-data": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Doctavia-rsyslog-config-data&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35884->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:17.183060 5050 trace.go:236] Trace[753298856]: "Reflector ListAndWatch" name:object-"openstack"/"octavia-rsyslog-config-data" (11-Dec-2025 15:54:01.790) (total time: 15392ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[753298856]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Doctavia-rsyslog-config-data&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35884->38.102.83.147:6443: read: connection reset by peer 15392ms (15:54:17.182) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[753298856]: [15.392346114s] [15.392346114s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:17.183081 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"octavia-rsyslog-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Doctavia-rsyslog-config-data&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35884->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:17.202507 5050 reflector.go:561] object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ovn-kubernetes/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109602": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35932->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:17.202579 5050 trace.go:236] Trace[483918128]: "Reflector ListAndWatch" name:object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" (11-Dec-2025 15:54:01.795) (total time: 15407ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[483918128]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ovn-kubernetes/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109602": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35932->38.102.83.147:6443: read: connection reset by peer 15407ms (15:54:17.202) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[483918128]: [15.407516391s] [15.407516391s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:17.202591 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ovn-kubernetes/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109602\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35932->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:17.223093 5050 reflector.go:561] object-"openstack"/"memcached-memcached-dockercfg-kl4q7": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dmemcached-memcached-dockercfg-kl4q7&resourceVersion=109743": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:36152->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:17.223199 5050 trace.go:236] Trace[1648706864]: "Reflector ListAndWatch" name:object-"openstack"/"memcached-memcached-dockercfg-kl4q7" (11-Dec-2025 15:54:01.822) (total time: 15400ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1648706864]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dmemcached-memcached-dockercfg-kl4q7&resourceVersion=109743": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:36152->38.102.83.147:6443: read: connection reset by peer 15400ms (15:54:17.223) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1648706864]: [15.400255946s] [15.400255946s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:17.223223 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"memcached-memcached-dockercfg-kl4q7\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dmemcached-memcached-dockercfg-kl4q7&resourceVersion=109743\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:36152->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:17.243169 5050 reflector.go:561] object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/secrets?fieldSelector=metadata.name%3Dolm-operator-serviceaccount-dockercfg-rq7zk&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35914->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:17.243240 5050 trace.go:236] Trace[69292036]: "Reflector ListAndWatch" name:object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" (11-Dec-2025 15:54:01.791) (total time: 15451ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[69292036]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/secrets?fieldSelector=metadata.name%3Dolm-operator-serviceaccount-dockercfg-rq7zk&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35914->38.102.83.147:6443: read: connection reset by peer 15451ms 
(15:54:17.243) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[69292036]: [15.451294733s] [15.451294733s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:17.243258 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serviceaccount-dockercfg-rq7zk\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/secrets?fieldSelector=metadata.name%3Dolm-operator-serviceaccount-dockercfg-rq7zk&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35914->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:17.263086 5050 reflector.go:561] object-"openstack"/"nova-cell0-conductor-config-data": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dnova-cell0-conductor-config-data&resourceVersion=109137": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:36156->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:17.263165 5050 trace.go:236] Trace[422411413]: "Reflector ListAndWatch" name:object-"openstack"/"nova-cell0-conductor-config-data" (11-Dec-2025 15:54:01.827) (total time: 15435ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[422411413]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dnova-cell0-conductor-config-data&resourceVersion=109137": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:36156->38.102.83.147:6443: read: connection reset by peer 15435ms (15:54:17.263) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[422411413]: [15.435614224s] [15.435614224s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:17.263188 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"nova-cell0-conductor-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dnova-cell0-conductor-config-data&resourceVersion=109137\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:36156->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:17.278988 5050 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-r54sd container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.23:5443/healthz\": dial tcp 10.217.0.23:5443: connect: connection refused" start-of-body= Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:17.279054 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r54sd" podUID="dbd5b107-5d08-43af-881c-11540f395267" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.23:5443/healthz\": dial tcp 10.217.0.23:5443: connect: connection refused" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:17.282710 5050 reflector.go:561] object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-25r77": failed to 
list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dinfra-operator-controller-manager-dockercfg-25r77&resourceVersion=109846": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35948->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:17.282805 5050 trace.go:236] Trace[567724077]: "Reflector ListAndWatch" name:object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-25r77" (11-Dec-2025 15:54:01.798) (total time: 15483ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[567724077]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dinfra-operator-controller-manager-dockercfg-25r77&resourceVersion=109846": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35948->38.102.83.147:6443: read: connection reset by peer 15483ms (15:54:17.282) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[567724077]: [15.48399859s] [15.48399859s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:17.282828 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"infra-operator-controller-manager-dockercfg-25r77\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dinfra-operator-controller-manager-dockercfg-25r77&resourceVersion=109846\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35948->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:17.302867 5050 reflector.go:561] object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/configmaps?fieldSelector=metadata.name%3Dkube-apiserver-operator-config&resourceVersion=109158": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:36132->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:17.302939 5050 trace.go:236] Trace[233289760]: "Reflector ListAndWatch" name:object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" (11-Dec-2025 15:54:01.820) (total time: 15482ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[233289760]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/configmaps?fieldSelector=metadata.name%3Dkube-apiserver-operator-config&resourceVersion=109158": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:36132->38.102.83.147:6443: read: connection reset by peer 15482ms (15:54:17.302) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[233289760]: [15.48212194s] [15.48212194s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:17.302956 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/configmaps?fieldSelector=metadata.name%3Dkube-apiserver-operator-config&resourceVersion=109158\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:36132->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:17.322462 5050 reflector.go:561] object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/secrets?fieldSelector=metadata.name%3Dkube-apiserver-operator-dockercfg-x57mr&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:36166->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:17.322559 5050 trace.go:236] Trace[557330250]: "Reflector ListAndWatch" name:object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" (11-Dec-2025 15:54:01.828) (total time: 15494ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[557330250]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/secrets?fieldSelector=metadata.name%3Dkube-apiserver-operator-dockercfg-x57mr&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:36166->38.102.83.147:6443: read: connection reset by peer 15494ms (15:54:17.322) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[557330250]: [15.49445169s] [15.49445169s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:17.322579 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-dockercfg-x57mr\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/secrets?fieldSelector=metadata.name%3Dkube-apiserver-operator-dockercfg-x57mr&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:36166->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:17.343413 5050 reflector.go:561] object-"openshift-image-registry"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109067": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:36144->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:17.343511 5050 trace.go:236] Trace[631691658]: "Reflector ListAndWatch" name:object-"openshift-image-registry"/"openshift-service-ca.crt" (11-Dec-2025 15:54:01.822) (total time: 15520ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[631691658]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109067": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 
38.102.83.147:36144->38.102.83.147:6443: read: connection reset by peer 15520ms (15:54:17.343) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[631691658]: [15.52061213s] [15.52061213s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:17.343530 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-image-registry\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109067\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:36144->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:17.363170 5050 reflector.go:561] object-"openshift-ovn-kubernetes"/"ovnkube-script-lib": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ovn-kubernetes/configmaps?fieldSelector=metadata.name%3Dovnkube-script-lib&resourceVersion=109685": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33998->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:17.363219 5050 trace.go:236] Trace[1479463843]: "Reflector ListAndWatch" name:object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" (11-Dec-2025 15:54:01.383) (total time: 15979ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1479463843]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ovn-kubernetes/configmaps?fieldSelector=metadata.name%3Dovnkube-script-lib&resourceVersion=109685": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33998->38.102.83.147:6443: read: connection reset by peer 15979ms (15:54:17.363) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1479463843]: [15.979728531s] [15.979728531s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:17.363237 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"ovnkube-script-lib\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ovn-kubernetes/configmaps?fieldSelector=metadata.name%3Dovnkube-script-lib&resourceVersion=109685\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:33998->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:17.383194 5050 reflector.go:561] object-"openstack"/"heat-heat-dockercfg-mz9rx": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dheat-heat-dockercfg-mz9rx&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34016->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:17.383254 5050 trace.go:236] Trace[1730244217]: "Reflector ListAndWatch" name:object-"openstack"/"heat-heat-dockercfg-mz9rx" (11-Dec-2025 15:54:01.390) (total time: 15993ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1730244217]: ---"Objects listed" error:Get 
"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dheat-heat-dockercfg-mz9rx&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34016->38.102.83.147:6443: read: connection reset by peer 15992ms (15:54:17.383) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1730244217]: [15.993017186s] [15.993017186s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:17.383272 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"heat-heat-dockercfg-mz9rx\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dheat-heat-dockercfg-mz9rx&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34016->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:17.403074 5050 reflector.go:561] object-"openstack"/"prometheus-metric-storage": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dprometheus-metric-storage&resourceVersion=109846": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34096->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:17.403132 5050 trace.go:236] Trace[1784137419]: "Reflector ListAndWatch" name:object-"openstack"/"prometheus-metric-storage" (11-Dec-2025 15:54:01.396) (total time: 16006ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1784137419]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dprometheus-metric-storage&resourceVersion=109846": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34096->38.102.83.147:6443: read: connection reset by peer 16006ms (15:54:17.403) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1784137419]: [16.006266672s] [16.006266672s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:17.403151 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"prometheus-metric-storage\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dprometheus-metric-storage&resourceVersion=109846\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34096->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:17.423462 5050 reflector.go:561] object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmachine-config-controller-dockercfg-c2lfx&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34150->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:17.423536 5050 trace.go:236] Trace[1575888510]: "Reflector ListAndWatch" 
name:object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" (11-Dec-2025 15:54:01.400) (total time: 16023ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1575888510]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmachine-config-controller-dockercfg-c2lfx&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34150->38.102.83.147:6443: read: connection reset by peer 16023ms (15:54:17.423) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1575888510]: [16.023167284s] [16.023167284s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:17.423557 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"machine-config-controller-dockercfg-c2lfx\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmachine-config-controller-dockercfg-c2lfx&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34150->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:17.443308 5050 reflector.go:561] object-"metallb-system"/"metallb-operator-webhook-server-service-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/secrets?fieldSelector=metadata.name%3Dmetallb-operator-webhook-server-service-cert&resourceVersion=109215": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34170->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:17.443368 5050 trace.go:236] Trace[1540433026]: "Reflector ListAndWatch" name:object-"metallb-system"/"metallb-operator-webhook-server-service-cert" (11-Dec-2025 15:54:01.404) (total time: 16039ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1540433026]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/secrets?fieldSelector=metadata.name%3Dmetallb-operator-webhook-server-service-cert&resourceVersion=109215": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34170->38.102.83.147:6443: read: connection reset by peer 16039ms (15:54:17.443) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1540433026]: [16.039190464s] [16.039190464s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:17.443386 5050 reflector.go:158] "Unhandled Error" err="object-\"metallb-system\"/\"metallb-operator-webhook-server-service-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/secrets?fieldSelector=metadata.name%3Dmetallb-operator-webhook-server-service-cert&resourceVersion=109215\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34170->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:17.463402 5050 reflector.go:561] object-"openstack"/"neutron-config": failed to list *v1.Secret: Get 
"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dneutron-config&resourceVersion=109846": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34184->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:17.463460 5050 trace.go:236] Trace[617214269]: "Reflector ListAndWatch" name:object-"openstack"/"neutron-config" (11-Dec-2025 15:54:01.406) (total time: 16057ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[617214269]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dneutron-config&resourceVersion=109846": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34184->38.102.83.147:6443: read: connection reset by peer 16057ms (15:54:17.463) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[617214269]: [16.057155475s] [16.057155475s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:17.463481 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"neutron-config\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dneutron-config&resourceVersion=109846\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34184->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:17.482529 5050 reflector.go:561] object-"openshift-machine-api"/"machine-api-operator-images": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-api/configmaps?fieldSelector=metadata.name%3Dmachine-api-operator-images&resourceVersion=109807": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34198->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:17.482588 5050 trace.go:236] Trace[556345382]: "Reflector ListAndWatch" name:object-"openshift-machine-api"/"machine-api-operator-images" (11-Dec-2025 15:54:01.407) (total time: 16075ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[556345382]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-api/configmaps?fieldSelector=metadata.name%3Dmachine-api-operator-images&resourceVersion=109807": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34198->38.102.83.147:6443: read: connection reset by peer 16075ms (15:54:17.482) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[556345382]: [16.075138036s] [16.075138036s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:17.482610 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"machine-api-operator-images\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-api/configmaps?fieldSelector=metadata.name%3Dmachine-api-operator-images&resourceVersion=109807\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34198->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 
15:54:17.503222 5050 reflector.go:561] object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-crgwv": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Drabbitmq-cluster-operator-controller-manager-dockercfg-crgwv&resourceVersion=109743": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34268->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:17.503297 5050 trace.go:236] Trace[1755242269]: "Reflector ListAndWatch" name:object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-crgwv" (11-Dec-2025 15:54:01.426) (total time: 16076ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1755242269]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Drabbitmq-cluster-operator-controller-manager-dockercfg-crgwv&resourceVersion=109743": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34268->38.102.83.147:6443: read: connection reset by peer 16076ms (15:54:17.503) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1755242269]: [16.076597906s] [16.076597906s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:17.503322 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"rabbitmq-cluster-operator-controller-manager-dockercfg-crgwv\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Drabbitmq-cluster-operator-controller-manager-dockercfg-crgwv&resourceVersion=109743\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34268->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:17.523371 5050 reflector.go:561] object-"openshift-marketplace"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109522": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:36108->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:17.523455 5050 trace.go:236] Trace[1586036604]: "Reflector ListAndWatch" name:object-"openshift-marketplace"/"kube-root-ca.crt" (11-Dec-2025 15:54:01.820) (total time: 15702ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1586036604]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109522": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:36108->38.102.83.147:6443: read: connection reset by peer 15702ms (15:54:17.523) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1586036604]: [15.702699539s] [15.702699539s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:17.523475 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-marketplace\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109522\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:36108->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:17.543364 5050 reflector.go:561] object-"openshift-multus"/"multus-daemon-config": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-multus/configmaps?fieldSelector=metadata.name%3Dmultus-daemon-config&resourceVersion=109067": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34290->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:17.543436 5050 trace.go:236] Trace[1546112763]: "Reflector ListAndWatch" name:object-"openshift-multus"/"multus-daemon-config" (11-Dec-2025 15:54:01.435) (total time: 16107ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1546112763]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-multus/configmaps?fieldSelector=metadata.name%3Dmultus-daemon-config&resourceVersion=109067": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34290->38.102.83.147:6443: read: connection reset by peer 16107ms (15:54:17.543) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1546112763]: [16.107562125s] [16.107562125s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:17.543460 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"multus-daemon-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-multus/configmaps?fieldSelector=metadata.name%3Dmultus-daemon-config&resourceVersion=109067\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34290->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:17.562794 5050 reflector.go:561] object-"openshift-machine-config-operator"/"node-bootstrapper-token": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dnode-bootstrapper-token&resourceVersion=109887": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:36052->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:17.562882 5050 trace.go:236] Trace[1393547291]: "Reflector ListAndWatch" name:object-"openshift-machine-config-operator"/"node-bootstrapper-token" (11-Dec-2025 15:54:01.812) (total time: 15750ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1393547291]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dnode-bootstrapper-token&resourceVersion=109887": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:36052->38.102.83.147:6443: read: connection reset by peer 15750ms (15:54:17.562) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1393547291]: [15.750800527s] [15.750800527s] END Dec 11 15:54:41 crc 
kubenswrapper[5050]: E1211 15:54:17.562901 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"node-bootstrapper-token\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dnode-bootstrapper-token&resourceVersion=109887\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:36052->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:17.582736 5050 reflector.go:561] object-"openshift-authentication"/"v4-0-config-user-template-login": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/secrets?fieldSelector=metadata.name%3Dv4-0-config-user-template-login&resourceVersion=109586": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34334->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:17.582788 5050 trace.go:236] Trace[890475045]: "Reflector ListAndWatch" name:object-"openshift-authentication"/"v4-0-config-user-template-login" (11-Dec-2025 15:54:01.450) (total time: 16132ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[890475045]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/secrets?fieldSelector=metadata.name%3Dv4-0-config-user-template-login&resourceVersion=109586": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34334->38.102.83.147:6443: read: connection reset by peer 16132ms (15:54:17.582) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[890475045]: [16.132672518s] [16.132672518s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:17.582806 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"v4-0-config-user-template-login\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/secrets?fieldSelector=metadata.name%3Dv4-0-config-user-template-login&resourceVersion=109586\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34334->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:17.602750 5050 reflector.go:561] object-"openshift-marketplace"/"community-operators-dockercfg-dmngl": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/secrets?fieldSelector=metadata.name%3Dcommunity-operators-dockercfg-dmngl&resourceVersion=109803": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34400->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:17.602788 5050 trace.go:236] Trace[1522124097]: "Reflector ListAndWatch" name:object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" (11-Dec-2025 15:54:01.457) (total time: 16145ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1522124097]: ---"Objects listed" error:Get 
"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/secrets?fieldSelector=metadata.name%3Dcommunity-operators-dockercfg-dmngl&resourceVersion=109803": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34400->38.102.83.147:6443: read: connection reset by peer 16145ms (15:54:17.602) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1522124097]: [16.145101131s] [16.145101131s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:17.602800 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-marketplace\"/\"community-operators-dockercfg-dmngl\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/secrets?fieldSelector=metadata.name%3Dcommunity-operators-dockercfg-dmngl&resourceVersion=109803\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34400->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:17.622240 5050 reflector.go:561] object-"openshift-service-ca-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109602": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35206->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:17.622280 5050 trace.go:236] Trace[181809022]: "Reflector ListAndWatch" name:object-"openshift-service-ca-operator"/"kube-root-ca.crt" (11-Dec-2025 15:54:01.644) (total time: 15977ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[181809022]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109602": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35206->38.102.83.147:6443: read: connection reset by peer 15977ms (15:54:17.622) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[181809022]: [15.977361857s] [15.977361857s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:17.622293 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-service-ca-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109602\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35206->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:17.642673 5050 reflector.go:561] object-"openshift-machine-api"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-api/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109158": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35262->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:17.642733 5050 
trace.go:236] Trace[2059961787]: "Reflector ListAndWatch" name:object-"openshift-machine-api"/"openshift-service-ca.crt" (11-Dec-2025 15:54:01.652) (total time: 15989ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[2059961787]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-api/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109158": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35262->38.102.83.147:6443: read: connection reset by peer 15989ms (15:54:17.642) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[2059961787]: [15.989900173s] [15.989900173s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:17.642754 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-api/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109158\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35262->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:17.663196 5050 reflector.go:561] object-"openstack"/"heat-api-config-data": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dheat-api-config-data&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35402->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:17.663242 5050 trace.go:236] Trace[781471845]: "Reflector ListAndWatch" name:object-"openstack"/"heat-api-config-data" (11-Dec-2025 15:54:01.687) (total time: 15976ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[781471845]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dheat-api-config-data&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35402->38.102.83.147:6443: read: connection reset by peer 15976ms (15:54:17.663) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[781471845]: [15.976181365s] [15.976181365s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:17.663255 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"heat-api-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dheat-api-config-data&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35402->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:17.682597 5050 reflector.go:561] object-"openstack"/"openstack-cell1": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dopenstack-cell1&resourceVersion=109067": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34440->38.102.83.147:6443: read: 
connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:17.682640 5050 trace.go:236] Trace[1130002851]: "Reflector ListAndWatch" name:object-"openstack"/"openstack-cell1" (11-Dec-2025 15:54:01.476) (total time: 16205ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1130002851]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dopenstack-cell1&resourceVersion=109067": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34440->38.102.83.147:6443: read: connection reset by peer 16205ms (15:54:17.682) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1130002851]: [16.205820627s] [16.205820627s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:17.682655 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"openstack-cell1\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dopenstack-cell1&resourceVersion=109067\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34440->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:17.702379 5050 reflector.go:561] object-"openshift-dns-operator"/"metrics-tls": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-dns-operator/secrets?fieldSelector=metadata.name%3Dmetrics-tls&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35474->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:17.702421 5050 trace.go:236] Trace[409215368]: "Reflector ListAndWatch" name:object-"openshift-dns-operator"/"metrics-tls" (11-Dec-2025 15:54:01.702) (total time: 15999ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[409215368]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-dns-operator/secrets?fieldSelector=metadata.name%3Dmetrics-tls&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35474->38.102.83.147:6443: read: connection reset by peer 15999ms (15:54:17.702) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[409215368]: [15.99947685s] [15.99947685s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:17.702436 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-dns-operator\"/\"metrics-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-dns-operator/secrets?fieldSelector=metadata.name%3Dmetrics-tls&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35474->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:17.722515 5050 reflector.go:561] object-"openshift-dns-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-dns-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109888": dial tcp 38.102.83.147:6443: connect: connection refused - error from a 
previous attempt: read tcp 38.102.83.147:34510->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:17.722580 5050 trace.go:236] Trace[442989232]: "Reflector ListAndWatch" name:object-"openshift-dns-operator"/"openshift-service-ca.crt" (11-Dec-2025 15:54:01.492) (total time: 16230ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[442989232]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-dns-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109888": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34510->38.102.83.147:6443: read: connection reset by peer 16230ms (15:54:17.722) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[442989232]: [16.230240352s] [16.230240352s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:17.722601 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-dns-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-dns-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109888\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34510->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:17.742604 5050 reflector.go:561] object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/secrets?fieldSelector=metadata.name%3Dv4-0-config-user-idp-0-file-data&resourceVersion=109887": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35558->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:17.742658 5050 trace.go:236] Trace[1098903201]: "Reflector ListAndWatch" name:object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" (11-Dec-2025 15:54:01.722) (total time: 16020ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1098903201]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/secrets?fieldSelector=metadata.name%3Dv4-0-config-user-idp-0-file-data&resourceVersion=109887": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35558->38.102.83.147:6443: read: connection reset by peer 16020ms (15:54:17.742) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1098903201]: [16.020066521s] [16.020066521s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:17.742676 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"v4-0-config-user-idp-0-file-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/secrets?fieldSelector=metadata.name%3Dv4-0-config-user-idp-0-file-data&resourceVersion=109887\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35558->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:17.762524 5050 reflector.go:561] 
object-"openshift-authentication"/"v4-0-config-user-template-provider-selection": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/secrets?fieldSelector=metadata.name%3Dv4-0-config-user-template-provider-selection&resourceVersion=109586": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34618->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:17.762583 5050 trace.go:236] Trace[311424124]: "Reflector ListAndWatch" name:object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" (11-Dec-2025 15:54:01.512) (total time: 16249ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[311424124]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/secrets?fieldSelector=metadata.name%3Dv4-0-config-user-template-provider-selection&resourceVersion=109586": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34618->38.102.83.147:6443: read: connection reset by peer 16249ms (15:54:17.762) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[311424124]: [16.249707243s] [16.249707243s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:17.762601 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"v4-0-config-user-template-provider-selection\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/secrets?fieldSelector=metadata.name%3Dv4-0-config-user-template-provider-selection&resourceVersion=109586\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34618->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:17.782717 5050 reflector.go:561] object-"openstack"/"aodh-scripts": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Daodh-scripts&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35660->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:17.782788 5050 trace.go:236] Trace[544109871]: "Reflector ListAndWatch" name:object-"openstack"/"aodh-scripts" (11-Dec-2025 15:54:01.747) (total time: 16034ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[544109871]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Daodh-scripts&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35660->38.102.83.147:6443: read: connection reset by peer 16034ms (15:54:17.782) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[544109871]: [16.034818947s] [16.034818947s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:17.782807 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"aodh-scripts\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Daodh-scripts&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous 
attempt: read tcp 38.102.83.147:35660->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:17.802760 5050 reflector.go:561] object-"openstack"/"prometheus-metric-storage-web-config": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dprometheus-metric-storage-web-config&resourceVersion=109846": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35732->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:17.802840 5050 trace.go:236] Trace[302918928]: "Reflector ListAndWatch" name:object-"openstack"/"prometheus-metric-storage-web-config" (11-Dec-2025 15:54:01.753) (total time: 16049ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[302918928]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dprometheus-metric-storage-web-config&resourceVersion=109846": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35732->38.102.83.147:6443: read: connection reset by peer 16049ms (15:54:17.802) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[302918928]: [16.049770847s] [16.049770847s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:17.802860 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"prometheus-metric-storage-web-config\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dprometheus-metric-storage-web-config&resourceVersion=109846\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35732->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:17.807265 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/prometheus-metric-storage-0" podUID="7fb00b03-fe6e-4c66-bd36-adf9443871a8" containerName="prometheus" probeResult="failure" output="Get \"http://10.217.1.135:9090/-/healthy\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:17.823370 5050 reflector.go:561] object-"openshift-console"/"trusted-ca-bundle": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/configmaps?fieldSelector=metadata.name%3Dtrusted-ca-bundle&resourceVersion=109602": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34642->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:17.823426 5050 trace.go:236] Trace[1155947553]: "Reflector ListAndWatch" name:object-"openshift-console"/"trusted-ca-bundle" (11-Dec-2025 15:54:01.521) (total time: 16302ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1155947553]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/configmaps?fieldSelector=metadata.name%3Dtrusted-ca-bundle&resourceVersion=109602": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34642->38.102.83.147:6443: read: connection reset by peer 16301ms (15:54:17.823) Dec 11 15:54:41 
crc kubenswrapper[5050]: Trace[1155947553]: [16.302021824s] [16.302021824s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:17.823445 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-console\"/\"trusted-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/configmaps?fieldSelector=metadata.name%3Dtrusted-ca-bundle&resourceVersion=109602\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34642->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:17.843289 5050 reflector.go:561] object-"openshift-operators"/"observability-operator-sa-dockercfg-f5g9s": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operators/secrets?fieldSelector=metadata.name%3Dobservability-operator-sa-dockercfg-f5g9s&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35810->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:17.843363 5050 trace.go:236] Trace[888052752]: "Reflector ListAndWatch" name:object-"openshift-operators"/"observability-operator-sa-dockercfg-f5g9s" (11-Dec-2025 15:54:01.768) (total time: 16075ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[888052752]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operators/secrets?fieldSelector=metadata.name%3Dobservability-operator-sa-dockercfg-f5g9s&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35810->38.102.83.147:6443: read: connection reset by peer 16075ms (15:54:17.843) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[888052752]: [16.075154437s] [16.075154437s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:17.843378 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-operators\"/\"observability-operator-sa-dockercfg-f5g9s\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operators/secrets?fieldSelector=metadata.name%3Dobservability-operator-sa-dockercfg-f5g9s&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35810->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:17.863219 5050 reflector.go:561] object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dprometheus-metric-storage-thanos-prometheus-http-client-file&resourceVersion=109846": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34668->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:17.863261 5050 trace.go:236] Trace[1477760124]: "Reflector ListAndWatch" name:object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" (11-Dec-2025 15:54:01.523) (total time: 16340ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1477760124]: ---"Objects listed" error:Get 
"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dprometheus-metric-storage-thanos-prometheus-http-client-file&resourceVersion=109846": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34668->38.102.83.147:6443: read: connection reset by peer 16340ms (15:54:17.863) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1477760124]: [16.340185837s] [16.340185837s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:17.863287 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"prometheus-metric-storage-thanos-prometheus-http-client-file\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dprometheus-metric-storage-thanos-prometheus-http-client-file&resourceVersion=109846\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34668->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:17.882558 5050 reflector.go:561] object-"openshift-machine-config-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109888": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35858->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:17.882662 5050 trace.go:236] Trace[796635115]: "Reflector ListAndWatch" name:object-"openshift-machine-config-operator"/"openshift-service-ca.crt" (11-Dec-2025 15:54:01.780) (total time: 16102ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[796635115]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109888": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35858->38.102.83.147:6443: read: connection reset by peer 16102ms (15:54:17.882) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[796635115]: [16.102126649s] [16.102126649s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:17.882687 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109888\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35858->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:17.902830 5050 reflector.go:561] object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/secrets?fieldSelector=metadata.name%3Dredhat-operators-dockercfg-ct8rh&resourceVersion=109846": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 
38.102.83.147:34682->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:17.902892 5050 trace.go:236] Trace[1231429083]: "Reflector ListAndWatch" name:object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" (11-Dec-2025 15:54:01.523) (total time: 16379ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1231429083]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/secrets?fieldSelector=metadata.name%3Dredhat-operators-dockercfg-ct8rh&resourceVersion=109846": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34682->38.102.83.147:6443: read: connection reset by peer 16379ms (15:54:17.902) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1231429083]: [16.379305385s] [16.379305385s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:17.902911 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-marketplace\"/\"redhat-operators-dockercfg-ct8rh\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/secrets?fieldSelector=metadata.name%3Dredhat-operators-dockercfg-ct8rh&resourceVersion=109846\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34682->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:17.922580 5050 reflector.go:561] object-"openshift-oauth-apiserver"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-oauth-apiserver/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109685": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35962->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:17.922804 5050 trace.go:236] Trace[1236068606]: "Reflector ListAndWatch" name:object-"openshift-oauth-apiserver"/"kube-root-ca.crt" (11-Dec-2025 15:54:01.800) (total time: 16121ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1236068606]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-oauth-apiserver/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109685": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35962->38.102.83.147:6443: read: connection reset by peer 16121ms (15:54:17.922) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1236068606]: [16.121891849s] [16.121891849s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:17.922820 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-oauth-apiserver\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-oauth-apiserver/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109685\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35962->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:17.942755 5050 reflector.go:561] object-"openshift-apiserver-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get 
"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109685": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34746->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:17.942802 5050 trace.go:236] Trace[1691873623]: "Reflector ListAndWatch" name:object-"openshift-apiserver-operator"/"openshift-service-ca.crt" (11-Dec-2025 15:54:01.538) (total time: 16403ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1691873623]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109685": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34746->38.102.83.147:6443: read: connection reset by peer 16403ms (15:54:17.942) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1691873623]: [16.403966426s] [16.403966426s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:17.942818 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109685\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34746->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:17.962652 5050 request.go:700] Waited for 6.072739686s, retries: 1, retry-after: 1s - retry-reason: due to retryable error, error: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dglance-glance-dockercfg-ndgnr&resourceVersion=109743": read tcp 38.102.83.147:34766->38.102.83.147:6443: read: connection reset by peer - request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dglance-glance-dockercfg-ndgnr&resourceVersion=109743 Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:17.962898 5050 reflector.go:561] object-"openstack"/"glance-glance-dockercfg-ndgnr": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dglance-glance-dockercfg-ndgnr&resourceVersion=109743": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34766->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:17.962936 5050 trace.go:236] Trace[1442374055]: "Reflector ListAndWatch" name:object-"openstack"/"glance-glance-dockercfg-ndgnr" (11-Dec-2025 15:54:01.542) (total time: 16420ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1442374055]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dglance-glance-dockercfg-ndgnr&resourceVersion=109743": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34766->38.102.83.147:6443: read: connection reset by peer 16420ms (15:54:17.962) Dec 11 15:54:41 crc kubenswrapper[5050]: 
Trace[1442374055]: [16.420821777s] [16.420821777s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:17.962953 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"glance-glance-dockercfg-ndgnr\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dglance-glance-dockercfg-ndgnr&resourceVersion=109743\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34766->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:17.982817 5050 reflector.go:561] object-"openshift-authentication-operator"/"authentication-operator-config": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication-operator/configmaps?fieldSelector=metadata.name%3Dauthentication-operator-config&resourceVersion=109602": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:36222->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:17.982880 5050 trace.go:236] Trace[155795549]: "Reflector ListAndWatch" name:object-"openshift-authentication-operator"/"authentication-operator-config" (11-Dec-2025 15:54:01.843) (total time: 16139ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[155795549]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication-operator/configmaps?fieldSelector=metadata.name%3Dauthentication-operator-config&resourceVersion=109602": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:36222->38.102.83.147:6443: read: connection reset by peer 16139ms (15:54:17.982) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[155795549]: [16.13947798s] [16.13947798s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:17.982892 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication-operator\"/\"authentication-operator-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication-operator/configmaps?fieldSelector=metadata.name%3Dauthentication-operator-config&resourceVersion=109602\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:36222->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:18.002664 5050 reflector.go:561] object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/secrets?fieldSelector=metadata.name%3Dredhat-marketplace-dockercfg-x2ctb&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34776->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:18.002731 5050 trace.go:236] Trace[25472371]: "Reflector ListAndWatch" name:object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" (11-Dec-2025 15:54:01.544) (total time: 16458ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[25472371]: ---"Objects listed" error:Get 
"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/secrets?fieldSelector=metadata.name%3Dredhat-marketplace-dockercfg-x2ctb&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34776->38.102.83.147:6443: read: connection reset by peer 16458ms (15:54:18.002) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[25472371]: [16.458365063s] [16.458365063s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:18.002751 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-marketplace\"/\"redhat-marketplace-dockercfg-x2ctb\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/secrets?fieldSelector=metadata.name%3Dredhat-marketplace-dockercfg-x2ctb&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:34776->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:18.022607 5050 reflector.go:561] object-"openstack"/"ovndbcluster-sb-config": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dovndbcluster-sb-config&resourceVersion=109158": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35054->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:18.022658 5050 trace.go:236] Trace[7818612]: "Reflector ListAndWatch" name:object-"openstack"/"ovndbcluster-sb-config" (11-Dec-2025 15:54:01.606) (total time: 16416ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[7818612]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dovndbcluster-sb-config&resourceVersion=109158": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35054->38.102.83.147:6443: read: connection reset by peer 16416ms (15:54:18.022) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[7818612]: [16.416148472s] [16.416148472s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:18.022677 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"ovndbcluster-sb-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dovndbcluster-sb-config&resourceVersion=109158\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35054->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:18.042879 5050 reflector.go:561] object-"openstack"/"ovncontroller-metrics-config": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dovncontroller-metrics-config&resourceVersion=109602": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35168->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:18.042947 5050 trace.go:236] Trace[1569357896]: "Reflector ListAndWatch" 
name:object-"openstack"/"ovncontroller-metrics-config" (11-Dec-2025 15:54:01.634) (total time: 16408ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1569357896]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dovncontroller-metrics-config&resourceVersion=109602": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35168->38.102.83.147:6443: read: connection reset by peer 16408ms (15:54:18.042) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1569357896]: [16.40825214s] [16.40825214s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:18.042971 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"ovncontroller-metrics-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dovncontroller-metrics-config&resourceVersion=109602\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:35168->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:18.063128 5050 reflector.go:561] object-"cert-manager"/"cert-manager-webhook-dockercfg-kbhwz": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/cert-manager/secrets?fieldSelector=metadata.name%3Dcert-manager-webhook-dockercfg-kbhwz&resourceVersion=109743": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:36012->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:18.063201 5050 trace.go:236] Trace[980362619]: "Reflector ListAndWatch" name:object-"cert-manager"/"cert-manager-webhook-dockercfg-kbhwz" (11-Dec-2025 15:54:01.808) (total time: 16254ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[980362619]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/cert-manager/secrets?fieldSelector=metadata.name%3Dcert-manager-webhook-dockercfg-kbhwz&resourceVersion=109743": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:36012->38.102.83.147:6443: read: connection reset by peer 16254ms (15:54:18.063) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[980362619]: [16.254722037s] [16.254722037s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:18.063218 5050 reflector.go:158] "Unhandled Error" err="object-\"cert-manager\"/\"cert-manager-webhook-dockercfg-kbhwz\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/cert-manager/secrets?fieldSelector=metadata.name%3Dcert-manager-webhook-dockercfg-kbhwz&resourceVersion=109743\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:36012->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:18.068218 5050 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-9jns7 container/package-server-manager namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"http://10.217.0.39:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 11 15:54:41 crc 
kubenswrapper[5050]: I1211 15:54:18.068257 5050 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-9jns7 container/package-server-manager namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"http://10.217.0.39:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:18.068278 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9jns7" podUID="d1f35830-d883-4b41-ab97-7c382dec0387" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.39:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:18.068301 5050 patch_prober.go:28] interesting pod/authentication-operator-69f744f599-t75hp container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.217.0.34:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:18.068360 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-69f744f599-t75hp" podUID="dfc40369-6f7d-4ab2-89b0-60bfc12e9bfe" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.217.0.34:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:18.068311 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9jns7" podUID="d1f35830-d883-4b41-ab97-7c382dec0387" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.39:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:18.083363 5050 reflector.go:561] object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-storage-version-migrator-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109158": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:36064->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:18.083440 5050 trace.go:236] Trace[967617999]: "Reflector ListAndWatch" name:object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" (11-Dec-2025 15:54:01.813) (total time: 16270ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[967617999]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-storage-version-migrator-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109158": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:36064->38.102.83.147:6443: read: connection reset by peer 16270ms (15:54:18.083) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[967617999]: [16.270279144s] [16.270279144s] END Dec 11 15:54:41 
crc kubenswrapper[5050]: E1211 15:54:18.083457 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-kube-storage-version-migrator-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-storage-version-migrator-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109158\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:36064->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:18.096118 5050 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.147:6443: connect: connection refused" interval="6.4s" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:18.103040 5050 reflector.go:561] object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-gkldc": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dmanila-operator-controller-manager-dockercfg-gkldc&resourceVersion=109743": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:36092->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:18.103129 5050 trace.go:236] Trace[312296324]: "Reflector ListAndWatch" name:object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-gkldc" (11-Dec-2025 15:54:01.817) (total time: 16285ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[312296324]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dmanila-operator-controller-manager-dockercfg-gkldc&resourceVersion=109743": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:36092->38.102.83.147:6443: read: connection reset by peer 16285ms (15:54:18.103) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[312296324]: [16.285531542s] [16.285531542s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:18.103150 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"manila-operator-controller-manager-dockercfg-gkldc\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dmanila-operator-controller-manager-dockercfg-gkldc&resourceVersion=109743\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:36092->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:18.123172 5050 reflector.go:561] object-"cert-manager"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/cert-manager/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109888": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:36178->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:18.123262 5050 trace.go:236] 
Trace[105694376]: "Reflector ListAndWatch" name:object-"cert-manager"/"kube-root-ca.crt" (11-Dec-2025 15:54:01.833) (total time: 16289ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[105694376]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/cert-manager/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109888": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:36178->38.102.83.147:6443: read: connection reset by peer 16289ms (15:54:18.123) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[105694376]: [16.289585032s] [16.289585032s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:18.123281 5050 reflector.go:158] "Unhandled Error" err="object-\"cert-manager\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/cert-manager/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109888\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:36178->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:18.143065 5050 reflector.go:561] object-"openshift-ingress-canary"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-canary/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109888": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:36264->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:18.143121 5050 trace.go:236] Trace[485305858]: "Reflector ListAndWatch" name:object-"openshift-ingress-canary"/"openshift-service-ca.crt" (11-Dec-2025 15:54:01.852) (total time: 16291ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[485305858]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-canary/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109888": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:36264->38.102.83.147:6443: read: connection reset by peer 16291ms (15:54:18.143) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[485305858]: [16.291065921s] [16.291065921s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:18.143137 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-ingress-canary\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-canary/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109888\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:36264->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:18.162699 5050 reflector.go:561] object-"metallb-system"/"metallb-operator-controller-manager-service-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/secrets?fieldSelector=metadata.name%3Dmetallb-operator-controller-manager-service-cert&resourceVersion=109671": dial tcp 38.102.83.147:6443: connect: 
connection refused - error from a previous attempt: read tcp 38.102.83.147:36236->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:18.162785 5050 trace.go:236] Trace[918492790]: "Reflector ListAndWatch" name:object-"metallb-system"/"metallb-operator-controller-manager-service-cert" (11-Dec-2025 15:54:01.843) (total time: 16319ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[918492790]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/secrets?fieldSelector=metadata.name%3Dmetallb-operator-controller-manager-service-cert&resourceVersion=109671": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:36236->38.102.83.147:6443: read: connection reset by peer 16319ms (15:54:18.162) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[918492790]: [16.319342349s] [16.319342349s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:18.162805 5050 reflector.go:158] "Unhandled Error" err="object-\"metallb-system\"/\"metallb-operator-controller-manager-service-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/secrets?fieldSelector=metadata.name%3Dmetallb-operator-controller-manager-service-cert&resourceVersion=109671\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:36236->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:18.182553 5050 reflector.go:561] object-"openstack"/"nova-cell1-novncproxy-config-data": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dnova-cell1-novncproxy-config-data&resourceVersion=109887": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:36288->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:18.182631 5050 trace.go:236] Trace[650595230]: "Reflector ListAndWatch" name:object-"openstack"/"nova-cell1-novncproxy-config-data" (11-Dec-2025 15:54:01.857) (total time: 16324ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[650595230]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dnova-cell1-novncproxy-config-data&resourceVersion=109887": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:36288->38.102.83.147:6443: read: connection reset by peer 16324ms (15:54:18.182) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[650595230]: [16.324688301s] [16.324688301s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:18.182650 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"nova-cell1-novncproxy-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dnova-cell1-novncproxy-config-data&resourceVersion=109887\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:36288->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:18.202921 5050 reflector.go:561] 
object-"openshift-operators"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operators/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109522": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:36312->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:18.203059 5050 trace.go:236] Trace[1277426026]: "Reflector ListAndWatch" name:object-"openshift-operators"/"openshift-service-ca.crt" (11-Dec-2025 15:54:01.859) (total time: 16343ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1277426026]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operators/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109522": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:36312->38.102.83.147:6443: read: connection reset by peer 16343ms (15:54:18.202) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1277426026]: [16.343948148s] [16.343948148s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:18.203089 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-operators\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operators/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109522\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:36312->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:18.223413 5050 reflector.go:561] object-"openshift-cluster-machine-approver"/"machine-approver-tls": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-machine-approver/secrets?fieldSelector=metadata.name%3Dmachine-approver-tls&resourceVersion=109887": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:36302->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:18.223549 5050 trace.go:236] Trace[1820929891]: "Reflector ListAndWatch" name:object-"openshift-cluster-machine-approver"/"machine-approver-tls" (11-Dec-2025 15:54:01.859) (total time: 16364ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1820929891]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-machine-approver/secrets?fieldSelector=metadata.name%3Dmachine-approver-tls&resourceVersion=109887": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:36302->38.102.83.147:6443: read: connection reset by peer 16364ms (15:54:18.223) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1820929891]: [16.364476977s] [16.364476977s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:18.223579 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-machine-approver\"/\"machine-approver-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-machine-approver/secrets?fieldSelector=metadata.name%3Dmachine-approver-tls&resourceVersion=109887\": dial tcp 
38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:36302->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:18.242657 5050 reflector.go:561] object-"openshift-apiserver"/"encryption-config-1": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/secrets?fieldSelector=metadata.name%3Dencryption-config-1&resourceVersion=109586": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:36206->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:18.242733 5050 trace.go:236] Trace[884225336]: "Reflector ListAndWatch" name:object-"openshift-apiserver"/"encryption-config-1" (11-Dec-2025 15:54:01.840) (total time: 16402ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[884225336]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/secrets?fieldSelector=metadata.name%3Dencryption-config-1&resourceVersion=109586": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:36206->38.102.83.147:6443: read: connection reset by peer 16402ms (15:54:18.242) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[884225336]: [16.402305732s] [16.402305732s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:18.242755 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"encryption-config-1\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/secrets?fieldSelector=metadata.name%3Dencryption-config-1&resourceVersion=109586\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:36206->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:18.262735 5050 reflector.go:561] object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-55k4b": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dovnnorthd-ovnnorthd-dockercfg-55k4b&resourceVersion=109623": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:36262->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:18.262835 5050 trace.go:236] Trace[1288059017]: "Reflector ListAndWatch" name:object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-55k4b" (11-Dec-2025 15:54:01.845) (total time: 16417ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1288059017]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dovnnorthd-ovnnorthd-dockercfg-55k4b&resourceVersion=109623": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:36262->38.102.83.147:6443: read: connection reset by peer 16417ms (15:54:18.262) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1288059017]: [16.417132499s] [16.417132499s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:18.262859 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"ovnnorthd-ovnnorthd-dockercfg-55k4b\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dovnnorthd-ovnnorthd-dockercfg-55k4b&resourceVersion=109623\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:36262->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:18.282661 5050 reflector.go:561] object-"openshift-image-registry"/"trusted-ca": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/configmaps?fieldSelector=metadata.name%3Dtrusted-ca&resourceVersion=109602": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:36258->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:18.282756 5050 trace.go:236] Trace[850417238]: "Reflector ListAndWatch" name:object-"openshift-image-registry"/"trusted-ca" (11-Dec-2025 15:54:01.845) (total time: 16437ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[850417238]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/configmaps?fieldSelector=metadata.name%3Dtrusted-ca&resourceVersion=109602": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:36258->38.102.83.147:6443: read: connection reset by peer 16437ms (15:54:18.282) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[850417238]: [16.437087183s] [16.437087183s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:18.282779 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-image-registry\"/\"trusted-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/configmaps?fieldSelector=metadata.name%3Dtrusted-ca&resourceVersion=109602\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:36258->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:18.303444 5050 reflector.go:561] object-"openshift-ingress"/"router-certs-default": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress/secrets?fieldSelector=metadata.name%3Drouter-certs-default&resourceVersion=109087": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:36330->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:18.303526 5050 trace.go:236] Trace[291920929]: "Reflector ListAndWatch" name:object-"openshift-ingress"/"router-certs-default" (11-Dec-2025 15:54:01.861) (total time: 16442ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[291920929]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress/secrets?fieldSelector=metadata.name%3Drouter-certs-default&resourceVersion=109087": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:36330->38.102.83.147:6443: read: connection reset by peer 16442ms (15:54:18.303) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[291920929]: [16.442276072s] [16.442276072s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:18.303545 5050 reflector.go:158] "Unhandled Error" 
err="object-\"openshift-ingress\"/\"router-certs-default\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress/secrets?fieldSelector=metadata.name%3Drouter-certs-default&resourceVersion=109087\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:36330->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:18.323449 5050 reflector.go:561] object-"openstack"/"aodh-config-data": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Daodh-config-data&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:36204->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:18.323527 5050 trace.go:236] Trace[1617069025]: "Reflector ListAndWatch" name:object-"openstack"/"aodh-config-data" (11-Dec-2025 15:54:01.837) (total time: 16485ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1617069025]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Daodh-config-data&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:36204->38.102.83.147:6443: read: connection reset by peer 16485ms (15:54:18.323) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1617069025]: [16.485796808s] [16.485796808s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:18.323547 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"aodh-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Daodh-config-data&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:36204->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:18.342807 5050 reflector.go:561] object-"openstack"/"openstack-cell1-config-data": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dopenstack-cell1-config-data&resourceVersion=109762": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:36184->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:18.342892 5050 trace.go:236] Trace[374593118]: "Reflector ListAndWatch" name:object-"openstack"/"openstack-cell1-config-data" (11-Dec-2025 15:54:01.836) (total time: 16506ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[374593118]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dopenstack-cell1-config-data&resourceVersion=109762": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:36184->38.102.83.147:6443: read: connection reset by peer 16506ms (15:54:18.342) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[374593118]: [16.506249696s] [16.506249696s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:18.342911 
5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"openstack-cell1-config-data\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dopenstack-cell1-config-data&resourceVersion=109762\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:36184->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:18.362838 5050 reflector.go:561] object-"openstack"/"ovncontroller-scripts": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dovncontroller-scripts&resourceVersion=109559": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:36274->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:18.362918 5050 trace.go:236] Trace[1531459156]: "Reflector ListAndWatch" name:object-"openstack"/"ovncontroller-scripts" (11-Dec-2025 15:54:01.855) (total time: 16507ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1531459156]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dovncontroller-scripts&resourceVersion=109559": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:36274->38.102.83.147:6443: read: connection reset by peer 16507ms (15:54:18.362) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1531459156]: [16.50715401s] [16.50715401s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:18.362934 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"ovncontroller-scripts\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dovncontroller-scripts&resourceVersion=109559\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:36274->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:18.365185 5050 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-8qmpk container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.42:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:18.365231 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8qmpk" podUID="81255772-fddc-4936-8de3-da4649c32d1f" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.42:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:18.365367 5050 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-8qmpk container/catalog-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.42:8443/healthz\": context deadline exceeded" start-of-body= Dec 11 15:54:41 crc 
kubenswrapper[5050]: I1211 15:54:18.365449 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8qmpk" podUID="81255772-fddc-4936-8de3-da4649c32d1f" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.42:8443/healthz\": context deadline exceeded" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:18.382639 5050 reflector.go:561] object-"openshift-image-registry"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109158": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:36368->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:18.382739 5050 trace.go:236] Trace[760787568]: "Reflector ListAndWatch" name:object-"openshift-image-registry"/"kube-root-ca.crt" (11-Dec-2025 15:54:01.863) (total time: 16519ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[760787568]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109158": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:36368->38.102.83.147:6443: read: connection reset by peer 16519ms (15:54:18.382) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[760787568]: [16.519235794s] [16.519235794s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:18.382761 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-image-registry\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109158\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:36368->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:18.403109 5050 reflector.go:561] object-"openshift-ovn-kubernetes"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ovn-kubernetes/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109888": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:36150->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:18.403195 5050 trace.go:236] Trace[2089377594]: "Reflector ListAndWatch" name:object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" (11-Dec-2025 15:54:01.822) (total time: 16580ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[2089377594]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ovn-kubernetes/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109888": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:36150->38.102.83.147:6443: read: connection reset by peer 16580ms (15:54:18.403) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[2089377594]: [16.580281929s] [16.580281929s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 
15:54:18.403217 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ovn-kubernetes/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109888\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:36150->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:18.423098 5050 reflector.go:561] object-"openstack"/"openstack-config-data": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dopenstack-config-data&resourceVersion=109762": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:36188->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:18.423181 5050 trace.go:236] Trace[94028359]: "Reflector ListAndWatch" name:object-"openstack"/"openstack-config-data" (11-Dec-2025 15:54:01.837) (total time: 16585ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[94028359]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dopenstack-config-data&resourceVersion=109762": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:36188->38.102.83.147:6443: read: connection reset by peer 16585ms (15:54:18.423) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[94028359]: [16.585434467s] [16.585434467s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:18.423210 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"openstack-config-data\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dopenstack-config-data&resourceVersion=109762\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:36188->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:18.443422 5050 reflector.go:561] object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-d5dn6": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dhorizon-operator-controller-manager-dockercfg-d5dn6&resourceVersion=109743": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:36346->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:18.443527 5050 trace.go:236] Trace[318763863]: "Reflector ListAndWatch" name:object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-d5dn6" (11-Dec-2025 15:54:01.863) (total time: 16580ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[318763863]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dhorizon-operator-controller-manager-dockercfg-d5dn6&resourceVersion=109743": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 
38.102.83.147:36346->38.102.83.147:6443: read: connection reset by peer 16579ms (15:54:18.443) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[318763863]: [16.580089074s] [16.580089074s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:18.443552 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"horizon-operator-controller-manager-dockercfg-d5dn6\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dhorizon-operator-controller-manager-dockercfg-d5dn6&resourceVersion=109743\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:36346->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:18.463164 5050 reflector.go:561] object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver-operator/secrets?fieldSelector=metadata.name%3Dopenshift-apiserver-operator-dockercfg-xtcjv&resourceVersion=109846": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:36036->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:18.463283 5050 trace.go:236] Trace[100306970]: "Reflector ListAndWatch" name:object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" (11-Dec-2025 15:54:01.810) (total time: 16652ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[100306970]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver-operator/secrets?fieldSelector=metadata.name%3Dopenshift-apiserver-operator-dockercfg-xtcjv&resourceVersion=109846": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:36036->38.102.83.147:6443: read: connection reset by peer 16652ms (15:54:18.463) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[100306970]: [16.652294919s] [16.652294919s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:18.463310 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-dockercfg-xtcjv\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver-operator/secrets?fieldSelector=metadata.name%3Dopenshift-apiserver-operator-dockercfg-xtcjv&resourceVersion=109846\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:36036->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:18.468460 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/kube-state-metrics-0" podUID="71218193-88fc-4811-bf04-33a4f4a87898" containerName="kube-state-metrics" probeResult="failure" output="Get \"http://10.217.1.133:8081/readyz\": dial tcp 10.217.1.133:8081: connect: connection refused" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:18.483324 5050 reflector.go:561] object-"openshift-console"/"service-ca": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/configmaps?fieldSelector=metadata.name%3Dservice-ca&resourceVersion=109888": dial tcp 
38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:36384->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:18.483390 5050 trace.go:236] Trace[409755286]: "Reflector ListAndWatch" name:object-"openshift-console"/"service-ca" (11-Dec-2025 15:54:01.866) (total time: 16616ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[409755286]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/configmaps?fieldSelector=metadata.name%3Dservice-ca&resourceVersion=109888": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:36384->38.102.83.147:6443: read: connection reset by peer 16616ms (15:54:18.483) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[409755286]: [16.616972462s] [16.616972462s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:18.483405 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-console\"/\"service-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/configmaps?fieldSelector=metadata.name%3Dservice-ca&resourceVersion=109888\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:36384->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:18.503263 5050 reflector.go:561] object-"openshift-ingress-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109807": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:36352->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:18.503348 5050 trace.go:236] Trace[1915439322]: "Reflector ListAndWatch" name:object-"openshift-ingress-operator"/"openshift-service-ca.crt" (11-Dec-2025 15:54:01.863) (total time: 16639ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1915439322]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109807": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:36352->38.102.83.147:6443: read: connection reset by peer 16639ms (15:54:18.503) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1915439322]: [16.639877685s] [16.639877685s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:18.503368 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-ingress-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109807\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:36352->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:18.523026 5050 reflector.go:561] object-"openstack"/"galera-openstack-cell1-dockercfg-vwxp8": failed 
to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dgalera-openstack-cell1-dockercfg-vwxp8&resourceVersion=109743": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:36088->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:18.523104 5050 trace.go:236] Trace[1548469597]: "Reflector ListAndWatch" name:object-"openstack"/"galera-openstack-cell1-dockercfg-vwxp8" (11-Dec-2025 15:54:01.815) (total time: 16707ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1548469597]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dgalera-openstack-cell1-dockercfg-vwxp8&resourceVersion=109743": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:36088->38.102.83.147:6443: read: connection reset by peer 16707ms (15:54:18.523) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1548469597]: [16.707687113s] [16.707687113s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:18.523121 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"galera-openstack-cell1-dockercfg-vwxp8\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dgalera-openstack-cell1-dockercfg-vwxp8&resourceVersion=109743\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:36088->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:18.543312 5050 reflector.go:561] object-"metallb-system"/"controller-certs-secret": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/secrets?fieldSelector=metadata.name%3Dcontroller-certs-secret&resourceVersion=109846": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:36248->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:18.543400 5050 trace.go:236] Trace[1045729116]: "Reflector ListAndWatch" name:object-"metallb-system"/"controller-certs-secret" (11-Dec-2025 15:54:01.845) (total time: 16697ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1045729116]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/secrets?fieldSelector=metadata.name%3Dcontroller-certs-secret&resourceVersion=109846": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:36248->38.102.83.147:6443: read: connection reset by peer 16697ms (15:54:18.543) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1045729116]: [16.697783257s] [16.697783257s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:18.543421 5050 reflector.go:158] "Unhandled Error" err="object-\"metallb-system\"/\"controller-certs-secret\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/secrets?fieldSelector=metadata.name%3Dcontroller-certs-secret&resourceVersion=109846\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:36248->38.102.83.147:6443: read: connection reset by peer" 
logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:18.563468 5050 reflector.go:561] object-"openstack-operators"/"webhook-server-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dwebhook-server-cert&resourceVersion=109137": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:36314->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:18.563539 5050 trace.go:236] Trace[117759967]: "Reflector ListAndWatch" name:object-"openstack-operators"/"webhook-server-cert" (11-Dec-2025 15:54:01.861) (total time: 16702ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[117759967]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dwebhook-server-cert&resourceVersion=109137": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:36314->38.102.83.147:6443: read: connection reset by peer 16702ms (15:54:18.563) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[117759967]: [16.702310518s] [16.702310518s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:18.563563 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"webhook-server-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dwebhook-server-cert&resourceVersion=109137\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:36314->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:18.582751 5050 reflector.go:561] object-"openstack"/"rabbitmq-cell1-erlang-cookie": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Drabbitmq-cell1-erlang-cookie&resourceVersion=109586": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:36488->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:18.582833 5050 trace.go:236] Trace[2057172693]: "Reflector ListAndWatch" name:object-"openstack"/"rabbitmq-cell1-erlang-cookie" (11-Dec-2025 15:54:01.885) (total time: 16697ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[2057172693]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Drabbitmq-cell1-erlang-cookie&resourceVersion=109586": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:36488->38.102.83.147:6443: read: connection reset by peer 16697ms (15:54:18.582) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[2057172693]: [16.697663234s] [16.697663234s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:18.582862 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"rabbitmq-cell1-erlang-cookie\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Drabbitmq-cell1-erlang-cookie&resourceVersion=109586\": dial tcp 38.102.83.147:6443: connect: connection refused 
- error from a previous attempt: read tcp 38.102.83.147:36488->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:18.603087 5050 reflector.go:561] object-"openshift-marketplace"/"marketplace-operator-metrics": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/secrets?fieldSelector=metadata.name%3Dmarketplace-operator-metrics&resourceVersion=109137": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:36398->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:18.603171 5050 trace.go:236] Trace[211617405]: "Reflector ListAndWatch" name:object-"openshift-marketplace"/"marketplace-operator-metrics" (11-Dec-2025 15:54:01.870) (total time: 16732ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[211617405]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/secrets?fieldSelector=metadata.name%3Dmarketplace-operator-metrics&resourceVersion=109137": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:36398->38.102.83.147:6443: read: connection reset by peer 16732ms (15:54:18.603) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[211617405]: [16.732870437s] [16.732870437s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:18.603188 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-marketplace\"/\"marketplace-operator-metrics\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/secrets?fieldSelector=metadata.name%3Dmarketplace-operator-metrics&resourceVersion=109137\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:36398->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:18.622923 5050 reflector.go:561] object-"metallb-system"/"frr-k8s-certs-secret": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/secrets?fieldSelector=metadata.name%3Dfrr-k8s-certs-secret&resourceVersion=109846": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:36440->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:18.623003 5050 trace.go:236] Trace[819030494]: "Reflector ListAndWatch" name:object-"metallb-system"/"frr-k8s-certs-secret" (11-Dec-2025 15:54:01.878) (total time: 16744ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[819030494]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/secrets?fieldSelector=metadata.name%3Dfrr-k8s-certs-secret&resourceVersion=109846": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:36440->38.102.83.147:6443: read: connection reset by peer 16744ms (15:54:18.622) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[819030494]: [16.744948861s] [16.744948861s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:18.623034 5050 reflector.go:158] "Unhandled Error" err="object-\"metallb-system\"/\"frr-k8s-certs-secret\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/secrets?fieldSelector=metadata.name%3Dfrr-k8s-certs-secret&resourceVersion=109846\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:36440->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:18.643163 5050 reflector.go:561] object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-apiserver-operator-config&resourceVersion=109559": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:36504->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:18.643269 5050 trace.go:236] Trace[91253501]: "Reflector ListAndWatch" name:object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" (11-Dec-2025 15:54:01.886) (total time: 16756ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[91253501]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-apiserver-operator-config&resourceVersion=109559": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:36504->38.102.83.147:6443: read: connection reset by peer 16756ms (15:54:18.643) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[91253501]: [16.75687864s] [16.75687864s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:18.643294 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-apiserver-operator-config&resourceVersion=109559\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:36504->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:18.663402 5050 reflector.go:561] object-"openshift-apiserver"/"audit-1": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/configmaps?fieldSelector=metadata.name%3Daudit-1&resourceVersion=109522": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:36442->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:18.663495 5050 trace.go:236] Trace[153231714]: "Reflector ListAndWatch" name:object-"openshift-apiserver"/"audit-1" (11-Dec-2025 15:54:01.879) (total time: 16784ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[153231714]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/configmaps?fieldSelector=metadata.name%3Daudit-1&resourceVersion=109522": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:36442->38.102.83.147:6443: read: connection reset by peer 16784ms (15:54:18.663) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[153231714]: [16.784297545s] 
[16.784297545s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:18.663516 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"audit-1\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/configmaps?fieldSelector=metadata.name%3Daudit-1&resourceVersion=109522\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:36442->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:18.683208 5050 reflector.go:561] object-"openstack"/"rabbitmq-plugins-conf": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Drabbitmq-plugins-conf&resourceVersion=109602": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:36444->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:18.683305 5050 trace.go:236] Trace[548718711]: "Reflector ListAndWatch" name:object-"openstack"/"rabbitmq-plugins-conf" (11-Dec-2025 15:54:01.880) (total time: 16802ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[548718711]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Drabbitmq-plugins-conf&resourceVersion=109602": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:36444->38.102.83.147:6443: read: connection reset by peer 16802ms (15:54:18.683) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[548718711]: [16.802486512s] [16.802486512s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:18.683325 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"rabbitmq-plugins-conf\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Drabbitmq-plugins-conf&resourceVersion=109602\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:36444->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:18.702823 5050 reflector.go:561] object-"openshift-machine-config-operator"/"machine-config-operator-images": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/configmaps?fieldSelector=metadata.name%3Dmachine-config-operator-images&resourceVersion=109522": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:36406->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:18.702902 5050 trace.go:236] Trace[651037052]: "Reflector ListAndWatch" name:object-"openshift-machine-config-operator"/"machine-config-operator-images" (11-Dec-2025 15:54:01.872) (total time: 16830ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[651037052]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/configmaps?fieldSelector=metadata.name%3Dmachine-config-operator-images&resourceVersion=109522": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous 
attempt: read tcp 38.102.83.147:36406->38.102.83.147:6443: read: connection reset by peer 16830ms (15:54:18.702) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[651037052]: [16.830530443s] [16.830530443s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:18.702920 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"machine-config-operator-images\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/configmaps?fieldSelector=metadata.name%3Dmachine-config-operator-images&resourceVersion=109522\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:36406->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:18.723631 5050 reflector.go:561] object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109762": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:36534->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:18.723747 5050 trace.go:236] Trace[1018258026]: "Reflector ListAndWatch" name:object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" (11-Dec-2025 15:54:01.886) (total time: 16837ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1018258026]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109762": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:36534->38.102.83.147:6443: read: connection reset by peer 16837ms (15:54:18.723) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1018258026]: [16.837296955s] [16.837296955s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:18.723774 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-kube-scheduler-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109762\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:36534->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:18.743379 5050 reflector.go:561] object-"openstack"/"metric-storage-alertmanager-dockercfg-c2fxf": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dmetric-storage-alertmanager-dockercfg-c2fxf&resourceVersion=109846": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:36458->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:18.743480 5050 trace.go:236] Trace[1317189047]: "Reflector ListAndWatch" name:object-"openstack"/"metric-storage-alertmanager-dockercfg-c2fxf" (11-Dec-2025 15:54:01.883) (total time: 16859ms): Dec 11 15:54:41 
crc kubenswrapper[5050]: Trace[1317189047]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dmetric-storage-alertmanager-dockercfg-c2fxf&resourceVersion=109846": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:36458->38.102.83.147:6443: read: connection reset by peer 16859ms (15:54:18.743) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1317189047]: [16.859449558s] [16.859449558s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:18.743510 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"metric-storage-alertmanager-dockercfg-c2fxf\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dmetric-storage-alertmanager-dockercfg-c2fxf&resourceVersion=109846\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:36458->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:18.762643 5050 reflector.go:561] object-"openstack"/"octavia-certs-secret": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Doctavia-certs-secret&resourceVersion=109586": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:36472->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:18.762735 5050 trace.go:236] Trace[915524689]: "Reflector ListAndWatch" name:object-"openstack"/"octavia-certs-secret" (11-Dec-2025 15:54:01.884) (total time: 16878ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[915524689]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Doctavia-certs-secret&resourceVersion=109586": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:36472->38.102.83.147:6443: read: connection reset by peer 16878ms (15:54:18.762) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[915524689]: [16.878678123s] [16.878678123s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:18.762754 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"octavia-certs-secret\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Doctavia-certs-secret&resourceVersion=109586\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:36472->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:18.782843 5050 reflector.go:561] object-"openstack"/"ovsdbserver-sb": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dovsdbserver-sb&resourceVersion=109685": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:36430->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:18.785506 5050 trace.go:236] Trace[208854203]: "Reflector ListAndWatch" 
name:object-"openstack"/"ovsdbserver-sb" (11-Dec-2025 15:54:01.874) (total time: 16908ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[208854203]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dovsdbserver-sb&resourceVersion=109685": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:36430->38.102.83.147:6443: read: connection reset by peer 16908ms (15:54:18.782) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[208854203]: [16.908240545s] [16.908240545s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:18.785535 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"ovsdbserver-sb\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dovsdbserver-sb&resourceVersion=109685\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:36430->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:18.802502 5050 reflector.go:561] object-"openshift-controller-manager"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109602": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:36546->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:18.802602 5050 trace.go:236] Trace[1617225264]: "Reflector ListAndWatch" name:object-"openshift-controller-manager"/"kube-root-ca.crt" (11-Dec-2025 15:54:01.890) (total time: 16912ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1617225264]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109602": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:36546->38.102.83.147:6443: read: connection reset by peer 16912ms (15:54:18.802) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1617225264]: [16.912126649s] [16.912126649s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:18.802623 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109602\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:36546->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:18.823372 5050 reflector.go:561] object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/secrets?fieldSelector=metadata.name%3Dkube-controller-manager-operator-dockercfg-gkqpw&resourceVersion=109322": dial tcp 38.102.83.147:6443: connect: connection refused - error 
from a previous attempt: read tcp 38.102.83.147:36382->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:18.823422 5050 trace.go:236] Trace[730958435]: "Reflector ListAndWatch" name:object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" (11-Dec-2025 15:54:01.863) (total time: 16959ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[730958435]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/secrets?fieldSelector=metadata.name%3Dkube-controller-manager-operator-dockercfg-gkqpw&resourceVersion=109322": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:36382->38.102.83.147:6443: read: connection reset by peer 16959ms (15:54:18.823) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[730958435]: [16.959916529s] [16.959916529s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:18.823434 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-dockercfg-gkqpw\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/secrets?fieldSelector=metadata.name%3Dkube-controller-manager-operator-dockercfg-gkqpw&resourceVersion=109322\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:36382->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:18.842708 5050 reflector.go:561] object-"openshift-authentication-operator"/"service-ca-bundle": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication-operator/configmaps?fieldSelector=metadata.name%3Dservice-ca-bundle&resourceVersion=109762": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:36570->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:18.842794 5050 trace.go:236] Trace[2106038589]: "Reflector ListAndWatch" name:object-"openshift-authentication-operator"/"service-ca-bundle" (11-Dec-2025 15:54:01.892) (total time: 16949ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[2106038589]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication-operator/configmaps?fieldSelector=metadata.name%3Dservice-ca-bundle&resourceVersion=109762": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:36570->38.102.83.147:6443: read: connection reset by peer 16949ms (15:54:18.842) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[2106038589]: [16.949973983s] [16.949973983s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:18.842814 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication-operator\"/\"service-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication-operator/configmaps?fieldSelector=metadata.name%3Dservice-ca-bundle&resourceVersion=109762\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:36570->38.102.83.147:6443: read: connection reset by 
peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:18.862676 5050 reflector.go:561] object-"openshift-image-registry"/"image-registry-operator-tls": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/secrets?fieldSelector=metadata.name%3Dimage-registry-operator-tls&resourceVersion=109803": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:36416->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:18.862779 5050 trace.go:236] Trace[1643791728]: "Reflector ListAndWatch" name:object-"openshift-image-registry"/"image-registry-operator-tls" (11-Dec-2025 15:54:01.874) (total time: 16988ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1643791728]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/secrets?fieldSelector=metadata.name%3Dimage-registry-operator-tls&resourceVersion=109803": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:36416->38.102.83.147:6443: read: connection reset by peer 16988ms (15:54:18.862) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1643791728]: [16.988164926s] [16.988164926s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:18.862804 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-image-registry\"/\"image-registry-operator-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/secrets?fieldSelector=metadata.name%3Dimage-registry-operator-tls&resourceVersion=109803\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:36416->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:18.882998 5050 reflector.go:561] object-"openshift-dns"/"node-resolver-dockercfg-kz9s7": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-dns/secrets?fieldSelector=metadata.name%3Dnode-resolver-dockercfg-kz9s7&resourceVersion=109803": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:36438->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:18.883104 5050 trace.go:236] Trace[2061327927]: "Reflector ListAndWatch" name:object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" (11-Dec-2025 15:54:01.875) (total time: 17007ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[2061327927]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-dns/secrets?fieldSelector=metadata.name%3Dnode-resolver-dockercfg-kz9s7&resourceVersion=109803": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:36438->38.102.83.147:6443: read: connection reset by peer 17007ms (15:54:18.882) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[2061327927]: [17.007279898s] [17.007279898s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:18.883126 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-dns\"/\"node-resolver-dockercfg-kz9s7\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-dns/secrets?fieldSelector=metadata.name%3Dnode-resolver-dockercfg-kz9s7&resourceVersion=109803\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:36438->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:18.903062 5050 reflector.go:561] object-"openstack"/"dns": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Ddns&resourceVersion=109762": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:36518->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:18.903138 5050 trace.go:236] Trace[1026136920]: "Reflector ListAndWatch" name:object-"openstack"/"dns" (11-Dec-2025 15:54:01.886) (total time: 17016ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1026136920]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Ddns&resourceVersion=109762": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:36518->38.102.83.147:6443: read: connection reset by peer 17016ms (15:54:18.903) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[1026136920]: [17.016716101s] [17.016716101s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:18.903156 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"dns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Ddns&resourceVersion=109762\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:36518->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:18.922994 5050 reflector.go:561] object-"openshift-dns"/"dns-default-metrics-tls": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-dns/secrets?fieldSelector=metadata.name%3Ddns-default-metrics-tls&resourceVersion=109586": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:36558->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:18.923067 5050 trace.go:236] Trace[189112308]: "Reflector ListAndWatch" name:object-"openshift-dns"/"dns-default-metrics-tls" (11-Dec-2025 15:54:01.891) (total time: 17031ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[189112308]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-dns/secrets?fieldSelector=metadata.name%3Ddns-default-metrics-tls&resourceVersion=109586": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:36558->38.102.83.147:6443: read: connection reset by peer 17031ms (15:54:18.922) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[189112308]: [17.031409385s] [17.031409385s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:18.923079 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-dns\"/\"dns-default-metrics-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-dns/secrets?fieldSelector=metadata.name%3Ddns-default-metrics-tls&resourceVersion=109586\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:36558->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:18.932243 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="7fb00b03-fe6e-4c66-bd36-adf9443871a8" containerName="prometheus" probeResult="failure" output="Get \"http://10.217.1.135:9090/-/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:18.942389 5050 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openstack/ovndbcluster-sb-etc-ovn-ovsdbserver-sb-2: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/persistentvolumeclaims/ovndbcluster-sb-etc-ovn-ovsdbserver-sb-2\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:36604->38.102.83.147:6443: read: connection reset by peer" pod="openstack/ovsdbserver-sb-2" volumeName="ovndbcluster-sb-etc-ovn" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:18.963177 5050 request.go:700] Waited for 7.072811258s, retries: 1, retry-after: 1s - retry-reason: due to retryable error, error: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109522": read tcp 38.102.83.147:36578->38.102.83.147:6443: read: connection reset by peer - request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109522 Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:18.963476 5050 reflector.go:561] object-"openshift-console"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109522": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:36578->38.102.83.147:6443: read: connection reset by peer Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:18.963539 5050 trace.go:236] Trace[118594553]: "Reflector ListAndWatch" name:object-"openshift-console"/"openshift-service-ca.crt" (11-Dec-2025 15:54:01.897) (total time: 17066ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[118594553]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109522": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:36578->38.102.83.147:6443: read: connection reset by peer 17066ms (15:54:18.963) Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[118594553]: [17.066438643s] [17.066438643s] END Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:18.963560 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-console\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109522\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:36578->38.102.83.147:6443: read: connection reset by peer" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:18.982423 5050 status_manager.go:851] "Failed to get status for pod" podUID="dbd5b107-5d08-43af-881c-11540f395267" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r54sd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/packageserver-d55dfcdfc-r54sd\": dial tcp 38.102.83.147:6443: connect: connection refused - error from a previous attempt: read tcp 38.102.83.147:36592->38.102.83.147:6443: read: connection reset by peer" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:19.003418 5050 reflector.go:561] object-"openstack"/"alertmanager-metric-storage-generated": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dalertmanager-metric-storage-generated&resourceVersion=109846": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:19.003462 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"alertmanager-metric-storage-generated\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dalertmanager-metric-storage-generated&resourceVersion=109846\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:19.022662 5050 reflector.go:561] object-"openstack"/"rabbitmq-server-conf": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Drabbitmq-server-conf&resourceVersion=109762": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:19.022723 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"rabbitmq-server-conf\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Drabbitmq-server-conf&resourceVersion=109762\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:19.043224 5050 reflector.go:561] object-"openshift-console-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109888": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:19.043270 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-console-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109888\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc 
kubenswrapper[5050]: W1211 15:54:19.062769 5050 reflector.go:561] object-"openstack"/"openstack-cell1-dockercfg-mvxd9": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dopenstack-cell1-dockercfg-mvxd9&resourceVersion=109671": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:19.062846 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"openstack-cell1-dockercfg-mvxd9\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dopenstack-cell1-dockercfg-mvxd9&resourceVersion=109671\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:19.082430 5050 reflector.go:561] object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-api/secrets?fieldSelector=metadata.name%3Dcontrol-plane-machine-set-operator-dockercfg-k9rxt&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:19.082463 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-dockercfg-k9rxt\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-api/secrets?fieldSelector=metadata.name%3Dcontrol-plane-machine-set-operator-dockercfg-k9rxt&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:19.102721 5050 reflector.go:561] object-"openshift-cluster-machine-approver"/"kube-rbac-proxy": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-machine-approver/configmaps?fieldSelector=metadata.name%3Dkube-rbac-proxy&resourceVersion=109851": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:19.102809 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-machine-approver\"/\"kube-rbac-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-machine-approver/configmaps?fieldSelector=metadata.name%3Dkube-rbac-proxy&resourceVersion=109851\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:19.123030 5050 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?resourceVersion=109167": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:19.123106 5050 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?resourceVersion=109167\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:19.142577 5050 reflector.go:561] 
object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/secrets?fieldSelector=metadata.name%3Droute-controller-manager-sa-dockercfg-h2zr2&resourceVersion=109846": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:19.142631 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-route-controller-manager\"/\"route-controller-manager-sa-dockercfg-h2zr2\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/secrets?fieldSelector=metadata.name%3Droute-controller-manager-sa-dockercfg-h2zr2&resourceVersion=109846\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:19.163237 5050 reflector.go:561] object-"openshift-nmstate"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-nmstate/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109762": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:19.163282 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-nmstate\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-nmstate/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109762\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:19.184409 5050 reflector.go:561] object-"openstack"/"openstack-scripts": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dopenstack-scripts&resourceVersion=109762": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:19.184458 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"openstack-scripts\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dopenstack-scripts&resourceVersion=109762\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:19.210003 5050 reflector.go:561] object-"openshift-apiserver"/"trusted-ca-bundle": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/configmaps?fieldSelector=metadata.name%3Dtrusted-ca-bundle&resourceVersion=109602": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:19.210075 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"trusted-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/configmaps?fieldSelector=metadata.name%3Dtrusted-ca-bundle&resourceVersion=109602\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:19.231369 5050 reflector.go:561] 
object-"openshift-image-registry"/"installation-pull-secrets": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/secrets?fieldSelector=metadata.name%3Dinstallation-pull-secrets&resourceVersion=109846": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:19.231451 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-image-registry\"/\"installation-pull-secrets\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/secrets?fieldSelector=metadata.name%3Dinstallation-pull-secrets&resourceVersion=109846\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:19.242713 5050 reflector.go:561] object-"openshift-marketplace"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109067": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:19.242794 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-marketplace\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109067\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:19.262913 5050 reflector.go:561] object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmachine-config-server-dockercfg-qx5rd&resourceVersion=109803": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:19.262982 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"machine-config-server-dockercfg-qx5rd\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmachine-config-server-dockercfg-qx5rd&resourceVersion=109803\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:19.273261 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5c747c4-l2hpx" podUID="f5bec9f7-072c-4c21-80ea-af9f59313eef" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.88:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:19.273413 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5c747c4-l2hpx" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:19.282986 5050 reflector.go:561] object-"openstack"/"horizon-config-data": failed to list *v1.ConfigMap: Get 
"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dhorizon-config-data&resourceVersion=109522": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:19.283069 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"horizon-config-data\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dhorizon-config-data&resourceVersion=109522\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:19.303228 5050 reflector.go:561] object-"openshift-console"/"console-oauth-config": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/secrets?fieldSelector=metadata.name%3Dconsole-oauth-config&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:19.303300 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-console\"/\"console-oauth-config\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/secrets?fieldSelector=metadata.name%3Dconsole-oauth-config&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:19.323417 5050 reflector.go:561] object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dopenstack-baremetal-operator-webhook-server-cert&resourceVersion=109803": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:19.323471 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"openstack-baremetal-operator-webhook-server-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dopenstack-baremetal-operator-webhook-server-cert&resourceVersion=109803\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:19.342783 5050 reflector.go:561] object-"openshift-ingress"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109685": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:19.342847 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-ingress\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109685\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:19.362514 5050 reflector.go:561] object-"openshift-authentication"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get 
"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109762": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:19.362583 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109762\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:19.382538 5050 reflector.go:561] object-"openshift-network-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109888": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:19.382586 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109888\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:19.402697 5050 reflector.go:561] object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ovn-kubernetes/secrets?fieldSelector=metadata.name%3Dovn-kubernetes-node-dockercfg-pwtwl&resourceVersion=109671": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:19.402773 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-node-dockercfg-pwtwl\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ovn-kubernetes/secrets?fieldSelector=metadata.name%3Dovn-kubernetes-node-dockercfg-pwtwl&resourceVersion=109671\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:19.423431 5050 reflector.go:561] object-"openstack"/"octavia-worker-config-data": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Doctavia-worker-config-data&resourceVersion=109671": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:19.423501 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"octavia-worker-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Doctavia-worker-config-data&resourceVersion=109671\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:19.443549 5050 reflector.go:561] object-"openshift-apiserver-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get 
"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109888": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:19.443670 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109888\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:19.462686 5050 reflector.go:561] object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/secrets?fieldSelector=metadata.name%3Dcatalog-operator-serving-cert&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:19.462801 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-operator-lifecycle-manager\"/\"catalog-operator-serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/secrets?fieldSelector=metadata.name%3Dcatalog-operator-serving-cert&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:19.482713 5050 reflector.go:561] object-"openshift-multus"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-multus/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109522": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:19.482782 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-multus/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109522\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:19.503373 5050 reflector.go:561] object-"openshift-machine-config-operator"/"mco-proxy-tls": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmco-proxy-tls&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:19.503451 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"mco-proxy-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmco-proxy-tls&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:19.523371 5050 reflector.go:561] object-"openstack"/"horizon-scripts": failed to list *v1.ConfigMap: Get 
"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dhorizon-scripts&resourceVersion=109522": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:19.523450 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"horizon-scripts\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dhorizon-scripts&resourceVersion=109522\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:19.543461 5050 reflector.go:561] object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-2bw5c": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dneutron-operator-controller-manager-dockercfg-2bw5c&resourceVersion=109623": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:19.543528 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"neutron-operator-controller-manager-dockercfg-2bw5c\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dneutron-operator-controller-manager-dockercfg-2bw5c&resourceVersion=109623\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:19.562949 5050 reflector.go:561] object-"openstack"/"dns-svc": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Ddns-svc&resourceVersion=109559": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:19.563047 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"dns-svc\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Ddns-svc&resourceVersion=109559\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:19.582956 5050 reflector.go:561] object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/secrets?fieldSelector=metadata.name%3Dopenshift-apiserver-sa-dockercfg-djjff&resourceVersion=109407": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:19.583044 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"openshift-apiserver-sa-dockercfg-djjff\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/secrets?fieldSelector=metadata.name%3Dopenshift-apiserver-sa-dockercfg-djjff&resourceVersion=109407\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:19.603222 5050 reflector.go:561] object-"openshift-dns"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get 
"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-dns/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109888": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:19.603300 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-dns\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-dns/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109888\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:19.623068 5050 reflector.go:561] object-"openstack"/"rabbitmq-default-user": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Drabbitmq-default-user&resourceVersion=109743": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:19.623124 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"rabbitmq-default-user\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Drabbitmq-default-user&resourceVersion=109743\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:19.642979 5050 reflector.go:561] object-"openshift-image-registry"/"node-ca-dockercfg-4777p": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/secrets?fieldSelector=metadata.name%3Dnode-ca-dockercfg-4777p&resourceVersion=109434": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:19.643065 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-image-registry\"/\"node-ca-dockercfg-4777p\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/secrets?fieldSelector=metadata.name%3Dnode-ca-dockercfg-4777p&resourceVersion=109434\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:19.662907 5050 reflector.go:561] object-"openshift-console-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109762": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:19.662960 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-console-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109762\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:19.682379 5050 reflector.go:561] object-"openshift-cluster-version"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get 
"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-version/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109559": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:19.682430 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-version\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-version/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109559\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:19.703641 5050 reflector.go:561] object-"openshift-controller-manager"/"openshift-global-ca": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/configmaps?fieldSelector=metadata.name%3Dopenshift-global-ca&resourceVersion=109685": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:19.703715 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"openshift-global-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/configmaps?fieldSelector=metadata.name%3Dopenshift-global-ca&resourceVersion=109685\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:19.722817 5050 reflector.go:561] object-"openstack"/"cinder-backup-config-data": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcinder-backup-config-data&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:19.722918 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"cinder-backup-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcinder-backup-config-data&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:19.738150 5050 patch_prober.go:28] interesting pod/dns-default-25p7l container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.217.0.27:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:19.738194 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-25p7l" podUID="66dfe3f5-9e7a-4e1c-b03a-b6f01dab6ef4" containerName="dns" probeResult="failure" output="Get \"http://10.217.0.27:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:19.742567 5050 reflector.go:561] object-"openshift-network-diagnostics"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109888": dial tcp 38.102.83.147:6443: connect: connection 
refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:19.742609 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109888\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:19.762824 5050 reflector.go:561] object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/secrets?fieldSelector=metadata.name%3Dopenshift-controller-manager-sa-dockercfg-msq4c&resourceVersion=109671": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:19.762902 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"openshift-controller-manager-sa-dockercfg-msq4c\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/secrets?fieldSelector=metadata.name%3Dopenshift-controller-manager-sa-dockercfg-msq4c&resourceVersion=109671\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:19.783212 5050 reflector.go:561] object-"openstack"/"alertmanager-metric-storage-cluster-tls-config": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dalertmanager-metric-storage-cluster-tls-config&resourceVersion=109846": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:19.783262 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"alertmanager-metric-storage-cluster-tls-config\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dalertmanager-metric-storage-cluster-tls-config&resourceVersion=109846\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:19.802876 5050 reflector.go:561] object-"openshift-service-ca-operator"/"serving-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca-operator/secrets?fieldSelector=metadata.name%3Dserving-cert&resourceVersion=109671": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:19.802944 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-service-ca-operator\"/\"serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca-operator/secrets?fieldSelector=metadata.name%3Dserving-cert&resourceVersion=109671\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:19.822352 5050 reflector.go:561] object-"openstack-operators"/"openstack-operator-index-dockercfg-f86tg": failed to list *v1.Secret: Get 
"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dopenstack-operator-index-dockercfg-f86tg&resourceVersion=109137": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:19.822420 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"openstack-operator-index-dockercfg-f86tg\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dopenstack-operator-index-dockercfg-f86tg&resourceVersion=109137\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:19.843353 5050 reflector.go:561] object-"openshift-etcd-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-etcd-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109158": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:19.843403 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-etcd-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-etcd-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109158\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:19.863105 5050 reflector.go:561] object-"openshift-oauth-apiserver"/"serving-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-oauth-apiserver/secrets?fieldSelector=metadata.name%3Dserving-cert&resourceVersion=109846": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:19.863174 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-oauth-apiserver\"/\"serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-oauth-apiserver/secrets?fieldSelector=metadata.name%3Dserving-cert&resourceVersion=109846\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:19.890747 5050 reflector.go:561] object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ovn-kubernetes/secrets?fieldSelector=metadata.name%3Dovn-control-plane-metrics-cert&resourceVersion=109671": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:19.890830 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"ovn-control-plane-metrics-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ovn-kubernetes/secrets?fieldSelector=metadata.name%3Dovn-control-plane-metrics-cert&resourceVersion=109671\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:19.903058 5050 reflector.go:561] object-"openshift-controller-manager"/"serving-cert": failed to list *v1.Secret: Get 
"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/secrets?fieldSelector=metadata.name%3Dserving-cert&resourceVersion=109137": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:19.903117 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/secrets?fieldSelector=metadata.name%3Dserving-cert&resourceVersion=109137\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:19.922540 5050 reflector.go:561] object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-dns-operator/secrets?fieldSelector=metadata.name%3Ddns-operator-dockercfg-9mqw5&resourceVersion=109382": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:19.922610 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-dns-operator\"/\"dns-operator-dockercfg-9mqw5\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-dns-operator/secrets?fieldSelector=metadata.name%3Ddns-operator-dockercfg-9mqw5&resourceVersion=109382\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:19.943248 5050 reflector.go:561] object-"cert-manager-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/cert-manager-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109762": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:19.943323 5050 reflector.go:158] "Unhandled Error" err="object-\"cert-manager-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/cert-manager-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109762\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:19.963019 5050 reflector.go:561] object-"openstack"/"ovnnorthd-scripts": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dovnnorthd-scripts&resourceVersion=109559": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:19.963086 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"ovnnorthd-scripts\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dovnnorthd-scripts&resourceVersion=109559\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:19.982443 5050 request.go:700] Waited for 5.715227178s due to client-side throttling, not priority and fairness, request: 
GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109762 Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:19.982905 5050 reflector.go:561] object-"openshift-service-ca"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109762": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:19.982980 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-service-ca\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109762\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:20.002764 5050 reflector.go:561] object-"openshift-ingress-canary"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-canary/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109559": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:20.002821 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-ingress-canary\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-canary/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109559\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:20.022772 5050 reflector.go:561] object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109158": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:20.022835 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-kube-apiserver-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109158\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:20.042775 5050 reflector.go:561] object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/secrets?fieldSelector=metadata.name%3Dpackageserver-service-cert&resourceVersion=109671": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:20.042840 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-operator-lifecycle-manager\"/\"packageserver-service-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/secrets?fieldSelector=metadata.name%3Dpackageserver-service-cert&resourceVersion=109671\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:20.062829 5050 reflector.go:561] object-"metallb-system"/"speaker-certs-secret": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/secrets?fieldSelector=metadata.name%3Dspeaker-certs-secret&resourceVersion=109846": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:20.062905 5050 reflector.go:158] "Unhandled Error" err="object-\"metallb-system\"/\"speaker-certs-secret\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/secrets?fieldSelector=metadata.name%3Dspeaker-certs-secret&resourceVersion=109846\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:20.082855 5050 reflector.go:561] object-"openstack"/"cinder-cinder-dockercfg-lvj2r": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcinder-cinder-dockercfg-lvj2r&resourceVersion=109803": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:20.082922 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"cinder-cinder-dockercfg-lvj2r\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcinder-cinder-dockercfg-lvj2r&resourceVersion=109803\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:20.102835 5050 reflector.go:561] object-"openshift-network-node-identity"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109158": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:20.102899 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-node-identity\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109158\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:20.122653 5050 reflector.go:561] object-"openstack-operators"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109888": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:20.122713 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109888\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:20.142904 5050 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&resourceVersion=109780": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:20.142974 5050 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&resourceVersion=109780\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:20.162660 5050 reflector.go:561] object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-n57x7": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dnova-operator-controller-manager-dockercfg-n57x7&resourceVersion=109887": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:20.162724 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"nova-operator-controller-manager-dockercfg-n57x7\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dnova-operator-controller-manager-dockercfg-n57x7&resourceVersion=109887\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:20.182821 5050 reflector.go:561] object-"openstack"/"keystone-config-data": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dkeystone-config-data&resourceVersion=109887": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:20.182891 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"keystone-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dkeystone-config-data&resourceVersion=109887\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:20.202629 5050 reflector.go:561] object-"openshift-ingress-canary"/"default-dockercfg-2llfx": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-canary/secrets?fieldSelector=metadata.name%3Ddefault-dockercfg-2llfx&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:20.202727 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-ingress-canary\"/\"default-dockercfg-2llfx\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-canary/secrets?fieldSelector=metadata.name%3Ddefault-dockercfg-2llfx&resourceVersion=109512\": dial 
tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:20.221199 5050 patch_prober.go:28] interesting pod/apiserver-7bbb656c7d-cnp7n container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:20.221258 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cnp7n" podUID="1d89350d-55e9-4ef6-8182-287894b6c14b" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.217.0.10:8443/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:20.221314 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="hostpath-provisioner/csi-hostpathplugin-mbbdj" podUID="2b344bba-8e1d-415b-9b5f-e21d3144fe42" containerName="hostpath-provisioner" probeResult="failure" output="Get \"http://10.217.0.31:9898/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:20.222825 5050 reflector.go:561] object-"openshift-operator-lifecycle-manager"/"pprof-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/secrets?fieldSelector=metadata.name%3Dpprof-cert&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:20.222876 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-operator-lifecycle-manager\"/\"pprof-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/secrets?fieldSelector=metadata.name%3Dpprof-cert&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:20.242602 5050 reflector.go:561] object-"openstack"/"combined-ca-bundle": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcombined-ca-bundle&resourceVersion=109803": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:20.242664 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"combined-ca-bundle\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcombined-ca-bundle&resourceVersion=109803\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:20.262487 5050 reflector.go:561] object-"openshift-apiserver"/"image-import-ca": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/configmaps?fieldSelector=metadata.name%3Dimage-import-ca&resourceVersion=109685": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:20.262557 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"image-import-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/configmaps?fieldSelector=metadata.name%3Dimage-import-ca&resourceVersion=109685\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:20.283215 5050 reflector.go:561] object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-storage-version-migrator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109888": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:20.283279 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-kube-storage-version-migrator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-storage-version-migrator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109888\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:20.303421 5050 reflector.go:561] object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver-operator/secrets?fieldSelector=metadata.name%3Dopenshift-apiserver-operator-serving-cert&resourceVersion=109846": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:20.303517 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver-operator/secrets?fieldSelector=metadata.name%3Dopenshift-apiserver-operator-serving-cert&resourceVersion=109846\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:20.315217 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5c747c4-l2hpx" podUID="f5bec9f7-072c-4c21-80ea-af9f59313eef" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.88:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:20.322416 5050 reflector.go:561] object-"openstack"/"octavia-worker-scripts": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Doctavia-worker-scripts&resourceVersion=109671": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:20.322495 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"octavia-worker-scripts\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Doctavia-worker-scripts&resourceVersion=109671\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:20.342523 5050 reflector.go:561] object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template": failed to list 
*v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/secrets?fieldSelector=metadata.name%3Dv4-0-config-system-ocp-branding-template&resourceVersion=109887": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:20.342578 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"v4-0-config-system-ocp-branding-template\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/secrets?fieldSelector=metadata.name%3Dv4-0-config-system-ocp-branding-template&resourceVersion=109887\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:20.362457 5050 reflector.go:561] object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-zlz4m": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dbarbican-operator-controller-manager-dockercfg-zlz4m&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:20.362534 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"barbican-operator-controller-manager-dockercfg-zlz4m\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dbarbican-operator-controller-manager-dockercfg-zlz4m&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:20.383288 5050 reflector.go:561] object-"openstack"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109642": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:20.383365 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109642\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:20.402946 5050 reflector.go:561] object-"openstack-operators"/"metrics-server-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dmetrics-server-cert&resourceVersion=109671": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:20.403033 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"metrics-server-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dmetrics-server-cert&resourceVersion=109671\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:20.422701 5050 reflector.go:561] 
object-"openstack"/"rabbitmq-cell1-server-conf": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Drabbitmq-cell1-server-conf&resourceVersion=109762": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:20.422759 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"rabbitmq-cell1-server-conf\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Drabbitmq-cell1-server-conf&resourceVersion=109762\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:20.442460 5050 reflector.go:561] object-"openshift-oauth-apiserver"/"trusted-ca-bundle": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-oauth-apiserver/configmaps?fieldSelector=metadata.name%3Dtrusted-ca-bundle&resourceVersion=109559": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:20.442522 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-oauth-apiserver\"/\"trusted-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-oauth-apiserver/configmaps?fieldSelector=metadata.name%3Dtrusted-ca-bundle&resourceVersion=109559\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:20.462694 5050 reflector.go:561] object-"openstack"/"nova-api-config-data": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dnova-api-config-data&resourceVersion=109137": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:20.462748 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"nova-api-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dnova-api-config-data&resourceVersion=109137\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:20.482998 5050 reflector.go:561] object-"openshift-console-operator"/"console-operator-config": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/configmaps?fieldSelector=metadata.name%3Dconsole-operator-config&resourceVersion=109807": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:20.483076 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-console-operator\"/\"console-operator-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/configmaps?fieldSelector=metadata.name%3Dconsole-operator-config&resourceVersion=109807\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:20.503399 5050 reflector.go:561] object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-whqpr": failed to list *v1.Secret: Get 
"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dovncluster-ovndbcluster-sb-dockercfg-whqpr&resourceVersion=109137": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:20.503473 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"ovncluster-ovndbcluster-sb-dockercfg-whqpr\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dovncluster-ovndbcluster-sb-dockercfg-whqpr&resourceVersion=109137\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:20.523213 5050 reflector.go:561] object-"openstack"/"octavia-rsyslog-scripts": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Doctavia-rsyslog-scripts&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:20.523287 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"octavia-rsyslog-scripts\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Doctavia-rsyslog-scripts&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:20.543107 5050 reflector.go:561] object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/secrets?fieldSelector=metadata.name%3Dopenshift-config-operator-dockercfg-7pc5z&resourceVersion=109307": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:20.543188 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-config-operator\"/\"openshift-config-operator-dockercfg-7pc5z\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/secrets?fieldSelector=metadata.name%3Dopenshift-config-operator-dockercfg-7pc5z&resourceVersion=109307\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:20.563439 5050 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:6443/readyz\": dial tcp 192.168.126.11:6443: connect: connection refused" start-of-body= Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:20.563492 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/readyz\": dial tcp 192.168.126.11:6443: connect: connection refused" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:20.563498 5050 reflector.go:561] object-"openshift-multus"/"multus-admission-controller-secret": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-multus/secrets?fieldSelector=metadata.name%3Dmultus-admission-controller-secret&resourceVersion=109512": 
dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:20.563562 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"multus-admission-controller-secret\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-multus/secrets?fieldSelector=metadata.name%3Dmultus-admission-controller-secret&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:20.583513 5050 reflector.go:561] object-"openshift-machine-api"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-api/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109522": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:20.583621 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-api/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109522\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:20.602405 5050 reflector.go:561] object-"openshift-machine-config-operator"/"proxy-tls": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dproxy-tls&resourceVersion=109671": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:20.602460 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"proxy-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dproxy-tls&resourceVersion=109671\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:20.623352 5050 reflector.go:561] object-"openstack"/"barbican-api-config-data": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dbarbican-api-config-data&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:20.623429 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"barbican-api-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dbarbican-api-config-data&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:20.642920 5050 reflector.go:561] object-"openstack"/"ceilometer-scripts": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dceilometer-scripts&resourceVersion=109137": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:20.642964 5050 reflector.go:158] "Unhandled Error" 
err="object-\"openstack\"/\"ceilometer-scripts\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dceilometer-scripts&resourceVersion=109137\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:20.643155 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-operator-799b66f579-tqs2v" podUID="53a1a6d1-6999-4fa1-a0ce-a20b83f1f347" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.73:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:20.663180 5050 reflector.go:561] object-"openstack"/"alertmanager-metric-storage-tls-assets-0": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dalertmanager-metric-storage-tls-assets-0&resourceVersion=109846": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:20.663258 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"alertmanager-metric-storage-tls-assets-0\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dalertmanager-metric-storage-tls-assets-0&resourceVersion=109846\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:20.682486 5050 reflector.go:561] object-"openstack"/"barbican-keystone-listener-config-data": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dbarbican-keystone-listener-config-data&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:20.682556 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"barbican-keystone-listener-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dbarbican-keystone-listener-config-data&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:20.703645 5050 reflector.go:561] object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-multus/secrets?fieldSelector=metadata.name%3Dmetrics-daemon-sa-dockercfg-d427c&resourceVersion=109743": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:20.703732 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"metrics-daemon-sa-dockercfg-d427c\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-multus/secrets?fieldSelector=metadata.name%3Dmetrics-daemon-sa-dockercfg-d427c&resourceVersion=109743\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:20.722546 5050 reflector.go:561] object-"openshift-multus"/"cni-copy-resources": failed 
to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-multus/configmaps?fieldSelector=metadata.name%3Dcni-copy-resources&resourceVersion=109888": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:20.722604 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"cni-copy-resources\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-multus/configmaps?fieldSelector=metadata.name%3Dcni-copy-resources&resourceVersion=109888\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:20.743231 5050 reflector.go:561] object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109602": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:20.743290 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-kube-controller-manager-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109602\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:20.763595 5050 reflector.go:561] object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-djswv": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Doctavia-operator-controller-manager-dockercfg-djswv&resourceVersion=109803": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:20.763655 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"octavia-operator-controller-manager-dockercfg-djswv\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Doctavia-operator-controller-manager-dockercfg-djswv&resourceVersion=109803\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:20.782424 5050 reflector.go:561] object-"openshift-network-node-identity"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109559": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:20.782481 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-node-identity\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109559\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 
15:54:41 crc kubenswrapper[5050]: W1211 15:54:20.802648 5050 reflector.go:561] object-"openshift-oauth-apiserver"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-oauth-apiserver/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109117": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:20.802711 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-oauth-apiserver\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-oauth-apiserver/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109117\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:20.823465 5050 reflector.go:561] object-"openshift-multus"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-multus/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109522": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:20.823527 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-multus/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109522\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:20.842531 5050 reflector.go:561] object-"openshift-ingress"/"router-dockercfg-zdk86": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress/secrets?fieldSelector=metadata.name%3Drouter-dockercfg-zdk86&resourceVersion=109671": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:20.842600 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-ingress\"/\"router-dockercfg-zdk86\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress/secrets?fieldSelector=metadata.name%3Drouter-dockercfg-zdk86&resourceVersion=109671\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:20.862806 5050 reflector.go:561] object-"openstack"/"nova-cell1-conductor-config-data": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dnova-cell1-conductor-config-data&resourceVersion=109137": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:20.862934 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"nova-cell1-conductor-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dnova-cell1-conductor-config-data&resourceVersion=109137\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:20.883449 5050 reflector.go:561] 
object-"openshift-network-node-identity"/"ovnkube-identity-cm": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/configmaps?fieldSelector=metadata.name%3Dovnkube-identity-cm&resourceVersion=109602": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:20.883520 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-node-identity\"/\"ovnkube-identity-cm\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/configmaps?fieldSelector=metadata.name%3Dovnkube-identity-cm&resourceVersion=109602\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:20.890574 5050 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/events\": dial tcp 38.102.83.147:6443: connect: connection refused" event=< Dec 11 15:54:41 crc kubenswrapper[5050]: &Event{ObjectMeta:{packageserver-d55dfcdfc-r54sd.18803427859863d6 openshift-operator-lifecycle-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-operator-lifecycle-manager,Name:packageserver-d55dfcdfc-r54sd,UID:dbd5b107-5d08-43af-881c-11540f395267,APIVersion:v1,ResourceVersion:27090,FieldPath:spec.containers{packageserver},},Reason:ProbeError,Message:Liveness probe error: Get "https://10.217.0.23:5443/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Dec 11 15:54:41 crc kubenswrapper[5050]: body: Dec 11 15:54:41 crc kubenswrapper[5050]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 15:53:18.278960086 +0000 UTC m=+7489.122682672,LastTimestamp:2025-12-11 15:53:18.278960086 +0000 UTC m=+7489.122682672,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Dec 11 15:54:41 crc kubenswrapper[5050]: > Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:20.903311 5050 reflector.go:561] object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler-operator/secrets?fieldSelector=metadata.name%3Dkube-scheduler-operator-serving-cert&resourceVersion=109887": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:20.903393 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-kube-scheduler-operator\"/\"kube-scheduler-operator-serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler-operator/secrets?fieldSelector=metadata.name%3Dkube-scheduler-operator-serving-cert&resourceVersion=109887\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:20.923614 5050 reflector.go:561] object-"openstack"/"neutron-neutron-dockercfg-ctkbt": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dneutron-neutron-dockercfg-ctkbt&resourceVersion=109846": dial tcp 
38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:20.923681 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"neutron-neutron-dockercfg-ctkbt\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dneutron-neutron-dockercfg-ctkbt&resourceVersion=109846\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:20.942630 5050 reflector.go:561] object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca-operator/secrets?fieldSelector=metadata.name%3Dservice-ca-operator-dockercfg-rg9jl&resourceVersion=109623": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:20.942692 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-service-ca-operator\"/\"service-ca-operator-dockercfg-rg9jl\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca-operator/secrets?fieldSelector=metadata.name%3Dservice-ca-operator-dockercfg-rg9jl&resourceVersion=109623\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:20.963100 5050 reflector.go:561] object-"openstack"/"rabbitmq-cell1-plugins-conf": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Drabbitmq-cell1-plugins-conf&resourceVersion=109522": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:20.963219 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"rabbitmq-cell1-plugins-conf\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Drabbitmq-cell1-plugins-conf&resourceVersion=109522\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:20.982766 5050 request.go:700] Waited for 6.234974502s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operators/secrets?fieldSelector=metadata.name%3Dobo-prometheus-operator-admission-webhook-dockercfg-g8gr8&resourceVersion=109512 Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:20.983437 5050 reflector.go:561] object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-g8gr8": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operators/secrets?fieldSelector=metadata.name%3Dobo-prometheus-operator-admission-webhook-dockercfg-g8gr8&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:20.983520 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-operators\"/\"obo-prometheus-operator-admission-webhook-dockercfg-g8gr8\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operators/secrets?fieldSelector=metadata.name%3Dobo-prometheus-operator-admission-webhook-dockercfg-g8gr8&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:21.002743 5050 reflector.go:561] object-"openshift-console"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109559": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:21.002856 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-console\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109559\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:21.023575 5050 reflector.go:561] object-"openshift-ovn-kubernetes"/"env-overrides": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ovn-kubernetes/configmaps?fieldSelector=metadata.name%3Denv-overrides&resourceVersion=109762": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:21.023648 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"env-overrides\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ovn-kubernetes/configmaps?fieldSelector=metadata.name%3Denv-overrides&resourceVersion=109762\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:21.042671 5050 reflector.go:561] object-"openshift-operators"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operators/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109522": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:21.042760 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-operators\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operators/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109522\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:21.063473 5050 reflector.go:561] object-"openshift-machine-config-operator"/"machine-config-server-tls": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmachine-config-server-tls&resourceVersion=109887": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:21.063580 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"machine-config-server-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmachine-config-server-tls&resourceVersion=109887\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:21.083327 5050 reflector.go:561] object-"openstack"/"openstack-config-secret": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dopenstack-config-secret&resourceVersion=109671": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:21.083433 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"openstack-config-secret\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dopenstack-config-secret&resourceVersion=109671\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:21.102684 5050 reflector.go:561] object-"openstack"/"cinder-scheduler-config-data": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcinder-scheduler-config-data&resourceVersion=109671": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:21.102761 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"cinder-scheduler-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcinder-scheduler-config-data&resourceVersion=109671\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:21.122871 5050 reflector.go:561] object-"openstack"/"barbican-config-data": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dbarbican-config-data&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:21.123247 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"barbican-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dbarbican-config-data&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:21.143127 5050 reflector.go:561] object-"openshift-authentication"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109888": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:21.143201 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109888\": dial tcp 38.102.83.147:6443: connect: 
connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:21.163091 5050 reflector.go:561] object-"openshift-oauth-apiserver"/"etcd-serving-ca": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-oauth-apiserver/configmaps?fieldSelector=metadata.name%3Detcd-serving-ca&resourceVersion=109762": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:21.163164 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-oauth-apiserver\"/\"etcd-serving-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-oauth-apiserver/configmaps?fieldSelector=metadata.name%3Detcd-serving-ca&resourceVersion=109762\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:21.182441 5050 reflector.go:561] object-"openshift-authentication-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109685": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:21.182507 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109685\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:21.203394 5050 reflector.go:561] object-"openstack"/"ovncontroller-ovncontroller-dockercfg-fjq8l": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dovncontroller-ovncontroller-dockercfg-fjq8l&resourceVersion=109586": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:21.203460 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"ovncontroller-ovncontroller-dockercfg-fjq8l\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dovncontroller-ovncontroller-dockercfg-fjq8l&resourceVersion=109586\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:21.223196 5050 reflector.go:561] object-"openshift-config-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109762": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:21.223236 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-config-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109762\": dial tcp 
38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:21.242805 5050 reflector.go:561] object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109685": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:21.242838 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-operator-lifecycle-manager\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109685\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:21.262209 5050 reflector.go:561] object-"openshift-nmstate"/"openshift-nmstate-webhook": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-nmstate/secrets?fieldSelector=metadata.name%3Dopenshift-nmstate-webhook&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:21.262247 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-nmstate\"/\"openshift-nmstate-webhook\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-nmstate/secrets?fieldSelector=metadata.name%3Dopenshift-nmstate-webhook&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:21.282628 5050 reflector.go:561] object-"openstack"/"horizon": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dhorizon&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:21.282681 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"horizon\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dhorizon&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:21.303278 5050 reflector.go:561] object-"metallb-system"/"controller-dockercfg-5zwsv": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/secrets?fieldSelector=metadata.name%3Dcontroller-dockercfg-5zwsv&resourceVersion=109087": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:21.303338 5050 reflector.go:158] "Unhandled Error" err="object-\"metallb-system\"/\"controller-dockercfg-5zwsv\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/secrets?fieldSelector=metadata.name%3Dcontroller-dockercfg-5zwsv&resourceVersion=109087\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:21.322539 
5050 reflector.go:561] object-"openstack"/"keystone-keystone-dockercfg-jrbb7": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dkeystone-keystone-dockercfg-jrbb7&resourceVersion=109887": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:21.322582 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"keystone-keystone-dockercfg-jrbb7\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dkeystone-keystone-dockercfg-jrbb7&resourceVersion=109887\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:21.342341 5050 reflector.go:561] object-"openstack"/"heat-cfnapi-config-data": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dheat-cfnapi-config-data&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:21.342386 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"heat-cfnapi-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dheat-cfnapi-config-data&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:21.362951 5050 reflector.go:561] object-"openshift-nmstate"/"plugin-serving-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-nmstate/secrets?fieldSelector=metadata.name%3Dplugin-serving-cert&resourceVersion=109743": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:21.362994 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-nmstate\"/\"plugin-serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-nmstate/secrets?fieldSelector=metadata.name%3Dplugin-serving-cert&resourceVersion=109743\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:21.383131 5050 reflector.go:561] object-"openshift-ingress-operator"/"trusted-ca": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/configmaps?fieldSelector=metadata.name%3Dtrusted-ca&resourceVersion=109762": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:21.383210 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-ingress-operator\"/\"trusted-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/configmaps?fieldSelector=metadata.name%3Dtrusted-ca&resourceVersion=109762\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:21.402870 5050 reflector.go:561] object-"openshift-multus"/"default-dockercfg-2q5b6": failed to list *v1.Secret: Get 
"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-multus/secrets?fieldSelector=metadata.name%3Ddefault-dockercfg-2q5b6&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:21.402932 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"default-dockercfg-2q5b6\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-multus/secrets?fieldSelector=metadata.name%3Ddefault-dockercfg-2q5b6&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:21.422527 5050 reflector.go:561] object-"openshift-nmstate"/"default-dockercfg-vtnxn": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-nmstate/secrets?fieldSelector=metadata.name%3Ddefault-dockercfg-vtnxn&resourceVersion=109137": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:21.422600 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-nmstate\"/\"default-dockercfg-vtnxn\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-nmstate/secrets?fieldSelector=metadata.name%3Ddefault-dockercfg-vtnxn&resourceVersion=109137\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:21.443267 5050 reflector.go:561] object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operators/secrets?fieldSelector=metadata.name%3Dobo-prometheus-operator-admission-webhook-service-cert&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:21.443346 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-operators\"/\"obo-prometheus-operator-admission-webhook-service-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operators/secrets?fieldSelector=metadata.name%3Dobo-prometheus-operator-admission-webhook-service-cert&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:21.462881 5050 reflector.go:561] object-"openstack-operators"/"openstack-operator-controller-operator-dockercfg-68tnd": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dopenstack-operator-controller-operator-dockercfg-68tnd&resourceVersion=109586": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:21.462958 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"openstack-operator-controller-operator-dockercfg-68tnd\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dopenstack-operator-controller-operator-dockercfg-68tnd&resourceVersion=109586\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 
15:54:21.483070 5050 reflector.go:561] object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-7zqpj": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dmariadb-operator-controller-manager-dockercfg-7zqpj&resourceVersion=109846": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:21.483140 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"mariadb-operator-controller-manager-dockercfg-7zqpj\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dmariadb-operator-controller-manager-dockercfg-7zqpj&resourceVersion=109846\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:21.502682 5050 reflector.go:561] object-"metallb-system"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109522": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:21.502742 5050 reflector.go:158] "Unhandled Error" err="object-\"metallb-system\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109522\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:21.522678 5050 reflector.go:561] object-"openshift-dns"/"dns-dockercfg-jwfmh": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-dns/secrets?fieldSelector=metadata.name%3Ddns-dockercfg-jwfmh&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:21.522722 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-dns\"/\"dns-dockercfg-jwfmh\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-dns/secrets?fieldSelector=metadata.name%3Ddns-dockercfg-jwfmh&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:21.543659 5050 reflector.go:561] object-"openshift-authentication"/"v4-0-config-system-cliconfig": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/configmaps?fieldSelector=metadata.name%3Dv4-0-config-system-cliconfig&resourceVersion=109158": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:21.543742 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"v4-0-config-system-cliconfig\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/configmaps?fieldSelector=metadata.name%3Dv4-0-config-system-cliconfig&resourceVersion=109158\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc 
kubenswrapper[5050]: W1211 15:54:21.562903 5050 reflector.go:561] object-"openstack"/"ovndbcluster-sb-scripts": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dovndbcluster-sb-scripts&resourceVersion=109158": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:21.562971 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"ovndbcluster-sb-scripts\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dovndbcluster-sb-scripts&resourceVersion=109158\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:21.582926 5050 reflector.go:561] object-"openstack"/"nova-nova-dockercfg-mdjbl": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dnova-nova-dockercfg-mdjbl&resourceVersion=109671": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:21.582994 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"nova-nova-dockercfg-mdjbl\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dnova-nova-dockercfg-mdjbl&resourceVersion=109671\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:21.603226 5050 reflector.go:561] object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109602": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:21.603299 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-operator-lifecycle-manager\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109602\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:21.613795 5050 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Liveness probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:21.613851 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:21.613906 5050 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:21.614547 5050 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"696de0133c65c6e6ea70d6299312593ddfc638a01a5f4783ed9082195fb6fb31"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container kube-controller-manager failed liveness probe, will be restarted" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:21.614643 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" containerID="cri-o://696de0133c65c6e6ea70d6299312593ddfc638a01a5f4783ed9082195fb6fb31" gracePeriod=30 Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:21.622986 5050 reflector.go:561] object-"openstack"/"octavia-healthmanager-scripts": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Doctavia-healthmanager-scripts&resourceVersion=109586": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:21.623060 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"octavia-healthmanager-scripts\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Doctavia-healthmanager-scripts&resourceVersion=109586\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:21.643318 5050 reflector.go:561] object-"openshift-authentication-operator"/"trusted-ca-bundle": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication-operator/configmaps?fieldSelector=metadata.name%3Dtrusted-ca-bundle&resourceVersion=109807": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:21.643392 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication-operator\"/\"trusted-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication-operator/configmaps?fieldSelector=metadata.name%3Dtrusted-ca-bundle&resourceVersion=109807\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:21.662618 5050 reflector.go:561] object-"metallb-system"/"metallb-memberlist": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/secrets?fieldSelector=metadata.name%3Dmetallb-memberlist&resourceVersion=109887": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:21.662686 5050 reflector.go:158] "Unhandled Error" err="object-\"metallb-system\"/\"metallb-memberlist\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/secrets?fieldSelector=metadata.name%3Dmetallb-memberlist&resourceVersion=109887\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:21.682508 5050 reflector.go:561] 
object-"cert-manager"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/cert-manager/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109888": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:21.682593 5050 reflector.go:158] "Unhandled Error" err="object-\"cert-manager\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/cert-manager/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109888\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:21.702573 5050 reflector.go:561] object-"openshift-service-ca-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109888": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:21.702639 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-service-ca-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109888\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:21.722516 5050 reflector.go:561] object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/configmaps?fieldSelector=metadata.name%3Dkube-controller-manager-operator-config&resourceVersion=109158": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:21.722555 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/configmaps?fieldSelector=metadata.name%3Dkube-controller-manager-operator-config&resourceVersion=109158\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:21.743287 5050 reflector.go:561] object-"openshift-operators"/"perses-operator-dockercfg-xflrf": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operators/secrets?fieldSelector=metadata.name%3Dperses-operator-dockercfg-xflrf&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:21.743392 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-operators\"/\"perses-operator-dockercfg-xflrf\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operators/secrets?fieldSelector=metadata.name%3Dperses-operator-dockercfg-xflrf&resourceVersion=109512\": dial tcp 38.102.83.147:6443: 
connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:21.763124 5050 reflector.go:561] object-"openstack"/"barbican-barbican-dockercfg-fxl2b": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dbarbican-barbican-dockercfg-fxl2b&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:21.763187 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"barbican-barbican-dockercfg-fxl2b\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dbarbican-barbican-dockercfg-fxl2b&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:21.782954 5050 reflector.go:561] object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-oauth-apiserver/secrets?fieldSelector=metadata.name%3Doauth-apiserver-sa-dockercfg-6r2bq&resourceVersion=109501": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:21.783074 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-oauth-apiserver\"/\"oauth-apiserver-sa-dockercfg-6r2bq\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-oauth-apiserver/secrets?fieldSelector=metadata.name%3Doauth-apiserver-sa-dockercfg-6r2bq&resourceVersion=109501\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:21.803425 5050 reflector.go:561] object-"openshift-apiserver"/"etcd-serving-ca": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/configmaps?fieldSelector=metadata.name%3Detcd-serving-ca&resourceVersion=109807": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:21.803501 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"etcd-serving-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/configmaps?fieldSelector=metadata.name%3Detcd-serving-ca&resourceVersion=109807\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:21.822469 5050 reflector.go:561] object-"openshift-route-controller-manager"/"config": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/configmaps?fieldSelector=metadata.name%3Dconfig&resourceVersion=109602": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:21.822553 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-route-controller-manager\"/\"config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/configmaps?fieldSelector=metadata.name%3Dconfig&resourceVersion=109602\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 
11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:21.842403 5050 reflector.go:561] object-"openshift-console"/"oauth-serving-cert": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/configmaps?fieldSelector=metadata.name%3Doauth-serving-cert&resourceVersion=109067": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:21.842482 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-console\"/\"oauth-serving-cert\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/configmaps?fieldSelector=metadata.name%3Doauth-serving-cert&resourceVersion=109067\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:21.862510 5050 reflector.go:561] object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ovn-kubernetes/secrets?fieldSelector=metadata.name%3Dovn-kubernetes-control-plane-dockercfg-gs7dd&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:21.862558 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-control-plane-dockercfg-gs7dd\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ovn-kubernetes/secrets?fieldSelector=metadata.name%3Dovn-kubernetes-control-plane-dockercfg-gs7dd&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:21.882928 5050 reflector.go:561] object-"openstack"/"openstack-aee-default-env": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dopenstack-aee-default-env&resourceVersion=109685": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:21.883004 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"openstack-aee-default-env\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dopenstack-aee-default-env&resourceVersion=109685\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:21.902701 5050 reflector.go:561] object-"openshift-ingress"/"router-metrics-certs-default": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress/secrets?fieldSelector=metadata.name%3Drouter-metrics-certs-default&resourceVersion=109846": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:21.902754 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-ingress\"/\"router-metrics-certs-default\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress/secrets?fieldSelector=metadata.name%3Drouter-metrics-certs-default&resourceVersion=109846\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc 
kubenswrapper[5050]: W1211 15:54:21.922971 5050 reflector.go:561] object-"openstack"/"cinder-volume-volume1-config-data": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcinder-volume-volume1-config-data&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:21.923079 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"cinder-volume-volume1-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcinder-volume-volume1-config-data&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:21.943003 5050 reflector.go:561] object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-kube-scheduler-operator-config&resourceVersion=109158": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:21.943080 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-kube-scheduler-operator-config&resourceVersion=109158\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:21.963238 5050 reflector.go:561] object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/secrets?fieldSelector=metadata.name%3Dingress-operator-dockercfg-7lnqk&resourceVersion=109803": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:21.963318 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-ingress-operator\"/\"ingress-operator-dockercfg-7lnqk\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/secrets?fieldSelector=metadata.name%3Dingress-operator-dockercfg-7lnqk&resourceVersion=109803\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:21.983078 5050 reflector.go:561] object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-7bf58": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dswift-operator-controller-manager-dockercfg-7bf58&resourceVersion=109671": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:21.983154 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"swift-operator-controller-manager-dockercfg-7bf58\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dswift-operator-controller-manager-dockercfg-7bf58&resourceVersion=109671\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:21.991221 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/manila-share-share1-0" podUID="29321ad8-528b-46ed-8c14-21a74038cddb" containerName="manila-share" probeResult="failure" output="Get \"http://10.217.1.146:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:21.991235 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/manila-scheduler-0" podUID="9c5fd2fd-4df8-4f0f-982c-d3e6df852669" containerName="manila-scheduler" probeResult="failure" output="Get \"http://10.217.1.145:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:22.003664 5050 request.go:700] Waited for 6.653599647s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dironic-operator-controller-manager-dockercfg-qd9ll&resourceVersion=109846 Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:22.004102 5050 reflector.go:561] object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-qd9ll": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dironic-operator-controller-manager-dockercfg-qd9ll&resourceVersion=109846": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:22.004161 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"ironic-operator-controller-manager-dockercfg-qd9ll\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dironic-operator-controller-manager-dockercfg-qd9ll&resourceVersion=109846\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:22.023088 5050 reflector.go:561] object-"openshift-image-registry"/"image-registry-certificates": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/configmaps?fieldSelector=metadata.name%3Dimage-registry-certificates&resourceVersion=109522": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:22.023156 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-image-registry\"/\"image-registry-certificates\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/configmaps?fieldSelector=metadata.name%3Dimage-registry-certificates&resourceVersion=109522\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:22.043379 5050 reflector.go:561] object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg": failed to list *v1.Secret: Get 
"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/secrets?fieldSelector=metadata.name%3Dmarketplace-operator-dockercfg-5nsgg&resourceVersion=109803": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:22.043439 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-marketplace\"/\"marketplace-operator-dockercfg-5nsgg\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/secrets?fieldSelector=metadata.name%3Dmarketplace-operator-dockercfg-5nsgg&resourceVersion=109803\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:22.063368 5050 reflector.go:561] object-"openstack"/"horizon-horizon-dockercfg-d7bqh": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dhorizon-horizon-dockercfg-d7bqh&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:22.063440 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"horizon-horizon-dockercfg-d7bqh\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dhorizon-horizon-dockercfg-d7bqh&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:22.082951 5050 reflector.go:561] object-"openshift-dns"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-dns/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109807": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:22.084039 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-dns\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-dns/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109807\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:22.103201 5050 reflector.go:561] object-"openstack"/"dnsmasq-dns-dockercfg-kpmgv": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Ddnsmasq-dns-dockercfg-kpmgv&resourceVersion=109137": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:22.103246 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"dnsmasq-dns-dockercfg-kpmgv\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Ddnsmasq-dns-dockercfg-kpmgv&resourceVersion=109137\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:22.123081 5050 reflector.go:561] object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-4x88l": failed to list *v1.Secret: Get 
"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dovn-operator-controller-manager-dockercfg-4x88l&resourceVersion=109803": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:22.123121 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"ovn-operator-controller-manager-dockercfg-4x88l\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dovn-operator-controller-manager-dockercfg-4x88l&resourceVersion=109803\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:22.142875 5050 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&resourceVersion=109887": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:22.142933 5050 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&resourceVersion=109887\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:22.163256 5050 reflector.go:561] object-"openstack"/"octavia-healthmanager-config-data": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Doctavia-healthmanager-config-data&resourceVersion=109586": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:22.163327 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"octavia-healthmanager-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Doctavia-healthmanager-config-data&resourceVersion=109586\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:22.182632 5050 reflector.go:561] object-"openshift-multus"/"default-cni-sysctl-allowlist": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-multus/configmaps?fieldSelector=metadata.name%3Ddefault-cni-sysctl-allowlist&resourceVersion=109158": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:22.182693 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"default-cni-sysctl-allowlist\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-multus/configmaps?fieldSelector=metadata.name%3Ddefault-cni-sysctl-allowlist&resourceVersion=109158\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:22.203042 5050 reflector.go:561] object-"openstack"/"heat-engine-config-data": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dheat-engine-config-data&resourceVersion=109512": dial tcp 
38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:22.203082 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"heat-engine-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dheat-engine-config-data&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:22.222840 5050 reflector.go:561] object-"openshift-network-diagnostics"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109067": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:22.222894 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109067\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:22.243290 5050 reflector.go:561] object-"openshift-operators"/"observability-operator-tls": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operators/secrets?fieldSelector=metadata.name%3Dobservability-operator-tls&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:22.243371 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-operators\"/\"observability-operator-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operators/secrets?fieldSelector=metadata.name%3Dobservability-operator-tls&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:22.262985 5050 reflector.go:561] object-"openshift-cluster-version"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-version/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109685": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:22.263051 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-version\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-version/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109685\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:22.282389 5050 reflector.go:561] object-"openshift-config-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109522": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc 
kubenswrapper[5050]: E1211 15:54:22.282464 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-config-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109522\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:22.303123 5050 reflector.go:561] object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-storage-version-migrator/secrets?fieldSelector=metadata.name%3Dkube-storage-version-migrator-sa-dockercfg-5xfcg&resourceVersion=109237": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:22.303197 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-kube-storage-version-migrator\"/\"kube-storage-version-migrator-sa-dockercfg-5xfcg\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-storage-version-migrator/secrets?fieldSelector=metadata.name%3Dkube-storage-version-migrator-sa-dockercfg-5xfcg&resourceVersion=109237\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:22.322755 5050 reflector.go:561] object-"openshift-machine-config-operator"/"kube-rbac-proxy": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/configmaps?fieldSelector=metadata.name%3Dkube-rbac-proxy&resourceVersion=109851": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:22.322812 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"kube-rbac-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/configmaps?fieldSelector=metadata.name%3Dkube-rbac-proxy&resourceVersion=109851\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:22.342461 5050 reflector.go:561] object-"openstack"/"placement-scripts": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dplacement-scripts&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:22.342508 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"placement-scripts\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dplacement-scripts&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:22.363054 5050 reflector.go:561] object-"openshift-etcd-operator"/"etcd-operator-config": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-etcd-operator/configmaps?fieldSelector=metadata.name%3Detcd-operator-config&resourceVersion=109158": dial tcp 
38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:22.363117 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-etcd-operator\"/\"etcd-operator-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-etcd-operator/configmaps?fieldSelector=metadata.name%3Detcd-operator-config&resourceVersion=109158\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:22.383118 5050 reflector.go:561] object-"openstack"/"heat-config-data": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dheat-config-data&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:22.383178 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"heat-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dheat-config-data&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:22.403137 5050 reflector.go:561] object-"openshift-controller-manager-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109559": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:22.403205 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109559\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:22.415304 5050 patch_prober.go:28] interesting pod/dns-default-25p7l container/dns namespace/openshift-dns: Liveness probe status=failure output="Get \"http://10.217.0.27:8080/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:22.415352 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-dns/dns-default-25p7l" podUID="66dfe3f5-9e7a-4e1c-b03a-b6f01dab6ef4" containerName="dns" probeResult="failure" output="Get \"http://10.217.0.27:8080/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:22.423029 5050 reflector.go:561] object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-mc6vn": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Ddesignate-operator-controller-manager-dockercfg-mc6vn&resourceVersion=109087": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:22.423100 5050 reflector.go:158] "Unhandled Error" 
err="object-\"openstack-operators\"/\"designate-operator-controller-manager-dockercfg-mc6vn\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Ddesignate-operator-controller-manager-dockercfg-mc6vn&resourceVersion=109087\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:22.442886 5050 reflector.go:561] object-"openstack"/"prometheus-metric-storage-rulefiles-0": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dprometheus-metric-storage-rulefiles-0&resourceVersion=109888": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:22.442951 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"prometheus-metric-storage-rulefiles-0\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dprometheus-metric-storage-rulefiles-0&resourceVersion=109888\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:22.462826 5050 reflector.go:561] object-"openshift-network-node-identity"/"network-node-identity-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/secrets?fieldSelector=metadata.name%3Dnetwork-node-identity-cert&resourceVersion=109887": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:22.462888 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-node-identity\"/\"network-node-identity-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/secrets?fieldSelector=metadata.name%3Dnetwork-node-identity-cert&resourceVersion=109887\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:22.482784 5050 reflector.go:561] object-"cert-manager-operator"/"cert-manager-operator-controller-manager-dockercfg-p54cz": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/cert-manager-operator/secrets?fieldSelector=metadata.name%3Dcert-manager-operator-controller-manager-dockercfg-p54cz&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:22.482810 5050 reflector.go:158] "Unhandled Error" err="object-\"cert-manager-operator\"/\"cert-manager-operator-controller-manager-dockercfg-p54cz\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/cert-manager-operator/secrets?fieldSelector=metadata.name%3Dcert-manager-operator-controller-manager-dockercfg-p54cz&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:22.502436 5050 reflector.go:561] object-"openstack-operators"/"infra-operator-webhook-server-cert": failed to list *v1.Secret: Get 
"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dinfra-operator-webhook-server-cert&resourceVersion=109803": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:22.502483 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"infra-operator-webhook-server-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dinfra-operator-webhook-server-cert&resourceVersion=109803\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:22.522645 5050 reflector.go:561] object-"metallb-system"/"frr-startup": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/configmaps?fieldSelector=metadata.name%3Dfrr-startup&resourceVersion=109158": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:22.522712 5050 reflector.go:158] "Unhandled Error" err="object-\"metallb-system\"/\"frr-startup\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/configmaps?fieldSelector=metadata.name%3Dfrr-startup&resourceVersion=109158\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:22.542926 5050 reflector.go:561] object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-storage-version-migrator-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109522": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:22.542968 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-storage-version-migrator-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109522\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:22.563069 5050 reflector.go:561] object-"openstack"/"prometheus-metric-storage-tls-assets-0": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dprometheus-metric-storage-tls-assets-0&resourceVersion=109846": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:22.563132 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"prometheus-metric-storage-tls-assets-0\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dprometheus-metric-storage-tls-assets-0&resourceVersion=109846\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:22.582822 5050 reflector.go:561] object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-jbnlr": 
failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dopenstack-baremetal-operator-controller-manager-dockercfg-jbnlr&resourceVersion=109743": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:22.582868 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"openstack-baremetal-operator-controller-manager-dockercfg-jbnlr\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dopenstack-baremetal-operator-controller-manager-dockercfg-jbnlr&resourceVersion=109743\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:22.598200 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-qdrgd" podUID="4b4e8aa9-a0f6-4bd2-92d2-c524aedf6389" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.74:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:22.602853 5050 reflector.go:561] object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager-operator/secrets?fieldSelector=metadata.name%3Dopenshift-controller-manager-operator-serving-cert&resourceVersion=109137": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:22.602941 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager-operator/secrets?fieldSelector=metadata.name%3Dopenshift-controller-manager-operator-serving-cert&resourceVersion=109137\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:22.622616 5050 reflector.go:561] object-"openshift-network-operator"/"iptables-alerter-script": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/configmaps?fieldSelector=metadata.name%3Diptables-alerter-script&resourceVersion=109158": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:22.622693 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-operator\"/\"iptables-alerter-script\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/configmaps?fieldSelector=metadata.name%3Diptables-alerter-script&resourceVersion=109158\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:22.643098 5050 reflector.go:561] object-"openshift-network-node-identity"/"env-overrides": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/configmaps?fieldSelector=metadata.name%3Denv-overrides&resourceVersion=109522": dial tcp 
38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:22.643184 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-node-identity\"/\"env-overrides\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/configmaps?fieldSelector=metadata.name%3Denv-overrides&resourceVersion=109522\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:22.663697 5050 reflector.go:561] object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-controller-manager-operator-config&resourceVersion=109559": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:22.663768 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-controller-manager-operator-config&resourceVersion=109559\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:22.674235 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-n7crp" podUID="85683dfb-37fb-4301-8c7a-fbb7453b303d" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.75:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:22.682838 5050 reflector.go:561] object-"openstack"/"telemetry-ceilometer-dockercfg-nl629": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dtelemetry-ceilometer-dockercfg-nl629&resourceVersion=109846": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:22.682891 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"telemetry-ceilometer-dockercfg-nl629\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dtelemetry-ceilometer-dockercfg-nl629&resourceVersion=109846\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:22.702595 5050 reflector.go:561] object-"openstack"/"cinder-scripts": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcinder-scripts&resourceVersion=109803": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:22.702660 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"cinder-scripts\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcinder-scripts&resourceVersion=109803\": dial 
tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:22.722959 5050 reflector.go:561] object-"openshift-cluster-version"/"default-dockercfg-gxtc4": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-version/secrets?fieldSelector=metadata.name%3Ddefault-dockercfg-gxtc4&resourceVersion=109743": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:22.723040 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-version\"/\"default-dockercfg-gxtc4\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-version/secrets?fieldSelector=metadata.name%3Ddefault-dockercfg-gxtc4&resourceVersion=109743\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:22.742734 5050 reflector.go:561] object-"openshift-authentication-operator"/"serving-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication-operator/secrets?fieldSelector=metadata.name%3Dserving-cert&resourceVersion=109671": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:22.742797 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication-operator\"/\"serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication-operator/secrets?fieldSelector=metadata.name%3Dserving-cert&resourceVersion=109671\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:22.762609 5050 reflector.go:561] object-"openshift-apiserver"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109158": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:22.762695 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109158\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:22.771595 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="14ad1594-090d-4024-a999-9ffe77ce58d8" containerName="galera" probeResult="failure" output="command timed out" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:22.780286 5050 patch_prober.go:28] interesting pod/dns-default-25p7l container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.217.0.27:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:22.780342 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-25p7l" podUID="66dfe3f5-9e7a-4e1c-b03a-b6f01dab6ef4" containerName="dns" 
probeResult="failure" output="Get \"http://10.217.0.27:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:22.783272 5050 reflector.go:561] object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-etcd-operator/secrets?fieldSelector=metadata.name%3Detcd-operator-dockercfg-r9srn&resourceVersion=109887": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:22.783318 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-etcd-operator\"/\"etcd-operator-dockercfg-r9srn\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-etcd-operator/secrets?fieldSelector=metadata.name%3Detcd-operator-dockercfg-r9srn&resourceVersion=109887\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:22.802908 5050 reflector.go:561] object-"openshift-machine-api"/"control-plane-machine-set-operator-tls": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-api/secrets?fieldSelector=metadata.name%3Dcontrol-plane-machine-set-operator-tls&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:22.803028 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-api/secrets?fieldSelector=metadata.name%3Dcontrol-plane-machine-set-operator-tls&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:22.822767 5050 reflector.go:561] object-"openstack"/"octavia-hmport-map": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Doctavia-hmport-map&resourceVersion=109522": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:22.822854 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"octavia-hmport-map\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Doctavia-hmport-map&resourceVersion=109522\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:22.843165 5050 reflector.go:561] object-"openshift-cluster-version"/"cluster-version-operator-serving-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-version/secrets?fieldSelector=metadata.name%3Dcluster-version-operator-serving-cert&resourceVersion=109586": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:22.843236 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-version\"/\"cluster-version-operator-serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-version/secrets?fieldSelector=metadata.name%3Dcluster-version-operator-serving-cert&resourceVersion=109586\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:22.863160 5050 reflector.go:561] object-"openshift-network-console"/"networking-console-plugin": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-console/configmaps?fieldSelector=metadata.name%3Dnetworking-console-plugin&resourceVersion=109522": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:22.863230 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="7fb00b03-fe6e-4c66-bd36-adf9443871a8" containerName="prometheus" probeResult="failure" output="Get \"http://10.217.1.135:9090/-/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:22.863245 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-console\"/\"networking-console-plugin\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-console/configmaps?fieldSelector=metadata.name%3Dnetworking-console-plugin&resourceVersion=109522\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:22.863376 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/prometheus-metric-storage-0" podUID="7fb00b03-fe6e-4c66-bd36-adf9443871a8" containerName="prometheus" probeResult="failure" output="Get \"http://10.217.1.135:9090/-/healthy\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:22.882672 5050 reflector.go:561] object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/secrets?fieldSelector=metadata.name%3Dconsole-operator-dockercfg-4xjcr&resourceVersion=109846": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:22.882766 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-console-operator\"/\"console-operator-dockercfg-4xjcr\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/secrets?fieldSelector=metadata.name%3Dconsole-operator-dockercfg-4xjcr&resourceVersion=109846\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:22.902634 5050 reflector.go:561] object-"openshift-authentication"/"v4-0-config-system-service-ca": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/configmaps?fieldSelector=metadata.name%3Dv4-0-config-system-service-ca&resourceVersion=109067": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:22.902664 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"v4-0-config-system-service-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/configmaps?fieldSelector=metadata.name%3Dv4-0-config-system-service-ca&resourceVersion=109067\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:22.923276 5050 reflector.go:561] object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-glgrh": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dheat-operator-controller-manager-dockercfg-glgrh&resourceVersion=109846": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:22.923333 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"heat-operator-controller-manager-dockercfg-glgrh\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dheat-operator-controller-manager-dockercfg-glgrh&resourceVersion=109846\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:22.943579 5050 reflector.go:561] object-"openshift-etcd-operator"/"etcd-ca-bundle": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-etcd-operator/configmaps?fieldSelector=metadata.name%3Detcd-ca-bundle&resourceVersion=109762": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:22.943647 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-etcd-operator\"/\"etcd-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-etcd-operator/configmaps?fieldSelector=metadata.name%3Detcd-ca-bundle&resourceVersion=109762\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:22.963322 5050 reflector.go:561] object-"openshift-route-controller-manager"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109559": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:22.963391 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-route-controller-manager\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109559\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:22.982271 5050 reflector.go:561] object-"openshift-route-controller-manager"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109642": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:22.982313 5050 reflector.go:158] "Unhandled Error" 
err="object-\"openshift-route-controller-manager\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109642\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:23.003264 5050 reflector.go:561] object-"hostpath-provisioner"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/hostpath-provisioner/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109158": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:23.003352 5050 reflector.go:158] "Unhandled Error" err="object-\"hostpath-provisioner\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/hostpath-provisioner/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109158\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:23.022223 5050 request.go:700] Waited for 6.710798969s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-machine-approver/secrets?fieldSelector=metadata.name%3Dmachine-approver-sa-dockercfg-nl2j4&resourceVersion=109586 Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:23.022499 5050 reflector.go:561] object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-machine-approver/secrets?fieldSelector=metadata.name%3Dmachine-approver-sa-dockercfg-nl2j4&resourceVersion=109586": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:23.022527 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-machine-approver\"/\"machine-approver-sa-dockercfg-nl2j4\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-machine-approver/secrets?fieldSelector=metadata.name%3Dmachine-approver-sa-dockercfg-nl2j4&resourceVersion=109586\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:23.042236 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-qbc2f" podUID="9726c3f9-bcae-4722-a054-5a66c161953b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.79:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:23.042261 5050 patch_prober.go:28] interesting pod/route-controller-manager-767f6d799d-cv7mn container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.64:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:23.042304 5050 patch_prober.go:28] interesting 
pod/route-controller-manager-767f6d799d-cv7mn container/route-controller-manager namespace/openshift-route-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.64:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:23.042368 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-route-controller-manager/route-controller-manager-767f6d799d-cv7mn" podUID="5b7ceea3-4e92-46ee-81de-5b8f932144ad" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.64:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:23.042369 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-767f6d799d-cv7mn" podUID="5b7ceea3-4e92-46ee-81de-5b8f932144ad" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.64:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:23.042453 5050 reflector.go:561] object-"openstack"/"keystone": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dkeystone&resourceVersion=109887": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:23.042491 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"keystone\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dkeystone&resourceVersion=109887\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:23.063020 5050 reflector.go:561] object-"openshift-apiserver"/"etcd-client": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/secrets?fieldSelector=metadata.name%3Detcd-client&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:23.063095 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"etcd-client\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/secrets?fieldSelector=metadata.name%3Detcd-client&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:23.082352 5050 reflector.go:561] object-"openstack"/"octavia-config-data": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Doctavia-config-data&resourceVersion=109671": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:23.082415 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"octavia-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Doctavia-config-data&resourceVersion=109671\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:23.103001 5050 reflector.go:561] object-"openshift-console"/"console-dockercfg-f62pw": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/secrets?fieldSelector=metadata.name%3Dconsole-dockercfg-f62pw&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:23.103133 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-console\"/\"console-dockercfg-f62pw\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/secrets?fieldSelector=metadata.name%3Dconsole-dockercfg-f62pw&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:23.123489 5050 reflector.go:561] object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/hostpath-provisioner/secrets?fieldSelector=metadata.name%3Dcsi-hostpath-provisioner-sa-dockercfg-qd74k&resourceVersion=109184": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:23.123550 5050 reflector.go:158] "Unhandled Error" err="object-\"hostpath-provisioner\"/\"csi-hostpath-provisioner-sa-dockercfg-qd74k\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/hostpath-provisioner/secrets?fieldSelector=metadata.name%3Dcsi-hostpath-provisioner-sa-dockercfg-qd74k&resourceVersion=109184\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:23.142379 5050 reflector.go:561] object-"openshift-kube-storage-version-migrator-operator"/"serving-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-storage-version-migrator-operator/secrets?fieldSelector=metadata.name%3Dserving-cert&resourceVersion=109846": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:23.142411 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-kube-storage-version-migrator-operator\"/\"serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-storage-version-migrator-operator/secrets?fieldSelector=metadata.name%3Dserving-cert&resourceVersion=109846\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:23.163120 5050 reflector.go:561] object-"openstack"/"keystone-scripts": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dkeystone-scripts&resourceVersion=109887": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:23.163152 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"keystone-scripts\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dkeystone-scripts&resourceVersion=109887\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:23.179193 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-xgbp2" podUID="06df6c8c-640d-431b-b216-78345a9054e1" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.82:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:23.182581 5050 reflector.go:561] object-"openshift-apiserver"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109685": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:23.182626 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109685\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:23.202553 5050 reflector.go:561] object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-api/secrets?fieldSelector=metadata.name%3Dmachine-api-operator-dockercfg-mfbb7&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:23.202614 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"machine-api-operator-dockercfg-mfbb7\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-api/secrets?fieldSelector=metadata.name%3Dmachine-api-operator-dockercfg-mfbb7&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:23.222481 5050 reflector.go:561] object-"openshift-console-operator"/"serving-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/secrets?fieldSelector=metadata.name%3Dserving-cert&resourceVersion=109846": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:23.222528 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-console-operator\"/\"serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/secrets?fieldSelector=metadata.name%3Dserving-cert&resourceVersion=109846\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:23.242711 5050 reflector.go:561] object-"openstack"/"ceph-conf-files": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dceph-conf-files&resourceVersion=109743": dial 
tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:23.242767 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"ceph-conf-files\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dceph-conf-files&resourceVersion=109743\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:23.262733 5050 reflector.go:561] object-"openstack"/"glance-default-internal-config-data": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dglance-default-internal-config-data&resourceVersion=109586": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:23.262797 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"glance-default-internal-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dglance-default-internal-config-data&resourceVersion=109586\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:23.263126 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="hostpath-provisioner/csi-hostpathplugin-mbbdj" podUID="2b344bba-8e1d-415b-9b5f-e21d3144fe42" containerName="hostpath-provisioner" probeResult="failure" output="Get \"http://10.217.0.31:9898/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:23.282320 5050 reflector.go:561] object-"openshift-machine-api"/"machine-api-operator-tls": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-api/secrets?fieldSelector=metadata.name%3Dmachine-api-operator-tls&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:23.282362 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"machine-api-operator-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-api/secrets?fieldSelector=metadata.name%3Dmachine-api-operator-tls&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:23.302422 5050 reflector.go:561] object-"openstack"/"ovn-data-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dovn-data-cert&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:23.302478 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"ovn-data-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dovn-data-cert&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:23.323318 5050 reflector.go:561] 
object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-nffdg": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dglance-operator-controller-manager-dockercfg-nffdg&resourceVersion=109671": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:23.323368 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"glance-operator-controller-manager-dockercfg-nffdg\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dglance-operator-controller-manager-dockercfg-nffdg&resourceVersion=109671\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:23.342501 5050 reflector.go:561] object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ovn-kubernetes/secrets?fieldSelector=metadata.name%3Dovn-node-metrics-cert&resourceVersion=109887": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:23.342543 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"ovn-node-metrics-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ovn-kubernetes/secrets?fieldSelector=metadata.name%3Dovn-node-metrics-cert&resourceVersion=109887\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:23.351467 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-65swh" podUID="58a60a6f-fcf5-4aa0-b6e4-97d7df2b2bd2" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.84:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:23.362419 5050 reflector.go:561] object-"openstack"/"manila-scheduler-config-data": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dmanila-scheduler-config-data&resourceVersion=109803": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:23.362485 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"manila-scheduler-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dmanila-scheduler-config-data&resourceVersion=109803\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:23.382647 5050 reflector.go:561] object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-samples-operator/secrets?fieldSelector=metadata.name%3Dcluster-samples-operator-dockercfg-xpp9w&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:23.382715 5050 reflector.go:158] 
"Unhandled Error" err="object-\"openshift-cluster-samples-operator\"/\"cluster-samples-operator-dockercfg-xpp9w\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-samples-operator/secrets?fieldSelector=metadata.name%3Dcluster-samples-operator-dockercfg-xpp9w&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:23.402486 5050 reflector.go:561] object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler-operator/secrets?fieldSelector=metadata.name%3Dopenshift-kube-scheduler-operator-dockercfg-qt55r&resourceVersion=109846": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:23.402535 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-dockercfg-qt55r\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler-operator/secrets?fieldSelector=metadata.name%3Dopenshift-kube-scheduler-operator-dockercfg-qt55r&resourceVersion=109846\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:23.422955 5050 reflector.go:561] object-"openshift-ingress-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109807": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:23.423024 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-ingress-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109807\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:23.442866 5050 reflector.go:561] object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-6tg85": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dplacement-operator-controller-manager-dockercfg-6tg85&resourceVersion=109087": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:23.442917 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"placement-operator-controller-manager-dockercfg-6tg85\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dplacement-operator-controller-manager-dockercfg-6tg85&resourceVersion=109087\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:23.463137 5050 reflector.go:561] object-"openshift-cluster-samples-operator"/"samples-operator-tls": failed to list *v1.Secret: Get 
"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-samples-operator/secrets?fieldSelector=metadata.name%3Dsamples-operator-tls&resourceVersion=109671": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:23.463252 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-samples-operator\"/\"samples-operator-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-samples-operator/secrets?fieldSelector=metadata.name%3Dsamples-operator-tls&resourceVersion=109671\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:23.483142 5050 reflector.go:561] object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-x9p8j": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dkeystone-operator-controller-manager-dockercfg-x9p8j&resourceVersion=109586": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:23.483193 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"keystone-operator-controller-manager-dockercfg-x9p8j\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dkeystone-operator-controller-manager-dockercfg-x9p8j&resourceVersion=109586\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:23.502673 5050 reflector.go:561] object-"openshift-authentication"/"audit": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/configmaps?fieldSelector=metadata.name%3Daudit&resourceVersion=109158": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:23.502719 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"audit\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/configmaps?fieldSelector=metadata.name%3Daudit&resourceVersion=109158\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:23.522630 5050 reflector.go:561] object-"openstack"/"dataplane-adoption-secret": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Ddataplane-adoption-secret&resourceVersion=109671": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:23.522674 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"dataplane-adoption-secret\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Ddataplane-adoption-secret&resourceVersion=109671\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:23.543075 5050 reflector.go:561] object-"openstack"/"galera-openstack-dockercfg-5gcmv": failed to list *v1.Secret: Get 
"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dgalera-openstack-dockercfg-5gcmv&resourceVersion=109743": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:23.543126 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"galera-openstack-dockercfg-5gcmv\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dgalera-openstack-dockercfg-5gcmv&resourceVersion=109743\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:23.563061 5050 reflector.go:561] object-"openshift-apiserver"/"config": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/configmaps?fieldSelector=metadata.name%3Dconfig&resourceVersion=109685": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:23.563100 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/configmaps?fieldSelector=metadata.name%3Dconfig&resourceVersion=109685\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:23.582234 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-pjhfc" podUID="3c9e825c-0aee-42b9-a7a5-3191486f301d" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.85:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:23.582590 5050 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?resourceVersion=109666": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:23.582629 5050 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?resourceVersion=109666\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:23.602618 5050 reflector.go:561] object-"openshift-authentication-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109559": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:23.602653 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109559\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 
15:54:41 crc kubenswrapper[5050]: W1211 15:54:23.622566 5050 reflector.go:561] object-"openshift-authentication"/"v4-0-config-system-session": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/secrets?fieldSelector=metadata.name%3Dv4-0-config-system-session&resourceVersion=109137": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:23.622614 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"v4-0-config-system-session\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/secrets?fieldSelector=metadata.name%3Dv4-0-config-system-session&resourceVersion=109137\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:23.637177 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/octavia-operator-controller-manager-998648c74-lvbdb" podUID="fa0985f7-7d87-41b3-9916-f22375a0489c" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.87:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:23.642635 5050 reflector.go:561] object-"openshift-controller-manager"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109888": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:23.642670 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109888\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:23.662823 5050 reflector.go:561] object-"openshift-network-operator"/"metrics-tls": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/secrets?fieldSelector=metadata.name%3Dmetrics-tls&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:23.662863 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-operator\"/\"metrics-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/secrets?fieldSelector=metadata.name%3Dmetrics-tls&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:23.682659 5050 reflector.go:561] object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-multus/secrets?fieldSelector=metadata.name%3Dmultus-ancillary-tools-dockercfg-vnmsz&resourceVersion=109360": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:23.682731 5050 reflector.go:158] 
"Unhandled Error" err="object-\"openshift-multus\"/\"multus-ancillary-tools-dockercfg-vnmsz\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-multus/secrets?fieldSelector=metadata.name%3Dmultus-ancillary-tools-dockercfg-vnmsz&resourceVersion=109360\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:23.702540 5050 reflector.go:561] object-"openstack"/"octavia-api-config-data": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Doctavia-api-config-data&resourceVersion=109803": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:23.702574 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"octavia-api-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Doctavia-api-config-data&resourceVersion=109803\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:23.722717 5050 reflector.go:561] object-"openstack"/"memcached-config-data": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dmemcached-config-data&resourceVersion=109522": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:23.722795 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"memcached-config-data\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dmemcached-config-data&resourceVersion=109522\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:23.743463 5050 reflector.go:561] object-"openshift-image-registry"/"registry-dockercfg-kzzsd": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/secrets?fieldSelector=metadata.name%3Dregistry-dockercfg-kzzsd&resourceVersion=109887": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:23.743618 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-image-registry\"/\"registry-dockercfg-kzzsd\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/secrets?fieldSelector=metadata.name%3Dregistry-dockercfg-kzzsd&resourceVersion=109887\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:23.762932 5050 reflector.go:561] object-"openstack"/"ceilometer-config-data": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dceilometer-config-data&resourceVersion=109137": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:23.762979 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"ceilometer-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dceilometer-config-data&resourceVersion=109137\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:23.775266 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="fef8d631-c968-4ccd-92ec-e6fc5a2f6731" containerName="ceilometer-notification-agent" probeResult="failure" output="command timed out" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:23.775340 5050 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/ceilometer-0" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:23.783335 5050 reflector.go:561] object-"openshift-route-controller-manager"/"serving-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/secrets?fieldSelector=metadata.name%3Dserving-cert&resourceVersion=109846": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:23.783383 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-route-controller-manager\"/\"serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/secrets?fieldSelector=metadata.name%3Dserving-cert&resourceVersion=109846\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:23.803065 5050 reflector.go:561] object-"openshift-nmstate"/"nmstate-operator-dockercfg-b8vjd": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-nmstate/secrets?fieldSelector=metadata.name%3Dnmstate-operator-dockercfg-b8vjd&resourceVersion=109846": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:23.803114 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-nmstate\"/\"nmstate-operator-dockercfg-b8vjd\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-nmstate/secrets?fieldSelector=metadata.name%3Dnmstate-operator-dockercfg-b8vjd&resourceVersion=109846\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:23.803405 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/test-operator-controller-manager-5854674fcc-qqb7f" podUID="12778398-2baa-44cb-9fd1-f2034870e9fc" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.91:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:23.803405 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-ttg8w" podUID="b885fa10-3ed3-41fd-94ae-2b7442519450" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.89:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:23.823102 5050 reflector.go:561] object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-w847r": failed to list *v1.Secret: Get 
"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dwatcher-operator-controller-manager-dockercfg-w847r&resourceVersion=109137": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:23.823174 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"watcher-operator-controller-manager-dockercfg-w847r\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dwatcher-operator-controller-manager-dockercfg-w847r&resourceVersion=109137\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:23.843517 5050 reflector.go:561] object-"openstack"/"octavia-octavia-dockercfg-h4g5n": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Doctavia-octavia-dockercfg-h4g5n&resourceVersion=109803": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:23.843588 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"octavia-octavia-dockercfg-h4g5n\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Doctavia-octavia-dockercfg-h4g5n&resourceVersion=109803\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:23.863264 5050 reflector.go:561] object-"openshift-dns"/"dns-default": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-dns/configmaps?fieldSelector=metadata.name%3Ddns-default&resourceVersion=109158": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:23.863377 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-dns\"/\"dns-default\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-dns/configmaps?fieldSelector=metadata.name%3Ddns-default&resourceVersion=109158\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:23.883313 5050 reflector.go:561] object-"openstack"/"manila-scripts": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dmanila-scripts&resourceVersion=109803": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:23.883358 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"manila-scripts\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dmanila-scripts&resourceVersion=109803\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:23.890228 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/watcher-operator-controller-manager-75944c9b7-grhdp" podUID="712f888b-7c45-4c1f-95d8-ccc464b7c15f" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.95:8081/readyz\": 
context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:23.890269 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-8jnzj" podUID="a6345cf8-abc2-4c9a-bfe6-8b65187ada2d" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.93:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:23.902580 5050 reflector.go:561] object-"openshift-cluster-machine-approver"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-machine-approver/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109158": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:23.902624 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-machine-approver\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-machine-approver/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109158\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:23.923222 5050 reflector.go:561] object-"openstack"/"default-dockercfg-tmtdn": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Ddefault-dockercfg-tmtdn&resourceVersion=109623": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:23.923305 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"default-dockercfg-tmtdn\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Ddefault-dockercfg-tmtdn&resourceVersion=109623\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:23.943382 5050 reflector.go:561] object-"openshift-etcd-operator"/"etcd-client": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-etcd-operator/secrets?fieldSelector=metadata.name%3Detcd-client&resourceVersion=109137": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:23.943441 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-etcd-operator\"/\"etcd-client\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-etcd-operator/secrets?fieldSelector=metadata.name%3Detcd-client&resourceVersion=109137\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:23.963478 5050 reflector.go:561] object-"openstack"/"rabbitmq-erlang-cookie": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Drabbitmq-erlang-cookie&resourceVersion=109743": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:23.963529 5050 reflector.go:158] "Unhandled Error" 
err="object-\"openstack\"/\"rabbitmq-erlang-cookie\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Drabbitmq-erlang-cookie&resourceVersion=109743\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:23.982580 5050 reflector.go:561] object-"openshift-controller-manager-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109522": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:23.982613 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109522\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:24.002280 5050 reflector.go:561] object-"openshift-apiserver"/"serving-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/secrets?fieldSelector=metadata.name%3Dserving-cert&resourceVersion=109846": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:24.002311 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/secrets?fieldSelector=metadata.name%3Dserving-cert&resourceVersion=109846\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:24.022679 5050 reflector.go:561] object-"openshift-dns-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-dns-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109158": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:24.022963 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-dns-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-dns-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109158\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:24.042511 5050 request.go:700] Waited for 6.645170751s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dovnnorthd-config&resourceVersion=109602 Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:24.042851 5050 reflector.go:561] object-"openstack"/"ovnnorthd-config": failed to list *v1.ConfigMap: Get 
"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dovnnorthd-config&resourceVersion=109602": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:24.042884 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"ovnnorthd-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dovnnorthd-config&resourceVersion=109602\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:24.062947 5050 reflector.go:561] object-"openshift-nmstate"/"nmstate-handler-dockercfg-94hht": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-nmstate/secrets?fieldSelector=metadata.name%3Dnmstate-handler-dockercfg-94hht&resourceVersion=109743": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:24.063025 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-nmstate\"/\"nmstate-handler-dockercfg-94hht\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-nmstate/secrets?fieldSelector=metadata.name%3Dnmstate-handler-dockercfg-94hht&resourceVersion=109743\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:24.082995 5050 reflector.go:561] object-"openshift-console"/"console-serving-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/secrets?fieldSelector=metadata.name%3Dconsole-serving-cert&resourceVersion=109846": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:24.083047 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-console\"/\"console-serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/secrets?fieldSelector=metadata.name%3Dconsole-serving-cert&resourceVersion=109846\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:24.102463 5050 reflector.go:561] object-"openstack"/"cinder-config-data": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcinder-config-data&resourceVersion=109803": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:24.102509 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"cinder-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcinder-config-data&resourceVersion=109803\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:24.123324 5050 reflector.go:561] object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-mz95s": failed to list *v1.Secret: Get 
"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dopenstack-operator-controller-manager-dockercfg-mz95s&resourceVersion=109743": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:24.123374 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"openstack-operator-controller-manager-dockercfg-mz95s\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dopenstack-operator-controller-manager-dockercfg-mz95s&resourceVersion=109743\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:24.143248 5050 reflector.go:561] object-"openshift-operators"/"obo-prometheus-operator-dockercfg-mpfzr": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operators/secrets?fieldSelector=metadata.name%3Dobo-prometheus-operator-dockercfg-mpfzr&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:24.143292 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-operators\"/\"obo-prometheus-operator-dockercfg-mpfzr\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operators/secrets?fieldSelector=metadata.name%3Dobo-prometheus-operator-dockercfg-mpfzr&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:24.162427 5050 reflector.go:561] object-"openstack"/"rabbitmq-cell1-server-dockercfg-bvxnm": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Drabbitmq-cell1-server-dockercfg-bvxnm&resourceVersion=109846": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:24.162474 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"rabbitmq-cell1-server-dockercfg-bvxnm\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Drabbitmq-cell1-server-dockercfg-bvxnm&resourceVersion=109846\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:24.183418 5050 reflector.go:561] object-"openshift-ingress"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109067": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:24.183472 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-ingress\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109067\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:24.202865 5050 reflector.go:561] 
object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/secrets?fieldSelector=metadata.name%3Dolm-operator-serving-cert&resourceVersion=109623": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:24.202905 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/secrets?fieldSelector=metadata.name%3Dolm-operator-serving-cert&resourceVersion=109623\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:24.223212 5050 reflector.go:561] object-"openshift-ingress"/"service-ca-bundle": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress/configmaps?fieldSelector=metadata.name%3Dservice-ca-bundle&resourceVersion=109888": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:24.223251 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-ingress\"/\"service-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress/configmaps?fieldSelector=metadata.name%3Dservice-ca-bundle&resourceVersion=109888\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:24.242582 5050 reflector.go:561] object-"metallb-system"/"manager-account-dockercfg-m6zt9": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/secrets?fieldSelector=metadata.name%3Dmanager-account-dockercfg-m6zt9&resourceVersion=109586": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:24.242637 5050 reflector.go:158] "Unhandled Error" err="object-\"metallb-system\"/\"manager-account-dockercfg-m6zt9\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/secrets?fieldSelector=metadata.name%3Dmanager-account-dockercfg-m6zt9&resourceVersion=109586\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:24.263287 5050 reflector.go:561] object-"metallb-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109602": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:24.263342 5050 reflector.go:158] "Unhandled Error" err="object-\"metallb-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109602\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:24.282502 5050 reflector.go:561] object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-t4t27": failed to 
list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dovncluster-ovndbcluster-nb-dockercfg-t4t27&resourceVersion=109887": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:24.282539 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"ovncluster-ovndbcluster-nb-dockercfg-t4t27\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dovncluster-ovndbcluster-nb-dockercfg-t4t27&resourceVersion=109887\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:24.302890 5050 reflector.go:561] object-"openshift-authentication"/"v4-0-config-system-router-certs": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/secrets?fieldSelector=metadata.name%3Dv4-0-config-system-router-certs&resourceVersion=109846": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:24.302938 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"v4-0-config-system-router-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/secrets?fieldSelector=metadata.name%3Dv4-0-config-system-router-certs&resourceVersion=109846\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:24.322502 5050 reflector.go:561] object-"openshift-config-operator"/"config-operator-serving-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/secrets?fieldSelector=metadata.name%3Dconfig-operator-serving-cert&resourceVersion=109846": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:24.322545 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-config-operator\"/\"config-operator-serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/secrets?fieldSelector=metadata.name%3Dconfig-operator-serving-cert&resourceVersion=109846\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:24.342513 5050 reflector.go:561] object-"openstack"/"manila-api-config-data": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dmanila-api-config-data&resourceVersion=109803": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:24.342560 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"manila-api-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dmanila-api-config-data&resourceVersion=109803\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:24.362541 5050 reflector.go:561] object-"openshift-cluster-machine-approver"/"machine-approver-config": failed to list *v1.ConfigMap: 
Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-machine-approver/configmaps?fieldSelector=metadata.name%3Dmachine-approver-config&resourceVersion=109067": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:24.362589 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-machine-approver\"/\"machine-approver-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-machine-approver/configmaps?fieldSelector=metadata.name%3Dmachine-approver-config&resourceVersion=109067\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:24.382702 5050 reflector.go:561] object-"openstack"/"dataplanenodeset-openstack-cell1": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Ddataplanenodeset-openstack-cell1&resourceVersion=109671": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:24.382745 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"dataplanenodeset-openstack-cell1\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Ddataplanenodeset-openstack-cell1&resourceVersion=109671\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:24.402513 5050 reflector.go:561] object-"openshift-etcd-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-etcd-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109158": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:24.402566 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-etcd-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-etcd-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109158\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:24.422569 5050 reflector.go:561] object-"openshift-route-controller-manager"/"client-ca": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/configmaps?fieldSelector=metadata.name%3Dclient-ca&resourceVersion=109762": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:24.422620 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-route-controller-manager\"/\"client-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/configmaps?fieldSelector=metadata.name%3Dclient-ca&resourceVersion=109762\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:24.442412 5050 reflector.go:561] object-"openshift-nmstate"/"nginx-conf": failed to list *v1.ConfigMap: Get 
"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-nmstate/configmaps?fieldSelector=metadata.name%3Dnginx-conf&resourceVersion=109559": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:24.442454 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-nmstate\"/\"nginx-conf\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-nmstate/configmaps?fieldSelector=metadata.name%3Dnginx-conf&resourceVersion=109559\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:24.462504 5050 reflector.go:561] object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmachine-config-daemon-dockercfg-r5tcq&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:24.462577 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"machine-config-daemon-dockercfg-r5tcq\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmachine-config-daemon-dockercfg-r5tcq&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:24.482490 5050 reflector.go:561] object-"openstack"/"rabbitmq-cell1-default-user": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Drabbitmq-cell1-default-user&resourceVersion=109846": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:24.482562 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"rabbitmq-cell1-default-user\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Drabbitmq-cell1-default-user&resourceVersion=109846\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:24.497083 5050 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.147:6443: connect: connection refused" interval="7s" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:24.502206 5050 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:24.502266 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": net/http: request canceled 
(Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:24.502421 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:24.502664 5050 reflector.go:561] object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/configmaps?fieldSelector=metadata.name%3Dv4-0-config-system-trusted-ca-bundle&resourceVersion=109888": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:24.502707 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"v4-0-config-system-trusted-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/configmaps?fieldSelector=metadata.name%3Dv4-0-config-system-trusted-ca-bundle&resourceVersion=109888\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:24.522617 5050 reflector.go:561] object-"openstack"/"cert-galera-openstack-cell1-svc": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-galera-openstack-cell1-svc&resourceVersion=109743": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:24.522686 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"cert-galera-openstack-cell1-svc\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-galera-openstack-cell1-svc&resourceVersion=109743\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:24.542534 5050 reflector.go:561] object-"openstack"/"openstackclient-openstackclient-dockercfg-c88pf": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dopenstackclient-openstackclient-dockercfg-c88pf&resourceVersion=109671": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:24.542579 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"openstackclient-openstackclient-dockercfg-c88pf\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dopenstackclient-openstackclient-dockercfg-c88pf&resourceVersion=109671\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:24.571502 5050 reflector.go:561] object-"openstack"/"ovndbcluster-nb-scripts": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dovndbcluster-nb-scripts&resourceVersion=109807": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:24.571654 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"ovndbcluster-nb-scripts\": Failed to watch *v1.ConfigMap: failed to list 
*v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dovndbcluster-nb-scripts&resourceVersion=109807\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:24.583430 5050 reflector.go:561] object-"openshift-network-console"/"networking-console-plugin-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-console/secrets?fieldSelector=metadata.name%3Dnetworking-console-plugin-cert&resourceVersion=109623": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:24.583623 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-console\"/\"networking-console-plugin-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-console/secrets?fieldSelector=metadata.name%3Dnetworking-console-plugin-cert&resourceVersion=109623\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:24.606111 5050 reflector.go:561] object-"openstack"/"manila-config-data": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dmanila-config-data&resourceVersion=109803": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:24.606164 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"manila-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dmanila-config-data&resourceVersion=109803\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:24.622656 5050 reflector.go:561] object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/secrets?fieldSelector=metadata.name%3Doauth-openshift-dockercfg-znhcc&resourceVersion=109803": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:24.622700 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"oauth-openshift-dockercfg-znhcc\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/secrets?fieldSelector=metadata.name%3Doauth-openshift-dockercfg-znhcc&resourceVersion=109803\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:24.642733 5050 reflector.go:561] object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication-operator/secrets?fieldSelector=metadata.name%3Dauthentication-operator-dockercfg-mz9bj&resourceVersion=109743": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:24.642774 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication-operator\"/\"authentication-operator-dockercfg-mz9bj\": Failed to watch *v1.Secret: 
failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication-operator/secrets?fieldSelector=metadata.name%3Dauthentication-operator-dockercfg-mz9bj&resourceVersion=109743\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:24.662714 5050 reflector.go:561] object-"openshift-authentication"/"v4-0-config-system-serving-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/secrets?fieldSelector=metadata.name%3Dv4-0-config-system-serving-cert&resourceVersion=109803": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:24.662754 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"v4-0-config-system-serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/secrets?fieldSelector=metadata.name%3Dv4-0-config-system-serving-cert&resourceVersion=109803\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:24.683027 5050 reflector.go:561] object-"metallb-system"/"frr-k8s-webhook-server-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/secrets?fieldSelector=metadata.name%3Dfrr-k8s-webhook-server-cert&resourceVersion=109743": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:24.683059 5050 reflector.go:158] "Unhandled Error" err="object-\"metallb-system\"/\"frr-k8s-webhook-server-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/secrets?fieldSelector=metadata.name%3Dfrr-k8s-webhook-server-cert&resourceVersion=109743\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:24.702365 5050 reflector.go:561] object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-f52rf": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dtelemetry-operator-controller-manager-dockercfg-f52rf&resourceVersion=109803": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:24.702406 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"telemetry-operator-controller-manager-dockercfg-f52rf\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dtelemetry-operator-controller-manager-dockercfg-f52rf&resourceVersion=109803\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:24.722561 5050 reflector.go:561] object-"openshift-service-ca"/"signing-cabundle": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca/configmaps?fieldSelector=metadata.name%3Dsigning-cabundle&resourceVersion=109642": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:24.722599 5050 reflector.go:158] "Unhandled Error" 
err="object-\"openshift-service-ca\"/\"signing-cabundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca/configmaps?fieldSelector=metadata.name%3Dsigning-cabundle&resourceVersion=109642\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:24.723158 5050 patch_prober.go:28] interesting pod/observability-operator-d8bb48f5d-wwdcc container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.1.128:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:24.723199 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/observability-operator-d8bb48f5d-wwdcc" podUID="1c8331c1-b8ee-456b-baa6-110917427b64" containerName="operator" probeResult="failure" output="Get \"http://10.217.1.128:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:24.742803 5050 reflector.go:561] object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-samples-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109762": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:24.742838 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-samples-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-samples-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109762\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:24.763357 5050 reflector.go:561] object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/secrets?fieldSelector=metadata.name%3Dpackage-server-manager-serving-cert&resourceVersion=109743": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:24.763401 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-operator-lifecycle-manager\"/\"package-server-manager-serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/secrets?fieldSelector=metadata.name%3Dpackage-server-manager-serving-cert&resourceVersion=109743\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:24.782860 5050 reflector.go:561] object-"openstack-operators"/"test-operator-controller-manager-dockercfg-cclxg": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dtest-operator-controller-manager-dockercfg-cclxg&resourceVersion=109803": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:24.782889 5050 
reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"test-operator-controller-manager-dockercfg-cclxg\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dtest-operator-controller-manager-dockercfg-cclxg&resourceVersion=109803\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:24.803052 5050 reflector.go:561] object-"openstack"/"octavia-housekeeping-scripts": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Doctavia-housekeeping-scripts&resourceVersion=109671": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:24.803101 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"octavia-housekeeping-scripts\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Doctavia-housekeeping-scripts&resourceVersion=109671\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:24.823007 5050 reflector.go:561] object-"openshift-ovn-kubernetes"/"ovnkube-config": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ovn-kubernetes/configmaps?fieldSelector=metadata.name%3Dovnkube-config&resourceVersion=109762": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:24.823072 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"ovnkube-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ovn-kubernetes/configmaps?fieldSelector=metadata.name%3Dovnkube-config&resourceVersion=109762\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:24.842714 5050 reflector.go:561] object-"openshift-image-registry"/"image-registry-tls": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/secrets?fieldSelector=metadata.name%3Dimage-registry-tls&resourceVersion=109743": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:24.842751 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-image-registry\"/\"image-registry-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/secrets?fieldSelector=metadata.name%3Dimage-registry-tls&resourceVersion=109743\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:24.862773 5050 reflector.go:561] object-"openshift-network-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109602": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:24.862824 5050 reflector.go:158] "Unhandled Error" 
err="object-\"openshift-network-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109602\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:24.883257 5050 reflector.go:561] object-"openshift-etcd-operator"/"etcd-service-ca-bundle": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-etcd-operator/configmaps?fieldSelector=metadata.name%3Detcd-service-ca-bundle&resourceVersion=109067": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:24.883311 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-etcd-operator\"/\"etcd-service-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-etcd-operator/configmaps?fieldSelector=metadata.name%3Detcd-service-ca-bundle&resourceVersion=109067\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:24.903030 5050 reflector.go:561] object-"openshift-multus"/"metrics-daemon-secret": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-multus/secrets?fieldSelector=metadata.name%3Dmetrics-daemon-secret&resourceVersion=109137": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:24.903072 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"metrics-daemon-secret\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-multus/secrets?fieldSelector=metadata.name%3Dmetrics-daemon-secret&resourceVersion=109137\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:24.923000 5050 reflector.go:561] object-"openstack"/"telemetry-autoscaling-dockercfg-7ght8": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dtelemetry-autoscaling-dockercfg-7ght8&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:24.923075 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"telemetry-autoscaling-dockercfg-7ght8\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dtelemetry-autoscaling-dockercfg-7ght8&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:24.942879 5050 reflector.go:561] object-"openstack"/"ovndbcluster-nb-config": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dovndbcluster-nb-config&resourceVersion=109807": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:24.942906 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"ovndbcluster-nb-config\": Failed to watch 
*v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dovndbcluster-nb-config&resourceVersion=109807\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:24.954121 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-5g8lw" podUID="3477354d-838b-48cc-a6c3-612088d82640" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.80:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:24.954195 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-5g8lw" podUID="3477354d-838b-48cc-a6c3-612088d82640" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.80:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:24.954235 5050 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-5g8lw" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:24.955040 5050 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="manager" containerStatusID={"Type":"cri-o","ID":"466b93fe028fca07c950e859d36217259a41f4f7bfb1b3eeba0ddc9b195b96a1"} pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-5g8lw" containerMessage="Container manager failed liveness probe, will be restarted" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:24.955105 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-5g8lw" podUID="3477354d-838b-48cc-a6c3-612088d82640" containerName="manager" containerID="cri-o://466b93fe028fca07c950e859d36217259a41f4f7bfb1b3eeba0ddc9b195b96a1" gracePeriod=10 Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:24.962327 5050 reflector.go:561] object-"openstack"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109685": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:24.962383 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109685\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:24.983315 5050 reflector.go:561] object-"openshift-kube-storage-version-migrator-operator"/"config": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-storage-version-migrator-operator/configmaps?fieldSelector=metadata.name%3Dconfig&resourceVersion=109067": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:24.983354 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-kube-storage-version-migrator-operator\"/\"config\": Failed to watch 
*v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-storage-version-migrator-operator/configmaps?fieldSelector=metadata.name%3Dconfig&resourceVersion=109067\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:25.003560 5050 reflector.go:561] object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager-operator/secrets?fieldSelector=metadata.name%3Dopenshift-controller-manager-operator-dockercfg-vw8fw&resourceVersion=109743": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:25.003625 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-dockercfg-vw8fw\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager-operator/secrets?fieldSelector=metadata.name%3Dopenshift-controller-manager-operator-dockercfg-vw8fw&resourceVersion=109743\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:25.016178 5050 patch_prober.go:28] interesting pod/perses-operator-5446b9c989-dqk4m container/perses-operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.1.129:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:25.016227 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/perses-operator-5446b9c989-dqk4m" podUID="051b7665-675e-4109-a8e8-5a416c8b49cc" containerName="perses-operator" probeResult="failure" output="Get \"http://10.217.1.129:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:25.016333 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5446b9c989-dqk4m" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:25.023236 5050 reflector.go:561] object-"openstack"/"rabbitmq-server-dockercfg-zd6qh": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Drabbitmq-server-dockercfg-zd6qh&resourceVersion=109743": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:25.023268 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"rabbitmq-server-dockercfg-zd6qh\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Drabbitmq-server-dockercfg-zd6qh&resourceVersion=109743\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:25.042555 5050 reflector.go:561] object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d": failed to list *v1.Secret: Get 
"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-storage-version-migrator-operator/secrets?fieldSelector=metadata.name%3Dkube-storage-version-migrator-operator-dockercfg-2bh8d&resourceVersion=109225": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:25.042588 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-dockercfg-2bh8d\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-storage-version-migrator-operator/secrets?fieldSelector=metadata.name%3Dkube-storage-version-migrator-operator-dockercfg-2bh8d&resourceVersion=109225\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:25.062223 5050 request.go:700] Waited for 6.585493022s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/configmaps?fieldSelector=metadata.name%3Dconfig&resourceVersion=109522 Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:25.062695 5050 reflector.go:561] object-"openshift-controller-manager"/"config": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/configmaps?fieldSelector=metadata.name%3Dconfig&resourceVersion=109522": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:25.062731 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/configmaps?fieldSelector=metadata.name%3Dconfig&resourceVersion=109522\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:25.083082 5050 reflector.go:561] object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/secrets?fieldSelector=metadata.name%3Dcertified-operators-dockercfg-4rs5g&resourceVersion=109743": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:25.083113 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-4rs5g\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/secrets?fieldSelector=metadata.name%3Dcertified-operators-dockercfg-4rs5g&resourceVersion=109743\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:25.102654 5050 reflector.go:561] object-"openshift-cluster-samples-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-samples-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109888": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:25.102689 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-samples-operator\"/\"kube-root-ca.crt\": Failed to 
watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-samples-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109888\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:25.122635 5050 reflector.go:561] object-"openshift-ingress-canary"/"canary-serving-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-canary/secrets?fieldSelector=metadata.name%3Dcanary-serving-cert&resourceVersion=109803": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:25.122695 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-ingress-canary\"/\"canary-serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-canary/secrets?fieldSelector=metadata.name%3Dcanary-serving-cert&resourceVersion=109803\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:25.127805 5050 patch_prober.go:28] interesting pod/controller-manager-69cffd76bd-8bkp6 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.65:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:25.127846 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-69cffd76bd-8bkp6" podUID="6a956942-e4db-4c66-a7c2-1c370c1569f4" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.65:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:25.127899 5050 patch_prober.go:28] interesting pod/controller-manager-69cffd76bd-8bkp6 container/controller-manager namespace/openshift-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.65:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:25.127911 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-controller-manager/controller-manager-69cffd76bd-8bkp6" podUID="6a956942-e4db-4c66-a7c2-1c370c1569f4" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.65:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:25.127937 5050 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-controller-manager/controller-manager-69cffd76bd-8bkp6" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:25.128702 5050 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="controller-manager" containerStatusID={"Type":"cri-o","ID":"abbb98db2a4a22273080da7e82e528f04ecd9efd37eee52f71d3c5fe7d719895"} pod="openshift-controller-manager/controller-manager-69cffd76bd-8bkp6" containerMessage="Container controller-manager failed liveness probe, will be restarted" Dec 11 15:54:41 crc 
kubenswrapper[5050]: I1211 15:54:25.128736 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-69cffd76bd-8bkp6" podUID="6a956942-e4db-4c66-a7c2-1c370c1569f4" containerName="controller-manager" containerID="cri-o://abbb98db2a4a22273080da7e82e528f04ecd9efd37eee52f71d3c5fe7d719895" gracePeriod=30 Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:25.143040 5050 reflector.go:561] object-"openstack"/"placement-placement-dockercfg-4zzmp": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dplacement-placement-dockercfg-4zzmp&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:25.143089 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"placement-placement-dockercfg-4zzmp\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dplacement-placement-dockercfg-4zzmp&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:25.163238 5050 reflector.go:561] object-"openstack-operators"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109158": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:25.163290 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109158\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:25.183438 5050 reflector.go:561] object-"openstack"/"glance-scripts": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dglance-scripts&resourceVersion=109743": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:25.183491 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"glance-scripts\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dglance-scripts&resourceVersion=109743\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:25.203216 5050 reflector.go:561] object-"openshift-service-ca"/"signing-key": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca/secrets?fieldSelector=metadata.name%3Dsigning-key&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:25.203269 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-service-ca\"/\"signing-key\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca/secrets?fieldSelector=metadata.name%3Dsigning-key&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:25.209772 5050 patch_prober.go:28] interesting pod/apiserver-76f77b778f-cd66n container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/readyz?exclude=etcd&exclude=etcd-readiness\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:25.209834 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-76f77b778f-cd66n" podUID="ea884e88-c0df-4212-976a-0d7ce1731fdc" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.9:8443/readyz?exclude=etcd&exclude=etcd-readiness\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:25.209848 5050 patch_prober.go:28] interesting pod/apiserver-76f77b778f-cd66n container/openshift-apiserver namespace/openshift-apiserver: Liveness probe status=failure output="Get \"https://10.217.0.9:8443/livez?exclude=etcd\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:25.209892 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-apiserver/apiserver-76f77b778f-cd66n" podUID="ea884e88-c0df-4212-976a-0d7ce1731fdc" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.9:8443/livez?exclude=etcd\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:25.217646 5050 patch_prober.go:28] interesting pod/apiserver-7bbb656c7d-cnp7n container/oauth-apiserver namespace/openshift-oauth-apiserver: Liveness probe status=failure output="Get \"https://10.217.0.10:8443/livez?exclude=etcd\": context deadline exceeded" start-of-body= Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:25.217673 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cnp7n" podUID="1d89350d-55e9-4ef6-8182-287894b6c14b" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.217.0.10:8443/livez?exclude=etcd\": context deadline exceeded" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:25.222699 5050 reflector.go:561] object-"openshift-machine-config-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109602": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:25.222746 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109602\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 
15:54:25.243478 5050 reflector.go:561] object-"openshift-service-ca"/"service-ca-dockercfg-pn86c": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca/secrets?fieldSelector=metadata.name%3Dservice-ca-dockercfg-pn86c&resourceVersion=109803": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:25.243533 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-service-ca\"/\"service-ca-dockercfg-pn86c\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca/secrets?fieldSelector=metadata.name%3Dservice-ca-dockercfg-pn86c&resourceVersion=109803\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:25.263304 5050 reflector.go:561] object-"openstack"/"placement-config-data": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dplacement-config-data&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:25.263344 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"placement-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dplacement-config-data&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:25.283142 5050 reflector.go:561] object-"openshift-oauth-apiserver"/"encryption-config-1": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-oauth-apiserver/secrets?fieldSelector=metadata.name%3Dencryption-config-1&resourceVersion=109137": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:25.283178 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-oauth-apiserver\"/\"encryption-config-1\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-oauth-apiserver/secrets?fieldSelector=metadata.name%3Dencryption-config-1&resourceVersion=109137\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:25.303224 5050 reflector.go:561] object-"openshift-machine-config-operator"/"mcc-proxy-tls": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmcc-proxy-tls&resourceVersion=109846": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:25.303261 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"mcc-proxy-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmcc-proxy-tls&resourceVersion=109846\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:25.322675 5050 reflector.go:561] object-"openstack"/"neutron-httpd-config": failed to list *v1.Secret: Get 
"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dneutron-httpd-config&resourceVersion=109846": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:25.322707 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"neutron-httpd-config\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dneutron-httpd-config&resourceVersion=109846\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:25.342484 5050 reflector.go:561] object-"metallb-system"/"frr-k8s-daemon-dockercfg-wks79": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/secrets?fieldSelector=metadata.name%3Dfrr-k8s-daemon-dockercfg-wks79&resourceVersion=109586": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:25.342520 5050 reflector.go:158] "Unhandled Error" err="object-\"metallb-system\"/\"frr-k8s-daemon-dockercfg-wks79\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/secrets?fieldSelector=metadata.name%3Dfrr-k8s-daemon-dockercfg-wks79&resourceVersion=109586\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:25.362795 5050 reflector.go:561] object-"openshift-authentication"/"v4-0-config-user-template-error": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/secrets?fieldSelector=metadata.name%3Dv4-0-config-user-template-error&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:25.362844 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"v4-0-config-user-template-error\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/secrets?fieldSelector=metadata.name%3Dv4-0-config-user-template-error&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:25.382729 5050 reflector.go:561] object-"openstack"/"metric-storage-prometheus-dockercfg-4drvb": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dmetric-storage-prometheus-dockercfg-4drvb&resourceVersion=109846": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:25.382780 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"metric-storage-prometheus-dockercfg-4drvb\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dmetric-storage-prometheus-dockercfg-4drvb&resourceVersion=109846\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:25.401217 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-controller-manager-5d666c4679-krnwf" 
podUID="a6130f1a-c95b-445f-8235-e57fdcb270fe" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.48:8080/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:25.402954 5050 reflector.go:561] object-"openshift-controller-manager"/"client-ca": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/configmaps?fieldSelector=metadata.name%3Dclient-ca&resourceVersion=109685": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:25.402987 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"client-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/configmaps?fieldSelector=metadata.name%3Dclient-ca&resourceVersion=109685\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:25.423365 5050 reflector.go:561] pkg/kubelet/config/apiserver.go:66: failed to list *v1.Pod: Get "https://api-int.crc.testing:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dcrc&resourceVersion=109930": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:25.423406 5050 reflector.go:158] "Unhandled Error" err="pkg/kubelet/config/apiserver.go:66: Failed to watch *v1.Pod: failed to list *v1.Pod: Get \"https://api-int.crc.testing:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dcrc&resourceVersion=109930\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:25.442710 5050 reflector.go:561] object-"openstack"/"cinder-api-config-data": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcinder-api-config-data&resourceVersion=109586": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:25.442761 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"cinder-api-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcinder-api-config-data&resourceVersion=109586\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:25.462642 5050 reflector.go:561] object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-machine-approver/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109888": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:25.462682 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-machine-approver\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-machine-approver/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109888\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc 
kubenswrapper[5050]: W1211 15:54:25.482603 5050 reflector.go:561] object-"openstack"/"manila-share-share1-config-data": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dmanila-share-share1-config-data&resourceVersion=109803": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:25.482631 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"manila-share-share1-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dmanila-share-share1-config-data&resourceVersion=109803\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:25.502472 5050 reflector.go:561] object-"cert-manager"/"cert-manager-cainjector-dockercfg-sxrxp": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/cert-manager/secrets?fieldSelector=metadata.name%3Dcert-manager-cainjector-dockercfg-sxrxp&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:25.502521 5050 reflector.go:158] "Unhandled Error" err="object-\"cert-manager\"/\"cert-manager-cainjector-dockercfg-sxrxp\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/cert-manager/secrets?fieldSelector=metadata.name%3Dcert-manager-cainjector-dockercfg-sxrxp&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:25.522964 5050 reflector.go:561] object-"openstack"/"nova-scheduler-config-data": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dnova-scheduler-config-data&resourceVersion=109887": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:25.523056 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"nova-scheduler-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dnova-scheduler-config-data&resourceVersion=109887\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:25.543140 5050 reflector.go:561] object-"openshift-service-ca-operator"/"service-ca-operator-config": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca-operator/configmaps?fieldSelector=metadata.name%3Dservice-ca-operator-config&resourceVersion=109807": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:25.543185 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-service-ca-operator\"/\"service-ca-operator-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca-operator/configmaps?fieldSelector=metadata.name%3Dservice-ca-operator-config&resourceVersion=109807\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:25.563234 5050 
reflector.go:561] object-"openstack"/"alertmanager-metric-storage-web-config": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dalertmanager-metric-storage-web-config&resourceVersion=109846": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:25.563228 5050 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:6443/readyz\": dial tcp 192.168.126.11:6443: connect: connection refused" start-of-body= Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:25.563275 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"alertmanager-metric-storage-web-config\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dalertmanager-metric-storage-web-config&resourceVersion=109846\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:25.563316 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/readyz\": dial tcp 192.168.126.11:6443: connect: connection refused" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:25.582876 5050 reflector.go:561] object-"cert-manager-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/cert-manager-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109807": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:25.582927 5050 reflector.go:158] "Unhandled Error" err="object-\"cert-manager-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/cert-manager-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109807\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:25.603353 5050 reflector.go:561] object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmachine-config-operator-dockercfg-98p87&resourceVersion=109846": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:25.603399 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"machine-config-operator-dockercfg-98p87\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmachine-config-operator-dockercfg-98p87&resourceVersion=109846\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:25.623329 5050 reflector.go:561] object-"openshift-nmstate"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get 
"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-nmstate/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109851": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:25.623376 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-nmstate\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-nmstate/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109851\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:25.642542 5050 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openstack/ovndbcluster-nb-etc-ovn-ovsdbserver-nb-2: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/persistentvolumeclaims/ovndbcluster-nb-etc-ovn-ovsdbserver-nb-2\": dial tcp 38.102.83.147:6443: connect: connection refused" pod="openstack/ovsdbserver-nb-2" volumeName="ovndbcluster-nb-etc-ovn" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:25.662476 5050 reflector.go:561] object-"metallb-system"/"metallb-excludel2": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/configmaps?fieldSelector=metadata.name%3Dmetallb-excludel2&resourceVersion=109762": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:25.662745 5050 reflector.go:158] "Unhandled Error" err="object-\"metallb-system\"/\"metallb-excludel2\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/configmaps?fieldSelector=metadata.name%3Dmetallb-excludel2&resourceVersion=109762\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:25.682753 5050 reflector.go:561] object-"openshift-oauth-apiserver"/"audit-1": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-oauth-apiserver/configmaps?fieldSelector=metadata.name%3Daudit-1&resourceVersion=109642": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:25.682784 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-oauth-apiserver\"/\"audit-1\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-oauth-apiserver/configmaps?fieldSelector=metadata.name%3Daudit-1&resourceVersion=109642\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:25.702741 5050 status_manager.go:851] "Failed to get status for pod" podUID="85683dfb-37fb-4301-8c7a-fbb7453b303d" pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-n7crp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/cinder-operator-controller-manager-6c677c69b-n7crp\": dial tcp 38.102.83.147:6443: connect: connection refused" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:25.722634 5050 reflector.go:561] object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx": failed to list *v1.Secret: Get 
"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/secrets?fieldSelector=metadata.name%3Dcluster-image-registry-operator-dockercfg-m4qtx&resourceVersion=109397": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:25.722674 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-image-registry\"/\"cluster-image-registry-operator-dockercfg-m4qtx\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/secrets?fieldSelector=metadata.name%3Dcluster-image-registry-operator-dockercfg-m4qtx&resourceVersion=109397\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:25.742854 5050 reflector.go:561] object-"metallb-system"/"metallb-webhook-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/secrets?fieldSelector=metadata.name%3Dmetallb-webhook-cert&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:25.742890 5050 reflector.go:158] "Unhandled Error" err="object-\"metallb-system\"/\"metallb-webhook-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/secrets?fieldSelector=metadata.name%3Dmetallb-webhook-cert&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:25.763040 5050 reflector.go:561] object-"openshift-etcd-operator"/"etcd-operator-serving-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-etcd-operator/secrets?fieldSelector=metadata.name%3Detcd-operator-serving-cert&resourceVersion=109137": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:25.763085 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-etcd-operator\"/\"etcd-operator-serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-etcd-operator/secrets?fieldSelector=metadata.name%3Detcd-operator-serving-cert&resourceVersion=109137\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:25.783332 5050 reflector.go:561] object-"openstack"/"octavia-api-scripts": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Doctavia-api-scripts&resourceVersion=109803": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:25.783369 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"octavia-api-scripts\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Doctavia-api-scripts&resourceVersion=109803\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:25.802688 5050 reflector.go:561] object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get 
"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-storage-version-migrator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109762": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:25.802725 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-kube-storage-version-migrator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-storage-version-migrator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109762\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:25.822262 5050 patch_prober.go:28] interesting pod/dns-default-25p7l container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.217.0.27:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:25.822313 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-25p7l" podUID="66dfe3f5-9e7a-4e1c-b03a-b6f01dab6ef4" containerName="dns" probeResult="failure" output="Get \"http://10.217.0.27:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:25.822905 5050 reflector.go:561] object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmachine-config-controller-dockercfg-c2lfx&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:25.823041 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"machine-config-controller-dockercfg-c2lfx\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmachine-config-controller-dockercfg-c2lfx&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:25.842883 5050 reflector.go:561] object-"openstack"/"openstack-config": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dopenstack-config&resourceVersion=109685": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:25.842964 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"openstack-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dopenstack-config&resourceVersion=109685\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:25.862724 5050 reflector.go:561] object-"openshift-image-registry"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get 
"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109067": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:25.862764 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-image-registry\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109067\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:25.882921 5050 reflector.go:561] object-"openshift-ingress"/"router-stats-default": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress/secrets?fieldSelector=metadata.name%3Drouter-stats-default&resourceVersion=109887": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:25.882966 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-ingress\"/\"router-stats-default\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress/secrets?fieldSelector=metadata.name%3Drouter-stats-default&resourceVersion=109887\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:25.903357 5050 reflector.go:561] object-"openstack"/"manila-manila-dockercfg-d7578": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dmanila-manila-dockercfg-d7578&resourceVersion=109803": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:25.903395 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"manila-manila-dockercfg-d7578\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dmanila-manila-dockercfg-d7578&resourceVersion=109803\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:25.922946 5050 reflector.go:561] object-"openstack"/"heat-heat-dockercfg-mz9rx": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dheat-heat-dockercfg-mz9rx&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:25.922975 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"heat-heat-dockercfg-mz9rx\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dheat-heat-dockercfg-mz9rx&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:25.942504 5050 reflector.go:561] object-"openshift-oauth-apiserver"/"etcd-client": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-oauth-apiserver/secrets?fieldSelector=metadata.name%3Detcd-client&resourceVersion=109671": dial 
tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:25.942527 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-oauth-apiserver\"/\"etcd-client\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-oauth-apiserver/secrets?fieldSelector=metadata.name%3Detcd-client&resourceVersion=109671\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:25.963279 5050 reflector.go:561] object-"openstack"/"openstack-cell1-scripts": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dopenstack-cell1-scripts&resourceVersion=109762": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:25.963328 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"openstack-cell1-scripts\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dopenstack-cell1-scripts&resourceVersion=109762\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:25.983349 5050 reflector.go:561] object-"openshift-machine-api"/"kube-rbac-proxy": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-api/configmaps?fieldSelector=metadata.name%3Dkube-rbac-proxy&resourceVersion=109685": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:25.983380 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"kube-rbac-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-api/configmaps?fieldSelector=metadata.name%3Dkube-rbac-proxy&resourceVersion=109685\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:25.996150 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-5g8lw" podUID="3477354d-838b-48cc-a6c3-612088d82640" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.80:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:26.003243 5050 reflector.go:561] object-"openshift-service-ca"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109807": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:26.003316 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-service-ca\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109807\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 
15:54:26.022540 5050 reflector.go:561] object-"openstack"/"barbican-worker-config-data": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dbarbican-worker-config-data&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:26.022636 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"barbican-worker-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dbarbican-worker-config-data&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:26.043048 5050 reflector.go:561] object-"openstack"/"prometheus-metric-storage": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dprometheus-metric-storage&resourceVersion=109846": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:26.043100 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"prometheus-metric-storage\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dprometheus-metric-storage&resourceVersion=109846\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:26.059170 5050 patch_prober.go:28] interesting pod/perses-operator-5446b9c989-dqk4m container/perses-operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.1.129:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:26.059229 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/perses-operator-5446b9c989-dqk4m" podUID="051b7665-675e-4109-a8e8-5a416c8b49cc" containerName="perses-operator" probeResult="failure" output="Get \"http://10.217.1.129:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:26.062432 5050 request.go:700] Waited for 6.826803108s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ovn-kubernetes/configmaps?fieldSelector=metadata.name%3Dovnkube-script-lib&resourceVersion=109685 Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:26.062907 5050 reflector.go:561] object-"openshift-ovn-kubernetes"/"ovnkube-script-lib": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ovn-kubernetes/configmaps?fieldSelector=metadata.name%3Dovnkube-script-lib&resourceVersion=109685": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:26.062961 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"ovnkube-script-lib\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ovn-kubernetes/configmaps?fieldSelector=metadata.name%3Dovnkube-script-lib&resourceVersion=109685\": dial tcp 38.102.83.147:6443: connect: 
connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:26.082718 5050 reflector.go:561] object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-8nf9c": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dcinder-operator-controller-manager-dockercfg-8nf9c&resourceVersion=109887": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:26.082784 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"cinder-operator-controller-manager-dockercfg-8nf9c\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dcinder-operator-controller-manager-dockercfg-8nf9c&resourceVersion=109887\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:26.103207 5050 reflector.go:561] object-"openshift-marketplace"/"marketplace-trusted-ca": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/configmaps?fieldSelector=metadata.name%3Dmarketplace-trusted-ca&resourceVersion=109117": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:26.103287 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-marketplace\"/\"marketplace-trusted-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/configmaps?fieldSelector=metadata.name%3Dmarketplace-trusted-ca&resourceVersion=109117\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:26.123061 5050 reflector.go:561] object-"openshift-console"/"console-config": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/configmaps?fieldSelector=metadata.name%3Dconsole-config&resourceVersion=109602": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:26.123124 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-console\"/\"console-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/configmaps?fieldSelector=metadata.name%3Dconsole-config&resourceVersion=109602\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:26.143143 5050 reflector.go:561] object-"openshift-authentication"/"v4-0-config-user-template-login": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/secrets?fieldSelector=metadata.name%3Dv4-0-config-user-template-login&resourceVersion=109586": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:26.143181 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"v4-0-config-user-template-login\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/secrets?fieldSelector=metadata.name%3Dv4-0-config-user-template-login&resourceVersion=109586\": 
dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:26.163132 5050 reflector.go:561] object-"cert-manager"/"cert-manager-dockercfg-8z6ch": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/cert-manager/secrets?fieldSelector=metadata.name%3Dcert-manager-dockercfg-8z6ch&resourceVersion=109623": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:26.163200 5050 reflector.go:158] "Unhandled Error" err="object-\"cert-manager\"/\"cert-manager-dockercfg-8z6ch\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/cert-manager/secrets?fieldSelector=metadata.name%3Dcert-manager-dockercfg-8z6ch&resourceVersion=109623\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:26.183179 5050 reflector.go:561] object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/secrets?fieldSelector=metadata.name%3Dkube-apiserver-operator-serving-cert&resourceVersion=109137": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:26.183259 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/secrets?fieldSelector=metadata.name%3Dkube-apiserver-operator-serving-cert&resourceVersion=109137\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:26.203365 5050 reflector.go:561] object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ovn-kubernetes/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109602": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:26.203435 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ovn-kubernetes/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109602\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:26.223536 5050 reflector.go:561] object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-crgwv": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Drabbitmq-cluster-operator-controller-manager-dockercfg-crgwv&resourceVersion=109743": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:26.223626 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"rabbitmq-cluster-operator-controller-manager-dockercfg-crgwv\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Drabbitmq-cluster-operator-controller-manager-dockercfg-crgwv&resourceVersion=109743\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:26.240162 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-5fb79d99b5-m4xgd" podUID="2086bc41-00a2-4c97-a491-08511f3ed6e5" containerName="horizon" probeResult="failure" output="Get \"http://10.217.1.117:8080/dashboard/auth/login/?next=/dashboard/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:26.240272 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-5fb79d99b5-m4xgd" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:26.240561 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/horizon-5fb79d99b5-m4xgd" podUID="2086bc41-00a2-4c97-a491-08511f3ed6e5" containerName="horizon" probeResult="failure" output="Get \"http://10.217.1.117:8080/dashboard/auth/login/?next=/dashboard/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:26.240637 5050 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/horizon-5fb79d99b5-m4xgd" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:26.241044 5050 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="horizon" containerStatusID={"Type":"cri-o","ID":"c2abb4cc449a9d8127c16b79dfe68ac27a46606a1e98e66a409122244f1e0137"} pod="openstack/horizon-5fb79d99b5-m4xgd" containerMessage="Container horizon failed liveness probe, will be restarted" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:26.241071 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-5fb79d99b5-m4xgd" podUID="2086bc41-00a2-4c97-a491-08511f3ed6e5" containerName="horizon" containerID="cri-o://c2abb4cc449a9d8127c16b79dfe68ac27a46606a1e98e66a409122244f1e0137" gracePeriod=30 Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:26.242486 5050 reflector.go:561] object-"openstack"/"nova-cell0-conductor-config-data": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dnova-cell0-conductor-config-data&resourceVersion=109137": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:26.242564 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"nova-cell0-conductor-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dnova-cell0-conductor-config-data&resourceVersion=109137\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:26.262667 5050 reflector.go:561] object-"openshift-multus"/"multus-daemon-config": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-multus/configmaps?fieldSelector=metadata.name%3Dmultus-daemon-config&resourceVersion=109067": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:26.262732 5050 reflector.go:158] "Unhandled Error" 
err="object-\"openshift-multus\"/\"multus-daemon-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-multus/configmaps?fieldSelector=metadata.name%3Dmultus-daemon-config&resourceVersion=109067\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:26.282517 5050 reflector.go:561] object-"openshift-marketplace"/"community-operators-dockercfg-dmngl": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/secrets?fieldSelector=metadata.name%3Dcommunity-operators-dockercfg-dmngl&resourceVersion=109803": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:26.282584 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-marketplace\"/\"community-operators-dockercfg-dmngl\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/secrets?fieldSelector=metadata.name%3Dcommunity-operators-dockercfg-dmngl&resourceVersion=109803\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:26.303085 5050 reflector.go:561] object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/secrets?fieldSelector=metadata.name%3Dkube-controller-manager-operator-serving-cert&resourceVersion=109846": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:26.303158 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/secrets?fieldSelector=metadata.name%3Dkube-controller-manager-operator-serving-cert&resourceVersion=109846\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:26.304195 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="hostpath-provisioner/csi-hostpathplugin-mbbdj" podUID="2b344bba-8e1d-415b-9b5f-e21d3144fe42" containerName="hostpath-provisioner" probeResult="failure" output="Get \"http://10.217.0.31:9898/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:26.304251 5050 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="hostpath-provisioner/csi-hostpathplugin-mbbdj" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:26.305029 5050 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="hostpath-provisioner" containerStatusID={"Type":"cri-o","ID":"b5bea0bcdd2e66523511acdf8482695e27c2297dc7d6b7729ac1ce7866fb5763"} pod="hostpath-provisioner/csi-hostpathplugin-mbbdj" containerMessage="Container hostpath-provisioner failed liveness probe, will be restarted" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:26.305101 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="hostpath-provisioner/csi-hostpathplugin-mbbdj" 
podUID="2b344bba-8e1d-415b-9b5f-e21d3144fe42" containerName="hostpath-provisioner" containerID="cri-o://b5bea0bcdd2e66523511acdf8482695e27c2297dc7d6b7729ac1ce7866fb5763" gracePeriod=30 Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:26.323145 5050 reflector.go:561] object-"metallb-system"/"speaker-dockercfg-p2rzt": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/secrets?fieldSelector=metadata.name%3Dspeaker-dockercfg-p2rzt&resourceVersion=109887": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:26.323250 5050 reflector.go:158] "Unhandled Error" err="object-\"metallb-system\"/\"speaker-dockercfg-p2rzt\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/secrets?fieldSelector=metadata.name%3Dspeaker-dockercfg-p2rzt&resourceVersion=109887\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:26.343111 5050 reflector.go:561] object-"openshift-dns-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-dns-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109888": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:26.343171 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-dns-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-dns-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109888\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:26.362994 5050 reflector.go:561] object-"openstack"/"glance-default-external-config-data": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dglance-default-external-config-data&resourceVersion=109586": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:26.363052 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"glance-default-external-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dglance-default-external-config-data&resourceVersion=109586\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:26.382804 5050 reflector.go:561] object-"openshift-machine-config-operator"/"node-bootstrapper-token": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dnode-bootstrapper-token&resourceVersion=109887": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:26.382843 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"node-bootstrapper-token\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dnode-bootstrapper-token&resourceVersion=109887\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:26.402991 5050 reflector.go:561] object-"openshift-console"/"default-dockercfg-chnjx": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/secrets?fieldSelector=metadata.name%3Ddefault-dockercfg-chnjx&resourceVersion=109743": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:26.403063 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-console\"/\"default-dockercfg-chnjx\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/secrets?fieldSelector=metadata.name%3Ddefault-dockercfg-chnjx&resourceVersion=109743\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:26.422986 5050 reflector.go:561] object-"openstack"/"nova-metadata-config-data": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dnova-metadata-config-data&resourceVersion=109137": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:26.423052 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"nova-metadata-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dnova-metadata-config-data&resourceVersion=109137\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:26.442498 5050 reflector.go:561] object-"metallb-system"/"metallb-operator-webhook-server-service-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/secrets?fieldSelector=metadata.name%3Dmetallb-operator-webhook-server-service-cert&resourceVersion=109215": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:26.442571 5050 reflector.go:158] "Unhandled Error" err="object-\"metallb-system\"/\"metallb-operator-webhook-server-service-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/secrets?fieldSelector=metadata.name%3Dmetallb-operator-webhook-server-service-cert&resourceVersion=109215\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:26.462829 5050 reflector.go:561] object-"openshift-console-operator"/"trusted-ca": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/configmaps?fieldSelector=metadata.name%3Dtrusted-ca&resourceVersion=109602": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:26.462898 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-console-operator\"/\"trusted-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/configmaps?fieldSelector=metadata.name%3Dtrusted-ca&resourceVersion=109602\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:26.483296 5050 reflector.go:561] object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/secrets?fieldSelector=metadata.name%3Dkube-apiserver-operator-dockercfg-x57mr&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:26.483367 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-dockercfg-x57mr\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/secrets?fieldSelector=metadata.name%3Dkube-apiserver-operator-dockercfg-x57mr&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:26.502689 5050 reflector.go:561] object-"openstack"/"ovsdbserver-nb": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dovsdbserver-nb&resourceVersion=109851": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:26.502767 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"ovsdbserver-nb\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dovsdbserver-nb&resourceVersion=109851\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:26.522720 5050 reflector.go:561] object-"openshift-machine-api"/"machine-api-operator-images": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-api/configmaps?fieldSelector=metadata.name%3Dmachine-api-operator-images&resourceVersion=109807": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:26.522845 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"machine-api-operator-images\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-api/configmaps?fieldSelector=metadata.name%3Dmachine-api-operator-images&resourceVersion=109807\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:26.540155 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/controller-5bddd4b946-644bs" podUID="5f0d9e74-baf0-4759-bb9f-3ff0a25e95d9" containerName="controller" probeResult="failure" output="Get \"http://10.217.0.51:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:26.540201 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/controller-5bddd4b946-644bs" podUID="5f0d9e74-baf0-4759-bb9f-3ff0a25e95d9" containerName="controller" 
probeResult="failure" output="Get \"http://10.217.0.51:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:26.542625 5050 reflector.go:561] object-"openshift-service-ca-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109602": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:26.542699 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-service-ca-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109602\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:26.562393 5050 reflector.go:561] object-"openstack"/"octavia-housekeeping-config-data": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Doctavia-housekeeping-config-data&resourceVersion=109671": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:26.562489 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"octavia-housekeeping-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Doctavia-housekeeping-config-data&resourceVersion=109671\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:26.582643 5050 reflector.go:561] object-"openshift-ingress-operator"/"metrics-tls": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/secrets?fieldSelector=metadata.name%3Dmetrics-tls&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:26.582745 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-ingress-operator\"/\"metrics-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/secrets?fieldSelector=metadata.name%3Dmetrics-tls&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:26.603157 5050 reflector.go:561] object-"hostpath-provisioner"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/hostpath-provisioner/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109559": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:26.603235 5050 reflector.go:158] "Unhandled Error" err="object-\"hostpath-provisioner\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/hostpath-provisioner/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109559\": dial tcp 
38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:26.623268 5050 reflector.go:561] object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-gkldc": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dmanila-operator-controller-manager-dockercfg-gkldc&resourceVersion=109743": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:26.623332 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"manila-operator-controller-manager-dockercfg-gkldc\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dmanila-operator-controller-manager-dockercfg-gkldc&resourceVersion=109743\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:26.643185 5050 reflector.go:561] object-"openshift-marketplace"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109522": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:26.643273 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-marketplace\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109522\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:26.663541 5050 reflector.go:561] object-"openshift-multus"/"multus-ac-dockercfg-9lkdf": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-multus/secrets?fieldSelector=metadata.name%3Dmultus-ac-dockercfg-9lkdf&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:26.663614 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"multus-ac-dockercfg-9lkdf\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-multus/secrets?fieldSelector=metadata.name%3Dmultus-ac-dockercfg-9lkdf&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:26.683404 5050 reflector.go:561] object-"openstack"/"aodh-scripts": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Daodh-scripts&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:26.683452 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"aodh-scripts\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Daodh-scripts&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc 
kubenswrapper[5050]: W1211 15:54:26.703498 5050 reflector.go:561] object-"openshift-operators"/"observability-operator-sa-dockercfg-f5g9s": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operators/secrets?fieldSelector=metadata.name%3Dobservability-operator-sa-dockercfg-f5g9s&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:26.703585 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-operators\"/\"observability-operator-sa-dockercfg-f5g9s\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operators/secrets?fieldSelector=metadata.name%3Dobservability-operator-sa-dockercfg-f5g9s&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:26.723256 5050 reflector.go:561] object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/secrets?fieldSelector=metadata.name%3Dredhat-operators-dockercfg-ct8rh&resourceVersion=109846": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:26.723330 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-marketplace\"/\"redhat-operators-dockercfg-ct8rh\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/secrets?fieldSelector=metadata.name%3Dredhat-operators-dockercfg-ct8rh&resourceVersion=109846\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:26.742613 5050 reflector.go:561] object-"openstack"/"memcached-memcached-dockercfg-kl4q7": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dmemcached-memcached-dockercfg-kl4q7&resourceVersion=109743": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:26.742708 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"memcached-memcached-dockercfg-kl4q7\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dmemcached-memcached-dockercfg-kl4q7&resourceVersion=109743\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:26.763141 5050 reflector.go:561] object-"openshift-dns-operator"/"metrics-tls": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-dns-operator/secrets?fieldSelector=metadata.name%3Dmetrics-tls&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:26.763214 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-dns-operator\"/\"metrics-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-dns-operator/secrets?fieldSelector=metadata.name%3Dmetrics-tls&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 
15:54:26.783546 5050 reflector.go:561] object-"openstack"/"octavia-rsyslog-config-data": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Doctavia-rsyslog-config-data&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:26.783674 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"octavia-rsyslog-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Doctavia-rsyslog-config-data&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:26.803108 5050 reflector.go:561] object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-25r77": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dinfra-operator-controller-manager-dockercfg-25r77&resourceVersion=109846": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:26.803170 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"infra-operator-controller-manager-dockercfg-25r77\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dinfra-operator-controller-manager-dockercfg-25r77&resourceVersion=109846\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:26.823295 5050 reflector.go:561] object-"openshift-operators"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operators/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109522": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:26.823475 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-operators\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operators/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109522\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:26.843065 5050 reflector.go:561] object-"openshift-cluster-machine-approver"/"machine-approver-tls": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-machine-approver/secrets?fieldSelector=metadata.name%3Dmachine-approver-tls&resourceVersion=109887": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:26.843160 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-machine-approver\"/\"machine-approver-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-machine-approver/secrets?fieldSelector=metadata.name%3Dmachine-approver-tls&resourceVersion=109887\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 
15:54:41 crc kubenswrapper[5050]: W1211 15:54:26.862673 5050 reflector.go:561] object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dprometheus-metric-storage-thanos-prometheus-http-client-file&resourceVersion=109846": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:26.862717 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"prometheus-metric-storage-thanos-prometheus-http-client-file\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dprometheus-metric-storage-thanos-prometheus-http-client-file&resourceVersion=109846\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:26.882515 5050 reflector.go:561] object-"openshift-ovn-kubernetes"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ovn-kubernetes/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109888": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:26.882553 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ovn-kubernetes/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109888\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:26.903066 5050 reflector.go:561] object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/secrets?fieldSelector=metadata.name%3Dv4-0-config-user-idp-0-file-data&resourceVersion=109887": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:26.903115 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"v4-0-config-user-idp-0-file-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/secrets?fieldSelector=metadata.name%3Dv4-0-config-user-idp-0-file-data&resourceVersion=109887\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:26.923383 5050 reflector.go:561] object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/secrets?fieldSelector=metadata.name%3Dolm-operator-serviceaccount-dockercfg-rq7zk&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:26.923463 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serviceaccount-dockercfg-rq7zk\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/secrets?fieldSelector=metadata.name%3Dolm-operator-serviceaccount-dockercfg-rq7zk&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:26.927816 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/speaker-w4tzc" podUID="54e98831-cd88-4dee-90db-e8fbb006e9c3" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": dial tcp [::1]:29150: connect: connection refused" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:26.943375 5050 reflector.go:561] object-"openstack"/"cert-galera-openstack-svc": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-galera-openstack-svc&resourceVersion=109087": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:26.943446 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"cert-galera-openstack-svc\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-galera-openstack-svc&resourceVersion=109087\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:26.963313 5050 reflector.go:561] object-"metallb-system"/"metallb-operator-controller-manager-service-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/secrets?fieldSelector=metadata.name%3Dmetallb-operator-controller-manager-service-cert&resourceVersion=109671": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:26.963375 5050 reflector.go:158] "Unhandled Error" err="object-\"metallb-system\"/\"metallb-operator-controller-manager-service-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/secrets?fieldSelector=metadata.name%3Dmetallb-operator-controller-manager-service-cert&resourceVersion=109671\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:26.983330 5050 reflector.go:561] object-"openshift-ingress-canary"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-canary/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109888": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:26.983373 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-ingress-canary\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-canary/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109888\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:27.002757 5050 reflector.go:561] object-"openstack"/"heat-api-config-data": failed to list *v1.Secret: Get 
"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dheat-api-config-data&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:27.002836 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"heat-api-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dheat-api-config-data&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:27.022895 5050 reflector.go:561] object-"openstack"/"ovncontroller-metrics-config": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dovncontroller-metrics-config&resourceVersion=109602": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:27.022974 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"ovncontroller-metrics-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dovncontroller-metrics-config&resourceVersion=109602\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:27.042992 5050 reflector.go:561] object-"openshift-authentication"/"v4-0-config-user-template-provider-selection": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/secrets?fieldSelector=metadata.name%3Dv4-0-config-user-template-provider-selection&resourceVersion=109586": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:27.043094 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"v4-0-config-user-template-provider-selection\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/secrets?fieldSelector=metadata.name%3Dv4-0-config-user-template-provider-selection&resourceVersion=109586\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:27.062635 5050 reflector.go:561] object-"openstack"/"openstack-cell1": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dopenstack-cell1&resourceVersion=109067": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:27.062755 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"openstack-cell1\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dopenstack-cell1&resourceVersion=109067\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:27.082458 5050 request.go:700] Waited for 6.865956346s due to client-side throttling, not priority and fairness, request: 
GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/secrets?fieldSelector=metadata.name%3Dmarketplace-operator-metrics&resourceVersion=109137 Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:27.082815 5050 reflector.go:561] object-"openshift-marketplace"/"marketplace-operator-metrics": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/secrets?fieldSelector=metadata.name%3Dmarketplace-operator-metrics&resourceVersion=109137": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:27.082876 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-marketplace\"/\"marketplace-operator-metrics\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/secrets?fieldSelector=metadata.name%3Dmarketplace-operator-metrics&resourceVersion=109137\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:27.103511 5050 reflector.go:561] object-"openshift-ingress"/"router-certs-default": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress/secrets?fieldSelector=metadata.name%3Drouter-certs-default&resourceVersion=109087": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:27.103593 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-ingress\"/\"router-certs-default\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress/secrets?fieldSelector=metadata.name%3Drouter-certs-default&resourceVersion=109087\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:27.122877 5050 reflector.go:561] object-"openstack"/"glance-glance-dockercfg-ndgnr": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dglance-glance-dockercfg-ndgnr&resourceVersion=109743": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:27.122952 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"glance-glance-dockercfg-ndgnr\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dglance-glance-dockercfg-ndgnr&resourceVersion=109743\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:27.143042 5050 reflector.go:561] object-"openshift-apiserver"/"encryption-config-1": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/secrets?fieldSelector=metadata.name%3Dencryption-config-1&resourceVersion=109586": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:27.143116 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"encryption-config-1\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/secrets?fieldSelector=metadata.name%3Dencryption-config-1&resourceVersion=109586\": dial tcp 38.102.83.147:6443: connect: connection 
refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:27.162932 5050 reflector.go:561] object-"openstack"/"prometheus-metric-storage-web-config": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dprometheus-metric-storage-web-config&resourceVersion=109846": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:27.163059 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"prometheus-metric-storage-web-config\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dprometheus-metric-storage-web-config&resourceVersion=109846\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:27.183201 5050 reflector.go:561] object-"openshift-machine-config-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109888": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:27.183248 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109888\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:27.203382 5050 reflector.go:561] object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/configmaps?fieldSelector=metadata.name%3Dkube-apiserver-operator-config&resourceVersion=109158": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:27.203434 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/configmaps?fieldSelector=metadata.name%3Dkube-apiserver-operator-config&resourceVersion=109158\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:27.222944 5050 reflector.go:561] object-"openstack"/"galera-openstack-cell1-dockercfg-vwxp8": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dgalera-openstack-cell1-dockercfg-vwxp8&resourceVersion=109743": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:27.222990 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"galera-openstack-cell1-dockercfg-vwxp8\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dgalera-openstack-cell1-dockercfg-vwxp8&resourceVersion=109743\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:27.242867 5050 reflector.go:561] object-"openstack"/"ovndbcluster-sb-config": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dovndbcluster-sb-config&resourceVersion=109158": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:27.242904 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"ovndbcluster-sb-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dovndbcluster-sb-config&resourceVersion=109158\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:27.262428 5050 reflector.go:561] object-"openshift-console"/"trusted-ca-bundle": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/configmaps?fieldSelector=metadata.name%3Dtrusted-ca-bundle&resourceVersion=109602": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:27.262507 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-console\"/\"trusted-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/configmaps?fieldSelector=metadata.name%3Dtrusted-ca-bundle&resourceVersion=109602\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:27.279458 5050 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-r54sd container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.23:5443/healthz\": dial tcp 10.217.0.23:5443: connect: connection refused" start-of-body= Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:27.279511 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r54sd" podUID="dbd5b107-5d08-43af-881c-11540f395267" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.23:5443/healthz\": dial tcp 10.217.0.23:5443: connect: connection refused" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:27.282402 5050 reflector.go:561] object-"openstack"/"neutron-config": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dneutron-config&resourceVersion=109846": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:27.282474 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"neutron-config\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dneutron-config&resourceVersion=109846\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:27.303041 5050 reflector.go:561] 
object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-storage-version-migrator-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109158": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:27.303095 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-kube-storage-version-migrator-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-storage-version-migrator-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109158\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:27.322960 5050 reflector.go:561] object-"openshift-image-registry"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109158": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:27.323025 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-image-registry\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109158\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:27.343084 5050 reflector.go:561] object-"openshift-controller-manager"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109602": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:27.343199 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109602\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:27.362739 5050 reflector.go:561] object-"openshift-image-registry"/"image-registry-operator-tls": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/secrets?fieldSelector=metadata.name%3Dimage-registry-operator-tls&resourceVersion=109803": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:27.362795 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-image-registry\"/\"image-registry-operator-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/secrets?fieldSelector=metadata.name%3Dimage-registry-operator-tls&resourceVersion=109803\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" 
Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:27.382803 5050 reflector.go:561] object-"openshift-apiserver-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109685": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:27.382852 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109685\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:27.403104 5050 reflector.go:561] object-"openstack-operators"/"webhook-server-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dwebhook-server-cert&resourceVersion=109137": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:27.403151 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"webhook-server-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dwebhook-server-cert&resourceVersion=109137\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:27.423019 5050 reflector.go:561] object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/secrets?fieldSelector=metadata.name%3Dredhat-marketplace-dockercfg-x2ctb&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:27.423094 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-marketplace\"/\"redhat-marketplace-dockercfg-x2ctb\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/secrets?fieldSelector=metadata.name%3Dredhat-marketplace-dockercfg-x2ctb&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:27.443346 5050 reflector.go:561] object-"openshift-authentication-operator"/"authentication-operator-config": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication-operator/configmaps?fieldSelector=metadata.name%3Dauthentication-operator-config&resourceVersion=109602": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:27.443389 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication-operator\"/\"authentication-operator-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication-operator/configmaps?fieldSelector=metadata.name%3Dauthentication-operator-config&resourceVersion=109602\": dial 
tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:27.463314 5050 reflector.go:561] object-"openshift-image-registry"/"trusted-ca": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/configmaps?fieldSelector=metadata.name%3Dtrusted-ca&resourceVersion=109602": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:27.463372 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-image-registry\"/\"trusted-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/configmaps?fieldSelector=metadata.name%3Dtrusted-ca&resourceVersion=109602\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:27.482915 5050 reflector.go:561] object-"openstack"/"openstack-config-data": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dopenstack-config-data&resourceVersion=109762": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:27.483000 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"openstack-config-data\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dopenstack-config-data&resourceVersion=109762\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:27.502699 5050 reflector.go:561] object-"openstack"/"rabbitmq-cell1-erlang-cookie": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Drabbitmq-cell1-erlang-cookie&resourceVersion=109586": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:27.502757 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"rabbitmq-cell1-erlang-cookie\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Drabbitmq-cell1-erlang-cookie&resourceVersion=109586\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:27.522884 5050 reflector.go:561] object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-d5dn6": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dhorizon-operator-controller-manager-dockercfg-d5dn6&resourceVersion=109743": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:27.522940 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"horizon-operator-controller-manager-dockercfg-d5dn6\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dhorizon-operator-controller-manager-dockercfg-d5dn6&resourceVersion=109743\": dial tcp 38.102.83.147:6443: connect: connection refused" 
logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:27.542404 5050 reflector.go:561] object-"openshift-machine-api"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-api/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109158": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:27.542484 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-api/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109158\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:27.547130 5050 scope.go:117] "RemoveContainer" containerID="33899e24adacef82929b0fbbcd58af7cb4f7aa3935b49ec9b5b4045a51da1d14" Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:27.547333 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:27.562862 5050 reflector.go:561] object-"cert-manager"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/cert-manager/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109888": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:27.562925 5050 reflector.go:158] "Unhandled Error" err="object-\"cert-manager\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/cert-manager/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109888\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:27.583068 5050 reflector.go:561] object-"openstack"/"aodh-config-data": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Daodh-config-data&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:27.583147 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"aodh-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Daodh-config-data&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:27.602624 5050 reflector.go:561] object-"metallb-system"/"frr-k8s-certs-secret": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/secrets?fieldSelector=metadata.name%3Dfrr-k8s-certs-secret&resourceVersion=109846": dial tcp 38.102.83.147:6443: connect: 
connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:27.602940 5050 reflector.go:158] "Unhandled Error" err="object-\"metallb-system\"/\"frr-k8s-certs-secret\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/secrets?fieldSelector=metadata.name%3Dfrr-k8s-certs-secret&resourceVersion=109846\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:27.622499 5050 reflector.go:561] object-"openshift-apiserver"/"audit-1": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/configmaps?fieldSelector=metadata.name%3Daudit-1&resourceVersion=109522": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:27.622571 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"audit-1\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/configmaps?fieldSelector=metadata.name%3Daudit-1&resourceVersion=109522\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:27.643351 5050 reflector.go:561] object-"openshift-ingress-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109807": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:27.643416 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-ingress-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109807\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:27.662584 5050 reflector.go:561] object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/secrets?fieldSelector=metadata.name%3Dkube-controller-manager-operator-dockercfg-gkqpw&resourceVersion=109322": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:27.662665 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-dockercfg-gkqpw\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/secrets?fieldSelector=metadata.name%3Dkube-controller-manager-operator-dockercfg-gkqpw&resourceVersion=109322\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:27.682642 5050 reflector.go:561] object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config": failed to list *v1.ConfigMap: Get 
"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-apiserver-operator-config&resourceVersion=109559": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:27.682718 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-apiserver-operator-config&resourceVersion=109559\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:27.703471 5050 reflector.go:561] object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-55k4b": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dovnnorthd-ovnnorthd-dockercfg-55k4b&resourceVersion=109623": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:27.703578 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"ovnnorthd-ovnnorthd-dockercfg-55k4b\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dovnnorthd-ovnnorthd-dockercfg-55k4b&resourceVersion=109623\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:27.723401 5050 reflector.go:561] object-"openshift-oauth-apiserver"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-oauth-apiserver/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109685": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:27.723464 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-oauth-apiserver\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-oauth-apiserver/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109685\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:27.742981 5050 reflector.go:561] object-"cert-manager"/"cert-manager-webhook-dockercfg-kbhwz": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/cert-manager/secrets?fieldSelector=metadata.name%3Dcert-manager-webhook-dockercfg-kbhwz&resourceVersion=109743": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:27.743083 5050 reflector.go:158] "Unhandled Error" err="object-\"cert-manager\"/\"cert-manager-webhook-dockercfg-kbhwz\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/cert-manager/secrets?fieldSelector=metadata.name%3Dcert-manager-webhook-dockercfg-kbhwz&resourceVersion=109743\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:27.763114 5050 reflector.go:561] object-"openshift-console"/"service-ca": failed to list *v1.ConfigMap: Get 
"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/configmaps?fieldSelector=metadata.name%3Dservice-ca&resourceVersion=109888": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:27.763190 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-console\"/\"service-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/configmaps?fieldSelector=metadata.name%3Dservice-ca&resourceVersion=109888\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:27.783257 5050 reflector.go:561] object-"openstack"/"openstack-cell1-config-data": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dopenstack-cell1-config-data&resourceVersion=109762": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:27.783342 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"openstack-cell1-config-data\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dopenstack-cell1-config-data&resourceVersion=109762\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:27.802658 5050 reflector.go:561] object-"openshift-machine-config-operator"/"machine-config-operator-images": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/configmaps?fieldSelector=metadata.name%3Dmachine-config-operator-images&resourceVersion=109522": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:27.802777 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"machine-config-operator-images\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/configmaps?fieldSelector=metadata.name%3Dmachine-config-operator-images&resourceVersion=109522\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:27.822832 5050 reflector.go:561] object-"openshift-dns"/"node-resolver-dockercfg-kz9s7": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-dns/secrets?fieldSelector=metadata.name%3Dnode-resolver-dockercfg-kz9s7&resourceVersion=109803": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:27.822903 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-dns\"/\"node-resolver-dockercfg-kz9s7\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-dns/secrets?fieldSelector=metadata.name%3Dnode-resolver-dockercfg-kz9s7&resourceVersion=109803\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:27.842976 5050 reflector.go:561] object-"openstack"/"ovncontroller-scripts": failed to list *v1.ConfigMap: Get 
"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dovncontroller-scripts&resourceVersion=109559": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:27.843075 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"ovncontroller-scripts\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dovncontroller-scripts&resourceVersion=109559\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:27.846208 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/prometheus-metric-storage-0" podUID="7fb00b03-fe6e-4c66-bd36-adf9443871a8" containerName="prometheus" probeResult="failure" output="Get \"http://10.217.1.135:9090/-/healthy\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:27.846223 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="7fb00b03-fe6e-4c66-bd36-adf9443871a8" containerName="prometheus" probeResult="failure" output="Get \"http://10.217.1.135:9090/-/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:27.846300 5050 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:27.847216 5050 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="prometheus" containerStatusID={"Type":"cri-o","ID":"0c5246f2871c8299a1d9fbf47c96087b750b411a58afd3370be94ca88a66119e"} pod="openstack/prometheus-metric-storage-0" containerMessage="Container prometheus failed liveness probe, will be restarted" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:27.847315 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="7fb00b03-fe6e-4c66-bd36-adf9443871a8" containerName="prometheus" containerID="cri-o://0c5246f2871c8299a1d9fbf47c96087b750b411a58afd3370be94ca88a66119e" gracePeriod=600 Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:27.863157 5050 reflector.go:561] object-"openstack"/"rabbitmq-plugins-conf": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Drabbitmq-plugins-conf&resourceVersion=109602": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:27.863232 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"rabbitmq-plugins-conf\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Drabbitmq-plugins-conf&resourceVersion=109602\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:27.883199 5050 reflector.go:561] object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109762": dial tcp 
38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:27.883254 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-kube-scheduler-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109762\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:27.903213 5050 reflector.go:561] object-"openstack"/"nova-cell1-novncproxy-config-data": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dnova-cell1-novncproxy-config-data&resourceVersion=109887": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:27.903271 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"nova-cell1-novncproxy-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dnova-cell1-novncproxy-config-data&resourceVersion=109887\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:27.922702 5050 reflector.go:561] object-"openstack"/"dns": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Ddns&resourceVersion=109762": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:27.922760 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"dns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Ddns&resourceVersion=109762\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:27.943356 5050 reflector.go:561] object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver-operator/secrets?fieldSelector=metadata.name%3Dopenshift-apiserver-operator-dockercfg-xtcjv&resourceVersion=109846": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:27.943415 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-dockercfg-xtcjv\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver-operator/secrets?fieldSelector=metadata.name%3Dopenshift-apiserver-operator-dockercfg-xtcjv&resourceVersion=109846\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:27.963129 5050 reflector.go:561] object-"openshift-console"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109522": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc 
kubenswrapper[5050]: E1211 15:54:27.963180 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-console\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109522\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:27.982779 5050 reflector.go:561] object-"metallb-system"/"controller-certs-secret": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/secrets?fieldSelector=metadata.name%3Dcontroller-certs-secret&resourceVersion=109846": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:27.982827 5050 reflector.go:158] "Unhandled Error" err="object-\"metallb-system\"/\"controller-certs-secret\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/secrets?fieldSelector=metadata.name%3Dcontroller-certs-secret&resourceVersion=109846\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:28.003334 5050 reflector.go:561] object-"openstack"/"metric-storage-alertmanager-dockercfg-c2fxf": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dmetric-storage-alertmanager-dockercfg-c2fxf&resourceVersion=109846": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:28.003387 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"metric-storage-alertmanager-dockercfg-c2fxf\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dmetric-storage-alertmanager-dockercfg-c2fxf&resourceVersion=109846\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:28.023241 5050 reflector.go:561] object-"openshift-dns"/"dns-default-metrics-tls": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-dns/secrets?fieldSelector=metadata.name%3Ddns-default-metrics-tls&resourceVersion=109586": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:28.023296 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-dns\"/\"dns-default-metrics-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-dns/secrets?fieldSelector=metadata.name%3Ddns-default-metrics-tls&resourceVersion=109586\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:28.043207 5050 reflector.go:561] object-"openshift-authentication-operator"/"service-ca-bundle": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication-operator/configmaps?fieldSelector=metadata.name%3Dservice-ca-bundle&resourceVersion=109762": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:28.043286 5050 reflector.go:158] "Unhandled Error" 
err="object-\"openshift-authentication-operator\"/\"service-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication-operator/configmaps?fieldSelector=metadata.name%3Dservice-ca-bundle&resourceVersion=109762\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:28.062897 5050 reflector.go:561] object-"openstack"/"octavia-certs-secret": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Doctavia-certs-secret&resourceVersion=109586": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:28.062964 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"octavia-certs-secret\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Doctavia-certs-secret&resourceVersion=109586\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:28.071125 5050 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-9jns7 container/package-server-manager namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"http://10.217.0.39:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:28.071144 5050 patch_prober.go:28] interesting pod/authentication-operator-69f744f599-t75hp container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.217.0.34:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:28.071165 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9jns7" podUID="d1f35830-d883-4b41-ab97-7c382dec0387" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.39:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:28.071173 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-69f744f599-t75hp" podUID="dfc40369-6f7d-4ab2-89b0-60bfc12e9bfe" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.217.0.34:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:28.071133 5050 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-9jns7 container/package-server-manager namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"http://10.217.0.39:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:28.071211 5050 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9jns7" Dec 11 15:54:41 
crc kubenswrapper[5050]: I1211 15:54:28.071215 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9jns7" podUID="d1f35830-d883-4b41-ab97-7c382dec0387" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.39:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:28.071276 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9jns7" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:28.072146 5050 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="package-server-manager" containerStatusID={"Type":"cri-o","ID":"9bcb72a7122b2373360a63afde5cbe6a0bd4eb7948390af3be0726b4527fa52b"} pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9jns7" containerMessage="Container package-server-manager failed liveness probe, will be restarted" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:28.072191 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9jns7" podUID="d1f35830-d883-4b41-ab97-7c382dec0387" containerName="package-server-manager" containerID="cri-o://9bcb72a7122b2373360a63afde5cbe6a0bd4eb7948390af3be0726b4527fa52b" gracePeriod=30 Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:28.082667 5050 reflector.go:561] object-"openstack"/"ovsdbserver-sb": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dovsdbserver-sb&resourceVersion=109685": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:28.082719 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"ovsdbserver-sb\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dovsdbserver-sb&resourceVersion=109685\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:28.102431 5050 request.go:700] Waited for 5.82468858s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Drabbitmq-server-conf&resourceVersion=109762 Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:28.102833 5050 reflector.go:561] object-"openstack"/"rabbitmq-server-conf": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Drabbitmq-server-conf&resourceVersion=109762": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:28.102900 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"rabbitmq-server-conf\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Drabbitmq-server-conf&resourceVersion=109762\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:28.123248 5050 reflector.go:561] 
object-"openshift-console"/"console-oauth-config": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/secrets?fieldSelector=metadata.name%3Dconsole-oauth-config&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:28.123298 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-console\"/\"console-oauth-config\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/secrets?fieldSelector=metadata.name%3Dconsole-oauth-config&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:28.142972 5050 reflector.go:561] object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/secrets?fieldSelector=metadata.name%3Dcatalog-operator-serving-cert&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:28.143027 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-operator-lifecycle-manager\"/\"catalog-operator-serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/secrets?fieldSelector=metadata.name%3Dcatalog-operator-serving-cert&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:28.162914 5050 reflector.go:561] object-"openshift-nmstate"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-nmstate/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109762": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:28.162984 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-nmstate\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-nmstate/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109762\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:28.182821 5050 reflector.go:561] object-"openshift-marketplace"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109067": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:28.182863 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-marketplace\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109067\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:28.203028 5050 reflector.go:561] 
object-"openshift-dns"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-dns/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109888": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:28.203059 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-dns\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-dns/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109888\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:28.223168 5050 reflector.go:561] object-"openshift-network-diagnostics"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109888": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:28.223213 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109888\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:28.243107 5050 reflector.go:561] object-"openstack"/"horizon-scripts": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dhorizon-scripts&resourceVersion=109522": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:28.243146 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"horizon-scripts\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dhorizon-scripts&resourceVersion=109522\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:28.263059 5050 reflector.go:561] object-"openshift-apiserver"/"trusted-ca-bundle": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/configmaps?fieldSelector=metadata.name%3Dtrusted-ca-bundle&resourceVersion=109602": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:28.263117 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"trusted-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/configmaps?fieldSelector=metadata.name%3Dtrusted-ca-bundle&resourceVersion=109602\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:28.282866 5050 reflector.go:561] object-"openshift-oauth-apiserver"/"serving-cert": failed to list *v1.Secret: Get 
"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-oauth-apiserver/secrets?fieldSelector=metadata.name%3Dserving-cert&resourceVersion=109846": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:28.282908 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-oauth-apiserver\"/\"serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-oauth-apiserver/secrets?fieldSelector=metadata.name%3Dserving-cert&resourceVersion=109846\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:28.302727 5050 reflector.go:561] object-"openshift-apiserver-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109888": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:28.302773 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109888\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:28.323082 5050 reflector.go:561] object-"openshift-multus"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-multus/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109522": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:28.323128 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-multus/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109522\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:28.343245 5050 reflector.go:561] object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-n57x7": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dnova-operator-controller-manager-dockercfg-n57x7&resourceVersion=109887": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:28.343302 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"nova-operator-controller-manager-dockercfg-n57x7\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dnova-operator-controller-manager-dockercfg-n57x7&resourceVersion=109887\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:28.362997 5050 reflector.go:561] object-"openshift-machine-config-operator"/"mco-proxy-tls": failed to list *v1.Secret: Get 
"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmco-proxy-tls&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:28.363039 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"mco-proxy-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmco-proxy-tls&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:28.363959 5050 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-8qmpk container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.42:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:28.364003 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8qmpk" podUID="81255772-fddc-4936-8de3-da4649c32d1f" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.42:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:28.364103 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8qmpk" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:28.364230 5050 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-8qmpk container/catalog-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.42:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:28.364257 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8qmpk" podUID="81255772-fddc-4936-8de3-da4649c32d1f" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.42:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:28.364287 5050 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8qmpk" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:28.364772 5050 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="catalog-operator" containerStatusID={"Type":"cri-o","ID":"737f9cb69bc5ba682bce46695449bf052170034d47a489f229d1389e142c9bb4"} pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8qmpk" containerMessage="Container catalog-operator failed liveness probe, will be restarted" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:28.364818 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8qmpk" podUID="81255772-fddc-4936-8de3-da4649c32d1f" 
containerName="catalog-operator" containerID="cri-o://737f9cb69bc5ba682bce46695449bf052170034d47a489f229d1389e142c9bb4" gracePeriod=30 Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:28.382907 5050 reflector.go:561] object-"openshift-image-registry"/"node-ca-dockercfg-4777p": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/secrets?fieldSelector=metadata.name%3Dnode-ca-dockercfg-4777p&resourceVersion=109434": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:28.382940 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-image-registry\"/\"node-ca-dockercfg-4777p\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/secrets?fieldSelector=metadata.name%3Dnode-ca-dockercfg-4777p&resourceVersion=109434\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:28.402444 5050 reflector.go:561] object-"openstack"/"octavia-worker-config-data": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Doctavia-worker-config-data&resourceVersion=109671": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:28.402474 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"octavia-worker-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Doctavia-worker-config-data&resourceVersion=109671\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:28.422466 5050 reflector.go:561] object-"openstack"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109642": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:28.422502 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109642\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:28.442256 5050 reflector.go:561] object-"openstack"/"alertmanager-metric-storage-cluster-tls-config": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dalertmanager-metric-storage-cluster-tls-config&resourceVersion=109846": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:28.442291 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"alertmanager-metric-storage-cluster-tls-config\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dalertmanager-metric-storage-cluster-tls-config&resourceVersion=109846\": dial tcp 38.102.83.147:6443: connect: 
connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:28.463072 5050 reflector.go:561] object-"openstack"/"dns-svc": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Ddns-svc&resourceVersion=109559": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:28.463108 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"dns-svc\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Ddns-svc&resourceVersion=109559\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:28.469253 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/kube-state-metrics-0" podUID="71218193-88fc-4811-bf04-33a4f4a87898" containerName="kube-state-metrics" probeResult="failure" output="Get \"http://10.217.1.133:8081/readyz\": dial tcp 10.217.1.133:8081: connect: connection refused" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:28.483150 5050 reflector.go:561] object-"cert-manager-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/cert-manager-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109762": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:28.483198 5050 reflector.go:158] "Unhandled Error" err="object-\"cert-manager-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/cert-manager-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109762\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:28.502973 5050 reflector.go:561] object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-2bw5c": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dneutron-operator-controller-manager-dockercfg-2bw5c&resourceVersion=109623": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:28.503029 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"neutron-operator-controller-manager-dockercfg-2bw5c\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dneutron-operator-controller-manager-dockercfg-2bw5c&resourceVersion=109623\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:28.522860 5050 reflector.go:561] object-"openshift-network-node-identity"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109158": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:28.522888 5050 reflector.go:158] "Unhandled Error" 
err="object-\"openshift-network-node-identity\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109158\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:28.542871 5050 reflector.go:561] object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/secrets?fieldSelector=metadata.name%3Dopenshift-apiserver-sa-dockercfg-djjff&resourceVersion=109407": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:28.542900 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"openshift-apiserver-sa-dockercfg-djjff\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/secrets?fieldSelector=metadata.name%3Dopenshift-apiserver-sa-dockercfg-djjff&resourceVersion=109407\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:28.562738 5050 reflector.go:561] object-"openstack"/"ovnnorthd-scripts": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dovnnorthd-scripts&resourceVersion=109559": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:28.562805 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"ovnnorthd-scripts\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dovnnorthd-scripts&resourceVersion=109559\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:28.582823 5050 reflector.go:561] object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ovn-kubernetes/secrets?fieldSelector=metadata.name%3Dovn-control-plane-metrics-cert&resourceVersion=109671": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:28.582875 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"ovn-control-plane-metrics-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ovn-kubernetes/secrets?fieldSelector=metadata.name%3Dovn-control-plane-metrics-cert&resourceVersion=109671\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:28.602883 5050 reflector.go:561] object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109602": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:28.602948 5050 reflector.go:158] "Unhandled 
Error" err="object-\"openshift-kube-controller-manager-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109602\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:28.623253 5050 reflector.go:561] object-"openshift-ingress-canary"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-canary/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109559": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:28.623302 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-ingress-canary\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-canary/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109559\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:28.643511 5050 reflector.go:561] object-"openstack-operators"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109888": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:28.643576 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109888\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:28.663593 5050 reflector.go:561] object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/secrets?fieldSelector=metadata.name%3Droute-controller-manager-sa-dockercfg-h2zr2&resourceVersion=109846": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:28.663675 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-route-controller-manager\"/\"route-controller-manager-sa-dockercfg-h2zr2\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/secrets?fieldSelector=metadata.name%3Droute-controller-manager-sa-dockercfg-h2zr2&resourceVersion=109846\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:28.682591 5050 reflector.go:561] object-"openshift-service-ca-operator"/"serving-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca-operator/secrets?fieldSelector=metadata.name%3Dserving-cert&resourceVersion=109671": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc 
kubenswrapper[5050]: E1211 15:54:28.682647 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-service-ca-operator\"/\"serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca-operator/secrets?fieldSelector=metadata.name%3Dserving-cert&resourceVersion=109671\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:28.702806 5050 reflector.go:561] object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-whqpr": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dovncluster-ovndbcluster-sb-dockercfg-whqpr&resourceVersion=109137": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:28.702863 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"ovncluster-ovndbcluster-sb-dockercfg-whqpr\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dovncluster-ovndbcluster-sb-dockercfg-whqpr&resourceVersion=109137\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:28.722455 5050 reflector.go:561] object-"openshift-cluster-machine-approver"/"kube-rbac-proxy": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-machine-approver/configmaps?fieldSelector=metadata.name%3Dkube-rbac-proxy&resourceVersion=109851": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:28.722537 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-machine-approver\"/\"kube-rbac-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-machine-approver/configmaps?fieldSelector=metadata.name%3Dkube-rbac-proxy&resourceVersion=109851\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:28.742462 5050 reflector.go:561] object-"openstack"/"keystone-config-data": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dkeystone-config-data&resourceVersion=109887": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:28.742529 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"keystone-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dkeystone-config-data&resourceVersion=109887\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:28.762483 5050 reflector.go:561] object-"openstack"/"cinder-backup-config-data": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcinder-backup-config-data&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:28.762530 5050 reflector.go:158] "Unhandled Error" 
err="object-\"openstack\"/\"cinder-backup-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcinder-backup-config-data&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:28.782826 5050 reflector.go:561] object-"openshift-multus"/"multus-admission-controller-secret": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-multus/secrets?fieldSelector=metadata.name%3Dmultus-admission-controller-secret&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:28.782897 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"multus-admission-controller-secret\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-multus/secrets?fieldSelector=metadata.name%3Dmultus-admission-controller-secret&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:28.803562 5050 reflector.go:561] object-"openstack"/"alertmanager-metric-storage-generated": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dalertmanager-metric-storage-generated&resourceVersion=109846": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:28.803639 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"alertmanager-metric-storage-generated\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dalertmanager-metric-storage-generated&resourceVersion=109846\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:28.823534 5050 reflector.go:561] object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/secrets?fieldSelector=metadata.name%3Dopenshift-controller-manager-sa-dockercfg-msq4c&resourceVersion=109671": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:28.823631 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"openshift-controller-manager-sa-dockercfg-msq4c\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/secrets?fieldSelector=metadata.name%3Dopenshift-controller-manager-sa-dockercfg-msq4c&resourceVersion=109671\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:28.843341 5050 reflector.go:561] object-"openshift-authentication"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109762": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc 
kubenswrapper[5050]: E1211 15:54:28.843409 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109762\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:28.862570 5050 reflector.go:561] object-"openstack"/"octavia-worker-scripts": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Doctavia-worker-scripts&resourceVersion=109671": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:28.862618 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"octavia-worker-scripts\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Doctavia-worker-scripts&resourceVersion=109671\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:28.864173 5050 patch_prober.go:28] interesting pod/dns-default-25p7l container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.217.0.27:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:28.864219 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-25p7l" podUID="66dfe3f5-9e7a-4e1c-b03a-b6f01dab6ef4" containerName="dns" probeResult="failure" output="Get \"http://10.217.0.27:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:28.883147 5050 reflector.go:561] object-"openstack"/"nova-cell1-conductor-config-data": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dnova-cell1-conductor-config-data&resourceVersion=109137": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:28.883204 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"nova-cell1-conductor-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dnova-cell1-conductor-config-data&resourceVersion=109137\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:28.902977 5050 reflector.go:561] object-"openshift-console-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109888": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:28.903076 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-console-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109888\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:28.922422 5050 reflector.go:561] object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmachine-config-server-dockercfg-qx5rd&resourceVersion=109803": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:28.922507 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"machine-config-server-dockercfg-qx5rd\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmachine-config-server-dockercfg-qx5rd&resourceVersion=109803\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:28.942751 5050 reflector.go:561] object-"openshift-oauth-apiserver"/"etcd-serving-ca": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-oauth-apiserver/configmaps?fieldSelector=metadata.name%3Detcd-serving-ca&resourceVersion=109762": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:28.942793 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-oauth-apiserver\"/\"etcd-serving-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-oauth-apiserver/configmaps?fieldSelector=metadata.name%3Detcd-serving-ca&resourceVersion=109762\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:28.962969 5050 reflector.go:561] object-"openshift-image-registry"/"installation-pull-secrets": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/secrets?fieldSelector=metadata.name%3Dinstallation-pull-secrets&resourceVersion=109846": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:28.963018 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-image-registry\"/\"installation-pull-secrets\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/secrets?fieldSelector=metadata.name%3Dinstallation-pull-secrets&resourceVersion=109846\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:28.982494 5050 reflector.go:561] object-"openstack"/"cinder-scheduler-config-data": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcinder-scheduler-config-data&resourceVersion=109671": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:28.982624 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"cinder-scheduler-config-data\": Failed to watch *v1.Secret: 
failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcinder-scheduler-config-data&resourceVersion=109671\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:29.002636 5050 reflector.go:561] object-"openshift-ingress"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109685": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:29.002684 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-ingress\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109685\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:29.022598 5050 reflector.go:561] object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-g8gr8": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operators/secrets?fieldSelector=metadata.name%3Dobo-prometheus-operator-admission-webhook-dockercfg-g8gr8&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:29.022639 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-operators\"/\"obo-prometheus-operator-admission-webhook-dockercfg-g8gr8\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operators/secrets?fieldSelector=metadata.name%3Dobo-prometheus-operator-admission-webhook-dockercfg-g8gr8&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:29.043002 5050 reflector.go:561] object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/secrets?fieldSelector=metadata.name%3Dv4-0-config-system-ocp-branding-template&resourceVersion=109887": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:29.043124 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"v4-0-config-system-ocp-branding-template\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/secrets?fieldSelector=metadata.name%3Dv4-0-config-system-ocp-branding-template&resourceVersion=109887\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:29.062886 5050 reflector.go:561] object-"openstack"/"openstack-config-secret": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dopenstack-config-secret&resourceVersion=109671": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:29.062992 5050 reflector.go:158] "Unhandled Error" 
err="object-\"openstack\"/\"openstack-config-secret\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dopenstack-config-secret&resourceVersion=109671\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:29.082820 5050 reflector.go:561] object-"openstack"/"cinder-cinder-dockercfg-lvj2r": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcinder-cinder-dockercfg-lvj2r&resourceVersion=109803": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:29.082933 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"cinder-cinder-dockercfg-lvj2r\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcinder-cinder-dockercfg-lvj2r&resourceVersion=109803\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:29.102591 5050 request.go:700] Waited for 4.339527124s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/secrets?fieldSelector=metadata.name%3Dcontroller-dockercfg-5zwsv&resourceVersion=109087 Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:29.103288 5050 reflector.go:561] object-"metallb-system"/"controller-dockercfg-5zwsv": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/secrets?fieldSelector=metadata.name%3Dcontroller-dockercfg-5zwsv&resourceVersion=109087": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:29.103399 5050 reflector.go:158] "Unhandled Error" err="object-\"metallb-system\"/\"controller-dockercfg-5zwsv\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/secrets?fieldSelector=metadata.name%3Dcontroller-dockercfg-5zwsv&resourceVersion=109087\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:29.112232 5050 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-9jns7 container/package-server-manager namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"http://10.217.0.39:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:29.112309 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9jns7" podUID="d1f35830-d883-4b41-ab97-7c382dec0387" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.39:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:29.125086 5050 reflector.go:561] object-"openstack"/"horizon": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dhorizon&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 
15:54:41 crc kubenswrapper[5050]: E1211 15:54:29.125217 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"horizon\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dhorizon&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:29.144616 5050 reflector.go:561] object-"openstack-operators"/"openstack-operator-index-dockercfg-f86tg": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dopenstack-operator-index-dockercfg-f86tg&resourceVersion=109137": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:29.144769 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"openstack-operator-index-dockercfg-f86tg\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dopenstack-operator-index-dockercfg-f86tg&resourceVersion=109137\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:29.162839 5050 reflector.go:561] object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-multus/secrets?fieldSelector=metadata.name%3Dmetrics-daemon-sa-dockercfg-d427c&resourceVersion=109743": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:29.162926 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"metrics-daemon-sa-dockercfg-d427c\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-multus/secrets?fieldSelector=metadata.name%3Dmetrics-daemon-sa-dockercfg-d427c&resourceVersion=109743\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:29.182640 5050 reflector.go:561] object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109158": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:29.182713 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-kube-apiserver-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109158\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:29.203600 5050 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?resourceVersion=109167": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:29.203745 5050 reflector.go:158] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?resourceVersion=109167\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:29.223110 5050 reflector.go:561] object-"openshift-apiserver"/"image-import-ca": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/configmaps?fieldSelector=metadata.name%3Dimage-import-ca&resourceVersion=109685": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:29.223231 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"image-import-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/configmaps?fieldSelector=metadata.name%3Dimage-import-ca&resourceVersion=109685\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:29.242612 5050 reflector.go:561] object-"openstack"/"ovncontroller-ovncontroller-dockercfg-fjq8l": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dovncontroller-ovncontroller-dockercfg-fjq8l&resourceVersion=109586": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:29.242678 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"ovncontroller-ovncontroller-dockercfg-fjq8l\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dovncontroller-ovncontroller-dockercfg-fjq8l&resourceVersion=109586\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:29.262448 5050 reflector.go:561] object-"openshift-controller-manager"/"serving-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/secrets?fieldSelector=metadata.name%3Dserving-cert&resourceVersion=109137": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:29.262518 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/secrets?fieldSelector=metadata.name%3Dserving-cert&resourceVersion=109137\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:29.282556 5050 reflector.go:561] object-"openstack"/"rabbitmq-cell1-server-conf": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Drabbitmq-cell1-server-conf&resourceVersion=109762": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:29.282615 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"rabbitmq-cell1-server-conf\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Drabbitmq-cell1-server-conf&resourceVersion=109762\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:29.303367 5050 reflector.go:561] object-"openstack"/"combined-ca-bundle": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcombined-ca-bundle&resourceVersion=109803": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:29.303436 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"combined-ca-bundle\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcombined-ca-bundle&resourceVersion=109803\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:29.312220 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5c747c4-l2hpx" podUID="f5bec9f7-072c-4c21-80ea-af9f59313eef" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.88:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:29.312258 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5c747c4-l2hpx" podUID="f5bec9f7-072c-4c21-80ea-af9f59313eef" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.88:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:29.323341 5050 reflector.go:561] object-"openshift-console"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109559": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:29.323407 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-console\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109559\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:29.343126 5050 reflector.go:561] object-"openshift-controller-manager"/"openshift-global-ca": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/configmaps?fieldSelector=metadata.name%3Dopenshift-global-ca&resourceVersion=109685": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:29.343199 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"openshift-global-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/configmaps?fieldSelector=metadata.name%3Dopenshift-global-ca&resourceVersion=109685\": dial tcp 
38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:29.363168 5050 reflector.go:561] object-"openstack"/"openstack-cell1-dockercfg-mvxd9": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dopenstack-cell1-dockercfg-mvxd9&resourceVersion=109671": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:29.363247 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"openstack-cell1-dockercfg-mvxd9\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dopenstack-cell1-dockercfg-mvxd9&resourceVersion=109671\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:29.364394 5050 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-8qmpk container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.42:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:29.364440 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8qmpk" podUID="81255772-fddc-4936-8de3-da4649c32d1f" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.42:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:29.383453 5050 reflector.go:561] object-"openshift-ingress-canary"/"default-dockercfg-2llfx": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-canary/secrets?fieldSelector=metadata.name%3Ddefault-dockercfg-2llfx&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:29.383503 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-ingress-canary\"/\"default-dockercfg-2llfx\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-canary/secrets?fieldSelector=metadata.name%3Ddefault-dockercfg-2llfx&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:29.403286 5050 reflector.go:561] object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-api/secrets?fieldSelector=metadata.name%3Dcontrol-plane-machine-set-operator-dockercfg-k9rxt&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:29.403324 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-dockercfg-k9rxt\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-api/secrets?fieldSelector=metadata.name%3Dcontrol-plane-machine-set-operator-dockercfg-k9rxt&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:29.422804 5050 reflector.go:561] object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dopenstack-baremetal-operator-webhook-server-cert&resourceVersion=109803": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:29.422866 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"openstack-baremetal-operator-webhook-server-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dopenstack-baremetal-operator-webhook-server-cert&resourceVersion=109803\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:29.443106 5050 reflector.go:561] object-"openshift-route-controller-manager"/"config": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/configmaps?fieldSelector=metadata.name%3Dconfig&resourceVersion=109602": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:29.443188 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-route-controller-manager\"/\"config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/configmaps?fieldSelector=metadata.name%3Dconfig&resourceVersion=109602\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:29.463422 5050 reflector.go:561] object-"openshift-network-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109888": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:29.463476 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109888\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:29.488664 5050 reflector.go:561] object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-storage-version-migrator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109888": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:29.488726 5050 reflector.go:158] "Unhandled Error" 
err="object-\"openshift-kube-storage-version-migrator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-storage-version-migrator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109888\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:29.503899 5050 reflector.go:561] object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operators/secrets?fieldSelector=metadata.name%3Dobo-prometheus-operator-admission-webhook-service-cert&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:29.503977 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-operators\"/\"obo-prometheus-operator-admission-webhook-service-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operators/secrets?fieldSelector=metadata.name%3Dobo-prometheus-operator-admission-webhook-service-cert&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:29.523465 5050 reflector.go:561] object-"openshift-etcd-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-etcd-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109158": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:29.523530 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-etcd-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-etcd-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109158\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:29.542665 5050 reflector.go:561] object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-dns-operator/secrets?fieldSelector=metadata.name%3Ddns-operator-dockercfg-9mqw5&resourceVersion=109382": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:29.542732 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-dns-operator\"/\"dns-operator-dockercfg-9mqw5\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-dns-operator/secrets?fieldSelector=metadata.name%3Ddns-operator-dockercfg-9mqw5&resourceVersion=109382\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:29.563338 5050 reflector.go:561] object-"openstack"/"keystone-keystone-dockercfg-jrbb7": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dkeystone-keystone-dockercfg-jrbb7&resourceVersion=109887": dial tcp 38.102.83.147:6443: connect: connection 
refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:29.563406 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"keystone-keystone-dockercfg-jrbb7\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dkeystone-keystone-dockercfg-jrbb7&resourceVersion=109887\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:29.584620 5050 reflector.go:561] object-"openshift-console-operator"/"console-operator-config": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/configmaps?fieldSelector=metadata.name%3Dconsole-operator-config&resourceVersion=109807": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:29.584685 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-console-operator\"/\"console-operator-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/configmaps?fieldSelector=metadata.name%3Dconsole-operator-config&resourceVersion=109807\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:29.602335 5050 reflector.go:561] object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/configmaps?fieldSelector=metadata.name%3Dkube-controller-manager-operator-config&resourceVersion=109158": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:29.602404 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/configmaps?fieldSelector=metadata.name%3Dkube-controller-manager-operator-config&resourceVersion=109158\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:29.623252 5050 reflector.go:561] object-"openshift-operators"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operators/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109522": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:29.623315 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-operators\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operators/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109522\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:29.643392 5050 reflector.go:561] object-"openshift-authentication"/"v4-0-config-system-cliconfig": failed to list *v1.ConfigMap: Get 
"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/configmaps?fieldSelector=metadata.name%3Dv4-0-config-system-cliconfig&resourceVersion=109158": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:29.643459 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"v4-0-config-system-cliconfig\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/configmaps?fieldSelector=metadata.name%3Dv4-0-config-system-cliconfig&resourceVersion=109158\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:29.663172 5050 reflector.go:561] object-"openstack"/"rabbitmq-cell1-plugins-conf": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Drabbitmq-cell1-plugins-conf&resourceVersion=109522": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:29.663244 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"rabbitmq-cell1-plugins-conf\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Drabbitmq-cell1-plugins-conf&resourceVersion=109522\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:29.682566 5050 reflector.go:561] object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-7bf58": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dswift-operator-controller-manager-dockercfg-7bf58&resourceVersion=109671": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:29.682670 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"swift-operator-controller-manager-dockercfg-7bf58\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dswift-operator-controller-manager-dockercfg-7bf58&resourceVersion=109671\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:29.702788 5050 reflector.go:561] object-"openshift-cluster-version"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-version/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109559": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:29.702853 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-version\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-version/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109559\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:29.722978 5050 reflector.go:561] 
object-"openstack"/"barbican-config-data": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dbarbican-config-data&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:29.723076 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"barbican-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dbarbican-config-data&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:29.742722 5050 reflector.go:561] object-"openshift-console-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109762": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:29.742785 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-console-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109762\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:29.762620 5050 reflector.go:561] object-"openstack"/"ceilometer-scripts": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dceilometer-scripts&resourceVersion=109137": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:29.762687 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"ceilometer-scripts\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dceilometer-scripts&resourceVersion=109137\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:29.783208 5050 reflector.go:561] object-"openshift-multus"/"default-dockercfg-2q5b6": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-multus/secrets?fieldSelector=metadata.name%3Ddefault-dockercfg-2q5b6&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:29.783281 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"default-dockercfg-2q5b6\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-multus/secrets?fieldSelector=metadata.name%3Ddefault-dockercfg-2q5b6&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:29.802866 5050 reflector.go:561] object-"openshift-image-registry"/"image-registry-certificates": failed to list *v1.ConfigMap: Get 
"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/configmaps?fieldSelector=metadata.name%3Dimage-registry-certificates&resourceVersion=109522": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:29.802976 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-image-registry\"/\"image-registry-certificates\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/configmaps?fieldSelector=metadata.name%3Dimage-registry-certificates&resourceVersion=109522\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:29.822463 5050 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&resourceVersion=109887": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:29.822600 5050 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&resourceVersion=109887\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:29.843215 5050 reflector.go:561] object-"openstack"/"neutron-neutron-dockercfg-ctkbt": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dneutron-neutron-dockercfg-ctkbt&resourceVersion=109846": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:29.843276 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"neutron-neutron-dockercfg-ctkbt\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dneutron-neutron-dockercfg-ctkbt&resourceVersion=109846\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:29.862761 5050 reflector.go:561] object-"openshift-network-node-identity"/"ovnkube-identity-cm": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/configmaps?fieldSelector=metadata.name%3Dovnkube-identity-cm&resourceVersion=109602": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:29.862824 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-node-identity\"/\"ovnkube-identity-cm\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/configmaps?fieldSelector=metadata.name%3Dovnkube-identity-cm&resourceVersion=109602\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:29.882565 5050 reflector.go:561] object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert": failed to list *v1.Secret: Get 
"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver-operator/secrets?fieldSelector=metadata.name%3Dopenshift-apiserver-operator-serving-cert&resourceVersion=109846": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:29.882617 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver-operator/secrets?fieldSelector=metadata.name%3Dopenshift-apiserver-operator-serving-cert&resourceVersion=109846\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:29.903339 5050 reflector.go:561] object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-storage-version-migrator/secrets?fieldSelector=metadata.name%3Dkube-storage-version-migrator-sa-dockercfg-5xfcg&resourceVersion=109237": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:29.903412 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-kube-storage-version-migrator\"/\"kube-storage-version-migrator-sa-dockercfg-5xfcg\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-storage-version-migrator/secrets?fieldSelector=metadata.name%3Dkube-storage-version-migrator-sa-dockercfg-5xfcg&resourceVersion=109237\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:29.921195 5050 patch_prober.go:28] interesting pod/etcd-crc container/etcd namespace/openshift-etcd: Readiness probe status=failure output="Get \"https://192.168.126.11:9980/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:29.921240 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-etcd/etcd-crc" podUID="2139d3e2895fc6797b9c76a1b4c9886d" containerName="etcd" probeResult="failure" output="Get \"https://192.168.126.11:9980/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:29.923037 5050 reflector.go:561] object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ovn-kubernetes/secrets?fieldSelector=metadata.name%3Dovn-kubernetes-node-dockercfg-pwtwl&resourceVersion=109671": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:29.923088 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-node-dockercfg-pwtwl\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ovn-kubernetes/secrets?fieldSelector=metadata.name%3Dovn-kubernetes-node-dockercfg-pwtwl&resourceVersion=109671\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:29.942469 5050 reflector.go:561] object-"openstack"/"openstack-scripts": 
failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dopenstack-scripts&resourceVersion=109762": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:29.942510 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"openstack-scripts\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dopenstack-scripts&resourceVersion=109762\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:29.963331 5050 reflector.go:561] object-"openshift-service-ca-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109888": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:29.963389 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-service-ca-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109888\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:29.982649 5050 reflector.go:561] object-"openshift-ovn-kubernetes"/"env-overrides": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ovn-kubernetes/configmaps?fieldSelector=metadata.name%3Denv-overrides&resourceVersion=109762": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:29.982717 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"env-overrides\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ovn-kubernetes/configmaps?fieldSelector=metadata.name%3Denv-overrides&resourceVersion=109762\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:30.003514 5050 reflector.go:561] object-"openstack"/"rabbitmq-default-user": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Drabbitmq-default-user&resourceVersion=109743": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:30.003568 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"rabbitmq-default-user\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Drabbitmq-default-user&resourceVersion=109743\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:30.022546 5050 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openstack/ovndbcluster-nb-etc-ovn-ovsdbserver-nb-1: failed to fetch PVC from API server: Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/persistentvolumeclaims/ovndbcluster-nb-etc-ovn-ovsdbserver-nb-1\": dial tcp 38.102.83.147:6443: connect: connection refused" pod="openstack/ovsdbserver-nb-1" volumeName="ovndbcluster-nb-etc-ovn" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:30.042410 5050 reflector.go:561] object-"openshift-ingress-operator"/"trusted-ca": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/configmaps?fieldSelector=metadata.name%3Dtrusted-ca&resourceVersion=109762": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:30.042481 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-ingress-operator\"/\"trusted-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/configmaps?fieldSelector=metadata.name%3Dtrusted-ca&resourceVersion=109762\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:30.062827 5050 reflector.go:561] object-"openstack"/"horizon-config-data": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dhorizon-config-data&resourceVersion=109522": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:30.062888 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"horizon-config-data\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dhorizon-config-data&resourceVersion=109522\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:30.082574 5050 status_manager.go:851] "Failed to get status for pod" podUID="4b4e8aa9-a0f6-4bd2-92d2-c524aedf6389" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-qdrgd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/barbican-operator-controller-manager-7d9dfd778-qdrgd\": dial tcp 38.102.83.147:6443: connect: connection refused" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:30.102945 5050 reflector.go:561] object-"openshift-authentication"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109888": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:30.103073 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109888\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:30.122814 5050 request.go:700] Waited for 4.385835564s due to client-side throttling, not priority and fairness, request: 
GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dproxy-tls&resourceVersion=109671 Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:30.123226 5050 reflector.go:561] object-"openshift-machine-config-operator"/"proxy-tls": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dproxy-tls&resourceVersion=109671": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:30.123272 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"proxy-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dproxy-tls&resourceVersion=109671\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:30.143028 5050 reflector.go:561] object-"openshift-oauth-apiserver"/"trusted-ca-bundle": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-oauth-apiserver/configmaps?fieldSelector=metadata.name%3Dtrusted-ca-bundle&resourceVersion=109559": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:30.143115 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-oauth-apiserver\"/\"trusted-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-oauth-apiserver/configmaps?fieldSelector=metadata.name%3Dtrusted-ca-bundle&resourceVersion=109559\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:30.163269 5050 reflector.go:561] object-"openstack"/"nova-api-config-data": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dnova-api-config-data&resourceVersion=109137": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:30.163337 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"nova-api-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dnova-api-config-data&resourceVersion=109137\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:30.182877 5050 reflector.go:561] object-"openshift-ingress"/"router-dockercfg-zdk86": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress/secrets?fieldSelector=metadata.name%3Drouter-dockercfg-zdk86&resourceVersion=109671": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:30.182911 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-ingress\"/\"router-dockercfg-zdk86\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress/secrets?fieldSelector=metadata.name%3Drouter-dockercfg-zdk86&resourceVersion=109671\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 
11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:30.203099 5050 reflector.go:561] object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler-operator/secrets?fieldSelector=metadata.name%3Dkube-scheduler-operator-serving-cert&resourceVersion=109887": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:30.203153 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-kube-scheduler-operator\"/\"kube-scheduler-operator-serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler-operator/secrets?fieldSelector=metadata.name%3Dkube-scheduler-operator-serving-cert&resourceVersion=109887\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:30.221808 5050 patch_prober.go:28] interesting pod/apiserver-7bbb656c7d-cnp7n container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/readyz\": context deadline exceeded" start-of-body= Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:30.221864 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cnp7n" podUID="1d89350d-55e9-4ef6-8182-287894b6c14b" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.217.0.10:8443/readyz\": context deadline exceeded" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:30.222053 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cnp7n" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:30.222464 5050 reflector.go:561] object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-qd9ll": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dironic-operator-controller-manager-dockercfg-qd9ll&resourceVersion=109846": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:30.222519 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"ironic-operator-controller-manager-dockercfg-qd9ll\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dironic-operator-controller-manager-dockercfg-qd9ll&resourceVersion=109846\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:30.243340 5050 reflector.go:561] object-"openshift-operator-lifecycle-manager"/"pprof-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/secrets?fieldSelector=metadata.name%3Dpprof-cert&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:30.243417 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-operator-lifecycle-manager\"/\"pprof-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/secrets?fieldSelector=metadata.name%3Dpprof-cert&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:30.262581 5050 reflector.go:561] object-"openshift-nmstate"/"openshift-nmstate-webhook": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-nmstate/secrets?fieldSelector=metadata.name%3Dopenshift-nmstate-webhook&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:30.262640 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-nmstate\"/\"openshift-nmstate-webhook\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-nmstate/secrets?fieldSelector=metadata.name%3Dopenshift-nmstate-webhook&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:30.282340 5050 reflector.go:561] object-"openshift-machine-config-operator"/"kube-rbac-proxy": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/configmaps?fieldSelector=metadata.name%3Dkube-rbac-proxy&resourceVersion=109851": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:30.282415 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"kube-rbac-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/configmaps?fieldSelector=metadata.name%3Dkube-rbac-proxy&resourceVersion=109851\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:30.302522 5050 reflector.go:561] object-"metallb-system"/"speaker-certs-secret": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/secrets?fieldSelector=metadata.name%3Dspeaker-certs-secret&resourceVersion=109846": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:30.302601 5050 reflector.go:158] "Unhandled Error" err="object-\"metallb-system\"/\"speaker-certs-secret\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/secrets?fieldSelector=metadata.name%3Dspeaker-certs-secret&resourceVersion=109846\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:30.323334 5050 reflector.go:561] object-"openshift-nmstate"/"default-dockercfg-vtnxn": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-nmstate/secrets?fieldSelector=metadata.name%3Ddefault-dockercfg-vtnxn&resourceVersion=109137": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:30.323425 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-nmstate\"/\"default-dockercfg-vtnxn\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-nmstate/secrets?fieldSelector=metadata.name%3Ddefault-dockercfg-vtnxn&resourceVersion=109137\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:30.343186 5050 reflector.go:561] object-"openshift-dns"/"dns-dockercfg-jwfmh": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-dns/secrets?fieldSelector=metadata.name%3Ddns-dockercfg-jwfmh&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:30.343308 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-dns\"/\"dns-dockercfg-jwfmh\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-dns/secrets?fieldSelector=metadata.name%3Ddns-dockercfg-jwfmh&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:30.362816 5050 reflector.go:561] object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/secrets?fieldSelector=metadata.name%3Dpackageserver-service-cert&resourceVersion=109671": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:30.362889 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-operator-lifecycle-manager\"/\"packageserver-service-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/secrets?fieldSelector=metadata.name%3Dpackageserver-service-cert&resourceVersion=109671\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:30.383222 5050 reflector.go:561] object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca-operator/secrets?fieldSelector=metadata.name%3Dservice-ca-operator-dockercfg-rg9jl&resourceVersion=109623": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:30.383288 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-service-ca-operator\"/\"service-ca-operator-dockercfg-rg9jl\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca-operator/secrets?fieldSelector=metadata.name%3Dservice-ca-operator-dockercfg-rg9jl&resourceVersion=109623\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:30.403136 5050 reflector.go:561] object-"openshift-network-node-identity"/"network-node-identity-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/secrets?fieldSelector=metadata.name%3Dnetwork-node-identity-cert&resourceVersion=109887": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:30.403215 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-node-identity\"/\"network-node-identity-cert\": 
Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/secrets?fieldSelector=metadata.name%3Dnetwork-node-identity-cert&resourceVersion=109887\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:30.423159 5050 reflector.go:561] object-"cert-manager"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/cert-manager/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109888": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:30.423232 5050 reflector.go:158] "Unhandled Error" err="object-\"cert-manager\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/cert-manager/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109888\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:30.442668 5050 reflector.go:561] object-"openstack"/"ovndbcluster-sb-scripts": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dovndbcluster-sb-scripts&resourceVersion=109158": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:30.442751 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"ovndbcluster-sb-scripts\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dovndbcluster-sb-scripts&resourceVersion=109158\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:30.462904 5050 reflector.go:561] object-"openstack"/"barbican-api-config-data": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dbarbican-api-config-data&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:30.462969 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"barbican-api-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dbarbican-api-config-data&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:30.483180 5050 reflector.go:561] object-"openstack"/"openstack-aee-default-env": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dopenstack-aee-default-env&resourceVersion=109685": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:30.483252 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"openstack-aee-default-env\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dopenstack-aee-default-env&resourceVersion=109685\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:30.521715 5050 reflector.go:561] object-"openshift-network-console"/"networking-console-plugin": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-console/configmaps?fieldSelector=metadata.name%3Dnetworking-console-plugin&resourceVersion=109522": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:30.521804 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-console\"/\"networking-console-plugin\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-console/configmaps?fieldSelector=metadata.name%3Dnetworking-console-plugin&resourceVersion=109522\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:30.522222 5050 reflector.go:561] object-"openshift-operators"/"observability-operator-tls": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operators/secrets?fieldSelector=metadata.name%3Dobservability-operator-tls&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:30.522260 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-operators\"/\"observability-operator-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operators/secrets?fieldSelector=metadata.name%3Dobservability-operator-tls&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:30.543501 5050 reflector.go:561] object-"openshift-service-ca"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109762": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:30.543680 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-service-ca\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109762\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:30.563023 5050 reflector.go:561] object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/secrets?fieldSelector=metadata.name%3Dopenshift-config-operator-dockercfg-7pc5z&resourceVersion=109307": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:30.563103 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-config-operator\"/\"openshift-config-operator-dockercfg-7pc5z\": Failed to watch *v1.Secret: failed to list *v1.Secret: 
Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/secrets?fieldSelector=metadata.name%3Dopenshift-config-operator-dockercfg-7pc5z&resourceVersion=109307\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:30.563510 5050 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:6443/readyz\": dial tcp 192.168.126.11:6443: connect: connection refused" start-of-body= Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:30.563558 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/readyz\": dial tcp 192.168.126.11:6443: connect: connection refused" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:30.583072 5050 reflector.go:561] object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-djswv": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Doctavia-operator-controller-manager-dockercfg-djswv&resourceVersion=109803": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:30.583148 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"octavia-operator-controller-manager-dockercfg-djswv\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Doctavia-operator-controller-manager-dockercfg-djswv&resourceVersion=109803\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:30.602663 5050 reflector.go:561] object-"openshift-apiserver"/"etcd-client": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/secrets?fieldSelector=metadata.name%3Detcd-client&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:30.602747 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"etcd-client\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/secrets?fieldSelector=metadata.name%3Detcd-client&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:30.622647 5050 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&resourceVersion=109780": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:30.622699 5050 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&resourceVersion=109780\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc 
kubenswrapper[5050]: W1211 15:54:30.643073 5050 reflector.go:561] object-"openstack"/"cinder-scripts": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcinder-scripts&resourceVersion=109803": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:30.643140 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"cinder-scripts\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcinder-scripts&resourceVersion=109803\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:30.662648 5050 reflector.go:561] object-"openshift-route-controller-manager"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109642": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:30.662716 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-route-controller-manager\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109642\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:30.683157 5050 reflector.go:561] object-"openshift-operators"/"perses-operator-dockercfg-xflrf": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operators/secrets?fieldSelector=metadata.name%3Dperses-operator-dockercfg-xflrf&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:30.683215 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-operators\"/\"perses-operator-dockercfg-xflrf\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operators/secrets?fieldSelector=metadata.name%3Dperses-operator-dockercfg-xflrf&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:30.684191 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-operator-799b66f579-tqs2v" podUID="53a1a6d1-6999-4fa1-a0ce-a20b83f1f347" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.73:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:30.684477 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-controller-operator-799b66f579-tqs2v" podUID="53a1a6d1-6999-4fa1-a0ce-a20b83f1f347" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.73:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:30.684507 5050 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" 
status="unhealthy" pod="openstack-operators/openstack-operator-controller-operator-799b66f579-tqs2v" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:30.685216 5050 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="operator" containerStatusID={"Type":"cri-o","ID":"5c7d365dce07000662fe39681396cb0bff613093313821486caa669cf3a6ba43"} pod="openstack-operators/openstack-operator-controller-operator-799b66f579-tqs2v" containerMessage="Container operator failed liveness probe, will be restarted" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:30.685243 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-controller-operator-799b66f579-tqs2v" podUID="53a1a6d1-6999-4fa1-a0ce-a20b83f1f347" containerName="operator" containerID="cri-o://5c7d365dce07000662fe39681396cb0bff613093313821486caa669cf3a6ba43" gracePeriod=10 Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:30.702696 5050 reflector.go:561] object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-zlz4m": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dbarbican-operator-controller-manager-dockercfg-zlz4m&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:30.702814 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"barbican-operator-controller-manager-dockercfg-zlz4m\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dbarbican-operator-controller-manager-dockercfg-zlz4m&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:30.722785 5050 reflector.go:561] object-"openshift-cluster-version"/"default-dockercfg-gxtc4": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-version/secrets?fieldSelector=metadata.name%3Ddefault-dockercfg-gxtc4&resourceVersion=109743": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:30.722844 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-version\"/\"default-dockercfg-gxtc4\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-version/secrets?fieldSelector=metadata.name%3Ddefault-dockercfg-gxtc4&resourceVersion=109743\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:30.744653 5050 reflector.go:561] object-"openshift-nmstate"/"plugin-serving-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-nmstate/secrets?fieldSelector=metadata.name%3Dplugin-serving-cert&resourceVersion=109743": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:30.744709 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-nmstate\"/\"plugin-serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-nmstate/secrets?fieldSelector=metadata.name%3Dplugin-serving-cert&resourceVersion=109743\": dial tcp 
38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:30.763162 5050 reflector.go:561] object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109602": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:30.763234 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-operator-lifecycle-manager\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109602\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:30.782514 5050 reflector.go:561] object-"openstack"/"heat-config-data": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dheat-config-data&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:30.782573 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"heat-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dheat-config-data&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:30.803139 5050 reflector.go:561] object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-7zqpj": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dmariadb-operator-controller-manager-dockercfg-7zqpj&resourceVersion=109846": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:30.803218 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"mariadb-operator-controller-manager-dockercfg-7zqpj\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dmariadb-operator-controller-manager-dockercfg-7zqpj&resourceVersion=109846\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:30.823213 5050 reflector.go:561] object-"openstack"/"glance-default-internal-config-data": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dglance-default-internal-config-data&resourceVersion=109586": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:30.823300 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"glance-default-internal-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dglance-default-internal-config-data&resourceVersion=109586\": dial tcp 
38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:30.842655 5050 reflector.go:561] object-"openstack-operators"/"infra-operator-webhook-server-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dinfra-operator-webhook-server-cert&resourceVersion=109803": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:30.842719 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"infra-operator-webhook-server-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dinfra-operator-webhook-server-cert&resourceVersion=109803\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:30.863091 5050 reflector.go:561] object-"openstack"/"barbican-barbican-dockercfg-fxl2b": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dbarbican-barbican-dockercfg-fxl2b&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:30.863161 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"barbican-barbican-dockercfg-fxl2b\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dbarbican-barbican-dockercfg-fxl2b&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:30.882989 5050 reflector.go:561] object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-machine-approver/secrets?fieldSelector=metadata.name%3Dmachine-approver-sa-dockercfg-nl2j4&resourceVersion=109586": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:30.883160 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-machine-approver\"/\"machine-approver-sa-dockercfg-nl2j4\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-machine-approver/secrets?fieldSelector=metadata.name%3Dmachine-approver-sa-dockercfg-nl2j4&resourceVersion=109586\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:30.888279 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="7fb00b03-fe6e-4c66-bd36-adf9443871a8" containerName="prometheus" probeResult="failure" output="Get \"http://10.217.1.135:9090/-/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:30.891180 5050 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/events\": dial tcp 38.102.83.147:6443: connect: connection refused" event=< Dec 11 15:54:41 crc kubenswrapper[5050]: 
&Event{ObjectMeta:{packageserver-d55dfcdfc-r54sd.18803427859863d6 openshift-operator-lifecycle-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-operator-lifecycle-manager,Name:packageserver-d55dfcdfc-r54sd,UID:dbd5b107-5d08-43af-881c-11540f395267,APIVersion:v1,ResourceVersion:27090,FieldPath:spec.containers{packageserver},},Reason:ProbeError,Message:Liveness probe error: Get "https://10.217.0.23:5443/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Dec 11 15:54:41 crc kubenswrapper[5050]: body: Dec 11 15:54:41 crc kubenswrapper[5050]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 15:53:18.278960086 +0000 UTC m=+7489.122682672,LastTimestamp:2025-12-11 15:53:18.278960086 +0000 UTC m=+7489.122682672,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Dec 11 15:54:41 crc kubenswrapper[5050]: > Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:30.902765 5050 reflector.go:561] object-"openstack"/"octavia-rsyslog-scripts": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Doctavia-rsyslog-scripts&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:30.902840 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"octavia-rsyslog-scripts\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Doctavia-rsyslog-scripts&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:30.922753 5050 reflector.go:561] object-"metallb-system"/"frr-startup": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/configmaps?fieldSelector=metadata.name%3Dfrr-startup&resourceVersion=109158": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:30.922802 5050 reflector.go:158] "Unhandled Error" err="object-\"metallb-system\"/\"frr-startup\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/configmaps?fieldSelector=metadata.name%3Dfrr-startup&resourceVersion=109158\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:30.942437 5050 reflector.go:561] object-"openstack"/"barbican-keystone-listener-config-data": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dbarbican-keystone-listener-config-data&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:30.942505 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"barbican-keystone-listener-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dbarbican-keystone-listener-config-data&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused" 
logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:30.963080 5050 reflector.go:561] object-"openshift-console"/"oauth-serving-cert": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/configmaps?fieldSelector=metadata.name%3Doauth-serving-cert&resourceVersion=109067": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:30.963152 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-console\"/\"oauth-serving-cert\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/configmaps?fieldSelector=metadata.name%3Doauth-serving-cert&resourceVersion=109067\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:30.982372 5050 reflector.go:561] object-"openshift-authentication-operator"/"trusted-ca-bundle": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication-operator/configmaps?fieldSelector=metadata.name%3Dtrusted-ca-bundle&resourceVersion=109807": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:30.982450 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication-operator\"/\"trusted-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication-operator/configmaps?fieldSelector=metadata.name%3Dtrusted-ca-bundle&resourceVersion=109807\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:31.003059 5050 reflector.go:561] object-"openshift-multus"/"cni-copy-resources": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-multus/configmaps?fieldSelector=metadata.name%3Dcni-copy-resources&resourceVersion=109888": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:31.003121 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"cni-copy-resources\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-multus/configmaps?fieldSelector=metadata.name%3Dcni-copy-resources&resourceVersion=109888\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:31.022900 5050 reflector.go:561] object-"openstack"/"alertmanager-metric-storage-tls-assets-0": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dalertmanager-metric-storage-tls-assets-0&resourceVersion=109846": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:31.022967 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"alertmanager-metric-storage-tls-assets-0\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dalertmanager-metric-storage-tls-assets-0&resourceVersion=109846\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: 
W1211 15:54:31.042778 5050 reflector.go:561] object-"hostpath-provisioner"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/hostpath-provisioner/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109158": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:31.042844 5050 reflector.go:158] "Unhandled Error" err="object-\"hostpath-provisioner\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/hostpath-provisioner/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109158\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:31.062843 5050 reflector.go:561] object-"openshift-apiserver"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109685": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:31.063210 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109685\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:31.082439 5050 reflector.go:561] object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-glgrh": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dheat-operator-controller-manager-dockercfg-glgrh&resourceVersion=109846": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:31.082482 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"heat-operator-controller-manager-dockercfg-glgrh\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dheat-operator-controller-manager-dockercfg-glgrh&resourceVersion=109846\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:31.102923 5050 reflector.go:561] object-"openstack-operators"/"metrics-server-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dmetrics-server-cert&resourceVersion=109671": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:31.103047 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"metrics-server-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dmetrics-server-cert&resourceVersion=109671\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:31.122685 5050 reflector.go:561] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?resourceVersion=109666": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:31.122739 5050 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?resourceVersion=109666\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:31.142710 5050 request.go:700] Waited for 4.319888208s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-api/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109522 Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:31.143159 5050 reflector.go:561] object-"openshift-machine-api"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-api/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109522": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:31.143234 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-api/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109522\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:31.162533 5050 reflector.go:561] object-"openstack"/"keystone": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dkeystone&resourceVersion=109887": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:31.162598 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"keystone\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dkeystone&resourceVersion=109887\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:31.183501 5050 reflector.go:561] object-"openstack"/"octavia-api-config-data": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Doctavia-api-config-data&resourceVersion=109803": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:31.183577 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"octavia-api-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Doctavia-api-config-data&resourceVersion=109803\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:31.202646 5050 reflector.go:561] object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg": failed 
to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/secrets?fieldSelector=metadata.name%3Dmarketplace-operator-dockercfg-5nsgg&resourceVersion=109803": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:31.202713 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-marketplace\"/\"marketplace-operator-dockercfg-5nsgg\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/secrets?fieldSelector=metadata.name%3Dmarketplace-operator-dockercfg-5nsgg&resourceVersion=109803\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:31.211794 5050 patch_prober.go:28] interesting pod/etcd-crc container/etcd namespace/openshift-etcd: Liveness probe status=failure output="Get \"https://192.168.126.11:9980/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:31.211876 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-etcd/etcd-crc" podUID="2139d3e2895fc6797b9c76a1b4c9886d" containerName="etcd" probeResult="failure" output="Get \"https://192.168.126.11:9980/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:31.223329 5050 reflector.go:561] object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-kube-scheduler-operator-config&resourceVersion=109158": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:31.223411 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-kube-scheduler-operator-config&resourceVersion=109158\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:31.242728 5050 reflector.go:561] object-"openshift-multus"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-multus/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109522": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:31.242805 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-multus/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109522\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:31.262435 5050 reflector.go:561] object-"openshift-console"/"console-dockercfg-f62pw": failed to list *v1.Secret: Get 
"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/secrets?fieldSelector=metadata.name%3Dconsole-dockercfg-f62pw&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:31.262518 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-console\"/\"console-dockercfg-f62pw\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/secrets?fieldSelector=metadata.name%3Dconsole-dockercfg-f62pw&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:31.282715 5050 reflector.go:561] object-"openshift-machine-api"/"control-plane-machine-set-operator-tls": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-api/secrets?fieldSelector=metadata.name%3Dcontrol-plane-machine-set-operator-tls&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:31.282784 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-api/secrets?fieldSelector=metadata.name%3Dcontrol-plane-machine-set-operator-tls&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:31.283211 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-5fb79d99b5-m4xgd" podUID="2086bc41-00a2-4c97-a491-08511f3ed6e5" containerName="horizon" probeResult="failure" output="Get \"http://10.217.1.117:8080/dashboard/auth/login/?next=/dashboard/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:31.302627 5050 reflector.go:561] object-"openstack"/"nova-nova-dockercfg-mdjbl": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dnova-nova-dockercfg-mdjbl&resourceVersion=109671": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:31.302703 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"nova-nova-dockercfg-mdjbl\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dnova-nova-dockercfg-mdjbl&resourceVersion=109671\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:31.322623 5050 reflector.go:561] object-"openshift-apiserver"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109158": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:31.322696 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109158\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:31.343610 5050 reflector.go:561] object-"openshift-machine-config-operator"/"machine-config-server-tls": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmachine-config-server-tls&resourceVersion=109887": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:31.343692 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"machine-config-server-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmachine-config-server-tls&resourceVersion=109887\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:31.362641 5050 reflector.go:561] object-"openshift-network-node-identity"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109559": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:31.362686 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-node-identity\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109559\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:31.382673 5050 reflector.go:561] object-"openstack"/"placement-scripts": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dplacement-scripts&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:31.382759 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"placement-scripts\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dplacement-scripts&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:31.403617 5050 reflector.go:561] object-"openshift-network-node-identity"/"env-overrides": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/configmaps?fieldSelector=metadata.name%3Denv-overrides&resourceVersion=109522": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:31.403701 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-node-identity\"/\"env-overrides\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/configmaps?fieldSelector=metadata.name%3Denv-overrides&resourceVersion=109522\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:31.422879 5050 reflector.go:561] object-"openshift-authentication-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109685": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:31.422978 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109685\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:31.443188 5050 reflector.go:561] object-"openshift-oauth-apiserver"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-oauth-apiserver/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109117": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:31.443257 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-oauth-apiserver\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-oauth-apiserver/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109117\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:31.463301 5050 reflector.go:561] object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-jbnlr": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dopenstack-baremetal-operator-controller-manager-dockercfg-jbnlr&resourceVersion=109743": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:31.463374 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"openstack-baremetal-operator-controller-manager-dockercfg-jbnlr\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dopenstack-baremetal-operator-controller-manager-dockercfg-jbnlr&resourceVersion=109743\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:31.483078 5050 reflector.go:561] object-"openstack"/"heat-cfnapi-config-data": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dheat-cfnapi-config-data&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:31.483140 5050 reflector.go:158] "Unhandled Error" 
err="object-\"openstack\"/\"heat-cfnapi-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dheat-cfnapi-config-data&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:31.497725 5050 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.147:6443: connect: connection refused" interval="7s" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:31.503129 5050 reflector.go:561] object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler-operator/secrets?fieldSelector=metadata.name%3Dopenshift-kube-scheduler-operator-dockercfg-qt55r&resourceVersion=109846": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:31.503178 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-dockercfg-qt55r\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler-operator/secrets?fieldSelector=metadata.name%3Dopenshift-kube-scheduler-operator-dockercfg-qt55r&resourceVersion=109846\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:31.522909 5050 reflector.go:561] object-"cert-manager-operator"/"cert-manager-operator-controller-manager-dockercfg-p54cz": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/cert-manager-operator/secrets?fieldSelector=metadata.name%3Dcert-manager-operator-controller-manager-dockercfg-p54cz&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:31.522993 5050 reflector.go:158] "Unhandled Error" err="object-\"cert-manager-operator\"/\"cert-manager-operator-controller-manager-dockercfg-p54cz\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/cert-manager-operator/secrets?fieldSelector=metadata.name%3Dcert-manager-operator-controller-manager-dockercfg-p54cz&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:31.543306 5050 reflector.go:561] object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-multus/secrets?fieldSelector=metadata.name%3Dmultus-ancillary-tools-dockercfg-vnmsz&resourceVersion=109360": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:31.543381 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"multus-ancillary-tools-dockercfg-vnmsz\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-multus/secrets?fieldSelector=metadata.name%3Dmultus-ancillary-tools-dockercfg-vnmsz&resourceVersion=109360\": dial tcp 38.102.83.147:6443: 
connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:31.558431 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-qdrgd" podUID="4b4e8aa9-a0f6-4bd2-92d2-c524aedf6389" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.74:8081/readyz\": dial tcp 10.217.0.74:8081: connect: connection refused" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:31.562976 5050 reflector.go:561] object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109685": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:31.563084 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-operator-lifecycle-manager\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109685\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:31.583189 5050 reflector.go:561] object-"openstack"/"octavia-octavia-dockercfg-h4g5n": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Doctavia-octavia-dockercfg-h4g5n&resourceVersion=109803": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:31.583300 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"octavia-octavia-dockercfg-h4g5n\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Doctavia-octavia-dockercfg-h4g5n&resourceVersion=109803\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:31.602537 5050 reflector.go:561] object-"openstack"/"octavia-healthmanager-scripts": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Doctavia-healthmanager-scripts&resourceVersion=109586": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:31.602614 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"octavia-healthmanager-scripts\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Doctavia-healthmanager-scripts&resourceVersion=109586\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:31.623082 5050 reflector.go:561] object-"openshift-cluster-version"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-version/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109685": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:31.623149 5050 
reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-version\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-version/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109685\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:31.631663 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-n7crp" podUID="85683dfb-37fb-4301-8c7a-fbb7453b303d" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.75:8081/readyz\": dial tcp 10.217.0.75:8081: connect: connection refused" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:31.643516 5050 reflector.go:561] object-"openstack"/"keystone-scripts": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dkeystone-scripts&resourceVersion=109887": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:31.643583 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"keystone-scripts\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dkeystone-scripts&resourceVersion=109887\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:31.663242 5050 reflector.go:561] object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-api/secrets?fieldSelector=metadata.name%3Dmachine-api-operator-dockercfg-mfbb7&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:31.663321 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"machine-api-operator-dockercfg-mfbb7\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-api/secrets?fieldSelector=metadata.name%3Dmachine-api-operator-dockercfg-mfbb7&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:31.682914 5050 reflector.go:561] object-"openshift-machine-api"/"machine-api-operator-tls": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-api/secrets?fieldSelector=metadata.name%3Dmachine-api-operator-tls&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:31.682994 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"machine-api-operator-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-api/secrets?fieldSelector=metadata.name%3Dmachine-api-operator-tls&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:31.702985 5050 reflector.go:561] 
object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-controller-manager-operator-config&resourceVersion=109559": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:31.703095 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-controller-manager-operator-config&resourceVersion=109559\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:31.723359 5050 reflector.go:561] object-"metallb-system"/"metallb-memberlist": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/secrets?fieldSelector=metadata.name%3Dmetallb-memberlist&resourceVersion=109887": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:31.723414 5050 reflector.go:158] "Unhandled Error" err="object-\"metallb-system\"/\"metallb-memberlist\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/secrets?fieldSelector=metadata.name%3Dmetallb-memberlist&resourceVersion=109887\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:31.726214 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-operator-799b66f579-tqs2v" podUID="53a1a6d1-6999-4fa1-a0ce-a20b83f1f347" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.73:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:31.742735 5050 reflector.go:561] object-"openstack"/"ceph-conf-files": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dceph-conf-files&resourceVersion=109743": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:31.742810 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"ceph-conf-files\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dceph-conf-files&resourceVersion=109743\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:31.763331 5050 reflector.go:561] object-"openshift-config-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109762": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:31.763396 5050 reflector.go:158] "Unhandled Error" 
err="object-\"openshift-config-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109762\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:31.783098 5050 reflector.go:561] object-"openstack"/"cinder-volume-volume1-config-data": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcinder-volume-volume1-config-data&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:31.783157 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"cinder-volume-volume1-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcinder-volume-volume1-config-data&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:31.802465 5050 reflector.go:561] object-"openshift-network-operator"/"iptables-alerter-script": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/configmaps?fieldSelector=metadata.name%3Diptables-alerter-script&resourceVersion=109158": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:31.802531 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-operator\"/\"iptables-alerter-script\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/configmaps?fieldSelector=metadata.name%3Diptables-alerter-script&resourceVersion=109158\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:31.823183 5050 reflector.go:561] object-"openshift-dns"/"dns-default": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-dns/configmaps?fieldSelector=metadata.name%3Ddns-default&resourceVersion=109158": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:31.823234 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-dns\"/\"dns-default\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-dns/configmaps?fieldSelector=metadata.name%3Ddns-default&resourceVersion=109158\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:31.842945 5050 reflector.go:561] object-"openstack"/"dnsmasq-dns-dockercfg-kpmgv": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Ddnsmasq-dns-dockercfg-kpmgv&resourceVersion=109137": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:31.843041 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"dnsmasq-dns-dockercfg-kpmgv\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Ddnsmasq-dns-dockercfg-kpmgv&resourceVersion=109137\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:31.862454 5050 reflector.go:561] object-"openshift-operators"/"obo-prometheus-operator-dockercfg-mpfzr": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operators/secrets?fieldSelector=metadata.name%3Dobo-prometheus-operator-dockercfg-mpfzr&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:31.862506 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-operators\"/\"obo-prometheus-operator-dockercfg-mpfzr\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operators/secrets?fieldSelector=metadata.name%3Dobo-prometheus-operator-dockercfg-mpfzr&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:31.883504 5050 reflector.go:561] object-"openshift-config-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109522": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:31.883551 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-config-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109522\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:31.903248 5050 reflector.go:561] object-"openshift-apiserver"/"config": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/configmaps?fieldSelector=metadata.name%3Dconfig&resourceVersion=109685": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:31.903328 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/configmaps?fieldSelector=metadata.name%3Dconfig&resourceVersion=109685\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:31.923060 5050 reflector.go:561] object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-mc6vn": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Ddesignate-operator-controller-manager-dockercfg-mc6vn&resourceVersion=109087": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:31.923124 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"designate-operator-controller-manager-dockercfg-mc6vn\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Ddesignate-operator-controller-manager-dockercfg-mc6vn&resourceVersion=109087\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:31.942753 5050 reflector.go:561] object-"metallb-system"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109522": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:31.942797 5050 reflector.go:158] "Unhandled Error" err="object-\"metallb-system\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109522\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:31.947175 5050 patch_prober.go:28] interesting pod/network-check-target-xd92c container/network-check-target-container namespace/openshift-network-diagnostics: Readiness probe status=failure output="Get \"http://10.217.0.4:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:31.947210 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" containerName="network-check-target-container" probeResult="failure" output="Get \"http://10.217.0.4:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:31.947176 5050 patch_prober.go:28] interesting pod/dns-default-25p7l container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.217.0.27:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:31.947284 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-25p7l" podUID="66dfe3f5-9e7a-4e1c-b03a-b6f01dab6ef4" containerName="dns" probeResult="failure" output="Get \"http://10.217.0.27:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:31.963514 5050 reflector.go:561] object-"openshift-image-registry"/"registry-dockercfg-kzzsd": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/secrets?fieldSelector=metadata.name%3Dregistry-dockercfg-kzzsd&resourceVersion=109887": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:31.963592 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-image-registry\"/\"registry-dockercfg-kzzsd\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/secrets?fieldSelector=metadata.name%3Dregistry-dockercfg-kzzsd&resourceVersion=109887\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 
15:54:31.982545 5050 reflector.go:561] object-"openshift-config-operator"/"config-operator-serving-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/secrets?fieldSelector=metadata.name%3Dconfig-operator-serving-cert&resourceVersion=109846": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:31.982610 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-config-operator\"/\"config-operator-serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/secrets?fieldSelector=metadata.name%3Dconfig-operator-serving-cert&resourceVersion=109846\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:32.002487 5050 reflector.go:561] object-"openshift-authentication"/"audit": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/configmaps?fieldSelector=metadata.name%3Daudit&resourceVersion=109158": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:32.002549 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"audit\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/configmaps?fieldSelector=metadata.name%3Daudit&resourceVersion=109158\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:32.023364 5050 reflector.go:561] object-"openshift-authentication-operator"/"serving-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication-operator/secrets?fieldSelector=metadata.name%3Dserving-cert&resourceVersion=109671": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:32.023535 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication-operator\"/\"serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication-operator/secrets?fieldSelector=metadata.name%3Dserving-cert&resourceVersion=109671\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:32.042743 5050 reflector.go:561] object-"openshift-apiserver"/"serving-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/secrets?fieldSelector=metadata.name%3Dserving-cert&resourceVersion=109846": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:32.042809 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/secrets?fieldSelector=metadata.name%3Dserving-cert&resourceVersion=109846\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:32.062768 5050 reflector.go:561] object-"openshift-cluster-machine-approver"/"machine-approver-config": failed to list *v1.ConfigMap: 
Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-machine-approver/configmaps?fieldSelector=metadata.name%3Dmachine-approver-config&resourceVersion=109067": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:32.062845 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-machine-approver\"/\"machine-approver-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-machine-approver/configmaps?fieldSelector=metadata.name%3Dmachine-approver-config&resourceVersion=109067\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:32.083203 5050 reflector.go:561] object-"openstack"/"rabbitmq-cell1-default-user": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Drabbitmq-cell1-default-user&resourceVersion=109846": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:32.083276 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"rabbitmq-cell1-default-user\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Drabbitmq-cell1-default-user&resourceVersion=109846\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:32.103093 5050 reflector.go:561] object-"openshift-dns"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-dns/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109807": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:32.103196 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-dns\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-dns/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109807\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:32.122397 5050 reflector.go:561] object-"openstack"/"cert-galera-openstack-cell1-svc": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-galera-openstack-cell1-svc&resourceVersion=109743": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:32.122458 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"cert-galera-openstack-cell1-svc\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-galera-openstack-cell1-svc&resourceVersion=109743\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:32.138023 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-xgbp2" podUID="06df6c8c-640d-431b-b216-78345a9054e1" 
containerName="manager" probeResult="failure" output="Get \"http://10.217.0.82:8081/readyz\": dial tcp 10.217.0.82:8081: connect: connection refused" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:32.142431 5050 reflector.go:561] object-"openstack-operators"/"openstack-operator-controller-operator-dockercfg-68tnd": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dopenstack-operator-controller-operator-dockercfg-68tnd&resourceVersion=109586": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:32.142475 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"openstack-operator-controller-operator-dockercfg-68tnd\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dopenstack-operator-controller-operator-dockercfg-68tnd&resourceVersion=109586\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:32.162291 5050 request.go:700] Waited for 4.308562585s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dprometheus-metric-storage-rulefiles-0&resourceVersion=109888 Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:32.162671 5050 reflector.go:561] object-"openstack"/"prometheus-metric-storage-rulefiles-0": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dprometheus-metric-storage-rulefiles-0&resourceVersion=109888": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:32.162722 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"prometheus-metric-storage-rulefiles-0\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dprometheus-metric-storage-rulefiles-0&resourceVersion=109888\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:32.183187 5050 reflector.go:561] object-"metallb-system"/"manager-account-dockercfg-m6zt9": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/secrets?fieldSelector=metadata.name%3Dmanager-account-dockercfg-m6zt9&resourceVersion=109586": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:32.183284 5050 reflector.go:158] "Unhandled Error" err="object-\"metallb-system\"/\"manager-account-dockercfg-m6zt9\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/secrets?fieldSelector=metadata.name%3Dmanager-account-dockercfg-m6zt9&resourceVersion=109586\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:32.203037 5050 reflector.go:561] object-"openshift-cluster-version"/"cluster-version-operator-serving-cert": failed to list *v1.Secret: Get 
"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-version/secrets?fieldSelector=metadata.name%3Dcluster-version-operator-serving-cert&resourceVersion=109586": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:32.203108 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-version\"/\"cluster-version-operator-serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-version/secrets?fieldSelector=metadata.name%3Dcluster-version-operator-serving-cert&resourceVersion=109586\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:32.222797 5050 reflector.go:561] object-"openshift-controller-manager-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109559": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:32.222861 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109559\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:32.242422 5050 reflector.go:561] object-"openshift-ingress"/"router-metrics-certs-default": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress/secrets?fieldSelector=metadata.name%3Drouter-metrics-certs-default&resourceVersion=109846": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:32.242481 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-ingress\"/\"router-metrics-certs-default\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress/secrets?fieldSelector=metadata.name%3Drouter-metrics-certs-default&resourceVersion=109846\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:32.263156 5050 reflector.go:561] object-"openshift-authentication"/"v4-0-config-system-serving-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/secrets?fieldSelector=metadata.name%3Dv4-0-config-system-serving-cert&resourceVersion=109803": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:32.263198 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"v4-0-config-system-serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/secrets?fieldSelector=metadata.name%3Dv4-0-config-system-serving-cert&resourceVersion=109803\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:32.283065 5050 
reflector.go:561] object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-mz95s": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dopenstack-operator-controller-manager-dockercfg-mz95s&resourceVersion=109743": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:32.283126 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"openstack-operator-controller-manager-dockercfg-mz95s\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dopenstack-operator-controller-manager-dockercfg-mz95s&resourceVersion=109743\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:32.303160 5050 reflector.go:561] object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/secrets?fieldSelector=metadata.name%3Dingress-operator-dockercfg-7lnqk&resourceVersion=109803": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:32.303211 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-ingress-operator\"/\"ingress-operator-dockercfg-7lnqk\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/secrets?fieldSelector=metadata.name%3Dingress-operator-dockercfg-7lnqk&resourceVersion=109803\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:32.322451 5050 reflector.go:561] object-"openstack"/"rabbitmq-erlang-cookie": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Drabbitmq-erlang-cookie&resourceVersion=109743": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:32.322512 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"rabbitmq-erlang-cookie\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Drabbitmq-erlang-cookie&resourceVersion=109743\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:32.343049 5050 reflector.go:561] object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-nffdg": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dglance-operator-controller-manager-dockercfg-nffdg&resourceVersion=109671": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:32.343103 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"glance-operator-controller-manager-dockercfg-nffdg\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dglance-operator-controller-manager-dockercfg-nffdg&resourceVersion=109671\": dial tcp 
38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:32.363191 5050 reflector.go:561] object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-etcd-operator/secrets?fieldSelector=metadata.name%3Detcd-operator-dockercfg-r9srn&resourceVersion=109887": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:32.363261 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-etcd-operator\"/\"etcd-operator-dockercfg-r9srn\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-etcd-operator/secrets?fieldSelector=metadata.name%3Detcd-operator-dockercfg-r9srn&resourceVersion=109887\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:32.383339 5050 reflector.go:561] object-"openshift-ingress-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109807": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:32.383431 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-ingress-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109807\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:32.402918 5050 reflector.go:561] object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-oauth-apiserver/secrets?fieldSelector=metadata.name%3Doauth-apiserver-sa-dockercfg-6r2bq&resourceVersion=109501": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:32.403031 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-oauth-apiserver\"/\"oauth-apiserver-sa-dockercfg-6r2bq\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-oauth-apiserver/secrets?fieldSelector=metadata.name%3Doauth-apiserver-sa-dockercfg-6r2bq&resourceVersion=109501\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:32.416322 5050 patch_prober.go:28] interesting pod/dns-default-25p7l container/dns namespace/openshift-dns: Liveness probe status=failure output="Get \"http://10.217.0.27:8080/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:32.416416 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-dns/dns-default-25p7l" podUID="66dfe3f5-9e7a-4e1c-b03a-b6f01dab6ef4" containerName="dns" probeResult="failure" output="Get \"http://10.217.0.27:8080/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:32.423196 5050 
reflector.go:561] object-"openstack"/"horizon-horizon-dockercfg-d7bqh": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dhorizon-horizon-dockercfg-d7bqh&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:32.423278 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"horizon-horizon-dockercfg-d7bqh\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dhorizon-horizon-dockercfg-d7bqh&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:32.442923 5050 reflector.go:561] object-"openshift-authentication"/"v4-0-config-system-service-ca": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/configmaps?fieldSelector=metadata.name%3Dv4-0-config-system-service-ca&resourceVersion=109067": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:32.443047 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"v4-0-config-system-service-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/configmaps?fieldSelector=metadata.name%3Dv4-0-config-system-service-ca&resourceVersion=109067\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:32.463567 5050 reflector.go:561] object-"openshift-apiserver"/"etcd-serving-ca": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/configmaps?fieldSelector=metadata.name%3Detcd-serving-ca&resourceVersion=109807": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:32.463683 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"etcd-serving-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/configmaps?fieldSelector=metadata.name%3Detcd-serving-ca&resourceVersion=109807\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:32.482794 5050 reflector.go:561] object-"openshift-network-console"/"networking-console-plugin-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-console/secrets?fieldSelector=metadata.name%3Dnetworking-console-plugin-cert&resourceVersion=109623": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:32.482834 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-console\"/\"networking-console-plugin-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-console/secrets?fieldSelector=metadata.name%3Dnetworking-console-plugin-cert&resourceVersion=109623\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:32.502819 5050 reflector.go:561] 
object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ovn-kubernetes/secrets?fieldSelector=metadata.name%3Dovn-kubernetes-control-plane-dockercfg-gs7dd&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:32.502876 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-control-plane-dockercfg-gs7dd\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ovn-kubernetes/secrets?fieldSelector=metadata.name%3Dovn-kubernetes-control-plane-dockercfg-gs7dd&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:32.522460 5050 reflector.go:561] object-"openshift-dns-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-dns-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109158": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:32.522531 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-dns-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-dns-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109158\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:32.541200 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-pjhfc" podUID="3c9e825c-0aee-42b9-a7a5-3191486f301d" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.85:8081/readyz\": dial tcp 10.217.0.85:8081: connect: connection refused" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:32.542564 5050 reflector.go:561] object-"openshift-etcd-operator"/"etcd-service-ca-bundle": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-etcd-operator/configmaps?fieldSelector=metadata.name%3Detcd-service-ca-bundle&resourceVersion=109067": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:32.542604 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-etcd-operator\"/\"etcd-service-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-etcd-operator/configmaps?fieldSelector=metadata.name%3Detcd-service-ca-bundle&resourceVersion=109067\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:32.562669 5050 reflector.go:561] object-"openstack"/"memcached-config-data": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dmemcached-config-data&resourceVersion=109522": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:32.562748 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"memcached-config-data\": Failed 
to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dmemcached-config-data&resourceVersion=109522\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:32.583023 5050 reflector.go:561] object-"openshift-authentication"/"v4-0-config-system-session": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/secrets?fieldSelector=metadata.name%3Dv4-0-config-system-session&resourceVersion=109137": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:32.583090 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"v4-0-config-system-session\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/secrets?fieldSelector=metadata.name%3Dv4-0-config-system-session&resourceVersion=109137\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:32.603234 5050 reflector.go:561] object-"openstack"/"octavia-config-data": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Doctavia-config-data&resourceVersion=109671": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:32.603292 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"octavia-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Doctavia-config-data&resourceVersion=109671\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:32.622466 5050 reflector.go:561] object-"openstack"/"heat-engine-config-data": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dheat-engine-config-data&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:32.622509 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"heat-engine-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dheat-engine-config-data&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:32.642675 5050 reflector.go:561] object-"openstack"/"octavia-healthmanager-config-data": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Doctavia-healthmanager-config-data&resourceVersion=109586": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:32.642739 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"octavia-healthmanager-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Doctavia-healthmanager-config-data&resourceVersion=109586\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:32.663227 5050 reflector.go:561] object-"openshift-authentication"/"v4-0-config-system-router-certs": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/secrets?fieldSelector=metadata.name%3Dv4-0-config-system-router-certs&resourceVersion=109846": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:32.663271 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"v4-0-config-system-router-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/secrets?fieldSelector=metadata.name%3Dv4-0-config-system-router-certs&resourceVersion=109846\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:32.682913 5050 reflector.go:561] object-"openshift-network-diagnostics"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109067": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:32.682996 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109067\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:32.702968 5050 reflector.go:561] object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-4x88l": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dovn-operator-controller-manager-dockercfg-4x88l&resourceVersion=109803": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:32.703058 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"ovn-operator-controller-manager-dockercfg-4x88l\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dovn-operator-controller-manager-dockercfg-4x88l&resourceVersion=109803\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:32.722829 5050 reflector.go:561] object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/configmaps?fieldSelector=metadata.name%3Dv4-0-config-system-trusted-ca-bundle&resourceVersion=109888": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:32.722910 5050 reflector.go:158] "Unhandled Error" 
err="object-\"openshift-authentication\"/\"v4-0-config-system-trusted-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/configmaps?fieldSelector=metadata.name%3Dv4-0-config-system-trusted-ca-bundle&resourceVersion=109888\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:32.742703 5050 reflector.go:561] object-"openstack"/"ovn-data-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dovn-data-cert&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:32.742769 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"ovn-data-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dovn-data-cert&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:32.763090 5050 reflector.go:561] object-"openshift-cluster-samples-operator"/"samples-operator-tls": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-samples-operator/secrets?fieldSelector=metadata.name%3Dsamples-operator-tls&resourceVersion=109671": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:32.763246 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-samples-operator\"/\"samples-operator-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-samples-operator/secrets?fieldSelector=metadata.name%3Dsamples-operator-tls&resourceVersion=109671\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:32.764351 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/designate-operator-controller-manager-697fb699cf-sqjhh" podUID="0aa7657b-dbca-4b2b-ac62-7000681a918a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.76:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:32.764455 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/designate-operator-controller-manager-697fb699cf-sqjhh" podUID="0aa7657b-dbca-4b2b-ac62-7000681a918a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.76:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:32.773388 5050 kuberuntime_container.go:700] "PreStop hook not completed in grace period" pod="openstack/openstack-galera-0" podUID="14ad1594-090d-4024-a999-9ffe77ce58d8" containerName="galera" containerID="cri-o://11ec72e9f06368997b90f40f70878174683db5a652ebb2eb2c432e34abde950e" gracePeriod=30 Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:32.773421 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-galera-0" podUID="14ad1594-090d-4024-a999-9ffe77ce58d8" containerName="galera" 
containerID="cri-o://11ec72e9f06368997b90f40f70878174683db5a652ebb2eb2c432e34abde950e" gracePeriod=2 Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:32.778918 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="14ad1594-090d-4024-a999-9ffe77ce58d8" containerName="galera" probeResult="failure" output="command timed out" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:32.782663 5050 reflector.go:561] object-"openstack"/"octavia-hmport-map": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Doctavia-hmport-map&resourceVersion=109522": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:32.782727 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"octavia-hmport-map\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Doctavia-hmport-map&resourceVersion=109522\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:32.803205 5050 reflector.go:561] object-"openshift-nmstate"/"nmstate-operator-dockercfg-b8vjd": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-nmstate/secrets?fieldSelector=metadata.name%3Dnmstate-operator-dockercfg-b8vjd&resourceVersion=109846": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:32.803265 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-nmstate\"/\"nmstate-operator-dockercfg-b8vjd\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-nmstate/secrets?fieldSelector=metadata.name%3Dnmstate-operator-dockercfg-b8vjd&resourceVersion=109846\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:32.823112 5050 reflector.go:561] object-"openstack"/"ovnnorthd-config": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dovnnorthd-config&resourceVersion=109602": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:32.823173 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"ovnnorthd-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dovnnorthd-config&resourceVersion=109602\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:32.842856 5050 reflector.go:561] object-"openstack"/"prometheus-metric-storage-tls-assets-0": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dprometheus-metric-storage-tls-assets-0&resourceVersion=109846": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:32.842944 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"prometheus-metric-storage-tls-assets-0\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dprometheus-metric-storage-tls-assets-0&resourceVersion=109846\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:32.862469 5050 reflector.go:561] object-"openshift-multus"/"default-cni-sysctl-allowlist": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-multus/configmaps?fieldSelector=metadata.name%3Ddefault-cni-sysctl-allowlist&resourceVersion=109158": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:32.862534 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"default-cni-sysctl-allowlist\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-multus/configmaps?fieldSelector=metadata.name%3Ddefault-cni-sysctl-allowlist&resourceVersion=109158\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:32.883422 5050 reflector.go:561] object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager-operator/secrets?fieldSelector=metadata.name%3Dopenshift-controller-manager-operator-serving-cert&resourceVersion=109137": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:32.883487 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager-operator/secrets?fieldSelector=metadata.name%3Dopenshift-controller-manager-operator-serving-cert&resourceVersion=109137\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:32.903136 5050 reflector.go:561] object-"openshift-cluster-machine-approver"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-machine-approver/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109158": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:32.903215 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-machine-approver\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-machine-approver/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109158\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:32.923315 5050 reflector.go:561] object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-6tg85": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dplacement-operator-controller-manager-dockercfg-6tg85&resourceVersion=109087": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc 
kubenswrapper[5050]: E1211 15:54:32.923405 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"placement-operator-controller-manager-dockercfg-6tg85\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dplacement-operator-controller-manager-dockercfg-6tg85&resourceVersion=109087\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:32.943510 5050 reflector.go:561] object-"openstack"/"manila-scheduler-config-data": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dmanila-scheduler-config-data&resourceVersion=109803": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:32.943590 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"manila-scheduler-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dmanila-scheduler-config-data&resourceVersion=109803\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:32.962760 5050 reflector.go:561] object-"openshift-etcd-operator"/"etcd-ca-bundle": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-etcd-operator/configmaps?fieldSelector=metadata.name%3Detcd-ca-bundle&resourceVersion=109762": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:32.962824 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-etcd-operator\"/\"etcd-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-etcd-operator/configmaps?fieldSelector=metadata.name%3Detcd-ca-bundle&resourceVersion=109762\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:32.983114 5050 reflector.go:561] object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/hostpath-provisioner/secrets?fieldSelector=metadata.name%3Dcsi-hostpath-provisioner-sa-dockercfg-qd74k&resourceVersion=109184": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:32.983176 5050 reflector.go:158] "Unhandled Error" err="object-\"hostpath-provisioner\"/\"csi-hostpath-provisioner-sa-dockercfg-qd74k\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/hostpath-provisioner/secrets?fieldSelector=metadata.name%3Dcsi-hostpath-provisioner-sa-dockercfg-qd74k&resourceVersion=109184\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:33.003246 5050 reflector.go:561] object-"openstack-operators"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109158": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 
15:54:41 crc kubenswrapper[5050]: E1211 15:54:33.003303 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109158\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:33.022865 5050 reflector.go:561] object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/secrets?fieldSelector=metadata.name%3Doauth-openshift-dockercfg-znhcc&resourceVersion=109803": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:33.022907 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"oauth-openshift-dockercfg-znhcc\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/secrets?fieldSelector=metadata.name%3Doauth-openshift-dockercfg-znhcc&resourceVersion=109803\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:33.042960 5050 reflector.go:561] object-"openstack"/"manila-scripts": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dmanila-scripts&resourceVersion=109803": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:33.043044 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"manila-scripts\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dmanila-scripts&resourceVersion=109803\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:33.063187 5050 reflector.go:561] object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/secrets?fieldSelector=metadata.name%3Dpackage-server-manager-serving-cert&resourceVersion=109743": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:33.063239 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-operator-lifecycle-manager\"/\"package-server-manager-serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/secrets?fieldSelector=metadata.name%3Dpackage-server-manager-serving-cert&resourceVersion=109743\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:33.082658 5050 reflector.go:561] object-"openshift-etcd-operator"/"etcd-operator-config": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-etcd-operator/configmaps?fieldSelector=metadata.name%3Detcd-operator-config&resourceVersion=109158": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 
15:54:41 crc kubenswrapper[5050]: E1211 15:54:33.082735 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-etcd-operator\"/\"etcd-operator-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-etcd-operator/configmaps?fieldSelector=metadata.name%3Detcd-operator-config&resourceVersion=109158\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:33.096148 5050 patch_prober.go:28] interesting pod/route-controller-manager-767f6d799d-cv7mn container/route-controller-manager namespace/openshift-route-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.64:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:33.096148 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/glance-operator-controller-manager-5697bb5779-9tcm2" podUID="048e17a7-0123-45a2-b698-02def3db74fe" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.77:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:33.096189 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-route-controller-manager/route-controller-manager-767f6d799d-cv7mn" podUID="5b7ceea3-4e92-46ee-81de-5b8f932144ad" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.64:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:33.096218 5050 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-route-controller-manager/route-controller-manager-767f6d799d-cv7mn" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:33.096234 5050 patch_prober.go:28] interesting pod/route-controller-manager-767f6d799d-cv7mn container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.64:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:33.096283 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-767f6d799d-cv7mn" podUID="5b7ceea3-4e92-46ee-81de-5b8f932144ad" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.64:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:33.096332 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/glance-operator-controller-manager-5697bb5779-9tcm2" podUID="048e17a7-0123-45a2-b698-02def3db74fe" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.77:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:33.096911 5050 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="route-controller-manager" 
containerStatusID={"Type":"cri-o","ID":"c5f542dfbbbe579335c7c9dd39dbf3a87a4d70edea814f820b53757fc41f7607"} pod="openshift-route-controller-manager/route-controller-manager-767f6d799d-cv7mn" containerMessage="Container route-controller-manager failed liveness probe, will be restarted" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:33.096948 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-767f6d799d-cv7mn" podUID="5b7ceea3-4e92-46ee-81de-5b8f932144ad" containerName="route-controller-manager" containerID="cri-o://c5f542dfbbbe579335c7c9dd39dbf3a87a4d70edea814f820b53757fc41f7607" gracePeriod=30 Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:33.102947 5050 reflector.go:561] object-"openshift-service-ca"/"service-ca-dockercfg-pn86c": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca/secrets?fieldSelector=metadata.name%3Dservice-ca-dockercfg-pn86c&resourceVersion=109803": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:33.102982 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-service-ca\"/\"service-ca-dockercfg-pn86c\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca/secrets?fieldSelector=metadata.name%3Dservice-ca-dockercfg-pn86c&resourceVersion=109803\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:33.122859 5050 reflector.go:561] object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ovn-kubernetes/secrets?fieldSelector=metadata.name%3Dovn-node-metrics-cert&resourceVersion=109887": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:33.122936 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"ovn-node-metrics-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ovn-kubernetes/secrets?fieldSelector=metadata.name%3Dovn-node-metrics-cert&resourceVersion=109887\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:33.142446 5050 reflector.go:561] object-"metallb-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109602": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:33.142511 5050 reflector.go:158] "Unhandled Error" err="object-\"metallb-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109602\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:33.162785 5050 request.go:700] Waited for 4.387860579s due to client-side throttling, not priority and fairness, request: 
GET:https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dtelemetry-ceilometer-dockercfg-nl629&resourceVersion=109846 Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:33.163411 5050 reflector.go:561] object-"openstack"/"telemetry-ceilometer-dockercfg-nl629": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dtelemetry-ceilometer-dockercfg-nl629&resourceVersion=109846": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:33.163451 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"telemetry-ceilometer-dockercfg-nl629\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dtelemetry-ceilometer-dockercfg-nl629&resourceVersion=109846\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:33.178157 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-xl9wl" podUID="105854f4-5cc1-491f-983a-50864b37893f" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.78:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:33.178185 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-xl9wl" podUID="105854f4-5cc1-491f-983a-50864b37893f" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.78:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:33.182280 5050 reflector.go:561] object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-samples-operator/secrets?fieldSelector=metadata.name%3Dcluster-samples-operator-dockercfg-xpp9w&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:33.182322 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-samples-operator\"/\"cluster-samples-operator-dockercfg-xpp9w\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-samples-operator/secrets?fieldSelector=metadata.name%3Dcluster-samples-operator-dockercfg-xpp9w&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:33.202614 5050 reflector.go:561] object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-storage-version-migrator-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109522": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:33.202666 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-storage-version-migrator-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109522\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:33.223350 5050 reflector.go:561] object-"openshift-etcd-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-etcd-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109158": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:33.223405 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-etcd-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-etcd-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109158\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:33.243239 5050 reflector.go:561] object-"openshift-nmstate"/"nmstate-handler-dockercfg-94hht": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-nmstate/secrets?fieldSelector=metadata.name%3Dnmstate-handler-dockercfg-94hht&resourceVersion=109743": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:33.243305 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-nmstate\"/\"nmstate-handler-dockercfg-94hht\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-nmstate/secrets?fieldSelector=metadata.name%3Dnmstate-handler-dockercfg-94hht&resourceVersion=109743\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:33.263249 5050 reflector.go:561] object-"openstack"/"manila-share-share1-config-data": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dmanila-share-share1-config-data&resourceVersion=109803": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:33.263301 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"manila-share-share1-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dmanila-share-share1-config-data&resourceVersion=109803\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:33.282430 5050 reflector.go:561] object-"openshift-network-operator"/"metrics-tls": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/secrets?fieldSelector=metadata.name%3Dmetrics-tls&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:33.282496 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-operator\"/\"metrics-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/secrets?fieldSelector=metadata.name%3Dmetrics-tls&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:33.302592 5050 reflector.go:561] object-"openshift-image-registry"/"image-registry-tls": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/secrets?fieldSelector=metadata.name%3Dimage-registry-tls&resourceVersion=109743": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:33.302676 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-image-registry\"/\"image-registry-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/secrets?fieldSelector=metadata.name%3Dimage-registry-tls&resourceVersion=109743\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:33.319310 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-jq9vt" podUID="5e11a0d1-4179-4621-803d-839196fb940b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.83:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:33.319344 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-jq9vt" podUID="5e11a0d1-4179-4621-803d-839196fb940b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.83:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:33.323185 5050 reflector.go:561] object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-x9p8j": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dkeystone-operator-controller-manager-dockercfg-x9p8j&resourceVersion=109586": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:33.323249 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"keystone-operator-controller-manager-dockercfg-x9p8j\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dkeystone-operator-controller-manager-dockercfg-x9p8j&resourceVersion=109586\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:33.342397 5050 reflector.go:561] object-"openshift-nmstate"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-nmstate/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109851": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:33.342441 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-nmstate\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-nmstate/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109851\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:33.361225 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-65swh" podUID="58a60a6f-fcf5-4aa0-b6e4-97d7df2b2bd2" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.84:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:33.362443 5050 reflector.go:561] object-"openshift-etcd-operator"/"etcd-client": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-etcd-operator/secrets?fieldSelector=metadata.name%3Detcd-client&resourceVersion=109137": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:33.362502 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-etcd-operator\"/\"etcd-client\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-etcd-operator/secrets?fieldSelector=metadata.name%3Detcd-client&resourceVersion=109137\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:33.382847 5050 reflector.go:561] object-"openshift-controller-manager"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109888": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:33.382912 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109888\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:33.402206 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-65swh" podUID="58a60a6f-fcf5-4aa0-b6e4-97d7df2b2bd2" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.84:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:33.402268 5050 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-65swh" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:33.402293 5050 reflector.go:561] object-"openstack"/"rabbitmq-server-dockercfg-zd6qh": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Drabbitmq-server-dockercfg-zd6qh&resourceVersion=109743": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:33.402323 5050 reflector.go:158] "Unhandled Error" 
err="object-\"openstack\"/\"rabbitmq-server-dockercfg-zd6qh\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Drabbitmq-server-dockercfg-zd6qh&resourceVersion=109743\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:33.403067 5050 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="manager" containerStatusID={"Type":"cri-o","ID":"979e0deb887552cefe316acef526be5df838912215e620a28e6177f1a500c441"} pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-65swh" containerMessage="Container manager failed liveness probe, will be restarted" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:33.403108 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-65swh" podUID="58a60a6f-fcf5-4aa0-b6e4-97d7df2b2bd2" containerName="manager" containerID="cri-o://979e0deb887552cefe316acef526be5df838912215e620a28e6177f1a500c441" gracePeriod=10 Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:33.422823 5050 reflector.go:561] object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-t4t27": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dovncluster-ovndbcluster-nb-dockercfg-t4t27&resourceVersion=109887": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:33.422886 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"ovncluster-ovndbcluster-nb-dockercfg-t4t27\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dovncluster-ovndbcluster-nb-dockercfg-t4t27&resourceVersion=109887\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:33.442956 5050 reflector.go:561] object-"openshift-controller-manager-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109522": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:33.443040 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109522\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:33.463343 5050 reflector.go:561] object-"openstack"/"cinder-config-data": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcinder-config-data&resourceVersion=109803": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:33.463421 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"cinder-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcinder-config-data&resourceVersion=109803\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:33.484841 5050 reflector.go:561] object-"openshift-authentication-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109559": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:33.484912 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109559\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:33.502916 5050 reflector.go:561] object-"openshift-service-ca"/"signing-key": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca/secrets?fieldSelector=metadata.name%3Dsigning-key&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:33.502969 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-service-ca\"/\"signing-key\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca/secrets?fieldSelector=metadata.name%3Dsigning-key&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:33.522761 5050 reflector.go:561] object-"openshift-nmstate"/"nginx-conf": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-nmstate/configmaps?fieldSelector=metadata.name%3Dnginx-conf&resourceVersion=109559": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:33.522821 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-nmstate\"/\"nginx-conf\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-nmstate/configmaps?fieldSelector=metadata.name%3Dnginx-conf&resourceVersion=109559\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:33.542468 5050 reflector.go:561] object-"openshift-route-controller-manager"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109559": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:33.542535 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-route-controller-manager\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109559\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:33.563056 5050 reflector.go:561] object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/secrets?fieldSelector=metadata.name%3Dconsole-operator-dockercfg-4xjcr&resourceVersion=109846": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:33.563143 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-console-operator\"/\"console-operator-dockercfg-4xjcr\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/secrets?fieldSelector=metadata.name%3Dconsole-operator-dockercfg-4xjcr&resourceVersion=109846\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:33.583070 5050 reflector.go:561] object-"openshift-kube-storage-version-migrator-operator"/"serving-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-storage-version-migrator-operator/secrets?fieldSelector=metadata.name%3Dserving-cert&resourceVersion=109846": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:33.583177 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-kube-storage-version-migrator-operator\"/\"serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-storage-version-migrator-operator/secrets?fieldSelector=metadata.name%3Dserving-cert&resourceVersion=109846\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:33.602875 5050 reflector.go:561] object-"openshift-controller-manager"/"client-ca": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/configmaps?fieldSelector=metadata.name%3Dclient-ca&resourceVersion=109685": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:33.602908 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"client-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/configmaps?fieldSelector=metadata.name%3Dclient-ca&resourceVersion=109685\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:33.622665 5050 reflector.go:561] object-"openstack"/"heat-heat-dockercfg-mz9rx": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dheat-heat-dockercfg-mz9rx&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:33.622691 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"heat-heat-dockercfg-mz9rx\": Failed to watch *v1.Secret: failed to list *v1.Secret: 
Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dheat-heat-dockercfg-mz9rx&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:33.642568 5050 reflector.go:561] object-"openshift-controller-manager"/"config": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/configmaps?fieldSelector=metadata.name%3Dconfig&resourceVersion=109522": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:33.642602 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/configmaps?fieldSelector=metadata.name%3Dconfig&resourceVersion=109522\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:33.663295 5050 reflector.go:561] object-"openshift-ingress"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109067": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:33.663350 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-ingress\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109067\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:33.667222 5050 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-jmqdr container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.12:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:33.667241 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-h4w7p" podUID="9c82f51b-e2a0-49e6-bc0e-d7679e439a6f" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.86:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:33.667283 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-jmqdr" podUID="64efd0fc-ec3c-403b-ac98-0546f2affa94" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.12:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:33.683242 5050 reflector.go:561] object-"openshift-authentication"/"v4-0-config-user-template-login": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/secrets?fieldSelector=metadata.name%3Dv4-0-config-user-template-login&resourceVersion=109586": dial 
tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:33.683285 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"v4-0-config-user-template-login\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/secrets?fieldSelector=metadata.name%3Dv4-0-config-user-template-login&resourceVersion=109586\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:33.702912 5050 reflector.go:561] object-"openstack"/"manila-config-data": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dmanila-config-data&resourceVersion=109803": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:33.702970 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"manila-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dmanila-config-data&resourceVersion=109803\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:33.723034 5050 reflector.go:561] object-"openstack"/"dataplane-adoption-secret": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Ddataplane-adoption-secret&resourceVersion=109671": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:33.723103 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"dataplane-adoption-secret\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Ddataplane-adoption-secret&resourceVersion=109671\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:33.742927 5050 reflector.go:561] object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-samples-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109762": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:33.742962 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-samples-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-samples-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109762\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:33.749299 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-h4w7p" podUID="9c82f51b-e2a0-49e6-bc0e-d7679e439a6f" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.86:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc 
kubenswrapper[5050]: I1211 15:54:33.749328 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/octavia-operator-controller-manager-998648c74-lvbdb" podUID="fa0985f7-7d87-41b3-9916-f22375a0489c" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.87:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:33.749388 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/octavia-operator-controller-manager-998648c74-lvbdb" podUID="fa0985f7-7d87-41b3-9916-f22375a0489c" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.87:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:33.749459 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-998648c74-lvbdb" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:33.763290 5050 reflector.go:561] object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmachine-config-controller-dockercfg-c2lfx&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:33.763340 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"machine-config-controller-dockercfg-c2lfx\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmachine-config-controller-dockercfg-c2lfx&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:33.783247 5050 reflector.go:561] object-"openshift-console-operator"/"serving-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/secrets?fieldSelector=metadata.name%3Dserving-cert&resourceVersion=109846": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:33.783310 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-console-operator\"/\"serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/secrets?fieldSelector=metadata.name%3Dserving-cert&resourceVersion=109846\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:33.802988 5050 reflector.go:561] object-"metallb-system"/"frr-k8s-webhook-server-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/secrets?fieldSelector=metadata.name%3Dfrr-k8s-webhook-server-cert&resourceVersion=109743": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:33.803054 5050 reflector.go:158] "Unhandled Error" err="object-\"metallb-system\"/\"frr-k8s-webhook-server-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/secrets?fieldSelector=metadata.name%3Dfrr-k8s-webhook-server-cert&resourceVersion=109743\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:33.823345 5050 reflector.go:561] object-"openshift-oauth-apiserver"/"audit-1": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-oauth-apiserver/configmaps?fieldSelector=metadata.name%3Daudit-1&resourceVersion=109642": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:33.823387 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-oauth-apiserver\"/\"audit-1\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-oauth-apiserver/configmaps?fieldSelector=metadata.name%3Daudit-1&resourceVersion=109642\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:33.842647 5050 reflector.go:561] object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/secrets?fieldSelector=metadata.name%3Dolm-operator-serving-cert&resourceVersion=109623": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:33.842914 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/secrets?fieldSelector=metadata.name%3Dolm-operator-serving-cert&resourceVersion=109623\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:33.862679 5050 reflector.go:561] object-"openstack"/"ceilometer-config-data": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dceilometer-config-data&resourceVersion=109137": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:33.862743 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"ceilometer-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dceilometer-config-data&resourceVersion=109137\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:33.882704 5050 reflector.go:561] object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-storage-version-migrator-operator/secrets?fieldSelector=metadata.name%3Dkube-storage-version-migrator-operator-dockercfg-2bh8d&resourceVersion=109225": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:33.882759 5050 reflector.go:158] "Unhandled Error" 
err="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-dockercfg-2bh8d\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-storage-version-migrator-operator/secrets?fieldSelector=metadata.name%3Dkube-storage-version-migrator-operator-dockercfg-2bh8d&resourceVersion=109225\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:33.902923 5050 reflector.go:561] object-"openstack"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109685": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:33.902973 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109685\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:33.913174 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/test-operator-controller-manager-5854674fcc-qqb7f" podUID="12778398-2baa-44cb-9fd1-f2034870e9fc" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.91:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:33.913262 5050 patch_prober.go:28] interesting pod/observability-operator-d8bb48f5d-wwdcc container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.1.128:8081/healthz\": dial tcp 10.217.1.128:8081: connect: connection refused" start-of-body= Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:33.913282 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/observability-operator-d8bb48f5d-wwdcc" podUID="1c8331c1-b8ee-456b-baa6-110917427b64" containerName="operator" probeResult="failure" output="Get \"http://10.217.1.128:8081/healthz\": dial tcp 10.217.1.128:8081: connect: connection refused" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:33.913394 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-ttg8w" podUID="b885fa10-3ed3-41fd-94ae-2b7442519450" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.89:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:33.922435 5050 reflector.go:561] object-"openstack"/"rabbitmq-cell1-server-dockercfg-bvxnm": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Drabbitmq-cell1-server-dockercfg-bvxnm&resourceVersion=109846": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:33.922503 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"rabbitmq-cell1-server-dockercfg-bvxnm\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Drabbitmq-cell1-server-dockercfg-bvxnm&resourceVersion=109846\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:33.942782 5050 reflector.go:561] object-"openstack"/"ovndbcluster-nb-scripts": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dovndbcluster-nb-scripts&resourceVersion=109807": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:33.942856 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"ovndbcluster-nb-scripts\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dovndbcluster-nb-scripts&resourceVersion=109807\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:33.963184 5050 reflector.go:561] object-"openstack"/"openstackclient-openstackclient-dockercfg-c88pf": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dopenstackclient-openstackclient-dockercfg-c88pf&resourceVersion=109671": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:33.963275 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"openstackclient-openstackclient-dockercfg-c88pf\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dopenstackclient-openstackclient-dockercfg-c88pf&resourceVersion=109671\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:33.982703 5050 reflector.go:561] object-"openshift-console"/"console-serving-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/secrets?fieldSelector=metadata.name%3Dconsole-serving-cert&resourceVersion=109846": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:33.982780 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-console\"/\"console-serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/secrets?fieldSelector=metadata.name%3Dconsole-serving-cert&resourceVersion=109846\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:34.002815 5050 reflector.go:561] object-"openshift-machine-api"/"kube-rbac-proxy": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-api/configmaps?fieldSelector=metadata.name%3Dkube-rbac-proxy&resourceVersion=109685": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:34.002890 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"kube-rbac-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-api/configmaps?fieldSelector=metadata.name%3Dkube-rbac-proxy&resourceVersion=109685\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:34.022715 5050 reflector.go:561] object-"openshift-marketplace"/"community-operators-dockercfg-dmngl": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/secrets?fieldSelector=metadata.name%3Dcommunity-operators-dockercfg-dmngl&resourceVersion=109803": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:34.022755 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-marketplace\"/\"community-operators-dockercfg-dmngl\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/secrets?fieldSelector=metadata.name%3Dcommunity-operators-dockercfg-dmngl&resourceVersion=109803\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:34.042579 5050 reflector.go:561] object-"openshift-authentication"/"v4-0-config-user-template-error": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/secrets?fieldSelector=metadata.name%3Dv4-0-config-user-template-error&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:34.042649 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"v4-0-config-user-template-error\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/secrets?fieldSelector=metadata.name%3Dv4-0-config-user-template-error&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:34.063261 5050 reflector.go:561] object-"openshift-ingress"/"service-ca-bundle": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress/configmaps?fieldSelector=metadata.name%3Dservice-ca-bundle&resourceVersion=109888": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:34.063347 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-ingress\"/\"service-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress/configmaps?fieldSelector=metadata.name%3Dservice-ca-bundle&resourceVersion=109888\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:34.078171 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-ttg8w" podUID="b885fa10-3ed3-41fd-94ae-2b7442519450" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.89:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:34.078262 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-ttg8w" Dec 11 15:54:41 crc 
kubenswrapper[5050]: I1211 15:54:34.078494 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/test-operator-controller-manager-5854674fcc-qqb7f" podUID="12778398-2baa-44cb-9fd1-f2034870e9fc" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.91:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:34.078177 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/watcher-operator-controller-manager-75944c9b7-grhdp" podUID="712f888b-7c45-4c1f-95d8-ccc464b7c15f" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.95:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:34.078569 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-5854674fcc-qqb7f" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:34.083254 5050 reflector.go:561] object-"metallb-system"/"metallb-excludel2": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/configmaps?fieldSelector=metadata.name%3Dmetallb-excludel2&resourceVersion=109762": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:34.083294 5050 reflector.go:158] "Unhandled Error" err="object-\"metallb-system\"/\"metallb-excludel2\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/configmaps?fieldSelector=metadata.name%3Dmetallb-excludel2&resourceVersion=109762\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:34.102597 5050 reflector.go:561] object-"openstack"/"ovsdbserver-nb": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dovsdbserver-nb&resourceVersion=109851": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:34.102646 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"ovsdbserver-nb\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dovsdbserver-nb&resourceVersion=109851\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:34.119189 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="7fb00b03-fe6e-4c66-bd36-adf9443871a8" containerName="prometheus" probeResult="failure" output="Get \"http://10.217.1.135:9090/-/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:34.119189 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-8jnzj" podUID="a6345cf8-abc2-4c9a-bfe6-8b65187ada2d" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.93:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:34.122539 5050 reflector.go:561] 
object-"openstack"/"galera-openstack-dockercfg-5gcmv": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dgalera-openstack-dockercfg-5gcmv&resourceVersion=109743": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:34.122613 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"galera-openstack-dockercfg-5gcmv\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dgalera-openstack-dockercfg-5gcmv&resourceVersion=109743\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:34.142538 5050 reflector.go:561] object-"openshift-ovn-kubernetes"/"ovnkube-config": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ovn-kubernetes/configmaps?fieldSelector=metadata.name%3Dovnkube-config&resourceVersion=109762": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:34.142586 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"ovnkube-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ovn-kubernetes/configmaps?fieldSelector=metadata.name%3Dovnkube-config&resourceVersion=109762\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:34.162867 5050 request.go:700] Waited for 4.192072344s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/secrets?fieldSelector=metadata.name%3Dserving-cert&resourceVersion=109846 Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:34.163292 5050 reflector.go:561] object-"openshift-route-controller-manager"/"serving-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/secrets?fieldSelector=metadata.name%3Dserving-cert&resourceVersion=109846": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:34.163364 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-route-controller-manager\"/\"serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/secrets?fieldSelector=metadata.name%3Dserving-cert&resourceVersion=109846\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:34.182650 5050 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openstack/ovn-data: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/persistentvolumeclaims/ovn-data\": dial tcp 38.102.83.147:6443: connect: connection refused" pod="openstack/ovn-copy-data" volumeName="ovn-data" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:34.201270 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/telemetry-operator-controller-manager-58d5ff84df-ssfnh" podUID="bff5d533-0728-4436-bdeb-c725bf04bdb3" containerName="manager" 
probeResult="failure" output="Get \"http://10.217.0.94:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:34.201314 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-8jnzj" podUID="a6345cf8-abc2-4c9a-bfe6-8b65187ada2d" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.93:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:34.201353 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/watcher-operator-controller-manager-75944c9b7-grhdp" podUID="712f888b-7c45-4c1f-95d8-ccc464b7c15f" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.95:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:34.201503 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-75944c9b7-grhdp" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:34.201850 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/telemetry-operator-controller-manager-58d5ff84df-ssfnh" podUID="bff5d533-0728-4436-bdeb-c725bf04bdb3" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.94:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:34.203188 5050 reflector.go:561] object-"openstack"/"telemetry-autoscaling-dockercfg-7ght8": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dtelemetry-autoscaling-dockercfg-7ght8&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:34.203226 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"telemetry-autoscaling-dockercfg-7ght8\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dtelemetry-autoscaling-dockercfg-7ght8&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:34.223487 5050 reflector.go:561] object-"openstack"/"metric-storage-prometheus-dockercfg-4drvb": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dmetric-storage-prometheus-dockercfg-4drvb&resourceVersion=109846": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:34.223539 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"metric-storage-prometheus-dockercfg-4drvb\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dmetric-storage-prometheus-dockercfg-4drvb&resourceVersion=109846\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:34.226182 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-scheduler-0" 
podUID="131d56da-b770-4452-97c9-b585434da431" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:34.243087 5050 reflector.go:561] object-"openshift-multus"/"metrics-daemon-secret": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-multus/secrets?fieldSelector=metadata.name%3Dmetrics-daemon-secret&resourceVersion=109137": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:34.243139 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"metrics-daemon-secret\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-multus/secrets?fieldSelector=metadata.name%3Dmetrics-daemon-secret&resourceVersion=109137\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:34.263432 5050 reflector.go:561] object-"openstack"/"default-dockercfg-tmtdn": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Ddefault-dockercfg-tmtdn&resourceVersion=109623": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:34.263526 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"default-dockercfg-tmtdn\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Ddefault-dockercfg-tmtdn&resourceVersion=109623\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:34.282326 5050 reflector.go:561] object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/secrets?fieldSelector=metadata.name%3Dkube-apiserver-operator-serving-cert&resourceVersion=109137": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:34.282589 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/secrets?fieldSelector=metadata.name%3Dkube-apiserver-operator-serving-cert&resourceVersion=109137\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:34.302553 5050 reflector.go:561] object-"openstack"/"glance-default-external-config-data": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dglance-default-external-config-data&resourceVersion=109586": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:34.302599 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"glance-default-external-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dglance-default-external-config-data&resourceVersion=109586\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:34.322758 5050 reflector.go:561] object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-crgwv": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Drabbitmq-cluster-operator-controller-manager-dockercfg-crgwv&resourceVersion=109743": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:34.322815 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"rabbitmq-cluster-operator-controller-manager-dockercfg-crgwv\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Drabbitmq-cluster-operator-controller-manager-dockercfg-crgwv&resourceVersion=109743\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:34.343019 5050 reflector.go:561] object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-8nf9c": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dcinder-operator-controller-manager-dockercfg-8nf9c&resourceVersion=109887": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:34.343072 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"cinder-operator-controller-manager-dockercfg-8nf9c\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dcinder-operator-controller-manager-dockercfg-8nf9c&resourceVersion=109887\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:34.363319 5050 reflector.go:561] object-"openshift-oauth-apiserver"/"etcd-client": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-oauth-apiserver/secrets?fieldSelector=metadata.name%3Detcd-client&resourceVersion=109671": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:34.363372 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-oauth-apiserver\"/\"etcd-client\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-oauth-apiserver/secrets?fieldSelector=metadata.name%3Detcd-client&resourceVersion=109671\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:34.382529 5050 reflector.go:561] object-"cert-manager"/"cert-manager-dockercfg-8z6ch": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/cert-manager/secrets?fieldSelector=metadata.name%3Dcert-manager-dockercfg-8z6ch&resourceVersion=109623": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:34.382591 5050 reflector.go:158] "Unhandled Error" 
err="object-\"cert-manager\"/\"cert-manager-dockercfg-8z6ch\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/cert-manager/secrets?fieldSelector=metadata.name%3Dcert-manager-dockercfg-8z6ch&resourceVersion=109623\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:34.403261 5050 status_manager.go:851] "Failed to get status for pod" podUID="a6345cf8-abc2-4c9a-bfe6-8b65187ada2d" pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-8jnzj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/swift-operator-controller-manager-9d58d64bc-8jnzj\": dial tcp 38.102.83.147:6443: connect: connection refused" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:34.422554 5050 reflector.go:561] object-"openstack"/"octavia-housekeeping-config-data": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Doctavia-housekeeping-config-data&resourceVersion=109671": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:34.422588 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"octavia-housekeeping-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Doctavia-housekeeping-config-data&resourceVersion=109671\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:34.443299 5050 reflector.go:561] object-"openstack"/"alertmanager-metric-storage-web-config": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dalertmanager-metric-storage-web-config&resourceVersion=109846": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:34.443340 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"alertmanager-metric-storage-web-config\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dalertmanager-metric-storage-web-config&resourceVersion=109846\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:34.444388 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-65swh" podUID="58a60a6f-fcf5-4aa0-b6e4-97d7df2b2bd2" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.84:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:34.463433 5050 reflector.go:561] pkg/kubelet/config/apiserver.go:66: failed to list *v1.Pod: Get "https://api-int.crc.testing:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dcrc&resourceVersion=109930": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:34.463496 5050 reflector.go:158] "Unhandled Error" err="pkg/kubelet/config/apiserver.go:66: Failed to watch *v1.Pod: failed to list *v1.Pod: Get 
\"https://api-int.crc.testing:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dcrc&resourceVersion=109930\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:34.483172 5050 reflector.go:561] object-"openshift-cluster-machine-approver"/"machine-approver-tls": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-machine-approver/secrets?fieldSelector=metadata.name%3Dmachine-approver-tls&resourceVersion=109887": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:34.483229 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-machine-approver\"/\"machine-approver-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-machine-approver/secrets?fieldSelector=metadata.name%3Dmachine-approver-tls&resourceVersion=109887\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:34.502536 5050 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:34.502578 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:34.502573 5050 reflector.go:561] object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ovn-kubernetes/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109602": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:34.502731 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ovn-kubernetes/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109602\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:34.523442 5050 reflector.go:561] object-"openstack"/"nova-cell0-conductor-config-data": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dnova-cell0-conductor-config-data&resourceVersion=109137": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:34.523478 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"nova-cell0-conductor-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dnova-cell0-conductor-config-data&resourceVersion=109137\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:34.542990 5050 reflector.go:561] object-"openshift-service-ca"/"signing-cabundle": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca/configmaps?fieldSelector=metadata.name%3Dsigning-cabundle&resourceVersion=109642": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:34.543048 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-service-ca\"/\"signing-cabundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca/configmaps?fieldSelector=metadata.name%3Dsigning-cabundle&resourceVersion=109642\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:34.563649 5050 reflector.go:561] object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-w847r": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dwatcher-operator-controller-manager-dockercfg-w847r&resourceVersion=109137": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:34.563730 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"watcher-operator-controller-manager-dockercfg-w847r\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dwatcher-operator-controller-manager-dockercfg-w847r&resourceVersion=109137\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:34.583479 5050 reflector.go:561] object-"openshift-network-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109602": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:34.583560 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109602\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:34.603245 5050 reflector.go:561] object-"openshift-marketplace"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109522": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:34.603291 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-marketplace\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: 
failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109522\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:34.623402 5050 reflector.go:561] object-"openstack"/"neutron-httpd-config": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dneutron-httpd-config&resourceVersion=109846": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:34.623437 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"neutron-httpd-config\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dneutron-httpd-config&resourceVersion=109846\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:34.642870 5050 reflector.go:561] object-"openshift-image-registry"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109067": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:34.642928 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-image-registry\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109067\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:34.662725 5050 reflector.go:561] object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/secrets?fieldSelector=metadata.name%3Dredhat-operators-dockercfg-ct8rh&resourceVersion=109846": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:34.662787 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-marketplace\"/\"redhat-operators-dockercfg-ct8rh\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/secrets?fieldSelector=metadata.name%3Dredhat-operators-dockercfg-ct8rh&resourceVersion=109846\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:34.682452 5050 reflector.go:561] object-"openshift-oauth-apiserver"/"encryption-config-1": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-oauth-apiserver/secrets?fieldSelector=metadata.name%3Dencryption-config-1&resourceVersion=109137": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:34.682506 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-oauth-apiserver\"/\"encryption-config-1\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-oauth-apiserver/secrets?fieldSelector=metadata.name%3Dencryption-config-1&resourceVersion=109137\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:34.702450 5050 reflector.go:561] object-"openshift-multus"/"multus-ac-dockercfg-9lkdf": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-multus/secrets?fieldSelector=metadata.name%3Dmultus-ac-dockercfg-9lkdf&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:34.702493 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"multus-ac-dockercfg-9lkdf\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-multus/secrets?fieldSelector=metadata.name%3Dmultus-ac-dockercfg-9lkdf&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:34.722900 5050 reflector.go:561] object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmachine-config-daemon-dockercfg-r5tcq&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:34.722991 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"machine-config-daemon-dockercfg-r5tcq\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmachine-config-daemon-dockercfg-r5tcq&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:34.724300 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-volume-volume1-0" podUID="b94bc94c-a636-4f0d-bd28-0f347e7b1143" containerName="cinder-volume" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:34.742904 5050 reflector.go:561] object-"openshift-ingress"/"router-stats-default": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress/secrets?fieldSelector=metadata.name%3Drouter-stats-default&resourceVersion=109887": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:34.742971 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-ingress\"/\"router-stats-default\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress/secrets?fieldSelector=metadata.name%3Drouter-stats-default&resourceVersion=109887\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:34.762598 5050 reflector.go:561] object-"openstack"/"glance-scripts": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dglance-scripts&resourceVersion=109743": dial tcp 38.102.83.147:6443: connect: 
connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:34.762661 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"glance-scripts\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dglance-scripts&resourceVersion=109743\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:34.782668 5050 reflector.go:561] object-"openstack"/"manila-api-config-data": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dmanila-api-config-data&resourceVersion=109803": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:34.782760 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"manila-api-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dmanila-api-config-data&resourceVersion=109803\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:34.791325 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/octavia-operator-controller-manager-998648c74-lvbdb" podUID="fa0985f7-7d87-41b3-9916-f22375a0489c" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.87:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:34.795510 5050 handlers.go:78] "Exec lifecycle hook for Container in Pod failed" err="command '/bin/bash /var/lib/operator-scripts/mysql_shutdown.sh' exited with 137: " execCommand=["/bin/bash","/var/lib/operator-scripts/mysql_shutdown.sh"] containerName="galera" pod="openstack/openstack-galera-0" message="" Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:34.795549 5050 kuberuntime_container.go:691] "PreStop hook failed" err="command '/bin/bash /var/lib/operator-scripts/mysql_shutdown.sh' exited with 137: " pod="openstack/openstack-galera-0" podUID="14ad1594-090d-4024-a999-9ffe77ce58d8" containerName="galera" containerID="cri-o://11ec72e9f06368997b90f40f70878174683db5a652ebb2eb2c432e34abde950e" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:34.802448 5050 reflector.go:561] object-"openstack-operators"/"test-operator-controller-manager-dockercfg-cclxg": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dtest-operator-controller-manager-dockercfg-cclxg&resourceVersion=109803": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:34.802504 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"test-operator-controller-manager-dockercfg-cclxg\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dtest-operator-controller-manager-dockercfg-cclxg&resourceVersion=109803\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:34.822576 5050 reflector.go:561] object-"openshift-machine-config-operator"/"mcc-proxy-tls": 
failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmcc-proxy-tls&resourceVersion=109846": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:34.822644 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"mcc-proxy-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmcc-proxy-tls&resourceVersion=109846\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:34.842491 5050 reflector.go:561] object-"openstack"/"placement-config-data": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dplacement-config-data&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:34.842548 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"placement-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dplacement-config-data&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:34.862550 5050 reflector.go:561] object-"openshift-route-controller-manager"/"client-ca": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/configmaps?fieldSelector=metadata.name%3Dclient-ca&resourceVersion=109762": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:34.862608 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-route-controller-manager\"/\"client-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/configmaps?fieldSelector=metadata.name%3Dclient-ca&resourceVersion=109762\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:34.882963 5050 reflector.go:561] object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-f52rf": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dtelemetry-operator-controller-manager-dockercfg-f52rf&resourceVersion=109803": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:34.883051 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"telemetry-operator-controller-manager-dockercfg-f52rf\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dtelemetry-operator-controller-manager-dockercfg-f52rf&resourceVersion=109803\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:34.902961 5050 reflector.go:561] 
object-"metallb-system"/"metallb-operator-controller-manager-service-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/secrets?fieldSelector=metadata.name%3Dmetallb-operator-controller-manager-service-cert&resourceVersion=109671": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:34.903045 5050 reflector.go:158] "Unhandled Error" err="object-\"metallb-system\"/\"metallb-operator-controller-manager-service-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/secrets?fieldSelector=metadata.name%3Dmetallb-operator-controller-manager-service-cert&resourceVersion=109671\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:34.912160 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-5g8lw" podUID="3477354d-838b-48cc-a6c3-612088d82640" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.80:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:34.923409 5050 reflector.go:561] object-"cert-manager"/"cert-manager-cainjector-dockercfg-sxrxp": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/cert-manager/secrets?fieldSelector=metadata.name%3Dcert-manager-cainjector-dockercfg-sxrxp&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:34.923489 5050 reflector.go:158] "Unhandled Error" err="object-\"cert-manager\"/\"cert-manager-cainjector-dockercfg-sxrxp\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/cert-manager/secrets?fieldSelector=metadata.name%3Dcert-manager-cainjector-dockercfg-sxrxp&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:34.942723 5050 reflector.go:561] object-"openshift-console-operator"/"trusted-ca": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/configmaps?fieldSelector=metadata.name%3Dtrusted-ca&resourceVersion=109602": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:34.942798 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-console-operator\"/\"trusted-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/configmaps?fieldSelector=metadata.name%3Dtrusted-ca&resourceVersion=109602\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:34.962541 5050 reflector.go:561] object-"openshift-marketplace"/"marketplace-trusted-ca": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/configmaps?fieldSelector=metadata.name%3Dmarketplace-trusted-ca&resourceVersion=109117": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:34.962637 5050 reflector.go:158] "Unhandled Error" 
err="object-\"openshift-marketplace\"/\"marketplace-trusted-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/configmaps?fieldSelector=metadata.name%3Dmarketplace-trusted-ca&resourceVersion=109117\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:34.982743 5050 reflector.go:561] object-"openshift-etcd-operator"/"etcd-operator-serving-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-etcd-operator/secrets?fieldSelector=metadata.name%3Detcd-operator-serving-cert&resourceVersion=109137": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:34.982816 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-etcd-operator\"/\"etcd-operator-serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-etcd-operator/secrets?fieldSelector=metadata.name%3Detcd-operator-serving-cert&resourceVersion=109137\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:34.989212 5050 patch_prober.go:28] interesting pod/dns-default-25p7l container/dns namespace/openshift-dns: Readiness probe status=failure output="Get \"http://10.217.0.27:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:34.989271 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-dns/dns-default-25p7l" podUID="66dfe3f5-9e7a-4e1c-b03a-b6f01dab6ef4" containerName="dns" probeResult="failure" output="Get \"http://10.217.0.27:8181/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:35.002633 5050 reflector.go:561] object-"openshift-dns-operator"/"metrics-tls": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-dns-operator/secrets?fieldSelector=metadata.name%3Dmetrics-tls&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:35.002705 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-dns-operator\"/\"metrics-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-dns-operator/secrets?fieldSelector=metadata.name%3Dmetrics-tls&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:35.022528 5050 reflector.go:561] object-"openshift-machine-config-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109888": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:35.022613 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109888\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:35.043625 5050 reflector.go:561] object-"openshift-service-ca-operator"/"service-ca-operator-config": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca-operator/configmaps?fieldSelector=metadata.name%3Dservice-ca-operator-config&resourceVersion=109807": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:35.043689 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-service-ca-operator\"/\"service-ca-operator-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca-operator/configmaps?fieldSelector=metadata.name%3Dservice-ca-operator-config&resourceVersion=109807\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:35.063474 5050 reflector.go:561] object-"openshift-multus"/"multus-daemon-config": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-multus/configmaps?fieldSelector=metadata.name%3Dmultus-daemon-config&resourceVersion=109067": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:35.063537 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"multus-daemon-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-multus/configmaps?fieldSelector=metadata.name%3Dmultus-daemon-config&resourceVersion=109067\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:35.071245 5050 patch_prober.go:28] interesting pod/perses-operator-5446b9c989-dqk4m container/perses-operator namespace/openshift-operators: Liveness probe status=failure output="Get \"http://10.217.1.129:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:35.071273 5050 patch_prober.go:28] interesting pod/perses-operator-5446b9c989-dqk4m container/perses-operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.1.129:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:35.071284 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operators/perses-operator-5446b9c989-dqk4m" podUID="051b7665-675e-4109-a8e8-5a416c8b49cc" containerName="perses-operator" probeResult="failure" output="Get \"http://10.217.1.129:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:35.071307 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/perses-operator-5446b9c989-dqk4m" podUID="051b7665-675e-4109-a8e8-5a416c8b49cc" containerName="perses-operator" probeResult="failure" output="Get \"http://10.217.1.129:8081/readyz\": context deadline exceeded (Client.Timeout exceeded 
while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:35.083520 5050 reflector.go:561] object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/secrets?fieldSelector=metadata.name%3Dkube-controller-manager-operator-serving-cert&resourceVersion=109846": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:35.083606 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/secrets?fieldSelector=metadata.name%3Dkube-controller-manager-operator-serving-cert&resourceVersion=109846\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:35.105573 5050 reflector.go:561] object-"openstack"/"dataplanenodeset-openstack-cell1": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Ddataplanenodeset-openstack-cell1&resourceVersion=109671": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:35.105673 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"dataplanenodeset-openstack-cell1\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Ddataplanenodeset-openstack-cell1&resourceVersion=109671\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:35.129647 5050 reflector.go:561] object-"hostpath-provisioner"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/hostpath-provisioner/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109559": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:35.130000 5050 reflector.go:158] "Unhandled Error" err="object-\"hostpath-provisioner\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/hostpath-provisioner/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109559\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:35.145773 5050 reflector.go:561] object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/secrets?fieldSelector=metadata.name%3Dv4-0-config-user-idp-0-file-data&resourceVersion=109887": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:35.145865 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"v4-0-config-user-idp-0-file-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/secrets?fieldSelector=metadata.name%3Dv4-0-config-user-idp-0-file-data&resourceVersion=109887\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:35.163282 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-ttg8w" podUID="b885fa10-3ed3-41fd-94ae-2b7442519450" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.89:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:35.163438 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/test-operator-controller-manager-5854674fcc-qqb7f" podUID="12778398-2baa-44cb-9fd1-f2034870e9fc" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.91:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:35.163492 5050 patch_prober.go:28] interesting pod/controller-manager-69cffd76bd-8bkp6 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.65:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:35.163517 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-69cffd76bd-8bkp6" podUID="6a956942-e4db-4c66-a7c2-1c370c1569f4" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.65:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:35.163726 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-5d666c4679-krnwf" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:35.164086 5050 request.go:700] Waited for 4.362731876s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dplacement-placement-dockercfg-4zzmp&resourceVersion=109512 Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:35.165220 5050 reflector.go:561] object-"openstack"/"placement-placement-dockercfg-4zzmp": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dplacement-placement-dockercfg-4zzmp&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:35.165300 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"placement-placement-dockercfg-4zzmp\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dplacement-placement-dockercfg-4zzmp&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:35.169927 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-25p7l" Dec 11 15:54:41 crc 
kubenswrapper[5050]: W1211 15:54:35.188387 5050 reflector.go:561] object-"openstack"/"rabbitmq-cell1-erlang-cookie": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Drabbitmq-cell1-erlang-cookie&resourceVersion=109586": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:35.188535 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"rabbitmq-cell1-erlang-cookie\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Drabbitmq-cell1-erlang-cookie&resourceVersion=109586\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:35.209339 5050 reflector.go:561] object-"openshift-console"/"default-dockercfg-chnjx": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/secrets?fieldSelector=metadata.name%3Ddefault-dockercfg-chnjx&resourceVersion=109743": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:35.209415 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-console\"/\"default-dockercfg-chnjx\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/secrets?fieldSelector=metadata.name%3Ddefault-dockercfg-chnjx&resourceVersion=109743\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:35.212604 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/watcher-operator-controller-manager-75944c9b7-grhdp" podUID="712f888b-7c45-4c1f-95d8-ccc464b7c15f" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.95:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:35.213108 5050 patch_prober.go:28] interesting pod/apiserver-76f77b778f-cd66n container/openshift-apiserver namespace/openshift-apiserver: Liveness probe status=failure output="Get \"https://10.217.0.9:8443/livez?exclude=etcd\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:35.213174 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-apiserver/apiserver-76f77b778f-cd66n" podUID="ea884e88-c0df-4212-976a-0d7ce1731fdc" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.9:8443/livez?exclude=etcd\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:35.213235 5050 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-cd66n" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:35.214230 5050 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="openshift-apiserver" containerStatusID={"Type":"cri-o","ID":"870ab51995eb595a4e2d4b1790de952cb62e6bd39b249e1ed5b3a9e636607508"} pod="openshift-apiserver/apiserver-76f77b778f-cd66n" containerMessage="Container openshift-apiserver failed liveness probe, will be restarted" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:35.214299 5050 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-apiserver/apiserver-76f77b778f-cd66n" podUID="ea884e88-c0df-4212-976a-0d7ce1731fdc" containerName="openshift-apiserver" containerID="cri-o://870ab51995eb595a4e2d4b1790de952cb62e6bd39b249e1ed5b3a9e636607508" gracePeriod=120 Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:35.214462 5050 patch_prober.go:28] interesting pod/apiserver-76f77b778f-cd66n container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/readyz?exclude=etcd&exclude=etcd-readiness\": context deadline exceeded" start-of-body= Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:35.214489 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-76f77b778f-cd66n" podUID="ea884e88-c0df-4212-976a-0d7ce1731fdc" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.9:8443/readyz?exclude=etcd&exclude=etcd-readiness\": context deadline exceeded" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:35.214899 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-cd66n" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:35.219191 5050 patch_prober.go:28] interesting pod/apiserver-7bbb656c7d-cnp7n container/oauth-apiserver namespace/openshift-oauth-apiserver: Liveness probe status=failure output="Get \"https://10.217.0.10:8443/livez?exclude=etcd\": context deadline exceeded" start-of-body= Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:35.219251 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cnp7n" podUID="1d89350d-55e9-4ef6-8182-287894b6c14b" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.217.0.10:8443/livez?exclude=etcd\": context deadline exceeded" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:35.223182 5050 reflector.go:561] object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-gkldc": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dmanila-operator-controller-manager-dockercfg-gkldc&resourceVersion=109743": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:35.223269 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"manila-operator-controller-manager-dockercfg-gkldc\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dmanila-operator-controller-manager-dockercfg-gkldc&resourceVersion=109743\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:35.244621 5050 reflector.go:561] object-"openstack"/"octavia-housekeeping-scripts": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Doctavia-housekeeping-scripts&resourceVersion=109671": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:35.244711 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"octavia-housekeeping-scripts\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Doctavia-housekeeping-scripts&resourceVersion=109671\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:35.258398 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-cd66n" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:35.263620 5050 reflector.go:561] object-"openshift-controller-manager"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109602": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:35.263694 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109602\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:35.284936 5050 reflector.go:561] object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/secrets?fieldSelector=metadata.name%3Dcluster-image-registry-operator-dockercfg-m4qtx&resourceVersion=109397": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:35.285046 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-image-registry\"/\"cluster-image-registry-operator-dockercfg-m4qtx\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/secrets?fieldSelector=metadata.name%3Dcluster-image-registry-operator-dockercfg-m4qtx&resourceVersion=109397\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:35.307976 5050 reflector.go:561] object-"openshift-console"/"console-config": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/configmaps?fieldSelector=metadata.name%3Dconsole-config&resourceVersion=109602": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:35.308054 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-console\"/\"console-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/configmaps?fieldSelector=metadata.name%3Dconsole-config&resourceVersion=109602\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:35.310187 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cnp7n" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:35.324698 5050 reflector.go:561] object-"openshift-machine-api"/"machine-api-operator-images": failed to list *v1.ConfigMap: Get 
"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-api/configmaps?fieldSelector=metadata.name%3Dmachine-api-operator-images&resourceVersion=109807": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:35.324767 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"machine-api-operator-images\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-api/configmaps?fieldSelector=metadata.name%3Dmachine-api-operator-images&resourceVersion=109807\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:35.342968 5050 reflector.go:561] object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication-operator/secrets?fieldSelector=metadata.name%3Dauthentication-operator-dockercfg-mz9bj&resourceVersion=109743": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:35.343041 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication-operator\"/\"authentication-operator-dockercfg-mz9bj\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication-operator/secrets?fieldSelector=metadata.name%3Dauthentication-operator-dockercfg-mz9bj&resourceVersion=109743\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:35.362878 5050 reflector.go:561] object-"openstack"/"manila-manila-dockercfg-d7578": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dmanila-manila-dockercfg-d7578&resourceVersion=109803": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:35.362977 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"manila-manila-dockercfg-d7578\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dmanila-manila-dockercfg-d7578&resourceVersion=109803\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:35.383248 5050 reflector.go:561] object-"openshift-ingress-canary"/"canary-serving-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-canary/secrets?fieldSelector=metadata.name%3Dcanary-serving-cert&resourceVersion=109803": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:35.383312 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-ingress-canary\"/\"canary-serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-canary/secrets?fieldSelector=metadata.name%3Dcanary-serving-cert&resourceVersion=109803\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:35.418961 5050 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack/horizon-5fb79d99b5-m4xgd" podUID="2086bc41-00a2-4c97-a491-08511f3ed6e5" containerName="horizon" probeResult="failure" output="Get \"http://10.217.1.117:8080/dashboard/auth/login/?next=/dashboard/\": EOF" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:35.433224 5050 reflector.go:561] object-"openshift-dns-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-dns-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109888": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:35.433294 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-dns-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-dns-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109888\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:35.436036 5050 reflector.go:561] object-"openstack"/"openstack-cell1": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dopenstack-cell1&resourceVersion=109067": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:35.436102 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"openstack-cell1\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dopenstack-cell1&resourceVersion=109067\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:35.445930 5050 reflector.go:561] object-"openstack"/"barbican-worker-config-data": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dbarbican-worker-config-data&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:35.446037 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"barbican-worker-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dbarbican-worker-config-data&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:35.483225 5050 reflector.go:561] object-"metallb-system"/"frr-k8s-daemon-dockercfg-wks79": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/secrets?fieldSelector=metadata.name%3Dfrr-k8s-daemon-dockercfg-wks79&resourceVersion=109586": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:35.483310 5050 reflector.go:158] "Unhandled Error" err="object-\"metallb-system\"/\"frr-k8s-daemon-dockercfg-wks79\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/secrets?fieldSelector=metadata.name%3Dfrr-k8s-daemon-dockercfg-wks79&resourceVersion=109586\": dial tcp 
38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:35.486736 5050 reflector.go:561] object-"openstack"/"openstack-cell1-scripts": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dopenstack-cell1-scripts&resourceVersion=109762": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:35.486819 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"openstack-cell1-scripts\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dopenstack-cell1-scripts&resourceVersion=109762\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:35.507367 5050 reflector.go:561] object-"openshift-ovn-kubernetes"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ovn-kubernetes/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109888": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:35.507431 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ovn-kubernetes/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109888\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:35.522886 5050 reflector.go:561] object-"openstack"/"heat-api-config-data": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dheat-api-config-data&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:35.523268 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"heat-api-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dheat-api-config-data&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:35.546075 5050 reflector.go:561] object-"openshift-kube-storage-version-migrator-operator"/"config": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-storage-version-migrator-operator/configmaps?fieldSelector=metadata.name%3Dconfig&resourceVersion=109067": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:35.546157 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-kube-storage-version-migrator-operator\"/\"config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-storage-version-migrator-operator/configmaps?fieldSelector=metadata.name%3Dconfig&resourceVersion=109067\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 
15:54:35.562938 5050 reflector.go:561] object-"metallb-system"/"metallb-webhook-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/secrets?fieldSelector=metadata.name%3Dmetallb-webhook-cert&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:35.563004 5050 reflector.go:158] "Unhandled Error" err="object-\"metallb-system\"/\"metallb-webhook-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/secrets?fieldSelector=metadata.name%3Dmetallb-webhook-cert&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:35.563230 5050 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:6443/readyz\": dial tcp 192.168.126.11:6443: connect: connection refused" start-of-body= Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:35.563264 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/readyz\": dial tcp 192.168.126.11:6443: connect: connection refused" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:35.588825 5050 reflector.go:561] object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/secrets?fieldSelector=metadata.name%3Dcertified-operators-dockercfg-4rs5g&resourceVersion=109743": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:35.589060 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-4rs5g\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/secrets?fieldSelector=metadata.name%3Dcertified-operators-dockercfg-4rs5g&resourceVersion=109743\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:35.603350 5050 reflector.go:561] object-"openshift-image-registry"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109158": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:35.603540 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-image-registry\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109158\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:35.623147 5050 reflector.go:561] object-"openshift-service-ca"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get 
"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109807": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:35.623313 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-service-ca\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109807\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:35.643225 5050 reflector.go:561] object-"openstack"/"prometheus-metric-storage-web-config": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dprometheus-metric-storage-web-config&resourceVersion=109846": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:35.643335 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"prometheus-metric-storage-web-config\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dprometheus-metric-storage-web-config&resourceVersion=109846\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:35.663229 5050 reflector.go:561] object-"openstack"/"ovndbcluster-nb-config": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dovndbcluster-nb-config&resourceVersion=109807": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:35.663317 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"ovndbcluster-nb-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dovndbcluster-nb-config&resourceVersion=109807\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:35.682811 5050 reflector.go:561] object-"openshift-cluster-samples-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-samples-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109888": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:35.682873 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-samples-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-samples-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109888\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:35.706230 5050 reflector.go:561] object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw": failed to list *v1.Secret: Get 
"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/secrets?fieldSelector=metadata.name%3Dkube-controller-manager-operator-dockercfg-gkqpw&resourceVersion=109322": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:35.706312 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-dockercfg-gkqpw\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/secrets?fieldSelector=metadata.name%3Dkube-controller-manager-operator-dockercfg-gkqpw&resourceVersion=109322\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:35.733386 5050 reflector.go:561] object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-machine-approver/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109888": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:35.733465 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-machine-approver\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-machine-approver/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109888\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:35.744347 5050 reflector.go:561] object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-storage-version-migrator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109762": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:35.744414 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-kube-storage-version-migrator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-storage-version-migrator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109762\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:35.753908 5050 trace.go:236] Trace[782884311]: "Calculate volume metrics of persistence for pod openstack/rabbitmq-server-0" (11-Dec-2025 15:54:17.712) (total time: 18041ms): Dec 11 15:54:41 crc kubenswrapper[5050]: Trace[782884311]: [18.041686049s] [18.041686049s] END Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:35.775851 5050 reflector.go:561] object-"openshift-marketplace"/"marketplace-operator-metrics": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/secrets?fieldSelector=metadata.name%3Dmarketplace-operator-metrics&resourceVersion=109137": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:35.775926 5050 reflector.go:158] "Unhandled 
Error" err="object-\"openshift-marketplace\"/\"marketplace-operator-metrics\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/secrets?fieldSelector=metadata.name%3Dmarketplace-operator-metrics&resourceVersion=109137\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:35.786931 5050 reflector.go:561] object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager-operator/secrets?fieldSelector=metadata.name%3Dopenshift-controller-manager-operator-dockercfg-vw8fw&resourceVersion=109743": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:35.787036 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-dockercfg-vw8fw\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager-operator/secrets?fieldSelector=metadata.name%3Dopenshift-controller-manager-operator-dockercfg-vw8fw&resourceVersion=109743\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:35.811797 5050 reflector.go:561] object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-apiserver-operator-config&resourceVersion=109559": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:35.811867 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-apiserver-operator-config&resourceVersion=109559\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:35.836349 5050 reflector.go:561] object-"openshift-operators"/"observability-operator-sa-dockercfg-f5g9s": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operators/secrets?fieldSelector=metadata.name%3Dobservability-operator-sa-dockercfg-f5g9s&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:35.836419 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-operators\"/\"observability-operator-sa-dockercfg-f5g9s\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operators/secrets?fieldSelector=metadata.name%3Dobservability-operator-sa-dockercfg-f5g9s&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:35.848420 5050 reflector.go:561] object-"openstack"/"nova-scheduler-config-data": failed to list *v1.Secret: Get 
"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dnova-scheduler-config-data&resourceVersion=109887": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:35.848494 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"nova-scheduler-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dnova-scheduler-config-data&resourceVersion=109887\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:35.863582 5050 reflector.go:561] object-"openstack"/"galera-openstack-cell1-dockercfg-vwxp8": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dgalera-openstack-cell1-dockercfg-vwxp8&resourceVersion=109743": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:35.863643 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"galera-openstack-cell1-dockercfg-vwxp8\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dgalera-openstack-cell1-dockercfg-vwxp8&resourceVersion=109743\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:35.883115 5050 reflector.go:561] object-"openstack"/"openstack-config": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dopenstack-config&resourceVersion=109685": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:35.883157 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"openstack-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dopenstack-config&resourceVersion=109685\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:35.902548 5050 reflector.go:561] object-"openstack"/"ovsdbserver-sb": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dovsdbserver-sb&resourceVersion=109685": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:35.902628 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"ovsdbserver-sb\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dovsdbserver-sb&resourceVersion=109685\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:35.923029 5050 reflector.go:561] object-"openshift-service-ca-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109602": dial tcp 38.102.83.147:6443: connect: connection refused 
Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:35.923100 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-service-ca-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109602\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:35.943522 5050 reflector.go:561] object-"openshift-ingress-canary"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-canary/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109888": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:35.943598 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-ingress-canary\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-canary/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109888\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:35.962656 5050 reflector.go:561] object-"openshift-machine-config-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109602": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:35.962718 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109602\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:35.982712 5050 reflector.go:561] object-"cert-manager-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/cert-manager-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109807": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:35.982778 5050 reflector.go:158] "Unhandled Error" err="object-\"cert-manager-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/cert-manager-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109807\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:36.003326 5050 reflector.go:561] object-"openstack"/"ovndbcluster-sb-config": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dovndbcluster-sb-config&resourceVersion=109158": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc 
kubenswrapper[5050]: E1211 15:54:36.003384 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"ovndbcluster-sb-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dovndbcluster-sb-config&resourceVersion=109158\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:36.023460 5050 reflector.go:561] object-"openstack"/"glance-glance-dockercfg-ndgnr": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dglance-glance-dockercfg-ndgnr&resourceVersion=109743": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:36.023672 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"glance-glance-dockercfg-ndgnr\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dglance-glance-dockercfg-ndgnr&resourceVersion=109743\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:36.043385 5050 reflector.go:561] object-"openstack"/"dns": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Ddns&resourceVersion=109762": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:36.043449 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"dns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Ddns&resourceVersion=109762\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:36.063253 5050 reflector.go:561] object-"openstack"/"cinder-api-config-data": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcinder-api-config-data&resourceVersion=109586": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:36.063309 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"cinder-api-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcinder-api-config-data&resourceVersion=109586\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:36.082937 5050 reflector.go:561] object-"openshift-dns"/"dns-default-metrics-tls": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-dns/secrets?fieldSelector=metadata.name%3Ddns-default-metrics-tls&resourceVersion=109586": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:36.082989 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-dns\"/\"dns-default-metrics-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-dns/secrets?fieldSelector=metadata.name%3Ddns-default-metrics-tls&resourceVersion=109586\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:36.103096 5050 reflector.go:561] object-"openshift-ovn-kubernetes"/"ovnkube-script-lib": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ovn-kubernetes/configmaps?fieldSelector=metadata.name%3Dovnkube-script-lib&resourceVersion=109685": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:36.103166 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"ovnkube-script-lib\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ovn-kubernetes/configmaps?fieldSelector=metadata.name%3Dovnkube-script-lib&resourceVersion=109685\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:36.124569 5050 reflector.go:561] object-"openstack"/"octavia-api-scripts": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Doctavia-api-scripts&resourceVersion=109803": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:36.124629 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"octavia-api-scripts\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Doctavia-api-scripts&resourceVersion=109803\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:36.142545 5050 reflector.go:561] object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmachine-config-operator-dockercfg-98p87&resourceVersion=109846": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:36.142640 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"machine-config-operator-dockercfg-98p87\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmachine-config-operator-dockercfg-98p87&resourceVersion=109846\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:36.162833 5050 reflector.go:561] object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/secrets?fieldSelector=metadata.name%3Dolm-operator-serviceaccount-dockercfg-rq7zk&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:36.162894 5050 reflector.go:158] "Unhandled Error" 
err="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serviceaccount-dockercfg-rq7zk\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/secrets?fieldSelector=metadata.name%3Dolm-operator-serviceaccount-dockercfg-rq7zk&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:36.182137 5050 request.go:700] Waited for 4.391019494s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Daodh-scripts&resourceVersion=109512 Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:36.182600 5050 reflector.go:561] object-"openstack"/"aodh-scripts": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Daodh-scripts&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:36.182650 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"aodh-scripts\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Daodh-scripts&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:36.202791 5050 reflector.go:561] object-"openstack"/"nova-metadata-config-data": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dnova-metadata-config-data&resourceVersion=109137": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:36.202882 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"nova-metadata-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dnova-metadata-config-data&resourceVersion=109137\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:36.222569 5050 reflector.go:561] object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-25r77": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dinfra-operator-controller-manager-dockercfg-25r77&resourceVersion=109846": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:36.222948 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"infra-operator-controller-manager-dockercfg-25r77\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dinfra-operator-controller-manager-dockercfg-25r77&resourceVersion=109846\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:36.242519 5050 reflector.go:561] object-"metallb-system"/"metallb-operator-webhook-server-service-cert": failed to list *v1.Secret: Get 
"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/secrets?fieldSelector=metadata.name%3Dmetallb-operator-webhook-server-service-cert&resourceVersion=109215": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:36.242598 5050 reflector.go:158] "Unhandled Error" err="object-\"metallb-system\"/\"metallb-operator-webhook-server-service-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/secrets?fieldSelector=metadata.name%3Dmetallb-operator-webhook-server-service-cert&resourceVersion=109215\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:36.263513 5050 reflector.go:561] object-"metallb-system"/"speaker-dockercfg-p2rzt": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/secrets?fieldSelector=metadata.name%3Dspeaker-dockercfg-p2rzt&resourceVersion=109887": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:36.263578 5050 reflector.go:158] "Unhandled Error" err="object-\"metallb-system\"/\"speaker-dockercfg-p2rzt\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/secrets?fieldSelector=metadata.name%3Dspeaker-dockercfg-p2rzt&resourceVersion=109887\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:36.283170 5050 reflector.go:561] object-"openstack"/"prometheus-metric-storage": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dprometheus-metric-storage&resourceVersion=109846": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:36.283237 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"prometheus-metric-storage\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dprometheus-metric-storage&resourceVersion=109846\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:36.302665 5050 reflector.go:561] object-"openstack"/"nova-cell1-novncproxy-config-data": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dnova-cell1-novncproxy-config-data&resourceVersion=109887": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:36.302740 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"nova-cell1-novncproxy-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dnova-cell1-novncproxy-config-data&resourceVersion=109887\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:36.323186 5050 reflector.go:561] object-"openstack-operators"/"webhook-server-cert": failed to list *v1.Secret: Get 
"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dwebhook-server-cert&resourceVersion=109137": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:36.323262 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"webhook-server-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dwebhook-server-cert&resourceVersion=109137\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:36.343048 5050 reflector.go:561] object-"openshift-image-registry"/"image-registry-operator-tls": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/secrets?fieldSelector=metadata.name%3Dimage-registry-operator-tls&resourceVersion=109803": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:36.343123 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-image-registry\"/\"image-registry-operator-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/secrets?fieldSelector=metadata.name%3Dimage-registry-operator-tls&resourceVersion=109803\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:36.363430 5050 reflector.go:561] object-"metallb-system"/"frr-k8s-certs-secret": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/secrets?fieldSelector=metadata.name%3Dfrr-k8s-certs-secret&resourceVersion=109846": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:36.363796 5050 reflector.go:158] "Unhandled Error" err="object-\"metallb-system\"/\"frr-k8s-certs-secret\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/secrets?fieldSelector=metadata.name%3Dfrr-k8s-certs-secret&resourceVersion=109846\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:36.382707 5050 reflector.go:561] object-"openstack"/"octavia-rsyslog-config-data": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Doctavia-rsyslog-config-data&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:36.382778 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"octavia-rsyslog-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Doctavia-rsyslog-config-data&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:36.403389 5050 reflector.go:561] object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file": failed to list *v1.Secret: Get 
"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dprometheus-metric-storage-thanos-prometheus-http-client-file&resourceVersion=109846": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:36.403471 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"prometheus-metric-storage-thanos-prometheus-http-client-file\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dprometheus-metric-storage-thanos-prometheus-http-client-file&resourceVersion=109846\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:36.422680 5050 reflector.go:561] object-"metallb-system"/"controller-certs-secret": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/secrets?fieldSelector=metadata.name%3Dcontroller-certs-secret&resourceVersion=109846": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:36.422747 5050 reflector.go:158] "Unhandled Error" err="object-\"metallb-system\"/\"controller-certs-secret\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/secrets?fieldSelector=metadata.name%3Dcontroller-certs-secret&resourceVersion=109846\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:36.443449 5050 reflector.go:561] object-"openshift-apiserver"/"encryption-config-1": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/secrets?fieldSelector=metadata.name%3Dencryption-config-1&resourceVersion=109586": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:36.443521 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"encryption-config-1\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/secrets?fieldSelector=metadata.name%3Dencryption-config-1&resourceVersion=109586\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:36.462734 5050 reflector.go:561] object-"openshift-apiserver"/"audit-1": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/configmaps?fieldSelector=metadata.name%3Daudit-1&resourceVersion=109522": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:36.462807 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"audit-1\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/configmaps?fieldSelector=metadata.name%3Daudit-1&resourceVersion=109522\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:36.482911 5050 reflector.go:561] object-"openstack"/"memcached-memcached-dockercfg-kl4q7": failed to list *v1.Secret: Get 
"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dmemcached-memcached-dockercfg-kl4q7&resourceVersion=109743": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:36.482984 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"memcached-memcached-dockercfg-kl4q7\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dmemcached-memcached-dockercfg-kl4q7&resourceVersion=109743\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:36.502772 5050 reflector.go:561] object-"openshift-ingress-operator"/"metrics-tls": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/secrets?fieldSelector=metadata.name%3Dmetrics-tls&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:36.502838 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-ingress-operator\"/\"metrics-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/secrets?fieldSelector=metadata.name%3Dmetrics-tls&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:36.523265 5050 reflector.go:561] object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver-operator/secrets?fieldSelector=metadata.name%3Dopenshift-apiserver-operator-dockercfg-xtcjv&resourceVersion=109846": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:36.523348 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-dockercfg-xtcjv\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver-operator/secrets?fieldSelector=metadata.name%3Dopenshift-apiserver-operator-dockercfg-xtcjv&resourceVersion=109846\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:36.543082 5050 reflector.go:561] object-"openstack"/"neutron-config": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dneutron-config&resourceVersion=109846": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:36.543156 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"neutron-config\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dneutron-config&resourceVersion=109846\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:36.562720 5050 reflector.go:561] object-"openshift-operators"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get 
"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operators/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109522": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:36.562780 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-operators\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operators/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109522\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:36.583309 5050 reflector.go:561] object-"openshift-machine-config-operator"/"node-bootstrapper-token": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dnode-bootstrapper-token&resourceVersion=109887": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:36.583377 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"node-bootstrapper-token\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dnode-bootstrapper-token&resourceVersion=109887\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:36.602602 5050 reflector.go:561] object-"openstack"/"ovncontroller-metrics-config": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dovncontroller-metrics-config&resourceVersion=109602": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:36.602679 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"ovncontroller-metrics-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dovncontroller-metrics-config&resourceVersion=109602\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:36.622927 5050 reflector.go:561] object-"openshift-console"/"trusted-ca-bundle": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/configmaps?fieldSelector=metadata.name%3Dtrusted-ca-bundle&resourceVersion=109602": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:36.622997 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-console\"/\"trusted-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/configmaps?fieldSelector=metadata.name%3Dtrusted-ca-bundle&resourceVersion=109602\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:36.642529 5050 reflector.go:561] object-"openshift-apiserver-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get 
"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109685": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:36.642601 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109685\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:36.663041 5050 reflector.go:561] object-"openshift-console"/"service-ca": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/configmaps?fieldSelector=metadata.name%3Dservice-ca&resourceVersion=109888": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:36.663112 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-console\"/\"service-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/configmaps?fieldSelector=metadata.name%3Dservice-ca&resourceVersion=109888\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:36.669167 5050 patch_prober.go:28] interesting pod/console-operator-58897d9998-mv9g5 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.14:8443/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:36.669218 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-mv9g5" podUID="112006c5-e3a9-4fbb-813c-f195e98277bc" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.14:8443/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:36.683321 5050 reflector.go:561] object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/configmaps?fieldSelector=metadata.name%3Dkube-apiserver-operator-config&resourceVersion=109158": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:36.683396 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/configmaps?fieldSelector=metadata.name%3Dkube-apiserver-operator-config&resourceVersion=109158\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:36.703291 5050 reflector.go:561] object-"openstack"/"rabbitmq-plugins-conf": failed to list *v1.ConfigMap: Get 
"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Drabbitmq-plugins-conf&resourceVersion=109602": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:36.703389 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"rabbitmq-plugins-conf\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Drabbitmq-plugins-conf&resourceVersion=109602\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:36.722905 5050 reflector.go:561] object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/secrets?fieldSelector=metadata.name%3Dkube-apiserver-operator-dockercfg-x57mr&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:36.722999 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-dockercfg-x57mr\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/secrets?fieldSelector=metadata.name%3Dkube-apiserver-operator-dockercfg-x57mr&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:36.742821 5050 reflector.go:561] object-"openshift-oauth-apiserver"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-oauth-apiserver/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109685": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:36.742898 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-oauth-apiserver\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-oauth-apiserver/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109685\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:36.763255 5050 reflector.go:561] object-"openshift-dns"/"node-resolver-dockercfg-kz9s7": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-dns/secrets?fieldSelector=metadata.name%3Dnode-resolver-dockercfg-kz9s7&resourceVersion=109803": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:36.763331 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-dns\"/\"node-resolver-dockercfg-kz9s7\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-dns/secrets?fieldSelector=metadata.name%3Dnode-resolver-dockercfg-kz9s7&resourceVersion=109803\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:36.783275 5050 reflector.go:561] object-"openshift-authentication"/"v4-0-config-user-template-provider-selection": failed to 
list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/secrets?fieldSelector=metadata.name%3Dv4-0-config-user-template-provider-selection&resourceVersion=109586": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:36.783345 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"v4-0-config-user-template-provider-selection\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/secrets?fieldSelector=metadata.name%3Dv4-0-config-user-template-provider-selection&resourceVersion=109586\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:36.803277 5050 reflector.go:561] object-"openstack"/"ovncontroller-scripts": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dovncontroller-scripts&resourceVersion=109559": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:36.803353 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"ovncontroller-scripts\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dovncontroller-scripts&resourceVersion=109559\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:36.823243 5050 reflector.go:561] object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-storage-version-migrator-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109158": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:36.823329 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-kube-storage-version-migrator-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-storage-version-migrator-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109158\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:36.843475 5050 reflector.go:561] object-"openshift-image-registry"/"trusted-ca": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/configmaps?fieldSelector=metadata.name%3Dtrusted-ca&resourceVersion=109602": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:36.843523 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-image-registry\"/\"trusted-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/configmaps?fieldSelector=metadata.name%3Dtrusted-ca&resourceVersion=109602\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:36.863390 5050 reflector.go:561] 
object-"openstack"/"cert-galera-openstack-svc": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-galera-openstack-svc&resourceVersion=109087": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:36.863427 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"cert-galera-openstack-svc\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-galera-openstack-svc&resourceVersion=109087\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:36.883088 5050 reflector.go:561] object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/secrets?fieldSelector=metadata.name%3Dredhat-marketplace-dockercfg-x2ctb&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:36.883158 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-marketplace\"/\"redhat-marketplace-dockercfg-x2ctb\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/secrets?fieldSelector=metadata.name%3Dredhat-marketplace-dockercfg-x2ctb&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:36.902958 5050 reflector.go:561] object-"openshift-authentication-operator"/"authentication-operator-config": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication-operator/configmaps?fieldSelector=metadata.name%3Dauthentication-operator-config&resourceVersion=109602": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:36.903056 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication-operator\"/\"authentication-operator-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication-operator/configmaps?fieldSelector=metadata.name%3Dauthentication-operator-config&resourceVersion=109602\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:36.923070 5050 reflector.go:561] object-"openshift-machine-config-operator"/"machine-config-operator-images": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/configmaps?fieldSelector=metadata.name%3Dmachine-config-operator-images&resourceVersion=109522": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:36.923146 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"machine-config-operator-images\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/configmaps?fieldSelector=metadata.name%3Dmachine-config-operator-images&resourceVersion=109522\": dial tcp 38.102.83.147:6443: connect: connection refused" 
logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:36.927877 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/speaker-w4tzc" podUID="54e98831-cd88-4dee-90db-e8fbb006e9c3" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": dial tcp [::1]:29150: connect: connection refused" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:36.942686 5050 reflector.go:561] object-"openshift-console"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109522": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:36.942765 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-console\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109522\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:36.962865 5050 reflector.go:561] object-"openshift-ingress"/"router-certs-default": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress/secrets?fieldSelector=metadata.name%3Drouter-certs-default&resourceVersion=109087": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:36.962939 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-ingress\"/\"router-certs-default\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress/secrets?fieldSelector=metadata.name%3Drouter-certs-default&resourceVersion=109087\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:36.983064 5050 reflector.go:561] object-"openshift-ingress-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109807": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:36.983158 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-ingress-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109807\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:36.987880 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9jns7" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:37.003211 5050 reflector.go:561] object-"openstack"/"aodh-config-data": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Daodh-config-data&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection 
refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:37.003296 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"aodh-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Daodh-config-data&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:37.023166 5050 reflector.go:561] object-"openstack"/"openstack-config-data": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dopenstack-config-data&resourceVersion=109762": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:37.023246 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"openstack-config-data\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dopenstack-config-data&resourceVersion=109762\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:37.042531 5050 reflector.go:561] object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109762": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:37.042601 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-kube-scheduler-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109762\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:37.063149 5050 reflector.go:561] object-"openstack"/"metric-storage-alertmanager-dockercfg-c2fxf": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dmetric-storage-alertmanager-dockercfg-c2fxf&resourceVersion=109846": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:37.063224 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"metric-storage-alertmanager-dockercfg-c2fxf\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dmetric-storage-alertmanager-dockercfg-c2fxf&resourceVersion=109846\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:37.083028 5050 reflector.go:561] object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-55k4b": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dovnnorthd-ovnnorthd-dockercfg-55k4b&resourceVersion=109623": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:37.083114 5050 reflector.go:158] 
"Unhandled Error" err="object-\"openstack\"/\"ovnnorthd-ovnnorthd-dockercfg-55k4b\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dovnnorthd-ovnnorthd-dockercfg-55k4b&resourceVersion=109623\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:37.102751 5050 reflector.go:561] object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-d5dn6": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dhorizon-operator-controller-manager-dockercfg-d5dn6&resourceVersion=109743": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:37.102827 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"horizon-operator-controller-manager-dockercfg-d5dn6\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dhorizon-operator-controller-manager-dockercfg-d5dn6&resourceVersion=109743\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:37.122713 5050 reflector.go:561] object-"openstack"/"openstack-cell1-config-data": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dopenstack-cell1-config-data&resourceVersion=109762": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:37.122774 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"openstack-cell1-config-data\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dopenstack-cell1-config-data&resourceVersion=109762\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:37.142821 5050 reflector.go:561] object-"cert-manager"/"cert-manager-webhook-dockercfg-kbhwz": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/cert-manager/secrets?fieldSelector=metadata.name%3Dcert-manager-webhook-dockercfg-kbhwz&resourceVersion=109743": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:37.142898 5050 reflector.go:158] "Unhandled Error" err="object-\"cert-manager\"/\"cert-manager-webhook-dockercfg-kbhwz\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/cert-manager/secrets?fieldSelector=metadata.name%3Dcert-manager-webhook-dockercfg-kbhwz&resourceVersion=109743\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:37.163155 5050 reflector.go:561] object-"cert-manager"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/cert-manager/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109888": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:37.163238 5050 reflector.go:158] 
"Unhandled Error" err="object-\"cert-manager\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/cert-manager/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109888\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:37.182206 5050 request.go:700] Waited for 3.495947834s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-api/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109158 Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:37.182678 5050 reflector.go:561] object-"openshift-machine-api"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-api/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109158": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:37.182746 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-api/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109158\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:37.203627 5050 reflector.go:561] object-"openstack"/"octavia-certs-secret": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Doctavia-certs-secret&resourceVersion=109586": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:37.203701 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"octavia-certs-secret\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Doctavia-certs-secret&resourceVersion=109586\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:37.222838 5050 reflector.go:561] object-"openshift-authentication-operator"/"service-ca-bundle": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication-operator/configmaps?fieldSelector=metadata.name%3Dservice-ca-bundle&resourceVersion=109762": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:37.222896 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication-operator\"/\"service-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication-operator/configmaps?fieldSelector=metadata.name%3Dservice-ca-bundle&resourceVersion=109762\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:37.242712 5050 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openstack/prometheus-metric-storage-db-prometheus-metric-storage-0: failed to fetch 
PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/persistentvolumeclaims/prometheus-metric-storage-db-prometheus-metric-storage-0\": dial tcp 38.102.83.147:6443: connect: connection refused" pod="openstack/prometheus-metric-storage-0" volumeName="prometheus-metric-storage-db" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:37.263098 5050 status_manager.go:851] "Failed to get status for pod" podUID="3477354d-838b-48cc-a6c3-612088d82640" pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-5g8lw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/infra-operator-controller-manager-78d48bff9d-5g8lw\": dial tcp 38.102.83.147:6443: connect: connection refused" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:37.278674 5050 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-r54sd container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.23:5443/healthz\": dial tcp 10.217.0.23:5443: connect: connection refused" start-of-body= Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:37.278720 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r54sd" podUID="dbd5b107-5d08-43af-881c-11540f395267" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.23:5443/healthz\": dial tcp 10.217.0.23:5443: connect: connection refused" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:37.283401 5050 reflector.go:561] object-"openshift-marketplace"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109067": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:37.283463 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-marketplace\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109067\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:37.303378 5050 reflector.go:561] object-"openstack"/"rabbitmq-server-conf": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Drabbitmq-server-conf&resourceVersion=109762": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:37.303416 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"rabbitmq-server-conf\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Drabbitmq-server-conf&resourceVersion=109762\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:37.323812 5050 reflector.go:561] object-"openshift-apiserver"/"trusted-ca-bundle": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/configmaps?fieldSelector=metadata.name%3Dtrusted-ca-bundle&resourceVersion=109602": dial 
tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:37.323888 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"trusted-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/configmaps?fieldSelector=metadata.name%3Dtrusted-ca-bundle&resourceVersion=109602\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:37.343211 5050 reflector.go:561] object-"openshift-machine-config-operator"/"mco-proxy-tls": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmco-proxy-tls&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:37.343282 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"mco-proxy-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmco-proxy-tls&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:37.363165 5050 reflector.go:561] object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-n57x7": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dnova-operator-controller-manager-dockercfg-n57x7&resourceVersion=109887": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:37.363230 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"nova-operator-controller-manager-dockercfg-n57x7\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dnova-operator-controller-manager-dockercfg-n57x7&resourceVersion=109887\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:37.364568 5050 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-8qmpk container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.42:8443/healthz\": dial tcp 10.217.0.42:8443: connect: connection refused" start-of-body= Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:37.364679 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8qmpk" podUID="81255772-fddc-4936-8de3-da4649c32d1f" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.42:8443/healthz\": dial tcp 10.217.0.42:8443: connect: connection refused" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:37.383272 5050 reflector.go:561] object-"openstack-operators"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109888": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 
15:54:37.383331 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109888\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:37.403251 5050 reflector.go:561] object-"openshift-multus"/"multus-admission-controller-secret": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-multus/secrets?fieldSelector=metadata.name%3Dmultus-admission-controller-secret&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:37.403334 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"multus-admission-controller-secret\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-multus/secrets?fieldSelector=metadata.name%3Dmultus-admission-controller-secret&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:37.423119 5050 reflector.go:561] object-"openshift-dns"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-dns/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109888": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:37.423202 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-dns\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-dns/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109888\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:37.442424 5050 reflector.go:561] object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/secrets?fieldSelector=metadata.name%3Dv4-0-config-system-ocp-branding-template&resourceVersion=109887": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:37.442467 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"v4-0-config-system-ocp-branding-template\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/secrets?fieldSelector=metadata.name%3Dv4-0-config-system-ocp-branding-template&resourceVersion=109887\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:37.463257 5050 reflector.go:561] object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ovn-kubernetes/secrets?fieldSelector=metadata.name%3Dovn-control-plane-metrics-cert&resourceVersion=109671": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 
15:54:37.463329 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"ovn-control-plane-metrics-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ovn-kubernetes/secrets?fieldSelector=metadata.name%3Dovn-control-plane-metrics-cert&resourceVersion=109671\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:37.483262 5050 reflector.go:561] object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-multus/secrets?fieldSelector=metadata.name%3Dmetrics-daemon-sa-dockercfg-d427c&resourceVersion=109743": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:37.483337 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"metrics-daemon-sa-dockercfg-d427c\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-multus/secrets?fieldSelector=metadata.name%3Dmetrics-daemon-sa-dockercfg-d427c&resourceVersion=109743\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:37.503345 5050 reflector.go:561] object-"openstack"/"keystone-config-data": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dkeystone-config-data&resourceVersion=109887": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:37.503411 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"keystone-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dkeystone-config-data&resourceVersion=109887\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:37.523262 5050 reflector.go:561] object-"openshift-route-controller-manager"/"config": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/configmaps?fieldSelector=metadata.name%3Dconfig&resourceVersion=109602": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:37.523343 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-route-controller-manager\"/\"config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/configmaps?fieldSelector=metadata.name%3Dconfig&resourceVersion=109602\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:37.542424 5050 reflector.go:561] object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/secrets?fieldSelector=metadata.name%3Dopenshift-controller-manager-sa-dockercfg-msq4c&resourceVersion=109671": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:37.542486 5050 reflector.go:158] 
"Unhandled Error" err="object-\"openshift-controller-manager\"/\"openshift-controller-manager-sa-dockercfg-msq4c\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/secrets?fieldSelector=metadata.name%3Dopenshift-controller-manager-sa-dockercfg-msq4c&resourceVersion=109671\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:37.562616 5050 reflector.go:561] object-"openstack"/"ovncontroller-ovncontroller-dockercfg-fjq8l": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dovncontroller-ovncontroller-dockercfg-fjq8l&resourceVersion=109586": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:37.562688 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"ovncontroller-ovncontroller-dockercfg-fjq8l\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dovncontroller-ovncontroller-dockercfg-fjq8l&resourceVersion=109586\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:37.583242 5050 reflector.go:561] object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-storage-version-migrator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109888": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:37.583358 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-kube-storage-version-migrator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-storage-version-migrator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109888\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:37.603568 5050 reflector.go:561] object-"openstack"/"horizon-scripts": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dhorizon-scripts&resourceVersion=109522": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:37.603647 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"horizon-scripts\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dhorizon-scripts&resourceVersion=109522\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:37.622761 5050 reflector.go:561] object-"openstack"/"octavia-worker-config-data": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Doctavia-worker-config-data&resourceVersion=109671": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:37.622832 5050 reflector.go:158] "Unhandled 
Error" err="object-\"openstack\"/\"octavia-worker-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Doctavia-worker-config-data&resourceVersion=109671\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:37.643134 5050 reflector.go:561] object-"openstack"/"cinder-backup-config-data": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcinder-backup-config-data&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:37.643191 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"cinder-backup-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcinder-backup-config-data&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:37.663097 5050 reflector.go:561] object-"openshift-authentication"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109762": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:37.663177 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109762\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:37.682789 5050 reflector.go:561] object-"openstack"/"openstack-scripts": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dopenstack-scripts&resourceVersion=109762": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:37.682874 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"openstack-scripts\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dopenstack-scripts&resourceVersion=109762\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:37.702946 5050 reflector.go:561] object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109602": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:37.703033 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-kube-controller-manager-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to 
list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109602\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:37.723587 5050 reflector.go:561] object-"openshift-oauth-apiserver"/"etcd-serving-ca": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-oauth-apiserver/configmaps?fieldSelector=metadata.name%3Detcd-serving-ca&resourceVersion=109762": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:37.723676 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-oauth-apiserver\"/\"etcd-serving-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-oauth-apiserver/configmaps?fieldSelector=metadata.name%3Detcd-serving-ca&resourceVersion=109762\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:37.742890 5050 reflector.go:561] object-"openshift-oauth-apiserver"/"serving-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-oauth-apiserver/secrets?fieldSelector=metadata.name%3Dserving-cert&resourceVersion=109846": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:37.743314 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-oauth-apiserver\"/\"serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-oauth-apiserver/secrets?fieldSelector=metadata.name%3Dserving-cert&resourceVersion=109846\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:37.748647 5050 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]leaderElection failed: reason withheld Dec 11 15:54:41 crc kubenswrapper[5050]: [+]serviceaccount-token-controller ok Dec 11 15:54:41 crc kubenswrapper[5050]: [+]ttl-after-finished-controller ok Dec 11 15:54:41 crc kubenswrapper[5050]: [+]ephemeral-volume-controller ok Dec 11 15:54:41 crc kubenswrapper[5050]: [+]namespace-controller ok Dec 11 15:54:41 crc kubenswrapper[5050]: [+]job-controller ok Dec 11 15:54:41 crc kubenswrapper[5050]: [+]certificatesigningrequest-signing-controller ok Dec 11 15:54:41 crc kubenswrapper[5050]: [+]persistentvolume-binder-controller ok Dec 11 15:54:41 crc kubenswrapper[5050]: [+]persistentvolume-attach-detach-controller ok Dec 11 15:54:41 crc kubenswrapper[5050]: [+]endpointslice-controller ok Dec 11 15:54:41 crc kubenswrapper[5050]: [+]garbage-collector-controller ok Dec 11 15:54:41 crc kubenswrapper[5050]: [+]disruption-controller ok Dec 11 15:54:41 crc kubenswrapper[5050]: [+]node-lifecycle-controller ok Dec 11 15:54:41 crc kubenswrapper[5050]: [+]resourcequota-controller ok Dec 11 15:54:41 crc kubenswrapper[5050]: [+]statefulset-controller ok Dec 11 15:54:41 crc kubenswrapper[5050]: [+]cronjob-controller ok Dec 11 15:54:41 crc kubenswrapper[5050]: [+]certificatesigningrequest-approving-controller ok 
Dec 11 15:54:41 crc kubenswrapper[5050]: [+]daemonset-controller ok Dec 11 15:54:41 crc kubenswrapper[5050]: [+]certificatesigningrequest-cleaner-controller ok Dec 11 15:54:41 crc kubenswrapper[5050]: [+]legacy-serviceaccount-token-cleaner-controller ok Dec 11 15:54:41 crc kubenswrapper[5050]: [+]endpoints-controller ok Dec 11 15:54:41 crc kubenswrapper[5050]: [+]serviceaccount-controller ok Dec 11 15:54:41 crc kubenswrapper[5050]: [+]persistentvolume-expander-controller ok Dec 11 15:54:41 crc kubenswrapper[5050]: [+]persistentvolumeclaim-protection-controller ok Dec 11 15:54:41 crc kubenswrapper[5050]: [+]taint-eviction-controller ok Dec 11 15:54:41 crc kubenswrapper[5050]: [+]replicationcontroller-controller ok Dec 11 15:54:41 crc kubenswrapper[5050]: [+]horizontal-pod-autoscaler-controller ok Dec 11 15:54:41 crc kubenswrapper[5050]: [+]clusterrole-aggregation-controller ok Dec 11 15:54:41 crc kubenswrapper[5050]: [+]root-ca-certificate-publisher-controller ok Dec 11 15:54:41 crc kubenswrapper[5050]: [+]validatingadmissionpolicy-status-controller ok Dec 11 15:54:41 crc kubenswrapper[5050]: [+]endpointslice-mirroring-controller ok Dec 11 15:54:41 crc kubenswrapper[5050]: [+]pod-garbage-collector-controller ok Dec 11 15:54:41 crc kubenswrapper[5050]: [+]deployment-controller ok Dec 11 15:54:41 crc kubenswrapper[5050]: [+]persistentvolume-protection-controller ok Dec 11 15:54:41 crc kubenswrapper[5050]: [+]replicaset-controller ok Dec 11 15:54:41 crc kubenswrapper[5050]: [+]service-ca-certificate-publisher-controller ok Dec 11 15:54:41 crc kubenswrapper[5050]: healthz check failed Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:37.748723 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:37.762860 5050 reflector.go:561] object-"openstack"/"openstack-cell1-dockercfg-mvxd9": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dopenstack-cell1-dockercfg-mvxd9&resourceVersion=109671": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:37.762939 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"openstack-cell1-dockercfg-mvxd9\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dopenstack-cell1-dockercfg-mvxd9&resourceVersion=109671\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:37.783369 5050 reflector.go:561] object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmachine-config-server-dockercfg-qx5rd&resourceVersion=109803": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:37.783452 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"machine-config-server-dockercfg-qx5rd\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmachine-config-server-dockercfg-qx5rd&resourceVersion=109803\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:37.791682 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="7fb00b03-fe6e-4c66-bd36-adf9443871a8" containerName="prometheus" probeResult="failure" output="Get \"http://10.217.1.135:9090/-/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:37.804859 5050 reflector.go:561] object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109158": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:37.804944 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-kube-apiserver-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109158\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:37.822404 5050 reflector.go:561] object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operators/secrets?fieldSelector=metadata.name%3Dobo-prometheus-operator-admission-webhook-service-cert&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:37.822463 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-operators\"/\"obo-prometheus-operator-admission-webhook-service-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operators/secrets?fieldSelector=metadata.name%3Dobo-prometheus-operator-admission-webhook-service-cert&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:37.842841 5050 reflector.go:561] object-"openstack"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109642": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:37.842903 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109642\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:37.863440 5050 reflector.go:561] object-"openshift-cluster-version"/"openshift-service-ca.crt": failed to list 
*v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-version/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109559": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:37.863544 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-version\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-version/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109559\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:37.882821 5050 reflector.go:561] object-"openstack"/"alertmanager-metric-storage-cluster-tls-config": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dalertmanager-metric-storage-cluster-tls-config&resourceVersion=109846": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:37.882884 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"alertmanager-metric-storage-cluster-tls-config\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dalertmanager-metric-storage-cluster-tls-config&resourceVersion=109846\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:37.902559 5050 reflector.go:561] object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler-operator/secrets?fieldSelector=metadata.name%3Dkube-scheduler-operator-serving-cert&resourceVersion=109887": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:37.902631 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-kube-scheduler-operator\"/\"kube-scheduler-operator-serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler-operator/secrets?fieldSelector=metadata.name%3Dkube-scheduler-operator-serving-cert&resourceVersion=109887\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:37.922337 5050 reflector.go:561] object-"openshift-network-node-identity"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109158": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:37.922405 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-node-identity\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109158\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: 
W1211 15:54:37.942933 5050 reflector.go:561] object-"openshift-operators"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operators/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109522": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:37.943024 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-operators\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operators/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109522\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:37.963308 5050 reflector.go:561] object-"openshift-apiserver-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109888": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:37.963630 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109888\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:37.983214 5050 reflector.go:561] object-"openshift-machine-config-operator"/"proxy-tls": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dproxy-tls&resourceVersion=109671": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:37.983272 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"proxy-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dproxy-tls&resourceVersion=109671\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:38.002594 5050 reflector.go:561] object-"openstack"/"openstack-aee-default-env": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dopenstack-aee-default-env&resourceVersion=109685": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:38.002673 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"openstack-aee-default-env\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dopenstack-aee-default-env&resourceVersion=109685\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:38.023304 5050 reflector.go:561] object-"openshift-authentication"/"kube-root-ca.crt": failed to list 
*v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109888": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:38.023374 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109888\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:38.043534 5050 reflector.go:561] object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca-operator/secrets?fieldSelector=metadata.name%3Dservice-ca-operator-dockercfg-rg9jl&resourceVersion=109623": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:38.043654 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-service-ca-operator\"/\"service-ca-operator-dockercfg-rg9jl\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca-operator/secrets?fieldSelector=metadata.name%3Dservice-ca-operator-dockercfg-rg9jl&resourceVersion=109623\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:38.062461 5050 reflector.go:561] object-"openshift-network-node-identity"/"ovnkube-identity-cm": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/configmaps?fieldSelector=metadata.name%3Dovnkube-identity-cm&resourceVersion=109602": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:38.062509 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-node-identity\"/\"ovnkube-identity-cm\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/configmaps?fieldSelector=metadata.name%3Dovnkube-identity-cm&resourceVersion=109602\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:38.082971 5050 reflector.go:561] object-"openshift-controller-manager"/"openshift-global-ca": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/configmaps?fieldSelector=metadata.name%3Dopenshift-global-ca&resourceVersion=109685": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:38.083026 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"openshift-global-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/configmaps?fieldSelector=metadata.name%3Dopenshift-global-ca&resourceVersion=109685\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:38.102732 5050 
desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.102.83.147:6443: connect: connection refused" pod="openshift-image-registry/image-registry-66df7c8f76-xnlm8" volumeName="registry-storage" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:38.122789 5050 reflector.go:561] object-"openstack-operators"/"openstack-operator-index-dockercfg-f86tg": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dopenstack-operator-index-dockercfg-f86tg&resourceVersion=109137": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:38.122906 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"openstack-operator-index-dockercfg-f86tg\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dopenstack-operator-index-dockercfg-f86tg&resourceVersion=109137\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:38.142563 5050 reflector.go:561] object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ovn-kubernetes/secrets?fieldSelector=metadata.name%3Dovn-kubernetes-node-dockercfg-pwtwl&resourceVersion=109671": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:38.142641 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-node-dockercfg-pwtwl\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ovn-kubernetes/secrets?fieldSelector=metadata.name%3Dovn-kubernetes-node-dockercfg-pwtwl&resourceVersion=109671\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:38.155114 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-volume-volume1-0" podUID="b94bc94c-a636-4f0d-bd28-0f347e7b1143" containerName="cinder-volume" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:38.163260 5050 status_manager.go:851] "Failed to get status for pod" podUID="71218193-88fc-4811-bf04-33a4f4a87898" pod="openstack/kube-state-metrics-0" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/kube-state-metrics-0\": dial tcp 38.102.83.147:6443: connect: connection refused" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:38.166925 5050 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:38.171880 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-scheduler-0" podUID="131d56da-b770-4452-97c9-b585434da431" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:38.182594 5050 reflector.go:561] 
object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109602": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:38.182656 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-operator-lifecycle-manager\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109602\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:38.202729 5050 reflector.go:561] object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-api/secrets?fieldSelector=metadata.name%3Dcontrol-plane-machine-set-operator-dockercfg-k9rxt&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:38.202805 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-dockercfg-k9rxt\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-api/secrets?fieldSelector=metadata.name%3Dcontrol-plane-machine-set-operator-dockercfg-k9rxt&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:38.222922 5050 reflector.go:561] object-"openshift-network-diagnostics"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109888": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:38.223182 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109888\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:38.235242 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5c747c4-l2hpx" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:38.242834 5050 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&resourceVersion=109887": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:38.242895 5050 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&resourceVersion=109887\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:38.262636 5050 reflector.go:561] object-"openshift-nmstate"/"default-dockercfg-vtnxn": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-nmstate/secrets?fieldSelector=metadata.name%3Ddefault-dockercfg-vtnxn&resourceVersion=109137": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:38.262704 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-nmstate\"/\"default-dockercfg-vtnxn\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-nmstate/secrets?fieldSelector=metadata.name%3Ddefault-dockercfg-vtnxn&resourceVersion=109137\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:38.283245 5050 reflector.go:561] object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-whqpr": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dovncluster-ovndbcluster-sb-dockercfg-whqpr&resourceVersion=109137": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:38.283311 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"ovncluster-ovndbcluster-sb-dockercfg-whqpr\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dovncluster-ovndbcluster-sb-dockercfg-whqpr&resourceVersion=109137\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:38.303145 5050 reflector.go:561] object-"openshift-oauth-apiserver"/"trusted-ca-bundle": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-oauth-apiserver/configmaps?fieldSelector=metadata.name%3Dtrusted-ca-bundle&resourceVersion=109559": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:38.303220 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-oauth-apiserver\"/\"trusted-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-oauth-apiserver/configmaps?fieldSelector=metadata.name%3Dtrusted-ca-bundle&resourceVersion=109559\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:38.322642 5050 reflector.go:561] object-"openshift-service-ca"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109762": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:38.322733 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-service-ca\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109762\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:38.342881 5050 reflector.go:561] object-"openshift-image-registry"/"node-ca-dockercfg-4777p": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/secrets?fieldSelector=metadata.name%3Dnode-ca-dockercfg-4777p&resourceVersion=109434": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:38.342972 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-image-registry\"/\"node-ca-dockercfg-4777p\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/secrets?fieldSelector=metadata.name%3Dnode-ca-dockercfg-4777p&resourceVersion=109434\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:38.363361 5050 reflector.go:561] object-"openshift-machine-api"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-api/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109522": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:38.363457 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-api/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109522\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:38.383618 5050 reflector.go:561] object-"openstack"/"barbican-config-data": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dbarbican-config-data&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:38.383992 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"barbican-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dbarbican-config-data&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:38.403293 5050 reflector.go:561] object-"openshift-nmstate"/"openshift-nmstate-webhook": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-nmstate/secrets?fieldSelector=metadata.name%3Dopenshift-nmstate-webhook&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:38.403372 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-nmstate\"/\"openshift-nmstate-webhook\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-nmstate/secrets?fieldSelector=metadata.name%3Dopenshift-nmstate-webhook&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:38.423093 5050 reflector.go:561] object-"openstack"/"cinder-scripts": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcinder-scripts&resourceVersion=109803": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:38.423163 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"cinder-scripts\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcinder-scripts&resourceVersion=109803\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:38.443306 5050 reflector.go:561] object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/secrets?fieldSelector=metadata.name%3Droute-controller-manager-sa-dockercfg-h2zr2&resourceVersion=109846": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:38.443379 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-route-controller-manager\"/\"route-controller-manager-sa-dockercfg-h2zr2\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/secrets?fieldSelector=metadata.name%3Droute-controller-manager-sa-dockercfg-h2zr2&resourceVersion=109846\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:38.444200 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-backup-0" podUID="0f954250-5982-4088-839a-8faf7bfe203c" containerName="cinder-backup" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:38.462694 5050 reflector.go:561] object-"openshift-ovn-kubernetes"/"env-overrides": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ovn-kubernetes/configmaps?fieldSelector=metadata.name%3Denv-overrides&resourceVersion=109762": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:38.462802 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"env-overrides\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ovn-kubernetes/configmaps?fieldSelector=metadata.name%3Denv-overrides&resourceVersion=109762\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:38.468923 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/kube-state-metrics-0" podUID="71218193-88fc-4811-bf04-33a4f4a87898" containerName="kube-state-metrics" probeResult="failure" output="Get \"http://10.217.1.133:8081/readyz\": dial tcp 10.217.1.133:8081: connect: connection refused" Dec 11 
15:54:41 crc kubenswrapper[5050]: I1211 15:54:38.475941 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-backup-0" podUID="0f954250-5982-4088-839a-8faf7bfe203c" containerName="cinder-backup" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:38.485641 5050 reflector.go:561] object-"openstack"/"neutron-neutron-dockercfg-ctkbt": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dneutron-neutron-dockercfg-ctkbt&resourceVersion=109846": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:38.485712 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"neutron-neutron-dockercfg-ctkbt\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dneutron-neutron-dockercfg-ctkbt&resourceVersion=109846\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:38.499088 5050 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.147:6443: connect: connection refused" interval="7s" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:38.502835 5050 reflector.go:561] object-"openshift-controller-manager"/"serving-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/secrets?fieldSelector=metadata.name%3Dserving-cert&resourceVersion=109137": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:38.502903 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/secrets?fieldSelector=metadata.name%3Dserving-cert&resourceVersion=109137\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:38.522808 5050 reflector.go:561] object-"metallb-system"/"controller-dockercfg-5zwsv": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/secrets?fieldSelector=metadata.name%3Dcontroller-dockercfg-5zwsv&resourceVersion=109087": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:38.522888 5050 reflector.go:158] "Unhandled Error" err="object-\"metallb-system\"/\"controller-dockercfg-5zwsv\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/secrets?fieldSelector=metadata.name%3Dcontroller-dockercfg-5zwsv&resourceVersion=109087\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:38.542738 5050 reflector.go:561] object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-dns-operator/secrets?fieldSelector=metadata.name%3Ddns-operator-dockercfg-9mqw5&resourceVersion=109382": dial tcp 38.102.83.147:6443: 
connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:38.542820 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-dns-operator\"/\"dns-operator-dockercfg-9mqw5\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-dns-operator/secrets?fieldSelector=metadata.name%3Ddns-operator-dockercfg-9mqw5&resourceVersion=109382\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:38.546564 5050 scope.go:117] "RemoveContainer" containerID="33899e24adacef82929b0fbbcd58af7cb4f7aa3935b49ec9b5b4045a51da1d14" Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:38.546845 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:38.562554 5050 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?resourceVersion=109666": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:38.562645 5050 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?resourceVersion=109666\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:38.583515 5050 reflector.go:561] object-"openshift-multus"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-multus/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109522": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:38.583592 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-multus/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109522\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:38.602941 5050 reflector.go:561] object-"openshift-nmstate"/"plugin-serving-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-nmstate/secrets?fieldSelector=metadata.name%3Dplugin-serving-cert&resourceVersion=109743": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:38.603037 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-nmstate\"/\"plugin-serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-nmstate/secrets?fieldSelector=metadata.name%3Dplugin-serving-cert&resourceVersion=109743\": dial tcp 38.102.83.147:6443: connect: 
connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:38.622725 5050 reflector.go:561] object-"metallb-system"/"speaker-certs-secret": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/secrets?fieldSelector=metadata.name%3Dspeaker-certs-secret&resourceVersion=109846": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:38.622797 5050 reflector.go:158] "Unhandled Error" err="object-\"metallb-system\"/\"speaker-certs-secret\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/secrets?fieldSelector=metadata.name%3Dspeaker-certs-secret&resourceVersion=109846\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:38.642532 5050 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openstack/persistence-rabbitmq-cell1-server-0: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/persistentvolumeclaims/persistence-rabbitmq-cell1-server-0\": dial tcp 38.102.83.147:6443: connect: connection refused" pod="openstack/rabbitmq-cell1-server-0" volumeName="persistence" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:38.664278 5050 reflector.go:561] object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/secrets?fieldSelector=metadata.name%3Dcatalog-operator-serving-cert&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:38.664354 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-operator-lifecycle-manager\"/\"catalog-operator-serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/secrets?fieldSelector=metadata.name%3Dcatalog-operator-serving-cert&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:38.667465 5050 generic.go:334] "Generic (PLEG): container finished" podID="1c8331c1-b8ee-456b-baa6-110917427b64" containerID="bc1a7c414fb7ed1343e941a8a7ba794d8eed3ccaad27a52f69cfbbfb3ef248e7" exitCode=137 Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:38.667534 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-d8bb48f5d-wwdcc" event={"ID":"1c8331c1-b8ee-456b-baa6-110917427b64","Type":"ContainerDied","Data":"bc1a7c414fb7ed1343e941a8a7ba794d8eed3ccaad27a52f69cfbbfb3ef248e7"} Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:38.670507 5050 generic.go:334] "Generic (PLEG): container finished" podID="3c9e825c-0aee-42b9-a7a5-3191486f301d" containerID="ce2bc8fd07e25246673f9423055ae960e439b06b2f99f0e8154eb011bf21074d" exitCode=137 Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:38.670580 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-pjhfc" event={"ID":"3c9e825c-0aee-42b9-a7a5-3191486f301d","Type":"ContainerDied","Data":"ce2bc8fd07e25246673f9423055ae960e439b06b2f99f0e8154eb011bf21074d"} Dec 11 15:54:41 crc 
kubenswrapper[5050]: I1211 15:54:38.672812 5050 generic.go:334] "Generic (PLEG): container finished" podID="06df6c8c-640d-431b-b216-78345a9054e1" containerID="9285260a37630061624ba74a7c92327ead3d7a69163e896c20c90b3da8d7a4b6" exitCode=137 Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:38.672855 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-xgbp2" event={"ID":"06df6c8c-640d-431b-b216-78345a9054e1","Type":"ContainerDied","Data":"9285260a37630061624ba74a7c92327ead3d7a69163e896c20c90b3da8d7a4b6"} Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:38.677602 5050 generic.go:334] "Generic (PLEG): container finished" podID="3477354d-838b-48cc-a6c3-612088d82640" containerID="466b93fe028fca07c950e859d36217259a41f4f7bfb1b3eeba0ddc9b195b96a1" exitCode=137 Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:38.677636 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-5g8lw" event={"ID":"3477354d-838b-48cc-a6c3-612088d82640","Type":"ContainerDied","Data":"466b93fe028fca07c950e859d36217259a41f4f7bfb1b3eeba0ddc9b195b96a1"} Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:38.679562 5050 generic.go:334] "Generic (PLEG): container finished" podID="4b4e8aa9-a0f6-4bd2-92d2-c524aedf6389" containerID="af7a54f4263f021d4aff8a9a4ae17f2163472123dd835db7a0edb1c97d4ed3a2" exitCode=137 Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:38.679616 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-qdrgd" event={"ID":"4b4e8aa9-a0f6-4bd2-92d2-c524aedf6389","Type":"ContainerDied","Data":"af7a54f4263f021d4aff8a9a4ae17f2163472123dd835db7a0edb1c97d4ed3a2"} Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:38.681415 5050 generic.go:334] "Generic (PLEG): container finished" podID="85683dfb-37fb-4301-8c7a-fbb7453b303d" containerID="17ce3bb0fb08a6af9be92adfec958eef19d6b30d1b82e27337ec77e555a96524" exitCode=137 Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:38.681434 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-n7crp" event={"ID":"85683dfb-37fb-4301-8c7a-fbb7453b303d","Type":"ContainerDied","Data":"17ce3bb0fb08a6af9be92adfec958eef19d6b30d1b82e27337ec77e555a96524"} Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:38.682633 5050 reflector.go:561] object-"openstack"/"octavia-rsyslog-scripts": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Doctavia-rsyslog-scripts&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:38.682690 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"octavia-rsyslog-scripts\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Doctavia-rsyslog-scripts&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:38.703050 5050 status_manager.go:851] "Failed to get status for pod" podUID="06df6c8c-640d-431b-b216-78345a9054e1" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-xgbp2" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/keystone-operator-controller-manager-7765d96ddf-xgbp2\": dial tcp 38.102.83.147:6443: connect: connection refused" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:38.737811 5050 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?resourceVersion=109167": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:38.737894 5050 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?resourceVersion=109167\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:38.742547 5050 reflector.go:561] object-"openstack"/"dns-svc": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Ddns-svc&resourceVersion=109559": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:38.742620 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"dns-svc\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Ddns-svc&resourceVersion=109559\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:38.763891 5050 reflector.go:561] object-"openstack"/"openstack-config-secret": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dopenstack-config-secret&resourceVersion=109671": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:38.764299 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"openstack-config-secret\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dopenstack-config-secret&resourceVersion=109671\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:38.785249 5050 reflector.go:561] object-"openshift-config-operator"/"config-operator-serving-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/secrets?fieldSelector=metadata.name%3Dconfig-operator-serving-cert&resourceVersion=109846": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:38.785343 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-config-operator\"/\"config-operator-serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/secrets?fieldSelector=metadata.name%3Dconfig-operator-serving-cert&resourceVersion=109846\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:38.802681 5050 reflector.go:561] 
object-"openshift-network-node-identity"/"network-node-identity-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/secrets?fieldSelector=metadata.name%3Dnetwork-node-identity-cert&resourceVersion=109887": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:38.802746 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-node-identity\"/\"network-node-identity-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/secrets?fieldSelector=metadata.name%3Dnetwork-node-identity-cert&resourceVersion=109887\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:38.822799 5050 reflector.go:561] object-"openshift-apiserver"/"image-import-ca": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/configmaps?fieldSelector=metadata.name%3Dimage-import-ca&resourceVersion=109685": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:38.822868 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"image-import-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/configmaps?fieldSelector=metadata.name%3Dimage-import-ca&resourceVersion=109685\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:38.842961 5050 reflector.go:561] object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-g8gr8": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operators/secrets?fieldSelector=metadata.name%3Dobo-prometheus-operator-admission-webhook-dockercfg-g8gr8&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:38.843055 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-operators\"/\"obo-prometheus-operator-admission-webhook-dockercfg-g8gr8\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operators/secrets?fieldSelector=metadata.name%3Dobo-prometheus-operator-admission-webhook-dockercfg-g8gr8&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:38.862932 5050 reflector.go:561] object-"openstack"/"rabbitmq-cell1-plugins-conf": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Drabbitmq-cell1-plugins-conf&resourceVersion=109522": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:38.862994 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"rabbitmq-cell1-plugins-conf\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Drabbitmq-cell1-plugins-conf&resourceVersion=109522\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc 
kubenswrapper[5050]: E1211 15:54:38.883003 5050 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openstack/ovndbcluster-sb-etc-ovn-ovsdbserver-sb-1: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/persistentvolumeclaims/ovndbcluster-sb-etc-ovn-ovsdbserver-sb-1\": dial tcp 38.102.83.147:6443: connect: connection refused" pod="openstack/ovsdbserver-sb-1" volumeName="ovndbcluster-sb-etc-ovn" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:38.902472 5050 reflector.go:561] object-"openshift-cluster-version"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-version/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109685": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:38.902549 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-version\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-version/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109685\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:38.923636 5050 reflector.go:561] object-"openstack"/"barbican-api-config-data": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dbarbican-api-config-data&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:38.923728 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"barbican-api-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dbarbican-api-config-data&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:38.942424 5050 status_manager.go:851] "Failed to get status for pod" podUID="1c8331c1-b8ee-456b-baa6-110917427b64" pod="openshift-operators/observability-operator-d8bb48f5d-wwdcc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operators/pods/observability-operator-d8bb48f5d-wwdcc\": dial tcp 38.102.83.147:6443: connect: connection refused" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:38.963157 5050 reflector.go:561] object-"openshift-nmstate"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-nmstate/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109762": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:38.963239 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-nmstate\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-nmstate/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109762\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:38.982179 5050 patch_prober.go:28] interesting 
pod/marketplace-operator-79b997595-cgxqx container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.57:8080/healthz\": dial tcp 10.217.0.57:8080: connect: connection refused" start-of-body= Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:38.982226 5050 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-cgxqx container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.217.0.57:8080/healthz\": dial tcp 10.217.0.57:8080: connect: connection refused" start-of-body= Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:38.982272 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-79b997595-cgxqx" podUID="fd564500-1ab5-401f-84a8-79c80dfe50ab" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.57:8080/healthz\": dial tcp 10.217.0.57:8080: connect: connection refused" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:38.982228 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-cgxqx" podUID="fd564500-1ab5-401f-84a8-79c80dfe50ab" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.57:8080/healthz\": dial tcp 10.217.0.57:8080: connect: connection refused" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:38.982550 5050 reflector.go:561] object-"openshift-oauth-apiserver"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-oauth-apiserver/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109117": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:38.982607 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-oauth-apiserver\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-oauth-apiserver/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109117\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:39.002729 5050 reflector.go:561] object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-2bw5c": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dneutron-operator-controller-manager-dockercfg-2bw5c&resourceVersion=109623": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:39.002847 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"neutron-operator-controller-manager-dockercfg-2bw5c\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dneutron-operator-controller-manager-dockercfg-2bw5c&resourceVersion=109623\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:39.022991 5050 reflector.go:561] object-"openshift-ingress-canary"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get 
"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-canary/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109559": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:39.023174 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-ingress-canary\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-canary/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109559\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:39.043833 5050 reflector.go:561] object-"openshift-service-ca-operator"/"serving-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca-operator/secrets?fieldSelector=metadata.name%3Dserving-cert&resourceVersion=109671": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:39.043986 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-service-ca-operator\"/\"serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca-operator/secrets?fieldSelector=metadata.name%3Dserving-cert&resourceVersion=109671\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:39.065174 5050 reflector.go:561] object-"openshift-machine-api"/"machine-api-operator-tls": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-api/secrets?fieldSelector=metadata.name%3Dmachine-api-operator-tls&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:39.065234 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"machine-api-operator-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-api/secrets?fieldSelector=metadata.name%3Dmachine-api-operator-tls&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:39.083287 5050 reflector.go:561] object-"openstack-operators"/"infra-operator-webhook-server-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dinfra-operator-webhook-server-cert&resourceVersion=109803": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:39.083348 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"infra-operator-webhook-server-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dinfra-operator-webhook-server-cert&resourceVersion=109803\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:39.103286 5050 status_manager.go:851] "Failed to get status for pod" podUID="3477354d-838b-48cc-a6c3-612088d82640" 
pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-5g8lw" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/infra-operator-controller-manager-78d48bff9d-5g8lw\": dial tcp 38.102.83.147:6443: connect: connection refused" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:39.123667 5050 reflector.go:561] object-"openshift-config-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109522": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:39.123755 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-config-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109522\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:39.142417 5050 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openstack/mysql-db-openstack-cell1-galera-0: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/persistentvolumeclaims/mysql-db-openstack-cell1-galera-0\": dial tcp 38.102.83.147:6443: connect: connection refused" pod="openstack/openstack-cell1-galera-0" volumeName="mysql-db" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:39.162891 5050 reflector.go:561] object-"openshift-image-registry"/"registry-dockercfg-kzzsd": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/secrets?fieldSelector=metadata.name%3Dregistry-dockercfg-kzzsd&resourceVersion=109887": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:39.162972 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-image-registry\"/\"registry-dockercfg-kzzsd\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/secrets?fieldSelector=metadata.name%3Dregistry-dockercfg-kzzsd&resourceVersion=109887\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:39.183162 5050 reflector.go:561] object-"openshift-console"/"console-oauth-config": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/secrets?fieldSelector=metadata.name%3Dconsole-oauth-config&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:39.183226 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-console\"/\"console-oauth-config\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/secrets?fieldSelector=metadata.name%3Dconsole-oauth-config&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:39.203267 5050 reflector.go:561] object-"openshift-dns"/"dns-dockercfg-jwfmh": failed to list *v1.Secret: Get 
"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-dns/secrets?fieldSelector=metadata.name%3Ddns-dockercfg-jwfmh&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:39.203326 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-dns\"/\"dns-dockercfg-jwfmh\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-dns/secrets?fieldSelector=metadata.name%3Ddns-dockercfg-jwfmh&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:39.224192 5050 status_manager.go:851] "Failed to get status for pod" podUID="051b7665-675e-4109-a8e8-5a416c8b49cc" pod="openshift-operators/perses-operator-5446b9c989-dqk4m" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operators/pods/perses-operator-5446b9c989-dqk4m\": dial tcp 38.102.83.147:6443: connect: connection refused" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:39.242740 5050 reflector.go:561] object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/secrets?fieldSelector=metadata.name%3Dpackageserver-service-cert&resourceVersion=109671": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:39.242814 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-operator-lifecycle-manager\"/\"packageserver-service-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/secrets?fieldSelector=metadata.name%3Dpackageserver-service-cert&resourceVersion=109671\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:39.262585 5050 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openstack/ovndbcluster-sb-etc-ovn-ovsdbserver-sb-0: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/persistentvolumeclaims/ovndbcluster-sb-etc-ovn-ovsdbserver-sb-0\": dial tcp 38.102.83.147:6443: connect: connection refused" pod="openstack/ovsdbserver-sb-0" volumeName="ovndbcluster-sb-etc-ovn" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:39.282999 5050 status_manager.go:851] "Failed to get status for pod" podUID="2086bc41-00a2-4c97-a491-08511f3ed6e5" pod="openstack/horizon-5fb79d99b5-m4xgd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/horizon-5fb79d99b5-m4xgd\": dial tcp 38.102.83.147:6443: connect: connection refused" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:39.302654 5050 reflector.go:561] object-"cert-manager-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/cert-manager-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109762": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:39.302743 5050 reflector.go:158] "Unhandled Error" err="object-\"cert-manager-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/cert-manager-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109762\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:39.322365 5050 reflector.go:561] object-"openshift-machine-api"/"control-plane-machine-set-operator-tls": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-api/secrets?fieldSelector=metadata.name%3Dcontrol-plane-machine-set-operator-tls&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:39.322428 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-api/secrets?fieldSelector=metadata.name%3Dcontrol-plane-machine-set-operator-tls&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:39.343233 5050 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openstack/ovndbcluster-nb-etc-ovn-ovsdbserver-nb-0: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/persistentvolumeclaims/ovndbcluster-nb-etc-ovn-ovsdbserver-nb-0\": dial tcp 38.102.83.147:6443: connect: connection refused" pod="openstack/ovsdbserver-nb-0" volumeName="ovndbcluster-nb-etc-ovn" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:39.362656 5050 status_manager.go:851] "Failed to get status for pod" podUID="1d89350d-55e9-4ef6-8182-287894b6c14b" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-cnp7n" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-oauth-apiserver/pods/apiserver-7bbb656c7d-cnp7n\": dial tcp 38.102.83.147:6443: connect: connection refused" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:39.382441 5050 reflector.go:561] object-"openstack"/"rabbitmq-erlang-cookie": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Drabbitmq-erlang-cookie&resourceVersion=109743": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:39.382511 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"rabbitmq-erlang-cookie\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Drabbitmq-erlang-cookie&resourceVersion=109743\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:39.403004 5050 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openstack/persistence-rabbitmq-server-0: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/persistentvolumeclaims/persistence-rabbitmq-server-0\": dial tcp 38.102.83.147:6443: connect: connection refused" pod="openstack/rabbitmq-server-0" volumeName="persistence" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:39.422722 5050 reflector.go:561] object-"openshift-apiserver"/"etcd-serving-ca": failed to list 
*v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/configmaps?fieldSelector=metadata.name%3Detcd-serving-ca&resourceVersion=109807": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:39.422800 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"etcd-serving-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/configmaps?fieldSelector=metadata.name%3Detcd-serving-ca&resourceVersion=109807\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:39.442833 5050 reflector.go:561] object-"openstack"/"horizon": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dhorizon&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:39.442907 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"horizon\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dhorizon&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:39.463264 5050 status_manager.go:851] "Failed to get status for pod" podUID="71218193-88fc-4811-bf04-33a4f4a87898" pod="openstack/kube-state-metrics-0" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/kube-state-metrics-0\": dial tcp 38.102.83.147:6443: connect: connection refused" Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:39.483458 5050 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openstack/mariadb-data: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/persistentvolumeclaims/mariadb-data\": dial tcp 38.102.83.147:6443: connect: connection refused" pod="openstack/mariadb-copy-data" volumeName="mariadb-data" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:39.503043 5050 reflector.go:561] object-"openstack"/"combined-ca-bundle": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcombined-ca-bundle&resourceVersion=109803": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:39.503125 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"combined-ca-bundle\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcombined-ca-bundle&resourceVersion=109803\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:39.522739 5050 reflector.go:561] object-"openshift-ingress-operator"/"trusted-ca": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/configmaps?fieldSelector=metadata.name%3Dtrusted-ca&resourceVersion=109762": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:39.522823 5050 reflector.go:158] "Unhandled Error" 
err="object-\"openshift-ingress-operator\"/\"trusted-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/configmaps?fieldSelector=metadata.name%3Dtrusted-ca&resourceVersion=109762\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:39.543107 5050 reflector.go:561] object-"openshift-cluster-machine-approver"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-machine-approver/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109158": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:39.543191 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-machine-approver\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-machine-approver/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109158\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:39.562672 5050 reflector.go:561] object-"openstack"/"cinder-scheduler-config-data": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcinder-scheduler-config-data&resourceVersion=109671": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:39.562767 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"cinder-scheduler-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcinder-scheduler-config-data&resourceVersion=109671\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:39.582538 5050 reflector.go:561] object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dopenstack-baremetal-operator-webhook-server-cert&resourceVersion=109803": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:39.582604 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"openstack-baremetal-operator-webhook-server-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dopenstack-baremetal-operator-webhook-server-cert&resourceVersion=109803\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:39.601554 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-operator-799b66f579-tqs2v" podUID="53a1a6d1-6999-4fa1-a0ce-a20b83f1f347" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.73:8081/readyz\": dial tcp 10.217.0.73:8081: connect: connection refused" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:39.603172 5050 
reflector.go:561] object-"openshift-network-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109888": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:39.603255 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109888\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:39.622518 5050 reflector.go:561] object-"openstack"/"ceilometer-scripts": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dceilometer-scripts&resourceVersion=109137": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:39.622609 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"ceilometer-scripts\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dceilometer-scripts&resourceVersion=109137\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:39.643096 5050 status_manager.go:851] "Failed to get status for pod" podUID="06df6c8c-640d-431b-b216-78345a9054e1" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-xgbp2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/keystone-operator-controller-manager-7765d96ddf-xgbp2\": dial tcp 38.102.83.147:6443: connect: connection refused" Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:39.663218 5050 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openstack/ovndbcluster-sb-etc-ovn-ovsdbserver-sb-2: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/persistentvolumeclaims/ovndbcluster-sb-etc-ovn-ovsdbserver-sb-2\": dial tcp 38.102.83.147:6443: connect: connection refused" pod="openstack/ovsdbserver-sb-2" volumeName="ovndbcluster-sb-etc-ovn" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:39.684479 5050 reflector.go:561] object-"openshift-operators"/"observability-operator-tls": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operators/secrets?fieldSelector=metadata.name%3Dobservability-operator-tls&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:39.684552 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-operators\"/\"observability-operator-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operators/secrets?fieldSelector=metadata.name%3Dobservability-operator-tls&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:39.703207 5050 reflector.go:561] 
object-"openstack"/"ovnnorthd-scripts": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dovnnorthd-scripts&resourceVersion=109559": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:39.703336 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"ovnnorthd-scripts\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dovnnorthd-scripts&resourceVersion=109559\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:39.722793 5050 reflector.go:561] object-"openstack"/"octavia-healthmanager-scripts": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Doctavia-healthmanager-scripts&resourceVersion=109586": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:39.722941 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"octavia-healthmanager-scripts\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Doctavia-healthmanager-scripts&resourceVersion=109586\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:39.742443 5050 reflector.go:561] object-"openstack"/"ovnnorthd-config": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dovnnorthd-config&resourceVersion=109602": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:39.742512 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"ovnnorthd-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dovnnorthd-config&resourceVersion=109602\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:39.762994 5050 reflector.go:561] object-"openstack"/"rabbitmq-default-user": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Drabbitmq-default-user&resourceVersion=109743": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:39.763067 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"rabbitmq-default-user\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Drabbitmq-default-user&resourceVersion=109743\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:39.763839 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="7fb00b03-fe6e-4c66-bd36-adf9443871a8" containerName="prometheus" probeResult="failure" output="HTTP probe failed with statuscode: 503" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 
15:54:39.781351 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_packageserver-d55dfcdfc-r54sd_dbd5b107-5d08-43af-881c-11540f395267/packageserver/0.log" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:39.781435 5050 generic.go:334] "Generic (PLEG): container finished" podID="dbd5b107-5d08-43af-881c-11540f395267" containerID="c4fcd3e0277409e398f4cc4d635cc0accc5ab2df1e1cb5b5780d5fabcb6748cf" exitCode=137 Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:39.781532 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r54sd" event={"ID":"dbd5b107-5d08-43af-881c-11540f395267","Type":"ContainerDied","Data":"c4fcd3e0277409e398f4cc4d635cc0accc5ab2df1e1cb5b5780d5fabcb6748cf"} Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:39.782481 5050 reflector.go:561] object-"openshift-network-node-identity"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109559": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:39.782563 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-node-identity\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109559\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:39.784365 5050 generic.go:334] "Generic (PLEG): container finished" podID="54e98831-cd88-4dee-90db-e8fbb006e9c3" containerID="dc143bce1375c19fe135afda15f93779f6bf099de8cfba514d7efadf417d3fa5" exitCode=137 Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:39.784431 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-w4tzc" event={"ID":"54e98831-cd88-4dee-90db-e8fbb006e9c3","Type":"ContainerDied","Data":"dc143bce1375c19fe135afda15f93779f6bf099de8cfba514d7efadf417d3fa5"} Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:39.786823 5050 generic.go:334] "Generic (PLEG): container finished" podID="14ad1594-090d-4024-a999-9ffe77ce58d8" containerID="11ec72e9f06368997b90f40f70878174683db5a652ebb2eb2c432e34abde950e" exitCode=137 Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:39.786864 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"14ad1594-090d-4024-a999-9ffe77ce58d8","Type":"ContainerDied","Data":"11ec72e9f06368997b90f40f70878174683db5a652ebb2eb2c432e34abde950e"} Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:39.803096 5050 reflector.go:561] object-"openshift-ingress-canary"/"default-dockercfg-2llfx": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-canary/secrets?fieldSelector=metadata.name%3Ddefault-dockercfg-2llfx&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:39.803169 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-ingress-canary\"/\"default-dockercfg-2llfx\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-canary/secrets?fieldSelector=metadata.name%3Ddefault-dockercfg-2llfx&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:39.823384 5050 reflector.go:561] object-"openshift-ingress"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109685": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:39.823466 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-ingress\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109685\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:39.842566 5050 reflector.go:561] object-"openshift-cluster-machine-approver"/"kube-rbac-proxy": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-machine-approver/configmaps?fieldSelector=metadata.name%3Dkube-rbac-proxy&resourceVersion=109851": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:39.842631 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-machine-approver\"/\"kube-rbac-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-machine-approver/configmaps?fieldSelector=metadata.name%3Dkube-rbac-proxy&resourceVersion=109851\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:39.862974 5050 status_manager.go:851] "Failed to get status for pod" podUID="58a60a6f-fcf5-4aa0-b6e4-97d7df2b2bd2" pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-65swh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/mariadb-operator-controller-manager-79c8c4686c-65swh\": dial tcp 38.102.83.147:6443: connect: connection refused" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:39.883114 5050 reflector.go:561] object-"openstack"/"prometheus-metric-storage-rulefiles-0": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dprometheus-metric-storage-rulefiles-0&resourceVersion=109888": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:39.883200 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"prometheus-metric-storage-rulefiles-0\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dprometheus-metric-storage-rulefiles-0&resourceVersion=109888\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:39.903357 5050 reflector.go:561] object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-4x88l": failed to list *v1.Secret: Get 
"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dovn-operator-controller-manager-dockercfg-4x88l&resourceVersion=109803": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:39.903436 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"ovn-operator-controller-manager-dockercfg-4x88l\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dovn-operator-controller-manager-dockercfg-4x88l&resourceVersion=109803\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:39.923330 5050 reflector.go:561] object-"openstack"/"nova-cell1-conductor-config-data": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dnova-cell1-conductor-config-data&resourceVersion=109137": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:39.923412 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"nova-cell1-conductor-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dnova-cell1-conductor-config-data&resourceVersion=109137\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:39.943557 5050 reflector.go:561] object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-glgrh": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dheat-operator-controller-manager-dockercfg-glgrh&resourceVersion=109846": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:39.943636 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"heat-operator-controller-manager-dockercfg-glgrh\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dheat-operator-controller-manager-dockercfg-glgrh&resourceVersion=109846\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:39.962577 5050 reflector.go:561] object-"openshift-apiserver"/"etcd-client": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/secrets?fieldSelector=metadata.name%3Detcd-client&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:39.962655 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"etcd-client\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/secrets?fieldSelector=metadata.name%3Detcd-client&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:39.983583 5050 reflector.go:561] object-"openstack"/"manila-scripts": failed to list *v1.Secret: Get 
"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dmanila-scripts&resourceVersion=109803": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:39.983650 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"manila-scripts\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dmanila-scripts&resourceVersion=109803\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:40.002550 5050 reflector.go:561] object-"openstack"/"octavia-config-data": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Doctavia-config-data&resourceVersion=109671": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:40.002637 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"octavia-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Doctavia-config-data&resourceVersion=109671\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:40.007523 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-scheduler-0" podUID="131d56da-b770-4452-97c9-b585434da431" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:40.008308 5050 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/cinder-scheduler-0" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:40.010049 5050 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cinder-scheduler" containerStatusID={"Type":"cri-o","ID":"d3e1fc093c150e4e423d3557b3ec8c637526707132012ee62f3903c848934b44"} pod="openstack/cinder-scheduler-0" containerMessage="Container cinder-scheduler failed liveness probe, will be restarted" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:40.010113 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="131d56da-b770-4452-97c9-b585434da431" containerName="cinder-scheduler" containerID="cri-o://d3e1fc093c150e4e423d3557b3ec8c637526707132012ee62f3903c848934b44" gracePeriod=30 Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:40.022615 5050 reflector.go:561] object-"openstack-operators"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109158": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:40.022704 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109158\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc 
kubenswrapper[5050]: I1211 15:54:40.042591 5050 status_manager.go:851] "Failed to get status for pod" podUID="3c9e825c-0aee-42b9-a7a5-3191486f301d" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-pjhfc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/neutron-operator-controller-manager-5fdfd5b6b5-pjhfc\": dial tcp 38.102.83.147:6443: connect: connection refused" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:40.064598 5050 reflector.go:561] object-"openstack-operators"/"openstack-operator-controller-operator-dockercfg-68tnd": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dopenstack-operator-controller-operator-dockercfg-68tnd&resourceVersion=109586": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:40.064669 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"openstack-operator-controller-operator-dockercfg-68tnd\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dopenstack-operator-controller-operator-dockercfg-68tnd&resourceVersion=109586\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:40.083272 5050 reflector.go:561] object-"openstack"/"barbican-barbican-dockercfg-fxl2b": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dbarbican-barbican-dockercfg-fxl2b&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:40.083342 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"barbican-barbican-dockercfg-fxl2b\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dbarbican-barbican-dockercfg-fxl2b&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:40.102918 5050 reflector.go:561] object-"openshift-console"/"oauth-serving-cert": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/configmaps?fieldSelector=metadata.name%3Doauth-serving-cert&resourceVersion=109067": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:40.102992 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-console\"/\"oauth-serving-cert\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/configmaps?fieldSelector=metadata.name%3Doauth-serving-cert&resourceVersion=109067\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:40.123107 5050 reflector.go:561] object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-7bf58": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dswift-operator-controller-manager-dockercfg-7bf58&resourceVersion=109671": dial tcp 38.102.83.147:6443: connect: connection 
refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:40.123211 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"swift-operator-controller-manager-dockercfg-7bf58\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dswift-operator-controller-manager-dockercfg-7bf58&resourceVersion=109671\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:40.143185 5050 reflector.go:561] object-"openshift-route-controller-manager"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109642": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:40.143300 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-route-controller-manager\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109642\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:40.163437 5050 reflector.go:561] object-"openshift-authentication-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109559": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:40.163524 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109559\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:40.182850 5050 reflector.go:561] object-"openshift-dns"/"dns-default": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-dns/configmaps?fieldSelector=metadata.name%3Ddns-default&resourceVersion=109158": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:40.182964 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-dns\"/\"dns-default\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-dns/configmaps?fieldSelector=metadata.name%3Ddns-default&resourceVersion=109158\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:40.203695 5050 reflector.go:561] object-"openshift-multus"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-multus/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109522": dial tcp 38.102.83.147:6443: 
connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:40.203818 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-multus/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109522\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:40.212713 5050 patch_prober.go:28] interesting pod/apiserver-76f77b778f-cd66n container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Dec 11 15:54:41 crc kubenswrapper[5050]: [+]log ok Dec 11 15:54:41 crc kubenswrapper[5050]: [+]etcd excluded: ok Dec 11 15:54:41 crc kubenswrapper[5050]: [+]etcd-readiness excluded: ok Dec 11 15:54:41 crc kubenswrapper[5050]: [+]poststarthook/start-apiserver-admission-initializer ok Dec 11 15:54:41 crc kubenswrapper[5050]: [+]informer-sync ok Dec 11 15:54:41 crc kubenswrapper[5050]: [+]poststarthook/generic-apiserver-start-informers ok Dec 11 15:54:41 crc kubenswrapper[5050]: [+]poststarthook/max-in-flight-filter ok Dec 11 15:54:41 crc kubenswrapper[5050]: [+]poststarthook/storage-object-count-tracker-hook ok Dec 11 15:54:41 crc kubenswrapper[5050]: [+]poststarthook/image.openshift.io-apiserver-caches ok Dec 11 15:54:41 crc kubenswrapper[5050]: [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok Dec 11 15:54:41 crc kubenswrapper[5050]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Dec 11 15:54:41 crc kubenswrapper[5050]: [+]poststarthook/project.openshift.io-projectcache ok Dec 11 15:54:41 crc kubenswrapper[5050]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Dec 11 15:54:41 crc kubenswrapper[5050]: [+]poststarthook/openshift.io-startinformers ok Dec 11 15:54:41 crc kubenswrapper[5050]: [+]poststarthook/openshift.io-restmapperupdater ok Dec 11 15:54:41 crc kubenswrapper[5050]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Dec 11 15:54:41 crc kubenswrapper[5050]: [-]shutdown failed: reason withheld Dec 11 15:54:41 crc kubenswrapper[5050]: readyz check failed Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:40.212797 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-76f77b778f-cd66n" podUID="ea884e88-c0df-4212-976a-0d7ce1731fdc" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:40.223625 5050 reflector.go:561] object-"openshift-image-registry"/"image-registry-certificates": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/configmaps?fieldSelector=metadata.name%3Dimage-registry-certificates&resourceVersion=109522": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:40.224438 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-image-registry\"/\"image-registry-certificates\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/configmaps?fieldSelector=metadata.name%3Dimage-registry-certificates&resourceVersion=109522\": dial tcp 38.102.83.147:6443: connect: connection refused" 
logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:40.243363 5050 reflector.go:561] object-"openstack"/"rabbitmq-cell1-server-conf": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Drabbitmq-cell1-server-conf&resourceVersion=109762": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:40.243427 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"rabbitmq-cell1-server-conf\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Drabbitmq-cell1-server-conf&resourceVersion=109762\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:40.264068 5050 reflector.go:561] object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/configmaps?fieldSelector=metadata.name%3Dkube-controller-manager-operator-config&resourceVersion=109158": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:40.264162 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/configmaps?fieldSelector=metadata.name%3Dkube-controller-manager-operator-config&resourceVersion=109158\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:40.283246 5050 status_manager.go:851] "Failed to get status for pod" podUID="712f888b-7c45-4c1f-95d8-ccc464b7c15f" pod="openstack-operators/watcher-operator-controller-manager-75944c9b7-grhdp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/watcher-operator-controller-manager-75944c9b7-grhdp\": dial tcp 38.102.83.147:6443: connect: connection refused" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:40.301441 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_71bb4a3aecc4ba5b26c4b7318770ce13/kube-apiserver/0.log" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:40.303527 5050 reflector.go:561] object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-api/secrets?fieldSelector=metadata.name%3Dmachine-api-operator-dockercfg-mfbb7&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:40.303603 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"machine-api-operator-dockercfg-mfbb7\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-api/secrets?fieldSelector=metadata.name%3Dmachine-api-operator-dockercfg-mfbb7&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 
15:54:40.305588 5050 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="bdc5023a5b80cbc534e9fae8c924add56e2bae71eb5c0725fc0e14f4b3419495" exitCode=137 Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:40.305663 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"bdc5023a5b80cbc534e9fae8c924add56e2bae71eb5c0725fc0e14f4b3419495"} Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:40.308944 5050 generic.go:334] "Generic (PLEG): container finished" podID="81255772-fddc-4936-8de3-da4649c32d1f" containerID="737f9cb69bc5ba682bce46695449bf052170034d47a489f229d1389e142c9bb4" exitCode=0 Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:40.308978 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8qmpk" event={"ID":"81255772-fddc-4936-8de3-da4649c32d1f","Type":"ContainerDied","Data":"737f9cb69bc5ba682bce46695449bf052170034d47a489f229d1389e142c9bb4"} Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:40.322797 5050 reflector.go:561] object-"openshift-ingress"/"router-dockercfg-zdk86": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress/secrets?fieldSelector=metadata.name%3Drouter-dockercfg-zdk86&resourceVersion=109671": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:40.322874 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-ingress\"/\"router-dockercfg-zdk86\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress/secrets?fieldSelector=metadata.name%3Drouter-dockercfg-zdk86&resourceVersion=109671\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:40.343269 5050 reflector.go:561] object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/secrets?fieldSelector=metadata.name%3Doauth-openshift-dockercfg-znhcc&resourceVersion=109803": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:40.343360 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"oauth-openshift-dockercfg-znhcc\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/secrets?fieldSelector=metadata.name%3Doauth-openshift-dockercfg-znhcc&resourceVersion=109803\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:40.363713 5050 reflector.go:561] object-"openshift-controller-manager-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109559": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:40.363784 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list 
*v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109559\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:40.394829 5050 reflector.go:561] object-"metallb-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109602": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:40.395193 5050 reflector.go:158] "Unhandled Error" err="object-\"metallb-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109602\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:40.403351 5050 reflector.go:561] object-"openshift-route-controller-manager"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109559": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:40.403439 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-route-controller-manager\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109559\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:40.423641 5050 reflector.go:561] object-"openshift-network-operator"/"metrics-tls": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/secrets?fieldSelector=metadata.name%3Dmetrics-tls&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:40.423728 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-operator\"/\"metrics-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/secrets?fieldSelector=metadata.name%3Dmetrics-tls&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:40.446769 5050 reflector.go:561] object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/secrets?fieldSelector=metadata.name%3Dmarketplace-operator-dockercfg-5nsgg&resourceVersion=109803": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:40.446844 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-marketplace\"/\"marketplace-operator-dockercfg-5nsgg\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/secrets?fieldSelector=metadata.name%3Dmarketplace-operator-dockercfg-5nsgg&resourceVersion=109803\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:40.464105 5050 status_manager.go:851] "Failed to get status for pod" podUID="ea884e88-c0df-4212-976a-0d7ce1731fdc" pod="openshift-apiserver/apiserver-76f77b778f-cd66n" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/pods/apiserver-76f77b778f-cd66n\": dial tcp 38.102.83.147:6443: connect: connection refused" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:40.484308 5050 reflector.go:561] object-"openstack"/"telemetry-ceilometer-dockercfg-nl629": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dtelemetry-ceilometer-dockercfg-nl629&resourceVersion=109846": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:40.484410 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"telemetry-ceilometer-dockercfg-nl629\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dtelemetry-ceilometer-dockercfg-nl629&resourceVersion=109846\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:40.513713 5050 reflector.go:561] object-"cert-manager-operator"/"cert-manager-operator-controller-manager-dockercfg-p54cz": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/cert-manager-operator/secrets?fieldSelector=metadata.name%3Dcert-manager-operator-controller-manager-dockercfg-p54cz&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:40.513844 5050 reflector.go:158] "Unhandled Error" err="object-\"cert-manager-operator\"/\"cert-manager-operator-controller-manager-dockercfg-p54cz\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/cert-manager-operator/secrets?fieldSelector=metadata.name%3Dcert-manager-operator-controller-manager-dockercfg-p54cz&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:40.522748 5050 reflector.go:561] object-"openshift-authentication"/"audit": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/configmaps?fieldSelector=metadata.name%3Daudit&resourceVersion=109158": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:40.522951 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"audit\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/configmaps?fieldSelector=metadata.name%3Daudit&resourceVersion=109158\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:40.542813 5050 reflector.go:561] object-"openshift-apiserver"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get 
"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109158": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:40.542893 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109158\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:40.562776 5050 reflector.go:561] object-"openstack"/"nova-nova-dockercfg-mdjbl": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dnova-nova-dockercfg-mdjbl&resourceVersion=109671": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:40.562854 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"nova-nova-dockercfg-mdjbl\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dnova-nova-dockercfg-mdjbl&resourceVersion=109671\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:40.563640 5050 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:6443/readyz\": dial tcp 192.168.126.11:6443: connect: connection refused" start-of-body= Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:40.563699 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/readyz\": dial tcp 192.168.126.11:6443: connect: connection refused" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:40.583296 5050 reflector.go:561] object-"openshift-machine-config-operator"/"kube-rbac-proxy": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/configmaps?fieldSelector=metadata.name%3Dkube-rbac-proxy&resourceVersion=109851": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:40.583382 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"kube-rbac-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/configmaps?fieldSelector=metadata.name%3Dkube-rbac-proxy&resourceVersion=109851\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:40.605610 5050 reflector.go:561] object-"openshift-etcd-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-etcd-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109158": dial tcp 38.102.83.147:6443: connect: connection refused Dec 
11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:40.605692 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-etcd-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-etcd-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109158\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:40.622741 5050 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.147:6443: connect: connection refused" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:40.643455 5050 reflector.go:561] object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-qd9ll": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dironic-operator-controller-manager-dockercfg-qd9ll&resourceVersion=109846": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:40.643543 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"ironic-operator-controller-manager-dockercfg-qd9ll\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dironic-operator-controller-manager-dockercfg-qd9ll&resourceVersion=109846\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:40.663393 5050 reflector.go:561] object-"openstack"/"alertmanager-metric-storage-generated": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dalertmanager-metric-storage-generated&resourceVersion=109846": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:40.663482 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"alertmanager-metric-storage-generated\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dalertmanager-metric-storage-generated&resourceVersion=109846\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:40.683252 5050 reflector.go:561] object-"openshift-machine-api"/"kube-rbac-proxy": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-api/configmaps?fieldSelector=metadata.name%3Dkube-rbac-proxy&resourceVersion=109685": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:40.683333 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"kube-rbac-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-api/configmaps?fieldSelector=metadata.name%3Dkube-rbac-proxy&resourceVersion=109685\": dial tcp 38.102.83.147:6443: 
connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:40.702796 5050 reflector.go:561] object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/secrets?fieldSelector=metadata.name%3Dopenshift-config-operator-dockercfg-7pc5z&resourceVersion=109307": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:40.702881 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-config-operator\"/\"openshift-config-operator-dockercfg-7pc5z\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/secrets?fieldSelector=metadata.name%3Dopenshift-config-operator-dockercfg-7pc5z&resourceVersion=109307\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:40.723608 5050 status_manager.go:851] "Failed to get status for pod" podUID="f5bec9f7-072c-4c21-80ea-af9f59313eef" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5c747c4-l2hpx" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/pods/openstack-baremetal-operator-controller-manager-5c747c4-l2hpx\": dial tcp 38.102.83.147:6443: connect: connection refused" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:40.737058 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-volume-volume1-0" podUID="b94bc94c-a636-4f0d-bd28-0f347e7b1143" containerName="cinder-volume" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:40.737143 5050 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/cinder-volume-volume1-0" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:40.738265 5050 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cinder-volume" containerStatusID={"Type":"cri-o","ID":"be93e909105a092ced14aca5a09c48a140e87c3ee935eba54b3b801f908c1e84"} pod="openstack/cinder-volume-volume1-0" containerMessage="Container cinder-volume failed liveness probe, will be restarted" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:40.738333 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-volume-volume1-0" podUID="b94bc94c-a636-4f0d-bd28-0f347e7b1143" containerName="cinder-volume" containerID="cri-o://be93e909105a092ced14aca5a09c48a140e87c3ee935eba54b3b801f908c1e84" gracePeriod=30 Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:40.743840 5050 reflector.go:561] object-"openshift-apiserver"/"config": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/configmaps?fieldSelector=metadata.name%3Dconfig&resourceVersion=109685": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:40.743918 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/configmaps?fieldSelector=metadata.name%3Dconfig&resourceVersion=109685\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: 
W1211 15:54:40.762814 5050 reflector.go:561] object-"openshift-cluster-version"/"default-dockercfg-gxtc4": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-version/secrets?fieldSelector=metadata.name%3Ddefault-dockercfg-gxtc4&resourceVersion=109743": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:40.762882 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-version\"/\"default-dockercfg-gxtc4\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-version/secrets?fieldSelector=metadata.name%3Ddefault-dockercfg-gxtc4&resourceVersion=109743\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:40.782707 5050 reflector.go:561] object-"openshift-dns"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-dns/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109807": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:40.782801 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-dns\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-dns/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109807\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:40.803810 5050 reflector.go:561] object-"openstack"/"octavia-octavia-dockercfg-h4g5n": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Doctavia-octavia-dockercfg-h4g5n&resourceVersion=109803": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:40.803899 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"octavia-octavia-dockercfg-h4g5n\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Doctavia-octavia-dockercfg-h4g5n&resourceVersion=109803\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:40.823035 5050 reflector.go:561] object-"openstack"/"placement-scripts": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dplacement-scripts&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:40.823401 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"placement-scripts\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dplacement-scripts&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:40.892745 5050 reflector.go:561] object-"openstack"/"glance-default-internal-config-data": failed to list *v1.Secret: Get 
"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dglance-default-internal-config-data&resourceVersion=109586": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:40.892812 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"glance-default-internal-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dglance-default-internal-config-data&resourceVersion=109586\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:40.892712 5050 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/events\": dial tcp 38.102.83.147:6443: connect: connection refused" event=< Dec 11 15:54:41 crc kubenswrapper[5050]: &Event{ObjectMeta:{packageserver-d55dfcdfc-r54sd.18803427859863d6 openshift-operator-lifecycle-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-operator-lifecycle-manager,Name:packageserver-d55dfcdfc-r54sd,UID:dbd5b107-5d08-43af-881c-11540f395267,APIVersion:v1,ResourceVersion:27090,FieldPath:spec.containers{packageserver},},Reason:ProbeError,Message:Liveness probe error: Get "https://10.217.0.23:5443/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Dec 11 15:54:41 crc kubenswrapper[5050]: body: Dec 11 15:54:41 crc kubenswrapper[5050]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 15:53:18.278960086 +0000 UTC m=+7489.122682672,LastTimestamp:2025-12-11 15:53:18.278960086 +0000 UTC m=+7489.122682672,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Dec 11 15:54:41 crc kubenswrapper[5050]: > Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:40.892832 5050 reflector.go:561] object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-storage-version-migrator/secrets?fieldSelector=metadata.name%3Dkube-storage-version-migrator-sa-dockercfg-5xfcg&resourceVersion=109237": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:40.892915 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-kube-storage-version-migrator\"/\"kube-storage-version-migrator-sa-dockercfg-5xfcg\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-storage-version-migrator/secrets?fieldSelector=metadata.name%3Dkube-storage-version-migrator-sa-dockercfg-5xfcg&resourceVersion=109237\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:40.894082 5050 reflector.go:561] object-"openshift-route-controller-manager"/"serving-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/secrets?fieldSelector=metadata.name%3Dserving-cert&resourceVersion=109846": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 
15:54:41 crc kubenswrapper[5050]: E1211 15:54:40.894174 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-route-controller-manager\"/\"serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/secrets?fieldSelector=metadata.name%3Dserving-cert&resourceVersion=109846\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:40.902630 5050 status_manager.go:851] "Failed to get status for pod" podUID="14ad1594-090d-4024-a999-9ffe77ce58d8" pod="openstack/openstack-galera-0" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/pods/openstack-galera-0\": dial tcp 38.102.83.147:6443: connect: connection refused" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:40.924310 5050 reflector.go:561] object-"openshift-network-node-identity"/"env-overrides": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/configmaps?fieldSelector=metadata.name%3Denv-overrides&resourceVersion=109522": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:40.924378 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-node-identity\"/\"env-overrides\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-node-identity/configmaps?fieldSelector=metadata.name%3Denv-overrides&resourceVersion=109522\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:40.943254 5050 reflector.go:561] object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/secrets?fieldSelector=metadata.name%3Dopenshift-apiserver-sa-dockercfg-djjff&resourceVersion=109407": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:40.943320 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"openshift-apiserver-sa-dockercfg-djjff\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/secrets?fieldSelector=metadata.name%3Dopenshift-apiserver-sa-dockercfg-djjff&resourceVersion=109407\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:40.967580 5050 reflector.go:561] object-"openshift-marketplace"/"community-operators-dockercfg-dmngl": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/secrets?fieldSelector=metadata.name%3Dcommunity-operators-dockercfg-dmngl&resourceVersion=109803": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:40.967637 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-marketplace\"/\"community-operators-dockercfg-dmngl\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/secrets?fieldSelector=metadata.name%3Dcommunity-operators-dockercfg-dmngl&resourceVersion=109803\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 
11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:40.985283 5050 reflector.go:561] object-"metallb-system"/"frr-startup": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/configmaps?fieldSelector=metadata.name%3Dfrr-startup&resourceVersion=109158": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:40.985348 5050 reflector.go:158] "Unhandled Error" err="object-\"metallb-system\"/\"frr-startup\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/configmaps?fieldSelector=metadata.name%3Dfrr-startup&resourceVersion=109158\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:41.005626 5050 reflector.go:561] object-"metallb-system"/"metallb-excludel2": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/configmaps?fieldSelector=metadata.name%3Dmetallb-excludel2&resourceVersion=109762": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:41.005724 5050 reflector.go:158] "Unhandled Error" err="object-\"metallb-system\"/\"metallb-excludel2\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/configmaps?fieldSelector=metadata.name%3Dmetallb-excludel2&resourceVersion=109762\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:41.020535 5050 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 11ec72e9f06368997b90f40f70878174683db5a652ebb2eb2c432e34abde950e is running failed: container process not found" containerID="11ec72e9f06368997b90f40f70878174683db5a652ebb2eb2c432e34abde950e" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:41.023488 5050 reflector.go:561] object-"openshift-authentication"/"v4-0-config-user-template-error": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/secrets?fieldSelector=metadata.name%3Dv4-0-config-user-template-error&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:41.023542 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"v4-0-config-user-template-error\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/secrets?fieldSelector=metadata.name%3Dv4-0-config-user-template-error&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:41.025448 5050 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 11ec72e9f06368997b90f40f70878174683db5a652ebb2eb2c432e34abde950e is running failed: container process not found" containerID="11ec72e9f06368997b90f40f70878174683db5a652ebb2eb2c432e34abde950e" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Dec 11 15:54:41 crc kubenswrapper[5050]: 
E1211 15:54:41.026127 5050 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 11ec72e9f06368997b90f40f70878174683db5a652ebb2eb2c432e34abde950e is running failed: container process not found" containerID="11ec72e9f06368997b90f40f70878174683db5a652ebb2eb2c432e34abde950e" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:41.026160 5050 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 11ec72e9f06368997b90f40f70878174683db5a652ebb2eb2c432e34abde950e is running failed: container process not found" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="14ad1594-090d-4024-a999-9ffe77ce58d8" containerName="galera" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:41.043586 5050 reflector.go:561] object-"openstack"/"octavia-worker-scripts": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Doctavia-worker-scripts&resourceVersion=109671": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:41.043663 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"octavia-worker-scripts\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Doctavia-worker-scripts&resourceVersion=109671\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:41.063064 5050 reflector.go:561] object-"openstack"/"keystone-keystone-dockercfg-jrbb7": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dkeystone-keystone-dockercfg-jrbb7&resourceVersion=109887": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:41.063136 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"keystone-keystone-dockercfg-jrbb7\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dkeystone-keystone-dockercfg-jrbb7&resourceVersion=109887\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:41.085082 5050 reflector.go:561] object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-zlz4m": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dbarbican-operator-controller-manager-dockercfg-zlz4m&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:41.085164 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"barbican-operator-controller-manager-dockercfg-zlz4m\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dbarbican-operator-controller-manager-dockercfg-zlz4m&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc 
kubenswrapper[5050]: W1211 15:54:41.103338 5050 reflector.go:561] object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/hostpath-provisioner/secrets?fieldSelector=metadata.name%3Dcsi-hostpath-provisioner-sa-dockercfg-qd74k&resourceVersion=109184": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:41.103415 5050 reflector.go:158] "Unhandled Error" err="object-\"hostpath-provisioner\"/\"csi-hostpath-provisioner-sa-dockercfg-qd74k\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/hostpath-provisioner/secrets?fieldSelector=metadata.name%3Dcsi-hostpath-provisioner-sa-dockercfg-qd74k&resourceVersion=109184\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:41.123300 5050 reflector.go:561] object-"openshift-multus"/"default-dockercfg-2q5b6": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-multus/secrets?fieldSelector=metadata.name%3Ddefault-dockercfg-2q5b6&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:41.123389 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"default-dockercfg-2q5b6\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-multus/secrets?fieldSelector=metadata.name%3Ddefault-dockercfg-2q5b6&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:41.143218 5050 reflector.go:561] object-"cert-manager"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/cert-manager/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109888": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:41.143293 5050 reflector.go:158] "Unhandled Error" err="object-\"cert-manager\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/cert-manager/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109888\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:41.162998 5050 reflector.go:561] object-"metallb-system"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109522": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:41.163090 5050 reflector.go:158] "Unhandled Error" err="object-\"metallb-system\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/metallb-system/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109522\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:41.183641 5050 
status_manager.go:851] "Failed to get status for pod" podUID="81255772-fddc-4936-8de3-da4649c32d1f" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8qmpk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/catalog-operator-68c6474976-8qmpk\": dial tcp 38.102.83.147:6443: connect: connection refused" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:41.202573 5050 reflector.go:561] object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-kube-scheduler-operator-config&resourceVersion=109158": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:41.202650 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-scheduler-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-kube-scheduler-operator-config&resourceVersion=109158\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:41.223428 5050 reflector.go:561] object-"openshift-service-ca"/"service-ca-dockercfg-pn86c": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca/secrets?fieldSelector=metadata.name%3Dservice-ca-dockercfg-pn86c&resourceVersion=109803": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:41.223933 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-service-ca\"/\"service-ca-dockercfg-pn86c\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca/secrets?fieldSelector=metadata.name%3Dservice-ca-dockercfg-pn86c&resourceVersion=109803\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:41.243583 5050 reflector.go:561] object-"openshift-multus"/"cni-copy-resources": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-multus/configmaps?fieldSelector=metadata.name%3Dcni-copy-resources&resourceVersion=109888": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:41.243663 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"cni-copy-resources\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-multus/configmaps?fieldSelector=metadata.name%3Dcni-copy-resources&resourceVersion=109888\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:41.263309 5050 reflector.go:561] object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-djswv": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Doctavia-operator-controller-manager-dockercfg-djswv&resourceVersion=109803": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 
crc kubenswrapper[5050]: E1211 15:54:41.263404 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"octavia-operator-controller-manager-dockercfg-djswv\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Doctavia-operator-controller-manager-dockercfg-djswv&resourceVersion=109803\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:41.283276 5050 reflector.go:561] object-"openshift-console"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109559": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:41.283356 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-console\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109559\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:41.303350 5050 reflector.go:561] object-"openshift-oauth-apiserver"/"etcd-client": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-oauth-apiserver/secrets?fieldSelector=metadata.name%3Detcd-client&resourceVersion=109671": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:41.303416 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-oauth-apiserver\"/\"etcd-client\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-oauth-apiserver/secrets?fieldSelector=metadata.name%3Detcd-client&resourceVersion=109671\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:41.328726 5050 reflector.go:561] object-"openshift-dns-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-dns-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109158": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:41.328791 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-dns-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-dns-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109158\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:41.335517 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-backup-0" podUID="0f954250-5982-4088-839a-8faf7bfe203c" containerName="cinder-backup" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:41.335583 5050 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/cinder-backup-0" Dec 11 15:54:41 crc kubenswrapper[5050]: 
I1211 15:54:41.336188 5050 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cinder-backup" containerStatusID={"Type":"cri-o","ID":"2b1e9cc4138c3cee1e26c2a91b99b470ec5f4b50f5007ad3b68d6bbde49915a9"} pod="openstack/cinder-backup-0" containerMessage="Container cinder-backup failed liveness probe, will be restarted" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:41.336255 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-backup-0" podUID="0f954250-5982-4088-839a-8faf7bfe203c" containerName="cinder-backup" containerID="cri-o://2b1e9cc4138c3cee1e26c2a91b99b470ec5f4b50f5007ad3b68d6bbde49915a9" gracePeriod=30 Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:41.342874 5050 reflector.go:561] object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/configmaps?fieldSelector=metadata.name%3Dv4-0-config-system-trusted-ca-bundle&resourceVersion=109888": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:41.342952 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"v4-0-config-system-trusted-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/configmaps?fieldSelector=metadata.name%3Dv4-0-config-system-trusted-ca-bundle&resourceVersion=109888\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:41.343894 5050 generic.go:334] "Generic (PLEG): container finished" podID="12778398-2baa-44cb-9fd1-f2034870e9fc" containerID="d01989086e6e233a8a7e550489bb5ac0e0797c26c266f1d28706afc6f3accdda" exitCode=1 Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:41.343951 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5854674fcc-qqb7f" event={"ID":"12778398-2baa-44cb-9fd1-f2034870e9fc","Type":"ContainerDied","Data":"d01989086e6e233a8a7e550489bb5ac0e0797c26c266f1d28706afc6f3accdda"} Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:41.344770 5050 scope.go:117] "RemoveContainer" containerID="d01989086e6e233a8a7e550489bb5ac0e0797c26c266f1d28706afc6f3accdda" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:41.350808 5050 generic.go:334] "Generic (PLEG): container finished" podID="bff5d533-0728-4436-bdeb-c725bf04bdb3" containerID="cc1aebcc94f6c4f1c94befd20ad91bc84f12d60166bde82ab44388d1cde4d3bb" exitCode=1 Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:41.350858 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-58d5ff84df-ssfnh" event={"ID":"bff5d533-0728-4436-bdeb-c725bf04bdb3","Type":"ContainerDied","Data":"cc1aebcc94f6c4f1c94befd20ad91bc84f12d60166bde82ab44388d1cde4d3bb"} Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:41.351413 5050 scope.go:117] "RemoveContainer" containerID="cc1aebcc94f6c4f1c94befd20ad91bc84f12d60166bde82ab44388d1cde4d3bb" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:41.356774 5050 generic.go:334] "Generic (PLEG): container finished" podID="048e17a7-0123-45a2-b698-02def3db74fe" containerID="90984eafc9bf0a2dfd6de44205765a78318573224fe9dac04cbc812f5a363bc3" exitCode=1 Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:41.356850 5050 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack-operators/glance-operator-controller-manager-5697bb5779-9tcm2" event={"ID":"048e17a7-0123-45a2-b698-02def3db74fe","Type":"ContainerDied","Data":"90984eafc9bf0a2dfd6de44205765a78318573224fe9dac04cbc812f5a363bc3"} Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:41.357791 5050 scope.go:117] "RemoveContainer" containerID="90984eafc9bf0a2dfd6de44205765a78318573224fe9dac04cbc812f5a363bc3" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:41.363418 5050 reflector.go:561] object-"openstack-operators"/"metrics-server-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dmetrics-server-cert&resourceVersion=109671": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:41.363540 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"metrics-server-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dmetrics-server-cert&resourceVersion=109671\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:41.365540 5050 generic.go:334] "Generic (PLEG): container finished" podID="53a1a6d1-6999-4fa1-a0ce-a20b83f1f347" containerID="5c7d365dce07000662fe39681396cb0bff613093313821486caa669cf3a6ba43" exitCode=0 Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:41.365675 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-799b66f579-tqs2v" event={"ID":"53a1a6d1-6999-4fa1-a0ce-a20b83f1f347","Type":"ContainerDied","Data":"5c7d365dce07000662fe39681396cb0bff613093313821486caa669cf3a6ba43"} Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:41.370104 5050 generic.go:334] "Generic (PLEG): container finished" podID="6a956942-e4db-4c66-a7c2-1c370c1569f4" containerID="abbb98db2a4a22273080da7e82e528f04ecd9efd37eee52f71d3c5fe7d719895" exitCode=0 Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:41.370178 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-69cffd76bd-8bkp6" event={"ID":"6a956942-e4db-4c66-a7c2-1c370c1569f4","Type":"ContainerDied","Data":"abbb98db2a4a22273080da7e82e528f04ecd9efd37eee52f71d3c5fe7d719895"} Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:41.376414 5050 generic.go:334] "Generic (PLEG): container finished" podID="d7b70b3b-5481-4ac2-8e60-256e2690752f" containerID="91f1d18ce3c391a2d8cfcf26b53b446461d57ed0ef6059ae81ecc6c004531c75" exitCode=1 Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:41.376547 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-967d97867-7stc2" event={"ID":"d7b70b3b-5481-4ac2-8e60-256e2690752f","Type":"ContainerDied","Data":"91f1d18ce3c391a2d8cfcf26b53b446461d57ed0ef6059ae81ecc6c004531c75"} Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:41.377562 5050 scope.go:117] "RemoveContainer" containerID="91f1d18ce3c391a2d8cfcf26b53b446461d57ed0ef6059ae81ecc6c004531c75" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:41.379222 5050 generic.go:334] "Generic (PLEG): container finished" podID="2ebcfee9-160d-4440-b885-66ae4d5d66a7" containerID="b5b238e11deb76d8ada2567393d84664750165ff357bd9ffde0f805ab53433f1" exitCode=1 Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 
15:54:41.379317 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-855d9ccff4-wksw5" event={"ID":"2ebcfee9-160d-4440-b885-66ae4d5d66a7","Type":"ContainerDied","Data":"b5b238e11deb76d8ada2567393d84664750165ff357bd9ffde0f805ab53433f1"} Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:41.380306 5050 scope.go:117] "RemoveContainer" containerID="b5b238e11deb76d8ada2567393d84664750165ff357bd9ffde0f805ab53433f1" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:41.382950 5050 reflector.go:561] object-"openshift-machine-config-operator"/"machine-config-server-tls": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmachine-config-server-tls&resourceVersion=109887": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:41.383054 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"machine-config-server-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmachine-config-server-tls&resourceVersion=109887\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:41.392303 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-vrzqb_ef543e1b-8068-4ea3-b32a-61027b32e95d/approver/0.log" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:41.393594 5050 generic.go:334] "Generic (PLEG): container finished" podID="ef543e1b-8068-4ea3-b32a-61027b32e95d" containerID="61e3950610f64909e8cac86e375c431913174fe7038ade2f26dffb72452219d0" exitCode=1 Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:41.393760 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerDied","Data":"61e3950610f64909e8cac86e375c431913174fe7038ade2f26dffb72452219d0"} Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:41.395695 5050 scope.go:117] "RemoveContainer" containerID="61e3950610f64909e8cac86e375c431913174fe7038ade2f26dffb72452219d0" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:41.403295 5050 reflector.go:561] object-"openshift-ingress-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109807": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:41.403404 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-ingress-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109807\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:41.412501 5050 generic.go:334] "Generic (PLEG): container finished" podID="105854f4-5cc1-491f-983a-50864b37893f" containerID="559fe69c268927c75729b6de3c99a83913eae1a1967823238e78bb9a9e507a28" exitCode=1 Dec 11 15:54:41 crc 
kubenswrapper[5050]: I1211 15:54:41.412617 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-xl9wl" event={"ID":"105854f4-5cc1-491f-983a-50864b37893f","Type":"ContainerDied","Data":"559fe69c268927c75729b6de3c99a83913eae1a1967823238e78bb9a9e507a28"} Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:41.413452 5050 scope.go:117] "RemoveContainer" containerID="559fe69c268927c75729b6de3c99a83913eae1a1967823238e78bb9a9e507a28" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:41.422558 5050 reflector.go:561] object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-mc6vn": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Ddesignate-operator-controller-manager-dockercfg-mc6vn&resourceVersion=109087": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:41.422630 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"designate-operator-controller-manager-dockercfg-mc6vn\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Ddesignate-operator-controller-manager-dockercfg-mc6vn&resourceVersion=109087\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:41.426155 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-n7crp" event={"ID":"85683dfb-37fb-4301-8c7a-fbb7453b303d","Type":"ContainerStarted","Data":"d0c66c987d202aed1c5e082e7328fa81cd946732334534ec2127461cf702b73c"} Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:41.426289 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-n7crp" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:41.436650 5050 generic.go:334] "Generic (PLEG): container finished" podID="b885fa10-3ed3-41fd-94ae-2b7442519450" containerID="aec394cc63296e9f55ffee2ce659881063d43cd980faccaee47e6fbb8456acd5" exitCode=1 Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:41.436802 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-ttg8w" event={"ID":"b885fa10-3ed3-41fd-94ae-2b7442519450","Type":"ContainerDied","Data":"aec394cc63296e9f55ffee2ce659881063d43cd980faccaee47e6fbb8456acd5"} Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:41.439114 5050 scope.go:117] "RemoveContainer" containerID="aec394cc63296e9f55ffee2ce659881063d43cd980faccaee47e6fbb8456acd5" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:41.443716 5050 reflector.go:561] object-"openshift-authentication"/"v4-0-config-system-cliconfig": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/configmaps?fieldSelector=metadata.name%3Dv4-0-config-system-cliconfig&resourceVersion=109158": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:41.443812 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"v4-0-config-system-cliconfig\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/configmaps?fieldSelector=metadata.name%3Dv4-0-config-system-cliconfig&resourceVersion=109158\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:41.446110 5050 generic.go:334] "Generic (PLEG): container finished" podID="5b7ceea3-4e92-46ee-81de-5b8f932144ad" containerID="c5f542dfbbbe579335c7c9dd39dbf3a87a4d70edea814f820b53757fc41f7607" exitCode=0 Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:41.446233 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-767f6d799d-cv7mn" event={"ID":"5b7ceea3-4e92-46ee-81de-5b8f932144ad","Type":"ContainerDied","Data":"c5f542dfbbbe579335c7c9dd39dbf3a87a4d70edea814f820b53757fc41f7607"} Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:41.446266 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-767f6d799d-cv7mn" event={"ID":"5b7ceea3-4e92-46ee-81de-5b8f932144ad","Type":"ContainerStarted","Data":"9481be7905ec767d1eb25b75503baf4e58a32d6dfba0336334447c0581b2b468"} Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:41.446556 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-767f6d799d-cv7mn" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:41.447213 5050 patch_prober.go:28] interesting pod/route-controller-manager-767f6d799d-cv7mn container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.64:8443/healthz\": dial tcp 10.217.0.64:8443: connect: connection refused" start-of-body= Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:41.447256 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-767f6d799d-cv7mn" podUID="5b7ceea3-4e92-46ee-81de-5b8f932144ad" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.64:8443/healthz\": dial tcp 10.217.0.64:8443: connect: connection refused" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:41.456362 5050 generic.go:334] "Generic (PLEG): container finished" podID="a6345cf8-abc2-4c9a-bfe6-8b65187ada2d" containerID="05e1cfed054359e4690695e0e19eb3250041fc5e66353373c649dc422c3083c9" exitCode=1 Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:41.456455 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-8jnzj" event={"ID":"a6345cf8-abc2-4c9a-bfe6-8b65187ada2d","Type":"ContainerDied","Data":"05e1cfed054359e4690695e0e19eb3250041fc5e66353373c649dc422c3083c9"} Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:41.457341 5050 scope.go:117] "RemoveContainer" containerID="05e1cfed054359e4690695e0e19eb3250041fc5e66353373c649dc422c3083c9" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:41.468578 5050 reflector.go:561] object-"openshift-operator-lifecycle-manager"/"pprof-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/secrets?fieldSelector=metadata.name%3Dpprof-cert&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:41.469263 5050 reflector.go:158] "Unhandled Error" 
err="object-\"openshift-operator-lifecycle-manager\"/\"pprof-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/secrets?fieldSelector=metadata.name%3Dpprof-cert&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:41.472330 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:41.479110 5050 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="696de0133c65c6e6ea70d6299312593ddfc638a01a5f4783ed9082195fb6fb31" exitCode=0 Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:41.479182 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"696de0133c65c6e6ea70d6299312593ddfc638a01a5f4783ed9082195fb6fb31"} Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:41.479217 5050 scope.go:117] "RemoveContainer" containerID="134dcca79c4eddbde6843383841b57eedfeb7b3ef8680a39b2a851f7ca914c11" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:41.482946 5050 reflector.go:561] object-"openshift-authentication"/"v4-0-config-system-router-certs": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/secrets?fieldSelector=metadata.name%3Dv4-0-config-system-router-certs&resourceVersion=109846": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:41.483098 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"v4-0-config-system-router-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/secrets?fieldSelector=metadata.name%3Dv4-0-config-system-router-certs&resourceVersion=109846\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:41.484604 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-pjhfc" event={"ID":"3c9e825c-0aee-42b9-a7a5-3191486f301d","Type":"ContainerStarted","Data":"e92adb0547b2d146e8e2ad33a0f2c36f2e91892a42c77e4be71a05a2c6216d43"} Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:41.485220 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-pjhfc" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:41.497117 5050 generic.go:334] "Generic (PLEG): container finished" podID="9c82f51b-e2a0-49e6-bc0e-d7679e439a6f" containerID="bfed92f2b27b2071868d25582796802cd9202ad55e3580db605a4a3f8e78aa24" exitCode=1 Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:41.497205 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-h4w7p" event={"ID":"9c82f51b-e2a0-49e6-bc0e-d7679e439a6f","Type":"ContainerDied","Data":"bfed92f2b27b2071868d25582796802cd9202ad55e3580db605a4a3f8e78aa24"} Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:41.498057 5050 scope.go:117] 
"RemoveContainer" containerID="bfed92f2b27b2071868d25582796802cd9202ad55e3580db605a4a3f8e78aa24" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:41.505592 5050 generic.go:334] "Generic (PLEG): container finished" podID="58a60a6f-fcf5-4aa0-b6e4-97d7df2b2bd2" containerID="979e0deb887552cefe316acef526be5df838912215e620a28e6177f1a500c441" exitCode=0 Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:41.505683 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-65swh" event={"ID":"58a60a6f-fcf5-4aa0-b6e4-97d7df2b2bd2","Type":"ContainerDied","Data":"979e0deb887552cefe316acef526be5df838912215e620a28e6177f1a500c441"} Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:41.508433 5050 reflector.go:561] object-"openshift-service-ca-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109888": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:41.508506 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-service-ca-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109888\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:41.512484 5050 generic.go:334] "Generic (PLEG): container finished" podID="5e11a0d1-4179-4621-803d-839196fb940b" containerID="0227acc6f5c529de7ffe8cf29065a76cc73e2ec49c7d56ced235fe90d0badca7" exitCode=1 Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:41.512625 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-jq9vt" event={"ID":"5e11a0d1-4179-4621-803d-839196fb940b","Type":"ContainerDied","Data":"0227acc6f5c529de7ffe8cf29065a76cc73e2ec49c7d56ced235fe90d0badca7"} Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:41.513540 5050 scope.go:117] "RemoveContainer" containerID="0227acc6f5c529de7ffe8cf29065a76cc73e2ec49c7d56ced235fe90d0badca7" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:41.523130 5050 reflector.go:561] object-"openshift-console-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109888": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:41.523212 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-console-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=109888\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:41.524488 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5c747c4-l2hpx" 
event={"ID":"f5bec9f7-072c-4c21-80ea-af9f59313eef","Type":"ContainerDied","Data":"f01e89cc8a0c33752a723a1e08042cee2c7882d75ad6f74fc2adb84f7939c81f"} Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:41.529139 5050 generic.go:334] "Generic (PLEG): container finished" podID="f5bec9f7-072c-4c21-80ea-af9f59313eef" containerID="f01e89cc8a0c33752a723a1e08042cee2c7882d75ad6f74fc2adb84f7939c81f" exitCode=1 Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:41.533148 5050 scope.go:117] "RemoveContainer" containerID="f01e89cc8a0c33752a723a1e08042cee2c7882d75ad6f74fc2adb84f7939c81f" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:41.533660 5050 generic.go:334] "Generic (PLEG): container finished" podID="cedca20e-aaaa-4190-944d-8f18bd93f737" containerID="a8d9d63a9e63134c857b78bccb377dd075cf8e58cb3004339094bafbbee23e50" exitCode=1 Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:41.533721 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-78f8948974-wsc2k" event={"ID":"cedca20e-aaaa-4190-944d-8f18bd93f737","Type":"ContainerDied","Data":"a8d9d63a9e63134c857b78bccb377dd075cf8e58cb3004339094bafbbee23e50"} Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:41.534602 5050 scope.go:117] "RemoveContainer" containerID="a8d9d63a9e63134c857b78bccb377dd075cf8e58cb3004339094bafbbee23e50" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:41.539183 5050 generic.go:334] "Generic (PLEG): container finished" podID="d1f35830-d883-4b41-ab97-7c382dec0387" containerID="9bcb72a7122b2373360a63afde5cbe6a0bd4eb7948390af3be0726b4527fa52b" exitCode=0 Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:41.539251 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9jns7" event={"ID":"d1f35830-d883-4b41-ab97-7c382dec0387","Type":"ContainerDied","Data":"9bcb72a7122b2373360a63afde5cbe6a0bd4eb7948390af3be0726b4527fa52b"} Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:41.541543 5050 generic.go:334] "Generic (PLEG): container finished" podID="b3b941f1-576d-4b49-871b-3666eda635ff" containerID="2c79f39871dcc870bbb0bf089f57ee1c74e03d8f6076ab4cbe7a430584e5b026" exitCode=1 Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:41.541585 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-5b74fbd87-zqsjt" event={"ID":"b3b941f1-576d-4b49-871b-3666eda635ff","Type":"ContainerDied","Data":"2c79f39871dcc870bbb0bf089f57ee1c74e03d8f6076ab4cbe7a430584e5b026"} Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:41.542276 5050 scope.go:117] "RemoveContainer" containerID="2c79f39871dcc870bbb0bf089f57ee1c74e03d8f6076ab4cbe7a430584e5b026" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:41.545844 5050 reflector.go:561] object-"openshift-console-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109762": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:41.545908 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-console-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109762\": dial 
tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:41.552955 5050 generic.go:334] "Generic (PLEG): container finished" podID="7fb00b03-fe6e-4c66-bd36-adf9443871a8" containerID="0c5246f2871c8299a1d9fbf47c96087b750b411a58afd3370be94ca88a66119e" exitCode=0 Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:41.555722 5050 generic.go:334] "Generic (PLEG): container finished" podID="6c2d8343-b085-4545-9a26-2dd0bf907b5e" containerID="e507e88b18168e702516add495e9defa31579d4b24045010d2d2bbc99fd7ddb9" exitCode=0 Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:41.561302 5050 generic.go:334] "Generic (PLEG): container finished" podID="fd564500-1ab5-401f-84a8-79c80dfe50ab" containerID="8524af2e52e0f2f7c5a578351b1c48256af5c75dc11b8e910edae9209aea1b48" exitCode=0 Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:41.561368 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-qdrgd" podUID="4b4e8aa9-a0f6-4bd2-92d2-c524aedf6389" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.74:8081/readyz\": dial tcp 10.217.0.74:8081: connect: connection refused" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:41.562570 5050 reflector.go:561] object-"openshift-network-operator"/"iptables-alerter-script": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/configmaps?fieldSelector=metadata.name%3Diptables-alerter-script&resourceVersion=109158": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:41.562650 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-operator\"/\"iptables-alerter-script\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/configmaps?fieldSelector=metadata.name%3Diptables-alerter-script&resourceVersion=109158\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:41.569951 5050 generic.go:334] "Generic (PLEG): container finished" podID="dc74a2ef-5885-462e-a5b8-7b50454df35b" containerID="dcfa17f99a1c1948ff38bd67fedb10de764c36a7bd5d77919ea42d684801b05d" exitCode=1 Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:41.572331 5050 generic.go:334] "Generic (PLEG): container finished" podID="fa0985f7-7d87-41b3-9916-f22375a0489c" containerID="0d0410cf581b6a3bf74633d4810a438a458442f5365cab80578272c2419bfbe3" exitCode=1 Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:41.576624 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"7fb00b03-fe6e-4c66-bd36-adf9443871a8","Type":"ContainerDied","Data":"0c5246f2871c8299a1d9fbf47c96087b750b411a58afd3370be94ca88a66119e"} Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:41.576765 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-openstack-openstack-cell1-5pb7q" event={"ID":"6c2d8343-b085-4545-9a26-2dd0bf907b5e","Type":"ContainerDied","Data":"e507e88b18168e702516add495e9defa31579d4b24045010d2d2bbc99fd7ddb9"} Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:41.576798 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-5g8lw" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 
15:54:41.576813 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-cgxqx" event={"ID":"fd564500-1ab5-401f-84a8-79c80dfe50ab","Type":"ContainerDied","Data":"8524af2e52e0f2f7c5a578351b1c48256af5c75dc11b8e910edae9209aea1b48"} Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:41.576828 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-mcj5n" event={"ID":"dc74a2ef-5885-462e-a5b8-7b50454df35b","Type":"ContainerDied","Data":"dcfa17f99a1c1948ff38bd67fedb10de764c36a7bd5d77919ea42d684801b05d"} Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:41.576844 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-998648c74-lvbdb" event={"ID":"fa0985f7-7d87-41b3-9916-f22375a0489c","Type":"ContainerDied","Data":"0d0410cf581b6a3bf74633d4810a438a458442f5365cab80578272c2419bfbe3"} Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:41.576953 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-5g8lw" event={"ID":"3477354d-838b-48cc-a6c3-612088d82640","Type":"ContainerStarted","Data":"5598ff8b0c33bcd4f4af65853a83d65c9f82e43c9536b2d9effa915626ca1091"} Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:41.577537 5050 scope.go:117] "RemoveContainer" containerID="8524af2e52e0f2f7c5a578351b1c48256af5c75dc11b8e910edae9209aea1b48" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:41.577755 5050 scope.go:117] "RemoveContainer" containerID="dcfa17f99a1c1948ff38bd67fedb10de764c36a7bd5d77919ea42d684801b05d" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:41.579645 5050 generic.go:334] "Generic (PLEG): container finished" podID="a6130f1a-c95b-445f-8235-e57fdcb270fe" containerID="d17d46308a7245c9596076d60060e727d0919075ec397f55611d87cd27d538dc" exitCode=1 Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:41.580419 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-5d666c4679-krnwf" event={"ID":"a6130f1a-c95b-445f-8235-e57fdcb270fe","Type":"ContainerDied","Data":"d17d46308a7245c9596076d60060e727d0919075ec397f55611d87cd27d538dc"} Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:41.580884 5050 scope.go:117] "RemoveContainer" containerID="d17d46308a7245c9596076d60060e727d0919075ec397f55611d87cd27d538dc" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:41.582887 5050 status_manager.go:851] "Failed to get status for pod" podUID="d1f35830-d883-4b41-ab97-7c382dec0387" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9jns7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/pods/package-server-manager-789f6589d5-9jns7\": dial tcp 38.102.83.147:6443: connect: connection refused" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:41.584179 5050 scope.go:117] "RemoveContainer" containerID="0d0410cf581b6a3bf74633d4810a438a458442f5365cab80578272c2419bfbe3" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:41.584625 5050 generic.go:334] "Generic (PLEG): container finished" podID="9726c3f9-bcae-4722-a054-5a66c161953b" containerID="67f5e275e8639848b24d74b4f8b7bbeea49770d73edebe643bd6731490eadeaf" exitCode=1 Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:41.584696 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-qbc2f" 
event={"ID":"9726c3f9-bcae-4722-a054-5a66c161953b","Type":"ContainerDied","Data":"67f5e275e8639848b24d74b4f8b7bbeea49770d73edebe643bd6731490eadeaf"} Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:41.585670 5050 scope.go:117] "RemoveContainer" containerID="67f5e275e8639848b24d74b4f8b7bbeea49770d73edebe643bd6731490eadeaf" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:41.590310 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_packageserver-d55dfcdfc-r54sd_dbd5b107-5d08-43af-881c-11540f395267/packageserver/0.log" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:41.590837 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r54sd" event={"ID":"dbd5b107-5d08-43af-881c-11540f395267","Type":"ContainerStarted","Data":"de1b1f9c175c871aa7da4a382e9da58de46fecd277b0865c0531e6f8a35d26d6"} Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:41.593061 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r54sd" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:41.593173 5050 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-r54sd container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.23:5443/healthz\": dial tcp 10.217.0.23:5443: connect: connection refused" start-of-body= Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:41.593320 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r54sd" podUID="dbd5b107-5d08-43af-881c-11540f395267" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.23:5443/healthz\": dial tcp 10.217.0.23:5443: connect: connection refused" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:41.596055 5050 generic.go:334] "Generic (PLEG): container finished" podID="712f888b-7c45-4c1f-95d8-ccc464b7c15f" containerID="cee7a07a8266303d97a8acfa2e535e8bc4b66e8fe482d34d7f556f927bc90d01" exitCode=1 Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:41.596118 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-75944c9b7-grhdp" event={"ID":"712f888b-7c45-4c1f-95d8-ccc464b7c15f","Type":"ContainerDied","Data":"cee7a07a8266303d97a8acfa2e535e8bc4b66e8fe482d34d7f556f927bc90d01"} Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:41.597074 5050 scope.go:117] "RemoveContainer" containerID="cee7a07a8266303d97a8acfa2e535e8bc4b66e8fe482d34d7f556f927bc90d01" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:41.598802 5050 generic.go:334] "Generic (PLEG): container finished" podID="0aa7657b-dbca-4b2b-ac62-7000681a918a" containerID="759eebf464b9b81e3e452ececdef6061836e4d6a44710cd6dbbb7f8042ffb464" exitCode=1 Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:41.598920 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-697fb699cf-sqjhh" event={"ID":"0aa7657b-dbca-4b2b-ac62-7000681a918a","Type":"ContainerDied","Data":"759eebf464b9b81e3e452ececdef6061836e4d6a44710cd6dbbb7f8042ffb464"} Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:41.599631 5050 scope.go:117] "RemoveContainer" containerID="759eebf464b9b81e3e452ececdef6061836e4d6a44710cd6dbbb7f8042ffb464" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:41.603032 5050 reflector.go:561] 
object-"openshift-etcd-operator"/"etcd-operator-config": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-etcd-operator/configmaps?fieldSelector=metadata.name%3Detcd-operator-config&resourceVersion=109158": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:41.603103 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-etcd-operator\"/\"etcd-operator-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-etcd-operator/configmaps?fieldSelector=metadata.name%3Detcd-operator-config&resourceVersion=109158\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:41.613400 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-qdrgd" event={"ID":"4b4e8aa9-a0f6-4bd2-92d2-c524aedf6389","Type":"ContainerStarted","Data":"888e15725783c7af0abf41626f87cb03733f21dc8ca26f55cb66249108b41140"} Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:41.613704 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-qdrgd" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:41.622851 5050 reflector.go:561] object-"openshift-etcd-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-etcd-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109158": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:41.623743 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-etcd-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-etcd-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109158\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:41.624151 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_catalog-operator-68c6474976-8qmpk_81255772-fddc-4936-8de3-da4649c32d1f/catalog-operator/1.log" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:41.629970 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8qmpk" event={"ID":"81255772-fddc-4936-8de3-da4649c32d1f","Type":"ContainerStarted","Data":"bb525485ebcd5a45f27963f68014603e30e8084aff55811b37cee2e784c94676"} Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:41.631117 5050 scope.go:117] "RemoveContainer" containerID="bb525485ebcd5a45f27963f68014603e30e8084aff55811b37cee2e784c94676" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:41.643431 5050 reflector.go:561] object-"openshift-authentication"/"v4-0-config-system-session": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/secrets?fieldSelector=metadata.name%3Dv4-0-config-system-session&resourceVersion=109137": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:41.643526 5050 reflector.go:158] "Unhandled Error" 
err="object-\"openshift-authentication\"/\"v4-0-config-system-session\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/secrets?fieldSelector=metadata.name%3Dv4-0-config-system-session&resourceVersion=109137\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:41.666236 5050 reflector.go:561] object-"openstack"/"cinder-volume-volume1-config-data": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcinder-volume-volume1-config-data&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:41.666320 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"cinder-volume-volume1-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcinder-volume-volume1-config-data&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:41.670250 5050 generic.go:334] "Generic (PLEG): container finished" podID="03c178a1-6fd8-4e37-8894-bcde36cef2b5" containerID="fdd20fe3de48744d9d76bc3a7bc81e20bbb24203abfebe2b9b047ef24da55d83" exitCode=1 Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:41.670305 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-86cb77c54b-9sn7j" event={"ID":"03c178a1-6fd8-4e37-8894-bcde36cef2b5","Type":"ContainerDied","Data":"fdd20fe3de48744d9d76bc3a7bc81e20bbb24203abfebe2b9b047ef24da55d83"} Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:41.671023 5050 scope.go:117] "RemoveContainer" containerID="fdd20fe3de48744d9d76bc3a7bc81e20bbb24203abfebe2b9b047ef24da55d83" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:41.681206 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-697fb699cf-sqjhh" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:41.689300 5050 reflector.go:561] object-"openshift-cluster-version"/"cluster-version-operator-serving-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-version/secrets?fieldSelector=metadata.name%3Dcluster-version-operator-serving-cert&resourceVersion=109586": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:41.689376 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-version\"/\"cluster-version-operator-serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-version/secrets?fieldSelector=metadata.name%3Dcluster-version-operator-serving-cert&resourceVersion=109586\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:41.703620 5050 reflector.go:561] object-"openshift-service-ca"/"signing-key": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca/secrets?fieldSelector=metadata.name%3Dsigning-key&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: 
E1211 15:54:41.703707 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-service-ca\"/\"signing-key\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca/secrets?fieldSelector=metadata.name%3Dsigning-key&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:41.723524 5050 reflector.go:561] object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-mz95s": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dopenstack-operator-controller-manager-dockercfg-mz95s&resourceVersion=109743": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:41.723618 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"openstack-operator-controller-manager-dockercfg-mz95s\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dopenstack-operator-controller-manager-dockercfg-mz95s&resourceVersion=109743\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:41.743056 5050 reflector.go:561] object-"openshift-ovn-kubernetes"/"ovnkube-config": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ovn-kubernetes/configmaps?fieldSelector=metadata.name%3Dovnkube-config&resourceVersion=109762": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:41.743290 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"ovnkube-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ovn-kubernetes/configmaps?fieldSelector=metadata.name%3Dovnkube-config&resourceVersion=109762\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:41.764894 5050 reflector.go:561] object-"openstack"/"cert-galera-openstack-cell1-svc": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-galera-openstack-cell1-svc&resourceVersion=109743": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:41.764981 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"cert-galera-openstack-cell1-svc\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcert-galera-openstack-cell1-svc&resourceVersion=109743\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:41.783354 5050 reflector.go:561] object-"openstack"/"horizon-config-data": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dhorizon-config-data&resourceVersion=109522": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:41.783429 5050 reflector.go:158] 
"Unhandled Error" err="object-\"openstack\"/\"horizon-config-data\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dhorizon-config-data&resourceVersion=109522\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:41.802843 5050 reflector.go:561] object-"openstack"/"rabbitmq-cell1-server-dockercfg-bvxnm": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Drabbitmq-cell1-server-dockercfg-bvxnm&resourceVersion=109846": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:41.803203 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"rabbitmq-cell1-server-dockercfg-bvxnm\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Drabbitmq-cell1-server-dockercfg-bvxnm&resourceVersion=109846\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:41.823478 5050 reflector.go:561] object-"openshift-image-registry"/"installation-pull-secrets": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/secrets?fieldSelector=metadata.name%3Dinstallation-pull-secrets&resourceVersion=109846": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:41.823578 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-image-registry\"/\"installation-pull-secrets\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/secrets?fieldSelector=metadata.name%3Dinstallation-pull-secrets&resourceVersion=109846\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:41.847842 5050 reflector.go:561] object-"openstack"/"galera-openstack-dockercfg-5gcmv": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dgalera-openstack-dockercfg-5gcmv&resourceVersion=109743": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:41.847926 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"galera-openstack-dockercfg-5gcmv\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dgalera-openstack-dockercfg-5gcmv&resourceVersion=109743\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:41.862937 5050 reflector.go:561] object-"openstack"/"rabbitmq-cell1-default-user": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Drabbitmq-cell1-default-user&resourceVersion=109846": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:41.863042 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"rabbitmq-cell1-default-user\": Failed to watch *v1.Secret: 
failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Drabbitmq-cell1-default-user&resourceVersion=109846\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:41.883727 5050 reflector.go:561] object-"openshift-cluster-samples-operator"/"samples-operator-tls": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-samples-operator/secrets?fieldSelector=metadata.name%3Dsamples-operator-tls&resourceVersion=109671": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:41.883947 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-samples-operator\"/\"samples-operator-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-samples-operator/secrets?fieldSelector=metadata.name%3Dsamples-operator-tls&resourceVersion=109671\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:41.908937 5050 reflector.go:561] object-"openstack"/"ovndbcluster-sb-scripts": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dovndbcluster-sb-scripts&resourceVersion=109158": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:41.909036 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"ovndbcluster-sb-scripts\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dovndbcluster-sb-scripts&resourceVersion=109158\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:41.923253 5050 reflector.go:561] object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-x9p8j": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dkeystone-operator-controller-manager-dockercfg-x9p8j&resourceVersion=109586": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:41.923333 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"keystone-operator-controller-manager-dockercfg-x9p8j\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dkeystone-operator-controller-manager-dockercfg-x9p8j&resourceVersion=109586\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: I1211 15:54:41.943421 5050 status_manager.go:851] "Failed to get status for pod" podUID="1c8331c1-b8ee-456b-baa6-110917427b64" pod="openshift-operators/observability-operator-d8bb48f5d-wwdcc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operators/pods/observability-operator-d8bb48f5d-wwdcc\": dial tcp 38.102.83.147:6443: connect: connection refused" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:41.963345 5050 reflector.go:561] 
object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-7zqpj": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dmariadb-operator-controller-manager-dockercfg-7zqpj&resourceVersion=109846": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:41.963425 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"mariadb-operator-controller-manager-dockercfg-7zqpj\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack-operators/secrets?fieldSelector=metadata.name%3Dmariadb-operator-controller-manager-dockercfg-7zqpj&resourceVersion=109846\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:41 crc kubenswrapper[5050]: W1211 15:54:41.983777 5050 reflector.go:561] object-"openstack"/"ovndbcluster-nb-scripts": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dovndbcluster-nb-scripts&resourceVersion=109807": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:41 crc kubenswrapper[5050]: E1211 15:54:41.983855 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"ovndbcluster-nb-scripts\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/configmaps?fieldSelector=metadata.name%3Dovndbcluster-nb-scripts&resourceVersion=109807\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:42 crc kubenswrapper[5050]: I1211 15:54:42.021304 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-5697bb5779-9tcm2" Dec 11 15:54:42 crc kubenswrapper[5050]: I1211 15:54:42.021658 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-qbc2f" Dec 11 15:54:42 crc kubenswrapper[5050]: W1211 15:54:42.025191 5050 reflector.go:561] object-"openshift-console-operator"/"console-operator-config": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/configmaps?fieldSelector=metadata.name%3Dconsole-operator-config&resourceVersion=109807": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:42 crc kubenswrapper[5050]: E1211 15:54:42.025282 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-console-operator\"/\"console-operator-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console-operator/configmaps?fieldSelector=metadata.name%3Dconsole-operator-config&resourceVersion=109807\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:42 crc kubenswrapper[5050]: W1211 15:54:42.035268 5050 reflector.go:561] object-"openstack"/"cinder-cinder-dockercfg-lvj2r": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcinder-cinder-dockercfg-lvj2r&resourceVersion=109803": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:42 crc kubenswrapper[5050]: E1211 15:54:42.035356 5050 reflector.go:158] "Unhandled Error" 
err="object-\"openstack\"/\"cinder-cinder-dockercfg-lvj2r\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dcinder-cinder-dockercfg-lvj2r&resourceVersion=109803\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:42 crc kubenswrapper[5050]: W1211 15:54:42.046607 5050 reflector.go:561] object-"openstack"/"heat-config-data": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dheat-config-data&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:42 crc kubenswrapper[5050]: E1211 15:54:42.046703 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"heat-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dheat-config-data&resourceVersion=109512\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:42 crc kubenswrapper[5050]: W1211 15:54:42.065105 5050 reflector.go:561] object-"openshift-oauth-apiserver"/"encryption-config-1": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-oauth-apiserver/secrets?fieldSelector=metadata.name%3Dencryption-config-1&resourceVersion=109137": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:42 crc kubenswrapper[5050]: E1211 15:54:42.065188 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-oauth-apiserver\"/\"encryption-config-1\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-oauth-apiserver/secrets?fieldSelector=metadata.name%3Dencryption-config-1&resourceVersion=109137\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:42 crc kubenswrapper[5050]: I1211 15:54:42.082982 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/manila-scheduler-0" podUID="9c5fd2fd-4df8-4f0f-982c-d3e6df852669" containerName="manila-scheduler" probeResult="failure" output="Get \"http://10.217.1.145:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:42 crc kubenswrapper[5050]: I1211 15:54:42.083382 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/manila-share-share1-0" podUID="29321ad8-528b-46ed-8c14-21a74038cddb" containerName="manila-share" probeResult="failure" output="Get \"http://10.217.1.146:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:42 crc kubenswrapper[5050]: W1211 15:54:42.087514 5050 reflector.go:561] object-"openstack"/"openstackclient-openstackclient-dockercfg-c88pf": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dopenstackclient-openstackclient-dockercfg-c88pf&resourceVersion=109671": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:42 crc kubenswrapper[5050]: E1211 15:54:42.087650 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"openstackclient-openstackclient-dockercfg-c88pf\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dopenstackclient-openstackclient-dockercfg-c88pf&resourceVersion=109671\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:42 crc kubenswrapper[5050]: I1211 15:54:42.102325 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-xl9wl" Dec 11 15:54:42 crc kubenswrapper[5050]: W1211 15:54:42.113054 5050 reflector.go:561] object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver-operator/secrets?fieldSelector=metadata.name%3Dopenshift-apiserver-operator-serving-cert&resourceVersion=109846": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:42 crc kubenswrapper[5050]: E1211 15:54:42.113172 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver-operator/secrets?fieldSelector=metadata.name%3Dopenshift-apiserver-operator-serving-cert&resourceVersion=109846\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:42 crc kubenswrapper[5050]: W1211 15:54:42.123957 5050 reflector.go:561] object-"openshift-authentication-operator"/"trusted-ca-bundle": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication-operator/configmaps?fieldSelector=metadata.name%3Dtrusted-ca-bundle&resourceVersion=109807": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:42 crc kubenswrapper[5050]: E1211 15:54:42.124090 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication-operator\"/\"trusted-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication-operator/configmaps?fieldSelector=metadata.name%3Dtrusted-ca-bundle&resourceVersion=109807\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:42 crc kubenswrapper[5050]: I1211 15:54:42.138272 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-xgbp2" podUID="06df6c8c-640d-431b-b216-78345a9054e1" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.82:8081/readyz\": dial tcp 10.217.0.82:8081: connect: connection refused" Dec 11 15:54:42 crc kubenswrapper[5050]: W1211 15:54:42.142805 5050 reflector.go:561] object-"openstack"/"barbican-keystone-listener-config-data": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dbarbican-keystone-listener-config-data&resourceVersion=109512": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:42 crc kubenswrapper[5050]: E1211 15:54:42.142875 5050 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"barbican-keystone-listener-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dbarbican-keystone-listener-config-data&resourceVersion=109512\": dial tcp 
38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:42 crc kubenswrapper[5050]: W1211 15:54:42.162965 5050 reflector.go:561] object-"openshift-nmstate"/"nmstate-handler-dockercfg-94hht": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-nmstate/secrets?fieldSelector=metadata.name%3Dnmstate-handler-dockercfg-94hht&resourceVersion=109743": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:42 crc kubenswrapper[5050]: E1211 15:54:42.163516 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-nmstate\"/\"nmstate-handler-dockercfg-94hht\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-nmstate/secrets?fieldSelector=metadata.name%3Dnmstate-handler-dockercfg-94hht&resourceVersion=109743\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:42 crc kubenswrapper[5050]: I1211 15:54:42.170076 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-967d97867-7stc2" Dec 11 15:54:42 crc kubenswrapper[5050]: I1211 15:54:42.182540 5050 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="cert-manager/cert-manager-86cb77c54b-9sn7j" Dec 11 15:54:42 crc kubenswrapper[5050]: W1211 15:54:42.183074 5050 reflector.go:561] object-"openshift-controller-manager"/"config": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/configmaps?fieldSelector=metadata.name%3Dconfig&resourceVersion=109522": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:42 crc kubenswrapper[5050]: E1211 15:54:42.183320 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/configmaps?fieldSelector=metadata.name%3Dconfig&resourceVersion=109522\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:42 crc kubenswrapper[5050]: W1211 15:54:42.204470 5050 reflector.go:561] object-"hostpath-provisioner"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/hostpath-provisioner/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109158": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:42 crc kubenswrapper[5050]: E1211 15:54:42.204574 5050 reflector.go:158] "Unhandled Error" err="object-\"hostpath-provisioner\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/hostpath-provisioner/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=109158\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:42 crc kubenswrapper[5050]: W1211 15:54:42.229158 5050 reflector.go:561] object-"openstack"/"neutron-httpd-config": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dneutron-httpd-config&resourceVersion=109846": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:42 crc kubenswrapper[5050]: E1211 15:54:42.229238 5050 reflector.go:158] "Unhandled Error" 
err="object-\"openstack\"/\"neutron-httpd-config\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openstack/secrets?fieldSelector=metadata.name%3Dneutron-httpd-config&resourceVersion=109846\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:42 crc kubenswrapper[5050]: I1211 15:54:42.236546 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-jq9vt" Dec 11 15:54:42 crc kubenswrapper[5050]: W1211 15:54:42.243558 5050 reflector.go:561] object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/secrets?fieldSelector=metadata.name%3Dingress-operator-dockercfg-7lnqk&resourceVersion=109803": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:42 crc kubenswrapper[5050]: E1211 15:54:42.243631 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-ingress-operator\"/\"ingress-operator-dockercfg-7lnqk\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/secrets?fieldSelector=metadata.name%3Dingress-operator-dockercfg-7lnqk&resourceVersion=109803\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:42 crc kubenswrapper[5050]: W1211 15:54:42.263911 5050 reflector.go:561] object-"openshift-multus"/"metrics-daemon-secret": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-multus/secrets?fieldSelector=metadata.name%3Dmetrics-daemon-secret&resourceVersion=109137": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:42 crc kubenswrapper[5050]: E1211 15:54:42.264512 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"metrics-daemon-secret\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-multus/secrets?fieldSelector=metadata.name%3Dmetrics-daemon-secret&resourceVersion=109137\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:42 crc kubenswrapper[5050]: W1211 15:54:42.285316 5050 reflector.go:561] object-"openshift-machine-config-operator"/"mcc-proxy-tls": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmcc-proxy-tls&resourceVersion=109846": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:42 crc kubenswrapper[5050]: E1211 15:54:42.285407 5050 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"mcc-proxy-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmcc-proxy-tls&resourceVersion=109846\": dial tcp 38.102.83.147:6443: connect: connection refused" logger="UnhandledError" Dec 11 15:54:42 crc kubenswrapper[5050]: I1211 15:54:42.312989 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-65swh" podUID="58a60a6f-fcf5-4aa0-b6e4-97d7df2b2bd2" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.84:8081/readyz\": dial tcp 
10.217.0.84:8081: connect: connection refused" Dec 11 15:54:42 crc kubenswrapper[5050]: I1211 15:54:42.583367 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-h4w7p" Dec 11 15:54:42 crc kubenswrapper[5050]: I1211 15:54:42.652276 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-78f8948974-wsc2k" Dec 11 15:54:42 crc kubenswrapper[5050]: E1211 15:54:42.671601 5050 log.go:32] "RunPodSandbox from runtime service failed" err=< Dec 11 15:54:42 crc kubenswrapper[5050]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_community-operators-g2bpg_openshift-marketplace_daba01c5-6d3c-4e32-84ce-d8f67b685671_0(8f60c47ec4f93aecbf14260674639e12a9365d0c12ec98d2f3c12310bd3a8a92): error adding pod openshift-marketplace_community-operators-g2bpg to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"8f60c47ec4f93aecbf14260674639e12a9365d0c12ec98d2f3c12310bd3a8a92" Netns:"/var/run/netns/507597c7-f2c1-4492-9c40-1bf583be392f" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=community-operators-g2bpg;K8S_POD_INFRA_CONTAINER_ID=8f60c47ec4f93aecbf14260674639e12a9365d0c12ec98d2f3c12310bd3a8a92;K8S_POD_UID=daba01c5-6d3c-4e32-84ce-d8f67b685671" Path:"" ERRORED: error configuring pod [openshift-marketplace/community-operators-g2bpg] networking: Multus: [openshift-marketplace/community-operators-g2bpg/daba01c5-6d3c-4e32-84ce-d8f67b685671]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod community-operators-g2bpg in out of cluster comm: SetNetworkStatus: failed to update the pod community-operators-g2bpg in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-g2bpg?timeout=1m0s": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:42 crc kubenswrapper[5050]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Dec 11 15:54:42 crc kubenswrapper[5050]: > Dec 11 15:54:42 crc kubenswrapper[5050]: E1211 15:54:42.671674 5050 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Dec 11 15:54:42 crc kubenswrapper[5050]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_community-operators-g2bpg_openshift-marketplace_daba01c5-6d3c-4e32-84ce-d8f67b685671_0(8f60c47ec4f93aecbf14260674639e12a9365d0c12ec98d2f3c12310bd3a8a92): error adding pod openshift-marketplace_community-operators-g2bpg to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"8f60c47ec4f93aecbf14260674639e12a9365d0c12ec98d2f3c12310bd3a8a92" Netns:"/var/run/netns/507597c7-f2c1-4492-9c40-1bf583be392f" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=community-operators-g2bpg;K8S_POD_INFRA_CONTAINER_ID=8f60c47ec4f93aecbf14260674639e12a9365d0c12ec98d2f3c12310bd3a8a92;K8S_POD_UID=daba01c5-6d3c-4e32-84ce-d8f67b685671" Path:"" ERRORED: error configuring pod [openshift-marketplace/community-operators-g2bpg] networking: Multus: [openshift-marketplace/community-operators-g2bpg/daba01c5-6d3c-4e32-84ce-d8f67b685671]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod community-operators-g2bpg in out of cluster comm: SetNetworkStatus: failed to update the pod community-operators-g2bpg in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-g2bpg?timeout=1m0s": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:42 crc kubenswrapper[5050]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Dec 11 15:54:42 crc kubenswrapper[5050]: > pod="openshift-marketplace/community-operators-g2bpg" Dec 11 15:54:42 crc kubenswrapper[5050]: E1211 15:54:42.671691 5050 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Dec 11 15:54:42 crc kubenswrapper[5050]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_community-operators-g2bpg_openshift-marketplace_daba01c5-6d3c-4e32-84ce-d8f67b685671_0(8f60c47ec4f93aecbf14260674639e12a9365d0c12ec98d2f3c12310bd3a8a92): error adding pod openshift-marketplace_community-operators-g2bpg to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"8f60c47ec4f93aecbf14260674639e12a9365d0c12ec98d2f3c12310bd3a8a92" Netns:"/var/run/netns/507597c7-f2c1-4492-9c40-1bf583be392f" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=community-operators-g2bpg;K8S_POD_INFRA_CONTAINER_ID=8f60c47ec4f93aecbf14260674639e12a9365d0c12ec98d2f3c12310bd3a8a92;K8S_POD_UID=daba01c5-6d3c-4e32-84ce-d8f67b685671" Path:"" ERRORED: error configuring pod [openshift-marketplace/community-operators-g2bpg] networking: Multus: [openshift-marketplace/community-operators-g2bpg/daba01c5-6d3c-4e32-84ce-d8f67b685671]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod community-operators-g2bpg in out of cluster comm: SetNetworkStatus: failed to update the pod community-operators-g2bpg in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-g2bpg?timeout=1m0s": dial tcp 38.102.83.147:6443: connect: connection refused Dec 11 15:54:42 crc kubenswrapper[5050]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Dec 11 15:54:42 crc kubenswrapper[5050]: > 
pod="openshift-marketplace/community-operators-g2bpg" Dec 11 15:54:42 crc kubenswrapper[5050]: E1211 15:54:42.671756 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"community-operators-g2bpg_openshift-marketplace(daba01c5-6d3c-4e32-84ce-d8f67b685671)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"community-operators-g2bpg_openshift-marketplace(daba01c5-6d3c-4e32-84ce-d8f67b685671)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_community-operators-g2bpg_openshift-marketplace_daba01c5-6d3c-4e32-84ce-d8f67b685671_0(8f60c47ec4f93aecbf14260674639e12a9365d0c12ec98d2f3c12310bd3a8a92): error adding pod openshift-marketplace_community-operators-g2bpg to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"8f60c47ec4f93aecbf14260674639e12a9365d0c12ec98d2f3c12310bd3a8a92\\\" Netns:\\\"/var/run/netns/507597c7-f2c1-4492-9c40-1bf583be392f\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=community-operators-g2bpg;K8S_POD_INFRA_CONTAINER_ID=8f60c47ec4f93aecbf14260674639e12a9365d0c12ec98d2f3c12310bd3a8a92;K8S_POD_UID=daba01c5-6d3c-4e32-84ce-d8f67b685671\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-marketplace/community-operators-g2bpg] networking: Multus: [openshift-marketplace/community-operators-g2bpg/daba01c5-6d3c-4e32-84ce-d8f67b685671]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod community-operators-g2bpg in out of cluster comm: SetNetworkStatus: failed to update the pod community-operators-g2bpg in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-g2bpg?timeout=1m0s\\\": dial tcp 38.102.83.147:6443: connect: connection refused\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-marketplace/community-operators-g2bpg" podUID="daba01c5-6d3c-4e32-84ce-d8f67b685671" Dec 11 15:54:42 crc kubenswrapper[5050]: I1211 15:54:42.700623 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"7fb00b03-fe6e-4c66-bd36-adf9443871a8","Type":"ContainerStarted","Data":"8dc8ca67ef08eaff1afb43e4e5dd4fc21abaa34cd9e0500212605dbd4ed89fd5"} Dec 11 15:54:42 crc kubenswrapper[5050]: I1211 15:54:42.705490 5050 generic.go:334] "Generic (PLEG): container finished" podID="131d56da-b770-4452-97c9-b585434da431" containerID="d3e1fc093c150e4e423d3557b3ec8c637526707132012ee62f3903c848934b44" exitCode=0 Dec 11 15:54:42 crc kubenswrapper[5050]: I1211 15:54:42.705549 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"131d56da-b770-4452-97c9-b585434da431","Type":"ContainerDied","Data":"d3e1fc093c150e4e423d3557b3ec8c637526707132012ee62f3903c848934b44"} Dec 11 15:54:42 crc kubenswrapper[5050]: I1211 15:54:42.707511 5050 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-5697bb5779-9tcm2" event={"ID":"048e17a7-0123-45a2-b698-02def3db74fe","Type":"ContainerStarted","Data":"3307ce215b26075cbe8e602ed81cb69ff92b62ef98a5a16078e86ac56e82bb30"} Dec 11 15:54:42 crc kubenswrapper[5050]: I1211 15:54:42.708509 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-5697bb5779-9tcm2" Dec 11 15:54:42 crc kubenswrapper[5050]: I1211 15:54:42.711115 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-qbc2f" event={"ID":"9726c3f9-bcae-4722-a054-5a66c161953b","Type":"ContainerStarted","Data":"3df15f502494a011be415be5d6b8e9b61070df86a148d788a2bcea686ac20d44"} Dec 11 15:54:42 crc kubenswrapper[5050]: I1211 15:54:42.711859 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-qbc2f" Dec 11 15:54:42 crc kubenswrapper[5050]: I1211 15:54:42.714924 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-w4tzc" event={"ID":"54e98831-cd88-4dee-90db-e8fbb006e9c3","Type":"ContainerStarted","Data":"8c28e6ca4384eca9dc3ffc0df10a8848d48038639b09923f82aa8378a52968b1"} Dec 11 15:54:42 crc kubenswrapper[5050]: I1211 15:54:42.714968 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-w4tzc" Dec 11 15:54:42 crc kubenswrapper[5050]: I1211 15:54:42.718590 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-855d9ccff4-wksw5" event={"ID":"2ebcfee9-160d-4440-b885-66ae4d5d66a7","Type":"ContainerStarted","Data":"8a173262017c70458b71edb4e434feca8d66a5bd5b1abdf970f8b9aea973058b"} Dec 11 15:54:42 crc kubenswrapper[5050]: I1211 15:54:42.721831 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"14ad1594-090d-4024-a999-9ffe77ce58d8","Type":"ContainerStarted","Data":"9944e9ac23ee825bdb5db8a849dbd5837dc547c888f332c61b4839e3443bc5e4"} Dec 11 15:54:42 crc kubenswrapper[5050]: I1211 15:54:42.731093 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_71bb4a3aecc4ba5b26c4b7318770ce13/kube-apiserver/0.log" Dec 11 15:54:42 crc kubenswrapper[5050]: I1211 15:54:42.731841 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"a2b37a33f6496a949df0a2432d0cc87429b01daac1d77ce0c043d5f394127f35"} Dec 11 15:54:42 crc kubenswrapper[5050]: I1211 15:54:42.734447 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-65swh" event={"ID":"58a60a6f-fcf5-4aa0-b6e4-97d7df2b2bd2","Type":"ContainerStarted","Data":"34bbc027918299bcec70bbb16457499dad6221e2b8ce30c18f74de04ce97c3c3"} Dec 11 15:54:42 crc kubenswrapper[5050]: I1211 15:54:42.735859 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-65swh" Dec 11 15:54:42 crc kubenswrapper[5050]: I1211 15:54:42.740436 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9jns7" 
event={"ID":"d1f35830-d883-4b41-ab97-7c382dec0387","Type":"ContainerStarted","Data":"b388868f2fdfdd707406cafbc92f83788b76bef5a11bfc08956bb27c592330c9"} Dec 11 15:54:42 crc kubenswrapper[5050]: I1211 15:54:42.740541 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9jns7" Dec 11 15:54:42 crc kubenswrapper[5050]: I1211 15:54:42.749835 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"2017b17d6ba5e36ed50386f4d5635a7ef0198c30696df4a891faf3026b29cd6f"} Dec 11 15:54:42 crc kubenswrapper[5050]: I1211 15:54:42.757504 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_catalog-operator-68c6474976-8qmpk_81255772-fddc-4936-8de3-da4649c32d1f/catalog-operator/1.log" Dec 11 15:54:42 crc kubenswrapper[5050]: I1211 15:54:42.758184 5050 generic.go:334] "Generic (PLEG): container finished" podID="81255772-fddc-4936-8de3-da4649c32d1f" containerID="bb525485ebcd5a45f27963f68014603e30e8084aff55811b37cee2e784c94676" exitCode=1 Dec 11 15:54:42 crc kubenswrapper[5050]: I1211 15:54:42.758254 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8qmpk" event={"ID":"81255772-fddc-4936-8de3-da4649c32d1f","Type":"ContainerDied","Data":"bb525485ebcd5a45f27963f68014603e30e8084aff55811b37cee2e784c94676"} Dec 11 15:54:42 crc kubenswrapper[5050]: I1211 15:54:42.758510 5050 scope.go:117] "RemoveContainer" containerID="737f9cb69bc5ba682bce46695449bf052170034d47a489f229d1389e142c9bb4" Dec 11 15:54:42 crc kubenswrapper[5050]: I1211 15:54:42.763957 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-xgbp2" event={"ID":"06df6c8c-640d-431b-b216-78345a9054e1","Type":"ContainerStarted","Data":"f97f06792dadeb94353e16918283fa7f85cf80f6cfc25dd2bcab68035deeb0c4"} Dec 11 15:54:42 crc kubenswrapper[5050]: I1211 15:54:42.764126 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-xgbp2" Dec 11 15:54:42 crc kubenswrapper[5050]: I1211 15:54:42.768158 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-69cffd76bd-8bkp6" event={"ID":"6a956942-e4db-4c66-a7c2-1c370c1569f4","Type":"ContainerStarted","Data":"573d3c4c852a480be0bfa79f08c4c969612d9457e011b10cb200ea064d25ab46"} Dec 11 15:54:42 crc kubenswrapper[5050]: I1211 15:54:42.768369 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-69cffd76bd-8bkp6" Dec 11 15:54:42 crc kubenswrapper[5050]: I1211 15:54:42.777516 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-69cffd76bd-8bkp6" Dec 11 15:54:42 crc kubenswrapper[5050]: I1211 15:54:42.778769 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-799b66f579-tqs2v" event={"ID":"53a1a6d1-6999-4fa1-a0ce-a20b83f1f347","Type":"ContainerStarted","Data":"c3ed6e11049a9febe61b2390df3fa0c7185ed387bfc21e4ab4e164a97c56456a"} Dec 11 15:54:42 crc kubenswrapper[5050]: I1211 15:54:42.781088 5050 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack-operators/openstack-operator-controller-operator-799b66f579-tqs2v" podUID="53a1a6d1-6999-4fa1-a0ce-a20b83f1f347" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.73:8081/readyz\": dial tcp 10.217.0.73:8081: connect: connection refused" Dec 11 15:54:42 crc kubenswrapper[5050]: I1211 15:54:42.804894 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-8jnzj" Dec 11 15:54:42 crc kubenswrapper[5050]: I1211 15:54:42.821548 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-5fb79d99b5-m4xgd" podUID="2086bc41-00a2-4c97-a491-08511f3ed6e5" containerName="horizon" probeResult="failure" output="Get \"http://10.217.1.117:8080/dashboard/auth/login/?next=/dashboard/\": read tcp 10.217.0.2:33694->10.217.1.117:8080: read: connection reset by peer" Dec 11 15:54:42 crc kubenswrapper[5050]: I1211 15:54:42.971655 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-58d5ff84df-ssfnh" Dec 11 15:54:43 crc kubenswrapper[5050]: I1211 15:54:43.032231 5050 patch_prober.go:28] interesting pod/route-controller-manager-767f6d799d-cv7mn container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.64:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 11 15:54:43 crc kubenswrapper[5050]: I1211 15:54:43.032302 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-767f6d799d-cv7mn" podUID="5b7ceea3-4e92-46ee-81de-5b8f932144ad" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.64:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:43 crc kubenswrapper[5050]: I1211 15:54:43.682635 5050 patch_prober.go:28] interesting pod/observability-operator-d8bb48f5d-wwdcc container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.1.128:8081/healthz\": dial tcp 10.217.1.128:8081: connect: connection refused" start-of-body= Dec 11 15:54:43 crc kubenswrapper[5050]: I1211 15:54:43.683793 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/observability-operator-d8bb48f5d-wwdcc" podUID="1c8331c1-b8ee-456b-baa6-110917427b64" containerName="operator" probeResult="failure" output="Get \"http://10.217.1.128:8081/healthz\": dial tcp 10.217.1.128:8081: connect: connection refused" Dec 11 15:54:43 crc kubenswrapper[5050]: I1211 15:54:43.779800 5050 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-r54sd container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.23:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 11 15:54:43 crc kubenswrapper[5050]: I1211 15:54:43.779881 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r54sd" podUID="dbd5b107-5d08-43af-881c-11540f395267" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.23:5443/healthz\": net/http: request canceled while waiting for connection 
(Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:43 crc kubenswrapper[5050]: I1211 15:54:43.815675 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-xl9wl" event={"ID":"105854f4-5cc1-491f-983a-50864b37893f","Type":"ContainerStarted","Data":"f28e0673984d2eb4e0c7e0ef0ed5b5b5f5a6bb5a446d8f53ce56b1c1a5b9b26a"} Dec 11 15:54:43 crc kubenswrapper[5050]: I1211 15:54:43.816796 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-xl9wl" Dec 11 15:54:43 crc kubenswrapper[5050]: I1211 15:54:43.820112 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-jq9vt" event={"ID":"5e11a0d1-4179-4621-803d-839196fb940b","Type":"ContainerStarted","Data":"ecc2c1f96fb02dbf7c8286689af3a530fd5c62d4c8aee1c645c4d36c17061e57"} Dec 11 15:54:43 crc kubenswrapper[5050]: I1211 15:54:43.820632 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-jq9vt" Dec 11 15:54:43 crc kubenswrapper[5050]: I1211 15:54:43.829709 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-ttg8w" event={"ID":"b885fa10-3ed3-41fd-94ae-2b7442519450","Type":"ContainerStarted","Data":"159d98abc7bbae8fb63f2dae30adf9db9e62ce64a5fba3d733829c8e5c44ddb8"} Dec 11 15:54:43 crc kubenswrapper[5050]: I1211 15:54:43.830384 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-ttg8w" Dec 11 15:54:43 crc kubenswrapper[5050]: I1211 15:54:43.838221 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5854674fcc-qqb7f" event={"ID":"12778398-2baa-44cb-9fd1-f2034870e9fc","Type":"ContainerStarted","Data":"6c5ac6258bb4c918ff2e867e39907e9c5523f961e040d5046bab5b4651a5cfb3"} Dec 11 15:54:43 crc kubenswrapper[5050]: I1211 15:54:43.838585 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-5854674fcc-qqb7f" Dec 11 15:54:43 crc kubenswrapper[5050]: I1211 15:54:43.842112 5050 generic.go:334] "Generic (PLEG): container finished" podID="2086bc41-00a2-4c97-a491-08511f3ed6e5" containerID="c2abb4cc449a9d8127c16b79dfe68ac27a46606a1e98e66a409122244f1e0137" exitCode=0 Dec 11 15:54:43 crc kubenswrapper[5050]: I1211 15:54:43.842159 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5fb79d99b5-m4xgd" event={"ID":"2086bc41-00a2-4c97-a491-08511f3ed6e5","Type":"ContainerDied","Data":"c2abb4cc449a9d8127c16b79dfe68ac27a46606a1e98e66a409122244f1e0137"} Dec 11 15:54:43 crc kubenswrapper[5050]: I1211 15:54:43.847668 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5c747c4-l2hpx" event={"ID":"f5bec9f7-072c-4c21-80ea-af9f59313eef","Type":"ContainerStarted","Data":"ec26818a32a3159ae9c5e4ccf30b797fea3c2cea7bd7d1722fe9ff4cdd9baa24"} Dec 11 15:54:43 crc kubenswrapper[5050]: I1211 15:54:43.848502 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5c747c4-l2hpx" Dec 11 15:54:43 crc kubenswrapper[5050]: I1211 15:54:43.856992 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="cert-manager/cert-manager-86cb77c54b-9sn7j" event={"ID":"03c178a1-6fd8-4e37-8894-bcde36cef2b5","Type":"ContainerStarted","Data":"29c06ec715c0a77aff1612b8fc0641f62ca50428fb68e31a1dcef920055d1b60"} Dec 11 15:54:43 crc kubenswrapper[5050]: I1211 15:54:43.863153 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-967d97867-7stc2" event={"ID":"d7b70b3b-5481-4ac2-8e60-256e2690752f","Type":"ContainerStarted","Data":"bfcfc29ebf6d52b81735f602a2fe0d237b31f78b94e47ae328ca750785e9e0a7"} Dec 11 15:54:43 crc kubenswrapper[5050]: I1211 15:54:43.864338 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-g2bpg" Dec 11 15:54:43 crc kubenswrapper[5050]: I1211 15:54:43.865537 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-operator-799b66f579-tqs2v" Dec 11 15:54:43 crc kubenswrapper[5050]: I1211 15:54:43.868358 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-g2bpg" Dec 11 15:54:43 crc kubenswrapper[5050]: I1211 15:54:43.979330 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5446b9c989-dqk4m" Dec 11 15:54:44 crc kubenswrapper[5050]: I1211 15:54:44.033102 5050 patch_prober.go:28] interesting pod/route-controller-manager-767f6d799d-cv7mn container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.64:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 11 15:54:44 crc kubenswrapper[5050]: I1211 15:54:44.033162 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-767f6d799d-cv7mn" podUID="5b7ceea3-4e92-46ee-81de-5b8f932144ad" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.64:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:44 crc kubenswrapper[5050]: I1211 15:54:44.362741 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-5d666c4679-krnwf" Dec 11 15:54:44 crc kubenswrapper[5050]: I1211 15:54:44.504278 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 11 15:54:44 crc kubenswrapper[5050]: I1211 15:54:44.651231 5050 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/openstack-operator-controller-manager-5b74fbd87-zqsjt" Dec 11 15:54:44 crc kubenswrapper[5050]: I1211 15:54:44.651292 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-5b74fbd87-zqsjt" Dec 11 15:54:44 crc kubenswrapper[5050]: I1211 15:54:44.762763 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Dec 11 15:54:44 crc kubenswrapper[5050]: I1211 15:54:44.881455 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-8jnzj" 
event={"ID":"a6345cf8-abc2-4c9a-bfe6-8b65187ada2d","Type":"ContainerStarted","Data":"16e66a57987317bee1a4025614993f1cdc65a296602843c3ad51bc93c60e35ba"} Dec 11 15:54:44 crc kubenswrapper[5050]: I1211 15:54:44.882862 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-8jnzj" Dec 11 15:54:44 crc kubenswrapper[5050]: I1211 15:54:44.886351 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-5b74fbd87-zqsjt" event={"ID":"b3b941f1-576d-4b49-871b-3666eda635ff","Type":"ContainerStarted","Data":"8632d80fdb2cf8a07bc31376754ae4f45db4841be459f16db21dce2a1f836964"} Dec 11 15:54:44 crc kubenswrapper[5050]: I1211 15:54:44.886691 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-5b74fbd87-zqsjt" Dec 11 15:54:44 crc kubenswrapper[5050]: I1211 15:54:44.902704 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_catalog-operator-68c6474976-8qmpk_81255772-fddc-4936-8de3-da4649c32d1f/catalog-operator/1.log" Dec 11 15:54:44 crc kubenswrapper[5050]: I1211 15:54:44.911764 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-78f8948974-wsc2k" event={"ID":"cedca20e-aaaa-4190-944d-8f18bd93f737","Type":"ContainerStarted","Data":"23f68c12a7bbc3679717a0824f65e53e43c11a8356567c0c6be36c7fe7bad96e"} Dec 11 15:54:44 crc kubenswrapper[5050]: I1211 15:54:44.912764 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-78f8948974-wsc2k" Dec 11 15:54:44 crc kubenswrapper[5050]: I1211 15:54:44.928314 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-75944c9b7-grhdp" event={"ID":"712f888b-7c45-4c1f-95d8-ccc464b7c15f","Type":"ContainerStarted","Data":"c07ebd464281f9e76e0672d77a5f9e85626dbbe9aec7d493b452ee00143d9f76"} Dec 11 15:54:44 crc kubenswrapper[5050]: I1211 15:54:44.929799 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-75944c9b7-grhdp" Dec 11 15:54:44 crc kubenswrapper[5050]: I1211 15:54:44.957878 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-697fb699cf-sqjhh" event={"ID":"0aa7657b-dbca-4b2b-ac62-7000681a918a","Type":"ContainerStarted","Data":"3efe58f00c6c7a71dbadc35f337e676969f1df43c2dc8c33d4c8215fdbdf0b7a"} Dec 11 15:54:44 crc kubenswrapper[5050]: I1211 15:54:44.958032 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-697fb699cf-sqjhh" Dec 11 15:54:44 crc kubenswrapper[5050]: I1211 15:54:44.961176 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-vrzqb_ef543e1b-8068-4ea3-b32a-61027b32e95d/approver/0.log" Dec 11 15:54:44 crc kubenswrapper[5050]: I1211 15:54:44.961702 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"fd99b17b94ed29798da44b62947af08ead2299e46c33b2a902af3d51791abe6f"} Dec 11 15:54:44 crc kubenswrapper[5050]: I1211 15:54:44.963915 5050 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack-operators/telemetry-operator-controller-manager-58d5ff84df-ssfnh" event={"ID":"bff5d533-0728-4436-bdeb-c725bf04bdb3","Type":"ContainerStarted","Data":"582118a66baf46f31f61c2afe20d40393ce28e1e823bf7d91b956665d5a15346"} Dec 11 15:54:44 crc kubenswrapper[5050]: I1211 15:54:44.966229 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-58d5ff84df-ssfnh" Dec 11 15:54:44 crc kubenswrapper[5050]: I1211 15:54:44.982181 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-h4w7p" event={"ID":"9c82f51b-e2a0-49e6-bc0e-d7679e439a6f","Type":"ContainerStarted","Data":"d40b5ce936107d0f1d64d438447191bb9a42b248e1148ff1a0aeb6766504f593"} Dec 11 15:54:44 crc kubenswrapper[5050]: I1211 15:54:44.982490 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-h4w7p" Dec 11 15:54:44 crc kubenswrapper[5050]: I1211 15:54:44.995394 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-mcj5n" event={"ID":"dc74a2ef-5885-462e-a5b8-7b50454df35b","Type":"ContainerStarted","Data":"62ad1d9a2c1ef20b4ee9e0fa016f02782bb5f55837af62a973962276e7b0dcfa"} Dec 11 15:54:44 crc kubenswrapper[5050]: I1211 15:54:44.998089 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-967d97867-7stc2" Dec 11 15:54:45 crc kubenswrapper[5050]: I1211 15:54:45.227698 5050 patch_prober.go:28] interesting pod/apiserver-76f77b778f-cd66n container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Dec 11 15:54:45 crc kubenswrapper[5050]: [+]log ok Dec 11 15:54:45 crc kubenswrapper[5050]: [+]etcd excluded: ok Dec 11 15:54:45 crc kubenswrapper[5050]: [+]etcd-readiness excluded: ok Dec 11 15:54:45 crc kubenswrapper[5050]: [+]poststarthook/start-apiserver-admission-initializer ok Dec 11 15:54:45 crc kubenswrapper[5050]: [+]informer-sync ok Dec 11 15:54:45 crc kubenswrapper[5050]: [+]poststarthook/generic-apiserver-start-informers ok Dec 11 15:54:45 crc kubenswrapper[5050]: [+]poststarthook/max-in-flight-filter ok Dec 11 15:54:45 crc kubenswrapper[5050]: [+]poststarthook/storage-object-count-tracker-hook ok Dec 11 15:54:45 crc kubenswrapper[5050]: [+]poststarthook/image.openshift.io-apiserver-caches ok Dec 11 15:54:45 crc kubenswrapper[5050]: [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok Dec 11 15:54:45 crc kubenswrapper[5050]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Dec 11 15:54:45 crc kubenswrapper[5050]: [+]poststarthook/project.openshift.io-projectcache ok Dec 11 15:54:45 crc kubenswrapper[5050]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Dec 11 15:54:45 crc kubenswrapper[5050]: [+]poststarthook/openshift.io-startinformers ok Dec 11 15:54:45 crc kubenswrapper[5050]: [+]poststarthook/openshift.io-restmapperupdater ok Dec 11 15:54:45 crc kubenswrapper[5050]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Dec 11 15:54:45 crc kubenswrapper[5050]: [-]shutdown failed: reason withheld Dec 11 15:54:45 crc kubenswrapper[5050]: readyz check failed Dec 11 15:54:45 crc kubenswrapper[5050]: I1211 15:54:45.227752 5050 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-apiserver/apiserver-76f77b778f-cd66n" podUID="ea884e88-c0df-4212-976a-0d7ce1731fdc" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 11 15:54:45 crc kubenswrapper[5050]: I1211 15:54:45.574044 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 15:54:45 crc kubenswrapper[5050]: I1211 15:54:45.574327 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 15:54:45 crc kubenswrapper[5050]: I1211 15:54:45.575293 5050 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Dec 11 15:54:45 crc kubenswrapper[5050]: [+]log ok Dec 11 15:54:45 crc kubenswrapper[5050]: [+]etcd ok Dec 11 15:54:45 crc kubenswrapper[5050]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Dec 11 15:54:45 crc kubenswrapper[5050]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Dec 11 15:54:45 crc kubenswrapper[5050]: [+]poststarthook/start-apiserver-admission-initializer ok Dec 11 15:54:45 crc kubenswrapper[5050]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Dec 11 15:54:45 crc kubenswrapper[5050]: [+]poststarthook/openshift.io-api-request-count-filter ok Dec 11 15:54:45 crc kubenswrapper[5050]: [+]poststarthook/openshift.io-startkubeinformers ok Dec 11 15:54:45 crc kubenswrapper[5050]: [+]poststarthook/generic-apiserver-start-informers ok Dec 11 15:54:45 crc kubenswrapper[5050]: [+]poststarthook/priority-and-fairness-config-consumer ok Dec 11 15:54:45 crc kubenswrapper[5050]: [+]poststarthook/priority-and-fairness-filter ok Dec 11 15:54:45 crc kubenswrapper[5050]: [+]poststarthook/storage-object-count-tracker-hook ok Dec 11 15:54:45 crc kubenswrapper[5050]: [+]poststarthook/start-apiextensions-informers ok Dec 11 15:54:45 crc kubenswrapper[5050]: [+]poststarthook/start-apiextensions-controllers ok Dec 11 15:54:45 crc kubenswrapper[5050]: [+]poststarthook/crd-informer-synced ok Dec 11 15:54:45 crc kubenswrapper[5050]: [+]poststarthook/start-system-namespaces-controller ok Dec 11 15:54:45 crc kubenswrapper[5050]: [+]poststarthook/start-cluster-authentication-info-controller ok Dec 11 15:54:45 crc kubenswrapper[5050]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Dec 11 15:54:45 crc kubenswrapper[5050]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Dec 11 15:54:45 crc kubenswrapper[5050]: [+]poststarthook/start-legacy-token-tracking-controller ok Dec 11 15:54:45 crc kubenswrapper[5050]: [+]poststarthook/start-service-ip-repair-controllers ok Dec 11 15:54:45 crc kubenswrapper[5050]: [+]poststarthook/rbac/bootstrap-roles ok Dec 11 15:54:45 crc kubenswrapper[5050]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok Dec 11 15:54:45 crc kubenswrapper[5050]: [+]poststarthook/priority-and-fairness-config-producer ok Dec 11 15:54:45 crc kubenswrapper[5050]: [+]poststarthook/bootstrap-controller ok Dec 11 15:54:45 crc kubenswrapper[5050]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Dec 11 15:54:45 crc kubenswrapper[5050]: [+]poststarthook/start-kube-aggregator-informers ok Dec 11 15:54:45 crc kubenswrapper[5050]: [+]poststarthook/apiservice-status-local-available-controller ok Dec 11 15:54:45 crc kubenswrapper[5050]: 
[+]poststarthook/apiservice-status-remote-available-controller ok Dec 11 15:54:45 crc kubenswrapper[5050]: [+]poststarthook/apiservice-registration-controller ok Dec 11 15:54:45 crc kubenswrapper[5050]: [+]poststarthook/apiservice-wait-for-first-sync ok Dec 11 15:54:45 crc kubenswrapper[5050]: [-]poststarthook/apiservice-discovery-controller failed: reason withheld Dec 11 15:54:45 crc kubenswrapper[5050]: [+]poststarthook/kube-apiserver-autoregistration ok Dec 11 15:54:45 crc kubenswrapper[5050]: [+]autoregister-completion ok Dec 11 15:54:45 crc kubenswrapper[5050]: [+]poststarthook/apiservice-openapi-controller ok Dec 11 15:54:45 crc kubenswrapper[5050]: [+]poststarthook/apiservice-openapiv3-controller ok Dec 11 15:54:45 crc kubenswrapper[5050]: livez check failed Dec 11 15:54:45 crc kubenswrapper[5050]: I1211 15:54:45.575349 5050 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 11 15:54:46 crc kubenswrapper[5050]: I1211 15:54:46.052500 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"71218193-88fc-4811-bf04-33a4f4a87898","Type":"ContainerStarted","Data":"a7ba2d2d7e07f813170acfbb98a020417ec47d8f170ae827f70eacd84a2aef3a"} Dec 11 15:54:46 crc kubenswrapper[5050]: I1211 15:54:46.061736 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Dec 11 15:54:46 crc kubenswrapper[5050]: I1211 15:54:46.065083 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-5d666c4679-krnwf" event={"ID":"a6130f1a-c95b-445f-8235-e57fdcb270fe","Type":"ContainerStarted","Data":"5258e510cdb070cfdc4d5fd43c3e896b02d03b240033d3d7cfbae24ed309e10f"} Dec 11 15:54:46 crc kubenswrapper[5050]: I1211 15:54:46.066295 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-5d666c4679-krnwf" Dec 11 15:54:46 crc kubenswrapper[5050]: I1211 15:54:46.073810 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-cgxqx" event={"ID":"fd564500-1ab5-401f-84a8-79c80dfe50ab","Type":"ContainerStarted","Data":"3c98cef26effa952a278a6cc3abac82d634b750022d6974bc6dbd491c972b56b"} Dec 11 15:54:46 crc kubenswrapper[5050]: I1211 15:54:46.075294 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-cgxqx" Dec 11 15:54:46 crc kubenswrapper[5050]: I1211 15:54:46.075365 5050 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-cgxqx container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.57:8080/healthz\": dial tcp 10.217.0.57:8080: connect: connection refused" start-of-body= Dec 11 15:54:46 crc kubenswrapper[5050]: I1211 15:54:46.075398 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-cgxqx" podUID="fd564500-1ab5-401f-84a8-79c80dfe50ab" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.57:8080/healthz\": dial tcp 10.217.0.57:8080: connect: connection refused" Dec 11 15:54:46 crc kubenswrapper[5050]: I1211 15:54:46.078803 5050 generic.go:334] "Generic (PLEG): container finished" 
podID="0f954250-5982-4088-839a-8faf7bfe203c" containerID="2b1e9cc4138c3cee1e26c2a91b99b470ec5f4b50f5007ad3b68d6bbde49915a9" exitCode=0 Dec 11 15:54:46 crc kubenswrapper[5050]: I1211 15:54:46.084196 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"0f954250-5982-4088-839a-8faf7bfe203c","Type":"ContainerDied","Data":"2b1e9cc4138c3cee1e26c2a91b99b470ec5f4b50f5007ad3b68d6bbde49915a9"} Dec 11 15:54:46 crc kubenswrapper[5050]: I1211 15:54:46.113687 5050 generic.go:334] "Generic (PLEG): container finished" podID="b94bc94c-a636-4f0d-bd28-0f347e7b1143" containerID="be93e909105a092ced14aca5a09c48a140e87c3ee935eba54b3b801f908c1e84" exitCode=0 Dec 11 15:54:46 crc kubenswrapper[5050]: I1211 15:54:46.113776 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"b94bc94c-a636-4f0d-bd28-0f347e7b1143","Type":"ContainerDied","Data":"be93e909105a092ced14aca5a09c48a140e87c3ee935eba54b3b801f908c1e84"} Dec 11 15:54:46 crc kubenswrapper[5050]: I1211 15:54:46.122053 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_catalog-operator-68c6474976-8qmpk_81255772-fddc-4936-8de3-da4649c32d1f/catalog-operator/1.log" Dec 11 15:54:46 crc kubenswrapper[5050]: I1211 15:54:46.122244 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8qmpk" event={"ID":"81255772-fddc-4936-8de3-da4649c32d1f","Type":"ContainerStarted","Data":"73f05cd4b05b581b0f5579d73f7d50cf91799969e2b77d8aae8bbcc4a8f16af2"} Dec 11 15:54:46 crc kubenswrapper[5050]: I1211 15:54:46.123081 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8qmpk" Dec 11 15:54:46 crc kubenswrapper[5050]: I1211 15:54:46.127233 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8qmpk" Dec 11 15:54:46 crc kubenswrapper[5050]: I1211 15:54:46.131060 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5fb79d99b5-m4xgd" event={"ID":"2086bc41-00a2-4c97-a491-08511f3ed6e5","Type":"ContainerStarted","Data":"53622f8671c2479d99b0c3589d2052c01e492456c4e585fc4fd295a95f250319"} Dec 11 15:54:46 crc kubenswrapper[5050]: I1211 15:54:46.136481 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-998648c74-lvbdb" event={"ID":"fa0985f7-7d87-41b3-9916-f22375a0489c","Type":"ContainerStarted","Data":"5c45116219ad7721fca50307e39aa1f28218b7b1d0aa2500170d269f81b54035"} Dec 11 15:54:47 crc kubenswrapper[5050]: I1211 15:54:47.147747 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"131d56da-b770-4452-97c9-b585434da431","Type":"ContainerStarted","Data":"0918c12ef77e4a03c8b33e9194ab473c7649b3f481ba8e874ade29222d0939ed"} Dec 11 15:54:47 crc kubenswrapper[5050]: I1211 15:54:47.151693 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-d8bb48f5d-wwdcc" event={"ID":"1c8331c1-b8ee-456b-baa6-110917427b64","Type":"ContainerStarted","Data":"e7e66b965b928fc7bc5f3a99671afb14871b1ed8fa179aea5edfac2e952469f2"} Dec 11 15:54:47 crc kubenswrapper[5050]: I1211 15:54:47.152992 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-d8bb48f5d-wwdcc" Dec 11 15:54:47 crc 
kubenswrapper[5050]: I1211 15:54:47.153109 5050 patch_prober.go:28] interesting pod/observability-operator-d8bb48f5d-wwdcc container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.1.128:8081/healthz\": dial tcp 10.217.1.128:8081: connect: connection refused" start-of-body= Dec 11 15:54:47 crc kubenswrapper[5050]: I1211 15:54:47.153153 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/observability-operator-d8bb48f5d-wwdcc" podUID="1c8331c1-b8ee-456b-baa6-110917427b64" containerName="operator" probeResult="failure" output="Get \"http://10.217.1.128:8081/healthz\": dial tcp 10.217.1.128:8081: connect: connection refused" Dec 11 15:54:47 crc kubenswrapper[5050]: I1211 15:54:47.156464 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"0f954250-5982-4088-839a-8faf7bfe203c","Type":"ContainerStarted","Data":"98eff0f3cea0509221eca0e000876fd8812288edb7d37154c570eded55a82147"} Dec 11 15:54:47 crc kubenswrapper[5050]: I1211 15:54:47.161566 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"b94bc94c-a636-4f0d-bd28-0f347e7b1143","Type":"ContainerStarted","Data":"1775318ccb2e6444fd0e7771ecf1272cb1579807c16cb2b926ad61c7397affe6"} Dec 11 15:54:47 crc kubenswrapper[5050]: I1211 15:54:47.164692 5050 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-cgxqx container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.57:8080/healthz\": dial tcp 10.217.0.57:8080: connect: connection refused" start-of-body= Dec 11 15:54:47 crc kubenswrapper[5050]: I1211 15:54:47.164918 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-cgxqx" podUID="fd564500-1ab5-401f-84a8-79c80dfe50ab" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.57:8080/healthz\": dial tcp 10.217.0.57:8080: connect: connection refused" Dec 11 15:54:47 crc kubenswrapper[5050]: I1211 15:54:47.312748 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-backup-0" Dec 11 15:54:47 crc kubenswrapper[5050]: I1211 15:54:47.989357 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Dec 11 15:54:48 crc kubenswrapper[5050]: I1211 15:54:48.169349 5050 patch_prober.go:28] interesting pod/observability-operator-d8bb48f5d-wwdcc container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.1.128:8081/healthz\": dial tcp 10.217.1.128:8081: connect: connection refused" start-of-body= Dec 11 15:54:48 crc kubenswrapper[5050]: I1211 15:54:48.169398 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/observability-operator-d8bb48f5d-wwdcc" podUID="1c8331c1-b8ee-456b-baa6-110917427b64" containerName="operator" probeResult="failure" output="Get \"http://10.217.1.128:8081/healthz\": dial tcp 10.217.1.128:8081: connect: connection refused" Dec 11 15:54:48 crc kubenswrapper[5050]: I1211 15:54:48.234993 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5c747c4-l2hpx" Dec 11 15:54:48 crc kubenswrapper[5050]: I1211 15:54:48.278615 5050 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-r54sd container/packageserver 
namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.23:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 11 15:54:48 crc kubenswrapper[5050]: I1211 15:54:48.278828 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r54sd" podUID="dbd5b107-5d08-43af-881c-11540f395267" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.23:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:48 crc kubenswrapper[5050]: I1211 15:54:48.278726 5050 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-r54sd container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.23:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 11 15:54:48 crc kubenswrapper[5050]: I1211 15:54:48.279038 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r54sd" podUID="dbd5b107-5d08-43af-881c-11540f395267" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.23:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:48 crc kubenswrapper[5050]: I1211 15:54:48.982563 5050 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-cgxqx container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.217.0.57:8080/healthz\": dial tcp 10.217.0.57:8080: connect: connection refused" start-of-body= Dec 11 15:54:48 crc kubenswrapper[5050]: I1211 15:54:48.982620 5050 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-cgxqx container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.57:8080/healthz\": dial tcp 10.217.0.57:8080: connect: connection refused" start-of-body= Dec 11 15:54:48 crc kubenswrapper[5050]: I1211 15:54:48.983047 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-cgxqx" podUID="fd564500-1ab5-401f-84a8-79c80dfe50ab" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.57:8080/healthz\": dial tcp 10.217.0.57:8080: connect: connection refused" Dec 11 15:54:48 crc kubenswrapper[5050]: I1211 15:54:48.983192 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-79b997595-cgxqx" podUID="fd564500-1ab5-401f-84a8-79c80dfe50ab" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.57:8080/healthz\": dial tcp 10.217.0.57:8080: connect: connection refused" Dec 11 15:54:49 crc kubenswrapper[5050]: I1211 15:54:49.180633 5050 patch_prober.go:28] interesting pod/observability-operator-d8bb48f5d-wwdcc container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.1.128:8081/healthz\": dial tcp 10.217.1.128:8081: connect: connection refused" start-of-body= Dec 11 15:54:49 crc kubenswrapper[5050]: I1211 15:54:49.180922 5050 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-operators/observability-operator-d8bb48f5d-wwdcc" podUID="1c8331c1-b8ee-456b-baa6-110917427b64" containerName="operator" probeResult="failure" output="Get \"http://10.217.1.128:8081/healthz\": dial tcp 10.217.1.128:8081: connect: connection refused" Dec 11 15:54:49 crc kubenswrapper[5050]: I1211 15:54:49.275598 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 11 15:54:49 crc kubenswrapper[5050]: I1211 15:54:49.282105 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 11 15:54:49 crc kubenswrapper[5050]: I1211 15:54:49.602502 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-operator-799b66f579-tqs2v" Dec 11 15:54:49 crc kubenswrapper[5050]: I1211 15:54:49.665908 5050 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-8z6ch" Dec 11 15:54:49 crc kubenswrapper[5050]: I1211 15:54:49.684895 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Dec 11 15:54:49 crc kubenswrapper[5050]: I1211 15:54:49.706632 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Dec 11 15:54:49 crc kubenswrapper[5050]: I1211 15:54:49.726754 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Dec 11 15:54:49 crc kubenswrapper[5050]: I1211 15:54:49.745209 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Dec 11 15:54:49 crc kubenswrapper[5050]: I1211 15:54:49.765969 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-6tg85" Dec 11 15:54:49 crc kubenswrapper[5050]: I1211 15:54:49.784697 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Dec 11 15:54:49 crc kubenswrapper[5050]: I1211 15:54:49.806805 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Dec 11 15:54:49 crc kubenswrapper[5050]: I1211 15:54:49.813264 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Dec 11 15:54:49 crc kubenswrapper[5050]: I1211 15:54:49.829084 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Dec 11 15:54:49 crc kubenswrapper[5050]: I1211 15:54:49.838813 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Dec 11 15:54:49 crc kubenswrapper[5050]: I1211 15:54:49.846720 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1" Dec 11 15:54:49 crc kubenswrapper[5050]: I1211 15:54:49.867919 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Dec 11 15:54:49 crc kubenswrapper[5050]: I1211 15:54:49.936590 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Dec 11 15:54:49 crc kubenswrapper[5050]: I1211 15:54:49.936830 5050 reflector.go:368] Caches populated for *v1.Secret 
from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-gkldc" Dec 11 15:54:49 crc kubenswrapper[5050]: I1211 15:54:49.960194 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-certs-secret" Dec 11 15:54:49 crc kubenswrapper[5050]: I1211 15:54:49.973669 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Dec 11 15:54:49 crc kubenswrapper[5050]: I1211 15:54:49.973962 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Dec 11 15:54:49 crc kubenswrapper[5050]: I1211 15:54:49.990857 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Dec 11 15:54:50 crc kubenswrapper[5050]: I1211 15:54:50.013313 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Dec 11 15:54:50 crc kubenswrapper[5050]: I1211 15:54:50.025951 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Dec 11 15:54:50 crc kubenswrapper[5050]: I1211 15:54:50.055316 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Dec 11 15:54:50 crc kubenswrapper[5050]: I1211 15:54:50.065903 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-t4t27" Dec 11 15:54:50 crc kubenswrapper[5050]: I1211 15:54:50.093524 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Dec 11 15:54:50 crc kubenswrapper[5050]: I1211 15:54:50.106516 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-scheduler-config-data" Dec 11 15:54:50 crc kubenswrapper[5050]: I1211 15:54:50.125749 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"kube-root-ca.crt" Dec 11 15:54:50 crc kubenswrapper[5050]: I1211 15:54:50.149250 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Dec 11 15:54:50 crc kubenswrapper[5050]: I1211 15:54:50.165491 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Dec 11 15:54:50 crc kubenswrapper[5050]: I1211 15:54:50.192540 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Dec 11 15:54:50 crc kubenswrapper[5050]: I1211 15:54:50.210032 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Dec 11 15:54:50 crc kubenswrapper[5050]: I1211 15:54:50.215357 5050 patch_prober.go:28] interesting pod/apiserver-76f77b778f-cd66n container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Dec 11 15:54:50 crc kubenswrapper[5050]: [+]log ok Dec 11 15:54:50 crc kubenswrapper[5050]: [+]etcd excluded: ok Dec 11 15:54:50 crc kubenswrapper[5050]: [+]etcd-readiness excluded: ok Dec 11 15:54:50 crc kubenswrapper[5050]: [+]poststarthook/start-apiserver-admission-initializer ok Dec 11 15:54:50 crc kubenswrapper[5050]: [+]informer-sync ok Dec 11 15:54:50 crc kubenswrapper[5050]: 
[+]poststarthook/generic-apiserver-start-informers ok Dec 11 15:54:50 crc kubenswrapper[5050]: [+]poststarthook/max-in-flight-filter ok Dec 11 15:54:50 crc kubenswrapper[5050]: [+]poststarthook/storage-object-count-tracker-hook ok Dec 11 15:54:50 crc kubenswrapper[5050]: [+]poststarthook/image.openshift.io-apiserver-caches ok Dec 11 15:54:50 crc kubenswrapper[5050]: [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok Dec 11 15:54:50 crc kubenswrapper[5050]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Dec 11 15:54:50 crc kubenswrapper[5050]: [+]poststarthook/project.openshift.io-projectcache ok Dec 11 15:54:50 crc kubenswrapper[5050]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Dec 11 15:54:50 crc kubenswrapper[5050]: [+]poststarthook/openshift.io-startinformers ok Dec 11 15:54:50 crc kubenswrapper[5050]: [+]poststarthook/openshift.io-restmapperupdater ok Dec 11 15:54:50 crc kubenswrapper[5050]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Dec 11 15:54:50 crc kubenswrapper[5050]: [-]shutdown failed: reason withheld Dec 11 15:54:50 crc kubenswrapper[5050]: readyz check failed Dec 11 15:54:50 crc kubenswrapper[5050]: I1211 15:54:50.215421 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-76f77b778f-cd66n" podUID="ea884e88-c0df-4212-976a-0d7ce1731fdc" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 11 15:54:50 crc kubenswrapper[5050]: I1211 15:54:50.215501 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-cd66n" Dec 11 15:54:50 crc kubenswrapper[5050]: I1211 15:54:50.229292 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Dec 11 15:54:50 crc kubenswrapper[5050]: I1211 15:54:50.250230 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 11 15:54:50 crc kubenswrapper[5050]: I1211 15:54:50.272552 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Dec 11 15:54:50 crc kubenswrapper[5050]: I1211 15:54:50.285937 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-web-config" Dec 11 15:54:50 crc kubenswrapper[5050]: I1211 15:54:50.306698 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-api-config-data" Dec 11 15:54:50 crc kubenswrapper[5050]: I1211 15:54:50.338851 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Dec 11 15:54:50 crc kubenswrapper[5050]: I1211 15:54:50.360394 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Dec 11 15:54:50 crc kubenswrapper[5050]: I1211 15:54:50.400519 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Dec 11 15:54:50 crc kubenswrapper[5050]: I1211 15:54:50.424590 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Dec 11 15:54:50 crc kubenswrapper[5050]: I1211 15:54:50.470979 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-vwxp8" Dec 11 15:54:50 crc kubenswrapper[5050]: I1211 15:54:50.477116 5050 
reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-4drvb" Dec 11 15:54:50 crc kubenswrapper[5050]: I1211 15:54:50.527566 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Dec 11 15:54:50 crc kubenswrapper[5050]: I1211 15:54:50.527782 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Dec 11 15:54:50 crc kubenswrapper[5050]: I1211 15:54:50.559402 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Dec 11 15:54:50 crc kubenswrapper[5050]: I1211 15:54:50.569255 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Dec 11 15:54:50 crc kubenswrapper[5050]: I1211 15:54:50.588184 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 15:54:50 crc kubenswrapper[5050]: I1211 15:54:50.596819 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 15:54:50 crc kubenswrapper[5050]: I1211 15:54:50.638255 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Dec 11 15:54:50 crc kubenswrapper[5050]: I1211 15:54:50.661514 5050 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Dec 11 15:54:50 crc kubenswrapper[5050]: I1211 15:54:50.663675 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Dec 11 15:54:50 crc kubenswrapper[5050]: I1211 15:54:50.813650 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Dec 11 15:54:50 crc kubenswrapper[5050]: I1211 15:54:50.842514 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Dec 11 15:54:50 crc kubenswrapper[5050]: I1211 15:54:50.903389 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-mz9rx" Dec 11 15:54:50 crc kubenswrapper[5050]: I1211 15:54:50.938971 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Dec 11 15:54:50 crc kubenswrapper[5050]: I1211 15:54:50.963624 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Dec 11 15:54:51 crc kubenswrapper[5050]: I1211 15:54:51.006928 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-housekeeping-scripts" Dec 11 15:54:51 crc kubenswrapper[5050]: I1211 15:54:51.018422 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Dec 11 15:54:51 crc kubenswrapper[5050]: I1211 15:54:51.018729 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Dec 11 15:54:51 crc kubenswrapper[5050]: I1211 15:54:51.021328 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Dec 11 15:54:51 crc kubenswrapper[5050]: I1211 15:54:51.027835 5050 reflector.go:368] Caches populated for *v1.Secret 
from object-"openstack"/"nova-scheduler-config-data" Dec 11 15:54:51 crc kubenswrapper[5050]: I1211 15:54:51.027835 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Dec 11 15:54:51 crc kubenswrapper[5050]: I1211 15:54:51.032638 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-crgwv" Dec 11 15:54:51 crc kubenswrapper[5050]: I1211 15:54:51.033124 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Dec 11 15:54:51 crc kubenswrapper[5050]: I1211 15:54:51.043772 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Dec 11 15:54:51 crc kubenswrapper[5050]: I1211 15:54:51.105057 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Dec 11 15:54:51 crc kubenswrapper[5050]: I1211 15:54:51.105083 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Dec 11 15:54:51 crc kubenswrapper[5050]: I1211 15:54:51.115272 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Dec 11 15:54:51 crc kubenswrapper[5050]: I1211 15:54:51.156668 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-5fb79d99b5-m4xgd" Dec 11 15:54:51 crc kubenswrapper[5050]: I1211 15:54:51.156786 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-5fb79d99b5-m4xgd" Dec 11 15:54:51 crc kubenswrapper[5050]: I1211 15:54:51.201999 5050 generic.go:334] "Generic (PLEG): container finished" podID="fef8d631-c968-4ccd-92ec-e6fc5a2f6731" containerID="166c7ea8f22887bff1aac5363e204edce18d6f260bad2031a294155043ca2094" exitCode=137 Dec 11 15:54:51 crc kubenswrapper[5050]: I1211 15:54:51.202994 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fef8d631-c968-4ccd-92ec-e6fc5a2f6731","Type":"ContainerDied","Data":"166c7ea8f22887bff1aac5363e204edce18d6f260bad2031a294155043ca2094"} Dec 11 15:54:51 crc kubenswrapper[5050]: I1211 15:54:51.213217 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Dec 11 15:54:51 crc kubenswrapper[5050]: I1211 15:54:51.213801 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Dec 11 15:54:51 crc kubenswrapper[5050]: I1211 15:54:51.230343 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Dec 11 15:54:51 crc kubenswrapper[5050]: I1211 15:54:51.255417 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Dec 11 15:54:51 crc kubenswrapper[5050]: I1211 15:54:51.283181 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Dec 11 15:54:51 crc kubenswrapper[5050]: I1211 15:54:51.287437 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-api-scripts" Dec 11 15:54:51 crc kubenswrapper[5050]: I1211 15:54:51.327773 5050 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Dec 11 15:54:51 crc kubenswrapper[5050]: I1211 15:54:51.328599 5050 
reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-sxrxp" Dec 11 15:54:51 crc kubenswrapper[5050]: I1211 15:54:51.336912 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Dec 11 15:54:51 crc kubenswrapper[5050]: I1211 15:54:51.338999 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-7ght8" Dec 11 15:54:51 crc kubenswrapper[5050]: I1211 15:54:51.349595 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Dec 11 15:54:51 crc kubenswrapper[5050]: I1211 15:54:51.380526 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Dec 11 15:54:51 crc kubenswrapper[5050]: I1211 15:54:51.380986 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Dec 11 15:54:51 crc kubenswrapper[5050]: I1211 15:54:51.396523 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Dec 11 15:54:51 crc kubenswrapper[5050]: I1211 15:54:51.450418 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Dec 11 15:54:51 crc kubenswrapper[5050]: I1211 15:54:51.478067 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Dec 11 15:54:51 crc kubenswrapper[5050]: I1211 15:54:51.500638 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Dec 11 15:54:51 crc kubenswrapper[5050]: I1211 15:54:51.545951 5050 scope.go:117] "RemoveContainer" containerID="33899e24adacef82929b0fbbcd58af7cb4f7aa3935b49ec9b5b4045a51da1d14" Dec 11 15:54:51 crc kubenswrapper[5050]: E1211 15:54:51.549115 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 15:54:51 crc kubenswrapper[5050]: I1211 15:54:51.662480 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-55k4b" Dec 11 15:54:51 crc kubenswrapper[5050]: I1211 15:54:51.662661 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Dec 11 15:54:51 crc kubenswrapper[5050]: I1211 15:54:51.662863 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Dec 11 15:54:51 crc kubenswrapper[5050]: I1211 15:54:51.708840 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-697fb699cf-sqjhh" Dec 11 15:54:51 crc kubenswrapper[5050]: I1211 15:54:51.720467 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-volume-volume1-0" Dec 11 15:54:51 crc kubenswrapper[5050]: I1211 15:54:51.731450 5050 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Dec 11 15:54:51 crc kubenswrapper[5050]: I1211 15:54:51.810545 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-ndgnr" Dec 11 15:54:51 crc kubenswrapper[5050]: I1211 15:54:51.835668 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-25r77" Dec 11 15:54:51 crc kubenswrapper[5050]: I1211 15:54:51.873220 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Dec 11 15:54:51 crc kubenswrapper[5050]: I1211 15:54:51.923834 5050 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-volume-volume1-0" podUID="b94bc94c-a636-4f0d-bd28-0f347e7b1143" containerName="cinder-volume" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 11 15:54:52 crc kubenswrapper[5050]: I1211 15:54:52.087366 5050 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Dec 11 15:54:52 crc kubenswrapper[5050]: I1211 15:54:52.091949 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Dec 11 15:54:52 crc kubenswrapper[5050]: I1211 15:54:52.092401 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Dec 11 15:54:52 crc kubenswrapper[5050]: I1211 15:54:52.215416 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Dec 11 15:54:52 crc kubenswrapper[5050]: I1211 15:54:52.224978 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Dec 11 15:54:52 crc kubenswrapper[5050]: I1211 15:54:52.247270 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Dec 11 15:54:52 crc kubenswrapper[5050]: I1211 15:54:52.266341 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Dec 11 15:54:52 crc kubenswrapper[5050]: I1211 15:54:52.305130 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Dec 11 15:54:52 crc kubenswrapper[5050]: I1211 15:54:52.425490 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Dec 11 15:54:52 crc kubenswrapper[5050]: I1211 15:54:52.453558 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 11 15:54:52 crc kubenswrapper[5050]: I1211 15:54:52.465197 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Dec 11 15:54:52 crc kubenswrapper[5050]: I1211 15:54:52.595542 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-998648c74-lvbdb" Dec 11 15:54:52 crc kubenswrapper[5050]: I1211 15:54:52.602314 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-qdrgd" podUID="4b4e8aa9-a0f6-4bd2-92d2-c524aedf6389" containerName="manager" probeResult="failure" output="Get 
\"http://10.217.0.74:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:52 crc kubenswrapper[5050]: I1211 15:54:52.603326 5050 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-backup-0" podUID="0f954250-5982-4088-839a-8faf7bfe203c" containerName="cinder-backup" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 11 15:54:52 crc kubenswrapper[5050]: I1211 15:54:52.606652 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-998648c74-lvbdb" Dec 11 15:54:52 crc kubenswrapper[5050]: I1211 15:54:52.617428 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-h4w7p" Dec 11 15:54:52 crc kubenswrapper[5050]: I1211 15:54:52.632219 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Dec 11 15:54:52 crc kubenswrapper[5050]: I1211 15:54:52.673204 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-n7crp" podUID="85683dfb-37fb-4301-8c7a-fbb7453b303d" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.75:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:52 crc kubenswrapper[5050]: I1211 15:54:52.679730 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-78f8948974-wsc2k" Dec 11 15:54:52 crc kubenswrapper[5050]: I1211 15:54:52.685923 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Dec 11 15:54:52 crc kubenswrapper[5050]: I1211 15:54:52.709687 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Dec 11 15:54:52 crc kubenswrapper[5050]: I1211 15:54:52.710139 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Dec 11 15:54:52 crc kubenswrapper[5050]: I1211 15:54:52.727621 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-ttg8w" Dec 11 15:54:52 crc kubenswrapper[5050]: I1211 15:54:52.745520 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-5854674fcc-qqb7f" Dec 11 15:54:52 crc kubenswrapper[5050]: I1211 15:54:52.752397 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-healthmanager-scripts" Dec 11 15:54:52 crc kubenswrapper[5050]: I1211 15:54:52.752750 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-manila-dockercfg-d7578" Dec 11 15:54:52 crc kubenswrapper[5050]: I1211 15:54:52.775926 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Dec 11 15:54:52 crc kubenswrapper[5050]: I1211 15:54:52.777234 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Dec 11 15:54:52 crc kubenswrapper[5050]: I1211 15:54:52.807339 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-8jnzj" Dec 11 15:54:52 crc kubenswrapper[5050]: I1211 15:54:52.817899 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-f5g9s" Dec 11 15:54:52 crc kubenswrapper[5050]: I1211 15:54:52.822836 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-75944c9b7-grhdp" Dec 11 15:54:52 crc kubenswrapper[5050]: I1211 15:54:52.846239 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Dec 11 15:54:52 crc kubenswrapper[5050]: I1211 15:54:52.860386 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Dec 11 15:54:52 crc kubenswrapper[5050]: I1211 15:54:52.875662 5050 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-5zwsv" Dec 11 15:54:52 crc kubenswrapper[5050]: I1211 15:54:52.930438 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Dec 11 15:54:52 crc kubenswrapper[5050]: I1211 15:54:52.971609 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-58d5ff84df-ssfnh" Dec 11 15:54:52 crc kubenswrapper[5050]: I1211 15:54:52.979340 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Dec 11 15:54:53 crc kubenswrapper[5050]: I1211 15:54:53.015375 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Dec 11 15:54:53 crc kubenswrapper[5050]: I1211 15:54:53.050736 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Dec 11 15:54:53 crc kubenswrapper[5050]: I1211 15:54:53.084188 5050 patch_prober.go:28] interesting pod/route-controller-manager-767f6d799d-cv7mn container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.64:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 11 15:54:53 crc kubenswrapper[5050]: I1211 15:54:53.084257 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-767f6d799d-cv7mn" podUID="5b7ceea3-4e92-46ee-81de-5b8f932144ad" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.64:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:53 crc kubenswrapper[5050]: I1211 15:54:53.084191 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/glance-operator-controller-manager-5697bb5779-9tcm2" podUID="048e17a7-0123-45a2-b698-02def3db74fe" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.77:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:53 crc kubenswrapper[5050]: I1211 15:54:53.084363 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-qbc2f" podUID="9726c3f9-bcae-4722-a054-5a66c161953b" containerName="manager" 
probeResult="failure" output="Get \"http://10.217.0.79:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:53 crc kubenswrapper[5050]: I1211 15:54:53.090582 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Dec 11 15:54:53 crc kubenswrapper[5050]: I1211 15:54:53.091072 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Dec 11 15:54:53 crc kubenswrapper[5050]: I1211 15:54:53.094860 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Dec 11 15:54:53 crc kubenswrapper[5050]: I1211 15:54:53.101400 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Dec 11 15:54:53 crc kubenswrapper[5050]: I1211 15:54:53.140721 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Dec 11 15:54:53 crc kubenswrapper[5050]: I1211 15:54:53.183174 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-xl9wl" podUID="105854f4-5cc1-491f-983a-50864b37893f" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.78:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:53 crc kubenswrapper[5050]: I1211 15:54:53.183174 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-xgbp2" podUID="06df6c8c-640d-431b-b216-78345a9054e1" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.82:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:53 crc kubenswrapper[5050]: I1211 15:54:53.225666 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ironic-operator-controller-manager-967d97867-7stc2" podUID="d7b70b3b-5481-4ac2-8e60-256e2690752f" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.81:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:53 crc kubenswrapper[5050]: I1211 15:54:53.226067 5050 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="131d56da-b770-4452-97c9-b585434da431" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 11 15:54:53 crc kubenswrapper[5050]: I1211 15:54:53.246200 5050 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="ceilometer-notification-agent" containerStatusID={"Type":"cri-o","ID":"bfec6cc59ed05acb62e1f0f824dd77874de93bb26b4282c3f9bd4cc5ffdc26e5"} pod="openstack/ceilometer-0" containerMessage="Container ceilometer-notification-agent failed liveness probe, will be restarted" Dec 11 15:54:53 crc kubenswrapper[5050]: I1211 15:54:53.246564 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fef8d631-c968-4ccd-92ec-e6fc5a2f6731" containerName="ceilometer-notification-agent" containerID="cri-o://bfec6cc59ed05acb62e1f0f824dd77874de93bb26b4282c3f9bd4cc5ffdc26e5" gracePeriod=30 Dec 11 15:54:53 crc kubenswrapper[5050]: I1211 15:54:53.247143 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"fef8d631-c968-4ccd-92ec-e6fc5a2f6731","Type":"ContainerStarted","Data":"f3ace645af49f9a75a9a80d347febc566f52339b75de545ceab035587328f8fb"} Dec 11 15:54:53 crc kubenswrapper[5050]: I1211 15:54:53.279597 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-jq9vt" podUID="5e11a0d1-4179-4621-803d-839196fb940b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.83:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:53 crc kubenswrapper[5050]: I1211 15:54:53.294796 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Dec 11 15:54:53 crc kubenswrapper[5050]: I1211 15:54:53.351269 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-65swh" podUID="58a60a6f-fcf5-4aa0-b6e4-97d7df2b2bd2" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.84:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:53 crc kubenswrapper[5050]: I1211 15:54:53.355384 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Dec 11 15:54:53 crc kubenswrapper[5050]: I1211 15:54:53.410776 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Dec 11 15:54:53 crc kubenswrapper[5050]: I1211 15:54:53.421556 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Dec 11 15:54:53 crc kubenswrapper[5050]: I1211 15:54:53.468750 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Dec 11 15:54:53 crc kubenswrapper[5050]: I1211 15:54:53.492490 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-ctkbt" Dec 11 15:54:53 crc kubenswrapper[5050]: I1211 15:54:53.504582 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Dec 11 15:54:53 crc kubenswrapper[5050]: I1211 15:54:53.549997 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Dec 11 15:54:53 crc kubenswrapper[5050]: I1211 15:54:53.582473 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-pjhfc" podUID="3c9e825c-0aee-42b9-a7a5-3191486f301d" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.85:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:54:53 crc kubenswrapper[5050]: I1211 15:54:53.590027 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Dec 11 15:54:53 crc kubenswrapper[5050]: I1211 15:54:53.590034 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-vtnxn" Dec 11 15:54:53 crc kubenswrapper[5050]: I1211 15:54:53.614278 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Dec 11 15:54:53 crc kubenswrapper[5050]: I1211 15:54:53.683603 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-d8bb48f5d-wwdcc" Dec 11 
15:54:53 crc kubenswrapper[5050]: I1211 15:54:53.684913 5050 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-p2rzt" Dec 11 15:54:53 crc kubenswrapper[5050]: I1211 15:54:53.685212 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-alertmanager-dockercfg-c2fxf" Dec 11 15:54:53 crc kubenswrapper[5050]: I1211 15:54:53.708686 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-d5dn6" Dec 11 15:54:53 crc kubenswrapper[5050]: I1211 15:54:53.800228 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Dec 11 15:54:53 crc kubenswrapper[5050]: I1211 15:54:53.803618 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Dec 11 15:54:53 crc kubenswrapper[5050]: I1211 15:54:53.833513 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 11 15:54:53 crc kubenswrapper[5050]: I1211 15:54:53.866436 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Dec 11 15:54:53 crc kubenswrapper[5050]: I1211 15:54:53.875088 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Dec 11 15:54:53 crc kubenswrapper[5050]: I1211 15:54:53.962343 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Dec 11 15:54:53 crc kubenswrapper[5050]: I1211 15:54:53.978356 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Dec 11 15:54:53 crc kubenswrapper[5050]: I1211 15:54:53.994600 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.009515 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.058002 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.128499 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.158683 5050 trace.go:236] Trace[1082271157]: "Reflector ListAndWatch" name:object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" (11-Dec-2025 15:54:43.696) (total time: 10462ms): Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[1082271157]: ---"Objects listed" error: 10462ms (15:54:54.158) Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[1082271157]: [10.462566277s] [10.462566277s] END Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.158709 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.185715 5050 trace.go:236] Trace[1143127693]: "Reflector ListAndWatch" name:object-"openstack"/"aodh-config-data" (11-Dec-2025 15:54:43.715) (total time: 10470ms): Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[1143127693]: ---"Objects listed" error: 10470ms 
(15:54:54.185) Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[1143127693]: [10.470099379s] [10.470099379s] END Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.185747 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.188460 5050 trace.go:236] Trace[655354960]: "Reflector ListAndWatch" name:object-"openstack"/"default-dockercfg-tmtdn" (11-Dec-2025 15:54:42.630) (total time: 11557ms): Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[655354960]: ---"Objects listed" error: 11557ms (15:54:54.188) Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[655354960]: [11.557719406s] [11.557719406s] END Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.188478 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-tmtdn" Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.189513 5050 trace.go:236] Trace[1011623842]: "Reflector ListAndWatch" name:object-"metallb-system"/"frr-k8s-webhook-server-cert" (11-Dec-2025 15:54:42.794) (total time: 11395ms): Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[1011623842]: ---"Objects listed" error: 11395ms (15:54:54.189) Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[1011623842]: [11.395291175s] [11.395291175s] END Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.189539 5050 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.190090 5050 trace.go:236] Trace[16863226]: "Reflector ListAndWatch" name:object-"openshift-operators"/"perses-operator-dockercfg-xflrf" (11-Dec-2025 15:54:42.696) (total time: 11493ms): Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[16863226]: ---"Objects listed" error: 11493ms (15:54:54.190) Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[16863226]: [11.493870306s] [11.493870306s] END Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.190108 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-xflrf" Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.190853 5050 trace.go:236] Trace[603568399]: "Reflector ListAndWatch" name:object-"openshift-nmstate"/"openshift-service-ca.crt" (11-Dec-2025 15:54:42.925) (total time: 11265ms): Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[603568399]: ---"Objects listed" error: 11265ms (15:54:54.190) Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[603568399]: [11.265490018s] [11.265490018s] END Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.190868 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.191160 5050 trace.go:236] Trace[111826479]: "Reflector ListAndWatch" name:object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" (11-Dec-2025 15:54:42.899) (total time: 11291ms): Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[111826479]: ---"Objects listed" error: 11291ms (15:54:54.191) Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[111826479]: [11.291317269s] [11.291317269s] END Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.191173 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.191670 5050 trace.go:236] Trace[1904601909]: "Reflector ListAndWatch" 
name:object-"openshift-apiserver"/"kube-root-ca.crt" (11-Dec-2025 15:54:42.877) (total time: 11314ms): Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[1904601909]: ---"Objects listed" error: 11314ms (15:54:54.191) Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[1904601909]: [11.314279044s] [11.314279044s] END Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.191822 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.195647 5050 trace.go:236] Trace[291459027]: "Reflector ListAndWatch" name:object-"openshift-image-registry"/"image-registry-tls" (11-Dec-2025 15:54:42.702) (total time: 11492ms): Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[291459027]: ---"Objects listed" error: 11492ms (15:54:54.195) Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[291459027]: [11.492877578s] [11.492877578s] END Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.195678 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.195884 5050 trace.go:236] Trace[1244636768]: "Reflector ListAndWatch" name:object-"openstack"/"nova-metadata-config-data" (11-Dec-2025 15:54:42.697) (total time: 11498ms): Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[1244636768]: ---"Objects listed" error: 11498ms (15:54:54.195) Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[1244636768]: [11.498187461s] [11.498187461s] END Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.195928 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.196127 5050 trace.go:236] Trace[1818788500]: "Reflector ListAndWatch" name:object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" (11-Dec-2025 15:54:42.861) (total time: 11334ms): Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[1818788500]: ---"Objects listed" error: 11334ms (15:54:54.196) Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[1818788500]: [11.334771044s] [11.334771044s] END Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.196279 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.196397 5050 trace.go:236] Trace[810638599]: "Reflector ListAndWatch" name:object-"openstack"/"heat-engine-config-data" (11-Dec-2025 15:54:42.875) (total time: 11320ms): Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[810638599]: ---"Objects listed" error: 11320ms (15:54:54.196) Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[810638599]: [11.320414589s] [11.320414589s] END Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.196427 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-engine-config-data" Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.196207 5050 trace.go:236] Trace[197021213]: "Reflector ListAndWatch" name:object-"openshift-cluster-machine-approver"/"machine-approver-config" (11-Dec-2025 15:54:42.695) (total time: 11500ms): Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[197021213]: ---"Objects listed" error: 11500ms (15:54:54.196) Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[197021213]: [11.500537894s] [11.500537894s] END Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.196536 5050 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.196936 5050 trace.go:236] Trace[316557414]: "Reflector ListAndWatch" name:object-"openshift-kube-storage-version-migrator-operator"/"config" (11-Dec-2025 15:54:42.824) (total time: 11372ms): Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[316557414]: ---"Objects listed" error: 11372ms (15:54:54.196) Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[316557414]: [11.372007571s] [11.372007571s] END Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.196954 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.196960 5050 trace.go:236] Trace[1345726947]: "Reflector ListAndWatch" name:object-"openshift-ingress"/"service-ca-bundle" (11-Dec-2025 15:54:42.738) (total time: 11458ms): Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[1345726947]: ---"Objects listed" error: 11458ms (15:54:54.196) Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[1345726947]: [11.45822183s] [11.45822183s] END Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.196974 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.197085 5050 trace.go:236] Trace[817261583]: "Reflector ListAndWatch" name:object-"openstack"/"glance-default-external-config-data" (11-Dec-2025 15:54:42.804) (total time: 11392ms): Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[817261583]: ---"Objects listed" error: 11392ms (15:54:54.197) Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[817261583]: [11.392154711s] [11.392154711s] END Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.197094 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.197488 5050 trace.go:236] Trace[1345622470]: "Reflector ListAndWatch" name:object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" (11-Dec-2025 15:54:42.657) (total time: 11539ms): Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[1345622470]: ---"Objects listed" error: 11539ms (15:54:54.197) Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[1345622470]: [11.539669522s] [11.539669522s] END Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.197622 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.198694 5050 trace.go:236] Trace[1490621697]: "Reflector ListAndWatch" name:object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" (11-Dec-2025 15:54:42.763) (total time: 11434ms): Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[1490621697]: ---"Objects listed" error: 11434ms (15:54:54.198) Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[1490621697]: [11.43469673s] [11.43469673s] END Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.198846 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.199150 5050 trace.go:236] Trace[2034140653]: "Reflector ListAndWatch" name:object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" (11-Dec-2025 15:54:42.738) (total time: 11460ms): Dec 11 
15:54:54 crc kubenswrapper[5050]: Trace[2034140653]: ---"Objects listed" error: 11460ms (15:54:54.199) Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[2034140653]: [11.460455901s] [11.460455901s] END Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.199177 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.199510 5050 trace.go:236] Trace[105933806]: "Reflector ListAndWatch" name:object-"openstack"/"memcached-memcached-dockercfg-kl4q7" (11-Dec-2025 15:54:42.915) (total time: 11284ms): Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[105933806]: ---"Objects listed" error: 11284ms (15:54:54.199) Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[105933806]: [11.284067244s] [11.284067244s] END Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.199716 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-kl4q7" Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.199910 5050 trace.go:236] Trace[1420603794]: "Reflector ListAndWatch" name:object-"openstack"/"octavia-healthmanager-config-data" (11-Dec-2025 15:54:42.909) (total time: 11289ms): Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[1420603794]: ---"Objects listed" error: 11289ms (15:54:54.199) Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[1420603794]: [11.289985873s] [11.289985873s] END Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.200098 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-healthmanager-config-data" Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.200319 5050 trace.go:236] Trace[489132421]: "Reflector ListAndWatch" name:object-"openshift-marketplace"/"marketplace-operator-metrics" (11-Dec-2025 15:54:42.873) (total time: 11327ms): Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[489132421]: ---"Objects listed" error: 11327ms (15:54:54.200) Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[489132421]: [11.327150519s] [11.327150519s] END Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.200467 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.203678 5050 trace.go:236] Trace[32774589]: "Reflector ListAndWatch" name:object-"openshift-controller-manager"/"openshift-service-ca.crt" (11-Dec-2025 15:54:42.942) (total time: 11261ms): Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[32774589]: ---"Objects listed" error: 11261ms (15:54:54.203) Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[32774589]: [11.261287684s] [11.261287684s] END Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.203713 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.204125 5050 trace.go:236] Trace[1381268424]: "Reflector ListAndWatch" name:object-"hostpath-provisioner"/"openshift-service-ca.crt" (11-Dec-2025 15:54:43.405) (total time: 10799ms): Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[1381268424]: ---"Objects listed" error: 10799ms (15:54:54.204) Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[1381268424]: [10.799047472s] [10.799047472s] END Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.204320 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Dec 11 15:54:54 crc 
kubenswrapper[5050]: I1211 15:54:54.208229 5050 trace.go:236] Trace[366131423]: "Reflector ListAndWatch" name:object-"openstack"/"keystone" (11-Dec-2025 15:54:43.478) (total time: 10729ms): Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[366131423]: ---"Objects listed" error: 10729ms (15:54:54.208) Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[366131423]: [10.729629151s] [10.729629151s] END Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.208251 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.208433 5050 trace.go:236] Trace[23111345]: "Reflector ListAndWatch" name:object-"openshift-network-operator"/"openshift-service-ca.crt" (11-Dec-2025 15:54:42.962) (total time: 11245ms): Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[23111345]: ---"Objects listed" error: 11245ms (15:54:54.208) Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[23111345]: [11.245581014s] [11.245581014s] END Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.208442 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.212149 5050 trace.go:236] Trace[1916795912]: "Reflector ListAndWatch" name:object-"openstack"/"openstack-config" (11-Dec-2025 15:54:43.477) (total time: 10735ms): Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[1916795912]: ---"Objects listed" error: 10735ms (15:54:54.212) Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[1916795912]: [10.735008906s] [10.735008906s] END Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.212318 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.212590 5050 trace.go:236] Trace[656219307]: "Reflector ListAndWatch" name:object-"openshift-console"/"console-dockercfg-f62pw" (11-Dec-2025 15:54:43.191) (total time: 11021ms): Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[656219307]: ---"Objects listed" error: 11021ms (15:54:54.212) Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[656219307]: [11.021412009s] [11.021412009s] END Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.212767 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.213110 5050 trace.go:236] Trace[1427340279]: "Reflector ListAndWatch" name:object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" (11-Dec-2025 15:54:43.672) (total time: 10540ms): Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[1427340279]: ---"Objects listed" error: 10540ms (15:54:54.213) Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[1427340279]: [10.540577777s] [10.540577777s] END Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.213282 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.213688 5050 trace.go:236] Trace[570730896]: "Reflector ListAndWatch" name:object-"openshift-nmstate"/"nmstate-operator-dockercfg-b8vjd" (11-Dec-2025 15:54:42.177) (total time: 12035ms): Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[570730896]: ---"Objects listed" error: 12035ms (15:54:54.213) Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[570730896]: [12.035966398s] [12.035966398s] END Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 
15:54:54.213825 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-b8vjd" Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.214227 5050 trace.go:236] Trace[1584067167]: "Reflector ListAndWatch" name:object-"openstack"/"horizon-horizon-dockercfg-d7bqh" (11-Dec-2025 15:54:43.672) (total time: 10541ms): Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[1584067167]: ---"Objects listed" error: 10541ms (15:54:54.214) Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[1584067167]: [10.541723468s] [10.541723468s] END Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.214373 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-d7bqh" Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.214594 5050 trace.go:236] Trace[1144134311]: "Reflector ListAndWatch" name:object-"openshift-ingress-canary"/"openshift-service-ca.crt" (11-Dec-2025 15:54:42.971) (total time: 11243ms): Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[1144134311]: ---"Objects listed" error: 11243ms (15:54:54.214) Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[1144134311]: [11.24317079s] [11.24317079s] END Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.216784 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.215791 5050 trace.go:236] Trace[1997106076]: "Reflector ListAndWatch" name:object-"openstack"/"octavia-rsyslog-config-data" (11-Dec-2025 15:54:43.210) (total time: 11005ms): Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[1997106076]: ---"Objects listed" error: 11005ms (15:54:54.215) Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[1997106076]: [11.005133742s] [11.005133742s] END Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.217132 5050 trace.go:236] Trace[399095670]: "Reflector ListAndWatch" name:object-"openstack"/"dataplane-adoption-secret" (11-Dec-2025 15:54:43.547) (total time: 10669ms): Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[399095670]: ---"Objects listed" error: 10669ms (15:54:54.217) Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[399095670]: [10.669404729s] [10.669404729s] END Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.217152 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.217326 5050 trace.go:236] Trace[332559329]: "Reflector ListAndWatch" name:object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" (11-Dec-2025 15:54:43.147) (total time: 11070ms): Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[332559329]: ---"Objects listed" error: 11070ms (15:54:54.217) Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[332559329]: [11.07000895s] [11.07000895s] END Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.217338 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.217468 5050 trace.go:236] Trace[1932061041]: "Reflector ListAndWatch" name:object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-f52rf" (11-Dec-2025 15:54:43.627) (total time: 10589ms): Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[1932061041]: ---"Objects listed" error: 10589ms (15:54:54.217) Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[1932061041]: [10.58958898s] 
[10.58958898s] END Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.217479 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-f52rf" Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.217581 5050 trace.go:236] Trace[695529953]: "Reflector ListAndWatch" name:object-"openstack"/"ovn-data-cert" (11-Dec-2025 15:54:42.954) (total time: 11263ms): Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[695529953]: ---"Objects listed" error: 11263ms (15:54:54.217) Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[695529953]: [11.263159025s] [11.263159025s] END Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.217589 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovn-data-cert" Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.217134 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-rsyslog-config-data" Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.216470 5050 trace.go:236] Trace[1472536549]: "Reflector ListAndWatch" name:object-"openshift-config-operator"/"openshift-service-ca.crt" (11-Dec-2025 15:54:43.081) (total time: 11135ms): Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[1472536549]: ---"Objects listed" error: 11135ms (15:54:54.216) Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[1472536549]: [11.135293439s] [11.135293439s] END Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.217730 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.217069 5050 trace.go:236] Trace[1066548452]: "Reflector ListAndWatch" name:object-"openshift-machine-config-operator"/"openshift-service-ca.crt" (11-Dec-2025 15:54:43.570) (total time: 10646ms): Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[1066548452]: ---"Objects listed" error: 10646ms (15:54:54.217) Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[1066548452]: [10.646071983s] [10.646071983s] END Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.217791 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.216396 5050 trace.go:236] Trace[1618284435]: "Reflector ListAndWatch" name:object-"metallb-system"/"metallb-memberlist" (11-Dec-2025 15:54:43.530) (total time: 10686ms): Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[1618284435]: ---"Objects listed" error: 10686ms (15:54:54.216) Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[1618284435]: [10.686098366s] [10.686098366s] END Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.217850 5050 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.218038 5050 trace.go:236] Trace[1558717969]: "Reflector ListAndWatch" name:object-"openstack"/"ceph-conf-files" (11-Dec-2025 15:54:42.549) (total time: 11668ms): Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[1558717969]: ---"Objects listed" error: 11668ms (15:54:54.217) Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[1558717969]: [11.668139164s] [11.668139164s] END Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.218047 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.218188 5050 trace.go:236] Trace[1625442471]: 
"Reflector ListAndWatch" name:object-"openshift-authentication-operator"/"kube-root-ca.crt" (11-Dec-2025 15:54:43.013) (total time: 11204ms): Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[1625442471]: ---"Objects listed" error: 11204ms (15:54:54.218) Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[1625442471]: [11.204936525s] [11.204936525s] END Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.218196 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.218292 5050 trace.go:236] Trace[1192925875]: "Reflector ListAndWatch" name:object-"openstack"/"octavia-api-config-data" (11-Dec-2025 15:54:43.693) (total time: 10524ms): Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[1192925875]: ---"Objects listed" error: 10524ms (15:54:54.218) Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[1192925875]: [10.524780944s] [10.524780944s] END Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.218322 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-api-config-data" Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.218293 5050 trace.go:236] Trace[40274594]: "Reflector ListAndWatch" name:object-"openshift-etcd-operator"/"etcd-ca-bundle" (11-Dec-2025 15:54:42.925) (total time: 11292ms): Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[40274594]: ---"Objects listed" error: 11292ms (15:54:54.218) Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[40274594]: [11.29282543s] [11.29282543s] END Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.218351 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.218457 5050 trace.go:236] Trace[1996473534]: "Reflector ListAndWatch" name:object-"openstack"/"rabbitmq-server-dockercfg-zd6qh" (11-Dec-2025 15:54:43.717) (total time: 10500ms): Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[1996473534]: ---"Objects listed" error: 10500ms (15:54:54.218) Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[1996473534]: [10.500604406s] [10.500604406s] END Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.218473 5050 trace.go:236] Trace[1177923722]: "Reflector ListAndWatch" name:object-"openstack-operators"/"test-operator-controller-manager-dockercfg-cclxg" (11-Dec-2025 15:54:43.007) (total time: 11211ms): Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[1177923722]: ---"Objects listed" error: 11211ms (15:54:54.218) Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[1177923722]: [11.211326137s] [11.211326137s] END Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.218481 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-zd6qh" Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.218485 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-cclxg" Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.218542 5050 trace.go:236] Trace[738203771]: "Reflector ListAndWatch" name:object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" (11-Dec-2025 15:54:43.534) (total time: 10683ms): Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[738203771]: ---"Objects listed" error: 10683ms (15:54:54.218) Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[738203771]: [10.683579288s] [10.683579288s] END Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.218554 
5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.218594 5050 trace.go:236] Trace[1136400077]: "Reflector ListAndWatch" name:object-"openstack"/"kube-root-ca.crt" (11-Dec-2025 15:54:43.425) (total time: 10793ms): Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[1136400077]: ---"Objects listed" error: 10793ms (15:54:54.218) Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[1136400077]: [10.793471032s] [10.793471032s] END Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.218602 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.218654 5050 trace.go:236] Trace[1948890934]: "Reflector ListAndWatch" name:object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" (11-Dec-2025 15:54:43.051) (total time: 11167ms): Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[1948890934]: ---"Objects listed" error: 11167ms (15:54:54.218) Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[1948890934]: [11.167521082s] [11.167521082s] END Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.218662 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.222857 5050 trace.go:236] Trace[1825400731]: "Reflector ListAndWatch" name:object-"openstack"/"placement-placement-dockercfg-4zzmp" (11-Dec-2025 15:54:43.084) (total time: 11138ms): Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[1825400731]: ---"Objects listed" error: 11138ms (15:54:54.222) Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[1825400731]: [11.138523696s] [11.138523696s] END Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.222884 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-4zzmp" Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.223135 5050 trace.go:236] Trace[1913599279]: "Reflector ListAndWatch" name:object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" (11-Dec-2025 15:54:41.932) (total time: 12290ms): Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[1913599279]: ---"Objects listed" error: 12290ms (15:54:54.223) Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[1913599279]: [12.29060196s] [12.29060196s] END Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.223147 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.223407 5050 trace.go:236] Trace[23220471]: "Reflector ListAndWatch" name:object-"openstack"/"alertmanager-metric-storage-tls-assets-0" (11-Dec-2025 15:54:42.293) (total time: 11930ms): Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[23220471]: ---"Objects listed" error: 11930ms (15:54:54.223) Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[23220471]: [11.930160944s] [11.930160944s] END Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.223418 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-tls-assets-0" Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.223574 5050 trace.go:236] Trace[1926963698]: "Reflector ListAndWatch" name:object-"openshift-service-ca-operator"/"service-ca-operator-config" 
(11-Dec-2025 15:54:42.282) (total time: 11940ms): Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[1926963698]: ---"Objects listed" error: 11940ms (15:54:54.223) Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[1926963698]: [11.940659035s] [11.940659035s] END Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.223584 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.227797 5050 trace.go:236] Trace[2033091951]: "Reflector ListAndWatch" name:object-"openshift-machine-config-operator"/"kube-root-ca.crt" (11-Dec-2025 15:54:42.431) (total time: 11796ms): Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[2033091951]: ---"Objects listed" error: 11796ms (15:54:54.227) Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[2033091951]: [11.796229926s] [11.796229926s] END Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.227825 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.227882 5050 trace.go:236] Trace[1494251716]: "Reflector ListAndWatch" name:object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" (11-Dec-2025 15:54:43.581) (total time: 10646ms): Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[1494251716]: ---"Objects listed" error: 10646ms (15:54:54.227) Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[1494251716]: [10.646604468s] [10.646604468s] END Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.227900 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.228064 5050 trace.go:236] Trace[1082903981]: "Reflector ListAndWatch" name:object-"openstack"/"manila-api-config-data" (11-Dec-2025 15:54:42.115) (total time: 12112ms): Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[1082903981]: ---"Objects listed" error: 12112ms (15:54:54.228) Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[1082903981]: [12.112459407s] [12.112459407s] END Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.228077 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-api-config-data" Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.228160 5050 trace.go:236] Trace[781429608]: "Reflector ListAndWatch" name:object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" (11-Dec-2025 15:54:43.514) (total time: 10714ms): Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[781429608]: ---"Objects listed" error: 10713ms (15:54:54.228) Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[781429608]: [10.714006263s] [10.714006263s] END Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.228173 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.228343 5050 trace.go:236] Trace[748418223]: "Reflector ListAndWatch" name:object-"metallb-system"/"frr-k8s-daemon-dockercfg-wks79" (11-Dec-2025 15:54:43.084) (total time: 11144ms): Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[748418223]: ---"Objects listed" error: 11144ms (15:54:54.228) Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[748418223]: [11.144133286s] [11.144133286s] END Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.228353 5050 reflector.go:368] 
Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-wks79" Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.228378 5050 trace.go:236] Trace[2043467862]: "Reflector ListAndWatch" name:object-"openshift-operators"/"obo-prometheus-operator-dockercfg-mpfzr" (11-Dec-2025 15:54:43.617) (total time: 10611ms): Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[2043467862]: ---"Objects listed" error: 10611ms (15:54:54.228) Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[2043467862]: [10.611262511s] [10.611262511s] END Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.228390 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-mpfzr" Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.227814 5050 trace.go:236] Trace[1713861542]: "Reflector ListAndWatch" name:object-"openshift-authentication"/"v4-0-config-system-serving-cert" (11-Dec-2025 15:54:43.465) (total time: 10762ms): Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[1713861542]: ---"Objects listed" error: 10762ms (15:54:54.227) Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[1713861542]: [10.762689907s] [10.762689907s] END Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.228428 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.228563 5050 trace.go:236] Trace[1650507484]: "Reflector ListAndWatch" name:object-"openstack"/"ovncontroller-scripts" (11-Dec-2025 15:54:43.227) (total time: 11001ms): Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[1650507484]: ---"Objects listed" error: 11001ms (15:54:54.228) Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[1650507484]: [11.001251398s] [11.001251398s] END Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.228572 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.228614 5050 trace.go:236] Trace[1189223914]: "Reflector ListAndWatch" name:object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-jbnlr" (11-Dec-2025 15:54:43.336) (total time: 10892ms): Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[1189223914]: ---"Objects listed" error: 10892ms (15:54:54.228) Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[1189223914]: [10.892155896s] [10.892155896s] END Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.228625 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-jbnlr" Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.228684 5050 trace.go:236] Trace[156036905]: "Reflector ListAndWatch" name:object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-8nf9c" (11-Dec-2025 15:54:43.646) (total time: 10582ms): Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[156036905]: ---"Objects listed" error: 10582ms (15:54:54.228) Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[156036905]: [10.582030727s] [10.582030727s] END Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.228691 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-8nf9c" Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.229864 5050 trace.go:236] Trace[908025793]: "Reflector ListAndWatch" name:object-"openstack"/"dns" (11-Dec-2025 15:54:42.606) (total 
time: 11623ms): Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[908025793]: ---"Objects listed" error: 11623ms (15:54:54.229) Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[908025793]: [11.623805416s] [11.623805416s] END Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.229888 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.230233 5050 trace.go:236] Trace[271790152]: "Reflector ListAndWatch" name:object-"openstack"/"heat-cfnapi-config-data" (11-Dec-2025 15:54:42.405) (total time: 11825ms): Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[271790152]: ---"Objects listed" error: 11825ms (15:54:54.230) Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[271790152]: [11.825033777s] [11.825033777s] END Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.230246 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-cfnapi-config-data" Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.230507 5050 trace.go:236] Trace[1685052938]: "Reflector ListAndWatch" name:object-"openshift-nmstate"/"nginx-conf" (11-Dec-2025 15:54:42.959) (total time: 11271ms): Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[1685052938]: ---"Objects listed" error: 11271ms (15:54:54.230) Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[1685052938]: [11.271060257s] [11.271060257s] END Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.230519 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.230538 5050 trace.go:236] Trace[1070102921]: "Reflector ListAndWatch" name:object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" (11-Dec-2025 15:54:42.335) (total time: 11895ms): Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[1070102921]: ---"Objects listed" error: 11895ms (15:54:54.230) Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[1070102921]: [11.895400672s] [11.895400672s] END Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.230587 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.231084 5050 trace.go:236] Trace[1011704798]: "Reflector ListAndWatch" name:object-"openshift-network-console"/"networking-console-plugin" (11-Dec-2025 15:54:42.493) (total time: 11737ms): Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[1011704798]: ---"Objects listed" error: 11737ms (15:54:54.231) Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[1011704798]: [11.737933674s] [11.737933674s] END Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.231098 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.231310 5050 trace.go:236] Trace[518440244]: "Reflector ListAndWatch" name:object-"openstack"/"glance-scripts" (11-Dec-2025 15:54:42.291) (total time: 11939ms): Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[518440244]: ---"Objects listed" error: 11939ms (15:54:54.231) Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[518440244]: [11.93934798s] [11.93934798s] END Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.231324 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.231592 5050 
trace.go:236] Trace[916417378]: "Reflector ListAndWatch" name:object-"openstack"/"memcached-config-data" (11-Dec-2025 15:54:42.583) (total time: 11647ms): Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[916417378]: ---"Objects listed" error: 11647ms (15:54:54.231) Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[916417378]: [11.647950324s] [11.647950324s] END Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.231604 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.231698 5050 trace.go:236] Trace[1567417402]: "Reflector ListAndWatch" name:object-"openshift-etcd-operator"/"etcd-service-ca-bundle" (11-Dec-2025 15:54:43.486) (total time: 10745ms): Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[1567417402]: ---"Objects listed" error: 10745ms (15:54:54.231) Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[1567417402]: [10.745128647s] [10.745128647s] END Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.231726 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.231727 5050 trace.go:236] Trace[743930558]: "Reflector ListAndWatch" name:object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" (11-Dec-2025 15:54:43.268) (total time: 10963ms): Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[743930558]: ---"Objects listed" error: 10963ms (15:54:54.231) Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[743930558]: [10.963294772s] [10.963294772s] END Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.231831 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.232601 5050 trace.go:236] Trace[67274031]: "Reflector ListAndWatch" name:object-"openshift-etcd-operator"/"etcd-client" (11-Dec-2025 15:54:42.120) (total time: 12112ms): Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[67274031]: ---"Objects listed" error: 12112ms (15:54:54.232) Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[67274031]: [12.112321944s] [12.112321944s] END Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.232613 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.233234 5050 trace.go:236] Trace[711421361]: "Reflector ListAndWatch" name:object-"openstack"/"nova-api-config-data" (11-Dec-2025 15:54:42.196) (total time: 12036ms): Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[711421361]: ---"Objects listed" error: 12036ms (15:54:54.233) Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[711421361]: [12.036749619s] [12.036749619s] END Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.233458 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.233714 5050 trace.go:236] Trace[79707843]: "Reflector ListAndWatch" name:object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-nffdg" (11-Dec-2025 15:54:42.511) (total time: 11722ms): Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[79707843]: ---"Objects listed" error: 11722ms (15:54:54.233) Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[79707843]: [11.722347827s] [11.722347827s] END Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.233903 5050 reflector.go:368] 
Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-nffdg" Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.233993 5050 trace.go:236] Trace[1602542864]: "Reflector ListAndWatch" name:object-"openshift-authentication-operator"/"serving-cert" (11-Dec-2025 15:54:42.249) (total time: 11984ms): Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[1602542864]: ---"Objects listed" error: 11984ms (15:54:54.233) Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[1602542864]: [11.984090749s] [11.984090749s] END Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.234050 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.234271 5050 trace.go:236] Trace[373934236]: "Reflector ListAndWatch" name:object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" (11-Dec-2025 15:54:42.021) (total time: 12212ms): Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[373934236]: ---"Objects listed" error: 12212ms (15:54:54.234) Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[373934236]: [12.212778125s] [12.212778125s] END Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.234286 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.233828 5050 trace.go:236] Trace[1541414765]: "Reflector ListAndWatch" name:object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" (11-Dec-2025 15:54:43.523) (total time: 10710ms): Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[1541414765]: ---"Objects listed" error: 10710ms (15:54:54.233) Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[1541414765]: [10.710482209s] [10.710482209s] END Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.234378 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.234489 5050 trace.go:236] Trace[828530694]: "Reflector ListAndWatch" name:object-"openshift-apiserver"/"serving-cert" (11-Dec-2025 15:54:41.967) (total time: 12267ms): Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[828530694]: ---"Objects listed" error: 12267ms (15:54:54.234) Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[828530694]: [12.267372758s] [12.267372758s] END Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.234501 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.234805 5050 trace.go:236] Trace[1858835404]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (11-Dec-2025 15:54:42.477) (total time: 11757ms): Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[1858835404]: ---"Objects listed" error: 11757ms (15:54:54.234) Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[1858835404]: [11.757376245s] [11.757376245s] END Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.234821 5050 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.235664 5050 trace.go:236] Trace[2060465850]: "Reflector ListAndWatch" name:object-"openshift-marketplace"/"kube-root-ca.crt" (11-Dec-2025 15:54:42.546) (total time: 11688ms): Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[2060465850]: ---"Objects listed" 
error: 11688ms (15:54:54.235) Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[2060465850]: [11.688855189s] [11.688855189s] END Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.235841 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.236233 5050 trace.go:236] Trace[93466984]: "Reflector ListAndWatch" name:object-"openstack"/"manila-config-data" (11-Dec-2025 15:54:42.264) (total time: 11971ms): Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[93466984]: ---"Objects listed" error: 11971ms (15:54:54.236) Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[93466984]: [11.971630615s] [11.971630615s] END Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.236396 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-config-data" Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.237327 5050 trace.go:236] Trace[1251477639]: "Reflector ListAndWatch" name:object-"openstack"/"octavia-housekeeping-config-data" (11-Dec-2025 15:54:41.989) (total time: 12247ms): Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[1251477639]: ---"Objects listed" error: 12247ms (15:54:54.237) Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[1251477639]: [12.247888786s] [12.247888786s] END Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.237492 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-housekeeping-config-data" Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.238490 5050 trace.go:236] Trace[607578373]: "Reflector ListAndWatch" name:object-"openstack"/"dnsmasq-dns-dockercfg-kpmgv" (11-Dec-2025 15:54:42.573) (total time: 11665ms): Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[607578373]: ---"Objects listed" error: 11665ms (15:54:54.238) Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[607578373]: [11.665278167s] [11.665278167s] END Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.238509 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-kpmgv" Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.245784 5050 trace.go:236] Trace[407424915]: "Reflector ListAndWatch" name:object-"openshift-authentication"/"v4-0-config-user-template-login" (11-Dec-2025 15:54:43.429) (total time: 10815ms): Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[407424915]: ---"Objects listed" error: 10815ms (15:54:54.245) Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[407424915]: [10.815998575s] [10.815998575s] END Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.246136 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.249365 5050 trace.go:236] Trace[849091715]: "Reflector ListAndWatch" name:object-"metallb-system"/"manager-account-dockercfg-m6zt9" (11-Dec-2025 15:54:42.352) (total time: 11897ms): Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[849091715]: ---"Objects listed" error: 11897ms (15:54:54.249) Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[849091715]: [11.897052457s] [11.897052457s] END Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.249400 5050 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-m6zt9" Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.251868 5050 trace.go:236] Trace[1308787372]: "Reflector ListAndWatch" 
name:object-"openshift-marketplace"/"marketplace-trusted-ca" (11-Dec-2025 15:54:42.537) (total time: 11714ms): Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[1308787372]: ---"Objects listed" error: 11714ms (15:54:54.251) Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[1308787372]: [11.714458205s] [11.714458205s] END Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.252050 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.275400 5050 trace.go:236] Trace[101202887]: "Reflector ListAndWatch" name:object-"openshift-dns-operator"/"openshift-service-ca.crt" (11-Dec-2025 15:54:42.629) (total time: 11645ms): Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[101202887]: ---"Objects listed" error: 11645ms (15:54:54.275) Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[101202887]: [11.64595243s] [11.64595243s] END Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.275437 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.289862 5050 trace.go:236] Trace[314118691]: "Reflector ListAndWatch" name:object-"openshift-console"/"console-config" (11-Dec-2025 15:54:42.554) (total time: 11735ms): Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[314118691]: ---"Objects listed" error: 11735ms (15:54:54.289) Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[314118691]: [11.735349395s] [11.735349395s] END Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.289891 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.291161 5050 trace.go:236] Trace[1706080134]: "Reflector ListAndWatch" name:pkg/kubelet/config/apiserver.go:66 (11-Dec-2025 15:54:42.657) (total time: 11633ms): Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[1706080134]: ---"Objects listed" error: 11621ms (15:54:54.278) Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[1706080134]: [11.633339762s] [11.633339762s] END Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.291201 5050 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.308072 5050 trace.go:236] Trace[460063651]: "Reflector ListAndWatch" name:object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" (11-Dec-2025 15:54:42.372) (total time: 11935ms): Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[460063651]: ---"Objects listed" error: 11935ms (15:54:54.307) Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[460063651]: [11.935370563s] [11.935370563s] END Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.308229 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.325824 5050 trace.go:236] Trace[611659194]: "Reflector ListAndWatch" name:object-"openshift-network-console"/"networking-console-plugin-cert" (11-Dec-2025 15:54:43.730) (total time: 10595ms): Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[611659194]: ---"Objects listed" error: 10595ms (15:54:54.325) Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[611659194]: [10.595731705s] [10.595731705s] END Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.325849 5050 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-network-console"/"networking-console-plugin-cert" Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.343080 5050 trace.go:236] Trace[870265371]: "Reflector ListAndWatch" name:object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" (11-Dec-2025 15:54:43.730) (total time: 10612ms): Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[870265371]: ---"Objects listed" error: 10612ms (15:54:54.342) Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[870265371]: [10.61273339s] [10.61273339s] END Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.343329 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.372411 5050 trace.go:236] Trace[627878662]: "Reflector ListAndWatch" name:object-"metallb-system"/"frr-k8s-certs-secret" (11-Dec-2025 15:54:43.046) (total time: 11325ms): Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[627878662]: ---"Objects listed" error: 11325ms (15:54:54.372) Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[627878662]: [11.325705431s] [11.325705431s] END Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.372435 5050 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.423554 5050 trace.go:236] Trace[923828591]: "Reflector ListAndWatch" name:object-"openstack"/"keystone-scripts" (11-Dec-2025 15:54:43.732) (total time: 10690ms): Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[923828591]: ---"Objects listed" error: 10690ms (15:54:54.423) Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[923828591]: [10.690588656s] [10.690588656s] END Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.423583 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.457320 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.506624 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.520245 5050 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.520366 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.572642 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.573117 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-zlz4m" Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.664363 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-5b74fbd87-zqsjt" Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.666047 5050 trace.go:236] Trace[1812270289]: "Reflector ListAndWatch" name:object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" (11-Dec-2025 15:54:43.818) (total time: 10847ms): Dec 11 15:54:54 crc 
kubenswrapper[5050]: Trace[1812270289]: ---"Objects listed" error: 10847ms (15:54:54.665) Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[1812270289]: [10.84714933s] [10.84714933s] END Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.666071 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.666317 5050 trace.go:236] Trace[91431548]: "Reflector ListAndWatch" name:object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" (11-Dec-2025 15:54:43.789) (total time: 10877ms): Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[91431548]: ---"Objects listed" error: 10877ms (15:54:54.666) Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[91431548]: [10.877228736s] [10.877228736s] END Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.666335 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.666549 5050 trace.go:236] Trace[1458835104]: "Reflector ListAndWatch" name:object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-w847r" (11-Dec-2025 15:54:43.896) (total time: 10769ms): Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[1458835104]: ---"Objects listed" error: 10769ms (15:54:54.666) Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[1458835104]: [10.769923581s] [10.769923581s] END Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.666585 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-w847r" Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.730367 5050 trace.go:236] Trace[1509387907]: "Reflector ListAndWatch" name:object-"openstack"/"ovsdbserver-nb" (11-Dec-2025 15:54:43.896) (total time: 10833ms): Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[1509387907]: ---"Objects listed" error: 10833ms (15:54:54.730) Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[1509387907]: [10.833798092s] [10.833798092s] END Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.730398 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.730625 5050 trace.go:236] Trace[896668961]: "Reflector ListAndWatch" name:object-"openstack"/"manila-share-share1-config-data" (11-Dec-2025 15:54:43.793) (total time: 10937ms): Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[896668961]: ---"Objects listed" error: 10937ms (15:54:54.730) Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[896668961]: [10.937510021s] [10.937510021s] END Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.730636 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-share-share1-config-data" Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.730829 5050 trace.go:236] Trace[1127043490]: "Reflector ListAndWatch" name:object-"openshift-service-ca-operator"/"kube-root-ca.crt" (11-Dec-2025 15:54:43.872) (total time: 10858ms): Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[1127043490]: ---"Objects listed" error: 10858ms (15:54:54.730) Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[1127043490]: [10.85833637s] [10.85833637s] END Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.730841 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 
15:54:54.731031 5050 trace.go:236] Trace[411798694]: "Reflector ListAndWatch" name:object-"openshift-multus"/"multus-daemon-config" (11-Dec-2025 15:54:43.925) (total time: 10805ms): Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[411798694]: ---"Objects listed" error: 10805ms (15:54:54.730) Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[411798694]: [10.805140674s] [10.805140674s] END Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.731042 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.763383 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.774908 5050 trace.go:236] Trace[1916985219]: "Reflector ListAndWatch" name:object-"openshift-dns-operator"/"metrics-tls" (11-Dec-2025 15:54:43.951) (total time: 10823ms): Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[1916985219]: ---"Objects listed" error: 10823ms (15:54:54.774) Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[1916985219]: [10.823773223s] [10.823773223s] END Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.774935 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.775168 5050 trace.go:236] Trace[196374583]: "Reflector ListAndWatch" name:object-"openstack"/"ovndbcluster-sb-config" (11-Dec-2025 15:54:43.974) (total time: 10800ms): Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[196374583]: ---"Objects listed" error: 10800ms (15:54:54.775) Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[196374583]: [10.800362555s] [10.800362555s] END Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.775179 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.775297 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-5g8lw" podUID="3477354d-838b-48cc-a6c3-612088d82640" containerName="manager" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.796503 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.853054 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.853795 5050 trace.go:236] Trace[334031159]: "Reflector ListAndWatch" name:object-"openstack"/"openstack-cell1-scripts" (11-Dec-2025 15:54:44.034) (total time: 10819ms): Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[334031159]: ---"Objects listed" error: 10819ms (15:54:54.853) Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[334031159]: [10.819237682s] [10.819237682s] END Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.854051 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.915581 5050 trace.go:236] Trace[1345514722]: "Reflector ListAndWatch" name:object-"openstack"/"nova-cell0-conductor-config-data" (11-Dec-2025 15:54:44.028) (total time: 10886ms): Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[1345514722]: ---"Objects 
listed" error: 10886ms (15:54:54.915) Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[1345514722]: [10.886780381s] [10.886780381s] END Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.916261 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.916463 5050 trace.go:236] Trace[89691570]: "Reflector ListAndWatch" name:object-"openshift-ingress-canary"/"canary-serving-cert" (11-Dec-2025 15:54:44.041) (total time: 10874ms): Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[89691570]: ---"Objects listed" error: 10874ms (15:54:54.916) Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[89691570]: [10.874546274s] [10.874546274s] END Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.916488 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.916753 5050 trace.go:236] Trace[1028910731]: "Reflector ListAndWatch" name:object-"openstack"/"octavia-hmport-map" (11-Dec-2025 15:54:44.138) (total time: 10778ms): Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[1028910731]: ---"Objects listed" error: 10778ms (15:54:54.916) Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[1028910731]: [10.778145971s] [10.778145971s] END Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.916901 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"octavia-hmport-map" Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.917183 5050 trace.go:236] Trace[1169494886]: "Reflector ListAndWatch" name:object-"metallb-system"/"controller-certs-secret" (11-Dec-2025 15:54:44.088) (total time: 10828ms): Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[1169494886]: ---"Objects listed" error: 10828ms (15:54:54.917) Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[1169494886]: [10.828625994s] [10.828625994s] END Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.917457 5050 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.927479 5050 trace.go:236] Trace[1621961714]: "Reflector ListAndWatch" name:object-"openstack"/"cinder-config-data" (11-Dec-2025 15:54:44.152) (total time: 10775ms): Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[1621961714]: ---"Objects listed" error: 10775ms (15:54:54.927) Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[1621961714]: [10.775266895s] [10.775266895s] END Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.927851 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.953003 5050 trace.go:236] Trace[1853264669]: "Reflector ListAndWatch" name:object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" (11-Dec-2025 15:54:44.176) (total time: 10776ms): Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[1853264669]: ---"Objects listed" error: 10776ms (15:54:54.952) Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[1853264669]: [10.776887358s] [10.776887358s] END Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.953073 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.961436 5050 trace.go:236] Trace[1118772108]: "Reflector ListAndWatch" 
name:object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" (11-Dec-2025 15:54:44.209) (total time: 10751ms): Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[1118772108]: ---"Objects listed" error: 10751ms (15:54:54.961) Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[1118772108]: [10.751636871s] [10.751636871s] END Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.961791 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.991414 5050 trace.go:236] Trace[1073204617]: "Reflector ListAndWatch" name:object-"openshift-ingress-operator"/"metrics-tls" (11-Dec-2025 15:54:44.270) (total time: 10721ms): Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[1073204617]: ---"Objects listed" error: 10721ms (15:54:54.991) Dec 11 15:54:54 crc kubenswrapper[5050]: Trace[1073204617]: [10.721070953s] [10.721070953s] END Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.991446 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Dec 11 15:54:54 crc kubenswrapper[5050]: I1211 15:54:54.992363 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Dec 11 15:54:55 crc kubenswrapper[5050]: I1211 15:54:55.022703 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Dec 11 15:54:55 crc kubenswrapper[5050]: I1211 15:54:55.077294 5050 trace.go:236] Trace[1605710689]: "Reflector ListAndWatch" name:object-"openshift-multus"/"default-cni-sysctl-allowlist" (11-Dec-2025 15:54:44.322) (total time: 10754ms): Dec 11 15:54:55 crc kubenswrapper[5050]: Trace[1605710689]: ---"Objects listed" error: 10754ms (15:54:55.077) Dec 11 15:54:55 crc kubenswrapper[5050]: Trace[1605710689]: [10.754750194s] [10.754750194s] END Dec 11 15:54:55 crc kubenswrapper[5050]: I1211 15:54:55.077322 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Dec 11 15:54:55 crc kubenswrapper[5050]: I1211 15:54:55.081275 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-octavia-dockercfg-h4g5n" Dec 11 15:54:55 crc kubenswrapper[5050]: I1211 15:54:55.092675 5050 trace.go:236] Trace[1884634004]: "Reflector ListAndWatch" name:object-"openshift-etcd-operator"/"etcd-operator-serving-cert" (11-Dec-2025 15:54:44.283) (total time: 10809ms): Dec 11 15:54:55 crc kubenswrapper[5050]: Trace[1884634004]: ---"Objects listed" error: 10809ms (15:54:55.092) Dec 11 15:54:55 crc kubenswrapper[5050]: Trace[1884634004]: [10.809302366s] [10.809302366s] END Dec 11 15:54:55 crc kubenswrapper[5050]: I1211 15:54:55.092705 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Dec 11 15:54:55 crc kubenswrapper[5050]: I1211 15:54:55.096637 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-g2bpg"] Dec 11 15:54:55 crc kubenswrapper[5050]: I1211 15:54:55.120406 5050 trace.go:236] Trace[831976848]: "Reflector ListAndWatch" name:object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" (11-Dec-2025 15:54:44.326) (total time: 10794ms): Dec 11 15:54:55 crc kubenswrapper[5050]: Trace[831976848]: ---"Objects listed" error: 10794ms (15:54:55.120) Dec 11 
15:54:55 crc kubenswrapper[5050]: Trace[831976848]: [10.794289274s] [10.794289274s] END Dec 11 15:54:55 crc kubenswrapper[5050]: I1211 15:54:55.120829 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Dec 11 15:54:55 crc kubenswrapper[5050]: I1211 15:54:55.205472 5050 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-kbhwz" Dec 11 15:54:55 crc kubenswrapper[5050]: I1211 15:54:55.205739 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Dec 11 15:54:55 crc kubenswrapper[5050]: I1211 15:54:55.205948 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Dec 11 15:54:55 crc kubenswrapper[5050]: I1211 15:54:55.206122 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Dec 11 15:54:55 crc kubenswrapper[5050]: I1211 15:54:55.232100 5050 patch_prober.go:28] interesting pod/apiserver-76f77b778f-cd66n container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Dec 11 15:54:55 crc kubenswrapper[5050]: [+]log ok Dec 11 15:54:55 crc kubenswrapper[5050]: [+]etcd excluded: ok Dec 11 15:54:55 crc kubenswrapper[5050]: [+]etcd-readiness excluded: ok Dec 11 15:54:55 crc kubenswrapper[5050]: [+]poststarthook/start-apiserver-admission-initializer ok Dec 11 15:54:55 crc kubenswrapper[5050]: [+]informer-sync ok Dec 11 15:54:55 crc kubenswrapper[5050]: [+]poststarthook/generic-apiserver-start-informers ok Dec 11 15:54:55 crc kubenswrapper[5050]: [+]poststarthook/max-in-flight-filter ok Dec 11 15:54:55 crc kubenswrapper[5050]: [+]poststarthook/storage-object-count-tracker-hook ok Dec 11 15:54:55 crc kubenswrapper[5050]: [+]poststarthook/image.openshift.io-apiserver-caches ok Dec 11 15:54:55 crc kubenswrapper[5050]: [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok Dec 11 15:54:55 crc kubenswrapper[5050]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Dec 11 15:54:55 crc kubenswrapper[5050]: [+]poststarthook/project.openshift.io-projectcache ok Dec 11 15:54:55 crc kubenswrapper[5050]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Dec 11 15:54:55 crc kubenswrapper[5050]: [+]poststarthook/openshift.io-startinformers ok Dec 11 15:54:55 crc kubenswrapper[5050]: [+]poststarthook/openshift.io-restmapperupdater ok Dec 11 15:54:55 crc kubenswrapper[5050]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Dec 11 15:54:55 crc kubenswrapper[5050]: [-]shutdown failed: reason withheld Dec 11 15:54:55 crc kubenswrapper[5050]: readyz check failed Dec 11 15:54:55 crc kubenswrapper[5050]: I1211 15:54:55.232169 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-76f77b778f-cd66n" podUID="ea884e88-c0df-4212-976a-0d7ce1731fdc" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 11 15:54:55 crc kubenswrapper[5050]: I1211 15:54:55.286250 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Dec 11 15:54:55 crc kubenswrapper[5050]: I1211 15:54:55.286343 5050 reflector.go:368] Caches populated for *v1.Secret from 
object-"openstack"/"octavia-worker-config-data" Dec 11 15:54:55 crc kubenswrapper[5050]: I1211 15:54:55.286267 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Dec 11 15:54:55 crc kubenswrapper[5050]: I1211 15:54:55.341746 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Dec 11 15:54:55 crc kubenswrapper[5050]: I1211 15:54:55.341950 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Dec 11 15:54:55 crc kubenswrapper[5050]: I1211 15:54:55.388647 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-7bf58" Dec 11 15:54:55 crc kubenswrapper[5050]: I1211 15:54:55.388885 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Dec 11 15:54:55 crc kubenswrapper[5050]: I1211 15:54:55.393299 5050 generic.go:334] "Generic (PLEG): container finished" podID="fef8d631-c968-4ccd-92ec-e6fc5a2f6731" containerID="bfec6cc59ed05acb62e1f0f824dd77874de93bb26b4282c3f9bd4cc5ffdc26e5" exitCode=0 Dec 11 15:54:55 crc kubenswrapper[5050]: I1211 15:54:55.393425 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fef8d631-c968-4ccd-92ec-e6fc5a2f6731","Type":"ContainerDied","Data":"bfec6cc59ed05acb62e1f0f824dd77874de93bb26b4282c3f9bd4cc5ffdc26e5"} Dec 11 15:54:55 crc kubenswrapper[5050]: I1211 15:54:55.417228 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g2bpg" event={"ID":"daba01c5-6d3c-4e32-84ce-d8f67b685671","Type":"ContainerStarted","Data":"a0e6decee9f23f2ebce4aff924ac771e47d8cc30131d5d61c23ac7f5db6fe038"} Dec 11 15:54:55 crc kubenswrapper[5050]: I1211 15:54:55.420089 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Dec 11 15:54:55 crc kubenswrapper[5050]: I1211 15:54:55.421794 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-whqpr" Dec 11 15:54:55 crc kubenswrapper[5050]: I1211 15:54:55.424592 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Dec 11 15:54:55 crc kubenswrapper[5050]: I1211 15:54:55.454651 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Dec 11 15:54:55 crc kubenswrapper[5050]: I1211 15:54:55.496382 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Dec 11 15:54:55 crc kubenswrapper[5050]: I1211 15:54:55.511785 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Dec 11 15:54:55 crc kubenswrapper[5050]: I1211 15:54:55.529561 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-mdjbl" Dec 11 15:54:55 crc kubenswrapper[5050]: I1211 15:54:55.602647 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Dec 11 15:54:55 crc kubenswrapper[5050]: I1211 15:54:55.611449 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Dec 11 15:54:55 crc kubenswrapper[5050]: I1211 15:54:55.666345 5050 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Dec 11 15:54:55 crc kubenswrapper[5050]: I1211 15:54:55.666611 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Dec 11 15:54:55 crc kubenswrapper[5050]: I1211 15:54:55.810553 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Dec 11 15:54:55 crc kubenswrapper[5050]: I1211 15:54:55.815902 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Dec 11 15:54:55 crc kubenswrapper[5050]: I1211 15:54:55.838076 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Dec 11 15:54:55 crc kubenswrapper[5050]: I1211 15:54:55.850479 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Dec 11 15:54:55 crc kubenswrapper[5050]: I1211 15:54:55.967436 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Dec 11 15:54:56 crc kubenswrapper[5050]: I1211 15:54:56.050287 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Dec 11 15:54:56 crc kubenswrapper[5050]: I1211 15:54:56.078297 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Dec 11 15:54:56 crc kubenswrapper[5050]: I1211 15:54:56.214777 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Dec 11 15:54:56 crc kubenswrapper[5050]: I1211 15:54:56.270047 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Dec 11 15:54:56 crc kubenswrapper[5050]: I1211 15:54:56.270349 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Dec 11 15:54:56 crc kubenswrapper[5050]: I1211 15:54:56.353482 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Dec 11 15:54:56 crc kubenswrapper[5050]: I1211 15:54:56.540274 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-backup-config-data" Dec 11 15:54:56 crc kubenswrapper[5050]: I1211 15:54:56.586171 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-fjq8l" Dec 11 15:54:56 crc kubenswrapper[5050]: I1211 15:54:56.687278 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Dec 11 15:54:56 crc kubenswrapper[5050]: I1211 15:54:56.703478 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Dec 11 15:54:56 crc kubenswrapper[5050]: I1211 15:54:56.754436 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-mvxd9" Dec 11 15:54:56 crc kubenswrapper[5050]: I1211 15:54:56.848237 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Dec 11 15:54:56 crc kubenswrapper[5050]: I1211 15:54:56.848318 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Dec 11 15:54:56 crc 
kubenswrapper[5050]: I1211 15:54:56.848269 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Dec 11 15:54:56 crc kubenswrapper[5050]: I1211 15:54:56.939526 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-w4tzc" Dec 11 15:54:56 crc kubenswrapper[5050]: I1211 15:54:56.957522 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Dec 11 15:54:56 crc kubenswrapper[5050]: I1211 15:54:56.966625 5050 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-volume-volume1-0" podUID="b94bc94c-a636-4f0d-bd28-0f347e7b1143" containerName="cinder-volume" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 11 15:54:57 crc kubenswrapper[5050]: I1211 15:54:57.051400 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Dec 11 15:54:57 crc kubenswrapper[5050]: I1211 15:54:57.170340 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Dec 11 15:54:57 crc kubenswrapper[5050]: I1211 15:54:57.210730 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Dec 11 15:54:57 crc kubenswrapper[5050]: I1211 15:54:57.310618 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Dec 11 15:54:57 crc kubenswrapper[5050]: I1211 15:54:57.316622 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Dec 11 15:54:57 crc kubenswrapper[5050]: I1211 15:54:57.319463 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-g8gr8" Dec 11 15:54:57 crc kubenswrapper[5050]: I1211 15:54:57.331973 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Dec 11 15:54:57 crc kubenswrapper[5050]: I1211 15:54:57.446601 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Dec 11 15:54:57 crc kubenswrapper[5050]: I1211 15:54:57.498193 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-r54sd" Dec 11 15:54:57 crc kubenswrapper[5050]: I1211 15:54:57.561958 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Dec 11 15:54:57 crc kubenswrapper[5050]: I1211 15:54:57.562395 5050 generic.go:334] "Generic (PLEG): container finished" podID="2b344bba-8e1d-415b-9b5f-e21d3144fe42" containerID="b5bea0bcdd2e66523511acdf8482695e27c2297dc7d6b7729ac1ce7866fb5763" exitCode=137 Dec 11 15:54:57 crc kubenswrapper[5050]: I1211 15:54:57.590888 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-mbbdj" event={"ID":"2b344bba-8e1d-415b-9b5f-e21d3144fe42","Type":"ContainerDied","Data":"b5bea0bcdd2e66523511acdf8482695e27c2297dc7d6b7729ac1ce7866fb5763"} Dec 11 15:54:57 crc kubenswrapper[5050]: I1211 15:54:57.596487 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Dec 11 15:54:57 crc kubenswrapper[5050]: I1211 
15:54:57.627600 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fef8d631-c968-4ccd-92ec-e6fc5a2f6731","Type":"ContainerStarted","Data":"c10fd2cf733f64be892fe212b1cbf509c253eb5c11a31772e41f09eba2376c7c"} Dec 11 15:54:57 crc kubenswrapper[5050]: I1211 15:54:57.631120 5050 generic.go:334] "Generic (PLEG): container finished" podID="daba01c5-6d3c-4e32-84ce-d8f67b685671" containerID="a97e775c1164a52c3657aa79794d5788ff41e62ab5be8853489d362f00e3c22d" exitCode=0 Dec 11 15:54:57 crc kubenswrapper[5050]: I1211 15:54:57.631174 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g2bpg" event={"ID":"daba01c5-6d3c-4e32-84ce-d8f67b685671","Type":"ContainerDied","Data":"a97e775c1164a52c3657aa79794d5788ff41e62ab5be8853489d362f00e3c22d"} Dec 11 15:54:57 crc kubenswrapper[5050]: I1211 15:54:57.649381 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-5gcmv" Dec 11 15:54:57 crc kubenswrapper[5050]: I1211 15:54:57.682881 5050 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-backup-0" podUID="0f954250-5982-4088-839a-8faf7bfe203c" containerName="cinder-backup" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 11 15:54:57 crc kubenswrapper[5050]: I1211 15:54:57.691264 5050 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager-operator"/"cert-manager-operator-controller-manager-dockercfg-p54cz" Dec 11 15:54:57 crc kubenswrapper[5050]: I1211 15:54:57.694724 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-openstack-openstack-cell1-5pb7q" Dec 11 15:54:57 crc kubenswrapper[5050]: I1211 15:54:57.789530 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6c2d8343-b085-4545-9a26-2dd0bf907b5e-ssh-key\") pod \"6c2d8343-b085-4545-9a26-2dd0bf907b5e\" (UID: \"6c2d8343-b085-4545-9a26-2dd0bf907b5e\") " Dec 11 15:54:57 crc kubenswrapper[5050]: I1211 15:54:57.789604 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/6c2d8343-b085-4545-9a26-2dd0bf907b5e-ceph\") pod \"6c2d8343-b085-4545-9a26-2dd0bf907b5e\" (UID: \"6c2d8343-b085-4545-9a26-2dd0bf907b5e\") " Dec 11 15:54:57 crc kubenswrapper[5050]: I1211 15:54:57.789830 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jrls2\" (UniqueName: \"kubernetes.io/projected/6c2d8343-b085-4545-9a26-2dd0bf907b5e-kube-api-access-jrls2\") pod \"6c2d8343-b085-4545-9a26-2dd0bf907b5e\" (UID: \"6c2d8343-b085-4545-9a26-2dd0bf907b5e\") " Dec 11 15:54:57 crc kubenswrapper[5050]: I1211 15:54:57.789850 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6c2d8343-b085-4545-9a26-2dd0bf907b5e-inventory\") pod \"6c2d8343-b085-4545-9a26-2dd0bf907b5e\" (UID: \"6c2d8343-b085-4545-9a26-2dd0bf907b5e\") " Dec 11 15:54:57 crc kubenswrapper[5050]: I1211 15:54:57.799940 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c2d8343-b085-4545-9a26-2dd0bf907b5e-kube-api-access-jrls2" (OuterVolumeSpecName: "kube-api-access-jrls2") pod "6c2d8343-b085-4545-9a26-2dd0bf907b5e" (UID: "6c2d8343-b085-4545-9a26-2dd0bf907b5e"). InnerVolumeSpecName "kube-api-access-jrls2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 15:54:57 crc kubenswrapper[5050]: I1211 15:54:57.844265 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c2d8343-b085-4545-9a26-2dd0bf907b5e-ceph" (OuterVolumeSpecName: "ceph") pod "6c2d8343-b085-4545-9a26-2dd0bf907b5e" (UID: "6c2d8343-b085-4545-9a26-2dd0bf907b5e"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 15:54:57 crc kubenswrapper[5050]: I1211 15:54:57.855259 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c2d8343-b085-4545-9a26-2dd0bf907b5e-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "6c2d8343-b085-4545-9a26-2dd0bf907b5e" (UID: "6c2d8343-b085-4545-9a26-2dd0bf907b5e"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 15:54:57 crc kubenswrapper[5050]: I1211 15:54:57.873920 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c2d8343-b085-4545-9a26-2dd0bf907b5e-inventory" (OuterVolumeSpecName: "inventory") pod "6c2d8343-b085-4545-9a26-2dd0bf907b5e" (UID: "6c2d8343-b085-4545-9a26-2dd0bf907b5e"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 15:54:57 crc kubenswrapper[5050]: I1211 15:54:57.879247 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-lvj2r" Dec 11 15:54:57 crc kubenswrapper[5050]: I1211 15:54:57.898229 5050 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6c2d8343-b085-4545-9a26-2dd0bf907b5e-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 11 15:54:57 crc kubenswrapper[5050]: I1211 15:54:57.898270 5050 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/6c2d8343-b085-4545-9a26-2dd0bf907b5e-ceph\") on node \"crc\" DevicePath \"\"" Dec 11 15:54:57 crc kubenswrapper[5050]: I1211 15:54:57.898286 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jrls2\" (UniqueName: \"kubernetes.io/projected/6c2d8343-b085-4545-9a26-2dd0bf907b5e-kube-api-access-jrls2\") on node \"crc\" DevicePath \"\"" Dec 11 15:54:57 crc kubenswrapper[5050]: I1211 15:54:57.898300 5050 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6c2d8343-b085-4545-9a26-2dd0bf907b5e-inventory\") on node \"crc\" DevicePath \"\"" Dec 11 15:54:57 crc kubenswrapper[5050]: W1211 15:54:57.931912 5050 logging.go:55] [core] [Channel #2945 SubChannel #2946]grpc: addrConn.createTransport failed to connect to {Addr: "/var/lib/kubelet/plugins/csi-hostpath/csi.sock", ServerName: "localhost", }. 
Err: connection error: desc = "transport: Error while dialing: dial unix /var/lib/kubelet/plugins/csi-hostpath/csi.sock: connect: connection refused" Dec 11 15:54:57 crc kubenswrapper[5050]: I1211 15:54:57.986777 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-nl629" Dec 11 15:54:57 crc kubenswrapper[5050]: I1211 15:54:57.986959 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Dec 11 15:54:58 crc kubenswrapper[5050]: I1211 15:54:58.044942 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Dec 11 15:54:58 crc kubenswrapper[5050]: I1211 15:54:58.086702 5050 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="131d56da-b770-4452-97c9-b585434da431" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 11 15:54:58 crc kubenswrapper[5050]: I1211 15:54:58.114382 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Dec 11 15:54:58 crc kubenswrapper[5050]: I1211 15:54:58.114384 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Dec 11 15:54:58 crc kubenswrapper[5050]: I1211 15:54:58.114597 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Dec 11 15:54:58 crc kubenswrapper[5050]: I1211 15:54:58.167844 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Dec 11 15:54:58 crc kubenswrapper[5050]: I1211 15:54:58.229148 5050 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Dec 11 15:54:58 crc kubenswrapper[5050]: I1211 15:54:58.294595 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-cluster-tls-config" Dec 11 15:54:58 crc kubenswrapper[5050]: I1211 15:54:58.393202 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Dec 11 15:54:58 crc kubenswrapper[5050]: I1211 15:54:58.444591 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Dec 11 15:54:58 crc kubenswrapper[5050]: I1211 15:54:58.460087 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Dec 11 15:54:58 crc kubenswrapper[5050]: I1211 15:54:58.478284 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Dec 11 15:54:58 crc kubenswrapper[5050]: I1211 15:54:58.569204 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Dec 11 15:54:58 crc kubenswrapper[5050]: I1211 15:54:58.602040 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-n57x7" Dec 11 15:54:58 crc kubenswrapper[5050]: I1211 15:54:58.638190 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Dec 11 15:54:58 crc kubenswrapper[5050]: I1211 15:54:58.714633 5050 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-ingress-canary"/"kube-root-ca.crt" Dec 11 15:54:58 crc kubenswrapper[5050]: I1211 15:54:58.718492 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Dec 11 15:54:58 crc kubenswrapper[5050]: I1211 15:54:58.744095 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-mbbdj" event={"ID":"2b344bba-8e1d-415b-9b5f-e21d3144fe42","Type":"ContainerStarted","Data":"de1806350dd12ec89d308d222fc276225df2211969d0f0bdc75cdbae8a039452"} Dec 11 15:54:58 crc kubenswrapper[5050]: I1211 15:54:58.751962 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-openstack-openstack-cell1-5pb7q" Dec 11 15:54:58 crc kubenswrapper[5050]: I1211 15:54:58.758589 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-openstack-openstack-cell1-5pb7q" event={"ID":"6c2d8343-b085-4545-9a26-2dd0bf907b5e","Type":"ContainerDied","Data":"b3b7be253ec6a8381c39dad3c2a10a0e22c3a93ac47b1a7613858931d1dc2f57"} Dec 11 15:54:58 crc kubenswrapper[5050]: I1211 15:54:58.758642 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b3b7be253ec6a8381c39dad3c2a10a0e22c3a93ac47b1a7613858931d1dc2f57" Dec 11 15:54:58 crc kubenswrapper[5050]: I1211 15:54:58.879650 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Dec 11 15:54:58 crc kubenswrapper[5050]: I1211 15:54:58.951710 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Dec 11 15:54:58 crc kubenswrapper[5050]: I1211 15:54:58.986290 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-cgxqx" Dec 11 15:54:59 crc kubenswrapper[5050]: I1211 15:54:59.130349 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-x9p8j" Dec 11 15:54:59 crc kubenswrapper[5050]: I1211 15:54:59.131280 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Dec 11 15:54:59 crc kubenswrapper[5050]: I1211 15:54:59.226593 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Dec 11 15:54:59 crc kubenswrapper[5050]: I1211 15:54:59.333412 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Dec 11 15:54:59 crc kubenswrapper[5050]: I1211 15:54:59.390320 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Dec 11 15:54:59 crc kubenswrapper[5050]: I1211 15:54:59.412821 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Dec 11 15:54:59 crc kubenswrapper[5050]: I1211 15:54:59.426612 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Dec 11 15:54:59 crc kubenswrapper[5050]: I1211 15:54:59.430738 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Dec 11 15:54:59 crc kubenswrapper[5050]: I1211 15:54:59.449383 5050 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-network-node-identity"/"openshift-service-ca.crt" Dec 11 15:54:59 crc kubenswrapper[5050]: I1211 15:54:59.502021 5050 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Dec 11 15:54:59 crc kubenswrapper[5050]: I1211 15:54:59.606434 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Dec 11 15:54:59 crc kubenswrapper[5050]: I1211 15:54:59.608543 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-f86tg" Dec 11 15:54:59 crc kubenswrapper[5050]: I1211 15:54:59.788343 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Dec 11 15:54:59 crc kubenswrapper[5050]: I1211 15:54:59.789865 5050 generic.go:334] "Generic (PLEG): container finished" podID="daba01c5-6d3c-4e32-84ce-d8f67b685671" containerID="7091865c6daa971ea4041591f44b1882e4ca9c247bd4facfaf2f413ed4e25e45" exitCode=0 Dec 11 15:54:59 crc kubenswrapper[5050]: I1211 15:54:59.789955 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g2bpg" event={"ID":"daba01c5-6d3c-4e32-84ce-d8f67b685671","Type":"ContainerDied","Data":"7091865c6daa971ea4041591f44b1882e4ca9c247bd4facfaf2f413ed4e25e45"} Dec 11 15:54:59 crc kubenswrapper[5050]: I1211 15:54:59.826489 5050 generic.go:334] "Generic (PLEG): container finished" podID="2b344bba-8e1d-415b-9b5f-e21d3144fe42" containerID="7b9e51720e525e9c301693cbff21aa564f900560e49c2f7c34f0a29c1efe0d37" exitCode=1 Dec 11 15:54:59 crc kubenswrapper[5050]: I1211 15:54:59.826553 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-mbbdj" event={"ID":"2b344bba-8e1d-415b-9b5f-e21d3144fe42","Type":"ContainerDied","Data":"7b9e51720e525e9c301693cbff21aa564f900560e49c2f7c34f0a29c1efe0d37"} Dec 11 15:54:59 crc kubenswrapper[5050]: I1211 15:54:59.827654 5050 scope.go:117] "RemoveContainer" containerID="7b9e51720e525e9c301693cbff21aa564f900560e49c2f7c34f0a29c1efe0d37" Dec 11 15:55:00 crc kubenswrapper[5050]: I1211 15:55:00.093468 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Dec 11 15:55:00 crc kubenswrapper[5050]: I1211 15:55:00.187972 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Dec 11 15:55:00 crc kubenswrapper[5050]: I1211 15:55:00.195878 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Dec 11 15:55:00 crc kubenswrapper[5050]: I1211 15:55:00.214353 5050 patch_prober.go:28] interesting pod/apiserver-76f77b778f-cd66n container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Dec 11 15:55:00 crc kubenswrapper[5050]: [+]log ok Dec 11 15:55:00 crc kubenswrapper[5050]: [+]etcd excluded: ok Dec 11 15:55:00 crc kubenswrapper[5050]: [+]etcd-readiness excluded: ok Dec 11 15:55:00 crc kubenswrapper[5050]: [+]poststarthook/start-apiserver-admission-initializer ok Dec 11 15:55:00 crc kubenswrapper[5050]: [+]informer-sync ok Dec 11 15:55:00 crc kubenswrapper[5050]: [+]poststarthook/generic-apiserver-start-informers ok Dec 11 15:55:00 crc kubenswrapper[5050]: [+]poststarthook/max-in-flight-filter ok Dec 11 15:55:00 crc kubenswrapper[5050]: 
[+]poststarthook/storage-object-count-tracker-hook ok Dec 11 15:55:00 crc kubenswrapper[5050]: [+]poststarthook/image.openshift.io-apiserver-caches ok Dec 11 15:55:00 crc kubenswrapper[5050]: [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok Dec 11 15:55:00 crc kubenswrapper[5050]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Dec 11 15:55:00 crc kubenswrapper[5050]: [+]poststarthook/project.openshift.io-projectcache ok Dec 11 15:55:00 crc kubenswrapper[5050]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Dec 11 15:55:00 crc kubenswrapper[5050]: [+]poststarthook/openshift.io-startinformers ok Dec 11 15:55:00 crc kubenswrapper[5050]: [+]poststarthook/openshift.io-restmapperupdater ok Dec 11 15:55:00 crc kubenswrapper[5050]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Dec 11 15:55:00 crc kubenswrapper[5050]: [-]shutdown failed: reason withheld Dec 11 15:55:00 crc kubenswrapper[5050]: readyz check failed Dec 11 15:55:00 crc kubenswrapper[5050]: I1211 15:55:00.214410 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-76f77b778f-cd66n" podUID="ea884e88-c0df-4212-976a-0d7ce1731fdc" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 11 15:55:00 crc kubenswrapper[5050]: I1211 15:55:00.259616 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Dec 11 15:55:00 crc kubenswrapper[5050]: I1211 15:55:00.304870 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Dec 11 15:55:00 crc kubenswrapper[5050]: I1211 15:55:00.346160 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Dec 11 15:55:00 crc kubenswrapper[5050]: I1211 15:55:00.404390 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-glgrh" Dec 11 15:55:00 crc kubenswrapper[5050]: I1211 15:55:00.519139 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-4x88l" Dec 11 15:55:00 crc kubenswrapper[5050]: I1211 15:55:00.523122 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Dec 11 15:55:00 crc kubenswrapper[5050]: I1211 15:55:00.569243 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Dec 11 15:55:00 crc kubenswrapper[5050]: I1211 15:55:00.572228 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Dec 11 15:55:00 crc kubenswrapper[5050]: I1211 15:55:00.633283 5050 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Dec 11 15:55:00 crc kubenswrapper[5050]: I1211 15:55:00.654149 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"openshift-service-ca.crt" Dec 11 15:55:00 crc kubenswrapper[5050]: I1211 15:55:00.755763 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Dec 11 15:55:00 crc kubenswrapper[5050]: I1211 15:55:00.819747 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Dec 11 15:55:00 crc kubenswrapper[5050]: I1211 
15:55:00.832862 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Dec 11 15:55:00 crc kubenswrapper[5050]: I1211 15:55:00.834794 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-worker-scripts" Dec 11 15:55:00 crc kubenswrapper[5050]: I1211 15:55:00.912562 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-jrbb7" Dec 11 15:55:00 crc kubenswrapper[5050]: I1211 15:55:00.916092 5050 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Dec 11 15:55:00 crc kubenswrapper[5050]: I1211 15:55:00.992159 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-scripts" Dec 11 15:55:00 crc kubenswrapper[5050]: I1211 15:55:00.997516 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Dec 11 15:55:01 crc kubenswrapper[5050]: I1211 15:55:01.044843 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Dec 11 15:55:01 crc kubenswrapper[5050]: I1211 15:55:01.072906 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Dec 11 15:55:01 crc kubenswrapper[5050]: I1211 15:55:01.158751 5050 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5fb79d99b5-m4xgd" podUID="2086bc41-00a2-4c97-a491-08511f3ed6e5" containerName="horizon" probeResult="failure" output="Get \"http://10.217.1.117:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.117:8080: connect: connection refused" Dec 11 15:55:01 crc kubenswrapper[5050]: I1211 15:55:01.269523 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Dec 11 15:55:01 crc kubenswrapper[5050]: I1211 15:55:01.412949 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-rsyslog-scripts" Dec 11 15:55:01 crc kubenswrapper[5050]: I1211 15:55:01.459557 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Dec 11 15:55:01 crc kubenswrapper[5050]: I1211 15:55:01.468090 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Dec 11 15:55:01 crc kubenswrapper[5050]: I1211 15:55:01.481539 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Dec 11 15:55:01 crc kubenswrapper[5050]: I1211 15:55:01.559648 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-qdrgd" Dec 11 15:55:01 crc kubenswrapper[5050]: I1211 15:55:01.603489 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-bvxnm" Dec 11 15:55:01 crc kubenswrapper[5050]: I1211 15:55:01.613909 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Dec 11 15:55:01 crc kubenswrapper[5050]: I1211 15:55:01.633095 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-n7crp" Dec 11 15:55:01 crc kubenswrapper[5050]: I1211 15:55:01.655982 5050 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-authentication"/"v4-0-config-system-cliconfig" Dec 11 15:55:01 crc kubenswrapper[5050]: I1211 15:55:01.686749 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Dec 11 15:55:01 crc kubenswrapper[5050]: I1211 15:55:01.736410 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Dec 11 15:55:01 crc kubenswrapper[5050]: I1211 15:55:01.769802 5050 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-volume-volume1-0" podUID="b94bc94c-a636-4f0d-bd28-0f347e7b1143" containerName="cinder-volume" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 11 15:55:01 crc kubenswrapper[5050]: I1211 15:55:01.784974 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Dec 11 15:55:01 crc kubenswrapper[5050]: I1211 15:55:01.893465 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Dec 11 15:55:01 crc kubenswrapper[5050]: I1211 15:55:01.895804 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Dec 11 15:55:02 crc kubenswrapper[5050]: I1211 15:55:02.001220 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-qbc2f" Dec 11 15:55:02 crc kubenswrapper[5050]: I1211 15:55:02.014657 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-5697bb5779-9tcm2" Dec 11 15:55:02 crc kubenswrapper[5050]: I1211 15:55:02.032591 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-767f6d799d-cv7mn" Dec 11 15:55:02 crc kubenswrapper[5050]: I1211 15:55:02.072426 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-94hht" Dec 11 15:55:02 crc kubenswrapper[5050]: I1211 15:55:02.089503 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-fxl2b" Dec 11 15:55:02 crc kubenswrapper[5050]: I1211 15:55:02.116554 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Dec 11 15:55:02 crc kubenswrapper[5050]: I1211 15:55:02.137712 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Dec 11 15:55:02 crc kubenswrapper[5050]: I1211 15:55:02.170680 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/manila-share-share1-0" podUID="29321ad8-528b-46ed-8c14-21a74038cddb" containerName="manila-share" probeResult="failure" output="Get \"http://10.217.1.146:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:55:02 crc kubenswrapper[5050]: I1211 15:55:02.170788 5050 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/manila-share-share1-0" Dec 11 15:55:02 crc kubenswrapper[5050]: I1211 15:55:02.170975 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/manila-scheduler-0" podUID="9c5fd2fd-4df8-4f0f-982c-d3e6df852669" containerName="manila-scheduler" probeResult="failure" output="Get \"http://10.217.1.145:8080/\": context deadline exceeded (Client.Timeout 
exceeded while awaiting headers)" Dec 11 15:55:02 crc kubenswrapper[5050]: I1211 15:55:02.171083 5050 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/manila-scheduler-0" Dec 11 15:55:02 crc kubenswrapper[5050]: I1211 15:55:02.171552 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-xl9wl" Dec 11 15:55:02 crc kubenswrapper[5050]: I1211 15:55:02.172127 5050 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="manila-scheduler" containerStatusID={"Type":"cri-o","ID":"21d82c8ed257c6bbf41d15d936e7dbaada66175dbed0e14ecf37991389e1a973"} pod="openstack/manila-scheduler-0" containerMessage="Container manila-scheduler failed liveness probe, will be restarted" Dec 11 15:55:02 crc kubenswrapper[5050]: I1211 15:55:02.172183 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-scheduler-0" podUID="9c5fd2fd-4df8-4f0f-982c-d3e6df852669" containerName="manila-scheduler" containerID="cri-o://21d82c8ed257c6bbf41d15d936e7dbaada66175dbed0e14ecf37991389e1a973" gracePeriod=30 Dec 11 15:55:02 crc kubenswrapper[5050]: I1211 15:55:02.176039 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-967d97867-7stc2" Dec 11 15:55:02 crc kubenswrapper[5050]: I1211 15:55:02.181990 5050 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="manila-share" containerStatusID={"Type":"cri-o","ID":"938a7854156ef3bfb81302582302a81ef99f4c5853767f52c07821fcdff360d2"} pod="openstack/manila-share-share1-0" containerMessage="Container manila-share failed liveness probe, will be restarted" Dec 11 15:55:02 crc kubenswrapper[5050]: I1211 15:55:02.182095 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-share-share1-0" podUID="29321ad8-528b-46ed-8c14-21a74038cddb" containerName="manila-share" containerID="cri-o://938a7854156ef3bfb81302582302a81ef99f4c5853767f52c07821fcdff360d2" gracePeriod=30 Dec 11 15:55:02 crc kubenswrapper[5050]: I1211 15:55:02.184142 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-xgbp2" Dec 11 15:55:02 crc kubenswrapper[5050]: I1211 15:55:02.211688 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Dec 11 15:55:02 crc kubenswrapper[5050]: I1211 15:55:02.242349 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-jq9vt" Dec 11 15:55:02 crc kubenswrapper[5050]: I1211 15:55:02.314809 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Dec 11 15:55:02 crc kubenswrapper[5050]: I1211 15:55:02.315088 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-65swh" Dec 11 15:55:02 crc kubenswrapper[5050]: I1211 15:55:02.346633 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"octavia-config-data" Dec 11 15:55:02 crc kubenswrapper[5050]: I1211 15:55:02.361428 5050 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-backup-0" podUID="0f954250-5982-4088-839a-8faf7bfe203c" containerName="cinder-backup" probeResult="failure" output="HTTP probe failed 
with statuscode: 500" Dec 11 15:55:02 crc kubenswrapper[5050]: I1211 15:55:02.426488 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Dec 11 15:55:02 crc kubenswrapper[5050]: I1211 15:55:02.542540 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-pjhfc" Dec 11 15:55:02 crc kubenswrapper[5050]: I1211 15:55:02.543478 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Dec 11 15:55:02 crc kubenswrapper[5050]: I1211 15:55:02.629721 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Dec 11 15:55:02 crc kubenswrapper[5050]: I1211 15:55:02.662478 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-2bw5c" Dec 11 15:55:02 crc kubenswrapper[5050]: I1211 15:55:02.764697 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Dec 11 15:55:02 crc kubenswrapper[5050]: I1211 15:55:02.773503 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Dec 11 15:55:02 crc kubenswrapper[5050]: I1211 15:55:02.882604 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Dec 11 15:55:02 crc kubenswrapper[5050]: I1211 15:55:02.901094 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Dec 11 15:55:02 crc kubenswrapper[5050]: I1211 15:55:02.937900 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-mbbdj" event={"ID":"2b344bba-8e1d-415b-9b5f-e21d3144fe42","Type":"ContainerStarted","Data":"03d4bc1c07e317070ac1cffb2f74f4ca45abb486b8f65393efe16277afa43ea1"} Dec 11 15:55:03 crc kubenswrapper[5050]: I1211 15:55:03.109966 5050 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="131d56da-b770-4452-97c9-b585434da431" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 11 15:55:03 crc kubenswrapper[5050]: I1211 15:55:03.190478 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Dec 11 15:55:03 crc kubenswrapper[5050]: I1211 15:55:03.239981 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Dec 11 15:55:03 crc kubenswrapper[5050]: I1211 15:55:03.283600 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Dec 11 15:55:03 crc kubenswrapper[5050]: I1211 15:55:03.284037 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Dec 11 15:55:03 crc kubenswrapper[5050]: I1211 15:55:03.358088 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-volume-volume1-config-data" Dec 11 15:55:03 crc kubenswrapper[5050]: I1211 15:55:03.369510 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Dec 11 15:55:03 crc kubenswrapper[5050]: I1211 15:55:03.443200 5050 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"metallb-system"/"openshift-service-ca.crt" Dec 11 15:55:03 crc kubenswrapper[5050]: I1211 15:55:03.471396 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Dec 11 15:55:03 crc kubenswrapper[5050]: I1211 15:55:03.475499 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Dec 11 15:55:03 crc kubenswrapper[5050]: I1211 15:55:03.494932 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Dec 11 15:55:03 crc kubenswrapper[5050]: I1211 15:55:03.546754 5050 scope.go:117] "RemoveContainer" containerID="33899e24adacef82929b0fbbcd58af7cb4f7aa3935b49ec9b5b4045a51da1d14" Dec 11 15:55:03 crc kubenswrapper[5050]: E1211 15:55:03.547219 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 15:55:03 crc kubenswrapper[5050]: I1211 15:55:03.550037 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Dec 11 15:55:03 crc kubenswrapper[5050]: I1211 15:55:03.571003 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Dec 11 15:55:03 crc kubenswrapper[5050]: I1211 15:55:03.596835 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Dec 11 15:55:03 crc kubenswrapper[5050]: I1211 15:55:03.601352 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-generated" Dec 11 15:55:03 crc kubenswrapper[5050]: I1211 15:55:03.626296 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Dec 11 15:55:03 crc kubenswrapper[5050]: I1211 15:55:03.629907 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Dec 11 15:55:03 crc kubenswrapper[5050]: I1211 15:55:03.720686 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-operator-dockercfg-68tnd" Dec 11 15:55:03 crc kubenswrapper[5050]: I1211 15:55:03.763561 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Dec 11 15:55:03 crc kubenswrapper[5050]: I1211 15:55:03.803554 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-qd9ll" Dec 11 15:55:03 crc kubenswrapper[5050]: I1211 15:55:03.835030 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Dec 11 15:55:03 crc kubenswrapper[5050]: I1211 15:55:03.858460 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Dec 11 15:55:03 crc kubenswrapper[5050]: I1211 15:55:03.876750 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-5g8lw" Dec 11 15:55:03 crc 
kubenswrapper[5050]: I1211 15:55:03.907306 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Dec 11 15:55:03 crc kubenswrapper[5050]: I1211 15:55:03.953500 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g2bpg" event={"ID":"daba01c5-6d3c-4e32-84ce-d8f67b685671","Type":"ContainerStarted","Data":"6999bdf777a1d901cca89da88f6dd3eb972c80edd1ca1e023678f2b6e068a574"} Dec 11 15:55:03 crc kubenswrapper[5050]: I1211 15:55:03.956477 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Dec 11 15:55:03 crc kubenswrapper[5050]: I1211 15:55:03.958915 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-mz95s" Dec 11 15:55:04 crc kubenswrapper[5050]: I1211 15:55:04.104317 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-7zqpj" Dec 11 15:55:04 crc kubenswrapper[5050]: I1211 15:55:04.197235 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-djswv" Dec 11 15:55:04 crc kubenswrapper[5050]: I1211 15:55:04.296217 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Dec 11 15:55:04 crc kubenswrapper[5050]: I1211 15:55:04.388383 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Dec 11 15:55:04 crc kubenswrapper[5050]: I1211 15:55:04.561404 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Dec 11 15:55:05 crc kubenswrapper[5050]: I1211 15:55:05.091251 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Dec 11 15:55:05 crc kubenswrapper[5050]: I1211 15:55:05.103624 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Dec 11 15:55:05 crc kubenswrapper[5050]: I1211 15:55:05.192722 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Dec 11 15:55:05 crc kubenswrapper[5050]: I1211 15:55:05.218114 5050 patch_prober.go:28] interesting pod/apiserver-76f77b778f-cd66n container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Dec 11 15:55:05 crc kubenswrapper[5050]: [+]log ok Dec 11 15:55:05 crc kubenswrapper[5050]: [+]etcd excluded: ok Dec 11 15:55:05 crc kubenswrapper[5050]: [+]etcd-readiness excluded: ok Dec 11 15:55:05 crc kubenswrapper[5050]: [+]poststarthook/start-apiserver-admission-initializer ok Dec 11 15:55:05 crc kubenswrapper[5050]: [+]informer-sync ok Dec 11 15:55:05 crc kubenswrapper[5050]: [+]poststarthook/generic-apiserver-start-informers ok Dec 11 15:55:05 crc kubenswrapper[5050]: [+]poststarthook/max-in-flight-filter ok Dec 11 15:55:05 crc kubenswrapper[5050]: [+]poststarthook/storage-object-count-tracker-hook ok Dec 11 15:55:05 crc kubenswrapper[5050]: [+]poststarthook/image.openshift.io-apiserver-caches ok Dec 11 15:55:05 crc kubenswrapper[5050]: [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok Dec 11 15:55:05 crc kubenswrapper[5050]: 
[+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Dec 11 15:55:05 crc kubenswrapper[5050]: [+]poststarthook/project.openshift.io-projectcache ok Dec 11 15:55:05 crc kubenswrapper[5050]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Dec 11 15:55:05 crc kubenswrapper[5050]: [+]poststarthook/openshift.io-startinformers ok Dec 11 15:55:05 crc kubenswrapper[5050]: [+]poststarthook/openshift.io-restmapperupdater ok Dec 11 15:55:05 crc kubenswrapper[5050]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Dec 11 15:55:05 crc kubenswrapper[5050]: [-]shutdown failed: reason withheld Dec 11 15:55:05 crc kubenswrapper[5050]: readyz check failed Dec 11 15:55:05 crc kubenswrapper[5050]: I1211 15:55:05.218173 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-76f77b778f-cd66n" podUID="ea884e88-c0df-4212-976a-0d7ce1731fdc" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 11 15:55:05 crc kubenswrapper[5050]: I1211 15:55:05.312176 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Dec 11 15:55:05 crc kubenswrapper[5050]: I1211 15:55:05.557357 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Dec 11 15:55:05 crc kubenswrapper[5050]: I1211 15:55:05.558256 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Dec 11 15:55:05 crc kubenswrapper[5050]: I1211 15:55:05.576884 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Dec 11 15:55:05 crc kubenswrapper[5050]: I1211 15:55:05.579239 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Dec 11 15:55:05 crc kubenswrapper[5050]: I1211 15:55:05.604820 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Dec 11 15:55:05 crc kubenswrapper[5050]: I1211 15:55:05.743743 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Dec 11 15:55:06 crc kubenswrapper[5050]: I1211 15:55:06.318240 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-c88pf" Dec 11 15:55:06 crc kubenswrapper[5050]: I1211 15:55:06.364041 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Dec 11 15:55:06 crc kubenswrapper[5050]: I1211 15:55:06.415586 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-mc6vn" Dec 11 15:55:06 crc kubenswrapper[5050]: I1211 15:55:06.441816 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Dec 11 15:55:06 crc kubenswrapper[5050]: I1211 15:55:06.571113 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Dec 11 15:55:06 crc kubenswrapper[5050]: I1211 15:55:06.648632 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Dec 11 15:55:06 crc kubenswrapper[5050]: I1211 15:55:06.876789 5050 prober.go:107] "Probe failed" probeType="Startup" 
pod="openstack/cinder-volume-volume1-0" podUID="b94bc94c-a636-4f0d-bd28-0f347e7b1143" containerName="cinder-volume" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 11 15:55:06 crc kubenswrapper[5050]: I1211 15:55:06.890598 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Dec 11 15:55:07 crc kubenswrapper[5050]: I1211 15:55:07.400379 5050 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-backup-0" podUID="0f954250-5982-4088-839a-8faf7bfe203c" containerName="cinder-backup" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 11 15:55:07 crc kubenswrapper[5050]: I1211 15:55:07.558820 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Dec 11 15:55:08 crc kubenswrapper[5050]: I1211 15:55:08.028821 5050 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="131d56da-b770-4452-97c9-b585434da431" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 11 15:55:10 crc kubenswrapper[5050]: I1211 15:55:10.213502 5050 patch_prober.go:28] interesting pod/apiserver-76f77b778f-cd66n container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Dec 11 15:55:10 crc kubenswrapper[5050]: [+]log ok Dec 11 15:55:10 crc kubenswrapper[5050]: [+]etcd excluded: ok Dec 11 15:55:10 crc kubenswrapper[5050]: [+]etcd-readiness excluded: ok Dec 11 15:55:10 crc kubenswrapper[5050]: [+]poststarthook/start-apiserver-admission-initializer ok Dec 11 15:55:10 crc kubenswrapper[5050]: [+]informer-sync ok Dec 11 15:55:10 crc kubenswrapper[5050]: [+]poststarthook/generic-apiserver-start-informers ok Dec 11 15:55:10 crc kubenswrapper[5050]: [+]poststarthook/max-in-flight-filter ok Dec 11 15:55:10 crc kubenswrapper[5050]: [+]poststarthook/storage-object-count-tracker-hook ok Dec 11 15:55:10 crc kubenswrapper[5050]: [+]poststarthook/image.openshift.io-apiserver-caches ok Dec 11 15:55:10 crc kubenswrapper[5050]: [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok Dec 11 15:55:10 crc kubenswrapper[5050]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Dec 11 15:55:10 crc kubenswrapper[5050]: [+]poststarthook/project.openshift.io-projectcache ok Dec 11 15:55:10 crc kubenswrapper[5050]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Dec 11 15:55:10 crc kubenswrapper[5050]: [+]poststarthook/openshift.io-startinformers ok Dec 11 15:55:10 crc kubenswrapper[5050]: [+]poststarthook/openshift.io-restmapperupdater ok Dec 11 15:55:10 crc kubenswrapper[5050]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Dec 11 15:55:10 crc kubenswrapper[5050]: [-]shutdown failed: reason withheld Dec 11 15:55:10 crc kubenswrapper[5050]: readyz check failed Dec 11 15:55:10 crc kubenswrapper[5050]: I1211 15:55:10.214036 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-76f77b778f-cd66n" podUID="ea884e88-c0df-4212-976a-0d7ce1731fdc" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 11 15:55:11 crc kubenswrapper[5050]: I1211 15:55:11.157022 5050 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5fb79d99b5-m4xgd" podUID="2086bc41-00a2-4c97-a491-08511f3ed6e5" containerName="horizon" 
probeResult="failure" output="Get \"http://10.217.1.117:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.117:8080: connect: connection refused" Dec 11 15:55:11 crc kubenswrapper[5050]: I1211 15:55:11.794481 5050 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-volume-volume1-0" podUID="b94bc94c-a636-4f0d-bd28-0f347e7b1143" containerName="cinder-volume" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 11 15:55:12 crc kubenswrapper[5050]: I1211 15:55:12.012586 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-g2bpg" podStartSLOduration=113.91927831 podStartE2EDuration="1m59.012561776s" podCreationTimestamp="2025-12-11 15:53:13 +0000 UTC" firstStartedPulling="2025-12-11 15:54:57.633030155 +0000 UTC m=+7588.476752741" lastFinishedPulling="2025-12-11 15:55:02.726313631 +0000 UTC m=+7593.570036207" observedRunningTime="2025-12-11 15:55:03.979507544 +0000 UTC m=+7594.823230130" watchObservedRunningTime="2025-12-11 15:55:12.012561776 +0000 UTC m=+7602.856284362" Dec 11 15:55:12 crc kubenswrapper[5050]: I1211 15:55:12.396351 5050 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-backup-0" podUID="0f954250-5982-4088-839a-8faf7bfe203c" containerName="cinder-backup" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 11 15:55:13 crc kubenswrapper[5050]: I1211 15:55:13.012149 5050 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="131d56da-b770-4452-97c9-b585434da431" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 11 15:55:13 crc kubenswrapper[5050]: I1211 15:55:13.655378 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-g2bpg" Dec 11 15:55:13 crc kubenswrapper[5050]: I1211 15:55:13.655435 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-g2bpg" Dec 11 15:55:13 crc kubenswrapper[5050]: I1211 15:55:13.715290 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-g2bpg" Dec 11 15:55:14 crc kubenswrapper[5050]: I1211 15:55:14.125416 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-g2bpg" Dec 11 15:55:14 crc kubenswrapper[5050]: I1211 15:55:14.547270 5050 scope.go:117] "RemoveContainer" containerID="33899e24adacef82929b0fbbcd58af7cb4f7aa3935b49ec9b5b4045a51da1d14" Dec 11 15:55:14 crc kubenswrapper[5050]: E1211 15:55:14.548279 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 15:55:15 crc kubenswrapper[5050]: I1211 15:55:15.212985 5050 patch_prober.go:28] interesting pod/apiserver-76f77b778f-cd66n container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Dec 11 15:55:15 crc kubenswrapper[5050]: [+]log ok Dec 11 15:55:15 crc kubenswrapper[5050]: [+]etcd excluded: ok Dec 11 
15:55:15 crc kubenswrapper[5050]: [+]etcd-readiness excluded: ok Dec 11 15:55:15 crc kubenswrapper[5050]: [+]poststarthook/start-apiserver-admission-initializer ok Dec 11 15:55:15 crc kubenswrapper[5050]: [+]informer-sync ok Dec 11 15:55:15 crc kubenswrapper[5050]: [+]poststarthook/generic-apiserver-start-informers ok Dec 11 15:55:15 crc kubenswrapper[5050]: [+]poststarthook/max-in-flight-filter ok Dec 11 15:55:15 crc kubenswrapper[5050]: [+]poststarthook/storage-object-count-tracker-hook ok Dec 11 15:55:15 crc kubenswrapper[5050]: [+]poststarthook/image.openshift.io-apiserver-caches ok Dec 11 15:55:15 crc kubenswrapper[5050]: [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok Dec 11 15:55:15 crc kubenswrapper[5050]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Dec 11 15:55:15 crc kubenswrapper[5050]: [+]poststarthook/project.openshift.io-projectcache ok Dec 11 15:55:15 crc kubenswrapper[5050]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Dec 11 15:55:15 crc kubenswrapper[5050]: [+]poststarthook/openshift.io-startinformers ok Dec 11 15:55:15 crc kubenswrapper[5050]: [+]poststarthook/openshift.io-restmapperupdater ok Dec 11 15:55:15 crc kubenswrapper[5050]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Dec 11 15:55:15 crc kubenswrapper[5050]: [-]shutdown failed: reason withheld Dec 11 15:55:15 crc kubenswrapper[5050]: readyz check failed Dec 11 15:55:15 crc kubenswrapper[5050]: I1211 15:55:15.213117 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-76f77b778f-cd66n" podUID="ea884e88-c0df-4212-976a-0d7ce1731fdc" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 11 15:55:16 crc kubenswrapper[5050]: I1211 15:55:16.736224 5050 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-volume-volume1-0" podUID="b94bc94c-a636-4f0d-bd28-0f347e7b1143" containerName="cinder-volume" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 11 15:55:16 crc kubenswrapper[5050]: I1211 15:55:16.992135 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9jns7" Dec 11 15:55:17 crc kubenswrapper[5050]: I1211 15:55:17.335740 5050 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-backup-0" podUID="0f954250-5982-4088-839a-8faf7bfe203c" containerName="cinder-backup" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 11 15:55:18 crc kubenswrapper[5050]: I1211 15:55:18.018322 5050 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="131d56da-b770-4452-97c9-b585434da431" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 11 15:55:20 crc kubenswrapper[5050]: I1211 15:55:20.212283 5050 patch_prober.go:28] interesting pod/apiserver-76f77b778f-cd66n container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Dec 11 15:55:20 crc kubenswrapper[5050]: [+]log ok Dec 11 15:55:20 crc kubenswrapper[5050]: [+]etcd excluded: ok Dec 11 15:55:20 crc kubenswrapper[5050]: [+]etcd-readiness excluded: ok Dec 11 15:55:20 crc kubenswrapper[5050]: [+]poststarthook/start-apiserver-admission-initializer ok Dec 11 15:55:20 crc kubenswrapper[5050]: [+]informer-sync ok Dec 11 15:55:20 crc 
kubenswrapper[5050]: [+]poststarthook/generic-apiserver-start-informers ok Dec 11 15:55:20 crc kubenswrapper[5050]: [+]poststarthook/max-in-flight-filter ok Dec 11 15:55:20 crc kubenswrapper[5050]: [+]poststarthook/storage-object-count-tracker-hook ok Dec 11 15:55:20 crc kubenswrapper[5050]: [+]poststarthook/image.openshift.io-apiserver-caches ok Dec 11 15:55:20 crc kubenswrapper[5050]: [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok Dec 11 15:55:20 crc kubenswrapper[5050]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Dec 11 15:55:20 crc kubenswrapper[5050]: [+]poststarthook/project.openshift.io-projectcache ok Dec 11 15:55:20 crc kubenswrapper[5050]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Dec 11 15:55:20 crc kubenswrapper[5050]: [+]poststarthook/openshift.io-startinformers ok Dec 11 15:55:20 crc kubenswrapper[5050]: [+]poststarthook/openshift.io-restmapperupdater ok Dec 11 15:55:20 crc kubenswrapper[5050]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Dec 11 15:55:20 crc kubenswrapper[5050]: [-]shutdown failed: reason withheld Dec 11 15:55:20 crc kubenswrapper[5050]: readyz check failed Dec 11 15:55:20 crc kubenswrapper[5050]: I1211 15:55:20.212580 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-76f77b778f-cd66n" podUID="ea884e88-c0df-4212-976a-0d7ce1731fdc" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 11 15:55:21 crc kubenswrapper[5050]: I1211 15:55:21.161451 5050 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5fb79d99b5-m4xgd" podUID="2086bc41-00a2-4c97-a491-08511f3ed6e5" containerName="horizon" probeResult="failure" output="Get \"http://10.217.1.117:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.117:8080: connect: connection refused" Dec 11 15:55:21 crc kubenswrapper[5050]: I1211 15:55:21.161528 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-5fb79d99b5-m4xgd" Dec 11 15:55:21 crc kubenswrapper[5050]: I1211 15:55:21.162343 5050 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="horizon" containerStatusID={"Type":"cri-o","ID":"53622f8671c2479d99b0c3589d2052c01e492456c4e585fc4fd295a95f250319"} pod="openstack/horizon-5fb79d99b5-m4xgd" containerMessage="Container horizon failed startup probe, will be restarted" Dec 11 15:55:21 crc kubenswrapper[5050]: I1211 15:55:21.162372 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-5fb79d99b5-m4xgd" podUID="2086bc41-00a2-4c97-a491-08511f3ed6e5" containerName="horizon" containerID="cri-o://53622f8671c2479d99b0c3589d2052c01e492456c4e585fc4fd295a95f250319" gracePeriod=30 Dec 11 15:55:21 crc kubenswrapper[5050]: I1211 15:55:21.764098 5050 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-volume-volume1-0" podUID="b94bc94c-a636-4f0d-bd28-0f347e7b1143" containerName="cinder-volume" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 11 15:55:21 crc kubenswrapper[5050]: I1211 15:55:21.821669 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-g2bpg"] Dec 11 15:55:21 crc kubenswrapper[5050]: I1211 15:55:21.821905 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-g2bpg" podUID="daba01c5-6d3c-4e32-84ce-d8f67b685671" 
containerName="registry-server" containerID="cri-o://6999bdf777a1d901cca89da88f6dd3eb972c80edd1ca1e023678f2b6e068a574" gracePeriod=2 Dec 11 15:55:21 crc kubenswrapper[5050]: I1211 15:55:21.958103 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-2t28t/must-gather-p95dx"] Dec 11 15:55:21 crc kubenswrapper[5050]: E1211 15:55:21.958789 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c2d8343-b085-4545-9a26-2dd0bf907b5e" containerName="validate-network-openstack-openstack-cell1" Dec 11 15:55:21 crc kubenswrapper[5050]: I1211 15:55:21.958807 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c2d8343-b085-4545-9a26-2dd0bf907b5e" containerName="validate-network-openstack-openstack-cell1" Dec 11 15:55:21 crc kubenswrapper[5050]: I1211 15:55:21.959008 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c2d8343-b085-4545-9a26-2dd0bf907b5e" containerName="validate-network-openstack-openstack-cell1" Dec 11 15:55:21 crc kubenswrapper[5050]: I1211 15:55:21.960227 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-2t28t/must-gather-p95dx" Dec 11 15:55:21 crc kubenswrapper[5050]: I1211 15:55:21.962518 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-2t28t"/"default-dockercfg-lm8h9" Dec 11 15:55:21 crc kubenswrapper[5050]: I1211 15:55:21.962761 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-2t28t"/"kube-root-ca.crt" Dec 11 15:55:21 crc kubenswrapper[5050]: I1211 15:55:21.963066 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-2t28t"/"openshift-service-ca.crt" Dec 11 15:55:21 crc kubenswrapper[5050]: I1211 15:55:21.976075 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-2t28t/must-gather-p95dx"] Dec 11 15:55:22 crc kubenswrapper[5050]: I1211 15:55:22.144621 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/d5818b40-e580-431f-b892-075abf64ef47-must-gather-output\") pod \"must-gather-p95dx\" (UID: \"d5818b40-e580-431f-b892-075abf64ef47\") " pod="openshift-must-gather-2t28t/must-gather-p95dx" Dec 11 15:55:22 crc kubenswrapper[5050]: I1211 15:55:22.144935 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nv5wr\" (UniqueName: \"kubernetes.io/projected/d5818b40-e580-431f-b892-075abf64ef47-kube-api-access-nv5wr\") pod \"must-gather-p95dx\" (UID: \"d5818b40-e580-431f-b892-075abf64ef47\") " pod="openshift-must-gather-2t28t/must-gather-p95dx" Dec 11 15:55:22 crc kubenswrapper[5050]: I1211 15:55:22.176687 5050 generic.go:334] "Generic (PLEG): container finished" podID="daba01c5-6d3c-4e32-84ce-d8f67b685671" containerID="6999bdf777a1d901cca89da88f6dd3eb972c80edd1ca1e023678f2b6e068a574" exitCode=0 Dec 11 15:55:22 crc kubenswrapper[5050]: I1211 15:55:22.176769 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g2bpg" event={"ID":"daba01c5-6d3c-4e32-84ce-d8f67b685671","Type":"ContainerDied","Data":"6999bdf777a1d901cca89da88f6dd3eb972c80edd1ca1e023678f2b6e068a574"} Dec 11 15:55:22 crc kubenswrapper[5050]: I1211 15:55:22.247300 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: 
\"kubernetes.io/empty-dir/d5818b40-e580-431f-b892-075abf64ef47-must-gather-output\") pod \"must-gather-p95dx\" (UID: \"d5818b40-e580-431f-b892-075abf64ef47\") " pod="openshift-must-gather-2t28t/must-gather-p95dx" Dec 11 15:55:22 crc kubenswrapper[5050]: I1211 15:55:22.247438 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nv5wr\" (UniqueName: \"kubernetes.io/projected/d5818b40-e580-431f-b892-075abf64ef47-kube-api-access-nv5wr\") pod \"must-gather-p95dx\" (UID: \"d5818b40-e580-431f-b892-075abf64ef47\") " pod="openshift-must-gather-2t28t/must-gather-p95dx" Dec 11 15:55:22 crc kubenswrapper[5050]: I1211 15:55:22.247857 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/d5818b40-e580-431f-b892-075abf64ef47-must-gather-output\") pod \"must-gather-p95dx\" (UID: \"d5818b40-e580-431f-b892-075abf64ef47\") " pod="openshift-must-gather-2t28t/must-gather-p95dx" Dec 11 15:55:22 crc kubenswrapper[5050]: I1211 15:55:22.267708 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nv5wr\" (UniqueName: \"kubernetes.io/projected/d5818b40-e580-431f-b892-075abf64ef47-kube-api-access-nv5wr\") pod \"must-gather-p95dx\" (UID: \"d5818b40-e580-431f-b892-075abf64ef47\") " pod="openshift-must-gather-2t28t/must-gather-p95dx" Dec 11 15:55:22 crc kubenswrapper[5050]: I1211 15:55:22.344029 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-2t28t/must-gather-p95dx" Dec 11 15:55:22 crc kubenswrapper[5050]: I1211 15:55:22.424119 5050 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-backup-0" podUID="0f954250-5982-4088-839a-8faf7bfe203c" containerName="cinder-backup" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 11 15:55:22 crc kubenswrapper[5050]: I1211 15:55:22.569367 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-g2bpg" Dec 11 15:55:22 crc kubenswrapper[5050]: I1211 15:55:22.769183 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/daba01c5-6d3c-4e32-84ce-d8f67b685671-utilities\") pod \"daba01c5-6d3c-4e32-84ce-d8f67b685671\" (UID: \"daba01c5-6d3c-4e32-84ce-d8f67b685671\") " Dec 11 15:55:22 crc kubenswrapper[5050]: I1211 15:55:22.769352 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/daba01c5-6d3c-4e32-84ce-d8f67b685671-catalog-content\") pod \"daba01c5-6d3c-4e32-84ce-d8f67b685671\" (UID: \"daba01c5-6d3c-4e32-84ce-d8f67b685671\") " Dec 11 15:55:22 crc kubenswrapper[5050]: I1211 15:55:22.769408 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mhkwl\" (UniqueName: \"kubernetes.io/projected/daba01c5-6d3c-4e32-84ce-d8f67b685671-kube-api-access-mhkwl\") pod \"daba01c5-6d3c-4e32-84ce-d8f67b685671\" (UID: \"daba01c5-6d3c-4e32-84ce-d8f67b685671\") " Dec 11 15:55:22 crc kubenswrapper[5050]: I1211 15:55:22.770742 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/daba01c5-6d3c-4e32-84ce-d8f67b685671-utilities" (OuterVolumeSpecName: "utilities") pod "daba01c5-6d3c-4e32-84ce-d8f67b685671" (UID: "daba01c5-6d3c-4e32-84ce-d8f67b685671"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 15:55:22 crc kubenswrapper[5050]: I1211 15:55:22.776427 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/daba01c5-6d3c-4e32-84ce-d8f67b685671-kube-api-access-mhkwl" (OuterVolumeSpecName: "kube-api-access-mhkwl") pod "daba01c5-6d3c-4e32-84ce-d8f67b685671" (UID: "daba01c5-6d3c-4e32-84ce-d8f67b685671"). InnerVolumeSpecName "kube-api-access-mhkwl". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 15:55:22 crc kubenswrapper[5050]: I1211 15:55:22.851108 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/daba01c5-6d3c-4e32-84ce-d8f67b685671-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "daba01c5-6d3c-4e32-84ce-d8f67b685671" (UID: "daba01c5-6d3c-4e32-84ce-d8f67b685671"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 15:55:22 crc kubenswrapper[5050]: I1211 15:55:22.872623 5050 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/daba01c5-6d3c-4e32-84ce-d8f67b685671-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 15:55:22 crc kubenswrapper[5050]: I1211 15:55:22.872656 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mhkwl\" (UniqueName: \"kubernetes.io/projected/daba01c5-6d3c-4e32-84ce-d8f67b685671-kube-api-access-mhkwl\") on node \"crc\" DevicePath \"\"" Dec 11 15:55:22 crc kubenswrapper[5050]: I1211 15:55:22.872666 5050 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/daba01c5-6d3c-4e32-84ce-d8f67b685671-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 15:55:22 crc kubenswrapper[5050]: I1211 15:55:22.966936 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-2t28t/must-gather-p95dx"] Dec 11 15:55:23 crc kubenswrapper[5050]: I1211 15:55:23.016223 5050 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="131d56da-b770-4452-97c9-b585434da431" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 11 15:55:23 crc kubenswrapper[5050]: I1211 15:55:23.191361 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-2t28t/must-gather-p95dx" event={"ID":"d5818b40-e580-431f-b892-075abf64ef47","Type":"ContainerStarted","Data":"9d0c3d3b66773142b6e23f8c428f3bc6ba80d151883d0a45f97c85dc563696af"} Dec 11 15:55:23 crc kubenswrapper[5050]: I1211 15:55:23.194223 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g2bpg" event={"ID":"daba01c5-6d3c-4e32-84ce-d8f67b685671","Type":"ContainerDied","Data":"a0e6decee9f23f2ebce4aff924ac771e47d8cc30131d5d61c23ac7f5db6fe038"} Dec 11 15:55:23 crc kubenswrapper[5050]: I1211 15:55:23.194313 5050 scope.go:117] "RemoveContainer" containerID="6999bdf777a1d901cca89da88f6dd3eb972c80edd1ca1e023678f2b6e068a574" Dec 11 15:55:23 crc kubenswrapper[5050]: I1211 15:55:23.194246 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-g2bpg" Dec 11 15:55:23 crc kubenswrapper[5050]: I1211 15:55:23.219732 5050 scope.go:117] "RemoveContainer" containerID="7091865c6daa971ea4041591f44b1882e4ca9c247bd4facfaf2f413ed4e25e45" Dec 11 15:55:23 crc kubenswrapper[5050]: I1211 15:55:23.236066 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-g2bpg"] Dec 11 15:55:23 crc kubenswrapper[5050]: I1211 15:55:23.246814 5050 scope.go:117] "RemoveContainer" containerID="a97e775c1164a52c3657aa79794d5788ff41e62ab5be8853489d362f00e3c22d" Dec 11 15:55:23 crc kubenswrapper[5050]: I1211 15:55:23.247806 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-g2bpg"] Dec 11 15:55:23 crc kubenswrapper[5050]: I1211 15:55:23.572459 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="daba01c5-6d3c-4e32-84ce-d8f67b685671" path="/var/lib/kubelet/pods/daba01c5-6d3c-4e32-84ce-d8f67b685671/volumes" Dec 11 15:55:24 crc kubenswrapper[5050]: I1211 15:55:24.362124 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-5d666c4679-krnwf" Dec 11 15:55:25 crc kubenswrapper[5050]: I1211 15:55:25.214195 5050 patch_prober.go:28] interesting pod/apiserver-76f77b778f-cd66n container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Dec 11 15:55:25 crc kubenswrapper[5050]: [+]log ok Dec 11 15:55:25 crc kubenswrapper[5050]: [+]etcd excluded: ok Dec 11 15:55:25 crc kubenswrapper[5050]: [+]etcd-readiness excluded: ok Dec 11 15:55:25 crc kubenswrapper[5050]: [+]poststarthook/start-apiserver-admission-initializer ok Dec 11 15:55:25 crc kubenswrapper[5050]: [+]informer-sync ok Dec 11 15:55:25 crc kubenswrapper[5050]: [+]poststarthook/generic-apiserver-start-informers ok Dec 11 15:55:25 crc kubenswrapper[5050]: [+]poststarthook/max-in-flight-filter ok Dec 11 15:55:25 crc kubenswrapper[5050]: [+]poststarthook/storage-object-count-tracker-hook ok Dec 11 15:55:25 crc kubenswrapper[5050]: [+]poststarthook/image.openshift.io-apiserver-caches ok Dec 11 15:55:25 crc kubenswrapper[5050]: [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok Dec 11 15:55:25 crc kubenswrapper[5050]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Dec 11 15:55:25 crc kubenswrapper[5050]: [+]poststarthook/project.openshift.io-projectcache ok Dec 11 15:55:25 crc kubenswrapper[5050]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Dec 11 15:55:25 crc kubenswrapper[5050]: [+]poststarthook/openshift.io-startinformers ok Dec 11 15:55:25 crc kubenswrapper[5050]: [+]poststarthook/openshift.io-restmapperupdater ok Dec 11 15:55:25 crc kubenswrapper[5050]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Dec 11 15:55:25 crc kubenswrapper[5050]: [-]shutdown failed: reason withheld Dec 11 15:55:25 crc kubenswrapper[5050]: readyz check failed Dec 11 15:55:25 crc kubenswrapper[5050]: I1211 15:55:25.214312 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-76f77b778f-cd66n" podUID="ea884e88-c0df-4212-976a-0d7ce1731fdc" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 11 15:55:26 crc kubenswrapper[5050]: I1211 15:55:26.736486 5050 prober.go:107] "Probe failed" probeType="Startup" 
pod="openstack/cinder-volume-volume1-0" podUID="b94bc94c-a636-4f0d-bd28-0f347e7b1143" containerName="cinder-volume" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 11 15:55:27 crc kubenswrapper[5050]: I1211 15:55:27.334169 5050 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-backup-0" podUID="0f954250-5982-4088-839a-8faf7bfe203c" containerName="cinder-backup" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 11 15:55:27 crc kubenswrapper[5050]: I1211 15:55:27.546430 5050 scope.go:117] "RemoveContainer" containerID="33899e24adacef82929b0fbbcd58af7cb4f7aa3935b49ec9b5b4045a51da1d14" Dec 11 15:55:27 crc kubenswrapper[5050]: E1211 15:55:27.546780 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 15:55:28 crc kubenswrapper[5050]: I1211 15:55:28.014555 5050 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="131d56da-b770-4452-97c9-b585434da431" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 11 15:55:30 crc kubenswrapper[5050]: I1211 15:55:30.097264 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="hostpath-provisioner/csi-hostpathplugin-mbbdj" podUID="2b344bba-8e1d-415b-9b5f-e21d3144fe42" containerName="hostpath-provisioner" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 11 15:55:30 crc kubenswrapper[5050]: I1211 15:55:30.209100 5050 patch_prober.go:28] interesting pod/apiserver-76f77b778f-cd66n container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body= Dec 11 15:55:30 crc kubenswrapper[5050]: I1211 15:55:30.209162 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-76f77b778f-cd66n" podUID="ea884e88-c0df-4212-976a-0d7ce1731fdc" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.9:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.217.0.9:8443: connect: connection refused" Dec 11 15:55:30 crc kubenswrapper[5050]: I1211 15:55:30.412722 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-openstack-openstack-cell1-z5mzj"] Dec 11 15:55:30 crc kubenswrapper[5050]: E1211 15:55:30.413302 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="daba01c5-6d3c-4e32-84ce-d8f67b685671" containerName="registry-server" Dec 11 15:55:30 crc kubenswrapper[5050]: I1211 15:55:30.413321 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="daba01c5-6d3c-4e32-84ce-d8f67b685671" containerName="registry-server" Dec 11 15:55:30 crc kubenswrapper[5050]: E1211 15:55:30.413339 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="daba01c5-6d3c-4e32-84ce-d8f67b685671" containerName="extract-content" Dec 11 15:55:30 crc kubenswrapper[5050]: I1211 15:55:30.413347 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="daba01c5-6d3c-4e32-84ce-d8f67b685671" containerName="extract-content" Dec 11 
15:55:30 crc kubenswrapper[5050]: E1211 15:55:30.413372 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="daba01c5-6d3c-4e32-84ce-d8f67b685671" containerName="extract-utilities" Dec 11 15:55:30 crc kubenswrapper[5050]: I1211 15:55:30.413379 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="daba01c5-6d3c-4e32-84ce-d8f67b685671" containerName="extract-utilities" Dec 11 15:55:30 crc kubenswrapper[5050]: I1211 15:55:30.413697 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="daba01c5-6d3c-4e32-84ce-d8f67b685671" containerName="registry-server" Dec 11 15:55:30 crc kubenswrapper[5050]: I1211 15:55:30.414573 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-openstack-openstack-cell1-z5mzj" Dec 11 15:55:30 crc kubenswrapper[5050]: I1211 15:55:30.426496 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-mvxd9" Dec 11 15:55:30 crc kubenswrapper[5050]: I1211 15:55:30.426777 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Dec 11 15:55:30 crc kubenswrapper[5050]: I1211 15:55:30.426891 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 11 15:55:30 crc kubenswrapper[5050]: I1211 15:55:30.427058 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Dec 11 15:55:30 crc kubenswrapper[5050]: I1211 15:55:30.530878 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-openstack-openstack-cell1-z5mzj"] Dec 11 15:55:30 crc kubenswrapper[5050]: I1211 15:55:30.581455 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bc4ff2ba-0977-4a04-a61c-e2af6e1a1bbe-inventory\") pod \"install-os-openstack-openstack-cell1-z5mzj\" (UID: \"bc4ff2ba-0977-4a04-a61c-e2af6e1a1bbe\") " pod="openstack/install-os-openstack-openstack-cell1-z5mzj" Dec 11 15:55:30 crc kubenswrapper[5050]: I1211 15:55:30.581510 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/bc4ff2ba-0977-4a04-a61c-e2af6e1a1bbe-ssh-key\") pod \"install-os-openstack-openstack-cell1-z5mzj\" (UID: \"bc4ff2ba-0977-4a04-a61c-e2af6e1a1bbe\") " pod="openstack/install-os-openstack-openstack-cell1-z5mzj" Dec 11 15:55:30 crc kubenswrapper[5050]: I1211 15:55:30.581531 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ldnb\" (UniqueName: \"kubernetes.io/projected/bc4ff2ba-0977-4a04-a61c-e2af6e1a1bbe-kube-api-access-8ldnb\") pod \"install-os-openstack-openstack-cell1-z5mzj\" (UID: \"bc4ff2ba-0977-4a04-a61c-e2af6e1a1bbe\") " pod="openstack/install-os-openstack-openstack-cell1-z5mzj" Dec 11 15:55:30 crc kubenswrapper[5050]: I1211 15:55:30.581682 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/bc4ff2ba-0977-4a04-a61c-e2af6e1a1bbe-ceph\") pod \"install-os-openstack-openstack-cell1-z5mzj\" (UID: \"bc4ff2ba-0977-4a04-a61c-e2af6e1a1bbe\") " pod="openstack/install-os-openstack-openstack-cell1-z5mzj" Dec 11 15:55:30 crc kubenswrapper[5050]: I1211 15:55:30.683716 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: 
\"kubernetes.io/secret/bc4ff2ba-0977-4a04-a61c-e2af6e1a1bbe-ceph\") pod \"install-os-openstack-openstack-cell1-z5mzj\" (UID: \"bc4ff2ba-0977-4a04-a61c-e2af6e1a1bbe\") " pod="openstack/install-os-openstack-openstack-cell1-z5mzj" Dec 11 15:55:30 crc kubenswrapper[5050]: I1211 15:55:30.684041 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bc4ff2ba-0977-4a04-a61c-e2af6e1a1bbe-inventory\") pod \"install-os-openstack-openstack-cell1-z5mzj\" (UID: \"bc4ff2ba-0977-4a04-a61c-e2af6e1a1bbe\") " pod="openstack/install-os-openstack-openstack-cell1-z5mzj" Dec 11 15:55:30 crc kubenswrapper[5050]: I1211 15:55:30.684086 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/bc4ff2ba-0977-4a04-a61c-e2af6e1a1bbe-ssh-key\") pod \"install-os-openstack-openstack-cell1-z5mzj\" (UID: \"bc4ff2ba-0977-4a04-a61c-e2af6e1a1bbe\") " pod="openstack/install-os-openstack-openstack-cell1-z5mzj" Dec 11 15:55:30 crc kubenswrapper[5050]: I1211 15:55:30.684105 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8ldnb\" (UniqueName: \"kubernetes.io/projected/bc4ff2ba-0977-4a04-a61c-e2af6e1a1bbe-kube-api-access-8ldnb\") pod \"install-os-openstack-openstack-cell1-z5mzj\" (UID: \"bc4ff2ba-0977-4a04-a61c-e2af6e1a1bbe\") " pod="openstack/install-os-openstack-openstack-cell1-z5mzj" Dec 11 15:55:30 crc kubenswrapper[5050]: I1211 15:55:30.690698 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/bc4ff2ba-0977-4a04-a61c-e2af6e1a1bbe-ceph\") pod \"install-os-openstack-openstack-cell1-z5mzj\" (UID: \"bc4ff2ba-0977-4a04-a61c-e2af6e1a1bbe\") " pod="openstack/install-os-openstack-openstack-cell1-z5mzj" Dec 11 15:55:30 crc kubenswrapper[5050]: I1211 15:55:30.703645 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bc4ff2ba-0977-4a04-a61c-e2af6e1a1bbe-inventory\") pod \"install-os-openstack-openstack-cell1-z5mzj\" (UID: \"bc4ff2ba-0977-4a04-a61c-e2af6e1a1bbe\") " pod="openstack/install-os-openstack-openstack-cell1-z5mzj" Dec 11 15:55:30 crc kubenswrapper[5050]: I1211 15:55:30.710859 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/bc4ff2ba-0977-4a04-a61c-e2af6e1a1bbe-ssh-key\") pod \"install-os-openstack-openstack-cell1-z5mzj\" (UID: \"bc4ff2ba-0977-4a04-a61c-e2af6e1a1bbe\") " pod="openstack/install-os-openstack-openstack-cell1-z5mzj" Dec 11 15:55:30 crc kubenswrapper[5050]: I1211 15:55:30.732796 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8ldnb\" (UniqueName: \"kubernetes.io/projected/bc4ff2ba-0977-4a04-a61c-e2af6e1a1bbe-kube-api-access-8ldnb\") pod \"install-os-openstack-openstack-cell1-z5mzj\" (UID: \"bc4ff2ba-0977-4a04-a61c-e2af6e1a1bbe\") " pod="openstack/install-os-openstack-openstack-cell1-z5mzj" Dec 11 15:55:30 crc kubenswrapper[5050]: I1211 15:55:30.745594 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-openstack-openstack-cell1-z5mzj" Dec 11 15:55:31 crc kubenswrapper[5050]: I1211 15:55:31.739425 5050 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-volume-volume1-0" podUID="b94bc94c-a636-4f0d-bd28-0f347e7b1143" containerName="cinder-volume" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 11 15:55:32 crc kubenswrapper[5050]: I1211 15:55:32.403265 5050 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-backup-0" podUID="0f954250-5982-4088-839a-8faf7bfe203c" containerName="cinder-backup" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 11 15:55:33 crc kubenswrapper[5050]: I1211 15:55:33.009143 5050 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="131d56da-b770-4452-97c9-b585434da431" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 11 15:55:33 crc kubenswrapper[5050]: I1211 15:55:33.627296 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-h4w7p" podUID="9c82f51b-e2a0-49e6-bc0e-d7679e439a6f" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.86:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:55:33 crc kubenswrapper[5050]: I1211 15:55:33.668298 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-h4w7p" podUID="9c82f51b-e2a0-49e6-bc0e-d7679e439a6f" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.86:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:55:33 crc kubenswrapper[5050]: I1211 15:55:33.771437 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="531337b1-3bd0-448d-a561-0b19b40214a6" containerName="galera" probeResult="failure" output="command timed out" Dec 11 15:55:33 crc kubenswrapper[5050]: I1211 15:55:33.771435 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="531337b1-3bd0-448d-a561-0b19b40214a6" containerName="galera" probeResult="failure" output="command timed out" Dec 11 15:55:33 crc kubenswrapper[5050]: I1211 15:55:33.823345 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-d878cb77-dmcvf"] Dec 11 15:55:34 crc kubenswrapper[5050]: I1211 15:55:34.323664 5050 generic.go:334] "Generic (PLEG): container finished" podID="29321ad8-528b-46ed-8c14-21a74038cddb" containerID="938a7854156ef3bfb81302582302a81ef99f4c5853767f52c07821fcdff360d2" exitCode=137 Dec 11 15:55:34 crc kubenswrapper[5050]: I1211 15:55:34.323935 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"29321ad8-528b-46ed-8c14-21a74038cddb","Type":"ContainerDied","Data":"938a7854156ef3bfb81302582302a81ef99f4c5853767f52c07821fcdff360d2"} Dec 11 15:55:34 crc kubenswrapper[5050]: I1211 15:55:34.326307 5050 generic.go:334] "Generic (PLEG): container finished" podID="9c5fd2fd-4df8-4f0f-982c-d3e6df852669" containerID="21d82c8ed257c6bbf41d15d936e7dbaada66175dbed0e14ecf37991389e1a973" exitCode=137 Dec 11 15:55:34 crc kubenswrapper[5050]: I1211 15:55:34.326335 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" 
event={"ID":"9c5fd2fd-4df8-4f0f-982c-d3e6df852669","Type":"ContainerDied","Data":"21d82c8ed257c6bbf41d15d936e7dbaada66175dbed0e14ecf37991389e1a973"} Dec 11 15:55:35 crc kubenswrapper[5050]: I1211 15:55:35.208615 5050 patch_prober.go:28] interesting pod/apiserver-76f77b778f-cd66n container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body= Dec 11 15:55:35 crc kubenswrapper[5050]: I1211 15:55:35.208672 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-76f77b778f-cd66n" podUID="ea884e88-c0df-4212-976a-0d7ce1731fdc" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.9:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.217.0.9:8443: connect: connection refused" Dec 11 15:55:35 crc kubenswrapper[5050]: I1211 15:55:35.777098 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-northd-0" podUID="654ba650-97ba-422e-931f-c97a03d7ff9c" containerName="ovn-northd" probeResult="failure" output="command timed out" Dec 11 15:55:35 crc kubenswrapper[5050]: I1211 15:55:35.781848 5050 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ovn-northd-0" podUID="654ba650-97ba-422e-931f-c97a03d7ff9c" containerName="ovn-northd" probeResult="failure" output="command timed out" Dec 11 15:55:36 crc kubenswrapper[5050]: I1211 15:55:36.838884 5050 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-volume-volume1-0" podUID="b94bc94c-a636-4f0d-bd28-0f347e7b1143" containerName="cinder-volume" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 11 15:55:37 crc kubenswrapper[5050]: I1211 15:55:37.328131 5050 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-backup-0" podUID="0f954250-5982-4088-839a-8faf7bfe203c" containerName="cinder-backup" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 11 15:55:38 crc kubenswrapper[5050]: I1211 15:55:38.005179 5050 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="131d56da-b770-4452-97c9-b585434da431" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 11 15:55:38 crc kubenswrapper[5050]: I1211 15:55:38.546263 5050 scope.go:117] "RemoveContainer" containerID="33899e24adacef82929b0fbbcd58af7cb4f7aa3935b49ec9b5b4045a51da1d14" Dec 11 15:55:38 crc kubenswrapper[5050]: E1211 15:55:38.546824 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 15:55:40 crc kubenswrapper[5050]: I1211 15:55:40.208596 5050 patch_prober.go:28] interesting pod/apiserver-76f77b778f-cd66n container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body= Dec 11 15:55:40 crc kubenswrapper[5050]: I1211 15:55:40.208934 5050 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-apiserver/apiserver-76f77b778f-cd66n" podUID="ea884e88-c0df-4212-976a-0d7ce1731fdc" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.9:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.217.0.9:8443: connect: connection refused" Dec 11 15:55:41 crc kubenswrapper[5050]: I1211 15:55:41.745031 5050 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-volume-volume1-0" podUID="b94bc94c-a636-4f0d-bd28-0f347e7b1143" containerName="cinder-volume" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 11 15:55:42 crc kubenswrapper[5050]: I1211 15:55:42.463158 5050 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-backup-0" podUID="0f954250-5982-4088-839a-8faf7bfe203c" containerName="cinder-backup" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 11 15:55:43 crc kubenswrapper[5050]: I1211 15:55:43.007681 5050 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="131d56da-b770-4452-97c9-b585434da431" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 11 15:55:45 crc kubenswrapper[5050]: I1211 15:55:45.209513 5050 patch_prober.go:28] interesting pod/apiserver-76f77b778f-cd66n container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body= Dec 11 15:55:45 crc kubenswrapper[5050]: I1211 15:55:45.209821 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-76f77b778f-cd66n" podUID="ea884e88-c0df-4212-976a-0d7ce1731fdc" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.9:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.217.0.9:8443: connect: connection refused" Dec 11 15:55:45 crc kubenswrapper[5050]: I1211 15:55:45.945208 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-openstack-openstack-cell1-z5mzj"] Dec 11 15:55:46 crc kubenswrapper[5050]: E1211 15:55:46.196108 5050 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/openstack-must-gather:latest" Dec 11 15:55:46 crc kubenswrapper[5050]: E1211 15:55:46.196291 5050 kuberuntime_manager.go:1274] "Unhandled Error" err=< Dec 11 15:55:46 crc kubenswrapper[5050]: container &Container{Name:gather,Image:quay.io/openstack-k8s-operators/openstack-must-gather:latest,Command:[/bin/bash -c Dec 11 15:55:46 crc kubenswrapper[5050]: echo "[disk usage checker] Started" Dec 11 15:55:46 crc kubenswrapper[5050]: target_dir="/must-gather" Dec 11 15:55:46 crc kubenswrapper[5050]: usage_percentage_limit="70" Dec 11 15:55:46 crc kubenswrapper[5050]: while true; do Dec 11 15:55:46 crc kubenswrapper[5050]: usage_percentage=$(df -P "$target_dir" | awk 'NR==2 {print $5}' | sed 's/%//') Dec 11 15:55:46 crc kubenswrapper[5050]: echo "[disk usage checker] Volume usage percentage: current = ${usage_percentage} ; allowed = ${usage_percentage_limit}" Dec 11 15:55:46 crc kubenswrapper[5050]: if [ "$usage_percentage" -gt "$usage_percentage_limit" ]; then Dec 11 15:55:46 crc kubenswrapper[5050]: echo "[disk usage checker] Disk usage exceeds the volume percentage of ${usage_percentage_limit} for mounted directory, terminating..." 
Dec 11 15:55:46 crc kubenswrapper[5050]: ps -o sess --no-headers | sort -u | while read sid; do Dec 11 15:55:46 crc kubenswrapper[5050]: [[ "$sid" -eq "${$}" ]] && continue Dec 11 15:55:46 crc kubenswrapper[5050]: pkill --signal SIGKILL --session "$sid" Dec 11 15:55:46 crc kubenswrapper[5050]: done Dec 11 15:55:46 crc kubenswrapper[5050]: exit 1 Dec 11 15:55:46 crc kubenswrapper[5050]: fi Dec 11 15:55:46 crc kubenswrapper[5050]: sleep 5 Dec 11 15:55:46 crc kubenswrapper[5050]: done & setsid -w bash <<-MUSTGATHER_EOF Dec 11 15:55:46 crc kubenswrapper[5050]: ADDITIONAL_NAMESPACES=kuttl,openshift-storage,openshift-marketplace,openshift-operators,sushy-emulator,tobiko OPENSTACK_DATABASES=ALL SOS_EDPM=all SOS_DECOMPRESS=0 gather Dec 11 15:55:46 crc kubenswrapper[5050]: MUSTGATHER_EOF Dec 11 15:55:46 crc kubenswrapper[5050]: sync && echo 'Caches written to disk'],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:must-gather-output,ReadOnly:false,MountPath:/must-gather,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nv5wr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod must-gather-p95dx_openshift-must-gather-2t28t(d5818b40-e580-431f-b892-075abf64ef47): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled Dec 11 15:55:46 crc kubenswrapper[5050]: > logger="UnhandledError" Dec 11 15:55:46 crc kubenswrapper[5050]: E1211 15:55:46.198462 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"gather\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"copy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-must-gather:latest\\\"\"]" pod="openshift-must-gather-2t28t/must-gather-p95dx" podUID="d5818b40-e580-431f-b892-075abf64ef47" Dec 11 15:55:46 crc kubenswrapper[5050]: I1211 15:55:46.456127 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-openstack-openstack-cell1-z5mzj" event={"ID":"bc4ff2ba-0977-4a04-a61c-e2af6e1a1bbe","Type":"ContainerStarted","Data":"d5a859e15ad933abf15794ed09542e6bf0f9845d0e9214845a78fdb5376bb220"} Dec 11 15:55:46 crc kubenswrapper[5050]: E1211 15:55:46.459533 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"gather\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-must-gather:latest\\\"\", failed to \"StartContainer\" for \"copy\" with 
ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-must-gather:latest\\\"\"]" pod="openshift-must-gather-2t28t/must-gather-p95dx" podUID="d5818b40-e580-431f-b892-075abf64ef47" Dec 11 15:55:46 crc kubenswrapper[5050]: I1211 15:55:46.733610 5050 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-volume-volume1-0" podUID="b94bc94c-a636-4f0d-bd28-0f347e7b1143" containerName="cinder-volume" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 11 15:55:46 crc kubenswrapper[5050]: I1211 15:55:46.733732 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-volume-volume1-0" Dec 11 15:55:46 crc kubenswrapper[5050]: I1211 15:55:46.734877 5050 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cinder-volume" containerStatusID={"Type":"cri-o","ID":"1775318ccb2e6444fd0e7771ecf1272cb1579807c16cb2b926ad61c7397affe6"} pod="openstack/cinder-volume-volume1-0" containerMessage="Container cinder-volume failed startup probe, will be restarted" Dec 11 15:55:46 crc kubenswrapper[5050]: I1211 15:55:46.734971 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-volume-volume1-0" podUID="b94bc94c-a636-4f0d-bd28-0f347e7b1143" containerName="cinder-volume" containerID="cri-o://1775318ccb2e6444fd0e7771ecf1272cb1579807c16cb2b926ad61c7397affe6" gracePeriod=30 Dec 11 15:55:47 crc kubenswrapper[5050]: I1211 15:55:47.327283 5050 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-backup-0" podUID="0f954250-5982-4088-839a-8faf7bfe203c" containerName="cinder-backup" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 11 15:55:47 crc kubenswrapper[5050]: I1211 15:55:47.327626 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-backup-0" Dec 11 15:55:47 crc kubenswrapper[5050]: I1211 15:55:47.328621 5050 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cinder-backup" containerStatusID={"Type":"cri-o","ID":"98eff0f3cea0509221eca0e000876fd8812288edb7d37154c570eded55a82147"} pod="openstack/cinder-backup-0" containerMessage="Container cinder-backup failed startup probe, will be restarted" Dec 11 15:55:47 crc kubenswrapper[5050]: I1211 15:55:47.328682 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-backup-0" podUID="0f954250-5982-4088-839a-8faf7bfe203c" containerName="cinder-backup" containerID="cri-o://98eff0f3cea0509221eca0e000876fd8812288edb7d37154c570eded55a82147" gracePeriod=30 Dec 11 15:55:47 crc kubenswrapper[5050]: I1211 15:55:47.507576 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"29321ad8-528b-46ed-8c14-21a74038cddb","Type":"ContainerStarted","Data":"ff36a77c772086b4d7bee624b29886eacc56d72af90dabc8ccff22f5743f5b10"} Dec 11 15:55:47 crc kubenswrapper[5050]: I1211 15:55:47.520436 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"9c5fd2fd-4df8-4f0f-982c-d3e6df852669","Type":"ContainerStarted","Data":"b9c5e5d5d2c20afaaff097a4e364ba95d81b279644b3d4369b694b4d4235314b"} Dec 11 15:55:48 crc kubenswrapper[5050]: I1211 15:55:48.010228 5050 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="131d56da-b770-4452-97c9-b585434da431" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 
11 15:55:48 crc kubenswrapper[5050]: I1211 15:55:48.010734 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Dec 11 15:55:48 crc kubenswrapper[5050]: I1211 15:55:48.012344 5050 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cinder-scheduler" containerStatusID={"Type":"cri-o","ID":"0918c12ef77e4a03c8b33e9194ab473c7649b3f481ba8e874ade29222d0939ed"} pod="openstack/cinder-scheduler-0" containerMessage="Container cinder-scheduler failed startup probe, will be restarted" Dec 11 15:55:48 crc kubenswrapper[5050]: I1211 15:55:48.012541 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="131d56da-b770-4452-97c9-b585434da431" containerName="cinder-scheduler" containerID="cri-o://0918c12ef77e4a03c8b33e9194ab473c7649b3f481ba8e874ade29222d0939ed" gracePeriod=30 Dec 11 15:55:48 crc kubenswrapper[5050]: I1211 15:55:48.531408 5050 generic.go:334] "Generic (PLEG): container finished" podID="0f954250-5982-4088-839a-8faf7bfe203c" containerID="98eff0f3cea0509221eca0e000876fd8812288edb7d37154c570eded55a82147" exitCode=143 Dec 11 15:55:48 crc kubenswrapper[5050]: I1211 15:55:48.531627 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"0f954250-5982-4088-839a-8faf7bfe203c","Type":"ContainerDied","Data":"98eff0f3cea0509221eca0e000876fd8812288edb7d37154c570eded55a82147"} Dec 11 15:55:48 crc kubenswrapper[5050]: I1211 15:55:48.531772 5050 scope.go:117] "RemoveContainer" containerID="2b1e9cc4138c3cee1e26c2a91b99b470ec5f4b50f5007ad3b68d6bbde49915a9" Dec 11 15:55:49 crc kubenswrapper[5050]: I1211 15:55:49.545568 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"0f954250-5982-4088-839a-8faf7bfe203c","Type":"ContainerStarted","Data":"1ec14175801d8475b360d27f9d8762a179069518ff96303eeea83f3cabc0603c"} Dec 11 15:55:50 crc kubenswrapper[5050]: I1211 15:55:50.208436 5050 patch_prober.go:28] interesting pod/apiserver-76f77b778f-cd66n container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body= Dec 11 15:55:50 crc kubenswrapper[5050]: I1211 15:55:50.208729 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-76f77b778f-cd66n" podUID="ea884e88-c0df-4212-976a-0d7ce1731fdc" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.9:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.217.0.9:8443: connect: connection refused" Dec 11 15:55:51 crc kubenswrapper[5050]: I1211 15:55:51.568718 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-openstack-openstack-cell1-z5mzj" event={"ID":"bc4ff2ba-0977-4a04-a61c-e2af6e1a1bbe","Type":"ContainerStarted","Data":"1dfa27c6272f96426c1492974f98bcd24d719a1317e98c920646c427582341dc"} Dec 11 15:55:51 crc kubenswrapper[5050]: I1211 15:55:51.596170 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-openstack-openstack-cell1-z5mzj" podStartSLOduration=17.247606285 podStartE2EDuration="21.596151331s" podCreationTimestamp="2025-12-11 15:55:30 +0000 UTC" firstStartedPulling="2025-12-11 15:55:45.955740417 +0000 UTC m=+7636.799463003" lastFinishedPulling="2025-12-11 15:55:50.304285463 +0000 UTC m=+7641.148008049" 
observedRunningTime="2025-12-11 15:55:51.590733996 +0000 UTC m=+7642.434456592" watchObservedRunningTime="2025-12-11 15:55:51.596151331 +0000 UTC m=+7642.439873907" Dec 11 15:55:51 crc kubenswrapper[5050]: I1211 15:55:51.907561 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-scheduler-0" Dec 11 15:55:51 crc kubenswrapper[5050]: I1211 15:55:51.922430 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-share-share1-0" Dec 11 15:55:52 crc kubenswrapper[5050]: I1211 15:55:52.312796 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-backup-0" Dec 11 15:55:52 crc kubenswrapper[5050]: I1211 15:55:52.547893 5050 scope.go:117] "RemoveContainer" containerID="33899e24adacef82929b0fbbcd58af7cb4f7aa3935b49ec9b5b4045a51da1d14" Dec 11 15:55:52 crc kubenswrapper[5050]: E1211 15:55:52.548300 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 15:55:53 crc kubenswrapper[5050]: I1211 15:55:53.606279 5050 generic.go:334] "Generic (PLEG): container finished" podID="2086bc41-00a2-4c97-a491-08511f3ed6e5" containerID="53622f8671c2479d99b0c3589d2052c01e492456c4e585fc4fd295a95f250319" exitCode=137 Dec 11 15:55:53 crc kubenswrapper[5050]: I1211 15:55:53.606518 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5fb79d99b5-m4xgd" event={"ID":"2086bc41-00a2-4c97-a491-08511f3ed6e5","Type":"ContainerDied","Data":"53622f8671c2479d99b0c3589d2052c01e492456c4e585fc4fd295a95f250319"} Dec 11 15:55:53 crc kubenswrapper[5050]: I1211 15:55:53.606925 5050 scope.go:117] "RemoveContainer" containerID="c2abb4cc449a9d8127c16b79dfe68ac27a46606a1e98e66a409122244f1e0137" Dec 11 15:55:55 crc kubenswrapper[5050]: I1211 15:55:55.208979 5050 patch_prober.go:28] interesting pod/apiserver-76f77b778f-cd66n container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body= Dec 11 15:55:55 crc kubenswrapper[5050]: I1211 15:55:55.209414 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-76f77b778f-cd66n" podUID="ea884e88-c0df-4212-976a-0d7ce1731fdc" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.9:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.217.0.9:8443: connect: connection refused" Dec 11 15:55:56 crc kubenswrapper[5050]: I1211 15:55:56.640101 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5fb79d99b5-m4xgd" event={"ID":"2086bc41-00a2-4c97-a491-08511f3ed6e5","Type":"ContainerStarted","Data":"7472629b59d5c9f3d2d031ef1c5803ae49241cf989897a55d5cc4c872cc31d88"} Dec 11 15:55:57 crc kubenswrapper[5050]: I1211 15:55:57.336972 5050 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-backup-0" podUID="0f954250-5982-4088-839a-8faf7bfe203c" containerName="cinder-backup" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 11 15:55:58 crc kubenswrapper[5050]: I1211 
15:55:58.078089 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-2t28t/must-gather-p95dx"] Dec 11 15:55:58 crc kubenswrapper[5050]: I1211 15:55:58.091457 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-2t28t/must-gather-p95dx"] Dec 11 15:55:58 crc kubenswrapper[5050]: I1211 15:55:58.881276 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-d878cb77-dmcvf" podUID="effe4522-49a7-4d34-b64d-6ab0012f5548" containerName="oauth-openshift" containerID="cri-o://9ec87625c63bb19608c5fab2a9d7e38a82ac0120b7f7afb4ed2fe01cd46ecae8" gracePeriod=15 Dec 11 15:55:59 crc kubenswrapper[5050]: I1211 15:55:59.698318 5050 generic.go:334] "Generic (PLEG): container finished" podID="effe4522-49a7-4d34-b64d-6ab0012f5548" containerID="9ec87625c63bb19608c5fab2a9d7e38a82ac0120b7f7afb4ed2fe01cd46ecae8" exitCode=0 Dec 11 15:55:59 crc kubenswrapper[5050]: I1211 15:55:59.698458 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-d878cb77-dmcvf" event={"ID":"effe4522-49a7-4d34-b64d-6ab0012f5548","Type":"ContainerDied","Data":"9ec87625c63bb19608c5fab2a9d7e38a82ac0120b7f7afb4ed2fe01cd46ecae8"} Dec 11 15:55:59 crc kubenswrapper[5050]: I1211 15:55:59.698877 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-d878cb77-dmcvf" event={"ID":"effe4522-49a7-4d34-b64d-6ab0012f5548","Type":"ContainerDied","Data":"f69ece235426f2d56583702038cafcec531810dbd2c457246b91befdd1da4c4d"} Dec 11 15:55:59 crc kubenswrapper[5050]: I1211 15:55:59.698892 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f69ece235426f2d56583702038cafcec531810dbd2c457246b91befdd1da4c4d" Dec 11 15:55:59 crc kubenswrapper[5050]: I1211 15:55:59.744860 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-d878cb77-dmcvf" Dec 11 15:55:59 crc kubenswrapper[5050]: I1211 15:55:59.811847 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/effe4522-49a7-4d34-b64d-6ab0012f5548-v4-0-config-system-ocp-branding-template\") pod \"effe4522-49a7-4d34-b64d-6ab0012f5548\" (UID: \"effe4522-49a7-4d34-b64d-6ab0012f5548\") " Dec 11 15:55:59 crc kubenswrapper[5050]: I1211 15:55:59.811923 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/effe4522-49a7-4d34-b64d-6ab0012f5548-audit-policies\") pod \"effe4522-49a7-4d34-b64d-6ab0012f5548\" (UID: \"effe4522-49a7-4d34-b64d-6ab0012f5548\") " Dec 11 15:55:59 crc kubenswrapper[5050]: I1211 15:55:59.812038 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/effe4522-49a7-4d34-b64d-6ab0012f5548-v4-0-config-system-serving-cert\") pod \"effe4522-49a7-4d34-b64d-6ab0012f5548\" (UID: \"effe4522-49a7-4d34-b64d-6ab0012f5548\") " Dec 11 15:55:59 crc kubenswrapper[5050]: I1211 15:55:59.812096 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/effe4522-49a7-4d34-b64d-6ab0012f5548-v4-0-config-system-session\") pod \"effe4522-49a7-4d34-b64d-6ab0012f5548\" (UID: \"effe4522-49a7-4d34-b64d-6ab0012f5548\") " Dec 11 15:55:59 crc kubenswrapper[5050]: I1211 15:55:59.812140 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/effe4522-49a7-4d34-b64d-6ab0012f5548-v4-0-config-system-trusted-ca-bundle\") pod \"effe4522-49a7-4d34-b64d-6ab0012f5548\" (UID: \"effe4522-49a7-4d34-b64d-6ab0012f5548\") " Dec 11 15:55:59 crc kubenswrapper[5050]: I1211 15:55:59.812186 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vc85w\" (UniqueName: \"kubernetes.io/projected/effe4522-49a7-4d34-b64d-6ab0012f5548-kube-api-access-vc85w\") pod \"effe4522-49a7-4d34-b64d-6ab0012f5548\" (UID: \"effe4522-49a7-4d34-b64d-6ab0012f5548\") " Dec 11 15:55:59 crc kubenswrapper[5050]: I1211 15:55:59.812226 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/effe4522-49a7-4d34-b64d-6ab0012f5548-v4-0-config-system-cliconfig\") pod \"effe4522-49a7-4d34-b64d-6ab0012f5548\" (UID: \"effe4522-49a7-4d34-b64d-6ab0012f5548\") " Dec 11 15:55:59 crc kubenswrapper[5050]: I1211 15:55:59.812249 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/effe4522-49a7-4d34-b64d-6ab0012f5548-v4-0-config-user-template-provider-selection\") pod \"effe4522-49a7-4d34-b64d-6ab0012f5548\" (UID: \"effe4522-49a7-4d34-b64d-6ab0012f5548\") " Dec 11 15:55:59 crc kubenswrapper[5050]: I1211 15:55:59.812311 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/effe4522-49a7-4d34-b64d-6ab0012f5548-v4-0-config-user-template-login\") pod \"effe4522-49a7-4d34-b64d-6ab0012f5548\" (UID: \"effe4522-49a7-4d34-b64d-6ab0012f5548\") " Dec 11 
15:55:59 crc kubenswrapper[5050]: I1211 15:55:59.812347 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/effe4522-49a7-4d34-b64d-6ab0012f5548-v4-0-config-user-idp-0-file-data\") pod \"effe4522-49a7-4d34-b64d-6ab0012f5548\" (UID: \"effe4522-49a7-4d34-b64d-6ab0012f5548\") " Dec 11 15:55:59 crc kubenswrapper[5050]: I1211 15:55:59.812367 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/effe4522-49a7-4d34-b64d-6ab0012f5548-v4-0-config-user-template-error\") pod \"effe4522-49a7-4d34-b64d-6ab0012f5548\" (UID: \"effe4522-49a7-4d34-b64d-6ab0012f5548\") " Dec 11 15:55:59 crc kubenswrapper[5050]: I1211 15:55:59.812398 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/effe4522-49a7-4d34-b64d-6ab0012f5548-audit-dir\") pod \"effe4522-49a7-4d34-b64d-6ab0012f5548\" (UID: \"effe4522-49a7-4d34-b64d-6ab0012f5548\") " Dec 11 15:55:59 crc kubenswrapper[5050]: I1211 15:55:59.812436 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/effe4522-49a7-4d34-b64d-6ab0012f5548-v4-0-config-system-service-ca\") pod \"effe4522-49a7-4d34-b64d-6ab0012f5548\" (UID: \"effe4522-49a7-4d34-b64d-6ab0012f5548\") " Dec 11 15:55:59 crc kubenswrapper[5050]: I1211 15:55:59.812481 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/effe4522-49a7-4d34-b64d-6ab0012f5548-v4-0-config-system-router-certs\") pod \"effe4522-49a7-4d34-b64d-6ab0012f5548\" (UID: \"effe4522-49a7-4d34-b64d-6ab0012f5548\") " Dec 11 15:55:59 crc kubenswrapper[5050]: I1211 15:55:59.813295 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/effe4522-49a7-4d34-b64d-6ab0012f5548-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "effe4522-49a7-4d34-b64d-6ab0012f5548" (UID: "effe4522-49a7-4d34-b64d-6ab0012f5548"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 15:55:59 crc kubenswrapper[5050]: I1211 15:55:59.813760 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/effe4522-49a7-4d34-b64d-6ab0012f5548-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "effe4522-49a7-4d34-b64d-6ab0012f5548" (UID: "effe4522-49a7-4d34-b64d-6ab0012f5548"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 15:55:59 crc kubenswrapper[5050]: I1211 15:55:59.835323 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/effe4522-49a7-4d34-b64d-6ab0012f5548-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "effe4522-49a7-4d34-b64d-6ab0012f5548" (UID: "effe4522-49a7-4d34-b64d-6ab0012f5548"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 15:55:59 crc kubenswrapper[5050]: I1211 15:55:59.841452 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/effe4522-49a7-4d34-b64d-6ab0012f5548-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "effe4522-49a7-4d34-b64d-6ab0012f5548" (UID: "effe4522-49a7-4d34-b64d-6ab0012f5548"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 15:55:59 crc kubenswrapper[5050]: I1211 15:55:59.841889 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/effe4522-49a7-4d34-b64d-6ab0012f5548-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "effe4522-49a7-4d34-b64d-6ab0012f5548" (UID: "effe4522-49a7-4d34-b64d-6ab0012f5548"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 15:55:59 crc kubenswrapper[5050]: I1211 15:55:59.852832 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/effe4522-49a7-4d34-b64d-6ab0012f5548-kube-api-access-vc85w" (OuterVolumeSpecName: "kube-api-access-vc85w") pod "effe4522-49a7-4d34-b64d-6ab0012f5548" (UID: "effe4522-49a7-4d34-b64d-6ab0012f5548"). InnerVolumeSpecName "kube-api-access-vc85w". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 15:55:59 crc kubenswrapper[5050]: I1211 15:55:59.915496 5050 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/effe4522-49a7-4d34-b64d-6ab0012f5548-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 11 15:55:59 crc kubenswrapper[5050]: I1211 15:55:59.915531 5050 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/effe4522-49a7-4d34-b64d-6ab0012f5548-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 15:55:59 crc kubenswrapper[5050]: I1211 15:55:59.915548 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vc85w\" (UniqueName: \"kubernetes.io/projected/effe4522-49a7-4d34-b64d-6ab0012f5548-kube-api-access-vc85w\") on node \"crc\" DevicePath \"\"" Dec 11 15:55:59 crc kubenswrapper[5050]: I1211 15:55:59.915560 5050 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/effe4522-49a7-4d34-b64d-6ab0012f5548-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Dec 11 15:55:59 crc kubenswrapper[5050]: I1211 15:55:59.915572 5050 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/effe4522-49a7-4d34-b64d-6ab0012f5548-audit-dir\") on node \"crc\" DevicePath \"\"" Dec 11 15:55:59 crc kubenswrapper[5050]: I1211 15:55:59.915584 5050 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/effe4522-49a7-4d34-b64d-6ab0012f5548-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Dec 11 15:55:59 crc kubenswrapper[5050]: I1211 15:55:59.982980 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/effe4522-49a7-4d34-b64d-6ab0012f5548-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod 
"effe4522-49a7-4d34-b64d-6ab0012f5548" (UID: "effe4522-49a7-4d34-b64d-6ab0012f5548"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 15:55:59 crc kubenswrapper[5050]: I1211 15:55:59.983123 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/effe4522-49a7-4d34-b64d-6ab0012f5548-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "effe4522-49a7-4d34-b64d-6ab0012f5548" (UID: "effe4522-49a7-4d34-b64d-6ab0012f5548"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 15:55:59 crc kubenswrapper[5050]: I1211 15:55:59.983523 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/effe4522-49a7-4d34-b64d-6ab0012f5548-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "effe4522-49a7-4d34-b64d-6ab0012f5548" (UID: "effe4522-49a7-4d34-b64d-6ab0012f5548"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 15:55:59 crc kubenswrapper[5050]: I1211 15:55:59.983783 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/effe4522-49a7-4d34-b64d-6ab0012f5548-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "effe4522-49a7-4d34-b64d-6ab0012f5548" (UID: "effe4522-49a7-4d34-b64d-6ab0012f5548"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 15:55:59 crc kubenswrapper[5050]: I1211 15:55:59.984141 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/effe4522-49a7-4d34-b64d-6ab0012f5548-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "effe4522-49a7-4d34-b64d-6ab0012f5548" (UID: "effe4522-49a7-4d34-b64d-6ab0012f5548"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 15:55:59 crc kubenswrapper[5050]: I1211 15:55:59.984256 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/effe4522-49a7-4d34-b64d-6ab0012f5548-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "effe4522-49a7-4d34-b64d-6ab0012f5548" (UID: "effe4522-49a7-4d34-b64d-6ab0012f5548"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 15:55:59 crc kubenswrapper[5050]: I1211 15:55:59.984253 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/effe4522-49a7-4d34-b64d-6ab0012f5548-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "effe4522-49a7-4d34-b64d-6ab0012f5548" (UID: "effe4522-49a7-4d34-b64d-6ab0012f5548"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 15:55:59 crc kubenswrapper[5050]: I1211 15:55:59.984336 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/effe4522-49a7-4d34-b64d-6ab0012f5548-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "effe4522-49a7-4d34-b64d-6ab0012f5548" (UID: "effe4522-49a7-4d34-b64d-6ab0012f5548"). 
InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 15:56:00 crc kubenswrapper[5050]: I1211 15:56:00.017236 5050 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/effe4522-49a7-4d34-b64d-6ab0012f5548-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Dec 11 15:56:00 crc kubenswrapper[5050]: I1211 15:56:00.017277 5050 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/effe4522-49a7-4d34-b64d-6ab0012f5548-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Dec 11 15:56:00 crc kubenswrapper[5050]: I1211 15:56:00.017287 5050 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/effe4522-49a7-4d34-b64d-6ab0012f5548-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Dec 11 15:56:00 crc kubenswrapper[5050]: I1211 15:56:00.017298 5050 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/effe4522-49a7-4d34-b64d-6ab0012f5548-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Dec 11 15:56:00 crc kubenswrapper[5050]: I1211 15:56:00.017307 5050 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/effe4522-49a7-4d34-b64d-6ab0012f5548-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Dec 11 15:56:00 crc kubenswrapper[5050]: I1211 15:56:00.017318 5050 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/effe4522-49a7-4d34-b64d-6ab0012f5548-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 11 15:56:00 crc kubenswrapper[5050]: I1211 15:56:00.017328 5050 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/effe4522-49a7-4d34-b64d-6ab0012f5548-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Dec 11 15:56:00 crc kubenswrapper[5050]: I1211 15:56:00.017339 5050 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/effe4522-49a7-4d34-b64d-6ab0012f5548-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Dec 11 15:56:00 crc kubenswrapper[5050]: I1211 15:56:00.208599 5050 patch_prober.go:28] interesting pod/apiserver-76f77b778f-cd66n container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body= Dec 11 15:56:00 crc kubenswrapper[5050]: I1211 15:56:00.208944 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-76f77b778f-cd66n" podUID="ea884e88-c0df-4212-976a-0d7ce1731fdc" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.9:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.217.0.9:8443: connect: connection refused" Dec 11 15:56:00 crc kubenswrapper[5050]: I1211 15:56:00.717381 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-d878cb77-dmcvf" Dec 11 15:56:00 crc kubenswrapper[5050]: I1211 15:56:00.717491 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-2t28t/must-gather-p95dx" podUID="d5818b40-e580-431f-b892-075abf64ef47" containerName="gather" containerID="cri-o://1ecfe23058e27617c1487799d3ab207d087a7379653379af811ce5ab55debdbc" gracePeriod=2 Dec 11 15:56:00 crc kubenswrapper[5050]: I1211 15:56:00.717659 5050 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-2t28t/must-gather-p95dx" podUID="d5818b40-e580-431f-b892-075abf64ef47" containerName="copy" containerID="cri-o://fd073c48ba38a0d3ab62843fe9687fdddc0a98723ef38ea922d55ec448668065" gracePeriod=2 Dec 11 15:56:00 crc kubenswrapper[5050]: I1211 15:56:00.769647 5050 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-d878cb77-dmcvf"] Dec 11 15:56:00 crc kubenswrapper[5050]: I1211 15:56:00.782557 5050 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-d878cb77-dmcvf"] Dec 11 15:56:01 crc kubenswrapper[5050]: I1211 15:56:01.064698 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-7865f45dd7-dnm2m"] Dec 11 15:56:01 crc kubenswrapper[5050]: E1211 15:56:01.066697 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5818b40-e580-431f-b892-075abf64ef47" containerName="gather" Dec 11 15:56:01 crc kubenswrapper[5050]: I1211 15:56:01.066734 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5818b40-e580-431f-b892-075abf64ef47" containerName="gather" Dec 11 15:56:01 crc kubenswrapper[5050]: E1211 15:56:01.066756 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5818b40-e580-431f-b892-075abf64ef47" containerName="copy" Dec 11 15:56:01 crc kubenswrapper[5050]: I1211 15:56:01.066765 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5818b40-e580-431f-b892-075abf64ef47" containerName="copy" Dec 11 15:56:01 crc kubenswrapper[5050]: E1211 15:56:01.066807 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="effe4522-49a7-4d34-b64d-6ab0012f5548" containerName="oauth-openshift" Dec 11 15:56:01 crc kubenswrapper[5050]: I1211 15:56:01.066816 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="effe4522-49a7-4d34-b64d-6ab0012f5548" containerName="oauth-openshift" Dec 11 15:56:01 crc kubenswrapper[5050]: I1211 15:56:01.067109 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5818b40-e580-431f-b892-075abf64ef47" containerName="gather" Dec 11 15:56:01 crc kubenswrapper[5050]: I1211 15:56:01.067151 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5818b40-e580-431f-b892-075abf64ef47" containerName="copy" Dec 11 15:56:01 crc kubenswrapper[5050]: I1211 15:56:01.067164 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="effe4522-49a7-4d34-b64d-6ab0012f5548" containerName="oauth-openshift" Dec 11 15:56:01 crc kubenswrapper[5050]: I1211 15:56:01.068803 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-7865f45dd7-dnm2m" Dec 11 15:56:01 crc kubenswrapper[5050]: I1211 15:56:01.074910 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Dec 11 15:56:01 crc kubenswrapper[5050]: I1211 15:56:01.075108 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Dec 11 15:56:01 crc kubenswrapper[5050]: I1211 15:56:01.075219 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Dec 11 15:56:01 crc kubenswrapper[5050]: I1211 15:56:01.075252 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Dec 11 15:56:01 crc kubenswrapper[5050]: I1211 15:56:01.075336 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Dec 11 15:56:01 crc kubenswrapper[5050]: I1211 15:56:01.075370 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Dec 11 15:56:01 crc kubenswrapper[5050]: I1211 15:56:01.075413 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Dec 11 15:56:01 crc kubenswrapper[5050]: I1211 15:56:01.075461 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Dec 11 15:56:01 crc kubenswrapper[5050]: I1211 15:56:01.075492 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Dec 11 15:56:01 crc kubenswrapper[5050]: I1211 15:56:01.075531 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Dec 11 15:56:01 crc kubenswrapper[5050]: I1211 15:56:01.075607 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Dec 11 15:56:01 crc kubenswrapper[5050]: I1211 15:56:01.075676 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Dec 11 15:56:01 crc kubenswrapper[5050]: I1211 15:56:01.082861 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-7865f45dd7-dnm2m"] Dec 11 15:56:01 crc kubenswrapper[5050]: I1211 15:56:01.086864 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Dec 11 15:56:01 crc kubenswrapper[5050]: I1211 15:56:01.089374 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Dec 11 15:56:01 crc kubenswrapper[5050]: I1211 15:56:01.106567 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Dec 11 15:56:01 crc kubenswrapper[5050]: I1211 15:56:01.147271 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/c5249b8f-bc31-4a99-8023-b99875cb5293-v4-0-config-user-template-login\") pod \"oauth-openshift-7865f45dd7-dnm2m\" (UID: \"c5249b8f-bc31-4a99-8023-b99875cb5293\") " 
pod="openshift-authentication/oauth-openshift-7865f45dd7-dnm2m" Dec 11 15:56:01 crc kubenswrapper[5050]: I1211 15:56:01.147358 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/c5249b8f-bc31-4a99-8023-b99875cb5293-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7865f45dd7-dnm2m\" (UID: \"c5249b8f-bc31-4a99-8023-b99875cb5293\") " pod="openshift-authentication/oauth-openshift-7865f45dd7-dnm2m" Dec 11 15:56:01 crc kubenswrapper[5050]: I1211 15:56:01.147445 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c5249b8f-bc31-4a99-8023-b99875cb5293-audit-policies\") pod \"oauth-openshift-7865f45dd7-dnm2m\" (UID: \"c5249b8f-bc31-4a99-8023-b99875cb5293\") " pod="openshift-authentication/oauth-openshift-7865f45dd7-dnm2m" Dec 11 15:56:01 crc kubenswrapper[5050]: I1211 15:56:01.147971 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/c5249b8f-bc31-4a99-8023-b99875cb5293-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7865f45dd7-dnm2m\" (UID: \"c5249b8f-bc31-4a99-8023-b99875cb5293\") " pod="openshift-authentication/oauth-openshift-7865f45dd7-dnm2m" Dec 11 15:56:01 crc kubenswrapper[5050]: I1211 15:56:01.148081 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/c5249b8f-bc31-4a99-8023-b99875cb5293-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7865f45dd7-dnm2m\" (UID: \"c5249b8f-bc31-4a99-8023-b99875cb5293\") " pod="openshift-authentication/oauth-openshift-7865f45dd7-dnm2m" Dec 11 15:56:01 crc kubenswrapper[5050]: I1211 15:56:01.148211 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/c5249b8f-bc31-4a99-8023-b99875cb5293-v4-0-config-system-service-ca\") pod \"oauth-openshift-7865f45dd7-dnm2m\" (UID: \"c5249b8f-bc31-4a99-8023-b99875cb5293\") " pod="openshift-authentication/oauth-openshift-7865f45dd7-dnm2m" Dec 11 15:56:01 crc kubenswrapper[5050]: I1211 15:56:01.148236 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/c5249b8f-bc31-4a99-8023-b99875cb5293-v4-0-config-user-template-error\") pod \"oauth-openshift-7865f45dd7-dnm2m\" (UID: \"c5249b8f-bc31-4a99-8023-b99875cb5293\") " pod="openshift-authentication/oauth-openshift-7865f45dd7-dnm2m" Dec 11 15:56:01 crc kubenswrapper[5050]: I1211 15:56:01.148255 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/c5249b8f-bc31-4a99-8023-b99875cb5293-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7865f45dd7-dnm2m\" (UID: \"c5249b8f-bc31-4a99-8023-b99875cb5293\") " pod="openshift-authentication/oauth-openshift-7865f45dd7-dnm2m" Dec 11 15:56:01 crc kubenswrapper[5050]: I1211 15:56:01.148472 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/c5249b8f-bc31-4a99-8023-b99875cb5293-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7865f45dd7-dnm2m\" (UID: \"c5249b8f-bc31-4a99-8023-b99875cb5293\") " pod="openshift-authentication/oauth-openshift-7865f45dd7-dnm2m" Dec 11 15:56:01 crc kubenswrapper[5050]: I1211 15:56:01.148525 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/c5249b8f-bc31-4a99-8023-b99875cb5293-v4-0-config-system-router-certs\") pod \"oauth-openshift-7865f45dd7-dnm2m\" (UID: \"c5249b8f-bc31-4a99-8023-b99875cb5293\") " pod="openshift-authentication/oauth-openshift-7865f45dd7-dnm2m" Dec 11 15:56:01 crc kubenswrapper[5050]: I1211 15:56:01.148563 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c5249b8f-bc31-4a99-8023-b99875cb5293-audit-dir\") pod \"oauth-openshift-7865f45dd7-dnm2m\" (UID: \"c5249b8f-bc31-4a99-8023-b99875cb5293\") " pod="openshift-authentication/oauth-openshift-7865f45dd7-dnm2m" Dec 11 15:56:01 crc kubenswrapper[5050]: I1211 15:56:01.148591 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/c5249b8f-bc31-4a99-8023-b99875cb5293-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7865f45dd7-dnm2m\" (UID: \"c5249b8f-bc31-4a99-8023-b99875cb5293\") " pod="openshift-authentication/oauth-openshift-7865f45dd7-dnm2m" Dec 11 15:56:01 crc kubenswrapper[5050]: I1211 15:56:01.148615 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/c5249b8f-bc31-4a99-8023-b99875cb5293-v4-0-config-system-session\") pod \"oauth-openshift-7865f45dd7-dnm2m\" (UID: \"c5249b8f-bc31-4a99-8023-b99875cb5293\") " pod="openshift-authentication/oauth-openshift-7865f45dd7-dnm2m" Dec 11 15:56:01 crc kubenswrapper[5050]: I1211 15:56:01.148658 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qb769\" (UniqueName: \"kubernetes.io/projected/c5249b8f-bc31-4a99-8023-b99875cb5293-kube-api-access-qb769\") pod \"oauth-openshift-7865f45dd7-dnm2m\" (UID: \"c5249b8f-bc31-4a99-8023-b99875cb5293\") " pod="openshift-authentication/oauth-openshift-7865f45dd7-dnm2m" Dec 11 15:56:01 crc kubenswrapper[5050]: I1211 15:56:01.156629 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-5fb79d99b5-m4xgd" Dec 11 15:56:01 crc kubenswrapper[5050]: I1211 15:56:01.156691 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-5fb79d99b5-m4xgd" Dec 11 15:56:01 crc kubenswrapper[5050]: I1211 15:56:01.252122 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/c5249b8f-bc31-4a99-8023-b99875cb5293-v4-0-config-user-template-login\") pod \"oauth-openshift-7865f45dd7-dnm2m\" (UID: \"c5249b8f-bc31-4a99-8023-b99875cb5293\") " pod="openshift-authentication/oauth-openshift-7865f45dd7-dnm2m" Dec 11 15:56:01 crc kubenswrapper[5050]: I1211 15:56:01.252244 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/c5249b8f-bc31-4a99-8023-b99875cb5293-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7865f45dd7-dnm2m\" (UID: \"c5249b8f-bc31-4a99-8023-b99875cb5293\") " pod="openshift-authentication/oauth-openshift-7865f45dd7-dnm2m" Dec 11 15:56:01 crc kubenswrapper[5050]: I1211 15:56:01.252358 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c5249b8f-bc31-4a99-8023-b99875cb5293-audit-policies\") pod \"oauth-openshift-7865f45dd7-dnm2m\" (UID: \"c5249b8f-bc31-4a99-8023-b99875cb5293\") " pod="openshift-authentication/oauth-openshift-7865f45dd7-dnm2m" Dec 11 15:56:01 crc kubenswrapper[5050]: I1211 15:56:01.252437 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/c5249b8f-bc31-4a99-8023-b99875cb5293-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7865f45dd7-dnm2m\" (UID: \"c5249b8f-bc31-4a99-8023-b99875cb5293\") " pod="openshift-authentication/oauth-openshift-7865f45dd7-dnm2m" Dec 11 15:56:01 crc kubenswrapper[5050]: I1211 15:56:01.252506 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/c5249b8f-bc31-4a99-8023-b99875cb5293-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7865f45dd7-dnm2m\" (UID: \"c5249b8f-bc31-4a99-8023-b99875cb5293\") " pod="openshift-authentication/oauth-openshift-7865f45dd7-dnm2m" Dec 11 15:56:01 crc kubenswrapper[5050]: I1211 15:56:01.252608 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/c5249b8f-bc31-4a99-8023-b99875cb5293-v4-0-config-system-service-ca\") pod \"oauth-openshift-7865f45dd7-dnm2m\" (UID: \"c5249b8f-bc31-4a99-8023-b99875cb5293\") " pod="openshift-authentication/oauth-openshift-7865f45dd7-dnm2m" Dec 11 15:56:01 crc kubenswrapper[5050]: I1211 15:56:01.252653 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/c5249b8f-bc31-4a99-8023-b99875cb5293-v4-0-config-user-template-error\") pod \"oauth-openshift-7865f45dd7-dnm2m\" (UID: \"c5249b8f-bc31-4a99-8023-b99875cb5293\") " pod="openshift-authentication/oauth-openshift-7865f45dd7-dnm2m" Dec 11 15:56:01 crc kubenswrapper[5050]: I1211 15:56:01.252686 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/c5249b8f-bc31-4a99-8023-b99875cb5293-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7865f45dd7-dnm2m\" (UID: \"c5249b8f-bc31-4a99-8023-b99875cb5293\") " pod="openshift-authentication/oauth-openshift-7865f45dd7-dnm2m" Dec 11 15:56:01 crc kubenswrapper[5050]: I1211 15:56:01.252759 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c5249b8f-bc31-4a99-8023-b99875cb5293-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7865f45dd7-dnm2m\" (UID: \"c5249b8f-bc31-4a99-8023-b99875cb5293\") " pod="openshift-authentication/oauth-openshift-7865f45dd7-dnm2m" Dec 11 15:56:01 crc kubenswrapper[5050]: I1211 15:56:01.252820 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" 
(UniqueName: \"kubernetes.io/secret/c5249b8f-bc31-4a99-8023-b99875cb5293-v4-0-config-system-router-certs\") pod \"oauth-openshift-7865f45dd7-dnm2m\" (UID: \"c5249b8f-bc31-4a99-8023-b99875cb5293\") " pod="openshift-authentication/oauth-openshift-7865f45dd7-dnm2m" Dec 11 15:56:01 crc kubenswrapper[5050]: I1211 15:56:01.252922 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c5249b8f-bc31-4a99-8023-b99875cb5293-audit-dir\") pod \"oauth-openshift-7865f45dd7-dnm2m\" (UID: \"c5249b8f-bc31-4a99-8023-b99875cb5293\") " pod="openshift-authentication/oauth-openshift-7865f45dd7-dnm2m" Dec 11 15:56:01 crc kubenswrapper[5050]: I1211 15:56:01.252971 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/c5249b8f-bc31-4a99-8023-b99875cb5293-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7865f45dd7-dnm2m\" (UID: \"c5249b8f-bc31-4a99-8023-b99875cb5293\") " pod="openshift-authentication/oauth-openshift-7865f45dd7-dnm2m" Dec 11 15:56:01 crc kubenswrapper[5050]: I1211 15:56:01.253049 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/c5249b8f-bc31-4a99-8023-b99875cb5293-v4-0-config-system-session\") pod \"oauth-openshift-7865f45dd7-dnm2m\" (UID: \"c5249b8f-bc31-4a99-8023-b99875cb5293\") " pod="openshift-authentication/oauth-openshift-7865f45dd7-dnm2m" Dec 11 15:56:01 crc kubenswrapper[5050]: I1211 15:56:01.253069 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c5249b8f-bc31-4a99-8023-b99875cb5293-audit-dir\") pod \"oauth-openshift-7865f45dd7-dnm2m\" (UID: \"c5249b8f-bc31-4a99-8023-b99875cb5293\") " pod="openshift-authentication/oauth-openshift-7865f45dd7-dnm2m" Dec 11 15:56:01 crc kubenswrapper[5050]: I1211 15:56:01.253662 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/c5249b8f-bc31-4a99-8023-b99875cb5293-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7865f45dd7-dnm2m\" (UID: \"c5249b8f-bc31-4a99-8023-b99875cb5293\") " pod="openshift-authentication/oauth-openshift-7865f45dd7-dnm2m" Dec 11 15:56:01 crc kubenswrapper[5050]: I1211 15:56:01.253772 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/c5249b8f-bc31-4a99-8023-b99875cb5293-v4-0-config-system-service-ca\") pod \"oauth-openshift-7865f45dd7-dnm2m\" (UID: \"c5249b8f-bc31-4a99-8023-b99875cb5293\") " pod="openshift-authentication/oauth-openshift-7865f45dd7-dnm2m" Dec 11 15:56:01 crc kubenswrapper[5050]: I1211 15:56:01.254024 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qb769\" (UniqueName: \"kubernetes.io/projected/c5249b8f-bc31-4a99-8023-b99875cb5293-kube-api-access-qb769\") pod \"oauth-openshift-7865f45dd7-dnm2m\" (UID: \"c5249b8f-bc31-4a99-8023-b99875cb5293\") " pod="openshift-authentication/oauth-openshift-7865f45dd7-dnm2m" Dec 11 15:56:01 crc kubenswrapper[5050]: I1211 15:56:01.254188 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c5249b8f-bc31-4a99-8023-b99875cb5293-audit-policies\") pod \"oauth-openshift-7865f45dd7-dnm2m\" (UID: 
\"c5249b8f-bc31-4a99-8023-b99875cb5293\") " pod="openshift-authentication/oauth-openshift-7865f45dd7-dnm2m" Dec 11 15:56:01 crc kubenswrapper[5050]: I1211 15:56:01.254730 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c5249b8f-bc31-4a99-8023-b99875cb5293-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7865f45dd7-dnm2m\" (UID: \"c5249b8f-bc31-4a99-8023-b99875cb5293\") " pod="openshift-authentication/oauth-openshift-7865f45dd7-dnm2m" Dec 11 15:56:01 crc kubenswrapper[5050]: I1211 15:56:01.257835 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/c5249b8f-bc31-4a99-8023-b99875cb5293-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7865f45dd7-dnm2m\" (UID: \"c5249b8f-bc31-4a99-8023-b99875cb5293\") " pod="openshift-authentication/oauth-openshift-7865f45dd7-dnm2m" Dec 11 15:56:01 crc kubenswrapper[5050]: I1211 15:56:01.257878 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/c5249b8f-bc31-4a99-8023-b99875cb5293-v4-0-config-system-session\") pod \"oauth-openshift-7865f45dd7-dnm2m\" (UID: \"c5249b8f-bc31-4a99-8023-b99875cb5293\") " pod="openshift-authentication/oauth-openshift-7865f45dd7-dnm2m" Dec 11 15:56:01 crc kubenswrapper[5050]: I1211 15:56:01.258291 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/c5249b8f-bc31-4a99-8023-b99875cb5293-v4-0-config-user-template-login\") pod \"oauth-openshift-7865f45dd7-dnm2m\" (UID: \"c5249b8f-bc31-4a99-8023-b99875cb5293\") " pod="openshift-authentication/oauth-openshift-7865f45dd7-dnm2m" Dec 11 15:56:01 crc kubenswrapper[5050]: I1211 15:56:01.258425 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/c5249b8f-bc31-4a99-8023-b99875cb5293-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7865f45dd7-dnm2m\" (UID: \"c5249b8f-bc31-4a99-8023-b99875cb5293\") " pod="openshift-authentication/oauth-openshift-7865f45dd7-dnm2m" Dec 11 15:56:01 crc kubenswrapper[5050]: I1211 15:56:01.258671 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/c5249b8f-bc31-4a99-8023-b99875cb5293-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7865f45dd7-dnm2m\" (UID: \"c5249b8f-bc31-4a99-8023-b99875cb5293\") " pod="openshift-authentication/oauth-openshift-7865f45dd7-dnm2m" Dec 11 15:56:01 crc kubenswrapper[5050]: I1211 15:56:01.258923 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/c5249b8f-bc31-4a99-8023-b99875cb5293-v4-0-config-user-template-error\") pod \"oauth-openshift-7865f45dd7-dnm2m\" (UID: \"c5249b8f-bc31-4a99-8023-b99875cb5293\") " pod="openshift-authentication/oauth-openshift-7865f45dd7-dnm2m" Dec 11 15:56:01 crc kubenswrapper[5050]: I1211 15:56:01.259028 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/c5249b8f-bc31-4a99-8023-b99875cb5293-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7865f45dd7-dnm2m\" (UID: 
\"c5249b8f-bc31-4a99-8023-b99875cb5293\") " pod="openshift-authentication/oauth-openshift-7865f45dd7-dnm2m" Dec 11 15:56:01 crc kubenswrapper[5050]: I1211 15:56:01.271405 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/c5249b8f-bc31-4a99-8023-b99875cb5293-v4-0-config-system-router-certs\") pod \"oauth-openshift-7865f45dd7-dnm2m\" (UID: \"c5249b8f-bc31-4a99-8023-b99875cb5293\") " pod="openshift-authentication/oauth-openshift-7865f45dd7-dnm2m" Dec 11 15:56:01 crc kubenswrapper[5050]: I1211 15:56:01.272301 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qb769\" (UniqueName: \"kubernetes.io/projected/c5249b8f-bc31-4a99-8023-b99875cb5293-kube-api-access-qb769\") pod \"oauth-openshift-7865f45dd7-dnm2m\" (UID: \"c5249b8f-bc31-4a99-8023-b99875cb5293\") " pod="openshift-authentication/oauth-openshift-7865f45dd7-dnm2m" Dec 11 15:56:01 crc kubenswrapper[5050]: I1211 15:56:01.399152 5050 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-7865f45dd7-dnm2m" Dec 11 15:56:01 crc kubenswrapper[5050]: I1211 15:56:01.561929 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="effe4522-49a7-4d34-b64d-6ab0012f5548" path="/var/lib/kubelet/pods/effe4522-49a7-4d34-b64d-6ab0012f5548/volumes" Dec 11 15:56:01 crc kubenswrapper[5050]: I1211 15:56:01.728866 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-2t28t_must-gather-p95dx_d5818b40-e580-431f-b892-075abf64ef47/copy/0.log" Dec 11 15:56:01 crc kubenswrapper[5050]: I1211 15:56:01.729678 5050 generic.go:334] "Generic (PLEG): container finished" podID="d5818b40-e580-431f-b892-075abf64ef47" containerID="fd073c48ba38a0d3ab62843fe9687fdddc0a98723ef38ea922d55ec448668065" exitCode=143 Dec 11 15:56:01 crc kubenswrapper[5050]: I1211 15:56:01.945952 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-7865f45dd7-dnm2m"] Dec 11 15:56:01 crc kubenswrapper[5050]: W1211 15:56:01.952168 5050 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc5249b8f_bc31_4a99_8023_b99875cb5293.slice/crio-8aa4435f80f0e0eac66ec5c854db974d2a1d4f5579111d100f95b8bb94a6b9e9 WatchSource:0}: Error finding container 8aa4435f80f0e0eac66ec5c854db974d2a1d4f5579111d100f95b8bb94a6b9e9: Status 404 returned error can't find the container with id 8aa4435f80f0e0eac66ec5c854db974d2a1d4f5579111d100f95b8bb94a6b9e9 Dec 11 15:56:02 crc kubenswrapper[5050]: I1211 15:56:02.342917 5050 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-backup-0" podUID="0f954250-5982-4088-839a-8faf7bfe203c" containerName="cinder-backup" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 11 15:56:02 crc kubenswrapper[5050]: I1211 15:56:02.756077 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7865f45dd7-dnm2m" event={"ID":"c5249b8f-bc31-4a99-8023-b99875cb5293","Type":"ContainerStarted","Data":"a56e6b5019483448349b67b7016e347b86bb4a5d8e88bc8f97a55a106218a0af"} Dec 11 15:56:02 crc kubenswrapper[5050]: I1211 15:56:02.757371 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7865f45dd7-dnm2m" 
event={"ID":"c5249b8f-bc31-4a99-8023-b99875cb5293","Type":"ContainerStarted","Data":"8aa4435f80f0e0eac66ec5c854db974d2a1d4f5579111d100f95b8bb94a6b9e9"} Dec 11 15:56:02 crc kubenswrapper[5050]: I1211 15:56:02.759085 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-7865f45dd7-dnm2m" Dec 11 15:56:02 crc kubenswrapper[5050]: I1211 15:56:02.765169 5050 patch_prober.go:28] interesting pod/oauth-openshift-7865f45dd7-dnm2m container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.1.168:6443/healthz\": dial tcp 10.217.1.168:6443: connect: connection refused" start-of-body= Dec 11 15:56:02 crc kubenswrapper[5050]: I1211 15:56:02.765234 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-7865f45dd7-dnm2m" podUID="c5249b8f-bc31-4a99-8023-b99875cb5293" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.1.168:6443/healthz\": dial tcp 10.217.1.168:6443: connect: connection refused" Dec 11 15:56:02 crc kubenswrapper[5050]: I1211 15:56:02.790440 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-7865f45dd7-dnm2m" podStartSLOduration=29.790421155 podStartE2EDuration="29.790421155s" podCreationTimestamp="2025-12-11 15:55:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 15:56:02.788489284 +0000 UTC m=+7653.632211880" watchObservedRunningTime="2025-12-11 15:56:02.790421155 +0000 UTC m=+7653.634143741" Dec 11 15:56:03 crc kubenswrapper[5050]: I1211 15:56:03.546204 5050 scope.go:117] "RemoveContainer" containerID="33899e24adacef82929b0fbbcd58af7cb4f7aa3935b49ec9b5b4045a51da1d14" Dec 11 15:56:03 crc kubenswrapper[5050]: E1211 15:56:03.546824 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 15:56:03 crc kubenswrapper[5050]: I1211 15:56:03.768649 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-2t28t_must-gather-p95dx_d5818b40-e580-431f-b892-075abf64ef47/copy/0.log" Dec 11 15:56:03 crc kubenswrapper[5050]: I1211 15:56:03.769068 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-2t28t_must-gather-p95dx_d5818b40-e580-431f-b892-075abf64ef47/gather/0.log" Dec 11 15:56:03 crc kubenswrapper[5050]: I1211 15:56:03.769111 5050 generic.go:334] "Generic (PLEG): container finished" podID="d5818b40-e580-431f-b892-075abf64ef47" containerID="1ecfe23058e27617c1487799d3ab207d087a7379653379af811ce5ab55debdbc" exitCode=137 Dec 11 15:56:03 crc kubenswrapper[5050]: I1211 15:56:03.773742 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-7865f45dd7-dnm2m" Dec 11 15:56:04 crc kubenswrapper[5050]: I1211 15:56:04.426416 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-2t28t_must-gather-p95dx_d5818b40-e580-431f-b892-075abf64ef47/copy/0.log" Dec 11 15:56:04 crc kubenswrapper[5050]: I1211 15:56:04.426991 5050 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-2t28t_must-gather-p95dx_d5818b40-e580-431f-b892-075abf64ef47/gather/0.log" Dec 11 15:56:04 crc kubenswrapper[5050]: I1211 15:56:04.427063 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-2t28t/must-gather-p95dx" Dec 11 15:56:04 crc kubenswrapper[5050]: I1211 15:56:04.547820 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/d5818b40-e580-431f-b892-075abf64ef47-must-gather-output\") pod \"d5818b40-e580-431f-b892-075abf64ef47\" (UID: \"d5818b40-e580-431f-b892-075abf64ef47\") " Dec 11 15:56:04 crc kubenswrapper[5050]: I1211 15:56:04.548339 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d5818b40-e580-431f-b892-075abf64ef47-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "d5818b40-e580-431f-b892-075abf64ef47" (UID: "d5818b40-e580-431f-b892-075abf64ef47"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 15:56:04 crc kubenswrapper[5050]: I1211 15:56:04.548462 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nv5wr\" (UniqueName: \"kubernetes.io/projected/d5818b40-e580-431f-b892-075abf64ef47-kube-api-access-nv5wr\") pod \"d5818b40-e580-431f-b892-075abf64ef47\" (UID: \"d5818b40-e580-431f-b892-075abf64ef47\") " Dec 11 15:56:04 crc kubenswrapper[5050]: I1211 15:56:04.549473 5050 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/d5818b40-e580-431f-b892-075abf64ef47-must-gather-output\") on node \"crc\" DevicePath \"\"" Dec 11 15:56:04 crc kubenswrapper[5050]: I1211 15:56:04.554237 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5818b40-e580-431f-b892-075abf64ef47-kube-api-access-nv5wr" (OuterVolumeSpecName: "kube-api-access-nv5wr") pod "d5818b40-e580-431f-b892-075abf64ef47" (UID: "d5818b40-e580-431f-b892-075abf64ef47"). InnerVolumeSpecName "kube-api-access-nv5wr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 15:56:04 crc kubenswrapper[5050]: I1211 15:56:04.651697 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nv5wr\" (UniqueName: \"kubernetes.io/projected/d5818b40-e580-431f-b892-075abf64ef47-kube-api-access-nv5wr\") on node \"crc\" DevicePath \"\"" Dec 11 15:56:04 crc kubenswrapper[5050]: I1211 15:56:04.779976 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-2t28t_must-gather-p95dx_d5818b40-e580-431f-b892-075abf64ef47/copy/0.log" Dec 11 15:56:04 crc kubenswrapper[5050]: I1211 15:56:04.780516 5050 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-2t28t_must-gather-p95dx_d5818b40-e580-431f-b892-075abf64ef47/gather/0.log" Dec 11 15:56:04 crc kubenswrapper[5050]: I1211 15:56:04.780636 5050 scope.go:117] "RemoveContainer" containerID="fd073c48ba38a0d3ab62843fe9687fdddc0a98723ef38ea922d55ec448668065" Dec 11 15:56:04 crc kubenswrapper[5050]: I1211 15:56:04.780659 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-2t28t/must-gather-p95dx" Dec 11 15:56:04 crc kubenswrapper[5050]: I1211 15:56:04.822209 5050 scope.go:117] "RemoveContainer" containerID="1ecfe23058e27617c1487799d3ab207d087a7379653379af811ce5ab55debdbc" Dec 11 15:56:05 crc kubenswrapper[5050]: I1211 15:56:05.208946 5050 patch_prober.go:28] interesting pod/apiserver-76f77b778f-cd66n container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body= Dec 11 15:56:05 crc kubenswrapper[5050]: I1211 15:56:05.209079 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-76f77b778f-cd66n" podUID="ea884e88-c0df-4212-976a-0d7ce1731fdc" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.9:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.217.0.9:8443: connect: connection refused" Dec 11 15:56:05 crc kubenswrapper[5050]: I1211 15:56:05.561509 5050 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d5818b40-e580-431f-b892-075abf64ef47" path="/var/lib/kubelet/pods/d5818b40-e580-431f-b892-075abf64ef47/volumes" Dec 11 15:56:07 crc kubenswrapper[5050]: I1211 15:56:07.329501 5050 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-backup-0" podUID="0f954250-5982-4088-839a-8faf7bfe203c" containerName="cinder-backup" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 11 15:56:08 crc kubenswrapper[5050]: I1211 15:56:08.008139 5050 scope.go:117] "RemoveContainer" containerID="9ec87625c63bb19608c5fab2a9d7e38a82ac0120b7f7afb4ed2fe01cd46ecae8" Dec 11 15:56:10 crc kubenswrapper[5050]: I1211 15:56:10.209512 5050 patch_prober.go:28] interesting pod/apiserver-76f77b778f-cd66n container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body= Dec 11 15:56:10 crc kubenswrapper[5050]: I1211 15:56:10.210113 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-76f77b778f-cd66n" podUID="ea884e88-c0df-4212-976a-0d7ce1731fdc" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.9:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.217.0.9:8443: connect: connection refused" Dec 11 15:56:11 crc kubenswrapper[5050]: I1211 15:56:11.159798 5050 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5fb79d99b5-m4xgd" podUID="2086bc41-00a2-4c97-a491-08511f3ed6e5" containerName="horizon" probeResult="failure" output="Get \"http://10.217.1.117:8080/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.117:8080: connect: connection refused" Dec 11 15:56:11 crc kubenswrapper[5050]: I1211 15:56:11.991914 5050 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/manila-share-share1-0" podUID="29321ad8-528b-46ed-8c14-21a74038cddb" containerName="manila-share" probeResult="failure" output="Get \"http://10.217.1.146:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:56:11 crc kubenswrapper[5050]: I1211 15:56:11.991935 5050 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/manila-scheduler-0" podUID="9c5fd2fd-4df8-4f0f-982c-d3e6df852669" 
containerName="manila-scheduler" probeResult="failure" output="Get \"http://10.217.1.145:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:56:12 crc kubenswrapper[5050]: I1211 15:56:12.325283 5050 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-backup-0" podUID="0f954250-5982-4088-839a-8faf7bfe203c" containerName="cinder-backup" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 11 15:56:15 crc kubenswrapper[5050]: I1211 15:56:15.209227 5050 patch_prober.go:28] interesting pod/apiserver-76f77b778f-cd66n container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body= Dec 11 15:56:15 crc kubenswrapper[5050]: I1211 15:56:15.209535 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-76f77b778f-cd66n" podUID="ea884e88-c0df-4212-976a-0d7ce1731fdc" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.9:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.217.0.9:8443: connect: connection refused" Dec 11 15:56:16 crc kubenswrapper[5050]: I1211 15:56:16.927512 5050 generic.go:334] "Generic (PLEG): container finished" podID="b94bc94c-a636-4f0d-bd28-0f347e7b1143" containerID="1775318ccb2e6444fd0e7771ecf1272cb1579807c16cb2b926ad61c7397affe6" exitCode=137 Dec 11 15:56:16 crc kubenswrapper[5050]: I1211 15:56:16.927566 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"b94bc94c-a636-4f0d-bd28-0f347e7b1143","Type":"ContainerDied","Data":"1775318ccb2e6444fd0e7771ecf1272cb1579807c16cb2b926ad61c7397affe6"} Dec 11 15:56:16 crc kubenswrapper[5050]: I1211 15:56:16.928092 5050 scope.go:117] "RemoveContainer" containerID="be93e909105a092ced14aca5a09c48a140e87c3ee935eba54b3b801f908c1e84" Dec 11 15:56:17 crc kubenswrapper[5050]: I1211 15:56:17.339072 5050 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-backup-0" podUID="0f954250-5982-4088-839a-8faf7bfe203c" containerName="cinder-backup" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 11 15:56:20 crc kubenswrapper[5050]: I1211 15:56:17.942080 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"b94bc94c-a636-4f0d-bd28-0f347e7b1143","Type":"ContainerStarted","Data":"d3f13f9ffa428ec252a0d2333ff705315311e630584427418d09814fc0cca672"} Dec 11 15:56:20 crc kubenswrapper[5050]: I1211 15:56:18.546958 5050 scope.go:117] "RemoveContainer" containerID="33899e24adacef82929b0fbbcd58af7cb4f7aa3935b49ec9b5b4045a51da1d14" Dec 11 15:56:20 crc kubenswrapper[5050]: E1211 15:56:18.547278 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 15:56:20 crc kubenswrapper[5050]: I1211 15:56:18.955483 5050 generic.go:334] "Generic (PLEG): container finished" podID="131d56da-b770-4452-97c9-b585434da431" containerID="0918c12ef77e4a03c8b33e9194ab473c7649b3f481ba8e874ade29222d0939ed" exitCode=137 Dec 11 
15:56:20 crc kubenswrapper[5050]: I1211 15:56:18.956289 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"131d56da-b770-4452-97c9-b585434da431","Type":"ContainerDied","Data":"0918c12ef77e4a03c8b33e9194ab473c7649b3f481ba8e874ade29222d0939ed"} Dec 11 15:56:20 crc kubenswrapper[5050]: I1211 15:56:18.956359 5050 scope.go:117] "RemoveContainer" containerID="d3e1fc093c150e4e423d3557b3ec8c637526707132012ee62f3903c848934b44" Dec 11 15:56:20 crc kubenswrapper[5050]: I1211 15:56:20.209508 5050 patch_prober.go:28] interesting pod/apiserver-76f77b778f-cd66n container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body= Dec 11 15:56:20 crc kubenswrapper[5050]: I1211 15:56:20.209844 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-76f77b778f-cd66n" podUID="ea884e88-c0df-4212-976a-0d7ce1731fdc" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.9:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.217.0.9:8443: connect: connection refused" Dec 11 15:56:21 crc kubenswrapper[5050]: I1211 15:56:21.005315 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"131d56da-b770-4452-97c9-b585434da431","Type":"ContainerStarted","Data":"cb7a7bf4b1e853e57f073df710e0dfb0178a32389f4ce0ad8e51c04f1d4dc05b"} Dec 11 15:56:21 crc kubenswrapper[5050]: I1211 15:56:21.716382 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-volume-volume1-0" Dec 11 15:56:22 crc kubenswrapper[5050]: I1211 15:56:22.074247 5050 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/manila-scheduler-0" podUID="9c5fd2fd-4df8-4f0f-982c-d3e6df852669" containerName="manila-scheduler" probeResult="failure" output="Get \"http://10.217.1.145:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:56:22 crc kubenswrapper[5050]: I1211 15:56:22.074499 5050 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/manila-share-share1-0" podUID="29321ad8-528b-46ed-8c14-21a74038cddb" containerName="manila-share" probeResult="failure" output="Get \"http://10.217.1.146:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:56:22 crc kubenswrapper[5050]: I1211 15:56:22.338426 5050 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-backup-0" podUID="0f954250-5982-4088-839a-8faf7bfe203c" containerName="cinder-backup" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 11 15:56:22 crc kubenswrapper[5050]: I1211 15:56:22.988853 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Dec 11 15:56:24 crc kubenswrapper[5050]: I1211 15:56:24.105541 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-5fb79d99b5-m4xgd" Dec 11 15:56:25 crc kubenswrapper[5050]: I1211 15:56:25.209499 5050 patch_prober.go:28] interesting pod/apiserver-76f77b778f-cd66n container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body= Dec 11 15:56:25 crc 
kubenswrapper[5050]: I1211 15:56:25.209608 5050 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-76f77b778f-cd66n" podUID="ea884e88-c0df-4212-976a-0d7ce1731fdc" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.9:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.217.0.9:8443: connect: connection refused" Dec 11 15:56:25 crc kubenswrapper[5050]: I1211 15:56:25.372646 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Dec 11 15:56:25 crc kubenswrapper[5050]: I1211 15:56:25.485717 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Dec 11 15:56:25 crc kubenswrapper[5050]: I1211 15:56:25.820215 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-5fb79d99b5-m4xgd" Dec 11 15:56:26 crc kubenswrapper[5050]: I1211 15:56:26.742189 5050 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-volume-volume1-0" podUID="b94bc94c-a636-4f0d-bd28-0f347e7b1143" containerName="cinder-volume" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 11 15:56:27 crc kubenswrapper[5050]: I1211 15:56:27.340707 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-backup-0" Dec 11 15:56:28 crc kubenswrapper[5050]: I1211 15:56:28.018561 5050 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="131d56da-b770-4452-97c9-b585434da431" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 11 15:56:28 crc kubenswrapper[5050]: I1211 15:56:28.077212 5050 generic.go:334] "Generic (PLEG): container finished" podID="ea884e88-c0df-4212-976a-0d7ce1731fdc" containerID="870ab51995eb595a4e2d4b1790de952cb62e6bd39b249e1ed5b3a9e636607508" exitCode=0 Dec 11 15:56:28 crc kubenswrapper[5050]: I1211 15:56:28.077270 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-cd66n" event={"ID":"ea884e88-c0df-4212-976a-0d7ce1731fdc","Type":"ContainerDied","Data":"870ab51995eb595a4e2d4b1790de952cb62e6bd39b249e1ed5b3a9e636607508"} Dec 11 15:56:29 crc kubenswrapper[5050]: I1211 15:56:29.090215 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-cd66n" event={"ID":"ea884e88-c0df-4212-976a-0d7ce1731fdc","Type":"ContainerStarted","Data":"cab528af328f21ac274dd3a7ff7706a61a5935b355521c7386504ca8e973d7e4"} Dec 11 15:56:29 crc kubenswrapper[5050]: I1211 15:56:29.555892 5050 scope.go:117] "RemoveContainer" containerID="33899e24adacef82929b0fbbcd58af7cb4f7aa3935b49ec9b5b4045a51da1d14" Dec 11 15:56:29 crc kubenswrapper[5050]: E1211 15:56:29.556474 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 15:56:30 crc kubenswrapper[5050]: I1211 15:56:30.208665 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-cd66n" Dec 11 15:56:30 crc kubenswrapper[5050]: I1211 15:56:30.209172 5050 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-cd66n" Dec 11 15:56:30 crc kubenswrapper[5050]: I1211 15:56:30.213224 5050 patch_prober.go:28] interesting pod/apiserver-76f77b778f-cd66n container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Dec 11 15:56:30 crc kubenswrapper[5050]: [+]log ok Dec 11 15:56:30 crc kubenswrapper[5050]: [+]etcd ok Dec 11 15:56:30 crc kubenswrapper[5050]: [+]poststarthook/start-apiserver-admission-initializer ok Dec 11 15:56:30 crc kubenswrapper[5050]: [+]poststarthook/generic-apiserver-start-informers ok Dec 11 15:56:30 crc kubenswrapper[5050]: [+]poststarthook/max-in-flight-filter ok Dec 11 15:56:30 crc kubenswrapper[5050]: [+]poststarthook/storage-object-count-tracker-hook ok Dec 11 15:56:30 crc kubenswrapper[5050]: [+]poststarthook/image.openshift.io-apiserver-caches ok Dec 11 15:56:30 crc kubenswrapper[5050]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Dec 11 15:56:30 crc kubenswrapper[5050]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Dec 11 15:56:30 crc kubenswrapper[5050]: [+]poststarthook/project.openshift.io-projectcache ok Dec 11 15:56:30 crc kubenswrapper[5050]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Dec 11 15:56:30 crc kubenswrapper[5050]: [+]poststarthook/openshift.io-startinformers ok Dec 11 15:56:30 crc kubenswrapper[5050]: [+]poststarthook/openshift.io-restmapperupdater ok Dec 11 15:56:30 crc kubenswrapper[5050]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Dec 11 15:56:30 crc kubenswrapper[5050]: livez check failed Dec 11 15:56:30 crc kubenswrapper[5050]: I1211 15:56:30.213268 5050 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-cd66n" podUID="ea884e88-c0df-4212-976a-0d7ce1731fdc" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 11 15:56:31 crc kubenswrapper[5050]: I1211 15:56:31.722142 5050 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-volume-volume1-0" podUID="b94bc94c-a636-4f0d-bd28-0f347e7b1143" containerName="cinder-volume" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 11 15:56:32 crc kubenswrapper[5050]: I1211 15:56:32.156539 5050 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/manila-share-share1-0" podUID="29321ad8-528b-46ed-8c14-21a74038cddb" containerName="manila-share" probeResult="failure" output="Get \"http://10.217.1.146:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:56:32 crc kubenswrapper[5050]: I1211 15:56:32.157095 5050 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/manila-scheduler-0" podUID="9c5fd2fd-4df8-4f0f-982c-d3e6df852669" containerName="manila-scheduler" probeResult="failure" output="Get \"http://10.217.1.145:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 15:56:32 crc kubenswrapper[5050]: I1211 15:56:32.994778 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Dec 11 15:56:35 crc kubenswrapper[5050]: I1211 15:56:35.216272 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-cd66n" Dec 11 15:56:35 crc kubenswrapper[5050]: I1211 
15:56:35.221453 5050 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-cd66n" Dec 11 15:56:36 crc kubenswrapper[5050]: I1211 15:56:36.722979 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-volume-volume1-0" Dec 11 15:56:38 crc kubenswrapper[5050]: I1211 15:56:38.673172 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-scheduler-0" Dec 11 15:56:39 crc kubenswrapper[5050]: I1211 15:56:39.020840 5050 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-share-share1-0" Dec 11 15:56:41 crc kubenswrapper[5050]: I1211 15:56:41.546656 5050 scope.go:117] "RemoveContainer" containerID="33899e24adacef82929b0fbbcd58af7cb4f7aa3935b49ec9b5b4045a51da1d14" Dec 11 15:56:41 crc kubenswrapper[5050]: E1211 15:56:41.547649 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 15:56:43 crc kubenswrapper[5050]: I1211 15:56:43.258020 5050 generic.go:334] "Generic (PLEG): container finished" podID="bc4ff2ba-0977-4a04-a61c-e2af6e1a1bbe" containerID="1dfa27c6272f96426c1492974f98bcd24d719a1317e98c920646c427582341dc" exitCode=0 Dec 11 15:56:43 crc kubenswrapper[5050]: I1211 15:56:43.258147 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-openstack-openstack-cell1-z5mzj" event={"ID":"bc4ff2ba-0977-4a04-a61c-e2af6e1a1bbe","Type":"ContainerDied","Data":"1dfa27c6272f96426c1492974f98bcd24d719a1317e98c920646c427582341dc"} Dec 11 15:56:44 crc kubenswrapper[5050]: I1211 15:56:44.792319 5050 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-openstack-openstack-cell1-z5mzj" Dec 11 15:56:44 crc kubenswrapper[5050]: I1211 15:56:44.941766 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/bc4ff2ba-0977-4a04-a61c-e2af6e1a1bbe-ssh-key\") pod \"bc4ff2ba-0977-4a04-a61c-e2af6e1a1bbe\" (UID: \"bc4ff2ba-0977-4a04-a61c-e2af6e1a1bbe\") " Dec 11 15:56:44 crc kubenswrapper[5050]: I1211 15:56:44.942519 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8ldnb\" (UniqueName: \"kubernetes.io/projected/bc4ff2ba-0977-4a04-a61c-e2af6e1a1bbe-kube-api-access-8ldnb\") pod \"bc4ff2ba-0977-4a04-a61c-e2af6e1a1bbe\" (UID: \"bc4ff2ba-0977-4a04-a61c-e2af6e1a1bbe\") " Dec 11 15:56:44 crc kubenswrapper[5050]: I1211 15:56:44.942826 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/bc4ff2ba-0977-4a04-a61c-e2af6e1a1bbe-ceph\") pod \"bc4ff2ba-0977-4a04-a61c-e2af6e1a1bbe\" (UID: \"bc4ff2ba-0977-4a04-a61c-e2af6e1a1bbe\") " Dec 11 15:56:44 crc kubenswrapper[5050]: I1211 15:56:44.943040 5050 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bc4ff2ba-0977-4a04-a61c-e2af6e1a1bbe-inventory\") pod \"bc4ff2ba-0977-4a04-a61c-e2af6e1a1bbe\" (UID: \"bc4ff2ba-0977-4a04-a61c-e2af6e1a1bbe\") " Dec 11 15:56:44 crc kubenswrapper[5050]: I1211 15:56:44.950595 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc4ff2ba-0977-4a04-a61c-e2af6e1a1bbe-ceph" (OuterVolumeSpecName: "ceph") pod "bc4ff2ba-0977-4a04-a61c-e2af6e1a1bbe" (UID: "bc4ff2ba-0977-4a04-a61c-e2af6e1a1bbe"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 15:56:44 crc kubenswrapper[5050]: I1211 15:56:44.950727 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc4ff2ba-0977-4a04-a61c-e2af6e1a1bbe-kube-api-access-8ldnb" (OuterVolumeSpecName: "kube-api-access-8ldnb") pod "bc4ff2ba-0977-4a04-a61c-e2af6e1a1bbe" (UID: "bc4ff2ba-0977-4a04-a61c-e2af6e1a1bbe"). InnerVolumeSpecName "kube-api-access-8ldnb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 15:56:44 crc kubenswrapper[5050]: I1211 15:56:44.978470 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc4ff2ba-0977-4a04-a61c-e2af6e1a1bbe-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "bc4ff2ba-0977-4a04-a61c-e2af6e1a1bbe" (UID: "bc4ff2ba-0977-4a04-a61c-e2af6e1a1bbe"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 15:56:44 crc kubenswrapper[5050]: I1211 15:56:44.980447 5050 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc4ff2ba-0977-4a04-a61c-e2af6e1a1bbe-inventory" (OuterVolumeSpecName: "inventory") pod "bc4ff2ba-0977-4a04-a61c-e2af6e1a1bbe" (UID: "bc4ff2ba-0977-4a04-a61c-e2af6e1a1bbe"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 15:56:45 crc kubenswrapper[5050]: I1211 15:56:45.046192 5050 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bc4ff2ba-0977-4a04-a61c-e2af6e1a1bbe-inventory\") on node \"crc\" DevicePath \"\"" Dec 11 15:56:45 crc kubenswrapper[5050]: I1211 15:56:45.046234 5050 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/bc4ff2ba-0977-4a04-a61c-e2af6e1a1bbe-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 11 15:56:45 crc kubenswrapper[5050]: I1211 15:56:45.046250 5050 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8ldnb\" (UniqueName: \"kubernetes.io/projected/bc4ff2ba-0977-4a04-a61c-e2af6e1a1bbe-kube-api-access-8ldnb\") on node \"crc\" DevicePath \"\"" Dec 11 15:56:45 crc kubenswrapper[5050]: I1211 15:56:45.046261 5050 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/bc4ff2ba-0977-4a04-a61c-e2af6e1a1bbe-ceph\") on node \"crc\" DevicePath \"\"" Dec 11 15:56:45 crc kubenswrapper[5050]: I1211 15:56:45.278326 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-openstack-openstack-cell1-z5mzj" event={"ID":"bc4ff2ba-0977-4a04-a61c-e2af6e1a1bbe","Type":"ContainerDied","Data":"d5a859e15ad933abf15794ed09542e6bf0f9845d0e9214845a78fdb5376bb220"} Dec 11 15:56:45 crc kubenswrapper[5050]: I1211 15:56:45.278361 5050 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d5a859e15ad933abf15794ed09542e6bf0f9845d0e9214845a78fdb5376bb220" Dec 11 15:56:45 crc kubenswrapper[5050]: I1211 15:56:45.278414 5050 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-openstack-openstack-cell1-z5mzj" Dec 11 15:56:45 crc kubenswrapper[5050]: I1211 15:56:45.371463 5050 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-openstack-openstack-cell1-7qxkt"] Dec 11 15:56:45 crc kubenswrapper[5050]: E1211 15:56:45.372371 5050 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc4ff2ba-0977-4a04-a61c-e2af6e1a1bbe" containerName="install-os-openstack-openstack-cell1" Dec 11 15:56:45 crc kubenswrapper[5050]: I1211 15:56:45.372394 5050 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc4ff2ba-0977-4a04-a61c-e2af6e1a1bbe" containerName="install-os-openstack-openstack-cell1" Dec 11 15:56:45 crc kubenswrapper[5050]: I1211 15:56:45.372738 5050 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc4ff2ba-0977-4a04-a61c-e2af6e1a1bbe" containerName="install-os-openstack-openstack-cell1" Dec 11 15:56:45 crc kubenswrapper[5050]: I1211 15:56:45.374004 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-openstack-openstack-cell1-7qxkt" Dec 11 15:56:45 crc kubenswrapper[5050]: I1211 15:56:45.376684 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-cell1" Dec 11 15:56:45 crc kubenswrapper[5050]: I1211 15:56:45.376797 5050 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 11 15:56:45 crc kubenswrapper[5050]: I1211 15:56:45.376810 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-adoption-secret" Dec 11 15:56:45 crc kubenswrapper[5050]: I1211 15:56:45.378878 5050 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-dockercfg-mvxd9" Dec 11 15:56:45 crc kubenswrapper[5050]: I1211 15:56:45.395389 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-openstack-openstack-cell1-7qxkt"] Dec 11 15:56:45 crc kubenswrapper[5050]: I1211 15:56:45.573289 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/045bc7fb-8163-4c54-938c-b04fa7d9e1bb-inventory\") pod \"configure-os-openstack-openstack-cell1-7qxkt\" (UID: \"045bc7fb-8163-4c54-938c-b04fa7d9e1bb\") " pod="openstack/configure-os-openstack-openstack-cell1-7qxkt" Dec 11 15:56:45 crc kubenswrapper[5050]: I1211 15:56:45.573375 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/045bc7fb-8163-4c54-938c-b04fa7d9e1bb-ceph\") pod \"configure-os-openstack-openstack-cell1-7qxkt\" (UID: \"045bc7fb-8163-4c54-938c-b04fa7d9e1bb\") " pod="openstack/configure-os-openstack-openstack-cell1-7qxkt" Dec 11 15:56:45 crc kubenswrapper[5050]: I1211 15:56:45.573417 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/045bc7fb-8163-4c54-938c-b04fa7d9e1bb-ssh-key\") pod \"configure-os-openstack-openstack-cell1-7qxkt\" (UID: \"045bc7fb-8163-4c54-938c-b04fa7d9e1bb\") " pod="openstack/configure-os-openstack-openstack-cell1-7qxkt" Dec 11 15:56:45 crc kubenswrapper[5050]: I1211 15:56:45.573628 5050 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dgm2q\" (UniqueName: \"kubernetes.io/projected/045bc7fb-8163-4c54-938c-b04fa7d9e1bb-kube-api-access-dgm2q\") pod \"configure-os-openstack-openstack-cell1-7qxkt\" (UID: \"045bc7fb-8163-4c54-938c-b04fa7d9e1bb\") " pod="openstack/configure-os-openstack-openstack-cell1-7qxkt" Dec 11 15:56:45 crc kubenswrapper[5050]: I1211 15:56:45.676651 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dgm2q\" (UniqueName: \"kubernetes.io/projected/045bc7fb-8163-4c54-938c-b04fa7d9e1bb-kube-api-access-dgm2q\") pod \"configure-os-openstack-openstack-cell1-7qxkt\" (UID: \"045bc7fb-8163-4c54-938c-b04fa7d9e1bb\") " pod="openstack/configure-os-openstack-openstack-cell1-7qxkt" Dec 11 15:56:45 crc kubenswrapper[5050]: I1211 15:56:45.677252 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/045bc7fb-8163-4c54-938c-b04fa7d9e1bb-inventory\") pod \"configure-os-openstack-openstack-cell1-7qxkt\" (UID: \"045bc7fb-8163-4c54-938c-b04fa7d9e1bb\") " pod="openstack/configure-os-openstack-openstack-cell1-7qxkt" Dec 11 15:56:45 crc 
kubenswrapper[5050]: I1211 15:56:45.677296 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/045bc7fb-8163-4c54-938c-b04fa7d9e1bb-ceph\") pod \"configure-os-openstack-openstack-cell1-7qxkt\" (UID: \"045bc7fb-8163-4c54-938c-b04fa7d9e1bb\") " pod="openstack/configure-os-openstack-openstack-cell1-7qxkt" Dec 11 15:56:45 crc kubenswrapper[5050]: I1211 15:56:45.677328 5050 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/045bc7fb-8163-4c54-938c-b04fa7d9e1bb-ssh-key\") pod \"configure-os-openstack-openstack-cell1-7qxkt\" (UID: \"045bc7fb-8163-4c54-938c-b04fa7d9e1bb\") " pod="openstack/configure-os-openstack-openstack-cell1-7qxkt" Dec 11 15:56:45 crc kubenswrapper[5050]: I1211 15:56:45.684065 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/045bc7fb-8163-4c54-938c-b04fa7d9e1bb-inventory\") pod \"configure-os-openstack-openstack-cell1-7qxkt\" (UID: \"045bc7fb-8163-4c54-938c-b04fa7d9e1bb\") " pod="openstack/configure-os-openstack-openstack-cell1-7qxkt" Dec 11 15:56:45 crc kubenswrapper[5050]: I1211 15:56:45.684246 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/045bc7fb-8163-4c54-938c-b04fa7d9e1bb-ceph\") pod \"configure-os-openstack-openstack-cell1-7qxkt\" (UID: \"045bc7fb-8163-4c54-938c-b04fa7d9e1bb\") " pod="openstack/configure-os-openstack-openstack-cell1-7qxkt" Dec 11 15:56:45 crc kubenswrapper[5050]: I1211 15:56:45.684561 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/045bc7fb-8163-4c54-938c-b04fa7d9e1bb-ssh-key\") pod \"configure-os-openstack-openstack-cell1-7qxkt\" (UID: \"045bc7fb-8163-4c54-938c-b04fa7d9e1bb\") " pod="openstack/configure-os-openstack-openstack-cell1-7qxkt" Dec 11 15:56:45 crc kubenswrapper[5050]: I1211 15:56:45.699372 5050 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dgm2q\" (UniqueName: \"kubernetes.io/projected/045bc7fb-8163-4c54-938c-b04fa7d9e1bb-kube-api-access-dgm2q\") pod \"configure-os-openstack-openstack-cell1-7qxkt\" (UID: \"045bc7fb-8163-4c54-938c-b04fa7d9e1bb\") " pod="openstack/configure-os-openstack-openstack-cell1-7qxkt" Dec 11 15:56:45 crc kubenswrapper[5050]: I1211 15:56:45.701515 5050 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-openstack-openstack-cell1-7qxkt" Dec 11 15:56:46 crc kubenswrapper[5050]: I1211 15:56:46.265964 5050 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-openstack-openstack-cell1-7qxkt"] Dec 11 15:56:46 crc kubenswrapper[5050]: I1211 15:56:46.290126 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-openstack-openstack-cell1-7qxkt" event={"ID":"045bc7fb-8163-4c54-938c-b04fa7d9e1bb","Type":"ContainerStarted","Data":"68325c6d291f4c3b11778b3720d457c2818d755ad7ed3fca99a26133ec571db0"} Dec 11 15:56:48 crc kubenswrapper[5050]: I1211 15:56:48.340477 5050 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-openstack-openstack-cell1-7qxkt" event={"ID":"045bc7fb-8163-4c54-938c-b04fa7d9e1bb","Type":"ContainerStarted","Data":"ece1957dec87b5986be114b0689af59d6bf193d53f2bcaad65fcb0550831027f"} Dec 11 15:56:48 crc kubenswrapper[5050]: I1211 15:56:48.366236 5050 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-openstack-openstack-cell1-7qxkt" podStartSLOduration=2.547283532 podStartE2EDuration="3.366215338s" podCreationTimestamp="2025-12-11 15:56:45 +0000 UTC" firstStartedPulling="2025-12-11 15:56:46.270241895 +0000 UTC m=+7697.113964471" lastFinishedPulling="2025-12-11 15:56:47.089173691 +0000 UTC m=+7697.932896277" observedRunningTime="2025-12-11 15:56:48.359040215 +0000 UTC m=+7699.202762811" watchObservedRunningTime="2025-12-11 15:56:48.366215338 +0000 UTC m=+7699.209937924" Dec 11 15:56:56 crc kubenswrapper[5050]: I1211 15:56:56.546305 5050 scope.go:117] "RemoveContainer" containerID="33899e24adacef82929b0fbbcd58af7cb4f7aa3935b49ec9b5b4045a51da1d14" Dec 11 15:56:56 crc kubenswrapper[5050]: E1211 15:56:56.547312 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a" Dec 11 15:57:07 crc kubenswrapper[5050]: I1211 15:57:07.546499 5050 scope.go:117] "RemoveContainer" containerID="33899e24adacef82929b0fbbcd58af7cb4f7aa3935b49ec9b5b4045a51da1d14" Dec 11 15:57:07 crc kubenswrapper[5050]: E1211 15:57:07.547628 5050 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-wcb2s_openshift-machine-config-operator(7e849b2e-7cd7-4e49-acd2-deab139c699a)\"" pod="openshift-machine-config-operator/machine-config-daemon-wcb2s" podUID="7e849b2e-7cd7-4e49-acd2-deab139c699a"